With China having established the National Data Administration, how should enterprises respond to the central government’s regulation of data? Ge Mengying, general counsel and data compliance officer of TalkingData, provides some answers
WITH THE ESTABLISHMENT of the National Data Administration on 25 October 2023, China’s data compliance has entered a new phase. Previously, when the central government was determining the way to manage data, it considered both internal and external factors.
Internally, at the legislative level, it used the Personal Information Protection Law (PIPL) to balance the development of the industry and the protection of personal information, while externally it used the Data Security Law to respond to international competitive pressure. Over time, the central government became aware that data has a beneficial role independent of enterprises and individuals.
Today, the objectives of data management are no longer limited to ensuring that enterprises keep data secure and that the rights and interests of individuals are safeguarded. They now also include unleashing the developmental dividend of data through data factorisation, while safeguarding the sovereignty and security interests of the country by strengthening its ability to control data.
Change in the way data is managed
In the past, enterprises’ data security was protected under the umbrella of cybersecurity. But since the Data Security Law came into effect on 1 September 2021, data security has been treated as an independent management matter and, following a complete system design, has become comprehensive, systematic and authoritative in nature.
The Data Security Law delineates two principal types of data: personal information and key data. What are key data? They are not defined by enterprises but by the central government, in a top-down manner, by assessing the value of data and the interests it could affect, incorporating the appropriate types into the key data catalogue, and requiring organisations that hold or possess such data to accord it special protection in accordance with central government regulations.
In other words, both personal information and corporate data could be deemed “key data” if they could jeopardise the interests of the whole.
With respect to the classification of data, the Data Security Law adopts a pragmatic approach. It does not change an enterprise’s internal business model and procedures, but only objectively describes the types of data collected and generated in those models and procedures, classifying them according to the object impacted, and the scope or extent of the impact in the event they are compromised, based on the data’s security attributes (integrity, confidentiality and availability).
This approach also conspicuously reflects the central government’s regulatory requirement that “data are independent of the interests of enterprises and individuals”.
Personal information protection
With the rapid development of artificial intelligence (AI), automated decision-making by machines has penetrated all aspects of business production and operation, as well as everyday life, and will inevitably have a significant impact on individuals, enterprises and the country. AI is set to dominate such key fields as precision medicine, intelligent transportation and big data credit investigation.
AI is no longer merely a tool for realising human will and ideas, but also a counsellor or assistant to humans, an expert in a certain field or, at the extreme, even a decision maker in significant matters. Accordingly, the PIPL focuses on two major topics of innovation: algorithms and mega-platforms.
The PIPL addresses automated decision-making, putting forward requirements such as ensuring the transparency of decision-making, ensuring the impartiality of decision outcomes, and not imposing unreasonable differential treatment, in response to hot-button issues such as big data-enabled price discrimination against existing customers and non-compliant personalised recommendations.
However, algorithms differ from automated decision-making. Algorithms have a distinctive feature in that they can make a decision or recommendation for a specific individual without requiring the individual’s personal information. Accordingly, algorithms have not been completely incorporated into the framework of the PIPL.
Instead, the Cyberspace Administration of China has issued two dedicated documents: the Regulations for the Administration of Recommendation by Internet Information Service Algorithms; and the Guiding Opinions on Strengthening the Comprehensive Governance of Internet Information Service Algorithms, aimed at regulating algorithms in a directed manner.
Furthermore, the PIPL defines a new category of personal information processors – namely, those that provide critical internet platform services, that have a vast number of users, and that have complex types of business, i.e. “mega internet platforms” – and sets out the special obligations they are required to perform.
In recent years, the relevant functional authorities and regulators have launched dedicated campaigns against the unlawful collection and use of personal information by apps and enterprises, achieving substantial results.
Still, certain issues remain. The number of apps is vast and continually growing, and their ways of collecting personal information are constantly evolving, making it difficult to achieve comprehensive and timely coverage through conventional governance. It is precisely for this reason that the PIPL intervenes directly at the key stages and expressly sets out the pertinent responsibilities of mega platforms at the statutory level.
Dividends from data factorisation
Data have a certain economic effect, which derives mainly from two factors.

First, data are reproducible and non-rivalrous. Data are highly reproducible, and repeated use of the same data will not reduce their effectiveness. Data have multiple uses, and their use by multiple different parties can benefit all of those parties and significantly reduce costs to the public as a whole. This is the basic driver of the economic benefits that enterprises can derive from data sharing and data reuse.

Second, data are aggregative in nature. More insights and economic value can be extracted from aggregated datasets; such datasets are more effective than independent datasets applied separately, and the marginal cost of expanding and analysing the data is smaller.
Accordingly, there is tremendous value in data factorisation, but the interests attached to that data must not be ignored. If one wishes to promote data openness and data sharing among organisations, and thereby promote data factorisation, then a necessary precondition is the due protection of such interests. This includes the lawful rights and interests of individual users, the right of enterprises to lawfully use information, and national security and the public interest.
How should enterprises respond?
In the author’s view, it is imperative that the five aspects below are top of mind when an enterprise examines its own data protection and compliance strategies:
- An enterprise should pay attention to compliance in its use of personal information, such as in the areas of precision marketing and big data-enabled price discrimination against existing customers.
- An enterprise should revise its internal methods of classifying data in light of the central government’s data classification requirements, and view its data from the central government’s perspective.
- An enterprise should respond appropriately to requests for data sharing in specific circumstances. Against the backdrop of data factorisation, the EU’s Digital Markets Act and Digital Services Act, as well as China’s domestic market regulators, have set out relevant requirements in this regard.
- An enterprise should establish a flexible organisational structure to be able to adapt to new requirements in respect of data security management.
- An enterprise should promote the factorisation and circulation of data while duly protecting the various interests attached to it. It must be recognised that the personal information collected by enterprises in their routine operations is inevitably intertwined with the existing lawful rights and interests of individuals, and that enterprises incur significant costs in collecting such information. Accordingly, the circulation of data as a factor should be encouraged.
China’s digital economy is developing rapidly, and enterprises’ compliance risks have been fully exposed. In the face of a complex international situation and the rapid development of the new economy, the data compliance work of enterprises in China has become the focus of multiple competing interests, an environment very different, both domestically and internationally, from that in which other countries built their personal information protection regimes. This is precisely why a more systematic approach is needed to resolve difficulties and open up broader space for solving complex issues.