Enterprises navigate risk, compliance in era of new AI regulations

By Bai Xiao, Zhong Wen Law Firm

In 2023, generative artificial intelligence (GAI) witnessed a shift from niche to mainstream development. With the amalgamation of data, algorithms and computing power, the disruptive nature of AI technology became increasingly prominent, finding widespread applications across various sectors.

Yet, attention is equally drawn to the impacts and risks it poses for labour relations and social ethics. In 2023, legislators were still establishing macro-level guidelines and deciding how to regulate AI; 2024 heralds the refinement and implementation of substantive AI legislation affecting enterprises.

This article draws on the current AI legislative frameworks in the EU, China and the US to offer strategic recommendations for businesses to mitigate risks and establish compliant frameworks in the widespread adoption of AI.

Laws in EU, China, US

Bai Xiao
Partner
Zhong Wen Law Firm
Tel: +86 10 5178 3535
E-mail:
xiao.bai@zwlawyer.com

On 8 December 2023, the European Parliament, the European Council and the European Commission reached an agreement on the Artificial Intelligence Act, which is poised to become the world’s first comprehensive legislation regulating AI, following in the footsteps of the General Data Protection Regulation as another global standard set by the EU.

The act adopts a tiered regulatory approach, distinguishing four levels of risk for AI systems – unacceptable risk, high risk, limited risk and minimal or no risk – and sets corresponding obligations and requirements. Of note is the prohibition against developing and using specific AI systems that pose unacceptable risks to human safety, along with specific regulatory requirements for high-risk AI systems.

China takes an open and inclusive attitude towards the development of AI technology, insisting on the principle of balancing development with security and promoting innovation while requiring adherence to the rule of law. China's regulation of AI preceded the EU's: in July 2023, seven departments, including the Cyberspace Administration of China, issued the Interim Measures for the Management of Generative Artificial Intelligence Services, offering a Chinese solution to the challenges of AI regulation while dedicated AI legislation remains in the works.

Besides generally applicable laws such as the Personal Information Protection Law, the Data Security Law and the Cybersecurity Law, targeted regulation is mainly carried out at the level of departmental rules and normative documents, including the Interim Measures, the Administrative Provisions on Algorithm Recommendation for Internet Information Services, and the Administrative Provisions on Deep Synthesis of Internet Information Services. These instruments allow flexible and cautious regulatory responses to the new risks posed by AI products.

The US has yet to enact systematic legislation specifically governing AI. On 4 October 2022, the White House Office of Science and Technology Policy released the Blueprint for an AI Bill of Rights, setting out five principles: safe and effective systems; algorithmic discrimination protections; data privacy; notice and explanation; and human alternatives, consideration and fallback.

Risks facing enterprises

AI, especially GAI, operates on the core architecture of data input, processing and output. While data and information are protected legal interests, their boundaries are not clearly delineated because of their dispersed, intangible and replicable nature, and both theoretical paradigms and regulatory systems have yet to mature.

Identifying rights holders and obtaining their informed consent both pose feasibility obstacles and involve cost-benefit considerations. Therefore, AI technologies used for data and information processing inevitably entail associated risks.

Because human understanding of AI's cognitive structures remains limited, AI poses ethical risks, a point of controversy between techno-optimists and those prioritising security. However, at this stage, AI remains a tool subject to human control, so AI risks include both perils stemming from technical flaws in AI itself, and dangers arising from human misuse of the technology.

Typical legal risks that enterprises face in the research and application of AI technology include illegal collection and processing of personal information, privacy breaches, exposure of trade secrets, infringement of intellectual property rights, unfair competition, generation of illegal or fabricated content, data poisoning, and algorithmic biases.

Compliance recommendations

With the proliferation of AI applications, more enterprises are beginning to use AI to support their business needs. From employees using AI to generate work documents and write program code, to management using AI to assist in decision-making, enterprises particularly value the cost-reduction and efficiency enhancement functions of AI. However, the vulnerabilities and threats inherent in AI tools are likely to bring systemic risks to enterprises.

Enterprises may be developers, providers, users or combinations of these roles, with compliance emphasis varying accordingly. As developers and providers, enterprises should focus on complying with relevant legislative provisions in data and information collection and processing, and should adhere to review requirements in content output.

As users, enterprises must enhance discernment, use AI tools cautiously with sensitive information, and remain vigilant against service providers engaging in illegal data collection and processing.

From a practical perspective, it is recommended to start AI compliance work from the following aspects:

(1) Establish a risk identification and assessment mechanism, conducting regular self-inspections and assessments. Enterprises should promptly identify, record, review, evaluate and report potential risks based on their business characteristics and the use of AI tools, and prepare contingency plans accordingly;

(2) Familiarise yourself with legislative and regulatory requirements. Existing legislation points to tiered regulatory supervision as the trend. Enterprises should pay attention to legislative updates and regulatory dynamics, and integrate relevant requirements into products and services, as well as contracts with suppliers and users;

(3) Establish internal management systems, clarify usage norms and security strategies for AI tools, conduct regular risk control and compliance training for employees in line with new AI regulations, and ensure compliance through incentive mechanisms; and

(4) Cultivate a talent pool and implement compliance obligations in specific positions. For example, set up targeted AI compliance positions such as information security officers, data privacy officers and model audit officers in line with new AI regulations.


Bai Xiao is a partner at Zhong Wen Law Firm. She can be contacted at +86 10 5178 3535 or by e-mail at xiao.bai@zwlawyer.com
