Generative AI: EU’s framework for classifying risks

By Ma Weiyang and Wang Ze, Lianggao Law Firm

The operation of generative artificial intelligence (AI) can be divided roughly into three stages: data capture, training, and output. At the core of each stage is the processing and use of massive volumes of data, and the associated data risks vary by stage.

Whether data sources are compliant, and whether data is contaminated, misused or leaked, have increasingly come into focus in data security, both in the preparatory stage of learning and training and in the subsequent stage of output and storage.

This year, the EU approved the Artificial Intelligence Act in a sweeping vote, providing systematically for the protection of generative AI data. This article analyses the act’s protective measures concerning generative AI data, focusing on how its risk classification-based regulation applies within this framework.

Act overview


On 13 March 2024, the European Parliament approved the AI act with 523 votes in favour, 46 votes against, and 49 abstentions. It became the first comprehensive regulation on AI in the world.

The draft act was first proposed by the European Commission in April 2021, suggesting a “risk-based approach” built on a technology-neutral definition of AI systems. In December 2022, the EU Council adopted its common position on the act. Following the parliament’s adoption of its negotiating position in June 2023, the parliament, the council and the commission engaged in trilogue negotiations and reached a provisional agreement on the act’s provisions in December 2023.

Despite this provisional agreement, the legislative process continued. Finally, after three years of consultations and discussions at all levels in the EU, the act was formally approved in March 2024 and is expected to take effect in June 2024.

The act aims to improve the EU’s internal market and promote the application of trustworthy AI systems. In addition, it sets a unified and clear regulatory framework for generative AI models in many aspects.

Together with the EU’s General Data Protection Regulation (GDPR), the act will promote the healthy development of AI and protect users’ IP rights and data security.

Data risk regulation

The act provides a classification-based framework for data risk regulation, categorising AI applications into four risk levels: minimal, limited, high, and unacceptable. It sets different regulatory requirements for the different risk levels.


Minimal-risk applications. For generative AI applications with minimal risks, such as some simple text generation tools, the act requires compliance with basic data protection principles including user consent and data minimisation. Such applications are subject to regular self-examination and reporting to ensure data protection compliance.

Limited-risk applications. For generative AI applications with moderate risks, such as art creation tools involving users’ personal information, the act imposes stricter regulatory measures in addition to compliance with basic data protection principles. For example, such applications should establish more sophisticated data protection mechanisms, including data access controls and security audits. Regulators may also conduct regular inspections and evaluations of such applications.

High-risk applications. Under article 10 in chapter III of the act, stricter regulatory measures are imposed on high-risk generative AI applications, such as those involving sensitive data or critical infrastructure. Such applications must not only meet stricter data protection requirements but may also face more frequent regulatory inspections and tougher penalties. In addition, regulators may require regular risk assessments and security audits to ensure the compliance and security of the applications’ data protection.

Unacceptable-risk applications. Chapter II of the act provides that AI systems posing a threat to people are strictly prohibited from being placed on the market, put into service, or used in the EU. Such systems include social scoring, systems designed to manipulate children or other vulnerable groups, and real-time remote biometric identification systems.

Inspirations for China

To regulate AI as a frontier technology, China has issued the New-Generation Artificial Intelligence Development Plan and the Interim Measures for the Management of Generative Artificial Intelligence Services. However, these legal requirements have not yet been systematically integrated, and the regulatory measures are not sufficiently detailed.

In contrast, the EU’s act is the world’s first comprehensive, binding and reliable regulatory framework for AI, reflecting the EU’s stricter regulatory stance on protecting personal privacy and data security.

AI regulation based on risk classification allows the security and privacy of user data to be protected in a more targeted manner, boosting the healthy development of generative AI. This approach provides a meaningful point of reference for China as it legislates further on AI.


Ma Weiyang is an associate and the vice president of the Intellectual Property Research Institute at Lianggao Law Firm. He can be contacted by phone at +86 138 1168 7832 and by email at 13811687832@139.com
Wang Ze is a researcher at the Intellectual Property Research Institute at Lianggao Law Firm. She can be contacted by phone at +86 151 3554 2483 and by email at wangze2483@163.com
