ChatGPT sparks draft China rules to regulate AIGC


China released draft rules on 11 April aimed at regulating the development of artificial intelligence-generated content (AIGC) products such as ChatGPT. If implemented, they would mark the world’s first comprehensive written law on AIGC.

Although similar regulations have been introduced before, a lawyer notes that their enforcement has proved controversial, and that the draft’s concepts will need to be refined and clarified before it can take effect.

The draft, issued by the Cyberspace Administration of China, contains 21 articles setting out requirements for generative AI service providers on market entry, algorithm design, training data selection, personal privacy and trade secrets.

Cai Peng, Zhong Lun Law Firm

Significantly, the draft focuses on AI-generated output, prohibiting providers from generating “content that discriminates on the basis of the user’s race, country, gender, etc.” and from releasing “fake information”. Providers must also be able to guarantee the “authenticity” of the data used to train their AI models.

Cai Peng, a partner at Zhong Lun Law Firm, argues that some content produced by today’s AI technology, such as articles, pictures and videos, is “intrinsically untruthful”. Given the technology’s current stage of development, it is very difficult for AI to guarantee complete accuracy and truthfulness.

“If this new bill is enacted without further elaboration or revision to the relevant concepts, it can easily cause confusion among businesses, ultimately restricting the development of the industry,” says Cai.

AIGC refers to algorithms, models and rules used to generate text, images, sound, video and code, among other forms of content. US tech company OpenAI’s ChatGPT, which launched in November 2022 and quickly became the talk of the town, is a typical example of such technology.

Chinese tech giants such as Baidu, Alibaba and SenseTime promptly followed the trend, announcing their entry into the fray.

In fact, a regulation on internet-synthesised content, whose scope covers AIGC, had already come into force in China on 10 January. At the time, however, attention focused more on the illegal use of such techniques to create fake audio and video for fraud.

Cai notes that the lack of clarity over how the draft’s regulatory targets differ from those of the existing measures could lead to overlapping regulation and increase compliance costs for businesses.

“At present, legislation on AI service compliance is detached from practice, difficult to implement, and enforcement is not yet in place,” says Cai. “Even within the industry, how to understand and implement the existing laws and regulations is highly controversial, with no uniform standards.”

Drawing on his previous experience advising clients on AI compliance, Cai says that “one new regulation after another has put a lot of pressure on their [AI service providers’] legal compliance work as well”.

But the trend of AI is so overwhelming that even the relatively conservative judicial sector is taking action. Last year, the Supreme People’s Court issued an opinion on the integration of AI with judicial work to strengthen its application in the legal industry.

The current draft may give the impression that China intends to limit the development of AIGC, running counter to the court’s efforts, but Cai sees no conflict between the court’s action and the draft.

He says the Supreme People’s Court’s opinion encourages the use of AI in the judicial field, allowing people to harness advanced technology, while the proposed AIGC rules are intended not to restrict the technology’s development, but to promote the healthy growth of the industry by balancing rights and interests with data security.
