AI-generated content is the latest wave in the evolution of artificial intelligence, but just what are the legal risks that come with it?
ChatGPT, an application of artificial intelligence generated content (AIGC) that utilises AI technology to automatically generate text, images and audio-video content, has taken the world by storm in recent months. It is also a crucial tool for development of the metaverse.
Considering the risks associated with the new phenomenon, Sam Altman, founder of ChatGPT’s creator, OpenAI, says: “We … need enough time for our institutions to figure out what to do. Regulation will be critical and will take time to figure out. Although current generation AI tools aren’t very scary, I think we are … not that far away from potentially scary ones.”
In this article, the authors analyse the legal risks arising from the use of ChatGPT and similar AIGC products, and offer their advice.
Accountability for inaccuracy
Ultimately, and as always, human users are accountable for their output content. In addition, under the above-mentioned terms, OpenAI’s liability is capped at the greater of: (1) USD100; or (2) the amount the user paid for the service during the past 12 months.
Leakage of input
This means that if a user – such as a commercial user in mainland China – fails to treat its own input with prudence, it may leak its personal or private information, or trade secrets. A user under a duty of confidentiality may then be held legally and/or contractually liable.
Infringement of output
ChatGPT needs to be “fed” a constant and massive amount of content, some of which may be protected by copyright law. As the output content of ChatGPT may be identical or similar to what it was “fed”, subsequent use of the output may infringe the copyright of the original work. Such risks lie not only with ChatGPT’s service provider, but also with users making use of its output.
Data, algorithm compliance
However, as the information or user input used to train ChatGPT may itself contain personal information, sensitive information, information on government affairs, or even military-related confidential information, users should abide by the relevant laws and regulations in their own jurisdictions when providing input.
For example, under China’s Personal Information Protection Law, a processor of personal information must first obtain the consent of the personal information subject, or rely on another legal basis.
According to the Regulations on the Administration of Deep Synthesis of Internet Information Services, providers and technical supporters of deep synthesis services that use personal information in their training data must comply with relevant provisions on personal information protection. This includes obtaining explicit consent from individuals, especially in cases involving sensitive information.
There have been a number of precedents of foreign regulators targeting AIGC providers. For instance, the Italian Data Protection Authority announced on 3 February 2023 that it had banned a program called Replika, developed by Luka, for illegally collecting and processing personal data in breach of the EU General Data Protection Regulation.
In addition, Lee Luda, a South Korean AI chatbot, was heavily penalised for violating multiple provisions of South Korea’s Personal Information Protection Act, including those governing the collection of personal information beyond authorised purposes, the deletion and destruction of personal information, and restrictions on the handling of sensitive information.
The South Korean Personal Information Protection Commission levied a fine of KRW103.3 million (USD78,219) as a result of these violations.
AIGC is not territory beyond the law. While the convenience and benefits of cutting-edge technology are there to be enjoyed, experience and wisdom should guard against falling victim to its progress.
Peng Yue is an attorney at ZSK Attorneys at Law. She can be contacted by phone at +86 10 8896 1850 or by email at firstname.lastname@example.org
April Zhao is a senior consultant at ZSK Attorneys at Law. She can be contacted by phone at +86 10 8896 1850 or by email at email@example.com
Benjamin Bai, a partner at the firm, also contributed to this article
ZSK Attorneys at Law
WeWork-117, 3/F, Wonderful World
Commercial Plaza, 38 East Third Ring Road,
Chaoyang District, Beijing 100020, China
Tel: +86 10 8896 1850