Legal risks associated with ChatGPT

By Peng Yue and April Zhao, ZSK Attorneys at Law

It is the latest wave in the evolution of artificial intelligence, but just what are the legal risks associated with AI-generated content?

ChatGPT, an application of artificial intelligence generated content (AIGC) that utilises AI technology to automatically generate text, images and audio-video content, has taken the world by storm in recent months. It is also a crucial tool for the development of the metaverse.

Considering the risks associated with the new phenomenon, Sam Altman, founder of ChatGPT’s creator, OpenAI, says: “We … need enough time for our institutions to figure out what to do. Regulation will be critical and will take time to figure out. Although current generation AI tools aren’t very scary, I think we are … not that far away from potentially scary ones.”

In this article, the authors analyse the legal risks arising from the use of ChatGPT and similar AIGC products, and offer their advice.

Unaccountable for inaccuracy

Peng Yue
ZSK Attorneys at Law
Tel: +86 10 8896 1850

“ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers,” wrote OpenAI, warning about the misleading “hallucinatory” effect of generated answers. OpenAI’s terms of use and service terms expressly exclude any liability for accuracy of its output and purposes for which it is used.

Ultimately, and as always, human users are accountable for their output content. In addition, under the above-mentioned terms, OpenAI caps its liability at the greater of: (1) USD100; or (2) the amount the user paid for the service during the past 12 months.

Leakage of input

Under OpenAI’s terms of use and service terms, users authorise the service provider to use their input and output. Although ChatGPT’s use and possession of this data are not unlimited, and users may revoke their authorisation with a clear statement by email, OpenAI is still entitled to process and use the public’s input and output to “train” ChatGPT, without being subject to any confidentiality clauses.

This means that if a user – such as a commercial user in mainland China – fails to treat its own input with prudence, it risks leaking personal or private information, or trade secrets. A user under a duty of confidentiality would then face legal and/or contractual liability.

Infringement of output

ChatGPT needs to be “fed” a constant and massive amount of content, some of which may be protected by copyright law. As the output content of ChatGPT may be identical or similar to what it was “fed”, subsequent use of the output may infringe on copyright of the original work. Such risks lie not only with the service provider of ChatGPT, but also users making use of its output.

Data, algorithm compliance

April Zhao
Senior Consultant
ZSK Attorneys at Law
Tel: +86 10 8896 1850

While the OpenAI terms of use and privacy policy state that ChatGPT may use users’ personal information – including communication information, login information and usage data – for product and service maintenance, improvement, analysis, behavioural testing and R&D of new products, such data may also be provided to any third party, and transmitted and processed within the US.

However, as the information or user input used to train ChatGPT may itself contain personal information, sensitive information, information on government affairs, or even military-related confidential information, input users should abide by relevant laws and regulations in their own jurisdictions.

For example, according to China’s Personal Information Protection Law, a processor of personal information must first obtain the consent of the personal information subject, or have another legal basis for the processing.

According to the Regulations on the Administration of Deep Synthesis of Internet Information Services, providers and technical supporters of deep synthesis services that use personal information in their training data must comply with relevant provisions on personal information protection. This includes obtaining explicit consent from individuals, especially in cases involving sensitive information.

There have been a number of precedents of foreign regulators targeting AIGC providers. For instance, the Italian Data Protection Authority announced on 3 February 2023 that it had banned a program called Replika, developed by Luka, for illegally collecting and processing personal data in breach of the EU General Data Protection Regulation.

In addition, Lee Luda, a South Korean AI chatbot, incurred a strict penalty for violating multiple provisions of South Korea’s Personal Information Protection Act, including those on collecting personal information beyond its stated purposes, deleting and destroying personal information, and restricting the handling of sensitive information.

The South Korean Personal Information Protection Commission levied a fine of KRW103.3 million (USD78,219) on the chatbot’s developer as a result of the offences.

“An AI may be able to write a theologically accurate and even aesthetically beautiful prayer,” says Brian Page, the vice president and chief information officer at Calvin University, in a report by The Christian Post. “However, if it’s not a prayer from the heart of the participant, it is just words.”

AIGC is not territory beyond the law. While the convenience and benefits of cutting-edge technology are there to be enjoyed, experience and wisdom should guard against falling victim to its pitfalls.

Peng Yue is an attorney at ZSK Attorneys at Law. She can be contacted by phone at +86 10 8896 1850 or by email at
April Zhao is a senior consultant at ZSK Attorneys at Law. She can be contacted by phone at +86 10 8896 1850 or by email at
Benjamin Bai, a partner at the firm, also contributed to this article

ZSK Attorneys at Law
WeWork-117, 3/F, Wonderful World
Commercial Plaza, 38 East Third Ring Road,
Chaoyang District, Beijing 100020, China

Tel: +86 10 8896 1850