Suing AI will need better human law

By Sumathi Chandrashekaran and Essenese Obhan, Obhan and Associates

It has been only a few weeks since ChatGPT was released into the wild, but it is already the topic of the moment. This chatbot comes only months after the release of DALL-E, a product from the same company, OpenAI. DALL-E is built on a neural network that creates images from text. With enough imagination, the uses of such tools are unlimited.

ChatGPT is a chatbot built on an artificial intelligence (AI) tool called a large language model (LLM), which uses large databases of language to generate human-like text based on the context of the conversation. Being a chatbot, it is designed to respond to human input and engage in natural-sounding conversations. The immediate impact will be on our engagement with chatbots, virtual assistants and automated customer service applications on phones or by text. The more disruptive use of ChatGPT is for content generation. Soon, fact-based articles, or boilerplate pieces such as astrology columns, could well be written using tools like ChatGPT.

Sumathi Chandrashekaran
Consultant
Obhan & Associates

For the legal community, tools like ChatGPT and DALL-E raise complex yet interesting questions that have legal implications, and that may also affect the profession itself. One issue that stands out is that of authorship and liability. This has been discussed in various contexts already, notably in the debates around the DABUS-Thaler cases concerning patent inventorship and ownership. The position in India is yet to be determined, although an application is reportedly under examination by the patent office.

While AI and patentability have their own set of unanswered questions, issues of authorship and liability extend into other areas. For instance, the large amounts of text data on which large language models are trained may include copyrighted material. Similarly, DALL-E could be prompted to generate an image that imitates an artist whose work is protected by copyright. In such cases, who is liable for infringement, and against whom does the copyright holder seek redress? More generally, if ChatGPT is used to generate text that is later used to cause harm to an individual or entity, who becomes liable for that harm? Could the person or entity who used ChatGPT to generate the text be held liable?

Another issue is that of misrepresentation. Since LLMs can generate text that could be mistaken for the words of an actual person, it could amount to misrepresentation if the generated text is seen as falsely representing the views or opinions of an individual. The generated text could deceive or mislead others and lead to legal disputes in which fraud is alleged. It could also be seen as impersonating an individual without their consent. If the databases used to train such systems contain confidential or sensitive information, there could be issues of misuse of that data. Then there is the case of defamation: what if the generated text contains false and damaging statements that could form the basis of a defamation lawsuit?

Essenese Obhan
Managing partner
Obhan & Associates

When it comes to regulation, there will be complex challenges of identifying the subjects of regulation. Since content generation is a core use of LLMs, social media and online platforms may easily be flooded with automated content. Regulation around online speech and content will have to confront the issue of whom to regulate. Our understanding of intermediary liability may be turned upside down.

LLMs are also a challenge to the legal community itself. Credibility is sought and established through the speed and accuracy with which reams of case law, legislation and other legal authorities are summarised and explained. ChatGPT may immediately replace low-level tasks of legal research and summary, and make some roles in the legal profession redundant. Consequently, reputation and recognition will attach to exceptional originality and softer skills such as human engagement, salesmanship and tasks requiring higher emotional intelligence. The future of work will need to be reimagined urgently.

Ultimately, even as these tools challenge the boundaries of our familiar environments, we should heed the words of Sam Altman, the CEO of OpenAI, who said these tools, in their present state of development, offer “fun creative inspiration” at best. We must not have greater expectations or unnecessary fears. But are we, as a global community, mature enough to appreciate this?

Parts of this article were generated using ChatGPT and appropriately edited.

Sumathi Chandrashekaran is a consultant and Essenese Obhan is the managing partner at Obhan and Associates.

Obhan & Associates

Advocates and Patent Agents

N-94, Second Floor

Panchsheel Park

New Delhi 110017, India

Contact details:
Ashima Obhan
T: +91-9811043532
E: email@obhans.com

ashima@obhans.com

www.obhanandassociates.com