
Artificial intelligence will inevitably transform legal practice, but it may take time to realise its full potential. Enormous benefits and dangerous pitfalls await the tentative legal user, and careful regulation is also required, writes prominent in-house lawyer and legal transformation adviser Sharyn Ch’ang

In a world where art imitates life, Hollywood movies often envision a future where artificially intelligent computers and robots seamlessly integrate into society, blurring the line between human and machine. From the sentient HAL 9000 computer in Arthur C Clarke’s 1968 novel 2001: A Space Odyssey to Pixar’s lovable robot animation, Wall-E, these cinematic representations reflect our collective fascination with the possibilities of artificial intelligence (AI).

Twenty-first century real-world advancements in AI are already used in consumer and business applications, including those from the relatively new field of generative AI.

OpenAI’s publicly available ChatGPT broke records with more than 100 million users in the first two months of its November 2022 release. Industry heavyweights like Amazon, Alibaba, Baidu, Google, Meta, Microsoft and Tencent have their own generative AI products.

We’re witnessing a democratisation of AI in a manner not previously seen – you don’t need a degree in any subject to use these tools. You just need to type a question.

Sharyn Ch’ang

So, ready or not, generative AI is here. It is impacting our daily lives and will reshape various industries, including the legal profession.

If you need any convincing, just listen to today’s big tech gurus like Google CEO Sundar Pichai, who described AI generally as “the most profound technology humanity is working on. More profound than fire, electricity or anything that we have done in the past”, in an interview with The Verge in May. Pichai is not alone.

There are different types of generative AI models, each quite complex to explain. The generative pre-trained transformer (GPT) model is commonly known due to the popularity of ChatGPT.

However, as ChatGPT is a general-use AI tool, this legal industry-focused article instead references Harvey, a platform built on OpenAI’s GPT-4 that is arguably the best example of a multi-functional generative AI platform designed specifically for lawyers.

Similar legal industry generative AI tools include CoCounsel, LawDroid Copilot and Lexis+AI, and there are other tools with more limited functionality.

Generative AI

Generative AI is a subfield of AI where the system or algorithmic model is trained with human help to produce original content like text, images, videos, audio, voice and software code.

One form of generative AI is a large language model (LLM), a neural network trained on large amounts of text from the internet and other sources. It generates outputs in response to inputs (prompts), based on inferences over the statistical patterns it has learned through training.

ChatGPT and OpenAI’s GPT-4, on which Harvey is built, are popular examples of LLMs. The model can process prompts at a speed, volume and accuracy that exceed average human capability.

Unlike conventional AI systems that primarily classify or predict data (think Google search), generative models learn the patterns and structure of the input training data, then generate new content that has similar characteristics to the training data. However, responses are based on inferences about language patterns rather than what is known to be true or arithmetically correct.

Put another way, LLMs have two central abilities:

  • Taking a question and working out what patterns need to be matched to answer the question from a vast sea of data; and
  • Taking a vast sea of data and reversing that pattern-matching process into a pattern-creation process.

Both functions are statistical, so there is some chance the engine will not correctly understand any given question. There is a separate probability that its response will be fictitious, a hallucination.

Given the scale of data on which the LLM has been trained, and the fine-tuning it receives, it can seem like it knows a lot. However, the reality is that it is not truly “intelligent”. It is only processing patterns to produce coherent and contextually relevant text. There is no thinking or reasoning.
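To make this prompt-in, text-out pattern concrete, below is a minimal sketch using OpenAI’s publicly documented Python client (not Harvey, whose interface is not public). The model name, prompt and settings are illustrative assumptions only, and, as this article stresses, the output would still need independent verification.

    # Minimal sketch of the prompt-in, text-out pattern: a question goes in,
    # statistically generated text comes out. Requires: pip install openai
    # and an OPENAI_API_KEY environment variable. Illustrative only.
    from openai import OpenAI

    client = OpenAI()  # picks up OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4",   # assumed model; any chat-capable model works
        temperature=0.2, # lower temperature reduces variation, not error
        messages=[
            {"role": "system", "content": "You are a legal research assistant."},
            {"role": "user", "content": "Summarise the doctrine of promissory "
                                        "estoppel in two sentences."},
        ],
    )

    # The reply is generated text, not verified law: check it before relying on it.
    print(response.choices[0].message.content)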

Even with well-crafted prompts, answers can be wrong or biased, and can include completely fictitious information and data, sometimes with harmful or offensive content. However, the potential benefits of generative AI easily outweigh such shortcomings, which major AI developers like OpenAI are consciously working to address.

Who and what is Harvey?

Harvey, the generative AI technology built on OpenAI’s GPT-4, is a multi-tool software-as-a-service AI platform that is specifically designed to assist lawyers in their day-to-day work. It comes from a startup company founded by two roommates – Winston Weinberg, a former securities lawyer and antitrust litigator from O’Melveny & Myers, and Gabriel Pereyra, previously a research scientist at DeepMind, Google Brain and Meta AI.

Typical of generative AI models, Harvey is an LLM that uses natural language processing, machine learning and data analytics to automate and enhance legal work, from research to analysing and producing legal documents like contracts. Sequoia, which led a USD21 million funding round for Harvey, states: “Legal work is the ultimate text-in, text-out business – a bull’s-eye for language models.”

As of May 2023, only two organisations were licensed to use Harvey: PwC, with exclusive access among the Big Four, and Allen & Overy, the first law firm user. More than 15,000 law firms are on the waiting list.

Harvey is similar to ChatGPT, but it has more functions that are specifically for lawyers. Like ChatGPT, users simply type instructions about the task they wish to accomplish and Harvey generates a text-based result. The prompt’s degree of detail is user-defined.

However, unlike ChatGPT, Harvey includes multiple tools specifically for lawyers, where users can ask:

  • Free-form questions that are legal or legal-adjacent, including research, summarisation, clause editing and strategic ideation;
  • For a detailed legal research memorandum on any aspect of law;
  • For a detailed outline of a document to be drafted, including suggested content for each section; and
  • Complex, free-form questions about, or requesting summaries of, uploaded documents without any pre-training or labelling (a simplified sketch of this pattern follows this list).
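Harvey’s internal design is not public, but the last capability above, answering questions about an uploaded document, can be approximated with a generic pattern: place the document’s text in the prompt alongside the question. The sketch below assumes the same OpenAI client as earlier and a placeholder file name; it illustrates the general pattern, not Harvey’s actual implementation.

    # Generic document question-answering sketch: the document text is placed
    # in the prompt together with the question. "contract.txt" is a placeholder;
    # this is not Harvey's implementation, only an illustration of the pattern.
    from openai import OpenAI

    client = OpenAI()

    with open("contract.txt", encoding="utf-8") as f:
        contract_text = f.read()

    question = "List each party's termination rights and any notice periods."

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Answer only from the document provided. "
                        "If the document does not contain the answer, say so."},
            {"role": "user",
             "content": f"Document:\n{contract_text}\n\nQuestion: {question}"},
        ],
    )

    print(response.choices[0].message.content)

In practice, any such document must also fit within the model’s prompt limit, a constraint discussed later in this article.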

Importantly, Harvey describes itself as a research tool, and its output is not legal advice. The terms of use also clearly state that the AI-generated output may contain errors and misstatements, or may be incomplete. Because Harvey, like all other generative AI systems, can convincingly make things up, including case law references and legislation, any prudent lawyer must properly review Harvey’s output before providing legal advice based on it.

Judicial warnings

As this article goes to press, there are at least two precedent-setting examples in the US of judges requiring lawyers to take precautions if using generative AI to prepare for court.

  • One lawyer’s brief to the New York District Court in his client’s personal injury case against an airline cited six non-existent judicial decisions produced by ChatGPT. Naively, the lawyer said he was “unaware of the possibility that the content could be false”. The court sanctioned the two lawyers involved, fining each USD5,000. In addition to basic rules of legal and ethical professional conduct, it’s simply common sense that a lawyer needs to be responsible for any representations or legal advice. That standard does not change whether the assistance is from peers, junior lawyers, paralegals or a machine.
  • In the US Court of International Trade, Judge Stephen Vaden issued an order requiring lawyers to file notices when appearing before him that disclose which AI program was used and “the specific portions of text that have been so drafted”, and that use of the technology “has not resulted in the disclosure of any confidential or business proprietary information to any unauthorised party”.

Effective lawyers possess a combination of hard (technical) and soft skills; Table A provides a list. So, what is generative AI good at now, and how can it be used by lawyers?

Using Harvey as a baseline, generative AI can turbocharge the drafting of written material from scratch, and make edits and recommendations for replacement text. It can analyse, extract, review and summarise faster and at a scale beyond human capabilities. The practical consequence is that properly trained legal generative AI will enable lawyers to do many of their more routine, sometimes mundane, tasks faster, cheaper and more efficiently while improving the quality of work.

President and co-founder of OpenAI, Greg Brockman, said GPT-4 (on which Harvey is built) works best when used in tandem with people who check its work – it is “an amplifying tool” that allows us to “reach new heights,” but it “is not perfect” and neither are humans.

Will I lose my job?

Goldman Sachs, in its March 2023 report titled The Potentially Large Effects of Artificial Intelligence on Economic Growth, estimated that 44% of current legal work tasks in the US and Europe could be automated by AI.

There’s no comparable Asia data yet, but it’s reasonable to assume the percentage may be similar. The legal category takes second place only to office and administrative support (46%). Third place goes to architecture and engineering (37%).

But to answer the question, no, AI will not displace lawyers as a profession. That is evident when you review the skills list in Table A of what generative AI cannot do of its own accord. The future of legal practice is a world where generative AI is an indispensable productivity tool, augmenting lawyers.

While AI will automate routine tasks and assist research, analysis, drafting and similar work, the nuanced and complex aspects of legal practice require human expertise, empathy and judgement.

Harvey’s Pereyra agrees. “Our [Harvey’s] goal is not to compete with existing legal tech … or to replace lawyers, but instead to make them work together more seamlessly. We want Harvey to serve as an intermediary between tech and lawyer, as a natural language interface to the law. We see a future where every lawyer is a partner – able to delegate the busy work of the legal profession to Harvey and focus on the creative aspects of the job and what matters most, their clients,” he told lawyer Robert Ambrogi, author of legal technology blog LawSites, in November 2022.

Of course, as generative AI progresses to being the dominant doer of more routine legal workflows, many in the legal profession are anxious about their employment security. Some tasks will inevitably change because AI will do them better, faster and cheaper.

For example, junior lawyers will find their roles evolving. If their usual work is done by AI, freeing up time in their day, they can focus on higher-value, more engaging work, develop specialised expertise and participate in the strategic aspects of legal practice, all earlier in their careers than has traditionally been possible.

On jobs in general, the Goldman Sachs report states: “Jobs displaced by automation have historically been offset by the creation of new jobs and the emergence of new occupations following technological innovations accounts for the vast majority of long-run employment growth.”

Of the legal profession specifically, Google’s Pichai has an interesting prediction: he’s “willing to almost bet” there will be more lawyers a decade from now because “the underlying reasons why law exists and legal systems exist aren’t going to go away, because those are humanity’s problems”.

Risks of using generative AI

While there are substantial potential benefits of generative AI to the legal profession, there are also inherent risks and limitations. Careful consideration and appropriate safeguards at private and governmental levels will be essential to effectively mitigate these risks and promote trustworthy deployment and use in the legal domain.

The many compliance-related and legal issues include monitoring new AI-specific legislation, like the EU’s AI Regulation. Some of the other more commonly identified legal and practical issues include accuracy and reliability of AI-generated legal documents, as well as any amplification of biases present in legal data.

Unreliable content, undetected errors and hallucinations undermine trust in AI systems and carry legal implications such as professional negligence and liability for incorrect advice, while over-reliance on generative AI without critical evaluation can also lead to errors and oversights.

Careful curation of training data is essential to avoid biases. If training data reflects historical biases, generative AI systems may inadvertently produce discriminatory outputs that perpetuate disparities in legal outcomes.

Any networked computer system is a cybersecurity risk. To some extent, a generative AI system’s human-like conversations and known hallucination flaws make it an even more attractive target for social engineering or phishing scams. The general warning is to be alert to potential malicious activity.

Data is now a highly regulated asset, and generative AI relies on vast amounts of it. Any personal data shared with a generative AI tool is likely to be protected by privacy laws. It’s also possible the content of a prompt may contain confidential or sensitive information.

While lawyers do not generally encounter IP issues while producing their own legal documents (because we all draft from scratch or use our own precedents, right?), having AI draft templates or legal research papers may inadvertently infringe third-party IP rights if the data sources are unknown. On the flip side, there’s the issue of who owns the IP rights of the AI-generated content.

There is also the issue of the technical opacity of foundation LLMs, which use some of the most advanced AI techniques in existence. Given the billions of dollars invested in developing specific generative AI models, the technology that underpins them is also proprietary, which adds further opacity to the inner workings of AI models. Policymakers must encourage the development of rigorous quality control methodologies and dataset disclosure standards for AI systems, appropriate to the context of the application.

There are two other commonly known technical limitations of LLMs. First is the currency of the underlying training data. It is well understood, because OpenAI disclosed it, that GPT-3.5 is trained only on data from the internet up to September 2021. However, the proprietary nature of other AI models may make it difficult or impossible to establish the source and currency of their training data.

Second, every generative AI model has memory limits: in other words, the length of your prompt is limited. GPT-4’s limit is about 40 pages of double-spaced text, or 12,288 words. While LLMs’ memory capacity will improve over time, the current limits make them unsuitable for legal matter analysis, such as complex litigation or major contract reviews, where that word limit is exceeded.
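A practical way to manage this limit is to count tokens before submitting anything. The sketch below uses OpenAI’s tiktoken tokeniser; the token limit used and the file name are illustrative assumptions, and the published limit of whichever model is actually licensed should be confirmed.

    # Sketch of checking whether a document fits within a model's prompt limit.
    # Requires: pip install tiktoken. The limit below is an assumption; larger
    # model variants and other models have different limits.
    import tiktoken

    CONTEXT_LIMIT_TOKENS = 8_192  # assumed limit; confirm for your model

    def fits_in_context(text: str, model: str = "gpt-4") -> bool:
        encoder = tiktoken.encoding_for_model(model)
        n_tokens = len(encoder.encode(text))
        print(f"{n_tokens:,} tokens against a limit of {CONTEXT_LIMIT_TOKENS:,}")
        return n_tokens <= CONTEXT_LIMIT_TOKENS

    # "merger_agreement.txt" is a placeholder document for illustration.
    with open("merger_agreement.txt", encoding="utf-8") as f:
        fits_in_context(f.read())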
