Generative AI and Lawyers

The VLSB+C continues to watch with interest the development of AI tools, noting both their potential benefits and risks in legal practice.

We provide some initial comment on current developments, particularly around ChatGPT, but with application to all generative AI platforms.

What is ChatGPT and how does it work?

ChatGPT is one of a number of ‘large language models’ (LLMs) that are currently undergoing development and training. It has a conversational interface: a user provides a ‘prompt’ and the model generates human-like text in response, on a wide variety of topics. It can generate fluent and often highly convincing answers to many questions.

ChatGPT is able to generate text because it is trained on enormous amounts of data scraped from the internet. It produces text by predicting the word, or combination of words, most likely to come next, with that likelihood derived from its training content, that is, material already online. Different training sets will produce different suggestions for the same prompt, and as the model develops and its training set expands, a greater variety of outputs will be generated.
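
To make the ‘most likely next word’ idea concrete, the following toy sketch in Python is purely illustrative (ChatGPT itself is a large neural network trained on vast amounts of text, not a hand-written table, and the words and probabilities below are invented for this example). It shows how fluent-looking text can be produced word by word from probabilities alone, with nothing checking whether the result is true.

    import random

    # Invented, hand-made probabilities of the next word given the previous
    # word. In a real model these probabilities come from training data.
    next_word_probabilities = {
        "the": {"court": 0.4, "client": 0.35, "contract": 0.25},
        "court": {"held": 0.5, "found": 0.3, "ordered": 0.2},
        "client": {"agreed": 0.6, "instructed": 0.4},
    }

    def generate(start_word, length=4):
        words = [start_word]
        for _ in range(length):
            options = next_word_probabilities.get(words[-1])
            if not options:
                break
            # Pick the next word at random, weighted by its probability.
            choices, weights = zip(*options.items())
            words.append(random.choices(choices, weights=weights)[0])
        return " ".join(words)

    # e.g. "the court held": fluent, but nothing verifies whether it is true.
    print(generate("the"))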

The training data has not been explicitly curated for accuracy or for any specific use case. ChatGPT also continues to be trained on the prompts users enter, unless the user specifically opts out. It learns about the quality of its output by asking users to rate the answers given, and it also saves and labels the original prompts.

You should note that ChatGPT is a generalist tool – it is not specifically trained to work with legal resources or to provide accurate legal answers.

We recommend that you take steps to gain a basic understanding of generative AI, its uses and risks. It has potential to help lawyers in many tasks involving text, content and idea generation, but it is a tool that should be used very carefully. You need a clear understanding of its inherent limitations and your responsibilities when using any generative AI tool in the course of your work.

The Centre for Legal Innovation is offering a two-day Legal Generative AI Summit on 24-25 October. The summit features an international panel of experts and is free of charge. A number of courses are also offered by platforms like LinkedIn and Microsoft, and you can keep an eye out for other CPD sessions offered specifically for lawyers.

Accuracy

ChatGPT is optimised for fluency over accuracy, and it is not clear how it processes content where sources conflict or are inaccurate, or where the issue is a matter of opinion.

ChatGPT does not operate as a search engine – that is, it does not search for relevant content online to use when generating its answers. At this stage, it is not a reliable source of information. If you use it to generate text, you remain responsible for ensuring the accuracy of the content.  

Confidentiality

You should also bear in mind that ChatGPT gives no guarantee that it will keep your information confidential. Review the terms of use carefully, and note that you can opt out of your data being used for the purposes of improving and training the AI. Even if you do opt out, given the early stage of development and the evidence of AI’s emergent capabilities, you still should not submit confidential information when using it to generate text.

How can ChatGPT be used by lawyers?

ChatGPT can be a very helpful tool, but you must use it ethically and safely. All the rules of ethics apply in your use of this tool, including your duties:

  • of competence and diligence
  • of supervision
  • to the court and the administration of justice
  • to maintain client confidentiality. 

Some uses can be helpful, as long as you exercise careful critical judgement over the content that is produced. 

For example, you could use ChatGPT to help produce social media content, step out workflows, and draft legal information or standard paragraphs of text to use as templates or boilerplate clauses. However, you must then use your legal expertise to ensure the content is correct and appropriate, and edit it accordingly before using it. For this reason, it is best used for tasks where you already possess expertise, and it is safer in the hands of experienced lawyers.

It may also be useful as a starting point in generating ideas or creating content and articles, for example, in giving an outline for an article that a person can build on.

While ChatGPT’s usefulness for research is currently limited, AI is being used by legal publishers to deliver more reliable research tools (e.g. LexisNexis’ High Court Analyser). Even so, it is up to you to check and be satisfied with the accuracy of the content you are working with.

Handle with care

You should not input any confidential client information or instructions into ChatGPT, for example a letter from an opposing party, because that will involve a breach of your duty of confidentiality. 

Again, you should be very careful if you are using ChatGPT for legal research tasks. As noted, it does not search for relevant content online when generating its answers; its function is to generate text based on which word is most likely to come next. This is why it is capable of ‘hallucinating’ – for example, generating case references that do not actually exist and providing inaccurate case summaries. In a recent infamous example, two New York lawyers submitted a ChatGPT-generated legal brief containing six non-existent case citations. They were fined US$5,000 by the U.S. District Court for the Southern District of New York and were found to have acted in bad faith and made “acts of conscious avoidance and false and misleading statements to the court”.

You should watch for ‘plausibility bias’: the fluency of ChatGPT can induce a false sense of credibility. If you are unfamiliar with the area of law in question, you may miss subtle or even gross inaccuracies in text generated by ChatGPT. Always check to ensure the final product is accurate and helpful, and if in doubt, don’t use it.

Finally, supervisors of inexperienced lawyers and other staff should be particularly careful to ensure content produced by others is accurate and appropriate. Make sure that time and budget pressures do not incentivise excessive reliance on a tool like this: you are responsible for the final product. Firms should put in place policies and directions to staff on the use of generative AI that ensure the cautions in this guidance are observed.
