The increasing use of AI by lawyers is causing controversy, with judges criticising court submissions that include “bogus AI-generated research”.
The American Bar Association (ABA) recently published its first guidance on lawyers’ use of large language models, stating that lawyers have a duty of competence, including maintaining relevant technological competence, which requires an understanding of the benefits and risks of the AI tools they use.
Andrew Perlman, dean of Suffolk University Law School, argues that the problem of hallucination does not mean AI tools cannot be beneficial, and notes that they are increasingly being used by lawyers for summarisation and research.
In a 2024 Thomson Reuters survey, 63% of lawyers said they had used AI in the past, and 12% said they used it regularly for tasks such as writing summaries of case law.
However, articles criticising lawyers’ uncritical use of AI for research and writing continue to appear in the mainstream media, with one describing a scenario in which “an attorney turns to a large language model…hallucinates cases that don’t exist, and the lawyer is none the wiser”.