Will AI prove to be good, bad or indifferent for the legal profession? One thing is certain: it’s not going away. The Brief investigates.

Since the launch of ChatGPT on 30 November 2022, the legal sector has been grappling with the implications of generative AI (artificial intelligence) for the future of the profession.

While many foresee (and, in some cases, are already seeing) great time savings from tools that can rapidly summarise documents or pull together legal arguments from disparate sources, others are focused on the risks.

One fear is that as the tools get better they will render redundant many legal roles, particularly at entry level. And, if those entry level roles dwindle, where will the next generation of partners and leading legal authorities come from?

The other, more immediate, concern centres on the opposite problem: that generative AI is currently adept at fabricating apparently convincing nonsense, and is thus not fit for purpose.

“Abandoned responsibilities”

In June 2023 two US lawyers and their firm, Levidow, Levidow & Oberman, P.C., were fined $5,000 by a federal judge after they unwittingly submitted fictitious legal research in an aviation claim.

Judge P Kevin Castel said the lawyers and their firm, “…abandoned their responsibilities when they submitted non-existent judicial opinions with fake quotes and citations created by the artificial intelligence tool ChatGPT, then continued to stand by the fake opinions after judicial orders called their existence into question.”

In a separate case, in December 2023, Donald Trump’s disgraced former lawyer Michael Cohen admitted to having unwittingly cited non-existent court rulings in a motion to try to bring his court supervision to an early end. Cohen said that he had used the Google Bard AI tool to carry out research, and that he had not understood it was a generative text service as opposed to a “super-charged search engine”.

Size matters

As well as the risk of “hallucination” – the invention of fictitious cases – another potential problem arises when using generative AI in relation to smaller jurisdictions. Can a technology that cannot yet be trusted not to invent citations be relied upon to surface the cases relevant to a particular jurisdiction?

In an article first published in The Scotsman, Peter Littlefair, senior litigation associate with the Scottish firm Balfour + Manson, said, “Predictive modelling will inevitably focus on larger jurisdictions, so will all relevant Scottish cases be included? Or will AI throw up cases from elsewhere which might be of interest but not persuasive in Scottish courts?

“In my view, predictive AI models are interesting, but not good enough yet to make inroads into civil litigation.”

90 per cent quicker

For all the negatives, however, a number of firms say they are already reaping the benefits of employing AI.

Weightmans, for instance, says that its document review time has been cut by 90 per cent after adopting AI-enabled technology from the legal industry specialist Litera.

Dr Catriona Wolfenden, product & innovation director at Weightmans, explains, “Our journey with AI started about two years back when a client needed to review 1,800 documents within a challenging five-day timeframe. Although we were sceptical about handing over such a crucial task to technology, we worked closely with Litera to ensure that the pace could still match up to the safety of our ever-so-sensitive and private documentation/assets.

“Beyond the time savings, AI has truly changed the game for us. It has given our team the tools to handle stacks of documents more efficiently, allowing our lawyers to spend more valuable time with our clients, instead of drowning in paperwork.”

The firm, she continues, is now exploring the use of large language models (ChatGPT-style generative AI). She says, “We are not looking to replace lawyers – instead, we are focusing on the augmentation of a lawyer’s skill and enhancing their professional expertise.”

Devil’s advocate

The “hallucination” problem that currently dogs the use of generative AI is likely to lead many firms to remain cautious about it for some time. While acknowledging that lawyers will need to remain vigilant, Jan Van Hoecke, VP of AI Services at the legal technology provider iManage, says that rather than training AI systems on the entire internet, better results can be achieved by focusing it on firms’ own “knowledge repositories”.

He also points out that the technology has applications beyond drafting legal provisions: “A more interesting use case of generative AI is using the model to simulate a real-world argument with the counterparty, say for a litigation case. As part of the preparation for a case, the lawyer can use the generative AI chatbot to play the role of the counterparty to help anticipate their argument and strengthen their own reasoning and positioning.”

Challenging business models

While nobody is seriously arguing that AI will not have a place in legal practice, concerns about its use are not limited to the accuracy, or otherwise, of its output.

According to Littlefair, the adoption of AI could make current charging practices untenable. He says, “As lawyers, our business value is the time we spend on client work.

“If AI does become effective you still need to validate and interpret it, but AI will reduce time spent on cases. What does that mean for what you charge, and for legal business models?

“If AI leads to greater efficiency then could it be the final nail in the coffin of the chargeable unit? In future might it be lawyers, rather than clients, who are clamouring for fixed-fee arrangements which reflect value created rather than the time spent?”
