AI and Legal Analysis – a very modern concern

Increasingly, we see clients come to us having conducted a measure of their own research using an artificial intelligence tool, such as ChatGPT. This can be very useful in providing them with an overview of an area of law before they proceed to take legal advice. However, the risks of relying heavily, or in some cases entirely, on this approach are becoming increasingly clear.


Recent examples have included a business using an AI tool to generate its own commercial lease, which inevitably contained numerous errors that could have had negative consequences for years to come. Another was a company in the early stages of a commercial dispute generating legal correspondence to third parties which had the potential to be binding, with serious ramifications down the line if not remedied. In both cases, we were able to spot the issues, which were not apparent to the untrained eye, and correct them before they were beyond resolution.


The temptation is understandable. Budgets and pressure to control costs internally can lead to creative solutions. AI tools present their information in an authoritative manner, when they can be anything but authoritative. Ultimately, AI is only as good as the information it has drawn upon, and there is no redress against it when it fails to deliver, which it often does.


The UK courts have also seen a surge in litigants in person, and even legal professionals, using AI tools such as ChatGPT that have generated “hallucinated”, or completely fake, case citations.


Over 50 fake cases were cited in UK submissions as of July 2025, leading to wasted costs orders, warnings from the Law Society, and potential disciplinary action for lawyers. 
The dangers are already evident and, if left unchecked, risk causing further harm to parties and the integrity of the UK court system.
Whilst we are a forward-thinking firm and look to embrace technology where it can add value for our clients, we avoid any reliance on artificial intelligence for legal analysis and reasoning. The dangers are clear.

Key Instances and Legal Impact:


Ayinde v Haringey LBC & Al-Haroun v Qatar National Bank [2025] EWHC 1383: The High Court revealed that 18 out of 45 case-law citations in submissions were fabricated, with lawyers failing to verify the AI-generated output.

First-tier Tribunal (Tax) (FTT): A respondent was found to have cited non-existent case law, resulting in a cautionary tale about using AI for legal research.

Tribunal Incidents: Litigants have unwittingly submitted fake AI-generated cases to a tribunal.

Consequences: Lawyers have faced wasted costs orders (fees incurred due to improper conduct) and potential contempt of court proceedings for presenting unchecked AI-generated, fictional precedents.

Judicial Guidance: Courts have warned that while they may sympathise with, for instance, self-represented litigants, all citations must be verified as genuine to avoid misleading the court.

Duty to Verify: AI tools are not considered reliable legal research tools; all outputs must be independently verified against official sources.

Liability: The lawyers and individuals citing the cases, not the AI, are responsible for the accuracy of their submissions.

In summary, AI research can be helpful for clients who want a head start before coming to see their legal representatives. It is useful for them to have an overview, and it can save time and costs. But the risks are plain. Inevitably there will be improvements in the quality of AI, but for the moment, whilst it is very much still in its infancy, AI advice is best taken with a healthy dose of scepticism.