With at least 31 reports of AI hallucinations in UK legal cases – over 800 worldwide – and judges using AI to assist in judicial decision-making, the risks and benefits are impossible to ignore. Matthew Lee examines how different jurisdictions are responding
This article is a necessary update to my previous piece on the rise and rise of fake cases, also known as AI hallucinations (Counsel, September 2025). Interestingly, even the terminology is contested: the very label we use for this phenomenon has attracted debate, with many questioning whether it properly reflects what is happening when AI systems generate incorrect legal material. For example, in the Federal Court of Australia, Wheatley J observed:
‘Although the term used in relation to erroneously generated references by AI is “hallucinations”, this is a term which seeks to legitimise the use of AI. More properly, such erroneously generated references are simply fabricated, fictional, false, fake and as such could be misleading.’ JML Rose Pty Ltd v Jorgensen (No 3) [2025] FCA 976 (Federal Court of Australia, 19 August 2025)
However, in the Court of Appeal of California, three judges noted:
‘We conclude by noting that “hallucination” is a particularly apt word to describe the darker consequences of AI. AI hallucinates facts and law to an attorney, who takes them as real and repeats them to a court…’ Noland v Land 2025 Cal. App. LEXIS 584
On 31 October 2025, the Courts and Tribunals Judiciary updated its AI Guidance for Judicial Office Holders and also adopted the language of AI hallucinations in the following terms:
‘AI Hallucinations are incorrect or misleading results that AI models generate. These errors can be caused by a variety of factors, including insufficient training data, the model’s statistical nature, incorrect assumptions made by the model, or biases in the data used to train the model.’
For the purpose of this article, I will adopt the term ‘hallucinations’ to be consistent with the guidance. However, not every case I cite involved confirmed use of AI; in some, it was merely alleged or suspected.
Readers of the Natural and Artificial Intelligence in Law blog (‘NAIL blog’) will have noticed the continuing rise of hallucinations featuring in court decisions. It takes time to write up each decision I am sent, but it appears that we now have at least 31 reported instances of AI hallucinations, or alleged AI hallucinations, in UK legal cases, with likely around 800 internationally. The problem does not seem to be abating and there may be more incidents that are not being identified or reported.
In my earlier article, I expressed concern that well-intentioned judges sometimes cite hallucinated cases and their erroneous legal principles in full within official judgments in order to show the extent of the problem. The intention seems to be to educate readers and to be transparent about what has gone wrong. It has greatly assisted those interested in these issues that the full cases and citations are available to analyse why these hallucinations are happening.
However, by reproducing those hallucinations and invented principles in official case reports, there is a real risk that litigants in person, lawyers or AI systems will absorb them, use them and potentially feed them back into future outputs, embedding the error into the wider legal ecosystem. That risk has already prompted concern within the Australian judiciary:
‘... There has been an approach, which I will adopt, of redacting false case citations so that such information is not further propagated by AI systems.’ JML Rose Pty Ltd v Jorgensen (No 3)
In the UK, this approach does not seem to have been adopted. Whether the UK will move towards redaction, or another method of mitigating the problem, remains to be seen, but it is an issue that warrants urgent and careful consideration.
At the recent Housing Law Practitioners’ Association conference, the Master of the Rolls, Sir Geoffrey Vos, spoke about the progress being made in bringing technology into the justice system to deliver faster and more reliable outcomes. The emphasis was not only on administrative efficiency, but also on the potential for AI to assist litigants in property litigation and even to support, or perform, aspects of decision-making itself:
‘I shall not now repeat the several lectures I have given on whether AI ought to be used for judicial decision-making. Suffice it to say that I am sure that many property disputes could be amenable to machine-made decision-making. It will be for future discussion whether, as a society, we would want decisions about people’s right to stay in their home to be decided by a machine rather than a human. This conversation is rather more urgent than many people imagine.’
For some, the idea that a machine might be involved in a decision about whether someone can stay in their home still sounds like science fiction. For those who already use these tools in legal practice, knowingly or unknowingly, the direction of travel is less surprising. In one case from the Tax Tribunal, the judge explained how AI assisted in his decision:
‘I have used AI in the production of this decision. This application is well-suited to this approach. It is a discrete case-management matter, dealt with on the papers, and without a hearing. The parties’ respective positions on the issue which I must decide are contained entirely in their written submissions and the other materials placed before me. I have not heard any evidence; nor am I called upon to make any decision as to the honesty or credibility of any party.’ VP Evans (as executrix of HB Evans, deceased) & Ors v The Commissioners for HMRC [2025] UKFTT 1112 (TC)
The benefits and promises of AI are clear. AI can assist with judicial functions, support legal analysis and act as a powerful time-saving mechanism for lawyers. Properly used, it can offer guidance to litigants in person who might otherwise have no realistic access to legal advice at all. These benefits are already being realised and ought to be recognised. As the Master of the Rolls explained:
‘This is not about whether it is or is not a good thing for claims to be created by AI. AI is a reality now, and individuals are and will be able to use it as their agent to pursue litigation.’
Alongside this positive vision, there are numerous risks that must be considered before full confidence can be placed in AI as a routine assistant in legal work or even a component of judicial decision-making. Foremost among these risks are hallucinations. As the AI Judicial Guidance explains, hallucinations may include making up fictitious cases, citations or quotes, referring to legislation, articles or legal texts that do not exist, providing incorrect or misleading information regarding the law or how it might apply, and making factual errors. These kinds of errors strike at the core functions of judging and lawyering, which depend on accuracy, reliability and the careful application of real law to real facts.
There does not seem to be a consensus on whether hallucinations can ever be fully eliminated. As with much of the discussion around AI, there is uncertainty and an evolving scientific picture. When I first started tracking these cases, I assumed the problem would be short-lived. I was mistaken about the timeframe, but I remain hopeful that the technology will improve.
None of this means that technology cannot provide meaningful assistance to court users and judges. Some jurisdictions are already experimenting with AI-assisted decision-making in appropriate, lower-value contexts, with a human judge retaining the final say. I am hopeful that carefully designed systems, introduced with care and proper oversight, can improve access to justice, but any such system must also weigh the real risks presented by hallucinations.
Matthew Lee, ‘The rise and rise of fake cases’, Counsel, September 2025
Courts and Tribunals Judiciary, AI Guidance for Judicial Office Holders, updated 31 October 2025
Matthew Lee, ‘Thirty-one UK cases of hallucinated citations, AI and otherwise’, Natural and Artificial Intelligence in Law blog
Sir Geoffrey Vos, Master of the Rolls, ‘Innovations in the Housing Sector – New Age Solutions for Age Old Problems’, speech to the Housing Law Practitioners’ Association Conference, 4 December 2025