A UK tribunal has issued a sharp reminder that generative AI is a drafting aid, not an authority, after a barrister was referred to the profession’s regulator, the Bar Standards Board, for allegedly misleading the tribunal by submitting a fictitious case generated by ChatGPT.
The tribunal found that the barrister had “misused AI and attempted to mislead the tribunal” when he submitted grounds of appeal in which he cited a case titled Y (China) [2010] EWCA Civ 116, which the tribunal said does not exist. The tribunal noted that the barrister did not immediately admit to his unprofessional use of AI and maintained that the case was genuine because it was “evidenced” by the AI model.
“Hallucination” is the generation of plausible-sounding but false content by AI. It is a well-known, well-documented behavior of large language models that stems from their probabilistic, next-token prediction design. While newer systems, such as research-oriented builds of ChatGPT and comparable models, have substantially reduced these failures through better training, retrieval, and tool use, the risk is never zero.
The tribunal cited guidance from the Divisional Court, including R (Ayinde) v London Borough of Haringey, where the Divisional Court stated that “its overarching concern was to ensure that lawyers clearly understand the consequences of using AI for legal research without checking that research by reference to authoritative sources. Lawyers who do not comply with their professional obligations in this respect risk severe sanction.”
The tribunal stated that “it is a lawyer’s professional responsibility to ensure that checks on the accuracy of citations and quotations are carried out using reputable sources of legal information”. It found that the barrister had directly attempted to mislead the tribunal through his reliance on Y (China) and, by making a full admission only in his third explanation to the tribunal, had not acted with integrity and honesty.
The tribunal noted that a police investigation or contempt proceedings might be appropriate where there is evidence of deliberate submission of false material to the court, but concluded that neither course was warranted in this instance because the barrister did not know that large language models were capable of producing false authorities.
The barrister apologized for his conduct, claimed that “he was misled by the search engine” and described himself as “a victim”. He stated that he had undertaken further training, including a presentation on AI, but the tribunal nonetheless concluded that referral was “most definitely appropriate.”
Similar episodes have played out in the United States, with comparable judicial responses. In Mata v. Avianca (S.D.N.Y. 2023), the court sanctioned counsel for filing a brief containing fabricated case citations generated by a chatbot and required corrective letters to the courts affected. In another 2023 matter, Judge Jesse Furman of the Southern District of New York issued an order to show cause why a lawyer should not be sanctioned after he filed a motion citing non‑existent decisions later traced to an AI tool. As with the barrister who cited Y (China), the defendant claimed not to know that “generative text services could show citations and descriptions that looked real, but were not.” Some U.S. state courts have followed suit, including a recent Utah appellate sanction for AI‑generated fictitious citations. Several federal judges now require up‑front certifications that any AI‑assisted drafting has been independently verified.
AI-assisted text should always be treated as an unverified draft and validated against authoritative sources before use, especially where accuracy, professionalism, personal impact, safety, or regulatory compliance are at stake. The takeaway for the legal profession is that over-reliance on AI without thorough verification jeopardizes core duties to the court and risks serious regulatory consequences. View the decision here.