[2025] EWHC 1383 (Admin)
The perils of generative AI hallucinating and, in an effort to be helpful, making up new but false cases and/or citations are becoming increasingly well known. The two cases here led the Divisional Court to issue a clear final warning to lawyers about the use of AI, and to call for urgent steps to address the misuse of artificial intelligence.
In the Ayinde case, as part of a judicial review, Ritchie J had to consider an application for wasted costs. One ground of that application was that the claimant's barrister and solicitor had put forward five fake cases (including one purporting to be from the Court of Appeal) in their client's statement of facts and grounds for the judicial review and, when asked to produce copies of those cases, had failed to do so.
The judge concluded that this conduct had been "improper" and "unreasonable". On the facts, he could not be certain that AI had been used, but if it had, and if those using it had not double-checked the references, then the conduct was "negligent" too.
The case was promptly referred to the Divisional Court, where the President of the King's Bench Division gave the judgment of the court. The court noted that:
“Artificial intelligence is a tool that carries with it risks as well as opportunities. Its use must take place therefore with an appropriate degree of oversight, and within a regulatory framework that ensures compliance with well-established professional and ethical standards if public confidence in the administration of justice is to be maintained...the administration of justice depends upon the court being able to rely without question on the integrity of those who appear before it and on their professionalism in only making submissions which can properly be supported.”
The court went on to say that those who, notwithstanding these risks, use AI to conduct legal research have a professional duty to check the accuracy of that research against authoritative sources before using it in the course of their professional work (to advise clients or before a court, for example):
“There are serious implications for the administration of justice and public confidence in the justice system if artificial intelligence is misused. In those circumstances, practical and effective measures must now be taken by those within the legal profession with individual leadership responsibilities (such as heads of chambers and managing partners) and by those with the responsibility for regulating the provision of legal services. Those measures must ensure that every individual currently providing legal services within this jurisdiction (whenever and wherever they were qualified to do so) understands and complies with their professional and ethical obligations and their duties to the court if using artificial intelligence.”
The court then reviewed existing guidance, including from the Bar Council and the SRA, highlighting guidance given to judges that: "all legal representatives are responsible for the material they put before the court/tribunal and have a professional obligation to ensure it is accurate and appropriate". There was further discussion of duties owed to professional bodies, possible contempt of court, and wasted costs. In Ayinde itself, the judge had ordered counsel and the law centre each to pay £2,000 to the defendant.
Before the Divisional Court, counsel explained that they "may also have carried out searches on Google or Safari" and as a result may have taken account of AI-generated summaries of the results. That would mean generative AI tools had been used to produce the list of cases. This is important: it is incredibly easy to use AI without realising it.
At the same time as considering the Ayinde case, the court considered a second case, Al-Haroun, in which the claimant sought damages of £89.4 million for alleged fraud. The case was referred because the judge was concerned that, in correspondence with the court and in witness statements, reliance had been placed on numerous authorities, many of which appeared to be completely fictitious or which, if they existed at all, did not contain the passages supposedly quoted from them. Forty-five cases were cited; 18 of them did not exist. The research had been carried out by the lay client and then relied upon by their solicitors, something the court described as "extraordinary", especially as one of the fake authorities cited to the judge in question was a "decision" attributed to that judge.
The court did not, however, decide to initiate contempt proceedings in either case. There were a number of reasons for this, but it coupled that decision with a general warning to the legal profession:
“our overarching concern is to ensure that lawyers clearly understand the consequences (if they did not before) of using artificial intelligence for legal research without checking that research by reference to authoritative sources. This court's decision not to initiate contempt proceedings…is not a precedent. Lawyers who do not comply with their professional obligations in this respect risk severe sanction.”