A California attorney has been ordered to pay a $10,000 fine after submitting a legal brief containing numerous fake case quotations generated by the artificial intelligence tool ChatGPT. The state's 2nd District Court of Appeal issued the penalty, highlighting a growing concern over the misuse of AI in the legal profession.
Key Takeaways
- A Los Angeles-area attorney was fined $10,000 for filing an appeal with AI-fabricated legal quotes.
- The court found that 21 of the 23 case quotations in the brief were fabricated by ChatGPT.
- The court published its decision as a formal warning to all legal professionals about the risks of using AI without verification.
- The incident is part of a larger trend that has prompted California's judiciary and the State Bar to develop new rules for AI use.
Details of the Court Filing
The penalty was imposed on Amir Mostafavi, a lawyer based in the Los Angeles area. According to the court's opinion, an appeal he filed in July 2023 was riddled with fabrications: a three-judge panel determined that of the brief's 23 quotations from legal cases, 21 were nonexistent.
The court described the filing as frivolous and a waste of judicial time and taxpayer money. The opinion stated that the attorney violated court rules by citing fake cases, which undermines the integrity of the legal process.
Attorney's Explanation and Acknowledgment
Mostafavi informed the court that he had written the initial appeal himself and then used ChatGPT in an attempt to improve it. He claimed he was unaware that the AI tool would invent case citations or fabricate information. He admitted to not personally verifying the text generated by the AI before submitting it to the court.
Reflecting on the situation, Mostafavi acknowledged the potential dangers of relying on current AI technology without caution.
"In the meantime we’re going to have some victims, we’re going to have some damages, we’re going to have some wreckages," he told CalMatters. "I hope this example will help others not fall into the hole. I’m paying the price."
A Formal Warning to the Legal Community
The appellate court took the unusual step of publishing its opinion to serve as a clear warning to other attorneys. The judges emphasized the fundamental responsibility of legal professionals to ensure the accuracy of their filings.
"Simply stated, no brief, pleading, motion, or any other paper filed in any court should contain any citations— whether provided by generative AI or any other source—that the attorney responsible for submitting the pleading has not personally read and verified," the opinion declared.
A Widespread and Growing Issue
This case is not an isolated incident but rather a prominent example of a challenge facing legal systems across the country. Experts who track court cases involving AI-generated fabrications report a sharp increase in their frequency.
Damien Charlotin, a legal expert who tracks such instances across several countries, said he now sees a few new cases per day, up from just a few per month earlier in the year. He explained that large language models are more likely to "hallucinate," or invent information, when a legal argument is difficult to support with real facts.
The Risk of AI Hallucinations
A May 2024 analysis from Stanford University's RegLab found that while 75% of lawyers plan to use generative AI, some models produce hallucinations in as many as one out of every three queries, underscoring the risk of relying on unverified output.
Tracking Fictitious Legal Arguments
The problem is being monitored by multiple organizations. One project, led by Jenny Wondracek, has identified 52 similar cases in California and over 600 nationwide. She expects this number to grow as AI innovation continues to outpace legal education on the technology's limitations.
Wondracek noted that many lawyers are still unaware that AI tools can confidently present false information as fact. She has observed the issue most frequently among overworked attorneys and individuals representing themselves, particularly in family court. Worryingly, her research has also documented three separate instances of judges citing fake legal authority in their own decisions.
California's Regulatory Response
State authorities are moving to address the challenges posed by artificial intelligence in the legal field. The fine against Mostafavi underscores the urgency of establishing clear guidelines and regulations.
In response to the growing trend, California's Judicial Council recently issued guidelines requiring all state judges and court staff, by December 15, to either prohibit the use of generative AI or adopt a formal policy governing its use. This move aims to standardize the approach to AI within the state's court system.
Furthermore, the California Supreme Court has asked the State Bar of California to review its code of conduct. The goal is to determine whether existing rules need to be strengthened to specifically address the ethical use of various forms of AI by practicing attorneys.
Concerns from Legal and Academic Experts
Legal scholars warn that the problem may intensify before it improves. Mark McKenna, co-director of the UCLA Institute for Technology, Law & Policy, described the use of unverified AI output as an "abdication of your responsibility as a party representing someone."
Andrew Selbst, a professor at UCLA School of Law, pointed out the immense pressure on law students and new lawyers to adopt AI technologies. He noted that recent graduates, who often work as judicial clerks, are being told they must use AI to stay competitive.
"This is getting shoved down all our throats," Selbst said. "It’s being pushed in firms and schools and a lot of places and we have not yet grappled with the consequences of that."
As law schools and firms rush to integrate AI, experts believe that without proper training and ethical guidelines, incidents involving fabricated information are likely to become even more common in the near future.