A troubling new trend is emerging in the legal profession as lawyers turn to artificial intelligence for assistance. In a recent high-profile incident, a lawyer submitted a motion in a Texas bankruptcy court that included citations to 32 legal cases that do not exist. The AI tool used by the lawyer had fabricated them entirely.
This incident is not isolated. It reflects a growing problem: reliance on generative AI in high-stakes legal work is producing significant errors, prompting judges to issue sanctions and a new wave of legal professionals to police their peers' work for AI-generated falsehoods.
Key Takeaways
- Lawyers are increasingly using artificial intelligence to draft legal documents and briefs.
- AI models are prone to "hallucinations," inventing fake legal cases and citations that appear authentic.
- A Texas lawyer was referred to a disciplinary committee after filing a motion with 32 fabricated case citations.
- A community of legal professionals is now actively tracking and exposing these AI-generated errors in court filings.
- Judges are responding with sanctions, including fines and mandatory AI ethics training for offending attorneys.
A Case of Fictional Precedent
In a Texas bankruptcy court earlier this year, a legal motion was filed citing a case known as Brasher v. Stewart from 1985. On the surface, it appeared to be a standard citation offered in support of an argument. A closer look, however, revealed a critical problem: the case does not exist.
Further investigation showed that this was just one of 32 completely fabricated citations within the same document. The lawyer responsible had used an AI tool to assist in writing the brief, and the technology had generated a list of entirely fictional legal precedents to support the motion's claims.
The presiding judge reacted strongly to the submission of what has been termed "A.I. slop." The judge's official opinion criticized the lawyer's lack of oversight. As a consequence, the attorney was referred to the state bar's disciplinary committee and ordered to complete six hours of specialized training on the ethical use of artificial intelligence in legal practice.
What Are AI Hallucinations?
In the context of artificial intelligence, a "hallucination" occurs when a large language model (LLM) generates information that is false or entirely invented, yet presents it as factual. In the legal field, this can manifest as plausible-sounding but wholly fake case names, judges, and legal rulings.
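The danger is easy to demonstrate. Here is a minimal Python sketch (the reporter volume and page in the citation string are invented for illustration; the article gives only the case name and year) showing that a fabricated citation can satisfy every formatting convention of a genuine one:

```python
import re

# The superficial shape of a U.S. case citation:
# "Party v. Party, volume reporter page (year)".
CITE_FORMAT = re.compile(r"[\w.'\- ]+ v\. [\w.'\- ]+, \d+ [A-Za-z.\d]+ \d+ \(\d{4}\)")

# A fabricated citation in the style of the Texas filing. The case name and
# year come from the article; the reporter details are invented for this demo.
fake = "Brasher v. Stewart, 702 S.W.2d 441 (1985)"

print(bool(CITE_FORMAT.fullmatch(fake)))  # True: the form is flawless

# A format check cannot catch the fabrication; only a lookup against a real
# citation database (the step the filing skipped) would reveal it.
```

Passing this kind of surface check is exactly what makes hallucinated citations so treacherous: they fail only at the verification step that a careless filer never performs.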
The Emergence of AI Watchdogs
The Texas filing did not go unnoticed by the wider legal community. Robert Freund, a lawyer based in Los Angeles, identified the flawed document and submitted it to a growing online database. This repository serves as a global tracker for instances of AI misuse within the legal system.
Freund is part of an informal network of attorneys and legal experts who have taken it upon themselves to act as watchdogs. They meticulously review court filings and public records, searching for the tell-tale signs of AI-generated content that has not been properly fact-checked.
This peer-driven oversight is becoming a crucial, albeit unofficial, check on the premature and careless integration of generative AI into the justice system. The goal is not just to expose errors but to establish a professional standard in which verifying AI output is as fundamental as any other form of legal research.
The American Bar Association has issued guidance on the use of AI, emphasizing that lawyers retain full responsibility for the accuracy of any documents they file, regardless of whether they were drafted by a human or a machine.
Judicial System Responds to a New Threat
Judges across the country are now grappling with how to handle the influx of AI-assisted, and sometimes AI-invented, legal arguments. The response is becoming more standardized, with a clear message that accountability remains with the human attorney.
Setting a Precedent for Sanctions
The Texas case is a clear example of the new judicial approach. The sanctions were twofold: professional and educational. Referring the lawyer to the disciplinary committee signals the severity of the offense, treating it as a breach of professional conduct. The mandated training acknowledges that this is a new technological challenge that requires new skills and ethical understanding.
This approach aims to correct the immediate error while also equipping the lawyer to avoid similar mistakes in the future. Legal experts believe such rulings will become more common as courts establish firm rules for AI use.
The Burden of Verification
The core issue is that AI language models are designed to generate convincing text, not to provide factually accurate legal citations. They often create text that looks and sounds like a legitimate legal argument, complete with case numbers and judicial language, but which has no basis in reality.
Legal professionals are reminded that they cannot delegate their duty of diligence to a machine. Every single citation, fact, and legal principle suggested by an AI must be independently verified using traditional legal research databases. Failure to do so is increasingly being viewed as a form of professional negligence.
"The rise of generative AI is a paradigm shift for the legal profession, but it doesn't change the fundamental ethical obligations of an attorney. The buck stops with the lawyer who signs the filing, not the algorithm that helped write it."
The Future of AI in Law
Despite these challenges, artificial intelligence is not expected to disappear from the legal landscape. When used correctly, it can be a powerful tool for summarizing documents, identifying relevant themes in large volumes of evidence, and automating routine drafting tasks, freeing up lawyers to focus on higher-level strategy.
The current wave of errors and corrections is seen by many as a necessary growing pain. It is forcing the legal industry to confront the limitations of the technology and to develop best practices for its use. This includes:
- Implementing rigorous internal review processes for any AI-assisted work.
- Investing in training programs that cover both the capabilities and the pitfalls of AI tools.
- Developing new software that can cross-reference AI-generated citations against verified legal databases, along the lines of the sketch below.
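A minimal sketch of that last idea, in Python, with a toy in-memory index standing in for a verified database (a production tool would query a commercial research service instead, and would need to recognize far more citation formats than this narrow pattern):

```python
import re

# Party names: one or more capitalized tokens ("Brasher", "First Nat'l Bank").
NAME = r"[A-Z][A-Za-z.'\-]*(?: [A-Z][A-Za-z.'\-]*)*"

# "Name v. Name, volume reporter page (year)" -- deliberately narrow; real
# tools must handle many more citation formats than this sketch does.
CITATION_RE = re.compile(NAME + r" v\. " + NAME + r", \d+ [A-Za-z.\d]+ \d+ \(\d{4}\)")

# Toy stand-in for a verified citation index. A real system would query a
# commercial research database rather than a hard-coded set.
VERIFIED_CASES = {
    "Marbury v. Madison, 5 U.S. 137 (1803)",
}

def flag_unverified(draft: str) -> list[str]:
    """Return every citation found in the draft that is absent from the index."""
    return [c for c in CITATION_RE.findall(draft) if c not in VERIFIED_CASES]

draft = (
    "The court held in Marbury v. Madison, 5 U.S. 137 (1803), that judicial "
    "review is settled; see also Brasher v. Stewart, 702 S.W.2d 441 (1985)."
)

for cite in flag_unverified(draft):
    print("UNVERIFIED:", cite)  # flags only the fabricated Brasher citation
```

Cross-referencing of this kind is only a first filter: a citation can exist and still be quoted for a proposition it does not support, so human review remains the final step.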
As the technology matures and the legal profession adapts, the hope is that AI can become a reliable assistant rather than a source of embarrassing and professionally damaging errors. For now, the vigilance of lawyers like Robert Freund and the firm stance of judges are shaping the cautious path forward.