MyPillow CEO's Lawyers in Trouble for Filing AI-Generated Legal Brief with Fake Citations
Eric Coomer, formerly director of product strategy and security at Dominion Voting Systems, has brought a civil case in the U.S. District Court for the District of Colorado against Mike Lindell, Lindell’s media site FrankSpeech, and his retail company MyPillow. The complaint says the defendants claimed nationwide fraud in the 2020 election but provided no documents, data, or expert analysis to substantiate the charge. It also says Lindell called Coomer a traitor even though no evidence of criminal conduct was produced. According to the filing, FrankSpeech posted interviews and articles that repeated the allegations alongside advertisements for MyPillow products. Coomer, who works in election system security, alleges that the statements have harmed his professional reputation.
The Motion in Limine Dispute
Under the trial preparation schedule, each side had to file any motions in limine by a set deadline. Coomer’s lawyers filed "Plaintiff’s Motion in Limine," listing the specific topics (a traffic accident, sexual history, substance use, religion, and politics) that they wanted excluded from evidence at trial.
Once that motion was filed, it became part of the court record, so the judge and opposing lawyers could read it. Two weeks later, Mike Lindell and the other defendants filed "Defendants’ Brief in Opposition", arguing that the same information should be admitted.
The AI-Generated Brief and Citation Errors
A lawyer for MyPillow and Mike Lindell relied on a generative large language model to prepare the opposition. The draft was submitted to the court without the customary manual check of its citations.
Judge Nina Wang’s review identified almost thirty citation problems, which fell into several groups. One set involved quotations that did not appear in the opinions cited. A second group assigned legal rules to authorities that never discussed those principles. A third group mislabeled non-controlling decisions as precedent. Additional errors placed opinions in the wrong judicial districts, and several references pointed to cases that do not exist in any reporter or database.
Attorney Explanations
Lead counsel Christopher Kachouroff told the court that he used a large language model to draft the opposition brief, acknowledging that such tools produce prose and citations without verifying their accuracy. When questioned about the brief’s false or imprecise citations, he offered no specific explanation and referred to the filing as a "draft pleading," even though it had been submitted as final. He admitted he did not compare the citations with the underlying opinions, skipping the standard citation-checking step. Kachouroff said he first created an outline and partial draft himself before using the AI system to expand the text, but the judge expressed doubt about that account. When the judge pointed out a quotation that did not match the cited opinion, he called it an accidental paraphrase, denied any intent to mislead, and said he had assigned citation checking to his co-counsel, Jennifer DeMaster.
Judge Wang's Response
On June 8, it was reported that Judge Nina Y. Wang fined attorneys Christopher Kachouroff and Jennifer DeMaster $3,000 each for submitting an opposition brief containing unverified, AI-generated citations in the defamation case against Mike Lindell. The court found that the lawyers violated Federal Rule of Civil Procedure 11 because the brief contained invented cases, misquotations, and legal arguments not grounded in current law.
Lawyers Warned Against Presenting AI-Generated Material Containing False Information in Legal Arguments
The Problem with AI-Generated Legal Briefs
Courts require lawyers to confirm every citation, meaning attorneys must independently verify that each referenced case, statute, regulation, or quotation actually exists, is cited with the correct volume and page numbers, and still represents good law. When lawyers file briefs drafted by large language model software without performing that verification step, they expose themselves to the possibility that the model has fabricated ("hallucinated") authorities or misquoted real ones, because the software predicts text rather than consulting reliable legal databases. Citing nonexistent or inaccurately described sources can mislead judges — who rely on briefs for accurate statements of the law — and confuse or unfairly disadvantage opposing counsel, potentially leading to sanctions or a loss of credibility for the lawyer and the client.
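To make the verification duty concrete, here is a minimal Python sketch of a pre-filing citation pass, assuming a hypothetical local index of authorities the firm has already pulled and read; the index contents and the regex are illustrative, not any vendor's API. Anything the script flags still has to be looked up by a human in Westlaw, Lexis, or CourtListener.

```python
# Minimal sketch of a pre-filing citation check. VERIFIED_AUTHORITIES is a
# hypothetical index of citations a human has already confirmed; real
# verification means consulting a reliable legal database, not this script.
import re

VERIFIED_AUTHORITIES = {
    "550 U.S. 544",  # e.g., Bell Atlantic Corp. v. Twombly (human-checked)
    "556 U.S. 662",  # e.g., Ashcroft v. Iqbal (human-checked)
}

# Rough pattern for a U.S. reporter citation: volume, reporter, page.
CITATION_PATTERN = re.compile(
    r"\b\d{1,4}\s+(?:U\.S\.|F\.(?:2d|3d|4th)|F\. Supp\.(?: 2d| 3d)?)\s+\d{1,4}\b"
)

def flag_unverified_citations(brief_text: str) -> list[str]:
    """Return every citation in the brief absent from the verified index."""
    found = CITATION_PATTERN.findall(brief_text)
    return [c for c in found if c not in VERIFIED_AUTHORITIES]

draft = "Plaintiff relies on 550 U.S. 544 and 999 F.3d 123 for this rule."
print(flag_unverified_citations(draft))  # ['999 F.3d 123'] -> check by hand
```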
Understanding Language Models and Their Limitations
Large language models are computer programs that predict text by running probability calculations on billions of sentences they absorbed during training, selecting the word that is most likely to follow the user’s prompt. Because they merely apply statistical rules and never form intentions or understand meaning, they are not autonomous or self-aware artificial intelligence.
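A toy illustration of that selection step, using a hand-built bigram probability table with invented numbers; real models learn such probabilities with neural networks over billions of parameters, but the mechanism of picking a statistically likely continuation is the same.

```python
# Toy illustration of next-word prediction with a hand-built bigram table.
# The probabilities are invented; real LLMs learn them from vast corpora.

# P(next word | current word)
BIGRAM_PROBS = {
    "supreme": {"court": 0.92, "leader": 0.05, "pizza": 0.03},
    "court":   {"held": 0.60, "ruled": 0.35, "danced": 0.05},
}

def most_likely_next(word: str) -> str:
    """Pick the statistically most probable continuation.

    The model has no notion of truth: for input it has rarely or never
    seen, it still returns *something* plausible-looking, which is how
    fabricated details ("hallucinations") arise.
    """
    candidates = BIGRAM_PROBS.get(word.lower(), {"[made-up continuation]": 1.0})
    return max(candidates, key=candidates.get)

print(most_likely_next("Supreme"))   # court
print(most_likely_next("court"))     # held
print(most_likely_next("zorblatt"))  # [made-up continuation]
```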
When a question concerns material that was scarce or absent in the training data, the model will often invent plausible-sounding details — such as article titles, authors, or page numbers — to fill the gap, a phenomenon sometimes called "hallucination". The same statistical machinery can accelerate routine writing chores: it can draft an email, condense a report into a brief summary, or suggest alternative wording within seconds, yet it cannot independently diagnose a technical fault, devise a legal strategy, or build an investment plan.
The human user therefore sets the question, evaluates whether the answer is sensible, and decides what, if anything, to trust. Because the software manipulates patterns of words without grasping the underlying facts or their consequences, every response is merely a provisional draft that must be checked against reliable sources — a safeguard that is vital wherever errors could expose someone to legal liability, clinical harm, or financial loss.
Ethical and Legal Implications
Ethics rules require honesty toward the tribunal and reasonable diligence. Therefore, filing invented authority may constitute fraud if the lawyer knew the material was false, or negligence if the lawyer failed to exercise the care that the profession demands. The classification depends on what the lawyer knew or reasonably should have known at the time of filing.
Providing a court with false AI-generated material can be treated as the functional equivalent of perjury because it supplies information that the lawyer knows is untrue. On this view, criminal penalties — including incarceration — should supplement traditional civil or professional sanctions.
Professional Duties and Verification Requirements
Lawyers must subject every machine-generated sentence to the same rigorous scrutiny they apply to a junior associate's draft. That entails verifying each citation, cross-checking factual assertions against the record, and ensuring compliance with local procedural rules and professional conduct standards — regardless of whether the AI produces a full brief, an outline, or a single paragraph.
If a lawyer files work that has not been verified, that omission amounts to professional negligence. Verification requires looking up authorities, cross-checking numbers, and confirming quoted language. Skipping these steps breaches the duty of competence and can justify dismissal from the firm or formal discipline by the bar, which may include loss of the license to practice.
The use of AI does not lessen a lawyer's duties. Every fact and citation must still be independently verified, and the combined penalties apply to any lapse.
Legal AI Market Overview
Market Growth and Adoption
In 2025, the legal technology market continues to expand. Global revenue for legal AI software is expected to reach between USD 2.3 billion and USD 2.8 billion by year-end, up from about USD 1.9 billion in 2024, and market researchers forecast annual growth of 15 to 20 percent over the next five years.
Growth is being driven by the volume and complexity of legal data, pressure on legal departments to control costs, and executive support for automation. Surveys from Deloitte show that more than two-thirds of organizations plan to increase spending on generative AI in 2025, and internal legal teams identify AI tools as a practical route to efficiency.
Generative AI is now a normal part of legal work. About 85 percent of lawyers use these tools every week. They help draft contracts, summarize cases and client files, and explain legal issues in simple terms.
A Thomson Reuters survey says 77 percent of legal professionals think AI will reshape their jobs within five years, and half of firm leaders say rolling out AI is their top priority. Data show lawyers save roughly four hours a week thanks to these tools.
Leading Legal AI Companies
- Harvey in the United States supplies a chat-based assistant trained on each firm’s data. By April 2025, it served lawyers at eight of the ten highest-grossing U.S. firms, with annual recurring revenue above USD 70 million.
- Luminance in the United Kingdom offers contract analytics software and entered 2025 with more than 700 clients in over 70 countries.
- Legora in Sweden delivers research and drafting tools and has 250 law firm customers across 20 markets.
- Eudia in the United States builds private AI agents for in-house departments.
- Supio and Eve focus on plaintiff-side automation.
- Paxton serves small and mid-sized European practices.
- Theo AI predicts litigation outcomes for lawyers and funders.
- Marveri accelerates due diligence document review.
Venture Investment
- Total so far in 2025: Legal tech fundraising has already topped $1 billion.
- Seed rounds: Marveri $3.5 million and Theo AI $6.4 million.
- Series A: Eudia $105 million (led by General Catalyst) and Eve $47 million (led by Andreessen Horowitz).
- Series B: Legora $80 million and Supio $60 million.
- Later stages: Luminance $75 million in February 2025.
- Harvey: back-to-back $300 million Series D and Series E rounds, now valued at about $5 billion (June 2025).
Key Applications
AI Contract Review
Tools like Luminance read large batches of contracts, point out clauses that do not match the usual wording, and extract key data for contract management software.
Over the last two years, Luminance has grown its customer base fivefold and its recurring revenue sixfold. The newest versions can propose edits or negotiate straightforward agreements on their own by checking each term against the firm’s standard playbook.
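As a rough illustration of the playbook idea, the sketch below compares contract clauses against approved standard wording and flags deviations. The PLAYBOOK entries and the difflib similarity threshold are assumptions for the example, not how Luminance works internally.

```python
# Minimal sketch of playbook-based clause review: each clause is compared
# to the firm's approved wording, and large deviations are flagged.
# Commercial tools use trained models; difflib only illustrates the idea.
from difflib import SequenceMatcher

# Hypothetical playbook: topic -> approved standard wording.
PLAYBOOK = {
    "governing_law": "This Agreement is governed by the laws of England and Wales.",
    "liability_cap": "Liability is capped at the fees paid in the prior 12 months.",
}

def flag_deviations(clauses: dict[str, str], threshold: float = 0.8) -> list[str]:
    """Return topics whose wording drifts too far from the playbook."""
    flagged = []
    for topic, text in clauses.items():
        standard = PLAYBOOK.get(topic, "")
        similarity = SequenceMatcher(None, standard.lower(), text.lower()).ratio()
        if similarity < threshold:
            flagged.append(topic)
    return flagged

contract = {
    "governing_law": "This Agreement is governed by the laws of England and Wales.",
    "liability_cap": "Liability of the Supplier shall be unlimited.",
}
print(flag_deviations(contract))  # ['liability_cap']
```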
Legal Research
Westlaw and Lexis+ AI now let lawyers type questions in everyday language and receive instant AI-generated summaries. With these tools, lawyers can gather and verify case law in minutes, and citations are checked automatically.
Compliance Monitoring
AI services track regulatory changes, map requirements to internal policies, and identify gaps for remedial action. Demand is strongest among corporate legal departments that need to manage privacy, trade, and sector-specific rules without expanding headcount.
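Conceptually, the gap analysis reduces to mapping required controls against covered ones. The control identifiers below are hypothetical, and real platforms extract such mappings from regulatory text with NLP rather than from hand-tagged sets.

```python
# Minimal sketch of requirement-to-policy gap analysis, assuming
# requirements and policies are already tagged with shared control IDs.

# Hypothetical controls imposed by current regulations.
required_controls = {"data-retention", "breach-notification", "dpo-appointment"}

# Hypothetical controls covered by existing internal policies.
covered_controls = {"data-retention", "breach-notification"}

# Requirements with no matching policy form the remediation backlog.
gaps = required_controls - covered_controls
print(sorted(gaps))  # ['dpo-appointment']
```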
Litigation Support
Litigation teams increasingly rely on predictive analytics. Platforms analyze previous judgments, docket data, and judge-specific patterns to estimate the probability of success and inform settlement strategy. AI-enabled e-discovery systems classify and prioritize documents, while generative models produce summaries of testimony and correspondence.
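The probability-of-success estimate typically comes from a classification model over docket features. The sketch below uses a logistic function with hand-picked weights purely for illustration; real platforms fit the weights on large archives of judgments and judge-specific data.

```python
# Minimal sketch of outcome-probability scoring from docket features.
# Weights and features are invented for illustration, not fitted to data.
import math

# Hypothetical features: judge's historical plaintiff win rate,
# number of surviving claims, 1 if summary judgment was denied else 0.
WEIGHTS = [2.0, 0.3, 1.1]
BIAS = -2.5

def win_probability(features: list[float]) -> float:
    """Logistic model: squash a weighted feature sum into a 0-1 probability."""
    score = BIAS + sum(w * x for w, x in zip(WEIGHTS, features))
    return 1 / (1 + math.exp(-score))

case = [0.62, 3.0, 1.0]  # toy docket profile
print(f"Estimated chance of success: {win_probability(case):.0%}")  # ~68%
```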
Bottom Line
Why do AI-generated "hallucination" briefs keep showing up — even though the legal-tech market is crowded with AI startups?
Lawyers often rely on free, general-purpose chatbots instead of paid legal tools.
Under deadline pressure, some practitioners skip manual citation checks. And because many start-ups focus on contracts, e-discovery, or practice management rather than brief validation, lawyers still must run a separate citation check.
Cost is a barrier: platforms like Harvey, Lexis+ AI, and Luminance are priced for enterprises, so solo and small-firm practitioners often default to no-cost alternatives. Dedicated legal AI assistants also require onboarding and Word integrations, while a public chatbot is available in one browser tab.