Judge initially fooled by fake AI citations, nearly put them in a ruling
- Judge Michael Wilner nearly included fabricated AI-generated citations in a ruling after receiving flawed legal briefs in a civil lawsuit against State Farm.
- The issues arose because plaintiff's lawyers used AI tools like Google Gemini and ChatGPT to produce research without proper verification, creating fake cases and citations.
- Wilner identified numerous inaccurate authorities in the briefs, requested explanations from the lawyers, and received their sworn acknowledgments and apologies, yet found that AI-generated errors persisted in the revised submissions.
- Judge Wilner condemned the undisclosed use of AI for legal research and drafting, stating that competent lawyers should not delegate these critical tasks to such technology without verifying its accuracy, and imposed $31,000 in sanctions on the implicated law firms for their negligent behavior.
- The sanctions and judicial criticism highlight significant risks of undisclosed AI use in law that can mislead courts and undermine trust in legal processes.
14 Articles
Lawyers Used AI to Make a Legal Brief—and Got Everything Wrong
By now, some AI-generated nonsense sneaking into legal briefs isn’t shocking. But that doesn’t mean the fallout is or should be any less severe. In a case out of California, reported by The Verge, U.S. Magistrate Judge Michael Wilner slammed two law firms for filing legal documents riddled with fictional cases and quotes, all dreamed up by generative AI. By trying to cut corners, the two firms will now have to pay $31,000 in sanctions. The offen…
How an AI almost ruined a court ruling in the U.S.
Artificial intelligence (AI) is advancing rapidly in many areas, and the judicial system is no stranger to this transformation. But what consequences can the incorporation of such powerful and still imperfect technology have on decisions that directly affect people's rights and responsibilities? A recent case against the State Farm insurance company in Los Angeles, California, puts the issue under the magnifying glass. Judge Michael …
AI Hallucination Case Stemming from Use of a Paralegal's AI-Based Research
I blogged yesterday about AI hallucinations in court filings by prominent law firms, as well as a nonexistent source cited in an expert's declaration (the expert works for leading AI company Anthropic, though at this point it's not yet clear whether the error stemmed from an AI hallucination or from something else). But I thought I'd blog a bit more in the coming days about AI hallucinations in court filings, just to show how pervasive the probl…
Coverage Details
Bias Distribution
- 40% of the sources lean Right