
Judge initially fooled by fake AI citations, nearly put them in a ruling

  • Judge Michael Wilner nearly included fabricated AI-generated citations in a ruling after receiving flawed legal briefs in a civil lawsuit against State Farm.
  • The issues arose because plaintiff's lawyers used AI tools like Google Gemini and ChatGPT to produce research without proper verification, creating fake cases and citations.
  • Wilner identified numerous inaccurate authorities in the briefs and requested explanations from the lawyers; despite receiving their sworn acknowledgments and apologies, he found that AI-generated errors persisted in the revised submissions.
  • Judge Wilner condemned the undisclosed use of AI for legal research and drafting, stating that competent lawyers should not delegate these critical tasks to such technology without verifying its accuracy. He imposed $31,000 in sanctions on the implicated law firms for their negligent conduct.
  • The sanctions and judicial criticism highlight significant risks of undisclosed AI use in law that can mislead courts and undermine trust in legal processes.
Insights by Ground AI

14 Articles

  • Left: 3
  • Center: 3
  • Right: 4

Bias Distribution

  • 40% of the sources lean Right

FlaglerLive broke the news on Sunday, May 11, 2025.