Prominent chatbots routinely exaggerate science findings, study shows - Tech and Science Post
Summary by Tech and Science Post
When summarizing scientific studies, large language models (LLMs) like ChatGPT and DeepSeek produce inaccurate conclusions in up to 73% of cases, according to a study by Uwe Peters (Utrecht University) and Benjamin Chin-Yee (Western University, Canada/University of Cambridge, UK). The researchers tested the most prominent LLMs and analyzed thousands of chatbot-generated science summaries, revealing that most models consistently produced broader …
Coverage Details
Total News Sources: 1
Leaning Left: 0 · Center: 0 · Leaning Right: 0
Bias Distribution: There is no tracked bias information for the sources covering this story.