AI chatbots oversimplify scientific studies and gloss over critical details — the newest models are especially guilty
More advanced AI chatbots are more likely to oversimplify complex scientific findings based on the way they interpret the data they are trained on, a new study suggests.
Large language models (LLMs) are becoming less "intelligent" in each new version as they oversimplify and, in some cases, misrepresent important scientific and medical findings, a new study has found.
Scientists discovered that versions of ChatGPT, Llama and DeepSeek were five times more likely to oversimplify scientific findings than human experts in an analysis of 4,900 summaries of research papers.
When explicitly prompted for accuracy, chatbots were twice as likely to overgeneralize findings as when prompted for a simple summary. The testing also revealed an increase in overgeneralizations among newer chatbot versions compared with previous generations.
The researchers published their findings April 30 in the journal Royal Society Open Science.
"I think one of the biggest challenges is that generalization can seem benign, or even helpful, until you realize it's changed the meaning of the original research," study author Uwe Peters, a postdoctoral researcher at the University of Bonn in Germany, wrote in an email to Live Science. ...