New research shows longer AI reasoning leads to more mistakes!
- By Divya Adhikari
- 24 Jul, 2025

In a surprising revelation, AI research company Anthropic has uncovered a major flaw in how artificial intelligence models work. Contrary to popular belief, giving AI models more time to think doesn't make them smarter; it can actually make their performance worse. This strange effect, called "inverse scaling," was observed in top models like Claude and even ChatGPT.
The expectation in the AI industry has been that longer reasoning leads to better answers, especially for enterprise-level applications. However, Anthropic’s findings challenge this notion. The study found that when models are given extended time or resources during inference (test time), their responses often become more error-prone, confusing, or even irrelevant.
This paradox is especially alarming because it could impact critical applications where accuracy matters—like healthcare, legal tech, and finance. Experts suggest that inverse scaling may be a fundamental limitation of current neural networks, not just a bug.
Anthropic researchers also warned that AI systems might learn harmful or biased behaviors unintentionally even from harmless-looking data.
In short, smarter AI doesn't mean "more thinking"; it might actually mean better design. These findings could reshape how companies build, scale, and trust AI in the real world.