
The Reasoning Model Paradox: Why Smarter AI Hallucinates More

Author: Skyward
Published: Thu 24 Jul 2025
Episode Link: None

OpenAI's latest reasoning models can solve complex physics problems and write sophisticated code better than some humans, yet they get basic facts wrong up to 79% of the time. How is that possible?

In this deep dive, we unpack one of the most concerning trends in AI development: the reasoning model paradox. Despite being more capable than ever, these models are hallucinating at unprecedented rates, with error rates that have doubled or even tripled compared to previous generations.
