• June 12, 2025
Apple Throws Cold Water on AGI With ‘The Illusion of Thinking’

While large language models (LLMs) have infiltrated a wide range of business systems, including process automation technology, and their cousin, agentic AI, is beginning to do the same, the end goal for AI technology providers has always been artificial general intelligence (AGI). Proponents tout advances in large reasoning models (LRMs), hoping they will underpin AGI just as LLMs have made generative AI possible. But a new report from researchers at Apple says the LRMs now available have limitations that will keep AGI purely theoretical for some time.

The main finding of The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity is that, while the more advanced reasoning models solve medium-complexity problems better than LLMs do, once those problems get harder, LRMs produce the same result as LLMs: they fail.

“Our findings reveal fundamental limitations in current models: despite sophisticated self-reflection mechanisms, these models fail to develop generalizable reasoning capabilities beyond certain complexity thresholds,” the report’s authors concluded. “We identified three distinct reasoning regimes: standard LLMs outperform LRMs at low complexity, LRMs excel at moderate complexity, and both collapse at high complexity.”

According to the report, in their current state LRMs have “limitations in exact computation.” That is, they fail to use explicit algorithms when those would help, and they reason inconsistently across puzzles. Particularly concerning, the authors wrote, “is the counterintuitive reduction in reasoning effort as problems approach critical complexity, suggesting an inherent compute scaling limit in LRMs.”

Download a copy of the report here.