A new study from Apple’s machine learning team challenges prevailing assumptions about the capabilities of advanced AI reasoning systems. In a paper titled "The Illusion of Thinking," the researchers report critical limitations in state-of-the-art Large Reasoning Models (LRMs) such as Claude 3.7 Sonnet Thinking and Gemini Thinking, showing that these systems struggle with systematic problem-solving beyond modest levels of complexity.
The team evaluated frontier LRMs in controllable puzzle environments such as Tower of Hanoi, Checker Jumping, and River Crossing. These settings allow precise control over task difficulty and demand strict adherence to logical rules rather than reliance on pattern recognition.

The study revealed three central limitations. First, all tested models failed completely once puzzle complexity exceeded roughly 15–20 steps: regardless of the computational resources available, accuracy dropped to zero at higher difficulty levels, indicating a fundamental constraint on managing multi-step logic. Second, the models displayed what the researchers called an "overthinking paradox." As problems became more challenging, the solutions the models generated grew increasingly verbose but less effective; at medium complexity, LRMs consumed two to three times more compute than standard models while delivering only modest gains in accuracy. Finally, the models showed scaling limitations. Despite having sufficient computational budgets, they reduced their reasoning effort beyond certain complexity thresholds, as measured by the number of reasoning tokens produced. This behavior suggests inherent limits in how these systems allocate computational effort during reasoning.
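To make the idea of a single, adjustable difficulty knob concrete, here is a minimal sketch (illustrative only, not code from the study) that generates optimal Tower of Hanoi solutions for increasing disk counts. Because the shortest solution for n disks takes 2^n - 1 moves, the 15–20-step range where collapse is reported is already reached with only four or five disks.

```python
# Illustrative sketch: difficulty in Tower of Hanoi is controlled by one
# parameter, the number of disks n; the optimal solution takes 2**n - 1 moves.

def hanoi_moves(n, source="A", target="C", spare="B"):
    """Return the optimal move sequence for n disks as (disk, from, to) tuples."""
    if n == 0:
        return []
    return (hanoi_moves(n - 1, source, spare, target)
            + [(n, source, target)]
            + hanoi_moves(n - 1, spare, target, source))

for n in range(3, 9):
    print(f"{n} disks -> {len(hanoi_moves(n))} optimal moves")  # 7, 15, 31, 63, 127, 255
```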
To investigate these limitations further, the study introduced a novel framework comparing LRMs with standard language models under equivalent computational budgets. At low complexity, standard models outperformed LRMs in both accuracy (85% versus 78%) and efficiency, using roughly 1,200 tokens per solution versus 4,500 for LRMs. At medium complexity, LRMs held a moderate advantage, solving 45% of problems compared with 32% for standard models. At high complexity, however, both model types collapsed to near-zero accuracy. Notably, LRMs often produced shorter and less coherent reasoning traces at these levels than they did on simpler problems.
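In practice, this kind of compute-matched comparison reduces to bucketing outcomes by complexity and reporting accuracy alongside token use for each model family. The sketch below shows one way such a tally might look; the records are made-up placeholders, not data from the study.

```python
# Hypothetical tally for a compute-matched comparison: group results by model
# family and complexity bucket, then report accuracy and mean tokens per solution.
from collections import defaultdict

# Each record: (model_family, complexity, solved, tokens) -- placeholder values only.
records = [
    ("standard", "low", True, 1150), ("lrm", "low", True, 4480),
    ("standard", "medium", False, 1900), ("lrm", "medium", True, 6200),
    ("standard", "high", False, 2400), ("lrm", "high", False, 3100),
]

stats = defaultdict(lambda: [0, 0, 0])  # [solved_count, attempts, total_tokens]
for family, level, solved, tokens in records:
    s = stats[(family, level)]
    s[0] += solved
    s[1] += 1
    s[2] += tokens

for (family, level), (solved, attempts, tokens) in sorted(stats.items()):
    print(f"{family:8s} {level:6s} acc={solved / attempts:.0%} mean_tokens={tokens / attempts:.0f}")
```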
The implications for AI development are significant. The study revealed that models struggled to reliably implement known algorithms such as breadth-first search, even when explicitly prompted. Their reasoning was often inconsistent, with solutions frequently violating basic puzzle rules mid-process, indicating a fragile grasp of logical constraints. Furthermore, while LRMs did exhibit some capacity for detecting errors, they often became trapped in repetitive correction loops instead of devising new strategies for solving problems.
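For reference, the kind of exhaustive, rule-respecting search the models reportedly failed to execute is short to write down. The sketch below is a generic breadth-first search over puzzle states, applied here to the classic wolf, goat, and cabbage crossing rather than the study's own river-crossing variant; it is an illustration, not code from the paper.

```python
# Illustrative breadth-first search over a puzzle state space (not the paper's code).
from collections import deque

def bfs_solve(start, is_goal, successors):
    """Return the shortest move sequence from start to a goal state, or None."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if is_goal(state):
            return path
        for move, nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [move]))
    return None

# Example domain: the classic wolf/goat/cabbage crossing. A state is
# (items on the left bank, farmer's bank); unsafe pairs may never be left alone.
ITEMS = frozenset({"wolf", "goat", "cabbage"})
UNSAFE = [{"wolf", "goat"}, {"goat", "cabbage"}]

def successors(state):
    left, farmer = state
    here = left if farmer == "L" else ITEMS - left
    other = "R" if farmer == "L" else "L"
    for cargo in list(here) + [None]:
        new_left = set(left)
        if cargo is not None:
            (new_left.discard if farmer == "L" else new_left.add)(cargo)
        unattended = new_left if other == "R" else ITEMS - new_left
        if any(pair <= set(unattended) for pair in UNSAFE):
            continue  # this move leaves an unsafe pair unattended
        yield (f"ferry {cargo or 'nothing'} to bank {other}", (frozenset(new_left), other))

start = (ITEMS, "L")
solution = bfs_solve(start, lambda s: not s[0] and s[1] == "R", successors)
print(len(solution), "moves:", solution)  # shortest plan: 7 moves
```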
Apple’s researchers urge caution in interpreting current benchmark results. They argue that what appears to be reasoning in LRMs might more accurately be described as constrained pattern completion: effective for routine problems, but brittle when faced with novel challenges. They emphasize that true reasoning involves the capacity to adapt solution strategies to a problem’s complexity, something current models have not yet demonstrated.
The study underscores the need for new evaluation paradigms that go beyond measuring final-answer accuracy to include analysis of the reasoning process itself. As AI systems are increasingly entrusted with critical decision-making responsibilities, understanding these fundamental limitations becomes essential for the development of reliable and transparent technologies.
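One concrete form such process-level evaluation could take is replaying every move a model proposes and flagging the first one that breaks the puzzle's rules, rather than scoring only the final state. The sketch below is a hypothetical example for Tower of Hanoi, not an evaluation protocol described in the paper.

```python
# Hypothetical process-level check: replay a model's proposed Tower of Hanoi
# moves and return the index of the first rule-violating move (or None if valid).

def first_invalid_move(n_disks, moves):
    """moves: list of (from_peg, to_peg) pairs over pegs 'A', 'B', 'C'."""
    pegs = {"A": list(range(n_disks, 0, -1)), "B": [], "C": []}  # bottom to top
    for i, (src, dst) in enumerate(moves):
        if not pegs[src]:
            return i                      # illegal: moving from an empty peg
        if pegs[dst] and pegs[dst][-1] < pegs[src][-1]:
            return i                      # illegal: larger disk placed on smaller
        pegs[dst].append(pegs[src].pop())
    return None

trace = [("A", "C"), ("A", "C")]          # second move puts disk 2 on top of disk 1
print(first_invalid_move(3, trace))       # -> 1
```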