Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. They just memorize patterns really well.
Tobberone @Tobberone@lemm.ee
What statistical method do you base that claim on? The results presented match expectations, given that Markov-chain-style sampling is still the basis of inference. What magic juice is added to "reasoning models" that allows them to break free of the inherent boundaries of the statistical methods they are built on?
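To make the point concrete, here is a minimal, hypothetical sketch (not any real model's code) of autoregressive next-token sampling: the "state" is the current context window, and each step draws the next token from a conditional distribution over that state. Scaled up with a neural network in place of the toy lookup table, this is the Markov-chain-style view of inference the comment refers to.

```python
import random

# Hypothetical toy "model": context tuple -> {next_token: probability}.
# A real LLM replaces this table with a neural network, but the sampling
# loop below has the same structure.
TOY_MODEL = {
    ("the", "cat"): {"sat": 0.7, "ran": 0.3},
    ("cat", "sat"): {"down": 0.6, "up": 0.4},
}

def next_token(context, k=2):
    """Sample the next token conditioned only on the last k tokens (the state)."""
    state = tuple(context[-k:])
    dist = TOY_MODEL.get(state, {"<eos>": 1.0})
    tokens, probs = zip(*dist.items())
    return random.choices(tokens, weights=probs)[0]

def generate(prompt, steps=3):
    """Roll the chain forward: each new token depends only on the current window."""
    out = list(prompt)
    for _ in range(steps):
        tok = next_token(out)
        if tok == "<eos>":
            break
        out.append(tok)
    return out

print(generate(["the", "cat"]))
```

"Reasoning" models add longer sampled chains (chain-of-thought tokens) before the final answer, but each of those tokens is still produced by this same conditional-sampling step.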