AI Philosophy

Can AI Actually Reason? Three Studies on LLM Cognition

· article · finished

Whether LLMs genuinely reason or merely pattern-match is a central empirical question in current AI research. These three pieces offer competing answers.

Can LLMs Really Reason and Plan?

TLDR: Subbarao Kambhampati argues LLMs are “n-gram models on steroids” — sophisticated pattern matchers incapable of principled reasoning, planning, or self-verification. They approximate reasoning by retrieving similar patterns from training data.

Key Insight: Use LLMs as idea generators, not reliable reasoners — always verify logic independently.

Read the full article ->
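Kambhampati's recommendation, generate ideas with the model but verify them independently, can be sketched as a small loop. This is a minimal illustration, not his implementation: `llm_propose` is a hypothetical stand-in for a real model call, and the verifier here is a trivial substitution check.

```python
# Sketch of the "generate, then verify" workflow: treat the model as an
# idea generator and accept only candidates that an independent,
# deterministic checker confirms.

def llm_propose(question: str) -> list[int]:
    """Hypothetical LLM call: returns candidate answers, some wrong."""
    # Simulated output for "x**2 - 5*x + 6 == 0": plausible but unchecked.
    return [1, 2, 3, 6]

def verify(candidate: int) -> bool:
    """Independent check the model cannot fool: substitute into the equation."""
    return candidate**2 - 5 * candidate + 6 == 0

def solve(question: str) -> list[int]:
    # Keep only candidates that pass verification; discard the rest.
    return [c for c in llm_propose(question) if verify(c)]

print(solve("x**2 - 5*x + 6 == 0"))  # only the verified roots survive
```

The design point is that correctness lives in `verify`, not in the model: the LLM can propose freely, and soundness is guaranteed by the checker.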

New Apple Study Challenges Whether AI Models Truly Reason

TLDR: Apple researchers found that reasoning models collapse when irrelevant information is added to a problem: slight changes to problem structure caused dramatic accuracy drops, suggesting pattern matching rather than genuine reasoning.

Key Insight: Adding irrelevant details is a simple litmus test for whether a model reasons or pattern-matches.

Read the full article ->
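The litmus test above can be sketched as a small probe: generate variants of a word problem that differ only in irrelevant details and check whether the model's answer stays stable. This is an illustrative sketch, not Apple's benchmark code; `ask_model` is a hypothetical callable standing in for a real model.

```python
# Perturbation probe: a model that truly reasons should answer every
# variant identically, since the added sentences change nothing relevant.
# Sharp accuracy drops across variants suggest pattern matching instead.

BASE = "Sam picks 4 apples on Monday and 3 apples on Tuesday. How many apples does Sam have?"
DISTRACTORS = [
    "Five of the apples are slightly smaller than average.",
    "Sam's friend Ada prefers oranges.",
]

def perturb(problem: str, distractor: str) -> str:
    # Insert the irrelevant sentence just before the final question.
    statement, question = problem.rsplit(". ", 1)
    return f"{statement}. {distractor} {question}"

def probe(ask_model, problem: str, distractors: list[str]) -> bool:
    """True if the model's answer is stable across all irrelevant edits."""
    baseline = ask_model(problem)
    return all(ask_model(perturb(problem, d)) == baseline for d in distractors)

# A hypothetical model that ignores distractors passes the probe:
print(probe(lambda prompt: "7", BASE, DISTRACTORS))
```

In practice you would swap the lambda for a real API call and run the probe over many problems, counting how often the answer flips under irrelevant edits.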

Are Language Models Mere Stochastic Parrots? The SKILLMIX Test

TLDR: Princeton’s SKILLMIX test found GPT-4 combines multiple linguistic skills in novel ways that go beyond memorization of training data. The results complicate the “stochastic parrot” narrative without fully refuting it.

Key Insight: The truth about LLM capabilities lies somewhere between “mere memorization” and “true understanding.”

Read the full article ->

What does this mean for how we think about AI?

The evidence suggests LLMs occupy an uncomfortable middle ground: more capable than simple retrieval, yet short of genuine reasoning. The practical implication is to treat LLM outputs as drafts requiring human verification, not as conclusions to be trusted at face value.