Google DeepMind Paper Argues LLMs Will Never Gain Consciousness

DeepMind scientist argues AI systems only simulate consciousness through human-defined categories, not genuine experience


By Alex Barrientos

Image: Mitchell Luo / Unsplash

Key Takeaways

  • Google DeepMind researcher publishes a paper arguing consciousness is impossible for LLMs
  • Lerchner identifies an “abstraction fallacy”: AI only simulates consciousness rather than experiencing it
  • Internal research directly contradicts CEO Hassabis’s claims about imminent artificial general intelligence

Corporate AI promises clash with internal research: a Google DeepMind scientist has published a paper arguing consciousness is impossible for LLMs, directly contradicting CEO Demis Hassabis’s claims about imminent artificial general intelligence.

The Abstraction Fallacy

DeepMind scientist Alexander Lerchner argues AI systems can only simulate consciousness, never achieve it.

Lerchner’s March 2026 paper challenges the tech industry’s core assumption: that sufficiently complex computation equals consciousness. His “abstraction fallacy” concept cuts through the hype. Just because AI systems manipulate language and symbols convincingly doesn’t mean they experience anything internally. Think of a perfect celebrity impersonator versus the actual celebrity: the performance might fool you, but no amount of polish turns the act into the real thing.

The key mechanism he identifies is “mapmaker dependency.” Every AI system requires humans to organize messy reality into categories the machine can process. Those armies of workers labeling training images? They’re creating the meaning that LLMs appear to generate independently.

The Body Problem

Consciousness requires physical motivation that digital systems fundamentally lack.

According to Lerchner, consciousness demands embodiment with intrinsic drives rooted in biological necessity. As evolutionary systems biologist Johannes Jäger puts it: “You have to eat, breathe, and you have to constantly invest physical work just to stay alive, and no non-living system does that.”

LLMs exist as “patterns on a hard drive” that activate only when prompted, lacking any internal motivation or meaning beyond human-defined tasks. This is the distinction between simulation and instantiation: a computer model of a heart doesn’t pump blood, while an artificial heart does. By that logic, AGI without consciousness remains merely a sophisticated tool.

Academic Déjà Vu

Philosophy professors note these arguments aren’t exactly breaking new ground.

Leading consciousness researchers acknowledge Lerchner’s rigor while noting the territory is well trodden. Mark Bishop of Goldsmiths, University of London supports “99 percent” of the arguments but observes that “all these arguments have been presented years and years ago.” The surprise isn’t the conclusion; it’s that Google permitted publication of a paper contradicting its own AGI marketing narrative.

This creates a credibility puzzle for anyone evaluating AI company claims. When internal researchers publish conclusions that undermine corporate AGI promises, it exposes the gap between marketing narratives and scientific findings. You’re left questioning whether these mixed signals reflect genuine uncertainty or strategic positioning in the consciousness debate.
