Why AI Will Never Be Conscious - Anil Seth

Anil Seth argues that consciousness—rooted in subjective experience and biological processes—is fundamentally distinct from intelligence and unlikely to arise in AI systems that lack the embodied, metabolic, and integrated nature of biological brains. He emphasizes the need for conceptual clarity and humility in attributing consciousness to AI, highlighting human biases and advocating for scientific research into the biological basis of consciousness to inform ethical and philosophical discussions.

In this insightful discussion, Anil Seth distinguishes between intelligence and consciousness, emphasizing that intelligence is about functional problem-solving and doing, whereas consciousness is fundamentally about feeling and subjective experience—“what it feels like” to be an organism. He highlights that while intelligence and consciousness often co-occur in living beings, they are conceptually distinct and may not necessarily be linked in artificial systems. This distinction is crucial in debates about whether artificial intelligence (AI) can ever be truly conscious, as current AI systems exhibit intelligence-like behaviors without clear evidence of subjective experience.

Seth critically examines the popular thought experiment of neural replacement, where biological neurons are hypothetically replaced with silicon-based equivalents to argue for substrate-independent consciousness. He argues that this scenario is implausible because biological neurons have complex, embodied functions—such as metabolic waste clearance—that silicon cannot replicate. This leads to the broader claim that, unlike digital computers where hardware and software can be separated, in brains, what they are (their biological substrate) is deeply entangled with what they do, making consciousness likely dependent on biological processes rather than mere computation.

The conversation also explores human psychological biases that lead us to overattribute consciousness to AI, especially language models like GPT, which mimic human language fluently. Seth points out that other AI systems, such as AlphaFold, which perform complex tasks without language, do not evoke the same intuitions about consciousness, suggesting that language’s central role in human exceptionalism biases our perceptions. He cautions that conflating intelligence, language ability, and consciousness can lead to false positives in attributing consciousness to AI, while also risking false negatives in recognizing consciousness in non-human animals or synthetic biological systems.

Delving into the nature of consciousness itself, Seth presents his “controlled hallucination” theory, where perception is understood as the brain’s best guess or prediction about sensory inputs rather than a passive readout of reality. This framework, grounded in Bayesian inference and predictive processing, offers a unified way to understand various conscious experiences but does not fully solve the “hard problem” of why consciousness exists at all. Seth adopts a pragmatic materialist stance, acknowledging the mystery of consciousness while advocating for scientific approaches that relate conscious experience to the embodied, biological brain and its dynamic processes, including metabolism and autopoiesis.
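The predictive-processing idea above can be illustrated with a minimal Bayesian update: the brain holds a prior "best guess" about a hidden cause and revises it by precision-weighted prediction error as noisy sensory evidence arrives. This is only an illustrative one-dimensional Gaussian sketch, not Seth's actual model; the numbers and the `bayesian_update` helper are assumptions for the example.

```python
# Illustrative sketch of precision-weighted prediction-error updating,
# the core mechanism in predictive-processing accounts of perception.
# All quantities here are made-up toy values, not from Seth's discussion.

def bayesian_update(prior_mean, prior_var, obs, obs_var):
    """Combine a 1-D Gaussian prior belief with one noisy observation."""
    # Gain: how much weight the prediction error gets, set by the
    # relative reliability (precision) of prior vs. observation.
    k = prior_var / (prior_var + obs_var)
    posterior_mean = prior_mean + k * (obs - prior_mean)  # shift toward evidence
    posterior_var = (1 - k) * prior_var                   # uncertainty shrinks
    return posterior_mean, posterior_var

belief, uncertainty = 0.0, 1.0      # prior "best guess" about a hidden cause
for obs in [0.9, 1.1, 1.0]:         # successive noisy sensory samples
    belief, uncertainty = bayesian_update(belief, uncertainty, obs, obs_var=0.5)

print(round(belief, 3))  # → 0.857: the guess converges toward ~1.0
```

The point of the sketch is the direction of explanation: perception is modeled as the inference (`belief`), continually corrected by prediction error, rather than a passive readout of the observations themselves.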

Finally, the discussion touches on complex topics such as the unity of consciousness, split-brain patients, and the challenges of defining consciousness in AI systems that run in a distributed fashion across many servers and instances. Seth emphasizes the importance of humility and careful conceptual clarity when discussing AI consciousness, noting that current AI lacks the embodied, integrated, and temporally continuous character of biological consciousness. He advocates for ongoing research to identify which aspects of biological materiality and organization are essential for consciousness, which will inform ethical considerations around AI and deepen our understanding of consciousness across different systems.