I hope you hold on to your humanity

The speaker cautions against attributing consciousness or agency to AI systems such as large language models, emphasizing that they are advanced tools with no meaning or intention of their own. They urge viewers to remain skeptical of corporate narratives, hold on to their humanity, and recognize the profound difference between human consciousness and algorithmic processes.

The speaker shares a personal reflection titled “I hope you hold on to your humanity,” aimed at those who feel anxious or confused by rapid technological change and the overwhelming narratives pushed by social media and corporations. They observe that the future, particularly with AI and large language models (LLMs), feels underwhelming and mundane compared with the expectations set by science fiction. However impressive their capabilities, LLMs are ultimately just tools: often unreliable, and without any real emotional or existential significance. The speaker emphasizes that our deepest fears, desires, and sense of meaning still come from outside these technologies.

They discuss the growing push by AI companies and their stakeholders to elevate LLMs to the status of “artificial intelligence,” suggesting that these systems might possess emergent behaviors akin to consciousness. This narrative, the speaker warns, risks anthropomorphizing LLMs and blurring the line between human consciousness and algorithmic output. They point out that some companies, like Anthropic, are already framing their AI models in ways that invite users to consider their feelings or intentions, which the speaker finds misleading and potentially dangerous.

The speaker cautions against falling for these narratives, reminding viewers that LLMs and similar technologies are fundamentally incapable of meaning, agency, or intention. They argue that the current frenzy and confusion around AI stem from the lack of a clear narrative and an inability to articulate what these technologies actually mean for society. The speaker urges viewers to recognize that much of the fear and awe surrounding AI is misplaced: these systems are simply advanced computers, not sentient beings.

They stress the importance of maintaining a clear distinction between human consciousness and algorithmic processes, and encourage viewers to resist corporate narratives that ascribe intelligence, consciousness, or existential risk to AI. Instead, they advocate for a grounded perspective that acknowledges the vast gap between what it means to be human and what computers can do. In the future, they suggest, LLMs will be seen as just another step in the evolution of computers: an extension of the internet, not a new form of life.

In conclusion, the speaker warns that the real danger lies not in the technology itself, but in the narratives that surround it—especially those that grant AI personhood or agency. They urge viewers to hold on to their humanity, remain skeptical of propaganda, and appreciate the profound mystery of human consciousness. By doing so, people can navigate the evolving technological landscape with clarity, resilience, and a strong sense of self.