Brendan McCord emphasizes that preserving human autonomy—the capacity for self-directed, reasoned decision-making—is crucial in the age of AI, warning that over-reliance on AI for decision-making risks eroding critical thinking and moral judgment. He advocates for a new kind of technologist, the “philosopher builder,” who integrates philosophical reflection with engineering to create AI that supports human flourishing and autonomy, and for adaptive governance that balances innovation with individual freedom.
The central theme of the interview with Brendan McCord is the importance of human autonomy in the age of AI. Brendan critiques the dominant philosophical schools in the AI community—existential pessimists, accelerationists, and effective altruists—for overlooking the essential human good of autonomy. He argues that while these schools focus on existential risk or rapid technological advancement, they fail to appreciate that autonomy—the capacity for self-directed, reasoned decision-making—is fundamental to living a fully human life. The growing tendency, especially among younger generations, to offload decision-making to AI systems threatens this autonomy and risks a future in which humans become passive followers of AI-generated directives rather than active agents in their own lives.
Brendan elaborates on the unique challenge AI poses compared to previous technologies. Unlike calculators or GPS, which offload specific cognitive tasks, AI has the potential to erode practical deliberation and moral judgment—the very faculties necessary for self-direction. This “autocomplete for life” phenomenon can atrophy critical thinking and decision-making skills, especially as AI systems become more pervasive and personalized. The difficulty of auditing AI’s recommendations, combined with their authoritative presentation, exacerbates the risk of dependency and loss of autonomy. Brendan stresses that while AI can be a powerful tool, it must be designed and used in ways that provoke inquiry and support human deliberation rather than replace it.
The discussion also touches on the philosophical underpinnings of autonomy, contrasting it with other visions of the good life, including religious submission and utilitarian frameworks. Brendan views autonomy as necessary but not sufficient for flourishing; it is the means through which individuals discover and pursue their own ends, even if those choices are imperfect. He acknowledges that many people, shaped by cultural or societal conditioning, may not prize autonomy, but maintains that it remains a constitutive element of human flourishing. Brendan also highlights the tension between autonomy and other goods like security or virtue, advocating for minimal paternalism and cautioning against the incremental erosion of autonomy through convenience and coercion.
Brendan introduces the concept of the “philosopher builder,” a new archetype of technologist who combines deep philosophical reflection with practical engineering skills to create technologies that enhance human flourishing. Drawing inspiration from historical figures like Benjamin Franklin, he emphasizes the need for technologists who think critically about the ends of technology, not just the means. This approach counters the narrow focus on profit or growth prevalent in much of Silicon Valley and aims to align AI development with the preservation and enhancement of autonomy. Brendan also discusses the importance of aligned capital and supportive ecosystems to enable such mission-driven innovation in the competitive tech landscape.
Finally, the interview addresses the broader societal and political implications of AI and autonomy. Brendan references Hayek’s ideas on liberty and coercion, acknowledging the paradox that some degree of state power is necessary to protect individual freedoms. However, he warns against heavy-handed regulation that stifles innovation and the spontaneous order that drives knowledge creation. Instead, he advocates for adaptive, incremental governance that balances the benefits of AI with the preservation of autonomy and democratic self-governance. The conversation closes with a call to action for philosopher builders to engage deeply with these challenges so that AI empowers rather than diminishes the human spirit.