The debate explores whether AI can write great books by contrasting AI’s ability to generate semantic content with the uniquely human experiences, emotions, and cultural contexts that enrich literature and learning. While acknowledging AI’s potential to personalize education and democratize knowledge, the panel emphasizes the importance of preserving human agency, meaningful engagement, and community in the development and use of AI technologies.
The debate on whether AI can write a great book centers on the distinction between semantic content and the human experience behind literary creation. Jonathan argues that while AI currently cannot produce works with the depth and deliberateness of human authorship, there is no theoretical reason it couldn’t eventually generate great books, at least in terms of semantic content. However, he acknowledges that the human element—the lived experiences, emotions, and intentions behind the text—adds a meaningful dimension that AI lacks. This human connection is crucial to understanding and engaging with literature, especially in educational contexts, where the interaction with texts is enriched by personal and historical influences.
Hollis emphasizes the importance of the human context and the genealogies of influence behind great works. He points out that literature is not just about the text itself but also about the complex web of relationships, experiences, and historical moments that shape it. This depth of human experience, and the deliberate embedding of personal and cultural allusions in literature, are aspects AI cannot replicate. He also notes that while AI can deliver factual content efficiently, the interpretive and emotional engagement with texts—what he calls “ensouled speech”—is inherently human and essential for true understanding.
Brendan brings a philosophical perspective on learning and knowledge, stressing that much of human knowledge is tacit, practical, and gained through lived experience, which AI currently cannot emulate. He acknowledges AI’s potential to handle explicit semantic knowledge and personalized tutoring effectively, as seen in educational settings like the Alpha School, where AI supports individualized learning and frees up time for experiential activities. However, he warns of the risks of passivity and social isolation if AI is misused, underscoring the need for careful integration that fosters autonomy and human flourishing rather than dependence.
The panel also discusses the societal implications of AI in education and intellectual life. There is concern that AI might exacerbate social isolation by providing easy but superficial intellectual engagement, potentially displacing rich human-to-human interaction. Yet there is optimism that AI can be harnessed to build intellectual communities, personalize learning, and democratize access to knowledge. Projects like the Catherine Project and AI-driven sorting mechanisms for intellectual interests illustrate how technology can facilitate connection and discovery, provided it is designed with human values and community-building in mind.
Finally, the conversation turns to the motivations and visions of those building AI technologies. There is skepticism about whether most AI developers prioritize expanding human agency and flourishing, or whether they lean toward automation that diminishes human involvement. The panelists agree that this moment is critical for philosophy and education to engage deeply with AI’s development, ensuring that human questions and values remain central. They call for a unified effort to steer AI’s trajectory toward supporting human dignity, creativity, and meaningful learning in an era of rapid technological change.