Sandy Starr examines what generative AI means for language and communication, arguing that reliance on AI for writing diminishes human agency and the authenticity of personal expression. He critiques the circular use of AI in job applications and education, warning that it risks eroding our ownership of language and meaningful engagement with learning.
In the video, Starr traces the evolution from basic spellchecking to advanced AI systems such as ChatGPT, expressing both admiration and concern at their ability to produce text. He questions whether growing reliance on AI for writing is diminishing human agency, and emphasizes that the words we choose are integral to conveying meaning and exercising agency, both individually and collectively. If we allow AI to dominate our writing, he suggests, we risk losing ownership of our language.
Starr points out that the integration of AI into job applications raises a particular concern: job seekers use AI to draft resumes and cover letters, while employers use AI to sift through applications and conduct interviews. This cycle, he argues, could make human communication increasingly machine-like, further eroding the authenticity of personal expression in professional contexts.
In education, Starr highlights a parallel cycle: students submit AI-generated work, and institutions respond by using AI tools to assess it. He argues that both parties are thereby failing to uphold their responsibilities in the learning process. When neither the writer nor the evaluator fully engages with making meaning, the result is a troubling loss of agency and ownership over one's words.
Starr is skeptical of the notion of "co-piloting" with AI, likening it instead to an "autopilot" that diminishes human involvement. If individuals do not take the time to craft and understand their own words, he argues, they risk ceding control of their language to external entities such as tech companies, with broader consequences for who owns our means of communication.
Finally, Starr briefly addresses AI-generated sounds and images, acknowledging concerns about deepfakes and their potential impact on democracy. While he recognizes these risks, he suggests the future holds both challenges and opportunities. He envisions a world in which the authenticity of any recording becomes increasingly questionable, shifting how we perceive and value our memories and expressions. Ultimately, he concludes, the most meaningful accounts may come from our internal thoughts rather than external representations, underscoring the importance of preserving our agency in the face of advancing technology.