In the video titled “Lunduke v. ChatGPT - ‘Cease Making Defamatory Statements,’” Bryan Lunduke discusses his legal concerns over defamatory statements about him generated by OpenAI’s ChatGPT, and he has given OpenAI a 30-day notice to cease these statements or face legal action. Lunduke outlines the criteria for proving defamation: the statements must be false, made publicly, about an identifiable person, damaging to that person’s reputation, and attributable to OpenAI’s fault. He believes he has a strong case given the inaccuracies the AI generated.
Lunduke shares his experiences with ChatGPT, noting that while some responses about him were accurate, others contained bizarre and false information, such as claims about his gender identity and personal life. He received numerous screenshots from others who queried ChatGPT about him, revealing a pattern of wildly inaccurate statements. Lunduke emphasizes that these inaccuracies are not just minor errors but significant misrepresentations that could harm his reputation.
He details his communication with OpenAI, beginning with his notification on December 2, 2024, about the defamatory statements. OpenAI responded promptly, asking him to fill out a privacy portal form, which he did. Lunduke then published a video documenting the false claims ChatGPT made and shared it with OpenAI, which acknowledged receipt and stated that it typically reviews such requests within 30 days.
On December 5, 2024, Lunduke warned OpenAI that if the issue was not resolved within the specified 30-day timeframe, he would escalate the matter legally. He raises critical questions about OpenAI’s ability to control the AI system it created, pondering the implications if the company cannot prevent ChatGPT from making defamatory statements. He also considers OpenAI’s ethical responsibility to address such issues for all users, not just those with a platform to voice their concerns.
Lunduke concludes by expressing his commitment to chronicle the proceedings and keep his audience updated on the developments. He acknowledges the support of his subscribers and emphasizes the importance of holding companies like OpenAI accountable for their products. The situation raises broader questions about the reliability of AI systems and their potential impact on individuals’ reputations.