Stanford CS221 | Autumn 2025 | Lecture 18: AI & Society

This lecture explores the societal impacts of AI, emphasizing the responsibility of technologists to consider both the benefits and risks—including issues of fairness, alignment, copyright, and transparency—when developing and deploying AI systems. It highlights the importance of proactive evaluation, auditing, and openness to ensure AI serves the broader good while minimizing harm and unintended consequences.

This lecture shifts focus from the technical aspects of artificial intelligence (AI) to its societal implications, emphasizing why technologists should care about the broader impact of their work. The instructor argues that technology, including AI, has historically transformed society, citing examples from the printing press to the internet and social media. AI's rapid adoption and influence, exemplified by tools like ChatGPT, mean that its societal effects are only beginning to unfold. Technologists, especially those building AI systems, wield significant power through their design choices, which can affect access, fairness, and the potential for harm or benefit.

A central theme is that AI is a dual-use technology: it can be used for both beneficial and harmful purposes. The lecture outlines a framework for thinking about AI's societal impact, distinguishing between intended and unintended consequences. Beneficial uses include advancements in healthcare, education, and climate science, while misuse encompasses cyberattacks, disinformation, and fraud. Accidents, meaning unintended negative outcomes, are particularly concerning: AI systems may reinforce inequality, foster over-reliance among users, or perpetuate biases. The instructor stresses the importance of proactive testing and careful deployment to minimize these risks.

The lecture delves into specific societal challenges, such as inequality and alignment. Studies like the Gender Shades project reveal how AI systems can perform unevenly across demographic groups, highlighting the need for third-party auditing and more representative data. Alignment, or ensuring AI systems do what we intend, is complicated by the difficulty of specifying reward functions and the diversity of human values. Problems like reward hacking and scalable oversight are discussed, with suggestions for more robust evaluation metrics and process-level supervision to ensure AI systems behave as intended.
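The kind of third-party audit exemplified by Gender Shades boils down to measuring a system's performance separately for each demographic group and comparing the results. A minimal sketch of such an audit, using a hypothetical `audit_by_group` helper and made-up prediction records (the group names and data below are illustrative, not from the lecture or the actual study):

```python
from collections import defaultdict

def audit_by_group(records):
    """Compute per-group accuracy and the largest accuracy gap.

    records: iterable of (group, predicted_label, true_label) tuples.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, pred, true in records:
        total[group] += 1
        if pred == true:
            correct[group] += 1
    accuracy = {g: correct[g] / total[g] for g in total}
    gap = max(accuracy.values()) - min(accuracy.values())
    return accuracy, gap

# Hypothetical audit data: (group, predicted label, true label)
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 1, 0),
    ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]
accuracy, gap = audit_by_group(records)
print(accuracy)  # {'group_a': 0.75, 'group_b': 0.5}
print(gap)       # 0.25
```

An aggregate accuracy of 62.5% here would hide the fact that the system is substantially worse for one group, which is exactly the disparity such audits are designed to surface.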

The lecture explores copyright and data usage in depth, explaining how most internet content is copyrighted and why fair use in AI training is legally complex. The instructor discusses recent lawsuits and settlements, the role of licenses and Creative Commons, and the challenges posed by AI models memorizing and potentially reproducing copyrighted works. The nuances of fair use, transformation, and market impact are examined, along with the technical reality that some models can extract large portions of copyrighted texts, raising legal and ethical questions.
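One simple way researchers quantify the kind of verbatim memorization mentioned above is n-gram overlap: what fraction of the word n-grams in a model's output also appear in a candidate source text. The sketch below is illustrative only; the function name and example strings are invented, and real memorization studies use more sophisticated measures:

```python
def ngram_overlap(generated, reference, n=5):
    """Fraction of word n-grams in `generated` that also occur in `reference`.

    High overlap suggests the text may be reproduced verbatim
    rather than paraphrased.
    """
    def ngrams(text, n):
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    gen = ngrams(generated, n)
    ref = ngrams(reference, n)
    if not gen:
        return 0.0
    return len(gen & ref) / len(gen)

# Hypothetical comparison of model outputs against a source passage
source = "it was the best of times it was the worst of times"
verbatim = "it was the best of times it was the worst of times"
paraphrase = "those years were simultaneously wonderful and terrible"

print(ngram_overlap(verbatim, source))    # 1.0
print(ngram_overlap(paraphrase, source))  # 0.0
```

A score near 1.0 flags likely verbatim reproduction, which matters legally because copying weighs against fair use, while transformation weighs in its favor.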

Finally, the lecture addresses openness and transparency in AI development. Transparency is presented as essential for accountability and improvement, with tools like the Foundation Models Transparency Index used to evaluate companies’ practices. Openness is described as a spectrum, from closed models to fully open development, with benefits including innovation, decentralization of power, and increased transparency. However, openness also brings risks, particularly around misuse. The lecture concludes by urging technologists to consider the entire AI ecosystem—upstream and downstream impacts—and to embrace auditing, openness, and comprehensive evaluation as foundations for responsible AI development.