Stanford CS329H: Machine Learning from Human Preferences | Autumn 2024 | Human-centered Design

The lecture emphasizes human-centered design in AI, advocating for technology that prioritizes user needs, context, and accessibility over purely technical optimization, and for frameworks such as design thinking to create intuitive and effective human-AI interactions. It also highlights the need for comprehensive evaluation methods that consider user goals, ethical concerns, and real-world applicability, so that AI tools are trustworthy, fair, and broadly usable.

The lecture focuses on human-centered design in human-AI interaction, emphasizing that technology should be designed from the user’s perspective rather than purely from a technical standpoint. The speaker contrasts traditional engineering approaches, which start with a technical problem and search for solutions, with human-computer interaction (HCI) approaches, which start by understanding the user’s needs, context, and challenges. Using the example of “Norman doors,” whose poor design confuses users, the lecture argues that many difficulties in interacting with AI tools stem from design failures rather than user shortcomings. The goal is to create AI technologies that are intuitive and accessible, reducing the cognitive burden on users.

The discussion highlights the evolution of interfaces from punch cards and command lines to graphical user interfaces and touchscreens, illustrating efforts to bridge the gap between human capabilities and computer functions. In AI, this gap remains significant, as users often rely on complex prompting techniques to communicate with language models, which can be unintuitive and brittle. The lecture suggests that improving AI usability involves not only advancing the underlying technology but also designing better interaction methods that accommodate diverse user needs and contexts. This includes considering education, cognitive load, and social factors to make AI tools more broadly accessible.
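The brittleness of prompting can be made concrete: one simple robustness check runs semantically equivalent paraphrases of a request through a model and measures how often the answers agree. A minimal sketch, assuming a stubbed `toy_model` stands in for a real language model; the stub and its keyword rule are invented for illustration and are not from the lecture:

```python
# Sketch: measuring prompt brittleness via answer consistency across
# paraphrases. `toy_model` is a hypothetical stand-in for an LLM call;
# its keyword-matching behavior is invented for illustration.

def toy_model(prompt: str) -> str:
    # A deliberately brittle "model": its answer depends on surface wording.
    if "summarize" in prompt.lower() or "tl;dr" in prompt.lower():
        return "summary"
    return "unknown"

def consistency(paraphrases: list[str]) -> float:
    """Fraction of paraphrases that yield the most common answer."""
    answers = [toy_model(p) for p in paraphrases]
    top = max(set(answers), key=answers.count)
    return answers.count(top) / len(answers)

paraphrases = [
    "Summarize this article.",
    "TL;DR of the article, please.",
    "Give me the gist of this article.",  # no keyword, so the stub fails
]
print(consistency(paraphrases))  # below 1.0: wording changes the behavior
```

A robust interface would score 1.0 here; the gap between that ideal and the measured consistency is one way to quantify the usability cost that prompting imposes on users.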

A central theme is the contrast between technology-centric and user-centric design. Technology-centric design focuses on optimizing the tool itself, often abstracted from real user needs, while user-centric design starts with understanding the users’ problems and designing solutions tailored to them. The lecture uses chatbots as an example of a hybrid approach, where technical advances in language models are combined with interface improvements to make the technology more usable. However, challenges remain in making these tools truly intuitive and reducing reliance on specialized skills like prompt engineering.

The lecture introduces design thinking and the double diamond method as frameworks for addressing human-AI interaction challenges. Design thinking encourages repeatedly asking “why” to uncover root problems before proposing solutions, while the double diamond method involves divergent and convergent phases in both problem discovery and solution development. Practical tools such as user research, participatory design, prototyping, and storytelling help identify user needs and test solutions iteratively. These approaches aim to create AI systems that users want to engage with and that effectively solve real problems.

Finally, the lecture covers evaluation strategies for human-AI interactions, emphasizing both quantitative and qualitative methods. Evaluations can be intrinsic, focusing on model performance in isolation, or extrinsic, assessing usefulness in real-world tasks. Metrics should align with user goals, and evaluations must consider who is assessing the system (experts, lay users, or automated judges) and when evaluations occur, from immediate interaction to long-term deployment. Trust, fairness, transparency, and ethical considerations are also critical in designing and evaluating AI tools that serve diverse populations responsibly and effectively.
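The intrinsic/extrinsic distinction above can be sketched in code. Below, a toy intrinsic metric scores model outputs against references in isolation, while a toy extrinsic metric scores whether users actually completed their task with the model's help; all data and helper names are hypothetical, invented for illustration:

```python
# Sketch of intrinsic vs. extrinsic evaluation. The metrics, data, and
# function names are hypothetical examples, not from the lecture.

def intrinsic_accuracy(outputs: list[str], references: list[str]) -> float:
    """Intrinsic: model performance in isolation (exact-match accuracy)."""
    matches = sum(o == r for o, r in zip(outputs, references))
    return matches / len(references)

def extrinsic_success(sessions: list[dict]) -> float:
    """Extrinsic: fraction of user sessions ending in task completion."""
    completed = sum(s["task_completed"] for s in sessions)
    return completed / len(sessions)

# A system can look strong intrinsically yet still fail users extrinsically,
# e.g. when the interface is unusable for non-experts.
outputs = ["Paris", "Berlin", "Rome"]
references = ["Paris", "Berlin", "Rome"]
sessions = [
    {"user": "expert", "task_completed": True},
    {"user": "lay", "task_completed": False},
    {"user": "lay", "task_completed": False},
]
print(intrinsic_accuracy(outputs, references))  # perfect in isolation
print(extrinsic_success(sessions))              # poor in real use
```

The divergence between the two numbers is exactly the point the lecture makes: metrics must be chosen to align with user goals, not only with model quality in isolation.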