In the video, Sara Hooker critiques the reliance on compute thresholds, such as FLOP counts, for AI governance, arguing that they oversimplify the complexities of AI capabilities and risks. She emphasizes the need for more nuanced policies that consider factors like model architecture, data diversity, and cultural representation, particularly for low-resource languages, to ensure equitable and effective AI development.
In the video, the host welcomes Sara Hooker, VP of Research at Cohere, back to discuss her recent work and her critiques of AI governance strategies, particularly the use of compute thresholds. Sara highlights her concerns about the inadequacy of simple measures like total floating-point operations (FLOPs) for assessing AI capabilities and risks. She argues that compute thresholds, which have been adopted in major policies such as the EU AI Act and the U.S. executive order on AI, do not account for the complexities of model architectures, data, and the broader AI landscape. Instead, she advocates a more nuanced approach to AI governance that considers the diverse factors influencing AI development.
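To make the threshold mechanism concrete, here is a minimal back-of-the-envelope sketch, assuming the widely used heuristic that transformer training compute is roughly 6 × parameters × tokens. The model size and token count below are hypothetical; the thresholds are the ones cited in the EU AI Act (10^25 FLOPs) and the U.S. executive order (10^26 operations).

```python
def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute using the common 6*N*D heuristic."""
    return 6.0 * n_params * n_tokens

# Thresholds referenced in current policy:
EU_AI_ACT_THRESHOLD = 1e25  # FLOPs: presumption of "systemic risk"
US_EO_THRESHOLD = 1e26      # operations: triggers reporting requirements

# A hypothetical 70B-parameter model trained on 15 trillion tokens:
flops = training_flops(70e9, 15e12)
print(f"estimated training compute: {flops:.2e} FLOPs")
print("exceeds EU AI Act threshold:", flops > EU_AI_ACT_THRESHOLD)
print("exceeds US EO threshold:   ", flops > US_EO_THRESHOLD)
```

Note that this hypothetical model comes in just under the EU threshold (≈6.3 × 10^24 FLOPs), which illustrates one of Sara's points: highly capable models can sit below a fixed compute line, while the metric itself says nothing about architecture, data, or deployment context.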
Throughout the conversation, Sara discusses the challenges of developing multilingual AI models, emphasizing the significant gap in representation for low-resource languages. She explains how current language models tend to overfit to high-frequency patterns, which disadvantages languages with less training data available. This phenomenon not only degrades model performance across languages but also raises concerns about equity and representation as AI technologies become more integrated into society.
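The frequency-skew point can be illustrated with a toy model (not from the talk): if corpus token frequencies follow a Zipf-like distribution, a small set of high-frequency tokens captures most of the training signal, while tokens that appear mostly in low-resource languages sit in the long tail and are seen rarely. The vocabulary size and rank cutoffs below are illustrative assumptions.

```python
def zipf_mass(rank_from: int, rank_to: int, vocab: int, s: float = 1.0) -> float:
    """Fraction of corpus tokens whose frequency rank falls in [rank_from, rank_to],
    under a Zipf distribution with exponent s over a vocabulary of `vocab` tokens."""
    harmonic = sum(1.0 / r**s for r in range(1, vocab + 1))
    return sum(1.0 / r**s for r in range(rank_from, rank_to + 1)) / harmonic

VOCAB = 50_000
top = zipf_mass(1, 1_000, VOCAB)          # share held by the 1,000 most common tokens
tail = zipf_mass(40_001, 50_000, VOCAB)   # share held by the 10,000 rarest tokens

print(f"top 1k tokens:     {top:.1%} of the corpus")
print(f"rarest 10k tokens: {tail:.1%} of the corpus")
```

Under these assumptions the top 2% of the vocabulary accounts for well over half the corpus, while the bottom 20% accounts for only a few percent, so a loss averaged over tokens is dominated by high-frequency patterns and provides little gradient signal for the tail where low-resource languages live.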
The discussion also delves into the historical context of risk management and the difficulties policymakers face in anticipating and mitigating risks from AI technologies. Sara draws on historical examples that demonstrate how hard it is to identify risks and form proportionate responses, especially in rapidly evolving fields like technology. The conversation highlights the importance of understanding the relationship between compute, data, and the specific applications of AI in order to establish effective governance strategies.
Sara critiques the reliance on compute thresholds alone, noting that this approach creates a false sense of security. The conversation touches on how models can be trained or structured to evade these thresholds, leading policymakers to overlook real risks. She emphasizes the need for policies that incorporate multiple objectives and adapt dynamically to the changing AI landscape, rather than relying on rigid metrics that may not reflect the complexities of real-world applications.
Finally, the video concludes with a discussion of the interplay between language, culture, and AI capabilities. Sara asserts that models trained predominantly on high-resource languages like English often fail to adequately capture the nuances of other languages and dialects. The conversation underscores the need for AI systems to be more inclusive and representative, not just in terms of language but also in understanding the rich tapestry of cultural contexts in which these technologies operate. As AI continues to play a crucial role in modern society, Sara advocates a more thoughtful and comprehensive approach to its development and governance.