How Mozilla’s President Defines Open-Source AI

In a conversation, Mozilla President Mark Surman emphasizes the importance of trustworthy open-source AI that empowers users and holds creators accountable, advocating for privacy, transparency, and ethical considerations in AI development. He critiques the restrictive licensing practices of some companies and calls for evidence-based regulation to ensure a balanced ecosystem that prioritizes competition and the public interest in AI technology.

The conversation with Surman centers on the concept of trustworthy open-source AI. Surman reflects on his 2020 paper, which emphasized the need for accountability and agency in AI systems, drawing parallels to Mozilla’s mission with Firefox: empowering users and protecting their privacy. He argues that trustworthy AI should give users control over how AI operates and hold creators accountable for any negative outcomes their systems produce.

Surman elaborates on the challenges of measuring trustworthy AI, suggesting that indicators such as privacy, transparency, and the ability to audit AI systems are essential. He highlights Mozilla’s efforts to promote privacy in AI, citing its investment in companies such as Flower AI, which builds federated learning tools so that models can be trained without centralizing user data. The conversation also touches on the balance between rapid innovation in AI and the need for ethical considerations in its development.
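To make the privacy point concrete, the sketch below illustrates the general idea behind federated learning, the approach Flower is built around: each client trains on its own data locally, and only model parameters are sent back for aggregation. This is a minimal, hypothetical example (plain NumPy linear regression with invented client datasets), not Flower’s actual API.

```python
import numpy as np

# Hypothetical per-client datasets. In a real deployment each client's raw
# data never leaves its device; here they are just local arrays for illustration.
rng = np.random.default_rng(0)
client_data = [
    (rng.normal(size=(50, 3)), rng.normal(size=50)),  # client 0: (features, targets)
    (rng.normal(size=(80, 3)), rng.normal(size=80)),  # client 1
    (rng.normal(size=(30, 3)), rng.normal(size=30)),  # client 2
]

def local_update(weights, X, y, lr=0.01, epochs=5):
    """One client's training step: gradient descent for linear regression on
    local data. Only the updated weights are returned; X and y stay put."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_average(weight_list, sizes):
    """Server-side aggregation: weight each client's model by its dataset size."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(weight_list, sizes))

# The global model starts at zero. Each round, clients train locally and the
# server only ever sees model parameters, never the underlying data.
global_w = np.zeros(3)
for round_num in range(10):
    local_weights = [local_update(global_w, X, y) for X, y in client_data]
    global_w = federated_average(local_weights, [len(y) for _, y in client_data])

print("aggregated model weights:", global_w)
```

The design choice this toy loop captures is the one Surman points to: the trust boundary sits at the device, so privacy comes from what is shared (parameters) rather than from promises about how centralized data will be handled.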

The discussion then turns to the definition of open-source software, which Surman describes as software that anyone is free to use, study, modify, and share. He emphasizes that this definition extends to AI, where not only the code but also the models and datasets must be open and accessible. Surman mentions the Open Source Initiative’s upcoming definition of open-source AI, which aligns with these principles.

Surman critiques companies like Meta for their licensing practices, arguing that while they may present their products as open source, they impose restrictions that contradict the essence of open-source principles. He believes a true open-source license should allow unrestricted use, and he calls on companies to adopt licenses that genuinely reflect open-source values. He also pushes back on the misconception that open-source AI is more dangerous than closed-source alternatives, noting that research shows no significant difference in risk between the two.

Finally, Surman discusses the importance of regulation in AI, advocating for frameworks that prioritize competition, privacy, and safety. He believes regulations should be evidence-based and tailored to the specific risks of particular AI applications. Surman expresses concern over the potential monopolization of AI technology by a few companies and emphasizes the need for public options alongside commercial ones to ensure a balanced ecosystem. He concludes by noting Mozilla’s commitment to fostering trustworthy, open-source AI as part of its broader mission to serve the public interest.