California's AI safety bill divides tech: Here's what to know

California’s AI safety bill, SB-1047, aims to address risks associated with artificial intelligence but has divided the tech community over concerns that it may hinder innovation and favor closed-source projects over open-source ones. Former Facebook general counsel Chris Kelly critiques the bill as both too broad and too narrow, advocating a balanced federal approach to AI regulation that encourages safety without stifling development.

California’s legislature recently passed a controversial AI safety bill, SB-1047, which now awaits the signature or veto of Governor Gavin Newsom. The legislation aims to address potential risks associated with artificial intelligence, particularly its misuse by malicious actors. However, the bill has divided the tech and political communities, with many arguing that it could hinder innovation in the rapidly evolving AI landscape.

Chris Kelly, founder of Kelly Investments and former general counsel at Facebook, discusses the bill’s intentions and implications. He acknowledges that the bill aims to push companies to prioritize safety in their AI development processes. However, he criticizes it as both too broad and too narrow: its obligations are triggered by compute and spending thresholds (models trained on more than 10^26 floating-point operations at a cost above $100 million), which imposes onerous requirements on some developers while overlooking significant risks from models that fall below those lines.

Kelly highlights that the bill could inadvertently favor closed-source AI projects over open-source ones, because it can assign liability to developers of open-source models for downstream uses they cannot control, whereas companies retain that control over their proprietary systems. He points out that this aspect of the legislation has raised concerns among experienced technology policymakers, including former Speaker Nancy Pelosi and Representative Zoe Lofgren.

The discussion also touches on broader concerns surrounding AI, including fears of runaway technology and the need for legislation to mitigate those risks. While Kelly agrees that some form of regulation is necessary, he emphasizes the importance of a balanced approach that encourages transparency and responsible development without stifling innovation. He argues that the bill’s compute- and cost-based triggers are problematic and could lead to unintended consequences.

Finally, Kelly expresses a preference for federal legislation over state-level regulation, since a patchwork of state regimes would complicate compliance for companies. He notes that the Biden-Harris administration has begun outlining potential federal rules, which exhibit the same problem of being over- and underinclusive. Ultimately, he advocates a cohesive federal standard to address AI safety concerns while fostering innovation in the industry.