Gavin Newsom Vetoes California's AI Safety Bill: We need more scientific rigor in AI safety!

The speaker discusses California Governor Gavin Newsom’s veto of Senate Bill 1047, which sought to regulate AI, and emphasizes the need for empirical evidence and scientific rigor in AI safety research. They criticize the AI safety community for a shortage of qualified researchers and advocate a more grounded understanding of artificial general intelligence (AGI) to counter unrealistic perceptions of its capabilities.

In a recent discussion, the speaker addressed California Governor Gavin Newsom’s veto of Senate Bill 1047, which aimed to regulate artificial intelligence (AI). Newsom gave several reasons for his decision, including the concern that the bill could create a false sense of security about controlling the technology. He argued that the legislation lacked nuance, failing to differentiate between AI systems deployed in high-risk environments and those used for basic functions, and he criticized the bill as overly broad, applying stringent standards to even the simplest AI applications, which he said could hinder innovation and development in the field.

The speaker highlighted Newsom’s assertion that AI regulation should be grounded in empirical evidence and scientific rigor. This point resonated with the speaker’s ongoing research in AI safety, conducted in collaboration with the Human AI Empowerment Lab at Clemson University. The speaker expressed frustration with the current state of the AI safety community, suggesting that many self-proclaimed AI safety researchers lack the qualifications and experience to contribute meaningfully to the field, and pointed to a recent encounter with a group of AI safety researchers they deemed unqualified as reinforcing that view.

The discussion then shifted to the broader implications of AI safety research, emphasizing the need for a more scientifically rigorous approach. The speaker criticized the reliance on unfounded postulates from figures like Nick Bostrom, arguing that many in the AI safety community prioritize opinion over empirical evidence. This lack of scientific grounding, they contend, has led to a proliferation of pseudo-scientific narratives that undermine legitimate research efforts. They called for a reevaluation of what constitutes credible AI safety research and for adherence to established scientific methodologies.

In addressing the concept of cognitive horizons, the speaker challenged the notion that artificial general intelligence (AGI) could think thoughts beyond human comprehension. They introduced the idea of cognitive plateauing, suggesting that increases in intelligence do not necessarily translate to greater real-world effectiveness. The speaker argued that while AGI may operate at faster speeds, it would still be constrained by physical limitations and the complexities of the real world. This perspective emphasizes that intelligence, whether human or artificial, must ultimately yield measurable impacts in the physical realm.
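To make the plateauing argument concrete, here is a minimal, purely illustrative sketch (not from the talk): it borrows the structure of Amdahl's law, assuming that only the cognitive portion of a task speeds up with greater intelligence while the physical portion (experiments, manufacturing, logistics) remains fixed. All names and parameter values are hypothetical, chosen only to show how overall effectiveness can saturate.

```python
# Illustrative toy model of "cognitive plateauing" (hypothetical, not the speaker's model).
# Assumption: a task splits into a cognitive share that scales with intelligence and a
# physical share that does not.

def effectiveness(intelligence: float,
                  physical_rate_limit: float = 1.0,
                  cognition_share: float = 0.5) -> float:
    """Amdahl's-law-style combination: only the cognitive share of the task
    speeds up with intelligence; the physical share is a fixed cost."""
    cognitive_time = cognition_share / intelligence                 # shrinks as intelligence grows
    physical_time = (1.0 - cognition_share) / physical_rate_limit   # unaffected by intelligence
    return 1.0 / (cognitive_time + physical_time)

if __name__ == "__main__":
    for iq_multiple in (1, 10, 100, 1000):
        print(f"{iq_multiple:>5}x intelligence -> "
              f"{effectiveness(iq_multiple):.2f}x effectiveness")
```

Under these toy assumptions, with a 50/50 split between thinking and acting in the world, even a 1000x gain in raw intelligence yields less than a 2x gain in overall effectiveness, which is the intuition behind the plateauing claim.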

The speaker concluded by cautioning against attributing god-like abilities to AGI and advocating a more grounded understanding of its capabilities. They emphasized the importance of empirical evidence in shaping our perceptions of AI and its potential. By challenging the prevailing narratives surrounding AGI, the speaker aimed to foster a more realistic and scientifically informed discourse on the future of artificial intelligence and its implications for society.