The recent White House executive order aims to establish a national AI regulatory standard, avoiding a fragmented patchwork of state-level rules and bolstering U.S. competitiveness. But effective AI governance will also depend heavily on company transparency, internal controls, and court rulings. Building trust through clear governance, improving AI literacy among policymakers and the public, and addressing public fears are all crucial to widespread AI adoption and to realizing the technology's full societal benefits.
The executive order has sparked significant discussion about its potential impact and legal viability. Many companies are relieved at the prospect of avoiding a complex patchwork of state regulations, but experts caution that federal regulation is only one part of the broader governance landscape. Much of the real governance will happen inside companies themselves, where transparency and internal controls are crucial. Courts, both state and federal, will also play a significant role, shaping through litigation how AI laws are interpreted and enforced.
State-level AI regulation is evolving rapidly: legislative activity has surged from 19 laws enacted over five years to nearly a thousand bills introduced in the past year alone. Against that backdrop, the federal executive order could disrupt both existing and emerging state laws, creating a complex legal environment. Many of these state laws and regulations are also likely to face legal challenges, which will further shape the regulatory landscape. The interplay among federal directives, state legislation, and court decisions will be critical in determining the future of AI governance in the U.S.
The White House argues that a national standard is essential for the U.S. to remain competitive with China in the AI industrial revolution, emphasizing the high stakes of global leadership. While this argument holds merit, experts highlight that the key missing piece is AI adoption, which is currently hindered by a lack of trust. Companies are investing heavily in AI but are not seeing expected returns because users remain skeptical. Establishing clear governance, measurement, and protocols is necessary to build trust and enable widespread adoption of AI technologies.
There is also a significant knowledge gap among policymakers, many of whom are still learning about AI and its implications. Progress has been made through efforts like the AI Caucus and educational programs for legislators and their staff, but more work is needed to improve AI literacy among lawmakers and ensure informed, effective regulation. Communicating meaningful AI use cases to the public is equally important, as many people remain fearful or uncertain about AI's impact, particularly those over 30. Addressing these fears through dialogue and education is crucial for broader acceptance.
Finally, governance and literacy within companies are vital to the future of AI adoption. While a majority of companies use AI, fewer than a third have formal governance systems in place. Without clear policies on AI use, safety testing, and transparency, public trust will remain low and adoption will stall. Data shows that personal experience with AI increases acceptance of and excitement about the technology. Improving governance, educating both policymakers and the public, and fostering transparent AI practices are therefore essential steps to unlock AI's full potential and ensure it benefits society.