We don't want a patchwork of AI regulation on the state level, says Appian CEO Matt Calkins

Appian CEO Matt Calkins called for a consistent national or international regulatory framework for AI after California Governor Gavin Newsom vetoed a significant AI bill, a measure Calkins viewed as the wrong approach. He advocated transparency in AI development, arguing that regulation should require companies to disclose the data used to train their models, thereby fostering public trust and better-informed debate about data privacy.

In a recent discussion, Appian CEO Matt Calkins addressed the need for regulation of artificial intelligence (AI), particularly in light of California Governor Gavin Newsom's veto of a significant AI bill. Calkins said that while regulation is necessary, this particular bill was not the right approach. He stressed the importance of a consistent regulatory framework at the national or international level, warning that a state-by-state patchwork would create an uneven playing field for companies innovating in AI.

Calkins highlighted the challenges of regulating AI, noting that the technology is still not fully understood and is often described as a "black box." While concerns about catastrophic outcomes are valid, he argued, regulation should also address other pressing issues, such as privacy. A transparency bill, he suggested, would be a good starting point: rules requiring companies to disclose how they build their AI models and what data they use.
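Calkins does not spell out what such a disclosure would contain. As a purely illustrative sketch, with every field name and reporting requirement assumed here rather than taken from the interview or any actual regulation, a training-data disclosure of the kind he describes might be structured along these lines:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DataSourceDisclosure:
    """One entry in a hypothetical training-data disclosure filing."""
    name: str                 # e.g. "Public web crawl, 2023 snapshot"
    category: str             # e.g. "public web", "licensed", "user-generated"
    contains_personal_data: bool
    basis_for_use: str        # e.g. license terms or consent obtained

@dataclass
class ModelTransparencyReport:
    """Hypothetical report a transparency rule might require per model."""
    model_name: str
    developer: str
    training_data: List[DataSourceDisclosure] = field(default_factory=list)

    def summary(self) -> str:
        # Count how many disclosed sources include personal data.
        personal = sum(s.contains_personal_data for s in self.training_data)
        return (f"{self.model_name} ({self.developer}): "
                f"{len(self.training_data)} data sources disclosed, "
                f"{personal} containing personal data")

# Example filing with invented names, for illustration only.
report = ModelTransparencyReport(
    model_name="ExampleModel-1",
    developer="ExampleCorp",
    training_data=[
        DataSourceDisclosure("Public web crawl", "public web", True,
                             "fair use (claimed)"),
        DataSourceDisclosure("Licensed news archive", "licensed", False,
                             "commercial license"),
    ],
)
print(report.summary())
```

The per-source flag for personal data is the point of a structure like this: it surfaces, in a form the public can inspect, exactly the information Calkins argues people currently lack about how their data is used.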

The conversation also touched on the implications of transparency in AI development. Calkins argued that requiring companies to disclose the data used to train their models would raise public awareness and could prompt individuals to demand more control over their personal information. Understanding how AI uses private data, he stressed, would in turn lead to better-informed regulatory discussions.

Calkins further explored the philosophical side of data privacy, noting that society has grown steadily more willing to give up personal information over the years. People may draw the line, he suggested, once they realize the extent to which AI is being trained on their data, particularly if it threatens their livelihoods. That realization could fuel stronger demand for privacy protections and for consent over how personal data is used.

Ultimately, Calkins concluded that trust is the foundation for AI's acceptance in society. For people to feel comfortable letting AI handle sensitive tasks, there must be transparency about how their data is used and assurance that their information will not be exploited. He reiterated that a regulatory framework focused on transparency and safety is crucial to fostering that trust.