How to Govern AI — Even If It’s Hard to Predict | Helen Toner | TED

The speaker discusses the challenges of governing artificial intelligence (AI) given the complexity and unpredictability of AI systems. She emphasizes proactive governance strategies: investing in measuring AI capabilities, strengthening disclosure requirements for AI companies, and promoting transparency and accountability to guide AI development responsibly.

The speaker highlights how difficult it is to understand and predict the capabilities of AI. Even experts in the field have limited insight into the internal workings of AI systems, and this lack of understanding poses a significant obstacle to governing AI effectively. Yet AI is already pervasive in society, making proactive governance necessary despite the uncertainty.

The ambiguity surrounding the definition of intelligence adds to the difficulty of predicting the trajectory of AI development. The once-clear distinction between narrow and general AI has been blurred by advances like ChatGPT. The complexity of AI systems, particularly deep neural networks, makes their inner workings hard to comprehend, though progress in AI interpretability research offers hope for better understanding and governance of these technologies in the future.

The speaker emphasizes the need for a collective effort in governing AI. Individuals should not be intimidated by the complexity of AI but should instead engage with the technology and its implications. Technologists must involve a diverse range of stakeholders in shaping AI policy to ensure responsible, inclusive development. The focus should be on adaptability rather than certainty, so that policymakers can navigate the unpredictable pace of AI advances effectively.

To govern AI effectively, the speaker proposes three concrete strategies: investing in measurement of AI capabilities, strengthening disclosure requirements for AI companies, and implementing incident-reporting mechanisms. Together these would give policymakers a clearer view of the risks and opportunities AI presents. By promoting transparency and accountability in AI development, policymakers can steer the technology in a direction aligned with societal values and priorities.

In conclusion, the speaker calls for proactive governance of AI to harness its full potential for societal benefit. While uncertainty and disagreement persist in the field, policies focused on measurement, disclosure, and incident reporting can help mitigate risks and guide AI development responsibly. Individuals are encouraged to take an active part in shaping the future of AI, recognizing their influence as users, workers, and citizens on the direction of AI innovation.