The video discusses a new AI policy proposal by AI Policy US, which categorizes AI systems into tiers of concern based on the computational power used to train them. The policy lays out provisions for regulating systems according to their potential risks and capabilities, and critics worry about the impact on innovation and about whether such risks can be assessed accurately at all.
In the video, the presenter walks through the proposal, which has attracted significant attention. The policy defines tiers of AI concern, ranging from low concern to extremely high concern, keyed to the amount of compute used to train a system. The presenter critiques this approach for leaning on computational thresholds rather than on the actual capabilities and risks the systems pose. The policy defines major security risks as those that could lead to existential threats or to AI systems permanently escaping human control.
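To make the compute-based tiering concrete, here is a minimal sketch of how such a classifier might look. The FLOP cutoffs and tier labels below are placeholders invented for illustration; they are not figures taken from the proposal or the video.

```python
# Hypothetical illustration of compute-based tiering.
# The thresholds are placeholder values, NOT the ones in the actual proposal.
from enum import Enum

class ConcernTier(Enum):
    LOW = "low concern"
    MEDIUM = "medium concern"
    HIGH = "high concern"
    EXTREMELY_HIGH = "extremely high concern"

# Assumed training-compute cutoffs in FLOPs (placeholders for illustration).
TIER_CUTOFFS = [
    (1e24, ConcernTier.LOW),
    (1e25, ConcernTier.MEDIUM),
    (1e26, ConcernTier.HIGH),
]

def classify_by_compute(training_flops: float) -> ConcernTier:
    """Return the concern tier implied by total training compute alone."""
    for cutoff, tier in TIER_CUTOFFS:
        if training_flops < cutoff:
            return tier
    return ConcernTier.EXTREMELY_HIGH

print(classify_by_compute(3e25))  # ConcernTier.HIGH under these placeholder cutoffs
```

The presenter's objection is visible in the sketch itself: the classification depends only on a single compute number, not on what the trained system can actually do.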
Furthermore, the policy provides for early training stops for medium-concern AI systems and requires developers to report performance benchmarks regularly. If a medium-concern system exceeds certain benchmarks, developers must transition it to high-concern status and apply for a permit, a requirement the presenter sees as potentially hindering innovation and progress in the field. The presenter notes that many companies already implement rigorous safety measures internally and questions whether such strict external regulation is necessary.
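The reporting-and-reclassification rule described above could be pictured roughly as follows. The benchmark names and limits are hypothetical, chosen only to illustrate the trigger logic, and are not drawn from the policy text.

```python
# Sketch of the benchmark-triggered reclassification rule; names and limits
# are placeholders, not values from the proposal.
BENCHMARK_LIMITS = {
    "capability_eval_score": 0.85,  # hypothetical ceiling for medium concern
    "autonomy_eval_score": 0.70,
}

def review_report(tier: str, reported_scores: dict[str, float]) -> str:
    """If a medium-concern system exceeds any benchmark limit, it must move to
    high-concern status and the developer must apply for a permit."""
    if tier != "medium concern":
        return tier
    exceeded = [name for name, limit in BENCHMARK_LIMITS.items()
                if reported_scores.get(name, 0.0) > limit]
    if exceeded:
        print(f"Benchmarks exceeded: {exceeded}; permit application required.")
        return "high concern"
    return tier

print(review_report("medium concern", {"capability_eval_score": 0.9}))
```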
The policy also introduces a fast-track exemption for certain narrow AI tools, such as self-driving cars and fraud detection systems, allowing them to keep operating without being subject to the stringent requirements imposed on higher-concern systems. In addition, it sets criteria for identifying extremely high-concern AI systems, focusing on their potential to assist in developing weapons of mass destruction or to destabilize global power dynamics. The presenter is skeptical that such risks, especially emerging AI capabilities, can be accurately assessed and regulated.
Moreover, the policy includes emergency powers that can be invoked if a frontier AI poses major security risks: authorities could suspend permits, issue restraining orders, and even physically seize AI laboratories. The presenter highlights what these powers could mean in practice, particularly in the event of a catastrophic failure or a rogue-AI incident. The policy also protects whistleblowers who report violations of the act, shielding them even if their concerns turn out to be mistaken. Overall, the presenter raises concerns about how to balance regulating AI advances against fostering innovation in a rapidly evolving field.