The video explains how major AI companies like Anthropic and OpenAI have become leading lobbyists, spending millions to influence U.S. government policy and secure lucrative contracts, while shaping regulations to align with their interests. Despite public support for some safety measures, these companies largely resist meaningful oversight, leaving policymakers dependent on industry guidance as AI adoption accelerates.
The video discusses how the largest AI companies, such as Anthropic and OpenAI, have become major players in lobbying efforts, aligning their interests closely with those of the U.S. government. Anthropic CEO Dario Amodei published an essay highlighting the growing entanglement between AI labs and government, criticizing both the tech industry’s reluctance to challenge government policies and the government’s support for extreme anti-regulatory stances on AI. Despite calls to keep policy above politics, the reality is complicated by massive government contracts, such as a disputed $200 million contract between Anthropic and the Department of Defense.
Both Anthropic and OpenAI have significantly increased their lobbying and political donations, spending record amounts in 2025. Each spent nearly $3 million on federal lobbying, with additional spending in California. Anthropic also donated $20 million to Public First Action, an AI regulation advocacy group, describing the move as nonpartisan even though the group backs policies that often put it at odds with anti-regulation figures. The companies’ lobbying has also become more transparent, with Anthropic disclosing donations to specific political candidates for the first time.
A notable policy divergence between the two companies emerged over California’s AI safety law, SB 53, which took effect in January 2026. The law requires large AI model makers to implement safety guardrails and self-report risk mitigation strategies. While both companies lobbied on the bill, OpenAI reportedly opposed it, whereas Anthropic endorsed it. However, some industry observers argue that such endorsements are more about publicity than substantive support for regulation, and that the industry as a whole remains resistant to meaningful oversight.
At the federal level, both companies share similar lobbying priorities, focusing on national security and AI infrastructure. Federal agencies are increasingly adopting AI systems, and both OpenAI and Anthropic have struck deals to provide their models to government agencies. OpenAI even removed language from its policies prohibiting military applications, signaling a shift toward closer government collaboration. The companies also support export controls to restrict advanced AI technology sales to adversaries like China and Russia.
The rapid expansion of AI requires massive data centers, and both companies are lobbying for policies to support these investments, including tax credits, subsidies, and streamlined approvals. Data center projects commonly face delays from permitting complexity, power constraints, and local opposition. Other lobbying issues include AI accessibility, election interference, copyright, and AI safety regulations. Despite these efforts, no significant legislation reshaping AI development in the U.S. has yet passed, leaving policymakers heavily reliant on information from the tech industry itself.