Anthropic accused three major Chinese AI labs—DeepSeek, Moonshot, and MiniMax—of illicitly extracting data from its Claude model through large-scale “distillation attacks” to improve their own AI systems, framing the campaigns as a national security risk. The accusations drew swift backlash and charges of hypocrisy: critics noted that Anthropic and other Western AI companies have themselves trained on unlicensed data, underscoring the ethical ambiguities and geopolitical tensions of global AI development.
Anthropic, a leading American AI company, publicly accused three prominent Chinese AI labs—DeepSeek, Moonshot, and MiniMax—of conducting large-scale “distillation attacks” on its models. According to Anthropic, these labs created over 24,000 fraudulent accounts and generated more than 16 million interactions with Anthropic’s Claude model, extracting valuable data to train and improve their own AI systems. Distillation, a common and legitimate technique for creating smaller, faster models from larger ones, becomes problematic when used illicitly to siphon off proprietary capabilities, especially when it bypasses safeguards and export controls.
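To make the technique concrete, here is a minimal sketch of the core distillation objective in NumPy: a student model is trained to match a teacher's temperature-softened output distribution via KL divergence, following the standard knowledge-distillation recipe. The logit values and the temperature are illustrative assumptions, not anything from Anthropic's report; in an API-based "distillation attack," the teacher's outputs would be harvested from responses to mass queries rather than read directly from model internals.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; higher T produces softer distributions."""
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, T=2.0):
    """KL divergence between softened teacher and student distributions,
    scaled by T^2 as is conventional in knowledge distillation."""
    p = softmax(teacher_logits, T)  # soft targets from the large "teacher"
    q = softmax(student_logits, T)  # current predictions of the small "student"
    return float(np.sum(p * np.log(p / q))) * T * T

# Hypothetical logits for one input; real pipelines average this loss
# over many examples and backpropagate it into the student's weights.
teacher = [4.0, 1.0, 0.2]
student = [3.5, 1.2, 0.1]
loss = distillation_loss(teacher, student)  # small positive value
```

The loss is zero only when the student exactly reproduces the teacher's distribution, which is why harvesting millions of teacher responses, as alleged here, can transfer much of a larger model's behavior into a smaller one.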
Anthropic detailed how each lab operated: DeepSeek focused on extracting reasoning capabilities and chain-of-thought processes, Moonshot targeted agentic reasoning and tool use, and MiniMax orchestrated the largest attack, with 13 million exchanges, even adapting in real time as Anthropic updated its models. The company says it attributed these campaigns to specific labs, and in some cases to individual executives, using IP addresses, metadata, and corroboration from industry partners. Anthropic framed the activity as a significant national security risk, arguing that illicitly distilled models could be used in military or surveillance applications and could undermine U.S. export controls designed to preserve a technological edge.
The internet’s reaction was swift and critical, with many accusing Anthropic of hypocrisy. Critics, including Elon Musk, pointed out that Anthropic itself has faced lawsuits for using copyrighted material—such as books and music—without permission to train its own models. Community notes and commentators highlighted that nearly all major AI labs, including Anthropic, OpenAI, and others, have relied on vast amounts of unlicensed or “stolen” data, blurring the moral distinction between Anthropic’s accusations and the actions of the Chinese labs.
Some experts questioned the scale and significance of the alleged attacks, noting that the number of interactions attributed to DeepSeek, for example, was relatively small and could be consistent with standard benchmarking rather than systematic theft. Others argued that while Chinese companies are known for aggressive business tactics and have a history of IP violations, American firms are not immune to similar accusations. The debate underscored the complex, competitive, and often ethically ambiguous landscape of global AI development.
The controversy also revealed deeper geopolitical implications. Reports surfaced that DeepSeek had illegally obtained banned Nvidia Blackwell chips, further circumventing U.S. export controls. This suggests that Chinese AI labs may be more dependent on American technology and innovation than previously thought, potentially narrowing the perceived technological gap. Ultimately, the incident raises questions about the fairness and consistency of AI development practices worldwide, and whether accusations of theft hold weight when both sides are engaging in similar behaviors. The video concludes by noting the widespread perception of hypocrisy and the ongoing challenges in regulating and securing AI innovation.