The video’s creator argues that Anthropic’s claims about Chinese AI labs conducting “distillation attacks” are exaggerated, poorly defined, and likely intended to stoke fear and restrict competition rather than address genuine security concerns. They criticize Anthropic for lacking transparency, using inconsistent terminology, and acting hypocritically given their own data practices, calling for concrete evidence to support the accusations.
The video’s creator discusses Anthropic’s recent public claims that three major Chinese AI labs—DeepSeek, Moonshot, and MiniMax—have been conducting “distillation attacks” on Anthropic’s models. Anthropic alleges that these labs used over 24,000 fraudulent accounts to generate more than 16 million exchanges, extracting model capabilities to train their own systems, potentially for military or intelligence use. The creator notes that Anthropic’s terminology, particularly “distillation attack,” is new and appears to have been coined specifically for this announcement. They explain that model distillation is a common, well-established practice in AI, in which the outputs of a large, capable model are used as training signal for a smaller or cheaper model, and that even Anthropic acknowledges this can be legitimate.
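For readers unfamiliar with the technique the video refers to, classic knowledge distillation trains a student model to match a teacher’s softened output distribution by minimizing a KL-divergence loss. The sketch below is illustrative only—the logit values and function names are hypothetical and not taken from the video or from Anthropic’s report:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Softmax with temperature; T > 1 softens the distribution,
    exposing the teacher's relative preferences among non-top answers."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the teacher's softened distribution and the
    student's; minimizing this drives the student to mimic the teacher."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return float(np.sum(p * (np.log(p) - np.log(q))))

# Hypothetical logits for a 3-class toy example.
teacher = np.array([4.0, 1.0, 0.5])
student = np.array([2.0, 1.5, 1.0])
loss = distillation_loss(teacher, student)  # positive when distributions differ
```

Note that this logit-matching form requires white-box access to the teacher. When a model is only reachable through an API—the scenario Anthropic describes—distillation instead means fine-tuning the student directly on the teacher’s sampled text outputs, which is operationally indistinguishable from heavy legitimate usage.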
The creator is highly skeptical of Anthropic’s claims, especially the numbers cited as evidence of malicious activity. They compare the roughly 150,000 exchanges attributed to DeepSeek with their own product, T3 Chat, which handles a comparable volume of exchanges every day. By that yardstick, they argue, Anthropic’s figures are trivial and not indicative of large-scale illicit activity: exchange volumes of that size could easily result from legitimate benchmarking, product development, or user-facing services rather than a coordinated attack.
Further, the video criticizes Anthropic for what appears to be an attempt to weaponize public and governmental opinion against Chinese labs, especially DeepSeek, which has contributed valuable research to the AI community. The creator points out that Anthropic is the only major lab not to have released open-weight models and accuses them of trying to influence policy to restrict open-source AI development. They also highlight the irony that Anthropic’s own models were trained on internet data, much of which was scraped without explicit permission, making their current complaints seem hypocritical.
The creator does acknowledge that some proxy services in China resell access to Anthropic’s models, mixing legitimate traffic with traffic that may serve distillation, but argues that this does not justify naming the major labs as malicious actors without stronger evidence. They note that Anthropic has previously leveled similar accusations at other companies, such as OpenAI and xAI, which were later shown to be unfounded. The creator calls for Anthropic to clearly define what constitutes a “distillation attack” and criticizes the company’s vague and inconsistent policies.
In conclusion, the video expresses strong distrust of Anthropic’s motives and the validity of their claims, suggesting that the company is more interested in protecting its market position than addressing real security risks. The creator invites Anthropic to provide concrete evidence to support their accusations and promises to retract their criticism if proven wrong. Until then, they maintain that Anthropic’s statements are misleading and serve primarily to stoke fear and hinder competition, rather than to inform or protect the AI community.