Anthropic Accuses DeepSeek, Moonshot, and MiniMax of Large-Scale Distillation Attacks on Claude
Anthropic identifies 24,000 fraudulent accounts generating over 16 million exchanges to extract Claude's capabilities, with MiniMax driving the most traffic.
16 Million Exchanges via 24,000 Fake Accounts
On February 24, 2026, Anthropic publicly accused three Chinese AI companies — DeepSeek, Moonshot AI, and MiniMax — of running coordinated campaigns to extract Claude's capabilities illicitly through model distillation.
Scale of the Attack
The numbers are staggering:
- Total exchanges: Over 16 million
- Fraudulent accounts: Approximately 24,000
- Largest offender: MiniMax, with over 13 million exchanges
- Focus areas: Complex reasoning, coding assistance, and tool use — areas Anthropic considers key differentiators for Claude
What is Distillation?
Distillation is a technique in which a less capable model is trained on outputs generated by a stronger AI system, transferring the stronger model's behavior without access to its weights. In this case, the Chinese firms allegedly used commercial proxy services and fraudulent accounts to query Claude at scale while evading detection, then trained their own models on the collected responses.
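To make the idea concrete, here is a minimal sketch of the classic form of distillation, where a student model is trained to match a teacher's softened output distribution via a KL-divergence loss. This illustrates the general technique only; it is not Anthropic's description of the alleged attacks, which involved training on Claude's text outputs rather than its internal probabilities (API users never see a frontier model's logits).

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; higher T gives softer distributions."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between teacher and student soft targets.

    Minimizing this loss pushes the student's output distribution
    toward the teacher's -- the core of logit-based distillation.
    """
    p = softmax(teacher_logits, temperature)  # teacher soft targets
    q = softmax(student_logits, temperature)  # student predictions
    return float(np.sum(p * (np.log(p) - np.log(q))))

# A student that matches the teacher incurs zero loss; a mismatched one does not.
teacher = np.array([2.0, 1.0, 0.1])
matched = distillation_loss(teacher, teacher)
mismatched = distillation_loss(teacher, np.array([0.1, 1.0, 2.0]))
```

Distilling from text outputs alone, as alleged here, works analogously: the attacker collects prompt/response pairs at scale and fine-tunes the student on them with an ordinary language-modeling loss, using the stronger model's text as the training target.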
National Security Implications
Anthropic framed the attacks as national security threats, expressing concern about "authoritarian governments deploying frontier AI for offensive cyber operations, disinformation campaigns, and mass surveillance."
Industry-Wide Problem
OpenAI has reported similar distillation attacks from Chinese firms. The revelation comes as the U.S. debates AI chip export controls and the broader implications of Chinese AI development.
Anthropic said it has implemented new detection systems and banned the accounts involved.
Source: CNBC | The Hacker News | TechCrunch | CNN | Anthropic