
Silicon Valley’s AI race has taken a sharper geopolitical turn. Anthropic has accused three Chinese artificial intelligence companies — DeepSeek, Moonshot AI, and MiniMax — of orchestrating large-scale “distillation” campaigns using its Claude AI model.
According to Anthropic, the firms created more than 24,000 fake accounts and generated over 16 million exchanges with Claude. The alleged goal was to extract advanced capabilities, particularly in agentic reasoning, tool use, and coding, and use them to improve their own systems.
Distillation itself is not controversial. AI labs commonly use it to compress large, expensive models into smaller, cheaper ones. The controversy begins when that process allegedly involves querying a competitor’s proprietary system at scale. In that context, distillation starts to look less like optimisation and more like copying.
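Anthropic's complaint does not describe the companies' training methods, and API-based extraction would rely on sampled text rather than internal model signals. For context only, the textbook form of distillation trains a small "student" model to match a large "teacher" model's softened output distribution. A minimal sketch of that standard loss, with illustrative function names (not drawn from any company's code):

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities; higher temperature softens the distribution."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the teacher's softened outputs and the student's.

    Minimising this pushes the student to imitate the teacher's behaviour.
    """
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return float(np.sum(p * (np.log(p) - np.log(q))))

teacher = np.array([2.0, 1.0, 0.1])
mismatched_student = np.array([0.1, 1.0, 2.0])
# A student that already matches the teacher incurs (near-)zero loss;
# a mismatched one incurs a positive loss that training would reduce.
```

When a lab distils its own models, teacher logits are freely available; querying a competitor's API at scale, as alleged here, yields only generated text, which would instead be used as fine-tuning data.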
The scale varied across companies. Anthropic claims it tracked more than 150,000 exchanges tied to DeepSeek that appeared focused on logic and alignment, including censorship-safe responses to sensitive prompts. Moonshot AI allegedly generated 3.4 million exchanges targeting agentic reasoning, coding, and computer-use agents. MiniMax, meanwhile, is accused of driving 13 million exchanges, at one point redirecting nearly half its traffic to extract capabilities from the latest Claude release.
DeepSeek has already unsettled the US AI establishment once before. Its open-source R1 reasoning model reportedly approached frontier-level performance at a fraction of the cost. A forthcoming DeepSeek V4 model is expected to push even further, with claims it could outperform Claude and ChatGPT in coding benchmarks.
The allegations arrive at a politically sensitive moment. The administration of Donald Trump recently allowed US firms such as Nvidia to export advanced AI chips, including the H200, to China. Critics argue that easing export controls strengthens China’s computing capacity at a crucial point in the AI arms race.
Anthropic contends that the sheer scale of the alleged distillation efforts implies access to advanced chips. In its view, restricting semiconductor exports limits not just model training, but also the feasibility of large-scale extraction attacks.
There is also a security argument. Anthropic says models built through illicit distillation may not retain safety guardrails designed to prevent misuse, from bioweapons research to malicious cyber activity. If such safeguards are stripped away, the risk is not merely commercial. It becomes strategic.