The core of the issue: AI company Anthropic alleges that three Chinese artificial intelligence firms used a method called "distillation" to copy the advanced capabilities of its AI model, Claude. The alleged copying was carried out through a large network of fake accounts, with the goal of improving the Chinese companies' own AI models.
"The labs 'targeted Claude’s most differentiated capabilities: agentic reasoning, tool use, and coding.'"
This accusation comes at a time when the United States is debating how tightly to control the export of advanced AI chips to China, a move intended to slow China's AI development. Anthropic claims this large-scale copying amounts to stealing another company's homework and is an illegitimate way to gain an advantage.
Details of the Allegations
Anthropic reports that DeepSeek, Moonshot AI, and MiniMax are the three Chinese companies involved. They allegedly:

Created over 24,000 fake accounts.
Used these accounts to have more than 16 million conversations with Claude.
Focused on copying Claude's specific strengths, such as its ability to reason, use tools, and write code.
The technique of "distillation" is a standard way for AI companies to train smaller, more efficient models. However, Anthropic asserts that its use by these companies is illegitimate, as it's a shortcut to acquire capabilities developed through Anthropic's own resources and efforts.
Anthropic mentioned that these companies likely used services that resell access to AI models, creating networks of accounts to spread their requests across Anthropic's systems and other cloud platforms.
The Role of "Distillation"
Distillation, in the context of AI, is a process where a larger, more powerful AI model (the "teacher") trains a smaller AI model (the "student"). The student model learns to perform similarly to the teacher model, often becoming more efficient and less costly to run.
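The teacher-student objective at the heart of distillation can be sketched in a few lines. This is a minimal, generic NumPy illustration of the standard temperature-scaled distillation loss, not Anthropic's or the accused labs' actual training pipeline; the function names and temperature value are illustrative assumptions:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; a higher temperature yields a softer
    # distribution that exposes more of the teacher's "dark knowledge".
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # KL divergence between the teacher's softened output distribution and
    # the student's: the core objective the student minimizes during training.
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    return float(np.sum(p_teacher * (np.log(p_teacher) - np.log(p_student))))

teacher = np.array([2.0, 1.0, 0.1])
# A student that exactly matches the teacher's logits incurs zero loss;
# a mismatched student incurs a positive loss it would train to reduce.
print(distillation_loss(np.array([2.0, 1.0, 0.1]), teacher))  # 0.0
print(distillation_loss(np.array([0.1, 1.0, 2.0]), teacher))  # > 0
```

In a real training loop, the student's parameters are updated by gradient descent on this loss over many teacher outputs, which is why large-scale query access to the teacher model is valuable.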

"Distillation is a common training method that AI labs use on their own models to create smaller, cheaper versions, but competitors can use it to essentially copy the homework of other labs."
While distillation is a normal and legitimate practice for AI development when applied to one's own models, Anthropic claims that using it to directly copy a competitor's advanced features crosses a line.
Anthropic is working on improving its defenses to make it harder for this kind of large-scale distillation to happen and easier to spot when it does.
Broader Implications and Related Discussions
The accusations are unfolding alongside significant discussions in the U.S. about AI chip export controls. These controls are designed to limit China's access to advanced technology that could be used to boost its AI capabilities, potentially for military or surveillance purposes.

The use of fake accounts and indirect access through resellers highlights a complex challenge in enforcing regulations in the rapidly evolving AI landscape.
The U.S. government has been considering how strictly to enforce policies that limit the export of high-end AI chips, which are crucial for developing powerful AI models. The alleged actions by these Chinese firms add another layer to this ongoing debate.
Industry Reactions and Expert Views
CyberScoop and other publications were unable to immediately reach the accused Chinese companies for comment, but the practice of using distillation in this manner has drawn commentary from security experts.

"What makes the tactic illegitimate is that it essentially steals Anthropic’s intellectual property, computing power and effort…" - Gal Elbaz, co-founder and chief technology officer of Oligo Security.
Anthropic's claim that this is the "largest documented case of AI model theft to date" underscores the perceived scale of the alleged operation.
Next Steps and Future Considerations
Anthropic is enhancing its security measures to prevent and detect future attempts at unauthorized data harvesting through distillation.
The incident brings attention to the need for clear guidelines and enforcement mechanisms regarding the ethical use of AI technology and the protection of intellectual property in the global AI race.
Companies named by Anthropic: DeepSeek, Moonshot AI, MiniMax.
AI model targeted: Claude.
Technique used: Distillation.
Alleged scale: Over 24,000 fake accounts, over 16 million exchanges.
Key capabilities targeted: Agentic reasoning, tool use, coding.
Sources
TechCrunch: https://techcrunch.com/2026/02/23/anthropic-accuses-chinese-ai-labs-of-mining-claude-as-us-debates-ai-chip-exports/
Tom's Hardware: https://www.tomshardware.com/tech-industry/artificial-intelligence/anthropic-accuses-deepseek-other-chinese-ai-developers-of-industrial-scale-copying-claims-distillation-included-24-000-fraudulent-accounts-and-16-million-exchanges-to-train-smaller-models
CyberScoop: https://cyberscoop.com/anthropic-accuses-chinese-labs-ai-distillation-cyber-risk/
Benzinga: https://www.benzinga.com/markets/prediction-markets/26/02/50797773/anthropic-says-chinese-labs-used-24000-fake-accounts-to-rip-off-claude-heres-what-it-means-for-amzn-pltr