Anthropic Says Chinese AI Firms Copied Claude Using 24,000 Fake Accounts

Anthropic says three Chinese AI firms used more than 24,000 fake accounts to copy the capabilities of its Claude AI models.

The core of the issue: AI company Anthropic has stated that three Chinese artificial intelligence firms have been using a method called "distillation" to copy the advanced abilities of Anthropic's AI model, Claude. This alleged copying was done through a large number of fake accounts, aiming to improve the Chinese companies' own AI models.

The labs "targeted Claude's most differentiated capabilities: agentic reasoning, tool use, and coding."

This accusation comes at a time when the United States is debating how to control the export of advanced AI chips to China, a move intended to slow China's AI development. Anthropic likens the alleged large-scale copying to stealing another company's homework and calls it an illegitimate way to gain an advantage.

Details of the Allegations

Anthropic reports that DeepSeek, Moonshot AI, and MiniMax are the three Chinese companies involved. They allegedly:

  • Created over 24,000 fake accounts.

  • Used these accounts to have more than 16 million conversations with Claude.

  • Focused on copying Claude's specific strengths, such as its ability to reason, use tools, and write code.

The technique of "distillation" is a standard way for AI companies to train smaller, more efficient models. However, Anthropic asserts that its use by these companies is illegitimate, as it's a shortcut to acquire capabilities developed through Anthropic's own resources and efforts.


Anthropic mentioned that these companies likely used services that resell access to AI models, creating networks of accounts to spread their requests across Anthropic's systems and other cloud platforms.

The Role of "Distillation"

Distillation, in the context of AI, is a process where a larger, more powerful AI model (the "teacher") trains a smaller AI model (the "student"). The student model learns to perform similarly to the teacher model, often becoming more efficient and less costly to run.
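The teacher–student idea above can be sketched as matching output distributions: the student is trained to minimize the divergence between its predictions and the teacher's softened predictions. A minimal, self-contained Python illustration follows; it is a conceptual sketch only (real LLM distillation works on teacher-generated text at vastly larger scale), and all logits shown are hypothetical values invented for the example:

```python
import math

def softmax(logits, temperature=1.0):
    # Softening with a temperature > 1 spreads probability mass across
    # classes, exposing more of the teacher's knowledge about which
    # outputs it considers "almost right".
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions.

    Minimizing this loss pushes the student's output distribution toward
    the teacher's, which is the core of knowledge distillation.
    """
    p = softmax(teacher_logits, temperature)  # teacher (the target)
    q = softmax(student_logits, temperature)  # student (being trained)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical logits for a single prediction:
teacher = [2.5, 0.1, -1.0]
student = [1.0, 0.5, 0.0]
loss = distillation_loss(teacher, student)  # positive; shrinks as the
                                            # student mimics the teacher
```

The loss is zero exactly when the two distributions match, so gradient descent on it gradually transfers the teacher's behavior into the (often much smaller) student model.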


"Distillation is a common training method that AI labs use on their own models to create smaller, cheaper versions, but competitors can use it to essentially copy the homework of other labs."

While distillation is a normal and legitimate practice for AI development when applied to one's own models, Anthropic claims that using it to directly copy a competitor's advanced features crosses a line.

Anthropic is working on improving its defenses to make it harder for this kind of large-scale distillation to happen and easier to spot when it does.


The accusations are unfolding alongside significant discussions in the U.S. about AI chip export controls. These controls are designed to limit China's access to advanced technology that could be used to boost its AI capabilities, potentially for military or surveillance purposes.


The use of fake accounts and indirect access through resellers highlights a complex challenge in enforcing regulations in the rapidly evolving AI landscape.

The U.S. government has been considering how strictly to enforce policies that limit the export of high-end AI chips, which are crucial for developing powerful AI models. The alleged actions by these Chinese firms add another layer to this ongoing debate.

Industry Reactions and Expert Views

While CyberScoop and other publications were unable to immediately reach the accused Chinese companies for comment, the practice of using distillation in this manner has drawn commentary.



"What makes the tactic illegitimate is that it essentially steals Anthropic’s intellectual property, computing power and effort…" - Gal Elbaz, co-founder and chief technology officer of Oligo Security.

Anthropic's claim that this is the "largest documented case of AI model theft to date" underscores the perceived scale of the alleged operation.

Next Steps and Future Considerations

Anthropic is enhancing its security measures to prevent and detect future attempts at unauthorized data harvesting through distillation.

The incident brings attention to the need for clear guidelines and enforcement mechanisms regarding the ethical use of AI technology and the protection of intellectual property in the global AI race.

  • Companies named by Anthropic: DeepSeek, Moonshot AI, MiniMax.

  • AI model targeted: Claude.

  • Technique used: Distillation.

  • Alleged scale: Over 24,000 fake accounts, over 16 million exchanges.

  • Key capabilities targeted: Agentic reasoning, tool use, coding.


Frequently Asked Questions

Q: What does Anthropic say Chinese AI firms did?
Anthropic claims three Chinese AI companies, DeepSeek, Moonshot AI, and MiniMax, used over 24,000 fake accounts to copy the skills of Anthropic's AI model, Claude. They did this to make their own AI models better.
Q: How did these Chinese AI firms copy Claude's abilities?
The companies allegedly used a method called 'distillation' by having over 16 million conversations with Claude through fake accounts. This helped them learn Claude's special skills in reasoning, using tools, and coding.
Q: Why does Anthropic call this copying illegitimate?
Anthropic says the method amounts to stealing another company's work. While distillation is a normal AI training method when applied to one's own models, using it to copy a competitor's advanced capabilities without permission is, in Anthropic's view, illegitimate and unfair.
Q: Who is affected by these claims?
The AI companies involved, Anthropic, and potentially the US government are affected. The claims add to discussions about controlling AI chip exports to China and protecting intellectual property in AI development.
Q: What are the next steps Anthropic is taking?
Anthropic is working to improve its security systems. They want to make it harder for others to copy their AI models in this way and to detect such activities more easily in the future.