This report examines the growing controversy over AI model distillation, as Anthropic accuses Chinese firms, including DeepSeek, of improperly using its Claude system to strengthen competing AI models, raising concerns about ethics, security, and global AI competition.
Tensions are escalating in the global technology sector as the debate over AI model distillation moves to the center of an international dispute. What was once considered a technical training method has now become a flashpoint in the growing rivalry between leading artificial intelligence companies.
US-based AI firm Anthropic has formally accused several Chinese developers of misusing its Claude system through large-scale AI model distillation practices. According to the company, the alleged activity involved systematic and automated interactions designed to extract valuable outputs from Claude and apply them to competing models.
The controversy gained public attention following a report by The Wall Street Journal. The report outlined what Anthropic described as organized campaigns that included the creation of approximately 24,000 accounts. These accounts reportedly generated more than 16 million exchanges with Claude, raising concerns about unauthorized AI model distillation at scale.
Among the companies named are DeepSeek, MiniMax, and Moonshot AI. Anthropic alleges that these firms relied on distillation to train or refine their own artificial intelligence systems. Model distillation typically involves transferring knowledge from a large, advanced model into a smaller or more efficient one. While the technique is widely recognized as legitimate in machine learning, concerns arise when it is conducted without authorization.
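In its conventional, supervised form, distillation trains a smaller "student" model to mimic the softened output distribution of a larger "teacher." A minimal sketch of that idea, using illustrative function names and made-up logits rather than anything specific to the systems in this dispute, looks like this:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Convert raw logits to a probability distribution, softened by a temperature."""
    scaled = logits / temperature
    exp = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    return exp / exp.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the teacher's softened outputs and the student's.

    Minimizing this loss during training pushes the student to reproduce
    the teacher's behavior, which is the core of knowledge distillation.
    """
    p = softmax(teacher_logits, temperature)  # teacher's "soft labels"
    q = softmax(student_logits, temperature)  # student's predictions
    return float(np.sum(p * np.log(p / q)))

# Hypothetical logits for a single 4-class prediction
teacher = np.array([4.0, 1.0, 0.5, 0.2])
student = np.array([3.5, 1.2, 0.4, 0.3])
print(round(distillation_loss(teacher, student), 4))
```

A higher temperature flattens the teacher's distribution, exposing more of its relative preferences among wrong answers; the loss is zero only when the student matches the teacher exactly.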
Anthropic maintains that distillation becomes problematic when it bypasses built-in safeguards. The company argues that Claude was accessed in ways that enabled the extraction of its advanced reasoning capabilities, which could then be embedded into other AI systems without replicating the original safety framework.
In the case of DeepSeek, Anthropic claims the firm specifically targeted Claude’s reasoning strengths and used distillation to produce censorship-safe responses to politically sensitive topics. If proven, such use could indicate an attempt to adapt advanced language capabilities to particular regulatory or political environments.
A central concern in this dispute is whether distillation transfers only performance improvements or also weakens safety protections. Anthropic warns that models created through unauthorized distillation may not retain the ethical guardrails embedded in the original system, increasing risks related to cybersecurity, misinformation, surveillance technologies, and offensive cyber operations.
The issue is unfolding during a period of intense global competition in artificial intelligence. Earlier, OpenAI raised similar concerns about DeepSeek’s activities, suggesting attempts to benefit from capabilities developed by US-based frontier labs. These developments underscore how AI model distillation is becoming a strategic issue in the broader US–China AI race.
Anthropic is now urging cloud providers, policymakers, and technology leaders to establish clearer standards around model distillation. The company has suggested that stricter controls on advanced semiconductor chips may help limit the scale of unauthorized model training and curb large-scale distillation campaigns.
At the same time, DeepSeek has attracted international attention for producing efficient and competitive AI systems. The dispute highlights how AI model distillation, once viewed mainly as a technical optimization tool, is now deeply connected to intellectual property rights, AI ethics, and national security discussions.
As artificial intelligence continues to advance, model distillation will likely remain at the center of industry debate. Companies, regulators, and governments must now determine how to balance innovation with responsible development. The outcome of this debate may shape not only the future of distillation practices but also the broader structure of the global AI landscape.