China Labs Hijack US AI Secrets Via 'Distillation'
23 Feb
Summary
- Chinese AI labs allegedly used over 24,000 fake accounts.
- Over 16 million exchanges targeted advanced US AI capabilities.
- AI distillation bypasses export controls, poses security risks.

Three Chinese AI laboratories allegedly circumvented U.S. export controls by using fraudulent accounts to extract advanced artificial intelligence capabilities from U.S. systems. Anthropic, a leading AI firm, reported that DeepSeek, Moonshot AI, and MiniMax created over 24,000 fake accounts to interact more than 16 million times with Anthropic's Claude chatbot.
This coordinated "distillation" campaign focused on extracting high-value model outputs, including complex reasoning and coding. Anthropic warns that models built through such large-scale distillation may lack the safety guardrails of frontier U.S. systems. This could enable authoritarian regimes to leverage AI for offensive cyber operations, disinformation, and mass surveillance.
Anthropic detected the campaigns through IP address correlations and metadata, noting that the activity stood apart from typical customer traffic. The company has shared its findings with U.S. government entities and industry partners. While not directly implicating the Chinese government, the incidents highlight growing concern over AI model theft, mirroring similar allegations made by OpenAI and Google against Chinese AI firms.
The issue underscores the evolving challenges in protecting U.S. AI advancements, as distillation attacks target the reinforcement learning process, a critical component in refining AI models. This method allows foreign labs to acquire sophisticated capabilities even when direct access to advanced chips or model weights is restricted.
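The basic mechanics of distillation can be shown with a toy sketch: a "student" learns to imitate a "teacher" purely from the teacher's input/output behavior, with no access to its weights or training data. Everything below is invented for illustration (the canned teacher, the prompts, and the student's lookup table are stand-ins, not any real lab's method).

```python
def teacher_respond(prompt: str) -> str:
    """Stand-in for a proprietary model: callers see only inputs and outputs."""
    canned = {
        "reverse 'abc'": "cba",
        "sum 2+2": "4",
    }
    return canned.get(prompt, "unknown")


def collect_distillation_data(prompts):
    """Harvest (prompt, response) pairs -- the raw material of distillation."""
    return [(p, teacher_respond(p)) for p in prompts]


class Student:
    """Trivial 'student model' that memorizes the harvested pairs."""

    def __init__(self):
        self.table = {}

    def train(self, pairs):
        for prompt, response in pairs:
            self.table[prompt] = response

    def respond(self, prompt: str) -> str:
        return self.table.get(prompt, "unknown")


pairs = collect_distillation_data(["reverse 'abc'", "sum 2+2"])
student = Student()
student.train(pairs)
print(student.respond("sum 2+2"))  # the student now mimics the teacher: "4"
```

At real scale the student is a neural network fine-tuned on millions of such pairs rather than a lookup table, which is why high-volume automated querying is the signature Anthropic says it detected.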