
Anthropic says DeepSeek, Moonshot, and MiniMax used 24,000 fake accounts to rip off Claude

Anthropic made headlines in the artificial intelligence industry by accusing three Chinese AI laboratories of engaging in coordinated campaigns to siphon capabilities from its models. The labs in question – DeepSeek, Moonshot AI, and MiniMax – allegedly used fraudulent accounts to interact with Anthropic’s models in violation of their terms of service. This practice, known as distillation, allows competitors to leapfrog research and development efforts by extracting knowledge from existing models.

Distillation involves creating smaller, more efficient AI models by extracting knowledge from larger, more powerful ones. While it is a legitimate training method, it can also be weaponized for intellectual property theft. The issue gained prominence when DeepSeek released its R1 reasoning model, which appeared to match leading American models at a lower cost. This sparked a wave of replication and innovation in the AI community, raising concerns about the ethical implications of distillation.
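
To make the mechanics concrete, the sketch below shows the textbook form of knowledge distillation, in which a small "student" model is trained to match the softened output distribution of a larger "teacher." The model objects, temperature, and mixing weight here are illustrative assumptions rather than details from any lab's pipeline, and API-based distillation of the kind alleged in this case would typically fine-tune on a model's sampled responses rather than its raw logits.

```python
# A minimal sketch of classic knowledge distillation (soft-label training).
# Models, dataloader, and hyperparameters are hypothetical placeholders,
# not details of any lab's actual pipeline.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Soft-label loss: the student matches the teacher's softened distribution."""
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # KL divergence, scaled by T^2 to keep gradient magnitudes comparable.
    return F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * temperature ** 2

def train_step(student, teacher, batch, optimizer, alpha=0.5):
    """One training step mixing the hard-label loss with the distillation loss."""
    inputs, labels = batch
    with torch.no_grad():                     # the teacher is frozen
        teacher_logits = teacher(inputs)
    student_logits = student(inputs)
    hard_loss = F.cross_entropy(student_logits, labels)
    soft_loss = distillation_loss(student_logits, teacher_logits)
    loss = alpha * hard_loss + (1 - alpha) * soft_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```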

Anthropic detailed how the three Chinese labs conducted large-scale distillation attacks targeting specific capabilities of its models. DeepSeek, the most sophisticated of the three, used various techniques to extract knowledge related to reasoning, grading tasks, and sensitive topics. Moonshot AI and MiniMax also engaged in similar activities, focusing on different aspects of Anthropic’s models.

The labs bypassed Anthropic’s restrictions on access to its models from China by using commercial proxy services and layered network architectures. These proxy networks distributed traffic across many accounts to evade detection and maximize throughput. Anthropic also raised national security concerns, warning that models built through illicit distillation lack the necessary safeguards and could be used for offensive purposes by authoritarian governments.

The legal landscape around AI distillation remains murky, making it difficult to pursue perpetrators in court. Anthropic instead framed the issue as a national security threat that demands coordinated industry and government action, and the company has implemented defensive measures to detect and block distillation attacks against its API.
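
Anthropic has not published how it detects such campaigns, but a hypothetical sketch of one plausible signal is shown below: flagging groups of nominally unrelated accounts that issue near-identical, templated prompts at unusually high volume. The fingerprinting scheme, thresholds, and function names are assumptions for illustration only, not Anthropic's actual system.

```python
# Hypothetical sketch: flag coordinated API accounts that share a prompt
# template at high volume. Illustrative only; not a real provider's detector.
from collections import defaultdict
from dataclasses import dataclass
import hashlib
import re

@dataclass
class Request:
    account_id: str
    prompt: str

def template_fingerprint(prompt: str) -> str:
    """Normalize numbers and whitespace so templated prompts hash to the same key."""
    normalized = re.sub(r"\d+", "<N>", prompt.lower())
    normalized = re.sub(r"\s+", " ", normalized).strip()
    return hashlib.sha256(normalized.encode()).hexdigest()[:16]

def flag_coordinated_accounts(requests, min_accounts=50, min_requests=10_000):
    """Return prompt templates shared by unusually many accounts at unusual volume."""
    accounts_per_template = defaultdict(set)
    requests_per_template = defaultdict(int)
    for r in requests:
        fp = template_fingerprint(r.prompt)
        accounts_per_template[fp].add(r.account_id)
        requests_per_template[fp] += 1
    return {
        fp: sorted(accounts)
        for fp, accounts in accounts_per_template.items()
        if len(accounts) >= min_accounts and requests_per_template[fp] >= min_requests
    }
```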

The disclosure is expected to shape policy debates on export controls and on government use of AI models. It also marks a new reality for the industry: API security is now a strategic priority for protecting intellectual property and national security interests, and curbing illicit distillation will require coordinated effort across companies and governments.
