Anthropic Alleges Model Mining by Chinese AI Companies


Agentic AI Firms Accused of Conducting Large-Scale Data Theft Using Fake Accounts


Anthropic, a U.S. firm specializing in artificial intelligence, has alleged that three Chinese AI companies—DeepSeek, Moonshot AI, and MiniMax—engaged in extensive operations aimed at extracting proprietary functionalities from its Claude models. The company described these activities as “industrial-scale campaigns” executed through approximately 24,000 fraudulent accounts, effectively breaching its terms of service and bypassing access limitations put in place for the China region.

Anthropic asserted that the three entities collectively generated over 16 million interactions with its Claude models. This practice, known as "distillation," involves training a smaller AI model on the outputs of a larger one so that it absorbs the larger model's capabilities at a fraction of the development cost. Although distillation is routinely employed within AI research, Anthropic claims the Chinese companies exploited the method to unlawfully extract proprietary techniques.
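The mechanics of distillation can be illustrated at a toy level. In the sketch below, a `teacher_api` function stands in for a large model's query interface, and a one-parameter logistic "student" is fitted to the teacher's soft outputs alone. Every name and the toy task are invented for illustration; none of this reflects Claude's architecture or the accused labs' actual pipelines.

```python
import math
import random

random.seed(0)

def teacher_api(x):
    """Stand-in for a large model's API (hypothetical): returns a soft
    probability distribution over two classes for a scalar input x."""
    z = 2.0 * x - 1.0
    p = 1.0 / (1.0 + math.exp(-4.0 * z))
    return [1.0 - p, p]

def train_student(pairs, lr=0.5, epochs=200):
    """Fit a tiny logistic student to the teacher's soft labels by
    minimizing cross-entropy with stochastic gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, (_, q1) in pairs:
            p1 = 1.0 / (1.0 + math.exp(-(w * x + b)))
            grad = p1 - q1          # d(cross-entropy)/d(logit)
            w -= lr * grad * x
            b -= lr * grad
    return w, b

# "Model mining" step: query the teacher many times and keep the transcripts...
queries = [random.random() for _ in range(500)]
dataset = [(x, teacher_api(x)) for x in queries]

# ...then train the student on those collected outputs alone.
w, b = train_student(dataset)
```

The point of the sketch is that the student never sees the teacher's weights, only its responses, which is why distillation at scale is possible purely through API access.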

The allegations underscore a growing pattern in cybersecurity where organizations increasingly face threats not just from individuals, but from organized and well-resourced adversaries. Such activities map to several tactics in the MITRE ATT&CK framework, including initial access via fraudulent account creation and persistence through ongoing infrastructure management, such as the use of commercial proxy services. These proxies obfuscate the origins of the traffic, complicating detection efforts.

In a parallel revelation, rival OpenAI reported similar unauthorized activities and submitted a memo to the U.S. House Select Committee on China detailing “sophisticated, multi-stage pipelines” used for data mining. Both Anthropic and OpenAI now characterize these operations as potential threats to national security, aiming to raise the profile of the risks posed by such behaviors directly linked to foreign entities.

Significantly, MiniMax emerged as the most aggressive actor among the three, accounting for over 13 million exchanges specifically targeting Claude's advanced reasoning capabilities. Anthropic noted that it was able to monitor MiniMax's operations in real time, capturing details of the campaign's lifecycle. When a new version of Claude was introduced, MiniMax swiftly adjusted its approach, redirecting substantial traffic to the updated model.

Furthermore, Moonshot AI utilized hundreds of fraudulent accounts to conduct more than 3.4 million exchanges. This organization attempted to mask its campaign’s coordinated nature by utilizing various access paths, indicating a calculated strategy to evade detection. DeepSeek’s efforts, albeit smaller in volume with over 150,000 exchanges, were marked by the use of specific prompts aimed at reconstructing Claude’s reasoning processes, suggesting an attempt to gather sensitive training data.

The accusations gain complexity because Anthropic itself faces legal challenges over copyright issues and unauthorized data scraping during its own model training. Neither Anthropic nor OpenAI has resolved those claims in court, yet both now assert that the behaviors exhibited by the Chinese labs harm not only their operations but potentially broader national interests.

Anthropic has initiated measures to combat these threats, including implementing behavioral fingerprinting systems designed to detect unusual API traffic patterns associated with distillation practices. Enhanced verification protocols for at-risk accounts are also being established, alongside collaborations with other AI firms and authorities to share information concerning significant indicators of fraud. Yet, as nations grapple with advancements in AI technology and their implications, the evolving landscape complicates both corporate and national security strategies.
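Behavioral fingerprinting of the kind Anthropic describes can be sketched at a toy level: aggregate per-account traffic, then flag accounts whose request volume or prompt repetition looks automated. The function, features, and thresholds below are hypothetical illustrations for this article, not Anthropic's actual detection system.

```python
from collections import Counter

def fingerprint(events, rate_threshold=1000, repeat_threshold=0.8):
    """Toy behavioral fingerprint (illustrative only): flag accounts whose
    request volume or prompt-template repetition resembles automated
    distillation traffic.

    events: iterable of (account_id, prompt_text) pairs.
    Returns the list of flagged account ids.
    """
    by_account = {}
    for account, prompt in events:
        by_account.setdefault(account, []).append(prompt)

    flagged = []
    for account, prompts in by_account.items():
        volume = len(prompts)
        # Share of requests that reuse the single most common prompt.
        top_count = Counter(prompts).most_common(1)[0][1]
        repetition = top_count / volume
        if volume > rate_threshold or repetition > repeat_threshold:
            flagged.append(account)
    return flagged
```

A production system would look at many more signals (timing, proxy churn, output sampling patterns), but the design choice is the same: score accounts on aggregate behavior rather than inspecting any single request.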
