AI Firm Reveals Automated Cyber Extortion Campaign Targeting Critical Infrastructure

Artificial intelligence company Anthropic says it disrupted a cybercrime operation that leveraged its large language models to automate a sophisticated data theft and extortion campaign. The group, which Anthropic tracks as GTG-2002, targeted a variety of entities, including healthcare providers, emergency services, government facilities and religious organizations, according to a threat report the company published.
While the attackers did not deploy ransomware, they extorted their victims, demanding payments that in some instances exceeded $500,000 in exchange for the promised deletion of stolen data. Anthropic's findings point to a troubling trend: cybercriminals are increasingly able to automate their operations using tools such as Claude Code, which is designed to help developers write code more efficiently.
The firm reported that the group used Claude Code to automate critical phases of the attack, including reconnaissance, credential harvesting and network penetration. The attackers used the tool to scan VPN endpoints for exploitable vulnerabilities, and Claude also advised them on which stolen data would be most valuable to exfiltrate and generated alarming ransom notes for display on victims' machines.
Anthropic did not confirm whether any sensitive data was successfully stolen, but the escalation of such automated tactics is alarming cybersecurity experts. The trend points toward a future in which even individuals with minimal technical knowledge can execute widespread cyberattacks with relative ease. The phenomenon is often called "vibe hacking," a riff on "vibe coding," the practice of having AI generate usable code without the user fully understanding how it works, bugs and errors included.
In response to the threat, Anthropic said it suspended the accounts implicated in the attacks and shared technical indicators with law enforcement. The company also implemented additional safeguards, including a tailored classifier for automated monitoring and a new detection method aimed at identifying similar malicious activity as quickly as possible.
The exploitative use of AI tools is not confined to cyber extortion. Anthropic's report also chronicled other misuse cases, including North Korean hackers using Claude to fabricate professional resumes and pass coding assessments as part of an ongoing effort to infiltrate Western companies and steal cryptocurrency. A U.K.-based cybercrime group, meanwhile, used Claude Code to develop sophisticated ransomware variants that it sold on darknet markets for substantial sums.
Anthropic said its efforts to curb misuse also included blocking Chinese attackers seeking to refine cyber operations aimed at Vietnam, as well as disrupting Russian-speaking and Spanish-speaking adversaries engaged, respectively, in developing stealthier malware and validating stolen credit card data.
The effectiveness of these countermeasures is hard to quantify, but the evident experimentation by attackers seeking to integrate AI into their criminal operations is a significant concern for cybersecurity professionals. The campaign maps onto familiar MITRE ATT&CK tactics, including initial access through credential harvesting and vulnerability exploitation, persistence through automated access tooling, and data exfiltration, underscoring how quickly the threat landscape facing critical infrastructure and other sensitive organizations is evolving.