New Cyberattack Technique Exploits Stolen Cloud Credentials to Target LLM Services
Cybersecurity researchers have uncovered a sophisticated attack that leverages stolen cloud credentials to infiltrate cloud-hosted large language model (LLM) services. The technique, dubbed LLMjacking by the Sysdig Threat Research Team, poses a significant threat because the attackers aim to monetize the stolen access by selling it to other malicious actors.
In a detailed analysis, security researcher Alessandro Brucato explained that the attack begins with the breach of a system running a vulnerable version of the Laravel Framework, such as one exposed to CVE-2021-3129. From there, the attackers exfiltrate Amazon Web Services (AWS) credentials and use them to access cloud-hosted LLM services. "In this instance, a local Claude (v2/v3) LLM model from Anthropic was targeted," he stated.
Among the tools employed in this operation is an open-source Python keychecker script designed to validate keys for various platforms, including offerings from Anthropic, AWS Bedrock, Google Cloud Vertex AI, Mistral, and OpenAI. Brucato explained that during the credential-verification phase no legitimate LLM queries were actually run; the attackers did just enough to determine what the credentials were capable of and what quotas applied to them.
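Sysdig did not publish the attackers' script, but this style of low-cost validation can be sketched against AWS Bedrock with boto3. The idea in the sketch below is an assumption for illustration: send a deliberately malformed request so that working credentials surface as a validation error rather than a billable completion, while unauthorized credentials surface as an access-denied error. The function name, model ID, and region are illustrative, not the attackers' actual code.

```python
import json

import boto3
from botocore.exceptions import ClientError

def can_invoke_claude(access_key: str, secret_key: str, region: str = "us-east-1") -> bool:
    """Check whether a key pair can invoke Claude on Bedrock without
    generating billable output (illustrative sketch only)."""
    client = boto3.client(
        "bedrock-runtime",
        region_name=region,
        aws_access_key_id=access_key,
        aws_secret_access_key=secret_key,
    )
    body = json.dumps({
        "prompt": "\n\nHuman: hi\n\nAssistant:",
        "max_tokens_to_sample": -1,  # invalid on purpose: no tokens are ever produced
    })
    try:
        client.invoke_model(modelId="anthropic.claude-v2", body=body)
    except ClientError as err:
        code = err.response["Error"]["Code"]
        if code == "ValidationException":
            return True   # request was authorized, then rejected on the bad parameter
        if code == "AccessDeniedException":
            return False  # credentials cannot reach the model
        raise
    return True
```

A probe like this is cheap for the attacker and nearly invisible in billing data, which is why CloudTrail API logs, rather than cost reports, are the place to look for it.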
The keychecker is further integrated with an open-source utility called oai-reverse-proxy, a reverse proxy server for LLM APIs. This pairing suggests the threat actors are offering access to the compromised accounts without exposing the underlying credentials: as Brucato noted, a reverse proxy in front of the stolen keys would let the attackers generate revenue by selling access to the LLM models while keeping their operation discreet.
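oai-reverse-proxy itself is a Node.js project; the Python sketch below (Flask plus requests, with a hypothetical route and environment-variable name) only illustrates the credential-hiding pattern the article describes: buyers talk to the proxy, and the stolen provider key is attached server-side, so it never reaches them.

```python
import os

import requests
from flask import Flask, Response, request

app = Flask(__name__)
UPSTREAM = "https://api.anthropic.com/v1/messages"
PROVIDER_KEY = os.environ["PROVIDER_KEY"]  # held only on the proxy host

@app.post("/proxy/messages")
def relay():
    # Forward the client's JSON body upstream, injecting the credential
    # server-side so the buyer never sees the underlying key.
    upstream = requests.post(
        UPSTREAM,
        json=request.get_json(),
        headers={
            "x-api-key": PROVIDER_KEY,       # credential added here
            "anthropic-version": "2023-06-01",
        },
        timeout=60,
    )
    return Response(upstream.content, status=upstream.status_code,
                    content_type=upstream.headers.get("content-type"))
```

From the provider's point of view, all of this traffic appears to originate from the proxy operator's single set of credentials, which is exactly what makes the stolen access resellable.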
Additionally, the attackers have been observed querying logging settings, likely in an attempt to sidestep detection while using the compromised credentials to run their prompts. This attack vector diverges from traditional methods that focus on prompt injection or model poisoning: instead, the attackers exploit LLM access itself, while the legitimate cloud account holders unwittingly bear the financial burden.
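On AWS Bedrock, one concrete form such a probe can take is a call to the GetModelInvocationLoggingConfiguration API, which reveals whether prompts and completions are being recorded. Defenders can run the same check on their own accounts, and treat unexpected calls to this API in CloudTrail as a signal worth alerting on. A minimal sketch with boto3:

```python
import boto3

# Bedrock's control-plane API exposes the account's model-invocation
# logging configuration; an empty configuration means prompts and
# completions are not being recorded in this region.
bedrock = boto3.client("bedrock", region_name="us-east-1")

config = bedrock.get_model_invocation_logging_configuration()
if not config.get("loggingConfig"):
    print("Model invocation logging is DISABLED in this region.")
else:
    print("Model invocation logging is enabled:", config["loggingConfig"])
```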
The potential financial implications for targeted organizations are significant: Sysdig estimates that such an attack could drive LLM consumption costs to more than $46,000 in a single day. Brucato emphasized that LLM charges accumulate rapidly, since pricing depends on the model in use and the volume of tokens processed. By saturating usage quotas, attackers can also block the compromised organization from using its own models legitimately, disrupting normal business operations.
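To see why the bill compounds so quickly, consider a rough back-of-the-envelope model. The per-token prices, request rate, and token counts below are illustrative assumptions, not the figures behind Sysdig's estimate:

```python
# Illustrative token prices (USD per 1,000 tokens) and usage volumes.
PRICE_IN_PER_1K = 0.008
PRICE_OUT_PER_1K = 0.024

requests_per_minute = 500              # attacker saturating the quota
tokens_in, tokens_out = 1_000, 1_000   # per request (assumed)

per_request = (tokens_in / 1000) * PRICE_IN_PER_1K \
            + (tokens_out / 1000) * PRICE_OUT_PER_1K
per_day = per_request * requests_per_minute * 60 * 24
print(f"~${per_day:,.0f} per day")     # ~$23,040 under these assumptions
```

Even at these modest assumed rates the daily cost lands in the tens of thousands of dollars, so a larger model or higher quota easily reaches the level Sysdig describes.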
Organizations are advised to enhance their logging practices and scrutinize cloud activity for signs of unauthorized access. Implementing robust vulnerability management processes is crucial to prevent initial access by malicious entities. As cyber threats evolve, understanding the relevant MITRE ATT&CK tactics and techniques, such as initial access, credential access, and privilege escalation, can aid businesses in reinforcing their defenses against this emerging threat landscape.
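As one concrete step on AWS, Bedrock's model-invocation logging (disabled by default) can be switched on so that every prompt and completion is written to S3 or CloudWatch for auditing. The sketch below assumes boto3 and a placeholder bucket name; the bucket must have a policy that allows the Bedrock service to write to it.

```python
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

# Enable model-invocation logging to S3 so prompts and completions
# become auditable. Bucket name is a placeholder for illustration.
bedrock.put_model_invocation_logging_configuration(
    loggingConfig={
        "s3Config": {
            "bucketName": "example-bedrock-invocation-logs",
            "keyPrefix": "bedrock/",
        },
        "textDataDeliveryEnabled": True,
        "imageDataDeliveryEnabled": True,
        "embeddingDataDeliveryEnabled": True,
    }
)
```

With invocation logging enabled, the abnormal request volume and unfamiliar prompts characteristic of LLMjacking become visible well before the monthly bill arrives.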
In summary, the discovery of LLMjacking presents a critical challenge for businesses relying on cloud-based language models. As the cybersecurity landscape continues to evolve, awareness and preparedness are paramount in safeguarding sensitive information and mitigating the risks associated with sophisticated cyberattacks.