AI-Generated Malware Takes Advantage of React2Shell for Small Gains


AI-Driven Malware Targets React2Shell Vulnerability, Compromising 91 Hosts


Researchers have identified artificial intelligence-generated malware exploiting the React2Shell vulnerability, evidence that attackers can now produce working exploits without coding expertise. The operation compromised 91 hosts.

Tracked as CVE-2025-55182, React2Shell is a flaw in Next.js server components that allows attackers to execute arbitrary commands remotely on affected systems. Because exploitation yields immediate remote code execution, the bug is an attractive target for opportunistic campaigns.

Researchers at Darktrace said they observed the activity through a deliberately exposed Docker daemon in their honeypot network. The attacker deployed a container named python-metrics-collector and used it to download and execute a malicious Python script that exploited React2Shell.
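Darktrace did not publish the attacker's exact commands, but the pattern it describes, remotely driving an unauthenticated Docker API to launch a container, is well documented. A minimal sketch of that pattern using the docker Python SDK follows; the host address, image and command are placeholders, not details from the report:

```python
# Sketch of the abuse pattern described above: driving a Docker daemon
# exposed on TCP (commonly port 2375, unauthenticated) from a remote host.
# The address, image and command below are placeholders, not details
# recovered from the Darktrace report.
import docker

# No TLS or credentials are needed when the Docker API is bound to
# 0.0.0.0:2375 without authentication.
client = docker.DockerClient(base_url="tcp://203.0.113.10:2375")

# Launch a container with an innocuous-sounding name, mirroring the
# "python-metrics-collector" label the researchers observed.
client.containers.run(
    "python:3.11-slim",
    name="python-metrics-collector",
    command=["python", "-c", "print('fetch-and-run payload goes here')"],
    detach=True,  # return immediately; the container keeps running
)
```

The honeypot angle is the same mechanism in reverse: leaving the daemon reachable and recording who connects and what they run.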

Notably, the malware came with thorough code comments and documentation, including a note reading “Educational/Research Purpose Only.” That level of annotation is atypical of malware, which is usually written to resist analysis: human-authored scripts tend to prioritize functionality over clarity, whereas large language models habitually document their output in detail.

Tests with AI-detection tools such as GPTZero put the likelihood that the code was LLM-generated at 76%. The educational disclaimer suggests the attacker may have bypassed AI model safeguards by framing the malicious request as an academic exercise.

The exploitation toolkit showed a degree of technical polish, using an IP-generation loop to find potential targets and a tailored payload that confirmed a host was vulnerable by running the whoami command. Once a host was confirmed, the script downloaded the XMRig cryptocurrency miner from GitHub. In all, the campaign infected 91 hosts and generated approximately 0.015 monero, about 5 British pounds.
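Darktrace has not released the script itself, so the following is only a schematic reconstruction of the scanning logic the report describes: generate random IPv4 addresses, probe each for a reachable service, and treat whoami output in the response as proof of code execution. The exploit request itself is deliberately omitted, and send_probe is a hypothetical stand-in:

```python
# Schematic reconstruction of the scanning loop described in the report:
# random IP generation plus a whoami-based vulnerability check. The actual
# React2Shell exploit request is omitted; send_probe() is a hypothetical
# placeholder, not code from the observed malware.
import random
import socket

def random_public_ip() -> str:
    """Generate a random IPv4 address, skipping obvious reserved ranges."""
    while True:
        octets = [random.randint(1, 254) for _ in range(4)]
        if octets[0] not in (10, 127):  # crude filter; real tools do more
            return ".".join(map(str, octets))

def port_open(ip: str, port: int = 3000, timeout: float = 1.0) -> bool:
    """Check whether the default Next.js port is reachable."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

def send_probe(ip: str) -> str:
    """Placeholder for the exploit request that would run `whoami`
    remotely; here it returns nothing instead of a crafted payload."""
    return ""

for _ in range(10):  # illustration only; the real loop ran continuously
    ip = random_public_ip()
    if not port_open(ip):
        continue
    output = send_probe(ip)   # would return the command's stdout
    if output.strip():        # any username back means code execution
        print(f"{ip} vulnerable, running as {output.strip()}")
        # next stage: fetch and launch the XMRig miner from GitHub
```

The whoami check is a cheap oracle: it proves arbitrary command execution and reveals the privilege level of the compromised process in a single request.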

Nathaniel Jones, Vice President of Security and AI Strategy at Darktrace, said cloud infrastructure is particularly exposed to LLM-generated malware. “LLMs are already generating functional remote-code-execution payloads, even without the attacker fully grasping the protocol or environment,” he said. The way cloud service APIs are designed, combined with AI models that simplify exploit development, makes cloud environments attractive targets.

The malware’s operational structure had significant weaknesses, including no self-propagation and a reliance on centralized resources for distribution. The script pulled Python packages from Pastebin, the primary payload was hosted in a GitHub Gist belonging to a now-banned user, and the attacker connected from an IP address registered to a residential internet service provider in India, pointing to a possible home-based operation.
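Those fixed staging points give defenders something concrete to watch for. As a simple illustration (not a recommendation from the report), an egress log could be screened for connections to the hosting services the campaign relied on; the log format and path below are assumptions for the example:

```python
# Illustrative only: scan an egress/proxy log for connections to the
# centralized staging services this campaign relied on. The log format
# (one URL or hostname per line) and file path are assumed for the example.
STAGING_HOSTS = ("pastebin.com", "gist.github.com")

def flag_staging_traffic(log_path: str) -> list[str]:
    """Return log lines mentioning known staging hosts for analyst review."""
    hits = []
    with open(log_path, encoding="utf-8") as log:
        for line in log:
            if any(host in line for host in STAGING_HOSTS):
                hits.append(line.strip())
    return hits

if __name__ == "__main__":
    for entry in flag_staging_traffic("egress.log"):
        print("review:", entry)
```

Because both services are legitimate, matches warrant review rather than automatic blocking, especially traffic originating from servers or containers that have no business fetching pastes.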

Jones cautioned that as attackers grow more proficient with AI tools, these weaknesses may diminish, yielding more autonomous behavior and more capable malware. For now, certain attack stages, particularly targeted lateral movement and privilege escalation, still require human judgment and expertise.

Organizations should be alert to how quickly AI is reshaping the threat landscape. “You no longer need extensive technical skills to create effective malware,” Jones pointed out. Because traditional defenses are often built on static signatures, threats like this one underscore the need for strategies that anticipate novel, AI-assisted attacks.
