Malicious LLMs: Uncovering Users Through Hacking Prompts


WormGPT 4 Offered at $50 Monthly; KawaiiGPT Now Open Source


A new wave of cybercrime-as-a-service offerings is emerging, featuring malicious large language models (LLMs) sold by subscription on platforms such as Telegram. For a monthly fee of $50, users can access tools such as WormGPT 4, while KawaiiGPT is distributed for free on GitHub. These models advertise capabilities including ransomware development with AES-256 encryption and rapid data exfiltration over Tor.

Research from Unit 42, part of Palo Alto Networks, highlights the practical applications of these offensive LLMs. The analysts examined both WormGPT 4 and KawaiiGPT, noting their transition from theoretical threats to commercially viable products, complete with active user bases and generated attack code.

Andy Piazza, a senior director at Unit 42, emphasized that threat actors are leveraging AI to streamline attack processes and enhance their efficiency. The utility of these tools spans from crafting targeted spear-phishing emails to dynamically generating payloads during attacks.

Unit 42 conducted tests on WormGPT 4, which quickly produced PowerShell scripts for encrypting PDF files with AES-256, with optional settings for data exfiltration via Tor. Notably, the ransom notes generated by the model tout "military-grade" encryption and impose 72-hour payment deadlines.

Similarly, KawaiiGPT, despite its casual user interface, exhibits robust capabilities. It can generate convincing credential-harvesting emails with carefully crafted subject lines, along with Python scripts for SSH authentication to facilitate lateral movement and remote shell access. The tool can also locate EML files (the standard email message format) and use Python's smtplib library to send them to attackers.

The commercial strategy for WormGPT 4 appears well-structured, with clear pricing tiers, while KawaiiGPT focuses on community engagement and wide accessibility through free distribution. Both tools strip out the ethical safeguards built into mainstream AI models, raising concerns about their potential for misuse.

The initial version of WormGPT, which surfaced in July 2023, was based on the open-source GPT-J 6B model and fine-tuned on datasets rich in malware code and phishing templates. Although the original tool was discontinued, successors have proliferated, with WormGPT 4 going on sale in underground forums late last year.

KawaiiGPT emerged in July of this year and has grown to more than 500 users, bolstered by an active Telegram community of 180 members. Both models point to a concerning trend in the cybersecurity landscape: generative AI output can evade traditional detection strategies that rely on spotting linguistic and coding errors.

Understanding the implications of these tools is critical for businesses, because the sophistication they introduce alters the threat landscape. Organizations have built detection strategies around identifiable flaws, and the enhanced fluency and precision of AI-generated content challenge those paradigms, suggesting that attackers may successfully evade both automated and manual scrutiny.

WormGPT 4 is priced at $50 per month, $175 annually, or $220 for lifetime access, while KawaiiGPT is available for free through GitHub. Notably, both models lack the ethical safeguards found in commercial LLMs, heightening the stakes for businesses relying on conventional security measures.
