Hackers Concealed Malware Using Complex AI Code


Hackers used artificial intelligence-generated code to obfuscate malware in a sophisticated phishing campaign, according to Microsoft. The payload was concealed behind layers of unnecessarily complicated code, making detection challenging.

Microsoft Security Copilot examined samples drawn from credential phishing attempts and noted that the complexity and verbosity of the code were atypical for human-written scripts, prompting suspicions of AI involvement. The initial attack vector was a phishing email designed to mimic a legitimate file-sharing notification, with sender and recipient addresses appearing identical. The malicious payload was a file named “23mb – PDF- 6 pages.svg,” whose name masqueraded as a PDF while it carried an SVG extension.

Vector files present unique opportunities for cybercriminals due to their text-based and scriptable nature, allowing for embedded JavaScript and dynamic content that can deliver deceptive phishing payloads while evading detection. The SVG format can incorporate obfuscation-friendly features such as invisible elements and delayed script execution, presenting a significant challenge for traditional security measures.
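As a defensive illustration of why these features matter, the short sketch below flags the two SVG traits described above: embedded script elements and invisible content. The function name, heuristics, and sample file are invented for demonstration; element and attribute names follow the SVG specification.

```python
# Minimal sketch: flag SVG features abused in attacks like this one --
# embedded <script> elements and invisible (zero-opacity / display:none)
# nodes. The sample document below is invented for illustration.
import xml.etree.ElementTree as ET

SVG_NS = "{http://www.w3.org/2000/svg}"

def suspicious_svg_features(svg_text: str) -> list[str]:
    """Return findings for scriptable or hidden SVG content."""
    findings = []
    root = ET.fromstring(svg_text)
    for el in root.iter():
        tag = el.tag.removeprefix(SVG_NS)
        if tag == "script":
            findings.append("embedded <script> element")
        style = el.get("style", "").replace(" ", "")
        if el.get("opacity") == "0" or "display:none" in style:
            findings.append(f"invisible <{tag}> element")
    return findings

sample = """<svg xmlns="http://www.w3.org/2000/svg">
  <text opacity="0">revenue operations</text>
  <script>/* would execute when opened in a browser */</script>
</svg>"""

print(suspicious_svg_features(sample))
# -> ['invisible <text> element', 'embedded <script> element']
```

A real scanner would go further (event-handler attributes, external references, delayed execution via timers), but even this simple pass shows how much attack surface a "static image" format can hide.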

Should the target open this malicious SVG file, they would be rerouted to a webpage that prompts them to complete CAPTCHA verification, a well-known social engineering tactic designed to foster trust and postpone caution. Although Microsoft’s visibility only extended to the initial landing page, it is inferred that fraudulent sign-in interfaces would follow.

Analysis revealed that the SVG code employed novel obfuscation techniques distinguishing it from typical phishing methodologies. Rather than relying on encryption or conventional encoding, the attackers masked their intent behind business-related terminology, embedding terms like “revenue” and “operations” into invisible sections of the file.

This embedded JavaScript processed these business terms through encoded transformations, allowing attackers to conceal malicious instructions within seemingly innocuous business metadata. The complexity of the code led Microsoft to believe it was likely generated by a language model, given its structured patterns and generic comments that paralleled AI conventions.
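Microsoft has not published the exact transformation, but the idea can be sketched with a toy analogue: benign business terms act as symbols in a lookup table, so an innocuous-looking word sequence decodes to a hidden string. The vocabulary, bit width, and message below are all invented for illustration.

```python
# Illustrative analogue (not the actual campaign code): each business
# word indexes a small vocabulary, carrying 3 bits of hidden data.
VOCAB = ["revenue", "operations", "shares", "risk",
         "quarterly", "growth", "assets", "margin"]  # 8 words -> 3 bits

def encode(message: str) -> list[str]:
    """Turn bytes into a stream of business words (3 bits per word)."""
    bits = "".join(f"{b:08b}" for b in message.encode())
    bits += "0" * (-len(bits) % 3)           # pad to a multiple of 3
    return [VOCAB[int(bits[i:i+3], 2)] for i in range(0, len(bits), 3)]

def decode(words: list[str]) -> str:
    """Reverse the mapping: words -> bits -> bytes."""
    bits = "".join(f"{VOCAB.index(w):03b}" for w in words)
    data = bytes(int(bits[i:i+8], 2) for i in range(0, len(bits) - 7, 8))
    return data.decode()

words = encode("hi")
print(words)          # reads like harmless business metadata
print(decode(words))  # -> "hi"
```

To a scanner that only inspects strings, the encoded form is just a list of ordinary financial terms, which is precisely what makes this style of obfuscation hard to catch with signature-based tools.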

Microsoft’s detection methods leveraged behavioral analysis and message context, demonstrating that, despite employing AI, the attack adhered to recognizable patterns associated with human-created threats. The findings suggest that existing security measures remain effective against AI-generated attacks, as they largely replicate the frameworks utilized in traditional cyber threats.
