Supply Chains, AI, and the Cloud: The Major Failures (and One Triumph) of 2025

In recent months, a series of sophisticated cyberattacks leveraging artificial intelligence (AI) have raised alarms in the technology sector. One particularly concerning incident involved a prompt injection attack against GitLab’s Duo chatbot, wherein malicious code was embedded within a legitimate code package. This exploit not only blurred the lines between safe and unsafe coding practices but also led to the exfiltration of sensitive user data.

Another striking incident targeted the Gemini CLI coding tool, allowing attackers to remotely execute harmful commands on the systems of developers utilizing the AI tool. This capability included severe actions, such as wiping hard drives, essentially compromising both the integrity of the software environment and the safety of the developers’ machines.

The employment of AI as a tool for maintaining stealth and efficiency in cyberattacks is also noteworthy. Earlier this month, two individuals were indicted for allegedly stealing sensitive government data. Prosecutors reported that one of the suspects sought guidance from an AI tool on how to erase system logs following the deletion of databases. He further inquired about clearing event and application logs on Microsoft Windows Server 2012. Although these attempts were made in an effort to evade detection, investigators managed to trace the defendants’ digital footprints.

In May, a significant breach occurred involving an employee of The Walt Disney Company, who was misled into executing a malicious version of a popular open-source AI image-generation tool. This incident highlights the increasing risks associated with software supply chain attacks, especially as AI tools become more mainstream.

Compounding these security concerns, Google researchers recently alerted users of the Salesloft Drift AI chat agent about compromised security tokens tied to the platform. Unknown attackers exploited these tokens to access Google Workspace accounts, leading to unauthorized entry into Salesforce accounts. The consequences were severe, with critical data, including credentials that might facilitate further breaches, being targeted.

Several incidents have also shed light on the vulnerabilities associated with Large Language Models (LLMs). Notably, CoPilot, a code completion tool, inadvertently exposed the contents of over 20,000 private GitHub repositories from major corporations such as Google, Intel, and Microsoft. These repositories, initially indexed by Bing, remained accessible even after their removal from search results, underscoring a persistent risk for organizations relying on AI-driven technologies.

The methodologies employed in these attacks align with various tactics identified in the MITRE ATT&CK framework. For the GitLab and Gemini incidents, initial access techniques and privilege escalation tactics were likely used to execute commands and extract sensitive data. The actions taken by the defendants in the government data breach indicate a form of persistence through the concealment of digital footprints, while the incident at Disney illustrates the risk of social engineering in exploiting human trust in technology.

As these attacks illustrate, business owners must remain vigilant against evolving cyber threats, particularly those that exploit advancements in AI technology. Understanding the potential techniques and tactics from frameworks like MITRE ATT&CK is crucial for developing effective strategies to mitigate these emerging risks. The pressing need for robust cybersecurity practices cannot be overstated amidst an environment where both threats and technologies continue to advance rapidly.

Source