Artificial Intelligence & Machine Learning,
Next-Generation Technologies & Secure Development
Exploitation of Prompt Injection and HTML Responses Raises Security Concerns

Recently discovered vulnerabilities in GitLab’s DevSecOps platform could allow hackers to exploit its generative AI assistant, leading to potential data leaks and the delivery of harmful content. These security gaps could be manipulated via prompt injection techniques, which enable attackers to alter the model’s output to exfiltrate sensitive source code.
In findings reported by Legit Security, analysts outlined how prompt injection and HTML output rendering could compromise GitLab Duo. Researchers elaborated that these exploits could hijack AI workflows and expose critical internal code. Although GitLab has since patched these vulnerabilities, the risks remain a pressing concern for organizations using the platform.
The GitLab Duo assistant is designed to streamline development processes, offering features such as instant to-do list generation, which significantly reduces the time developers spend sifting through commits. However, the features that enhance productivity could also create avenues for security exploitation if not adequately safeguarded.
Liav Caspi, a co-founder of Legit Security, and researcher Barak Mayraz effectively demonstrated the risks associated with using hidden text, obfuscated Unicode, and misleading HTML tags in commit messages and project documentation. By manipulating the surrounding contextual input that Duo relies on, attackers have the potential to influence its behavior; for example, one commit message was crafted to trick Duo into revealing the contents of a private file when posed with an innocuous question.
GitLab has recently implemented improvements to how Duo processes contextual input, aiming to mitigate these risks. Nevertheless, researchers caution that this incident highlights the vulnerabilities that can arise from typical developer activities when enhanced by AI copilots.
A significant concern also stems from how Duo’s HTML responses were rendered in GitLab’s web interface, exposing the platform to additional threats. Without proper sanitization, these responses included potentially dangerous HTML commands, providing avenues for attacks such as credential harvesting and clickjacking.
The integration of GitLab Duo within development workflows enhances operational efficiency, offering AI-driven assistance for coding, issue summarization, and merging requests. However, this deep integration amplifies the attack surface, requiring organizations to regard AI tools as integral components of their application security perimeter. The implications are clear: as AI assistants increasingly become part of application ecosystems, they must be scrutinized alongside traditional security measures.
GitLab has responded to these vulnerabilities by upgrading its output rendering mechanisms to better sanitize HTML elements and improve handling of AI-generated responses. The company reassured users that during research, no customer data was compromised, and no exploitation attempts were detected in the wild.
The nuances of these vulnerabilities align with the MITRE ATT&CK framework, particularly in tactics such as initial access and exploitation of trust relationships. Organizations are urged to proactively reassess their security architectures to address these emerging threats that could be leveraged through AI-driven tools. As the landscape of cyber threats evolves, a vigilant approach is essential to protect against innovative attacks.