GitLab Duo Vulnerability Allowed Attackers to Manipulate AI Responses via Hidden Prompts
May 23, 2025
Artificial Intelligence / Cybersecurity Threats
Cybersecurity researchers have identified a critical indirect prompt injection vulnerability in GitLab’s AI assistant, Duo. This flaw could potentially allow malicious actors to access source code and inject untrusted HTML into the AI’s responses, redirecting users to harmful websites. GitLab Duo, an AI-driven coding assistant launched in June 2023 and built on Anthropic’s Claude models, has been shown to be vulnerable. According to findings from Legit Security, this weakness enables attackers to steal code from private projects, alter code suggestions for other users, and even exfiltrate sensitive undisclosed zero-day vulnerabilities. Prompt injection is a known class of vulnerabilities within AI systems, allowing threat actors to exploit large language models (LLMs) to manipulate user interactions.
Artificial Intelligence / Cybersecurity Threats
GitLab Duo Vulnerability Exposes Users to Potential Code Hijacking and Malware Risks May 23, 2025 | Cybersecurity Insights Cybersecurity experts have recently identified a significant security vulnerability in GitLab’s AI coding assistant, Duo. This flaw involves indirect prompt injection, which could potentially enable malicious actors to access confidential source code…
GitLab Duo Vulnerability Allowed Attackers to Manipulate AI Responses via Hidden Prompts
May 23, 2025
Artificial Intelligence / Cybersecurity Threats
Cybersecurity researchers have identified a critical indirect prompt injection vulnerability in GitLab’s AI assistant, Duo. This flaw could potentially allow malicious actors to access source code and inject untrusted HTML into the AI’s responses, redirecting users to harmful websites. GitLab Duo, an AI-driven coding assistant launched in June 2023 and built on Anthropic’s Claude models, has been shown to be vulnerable. According to findings from Legit Security, this weakness enables attackers to steal code from private projects, alter code suggestions for other users, and even exfiltrate sensitive undisclosed zero-day vulnerabilities. Prompt injection is a known class of vulnerabilities within AI systems, allowing threat actors to exploit large language models (LLMs) to manipulate user interactions.