Artificial Intelligence & Machine Learning,
Next-Generation Technologies & Secure Development
Research Uncovers Vulnerability Allowing Data Exfiltration via Hidden Images

A recently resolved vulnerability within GitHub Copilot Chat has been identified, which could have permitted threat actors to extract source code and sensitive information through embedded covert prompts exploiting the AI assistant’s responses. This flaw utilized GitHub’s image proxying system, allowing illicit data leaks through images.
This vulnerability was reported by Omer Mayraz, a researcher from Legit Security, combining a remote prompt injection technique with an unconventional bypass of GitHub’s content security measures. By leveraging Camo, GitHub’s image proxy service, the researcher effectively siphoned private code from repositories.
GitHub Copilot Chat operates as an AI assistant designed to enhance developers’ workflows by answering queries, clarifying code, and suggesting implementations. The vulnerability in question stemmed from inadequate isolation and validation between hidden comments in pull requests and external content, alongside an exploitable manner in which GitHub’s image proxy managed external images. This allowed a crafted signed image link to be used for covert data extraction.
The researcher disclosed the issue via HackerOne, prompting GitHub to disable image rendering in Copilot Chat and confirm the vulnerability’s resolution as of August 14.
Copilot Chat’s contextual awareness is integral to its functionality, as it analyzes repository files and pull requests to generate relevant responses. By embedding prompts within hidden comments, the researcher successfully manipulated Copilot’s output to affect other developers visiting the same pull request, illustrating how hidden commands could alter AI behavior based on shared context.
The implications of this vulnerability extend beyond GitHub, as similar exploitation techniques may be employed against other platforms with comparable AI systems. The capacity to embed hidden prompts raises concerns about data leakage through seemingly innocuous interactions. Although the attack primarily focused on extracting small segments of sensitive information, such as security tokens, its potential for misuse in larger contexts remains concerning.
As a preventive measure, officials recommend that developers regularly review sensitive data sharing within their workflows and utilize appropriate configurations to limit access to critical files. While such actions may mitigate risks, they highlight the ongoing challenges associated with prompt injection vulnerabilities, emphasizing the need for vigilant network monitoring to detect unauthorized data exfiltration.