Microsoft Addresses Serious Vulnerability Exploiting Copilot Responses
On June 16, 2025, researchers at Aim Security revealed a significant security flaw in Microsoft 365 Copilot that could have allowed malicious actors to extract sensitive data with minimal effort. This vulnerability, dubbed "EchoLeak" and designated as CVE-2025-32711, demonstrated a high severity rating of 9.3 on the CVSS scale. Following the discovery, Microsoft promptly issued a patch, asserting there is no evidence the flaw had been exploited in the wild, and confirmed that users need not take additional precautions.
The EchoLeak vulnerability enabled attackers to perform a zero-click prompt injection attack. This method allowed them to send crafted emails that would prompt the Copilot system to disclose highly sensitive contextual information, such as internal documents and private communications. Normally, access to Microsoft’s AI-powered suite is restricted to users within specific organizations, but this vulnerability created an entry point that was alarmingly easy to exploit.
Central to this attack was a type of prompt injection, a technique where malicious actors input instructions into AI models to manipulate their responses. Researchers noted that these malicious emails cleverly disguised themselves as user-directed prompts, successfully bypassing detection mechanisms. Copilot’s operational design includes scanning incoming messages to provide context or summaries prior to user access, thereby allowing attackers to embed harmful prompts without alerting users.
The exploit’s mechanics were simple but effective: attackers included links to their own domains within the email, embedded with query string parameters that sought the most sensitive information stored in Copilot’s context. When processed, Copilot unwittingly transmitted that data back to the attacker’s server. The researchers illustrated this capability with a proof-of-concept, revealing that by querying Copilot for an API key stored in memory, it readily divulged the information, showcasing the severity of the flaw.
While Copilot’s default settings are intended to prevent it from actioning unverified links, Aim Security discovered that using less common reference-style markdown formats circumvented these safeguards. This enabled attackers to embed links seamlessly, effectively evading Copilot’s standard safety protocols. Researchers highlighted this exploit not only as a technological curiosity but as a signal that issues like EchoLeak could represent a new frontier in vulnerabilities associated with large language model (LLM) technology.
Significantly, the research underlines that traditional filtering systems may fail to identify these advanced forms of attack. Furthermore, Microsoft has not disclosed the timeline of their awareness regarding the issue or how it was first detected, heightening concerns about the overall security surrounding their generative AI tools.
For businesses utilizing Microsoft 365, it’s vital to remain vigilant about evolving cybersecurity threats. The EchoLeak incident illustrates that potentially powerful adversary tactics, such as initial access through social engineering and data exfiltration, continue to evolve, highlighting the need for robust response strategies. As organizations incorporate AI-driven technologies into their operations, understanding the implications of such vulnerabilities is critical in safeguarding sensitive information and maintaining trust in these emerging tools.