One Click Initiated a Hidden, Multi-Phase Attack on Copilot

Microsoft recently addressed a significant vulnerability within its Copilot AI assistant, which permitted cybercriminals to extract sensitive user information with a single click on a seemingly legitimate URL. The breach was discovered by ethical hackers from the security firm Varonis, who demonstrated that their multi-layered attack could successfully illicit personal data, including user names, locations, and specific event details from the Copilot chat history.

Notably, the exploit persisted even after the user had closed the Copilot chat, requiring no further interaction once the initial click on the compromised link occurred. This method effectively circumvented enterprise endpoint security measures, evading detection by traditional endpoint protection solutions. The implications of this vulnerability underscore a critical concern for organizations relying on AI-assisted tools in sensitive environments.

Dolev Taler, a security researcher at Varonis, explained that the attack was swift and automated. “Our approach involved delivering a link embedded with a malicious prompt. Once clicked, the task executed immediately, regardless of whether the user closed the Corpilot chat,” Taler detailed.

The attack was evident in the structure of the initial URL, which was managed by Varonis. It included a complex series of instructions formatted as a query parameter, a common method utilized by Copilot and other large language models (LLMs) to process user inputs efficiently. When the user clicked on the URL, the prompt forced Copilot to incorporate sensitive information into web requests.

The embedded prompt operated on a deceptive premise, urging the user to solve a riddle while inadvertently transmitting a secret key. This unfortunate interaction led to the extraction of a user secret, which was dispatched to a Varonis-controlled server. The attack’s reach did not conclude with the disclosure of simple credentials; the disguised .jpg file subsequently conveyed further instructions, aiming to gather even more specific user data such as names and geographical locations, which were relayed through the URLs accessed by Copilot.

The ramifications of this vulnerability extend beyond individual users, affecting organizations reliant on Microsoft’s AI solutions. This incident highlights potential tactics employed by malicious actors, such as those outlined in the MITRE ATT&CK Framework, including initial access techniques and the exploitation of user trust in established interfaces. As businesses become increasingly dependent on advanced technologies, understanding these vulnerabilities and their implications is paramount for enhancing cybersecurity resilience.

Source