Artificial Intelligence and Cybersecurity: Copilot Vulnerability Exposed
By Pooja Tikekar
August 21, 2025
In a recent development, Microsoft has discreetly addressed a vulnerability in its Copilot AI, which allowed users to manipulate access logs concerning corporate files. As the company intensifies its integration of this large language model into its Office suite, concerns surrounding data security have intensified, especially as new methods of attack emerge.
Zack Korman, Chief Technology Officer at cybersecurity firm Pistachio, elucidated this flaw in his blog, highlighting how users could create conditions that circumvented essential audit logging features. He revealed that Copilot could be instructed not to log requests made for summarizing documents, raising alarm bells about potential security breaches. “Imagine an employee downloading sensitive files before resigning to launch a competing firm; the absence of relevant audit records could be catastrophic,” Korman argued.
Microsoft promotes Copilot as compliant with various regulatory requirements that mandate activity logging. Under normal circumstances, Copilot retains logs of user prompts and document accesses for 180 days, provided users subscribe to the relevant auditing tier. However, Korman illustrated how a simple request to avoid linking a referenced document could result in an empty audit log, rendering it ineffective in tracking potentially malicious or unauthorized access.
The risk posed to organizations employing Copilot is particularly concerning. Korman noted that any firm utilizing the AI prior to August 18 might find its audit logs incomplete, amplifying the risk of undetected insider threats. This sentiment was echoed by Michael Bargury, the CTO of Zenity, during the Black Hat 2024 conference. He pointed out that attackers could exploit prompt injection tactics to gain control of Copilot, thereby searching files and manipulating outputs under unsuspecting users’ identities.
Although Microsoft resolved the issue on August 17, it has refrained from assigning a Common Vulnerabilities and Exposures (CVE) designation to the flaw. The company acknowledged the findings shared by security researchers but did not provide immediate commentary on the incident. Korman expressed dissatisfaction with Microsoft’s vulnerability reporting process, comparing it to a vague tracking system for security researchers, akin to a “Domino’s pizza tracker.”
The implications of this vulnerability are concerning for the cybersecurity landscape. With the rise of AI models like Copilot, the risk of cyberattacks leveraging prompt injection tactics is significant. This situation aligns with various tactics outlined in the MITRE ATT&CK framework, specifically methodologies associated with initial access and privilege escalation. It raises the urgency for organizations to ensure robust monitoring and audit trails, especially when implementing advanced AI technologies into their operations.
As businesses increasingly integrate artificial intelligence tools, understanding the associated risks and adopting stringent cybersecurity measures is crucial. The Copilot incident serves as a stark reminder of the potential vulnerabilities embedded within cutting-edge technology and the importance of vigilance in data security practices.