First Zero-Click AI Attack Hit Microsoft Copilot, Stole Corporate Data

EchoLeak: Microsoft Copilot's unpatched zero-click vulnerability exposed corporate data for months, revealing AI's unprecedented security challenges.

June 13, 2025

First Zero-Click AI Attack Hit Microsoft Copilot, Stole Corporate Data
A critical vulnerability in Microsoft's Copilot AI, which went unpatched for months, could have allowed attackers to steal sensitive corporate data using a specially crafted email, requiring no interaction from the user. The flaw, dubbed "EchoLeak" by the cybersecurity firm Aim Security which discovered it, highlights the burgeoning security risks associated with integrating powerful artificial intelligence into enterprise software. The vulnerability, now identified as CVE-2025-32711, was classified as critical and represented what researchers called the first-known "zero-click" attack on an AI agent, a type of AI that can act autonomously.[1][2][3] The vulnerability existed within Microsoft 365 Copilot, an AI assistant designed to integrate with applications like Outlook, Word, and Teams to enhance productivity by accessing and managing user data across these platforms.[4]
The discovery and remediation of EchoLeak was a lengthy process, underscoring the novel challenges AI systems pose to traditional cybersecurity protocols. Aim Security first reported the vulnerability to Microsoft in January 2025.[5][6] However, the complete fix was not implemented until May 2025, a nearly five-month period during which organizations using Copilot with its default configuration were potentially at risk.[6][7] According to Aim Security researchers, the delay was partly due to the novelty of the attack, which required educating the relevant Microsoft teams about the vulnerability and its potential mitigations. An initial fix attempted in April proved insufficient, as further security issues were discovered, prompting a more comprehensive solution.[2] Microsoft has since confirmed that the issue is fully resolved through server-side patches, meaning customers do not need to take any action.[4][1] The company also stated that it was not aware of any customers being impacted or the vulnerability being exploited in the wild.[1][2]
The attack method devised by Aim Security was sophisticated, chaining together several vulnerabilities to bypass multiple layers of Microsoft's security.[8] The attack began with an email containing hidden instructions for Copilot. These instructions were phrased as if directed at the human recipient, a tactic designed to evade Microsoft's cross-prompt injection attack (XPIA) classifiers, which are intended to detect and block malicious prompts.[9][10] The core of the vulnerability was a novel exploitation technique termed "LLM Scope Violation," where the AI model is tricked into treating untrusted, external input as a trusted internal command.[8][11] This allowed the attacker's instructions, once processed by Copilot, to access and exfiltrate data from the user's Microsoft 365 environment. This could include the entire chat history with Copilot, documents from OneDrive, SharePoint content, and Teams messages.[1][12] To get the data out, the researchers bypassed mechanisms that redact external links and content security policies (CSP) by using lesser-known Markdown formatting and leveraging trusted domains like Microsoft Teams to relay the stolen information to an attacker-controlled server.[11][9][10]
The EchoLeak vulnerability serves as a stark warning for the entire AI industry about the inherent risks of integrating large language models (LLMs) into complex enterprise ecosystems.[13] The very feature that makes AI assistants like Copilot powerful—their deep integration with and access to vast amounts of user data—also creates a significant attack surface.[11] Experts point out that because these AI agents are empowered to act on a user's behalf, scanning emails and accessing files, they become a prime target for attackers seeking to exploit that privileged access.[1] The EchoLeak attack demonstrates that traditional security measures may not be sufficient to protect against new types of AI-specific vulnerabilities like prompt injection and data exfiltration.[7][14] Researchers argue that the fundamental design of some AI agents, which mix trusted and untrusted data in the same "thought process," is a core design flaw that needs to be addressed.[2] This may require a fundamental redesign of how AI agents are built to ensure a clearer separation between trusted instructions and untrusted external data.[2]
In conclusion, the months-long struggle to patch the EchoLeak flaw in Microsoft 365 Copilot has cast a spotlight on the significant security challenges accompanying the rapid adoption of generative AI. While no malicious exploitation has been reported, the vulnerability's existence demonstrated a practical method for turning a trusted AI assistant into a tool for data theft without any user interaction.[8][1] The incident has forced a critical conversation about the security posture of AI agents and the need for new, robust defenses to protect against novel threats like LLM scope violations and sophisticated prompt injection attacks.[8][10] For businesses eagerly integrating AI to boost productivity, EchoLeak is a critical case study, emphasizing the urgent need for comprehensive AI governance, rigorous security assessments, and a deeper understanding of the unique risks these powerful new technologies introduce.[5][6] The episode makes it clear that as AI becomes more autonomous and integrated, securing these systems will require continuous innovation and a paradigm shift in how the industry approaches security.[13][15]

Research Queries Used
Microsoft Copilot "EchoLeak" vulnerability
Aim Security EchoLeak vulnerability details
Microsoft response to EchoLeak security flaw
timeline of EchoLeak vulnerability disclosure
impact of EchoLeak on Microsoft 365 Copilot users
AI security vulnerabilities in enterprise software
Share this article