Artificial Intelligence & Machine Learning,
Next-Generation Technologies & Secure Development
Asana’s MCP Server Temporarily Halted Amid Data Leakage Concerns

Asana has fixed a vulnerability in its artificial intelligence integration that risked exposing data across customer organizations. The company paused its Model Context Protocol (MCP) server for nearly two weeks to implement security measures.
The incident stems from a flaw discovered in Asana’s implementation of MCP, an open protocol designed to let AI systems interact with external data sources such as messaging platforms and enterprise applications. Asana’s MCP server was taken offline from June 5 to June 17 while the issue was addressed.
Introduced by AI developer Anthropic in November 2024, MCP bridges communication between language models and structured enterprise data. Asana launched its MCP integration on May 1, enabling users to retrieve project data through natural language queries and third-party AI applications.
The vulnerability may have exposed sensitive information from some Asana accounts to other MCP users, the company said in a post on the social media platform X. Asana did not disclose how many users were affected or confirm whether any data was maliciously accessed.
Asana disabled the MCP server promptly after the flaw was discovered on June 4. Users were informed that the feature has since been restored, though anyone who had enabled the integration must re-establish their connection.
The company said impacted organizations had been contacted directly with follow-up information. As part of its remediation, Asana reset all connections to the MCP server.
Cybersecurity firm UpGuard said strict tenant isolation and the principle of least privilege are critical to preventing similar security issues. It also recommended that organizations log all Large Language Model (LLM)-generated queries to support auditing and any future investigations.
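UpGuard’s two recommendations can be illustrated with a minimal sketch. The function names, log fields, and `execute` placeholder below are hypothetical, not part of Asana’s or UpGuard’s tooling: a wrapper that refuses LLM-generated queries crossing tenant boundaries and writes an audit record before any query runs.

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical audit logger for LLM-generated queries (illustrative only).
audit_log = logging.getLogger("llm_audit")
audit_log.setLevel(logging.INFO)
audit_log.addHandler(logging.StreamHandler())


def execute(query: str) -> str:
    """Stand-in for the real data-access layer (hypothetical)."""
    return f"results for: {query}"


def run_llm_query(caller_tenant: str, target_tenant: str, query: str) -> str:
    """Log an LLM-generated query and enforce tenant isolation before running it."""
    # Tenant isolation: reject queries that reach outside the caller's tenant.
    if target_tenant != caller_tenant:
        audit_log.warning(json.dumps({
            "event": "rejected_cross_tenant_query",
            "tenant": caller_tenant,
            "target": target_tenant,
            "ts": datetime.now(timezone.utc).isoformat(),
        }))
        raise PermissionError("cross-tenant access denied")

    # Audit trail: record every query before it executes, per UpGuard's advice.
    audit_log.info(json.dumps({
        "event": "llm_query",
        "tenant": caller_tenant,
        "query": query,
        "ts": datetime.now(timezone.utc).isoformat(),
    }))
    return execute(query)
```

The key design point is that the check and the log entry both happen before the query touches any data, so the audit trail captures rejected attempts as well as successful ones.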