The pace of artificial intelligence (AI) innovation is accelerating at an unprecedented rate. Major technology companies such as Salesforce, Microsoft, and Google are racing to make agentic AI accessible to a broader audience. Recent market research indicates that 82% of organizations plan to implement AI agents within the next three years.
However, the autonomous capabilities of these AI agents pose significant cybersecurity risks. Security teams face what could be termed their ‘Great AI Awakening’ as they come to understand how these agents can be exploited for malicious purposes. If such breaches occur, the pace of AI advancement could slow drastically.
Understanding the Cyber Risks Associated with AI Agents
AI agents occupy a complicated space between human and machine behavior. Unlike traditional software, these agents act unpredictably and cannot be easily categorized by standard identity and access management systems. This ambiguity increases their susceptibility to diverse cyber threats, including identity-based attacks and malware incursions.
Agentic AI operates in a non-deterministic manner, making it similar to humans in its vulnerability to manipulation. For instance, a group of cybersecurity researchers successfully deceived a well-known AI assistant into revealing confidential information by persuading it to adopt a ‘data pirate’ persona. If an AI assistant can be led astray in this manner, it raises concerns about its capacity to differentiate between legitimate communications and phishing attempts.
Identity attacks, increasingly recognized as a leading and fastest-growing category of cyber threats, exploit the human element far more easily than traditional software vulnerabilities. In fact, 68% of data breaches in 2024 stemmed from human error. The emergence of agentic AI amplifies this risk, transforming formerly secure software into potential targets.
Moreover, AI agents have been engineered to integrate deeply into organizational workflows, possessing greater autonomy than conventional software. This autonomy allows them to interact with vital organizational systems, fundamentally positioning AI agents as a new class of privileged user.
Exploring Practical Applications of AI Agents
In software development scenarios—where enterprises like Microsoft and Salesforce are pioneering the use of AI agents—these tools collaborate much like a corporate team. Each agent plays a specialized role, working in concert to efficiently accomplish complex tasks.
For example, one agent might focus on design, outlining a high-level strategy for resource allocation and module development on a cloud platform. Another agent would then break those steps into actionable tasks, while a third could be responsible for writing the actual code. Subsequently, a reviewing agent would evaluate this code for quality and suggest modifications, culminating in an integration agent that compiles all components, tests the solution, and approves it for deployment.
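The hand-off between these roles can be sketched in a few lines. This is a hypothetical illustration of the pipeline described above, not any vendor's actual implementation; the agent names and the string-based artifacts are assumptions made for clarity.

```python
from dataclasses import dataclass


@dataclass
class Artifact:
    """Output of one stage, passed to the next agent in the chain."""
    stage: str
    content: str


def design_agent(requirement: str) -> Artifact:
    # Outlines a high-level plan for resource allocation and modules.
    return Artifact("design", f"plan for: {requirement}")


def task_agent(design: Artifact) -> Artifact:
    # Breaks the plan into actionable tasks.
    return Artifact("tasks", f"tasks from ({design.content})")


def coding_agent(tasks: Artifact) -> Artifact:
    # Writes the actual code for each task.
    return Artifact("code", f"code implementing ({tasks.content})")


def review_agent(code: Artifact) -> Artifact:
    # Evaluates the code for quality and suggests modifications.
    return Artifact("review", f"approved ({code.content})")


def integration_agent(review: Artifact) -> Artifact:
    # Compiles components, tests the solution, signs off for deployment.
    return Artifact("deploy", f"deployed after ({review.content})")


def run_pipeline(requirement: str) -> Artifact:
    # Each agent's output feeds the next, mirroring a corporate team.
    result = design_agent(requirement)
    for agent in (task_agent, coding_agent, review_agent, integration_agent):
        result = agent(result)
    return result
```

Even in this toy form, the security implication is visible: every stage trusts the artifact handed to it by the previous agent, so compromising any one agent taints everything downstream.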
This collaborative dynamic underscores the considerable influence that AI agents can exert on critical processes. Given their need for access to sensitive resources such as code repositories, cloud infrastructure, and development environments, any compromise to these agents could lead to significant data exposure. The common practice of embedding credentials within code could enable AI agents to serve as gateways for unauthorized data access.
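The embedded-credential risk mentioned above is easy to demonstrate. The sketch below contrasts the anti-pattern with a runtime lookup; the key value and the `EXAMPLE_API_KEY` variable name are hypothetical placeholders, not real credentials or a real service's convention.

```python
import os

# Anti-pattern: a credential embedded directly in source code. Any AI agent
# with access to the repository can read it, turning the agent into a
# gateway for unauthorized access to whatever the key protects.
EMBEDDED_API_KEY = "sk-hypothetical-example-key"  # do NOT do this


def get_api_key() -> str:
    # Safer alternative: resolve the secret at runtime from the environment
    # (or a dedicated secrets manager), so it never lives in the repository
    # that coding and review agents are reading and writing.
    key = os.environ.get("EXAMPLE_API_KEY")  # hypothetical variable name
    if key is None:
        raise RuntimeError("EXAMPLE_API_KEY is not set")
    return key
```

Moving secrets out of code does not make a compromised agent harmless, but it narrows what the agent can leak from repository access alone.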
Reconceptualizing the Identity Management of AI Agents
Organizations must avoid relegating AI agents to the status of mere software tools or isolating them within separate identity systems. Adopting a holistic identity management strategy that encompasses AI agents within the larger ecosystem—alongside servers, laptops, and human personnel—will create a centralized inventory that serves as the definitive resource for identity, access controls, policies, and real-time visibility.
By extending the same security principles applied to human identities to AI agents, organizations can streamline operations, reduce complexity, and enhance oversight across their entire infrastructure.
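One way to picture such a centralized inventory is as a single registry in which an AI agent is just another principal alongside humans and machines. This is an illustrative sketch under that assumption; the class and field names are invented for the example.

```python
from dataclasses import dataclass, field


@dataclass
class Identity:
    """One record per principal, whatever its kind."""
    name: str
    kind: str  # e.g. "human", "server", "laptop", or "ai_agent"
    entitlements: set = field(default_factory=set)


class IdentityInventory:
    """Centralized inventory: the definitive source for identity and access."""

    def __init__(self) -> None:
        self._identities: dict[str, Identity] = {}

    def register(self, identity: Identity) -> None:
        # Agents enter the same inventory as every other identity,
        # rather than being isolated in a separate system.
        self._identities[identity.name] = identity

    def can_access(self, name: str, resource: str) -> bool:
        # The same policy check applies to agents and humans alike.
        ident = self._identities.get(name)
        return ident is not None and resource in ident.entitlements
```

The point of the sketch is the symmetry: because `can_access` does not branch on `kind`, the oversight and policy machinery built for human identities extends to agents with no parallel system to maintain.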
Prioritizing Security amid Innovation
In technology spheres, there is often an allure surrounding new innovations like AI agents. However, it is crucial for security teams—the custodians of organizational safety—to proactively assess the risks accompanying these advancements. Neglecting security considerations during the adoption of emerging technologies can lead to dire consequences, potentially derailing progress and leaving technological advancements to languish.
Just a single, significant cyberattack can radically impede the growth of new technologies, underscoring the necessity for a monumental shift in how identity is managed for AI agents. Unless these changes are made, security teams may spend 2025 retrofitting existing security models to address the inherent vulnerabilities of AI agents, stalling innovation.