AI in Zero Trust: Promises, Potential, and Unseen Challenges

CISOs Pursue True Value Amid Vendor Promotions of New AI Solutions

Artificial intelligence (AI) has permeated nearly every aspect of cybersecurity, and nowhere more visibly than in vendor messaging. From threat detection to identity management, AI is now woven into product offerings, including those built around zero trust frameworks. As the conversation shifts from generative to agentic AI, the technology shows clear potential to ease zero trust implementation challenges. Realizing that potential, however, will depend on business context, high-quality data, and adequate human oversight.

Amid the current buzz around AI, security professionals remain somewhat skeptical. While they acknowledge that AI presents a “basket of opportunities,” they also point to “vendor blind spots” and areas that need further improvement.

The zero trust framework poses implementation challenges. Common obstacles include managing granular access controls, enforcing least privilege, and implementing microsegmentation.
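
To make these obstacles concrete, a least-privilege decision in a zero trust architecture typically comes down to evaluating every request against explicit entitlements, device posture, and the microsegment it originates from. The Python sketch below is purely illustrative; the AccessRequest fields, entitlement tuples, and segment names are hypothetical stand-ins for what an identity provider and policy engine would supply.

```python
from dataclasses import dataclass

# Hypothetical request shape for illustration; real zero trust deployments
# would pull these signals from an IdP, endpoint management, and a policy engine.
@dataclass(frozen=True)
class AccessRequest:
    subject: str          # authenticated identity
    resource: str         # e.g. "payroll-db"
    action: str           # e.g. "read"
    device_managed: bool  # posture signal from endpoint management
    network_segment: str  # microsegment the request originates from

# Explicit, least-privilege entitlements: (subject, resource, action).
ENTITLEMENTS = {
    ("alice", "payroll-db", "read"),
}

# Microsegmentation rule: each resource is reachable only from its own segment.
RESOURCE_SEGMENTS = {"payroll-db": "finance-segment"}

def authorize(req: AccessRequest) -> bool:
    """Deny by default; allow only when every explicit condition holds."""
    if (req.subject, req.resource, req.action) not in ENTITLEMENTS:
        return False
    if not req.device_managed:
        return False
    return RESOURCE_SEGMENTS.get(req.resource) == req.network_segment

print(authorize(AccessRequest("alice", "payroll-db", "read", True, "finance-segment")))   # True
print(authorize(AccessRequest("alice", "payroll-db", "write", True, "finance-segment")))  # False
```

The deny-by-default structure is the design point: AI assistance, such as proposing new entitlements or segments, would change the inputs to authorize rather than the decision logic itself.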

Initial Successes

AI tools are demonstrating their utility in the early phases of zero trust, particularly in assessment and policy formulation. Rob LaMagna-Reiter, chief information security officer at WoodmenLife, emphasizes the foundational importance of data: “It all goes back to understanding your current posture and identifying where AI can genuinely contribute. Quality data is essential.” The early focus is on empowering teams to make rapid, data-driven decisions while keeping business operations running smoothly. Establishing a clear understanding of the protect surface, however, is crucial before integrating AI into enforcement mechanisms.

In addition, AI is being used to detect abnormal behavior patterns across individuals and systems during the early stages of zero trust implementation. Billy Norwood, CISO at FFF Enterprises, notes that AI assists in identifying anomalies. Currently, the most significant advantage AI offers in zero trust is accelerating manual processes, and agentic AI is proving effective at tasks such as enforcing identity lifecycle policies and reducing dwell time.
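
As a rough illustration of the kind of anomaly spotting Norwood describes, unsupervised outlier detection over per-session behavior features is a common approach. The sketch below uses scikit-learn’s IsolationForest; the feature set, sample values, and contamination setting are assumptions for demonstration, not a description of any vendor’s product.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-session features: [hour of day, MB downloaded, distinct hosts touched].
baseline_sessions = np.array([
    [9, 120, 3], [10, 90, 2], [14, 200, 4], [11, 150, 3], [16, 80, 2],
    [9, 110, 3], [13, 170, 4], [15, 95, 2], [10, 130, 3], [12, 140, 3],
])

# Fit on historical "normal" behavior; contamination is a tunable assumption.
model = IsolationForest(contamination=0.05, random_state=0).fit(baseline_sessions)

new_sessions = np.array([
    [10, 125, 3],   # looks routine
    [3, 4800, 40],  # off-hours bulk transfer touching many hosts
])
for features, score in zip(new_sessions, model.decision_function(new_sessions)):
    flag = "ANOMALOUS" if score < 0 else "normal"   # negative score means outlier
    print(features, flag)
```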

AI is also enhancing access entitlement reviews, helping organizations establish baselines and segment networks intelligently. Still, Norwood cautions that while there has been progress, AI in zero trust has a long way to go. He rates current capabilities at “4 out of 10,” saying AI today mostly simplifies data processing rather than contributing meaningfully to decision-making.
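
One way such entitlement-review assistance is often approximated is peer-group analysis: flag grants that are rare within a user’s role cohort so reviewers can focus on them. The data, names, and rarity threshold below are hypothetical.

```python
from collections import Counter

# Hypothetical role -> user -> entitlements data; a real review would pull
# this from an IGA or identity provider export.
entitlements = {
    "finance": {
        "alice": {"erp-read", "erp-post", "expense-approve"},
        "bob":   {"erp-read", "erp-post", "expense-approve"},
        "carol": {"erp-read", "erp-post", "prod-db-admin"},  # unusual grant
    },
}

RARITY_THRESHOLD = 0.34  # flag grants held by roughly a third of peers or fewer

def review(role: str) -> dict[str, set[str]]:
    """Flag entitlements that are rare within a user's peer group."""
    users = entitlements[role]
    counts = Counter(e for grants in users.values() for e in grants)
    peers = len(users)
    return {
        user: {e for e in grants if counts[e] / peers <= RARITY_THRESHOLD}
        for user, grants in users.items()
    }

print(review("finance"))  # carol's "prod-db-admin" is surfaced for reviewers
```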

Experts point to AI’s potential not only in advancing processes but also in alleviating fatigue among security teams and end-users. Bala Ramanan, director of risk and compliance at Microland, asserts that “zero trust is an ongoing journey, not a one-off project.” The ultimate objective is to cultivate an AI-driven zero trust environment that can autonomously mitigate breaches and enhance security posture.

Amruta Gawde, director of cybersecurity at GE Aerospace, shares a similar perspective, highlighting AI’s role in managing scale and improving the user experience in identity and access management. Using AI for just-in-time access can reduce the number of standing accounts users need while maintaining robust security.
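
A minimal sketch of the just-in-time pattern Gawde describes: privileged access is granted as a time-boxed elevation instead of a standing account. The in-memory store and function names below are assumptions; a real deployment would sit behind an identity provider or PAM tool with full audit logging.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical in-memory grant store, keyed by (user, role) with an expiry time.
_grants: dict[tuple[str, str], datetime] = {}

def grant_just_in_time(user: str, role: str, minutes: int = 30) -> None:
    """Grant a privileged role that expires automatically."""
    _grants[(user, role)] = datetime.now(timezone.utc) + timedelta(minutes=minutes)

def has_access(user: str, role: str) -> bool:
    """Access exists only while the time-boxed grant is still valid."""
    expiry = _grants.get((user, role))
    return expiry is not None and datetime.now(timezone.utc) < expiry

grant_just_in_time("alice", "db-admin", minutes=15)
print(has_access("alice", "db-admin"))  # True during the window
print(has_access("alice", "payroll"))   # False: no standing entitlement
```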

Identifying Gaps

Although many vendors claim AI integration in their solutions, notable gaps remain in enterprise security coverage, particularly around unmanaged devices and external contractors. Norwood notes that many AI solutions struggle to provide visibility into unmanaged contractor devices and rapidly changing personnel.

Vendors could strengthen their offerings by providing policy-based profiling in these contexts and by addressing the lack of granular insight into user activities and permissions. “Support for visibility into transient users and enhanced monitoring of SaaS environments is essential for realizing AI’s full potential in zero trust,” he adds. Inconsistent quality in AI model responses compounds the challenge.
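
As a sketch of what policy-based profiling for these contexts could look like, the snippet below maps device and user context to a coarse access profile. The signal fields and profile names are hypothetical; in practice they would come from MDM, EDR, and the identity provider rather than hard-coded values.

```python
from dataclasses import dataclass

# Hypothetical device/user signals for illustration only.
@dataclass
class DeviceContext:
    user_type: str      # "employee" or "contractor"
    mdm_enrolled: bool  # is the device under endpoint management?
    os_patched: bool    # basic hygiene signal

def assign_profile(ctx: DeviceContext) -> str:
    """Map device and user context to a coarse, policy-based access profile."""
    if ctx.user_type == "contractor" and not ctx.mdm_enrolled:
        return "browser-isolated-saas-only"  # no direct network access
    if not ctx.os_patched:
        return "quarantine-remediate"
    return "standard-least-privilege"

print(assign_profile(DeviceContext("contractor", False, True)))  # browser-isolated-saas-only
```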

Beyond these limitations, there is a need to demystify vendor claims. Ramanan says that “high promises and actual outcomes being misaligned” is a prevalent issue, as many solutions remain confined to point applications. Point solutions may provide some assistance, but they are unlikely to significantly advance AI and zero trust initiatives. Many vendors still rely on static risk models and need to shift toward identifying anomalies and responding appropriately. “Dynamic risk scenarios demand comprehensive AI integration in our operations,” he argues.
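
To illustrate the distinction Ramanan draws, a static model scores access once per role, while a dynamic model folds live signals into the score so the decision can change mid-session. The weights, thresholds, and signal names below are illustrative assumptions, not a real scoring scheme.

```python
# Hypothetical signal weights; a production system would tune or learn these
# and feed the score back into the policy engine continuously.
SIGNAL_WEIGHTS = {
    "impossible_travel": 0.5,
    "new_device": 0.2,
    "off_hours": 0.1,
    "anomalous_volume": 0.3,
}

def dynamic_risk(base_role_risk: float, signals: dict[str, bool]) -> float:
    """A static model stops at base_role_risk; a dynamic score moves with context."""
    score = base_role_risk + sum(w for s, w in SIGNAL_WEIGHTS.items() if signals.get(s))
    return min(score, 1.0)

def decision(score: float) -> str:
    if score >= 0.8:
        return "block"
    if score >= 0.5:
        return "step-up-mfa"
    return "allow"

session = {"impossible_travel": True, "new_device": True, "off_hours": False}
score = dynamic_risk(base_role_risk=0.2, signals=session)
print(score, decision(score))  # same user and role, but context changes the outcome
```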

As the regulatory landscape becomes clearer and vendor capabilities mature, AI’s capacity to enhance both security and the user experience is set to expand. Ramanan remarks, “AI is both adversary and ally. The battlefields may evolve, but the nature of the battle remains unchanged.”
