Red Teaming AI: Addressing Emerging Cybersecurity Challenges


Ken Huang of DistributedApps.ai Discusses Agentic AI Risks and Threat Modeling


Ken Huang, Chief AI Officer, DistributedApps.ai

In an era where AI agents are increasingly autonomous and equipped with tools that let them act on their environments, businesses must adapt their threat modeling strategies. Ken Huang, Chief AI Officer at DistributedApps.ai, emphasizes the importance of adopting a mix of threat modeling approaches to account for the inherent unpredictability of AI technologies.

Huang highlighted the critical need for ongoing red teaming of AI systems, raising alarms about the practice known as "vibe coding." This fast-paced, AI-driven development style can inadvertently produce insecure code. He asserted that the security frameworks currently in use must evolve to meet the challenges posed by autonomous AI applications.

“Conventional trust boundaries no longer apply as AI agents engage across diverse platforms. A flexible security paradigm is imperative,” stated Huang, reflecting on the challenges of maintaining security in a rapidly changing technological landscape.

During a video interview with Information Security Media Group at the RSAC Conference 2025, Huang explored several significant issues at the intersection of AI and cybersecurity. He discussed how agentic AI expands the attack surface, why traditional identity-based controls fall short, and the risks stemming from vibe coding practices.

As a leading figure in AI security, Huang steers initiatives at DistributedApps.ai focused on the safe deployment of generative AI. His credentials include authorship of eight books on AI and Web3, along with co-chairing key working groups at the Cloud Security Alliance dedicated to AI organizational responsibility and control frameworks.
