AI Misconfigurations, Not Cyberattacks, Could Threaten Infrastructure by 2028: Gartner

According to Gartner, a misconfigured artificial intelligence (AI) system has the potential to bring down critical infrastructure across an advanced economy by 2028, a feat that cybercriminals have so far failed to achieve.
This forecast raises significant concerns, particularly regarding the malfunctioning of cyber-physical systems, which integrate sensing, computation, and control technologies to interact with the physical world. These systems encompass various technologies, including operational technology, industrial control systems, and the industrial Internet of Things (IIoT). Errors within AI-driven control systems can escalate beyond digital disruptions, potentially causing real-world damage, widespread service outages, and instability in supply chains.
Wam Voster, a vice president analyst at Gartner, emphasized that future infrastructure failures may stem from human errors, such as an ill-conceived update or a simple coding mistake, rather than from cyberattacks or natural disasters. In this new risk landscape, an AI system could autonomously shut down vital services based on misinterpreted sensor data or unsafe algorithmic actions.
Modern power grids exemplify the magnitude of this risk. These grids increasingly depend on AI to maintain balance between electricity generation and consumption. If a predictive model misreads demand patterns as a sign of system instability, it could lead to unnecessary grid isolation, affecting entire regions.
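The scenario above can be sketched in a few lines. This is a purely illustrative, hypothetical example (all names, values, and thresholds are assumptions, not drawn from any real grid controller): a single misconfigured sensitivity threshold causes an AI balancing component to treat an ordinary demand peak as instability and recommend isolating a region.

```python
# Hypothetical sketch: a grid controller compares observed demand against a
# model forecast and flags the region for isolation when the deviation
# exceeds a configured threshold. All names and numbers are illustrative.

DEMAND_FORECAST_MW = 1000.0   # the model's predicted regional demand
INSTABILITY_THRESHOLD = 0.05  # misconfigured: 5% deviation treated as instability
                              # (suppose the intended value was 0.30, i.e. 30%)

def should_isolate(observed_mw: float,
                   forecast_mw: float = DEMAND_FORECAST_MW,
                   threshold: float = INSTABILITY_THRESHOLD) -> bool:
    """Flag the region for isolation when observed demand deviates
    from the forecast by more than the configured threshold."""
    deviation = abs(observed_mw - forecast_mw) / forecast_mw
    return deviation > threshold

# An ordinary evening peak, 8% above forecast:
print(should_isolate(1080.0))                  # True  -> unnecessary isolation
print(should_isolate(1080.0, threshold=0.30))  # False -> intended behavior
```

Note that nothing here is malicious or even buggy in the code path itself; the failure is entirely in the configuration value, which is the class of error Gartner's forecast highlights.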
Voster pointed out that AI systems often operate as “black boxes,” complicating predictability even for developers. Small modifications in configurations can lead to unexpected emergent behaviors, thereby increasing the potential for serious consequences from misconfiguration. The inherent opacity of these systems necessitates robust human oversight to mitigate risks.
Darren Guccione, CEO of Keeper Security, echoed Voster’s warnings, noting that the rapid integration of AI into critical infrastructure is currently outpacing the development of governance, identity controls, and configuration management frameworks. This gap presents significant vulnerabilities, particularly when these systems utilize networks of privileged accounts, API keys, and automation scripts. The amplification of misconfigurations through automation is a growing concern.
As non-human identities like service accounts and AI agents come to outnumber human users in many environments, they pose unique management challenges. Without adequate governance, these identities can operate with excessive permissions and limited oversight, so a single deployment failure could trigger widespread cascading issues across interconnected systems.
Guccione remarked, “As automation increases, so does the potential impact of failure.” As AI technology evolves, the landscape of cybersecurity risks also transforms, underscoring the necessity for vigilant oversight and robust risk management frameworks to safeguard critical infrastructure.