IBM Discovers Inadequate Controls in 97% of AI-Related Data Breaches

Recent research from IBM highlights a significant “AI oversight gap” among organizations that have experienced data breaches. According to findings from the company’s Cost of a Data Breach Report, an alarming 97% of these organizations reported a lack of adequate AI access controls, underscoring potential vulnerabilities in their cybersecurity frameworks.

Furthermore, 63% of participants indicated they had no AI governance policies to manage the use of artificial intelligence or to prevent workers from using unapproved AI tools, often referred to as “shadow AI.” The data was released in late July and highlighted in an August 18 report by CPO Magazine.

The implications of this oversight are serious, as IBM’s announcement points out that high levels of shadow AI have added an average of $670,000 to the total costs associated with breaches globally. This financial toll reflects not only immediate damages but also the broader impacts on organizational operations. AI-related breaches have the potential to cause extensive data compromises and operational disruptions, hampering an organization’s ability to process sales orders, provide customer service, and manage supply chains effectively.

Despite these challenges, there is some positive news in the report. For the first time in five years, average global data breach costs declined, falling from $4.88 million to $4.44 million, a 9% reduction. IBM attributes this trend to faster containment of breaches, driven by AI-powered defenses, which have enabled organizations to identify and contain breaches in a mean time of 241 days, the shortest duration recorded in nearly a decade.

Supporting this trend, research from PYMNTS Intelligence indicates that more businesses are adopting AI-driven tools for enhanced cybersecurity. The proportion of chief operating officers (COOs) reporting that their companies had implemented such measures rose to 55% in August of last year, up significantly from 17% in May.

COOs are increasingly moving toward proactive, AI-driven frameworks, transitioning away from traditional reactive security measures. These AI frameworks are designed to identify fraudulent activities, detect anomalies, and provide real-time assessments of potential threats, marking a shift in organizational cybersecurity strategies.

Nonetheless, the integration of agentic AI in cybersecurity introduces new governance and compliance challenges. Since these systems operate autonomously, questions arise about accountability when the AI errs, such as mistakenly flagging critical systems for shutdown or failing to detect an actual breach.

As Kathryn McCall, Chief Legal and Compliance Officer at Trustly, articulated in a June interview with PYMNTS, navigating this landscape is not merely a technical upgrade but a profound governance revolution.

Business owners must be aware that as AI tools become more prevalent in cybersecurity, they necessitate robust governing policies to mitigate risks effectively while ensuring the integrity of operations. The MITRE ATT&CK framework remains a vital tool for understanding and addressing potential adversary tactics, such as initial access, persistence, and privilege escalation, that can be exploited in such breaches.
