Data Uploads to Generative AI Applications Soar 30 Times in a Year

Surge in Data Uploads to Generative AI Platforms Raises Concerns Over Security Risks

Recent findings from the cybersecurity firm Netskope reveal a 30-fold increase over the past year in the volume of sensitive internal business data uploaded to generative AI applications. This influx includes a wide array of critical information, from passwords and security keys to intellectual property and regulated data, heightening the risk of data breaches and intellectual property theft.

According to the “2025 Generative AI Cloud and Threat Report” by Netskope, many enterprise users are increasingly utilizing generative AI platforms without formal oversight. Notably concerning is the prevalence of "shadow AI"—where employees access AI tools through personal accounts rather than approved corporate resources. This trend poses significant compliance risks and creates opportunities for potential attackers to exploit vulnerabilities that arise from unregulated data handling.
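To make the "shadow AI" problem concrete, the sketch below flags proxy-log entries where data is uploaded to a generative AI service from an account outside the corporate identity domain. This is a minimal illustration, not Netskope's detection logic: the domain list, log-entry format, and corporate email suffix are all assumptions for the example.

```python
# Minimal sketch of shadow-AI detection in web proxy logs (illustrative only).
# Assumptions: logs are dicts with host/method/bytes_sent/user fields, and
# personal accounts are identified by a non-corporate email suffix.

GENAI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}
CORPORATE_SUFFIX = "@example.com"  # hypothetical corporate domain

def flag_shadow_ai(log_entries):
    """Return entries where a non-corporate account uploaded data to a GenAI host."""
    flagged = []
    for entry in log_entries:
        is_genai = entry["host"] in GENAI_DOMAINS
        is_upload = entry["method"] == "POST" and entry["bytes_sent"] > 0
        is_personal = not entry["user"].endswith(CORPORATE_SUFFIX)
        if is_genai and is_upload and is_personal:
            flagged.append(entry)
    return flagged

logs = [
    {"host": "chat.openai.com", "method": "POST", "bytes_sent": 4096, "user": "alice@gmail.com"},
    {"host": "chat.openai.com", "method": "POST", "bytes_sent": 2048, "user": "bob@example.com"},
    {"host": "intranet.example.com", "method": "GET", "bytes_sent": 0, "user": "carol@gmail.com"},
]
print([e["user"] for e in flag_shadow_ai(logs)])
```

In practice this kind of rule would run inside a secure web gateway or CASB, where it can also distinguish sanctioned corporate tenants of an AI service from personal accounts on the same domain.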

James Robinson, the Chief Information Security Officer at Netskope, emphasized the vital need for organizations to enhance their data security measures. He remarked on the challenges posed by shadow IT evolving into shadow AI, as nearly 75% of users continue to engage with generative AI tools via unauthorized channels. He stated that this trend, coupled with the sensitivity of the shared data, necessitates a robust approach to security governance that prioritizes visibility and acceptable usage policies within organizations.

A critical issue identified in Netskope’s report is the lack of visibility that many organizations have regarding how their data is being utilized and processed within AI applications. Rather than seeking to educate employees on safe AI practices, many firms have adopted a restrictive “block first and ask questions later” method. This approach fails to address the underlying needs and behaviors of users, which could ultimately hinder safe and productive use of AI technologies.

Ari Giguere, Vice President of Security and Intelligence Operations at Netskope, noted that the rapid advancement of AI technology is transforming business operations and the corresponding security threats. He pointed out that AI is not merely altering perimeter and platform defenses; it is fundamentally shifting security paradigms. As adversaries become more adept at crafting sophisticated threats with generative capabilities, Giguere underscored the necessity for security measures that can adapt dynamically in real time.

Potential tactics used in these AI-related security incidents align with several categories in the MITRE ATT&CK framework, particularly around initial access, where unauthorized users gain entry through compromised accounts, and persistence, where attackers maintain access to systems after an initial breach. Such sophistication in both offensive and defensive capabilities reflects an ongoing arms race in the cybersecurity landscape.
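The two tactics named above correspond to real MITRE ATT&CK identifiers: TA0001 (Initial Access) and TA0003 (Persistence). The sketch below shows one simple way a detection pipeline might tag observed events with these tactics; the event names and the mapping itself are illustrative assumptions, not part of the Netskope report.

```python
# Illustrative sketch: annotate security events with the MITRE ATT&CK tactics
# discussed above. TA0001 (Initial Access) and TA0003 (Persistence) are real
# ATT&CK tactic IDs; the event names and this mapping are assumptions.

ATTACK_TACTICS = {
    "compromised_account_login": ("TA0001", "Initial Access"),
    "new_oauth_token_granted":   ("TA0003", "Persistence"),
    "scheduled_task_created":    ("TA0003", "Persistence"),
}

def tag_events(events):
    """Pair each event with its ATT&CK tactic, or None if unmapped."""
    return [(e, ATTACK_TACTICS.get(e)) for e in events]

for event, tactic in tag_events(["compromised_account_login", "new_oauth_token_granted"]):
    print(event, "->", tactic)
```

Mapping alerts to a shared taxonomy like ATT&CK lets defenders compare coverage across tools and spot which stages of an intrusion their telemetry misses.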

For organizations, the implications are clear: as generative AI technologies continue to proliferate, adopting proactive and adaptive security strategies that focus on user engagement and education will be essential. The need for advanced data protection measures that keep pace with the continuous evolution of AI platforms cannot be overstated.

For deeper insights, see Netskope’s full “2025 Generative AI Cloud and Threat Report.”

This surge in data uploads highlights the urgent need for businesses to reassess their cybersecurity frameworks, ensuring they are adequately equipped to manage the complexities introduced by generative AI. Business owners must navigate the intersection of innovation and security with diligence to safeguard their organizations in this rapidly changing digital landscape.
