The Hidden Costs of Shadow AI: Why Access and Audit Controls Matter Now

For security practitioners, AI is a double-edged sword. It introduces new risk vectors, from insider threats to third-party risks to more sophisticated phishing campaigns. But it also enhances the tools used to identify and contain incidents. Perhaps the most important thing you can do to make that sword cut in the direction you want is to understand what AI tools exist in your environment, who's using them, and what data they're ingesting.
That’s the key takeaway from IBM’s 2025 Cost of a Data Breach Report. Based on over three thousand interviews with individuals from more than six hundred organizations, the Report reveals an industry landscape where the biggest divide is between companies that have visibility and control over their AI deployments, and those that don’t.
Let's start with the positives. Over the last year, the average cost of a data breach actually declined by 9 percent, to about $4.4 million. That was largely thanks to big reductions in mean time to identify (MTTI) and mean time to contain (MTTC). And AI had a lot to do with that. The Report found that organizations that made extensive use of AI in their security operations reduced their MTTI by about 80 days, saving an average of $1.9 million.
But on the other side of the scale, AI is causing problems as well as solving them. Thirteen percent of organizations, a little more than one in eight, said they experienced an AI-related data breach. Among those, 60 percent saw their data compromised, and 31 percent suffered operational disruptions.
Shadow AI was a big part of the problem. Twenty percent of organizations reported discovering Shadow AI in their environments. It's likely that many more are hosting AI tools without knowing it, and won't find out until one of them causes an incident. And of course they will, because the Report also found that 63 percent of organizations have no AI governance policy in place.
That's not sustainable. According to Suja Viswesan, IBM's VP of Security and Runtime Products, AI is going to magnify the impact of any gaps in your organization's cyber defenses. And because of the rapid increase in AI-related risks, it's no longer just highly regulated industries that need to maintain a tight data security posture. Everyone needs to level up their data security game, especially when it comes to encryption and identity and access management (IAM).
It all starts with discovery. You have to discover the AI tools at large in your environment, and you have to discover and classify the data they might be ingesting. Without this crucial step, you’re flying blind.
But you can't stop there. You have to secure AI usage, monitoring prompts and outputs in real time. And you have to be able to tie AI tools and data to identity. Human access to AI tools has to follow the principle of least privilege, and so does AI access to sensitive data. Traditional IAM policies like Role-Based Access Control (RBAC) may not be up to the challenge of securing Non-Human Identities (NHIs). Instead, Attribute-Based Access Control (ABAC) and advanced User and Entity Behavior Analytics (UEBA) may be necessary to police AI agents.
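The difference between role-based and attribute-based control for non-human identities can be sketched in a few lines. Everything below is a hypothetical illustration: the identity names, attributes, and policy rules are invented for this example, not drawn from any real product or the IBM report.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    identity: str          # e.g. "agent:report-summarizer"
    identity_type: str     # "human" or "ai_agent"
    data_sensitivity: str  # "public", "internal", or "restricted"
    purpose: str           # declared purpose attribute of the request

def abac_decision(req: AccessRequest) -> bool:
    """Attribute-based check: the decision weighs attributes of the
    identity, the data, and the context, not just a static role."""
    # AI agents never read restricted data, regardless of any role they hold.
    if req.identity_type == "ai_agent" and req.data_sensitivity == "restricted":
        return False
    # Internal data requires an allow-listed purpose.
    if req.data_sensitivity == "internal":
        return req.purpose in {"summarization", "classification"}
    # Public data is open; anything else is denied.
    return req.data_sensitivity == "public"

# An RBAC system that only knew "this agent holds the analyst role" could
# not express the restricted-data carve-out above without minting new roles.
print(abac_decision(AccessRequest(
    "agent:report-summarizer", "ai_agent", "restricted", "summarization")))
# prints False
```

The point of the sketch is that the deny decision hinges on `identity_type` and `data_sensitivity` attributes, not on role membership, which is what makes ABAC a better fit for policing AI agents.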
Here at Cyera, we're committed to creating a world where safe and trustworthy AI helps businesses grow, communities thrive, and individuals pursue their passions with peace of mind. Cyera is the world's fastest-growing data security company because it was the first to solve the problem of discovering enterprises' data at speed and scale, and classifying it precisely.
Not content to simply protect data at rest, we took our AI-native design and used it to solve DLP. Discovering and protecting data in transit with unmatched precision, Cyera’s Omni DLP dramatically reduces false positives, giving security teams a respite from alert fatigue and more time to triage issues that really matter.
Now we're upping the ante again. With Cyera's AI Guardian, enterprises can confidently adopt AI tools, whether that's embedded AI like Microsoft Copilot, third-party tools like ChatGPT, or homegrown AI apps built on platforms like Amazon Bedrock.
AI Guardian consists of two products: AI Security Posture Management (AISPM) and AI Runtime Protection. The first identifies all the AI tools in your environment, the data those tools are accessing, and which identities are using them. The second monitors AI usage and protects sensitive data in real time, blocking abusive prompts and preventing data leakage through AI outputs. It also logs AI interactions to help security teams investigate policy drift, misuse, or hallucinations.
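To make the runtime side concrete, here is a minimal sketch of the kind of real-time prompt and output screening described above. The patterns and blocking policy are illustrative assumptions for the example, not Cyera's actual detection logic.

```python
import re

# Hypothetical sensitive-data patterns; a production system would use far
# more precise classifiers than regexes like these.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def screen(text: str) -> list[str]:
    """Return the names of sensitive-data patterns found in a prompt or
    model output; an empty list means the text may pass."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]

prompt = "Summarize the account for SSN 123-45-6789"
findings = screen(prompt)
if findings:
    # A real runtime layer would block or redact the text, and log the
    # event so security teams can audit AI interactions later.
    print(f"blocked: {findings}")
# prints blocked: ['ssn']
```

The same `screen` call would run on both directions of traffic: prompts on the way in (to catch abusive or data-laden requests) and model outputs on the way back (to prevent leakage).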
AI Guardian helps organizations solve the problem of Shadow AI, and ensures that your AI maturity keeps pace with your AI aspirations. To see how Cyera can help you bring AI out of the shadows, schedule a demo today at cyera.com.
Gain full visibility with our Data Risk Assessment.