83% Use AI; Only 13% Have Visibility - Cyera’s 2025 State of AI Data Security Report
CyberSecurity Insiders, in partnership with Cyera Research Labs (Cyera’s Data & AI Security research arm), surveyed 900+ IT leaders to provide the evidence customers kept asking for.
Key Takeaways:
- 83% of enterprises already use AI, yet only 13% report strong visibility into how it touches their data.
- 76% say autonomous AI agents are the hardest to secure, and only 9% monitor AI activity in real time.
- Controls lag incidents: 66% have caught AI over-accessing sensitive data, while just 11% can automatically block risky activity.
The first alert didn’t come from a SIEM. It came from a sales manager asking why an AI copilot could “magically” find a pricing deck it shouldn’t know. No exploit. No zero-day. Just default access, missing guardrails, and no one watching the prompt layer.
That story showed up again and again in the 2025 State of AI Data Security Report: AI is already threaded through daily work, but the controls that keep data safe haven’t caught up. The result is a readiness gap (use is high, oversight is low), and it’s widening at the exact edges attackers and accidents love.
From Questions to Evidence: Why (and How) We Conducted the Survey
Customers kept asking the same questions: Where do we stand versus peers? Which guardrails are actually standard? How do we measure readiness? There was no neutral, cross-industry baseline. So, in partnership with CyberSecurity Insiders, we fielded a structured, multiple-choice survey of 921 IT and security practitioners across industries and company sizes (CISOs, security leaders, architects, SOC heads, data-governance roles). Results are statistically robust (±3.2% margin of error at a 95% confidence interval). We focused on adoption and maturity, visibility and monitoring, controls at the prompt/agent edge, identity and access for AI, and governance and readiness, aligned to the OWASP Top 10 for LLM Applications.
What we asked (and why)
We wanted to know if AI is being governed with the same rigor we apply to users, systems, and data. So we asked about:
- Where AI lives (tools, models, and workflows)
- What visibility exists (logs, real-time signals, auditability)
- Which controls are active (prompt filters, output redaction, auto-blocking)
- How access is granted (AI as first-class identity or “just another user”)
- Who owns governance (and whether policy maps to proof)
Here’s What the Data Says
Adoption outpaces control. AI is mainstream (83% use it in daily work), yet only 13% say they have strong visibility into how it touches enterprise data. Most teams are still in pilots or at “emerging” maturity, but usage already spans content/knowledge work and collaboration. That’s how blind spots scale.
Risk concentrates at the edges. Confidence is highest when AI is buried inside familiar SaaS. It collapses when autonomy or public prompts enter the picture. 76% say autonomous agents are the hardest to secure; 70% flag external prompts to public LLMs. Nearly a quarter (23%) have no prompt or output controls at all. That’s not a model failure; that’s a policy failure at the interface.
AI is a new identity, treated like an old one. Only 16% treat AI as its own identity class with dedicated policies. 21% grant broad data access by default. 66% have already caught AI over-accessing information (often, not rarely). And while least-privilege should be tied to data classification in real time, only 9% say their data-security and identity controls are truly integrated for AI.
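To make that integration gap concrete, here is a minimal, hypothetical sketch of a least-privilege check that ties an AI identity’s access to data classification. The identity type, classification tiers, and scope names are illustrative assumptions, not report findings or any product’s API.

```python
from dataclasses import dataclass, field

# Hypothetical identity and asset types for illustration only.
@dataclass
class AIIdentity:
    agent_id: str
    scopes: set = field(default_factory=set)  # e.g. {"crm:read"}
    allowed_classifications: set = field(default_factory=lambda: {"public", "internal"})

@dataclass
class DataAsset:
    asset_id: str
    classification: str      # "public" | "internal" | "confidential" | "restricted"
    required_scope: str

def authorize(agent: AIIdentity, asset: DataAsset) -> bool:
    """Deny by default: the AI identity must hold the exact scope the asset requires
    AND be cleared for the asset's classification tier."""
    return (
        asset.required_scope in agent.scopes
        and asset.classification in agent.allowed_classifications
    )

copilot = AIIdentity("sales-copilot", scopes={"crm:read"})
pricing_deck = DataAsset("pricing-deck-q3", classification="confidential",
                         required_scope="finance:read")

print(authorize(copilot, pricing_deck))  # False: the "magic" pricing-deck access is denied
```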
AI governance lags reality. Just 7% have a dedicated AI governance committee, and only 11% feel fully prepared for emerging AI regulation. Logs are too often post-incident artifacts; auto-blocking exists in only 11% of programs. In other words: we’re observing after the fact and enforcing by hand.
Takeaways for CISOs, Architects and Data Leaders
If you treat AI like an app, you’ll miss the real risk. If you treat it like a human, you’ll over-permission it. If you ignore the prompt/output layer, you’ll never see the leak.
Here’s the pragmatic path forward:
- Instrument from the first pilot. Turn on discovery of AI tools/models, central prompt/output logging, and real-time anomaly signals (exfil attempts, jailbreaks, over-consumption). Don’t create governance debt you’ll pay for later.
- Harden the interface. Default to input filtering and output redaction at gateways. Keep agent scopes narrow, require approvals for autonomy, and enforce rate limits and kill switches. Aim for auto-blocking where risk patterns are clear (see the sketch after this list).
- Make AI a first-class identity. Provision AI with its own identity type and lifecycle. Enforce least-privilege tied to data classification and context, continuously, not quarterly. Review and expire scopes like you would production tokens.
- Anchor governance to evidence. Put a named owner on AI governance. Map policy to logs, decisions, and outcomes you can actually show (DPIAs/TRAs where applicable). Track board-level metrics: coverage, auto-block rate, over-access rate, and time to detect/contain prompt-layer incidents.
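As a rough illustration of the first two items, here is a minimal, hypothetical sketch of a prompt/output gateway that logs every exchange, blocks prompts matching clear risk patterns, and redacts sensitive output before it leaves the gateway. The patterns, thresholds, and function names are assumptions for illustration, not a reference implementation; real deployments would use classifier-backed detection and ship logs to a SIEM.

```python
import re
import time
import uuid

# Illustrative-only risk patterns.
INJECTION_PATTERNS = [r"ignore (all|previous) instructions", r"reveal your system prompt"]
SENSITIVE_PATTERNS = {"ssn": r"\b\d{3}-\d{2}-\d{4}\b",
                      "api_key": r"\b(sk|key)-[A-Za-z0-9]{16,}\b"}

audit_log = []  # central prompt/output log; in practice this streams to your SIEM

def gateway(agent_id: str, prompt: str, call_model) -> str:
    record = {"id": str(uuid.uuid4()), "agent": agent_id, "ts": time.time(),
              "prompt": prompt, "action": "allowed"}

    # Input filtering: auto-block prompts matching clear risk patterns.
    if any(re.search(p, prompt, re.IGNORECASE) for p in INJECTION_PATTERNS):
        record["action"] = "blocked"
        audit_log.append(record)
        return "Request blocked by policy."

    # Output redaction: mask sensitive patterns before the response leaves the gateway.
    response = call_model(prompt)
    for label, pattern in SENSITIVE_PATTERNS.items():
        response = re.sub(pattern, f"[REDACTED:{label}]", response)

    record["response"] = response
    audit_log.append(record)
    return response

def auto_block_rate() -> float:
    """One of the board-level metrics above: blocked exchanges / total exchanges."""
    return sum(1 for r in audit_log if r["action"] == "blocked") / max(len(audit_log), 1)

# Example with a stubbed model call, showing the redaction path end to end.
print(gateway("sales-copilot", "Summarize Q3 pricing", lambda p: "Contact SSN 123-45-6789"))
print(f"auto-block rate: {auto_block_rate():.0%}")
```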
Introducing Cyera Research Labs (and what’s next)
This post inaugurates Cyera Research Labs, our research arm dedicated to clear, data-driven guidance at the intersection of AI and data security. Expect concise briefs, rapid-response analyses when the landscape shifts, and practical playbooks you can put into production. If you want the next report, tell us and we’ll share benchmarks and guardrail patterns you can use on Monday.
Final word: AI is not waiting for governance to catch up. The leaders who win won’t be the loudest; they’ll be the ones who can prove visibility, containment, and least-privilege for non-human actors, starting now.
Read the 2025 State of AI Data Security Report.