Securing AI Agents with Cyera and Microsoft Copilot Studio

AI agents and AI assistants like Microsoft 365 Copilot are rapidly transforming enterprise productivity. But every prompt, response, and workflow potentially touches sensitive information. Cyera’s integration with Microsoft Copilot Studio gives organizations a way to secure those interactions without slowing down innovation.

Copilot Studio empowers organizations to create AI agents using natural language or a simple-to-use graphical interface. With it, you can easily design, test, and publish agents that suit your specific needs for internal or external scenarios. Microsoft continues to introduce new capabilities to Copilot Studio, and has now added a framework that enables the easy integration of additional data controls and security into AI agents.

Cyera’s new integration with Copilot Studio, unveiled at Microsoft Ignite 2025, uses this framework to insert sensitivity classifications into the agentic workflow and enforce broader AI security policies. By embedding Cyera’s data classification context into Copilot’s prompts, tool actions, and responses, AI agents can automatically limit the return of sensitive information, not just in chat interactions but also in the agent’s autonomous actions, for example during scheduled and event-triggered tasks with no user present. Cyera also adds visibility to the Copilot Studio workflow itself, so AI engineers can see in real time the risks and data sensitivity of the data systems they are using with the agent. The integration helps ensure that the correct data systems are connected from inception and that, once deployed, the AI agent doesn’t expose sensitive data when a user interacts with it.

“Microsoft Copilot Studio is about empowering organizations to create AI agents and workflows that act safely and responsibly, at a global scale,” said Dan Lewis, CVP of Copilot Studio, Microsoft. “This is possible when agents have the right understanding of data in real-time. The combination of Microsoft’s unified AI platform with Cyera’s data intelligence and runtime enforcement delivers this, so organizations can ensure every agent action is grounded in context and trust.”

Real-Time Data Security for AI Agents

Microsoft Copilot Studio makes it remarkably easy for enterprises to create custom AI agents. Anyone can design and publish an agent through a simple interface, then extend it with new capabilities using Microsoft’s latest update that adds a framework to allow data security and governance controls to be built directly into those agents, rather than bolted on later.

By embedding Cyera’s data classification and AI Security intelligence into Microsoft Copilot Studio, AI engineers can make their agent-based AI workflows aware of data sensitivity and security issues in real-time. This integration enables teams to identify risky data sources early and ensures that, when used, the AI agent doesn’t expose sensitive data during user interactions. Here’s a deeper view into how:

  • Agentless discovery and classification 
    • Cyera continuously scans data across Microsoft 365, Microsoft Azure, and data systems outside the Microsoft ecosystem to identify sensitive and regulated content.
  • Data-aware, context-rich, policy enforcement
    • Cyera applies dynamic data policies to Copilot Studio workflows, leveraging its comprehensive understanding of the organization’s data environment, including sensitivity levels, ownership, residency, lineage, permissions, and usage patterns, to determine whether to allow or block tool invocation.
  • Visibility into sensitive data and its location
    • Through integration with the Copilot Studio creation process, AI engineers gain real-time visibility into which data stores and systems an agent accesses and how sensitive that data is. This ensures agents use only the data they need, preventing unnecessary or unauthorized access.
  • In-line runtime guardrails for agents
    • Cyera’s AI Guardian evaluates intents, tool calls (such as sending an email, calling an external REST API, interacting with an MCP, and more), prompts, and responses against policy. Each action can be allowed, modified, held for approval, or blocked entirely.
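To make the runtime-guardrail idea concrete, here is a minimal sketch of how an in-line policy check on a planned tool call might work. All names (`ToolCall`, `Verdict`, `evaluate_tool_call`, the toy classifier) are hypothetical illustrations, not Cyera's actual API; in the real integration the sensitivity context would come from Cyera's discovery and classification service rather than a pattern lookup.

```python
from dataclasses import dataclass
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"
    MODIFY = "modify"   # e.g. redact before sending
    HOLD = "hold"       # held for human approval
    BLOCK = "block"

@dataclass
class ToolCall:
    tool: str              # e.g. "send_email", "rest_api", "mcp_tool"
    payload: str           # content the agent intends to send
    recipient_domain: str  # destination of the action

# Toy stand-in for real data classification context.
SENSITIVE_MARKERS = {"ssn", "credit card"}

def classify(payload: str) -> set:
    """Flag payloads containing known sensitive markers (illustrative only)."""
    return {m for m in SENSITIVE_MARKERS if m in payload.lower()}

def evaluate_tool_call(call: ToolCall, internal_domains: set) -> Verdict:
    """Evaluate a planned agent action in-line, before it executes."""
    labels = classify(call.payload)
    external = call.recipient_domain not in internal_domains
    if labels and external:
        return Verdict.BLOCK   # sensitive data leaving the organization
    if labels:
        return Verdict.HOLD    # sensitive but internal: require approval
    return Verdict.ALLOW

call = ToolCall("send_email", "Customer SSN: 123-45-6789", "gmail.com")
print(evaluate_tool_call(call, {"contoso.com"}).value)  # → block
```

The key design point the example illustrates is that the decision happens before the tool executes, and that the verdict is graded (allow, modify, hold, block) rather than a simple yes/no.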

An example of an alert displayed in Cyera's platform, generated when Cyera’s Threat Detector stopped a Microsoft Copilot Studio agent from sending an email containing a Social Security Number to an external recipient.

The practical benefits

This integration provides robust guardrails for how agents interact with data and tools: it evaluates each planned action in-line against context, identity, and corporate policy, prevents unsafe operations before execution, and enforces these policies automatically. This allows organizations to:

  • Prevent unsafe tool calls to mitigate data leaks and malicious actions before they happen.
  • Defend against prompt injection and jailbreak attempts through context isolation, instruction provenance checks, and outbound response filtering.
  • Neutralize the so-called “Lethal Trifecta” of prompt injection, over-permissive actions, and data exfiltration by combining data context and runtime enforcement.
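Outbound response filtering, mentioned above as one defense against exfiltration, can be sketched as a simple redaction pass over agent output before it leaves the organization. The patterns and function name below are illustrative assumptions, not Cyera's actual rules or API.

```python
import re

# Illustrative patterns for common sensitive identifiers.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")           # US Social Security Number
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")         # rough payment-card pattern

def filter_response(text: str) -> str:
    """Redact sensitive identifiers from an agent response before delivery."""
    text = SSN_RE.sub("[REDACTED-SSN]", text)
    text = CARD_RE.sub("[REDACTED-PAN]", text)
    return text

print(filter_response("Employee SSN is 123-45-6789."))
# → Employee SSN is [REDACTED-SSN].
```

In practice this kind of filter would run alongside, not instead of, the pre-execution checks: even if a prompt injection tricks the agent into composing a leaky response, the outbound pass catches the sensitive content on its way out.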

Customers want AI that’s powerful and responsible by default. This integration leverages Cyera’s core strengths of precise classification, rich data context, and adaptive controls, so that AI agents created with Copilot Studio can perform their tasks securely, protecting sensitive content from unauthorized access or unintended exposure without disrupting productivity. By applying consistent data controls across all AI environments, organizations can unify evidence and context for investigations and confidently scale Copilot deployments.

Cyera for Microsoft Copilot Studio Custom Agents is now available in private preview for select customers, with broader availability to follow soon.

Experience Cyera

To protect your dataverse, you first need to discover what’s in it. Let us help.

Get a demo  →