Mitigating the very real Risks of Custom Copilots

Building no-code chatbots with Copilot Studio is refreshingly straightforward—almost deceptively so. In a few clicks, you’ve got a friendly helper automating tasks or surfacing company knowledge. But ease can be a double-edged sword. That same simplicity can lull makers into skipping the fine print, and before you know it, your "helpful assistant" is casually leaking data, executing the wrong task, or revealing credentials better left buried.

So, if you're planning to roll out a Copilot agent, take a tea break. Let’s walk through the risks and how to stay on the right side of chaos.


First Stop: The Danger Zone

The biggest trap? Input and data sources. Here’s where most agents go rogue:

  • Dodgy external data: Linking to unauthenticated sites or outdated feeds can backfire fast. Your bot may quote nonsense, depend on unreliable info, or worse, hand over control to unknown parties.
  • Local file uploads: It sounds safe until it isn’t. Those files might contain hidden metadata or sensitive bits not meant for broader eyes. If you’ve shared your agent with others, they may have download access breaking data compartmentalisation completely.
  • Over-sharing from inside your org: When an authenticated data source is added say, a SharePoint site the bot might get access to every single page. Add “author authentication” into the mix and suddenly the agent’s using your personal permissions. That’s a recipe for oversharing if you're not careful.
  • Credential exposure: Hardcoding credentials? Don’t. The underlying model can latch onto those and regurgitate them in a response. Giving agents access under your own name (via author authentication) is also a risky shortcut.
  • Wandering logic paths: Generative AI can be brilliant or bizarre. If your action descriptions are vague or duplicated, the agent may execute the wrong task at the wrong time. That means unexpected logic chains and surprising results.
  • Limit attack surface: Are default Topics pruned to reduce attack area?

How to Lock Things Down

Luckily, Copilot Studio comes with solid tools to help you steer clear of trouble. Here’s how to build a secure, sane foundation:

1. Use the Right Authentication

  • No authentication means anyone with the link can chat. Only use this if your agent exposes zero sensitive data. Segregate such agents into their own environment where DLP specifically allows no auth, that option being disallowed everywhere else by default.
  • Microsoft Entra ID (default) is ideal for Teams, Copilot or Power Apps. It signs users in using their Microsoft credentials.
  • Manual authentication is for custom setups or SSO needs. You can plug in Entra ID or any OAuth2 provider. Agents can ask users to sign in during the chat, then use that token to talk to back-end systems on their behalf.

Pro tip: Authentication settings must be tailored for every channel you deploy to. Don’t assume the default will do. Segregate and secure solutions into relevent environments.

2. Enforce Data Loss Prevention (DLP)

DLP policies control what data flows where. Configure them in the Power Platform admin center:

  • Sort connectors into Business, Non-Business, or Blocked.
  • Require connectors to be in the same group to share data.
  • Block “Chat without Microsoft Entra ID authentication” in all environments except where specifically required this ensures all agents require user authentication.
  • Block specific sources like public sites or unaudited SharePoint libraries.
  • Endpoint filtering lets you get precise, like limiting which SharePoint pages an agent can access.

DLP enforcement is on by default for all tenants since early 2025, but you still need to configure it.

3. Be Picky With Your Inputs

  • Vet all data sources. Regularly check for accuracy and relevance.
  • Use Microsoft Purview sensitivity labels (enabled by default for SharePoint sources) to enforce visibility rules.
  • Sanitize user input. Validate every piece of data before it hits your back-end systems.

4. Handle Credentials Properly

  • Never hardcode credentials in a bot or flow.
  • Store secrets in Azure Key Vault or similar.
  • Use individual credentials, not shared ones.
  • Keep encryption on, and data compartmentalised.

5. Monitor and Test Like a Pro

  • Run regular validation tests to see how your agent responds in edge cases.
  • Specifically test for exploits like Policy Puppetry.
  • Are generative answers being tested for reliability? There's no legal defense for your generative AI tool misinforming or causing damage.
  • What actions are the Bots using? Can they be altered by others without your knowledge?
  • Write clear, unique descriptions for each action this helps prevent AI misfires.
  • Enable auditing via Microsoft Purview to track what agents and users are doing. While it doesn’t store full transcripts in the audit log, it logs activity types, timestamps, and more.
  • Consider using the Center of Excellence solution to keep an extra eye on Power Platform usage.

6. Write Governance and Educate Users & Builders

  • Decide the rules, document and agree them.
  • Educate users and builders on usage and risks.
  • Get builders to sign that they understand and accept T&Cs.
  • Go back to point 5 above and monitor. Trust but verify is a good idea.

Bonus Security Wins

  • Azure provides robust infrastructure, MFA, region-based replication, and compliance badges galore.
  • Copilot Studio includes:
    • Bot-level authentication/authorization
    • Data masking
    • Warnings before publishing insecure agents (e.g., “No authentication” triggers alerts)
  • Want extra peace of mind? Use customer-managed encryption keys to control how data is encrypted at rest.

Bottom line: Yes, Copilot Studio makes it easy to build helpful bots. But if you skip the safeguards, you're not launching a clever assistant—you’re releasing a wild card. So use DLP. Set proper authentication. Validate your inputs. And for goodness’ sake, don’t hardcode your passwords.

Build smart. Build safe. Tally ho!

Disclaimer: The software, source code and guidance on this website is provided "AS IS"
with no warranties of any kind. The entire risk arising out of the use or
performance of the software and source code is with you.

Any views expressed in this blog are those of the individual and may not necessarily reflect the views of any organization the individual may be affiliated with.