10 Microsoft 365 Copilot Risks That Aren't in Your Threat Model (But Should Be)

Copilot doesn't bypass your permissions — it makes bad permissions consequential. Ten specific risks, mechanisms, and mitigations for M365 Copilot deployments.

10 Microsoft 365 Copilot Risks That Aren't in Your Threat Model (But Should Be)

Copilot Is Not Just a User

The selling point is also the threat model problem. Microsoft positions Copilot for Microsoft 365 as safe because it only accesses what you already have access to — and that's technically accurate. According to Microsoft's own privacy documentation, Copilot uses the same underlying controls that govern data access in Microsoft 365. No bypass. No privilege escalation.

That framing misses the operational problem by design.

Copilot makes your existing permissions model consequential in ways it wasn't before. An organization accumulates years of SharePoint sharing debt, over-provisioned user accounts, sensitivity labels without encryption, and audit coverage gaps — and lives with that debt indefinitely because none of it actively causes a problem. Then Copilot gets licensed, and every piece of that debt becomes an active attack surface overnight.

What you've been tolerating as a latent risk is now an instant-query risk.

One scope note before the risks: this post covers Copilot for Microsoft 365 — the productivity assistant integrated into Word, Excel, Teams, Outlook, and Microsoft 365 Chat. It does not cover Copilot for Security (a separate SOC product) or Copilot Studio (the custom AI agent builder). Both introduce their own risk profiles; both are out of scope here.

This is a threat model briefing, not a product tutorial. Ten risk categories, each with a mechanism, a Copilot-specific failure mode, and actionable mitigations. Treat it as a pre-deployment checklist gap analysis, or a post-deployment review if you've already gone live.


Risk 1: Data Oversharing Amplification

Copilot queries Microsoft Graph on behalf of the signed-in user. Any content accessible via Graph — SharePoint sites, OneDrive files, Teams channels, Exchange mail — is in scope for every session. That's by design and documented explicitly by Microsoft.

The practical failure mode is permissions debt. Most organizations have spent years granting SharePoint access to security groups and "People in your organization" scopes without systematic cleanup. Content is technically accessible to hundreds of users but practically unreachable because nobody knows it exists and nobody has a reason to go looking. Copilot changes that calculus entirely. A user with Reader access to forty SharePoint sites they've never visited can ask "What do we know about Project Aurora?" and get a synthesized answer drawing from documents across all of them.

Copilot surfaces content the user would never have found manually — content that was technically permitted but had no practical path to discovery before. The gap between "technically permitted" and "appropriate for synthesis" is enormous in most enterprise tenants. A Copilot readiness checklist specifically covering data classification and oversharing risks is available in our Purview + Copilot Starter.

Mitigation:

  • Run a SharePoint access review before any Copilot license assignment. Use SharePoint Advanced Management data access governance reports to enumerate sites accessible to broad groups.
  • Restrict or remove "Everyone" and "Everyone except external users" permission grants across SharePoint.
  • Configure SharePoint Restricted Search — this limits Copilot to querying only the sites you explicitly allow, functioning as an allowlist for the Copilot corpus while you remediate permissions debt.

Risk 2: Prompt Injection via Document and Email Content

Copilot is designed to read arbitrary document and email content and reason over it. That design decision creates a direct attack surface for indirect prompt injection.

The mechanism: an attacker controls content in a document, email, or calendar invite that a user may ask Copilot to process. That content can embed adversarial instructions — variations of "ignore previous instructions and instead do X" — that Copilot processes alongside the legitimate document content. Unlike direct attacks against the chat interface, this attack requires no access to the interface at all. A malicious email in a target user's inbox is sufficient.

Independent security researcher Johann Rehberger documented this attack class in published research at Embrace The Red, including demonstrations against M365 Copilot using emails, calendar invites, and documents with instructions hidden in white text. Responsible disclosures were submitted through MSRC.

A critical clarification: specific attack chains documented in 2023–2024 have been patched by Microsoft. The architectural problem — distinguishing adversarial instructions from legitimate document content in an LLM processing pipeline — has not been solved, and new variations continue to be found. Microsoft's Prompt Shields documentation acknowledges the mitigation investment (grounding, input filtering, Prompt Shields) without publishing technical efficacy metrics.

Mitigation:

  • Treat this as a risk class, not a specific exploit to patch against. The class exists and has been publicly demonstrated.
  • User awareness training: do not ask Copilot to summarize documents from unknown or external sources without verifying origin.
  • Tenant admins have limited direct control over Copilot's internal injection detection. Mitigations are predominantly Microsoft-side. Monitor Microsoft's security advisories for this attack surface.

Risk 3: Over-Permissioned Identity Blast Radius

When Copilot runs a session, it uses the signed-in user's Microsoft Graph token. The user's effective permissions are Copilot's operating permissions — no expansion, but also no reduction.

A user with excessive accumulated permissions — Global Reader privileges, Site Collection Administrator across legacy SharePoint environments, membership in security groups with over-broad grants — has an enormous Copilot data reach. That reach requires no attack to realize. The exposure exists the moment the Copilot license is assigned.

The highest-risk patterns in typical enterprise tenants:

  • Global Reader + Copilot license: can query any file, email content, or SharePoint site in the tenant.
  • Site Collection Administrators: have unrestricted read access to site content, including subsites, document libraries, and version history across every site they administer.
  • Legacy security group members: groups provisioned years ago with broad SharePoint grants that nobody has audited continue to convey those permissions to every member who receives a Copilot license.

The "blast radius" is entirely silent. No alert fires. No audit event triggers on overpermission detection. The user with Global Reader simply gets answers from a broader corpus than any normal user would, and nobody knows.

Mitigation:

  • Conduct an access review for every Copilot-licensed user before rollout — not a global access review, but a scoped review focused specifically on identities receiving licenses.
  • Remove excess site permissions, stale security group memberships, and unnecessary admin roles.
  • Prioritize least-privilege remediation for accounts with access to high-sensitivity sites: HR, Finance, Legal, M&A.
  • Use Entra ID Access Reviews to automate recertification cycles on an ongoing basis post-rollout.

Risk 4: Audit Log Coverage Gaps for Copilot Interactions

Microsoft Purview records Copilot interaction events under the CopilotInteraction schema — capturing user, timestamp, application context, and whether content was referenced. That sounds comprehensive. It isn't.

Per Microsoft's Purview audit documentation, the full text of Copilot prompts and responses is not stored in audit logs by default. The log confirms an interaction occurred. It does not record what was asked, what content was surfaced, or what the exact output was.

The forensic consequence: if a Copilot session surfaces a sensitive document and the user screenshots the output, your audit trail records that Copilot was used. It cannot reconstruct what Copilot produced or what query triggered the response.

The second gap is retention. E3 tenants get 90-day audit log retention by default. E5 adds up to one year. Most compliance-heavy organizations retain SharePoint and Exchange audit events for seven years or longer under regulatory requirements. If you're in that category, your Copilot audit trail may be a fraction of your standard retention policy unless you explicitly configure otherwise.

Audit capabilities have been improving rapidly since Copilot's GA in late 2023. Verify the current state of CopilotInteraction event coverage in your Purview compliance portal before drawing conclusions — this area of documentation has evolved continuously.

Mitigation:

  • Enable Purview Audit (Premium) for extended retention.
  • Confirm CopilotInteraction events are flowing in the Purview compliance portal and that retention is aligned with your audit retention policy.
  • Use Purview eDiscovery and Content Search for Copilot interaction review where investigations require it.
  • Implement Purview Insider Risk Management policies that include Copilot interaction signals as behavioral indicators.

Risk 5: Plugin and Connector OAuth Scope Creep

Copilot supports extensibility through plugins and Power Platform connectors. These authenticate via OAuth and can request permissions against Microsoft Graph or external services as part of function calling.

The concern runs in two directions. Inbound: Graph connectors and external data connectors ingest external content into the Copilot search index, potentially introducing data from systems not subject to M365's DLP and retention controls. Outbound: third-party plugins that handle context from Copilot prompts can relay that context to external APIs. A plugin connecting Copilot to an external service receives session-level context as part of how it constructs its API calls.

The severity depends heavily on your tenant's app governance posture. Microsoft's Copilot requirements documentation describes admin controls for plugin availability via the Integrated Apps admin panel. Tenants with admin-only app consent and a formal app security review process are substantially less exposed. Tenants where user-level OAuth consent is permitted and self-installed apps are allowed have a materially wider attack surface.

Mitigation:

  • Restrict Copilot plugin availability to admin-vetted apps only through Microsoft 365 admin center > Integrated Apps.
  • Block user-level OAuth consent for apps that haven't passed a security review.
  • Audit OAuth grants for currently installed connectors — review what Graph permissions each connector has been granted and whether those grants are appropriate.
  • Apply Conditional Access app control policies where applicable to govern session behavior.

Risk 6: Sensitivity Labels Don't Protect What You Think They Do

The risk is documented by Microsoft, but the implication requires close reading to surface.

Microsoft's sensitivity label guidance for Copilot states: "Microsoft 365 Copilot honors the protections of encrypted information." That sentence is technically precise. The implication — that Copilot does not honor the protections of non-encrypted information — is left for the reader to infer.

Sensitivity labels come in two functional types for Copilot purposes. Labels that apply RMS encryption actually restrict Copilot: if a document is encrypted to a restricted user set, Copilot cannot read it on behalf of users outside that set. Labels that apply only visual markings — headers, footers, watermarks, metadata tags — are entirely transparent to Copilot. The label is decoration. Copilot reads through it.

The problem is how organizations use labels in practice. Many apply "Confidential" and "Highly Confidential" designations as visual markings without encryption — typically because encryption created friction with external partner sharing, broke legacy workflows, or was too complex to deploy at scale. Those decisions made pragmatic sense at the time. In a Copilot deployment, they create a false belief that labeled content is protected when it is not. For a structured sensitivity label taxonomy with encryption tiers mapped to Copilot access behavior, advanced DLP runbooks, and a Copilot risk assessment template, see our Purview + Copilot Pro.

Mitigation:

  • Pull your active sensitivity label policies and identify which tiers enforce RMS encryption versus visual markings only.
  • For any label tier intended to restrict broad access — Highly Confidential, Restricted, Executive Only — enforce encryption.
  • Where encryption breaks partner sharing workflows, evaluate scoped encryption with appropriate user-level permissions rather than abandoning encryption entirely.
  • Document explicitly which label tiers restrict Copilot access and which do not. Share this with your security and compliance teams. It will likely surprise them.

Risk 7: Shared Mailboxes and Service Accounts Expand Copilot's Reach

This risk is an inference from documented Exchange delegation behavior, not something Microsoft names explicitly as a Copilot risk. It warrants naming anyway.

Users with Full Access delegation to shared mailboxes — HR inboxes, Legal correspondence boxes, Finance shared mailboxes — can access those mailboxes through Outlook. Microsoft documentation confirms that Copilot accesses Exchange on behalf of the signed-in user, including delegated mailbox access. A user asking "Summarize recent personnel issues" may get a synthesized response drawing from the HR shared mailbox they're delegated to, not just their personal inbox.

This is working as designed. The concern is intent amplification: a delegation grant was typically issued for a specific operational need — covering for an absent colleague, routing expense approvals. Copilot transforms that limited delegation into a general-purpose content access grant against everything in that mailbox, surfaceable by natural language query.

The service account vector is a separate problem: service accounts with SharePoint permissions that were accidentally — or by policy — assigned Copilot licenses create a data access principal with potentially sweeping scope and no organizational oversight of its query behavior.

Mitigation:

  • Audit delegated mailbox access for every Copilot-licensed user. Remove delegations that lack a current, documented business justification.
  • Ensure service accounts are explicitly excluded from Copilot license assignment policies.
  • Enforce naming conventions and account type tagging (ServiceAccount, SharedMailbox) so license assignment policies can correctly target them.
  • Review service accounts with SharePoint permissions against your Copilot license assignment lists specifically.

Risk 8: DLP Doesn't Govern What Copilot Produces

Purview DLP policies can be configured to scan and act on user prompts submitted to Copilot. If a user types a credit card number directly into the Copilot chat, a DLP policy can detect it, audit it, or block the submission. Microsoft's DLP documentation for Copilot covers this input-side governance.

The output side is a gap. Microsoft's DLP workload coverage for Copilot is scoped around what users submit — preventing sensitive data from entering Copilot prompts. What Copilot synthesizes from content it's authorized to read, and then renders to the user, falls outside that coverage model as currently documented. When Copilot summarizes a document containing credit card numbers and renders those numbers in the chat interface, the DLP policy on the upstream document does not intercept the output.

The result: you can have rigorous DLP coverage on Exchange, SharePoint, and OneDrive, configure Copilot prompt scanning, and still have a path for sensitive content to surface through Copilot responses without triggering a policy.

This is a documented gap as of early 2026. Microsoft has indicated that expanded AI output governance is in development. Check current DLP documentation against your deployment before treating this as a permanent limitation — this coverage area changes frequently.

Mitigation:

  • Layer encrypted sensitivity labels with DLP. Encrypted labels prevent Copilot from accessing the source document entirely, sidestepping the output gap for content you've encrypted.
  • Configure Purview Communication Compliance to include Copilot interactions for retrospective review of policy violations.
  • Track Microsoft's DLP roadmap for AI output governance. The current gap may narrow substantially before your next compliance audit cycle.

Risk 9: Default Sharing Settings Determine Copilot's Data Pool

Risk 9 overlaps with Risk 1 but originates from a different mechanism. Risk 1 is about explicit permission grants that are too broad. Risk 9 is about what default sharing behavior did quietly over years without anyone paying attention.

SharePoint's tenant-level default sharing link setting determines the access scope for files shared without explicit permission assignment. If your tenant default was set to "People in your organization with the link" — a common historical configuration — then every file shared under that default since the setting was applied is accessible to every authenticated user in your SharePoint search index. That corpus is what Copilot queries.

Organizations that deployed SharePoint before 2020 and haven't revisited their defaults often have large accumulated bodies of content under org-wide sharing scope. Every one of those files is in the Copilot data pool for every licensed user.

Microsoft's pre-deployment readiness guidance calls out default sharing settings explicitly as a prerequisite to address before going live. SharePoint Copilot best practices documentation covers both restricted search configuration and data access governance reporting for scoping this exposure.

Mitigation:

  • Audit the current tenant default link type and change it to "Specific people" if it isn't already.
  • Use SharePoint Advanced Management data access governance reports to enumerate files shared under broad org-wide settings.
  • Apply retroactive link cleanup using SharePoint's link reporting and expiration tooling where feasible.
  • Configure link expiration policies to prevent future accumulation under broad defaults.

Risk 10: Copilot Makes Insider Threats Faster and Harder to Detect

This is the hardest risk to control because the mechanism is legitimate functionality used for illegitimate purpose.

A Copilot-licensed user with broad permissions can conduct reconnaissance across an M365 tenant in minutes through natural language queries. "What do we have on the acquisition?" "Summarize HR complaints from the past year." "What are the financial projections for next year?" None of these prompts are inherently suspicious activity. All of them, from a user with sufficient permissions, can surface highly sensitive content spanning dozens of data sources.

The detection gap amplifies the problem. Manual file exfiltration generates download and copy audit events. Copilot-based reconnaissance generates CopilotInteraction events — which, as covered in Risk 4, confirm that an interaction occurred but don't record what was asked or what was surfaced. A malicious insider conducting reconnaissance through Copilot leaves a forensic trail that is considerably lighter than traditional document access patterns.

CISA's Guidelines for Secure AI System Development, co-authored with the UK NCSC and allied agencies, explicitly identifies data aggregation capability in AI systems as a novel insider threat vector — not because permissions are bypassed, but because AI dramatically reduces the time and skill required to leverage broad permissions effectively. The principle applies directly to Copilot.

No Microsoft documentation explicitly names Copilot as an insider threat amplification tool. This is an application of established AI security principles to the Copilot deployment context, not a confirmed Microsoft-acknowledged risk category.

Mitigation:

  • Configure Purview Insider Risk Management policies to include Copilot interaction signals as behavioral risk indicators before rollout.
  • Establish Copilot usage baselines so anomalous patterns — unusual query volume, queries spanning unexpected data domains, queries about sensitive business topics from accounts without clear operational need — can be surfaced.
  • Configure DLP and Communication Compliance to audit Copilot interactions, providing a retrospective review capability.
  • The most effective control remains pre-rollout: ensure no single user has simultaneous access to sensitive content across HR, Finance, Legal, and M&A unless their role explicitly requires it.

What Your Copilot Threat Model Is Missing

These ten risks fall into three categories that map to how remediation ownership should be assigned:

Data surface expansion (Risks 1, 3, 7, 9): Copilot amplifies latent permissions debt and sharing debt that already exists in your tenant. The work here is access reviews, least-privilege remediation, and SharePoint governance — none of which require Copilot-specific tooling. They are Copilot-urgent work you may have been deferring.

Governance and visibility gaps (Risks 4, 6, 8): Controls that exist in your tenant may not extend to Copilot interactions or outputs. Audit coverage is narrower than you assume. Sensitivity labels may not protect what stakeholders believe they do. DLP does not govern Copilot's output side. These require configuration changes and honest documentation of what your controls cover and what they don't.

Novel attack surfaces (Risks 2, 5, 10): Prompt injection, plugin scope creep, and insider threat amplification are behaviors that don't exist without Copilot. They require new detection and response thinking, not just configuration.

One misconception worth addressing directly: Copilot for Microsoft 365 does not route data through OpenAI's infrastructure. Microsoft operates the underlying models within its own data centers under M365's existing data processing and residency agreements. If that concern has been raised in your organization as a deployment barrier, it's based on a misunderstanding of the product architecture.

The actual argument for delaying deployment is simpler and more accurate: Copilot is safe when deployed into a tenant where permissions, governance, and audit are in order. In most enterprise tenants, they aren't. The good news is that the mitigations above are predominantly configuration work, not additional product purchases. The bad news is that the configuration work is substantial, and most organizations haven't done it.


Three Things to Do This Week

If you take one action per day for the rest of this week:

  1. Run a SharePoint oversharing assessment. Use SharePoint Advanced Management's data access governance reports to identify sites and files accessible to "Everyone" or "People in your organization" groups. Quantify the exposure before you assign a single Copilot license. If SAM isn't licensed, start with manual report pulls from the SharePoint admin center's sharing reports.
  2. Audit sensitivity label encryption coverage. Pull your active sensitivity label policies and identify which tiers enforce RMS encryption. Any label at "Confidential" or higher that relies on visual markings only is transparent to Copilot today. Escalate the finding to whoever owns the information protection program; it is likely news to them.
  3. Enable and verify Copilot activity logs in Purview. Confirm CopilotInteraction events are flowing in the Purview compliance portal. Check the retention period against your audit requirements. If you're on E3 with 90-day retention and your audit policy requires 12 months minimum, Purview Audit (Premium) needs to be configured before Copilot goes live — not after an incident makes the gap obvious.

None of these require a Copilot deployment to be in progress. All of them should have been done already. Copilot just made the cost of skipping them visible.


Purview + Copilot StarterFoundation sensitivity label taxonomy, baseline DLP policy configurations, an information protection onboarding checklist, a Copilot for Microsoft 365 readiness checklist covering data classification and oversharing risks, and a prompt governance guide for responsible AI use.TirionPurview + Copilot ProEverything in the Purview + Copilot Starter, plus advanced DLP runbooks, audit log review guides, an expanded sensitivity label taxonomy, a Copilot risk assessment template for identifying AI readiness gaps, and compliance gap checklists for AI governance controls.Tirion


Revision History

2026-04-08 — published

Added inline CTAs and Ghost bookmark cards for Purview + Copilot Starter and Purview + Copilot Pro bundle references; added feature image.