Enabling Anthropic’s Claude models alongside OpenAI in Microsoft 365 Copilot and Copilot Studio can give you more choice for reasoning‑heavy and safety‑critical workflows. This quick tips guide covers why you might enable them, what to watch out for, and how to switch them on.
Why Turn On Anthropic Models
- Strong at summarizing complex information and long context, ideal for research, policy, and multi‑document review.
- Great for question answering with citations from source material, helping users trust outputs in regulated or documentation‑heavy environments.
- Useful for synthesis across multiple sources and idea generation, which pairs well with Copilot’s Researcher and Copilot Studio agent workflows.
- In Copilot Studio, you can pick Anthropic per agent or per prompt, and the system can fall back to OpenAI if Anthropic is disabled (a conceptual sketch of that fallback follows this list).
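Copilot manages that fallback for you, but the pattern is worth picturing. The sketch below shows the same idea at the application level using the public Anthropic and OpenAI Python SDKs; the model IDs and the summarize helper are placeholders, not anything Copilot or Copilot Studio exposes.

```python
# Illustrative only: Copilot's own Anthropic/OpenAI fallback is managed by
# Microsoft. This shows the equivalent pattern in your own code using the
# public SDKs (pip install anthropic openai). Model IDs are placeholders.
import anthropic
import openai

claude = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
oai = openai.OpenAI()           # reads OPENAI_API_KEY from the environment

def summarize(text: str) -> str:
    """Try a Claude model first; fall back to an OpenAI model if the call fails."""
    try:
        response = claude.messages.create(
            model="claude-sonnet-4-20250514",   # placeholder model ID
            max_tokens=500,
            messages=[{"role": "user", "content": f"Summarize:\n\n{text}"}],
        )
        return response.content[0].text
    except anthropic.APIError:
        completion = oai.chat.completions.create(
            model="gpt-4o",                      # placeholder fallback model
            messages=[{"role": "user", "content": f"Summarize:\n\n{text}"}],
        )
        return completion.choices[0].message.content
```

In Copilot Studio you get this behavior without writing code; the sketch just makes the fallback order explicit.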
Key Risks and Governance Notes
Before you flip the switch, treat Anthropic like any other new subprocessor and model family.
Data handling and contracts
- Anthropic runs its models outside Microsoft’s core infrastructure, with Anthropic acting as a Microsoft subprocessor for Copilot experiences.
- Anthropic usage in Copilot is covered under Microsoft’s product terms and data protection commitments for supported scenarios.
Regional and data residency nuances
- Anthropic is not included in every data boundary scenario yet, so some regions may require explicit opt‑in and a DPIA or legal review.
- Government and some sovereign clouds have stricter limitations, so check your tenant’s availability and commitments.
Feature dependency and user expectations
- Some Copilot experiences (for example, Researcher and certain advanced reasoning flows) may rely on Anthropic; turning it off can reduce functionality.
- UI indicators can show when Anthropic models are in use, so include this in user education and privacy notices.
Tip: Run Anthropic in a limited pilot first, with a subset of users and environments, while legal and security finalize their stance.
Quick Steps to Enable Anthropic in Microsoft 365
You need a Global Administrator account to change these settings, and users need Microsoft 365 Copilot licenses to benefit from Anthropic models.
1. Turn on Anthropic in the Microsoft 365 Admin Center
- Go to the Microsoft 365 admin center at https://admin.microsoft.com and sign in as a Global Admin.
- In the left navigation, choose Copilot → Settings.
- Open the Data access page and look for AI providers for other large language models, or the equivalent AI providers or subprocessor page in your tenant.
- Under the LLM providers section for your organization, select Anthropic.
- Review and accept Anthropic‑related Terms and Conditions, then choose Allow provider or turn the toggle On.
- Allow a short propagation period; Copilot’s Researcher and other supported features will then begin using Anthropic where applicable.
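If you want to confirm the change programmatically rather than re-checking the admin UI, the sketch below shows the general shape of such a check. The Graph-style endpoint and the anthropicProviderEnabled property are assumptions for illustration, not a documented API; verify the real path and property names in Microsoft's documentation before relying on anything like this.

```python
# Hypothetical sketch only: the endpoint path and response fields below are
# assumptions, not a documented Microsoft Graph API. The admin center UI
# remains the authoritative check.
import requests

GRAPH_TOKEN = "<access token obtained via your usual auth flow>"
# Hypothetical endpoint; confirm the real path in Microsoft's documentation.
SETTINGS_URL = "https://graph.microsoft.com/beta/copilot/admin/settings"

resp = requests.get(
    SETTINGS_URL,
    headers={"Authorization": f"Bearer {GRAPH_TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
settings = resp.json()

# Hypothetical property name for the Anthropic provider toggle.
if settings.get("anthropicProviderEnabled"):
    print("Anthropic is enabled for Copilot in this tenant.")
else:
    print("Anthropic is not enabled (or the property name differs in your tenant).")
```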
2. Optional – Enable in Copilot Studio / Power Platform
If you build agents in Copilot Studio, you likely want Anthropic available there too.
- Go to the Power Platform admin center at https://admin.powerplatform.microsoft.com.
- For each target environment, enable the setting to allow external large language models for generative responses or external models.
- After Anthropic is on at the tenant level, Anthropic models appear in Copilot Studio’s model picker for orchestration and prompts.
- Makers can then choose the Claude family per agent or prompt, with automatic fallback to your default model if Anthropic is later disabled.
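Copilot Studio records this choice on the agent itself, so there is nothing to code. Purely as an illustration of the decision a maker makes, the sketch below models the per-agent choice and fallback as a plain data structure; it is not Copilot Studio's actual schema, and the model names are placeholders.

```python
# Illustrative only: NOT Copilot Studio's schema, just a plain Python structure
# capturing the per-agent decision a maker records in the UI.
from dataclasses import dataclass

@dataclass
class AgentModelChoice:
    agent_name: str
    preferred_model: str   # e.g. a Claude model, if Anthropic is enabled
    fallback_model: str    # the default model used if Anthropic is disabled

pilot_agents = [
    AgentModelChoice("policy-review-agent", "claude-sonnet-4", "gpt-4o"),
    AgentModelChoice("helpdesk-faq-agent", "gpt-4o", "gpt-4o"),  # stays on default
]

def effective_model(choice: AgentModelChoice, anthropic_enabled: bool) -> str:
    """Mirror the fallback behavior: use the preferred Claude model only while
    Anthropic is enabled at the tenant level."""
    if choice.preferred_model.startswith("claude") and not anthropic_enabled:
        return choice.fallback_model
    return choice.preferred_model
```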
Practical Rollout Tips for Enabling Anthropic
- Start small: Pilot with a single department focused on research, policy analysis, or complex document workflows, then expand as you validate value and controls; a simple gating sketch follows this list.
- Document the toggle: Capture screenshots of your Anthropic settings, data access notes, and legal approvals in your Copilot governance playbook.
- Educate users: Explain when Anthropic is in use, what data it can see via Copilot, and how to handle sensitive content.
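If you also route custom agents or internal tools through Anthropic's API directly, a small allow list keeps the pilot contained. The sketch below is a generic gating pattern for your own tooling; the department and environment names are made up, and none of this is a Copilot or Copilot Studio feature.

```python
# Generic pilot-gating pattern for your own Anthropic-backed tooling.
# Department and environment names are illustrative; this is not a Copilot
# or Copilot Studio feature.
PILOT_DEPARTMENTS = {"Research", "Policy Analysis"}
PILOT_ENVIRONMENTS = {"copilot-studio-pilot"}

def anthropic_allowed(department: str, environment: str) -> bool:
    """Return True only for users and environments inside the pilot scope."""
    return department in PILOT_DEPARTMENTS and environment in PILOT_ENVIRONMENTS

# Expand the pilot by adding departments as legal and security sign off.
assert anthropic_allowed("Research", "copilot-studio-pilot")
assert not anthropic_allowed("Finance", "copilot-studio-pilot")
```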
A focused pilot plus clear governance gives you the upside of Anthropic’s reasoning strengths while keeping your compliance and security teams comfortable.

