Why this guide exists
Most comp teams I talk to have been told they can't use Claude with employee data.
Usually the answer comes from IT or legal, and it sounds like a no. The concern is real. Pay rates, performance ratings, demographic breakdowns, exit packages, equity holdings. Some of the most sensitive data your company holds runs through your team's hands every cycle.
The first guide we shipped, Claude Cowork for Compensation, included a chapter called Code Not Data. It walked through one safe route: get Claude to write the analysis code and run it locally on your machine, so your data never leaves your laptop. That route still works and it's still my recommendation for individuals who can't get an enterprise route approved.
But two things changed in April 2026 that materially expand the options for comp teams. Claude is now available inside Microsoft 365 Copilot. And Anthropic launched Claude Cowork inside Amazon Bedrock. For any company already on Microsoft 365 or AWS, the conversation with IT just got dramatically simpler.
This guide explains all the safe routes, when each one is the right answer, what governance you inherit on each path, and the risks that don't disappear no matter which route you pick.
A practitioner field guide for the comp lead sitting between an IT team saying no and a CFO asking why AI hasn't transformed the function yet. Compliance manuals belong elsewhere.
What sensitive comp data actually means
Before we talk about safe routes, let's name what we are protecting.
Compensation data is not all the same. Some of it is mildly sensitive. Some of it is special category data under GDPR, regulated under the EU AI Act, and material non public information under securities law. The risk profile depends on the field.
Personal data
GDPR Article 4(1). Anything that identifies an individual: employee ID, name, email, location, job title at small headcounts. Most comp datasets contain this by default.
Special category data
GDPR Article 9. Health, ethnicity, religion, trade union membership, sexual orientation. Pay equity analysis often requires this. The legal basis for processing is narrower and the protections are stricter.
Pay data
Base salary, total cash, equity, bonus, deferred comp. Not always classified as special category data, but treated as confidential by every reasonable employer and frequently subject to works council consultation requirements in Europe.
Performance and ratings
Calibration outputs, performance ratings, succession ratings, flight risk flags. Sensitive because they often correlate with protected characteristics and because they drive pay decisions.
Material non public information
Executive compensation in development, board pack scenarios, IPO grant pools, M&A retention designs. If leaked, this can move share prices and end careers.
Identifiable analysis output
Even when the input data has been anonymised, small group aggregations can re-identify individuals. "The 3 female directors in Market X" is not anonymous. "The two senior engineers above £180k in our Berlin office" is not anonymous either.
The point: when this guide talks about safe routes, what we mean is keeping all of the above inside your company's existing governance perimeter, away from consumer AI services that sit outside any contract you've signed.
The three safe routes
There are three legitimate ways for a comp team to use Claude on sensitive employee data in 2026. Pick the one that matches your situation.
Code Not Data
Claude writes Python. You run it locally. Your data never reaches Anthropic's servers.
When: you're an individual with a laptop and no enterprise contract.
Anthropic Enterprise
Direct contract with Anthropic. DPA, no training on your data, single sign on, audit logs, team management.
When: you're not on AWS or Microsoft, or you want the full Claude feature surface.
Inside your existing platform
Claude on AWS Bedrock or inside Microsoft 365 Copilot. Same DPA your company already signed.
When: your company is already on AWS or Microsoft 365. The fastest unblock.
Route A: Code Not Data (local script approach)
What it is: Claude writes Python code, you run it on your machine, your data never reaches Anthropic's servers. The chapter from the original Cowork guide explains this in depth.
When it's the right answer: you're an individual practitioner who hasn't been able to get an enterprise route approved. You're working alone. You have a laptop and a CSV.
What it gives you: maximum data isolation. Zero employee records exposed to any AI provider. No new contracts required.
What it costs you: it's a single user route. You can't share workflows easily across the team. You need to be comfortable enough with the workflow to review the code Claude writes before running it.
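For readers who haven't seen the original chapter: the scripts involved are ordinary pandas code you can read before you run. A minimal sketch, assuming a hypothetical local file comp_data.csv with base_salary, gender, and level columns:

```python
# A minimal sketch of the kind of script Claude writes under the
# Code Not Data route. File name and columns are hypothetical.
import pandas as pd

df = pd.read_csv("comp_data.csv")

# Median base salary by level and gender, computed entirely on your machine.
summary = (
    df.groupby(["level", "gender"])["base_salary"]
      .median()
      .unstack("gender")
)
print(summary)
```

The point of reviewing the script first is that you can see exactly what it touches: one local file in, one printed table out, nothing sent anywhere.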
Route B: Anthropic Enterprise plan
What it is: a direct contractual relationship between your company and Anthropic. Includes a Data Processing Agreement, no training on your data, single sign on (so people log in with their work account), audit logging, and team management.
When it's the right answer: your company isn't on AWS or Microsoft 365 (or doesn't want to use Bedrock or Copilot for this). You want the deepest set of Claude features (full Cowork, Skills, MCP integrations) without going through a cloud platform.
What it gives you: a clean, purpose built contract specifically for Claude. The full feature surface. Anthropic's own admin console.
What it costs you: a separate procurement process and a separate contract for IT to manage. Most enterprise procurement cycles take three to six months.
Route C: Claude inside the platform you already use
This is the new bit. As of April 2026, Claude is available inside two enterprise platforms most companies are already on.
Claude in Amazon Bedrock. If your company runs on AWS, Claude Cowork can now run inside your existing AWS account. The next section walks through the detail because for most comp teams reading this, it's the fastest path through.
Claude in Microsoft 365 Copilot. If your company is on Microsoft 365 with Copilot licensing, Claude is now available as one of the model options inside Copilot. The data governance, audit, and compliance posture is the one Microsoft already provides for Copilot.
When it's the right answer: your company is already on AWS or Microsoft 365. Your IT team has already done the heavy lifting on the underlying contract, the DPA, the security review, the audit logging, and the access controls. You're not asking IT to onboard a new vendor. You're asking them to enable a feature inside a vendor they've already approved.
What it gives you: the fastest path from blocked to approved. You inherit the entire governance scaffolding your company already runs on AWS or Microsoft. Same DPA, same identity provider, same audit logs, same region controls.
What it costs you: you're constrained to the feature surface that your cloud platform exposes. Some Claude capabilities (like every MCP server under the sun) may not be available through Bedrock or Copilot in the same shape they are through Anthropic Enterprise. That said, for the vast majority of comp workflows, the available surface is more than enough.
What just changed in April 2026
These two announcements point in the same direction, and together they collapse most of the legitimate IT objections to comp teams using Claude on real data.
Microsoft 365 Copilot now hosts Claude
Microsoft and Anthropic agreed to make Claude available inside Microsoft 365 Copilot. For comp teams in companies on Microsoft 365 with Copilot, this means Claude is now one of the model options users can pick when they invoke Copilot inside Word, Excel, Outlook, Teams, or Copilot Chat.
The governance posture is whatever your company's existing Copilot deployment looks like. If your IT team uses Microsoft Purview (the Microsoft compliance toolset) to label and protect sensitive data, that still applies. If they have rules that limit who can use Copilot from where, those still apply. If audit logs already flow into your Microsoft compliance centre, those still capture Claude usage. Microsoft's existing Data Processing Agreement covers it.
Claude Cowork on Amazon Bedrock
Anthropic launched Claude Cowork inside Amazon Bedrock. This is the bigger move for comp teams, and it's where the rest of this guide focuses.
In plain language: a comp team in an AWS shop can now run the Cowork experience (the autonomous agent, the file handling, the local script execution, the integrations) inside their company's own AWS account. Your data stays in your AWS environment. AWS doesn't train on it. Access goes through the same login system (IAM, Identity and Access Management) your IT team already manages. Every prompt and response leaves an audit trail in AWS CloudTrail (the running record of who did what inside AWS). You pick the AWS region your data sits in. There's no separate seat licence from Anthropic. It rolls into your existing AWS billing.
Picture this
Before April 2026: comp teams had to choose between local workarounds, a long Anthropic procurement, or shadow AI.
Now: same DPA, same identity, same audit logs, same region controls, no new vendor.
The comp lead who was blocked last month now has a route that fits inside the governance their company has already approved. The shadow AI risk hasn't moved (people can still paste pay data into a personal ChatGPT account on their phone). But the legitimate route is now inside the same fence as everything else IT already trusts.
Claude on AWS Bedrock
This is the route most likely to get a comp team unblocked fast in 2026, so it's worth understanding in detail.
What Bedrock actually is, in plain language
Amazon Bedrock is a service inside AWS that lets your company use AI models through its existing AWS account. Claude is one of the models available. Claude Cowork is the Anthropic desktop app that, as of April 2026, can send all its model requests through Bedrock instead of Anthropic's own servers.
Translation for non technical readers: instead of your prompts going to Anthropic directly, they go through your company's AWS account first. AWS handles the security, the billing, and the audit. Anthropic handles the model. Your IT team already has a contract with AWS. They don't need a new one with Anthropic.
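For the technically curious, here's what "going through your company's AWS account" looks like as code: a minimal sketch using boto3's Converse API. Cowork makes this call for you; the region and model ID below are placeholders for whatever your IT team has enabled.

```python
import boto3

# The request goes to your company's AWS account, in the region IT chose.
bedrock = boto3.client("bedrock-runtime", region_name="eu-central-1")  # placeholder region

response = bedrock.converse(
    modelId="anthropic.claude-sonnet-4-20250514-v1:0",  # placeholder: use the ID IT enabled
    messages=[{
        "role": "user",
        "content": [{"text": "Summarise our pay range policy in three bullets."}],
    }],
)

print(response["output"]["message"]["content"][0]["text"])
```

Nothing in that call involves Anthropic's servers or a new contract, which is why IT can treat it like any other AWS service.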
What governance you inherit
When you run Claude Cowork through Bedrock, you inherit the AWS posture your company has already set up.
- No training on your data. Contractual.
- Region pinning. EU, US, APAC. Works council friendly.
- Identity based access. Single sign on, multi factor authentication, role scoped through the AWS login system.
- CloudTrail audit. Every prompt logged where IT already looks (the AWS audit log).
- Private network. Prompts can run over AWS's private network so they never travel the public internet.
- Billing in AWS. No new seat licence. Consumption based.
1. No training on your data. Amazon Bedrock does not store prompts, files, tool inputs and outputs, or model responses, and does not use them to train its AI models. The contract states this explicitly. It applies to every model on Bedrock, including Claude.
2. Data residency. You choose the AWS region (the geographic location of the AWS data centre) where your data is processed. If your works council says employee data must stay in the EU, IT can pin Bedrock usage to a Frankfurt or Dublin region.
3. Identity and access. Access to Claude through Bedrock goes through your company's existing AWS login system (IAM) or shared system credentials issued by IT. Your IT team can scope access by user, by role, by team. They can require multi factor authentication. They can integrate with whatever identity provider your company uses.
4. Audit logging. Every time someone uses Claude through Bedrock, the call is logged in AWS CloudTrail. Who used Claude, when, from which application, and (if your IT team has switched on prompt logging) the contents of the prompt. This is the same record keeping your IT team already watches for every other AWS service (a short review sketch follows this list).
5. Network containment. Bedrock traffic can be routed entirely over AWS's private network (using a feature called PrivateLink) so your data and your prompts never travel across the public internet. They stay inside the AWS network your company already trusts.
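To make the audit point (item 4) concrete, here's a minimal sketch of the kind of weekly review query IT could run against CloudTrail with boto3. The region is a placeholder; in practice this sits with IT, not the comp team.

```python
import boto3
from datetime import datetime, timedelta, timezone

cloudtrail = boto3.client("cloudtrail", region_name="eu-central-1")  # placeholder region

# Every Bedrock call in the last seven days: who, when, what.
events = cloudtrail.lookup_events(
    LookupAttributes=[{
        "AttributeKey": "EventSource",
        "AttributeValue": "bedrock.amazonaws.com",
    }],
    StartTime=datetime.now(timezone.utc) - timedelta(days=7),
    EndTime=datetime.now(timezone.utc),
)

for event in events["Events"]:
    print(event["EventTime"], event.get("Username", "unknown"), event["EventName"])
```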
What this means commercially
A few practical points worth knowing before the conversation with IT or finance:
- Billing rolls into AWS. No separate seat licence from Anthropic. You pay for what you use, billed alongside your other AWS spend. Your finance team already has a process for AWS budget approvals.
- No new vendor onboarding. Procurement, security review, DPA negotiation. Already done for AWS. Adding Bedrock is enabling a service inside an account that already exists.
- Consumption based pricing. You pay for what you use. This makes pilots easy. A small team can experiment for a hundred dollars before deciding whether to expand.
Three concrete questions to ask IT this week
If your company is on AWS, send your IT lead these three questions. Frame the email as a conversation, not a demand.
1. Is Amazon Bedrock enabled in our AWS account, or can it be enabled?
2. Can the rewards team get Bedrock access for a defined set of use cases (pay equity analysis, market data summaries, calibration prep, cycle communications)?
3. What's the audit and approval process you'd want us to follow if we proceed?
If the answer to question 1 is yes, you are likely two weeks away from being unblocked. If it's no but it can be enabled, expect a few weeks for security review. If it's a hard no, the next section on Microsoft 365 Copilot or Anthropic Enterprise becomes your route.
What unlocks for comp when this is approved
Specific workflows that become safe to run on real employee data once you have Bedrock access:
- Pay equity analysis with regression. Upload anonymised data, ask Claude to write the regression script, run it through Bedrock (see the sketch after this list). You can let Bedrock process the data because it's inside your AWS account, your DPA, your region.
- Market data triangulation. Pull data from multiple survey providers, ask Claude to reconcile job mappings and surface anomalies, get a written summary back inside your audit trail.
- Calibration pack generation per business unit. Feed Claude the calibration framework and the leadership data, get back a draft pack per BU. Review, refine, send.
- Cycle communication drafts at scale. Manager letters, HRBP briefings, employee FAQs. Generated, reviewed, sent. Every prompt logged in CloudTrail.
- Manager FAQ chatbots. Build a Bedrock backed chatbot that answers manager questions from your comp guidelines. Deploy inside your company. The audit trail covers every interaction.
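To make the first workflow concrete, here's a minimal sketch of the kind of regression script Claude typically writes for an adjusted pay gap. The file name and column names (base_salary, gender, level, job_function, tenure_years) are hypothetical; it assumes the pandas and statsmodels libraries.

```python
import numpy as np  # used inside the regression formula below
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("anonymised_comp.csv")  # hypothetical anonymised extract

# Log-salary regression controlling for level, function, and tenure.
# The gender coefficient is the adjusted pay gap estimate.
model = smf.ols(
    "np.log(base_salary) ~ C(gender) + C(level) + C(job_function) + tenure_years",
    data=df,
).fit()
print(model.summary())
```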
The unifying theme: you can start running the workflows that the original Cowork guide described, but on real data, with full audit, inside your existing security perimeter.
Claude inside Microsoft 365 Copilot
If your company runs on Microsoft 365 with Copilot licensing, the picture is similar in shape but different in detail.
What changed
Microsoft and Anthropic agreed to make Claude available as one of the model options inside Microsoft 365 Copilot. When users invoke Copilot inside Word, Excel, Outlook, Teams, or Copilot Chat, they can now choose between Microsoft's default models and Claude. Whichever they pick, the request goes through Microsoft's existing Copilot infrastructure, not directly to Anthropic.
What governance you inherit
The same governance scaffolding your company has already set up for Copilot applies to Claude requests. The five things that matter:
1. No training on your data. Microsoft's Copilot terms commit to not using your prompts or responses to train its AI models. This applies whether Copilot routes the request to a Microsoft model or to Claude.
2. Data residency and tenant isolation. Your prompts, responses, and reference data stay inside your company's own Microsoft 365 environment (what Microsoft calls a tenant, your protected slice of the cloud). The data residency commitments your company already negotiated with Microsoft apply.
3. Identity and access. People log in through Microsoft Entra ID (formerly Azure AD), the same Microsoft account system your company already uses. Your IT team can scope Copilot access by user, group, role, or by the existing access rules they have set on Microsoft 365.
4. Audit logging. Copilot interactions are captured in Microsoft Purview's audit logs. The same compliance team that already monitors Copilot usage can monitor Claude usage. There is nothing new to set up.
5. Sensitivity labels and data loss prevention. If your IT team has set up sensitivity labels (the tags Microsoft uses to mark documents as confidential or higher) on pay or performance documents, those labels still apply when Copilot operates on the document. Data loss prevention rules (the rules that block sensitive content from being shared outside the company) still trigger.
When this is the right route
The Copilot path is the right route when your company is already on Microsoft 365 Copilot, your IT team has already deployed it, and your governance posture is built around the Microsoft compliance stack. For comp teams in many enterprises (especially in regulated industries that standardised on Microsoft years ago), this is by far the fastest path because it requires no new approvals at all.
When this is not the right route
The Copilot route gives you Claude inside Microsoft applications. It does not give you the full Cowork experience (autonomous agent, file system access, local script execution, MCP integrations) the same way Bedrock or Anthropic Enterprise does. If you need those capabilities, the Copilot path is a complement to the others, not a substitute.
Three questions to ask IT if you're a Microsoft shop
- Is Microsoft 365 Copilot deployed for our team, with the latest model options including Claude?
- Can the rewards team get Copilot access for our comp workflows?
- What sensitivity labels and data loss prevention rules apply to comp data, and how do they interact with Copilot?
Things that don't get safer
Three safe routes are now open. None of them removes the risks below. Read this section twice.
Shadow AI
Personal accounts still sit outside every audit log. Policy beats tooling here.
Small group re-identification
"3 female directors in Munich" is not anonymous. Keep n ≥ 10 in any output.
Error message leakage
Error messages often contain sample data values. Strip them before pasting back.
Connections to other tools
Each connector is a doorway. Scope read only and limit it to specific folders.
Audit logs without review
Logs nobody reads are not a control. Agree review cadence with IT.
EU AI Act high risk
Annex III still applies. Document AI output, human review, and decisions.
Shadow AI
The single largest data risk for comp teams in 2026 sits outside the legitimate routes. It's the personal Pro account a colleague is using on their own laptop with company employee data pasted in. None of the enterprise routes can see this. None of the audit logs catch it. The data is gone.
The fix lives in policy and culture rather than tooling. Write a one page rule that says compensation data is processed only inside [list the approved routes], and back it up with a team culture where people understand why. Asking IT to enable Bedrock is the easy part. Asking your peers to stop using personal accounts is the harder one.
Small group re-identification
Even when the input data is anonymised, small group aggregations can still identify individuals. If your output says "the gender pay gap among our 4 senior PMs in Munich is 22%", anyone who knows the company can identify the people involved. Bedrock and Copilot won't help with this one. It belongs in your analysis discipline, before any output leaves the room.
The fix: keep group level analysis above a minimum threshold (typically n=10 or higher) when sharing outputs, even with the AI. If you need to discuss small group patterns, do it locally on your machine without sending the result back to the model.
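Here's one way to enforce that threshold mechanically, as a minimal pandas sketch with hypothetical column names, so small cells are suppressed before any table leaves your machine:

```python
import pandas as pd

MIN_GROUP_SIZE = 10  # the suppression threshold discussed above

df = pd.read_csv("comp_data.csv")  # hypothetical columns: office, gender, base_salary

summary = df.groupby(["office", "gender"])["base_salary"].agg(["count", "median"])

# Blank out any cell built from fewer than MIN_GROUP_SIZE people
# before the table is shared with anyone, including the AI.
summary.loc[summary["count"] < MIN_GROUP_SIZE, "median"] = None
print(summary)
```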
Error messages
When code fails and you paste the error back into Claude, the error message often contains sample data values, file paths, or partial records. Even with Bedrock processing the data inside your AWS account, the error you paste back into the conversation is still a prompt that gets logged. That can be fine inside your company perimeter. It can also turn up as a finding in a future audit if those samples include identifying values.
The fix: review error messages before sharing them with Claude. Strip data values out. Share the shape of the error, not the contents.
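A rough helper for that, sketched in Python: it masks the obvious value shapes (emails, quoted strings, numbers) before you paste an error back. It's a heuristic, not a guarantee, so still read the result by eye.

```python
import re

def redact_error(message: str) -> str:
    """Strip likely data values from an error message before sharing it."""
    message = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "<EMAIL>", message)  # emails
    message = re.sub(r"'[^']*'|\"[^\"]*\"", "<VALUE>", message)       # quoted values
    message = re.sub(r"\d[\d,.]*", "<NUM>", message)                  # numbers
    return message

# Example: the shape of the error survives, the contents don't.
print(redact_error("KeyError: 'base_salary' for employee 10482, j.doe@example.com"))
```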
Connections to other tools (MCP servers and extensions)
Cowork can connect to other tools through what Anthropic calls MCP servers (Model Context Protocol, the standard plug socket Claude uses to talk to other apps). Google Drive, Notion, Slack, GitHub, your HRIS, your survey provider. Each connection is a doorway. Each doorway has permissions. A misconfigured permission can let Claude read more than you intended.
The fix: when you set up these connections, scope each one to read only by default, point it at specific folders or projects, and check what it can actually see. Treat each new connection the way you would treat a new SaaS vendor.
Audit logs are only useful if someone reads them
CloudTrail and Purview both produce comprehensive logs. Logs without review are theatre. If something goes wrong, your audit trail will show that something happened, but only if someone has been actively monitoring or knows what to look for.
The fix: agree with IT in advance what review cadence makes sense (weekly, monthly, on incident). Agree what the threshold for escalation is. Agree who reads the logs. "We have audit logs" is not, on its own, an answer to a security question.
EU AI Act classification doesn't move
If your comp use of Claude touches employment related decisions (hiring, promotion, performance ratings, pay outcomes), it falls under the EU AI Act's high risk classification in Annex III. Neither Bedrock nor Copilot changes the classification. What does change: a high risk system requires human oversight, transparency to data subjects, risk management documentation, and post market monitoring.
For most comp practitioners, this means: document what the AI produced, document what a human reviewed, document what decisions the analysis informed. The bounded autonomy thesis from the original guide (AI amplifies judgment, never replaces it) is now a regulatory requirement in the EU. The good news: if you were operating responsibly already, you're 80% of the way there. The remaining 20% is the documentation.
The minimum governance setup
Whichever route you take, the following six elements are the floor your company needs in place before processing real comp data through Claude. If any of these are missing, work with IT to fix them before you start.
- A Data Processing Agreement. AWS, Microsoft, and Anthropic Enterprise all provide one. Confirm with IT or legal which one applies to the Claude route you've chosen.
- Identity based access control. Pay data should never be accessible without an authenticated login (single sign on, multi factor authentication). Shared system credentials passed around the team are a finding waiting to happen.
- Audit logging enabled and routed somewhere it gets reviewed. Logs that nobody looks at are not a control.
- A written acceptable use policy. One page. What data can be processed, on which route, by whom, with which review steps. Plain language. Reviewed quarterly.
- Data classification awareness. If your company has data classification levels (Public, Internal, Confidential, Restricted), comp data is usually Confidential or higher. The classification dictates which routes are acceptable.
- A response plan for incidents. What do you do if a colleague pastes data into a personal account by mistake? Who do you tell? How do you document it? Five lines is enough. Don't wait for the incident to write the plan.
A note for smaller companies: if you're a 200 person scale up without a formal compliance function, this can feel disproportionate. The AWS or Microsoft contract already gives you most of items 1 to 3. Items 4 to 6 you can write in an afternoon. The cost of having them is low. The cost of skipping them when something goes wrong is high.
When you're not on AWS or Microsoft
Some companies are on Google Cloud. Some are on a smaller cloud. Some are on a mix. Some haven't deployed Copilot. The two enterprise platform routes don't help here. You still have options.
Anthropic Enterprise
A direct contract with Anthropic. DPA, no training on your data, single sign on, audit logging, team management. The most direct way to get the full Claude experience inside a company that doesn't want to go through a cloud provider. Procurement timeline: typically three to six months in enterprise environments.
The local script route from the original guide
If you can't get any enterprise contract approved (or don't want to wait for one), the Code Not Data approach from the original Cowork guide still works. Claude writes the analysis script, you run it locally, your data never leaves your laptop. This is a single user route, not a team route. But it is real, and it is safe.
Local models
For environments where even basic usage information sent to an external AI provider is unacceptable (defence sector, certain regulated finance functions, high sensitivity executive comp design), running an open source AI model on your own hardware is an option. You sacrifice capability (locally hosted models are not as strong as Claude on complex reasoning) for full data isolation.
A combined approach
In practice, many comp teams end up combining routes: Bedrock for the routine analysis on real data, local scripts for the most sensitive executive scenarios, Copilot for everyday document drafting, and Anthropic Enterprise for the one or two power users who want the full feature surface. A single route for everything rarely fits how a real comp team works.
How to ask IT this week
Three short scripts you can paste into Slack or email to start the conversation. Pick the one that matches your situation.
For an AWS shop:
Hi [IT lead], the rewards team is exploring options to use Claude AI safely on compensation data. Anthropic launched Claude Cowork inside Amazon Bedrock in April. As I understand it, that means Claude requests would run inside our existing AWS account (so your existing DPA, IAM, CloudTrail, and region controls apply, no new vendor needed). Three quick questions:
- Is Bedrock enabled in our AWS account, or can it be enabled?
- Could the rewards team get Bedrock access for a defined set of use cases (pay equity analysis, market data summaries, calibration prep, cycle communications)?
- What audit and approval process would you want us to follow if we proceed?
Happy to set up 15 minutes if that's easier than email. There's a guide written for non technical comp teams that explains the setup if useful: range.community/guides/cowork-compensation-data.
For a Microsoft 365 shop:
Hi [IT lead], the rewards team is exploring options to use Claude AI safely on compensation data. Microsoft and Anthropic agreed to make Claude available inside Microsoft 365 Copilot in April. As I understand it, Claude requests through Copilot would run inside our existing Microsoft 365 environment (so our existing data residency, Entra ID login, Purview audit logs, and sensitivity labels apply, no new vendor needed). Three quick questions:
- Is Microsoft 365 Copilot deployed for our team, with the latest model options including Claude?
- Could the rewards team get access for our comp workflows?
- How do our sensitivity labels and data loss prevention rules interact with Copilot, and is there anything specific we should configure for comp data?
Happy to set up 15 minutes if that's easier. There's a guide for non technical comp teams: range.community/guides/cowork-compensation-data.
For a company on neither platform:
Hi [IT lead], the rewards team wants to use Claude AI on compensation data, safely. Three options I'm aware of:
- Anthropic Enterprise. Direct contract, full feature set.
- The local script approach. Claude writes Python, we run it locally, data never leaves the machine.
- Local models. For maximum data isolation, no external AI provider involved.
Could we book 30 minutes to discuss which route fits our governance posture best? I've read the practitioner guide at range.community/guides/cowork-compensation-data and have specific use cases in mind.
One last thing
If you take one thing from this guide, take this.
The blockers comp teams face in 2026 are mostly institutional. Procurement cycles, risk reviews, the absence of a written policy. The technology is ready. The institutional scaffolding takes longer. Your IT team is doing their job when they ask hard questions about employee data flowing through AI. The work is to give them an answer that fits inside the governance posture your company already has.
For most comp teams in companies on AWS or Microsoft 365, that answer is now available. Bedrock or Copilot. Same DPA, same identity, same audit, same region. You're not asking for an exception. You're asking to use a service that is already approved.
For comp teams in companies that aren't on either, the original guide's local script approach still works. Anthropic Enterprise still works. The work to get a clean route is real, and the timeline is months, not years.
The bounded autonomy principle from the first guide still holds. AI amplifies judgment, never replaces it. You can't fire and forget pay decisions, no matter how clean the governance. The judgment is what makes you a compensation professional.
Run the workflow, review the output, and own the decision.
The AI handles the mechanics so you can spend more time on the work that actually needs you.