Michael Burry — the investor who famously shorted the housing market ahead of the 2008 crisis — is now calling the enterprise AI market. His thesis: Anthropic is eating Palantir's lunch. Seventy-three percent of new enterprise AI spending is going to Anthropic. One in four businesses on Ramp now pays for Anthropic, up from one in twenty-five a year ago.
The market is listening. Palantir's stock is under pressure. The narrative is that enterprises are abandoning platform-mediated AI in favor of direct foundation model access.
That narrative is correct. What it misses is what enterprises are losing in that transition — and what they need to replace it with.
Palantir's AIP wasn't just a wrapper around GPT or Claude. It was a governance layer. It defined what the AI was supposed to do. It recorded whether it did it. It provided a mechanism when it didn't.
That governance layer was bundled into the platform fee. Enterprises paid Palantir's premium because they got workflow orchestration, audit logging, and accountability infrastructure in a single package. The model was abstracted behind the platform.
When enterprises move from Palantir to direct Anthropic access, they get the model. They lose the governance wrapper.
Direct foundation model access is cheaper. It's more flexible. It lets engineering teams build exactly what they need without platform constraints. These are real benefits.
But every enterprise that made this transition now faces questions that Palantir used to answer:
What is this agent authorized to do? Palantir's workflow definitions created explicit scope. Direct model access has no inherent scope — the agent does whatever the prompt tells it to do.
What happened during execution? Palantir logged everything into a platform audit trail. Direct model access produces API call logs, not governance evidence.
What happens when something goes wrong? Palantir had support contracts and escalation paths. Direct model access has a Discord and a status page.
These aren't edge cases. These are the questions every enterprise legal and compliance team asks before deploying AI on sensitive work.
There's a financial markets pattern that makes this concrete.
SWIFT is the global messaging network for bank transactions. It commoditized wire transfer routing. Any bank can send money to any other bank through the same infrastructure.
ISDA is the International Swaps and Derivatives Association. It publishes the Master Agreement that governs what happens when a derivatives transaction goes wrong — who owes what, how disputes are resolved, what triggers a close-out.
SWIFT commoditizing the routing layer didn't eliminate the need for ISDA. It increased it. The more transactions that flowed through the pipes, the more critical it became to have a standard framework for governing what those transactions meant.
Anthropic commoditizing the model layer doesn't eliminate the need for governance. It increases it. Every enterprise that moved from Palantir's governed platform to direct Claude access needs to replace the governance layer it lost.
The governance layer that works across any model, any platform, any provider has four components:
A bilateral service agreement. Before the agent starts work, both parties agree on what success looks like. Scope defined. Criteria locked. No moving goalposts after delivery.
An immutable audit record. Every action the agent takes, every tool it calls, every input it receives — recorded contemporaneously, hash-chained, verifiable by anyone with the hash. Not a log. A Trace.
Structured dispute resolution. When something goes wrong, a framework for resolving it with evidence. Not a support ticket. Not a public argument. A documented process with escalation paths proportional to the stakes.
Human accountability. Every agent delegation chain terminates at a verified human responsible party. The chain of accountability cannot end at a model.
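The audit record component is the most mechanical of the four, and it can be sketched in a few lines. This is an illustrative hash-chain in Python, not the actual Trace implementation: each entry's hash covers its payload plus the previous entry's hash, so altering any recorded action invalidates every hash after it. The `Trace`, `append`, and `verify` names here are hypothetical.

```python
import hashlib
import json

GENESIS = "0" * 64  # fixed starting hash for an empty chain

def _digest(payload: dict, prev_hash: str) -> str:
    # Canonical JSON + previous hash -> this entry's hash.
    body = json.dumps(payload, sort_keys=True) + prev_hash
    return hashlib.sha256(body.encode()).hexdigest()

class Trace:
    """Append-only, hash-chained record of agent actions (illustrative)."""

    def __init__(self):
        self.entries = []  # list of (payload, entry_hash) tuples

    def append(self, payload: dict) -> str:
        prev = self.entries[-1][1] if self.entries else GENESIS
        entry_hash = _digest(payload, prev)
        self.entries.append((payload, entry_hash))
        return entry_hash

    def verify(self) -> bool:
        # Recompute every hash from the genesis value; any tampering
        # with an earlier payload breaks all subsequent hashes.
        prev = GENESIS
        for payload, entry_hash in self.entries:
            if _digest(payload, prev) != entry_hash:
                return False
            prev = entry_hash
        return True
```

Anyone holding the final hash can detect after-the-fact edits: rewrite any recorded action and `verify()` fails, which is the difference between a log (mutable by whoever hosts it) and governance evidence.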
This is what Palantir's governance wrapper provided in bundled form. This is what enterprises need in modular form when they unbundle.
The enterprises that just moved from Palantir to direct Anthropic access are discovering the governance gap in real time. Their legal teams are asking questions that their engineering teams can't answer. Their compliance officers are looking for audit trails that don't exist.
The infrastructure to close that gap exists today. The Standard AI Service Agreement provides the bilateral commitment. Trace provides the audit record. The dispute resolution framework provides the escalation paths.
The model layer is commoditizing. That's good — it means more enterprises can deploy AI at lower cost.
The service agreement layer remains. Every AI agent needs one. Regardless of which model it runs on.
Every AI agent needs a contract.
exact.works →