When something goes wrong with an AI agent transaction — and something will go wrong — what happens next?
In most current deployments, the answer is: nothing structured. The buyer complains. The provider disputes the complaint. Someone absorbs the loss. Everyone moves on, frustrated.
This is not how any other professional services industry operates. And it's not how the AI agent economy should operate as stakes increase.
Today's AI agent disputes typically follow one of three patterns:
Silent absorption — the buyer eats the cost of a failed engagement because there's no structured way to raise a dispute. This suppresses adoption. Enterprise buyers won't commit budget to services with no recourse.
Public shaming — a buyer takes their complaint to social media, hoping public pressure will produce a resolution. This damages both parties and provides no systematic remedy.
Platform discretion — the marketplace (if there is one) makes a judgment call based on incomplete information, with no documented methodology and no precedent system.
None of these scale. None of them produce consistent outcomes. None of them generate the kind of institutional trust that enterprise procurement requires.
A proper dispute resolution framework for AI agent transactions requires several things that don't currently exist in the ecosystem:
Evidence-based assessment. Disputes must be resolved against a documented record — an audit trail that shows what was agreed, what was produced, and where the gap exists. This is what Trace provides.
Graduated remedies. Not every dispute is binary. A deliverable that's 90% compliant is not the same as one that's 10% compliant. The remedy should be proportional. Our Dispute Resolution Rules (DRR) use weighted criterion-by-criterion assessment; a code sketch after this list illustrates the idea.
Escalation paths. Simple disputes should resolve quickly and automatically. Complex disputes need human expertise. Critical disputes may require binding arbitration. A single-tier system can't handle all three.
Structural neutrality. The dispute resolver cannot be the same entity that profits from the transaction. exact.works' role as infrastructure provider rather than marketplace makes that neutrality structural: the resolver has no revenue stake in the transactions it adjudicates.
Sealed methodology protection. AI providers must be confident that raising a dispute won't expose their proprietary methodology. The SAISA's sealed methodology provisions carry through the entire dispute process.
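To make the evidence and remedy mechanics concrete, here is a minimal TypeScript sketch of weighted criterion-by-criterion assessment with proportional remedies and tier routing. Everything in it is illustrative: the interfaces, weights, dollar thresholds, and routing rules are assumptions for exposition, not the actual Trace record format or the DRR v1.0 triggers.

```typescript
// All shapes and numbers below are hypothetical, for illustration only.

// An evidence record in the spirit of Trace: what was agreed, what was
// produced, and a per-criterion assessment of the gap between them.
interface Criterion {
  id: string;
  weight: number; // relative importance; weights sum to 1.0
  met: number;    // assessed compliance for this criterion, 0.0 to 1.0
}

interface AuditRecord {
  agreementHash: string;   // hash of the agreed statement of work
  deliverableHash: string; // hash of what was actually delivered
  criteria: Criterion[];
}

interface Dispute {
  amountUsd: number;
  contested: boolean; // true if the provider rejects the buyer's assessment
  record: AuditRecord;
}

type Tier =
  | "automated-review"
  | "panel-review"
  | "expert-determination"
  | "mediation"
  | "binding-arbitration";

// Weighted compliance: the sum of weight * met across all criteria.
function complianceScore(criteria: Criterion[]): number {
  return criteria.reduce((acc, c) => acc + c.weight * c.met, 0);
}

// Proportional remedy: refund the non-compliant fraction of the fee.
function proportionalRefund(amountUsd: number, score: number): number {
  return Math.round(amountUsd * (1 - score) * 100) / 100;
}

// Illustrative routing only; these thresholds are invented, not DRR triggers.
function route(d: Dispute): Tier {
  const score = complianceScore(d.record.criteria);
  if (!d.contested && score >= 0.5) return "automated-review";
  if (d.amountUsd < 10_000) return "panel-review";
  if (d.amountUsd < 100_000) return "expert-determination";
  return "binding-arbitration"; // mediation could precede this in practice
}

// A mostly compliant deliverable yields a small, proportional refund.
const dispute: Dispute = {
  amountUsd: 5_000,
  contested: false,
  record: {
    agreementHash: "sha256:example",
    deliverableHash: "sha256:example",
    criteria: [
      { id: "accuracy",     weight: 0.5, met: 1.0 },
      { id: "completeness", weight: 0.3, met: 0.8 },
      { id: "timeliness",   weight: 0.2, met: 0.5 },
    ],
  },
};

const score = complianceScore(dispute.record.criteria);    // ≈ 0.84
console.log(proportionalRefund(dispute.amountUsd, score)); // 800
console.log(route(dispute));                               // "automated-review"
```

The point is not these particular numbers but the shape: a documented record in, a proportional remedy out, and routing that depends on how contested and how large the dispute is.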
The exact.works DRR v1.0 is a 13-article, 6-appendix framework that provides five tiers of resolution: automated review, panel review, expert determination, mediation, and binding arbitration.
Each tier has defined triggers, timelines, and cost allocation. Escalation is available but not required. Most disputes are designed to resolve at the first or second tier.
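As a sketch of what "defined triggers, timelines, and cost allocation" could look like in machine-readable form, here is a hypothetical tier table. The five tier names come from the DRR; every trigger, deadline, and cost rule below is a placeholder invented for illustration, not the published schedule.

```typescript
// Tier names are from the DRR; all values are invented placeholders.
interface TierSpec {
  trigger: string;                            // what moves a dispute into this tier
  targetDays: number;                         // hypothetical decision timeline
  costs: "platform" | "split" | "loser-pays"; // who pays for the tier
  binding: boolean;                           // is the outcome final?
}

const tiers: Record<string, TierSpec> = {
  "automated-review": {
    trigger: "any filed dispute",
    targetDays: 2,
    costs: "platform",
    binding: false,
  },
  "panel-review": {
    trigger: "a party rejects the automated outcome",
    targetDays: 10,
    costs: "split",
    binding: false,
  },
  "expert-determination": {
    trigger: "technical merits remain contested",
    targetDays: 20,
    costs: "split",
    binding: false,
  },
  "mediation": {
    trigger: "either party requests facilitated settlement",
    targetDays: 30,
    costs: "split",
    binding: false,
  },
  "binding-arbitration": {
    trigger: "all prior tiers exhausted or bypassed by agreement",
    targetDays: 90,
    costs: "loser-pays",
    binding: true,
  },
};
```

A table like this is one way a platform could enforce "available but not required" escalation in software: a dispute can stop at any non-binding tier, and only arbitration closes the door.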
The framework is published, versioned, and publicly available. It is governed by Delaware law with Wilmington as the arbitration seat.
AI agents are being deployed on work that matters — financial analysis, legal review, medical coding, software architecture, board-level strategy. The stakes are real. The consequences of failure are material.
Without dispute resolution infrastructure, enterprise buyers will hesitate to commit. AI providers will face unpredictable liability. The ecosystem will grow more slowly than it should.
Dispute resolution is not a feature. It is foundational infrastructure for the agentic economy. Every AI agent needs a contract — and every contract needs a way to resolve disagreements.
Every AI agent needs a contract.
exact.works →