Paid.ai just launched free MSA templates for AI service agreements. GitLaw is publishing open-source legal templates for AI/ML agreements on GitHub. Both are well-intentioned. Both validate the thesis that enterprises need governance infrastructure for AI agent transactions. And both are insufficient for production deployments where real money and real liability are at stake.
The market signal is clear: the industry has moved past the question of whether AI agents need service agreements. The question now is what kind of service agreement is adequate.
Templates get one critical thing right: they acknowledge that AI agent transactions require legal structure. A developer deploying an agent to process insurance claims, review contracts, or generate financial analysis needs more than an API key and a prayer. They need a document that defines scope, allocates liability, and establishes what happens when something goes wrong.
Paid.ai and GitLaw both recognize this. Their templates cover the basics — scope definitions, IP ownership, liability limitations, termination clauses. For low-risk, internal-only deployments with no regulatory pressure, a template may be sufficient. Download it, fill in the blanks, have both parties sign, file it away.
The problem is that most AI agent deployments worth governing are not low-risk, internal-only, or free from regulatory pressure.
A template is a static document. It captures intent at the moment of signing. It has no mechanism for recording what actually happened during execution. This is the fundamental gap.
When an AI agent fabricates figures in board documents — as happened earlier this year in a widely publicized incident — a template provides no evidence of what the agent was instructed to do, what it actually did, or where the deviation occurred. The buyer holds a signed MSA and fabricated documents. The AI Provider holds a signed MSA and no audit trail. The dispute devolves into he-said-she-said with no contemporaneous record.
Templates cannot provide runtime evidence because they are not connected to runtime. They exist on paper. The agent exists in code. The two never interact.
This is not a limitation that better templates can fix. It is a structural limitation of the template model itself.
An Exacted Paper is not a template. It is a bilateral instrument — Exacted by both parties at formation, with completion criteria hash-locked before the first token is consumed. The criteria cannot be moved after formation. Both parties attested to the same terms. The attestation is cryptographically verifiable.
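exact.works does not publish its internals, so the following is only an illustrative sketch of what "hash-locked at formation" could mean in practice: both parties serialize the agreed completion criteria canonically, derive the same digest, and attest to that digest. The `lock_criteria` function and the sample criteria are hypothetical, not the actual protocol.

```python
import hashlib
import json

def lock_criteria(criteria: dict) -> str:
    """Hash-lock completion criteria at formation (illustrative).

    Canonical JSON serialization (sorted keys, fixed separators)
    ensures both parties derive the same digest from the same terms.
    """
    canonical = json.dumps(criteria, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Hypothetical criteria both parties attest to at formation.
criteria = {
    "deliverable": "claims triage report",
    "max_error_rate": 0.01,
    "deadline": "2026-09-30",
}
formation_hash = lock_criteria(criteria)

# Any later attempt to move the criteria produces a different digest,
# so the original attestation no longer matches.
tampered = dict(criteria, max_error_rate=0.05)
assert lock_criteria(tampered) != formation_hash
```

Because each side stores only the digest it signed, neither side can quietly relax or tighten the criteria after formation: verification is a recomputation, not a negotiation.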
At runtime, every agent action is written to a Trace record — an immutable, contemporaneous audit trail that chains every session to its originating Paper. Not a log file that can be modified. A hash-chained evidence record that can be verified by anyone with the hash.
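A hash chain of the kind described above can be sketched in a few lines. This is a minimal illustration, not the Trace format itself: each record commits to the previous record's hash (the first commits to the originating Paper's hash), so editing any record after the fact breaks every link downstream. The function names and record fields are assumptions for the sketch.

```python
import hashlib
import json
import time

def append_trace(chain: list, paper_hash: str, action: dict) -> dict:
    """Append an agent action to a hash-chained trace (illustrative).

    Each record commits to the previous record's hash and to the
    originating Paper's hash, so a retroactive edit breaks the chain.
    """
    prev_hash = chain[-1]["record_hash"] if chain else paper_hash
    record = {
        "paper_hash": paper_hash,
        "prev_hash": prev_hash,
        "timestamp": time.time(),
        "action": action,
    }
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    record["record_hash"] = hashlib.sha256(payload).hexdigest()
    chain.append(record)
    return record

def verify_chain(chain: list, paper_hash: str) -> bool:
    """Recompute every link; any modified record fails verification."""
    prev = paper_hash
    for record in chain:
        if record["prev_hash"] != prev:
            return False
        body = {k: v for k, v in record.items() if k != "record_hash"}
        payload = json.dumps(body, sort_keys=True).encode("utf-8")
        if hashlib.sha256(payload).hexdigest() != record["record_hash"]:
            return False
        prev = record["record_hash"]
    return True
```

Anyone holding the Paper's hash can rerun `verify_chain` independently, which is the property that distinguishes this from an ordinary log file: tampering is detectable by either party, not just by whoever operates the logging system.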
Before deployment, APEX-BG produces a conformity observation — a pre-deployment assessment of whether the agent's declared behavior matches the Exacted criteria. Not a certification. Not a quality score. A factual observation: does the agent's fingerprint conform to what was agreed?
When something goes wrong, Parler provides structured dispute resolution — a five-tier framework with weighted criterion-by-criterion assessment, escalation paths proportional to the stakes, and a tricameral AI panel that evaluates Trace evidence against Exacted criteria. Not a support ticket. Not a public argument on social media. A documented process with evidence.
None of this is possible with a template. Templates define intent. Exacted Papers enforce accountability.
Templates work when four conditions are met simultaneously: the deployment is low-risk, the agent operates internally only, there is no regulatory reporting requirement, and both parties have high mutual trust.
Internal tooling agents that summarize meeting notes, draft initial documentation, or assist with code review may fall into this category. The stakes are low enough that a signed PDF and good faith are sufficient governance.
Regulated industries — financial services, healthcare, legal — cannot rely on templates. The EU AI Act requires immutable audit logs for high-risk AI systems by August 2026. Colorado SB 24-205 is already in effect. ISO 42001 auditors need conformity evidence attached to enforceable agreements, not compliance claims floating free of contractual context.
Enterprise deployments with external AI Providers need bilateral accountability. When you hire an external agent to process sensitive data, the governance infrastructure must record what happened — not just what was supposed to happen.
High-stakes transactions where disputes are foreseeable need structured resolution mechanisms. A template with a standard arbitration clause does not provide the evidentiary infrastructure to resolve an AI agent dispute. Arbitrators need Trace records, conformity observations, and criterion-by-criterion analysis. Templates provide none of these.
Paid.ai and GitLaw validate the exact.works thesis: the debate over whether AI agents need service agreements is settled. The 97% of enterprises that expect agent incidents within 12 months are looking for governance infrastructure — and finding templates.
Templates are the first step. They prove the need exists. They do not fill it.
The question is not whether AI service agreements exist. It is whether they are enforced. A template tells you what should happen. An Exacted Paper proves what did happen.
Every AI agent needs a contract.
exact.works →