Jurixon
Contract Drafting

Why AI contracts fail: the hallucination problem and how precedent-based drafting solves it

18 February 2026 · 5 min read · Nemrod Garciah

Ask any partner who has used a general-purpose AI tool to draft a contract, and you will hear the same story.

The draft looks convincing. The structure is there. The clauses sound plausible. Then you read it carefully, and you find language that your firm would never use. Indemnity provisions that go further than your standard position. Definitions that contradict your template. Entire clauses invented from nothing — not wrong in the abstract, but wrong for your firm, your client, and this specific deal.

This is hallucination. And it is not a bug that will be fixed with a better model. It is a structural problem with how most legal AI tools are built.

Why general AI tools hallucinate in legal drafting

General-purpose AI models are trained on enormous amounts of text — including legal text. They have seen thousands of contracts. They know what a limitation of liability clause looks like. They know the rhythm of commercial agreements.

But they do not know your limitation of liability clause. They do not know that your firm always caps liability at two times the contract value, or that your employment templates require a specific disciplinary procedure, or that your preferred NDA structure puts the mutual obligations in clause 3, not clause 7.

When you ask a general AI to draft a contract, it produces something statistically likely given its training data. That is not the same as something that reflects your firm's approved position.

The result: every AI-generated draft requires a full review pass to strip out invented language and replace it with your firm's standard clauses. You have not automated drafting. You have automated the production of a rough draft you still need to rewrite.

The precedent-library approach

The solution is not a better AI. It is a different architecture.

If you constrain the AI to draft exclusively from your firm's approved clause library — your actual precedents, your actual standard positions — then hallucination becomes structurally impossible. The AI cannot invent a clause that is not in your library. It can only assemble, adapt, and order clauses that you have already approved.
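For readers who want the architecture in concrete terms, here is a minimal conceptual sketch of library-constrained assembly. It is not Jurixon's actual implementation; the library contents, clause names, and the simple lookup are all invented for illustration. The point it demonstrates is structural: when there is no generative fallback, a clause outside the approved library cannot appear in the draft.

```python
# Conceptual sketch only -- a hypothetical clause library, not Jurixon's code.
# Every clause below stands in for firm-approved language.
APPROVED_LIBRARY = {
    "definitions": "1. Definitions. In this Agreement, 'Confidential Information' means ...",
    "mutual_obligations": "3. Mutual Obligations. Each party shall ...",
    "limitation_of_liability": "9. Limitation of Liability. Liability is capped at two times the contract value.",
}

def assemble_draft(clause_order):
    """Assemble a draft strictly from approved clauses.

    Raises KeyError instead of inventing text when a requested clause
    is not in the library: there is no path by which unapproved
    language can enter the document.
    """
    missing = [name for name in clause_order if name not in APPROVED_LIBRARY]
    if missing:
        raise KeyError(f"Not in approved library: {missing}")
    return "\n\n".join(APPROVED_LIBRARY[name] for name in clause_order)
```

A real system would adapt definitions and party details within each clause, but the constraint is the same: assembly and adaptation of approved content, never free generation.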

This is how experienced associates draft. They do not write from scratch. They reach for the firm's standard NDA, pull the relevant clauses, adapt the definitions for the specific client and deal, and produce a document that reflects the firm's voice because it is built from the firm's content.

AI should work the same way.

What this means in practice

When Jurixon generates a contract, it drafts from your clause library. You upload your precedents — individual clauses, full templates, schedules — and the AI draws exclusively from that library.

The result is a first draft that:

  • Uses language your firm has already approved
  • Reflects your standard positions on indemnity, liability, IP, and whatever else you have configured
  • Sounds like your firm, because it is built from your firm's content

For more complex cases — where the client has an existing relationship, where prior agreements are in play, where the history of the matter is relevant — you can attach a matter. Upload the relevant documents, and the AI extracts the key facts, parties, and timeline so the draft reflects the full context of the deal.

But the foundation is always your clause library. Not the AI's best guess at what a clause should look like.

The audit trail question

There is a second problem with general AI drafting that rarely gets mentioned: accountability.

If an AI-generated clause causes a problem in a signed agreement, and you cannot trace where that clause came from, you have a governance problem. You cannot explain to a client, a regulator, or a court why that language appeared in the document.

When every clause in a draft comes from your firm's approved library, the answer is simple: this clause is our standard position, approved by the firm, used in this contract type. That is a defensible answer. "The AI thought it sounded right" is not.

One thing to do this week

If your firm is experimenting with AI drafting tools, run this test.

Take the last AI-generated draft your team produced. Go clause by clause. For each clause, ask: is this exactly our firm's standard language, or did the AI invent this?

Count the invented clauses.

If the number is higher than zero — and it will be — you have a hallucination problem. The question is whether your current tool is designed to solve it.
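If your precedents are already stored as text, the first pass of this test can even be scripted. The sketch below is a hypothetical helper, not a substitute for lawyer review: it only catches clauses that match no approved clause verbatim, so a human still needs to judge near-matches and adapted language.

```python
# Hypothetical first-pass audit: flag draft clauses that match no
# approved clause exactly. Exact-match comparison is an assumption;
# real drafts adapt approved language, so treat hits as a starting
# point for manual clause-by-clause review.
def count_invented(draft_clauses, approved_clauses):
    """Return the draft clauses that match no approved clause."""
    approved = {c.strip().lower() for c in approved_clauses}
    return [c for c in draft_clauses if c.strip().lower() not in approved]
```

Run it over the last AI-generated draft and the firm's clause library; any clause it returns is, at best, an adaptation you must verify and, at worst, an invention.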


See how this works in practice

Jurixon lets firms draft from their own clause libraries — with optional matter context when cases get complex.

Request Early Access →