Your malpractice carrier has not asked you about AI yet. They will. When they do, you want to hand them a file, not start building one.
Most professional liability insurers are already distinguishing between consumer AI use and commercial API deployments. The firms that can document their AI architecture, terms of service, and data handling practices will renew without issues. The firms that cannot will face questions they are not ready to answer.
Here is what your carrier will want to see and how to have it ready.
The three documents
Your insurer’s underwriting team will look for three things. These are not optional. They are the minimum standard emerging across the professional liability market.
1. A commercial API agreement with the AI provider.
Not a personal subscription. Not a team plan under consumer terms. A commercial agreement between your law firm and the AI provider that contractually prohibits training on your submissions, restricts data retention, and includes confidentiality obligations.
This is the document that separates your firm from the Heppner fact pattern. In United States v. Heppner (S.D.N.Y., Feb. 17, 2026), the court found no privilege because the consumer terms permitted training and third-party disclosure. A commercial API agreement is the structural fix.
If your firm uses Anthropic’s commercial API, the agreement is between your firm and Anthropic. If your firm uses a platform that connects to Anthropic’s API with your firm’s own API key, the agreement is still between your firm and Anthropic.
2. A Data Processing Addendum (DPA).
The DPA establishes the vendor’s obligations for handling your firm’s data. It covers what data is processed, how it is stored, who can access it, how long it is retained, and what happens when the relationship ends.
Your carrier wants to see that your AI vendor has signed a DPA with your firm. If your vendor does not offer one, that is a red flag.
The DPA should address:
- Data ownership (your firm retains all rights to client data)
- Processing purposes (only to provide the services, not for training)
- Data retention and deletion (defined timeline, certification of deletion on request)
- Subprocessor disclosure (who else touches your data)
- Breach notification (how quickly you are informed)
- Confidentiality obligations (binding on vendor personnel)
3. A written AI use policy for your firm.
Your carrier wants to know that your attorneys understand the rules. A written policy should cover:
- Which AI tools are approved for client work (and which are not)
- Who is responsible for reviewing AI-generated output before use
- How AI use is disclosed to clients
- How AI-generated documents are supervised under Model Rule 1.1
- What data can and cannot be submitted to AI tools
- How the firm documents its AI due diligence
The policy does not need to be long. It needs to exist, be signed by the managing partner, and be dated.
What satisfies the underwriter
The underwriting team is not evaluating your AI tool. They are evaluating your risk management posture. They want to see that you thought about this before they asked.
A firm that can produce all three documents in the first conversation signals competence. A firm that says “we use ChatGPT but we’re careful” does not.
The distinction that matters most is the commercial relationship. Consumer AI tools, even paid ones, operate under terms that permit training and disclosure. Commercial API agreements prohibit both. Your carrier understands this distinction because the Heppner court spelled it out.
What raises flags
These are the things that will generate follow-up questions from your insurer:
No AI policy. Fifty-three percent of legal professionals report that their firm has no AI policy (ABA 2024). If your firm is in that group, fix it before your next renewal.
Consumer AI for client work. If any attorney at your firm uses ChatGPT, Claude.ai, or Gemini with a personal account for client work, that is the Heppner pattern. Your carrier will want to know it has stopped.
No DPA. If your AI vendor cannot produce a Data Processing Addendum, your carrier will ask why you chose a vendor that will not commit to data handling obligations in writing.
No documentation of attorney supervision. ABA Formal Opinion 512 (2024) requires attorneys to supervise AI-generated work product under Model Rule 1.1. Your carrier will want to know how your firm implements this: not that you have good intentions, but that you have a documented process.
Build the file now
Do not wait for your carrier to ask. Build the file now.
Step 1. Check your AI provider’s terms. If you are on a commercial API agreement that prohibits training, print it or save a PDF. If you are on consumer terms, stop using that tool for client work and move to a commercial agreement.
Step 2. Request a DPA from your AI vendor. If they have one, sign it and file it. If they do not, ask why.
Step 3. Write your firm’s AI use policy. One page is enough. List the approved tools, the review requirements, the disclosure practices, and the supervision expectations. Have the managing partner sign it. Date it.
Step 4. Put all three documents in a folder labeled “AI Compliance.” When your carrier asks, hand them the folder.
The firms that do this now will renew without friction. The firms that wait will be answering questions under pressure.
KrisLegal’s Data Processing Addendum is published and in effect for every firm. Your firm holds its own Anthropic commercial API contract. Schedule a call to see how it works for your practice.