Consumer AI vs. Commercial API: What the Distinction Means After Heppner

KrisLegal

Every major AI provider offers two products. One is consumer. One is commercial. They use the same model. They produce the same quality of output. The difference is the contract, and after Heppner, the contract is everything.

Two products, one model, different terms

When an attorney signs up for Claude.ai, ChatGPT, or Gemini with a personal email, they accept consumer terms of service. Those terms typically include:

  • The provider may retain your inputs and outputs
  • The provider may use your data to improve its models (training)
  • The provider may disclose data to third parties, including in response to legal process
  • The provider has no confidentiality obligation to you

When a law firm signs a commercial API agreement with Anthropic, OpenAI, or Google, the terms are different:

  • The provider does not use API-submitted data to train models
  • The provider does not retain data beyond the processing window
  • The provider has contractual confidentiality obligations
  • The relationship is governed by a commercial agreement, not a consumer clickthrough

Same model. Same capabilities. Different legal structure around the data.

Why this matters after Heppner

In United States v. Heppner (S.D.N.Y., Feb. 17, 2026), Judge Rakoff found that documents created using Anthropic’s free consumer Claude were not privileged. One of the three grounds: Anthropic’s consumer terms permitted data retention, model training, and third-party disclosure. There was no reasonable expectation of confidentiality.

The court did not say AI destroys privilege. It said these terms of service destroyed privilege. The terms permitted the provider to use and disclose the data. That is voluntary disclosure to a third party. Privilege waived.

Had the defendant used Claude through a commercial API agreement that prohibited training and guaranteed confidentiality, the analysis would have been different. The court said so explicitly, pointing to the Kovel framework for attorney-directed tools operating under confidentiality obligations.

What to check in your provider’s terms

If your firm uses any AI tool for client work, read the terms. Look for these specific provisions:

Data training. Does the provider reserve the right to use your inputs to train or improve its models? Consumer products almost always do. Commercial API agreements almost always prohibit it.

Data retention. How long does the provider keep your submissions? Consumer products may retain data indefinitely. Commercial API agreements typically define a short processing window (often 30 days or less) after which data is deleted.

Third-party disclosure. Can the provider share your data with third parties? Consumer terms often permit disclosure to service providers, affiliates, and in response to legal process without notice to you. Commercial agreements restrict this.

Confidentiality obligations. Does the provider have a contractual duty of confidentiality to you? Consumer terms generally do not create one. Commercial agreements do.

Where the commercial tiers actually stand

Not every paid tier is commercial. The distinction matters.

Anthropic offers Claude Pro and Max as paid consumer plans. They are governed by consumer terms. As of late 2025, consumer plans allow training on your data by default, with an opt-out toggle. An opt-out toggle is not a contract. It is a setting that can change with a terms update.

Claude for Work (the Team plan, starting at 5 seats) and the Enterprise plan (50+ seats) are governed by Commercial Terms. Training is contractually prohibited, not toggled off. Data retention is configurable. These are legitimate commercial agreements.

The API operates under the same commercial terms. Training is prohibited. Retention is 7 days by default, configurable down to zero.

For a small firm, the realistic options with commercial-grade data protections are:

  • Claude for Work Team plan (5+ seats, $25-$150/seat/month). Commercial terms. No training. Your firm gets a chat interface with Claude.
  • The API (no seat minimum, pay per token). Commercial terms. No training. Your firm gets programmatic access that can be built into tools and workflows.

Both satisfy the confidentiality requirement from Heppner. Both are under commercial terms that prohibit training and restrict disclosure. The difference is what you can do with them.

Chat window vs. workflow

A Claude for Work seat gives your attorneys a chat window. It does not connect to your practice management system. It does not read your matters. It does not know your client names, case numbers, or opposing counsel. Every conversation starts from scratch. The attorney copies and pastes case facts into the chat, gets a response, copies the response into a document, and formats it manually.

That works for ad hoc questions. It does not work for document production at scale.

A platform built on the API can connect to Clio or PracticePanther, read matter data, generate documents on your letterhead with the correct attorney’s signature, file work product back to the right matter, and track structured data across cases. The AI operates within the firm’s systems, at counsel’s direction, with a full audit trail. This is the Kovel framework the court pointed to in Heppner: attorney-directed tools operating under confidentiality obligations within documented workflows.
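The workflow described above can be sketched in a few lines of code. This is a minimal illustration only, assuming a request assembled for Anthropic's Messages API; the matter record, its field names, and the model name are hypothetical, and a real integration would pull matter data from Clio or PracticePanther through their own APIs:

```python
# Sketch: assembling a matter-scoped AI request inside the firm's systems.
# The matter record and field names below are hypothetical examples.

def build_request(matter: dict, task: str) -> dict:
    """Assemble a Messages API payload from firm-side matter data.

    Because the prompt is constructed inside the firm's systems, at
    counsel's direction, the full input is known and auditable before
    anything is sent to the provider.
    """
    context = (
        f"Matter: {matter['name']} (No. {matter['number']})\n"
        f"Client: {matter['client']}\n"
        f"Opposing counsel: {matter['opposing_counsel']}\n"
    )
    return {
        "model": "claude-sonnet-4-5",  # assumed model name; check current docs
        "max_tokens": 2048,
        "messages": [{"role": "user", "content": context + "\n" + task}],
    }

# Sending it requires the `anthropic` SDK and the firm's own API key:
#   client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the env
#   reply = client.messages.create(**build_request(matter, task))

payload = build_request(
    {"name": "Smith v. Jones", "number": "24-cv-1001",
     "client": "Smith", "opposing_counsel": "Doe LLP"},
    "Draft a deadline summary for the scheduling order.",
)
```

The point of the sketch is the audit trail: every input the model sees is assembled from firm records before the call is made, which is what makes the "attorney-directed, documented workflow" element demonstrable.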

Both approaches use the same AI model under the same commercial terms. The question is whether your firm needs a chat window or a workflow.

What a compliant setup requires

Regardless of which path you choose, a compliant setup requires three elements:

1. Commercial terms. The firm’s AI access must be governed by a commercial agreement that contractually prohibits training and restricts data retention and disclosure. Consumer plans, even paid ones, do not meet this standard.

2. No personal accounts for client work. Individual Pro or Max subscriptions are consumer accounts. An attorney using a personal Claude.ai account for client work has the same terms-of-service problem that Heppner identified, regardless of whether they toggled off training.

3. Attorney direction within documented workflows. AI use flows through the firm’s systems, at counsel’s direction. This satisfies both the supervision obligation under ABA Formal Opinion 512 (2024) and the Kovel framework the court pointed to as the safe harbor.

What to do this week

Pull up every AI tool your firm uses. For each one, answer four questions:

  1. Is the firm's access governed by a commercial agreement (API or an enterprise/team plan on commercial terms), or by a consumer subscription?
  2. Do the terms prohibit training on your submissions?
  3. Do the terms restrict data retention to a defined processing window?
  4. Does the provider have a contractual confidentiality obligation to your firm?

If the answer to any of these is no, that tool should not be used for client work until the terms are fixed or the tool is replaced.
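The four questions reduce to a simple gate: all four must be yes before a tool is used for client work. A minimal sketch of that gate, with field names that are illustrative rather than drawn from any vendor's terms:

```python
from dataclasses import dataclass

@dataclass
class AIToolTerms:
    """The four contract provisions the checklist asks about."""
    commercial_agreement: bool   # commercial agreement, not a consumer clickthrough?
    prohibits_training: bool     # training on submissions contractually barred?
    limited_retention: bool      # defined processing window, then deletion?
    confidentiality_duty: bool   # contractual confidentiality obligation to the firm?

def cleared_for_client_work(terms: AIToolTerms) -> bool:
    """Any single 'no' fails the Heppner-style terms-of-service analysis."""
    return all((terms.commercial_agreement, terms.prohibits_training,
                terms.limited_retention, terms.confidentiality_duty))

# A consumer plan with training toggled off still fails the gate:
# the toggle is a setting, not a contract term.
consumer_pro = AIToolTerms(False, True, False, False)
api_access = AIToolTerms(True, True, True, True)
```

Running the gate over the firm's tool inventory makes the audit repeatable: the same four answers, recorded per tool, per review.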

This is not a technology decision. It is a risk management decision. The technology works either way. The terms of service are what determine whether your client’s privilege survives.


KrisLegal connects to your practice management system and uses Anthropic’s commercial API with your firm’s own API key. Commercial terms prohibit training. DPA available. Schedule a call to see how it works for your practice areas.

See how KrisLegal works for your firm.

30-minute call. Your practice areas. Your data. Real output.

Schedule a Demo