What Heppner Means for Your Firm's AI Use

· KrisLegal

In February 2026, Judge Jed Rakoff of the Southern District of New York ruled that 31 documents a criminal defendant generated using Anthropic’s free consumer Claude chatbot were neither privileged nor protected as work product. United States v. Heppner, 25-cr-00503-JSR, is the first written federal court opinion on AI-generated materials and legal privilege.

What happened

Bradley Heppner, a corporate executive charged with securities fraud, used the free version of Claude to analyze his legal exposure, draft defense strategies, and process materials he’d received from his attorneys. He created 31 documents on his own. His lawyers didn’t direct him to use Claude.

The government moved to compel production. The defense claimed privilege and work product protection. Judge Rakoff rejected both.

What the court found

Three grounds:

1. No attorney-client relationship with Claude. Privilege requires what the court called a “trusting human relationship” with fiduciary duties and professional discipline. Claude is not a lawyer. It has no fiduciary obligations. Conversations with an AI chatbot are not attorney-client communications.

2. No reasonable expectation of confidentiality. Anthropic’s consumer terms permitted data retention, model training, and disclosure to third parties including government authorities. Under those terms, there was no confidentiality to protect. Voluntary disclosure to a third party. Privilege waived.

3. Not at counsel’s direction. Heppner created the documents on his own, then shared them with his attorneys after the fact. Work product doctrine requires preparation at counsel’s direction in anticipation of trial. That didn’t happen here. Routing unprivileged documents through a lawyer after creation doesn’t make them privileged.

What the court did not say

The opinion does not hold that attorneys cannot use AI. It does not create an AI-specific exception to privilege. Judge Rakoff applied existing privilege principles to new facts.

He also pointed to the structure that would work. Had counsel directed Heppner to use Claude, Rakoff wrote, the tool “might arguably be said to have functioned in a manner akin to a highly trained professional who may act as a lawyer’s agent within the protection of the attorney-client privilege.”

That’s the Kovel framework (United States v. Kovel, 296 F.2d 918 (2d Cir. 1961)): third-party tools retained by counsel to facilitate legal advice can fall within privilege if they operate under attorney supervision and are bound by confidentiality obligations.

Why this matters for your firm

The ruling turned on consumer terms of service and the lack of attorney direction. Those facts apply anywhere attorneys or clients use consumer AI tools (Claude.ai, ChatGPT, Gemini, any free-tier product) with privileged material.

Prompt discipline doesn’t fix it. The terms of service are the terms of service. If the provider reserves the right to train on your data or disclose it to third parties, confidentiality is compromised at the structural level.

What “post-Heppner compliant” means

Three conditions:

1. Contractual confidentiality. The AI provider’s terms must prohibit training on your submissions, prohibit disclosure to third parties, and define data retention limits. Not a toggle in settings. A contract.

2. Attorney direction and supervision. AI use flows through the firm’s systems, at counsel’s direction, within documented workflows. This is the Kovel framework the court pointed to as the safe harbor.

3. A direct commercial relationship. Your firm holds a commercial agreement with the AI provider. Not a consumer account. Not a free tier. Not a personal subscription. Commercial API terms are structurally different from consumer terms.
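The structural difference shows up at the integration level. Here is a minimal sketch, using only the Python standard library, of building a request to Anthropic's documented Messages API under the firm's own key. The environment variable name and model string are illustrative, and the request is built but not sent:

```python
import json
import os
import urllib.request

# Anthropic's documented Messages API endpoint and version header.
API_URL = "https://api.anthropic.com/v1/messages"
API_VERSION = "2023-06-01"


def build_request(prompt: str) -> urllib.request.Request:
    """Build (but do not send) a Messages API request authenticated
    with the firm's own key, so the call runs under the firm's
    commercial agreement rather than a consumer account."""
    body = json.dumps({
        "model": "claude-sonnet-4-5",  # illustrative model name
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={
            # The firm's key, held by the firm -- not a personal login.
            "x-api-key": os.environ.get("ANTHROPIC_API_KEY", ""),
            "anthropic-version": API_VERSION,
            "content-type": "application/json",
        },
        method="POST",
    )
```

The point is not the code itself but what the key represents: every request authenticated this way runs under the firm's commercial API terms, inside a workflow the firm controls and can document, rather than under a consumer account's terms of service.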

ABA Formal Opinion 512 (2024) laid the groundwork for this framework before Heppner was decided. The court enforced what the ABA had already recommended.

What to do now

Audit your AI use. Ask every attorney and staff member what AI tools they’re using. Are any of them consumer products? Are client documents going into them?

Read the terms. For every AI tool in use, check the terms of service for training rights, data retention policies, and third-party disclosure clauses. If the provider reserves the right to train on your data, stop using it for client work.

Move to commercial agreements. Your firm should hold a direct commercial contract with the AI provider, under terms that prohibit training and guarantee confidentiality. This is what a court will look for.

Establish attorney-directed workflows. AI use should run through the firm’s systems, directed by licensed attorneys, within documented workflows. This satisfies the Kovel framework.

Document your due diligence. Get a Data Processing Addendum from your AI vendor. Keep it with the commercial agreement in your compliance file. If your malpractice carrier or a court asks, you have the documentation.

Tell your attorneys and your clients. Every attorney needs to understand the risk. Clients need to know not to put privileged communications into consumer AI tools on their own. That is exactly what Heppner did, and it cost him the privilege.

The bottom line

Heppner didn’t ban AI in legal practice. It told us the structure that works: commercial terms, attorney direction, contractual confidentiality. The court itself pointed to the safe harbor.

The firms that act now will have their compliance documentation and workflows in place. The firms that wait will be explaining why they didn’t.


KrisLegal uses Anthropic’s commercial API with your firm’s own API key. Your firm holds the Anthropic contract. Contractual prohibition on training. DPA available. Attorney-directed workflows within your practice management system. Schedule a call to see how it works for your practice areas.

See how KrisLegal works for your firm.

30-minute call. Your practice areas. Your data. Real output.

Schedule a Demo