ABA Formal Opinion 512: What It Means for Your Firm's AI Use

· KrisLegal

In July 2024, the ABA Standing Committee on Ethics and Professional Responsibility issued Formal Opinion 512. It is the first national ethics guidance on generative AI in legal practice.

In February 2026, Judge Rakoff’s decision in United States v. Heppner enforced exactly the framework Opinion 512 recommended. If you have not read both, you should. They are the two documents that define how attorneys can and cannot use AI tools right now.

What Opinion 512 says

The opinion applies existing Model Rules to generative AI. It does not create new rules. It tells attorneys how the rules they already follow apply to a new category of tools.

Four obligations:

1. Competence (Model Rule 1.1). Attorneys must understand how AI tools work well enough to supervise their output. You do not need to understand transformer architecture. You need to understand what the tool can and cannot do, what it is likely to get wrong, and how to verify its work before relying on it.

2. Confidentiality (Model Rule 1.6). Client information submitted to an AI tool is a disclosure to a third party. The attorney must ensure the tool’s terms of service protect that information. If the provider reserves the right to train on your submissions, retain your data indefinitely, or disclose it to third parties, you have a confidentiality problem.

This is the obligation that Heppner enforced. The court found that Anthropic’s consumer terms permitted training and third-party disclosure, destroying any expectation of confidentiality.

3. Communication (Model Rule 1.4). Clients should be informed when AI tools are used in their representation. The opinion does not require consent in every case, but the attorney should disclose AI use when it is material to the representation.

4. Supervision (Model Rules 5.1, 5.3). Attorneys are responsible for supervising AI-generated work product the same way they supervise work from associates and paralegals. AI output must be reviewed for accuracy, completeness, and legal soundness before filing or reliance.

What Opinion 512 does not say

The opinion does not ban AI in legal practice. It does not require disclosure to courts (though some jurisdictions now require this independently). It does not set specific technical standards for AI tools. It does not address privilege directly, though the confidentiality analysis under Rule 1.6 maps to the privilege analysis in Heppner.

The connection to Heppner

Opinion 512 was issued in July 2024. Heppner was decided in February 2026. The court did not cite Opinion 512, but the ruling enforces the same framework.

Opinion 512 says: check the terms of service. If the provider can train on your data, you have a Rule 1.6 problem.

Heppner says: the court checked the terms of service. The provider could train on the data. Privilege was waived.

The ABA told attorneys what to do. The court showed what happens when they do not.

What “compliant” looks like after both

Three requirements emerge from Opinion 512 and Heppner together:

Commercial API terms. The AI provider’s contract must prohibit training on your submissions, prohibit third-party disclosure, and define data retention limits. Consumer terms of service do not meet this standard. A commercial API agreement does.

Attorney-directed workflows. AI use must flow through the firm’s systems, at counsel’s direction, within documented workflows. This satisfies both the supervision obligation under Opinion 512 and the Kovel framework the court pointed to in Heppner.

A direct contractual relationship. Your firm holds the contract with the AI provider. Not a personal subscription. Not a free tier. Not an account your associate signed up for. A commercial agreement between your firm and the provider.

What your firm should do now

Audit your current AI use. Every attorney and staff member using AI tools for client work should be identified. For each tool, check the terms of service for training rights, data retention, and third-party disclosure.

Move to commercial agreements. If any attorney is using a consumer AI product for client work, stop. Replace it with a tool where your firm holds a commercial contract that prohibits training and guarantees confidentiality.

Document your due diligence. Get a Data Processing Addendum from your AI vendor. Keep it with the commercial agreement in your compliance file. If your malpractice carrier asks, you have the documentation.

Train your team. Every attorney needs to understand the competence obligation. AI output must be reviewed before filing or reliance. This is not optional under Rule 1.1.

Inform your clients. Establish a disclosure practice for AI use in client matters. Whether you add it to engagement letters or communicate it separately, disclosure is what the communication obligation under Rule 1.4 requires.

The bottom line

Opinion 512 is not a suggestion. It is the ABA’s interpretation of how existing ethics rules apply to AI. Heppner is what enforcement looks like when those rules are not followed.

The firms that read both documents and act on them will have their compliance in order. The firms that treat AI ethics as someone else’s problem will be explaining their position to their malpractice carrier.


KrisLegal uses Anthropic’s commercial API with your firm’s own API key. Commercial terms prohibit training. DPA available. Attorney-directed workflows within your practice management system. Schedule a call to see how it works for your practice areas.

See how KrisLegal works for your firm.

30-minute call. Your practice areas. Your data. Real output.

Schedule a Demo