Groundbreaking Court Ruling Says AI Chats Are Not Protected by Attorney-Client Privilege

A recent federal court ruling has drawn a clear line between artificial intelligence and legal counsel. In United States v. Heppner, No. 25 Cr. 503 (JSR) (S.D.N.Y. Feb. 17, 2026), Judge Jed S. Rakoff ruled that a criminal defendant’s exchanges with Anthropic’s Claude chatbot are protected by neither attorney–client privilege nor the work product doctrine.

The ruling involved Bradley Heppner, who was facing multiple charges, including securities fraud, wire fraud, conspiracy, making false statements to auditors, and falsifying records. FBI agents, executing a search warrant alongside an arrest warrant, seized numerous materials from his home. Among them were AI chat records in which he discussed the financial transactions central to the charges, as well as potential legal strategies for his defense.

Heppner’s legal team attempted to withhold the AI documents as “artificial-intelligence generated analysis” created for the “purpose of obtaining legal advice,” asserting both attorney–client privilege and work product protection. The government countered that the documents were neither attorney–client communications nor confidential, and moved for their release. Judge Rakoff sided with the government.

The United States v. Heppner Warning: Why AI Isn’t Your Lawyer

The reasoning behind this decision is that privilege applies only to communications between clients and their attorneys. An AI platform isn’t an attorney, can’t form an attorney–client relationship, and therefore can’t satisfy the threshold requirement that privilege demands. Whatever legal analysis a chatbot produces originates outside the attorney–client relationship that privilege is designed to protect.

Heppner sets an important precedent about AI attorney–client privilege waivers, specifically regarding the decision to share sensitive legal information with an AI tool.

The Waiver Trap: Why Data Training and Third-Party Access Kill Confidentiality

Even setting aside the fundamental issue that a chatbot is not an attorney, the court identified a second problem with respect to privilege in AI communications: users of consumer AI platforms have no reasonable expectation of confidentiality. Anthropic’s privacy policy, typical of most public-facing AI tools, expressly permits:

  • The collection of user inputs and outputs
  • The use of that data to train its models
  • Disclosure to third parties, including government regulators

Those privacy terms apply to every conversation on the platform by default. Once a user consents to them, entering privileged or litigation-sensitive information into the platform is legally indistinguishable from disclosing it to any other third party. Privilege is waived.

In Heppner, the court directly clarified that sharing a confidential legal strategy with a consumer AI tool is equivalent to broadcasting it publicly. It doesn’t matter that the intent was private, that the output was later shared with counsel, or that the subject matter was plainly sensitive. Once the information enters a system governed by broad third-party disclosure rights, it is no longer privileged.

Public vs. Enterprise AI: Practical Strategies to Protect Your Firm

The Heppner ruling doesn’t bar the use of AI in legal practice, but it does limit AI’s utility in certain applications. Judge Rakoff’s opinion left open the possibility that AI could function as a protected agent under the Kovel doctrine, which extends attorney–client privilege to third parties who assist attorneys in case preparation. This extension may apply if an AI tool is used at the express direction of counsel, within a confidential system, and as part of a documented workflow.

Under that framework, attorneys who wish to implement these tools can protect themselves from the legal risks of generative AI by:

  • Retiring consumer-grade AI platforms for any litigation or legal research function
  • Replacing such platforms with private enterprise deployments with contractual confidentiality protections
  • Implementing enforceable AI usage policies that restrict employees from inputting privileged information into any tool not vetted and approved by counsel
  • Urging their clients not to use open-system AI chat tools to discuss legal matters

AI can be a powerful asset in legal practice, but only when its use is attorney-directed, documented, and governed by terms that preserve the protections that clients depend on.

At Fibich, Leebron, Copeland & Briggs, we draw on more than a century of combined legal experience. With the tenacity to win and the resources to get us there, our lawyers provide strong representation for injured victims and their families.