Judge Rules AI-Generated Docs Not Privileged
Feb. 25, 2026
A federal court issued a landmark ruling that documents created using a consumer AI tool and later shared with an attorney are not protected by attorney-client privilege or the work product doctrine. The decision in United States v. Heppner didn't create new law. It applied well-settled privilege principles to a new technology.
What Happened
Bradley Heppner, founder of Beneficient and former chairman of GWG Holdings, faces federal charges including securities and wire fraud, conspiracy, and falsification of records. According to the Department of Justice (DOJ), Heppner allegedly misappropriated more than $150 million, and GWG's subsequent bankruptcy resulted in over $1 billion in losses to retail investors.
Heppner, 59, of Dallas, Texas, is charged with securities fraud, wire fraud, false statements to auditors, and falsification of records, each of which carries a maximum sentence of 20 years in prison, according to a press release issued by the U.S. Attorney's Office for the Southern District of New York. Heppner is also charged with conspiracy to commit securities fraud and wire fraud, which carries a maximum sentence of five years in prison. Read the indictment for United States v. Heppner, No. 25-cr-00503-JSR (S.D.N.Y. Feb. 17, 2026), Dkt. No. 27.
The charges arise from an alleged scheme by Heppner and others to fraudulently extract funds from GWG Holdings, Inc., a publicly traded company for which Heppner served as chairman, through the use of a shell company he controlled, the Highland Consolidated Limited Partnership (“HCLP”), the press release says.
After receiving a grand jury subpoena and engaging defense counsel, Heppner turned to the free version of Anthropic's Claude. On his own initiative and without attorney direction, he used the AI tool to generate 31 documents, including reports outlining potential defense strategies, legal arguments, and analysis of the facts and charges he anticipated facing. He then shared those documents with his attorneys at Quinn Emanuel.
Federal agents executing a search warrant at Heppner's mansion seized electronic devices containing the AI-generated materials. The government moved for a ruling that the documents were not privileged. Defense counsel argued they were protected by both attorney-client privilege and the work product doctrine.
Judge Jed S. Rakoff of the Southern District of New York ruled from the bench on Feb. 10, stating he saw "not remotely any basis for any claim of attorney-client privilege." He followed with a written opinion on Feb. 17, 2026, further explaining his reasoning.
How the government learned about the AI documents is itself a cautionary detail: during pretrial discovery, defense counsel produced a privilege log. One entry described documents as "artificial intelligence-generated analysis conveying facts to counsel for the purpose of obtaining legal advice." That description flagged the materials for prosecutors, who then moved to compel production.
The Court's Reasoning
Judge Rakoff identified several independent grounds on which privilege failed, any one of which would have been sufficient to deny the claim.
Claude is not an attorney. Attorney-client privilege protects confidential communications between a client and a licensed lawyer made for the purpose of obtaining legal advice. Judge Rakoff rejected the argument that Claude's non-attorney status was irrelevant because the tool merely functioned like a word processor, noting that all recognized privileges require "a trusting human relationship" with "a licensed professional who owes fiduciary duties and is subject to discipline."
No such relationship can exist between a user and an AI platform. The court also noted that Claude's own terms of service expressly disclaim any ability to give legal advice and direct users to consult a qualified attorney, a point Claude itself confirmed when the government queried the tool directly during the proceedings.
Confidentiality was not maintained. Privilege requires that communications remain confidential. Heppner voluntarily submitted his prompts to a commercial platform governed by Anthropic's privacy policy, which explicitly advises users that the company collects data on prompts and outputs, may use that data to train its AI systems, and reserves the right to disclose user data to governmental regulatory authorities and third parties. The court found that submitting information under those terms was inconsistent with any reasonable expectation of confidentiality. In other words, the ruling treated the free AI platform not as a neutral tool but as a third party.
Retroactive sharing doesn't create privilege. The documents were created before Heppner transmitted them to counsel. Sending non-privileged materials to an attorney after the fact doesn't retroactively shield them. The court also noted a more troubling dimension: because Heppner had fed information he received from his lawyers into Claude, the government argued, and the judge agreed, that doing so may have waived privilege over the original attorney-client communications themselves.
Work product protection also failed. The work product doctrine protects materials prepared by or at the direction of counsel in anticipation of litigation. Defense counsel conceded Heppner created the documents of his own volition, without attorney involvement or direction. The court found that even if the documents were prepared in anticipation of litigation, they weren't prepared by or at counsel's behest and didn't reflect defense counsel's mental impressions or strategy, the core of what the doctrine is designed to protect.
Rich Miller
CEO, STACK Cybersecurity
“We’ve taken a proactive approach to artificial intelligence security that goes beyond basic guidelines. Across the industry, we’re seeing decisive security actions, including blocking certain artificial intelligence models internally, because companies can't assume public tools will protect sensitive or regulated information. And don't get me started on the risks of shadow AI, a massive risk I believe is present in nearly every company today.”
What the Court Left Open
The written opinion is notable not only for what it decided but for what it deliberately left open. Judge Rakoff acknowledged the analysis might differ under different facts. Two scenarios in particular could yield different outcomes in future cases.
First, if an attorney directs a client to use an AI tool as part of the legal representation, similar to how counsel might engage a paralegal, interpreter, or forensic consultant, the AI tool could arguably function as a necessary agent of counsel, potentially preserving privilege. The court suggested that attorney direction, clearly documented, changes the analysis.
Second, the court expressly noted that enterprise AI platforms, which typically include contractual commitments not to train on user data, defined data segregation and retention practices, and explicit confidentiality terms, may give rise to a reasonable expectation of confidentiality that consumer tools do not. The ruling applied specifically to the free, publicly available version of Claude. Whether enterprise-tier (paid) tools would reach a different result remains an open question, though no court has yet ruled enterprise AI use preserves privilege.
Consumer vs. Enterprise AI
The Heppner ruling draws a sharp line that every business should understand before an AI tool appears in its next team meeting, incident response session, or compliance review.
Consumer AI platforms, the free or low-cost versions of Claude, ChatGPT, Gemini, and similar tools, are governed by general terms of service designed for individual users. These terms typically reserve rights to use inputs for model training, store data in shared infrastructure, permit disclosure to third parties under various circumstances, and disclaim confidentiality expectations. A $20-per-month subscription doesn't change this calculus in any legally meaningful way.
Enterprise AI platforms are a different category. Paid business tiers from major providers often include contractual commitments not to train on client data, data segregation and encryption, defined retention and deletion protocols, and in some cases zero data retention options where prompts and responses are not stored at all. Some enterprise offerings support HIPAA-compliant workflows and will execute Business Associate Agreements. These features don't guarantee privilege will be preserved (no court has made that determination), but they represent a materially stronger position than consumer tools.
The practical implication is straightforward: if your team is using the free version of any AI platform to discuss legal strategy, draft incident response plans, evaluate regulatory exposure, or analyze anything that could become relevant to litigation or a government investigation, those conversations may be discoverable. Our AI readiness checklist covers data leakage and compliance considerations worth reviewing alongside the governance questions this ruling raises.
What This Means for Business
The Heppner ruling isn't limited to criminal defendants. Its reasoning applies to any company whose employees use commercial AI tools for purposes that touch legal, compliance, HR, or security functions. Common business uses that now carry documented privilege risk include using AI to analyze breach scenarios or draft incident response strategies, assess regulatory exposure or draft remediation plans, organize internal investigation findings or analyze employee conduct, evaluate contract terms or negotiation positions, analyze vulnerabilities or compliance gaps, and evaluate business decisions with legal implications. The same consumer tools many employees use casually, including those used to generate content now regulated under federal and state deepfake laws, are subject to the same evidentiary exposure.
The ruling also matters in the context of the broader AI regulatory environment. As state AI laws proliferate — Colorado's algorithmic discrimination requirements, California's transparency mandates, Illinois's employment AI restrictions, and others — companies are increasingly using AI tools to assess their own compliance posture. If those assessments involve consumer AI platforms and later become relevant to regulatory proceedings, the Heppner reasoning suggests they may not be shielded from production.
Defense contractors face a compounding risk. Using consumer AI platforms to discuss contract performance, security incidents, or CMMC compliance matters may implicate data protection requirements for controlled unclassified information (CUI), entirely separate from the privilege question the Heppner ruling addressed.
Trigger for Policy Review
Companies should treat this ruling as a trigger for immediate policy review rather than a distant legal development. That means:
- inventorying which teams are using AI platforms and for what purposes;
- reviewing the terms of service and privacy policies for every AI tool currently in use;
- establishing clear policies that prohibit consumer AI platforms for any sensitive legal, compliance, investigative, or security matter;
- evaluating whether enterprise AI tools with contractual confidentiality protections are appropriate for those use cases; and
- updating legal hold and document retention policies to account for AI-generated content, including prompts, outputs, and summaries.
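The first step, inventorying consumer AI usage, is often the hardest to start. As a minimal sketch, the snippet below flags traffic to known consumer AI endpoints in a proxy or CASB log export. The CSV column names, the domain list, and the log format are all illustrative assumptions; a real environment will need its own normalization and a far more complete domain list.

```python
import csv
import io

# Hypothetical set of consumer AI endpoints to flag; extend for your environment.
CONSUMER_AI_DOMAINS = {
    "claude.ai",
    "chatgpt.com",
    "chat.openai.com",
    "gemini.google.com",
}

def flag_consumer_ai_usage(proxy_log_csv):
    """Return {team: hit_count} for traffic to known consumer AI domains.

    Assumes a CSV export with 'team' and 'domain' columns; real proxy
    exports will differ and may need normalization first.
    """
    hits = {}
    for row in csv.DictReader(io.StringIO(proxy_log_csv)):
        domain = row["domain"].strip().lower()
        if domain in CONSUMER_AI_DOMAINS:
            hits[row["team"]] = hits.get(row["team"], 0) + 1
    return hits

# Example with synthetic data:
log = """team,domain
legal,claude.ai
legal,chatgpt.com
hr,docs.example.com
security,gemini.google.com
"""
print(flag_consumer_ai_usage(log))  # {'legal': 2, 'security': 1}
```

A report like this won't answer the harder policy questions, but it tells you which teams to talk to first.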
For guidance on foundational AI security controls recommended by NSA and CISA, see our post on securing AI systems throughout the development lifecycle.
Attorneys advising business clients should consider adding explicit AI disclosure language to engagement letters. The privilege belongs to the client, but so does the responsibility to maintain it. Most clients don't understand that a private-feeling chat interface can be legally equivalent to a conversation with a third party.
Tech-Neutral Decision
Judge Rakoff's opinion is best understood as a technology-neutral decision. The court didn't rewrite privilege law for the AI era. It enforced existing rules in a new context and found that a commercial AI platform, operating under standard consumer terms of service, functions as a third party for privilege purposes, regardless of the user's intent.
The lesson isn't that businesses should avoid AI. It is that AI platforms create records, and those records are subject to the same evidentiary rules as any other document. Treating AI as a productivity tool without accounting for its evidentiary footprint is a governance gap that this ruling makes visible and that future courts are likely to build on.
STACK Cybersecurity helps companies develop AI governance frameworks that address privilege protection, data security, and regulatory compliance. This includes vendor risk assessments, legal hold policy updates, and CMMC-aligned AI use policies.
Contact us to discuss how your current AI tool usage measures up.
This post is for informational purposes only and does not constitute legal advice. Consult qualified legal counsel regarding your specific situation.
Sources
Falcon Rappaport & Berkman LLP. "Your AI Conversations Are Not Privileged." Feb. 2026.