Judge Rules AI-Generated Docs Not Privileged
March 21, 2026
On Feb. 10, 2026, a federal judge in New York issued a first-of-its-kind ruling with immediate consequences for every business using AI: documents created using a consumer AI tool and later shared with an attorney are not protected by attorney-client privilege or the work product doctrine. The decision in United States v. Heppner didn't create new law. It applied well-settled privilege principles to a new technology. The result was unambiguous.
Editor's note: This post was originally published Feb. 25, 2026 and was updated March 21, 2026 to reflect new developments in the case, expanded guidance for business leaders, and analysis from the legal community.
What Happened
Bradley Heppner, founder of Beneficient and former chairman of GWG Holdings, faces federal charges including securities fraud, wire fraud, conspiracy, and falsification of records. According to the Department of Justice, Heppner allegedly misappropriated more than $150 million, and GWG's subsequent bankruptcy resulted in over $1 billion in losses to retail investors.
After receiving a grand jury subpoena and engaging defense counsel, Heppner turned to the consumer version of Anthropic's Claude. On his own initiative and without attorney direction, he used the tool to generate 31 documents, including reports outlining potential defense strategies, legal arguments, and analysis of the facts and charges he anticipated facing. He then shared those documents with his attorneys at Quinn Emanuel.
Federal agents executing a search warrant at Heppner's mansion seized electronic devices containing the AI-generated materials. The government moved for a ruling that the documents were not privileged. Defense counsel argued they were protected by both attorney-client privilege and the work product doctrine.
Judge Jed S. Rakoff of the Southern District of New York ruled from the bench on Feb. 10, stating he saw "not remotely any basis for any claim of attorney-client privilege." He followed with a written opinion on Feb. 17, 2026, further explaining his reasoning.
How the government learned about the AI documents is itself a cautionary detail: during pretrial discovery, defense counsel produced a privilege log. One entry described documents as "artificial intelligence-generated analysis conveying facts to counsel for the purpose of obtaining legal advice." That description flagged the materials for prosecutors, who then moved to compel production.
Case update: At a pretrial conference on Feb. 26, 2026, defense counsel requested an adjournment, which Judge Rakoff granted. The trial was rescheduled from April 6 to April 21, 2026.
The Court's Reasoning
Judge Rakoff identified three independent grounds on which privilege failed, any one of which would have been sufficient to deny the claim.
1. Claude isn't an attorney. Attorney-client privilege protects confidential communications between a client and a licensed lawyer made for the purpose of obtaining legal advice. Judge Rakoff rejected the defense's argument that this requirement was beside the point because Claude functioned like a word-processing tool, noting that all recognized privileges require "a trusting human relationship" with "a licensed professional who owes fiduciary duties and is subject to discipline." No such relationship can exist between a user and an AI platform.
The court also noted Claude's own terms of service expressly disclaim any ability to give legal advice. The platform directs users to consult a qualified attorney, which Claude itself confirmed when the government queried the tool directly during proceedings.
2. Confidentiality wasn't maintained. Privilege requires that communications remain confidential. Heppner voluntarily submitted his prompts to a commercial platform governed by Anthropic's privacy policy, which explicitly advises users that the company collects data on prompts and outputs, may use that data to train its AI systems, and reserves the right to disclose user data to governmental regulatory authorities and third parties.
The court found that submitting information under those terms was inconsistent with any reasonable expectation of confidentiality. The ruling didn't treat the AI platform as a neutral tool. It treated Claude as a third party.
3. Retroactive sharing doesn't create privilege. The documents were created before Heppner shared them with counsel. Sending non-privileged materials to an attorney after the fact doesn't retroactively shield them.
The court also noted a more troubling dimension: because Heppner fed information he received from his lawyers into Claude, the government argued — and the judge agreed — doing so may have constituted a waiver of privilege over the original attorney-client communications themselves.
Work product protection also failed. The work product doctrine protects materials prepared by or at the direction of counsel in anticipation of litigation. Defense counsel conceded that Heppner created the documents of his own volition, without attorney involvement or direction.
The court said that even if the documents were prepared in anticipation of litigation, they weren't prepared by counsel or at counsel's behest. And they didn't reflect defense counsel's mental impressions or strategy, protection of which is the doctrine's core purpose.
In reaching this conclusion, Judge Rakoff disagreed with an earlier SDNY magistrate judge decision, Shih v. Petal Card, 565 F. Supp. 3d 557 (S.D.N.Y. 2021), that extended work product protection to materials generated by non-lawyers without attorney direction. The Heppner ruling signals that approach won't hold.
What the Court Left Open
The written opinion is notable not only for what it decided but for what it deliberately left open. Judge Rakoff acknowledged the analysis might differ under different facts. Two scenarios in particular could yield different outcomes in future cases.
First, if an attorney directs a client to use an AI tool as part of legal representation, the AI platform could arguably function as a necessary agent of counsel under what legal scholars call the Kovel doctrine. That principle, established in United States v. Kovel, 296 F.2d 918 (2d Cir. 1961), allows privilege to extend to third parties engaged to help attorneys render legal advice, provided those parties are bound by confidentiality obligations and are acting under attorney direction.
Judge Rakoff explicitly acknowledged the Kovel framework, writing that had counsel directed Heppner to use Claude, "Claude might arguably be said to have functioned in a manner akin to a highly trained professional who may act as a lawyer's agent within the protection of the attorney-client privilege." That door remains open. What it requires is documented attorney direction before and during the AI use, not retroactive ratification.
Second, the court expressly noted enterprise AI platforms may give rise to a reasonable expectation of confidentiality that consumer tools do not. Enterprise AI tools typically include contractual commitments not to train on user data, defined data segregation and retention practices, and explicit confidentiality terms.
The ruling applied specifically to the free, publicly available version of Claude. Whether enterprise-tier tools would reach a different result remains an open question. No court has yet ruled enterprise AI use preserves privilege.
What Legal Experts Say
Since the written opinion was released, some commentary has suggested the ruling means AI and legal privilege are permanently incompatible. That overstates the holding. Multiple law firms have published analyses making clear the court applied traditional, technology-neutral privilege principles.
The decision doesn't declare generative AI incompatible with legal protections. It declares a specific set of facts — a consumer tool, no attorney direction, a privacy policy that permits third-party data sharing — fails to satisfy those protections.
Legal experts agree the ruling highlights an anthropomorphism risk businesses underestimate. As the New York State Bar Association noted in its analysis, people increasingly speak of "asking Claude" or "talking to ChatGPT" as if these tools are personal advisers. That framing creates a false sense of confidentiality. Legally, querying a consumer AI platform is closer to a Google search than to speaking with an attorney. And no reasonable person should believe their Google search history is privileged.
HR and employment counsel are raising a concern worth noting separately. According to Richard Warren, a Detroit-based shareholder at Ogletree Deakins, businesses should now expect litigants and government agencies to ask companies to produce AI prompts and outputs generated by HR teams and managers, particularly in the context of internal investigations and employment decisions. That exposure extends well beyond fraud cases.
Consumer vs. Enterprise AI
The Heppner ruling draws a sharp line that every business should understand before an AI tool enters its next team meeting, incident response session, or compliance review.
Consumer AI platforms are governed by general terms of service designed for individual users. These free or low-cost versions of ChatGPT, Gemini, and other AI tools typically reserve rights to use inputs for model training, store data in shared infrastructure, permit disclosure to third parties under various circumstances, and disclaim confidentiality expectations. A $20-per-month subscription doesn't change this calculus in any legally meaningful way.
Enterprise AI platforms are a different category. Paid business tiers from major providers often include contractual commitments not to train on client data, defined retention and deletion protocols, and data segregation and encryption. Some even offer zero data retention options in which prompts and responses aren't stored at all.
Some enterprise offerings support HIPAA-compliant workflows and will execute Business Associate Agreements. These features don't guarantee privilege will be preserved, as no court has made that determination. But they represent a materially stronger starting position than consumer tools. At a minimum, an enterprise agreement gives counsel something to reference when asserting a confidentiality expectation.
The implication is straightforward: if your team is using the free version of any AI platform to discuss legal strategy, draft incident response plans, evaluate regulatory exposure, or analyze anything that could become relevant to litigation or a government investigation, those conversations may be discoverable.
Our AI readiness checklist covers data leakage and compliance considerations worth reviewing alongside the governance questions this ruling raises.
Governance Gap Now a Board-Level Issue
The Heppner ruling lands at a moment when corporate AI governance is already under intense scrutiny, and the gap between stated policy and actual practice is significant. While 62% of boards now hold regular AI discussions, only 27% have formally added AI governance to their committee charters, according to the National Association of Corporate Directors' 2025 Board Practices and Oversight Survey.
Most boards remain focused on education and risk awareness rather than embedding structured oversight into core operations. The gap between adoption and governance is exactly what this ruling exposes.
Legal scholars at WilmerHale have noted that under Delaware's standard for director oversight, the Caremark doctrine, boards could face heightened exposure if they fail to exercise adequate oversight of AI-related risks. While no court has yet applied Caremark in an AI-specific context, the doctrinal framework is already in place. The question isn't whether board-level AI governance will become a legal obligation, but how quickly.
The Securities and Exchange Commission has already moved in this direction. Its Division of Examinations identified cybersecurity and AI-driven threats to data integrity, including third-party AI vendor risk, as a focus area for examinations in fiscal year 2026. The SEC's Investor Advisory Committee has separately recommended enhanced disclosures around how boards oversee AI governance as part of managing material cybersecurity risks. For regulated businesses, this is an active examination priority.
AI governance is also moving from documentation to enforcement, with 2026 likely marking the shift from high-level principles to enforceable rules. Expectations include documented AI inventories, risk classifications, third-party due diligence, and model lifecycle controls. Governance will increasingly be measured by clear, demonstrable metrics rather than paper policies.
Regulations Accelerating
The Heppner ruling doesn't exist in isolation. It arrives as the regulatory environment around AI becomes substantially more complex, and the combination of judicial precedent and legislative activity creates compounding obligations for firms not actively managing their AI use.
On the state level, Colorado's Artificial Intelligence Act was delayed from February 2026 to June 30, 2026, giving businesses more preparation time. The Act targets high-risk AI systems that make or substantially influence consequential decisions. It requires risk management programs, impact assessments, ongoing monitoring, and more. Colorado is widely considered a bellwether for other states.
Texas enacted the Responsible Artificial Intelligence Governance Act effective Jan. 1, 2026. The law bans certain harmful AI uses and requires disclosures when government agencies and health care providers use AI that interacts with consumers. At least 11 states are advancing chatbot disclosure legislation as of early 2026.
The cyber insurance market is also responding. Carriers have begun introducing AI security riders that condition coverage on documented evidence of adversarial testing, model-level risk assessments, and specific AI safeguards. Businesses that can't demonstrate governance controls may find that AI-related incidents aren't covered, or that their premiums reflect the absence of documented policy. This is a direct financial consequence of the same governance gap the Heppner ruling exposes.
Defense contractors face an additional layer of exposure. Using consumer AI platforms to discuss contract performance, security incidents, or CMMC compliance matters may implicate data protection requirements for controlled unclassified information, entirely separate from the privilege question Heppner addressed. As state AI laws proliferate and California's transparency mandates take effect, companies using AI to assess their own compliance posture face the recursive risk that those assessments themselves are discoverable.
What This Means for Your Business
The Heppner ruling isn't limited to criminal defendants. Its reasoning applies to any business whose employees use commercial AI tools for purposes that touch legal, compliance, HR, or security functions.
Common uses that now carry documented privilege risk include:
- Analyzing breach scenarios
- Drafting incident response plans
- Assessing regulatory exposure
- Preparing remediation documentation
- Organizing internal investigation findings
- Analyzing employee conduct
- Evaluating contract terms or negotiation positions
- Reviewing security vulnerabilities or compliance gaps
- Analyzing business decisions with legal implications
The same consumer tools many employees use casually, including those used to generate content now regulated under federal and state deepfake laws, are subject to the same evidentiary exposure. And as the HR community has noted following Heppner, the question of AI-generated records surfaces in employment disputes and regulatory inquiries just as readily as it does in criminal proceedings.
Steps to Take Now
Businesses should treat this ruling as a trigger for immediate policy review. That starts with knowing what AI tools your team is actually using, not just the ones you approved. Shadow AI adoption is widespread, and you can't govern what you haven't inventoried.
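For technical teams, that inventory can start with data the business already collects. The sketch below is a minimal illustration of the idea, assuming a hypothetical web proxy log export with timestamp, user, and domain columns; the log schema and domain list are placeholder assumptions to adapt, not a prescribed method.

```python
# Minimal shadow-AI discovery sketch: flag traffic to known AI services
# in an exported proxy log. The domains and the log schema are
# illustrative assumptions; adapt both to your environment.
import csv
from collections import Counter

# Hypothetical starter list; extend with vendors relevant to your org.
AI_DOMAINS = {
    "chatgpt.com": "ChatGPT (consumer)",
    "chat.openai.com": "ChatGPT (consumer)",
    "claude.ai": "Claude (consumer)",
    "gemini.google.com": "Gemini (consumer)",
}

def inventory_ai_usage(log_path: str) -> Counter:
    """Count hits per (user, tool) from a proxy log export with
    columns: timestamp,user,domain (an assumed format)."""
    hits: Counter = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            tool = AI_DOMAINS.get(row["domain"].lower())
            if tool:
                hits[(row["user"], tool)] += 1
    return hits

if __name__ == "__main__":
    for (user, tool), count in sorted(inventory_ai_usage("proxy_log.csv").items()):
        print(f"{user}\t{tool}\t{count} requests")
```

Even a rough first pass like this tends to surface tools no one approved, which is the point: the inventory precedes the policy.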
From there, review the terms of service and privacy policies for every AI tool in use. Determine whether the vendor can access, train on, or disclose your conversation data. If the answer is yes under any circumstance, that tool shouldn't be used for sensitive legal, compliance, investigative, or security discussions.
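That access-train-disclose test is simple enough to encode directly in an approved-tools register, which also feeds the written policies discussed next. Here is a minimal Python sketch under stated assumptions: the tool entries and field names are hypothetical stand-ins for whatever your actual terms-of-service review produces.

```python
# Minimal policy-register sketch encoding the rule above: if the vendor
# can access, train on, or disclose conversation data, the tool is not
# approved for sensitive legal, compliance, investigative, or security
# work. Entries and field names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AIToolTerms:
    name: str
    vendor_can_access_data: bool
    trains_on_user_data: bool
    may_disclose_to_third_parties: bool
    has_enterprise_dpa: bool  # written data processing agreement in place

def approved_for_sensitive_use(t: AIToolTerms) -> bool:
    exposed = (t.vendor_can_access_data or t.trains_on_user_data
               or t.may_disclose_to_third_parties)
    # Requiring a DPA on top of "no exposure" is an extra safeguard drawn
    # from the enterprise-tier discussion above, not a court-mandated test.
    return t.has_enterprise_dpa and not exposed

REGISTER = [
    AIToolTerms("Consumer chatbot (free tier)", True, True, True, False),
    AIToolTerms("Enterprise deployment", False, False, False, True),
]

for tool in REGISTER:
    verdict = "approved" if approved_for_sensitive_use(tool) else "prohibited"
    print(f"{tool.name}: {verdict} for sensitive matters")
```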
Establish written policies that clearly distinguish between approved tools and prohibited use cases. Work with legal counsel to determine whether enterprise AI platforms with appropriate contractual protections are warranted for specific functions. Update legal hold and document retention policies to account for AI-generated content that's now subject to the same evidentiary rules as any other business record. This includes prompts, outputs, and summaries.
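One way to make that retention obligation concrete is to treat each AI interaction as a structured business record carrying a legal hold flag. The sketch below is illustrative only; the schema and field names are assumptions, not a standard.

```python
# Minimal retention-record sketch: each AI prompt/output pair becomes a
# business record that can be placed under legal hold. Field names are
# illustrative assumptions, not a prescribed schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIInteractionRecord:
    user: str
    tool: str
    prompt: str
    output: str
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
    legal_hold: bool = False  # set True when a matter triggers preservation

    def apply_hold(self) -> None:
        """Flag the record for preservation; held records must be
        exempted from routine deletion schedules."""
        self.legal_hold = True

record = AIInteractionRecord(
    user="jdoe", tool="Enterprise deployment",
    prompt="Summarize the incident timeline", output="...")
record.apply_hold()
print(record.legal_hold)  # True
```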
Attorneys working with business clients should add explicit AI disclosure language to engagement letters. The privilege belongs to the client, but so does the responsibility to preserve it. Most business leaders don't understand that a private-feeling chat interface is legally equivalent to a conversation with a third party. That education is now part of effective legal representation.
For guidance on foundational AI security controls recommended by the National Security Agency and the Cybersecurity and Infrastructure Security Agency, see our post on securing AI systems throughout the development lifecycle.
Frequently Asked Questions (FAQs)
Does this ruling only apply to criminal cases?
No. The Heppner case is criminal, but the privilege principles Judge Rakoff applied are the same ones that govern civil litigation, regulatory investigations, government audits, and employment disputes. Any proceeding in which attorney-client privilege or work product protection could be claimed is affected by this reasoning. If your company is involved in a civil lawsuit, an SEC examination, an EEOC investigation, or a state regulatory inquiry, the same analysis applies to AI-generated documents your employees created using consumer tools.
Does it matter which AI tool I use — Claude, ChatGPT, Gemini?
The ruling was about Claude specifically, but the reasoning turns on the privacy policy of the consumer platform, not the brand name. ChatGPT, Gemini, Microsoft Copilot in its free tier, and most other consumer AI tools have materially similar terms: the vendor collects prompts and outputs, may use them for model training, and reserves the right to disclose data to third parties. If your AI vendor's terms of service include any of those provisions, the Heppner reasoning applies regardless of whose logo is on the interface.
I pay for a subscription. Does that change anything?
Not meaningfully, no. A paid individual subscription — such as ChatGPT Plus or Claude Pro — doesn't typically include the contractual data protections that enterprise agreements provide. The court's analysis focused on what the privacy policy actually says, not what the user paid. A $20-per-month subscription generally still permits the vendor to access and train on your data unless the specific plan includes explicit zero-retention or data segregation commitments in a written agreement. If the terms don't include a data processing agreement and explicit confidentiality provisions, treat that tool as a consumer platform for privilege purposes.
What does "attorney direction" actually look like in practice?
The court suggested that if an attorney had directed Heppner to use Claude — similar to how counsel might engage a paralegal or forensic consultant — the outcome might have differed. In practice, that means the attorney explicitly instructs the client or a staff member to use a specific AI tool for a defined purpose as part of the legal representation, that instruction is documented in writing, and the AI use occurs within the scope of that direction rather than independently. An employee deciding on their own to use an AI tool to prepare for a meeting with counsel doesn't qualify. The direction needs to exist before and during the AI use, not be applied retroactively to documents already created.
Are documents my employees already created at risk?
Potentially, yes. If your employees have used consumer AI tools to analyze legal exposure, draft responses to regulatory inquiries, prepare internal investigation summaries, or document compliance gaps, those materials may already exist in your systems and could be subject to discovery in future proceedings. This is why an AI inventory audit is urgent rather than theoretical. You need to understand what AI-generated content exists, where it lives, and whether any of it was created in a context that could become legally sensitive before you can assess your current exposure.
Does this affect trade secret protection, not just privilege?
Yes, and this point tends to get overlooked. Attorney-client privilege is one form of protection — trade secret law is another. When an employee inputs proprietary business information, product formulas, customer data, financial projections, or strategic plans into a consumer AI platform, they may be disclosing that information to a third party under terms that undermine trade secret protection as well. Trade secret status requires that the owner take reasonable steps to maintain secrecy. Voluntarily inputting trade secrets into a platform whose privacy policy permits the vendor to access and train on that data is difficult to characterize as a reasonable secrecy measure. Life sciences companies, manufacturers, and technology firms with significant intellectual property exposure should treat this as a separate, compounding risk alongside the privilege question.
What should our HR team do specifically?
HR teams are on the front lines of this exposure because they routinely handle matters that could become litigation — terminations, investigations, performance documentation, accommodation requests, and discrimination or harassment complaints. Following Heppner, employment lawyers expect government agencies and opposing counsel to begin requesting AI prompts and outputs generated by HR teams in employment disputes. That means HR should stop using consumer AI tools for any matter that involves a specific employee or could foreshadow legal proceedings, document which AI tools are currently in use for HR functions, work with legal counsel to establish which tools are approved for sensitive HR tasks, and ensure any AI-generated summaries or analysis related to employment decisions are treated as business records subject to legal hold obligations.
Does enterprise AI actually fix the problem?
It strengthens your position, but doesn't guarantee it. No court has ruled that enterprise AI use preserves attorney-client privilege — that question remains open. What enterprise tools provide is a materially better factual foundation: contractual data processing agreements, zero-retention options, defined confidentiality terms, and explicit prohibitions on vendor training. Those features give counsel something to point to when asserting a confidentiality expectation that consumer tools simply can't provide. The right answer is enterprise tools plus attorney direction plus documented governance policies — not any single element in isolation. Enterprise AI without attorney oversight and clear policies is still a governance gap. It's a smaller one, but it's not a complete solution.
Tech-Neutral Decision
Judge Rakoff's opinion is best understood as a technology-neutral decision. The court didn't rewrite privilege law for the AI era. It enforced existing rules in a new context and found that a commercial AI platform, operating under standard consumer terms of service, functions as a third party for privilege purposes, regardless of the user's intent.
The lesson isn't that businesses should avoid AI. It's that AI platforms create records, and those records are subject to the same evidentiary rules as any other document. Treating AI as a productivity tool without accounting for its evidentiary footprint is a governance gap this ruling makes visible. The legal and regulatory environment forming around it — Heppner, state AI laws, the SEC's examination priorities, new insurance requirements — suggests that future courts and regulators will build on it quickly.
As the trial of Bradley Heppner proceeds to April 21, 2026, the practical consequences of that footprint will play out in federal court. His AI prompts are now government evidence. For business leaders still treating consumer AI as a private tool, that's the most concrete demonstration available of what this ruling actually means.
STACK Cybersecurity helps companies develop AI governance frameworks that address privilege protection, data security, and regulatory compliance — including vendor risk assessments, legal hold policy updates, and CMMC-aligned AI use policies.
Contact us to discuss how your current AI tool usage measures up.
This post is for informational purposes only and does not constitute legal advice. Consult qualified legal counsel regarding your specific situation.
Sources
United States v. Heppner, 2026 WL 436479, No. 25 Cr. 503 (JSR) (S.D.N.Y. Feb. 17, 2026).
Cleary Gottlieb. "Managing AI Risk: Legal and Governance Imperatives for the Board." Jan. 2026.
HRMorning. "New Generative AI Court Ruling Warns HR: What to Know Now." Feb. 2026.