Artificial intelligence tools are now part of everyday business operations—drafting emails, analyzing data, preparing reports, and accelerating decision‑making. But as AI adoption grows, courts are confronting a critical question:
When companies use AI, what becomes discoverable in litigation—and what legal protections may be lost?
Recent federal court decisions show that AI is not treated as a legal novelty. Instead, courts are applying traditional discovery, privilege, and work‑product principles to modern AI tools, often with high stakes for businesses. This article highlights three key cases shaping the emerging legal framework and explains what companies should do now to reduce litigation risk.
Why AI Use Creates Discovery Risk
From a litigation standpoint, AI tools raise three recurring issues:
1. AI prompts and outputs can be discoverable evidence
Courts increasingly view AI‑generated content as ordinary electronically stored information (ESI), similar to emails or draft documents. Depending on how a tool is configured, AI prompts and outputs may:
- Be stored outside company systems
- Be retained by third‑party vendors
- Exist independently of traditional document management controls
If AI content relates to business decisions, compliance, employment, pricing, or disputes, it may be discoverable.
2. Confidentiality is not guaranteed
Many widely used AI tools—particularly consumer or public platforms—expressly state that user inputs may be stored, reviewed, or used for training purposes. This can undermine arguments that communications were confidential, which is a prerequisite for privilege.
3. Privilege depends on how AI is used
Whether AI‑generated material is protected often turns on:
- The type of AI tool used;
- Whether legal counsel directed or supervised the work;
- Whether the material reflects legal strategy or business facts; and
- Whether confidentiality was preserved.
Recent cases illustrate how quickly privilege can be lost—or preserved—based on these factors. First, confirm that you are using a closed AI model, not an open or public one. To further protect against disclosure, consult counsel before using AI for disputes or litigation. Counsel can then either perform the task or provide written direction on what you may do, along with clear documentation that the work is being done for purposes of litigation and at counsel's direction.
United States v. Heppner: When AI Use Destroyed Privilege
The Ruling
In United States v. Heppner (S.D.N.Y. 2026), the court ruled that documents created by a criminal defendant using a public generative AI tool were not protected by attorney‑client privilege or the work‑product doctrine.
The defendant used AI to generate analyses and potential defense arguments after learning he was under investigation. He later shared those AI‑generated materials with his lawyers.
The court rejected privilege claims, emphasizing that:
- The AI platform was a third party, not counsel
- The platform’s terms eliminated any reasonable expectation of confidentiality
- The work was done on the client’s own initiative, not at counsel’s direction
- Sending the documents to counsel later did not retroactively create privilege
Key Takeaway for Businesses
Using public or consumer AI tools to analyze legal exposure, compliance risks, investigations, or disputes—especially without legal oversight—can result in those materials being fully discoverable, giving away key strategic aspects of your case. Privilege is not preserved simply because AI output is shared with counsel after it is created.
In practice, the real risk for businesses lies not in AI use itself but in how, why, and when AI is used. If the answer to any of those questions involves actual or potential litigation, consult counsel first.
Warner v. Gilbarco: Courts Reject AI “Fishing Expeditions”
The Ruling
In Warner v. Gilbarco (E.D. Mich. 2026), the court took a very different approach. There, a party sought discovery into an opponent’s use of AI tools like ChatGPT, arguing that AI use should open the door to probing how litigation documents were drafted.
The court flatly rejected that argument, holding that:
- Discovery into “how AI was used” was irrelevant and disproportionate
- AI‑assisted drafting that reflects legal strategy remains protected work product
- Using AI does not automatically waive work‑product protection
- Generative AI tools are best understood as tools, not third‑party witnesses
Key Takeaway for Businesses
Courts are reluctant to allow AI to become a backdoor method for discovering litigation strategy. When AI is used responsibly and confidentially, courts may protect AI‑assisted legal work just as they would traditional drafting. Here again, how, why, and when AI was used was crucial to the court's ruling.
United Healthcare v. Lokken: AI Summaries and Work Product
Why This Case Matters for AI
Although United Healthcare v. Lokken (D. Minn. 2020) did not involve AI, its discovery principles are directly relevant to AI‑generated summaries and analytics.
The court held that:
- Producing underlying business records does not require producing later‑created attorney summaries or analyses
- Attorney work product does not become discoverable simply because it compiles or interprets existing data
- The duty to supplement discovery applies to new business records, not legal analyses
Key Takeaway for Businesses
When AI is used at counsel’s direction to summarize or analyze large datasets for litigation, those outputs may remain protected, provided they are clearly treated as legal work product and not ordinary business records.
What These Cases Mean for Businesses Using AI
Taken together, these decisions send a clear message:
Courts are not hostile to AI—but they will not create new protections simply because AI is involved.
Instead, they will ask:
- Was confidentiality preserved?
- Was counsel involved?
- Does the material reflect legal strategy or business facts?
- Was AI used as a tool—or as a substitute for protected communications?
While the role of AI in litigation is ever-changing and its applications are endless, it cannot replace the judgment and experience of counsel. Like any consultant or expert in litigation, AI can offer tremendous advantages, but it does not replace counsel: courts do not extend privilege to an AI chatbot or language model any more than they would to a consultant or expert acting outside counsel's direction. To preserve privilege and protect confidential information and legal strategy, always consult counsel first; failing to do so can result in costly litigation.
Best Practices: Reducing AI Litigation Risk Before It Starts
Businesses using AI should consider the following steps now—not after litigation begins:
1. Use the right AI tools
Avoid consumer or public AI platforms for sensitive matters. Enterprise tools with contractual confidentiality, no‑training provisions, and retention controls provide stronger footing.
2. Treat AI like email
Assume AI prompts and outputs may be discoverable. If you would not put it in an email, do not put it into an AI prompt.
3. Involve legal counsel early
When AI is used for legal, regulatory, or litigation‑related work, ensure it is done at the direction of counsel and documented accordingly.
4. Separate business facts from legal strategy
Courts protect mental impressions and legal theories—not underlying facts. Maintain clear separation between business analytics and legal analyses.
5. Train employees on AI use
Most AI risk comes from well‑intentioned employees. Clear policies on what may and may not be entered into AI tools are essential.
Conclusion: AI Is Powerful, But Not Privileged by Default
AI can dramatically improve efficiency, but it can also create unexpected litigation exposure. Courts are already making clear that AI does not provide a privilege shield—and in some cases can tear one down.
Companies that implement thoughtful AI governance, involve counsel, and choose tools carefully can reduce discovery risk while still capturing AI’s benefits. Those that do not may learn, too late, that their AI use created the very evidence an adversary was seeking.