Judge Jed S. Rakoff of the US District Court for the Southern District of New York issued a notable decision from the bench last week, holding that documents a defendant generated using an AI tool were not protected by either the attorney-client privilege or the work product doctrine. US v. Heppner, No. 1:25-CR-00503 (S.D.N.Y.).
What Happened?
The defendant faces multiple criminal charges including securities fraud and wire fraud in connection with a financial services firm he founded. After receiving a federal grand jury subpoena and learning that he was the target of a government investigation, the defendant generated a set of memoranda using a public AI tool to assess potential factual and legal strategies in his case, which he later presented to his counsel. At the time of his arrest, the government seized the electronic devices on which the defendant had stored copies of these memoranda.
The defendant then claimed that these memoranda were privileged and protected as work product. The defense described the documents in privilege logs as “[a]rtificial intelligence-generated analysis conveying facts to counsel for purpose of obtaining legal advice” and argued in court that the memoranda were shielded from discovery under Federal Rule of Criminal Procedure 16(b)(2)(A) as “documents made by the defendant . . . during the case’s investigation or defense.”
The government disagreed and argued that the documents were not protected by either attorney-client privilege or the work product doctrine because, among other things, the documents generated by AI:
Were not prepared at counsel’s direction;
Did not reflect defense counsel’s legal strategy;
Did not become work product when the defendant sent the memoranda to his counsel; and
Were not confidential, since the defendant shared his prompts with a publicly accessible AI tool.
The Court agreed with the government and concluded that the documents the defendant prompted an AI tool to generate were not privileged, even though they were partially informed by information his attorneys had provided to him and even though he sent the generated documents to his attorneys for the purpose of seeking legal advice. Judge Rakoff noted that, by including his attorneys’ information in the prompts, the defendant had “disclosed it to a third-party, in effect, AI.” Judge Rakoff further observed that the defendant had no expectation that the information he put into the AI tool, or the memoranda the tool generated, would be confidential, because the AI tool’s terms of service “expressly stated that users should have no expectation of privacy in their inputs.”
This decision joins an emerging body of case law considering the impact of AI on the privilege and work product doctrines, in which courts have distinguished between cases where the user had a reasonable expectation that their use of an AI tool would be private and cases where the user did not. For example, Judge Martínez-Olguín of the Northern District of California ruled that certain chatbot conversations generated by attorneys in anticipation of a copyright infringement lawsuit constituted opinion work product and were protected from discovery. There, however, the user claimed to have understood, based on the applicable user agreement, that “any prompts input in ChatGPT [would] be private.” See Letter at 5, Tremblay v. OpenAI, Inc., No. 3:23-cv-3223 (N.D. Cal. June 18, 2024), ECF No. 153. This left Judge Martínez-Olguín room to find that the “ChatGPT prompts were queries crafted by counsel and contain counsel’s mental impressions and opinions about how to interrogate ChatGPT” and thus could be shielded from discovery. Order at 3, Tremblay (Aug. 8, 2024), ECF No. 167.
Key Takeaways
Judge Rakoff’s decision is notable because it suggests that courts will apply traditional privilege rules to navigate the AI landscape and that, where (as here) there is no reasonable expectation of confidentiality, asserting privilege may be an uphill battle. The ruling thus underscores the need for parties anticipating, or already in, litigation to consider how their use of AI may affect their future privilege claims.
Going forward, companies and individuals weighing how they can use AI tools to help them evaluate their legal risks would be well advised to keep the following points in mind:
1. Confidential or privileged data or documents may risk “losing” their privileged status if input into public AI tools;
2. AI outputs may not attract privilege protections, even if they were drafted for use in discussions with counsel;
3. Ensuring that counsel directs (or is involved in) the generation of AI legal analysis likely strengthens a privilege assertion; and
4. Reviewing user agreements and settings regarding confidentiality is essential; non-public or enterprise-based AI tools that do not use inputs to train models or disclose inputs to third parties likely offer stronger privilege protection than public AI tools.
