We are in the midst of a revolution in the way legal practice is conducted. Traditionally, lawyers consulted practitioners’ texts, reviewed the applicable cases and used their firm’s available models to craft their work product. The move from paper to online sources did not essentially alter this; it merely expanded the resources available, as even small firms could now consult most standard primary and secondary sources via Lexis or Westlaw. With AI the game has radically changed. Statutes, cases, practitioners’ texts and vast numbers of documents can now be fed to AI systems all at once to produce an analysis within minutes. Work product of all kinds can be fed to AI systems to check for internal consistency, verify citations, synthesise or simply improve clarity and presentation.

Lawyers are not the only ones affected. All the things that lawyers do with AI can, theoretically, be done by their clients. Clients can use AI to carry out legal research, draft agreements or affidavits, synthesise their documentary records or assemble material to prepare for conferences with their lawyers. The distinction between lawyer and client use of AI is that clients are more likely to use public AI tools, whereas their lawyers will likely use private or closed AI systems. One area in which the Courts have begun to grapple with the implications of AI use is privilege. This article reviews in outline what the current position is.

Arguably, AI has created two new categories of documents: AI prompts and AI outputs. A prompt is what you ask the AI tool to do; the output is what the AI tool produces in response. Both must be considered in the context of both litigation privilege and legal advice privilege.

Litigation privilege

Litigation privilege applies to communications or documents created for the dominant purpose of obtaining legal advice, information or evidence relating to the conduct of that litigation. If this test is met, it is hard to see why litigation privilege would not apply both to the prompt and the output, as well as to any communications or documents then created using the output. Importantly, in applying the dominant purpose test, the relevant purpose is that of the person who creates the communication or document in question i.e. the person inputting the prompt or creating the document based on the output. There is no need to ascribe a purpose to the AI system itself.

Although no courts in England or Bermuda have considered the question of litigation privilege as it applies to AI usage, there is some recent US authority. In the US, the analogue of litigation privilege is known as the work product doctrine, but it provides a narrower form of protection. In USA v Heppner, 25 Cr 5032, Mr Heppner was charged with fraud arising from alleged misconduct as a director of several companies. He sought to argue that 31 documents that memorialised his communications with Anthropic’s Claude were protected by privilege. The District Court of the Southern District of New York disagreed.

It held that, as the doctrine shelters only “the mental processes of the attorney”, AI-generated documents are not protected “if they were not prepared by, or at the behest of counsel in anticipation of litigation or for trial”. In this case, the prompts and the documents derived from the outputs were prepared at Heppner’s sole volition, independently of counsel. Nor did they reflect counsel’s strategy at the time of creation. It should be noted that no such restrictions apply to the test under English or Bermuda law.

By contrast, in Warner v Gilbarco, Inc, No. 2:2024 cv 123333, the Court decided that Warner’s use of AI was entitled to protection. Warner was acting in person, and the judge felt that forcing disclosure would reveal a litigant’s mental impressions regardless of whether those impressions were recorded through dictation, a word processor or an AI tool. The fact that Warner was acting in person likely made the difference, the Court holding that an attorney’s involvement was not a pre-condition for work product protection. A similar result was reached in Morgan v V2X, Inc, No. 25 cv 01991, where Morgan was also acting in person.

Legal advice privilege

If litigation is not in prospect, the question is whether legal advice privilege is available. Legal advice privilege applies only to lawyer-client communications or draft communications that are prepared for the dominant purpose of giving or obtaining legal advice. Other documents or communications will not be covered, save to the extent that they evidence the lawyers’ advice or the questions on which they are asked to advise. As an AI tool is not a lawyer, a client who chooses to ask AI for legal advice will not be protected by legal advice privilege.

Again, although no courts in England or Bermuda have considered the question of legal advice privilege as it applies to AI usage, there is some recent US authority. US attorney-client privilege is conceptually similar to legal advice privilege. In Heppner, the judgment rejected Heppner’s reliance on the doctrine for three reasons. First, the communications were not between Heppner and his attorney; as the judgment noted, “Heppner could not, and does not, maintain that Claude is an attorney”. Secondly, Anthropic’s privacy policy did not protect Heppner’s prompts or Claude’s outputs derived from them, as Anthropic retained the right to disclose them to third parties. Thirdly, the Court considered that, in the absence of direction from counsel, Heppner did not communicate with Claude to obtain legal advice.

Waiver of privilege

A basic and unforgiving feature of all privilege is confidentiality: if privileged material is copied to a third party in circumstances inconsistent with confidentiality, the privilege holder will have waived that privilege. Without more (see below), providing material to open or public AI tools will constitute a waiver of any privilege in that material. Judicial guidance in England on this issue was provided in October 2025 by the Courts and Tribunals paper on AI, which pithily notes: “you should treat all public AI tools as being capable of making public anything entered into them”.

This warning was borne out in the recent decision of the Upper Tribunal (Immigration and Asylum Chamber) in the UK in R (on the application of Munir) v Secretary of State for the Home Department (AI hallucination; Hamid) [2026] UKUT 00081 (IAC). The Tribunal held that placing confidential material on ChatGPT constituted a waiver of legal professional privilege. However, neither the judicial guidance nor the decision in Munir addresses the position where the user of a public or open AI tool adopts the privacy settings within the system. Many public AI tools now offer opt-out settings for training data usage, and it is unlikely that waiver will follow if the privacy settings are deployed.

In both Warner v Gilbarco, Inc and Morgan v V2X, Inc, the US Courts considered the question of waiver by use of public AI tools and decided that no waiver had taken place. Although the reasoning deployed is derived from US waiver rules, it raises some interesting questions. In Warner, the Court held that, as waiver of work product privilege required “adversarial disclosure”, disclosure to an AI tool could not meet the test because generative AI tools were not persons, even though they may have administrators somewhere in the background.

In Morgan, the Court addressed the broader question of privacy in third-party intermediary systems and stated that routing information through such systems did not automatically extinguish reasonable privacy expectations. Curiously, the Court observed that AI’s “conversational, trust-inducing” design made it a uniquely sensitive disclosure environment that arguably warranted stronger privacy than ordinary email or cloud storage.

In Bermuda, it should be assumed for the moment that uploading confidential and/or privileged material to a public AI system will constitute a waiver where the system’s privacy settings are not adopted. This does not apply to private AI systems, provided that the relevant privacy policy expressly protects the private user. Clients should therefore be warned that pasting privileged legal advice, draft submissions, draft witness statements, privileged correspondence and the like into a public AI tool will likely constitute a waiver and render the documents disclosable, and that they should avoid doing so where the privacy opt-out setting is not deployed.

Takeaways

Even at this early stage in the development of the courts’ attitude to AI, there are some reasonably clear takeaways:

  1. Confidential and privileged information should not be uploaded to public AI tools, to avoid waiving the privilege which would otherwise protect it from disclosure. If privilege has been waived in these circumstances, it operates like any other waiver and cannot retrospectively be recovered.
  2. Where AI tools are used for legal tasks, organisations should use private or closed AI systems with appropriate contractual and technical safeguards. Even where using closed AI, organisations should ensure that prompts and outputs containing privileged information are securely stored and access is restricted.
  3. Where litigation is pending, if the dominant purpose test for litigation privilege is met, it is likely that the prompt, the output and any communications or documents then created using the output would be protected. However, if this is done on a public AI system, certainly where no privacy setting is adopted, any privilege held would be lost.
  4. Where no litigation is pending, the dominant purpose test for legal advice privilege must be met. As an AI tool is not a lawyer, a client who chooses to ask AI for legal advice will not be protected by legal advice privilege.

Finally, in case you were wondering, this article was not written by a chatbot.
