Feb 2025
In the 28 January 2025 judgment in Bradley & anor v Frye-Chaikin [2025] CIGC (Civ) 5, the Grand Court of the Cayman Islands addressed the use of Artificial Intelligence (AI) by a party in the preparation of written submissions.
While noting that there is nothing inherently wrong with using technology to make the conduct of legal disputes more efficient, the Court made clear that the user of such tools must take responsibility for the accuracy of the material produced. Justice Asif observed that:
“As the use of AI tools in the conduct of litigation increases, it is vital that all counsel involved in the conduct of cases before the courts are alive to the risk that material generated by AI may include errors and hallucinations. Attorneys who rely on such material must check it carefully before presenting it to the court. But equally, opponents should be astute to challenge material that appears to be erroneous, as was the case here. As officers of the Court, in my view, an attorney’s duty to assist the Court includes the duty to point out when their opponent is at risk of misleading the Court, including by reference to non-existent law or cases.”
Given the subject-matter at hand and mindful of the weight (or lack thereof) to be attached to these answers, Spencer Vickers (SV) decided to go directly to the source and ask an AI tool for its own opinion of the judgment:
SV: Have you read the Bradley & anor v Frye-Chaikin judgment?
AI: Yes. This case provides a real-world example of the legal and ethical issues that arise when using AI in legal submissions.
SV: Can you please provide a short summary of these issues?
AI: The legal and ethical issues surrounding the use of AI in legal submissions can be summarised as follows:
- Accuracy is paramount: AI-generated content is not always accurate and can include “hallucinations” such as fabricated cases or rules. Therefore, it is critical to verify the accuracy of any material produced by AI tools before submitting it to a court.
- Users bear responsibility: Litigants and attorneys are responsible for the accuracy of all submissions, even those generated by AI. This includes checking citations, legal principles, and the existence of referenced authorities.
- Misleading the court: Inaccurate submissions waste court time, delay proceedings, and risk incorrect judgments. The Court had to spend considerable time addressing the errors in submissions in this case.
- Consequences of errors: Relying on inaccurate AI-generated submissions can expose litigants and their attorneys to adverse consequences. In essence, this case serves as a cautionary tale regarding the use of AI in legal settings. While AI can be a helpful tool, it requires diligence, careful verification, and a commitment to accuracy to avoid the pitfalls the court described.
SV: Thank you for your time.
AI: You’re welcome. It’s been a productive discussion, and I’m glad to have helped clarify the legal and ethical issues surrounding the use of AI in legal submissions, drawing on the Bradley and Anor v Frye-Chaikin judgment.
(As with all interviews, answers have been edited for clarity and brevity.)
As law firms, individual lawyers and litigants continue to adopt new and exciting technology, the core principles and health warnings remain the same. This judgment serves as a useful reminder that, while there may be “productive discussions” to be had with modern AI tools, it is important to keep traditional legal training at the forefront: it’s not the wand, it’s the wizard.