Setting Precedents: Why the Isle of Man Needs to Pick an Approach to AI in Court
Imagine a Manx Advocate submitting a skeleton argument to the High Court, citing three authorities from the Manx Law Reports. Two cases are real. One is a hallucinated citation to a case that doesn’t exist.
Except this is not entirely hypothetical. In Munir v Home Office [2026] UKUT 81,1 the England and Wales Upper Tribunal delivered another stern warning on AI hallucinations in legal submissions.2 The judgment is blunt: legal professionals who cite fake cases fail their professional obligations; supervisors who do not catch the errors are likely more culpable than the juniors who made them; and uploading confidential documents to a publicly accessible AI tool like ChatGPT places client information in the public domain, breaching confidentiality and legal privilege in one step.
Munir is not an isolated event. Courts are encountering AI-related issues with increasing frequency, from reliance on AI-generated evidence,3 to the use of smart glasses for witness coaching,4 to the question of whether a defendant’s conversations with a GenAI platform were protected by legal privilege.5 AI is already in the courtroom. The question is whether the rules will catch up before something goes seriously wrong.
Different jurisdictions are answering that question differently, perhaps reflecting fundamentally different views of who should govern AI in the legal profession. As we finish the first quarter of 2026, it is worth comparing the approaches, because the Isle of Man will need to choose one.
The English Approach: Top-Down, Judiciary-Driven
The approach of England and Wales to AI in legal practice is being built through the courts.
The Civil Justice Council, chaired by Sir Colin Birss, has published an interim report and consultation on the use of AI for preparing court documents.6 It covers four categories: statements of case, skeleton arguments and other advocacy documents, witness statements, and expert reports. The key proposal is that where AI has been used to generate evidence on which the court is asked to rely, legal representatives should be required to make a declaration. More routine uses - such as transcription, spell-checking, and administrative tasks - would not trigger the requirement.
Transparency is a good thing. But the edge cases will be interesting. If AI has been used to organise the correspondence and disclosure that form the factual matrix of a witness statement, or your client has built AI workflows into their daily work so that the instructions to you are not entirely human-produced, could you safely declare that no AI was used, on the basis that you drafted the witness statement itself? These lines are not as clear as the consultation might suggest.
The broader judicial direction, though, is clear. On 31 October 2025, the judiciary issued updated guidance for judicial office holders on AI, replacing the April 2025 version.7 Sir Geoffrey Vos, the Master of the Rolls, has spoken publicly about the need to engage with AI in the legal system, specifically calling for debates about human rights, the future role of human judges, and the profession’s readiness.8 The UK Jurisdiction Taskforce’s consultation on liability for AI harms under English private law, for which Sir Geoffrey wrote the introduction, closed on 13 February 2026. The resulting report is awaited.9
After Munir, the common law is doing what it does: using existing professional obligations to absorb a new technology. No new legislation has been required, although AI-specific laws are being considered. The courts are telling lawyers what they must disclose, and case law is setting the consequences for failures. This approach is coherent for a jurisdiction with high court volume, a large and well-resourced bar, and an active judiciary. The question is whether it is right for everyone.
The Singaporean Approach: Practitioner-Facing, Toolkit-Based
Singapore has been building its AI governance infrastructure since 2019, with frameworks spanning government, industry, and now the legal profession specifically.10
On 6 March 2026, Singapore’s Ministry of Law published its Guide for Using Generative AI in the Legal Sector.11 It is worth reading in full (yes, all 50 pages of it) because it represents something quite different from the English approach.
The Guide is non-binding. It addresses law firms and in-house teams directly, rather than being court-facing. And it is relentlessly practical. It includes sample internal GenAI governance policies, employee handbook clauses, letter of engagement templates, and a vendor assessment checklist. This is not a statement of principle. It is a toolkit, with implementation steps mapped to three progressive adoption stages, designed so that a two-partner firm and an international firm’s Singapore office can both use it.
One of the most useful features is its risk-based oversight framework, which distinguishes between human-in-the-loop (active review before output is used) and human-on-the-loop (monitoring with intervention only when anomalies arise), mapped to whether the task is internal or external and whether the output has legal, reputational, or financial consequences. Have a look at the diagram on page 18. This is what practical AI governance looks like. It is not glamorous. It is checklists and data classification tables and vendor assessment questions. It gives you specific key concepts to consider (that you can Google) when building your firm’s AI policy. But it is the kind of framework that prevents a firm from uploading client files to ChatGPT and discovering the consequences after the fact.
The Guide is particularly strong on data governance: tiered data classification (public, internal, confidential, highly confidential, prohibited), specific guidance on free-to-use versus enterprise AI tools, and contractual safeguards for vendor selection. For a jurisdiction like the Isle of Man, where much of the legal work involves licensed financial services and regulated activities, this is directly relevant.
Singapore’s courts have also acted. Since 1 October 2024, the Guide on the Use of Generative AI Tools by Court Users applies across the Supreme Court, State Courts, and Family Justice Courts.12 It does not mandate a declaration of AI use, but requires users to be prepared to identify AI use where directed by the court. In other words, a lighter touch than the CJC’s proposed mandatory disclosure.
The Singapore approach says: we trust the profession to govern itself, and here are the tools to do it well. This is arguably the result of years of digitalisation and capacity building. Beyond the legal sector, Singapore is investing in AI literacy across the whole population. In his Budget 2026 speech, Prime Minister Lawrence Wong announced that Singaporean citizens enrolling in AI training courses will receive six months of free access to premium AI tools.13 The Law Society of Singapore already offers a Productivity Solutions Grant for pre-approved legal technology solutions.14 The direction is clear: institutional support for responsible adoption, across the profession and beyond. The government has lowered the barriers to adoption, reduced friction, and promised to take its people along as the nation navigates the changes AI brings.
The Australian Approach: Principles-Based, Court-Led
Australia offers a third model. Several state supreme courts have issued AI guidelines for litigants. These are court-led, but principles-based and softer in tone than the English or Singaporean materials.15
The Supreme Court of Victoria’s Guidelines for Litigants: Responsible Use of Artificial Intelligence in Litigation,16 issued in May 2024, is the clearest example. It sets out 12 principles rather than mandatory rules. Disclosure is encouraged rather than mandated: parties and practitioners “should” disclose AI use to each other and to the court. The Court is candid about GenAI limitations, stating explicitly that output from general-purpose tools “is not presumed to be correct” and is “more likely to produce results that are inaccurate for the purpose of current litigation.”
One principle is particularly relevant for a small jurisdiction. Principle 8(d) warns that AI output may be “inapplicable to the jurisdiction, as the data used to train the underlying model might be drawn from other jurisdictions with different substantive laws and procedural requirements.”
For the Isle of Man, this is not a theoretical concern. It is an acute one, and it connects directly to the points I raised before about the accessibility of Manx law online. Manx law is a distinct legal system. As a Crown Dependency, the Isle of Man has its own statutes, its own jurisprudence, land law with Norse and Celtic origins, and its own constitutional and administrative frameworks. Its common law references English cases in some areas but not others. If Manx judgments and legislation are not accessible online, they are almost certainly not in the training data of any major AI model, or, for that matter, in the databases of major legal research tools like Westlaw and Lexis.
The Australian approach shows that court-level guidance does not require the formality of a consultation or a state-led development programme. It requires judicial and professional leadership, and a willingness to put principles on paper. For a small jurisdiction with a handful of Deemsters and a collegial bar, this might be the most natural starting point.
The Isle of Man’s Time to Choose
In the Isle of Man, the judiciary has yet to adopt a position on AI in legal practice. The Law Society’s current approach has been to leave it to individual firms to adopt their own responsible AI policies. At a time when every comparable common law jurisdiction, from England to Singapore to multiple Australian states, has issued at least some form of guidance, the Isle of Man’s current position is, in effect, that there is no position.
It is worth pausing on what “free to adopt their own responsible AI policies” means in practice. A sole practitioner here is expected to develop their own AI governance framework from scratch, covering disclosure, confidentiality, data classification, vendor assessment, and professional liability. The resources to do that well exist: Singapore has just published a 50-page guide with templates and checklists. But the expectation that every firm will independently discover and adapt those resources is optimistic. Guidance from a professional body does not restrict practitioners. It equips them. Or at the very least, they would be less likely to simply ask ChatGPT to generate an AI policy for their firm. The Law Society in England and Wales has an AI strategy too.17
The Isle of Man does not need to choose just one model. By reviewing and comparing what England, Singapore, and Australia have done, it can adapt what suits the island’s context. But a choice does need to be made, and it needs to be made soon. The CJC consultation closes on 14 April. The Singapore Guide was published on 6 March. The Australian guidelines have been in effect for up to two years. This conversation is overdue.
But there is a deeper issue behind the guidance question, and the most recent UK consultation report brings it into focus. On 18 March 2026, the UK government published its report on copyright and artificial intelligence under Section 137 of the Data Use and Access Act.18 The report addresses how copyright works are used in AI training and, in light of strong concerns raised about the proposal, it confirms the government will not pursue a broad copyright exception but will work with industry on input transparency and best practices. Whatever one thinks about the copyright questions, the report signals that the UK government is actively thinking about what data goes into training AI systems and how.
This should concentrate minds in Douglas, but perhaps from the opposite direction. The UK’s concern is about protecting copyrighted works from being scraped for AI training without permission. The Isle of Man’s concern should be that its legal data is not being used for AI training at all, because it is not systematically and publicly available. AI tools used by Manx advocates will produce outputs trained overwhelmingly on English, American, or Australian law, but very little Manx law. Even if the island wants to position itself as AI-friendly, that positioning is not effective if its own body of law is largely invisible to AI systems. The government’s current decision to block bots from indexing government websites actively prevents AI systems from learning about the Isle of Man as a jurisdiction.
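To make the mechanism concrete: crawler access to a website is conventionally controlled through a robots.txt file. The fragment below is a hypothetical illustration, not the Isle of Man government’s actual file; the crawler names (OpenAI’s GPTBot, Common Crawl’s CCBot) are real, but everything else is assumed for the example. A blanket block keeps a jurisdiction’s legislation out of both search indexes and AI training corpora, while a named-crawler block is the narrower alternative:

```
# Hypothetical blanket block: no crawler of any kind may index the site.
# This is the kind of directive that keeps legislation out of AI training data.
User-agent: *
Disallow: /

# Narrower hypothetical alternative: block specific AI training crawlers
# by name while leaving the site visible to ordinary search engines.
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /
```

The design point is that the choice is not binary: a government can remain indexable to search engines and legal research tools while still declining to feed AI training crawlers, or vice versa.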
Issuing guidance on AI use in legal practice is necessary. But it is not sufficient. The work of making the Isle of Man’s legal system legible (to humans and to machines) remains the foundation. Not as a concession to technology companies, but as a precondition for the island’s AI adoption ambitions to mean anything.
The question is whether anyone is going to set the rules before something goes wrong.
AIOM is published fortnightly from the Isle of Man. Next issue: I’ll look at the UK Jurisdiction Taskforce’s consultation on AI liability and what it means for the Isle of Man.
Munir v Home Office [2026] UKUT 81. The amended judicial review claim form now mandates declarations that cited authorities exist, can be located, and support the propositions for which they are cited.
The last main one from the High Court was Ayinde v London Borough of Haringey [2025] EWHC 1383 (Admin).
Crypto Open Patent Alliance v Craig Steven Wright [2024] EWHC 1198 (Ch), at paras. 514-537.
UAB Business Enterprise v Oneta Limited [2026] EWHC 543 (Ch), at paras. 110-118.
United States of America v Bradley Heppner 25 Cr 503 (JSR), US District Court, Southern District of New York, 17 February 2026.
Civil Justice Council, ‘Use of AI for Preparing Court Documents’ Interim Report and Consultation. Consultation closes 14 April 2026.
Sir Geoffrey Vos, MR, speech at Legal Geek Conference: “What a Difference a Year Makes”.
UK Jurisdiction Taskforce (UKJT), Consultation on the Legal Statement on Liability for AI Harms under the private law of England and Wales (January 2026). The UKJT was established by the LawtechUK Panel, a Ministry of Justice-backed initiative dedicated to driving digital transformation in the UK legal sector.
Singapore’s National AI Strategy was launched in 2019. The Personal Data Protection Commission released its Model AI Governance Framework in January 2019. IMDA’s AI Verify Foundation launched in 2023. The Ministry of Law’s Legal Technology Platform Initiative began in 2022.
Singapore Ministry of Law, Guide for Using Generative AI in the Legal Sector (6 March 2026).
Singapore Courts, Guide on the Use of Generative AI Tools by Court Users (Registrar’s Circular No. 1 of 2024, 23 September 2024).
Prime Minister Lawrence Wong, Budget 2026 Speech - Harness AI as a Strategic Advantage.
Law Society of Singapore, Productivity Solutions Grant - Legal Tech Adoption.
In addition to the Supreme Court of Victoria (see the footnote below), AI guidelines have been issued by the Supreme Court of New South Wales (Practice Note SC Gen 23), the Supreme Court of South Australia (effective 1 January 2026), and the Supreme Court of Western Australia. International guidance has also been issued by UNESCO (2025), the Chartered Institute of Arbitrators (updated September 2025), and the International Bar Association (June 2025).
Supreme Court of Victoria, Guidelines for Litigants: Responsible Use of Artificial Intelligence in Litigation (May 2024).
UK Government, Report and Impact Assessment on Copyright and Artificial Intelligence (18 March 2026).
