<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[AIOM]]></title><description><![CDATA[AI governance from the Isle of Man. Written by a practising lawyer on the National AI Office Advisory Group, currently its only legal practitioner. Formerly Hong Kong Bar. Fortnightly on regulation, risk, and what small jurisdictions get right.]]></description><link>https://www.theaiom.im</link><image><url>https://substackcdn.com/image/fetch/$s_!PEUd!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F67457335-1c9d-4091-a628-b072644b2daf_609x609.png</url><title>AIOM</title><link>https://www.theaiom.im</link></image><generator>Substack</generator><lastBuildDate>Sat, 09 May 2026 03:46:23 GMT</lastBuildDate><atom:link href="https://www.theaiom.im/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[AIOM]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[theaiom@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[theaiom@substack.com]]></itunes:email><itunes:name><![CDATA[AIOM]]></itunes:name></itunes:owner><itunes:author><![CDATA[AIOM]]></itunes:author><googleplay:owner><![CDATA[theaiom@substack.com]]></googleplay:owner><googleplay:email><![CDATA[theaiom@substack.com]]></googleplay:email><googleplay:author><![CDATA[AIOM]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[What Each Jurisdiction Thinks AI Is]]></title><description><![CDATA[Notes From Taipei, Tokyo, and Douglas]]></description><link>https://www.theaiom.im/p/what-each-jurisdiction-thinks-ai</link><guid 
isPermaLink="false">https://www.theaiom.im/p/what-each-jurisdiction-thinks-ai</guid><dc:creator><![CDATA[AIOM]]></dc:creator><pubDate>Wed, 15 Apr 2026 16:57:05 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!PEUd!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F67457335-1c9d-4091-a628-b072644b2daf_609x609.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>The UKJT report I promised a fortnight ago is still pending. When the consultation report is published, I will do a proper piece on it. In the meantime, here is a detour that turns out not to be a detour.</em></p><p>I have just finished reading Dr Chien Lee-feng&#8217;s book on the future of artificial intelligence in Taiwan.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> The first thing that struck me was how little the book talks about AI governance. It reads like a national strategy briefing, pitched to government, industry, business owners, employees, parents and young people in one go, about an industrial transition that Taiwan cannot afford to miss.</p><p>Dr Chien, Google Taiwan&#8217;s first employee and long-time Managing Director, thinks about AI as an industrial transition. That&#8217;s remarkable, because that kind of context is the key to understanding why jurisdictions produce such different AI conversations. What each jurisdiction thinks AI <em>is</em> shapes what it thinks AI governance is <em>for</em>. And what it thinks AI governance is for determines what kind of institutional bet it makes, and when.</p><p>I think that makes comparing jurisdictions fascinating. The difference between jurisdictions is not that some have rules and others do not. It is that the rules each jurisdiction writes are shaped by what each thinks AI is <em>for</em> in the first place.</p><p></p><h2>Taiwan: AI as an industrial transition</h2><p>Taiwan sits at the centre of the global AI supply chain. TSMC fabricates the overwhelming majority of the world&#8217;s leading-edge AI accelerators; the island&#8217;s hardware ecosystem is inseparable from the frontier-compute economy that OpenAI, Anthropic, Google, and the large Chinese labs all depend on. Chien&#8217;s book takes that industrial centrality as the frame within which Taiwan&#8217;s AI future has to be reasoned about.</p><p>Within that frame, the problems he identifies are almost entirely capacity-side rather than rules-side. Four stand out.</p><p>The first is <strong>talent retention</strong>. Taiwan trains world-class engineers, and then loses many of them to the United States.
The AI hardware story is a Taiwanese story, but a significant share of the people who will build the AI software stack over the next decade are doing so in Palo Alto, Seattle, and Cambridge, Massachusetts, not Hsinchu or Taipei. This is not an easy problem to legislate out of existence.</p><p>The second is <strong>training data in traditional Chinese</strong>. Simplified Chinese dominates the internet&#8217;s Chinese-language corpus; traditional Chinese (the written script used in Taiwan and Hong Kong) is comparatively under-represented, and much of the high-quality material that does exist is not digitised or not openly accessible due to IP protection. A jurisdiction that wants its own legal, medical, and cultural context to be legible to AI systems has to build the corpus that makes that possible.</p><p>The third is <strong>quality data for sovereign AI</strong>. Even with good corpora, developing models that reflect Taiwan&#8217;s legal, regulatory, and cultural specificity requires institutional collaboration that cuts across sectors usually kept apart. Chien notes that the TAIDE and TAME sovereign AI models &#8212; built on Meta&#8217;s open-weights Llama &#8212; were developed by a consortium including the National Science and Technology Council, the Department of Computer Science and Information Engineering and the Department of Information Management at National Taiwan University, Pegatron, Unimicron, Chang Gung Memorial Hospital, and the CCP Group.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> State, academy, industry, and healthcare, aligned around a single technical project. That is what proper capacity-building looks like. I am genuinely curious how these models have been adopted, and how they have performed against frontier alternatives in real Taiwanese use &#8212; a question I will have to dig into further.
Any views from readers closer to that development would be most welcome.</p><p>The fourth, and the one I found most striking, is <strong>adoption-timing risk</strong>. Chien observes that Taiwan&#8217;s economy has relatively robust fallback sectors &#8212; mature manufacturing, agriculture, established services &#8212; which cushion individual actors from the pressure to adopt AI quickly. The consequence, he argues, is that Taiwan as a whole may drift past the window in which early AI adoption delivers compounding advantage, while other economies with fewer fallbacks move faster out of necessity. This is a governance concern that sounds nothing like the governance concerns dominating Western AI debate. It is not about risk, or liability, or rights. It is about timing &#8212; specifically about the failure mode of moving too <em>slowly</em>, not too fast.</p><p>Taken together, these are the worries of a technology-intensive, export-oriented, democratically governed jurisdiction whose security posture is aligned with the United States and whose manoeuvring room is shaped by external pressures it did not choose. They are also, unmistakably, the worries of a jurisdiction that has decided AI is primarily an industrial question.</p><p></p><h2>Taiwan&#8217;s rules, written to serve Taiwan&#8217;s industry</h2><p>It would be wrong, however, to stop there and conclude that Taiwan has chosen capacity over law. In December 2025, the Legislative Yuan adopted the Basic Law on Artificial Intelligence, which came into effect on 14 January 2026.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> Unlike in Germany or Hong Kong, where the Basic Law is the constitutional instrument itself, a Basic Law in Taiwan is a framework or umbrella statute that sits below the Constitution.
The Basic Law on AI sets out seven foundational principles: sustainable development and well-being, human autonomy, privacy protection and data governance, cybersecurity and safety, transparency and explainability, fairness and non-discrimination, and accountability. It also mandates high-risk warnings for AI products and systems classified as such by the relevant competent authority in consultation with the Ministry of Digital Affairs. Finally, it requires all government agencies to review, amend, or enact legislation under their jurisdiction to bring it into compliance with the Basic Law within two years of its commencement. The clock runs to <strong>14 January 2028</strong>.</p><p>What makes the Basic Law on AI worth reading closely is not the seven principles (as most contemporary AI frameworks name a similar set) but the genealogy of its drafting. In 2004, Taiwan&#8217;s Fundamental Communications Act established, without qualification, that the interpretation and application of communications statutes should not prejudice the provision of innovative technologies and services. Twenty years later, when a similar pro-technology clause was included in the Basic Law on AI draft, the Legislative Yuan &#8212; after &#8220;vigorous discussions,&#8221; as per Ms Ken-Ying Tseng&#8217;s measured description &#8212; added a precondition: the promotion of new technologies and services would prevail over conflicting laws, but only where the seven fundamental principles are adhered to.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a> The direction of the concession matters. Pro-innovation was the default; principles were the negotiated qualifier. That is a very particular kind of AI rulebook. 
It is a rulebook that treats technology-first as the starting premise, and treats the seven principles as the constraints within which the starting premise gets to run.</p><p>The Basic Law on AI also instructs MODA to align Taiwan&#8217;s framework with international risk-classification standards, establishes a regulatory sandbox, and commits the government to budgetary support, talent cultivation, and mechanisms for data openness, sharing, and reuse to enhance AI-usable data. In other words, the same capacity concerns Chien raises in his book are now codified as statutory obligations on government agencies. The law aligns with the industrial strategy, translating it into an operative task: develop the statute book over the next two years.</p><p>Taiwan, then, has not chosen capacity over rules, or rules over capacity. It has written its rules <em>to serve</em> its capacity bet. The two-year legislative review clock is the institutional expression of a judgment that capacity-building and legal clarity have to arrive together. Industry gets certainty; the profession gets an architecture to work within; government gets a programmatic deadline that survives the election cycle.</p><p>That is a more disciplined move than most of the AI legislation passed elsewhere in the last eighteen months. And it is a move whose distinctive character is only visible if you read the industrial frame (Chien) and the legal frame (the Basic Law on AI) as one continuous argument, not two separate ones.</p><p></p><h2>England: AI as a liability and professional-conduct question</h2><p>The contrast with the English conversation is starker than one would expect.
In the last fortnight alone, the English AI-and-law conversation has circled around the Civil Justice Council&#8217;s interim report on AI in court documents (consultation has just closed on 14 April 2026), <em>Munir v Home Office</em> on AI hallucinations and verification obligations, the UK Jurisdiction Taskforce&#8217;s forthcoming legal statement on AI liability, the October 2025 judicial guidance update, and Sir Geoffrey Vos&#8217; continuing public reflections on AI in the legal system.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a></p><p>Every one of those interventions is a rules-side move in a specific register: the key ideas are professional duties, negligence, and the absorption of a new technology into existing rights and remedies. Chien&#8217;s Taiwanese concerns about corpora, talent flight, and fallback-sector drag have no English analogue in the legal-regulatory conversation, because they belong to a different conversation altogether. It looks like the UK&#8217;s AI capacity questions are being handled, for better or worse, in other rooms &#8212; by the Government&#8217;s industrial-strategy apparatus, the CMA, the frontier labs themselves. The courts and the legal profession do what they do well, on common law grounds they understand.</p><p>This is not a criticism. England and Wales most probably has the court volume, the legal market depth, and the institutional infrastructure to make an incremental, case-law-driven approach work. It is coherent for a jurisdiction whose AI capacity questions are being settled elsewhere.</p><p>It is simply a different lens, and the lens produces a different kind of rulebook.
Where Taiwan&#8217;s Basic Law on AI treats AI as an industrial transition to be supported within principled limits, English common law treats AI as an analytic object to be absorbed within the existing fabric of professional duty, much like the arrival of the fax and email as methods of service of Court documents.</p><p>Both jurisdictions are working on rules. They are not the same rules.</p><p></p><h2>Tokyo: commitments extracted, and capacity built alongside</h2><p>Between the industrial lens and the liability lens, Dr Chien brought up a third move worth noticing, and Tokyo illustrates it.</p><p>When Sam Altman visited Japan and met Prime Minister Kishida in April 2023, and returned again to speak at the University of Tokyo in February 2025, the discussions around the visits emphasised OpenAI&#8217;s Japan-specific commitments: a Japanese-optimised GPT-4 model, collaboration on analysing publicly available Japanese government data, an OpenAI Japan office, and eventually a SoftBank-OpenAI joint venture.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a> The point is that a frontier lab will make locally-tailored commitments &#8212; tuned models, local offices, joint ventures with national champions, data-collaboration frameworks &#8212; when a jurisdiction has enough leverage to ask.</p><p>The important move, though, is what Japan has done <em>alongside</em> those commitments, not because of them. Japan did not sit around and wait for OpenAI. Its National Institute of Advanced Industrial Science and Technology opened the ABCI 3.0 supercomputer to industry and universities for foundational model development in January 2025. The GENIAC programme has funded thirty Japanese-language foundation-model projects since February 2024.
Sakana AI and Rakuten have built and released Japanese-optimised models, some of them open-source.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a> The commitments extracted from OpenAI are real, but they are a supplement to Japanese sovereign capacity, not a substitute for it.</p><p>Nor are capacity programmes Japan&#8217;s only legal-architecture move. This month (April 2026), the Japanese Cabinet approved an amendment bill to the Act on the Protection of Personal Information that removes the opt-in consent requirement for third-party provision of personal data, and for the acquisition of publicly available sensitive personal information, where the data is to be used for &#8220;statistical purposes&#8221; &#8212; a term the bill reads broadly enough to cover AI development and training. The same bill introduces an administrative fine regime for violations of data-processing obligations. One instrument, two directions at once: loosened consent rules to enable AI training data to move at scale, tightened enforcement to make the privacy floor stick.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-8" href="#footnote-8" target="_self">8</a> Taiwan&#8217;s Basic Law on AI pairs industrial-strategy support with seven principled constraints; Japan&#8217;s APPI amendment pairs training-data liberalisation with a new penalty regime. The move is not unique to either jurisdiction, but the pattern is becoming identifiable.
Rules that reflect what a jurisdiction thinks AI is for are rules that move on more than one axis at once.</p><p>The lesson for jurisdictions further down the scale is not &#8220;demand more from OpenAI.&#8221; It is that extracting commitments from frontier labs is worth doing only if you are also building something that makes the commitments bite &#8212; and that legal architectures built to serve an AI strategy have to calibrate enablement and enforcement as a single, integrated move. A jurisdiction that is not building anything gets press releases. A jurisdiction that only enables, or only enforces, gets an incomplete rulebook.</p><p></p><h2>The Isle of Man: manoeuvring room, and what to do with it</h2><p>The Isle of Man is a Crown Dependency. It is not part of the United Kingdom, and it is not part of the European Union. Tynwald &#8212; the Island&#8217;s parliament, and one of the oldest continuously sitting legislatures in the world &#8212; passes its own primary legislation. It does so, however, subject to review by the UK Ministry of Justice before Royal Assent, and Westminster retains, under the framework articulated in the 1973 Kilbrandon Report, a reserved power to legislate directly on Isle of Man matters where the good governance of the Island or the United Kingdom&#8217;s legitimate interests are judged to require it.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-9" href="#footnote-9" target="_self">9</a></p><p>In practice, this is less a ceiling than a permanent negotiation. Much of the delicate conversation between the Island and the UK is about where the line between internal Manx matters and external matters falls, and that line moves, slowly, through decades of practice rather than constitutional rewriting. I have not seen this arrangement described as clearly in public-facing material as it is explained to students on the Manx Bar course, which is itself a small fact worth sitting with.
But it is the constitutional reality within which any Isle of Man AI strategy has to be made.</p><p>Taiwan&#8217;s manoeuvring room is constrained, structurally, by a contested international status and by a security posture aligned with the United States. The Isle of Man&#8217;s manoeuvring room is constrained, structurally, by its relationship with the Crown and with Westminster. Of course, these are not analogous constraints. Taiwan has dense domestic capacity &#8212; hardware, universities, cross-sectoral consortia, an active industrial base, and now a statute that programmes that capacity into a two-year legislative cycle &#8212; and makes institutional bets inside a geopolitical constraint it did not choose. The Isle of Man has legislative agility &#8212; a parliament that can move a bill from introduction to Royal Assent in months &#8212; inside a relational constraint that is negotiated rather than imposed. Neither set of tools is complete. What each jurisdiction can do with the tools it has is a different question.</p><p>The Isle of Man does not have a frontier-model industry, and is unlikely to acquire one. It has an emerging deployment layer &#8212; a small but growing group of Manx firms developing AI workflow products, some of them explicitly marketed to offshore jurisdictions &#8212; which looks broadly similar to the deployment layers forming in dozens of other jurisdictions of comparable scale. What is more distinctive is what the Isle of Man has started doing legislatively with <em>data</em>.</p><p></p><h2>The Data Asset Foundation, and an extension of the logic</h2><p>On 24 March 2026, Tynwald passed the Foundations (Amendment) Bill, with Royal Assent pending, which would create a new legal structure: the Data Asset Foundation.
A DAF is a licensed entity &#8212; a new type of legal personality like a company but closer in nature to a foundation or a trust &#8212; designed as a legal wrapper to hold data, set enforceable rules about how that data can be used and by whom, and allow the data to be treated as a formal capital asset.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-10" href="#footnote-10" target="_self">10</a> Secondary regulations are in development; a six-month pilot with early adopters is now running; full enactment is scheduled for later in 2026.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-11" href="#footnote-11" target="_self">11</a></p><p>Two features of the DAF regime are worth isolating.</p><p>The first is that the DAF is positioned as a structure <em>outside</em> the reach of the US CLOUD Act and outside the UK and EU regulatory frameworks.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-12" href="#footnote-12" target="_self">12</a> Its pitch is not that the Island has built a better version of what larger jurisdictions offer, but that it has built something larger jurisdictions cannot offer, because they are inside regulatory regimes with which the DAF is deliberately non-aligned. The Crown Dependency position &#8212; not part of the UK, not part of the EU, not subject to US extraterritorial legislation &#8212; is being treated not as a marginal status to be overcome, but as the defining feature of the product.</p><p>The second is that the DAF regime is being delivered by an emerging ecosystem of data hosting and corporate service providers now offering DAF administration and governance services. The infrastructure is not hypothetical. It is being built.</p><p>It is too early to evaluate this regime.
It is closer to a bet of a particular kind &#8212; not Taiwan&#8217;s capacity bet, not England&#8217;s liability bet, but a <strong>jurisdictional-arbitrage bet</strong>. The proposition is that strategic regulatory distance from larger powers is itself a valuable asset, and that a small Crown Dependency with legislative agility and an established financial-services regulatory culture is unusually well placed to sell that distance in a structured, legally-cognisable form.</p><p>Here is my speculation. If the DAF bet works &#8212; if, by 2027 or 2028, there is a visible body of international data custody business operating through DAFs &#8212; then it is entirely plausible that a post-September-2026 Tynwald will want to extend the same logic to AI. Not just data as an asset, but AI models, AI training runs, AI workloads, and AI governance obligations structured in, and administered from, a jurisdiction deliberately positioned outside the CLOUD Act, outside the EU AI Act, and maybe even <em>partially</em> outside whatever the UK&#8217;s eventual AI legal landscape looks like. An &#8220;AI Asset Foundation&#8221; is not something anyone has publicly proposed. But the institutional logic that produced the DAF points in that direction, and the Island&#8217;s regulatory culture &#8212; built over decades for financial services &#8212; is, for better and for worse, precisely the kind of culture that can productise a legal concept once the political decision is made.</p><p>This is a bet Taiwan cannot make. Taiwan&#8217;s security and trade posture requires alignment with the United States on AI supply-chain matters, with all that implies about export controls, compute access, and regulatory convergence. Its manoeuvring room runs in a different direction. 
What Taiwan can do &#8212; build state-academy-industry-hospital consortia to develop sovereign models, retain talent, digitise and steward its linguistic and legal corpus, write a Basic Law that codifies the industrial-strategy framing into statute &#8212; the Isle of Man cannot. What the Isle of Man can do &#8212; hold data and, possibly, AI assets in a structure deliberately outside the major regulatory regimes &#8212; Taiwan cannot. The contrast and complementarity are, to me, noteworthy. The lesson-flow, honestly, probably runs mostly in one direction: there is more the Isle of Man can learn from Taiwan about the patient, cross-sectoral work of building capacity than there is the other way around. But the jurisdictional-arbitrage move is a move, and it is one that Taiwanese readers may find worth understanding, because a version of it is being built about two thousand miles from Taipei, in a Crown Dependency of 85,000 people.</p><p></p><h2>Three clocks</h2><p>Taiwan thinks AI is an industrial transition, and has written a Basic Law that treats governance as the architecture that transition is run through. England and Wales thinks AI is a legal liability and professional-conduct problem, and its courts and profession are doing the incremental, case-law-driven work of absorbing a new technology into its common law jurisprudence. Japan, pragmatically, thinks it is both &#8212; something to be built and something to be negotiated with &#8212; and has the state capacity to do both at once. The Isle of Man seems to be in the process of deciding if AI is, for it, a jurisdictional-arbitrage question: a question about what kind of regulatory distance the Island can offer that larger jurisdictions structurally cannot.</p><p>None of these lenses is wrong. Each is answering a different question, shaped by the manoeuvring room the jurisdiction actually has. The interesting thing is that they are not substitutes.
The industrial lens and the liability lens and the arbitrage lens each illuminate parts of AI governance the others do not, and a jurisdiction paying attention to all three &#8212; not just the one its own structural position pushes it toward &#8212; is better placed to make whatever institutional bet it ends up making.</p><p>Three clocks are running through 2026 and 2027, and they matter because they set the deadlines by which the bets actually have to be placed.</p><p>Taiwan&#8217;s clock is statutory. The Basic Law on AI requires every government agency to complete the review, amendment, or enactment of legislation in its jurisdiction by <strong>14 January 2028</strong>. That deadline survives elections, cabinet reshuffles, and shifts in political mood. It is running whether anyone chooses to pay attention to it or not.</p><p>The European Union&#8217;s clock is regulatory, and currently uncertain. The statutory deadline for compliance with the Annex III high-risk provisions of the EU AI Act remains <strong>2 August 2026</strong>, but the Commission&#8217;s Digital Omnibus proposal &#8212; on which the Parliament&#8217;s IMCO and LIBE committees voted jointly on 18 March 2026, and which the Parliament adopted as its negotiating position in plenary on 26 March 2026 &#8212; would push that deadline to <strong>2 December 2027</strong> for stand-alone high-risk AI systems, and <strong>2 August 2028</strong> for high-risk AI systems embedded in products already regulated under Annex I. Trilogue negotiations with the Council are expected to begin in April. Whichever deadline ends up binding, that is the compliance horizon EU-facing businesses must plan for.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-13" href="#footnote-13" target="_self">13</a></p><p>The Isle of Man&#8217;s clock is democratic. The general election is scheduled for <strong>September 2026</strong>.
Whatever AI strategy the Island commits to for the next five years will be chosen by the Tynwald that election returns. The manifestos being drafted over the coming months are where the direction actually gets set. Unlike the Taiwanese and European clocks, the Manx clock is not set by statute or treaty. It is set by the electorate, and by the candidates who choose to put AI governance on their platforms.</p><p>So we have three jurisdictions with different structural questions converging on the same approximate timeline. Chien&#8217;s book is, among other things, an argument that Taiwan needs to make its bet soon, because the industrial window is closing. The Isle of Man, on a different clock but the same horizon, is making up its mind about what it is willing to commit to for the next five years.</p><p>I will be in Dublin for the IAPP AI Governance Global Europe conference in June, where the state of the Digital Omnibus and the high-risk rollout will likely be one of the conversations in the room. I will report back on what the EU is actually going to ask of anyone doing AI-adjacent business from a jurisdiction of our size.</p><p>In the meantime, the decision the Isle of Man is in the process of making is not a bet to imitate the EU, or the UK, or Taiwan. It looks like a bet on what the Island&#8217;s Crown Dependency status and established regulatory culture can build that no larger jurisdiction structurally can. What the September 2026 election will decide is whether the Island is willing to make that bet deliberately, or by default.</p><div><hr></div><p><em>AIOM is published fortnightly from the Isle of Man.
Next issue: the UKJT legal statement, when it lands.</em></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>&#31777;&#31435;&#23792;, <em>&#12298;&#21488;&#28771;AI&#22823;&#26410;&#20358; &#8212; &#35299;&#26512;&#26368;&#26032;&#30340;AI&#36264;&#21218;&#12289;&#21488;&#28771;&#24773;&#21218;&#12289;&#20225;&#26989;&#24067;&#23616;&#33287;&#20491;&#20154;&#30332;&#23637;&#12299;</em> (my unofficial translation of the title is &#8216;Taiwan&#8217;s Grand Future in AI: Analysing the Latest AI Trends, Taiwan&#8217;s Position, Corporate Strategy, and Personal Development&#8217;), 2024. Dr Chien Lee-feng was the first employee and, for fourteen years, the Managing Director of Google Taiwan; he is a former research fellow and deputy director of the Institute of Information Science at Academia Sinica and professor in the Department of Information Management at National Taiwan University. He retired from Google in 2020 and now sits on the boards of Appier and iKala.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>Chien also notes that the TAIDE/TAME consortium was explicitly designed to produce a sovereign AI model tailored to Taiwan&#8217;s local language, knowledge, legal framework, and cultural needs.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>Ken-Ying Tseng, &#8220;Taiwan: The Basic Law on Artificial Intelligence leads the way to AI development,&#8221; DataGuidance, 3 February 2026. 
</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p><em>Ibid</em>. The contrast with the unqualified 2004 Fundamental Communications Act provision is Tseng&#8217;s, and it is illuminating.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>See my last piece for more on the CJC, <em>Munir</em>, and the comparative picture across England and Wales, Singapore, and Australia.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>See <a href="https://openai.com/index/introducing-openai-japan/">Introducing OpenAI Japan</a> and <a href="https://techcrunch.com/2024/04/15/openai-announces-tokyo-office-and-gpt-4-model-optimized-for-the-japanese-language/">OpenAI opens Tokyo hub, adds GPT-4 model optimized for Japanese</a>. 
</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p>See <a href="https://asiatimes.com/2025/09/inside-japans-struggle-to-build-sovereign-ai/">Inside Japan&#8217;s struggle to build sovereign AI</a> for an overview of ABCI 3.0, GENIAC, and the Japanese sovereign AI landscape; and on Sakana AI&#8217;s trajectory, <a href="https://techcrunch.com/2025/11/17/sakana-ai-raises-135m-series-b-at-a-2-65b-valuation-to-continue-building-ai-models-for-japan/">Sakana AI raises $135M Series B</a>.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-8" href="#footnote-anchor-8" class="footnote-number" contenteditable="false" target="_self">8</a><div class="footnote-content"><p>See the Personal Information Protection Commission&#8217;s January 2026 <a href="https://www.nishimura.com/en/knowledge/newsletters/data_protection_260120">Policy Direction for Amendment of the APPI</a>; <a href="https://dig.watch/updates/japan-appi-personal-data-ai-fines">Japan approves APPI amendment bill on personal data, AI training, and fines &#8212; Digital Watch Observatory</a>; and <a href="https://www.theregister.com/2026/04/08/japan_privacy_law_changes_ai/">Japan relaxes privacy laws to make AI development easy &#8212; The Register</a>. For a helpful overview, <a href="https://www.luizasnewsletter.com/p/the-ai-wave-reaches-privacy-law">Luiza Jarovsky, &#8220;The AI Wave Reaches Privacy Law&#8221;</a>.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-9" href="#footnote-anchor-9" class="footnote-number" contenteditable="false" target="_self">9</a><div class="footnote-content"><p>Royal Commission on the Constitution 1969&#8211;1973 (the Kilbrandon Report), Cmnd 5460. 
On the Ministry of Justice&#8217;s role in the Royal Assent process for Manx primary legislation, see the Isle of Man Government&#8217;s published guidance on the legislative process.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-10" href="#footnote-anchor-10" class="footnote-number" contenteditable="false" target="_self">10</a><div class="footnote-content"><p>Foundations (Amendment) Bill 2025. </p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-11" href="#footnote-anchor-11" class="footnote-number" contenteditable="false" target="_self">11</a><div class="footnote-content"><p><a href="https://www.digitalisleofman.com/data-asset-foundations/">Data Asset Foundations &#8212; Digital Isle of Man</a>; <a href="https://www.iomtoday.co.im/news/business/isle-of-man-passes-world-first-legislation-to-establish-data-as-an-asset-896085">Isle of Man passes world-first legislation to establish data as an asset &#8212; iomtoday.co.im</a>; <a href="https://datacentrenews.uk/story/isle-of-man-pioneers-a-new-data-as-asset-framework">Isle of Man pioneers a new data-as-asset framework &#8212; Data Centre News</a>.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-12" href="#footnote-anchor-12" class="footnote-number" contenteditable="false" target="_self">12</a><div class="footnote-content"><p>On the DAF&#8217;s positioning relative to the US CLOUD Act and the Crown Dependency&#8217;s extraterritorial status, see the <a href="https://www.digitalisleofman.com/media/b5doal3w/daf-whitepaper-01-12-2025.pdf">White Paper</a>, <a href="https://channeleye.media/isle-of-man-passes-world-first-legislation-to-establish-data-as-an-asset/">Channel Eye</a> and <a href="https://www.raconteur.net/technology/isle-of-man-races-regulated-data-marketplace">Raconteur, &#8220;Isle of Man races to legislate for world-first data marketplace&#8221;</a>.</p></div></div><div class="footnote" 
data-component-name="FootnoteToDOM"><a id="footnote-13" href="#footnote-anchor-13" class="footnote-number" contenteditable="false" target="_self">13</a><div class="footnote-content"><p>On the current state of the Digital Omnibus, see the European Parliament&#8217;s <a href="https://www.europarl.europa.eu/legislative-train/package-digital-package/file-digital-omnibus-on-ai">Legislative Train Schedule entry on the Digital Omnibus on AI</a>; <a href="https://www.euronews.com/my-europe/2025/11/19/european-commission-delays-full-implementation-of-ai-act-to-2027">Euronews, &#8220;European Commission proposes delaying full implementation of AI Act to 2027&#8221;</a>; and <a href="https://www.onetrust.com/blog/eu-digital-omnibus-proposes-delay-of-ai-compliance-deadlines/">OneTrust, &#8220;EU Digital Omnibus Proposes Delay of AI Compliance Deadlines&#8221;</a>. On the unamended statutory position, see the <a href="https://ai-act-service-desk.ec.europa.eu/en/ai-act/timeline/timeline-implementation-eu-ai-act">EU AI Act Service Desk Implementation Timeline</a>.</p></div></div>]]></content:encoded></item><item><title><![CDATA[Setting Precedents: Why the Isle of Man Needs to Pick an Approach to AI in Court]]></title><description><![CDATA[Imagine a Manx Advocate submitting a skeleton argument to the High Court, citing three authorities from the Manx Law Reports.]]></description><link>https://www.theaiom.im/p/setting-precedents-why-the-isle-of</link><guid isPermaLink="false">https://www.theaiom.im/p/setting-precedents-why-the-isle-of</guid><dc:creator><![CDATA[AIOM]]></dc:creator><pubDate>Sun, 22 Mar 2026 23:00:36 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!PEUd!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F67457335-1c9d-4091-a628-b072644b2daf_609x609.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Imagine a Manx Advocate submitting a skeleton argument to the 
High Court, citing three authorities from the Manx Law Reports. Two cases are real. One is a hallucinated citation to a case that doesn&#8217;t exist.</p><p>Except this is not entirely hypothetical. In <em>Munir v Home Office</em> [2026] UKUT 81,<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> the England and Wales Upper Tribunal delivered another stern warning on AI hallucinations in legal submissions.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> The judgment is blunt: legal professionals who cite fake cases fail their professional obligations, supervisors who don&#8217;t catch the errors are likely more culpable than the juniors who made them, and uploading confidential documents to a publicly accessible AI tool like ChatGPT places client information in the public domain, thus breaching confidentiality and legal privilege in one step.</p><p><em>Munir</em> is not an isolated event. 
Courts are encountering AI-related issues with increasing frequency, from reliance on AI-generated evidence,<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> to the use of smart glasses for witness coaching,<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a> to the question of whether a defendant&#8217;s conversations with a GenAI platform were protected by legal privilege.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a> AI is already in the courtroom. The question is whether the rules will catch up before something goes seriously wrong.</p><p>Different jurisdictions are answering that question differently, perhaps reflecting fundamentally different views of who should govern AI in the legal profession. As we finish the first quarter of 2026, it is worth comparing the approaches, because the Isle of Man will need to choose one.</p><p><strong>The English Approach: Top-Down, Judiciary-Driven</strong></p><p>The approach of England and Wales to AI in legal practice is being built through the courts.</p><p>The Civil Justice Council, chaired by Sir Colin Birss, has published an interim report and consultation on the use of AI for preparing court documents.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a> It covers four categories: statements of case, skeleton arguments and other advocacy documents, witness statements, and expert reports. The key proposal is that where AI has been used to generate evidence on which the court is asked to rely, legal representatives should be required to make a declaration. 
More routine uses, such as transcription, spell-checking, and administrative tasks, would not trigger the requirement.</p><p>Transparency is a good thing. But the edge cases will be interesting. If AI has been used to organise correspondence and handle discovery that forms the factual matrix of a witness statement, or if your client has incorporated AI workflows into their daily working life and the instructions to you are not entirely human-produced, would you be safe to declare that no AI was used on the basis that the witness statement itself was drafted by you? These lines are not as clear as the consultation might suggest.</p><p>The broader judicial direction, though, is clear. On 31 October 2025, the judiciary issued updated guidance for judicial office holders on AI, replacing its April 2025 version.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a> Sir Geoffrey Vos, the Master of the Rolls, has spoken publicly about the need to engage with AI in the legal system, specifically calling for debates about human rights, the future role of human judges, and the profession&#8217;s readiness.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-8" href="#footnote-8" target="_self">8</a> The UK Jurisdiction Taskforce&#8217;s consultation on liability for AI harms under English private law, for which Sir Geoffrey wrote the introduction, closed on 13 February 2026. The report is anticipated.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-9" href="#footnote-9" target="_self">9</a></p><p>After <em>Munir</em>, the common law is doing what it does: using existing professional obligations to absorb a new technology. No new legislation has been required, although new AI-specific laws are being considered. The courts are telling lawyers what they must disclose, and caselaw is setting the consequences for failures. 
This approach is coherent for a jurisdiction with high court volume, a large and well-resourced bar, and an active judiciary. The question is whether it is right for everyone.</p><p><strong>The Singaporean Approach: Practitioner-Facing, Toolkit-Based</strong></p><p>Singapore has been building its AI governance infrastructure since 2019, with frameworks spanning government, industry, and now the legal profession specifically.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-10" href="#footnote-10" target="_self">10</a></p><p>On 6 March 2026, Singapore&#8217;s Ministry of Law published its <em>Guide for Using Generative AI in the Legal Sector</em>.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-11" href="#footnote-11" target="_self">11</a> It is worth reading in full (yes, all 50 pages of it) because it represents something quite different from the English approach.</p><p>The Guide is non-binding. It addresses law firms and in-house teams directly, rather than being court-facing. And it is relentlessly practical. It includes sample internal GenAI governance policies, employee handbook clauses, letter of engagement templates, and a vendor assessment checklist. This is not a statement of principle. It is a toolkit, with implementation steps mapped to three progressive adoption stages, designed so that a two-partner firm and an international firm&#8217;s Singapore office can both use it.</p><p>One of the most useful features is its risk-based oversight framework, which distinguishes between human-in-the-loop (active review before output is used) and human-on-the-loop (monitoring with intervention only when anomalies arise), mapped to whether the task is internal or external and whether the output has legal, reputational, or financial consequences. 
Have a look at the <a href="https://www.mlaw.gov.sg/files/Guide_for_using_Generative_AI_in_the_Legal_Sector__Published_on_6_Mar_2026_.pdf">diagram at page 18</a>. This is what practical AI governance looks like. It is not glamorous. It is checklists and data classification tables and vendor assessment questions. It gives you specific key concepts to consider (that you can Google) when you&#8217;re building your firm&#8217;s AI policy. But it is the kind of framework that prevents a firm from uploading client files to ChatGPT and discovering the consequences after the fact.</p><p>The Guide is particularly strong on data governance: tiered data classification (public, internal, confidential, highly confidential, prohibited), specific guidance on free-to-use versus enterprise AI tools, and contractual safeguards for vendor selection. For a jurisdiction like the Isle of Man, where much of the legal work involves licensed financial services and regulated activities, this is directly relevant.</p><p>Singapore&#8217;s courts have also acted. Since 1 October 2024, the Guide on the Use of Generative AI Tools by Court Users applies across the Supreme Court, State Courts, and Family Justice Courts.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-12" href="#footnote-12" target="_self">12</a> It does not mandate a declaration of AI use, but requires users to be prepared to identify AI use where directed by the court. In other words, a lighter touch than the CJC&#8217;s proposed mandatory disclosure.</p><p>The Singapore approach says: we trust the profession to govern itself, and here are the tools to do it well. This is arguably the result of years of digitalisation and capacity building. Beyond the legal sector, Singapore is investing in AI literacy across the whole population. 
In his Budget 2026 speech, Prime Minister Lawrence Wong announced that Singaporean citizens enrolling in AI training courses will receive six months of free access to premium AI tools.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-13" href="#footnote-13" target="_self">13</a> The Law Society of Singapore already offers a Productivity Solutions Grant for pre-approved legal technology solutions.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-14" href="#footnote-14" target="_self">14</a> The direction is clear: institutional support for responsible adoption, across the profession and beyond. The government has removed the biggest barriers and lowered friction to aid adoption, and promised to take its people along the journey as the nation navigates the changes brought by AI.</p><p><strong>The Australian Approach: Principles-Based, Court-Led</strong></p><p>Australia offers a third model. Several state supreme courts have issued AI guidelines for litigants. These are court-led, but principles-based and softer in tone than the English or Singaporean materials.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-15" href="#footnote-15" target="_self">15</a></p><p>The Supreme Court of Victoria&#8217;s <em>Guidelines for Litigants: Responsible Use of Artificial Intelligence in Litigation</em>,<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-16" href="#footnote-16" target="_self">16</a> issued in May 2024, is the clearest example. It sets out 12 principles rather than mandatory rules. Disclosure is encouraged: parties and practitioners &#8220;should&#8221; disclose AI use to each other and the court but it is not mandated. 
The Court is candid about GenAI limitations, stating explicitly that output from general-purpose tools &#8220;is not presumed to be correct&#8221; and is &#8220;more likely to produce results that are inaccurate for the purpose of current litigation.&#8221;</p><p>One principle is particularly relevant for a small jurisdiction. Principle 8(d) warns that AI output may be &#8220;inapplicable to the jurisdiction, as the data used to train the underlying model might be drawn from other jurisdictions with different substantive laws and procedural requirements.&#8221;</p><p>For the Isle of Man, this is not a theoretical concern. It is an acute one, and it connects directly to the points I raised before about the accessibility of Manx law online. Manx law is a distinct legal system. As a Crown Dependency, the Isle of Man has its own statutes, its own jurisprudence, land law with Norse and Celtic origins, and its distinct constitutional and administrative frameworks. Its common law references English cases in some areas but not others. If Manx judgments and legislation are not accessible online, they are almost certainly not in the training data of any major AI model, or, for that matter, in the databases of major legal research tools like Westlaw and Lexis.</p><p>The Australian approach shows that court-level guidance does not require the formality of a consultation or a state-led development programme. It requires judicial and professional leadership, and a willingness to put principles on paper. For a small jurisdiction with a handful of Deemsters and a collegial bar, this might be the most natural starting point.</p><p><strong>The Isle of Man&#8217;s Time to Choose</strong></p><p>In the Isle of Man, the judiciary has yet to adopt a position on AI in legal practice. The Law Society&#8217;s current approach has been to leave it to individual firms to adopt their own responsible AI policies. 
At a time when every comparable common law jurisdiction, from England to Singapore to multiple Australian states, has issued at least some form of guidance, the Isle of Man&#8217;s current position is, in effect, that there is no position.</p><p>It is worth pausing on what being &#8220;free to adopt their own responsible AI policy&#8221; means in practice. A sole practitioner here is expected to develop their own AI governance framework from scratch, covering disclosure, confidentiality, data classification, vendor assessment, and professional liability. The resources to do that well exist: Singapore has just published a 50-page guide with templates and checklists. But the expectation that every firm will independently discover and adapt those resources is optimistic. Guidance from a professional body does not restrict practitioners. It equips them. Or at the very least, practitioners with guidance to hand would be less likely to simply ask ChatGPT to generate an AI policy for their firm. The Law Society in England and Wales has an AI strategy too.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-17" href="#footnote-17" target="_self">17</a> </p><p>The Isle of Man does not need to choose just one model. By reviewing and comparing what England, Singapore, and Australia have done, it can adapt what suits the island&#8217;s context. But a choice does need to be made, and it needs to be made soon. The CJC consultation closes on 14 April. The Singapore Guide was published on 6 March. The Australian guidelines have been in effect for up to two years. This conversation is overdue.</p><p>But there is a deeper issue behind the guidance question, and the most recent UK consultation report brings it into focus. 
On 18 March 2026, the UK government published its report on copyright and artificial intelligence under Section 137 of the Data Use and Access Act.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-18" href="#footnote-18" target="_self">18</a> The report addresses how copyright works are used in AI training and, in light of the strong concerns raised about the proposal, confirms the government will not pursue a broad copyright exception but will work with industry on input transparency and best practices. Whatever one thinks about the copyright questions, the report signals that the UK government is actively thinking about what data goes into training AI systems and how.</p><p>This should concentrate minds in Douglas, but perhaps from the opposite direction. The UK&#8217;s concern is about protecting copyrighted works from being scraped for AI training without permission. The Isle of Man&#8217;s concern should be that its legal data is not being used for AI training at all, because it is not systematically and publicly available. AI tools used by Manx advocates will produce outputs trained overwhelmingly on English, American, or Australian law, but on very little Manx law. Even if the island wants to position itself as AI-friendly, that positioning is not effective if its own body of law is largely invisible to AI systems. The government&#8217;s current decision to block bots from indexing government websites actively prevents AI systems from learning about the Isle of Man as a jurisdiction.</p><p>Issuing guidance on AI use in legal practice is necessary. But it is not sufficient. The work of making the Isle of Man&#8217;s legal system legible (to humans and to machines) remains the foundation. 
Not as a concession to technology companies, but as a precondition for the island&#8217;s AI adoption ambitions to mean anything.</p><p>The question is whether anyone is going to set the rules before something goes wrong.</p><p><em>AIOM is published fortnightly from the Isle of Man. Next issue: I&#8217;ll look at the UK Jurisdiction Taskforce&#8217;s consultation on AI liability and what it means for the Isle of Man.</em></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p><em><a href="https://www.bailii.org/uk/cases/UKUT/IAC/2026/81.html">Munir v Home Office</a></em><a href="https://www.bailii.org/uk/cases/UKUT/IAC/2026/81.html"> [2026] UKUT 81</a>. The amended judicial review claim form now mandates declarations that cited authorities exist, can be located, and support the propositions for which they are cited.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>The last main one from the High Court was <em><a href="https://www.bailii.org/ew/cases/EWHC/Admin/2025/1383.html">Ayinde v London Borough of Haringey</a></em><a href="https://www.bailii.org/ew/cases/EWHC/Admin/2025/1383.html"> [2025] EWHC 1383 (Admin)</a>.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p><em><a href="https://www.bailii.org/ew/cases/EWHC/Ch/2024/1198.html">Crypto Open Patent Alliance v Craig Steven Wright</a></em><a href="https://www.bailii.org/ew/cases/EWHC/Ch/2024/1198.html"> [2024] EWHC 1198 (Ch)</a>, at paras. 
514-537.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p><em><a href="https://www.bailii.org/ew/cases/EWHC/Ch/2026/543.html">UAB Business Enterprise v Oneta Limited</a></em><a href="https://www.bailii.org/ew/cases/EWHC/Ch/2026/543.html"> [2026] EWHC 543 (Ch)</a>, at paras. 110-118.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p><em>United States of America v Bradley Heppner</em> 25 Cr 503 (JSR), US District Court, Southern District of New York, 17 February 2026.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>Civil Justice Council, &#8216;<a href="https://www.judiciary.uk/wp-content/uploads/2026/02/Interim-Report-and-Consultation-Use-of-AI-for-Preparing-Court-Documents-2.pdf">Use of AI for Preparing Court Documents</a>&#8217; Interim Report and Consultation. 
Consultation closes 14 April 2026.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p><a href="https://www.judiciary.uk/wp-content/uploads/2025/10/Artificial-Intelligence-AI-Guidance-for-Judicial-Office-Holders-2.pdf">Judiciary of England and Wales, Guidance for Judicial Office Holders on the Use of Artificial Intelligence</a> (31 October 2025).</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-8" href="#footnote-anchor-8" class="footnote-number" contenteditable="false" target="_self">8</a><div class="footnote-content"><p>Sir Geoffrey Vos, MR, speech at Legal Geek Conference: &#8220;<a href="https://www.judiciary.uk/speech-about-ai-by-the-master-of-the-rolls-what-a-difference-a-year-makes/">What a Difference a Year Makes&#8221;.</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-9" href="#footnote-anchor-9" class="footnote-number" contenteditable="false" target="_self">9</a><div class="footnote-content"><p>UK Jurisdiction Taskforce (UKJT), <a href="https://lawtechuk.io/ukjt/public-consultation-liability-for-ai-harms-under-the-private-law-of-england-and-wales/">Consultation on the Legal Statement on Liability for AI Harms under the private law of England and Wales</a> (January 2026). The UKJT was established by the LawtechUK Panel, a Ministry of Justice-backed initiative dedicated to driving digital transformation in the UK legal sector.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-10" href="#footnote-anchor-10" class="footnote-number" contenteditable="false" target="_self">10</a><div class="footnote-content"><p>Singapore&#8217;s National AI Strategy was launched in 2019. The Personal Data Protection Commission released its Model AI Governance Framework in January 2019. 
IMDA&#8217;s AI Verify Foundation launched in 2023. The Ministry of Law&#8217;s Legal Technology Platform Initiative began in 2022.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-11" href="#footnote-anchor-11" class="footnote-number" contenteditable="false" target="_self">11</a><div class="footnote-content"><p>Singapore Ministry of Law, <a href="https://www.mlaw.gov.sg/files/Guide_for_using_Generative_AI_in_the_Legal_Sector__Published_on_6_Mar_2026_.pdf">Guide for Using Generative AI in the Legal Sector</a> (6 March 2026).</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-12" href="#footnote-anchor-12" class="footnote-number" contenteditable="false" target="_self">12</a><div class="footnote-content"><p>Singapore Courts, <a href="https://www.judiciary.gov.sg/docs/default-source/news-and-resources-docs/guide-on-the-use-of-generative-ai-tools-by-court-users.pdf?sfvrsn=3900c814_1">Guide on the Use of Generative AI Tools by Court Users</a> (Registrar&#8217;s Circular No. 
1 of 2024, 23 September 2024).</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-13" href="#footnote-anchor-13" class="footnote-number" contenteditable="false" target="_self">13</a><div class="footnote-content"><p>Prime Minister Lawrence Wong, <a href="https://www.singaporebudget.gov.sg/budget-speech/budget-statement/c-harness-ai-as-a-strategic-advantage">Budget 2026 Speech - Harness AI as a Strategic Advantage</a>.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-14" href="#footnote-anchor-14" class="footnote-number" contenteditable="false" target="_self">14</a><div class="footnote-content"><p>Law Society of Singapore, <a href="https://www.lawsociety.org.sg/support-schemes/legal-tech-adoption/">Productivity Solutions Grant - Legal Tech Adoption</a>.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-15" href="#footnote-anchor-15" class="footnote-number" contenteditable="false" target="_self">15</a><div class="footnote-content"><p>In addition to the Supreme Court of Victoria (see the footnote below), AI guidelines have been issued by the Supreme Court of New South Wales (Practice Note SC Gen 23), the <a href="https://www.courts.sa.gov.au/2026/01/19/supreme-court-issues-guidelines-for-the-use-of-generative-ai/">Supreme Court of South Australia</a> (effective 1 January 2026), and the <a href="http://supremecourt.wa.gov.au/_files/Guidelines for the Use of Generative Artificial Intelligence.pdf">Supreme Court of Western Australia</a>. 
International guidance has also been issued by <a href="https://unesdoc.unesco.org/ark:/48223/pf0000396582">UNESCO</a> (2025), the <a href="https://www.ciarb.org/media/bpndtcgu/guideline-on-the-use-of-ai-in-arbitration_updated-sept-2025.pdf">Chartered Institute of Arbitrators</a> (updated September 2025), and the <a href="https://www.ibanet.org/Guidelines-on-the-use-of-generative-artificial-intelligence-in-mediation">International Bar Association</a> (June 2025).</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-16" href="#footnote-anchor-16" class="footnote-number" contenteditable="false" target="_self">16</a><div class="footnote-content"><p>Supreme Court of Victoria, <a href="https://www.supremecourt.vic.gov.au/forms-fees-and-services/forms-templates-and-guidelines/guideline-responsible-use-of-ai-in-litigation">Guidelines for Litigants: Responsible Use of Artificial Intelligence in Litigation</a> (May 2024).</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-17" href="#footnote-anchor-17" class="footnote-number" contenteditable="false" target="_self">17</a><div class="footnote-content"><p>See the Law Society <a href="https://www.lawsociety.org.uk/topics/ai-and-lawtech/">website</a>.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-18" href="#footnote-anchor-18" class="footnote-number" contenteditable="false" target="_self">18</a><div class="footnote-content"><p>UK Government, <a href="https://www.gov.uk/government/publications/report-and-impact-assessment-on-copyright-and-artificial-intelligence">Report and Impact Assessment on Copyright and Artificial Intelligence</a> (18 March 2026).</p></div></div>]]></content:encoded></item><item><title><![CDATA[First Mover Disadvantage: Why the Isle of Man Might Be the Most Important AI Governance Experiment You’re Not Watching]]></title><description><![CDATA[The Isle of Man signed an MOU with AI Singapore in 
March 2024, committed &#163;1 million to set up a dedicated AI office, then launched the National AI Office in January 2026.]]></description><link>https://www.theaiom.im/p/first-mover-disadvantage-why-the</link><guid isPermaLink="false">https://www.theaiom.im/p/first-mover-disadvantage-why-the</guid><dc:creator><![CDATA[AIOM]]></dc:creator><pubDate>Sat, 07 Mar 2026 20:40:35 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!PEUd!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F67457335-1c9d-4091-a628-b072644b2daf_609x609.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The Isle of Man signed an MOU with AI Singapore in March 2024, committed &#163;1 million to set up a dedicated AI office, then launched the National AI Office in January 2026. Now more than one member of the Manx parliament is questioning how the money is being spent.</p><p>This country is perhaps the least well known of the British Crown Dependencies, with a population of 85,000 on an island roughly the size of Singapore. It offers a beautiful UNESCO-recognised biosphere, rolling hills of farmland, a low tax environment, plenty of strong wind, and a community willing to pour resources into educating its population (and into finding ways to entice the young to return).</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.theaiom.im/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>For a small jurisdiction, the question is not &#8220;can we keep up with AI?&#8221; but &#8220;can we build something specific and useful faster than the big states can build something comprehensive and slow?&#8221; and &#8220;how do we equip the population when technology is changing the world so rapidly?&#8221;</p><p>I&#8217;m starting this Substack to keep track of the AI experiment in this small jurisdiction, and to scan the horizon for how it compares with others. 2026 will be a pivotal year. As someone who came to this island with fresh eyes and was invited to the table, I decided my contribution would be my perspective: learning about the Isle of Man while drawing on what I&#8217;ve experienced elsewhere.</p><p>It is fair to say the Isle of Man is not reaching for first mover advantage with AI. If our advantage is not speed, could we at least be interesting and insightful in how we handle it?</p><p>In my view, the Isle of Man has an infrastructure gap it needs to plug before any kind of serious AI development.</p><p>As I have been saying, with some surprise, since I started working in this jurisdiction, the Isle of Man needs to make its laws more accessible. It is one of the few common law jurisdictions that do not publish their judgments on the World Legal Information Institute (WorldLII) network. It does not yet have its body of case authorities and legislation reliably accessible online in a searchable format. The government websites carry a script that blocks bots, search engines and AI crawlers from indexing and reading their contents. 
Some secondary legislation, such as the Civil Service Regulations, does not have an official full text available<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>.</p><p>There is no way to tell how many investors and potential businesses have been put off by this lack of publicly accessible information, or have simply skipped over this jurisdiction because other countries make these things much clearer online. Accessible, searchable laws are the digital foundations of a functional regulatory environment. Companies need to be able to access and evaluate the legal landscape. This is true even before you talk about AI, and even more so when a company is evaluating whether to deploy AI from the Isle of Man. Without that, any regulatory certainty advantage evaporates.</p><p>This is a credibility issue. You cannot credibly govern AI if your own laws are not accessible. It is not a criticism so much as a sequencing issue. You need to build the foundation before the tower. Without the foundations solidly prepared, any rushed move will show up as first mover disadvantage instead.</p><p>It&#8217;s been two months since the establishment of the National AI Office. If small democratic jurisdictions can build credible AI governance, they create options. If they can&#8217;t, the conversation defaults to the giant players. But the Isle of Man will not do AI like the US, China, or Singapore. It&#8217;s a completely different regulatory environment. I&#8217;ll try to explain how and why, and share my observations on AI and the law in writing here. With some luck, I also hope to improve my writing skills for non-legal drafting. I&#8217;ll post every fortnight. 
Thanks for your time.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>When I searched for it, the librarians at the Tynwald Library helpfully directed me to the unofficial copy available on the Government Office of Human Resources website. See https://hr.gov.im/terms-conditions-for-employees/civil-service-regulations/</p></div></div>]]></content:encoded></item></channel></rss>