What Each Jurisdiction Thinks AI Is
Notes From Taipei, Tokyo, and Douglas
The UKJT report I promised a fortnight ago is still pending. When the consultation report is published, I will do a proper piece on it. In the meantime, here is a detour that turns out not to be a detour.
I have just finished reading Dr Chien Lee-feng’s book on the future of artificial intelligence in Taiwan.1 The first thing that struck me was how little the book talks about AI governance as such. It reads like a national strategy briefing, pitched to government, industry, business owners, employees, parents and young people in one go, about an industrial transition that Taiwan cannot afford to miss.
Dr Chien, Google Taiwan’s first employee and its long-time Managing Director, thinks about AI as an industrial transition. That framing is worth dwelling on, because it is the key to understanding why jurisdictions produce such different AI conversations. What each jurisdiction thinks AI is shapes what it thinks AI governance is for. And what it thinks AI governance is for determines what kind of institutional bet it makes, and when.
I think that makes comparing jurisdictions fascinating. The difference is not that some have rules and others do not. It is that the rules each jurisdiction writes are shaped by what it thinks AI is for in the first place.
Taiwan: AI as an industrial transition
Taiwan sits at the centre of the global AI supply chain. TSMC fabricates the overwhelming majority of the world’s leading-edge AI accelerators; the island’s hardware ecosystem is inseparable from the frontier-compute economy that OpenAI, Anthropic, Google, and the large Chinese labs all depend on. Chien’s book takes that industrial centrality as the frame within which Taiwan’s AI future has to be reasoned about.
Within that frame, the problems he identifies are almost entirely capacity-side rather than rules-side. Four stand out.
The first is talent retention. Taiwan trains world-class engineers, and then loses many of them to the United States. The AI hardware story is a Taiwanese story, but a significant share of the people who will build the AI software stack over the next decade are doing so in Palo Alto, Seattle, and Cambridge, Massachusetts, not Hsinchu or Taipei. This is not an easy problem to legislate out of existence.
The second is training data in traditional Chinese. Simplified Chinese dominates the internet’s Chinese-language corpus; traditional Chinese (the written script used in Taiwan and Hong Kong) is comparatively under-represented, and much of the high-quality material that does exist is not digitised or not openly accessible due to IP protection. A jurisdiction that wants its own legal, medical, and cultural context to be legible to AI systems has to build the corpus that makes that possible.
The third is quality data for sovereign AI. Even with good corpora, developing models that reflect Taiwan’s legal, regulatory, and cultural specificity requires institutional collaboration that cuts across sectors usually kept apart. Chien notes that the TAIDE and TAME sovereign AI models — built on Meta’s open-weights Llama — were developed by a consortium including the National Science and Technology Council, the Department of Computer Science and Information Engineering and the Department of Information Management at National Taiwan University, Pegatron, Unimicron, Chang Gung Memorial Hospital, and the CCP Group.2 State, academy, industry, and healthcare, aligned around a single technical project. That is what proper capacity-building looks like. I am genuinely curious how these models have been adopted, and how they have performed against frontier alternatives in real Taiwanese use — a question I would have to dig into further. Any views from readers closer to that development would be most welcome.
The fourth, and the one I found most striking, is adoption-timing risk. Chien observes that Taiwan’s economy has relatively robust fallback sectors — mature manufacturing, agriculture, established services — which cushion individual actors from the pressure to adopt AI quickly. The consequence, he argues, is that Taiwan as a whole may drift past the window in which early AI adoption delivers compounding advantage, while other economies with fewer fallbacks move faster out of necessity. This is a governance concern that sounds nothing like the governance concerns dominating Western AI debate. It is not about risk, or liability, or rights. It is about timing — specifically about the failure mode of moving too slowly, not too fast.
Taken together, these are the worries of a technology-intensive, export-oriented, democratically governed jurisdiction whose security posture is aligned with the United States and whose manoeuvring room is shaped by external pressures it did not choose. They are also, unmistakably, the worries of a jurisdiction that has decided AI is primarily an industrial question.
Taiwan’s rules, written to serve Taiwan’s industry
It would be wrong, however, to stop there and conclude that Taiwan has chosen capacity over law. In December 2025, the Legislative Yuan adopted the Basic Law on Artificial Intelligence, which came into effect on 14 January 2026.3 Unlike the German or Hong Kong Basic Laws, which function as constitutional documents, a Basic Law in Taiwan is a framework or umbrella statute that sits below the Constitution. The Basic Law on AI sets out seven foundational principles: sustainable development and well-being, human autonomy, privacy protection and data governance, cybersecurity and safety, transparency and explainability, fairness and non-discrimination, and accountability. It also mandates high-risk warnings for AI products and systems classified as such by the relevant competent authority in consultation with the Ministry of Digital Affairs (MODA). Finally, it requires all government agencies to review, amend, or enact legislation under their jurisdiction to bring it into compliance with the Basic Law within two years of its commencement. The clock runs to 14 January 2028.
What makes the Basic Law on AI worth reading closely is not the seven principles (as most contemporary AI frameworks name a similar set) but the genealogy of its drafting. In 2004, Taiwan’s Fundamental Communications Act established, without qualification, that the interpretation and application of communications statutes should not prejudice the provision of innovative technologies and services. Twenty years later, when a similar pro-technology clause was included in the Basic Law on AI draft, the Legislative Yuan — after “vigorous discussions,” as per Ms Ken-Ying Tseng’s measured description — added a precondition: the promotion of new technologies and services would prevail over conflicting laws, but only where the seven fundamental principles are adhered to.4 The direction of the concession matters. Pro-innovation was the default; principles were the negotiated qualifier. That is a very particular kind of AI rulebook. It is a rulebook that treats technology-first as the starting premise, and treats the seven principles as the constraints within which the starting premise gets to run.
The Basic Law on AI also instructs MODA to align Taiwan’s framework with international risk-classification standards, establishes a regulatory sandbox, and commits the government to budgetary support, talent cultivation, and mechanisms for data openness, sharing, and reuse to enhance AI-usable data. In other words, the same capacity concerns Chien raises in his book are now codified as statutory obligations on government agencies. The law aligns with the industrial strategy and translates it into an operative task: a statute book to be brought into line over the next two years.
Taiwan, then, has not chosen capacity over rules, or rules over capacity. It has written its rules to serve its capacity bet. The two-year legislative review clock is the institutional expression of a judgment that capacity-building and legal clarity have to arrive together. Industry gets certainty; the profession gets an architecture to work within; government gets a programmatic deadline that survives the election cycle.
That is a more disciplined move than most of the AI legislation passed elsewhere in the last eighteen months. And it is a move whose distinctive character is only visible if you read the industrial frame (Chien) and the legal frame (the Basic Law on AI) as one continuous argument, not two separate ones.
England: AI as a liability and professional-conduct question
The contrast with the English conversation is starker than one would expect. In the last fortnight alone, the English AI-and-law conversation has circled around the Civil Justice Council’s interim report on AI in court documents (consultation has just closed on 14 April 2026), Munir v Home Office on AI hallucinations and verification obligations, the UK Jurisdiction Taskforce’s forthcoming legal statement on AI liability, the October 2025 judicial guidance update, and Sir Geoffrey Vos’ continuing public reflections on AI in the legal system.5
Every one of those interventions is a rules-side move in a specific register: the key ideas are professional duties, negligence, and the absorption of a new technology into existing rights and remedies. Chien’s Taiwanese concerns about corpora, talent flight, and fallback-sector drag have no English analogue in the legal-regulatory conversation, because they belong to a different conversation altogether. It looks like the UK’s AI capacity questions are being handled, for better or worse, in other rooms — by the Government’s industrial-strategy apparatus, the CMA, the frontier labs themselves. The courts and the legal profession do what they do well, on common law grounds they understand.
This is not a criticism. England and Wales most probably has the court volume, the legal-market depth, and the institutional infrastructure to make an incremental, case-law-driven approach work. It is coherent for a jurisdiction whose AI capacity questions are being settled elsewhere.
It is simply a different lens, and the lens produces a different kind of rulebook. Where Taiwan’s Basic Law on AI treats AI as an industrial transition to be supported within principled limits, English common law treats AI as an analytic object to be absorbed within the existing fabric of professional duty, much like the arrival of the fax and email as methods of service of court documents.
Both jurisdictions are working on rules. They are not the same rules.
Tokyo: commitments extracted, and capacity built alongside
Between the industrial lens and the liability lens sits a third move worth noticing, one Dr Chien brings up and Tokyo illustrates.
When Sam Altman visited Japan and met Prime Minister Kishida in April 2023, and returned again to speak at the University of Tokyo in February 2025, the discussions around the visits emphasised OpenAI’s Japan-specific commitments: a Japanese-optimised GPT-4 model, collaboration on analysing publicly available Japanese government data, an OpenAI Japan office, and eventually a SoftBank-OpenAI joint venture.6 The point is, a frontier lab will make locally-tailored commitments — tuned models, local offices, joint ventures with national champions, data-collaboration frameworks — when a jurisdiction has enough leverage to ask.
The important move, though, is what Japan has done alongside those commitments, not because of them. Japan did not sit around and wait for OpenAI. Its National Institute of Advanced Industrial Science and Technology opened the ABCI 3.0 supercomputer to industry and universities for foundational model development in January 2025. The GENIAC programme has funded thirty Japanese-language foundation-model projects since February 2024. Sakana AI and Rakuten have built and released Japanese-optimised models, some of them open-source.7 The commitments extracted from OpenAI are real, but they are a supplement to Japanese sovereign capacity, not a substitute for it.
Nor are capacity programmes Japan’s only legal-architecture move. This month (April 2026), the Japanese Cabinet approved an amendment bill to the Act on the Protection of Personal Information that removes the opt-in consent requirement for third-party provision of personal data, and for the acquisition of publicly available sensitive personal information, where the data is to be used for “statistical purposes” — a term the bill reads broadly enough to cover AI development and training. The same bill introduces an administrative fine regime for violations of data-processing obligations. One instrument, two directions at once: loosened consent rules to enable AI training data to move at scale, tightened enforcement to make the privacy floor stick.8 Taiwan’s Basic Law on AI pairs industrial-strategy support with seven principled constraints; Japan’s APPI amendment pairs training-data liberalisation with a new penalty regime. The move is not unique to either jurisdiction, but the pattern is becoming identifiable. Rules that reflect what a jurisdiction thinks AI is for are rules that move on more than one axis at once.
The lesson for jurisdictions further down the scale is not “demand more from OpenAI.” It is that extracting commitments from frontier labs is worth doing only if you are also building something that makes the commitments bite — and that legal architectures built to serve an AI strategy have to calibrate enablement and enforcement as a single, integrated move. A jurisdiction that is not building anything gets press releases. A jurisdiction that only enables, or only enforces, gets an incomplete rulebook.
The Isle of Man: manoeuvring room, and what to do with it
The Isle of Man is a Crown Dependency. It is not part of the United Kingdom, and it is not part of the European Union. Tynwald — the Island’s parliament, and one of the oldest continuously sitting legislatures in the world — passes its own primary legislation. It does so, however, subject to review by the UK Ministry of Justice before Royal Assent, and Westminster retains, under the framework articulated in the 1973 Kilbrandon Report, a reserved power to legislate directly on Isle of Man matters where the good governance of the Island or the United Kingdom’s legitimate interests are judged to require it.9
In practice, this is less a ceiling than a permanent negotiation. Much of the delicate conversation between the Island and the UK is about where the line between internal Manx matters and external matters falls, and that line moves, slowly, through decades of practice rather than constitutional rewriting. I have not seen this arrangement described as clearly in public-facing material as it is explained to students on the Manx Bar course, which is itself a small fact worth sitting with. But it is the constitutional reality within which any Isle of Man AI strategy has to be made.
Taiwan’s manoeuvring room is constrained, structurally, by a contested international status and by a security posture aligned with the United States. The Isle of Man’s manoeuvring room is constrained, structurally, by its relationship with the Crown and with Westminster. Of course, these are not analogous constraints. Taiwan has dense domestic capacity — hardware, universities, cross-sectoral consortia, an active industrial base, and now a statute that programmes that capacity into a two-year legislative cycle — and makes institutional bets inside a geopolitical constraint it did not choose. The Isle of Man has legislative agility — a parliament that can move a bill from introduction to Royal Assent in months — inside a relational constraint that is negotiated rather than imposed. Neither set of tools is complete. What each jurisdiction can do with the tools it has is a different question.
The Isle of Man does not have a frontier-model industry, and is unlikely to acquire one. It has an emerging deployment layer — a small but growing group of Manx firms developing AI workflow products, some of them explicitly marketed to offshore jurisdictions — which looks broadly similar to the deployment layers forming in dozens of other jurisdictions of comparable scale. What is more distinctive is what the Isle of Man has started doing legislatively with data.
The Data Asset Foundation, and an extension of the logic
On 24 March 2026, Tynwald passed the Foundations (Amendment) Bill, with Royal Assent pending, which would create a new legal structure: the Data Asset Foundation. A DAF is a licensed entity — a new type of legal person, like a company but closer in nature to a foundation or a trust — designed as a legal wrapper to hold data, set enforceable rules about how that data can be used and by whom, and allow the data to be treated as a formal capital asset.10 Secondary regulations are in development; a six-month pilot with early adopters is now running; full enactment is scheduled for later in 2026.11
Two features of the DAF regime are worth isolating.
The first is that the DAF is positioned as a structure outside the reach of the US CLOUD Act and outside the UK and EU regulatory frameworks.12 Its pitch is not that the Island has built a better version of what larger jurisdictions offer, but that it has built something larger jurisdictions cannot offer, because they are inside regulatory regimes with which the DAF is deliberately non-aligned. The Crown Dependency position — not part of the UK, not part of the EU, not subject to US extraterritorial legislation — is being treated not as a marginal status to be overcome, but as the defining feature of the product.
The second is that the DAF regime is being delivered by an emerging ecosystem of data hosting and corporate service providers now offering DAF administration and governance services. The infrastructure is not hypothetical. It is being built.
It is too early to evaluate this regime. It is closer to a bet of a particular kind — not Taiwan’s capacity bet, not England’s liability bet, but a jurisdictional-arbitrage bet. The proposition is that strategic regulatory distance from larger powers is itself a valuable asset, and that a small Crown Dependency with legislative agility and an established financial-services regulatory culture is unusually well placed to sell that distance in a structured, legally-cognisable form.
Here is my speculation. If the DAF bet works — if, by 2027 or 2028, there is a visible body of international data custody business operating through DAFs — then it is entirely plausible that a post-September-2026 Tynwald will want to extend the same logic to AI. Not just data as an asset, but AI models, AI training runs, AI workloads, and AI governance obligations structured in, and administered from, a jurisdiction deliberately positioned outside the CLOUD Act, outside the EU AI Act, and maybe even partially outside whatever the UK’s eventual AI legal landscape looks like. An “AI Asset Foundation” is not something anyone has publicly proposed. But the institutional logic that produced the DAF points in that direction, and the Island’s regulatory culture — built over decades for financial services — is, for better and for worse, precisely the kind of culture that can productise a legal concept once the political decision is made.
This is a bet Taiwan cannot make. Taiwan’s security and trade posture requires alignment with the United States on AI supply-chain matters, with all that implies about export controls, compute access, and regulatory convergence. Its manoeuvring room runs in a different direction. What Taiwan can do — build state-academy-industry-hospital consortia to develop sovereign models, retain talent, digitise and steward its linguistic and legal corpus, write a Basic Law that codifies the industrial-strategy framing into statute — the Isle of Man cannot. What the Isle of Man can do — hold data and, possibly, AI assets in a structure deliberately outside the major regulatory regimes — Taiwan cannot. The contrast and complementarity are, to me, noteworthy. The lesson-flow, honestly, probably runs mostly in one direction: there is more the Isle of Man can learn from Taiwan about the patient, cross-sectoral work of building capacity than there is the other way around. But the jurisdictional-arbitrage move is a move, and it is one that Taiwanese readers would do well to understand, because a version of it is being built some six thousand miles from Taipei, in a Crown Dependency of 85,000 people.
Three clocks
Taiwan thinks AI is an industrial transition, and has written a Basic Law that treats governance as the architecture that transition is run through. England and Wales thinks AI is a legal liability and professional-conduct problem, and its courts and profession are doing the incremental, case-law-driven work of absorbing a new technology into its common law jurisprudence. Japan, pragmatically, thinks it is both — something to be built and something to be negotiated with — and has the state capacity to do both at once. The Isle of Man seems to be in the process of deciding if AI is, for it, a jurisdictional-arbitrage question: a question about what kind of regulatory distance the Island can offer that larger jurisdictions structurally cannot.
None of these lenses is wrong. Each is answering a different question, shaped by the manoeuvring room the jurisdiction actually has. The interesting thing is that they are not substitutes. The industrial lens and the liability lens and the arbitrage lens each illuminate parts of AI governance the others do not, and a jurisdiction paying attention to all three — not just the one its own structural position pushes it toward — is better placed to make whatever institutional bet it ends up making.
Three clocks are running through 2026 and 2027, and they matter because they set the deadlines by which the bets actually have to be placed.
Taiwan’s clock is statutory. The Basic Law on AI requires every government agency to complete the review, amendment, or enactment of legislation in its jurisdiction by 14 January 2028. That deadline survives elections, cabinet reshuffles, and shifts in political mood. It is running whether anyone chooses to pay attention to it or not.
The European Union’s clock is regulatory, and currently uncertain. The statutory deadline for compliance with the Annex III high-risk provisions of the EU AI Act remains 2 August 2026, but the Commission’s Digital Omnibus proposal — on which the Parliament’s IMCO and LIBE committees voted jointly on 18 March 2026, and which the Parliament adopted as its negotiating position in plenary on 26 March 2026 — would push that deadline to 2 December 2027 for stand-alone high-risk AI systems, and 2 August 2028 for high-risk AI systems embedded in products already regulated under Annex I. Trilogue negotiations with the Council are expected to begin in April. Whichever deadline ends up binding, that is the compliance horizon for EU-facing businesses.13
The Isle of Man’s clock is democratic. The general election is scheduled for September 2026. Whatever AI strategy the Island commits to for the next five years will be chosen by the Tynwald that election returns. The manifestos being drafted over the coming months are where the direction actually gets set. Unlike the Taiwanese and European clocks, the Manx clock is not set by statute or treaty. It is set by the electorate, and by the candidates who choose to put AI governance on their platforms.
So we have three jurisdictions with different structural questions converging on the same approximate timeline. Chien’s book is, among other things, an argument that Taiwan needs to make its bet soon, because the industrial window is closing. The Isle of Man, on a different clock but the same horizon, is in the process of deciding what the Island is willing to commit to for the next five years.
I will be in Dublin for the IAPP AI Governance Global Europe conference in June, where the state of the Digital Omnibus and the high-risk rollout will likely be one of the conversations in the room. I will report back on what the EU is actually going to ask of anyone doing AI-adjacent business from a jurisdiction of our size.
In the meantime, the decision the Isle of Man is in the process of making is not a bet to imitate the EU, or the UK, or Taiwan. It looks like a bet on what the Island’s Crown Dependency status and established regulatory culture can build that no larger jurisdiction structurally can. What the September 2026 election will decide is whether the Island is willing to make that bet deliberately, or by default.
AIOM is published fortnightly from the Isle of Man. Next issue: the UKJT legal statement, when it lands.
簡立峰, 《台灣AI大未來 — 解析最新的AI趨勢、台灣情勢、企業布局與個人發展》 (my unofficial translation of the title is ‘Taiwan’s Grand Future in AI: Analysing the Latest AI Trends, Taiwan’s Position, Corporate Strategy, and Personal Development’), 2024. Dr Chien Lee-feng was the first employee and, for fourteen years, the Managing Director of Google Taiwan; he is a former research fellow and deputy director of the Institute of Information Science at Academia Sinica and professor in the Department of Information Management at National Taiwan University. He retired from Google in 2020 and now sits on the boards of Appier and iKala.
Chien also notes that the TAIDE/TAME consortium was explicitly designed to produce a sovereign AI model tailored to Taiwan’s local language, knowledge, legal framework, and cultural needs.
Ken-Ying Tseng, “Taiwan: The Basic Law on Artificial Intelligence leads the way to AI development,” DataGuidance, 3 February 2026.
Ibid. The contrast with the unqualified 2004 Fundamental Communications Act provision is Tseng’s, and it is illuminating.
See my last piece for more on the CJC, Munir, and the comparative picture across England and Wales, Singapore, and Australia.
See Inside Japan’s struggle to build sovereign AI for an overview of ABCI 3.0, GENIAC, and the Japanese sovereign AI landscape; and on Sakana AI’s trajectory, Sakana AI raises $135M Series B.
See the Personal Information Protection Commission’s January 2026 Policy Direction for Amendment of the APPI; Japan approves APPI amendment bill on personal data, AI training, and fines — Digital Watch Observatory; and Japan relaxes privacy laws to make AI development easy — The Register. For a helpful overview, Luiza Jarovsky, “The AI Wave Reaches Privacy Law”.
Royal Commission on the Constitution 1969–1973 (the Kilbrandon Report), Cmnd 5460. On the Ministry of Justice’s role in the Royal Assent process for Manx primary legislation, see the Isle of Man Government’s published guidance on the legislative process.
Foundations (Amendment) Bill 2025.
On the DAF’s positioning relative to the US CLOUD Act and the Crown Dependency’s extraterritorial status, see the White Paper, Channel Eye and Raconteur, “Isle of Man races to legislate for world-first data marketplace”.
On the current state of the Digital Omnibus, see the European Parliament’s Legislative Train Schedule entry on the Digital Omnibus on AI; Euronews, “European Commission proposes delaying full implementation of AI Act to 2027”; and OneTrust, “EU Digital Omnibus Proposes Delay of AI Compliance Deadlines”. On the unamended statutory position, see the EU AI Act Service Desk Implementation Timeline.
