Paris – February 11, 2025: French President Emmanuel Macron (front center) poses for group photos with world leaders and participants at the end of the AI Action Summit at the Grand Palais. (Photo by Ludovic Marin/AFP via Getty Images)
As we reach a critical period in AI development, key governance challenges are emerging that could curb innovation and create global digital divides. The current state of AI governance is a patchwork of fragmented regulations, technical and non-technical standards, and frameworks that make global deployment of AI systems increasingly difficult and expensive. This fragmentation creates several challenges, including conflicting rules and technical specifications, reduced cross-border trade, and increased compliance burdens on organizations. Each can be partially addressed by combining regulatory and technical interoperability efforts.
The Fragmented State of AI Governance
Today’s global AI governance environment is characterized by diverging regulatory approaches across major economies. The EU has established itself as the first mover with its AI Act, which bans certain AI applications entirely and implements a binding, risk-based classification system that places strict obligations on high-risk systems such as biometric identification and critical infrastructure. The AI Act stands in stark contrast to the UK’s sector-specific approach, which avoids new legislation in favor of having existing regulators apply five cross-cutting principles tailored to industries such as healthcare and finance. Meanwhile, the United States lacks comprehensive federal AI legislation, resulting in a patchwork of state-level laws and non-binding federal guidance. States like Colorado have enacted laws imposing a “duty of care” standard to prevent algorithmic discrimination, while other states have passed various sector-specific regulations.
Recent changes in US federal leadership have complicated matters further, with the Trump administration’s 2025 executive order replacing previous guidance and focusing on maintaining and strengthening US AI dominance. China has taken yet another approach, combining state-led ethical guidelines with hard laws targeting specific technologies such as generative AI. Unlike Western frameworks that emphasize individual rights, China’s regulations focus on aligning AI development with national security and state values.
Apart from these and other hard laws, soft law initiatives add another layer of complexity to the fragmented AI governance landscape. Recent datasets capture over 600 AI soft law programs and over 1,400 AI and AI-related standards across organizations such as the IEEE, ISO, ETSI, the ITU, and others. While some efforts, like ISO/IEC 42001 and the OECD AI Principles, have gained considerable traction, the sheer number of competing soft law instruments has created a significant compliance burden for organizations that aim to develop or deploy AI systems globally and responsibly.
Why Regulatory and Technical AI Interoperability Matters
This fragmentation creates serious problems for innovation, safety, and fair access to AI technology. Global deployment of beneficial AI systems becomes increasingly complicated when medical algorithms developed in accordance with the EU’s strict data governance regulations could violate US state laws that allow broader biometric data collection, or face mandatory security reviews for export to China. The economic costs are substantial: according to APEC’s 2023 survey results, interoperable frameworks could increase cross-border AI services by 11-44% annually. Complex and inconsistent AI rules disproportionately affect startups and small businesses that lack the resources to navigate fragmented compliance regimes, effectively handing large enterprises an unfair advantage.
Beyond economics, technological fragmentation perpetuates closed ecosystems. Without standardized interfaces for AI-to-AI communication, most systems will remain siloed within enterprise boundaries, precluding interoperability between AI agents or between agents and platforms. This lack of interoperability reduces competition, user choice, edge innovation, and trust in AI systems. If safety, fairness, and privacy rules differ dramatically across jurisdictions, users cannot confidently rely on AI applications regardless of where they are developed. Establishing shared regulatory and technical baselines ensures that users in different markets can trust AI applications across borders.
Routes to AI Interoperability
Fortunately, there are four promising pathways to advance both regulatory and technical interoperability. These pathways do not require fully uniform global regulations; instead, they focus on creating the consistency that allows cross-border AI interactions while respecting domestic priorities. First, governments should incorporate global standards and frameworks into domestic regulation. Rather than drafting rules from scratch, policymakers can reference established international standards, such as ISO/IEC 42001, in national regulations. This incorporation-by-reference approach, akin to the EU’s system of harmonized standards, allows for national customization while creating natural alignment in compliance mechanisms.
Second, there is a need for open technical standards for AI-to-AI communication. Proprietary corporate APIs may offer short-term solutions, but true open standards developed through multi-stakeholder organizations such as the IEEE, W3C, and ISO/IEC create a level playing field. Governments can encourage adoption through procurement policies or tax incentives, as with the NIST Smart Grid interoperability roadmap.
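To make the idea of an open AI-to-AI communication standard concrete, here is a minimal sketch in Python of a hypothetical, vendor-neutral message envelope that agents from different providers could exchange. The schema name and field names (`schema`, `sender`, `capability`, `payload`) are illustrative assumptions, not drawn from any published standard; the point is that interoperability requires agreement only on the envelope, not on either vendor’s internals.

```python
import json
from dataclasses import dataclass, field, asdict

# Hypothetical vendor-neutral envelope for agent-to-agent messages.
# All field names are illustrative, not from any published standard.
@dataclass
class AgentMessage:
    schema: str                 # version of the shared message schema
    sender: str                 # globally unique agent identifier
    capability: str             # requested capability, e.g. "translate"
    payload: dict = field(default_factory=dict)

    def to_wire(self) -> str:
        """Serialize to JSON, a lowest-common-denominator wire format."""
        return json.dumps(asdict(self), sort_keys=True)

    @classmethod
    def from_wire(cls, raw: str) -> "AgentMessage":
        """Parse a message produced by any conforming implementation."""
        return cls(**json.loads(raw))

# Two agents built by different vendors can interoperate as long as
# both sides implement the same envelope.
request = AgentMessage(
    schema="agent-msg/0.1",
    sender="agent://vendor-a/translator",
    capability="translate",
    payload={"text": "bonjour", "target_lang": "en"},
)
wire = request.to_wire()
received = AgentMessage.from_wire(wire)
print(received.capability)  # the receiving agent dispatches on this field
```

In practice a real standard would also specify authentication, capability discovery, and error semantics; the sketch shows only the serialization layer where siloed, proprietary formats currently block agent-to-agent interoperability.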
Third, piloting interoperability frameworks in high-impact sectors validates approaches before broader implementation. Multilateral regulatory sandboxes, similar to those established between the UK, the UAE, and Singapore, provide a safe environment for testing regulatory and technical interoperability approaches across borders. Developing measurement tools that map the relationships between different interoperability frameworks can identify overlaps and gaps and create crosswalks between key regulatory and technical regimes.
Finally, building a stronger economic and trade case for interoperability stimulates political will. As seen in the USMCA’s digital trade chapter, integrating AI governance provisions into trade agreements creates mechanisms for regulatory consistency while promoting digital trade. Regional bodies such as APEC and ASEAN recognize this approach and are urging their member economies to pursue regulatory interoperability to prevent market fragmentation.
The Road Ahead
Achieving regulatory and technical interoperability will not happen overnight, nor will it emerge spontaneously from market forces alone: incumbents have a natural incentive to protect their AI silos from erosion. What is needed is a networked multistakeholder approach in which governments, industry, civil society, and international organizations collaborate on specific, achievable goals. International initiatives such as the G7 Hiroshima AI Process, the United Nations’ High-Level Advisory Body on AI, and the International Network of AI Safety Institutes provide promising venues for such networked, multistakeholder coordination. These efforts must avoid pursuing perfect uniformity and instead focus on creating the consistency that allows AI systems and services to function across borders without unnecessary friction. Just as international transport standards enable global trade despite differences in domestic road rules, AI interoperability can create a foundation for innovation while respecting legitimate differences in national approaches to governance.
The alternative, a deeply fragmented AI landscape, would slow innovation, entrench the power of dominant players, and deepen digital divides. The time for collaborative action on AI interoperability is now, while governance approaches are still evolving. By pursuing regulatory and technical interoperability together, AI can fulfill its promise as a technology that benefits humanity rather than deepening existing disparities.