Thu Dec 11 2025

How Global AI Governance Could Work



LONDON – Ahead of the AI Impact Summit in India in February, it is clear that most countries still lack a workable model for governing the technology.

The United States leaves matters largely to market forces, the European Union relies on extensive regulatory compliance, and China depends on concentrated state authority.

But none of these is a realistic option for the many countries that must govern AI without large regulatory structures or massive computing capacity. Instead, we need a different framework, one that embeds transparency, consent, and accountability directly into digital infrastructure.

This approach treats governance as a design choice that can be built into the very foundations of digital systems. When safeguards are part of the architecture, responsible behavior becomes the default.

Regulators gain immediate insight into how data and automated systems behave, and users have clear control over their information. It is a far more scalable and inclusive method than one that relies on regulation alone.

But what should this look like in practice? India’s experience with digital public infrastructure offers many lessons. The country’s platforms for identity documentation (Aadhaar), payments (UPI), travel (DigiYatra), and digital commerce (ONDC) all show how public standards and private innovation can operate together on a national scale.

For example, DigiYatra – a public-private initiative that streamlines airline check-ins, queuing, and other elements of travel – demonstrates how real-time identity verification and consent protocols can be managed across large user groups in a secure and predictable way.

These systems demonstrate how digital architecture can expand access, increase trust, and promote thriving markets. They would not single-handedly solve the challenges of AI governance, but they do show that technical standards and public purpose can be aligned even in the largest, most diverse societies.

India’s own Data Empowerment and Protection Architecture builds on these lessons and is already being deployed across many sectors. Because it allows individuals to grant or withdraw permission for the use of their data through clear, auditable channels, transparency is built in: regulators can follow data flows without the need for new supervisory institutions.

Again, the underlying design principle is straightforward: Durable protection is most effective when it is embedded in system architecture, rather than being enforced only through compliance processes.

To be globally viable, an architectural approach must prioritize “sovereignty over compute.” Computing capacity is clearly the strategic bottleneck of the AI age, which is why the US and China are spending hundreds of billions of dollars annually on advanced data centers and AI chips.

But since most countries cannot hope to match these investments, we must avoid a scenario where meaningful AI governance itself requires compute – where most countries would have little real authority over the systems shaping their societies.

Maintaining sovereignty over compute does not necessarily mean that every data center should be built domestically. But it does mean that AI systems operating within a country should remain subject to its laws and accountable to domestic authorities, regardless of where the compute resides.

Multinational technology companies would need to maintain clear legal and operational partitions with technical firewalls and auditable controls. Such safeguards are necessary to prevent data from crossing borders without authorization, and to ensure that domestic data are not incorporated into globally available models without explicit approval.

Without enforceable partitions, governments will struggle to maintain oversight of the digital systems that influence domestic finance, health care, logistics, and public administration.

This underscores a major strength of the architectural approach: it allows every country to set its preferred balance of risk, innovation, and commerce. Societies differ in their views on privacy, experimentation, market openness, and safety, so no single regulatory model can ever accommodate everyone’s preferences.

But a shared architectural foundation based on transparent data flows, traceable model behavior, and the principle of “sovereignty over compute” gives each country the flexibility to calibrate its own parameters. The rails are shared, but the national settings remain sovereign.

Compared with current global approaches, an architectural model provides a more balanced and realistic path forward.

The US system encourages rapid experimentation but often recognizes harm only after it occurs. The European system provides strong safeguards but demands high compliance capacity. And the Chinese system achieves speed through centralization, which leaves it ill-suited for distributed governance.

By embedding transparency and consent into digital systems from the start, an architectural approach enables innovation to proceed predictably while ensuring public accountability.

The AI Impact Summit in India is an opportune moment for all countries to consider such a framework. The world needs a shared governance system that has been built into the very foundations of this powerful technology.

That is how we will protect users, preserve sovereignty, and give every country the ability to strike its own balance of risk and innovation. As AI reshapes every sector of the economy, an architectural approach offers the most credible and equitable path forward.

Jayant Sinha, a former minister of state for finance and minister of state for civil aviation in India, is President of the Everstone Group (a private equity firm) and Visiting Professor in Practice at the London School of Economics.

Copyright: Project Syndicate, 2025.
www.project-syndicate.org