How Global AI Governance Can Work
Jayant Sinha: Former Minister of State for Finance and Minister of State for Civil Aviation of India; Chairman of the Everstone Group (a private equity firm) and Professor in Practice at the London School of Economics.
As the AI Impact Summit approaches in India this February, it is evident that most countries still lack a practical model for governing this technology. The United States largely leaves matters to market forces, the European Union relies on rigorous regulatory compliance, and China depends on centralized state authority. None of these options is realistic for the many countries that must govern AI without massive regulatory bureaucracies or vast computing resources. Instead, we need a different framework: one that builds transparency, consent, and accountability directly into digital infrastructure.
This approach treats governance as a design choice that can be embedded into the very foundations of digital systems. When safeguards are part of the infrastructure, responsible behavior becomes the default: regulators gain real-time insight into how data and automated systems operate, while users retain clear control over their information. This method is more scalable and inclusive than one that relies on regulation alone.
But what would this look like in practice? India's experience with digital public infrastructure offers useful lessons. Platforms for identity verification (Aadhaar), payments (UPI), travel (DigiYatra), and digital commerce (ONDC) demonstrate how public standards and private innovation can work together at national scale. The DigiYatra platform, for instance, a public-private initiative that streamlines airline check-in, queuing, and other elements of air travel, shows how identity verification and consent protocols can be managed in real time, securely and predictably, across very large user populations.
These systems illustrate how digital infrastructure can expand access, build trust, and grow markets. They cannot solve the challenges of AI governance on their own, but they do show that technical standards can be reconciled with public purpose, even in the largest and most diverse societies.
India's Data Empowerment and Protection Architecture draws on these lessons and is already being applied across sectors. Individuals can authorize or revoke permission for the use of their data through clear, auditable channels, so transparency is built in by default and regulators can track data flows without new supervisory institutions. Once again, the core design principle is straightforward: protection endures better when it is integrated into the system's architecture than when it is enforced solely through compliance measures.
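To make the pattern concrete, here is a minimal sketch of such a consent architecture in Python. It is an illustration, not any official DEPA specification: the ConsentArtifact and ConsentLedger names, fields, and rules are assumptions made for this example. The key property is that every grant, revocation, and access check lands in an append-only audit log that a regulator could read directly.

```python
# Minimal, hypothetical sketch of a consent ledger. All names and fields
# are illustrative assumptions, not part of any official specification.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from uuid import uuid4


@dataclass
class ConsentArtifact:
    """Records a user's permission for a specific data use."""
    user_id: str
    data_type: str           # e.g. "bank_statements"
    consumer: str            # e.g. a lender requesting the data
    purpose: str             # e.g. "credit_assessment"
    granted_at: datetime
    revoked_at: datetime | None = None
    artifact_id: str = field(default_factory=lambda: str(uuid4()))


class ConsentLedger:
    """Append-only log: every grant, revocation, and check is auditable."""

    def __init__(self) -> None:
        self._artifacts: dict[str, ConsentArtifact] = {}
        self.audit_log: list[tuple[datetime, str, str]] = []

    def _log(self, event: str, detail: str) -> None:
        self.audit_log.append((datetime.now(timezone.utc), event, detail))

    def grant(self, user_id: str, data_type: str,
              consumer: str, purpose: str) -> str:
        art = ConsentArtifact(user_id, data_type, consumer, purpose,
                              granted_at=datetime.now(timezone.utc))
        self._artifacts[art.artifact_id] = art
        self._log("GRANT", f"{user_id} -> {consumer}: {data_type} ({purpose})")
        return art.artifact_id

    def revoke(self, artifact_id: str) -> None:
        art = self._artifacts[artifact_id]
        art.revoked_at = datetime.now(timezone.utc)
        self._log("REVOKE", f"{art.user_id} -> {art.consumer}: {art.data_type}")

    def may_access(self, artifact_id: str, consumer: str, purpose: str) -> bool:
        """Data flows only while an unrevoked, purpose-matched artifact exists."""
        art = self._artifacts.get(artifact_id)
        allowed = (art is not None and art.revoked_at is None
                   and art.consumer == consumer and art.purpose == purpose)
        self._log("CHECK", f"{consumer}/{purpose}: "
                           f"{'allowed' if allowed else 'denied'}")
        return allowed
```

Because the ledger itself produces the audit trail, oversight becomes a by-product of normal operation rather than a separate reporting exercise.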
For the architectural approach to work globally, it must also address sovereignty over computing. Computing capacity is the strategic bottleneck of the AI era, which is why the United States and China are spending hundreds of billions of dollars each year on advanced data centers and AI chips. Most countries cannot hope to match these investments, so we must avoid a scenario in which effective AI governance itself demands massive computing capacity, leaving most countries without genuine authority over the systems shaping their societies.
Maintaining sovereignty over computing does not necessarily mean building every data center locally. It does mean that AI systems operating within a country must remain subject to its laws and accountable to local authorities, regardless of where the computing takes place. Multinational technology companies will need to maintain clear legal and operational separation, backed by technical safeguards and verifiable controls. Such guarantees are essential to prevent data from crossing borders without consent and to ensure that local data is not folded into globally available models without explicit approval. Without enforceable barriers, governments will struggle to retain oversight of the digital systems that shape local finance, healthcare, logistics, and public administration.
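As a hedged sketch of what such a verifiable control might look like, again in Python: the DataRecord fields and the two policy checks below are hypothetical assumptions made for illustration, not any country's actual rules. The point is that consent to process data abroad and consent to fold it into globally available models are separate, explicit permissions.

```python
# Hypothetical cross-border transfer gate. Field names and policy rules
# are illustrative assumptions, not any jurisdiction's actual law.
from dataclasses import dataclass


@dataclass(frozen=True)
class DataRecord:
    record_id: str
    home_jurisdiction: str         # where the data subject is located
    cross_border_consent: bool     # explicit consent to export the data
    global_training_consent: bool  # explicit consent for global model use


def may_transfer(record: DataRecord, destination: str) -> bool:
    """Data stays in its home jurisdiction unless export is consented to."""
    if destination == record.home_jurisdiction:
        return True
    return record.cross_border_consent


def may_train_globally(record: DataRecord) -> bool:
    """Local data enters globally served models only with explicit approval."""
    return record.global_training_consent


# Example: a record may be processed abroad yet still be excluded from
# global model training, because the two permissions are independent.
r = DataRecord("rec-001", "IN",
               cross_border_consent=True, global_training_consent=False)
assert may_transfer(r, "EU") is True
assert may_train_globally(r) is False
```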
This points to the main strength of the architectural approach: it allows each country to strike its own balance among risk, innovation, and trade. Societies differ in how they weigh privacy, experimentation, market openness, and safety, so no single regulatory model can accommodate everyone's preferences. A common architectural foundation, built on transparent data flows, traceable model behavior, and computational sovereignty, gives each nation the flexibility to calibrate these trade-offs for itself. The rails may be shared, but sovereignty remains national.
Compared with current global approaches, the architectural model offers a more balanced and realistic path forward. The American system encourages rapid experimentation but addresses harms only after they occur. The European system provides strong safeguards but demands a high level of compliance capacity. The Chinese system achieves speed through centralization, which makes it unsuitable for distributed governance. By embedding transparency and consent into digital systems from the outset, the architectural approach allows innovation to advance predictably while ensuring public accountability.
The AI Impact Summit in India presents an opportune moment for countries to consider such a framework. The world needs a shared governance system embedded in the foundations of this powerful technology. That is how we will protect users, preserve sovereignty, and enable each nation to strike its own balance between risk and innovation. At a time when AI is reshaping every sector of the economy, the architectural approach offers the most credible and equitable way forward.
