Technology companies should not be able to set the rules for artificial intelligence
Gabriela Ramos: Co-chair of the Taskforce on Inequality and Social-related Financial Disclosures; former Assistant Director-General for Social and Human Sciences at UNESCO, where she oversaw the development of the “Recommendation on the Ethics of Artificial Intelligence”; and former Chief of Staff and Sherpa of the Organisation for Economic Co-operation and Development (OECD) to the G20 and G7 summits and the Asia-Pacific Economic Cooperation (APEC) forum.
The ongoing dispute between Anthropic and the Trump administration reveals a troubling reality about the current state of AI governance: a private company is showing more concern for ethical safeguards than the most powerful military institution in the world.
Earlier this month, the U.S. Department of Defense designated Anthropic as posing “supply chain risks.” This unusual move followed the company’s insistence on safeguards to prevent its technology from being used for mass surveillance of Americans or in fully autonomous weapons. In response, the Pentagon placed Anthropic on a list generally reserved for foreign entities deemed threats to national security. Anthropic has since filed a lawsuit challenging the designation.
Whatever one thinks of the company’s motives, the episode shows how far AI governance has strayed from its intended purpose. When the burden of insisting on basic ethical boundaries falls to private companies, the systems meant to protect the public interest from potentially harmful technologies have clearly failed.
Encouragingly, the AI Impact Summit held in February in India demonstrated that it is not too late to change course. Around the world, pioneering companies are developing systems specifically designed for safe and ethical applications, while civil society organizations are leveraging AI to tackle pressing social challenges, including violence against women and girls. Meanwhile, the costs of AI applications have fallen by as much as 90% in recent years, and the growth of open-source ecosystems has put powerful tools within reach of smaller actors.
This is the AI revolution that many of us have long hoped for, where technological advancement is guided by democratic values and respect for human rights. I have been inspired by this vision in my work on UNESCO’s Recommendation on the Ethics of Artificial Intelligence – the first framework of its kind globally – and on the AI principles endorsed by the OECD.
India’s experience offers a useful model for countries looking to harness AI in ways that serve the public good. By investing heavily in public digital infrastructure – particularly the biometric identification system Aadhaar and the Unified Payments Interface (UPI) – the country has demonstrated how technology can be deployed at scale to meet citizens’ daily needs.
However, the dispute involving Anthropic highlights an increasing tension between sound AI governance and governments’ desire to attract investment. The business models of a handful of U.S. companies currently dominating the AI field are shaped by fierce competition, both among themselves and with their Chinese counterparts, making policymakers hesitant to impose rules that might drive them away.
This dynamic was evident during last year’s AI Action Summit in Paris, where media coverage focused on investment commitments from major tech companies rather than initiatives that prioritize the public good, such as Current AI or the Coalition for Environmentally Sustainable AI.
Consequently, these summits increasingly function as platforms for governments to announce investments and data center deals. Notably, the standout image from the AI Impact Summit in India featured Prime Minister Narendra Modi surrounded by tech executives, including Sundar Pichai from Alphabet, Sam Altman from OpenAI, and Dario Amodei from Anthropic.
The original purpose of these gatherings was to foster multilateral cooperation on managing transformative technologies. Their transformation into platforms for promoting investment underscores the difficulty of maintaining serious oversight. Policymakers have attempted various approaches, from voluntary principles to binding legislation like the EU AI Act. Yet geopolitical competition and commercial pressures continue to drive governments toward a race to the bottom.
Certainly, not every country needs to confront major tech companies on the global stage. But every government must put its own house in order by establishing clear regulations and building the capacity to enforce them.
Public procurement is a powerful tool in this regard, accounting for nearly 13% of GDP in OECD countries. Procurement contracts could mandate data localization and algorithmic transparency, and establish effective mechanisms for appealing harmful algorithmic decisions. They could require safety testing of high-risk systems before deployment, rewarding companies that adhere to ethical standards and excluding those that do not.
But procurement alone is insufficient; it must be followed by legislation. One of the most important steps governments can take is to ensure that AI systems are never granted legal personality, so that responsibility always rests with a human or an institution. They should also impose a strict ban on data extraction without consent, mass surveillance, and using AI for profiling and political manipulation.
Not every country has the capability to build its own foundation models, nor should it try. The more practical path is to invest in smaller, open-source models tailored to local languages, needs, and values. While such strategies still require investment, institutions, infrastructure, and the right incentives, they have the potential to yield broad-based results.
The EU AI Act is the most ambitious attempt yet to put this approach into practice. Critics dismiss it as bureaucratic and burdensome, and the European Commission is under growing pressure to delay its implementation. But the law simply reaffirms a fundamental principle: technology is not above the law. Pharmaceutical companies must meet safety standards before launching new medications, and construction firms must certify the structural safety of the bridges they build. High-risk AI systems should be subject to the same degree of scrutiny.
The rapid pace of AI development underscores the urgency of this task. Countries that fail to build these foundations will not only lag behind in the current technological race but also risk losing control over how new technologies are used in an increasingly power-oriented world where accountability becomes optional.
The good news is that governments and consumers still have leverage. Access to markets provides countries with real influence over how AI products are deployed, and civil society organizations have repeatedly demonstrated that coordinated public pressure can change corporate behavior.
In truth, democratic societies cannot entrust the defense of their values to private companies. Instead, they must establish institutions, laws, and capacities that render such dependency unnecessary before the cost of inaction becomes excessively high.