Artificial Intelligence and the Imperative of Upholding Human Rights

Alexandra Reeve Givens, former Director of the National Artificial Intelligence Office, is President of the Center for Democracy and Technology.

Karen Kornbluh is a former U.S. Ambassador to the Organisation for Economic Co-operation and Development.

Washington, D.C. – In July, the Trump administration held an event titled “Winning the Artificial Intelligence Race,” unveiling its AI action plan. Like the multi-billion-dollar data center deals announced during President Donald Trump’s trip to the Gulf in May, this plan aims to bolster U.S. leadership in the AI sector. However, since neither this plan nor previous announcements mentioned human rights, it is reasonable to ask: what does it mean for the U.S. to “win” the AI race?

Many in Washington and Silicon Valley simply assume that American technology inherently aligns with democratic values. As OpenAI CEO Sam Altman stated before Congress last May: "We want to ensure that democratic AI wins out over authoritarian AI." The sentiment is appealing, but new technological systems do not automatically protect human rights. Policymakers and companies must take proactive steps to ensure that AI deployments meet specific standards and conditions, as is already the case in many other industries.

Recent reports from the United Nations Working Group on Business and Human Rights, the UN Human Rights Council, and the Freedom Online Coalition emphasize that governments and companies alike bear responsibility for assessing how AI systems affect people's rights. Existing international frameworks require all companies to respect human rights and to avoid causing or contributing to human rights violations through their activities. Yet AI companies, for the most part, have failed to acknowledge and uphold these responsibilities.

These calls to action underscore commitments that other industries already shoulder. Most major companies recognize their obligations to conduct human rights impact assessments before purchasing or deploying new systems; to integrate human rights due diligence into product design and business decisions; to include contractual safeguards against misuse; and to provide effective remedies when harm occurs.

The challenge in addressing AI lies not in a lack of clarity about the standards, but in the reality that many companies, and many governments, act as if those standards do not apply to them. Consider Trump's AI deals in the Gulf. If these investments are finalized, they could cement the region's ambition to become a global AI hub, raising troubling questions about whether U.S. leaders and technology executives are abandoning commitments they once upheld.

In the United Arab Emirates, the U.S. has agreed to transfer advanced chips to G42, an Emirati AI firm, as part of a broader plan to build a massive AI campus in Abu Dhabi. In Saudi Arabia, a newly formed state-backed company has announced multi-billion-dollar agreements with major U.S. firms for chip procurement and infrastructure development, and Elon Musk's Starlink has received authorization to operate in the Kingdom. None of these announcements mentioned safeguards to ensure that the technology is not used for surveillance or repression.

The risk is not hypothetical. The UAE has been known to use spyware against journalists and dissidents, while Saudi Arabia has engaged in various forms of transnational repression, alongside its deep involvement in the humanitarian crisis in Yemen. New AI capabilities significantly enhance governments' ability to suppress fundamental rights, for instance through detailed profiling of opposition figures, real-time surveillance, analysis of individuals' social media posts and communications, and control over the outputs of AI models.

Unlike traditional goods or infrastructure, AI systems can be transferred and deployed digitally, with minimal public scrutiny. Under a "sovereign AI" plan, in which a government develops and controls AI systems for its own population, these systems can quickly become tools for entrenching state power. As American companies ramp up international deal-making, with strong support from the Trump administration, the failure to build human rights commitments into these efforts marks a dangerous turning point.

It is also a missed opportunity. Access to world-leading American technology can itself serve as leverage, used to promote rights-respecting applications and to guard against misuse. There is no need to start from scratch. The UN Guiding Principles on Business and Human Rights, endorsed by the U.S. and many of its allies, oblige companies to avoid human rights violations and to address any harm they cause.

The OECD Guidelines for Multinational Enterprises on Responsible Business Conduct go even further, requiring companies to exercise due diligence across operations and supply chains. The Global Network Initiative (GNI), launched by leading technology firms 17 years ago, established principles for protecting user privacy and freedom of expression in high-risk markets, with regular assessments of member companies’ compliance (our organization is a founding partner).

Companies such as Coca-Cola, Volkswagen, and Estée Lauder already implement these frameworks or submit to independent oversight, as do some technology firms. Yet the technology industry, for the most part, has failed to explicitly accept these responsibilities with respect to AI, despite the heightened need in this field. An obvious first step would be for leading AI companies that have not yet joined the Global Network Initiative to do so; they would benefit from its framework and its network of peers.

A recently leaked memo from Anthropic CEO Dario Amodei shows what is now at stake. Announcing the company's intention to accept investment from Gulf states, after previously indicating that it would not, Amodei observed that it is consistent to call for policies that prevent anyone from doing "this or that"; but if that policy fails and everyone is doing "this or that," then "reluctantly we'll have to do 'this or that' ourselves."

If AI companies end up in a race to the bottom, what hope remains for protecting fundamental human rights? If winning the AI race means abandoning the values that distinguish us from authoritarian competitors like China, it will be a costly victory. Our future will not be secure simply because the technology is American.

It is not too late for governments and companies to commit to applying long-standing human rights standards to AI. The tools are already available; there is no excuse for failing to use them.
