Harnessing Artificial Intelligence Without Undermining Democracy
Kaylee Bourne, formerly the Director of the Cyber Policy Center at Stanford University, is now the Director of the Democracy, Rights, and Governance Initiative at the David and Lucile Packard Foundation.
Artificial intelligence is already affecting the foundations of democratic governance worldwide. Its effects can be mapped as concentric circles radiating outward from elections through government adoption, political participation, public trust, and information ecosystems to systemic risks such as economic shocks, geopolitical competition, and existential threats like climate change or biological weapons. Each circle presents both opportunities and challenges.
Let’s start with elections. In the United States, election administrators face severe staffing and funding shortages. Many argue that AI could help by translating ballots into multiple languages, verifying mail-in ballots, or selecting optimal polling locations. Yet only 8% of U.S. election officials currently use such tools.
Instead, AI is being used to make voting harder. In Georgia, some activists have used the EagleAI network to generate mass voter challenges, pressuring officials to purge voter rolls. Opposing groups employ similar tools to try to reinstate purged voters. Familiar risks, such as deepfakes designed to confuse or mislead voters, abound. In 2024, Romania annulled the results of its presidential election amid evidence of AI-amplified Russian interference, the first decisive example of AI’s electoral impact.
Yet, the search for “smoking gun” evidence may overlook the greater danger: the ongoing erosion of trust, facts, and social cohesion.
Government use of AI is a second, and more promising, vector of influence. Public trust in the U.S. federal government hovers around 23%, and agencies at every level are experimenting with AI to improve efficiency. These efforts are already yielding results. The State Department, for instance, has cut the time employees spend processing Freedom of Information Act requests by 60%. In California, the city of San Jose has used AI transportation-optimization software to redesign bus routes, cutting travel times by nearly 20%.
Such improvements could bolster democratic legitimacy, but the risks are real. Black-box algorithms already influence decisions on eligibility for government benefits and even criminal sentencing, posing serious threats to justice and civil rights. The military is adopting these tools just as rapidly: in 2024, the U.S. Department of Defense awarded contracts worth $200 million to four leading AI companies, raising concerns about state surveillance and AI-driven policing and warfare.
Simultaneously, AI stands to reshape public participation. In Taiwan, a global model for technology-supported governance, AI-powered tools like Pol.is have helped rebuild public trust following the occupation of parliament in 2014, lifting governmental approval ratings from below 10% to over 70%. Stanford’s Collaborative Democracy Lab is now deploying AI coordinators in over 40 countries, while Google’s Jigsaw is exploring similar methods to support healthier dialogue. Even social movement organizers are using AI to identify potential allies or trace the funding behind anti-democratic efforts.
However, four major risks loom large. The first is disrupted participation systems, in which processes like “notice and comment” are overwhelmed by AI-generated detritus. The second is active suppression, in which the exploitation of private data, phishing, or even state surveillance frightens activists into retreating from civic spaces. The third is passive suppression, in which individuals withdraw further from real-world civic spaces in favor of digital ones, or ultimately delegate their civic voices entirely to AI agents. The last is capability erosion, in which over-reliance on AI, including sycophantic chatbots, diminishes our capacity for sound governance and respectful disagreement.
The information ecosystem is also changing under AI’s influence. On the positive side, newsrooms are innovating. In California, CalMatters and Cal Poly are using AI to process legislative texts statewide, mining them for insights and generating story ideas.
However, these benefits could be overshadowed by a flood of deepfakes and increasingly convincing synthetic media. Misleading content can sway opinions; people can distinguish real images from fake ones only about 60% of the time. Moreover, the sheer volume of deception feeds what is known as the “liar’s dividend”: people become so immersed in fabricated content that they begin to doubt everything, breeding cynicism, apathy, and disengagement.
Finally, behind the immediate threats to democratic institutions lie broader systemic challenges. The International Monetary Fund estimates that AI might impact 60% of jobs in advanced economies, while McKinsey forecasts that between 75 million and 345 million people will need to change jobs by 2030.
The issue isn’t just that massive economic shocks pose a perennial threat to political stability. AI could also drive extreme concentrations of wealth, distorting political voice and undermining equality. And the West risks losing the AI race, ceding global military and economic dominance to anti-democratic powers like China.
Addressing these challenges requires action on two fronts. First, sector-specific measures can help journalists, government officials, election officials, and civil society adopt AI responsibly. Second, we need broader “foundational interventions”—comprehensive measures that protect not just individual sectors but society as a whole.
Foundational measures should cover the entire AI lifecycle, from development to deployment. That includes robust privacy protections and transparency about the data used to train models, their potential biases, how companies and governments deploy them, their dangerous capabilities, and any real-world harms (the Global Incident Tracker is a great starting point).
It is also essential to impose limits on use, from police deployment of AI for real-time facial recognition to schools and employers tracking students’ or workers’ activities (or even their emotions). We need accountability mechanisms for when AI systems wrongfully deny people jobs, loans, or government benefits. New ideas in antitrust or economic redistribution may also be needed to prevent democratically unsustainable levels of inequality.
Lastly, public infrastructure for AI is crucial: open models, affordable computing resources, and shared databases accessible to civil society, to ensure that the technology’s benefits are widely distributed.
While the European Union is moving swiftly on regulation, federal action in the United States has stalled. State legislatures, however, are stepping in: 20 states have enacted privacy laws, 47 have passed laws targeting AI deepfakes, and 15 restrict police use of facial recognition technology.
Indeed, the window for political action is narrow. Campaign finance reform accelerated after Watergate; efforts to regulate social media surged, then stalled, after the 2016 U.S. elections. Democracies must now rise to meet AI’s challenges, working to mitigate its costs while harnessing its remarkable benefits.