How Artificial Intelligence Could Restore Trust in Democratic Governance
In an era when public trust in the governments of established democracies has fallen to disturbingly low levels, many fear that artificial intelligence will make matters worse by spreading misinformation and undermining confidence in facts themselves. Yet new AI tools might actually be part of the solution to the trust deficit facing democracies.
According to the Organisation for Economic Co-operation and Development (OECD), only 39% of citizens in member countries trust their national governments, down from 45% in 2021. In the United States, the Pew Research Center found that trust hovers around historic lows of 17%, with similar trends observed in France, the United Kingdom, and Australia. In contrast, the most effective technocratic governments enjoy much higher levels of trust, exceeding 70% in Singapore and the United Arab Emirates. Even the authoritarian regime in China clearly surpasses many Western democracies in this respect.
The traditional interpretation of this gap (that democracies invite criticism while authoritarian regimes enforce compliance) oversimplifies the issue. High-trust technocratic regimes share another quality: they deliver results while remaining responsive to the concerns of ordinary people. In these systems, expert-led policy-making goes hand in hand with popular legitimacy.
This points to a deeper challenge facing democratic governments: a widening gap between “bounded rationality” and “abstract rationality” in policy-making. Bounded rationality is the realm of seasoned practitioners, who craft policies around political feasibility, public sentiment, and proven methods. Abstract rationality is the domain of economists and technical experts, who optimize policies for efficiency and tend to prioritize evidence and theoretical coherence over real-world political constraints.
When bounded rationality dominates, policy-making comes across as cynical and poll-driven. Citizens sense that officials care more about political survival than about solving problems, and trust erodes as technically superior policies, such as carbon taxes, are shelved in favor of politically safer but less effective alternatives.
Conversely, when abstract rationality prevails, policy-making seems detached and indifferent. Governments propose expert-designed reforms that look great on paper, only to collide with political reality. Pension reforms that could save billions trigger weeks of costly strikes, while hospital restructurings meant to improve outcomes end up costing the health minister their job. Trust diminishes when governments seem deaf to legitimate public concerns.
Successful technocracies avoid this pitfall. Singapore’s government pairs rigorous policy analysis with a sophisticated sense of how policies will be received. Policymakers in the Gulf Cooperation Council countries likewise invest heavily in both technical expertise and mechanisms for gauging citizen satisfaction; in the UAE, nearly every public service provider operates kiosks that collect customer feedback. Rather than favoring one form of rationality over the other, these governments have worked to integrate both.
Western democracies struggle to replicate this approach, partly because whoever governs is under constant attack from partisan opponents. Economists argue that scrapping fuel subsidies would save money and narrow wealth gaps, but elected officials know it would set off a political firestorm. Treasury models stress the need to reform pension systems, but polling suggests such policies are electoral suicide. With each form of rationality talking past the other, governments oscillate between technocratic overreach and political capitulation, producing the paralysis that has become a hallmark of many democracies.
AI could help bridge this gap. Large language models (LLMs) bring unusual capabilities to policy analysis. Unlike traditional decision-making models that optimize predefined indicators, LLMs absorb how people actually talk about policies, capturing ethical concerns, emotional undertones, underlying political narratives, and stakeholder perspectives.
For example, when analyzing a housing policy proposal, an LLM would not stop at economic efficiency. It could also flag language (“luxury housing developer”) likely to provoke class-based opposition, or terms (“family-friendly neighborhoods”) that could alienate younger voters. It might find that similar policies succeeded in state X but failed politically in state Y, despite comparable economic conditions.
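To make this concrete, here is a minimal sketch of what such a framing check might look like. It uses the OpenAI Python client as a stand-in for any chat-style LLM API; the model name, prompt wording, and output schema are illustrative assumptions, not a finished tool.

```python
# Minimal sketch: asking an LLM to flag politically sensitive framing in a
# policy draft. The OpenAI client is one option among many; the model name,
# prompt wording, and output schema are illustrative assumptions.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = """You are a policy communications analyst.
For the policy draft below, list phrases likely to provoke opposition from
specific constituencies. Respond with a JSON object of the form
{{"risks": [{{"phrase": ..., "constituency": ..., "risk": "low|medium|high"}}]}}.

Policy draft:
{draft}
"""

def flag_framing_risks(draft: str) -> list[dict]:
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; any capable chat model would do
        messages=[{"role": "user", "content": PROMPT.format(draft=draft)}],
        response_format={"type": "json_object"},  # constrain output to JSON
    )
    return json.loads(response.choices[0].message.content)["risks"]

if __name__ == "__main__":
    draft = ("The city will partner with luxury housing developers to build "
             "family-friendly neighborhoods near the center.")
    for risk in flag_framing_risks(draft):
        print(risk)
```

In a real deployment, the constituency categories and risk ratings would be validated against polling data rather than taken on the model’s word.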
In my own work on AI-assisted policy analysis for government clients, I have found that these systems excel at what I call “emotionally attuned policy design.” Where a traditional tool might show that congestion pricing reduces traffic by 22%, an AI system can also note that the term “congestion pricing” polls worse than “clean air fees,” that implementation during an election year doubles the political risk, and that exempting delivery vehicles opens the door to alliances with small-business groups.
The goal is not to replace human judgment but to make the implicit political knowledge of seasoned experts more explicit, systematic, and testable. With AI, practitioners of bounded rationality gain quantitative precision, advocates of abstract rationality acquire political acumen, and, crucially, each side can see the other’s perspective more clearly.
Moreover, when combined with web search, AI tools can support near-real-time sentiment analysis. This matters because a policy designed to address the concerns of one quarter may no longer fit the political climate of the next: by the time pension reforms reach parliament, talk of recession may have completely reordered voter priorities.
AI-supported analysis can reveal how an issue is being discussed across news outlets, social media, parliamentary debates, stakeholder communications, and other channels. It can surface rising concerns and signal when a window of political opportunity is opening or closing. Such insight could help governments counter the perception that they are slow, unresponsive, and detached from everyday reality. AI cannot make governments omniscient, but it can make them more responsive and less blind to the political consequences of technical decisions.
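As a rough illustration of such channel-by-channel tracking, the sketch below scores a handful of already-collected documents with an off-the-shelf sentiment model from the Hugging Face transformers library. The channel names and sample texts are invented; a real pipeline would ingest live feeds and a much larger corpus.

```python
# Minimal sketch: averaging sentiment on a policy issue per channel.
# Uses the Hugging Face `transformers` sentiment pipeline with its default
# model; the channels and documents are invented for illustration.
from collections import defaultdict
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

# In practice these would be pulled from news feeds, social media APIs,
# parliamentary transcripts, and similar sources.
documents = [
    {"channel": "news", "text": "The pension reform could save billions over a decade."},
    {"channel": "social", "text": "This reform is a disaster for working families."},
    {"channel": "parliament", "text": "We welcome the minister's willingness to consult the unions."},
]

def sentiment_by_channel(docs: list[dict]) -> dict[str, float]:
    """Average a signed sentiment score (-1 to 1) for each channel."""
    totals: dict[str, float] = defaultdict(float)
    counts: dict[str, int] = defaultdict(int)
    for doc in docs:
        result = classifier(doc["text"])[0]  # e.g. {"label": "NEGATIVE", "score": 0.99}
        signed = result["score"] if result["label"] == "POSITIVE" else -result["score"]
        totals[doc["channel"]] += signed
        counts[doc["channel"]] += 1
    return {channel: totals[channel] / counts[channel] for channel in totals}

print(sentiment_by_channel(documents))
```

Tracked over time, these per-channel averages would show whether opposition is confined to commentators or spreading to the broader public, precisely the kind of early signal a slow-moving government lacks.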
High-trust technocracies succeed partly because they systematically integrate technical excellence with political responsiveness. Now, AI offers democracies the means to do the same.
To be sure, large language models can reproduce biases, hallucinate (fabricate plausible-sounding responses), and miss deep contextual nuances. They cannot replace the minister who knows that coalition partner X would never accept policy Y, or the permanent secretary who personally remembers the catastrophic failure of policy Z in 1997. But they can surface insights that would otherwise stay invisible, make implicit knowledge explicit and shareable, help technical experts understand why an optimal policy is politically untenable, and help officials identify promising political adjustments.
Restoring trust in democratic governance requires delivering policies that citizens recognize as both effective and responsive to their concerns. AI alone will not solve the challenges facing democracies, but it can help bridge the rationality gap that has paralyzed policy-making. It offers tools for combining effectiveness with legitimacy: the pairing that high-trust governments, democratic or not, have mastered.
