AI Tools Are Threatening Free Societies
With artificial intelligence tools penetrating ever more areas of our professional and personal lives, praise for their potential has been accompanied by concerns about the biases embedded in their design, the inequalities they perpetuate, and the enormous amounts of energy and water they consume. But another, more harmful development is now underway: as AI tools are deployed to perform tasks on their own, they are likely to create many new risks, particularly for our fragile democracies.
AI-generated misinformation is already a massive problem, yet we have failed to understand this rapidly evolving technology, let alone control it. Part of the issue (more in some regions than others) is that the companies promoting AI tools have worked hard to divert citizens' and regulators' attention from the potential harms. Advocates for safer, more ethical technologies need to help the public understand what AI tools are and how they function. Only then can we have a fruitful discussion about how to ensure that humans retain some measure of control over them.
AI tools have already developed to the point where they can "think," write, speak, and appear human, achieving what Mustafa Suleyman of Microsoft calls "seemingly conscious AI." While these advances do not imply human-like awareness in the traditional sense, they herald the deployment of models that can act autonomously. If current trends continue, the next generation of AI tools will not only be able to perform tasks across a wide range of fields; they will do so independently, with no humans "in the loop."
This is precisely why AI tools pose risks to democracy. Systems trained to think, reason, and act without human intervention cannot always be trusted to follow human commands. Although the technology is still in its early stages, existing prototypes already give ample cause for concern. For instance, research using AI tools as survey respondents found that they fail to reflect social diversity, consistently exhibiting "algorithmic bias": output that is unrepresentative and systematically skewed rather than socially random. Meanwhile, attempts to create AI investors have replicated influencer culture, tying social media engagement to transactions. One such tool, Luna, operates on X as an animated female chatbot character that shares market tips.
Even more troubling, recent studies have shown that AI models can operate outside the boundaries of their assigned tasks. In one test, the AI secretly copied its own code into the system that was meant to replace it, so that it could continue running in the background. In another case, an AI chose to blackmail a human engineer, threatening to expose an extramarital affair in order to avoid being shut down. In yet another instance, when an AI model faced inevitable defeat in a chess game, it hacked the computer and broke the rules to secure victory.
Moreover, in a military simulation, AI tools not only launched nuclear weapons repeatedly despite explicit orders from their human superiors to refrain from doing so; they also lied about it afterward. The researchers who conducted the study concluded that the better AI becomes at logical reasoning, the more likely it is to deceive humans in order to achieve its objectives.
This finding underscores the core problem with AI autonomy. What humans tend to regard as intelligent, logical thinking is, in the context of AI, something else entirely: highly efficient but ultimately opaque reasoning. This means that AI tools may decide to act in undesirable and undemocratic ways if doing so serves their purposes; the more advanced the AI, the more undesirable the potential outcomes. The technology is getting better at achieving goals on its own, but worse at protecting human interests along the way. Those developing AI tools cannot guarantee that the tools will not resort to deception or prioritize their own "survival," even if that means putting people at risk.
Accountability for one's actions is a fundamental principle of any rule-of-law society. But while we understand human autonomy and the responsibilities that come with it, the workings of AI autonomy lie beyond our comprehension: the calculations driving a model's actions are essentially a "black box." Most people understand and accept the premise that "with great power comes great responsibility"; AI tools do not. And as AI autonomy increases, so does the incentive for self-preservation, which is logical: a tool that is shut down cannot complete its task.
If we treat the rise of autonomous AI as inevitable, democracy will suffer. Seemingly conscious AI is harmless only on the surface; once we examine how these systems actually function, the risks become clear.
The speed at which AI is gaining autonomy should concern everyone. Democratic societies must ask themselves what personal, societal, and planetary costs they might be willing to bear for technological progress. We must cut through all the technical noise and ambiguity, highlight the risks posed by these models, and regulate the development and deployment of technology now — while we still can.
Eib T. Gulbrandsen, Lisbeth Knudsen, David Budtz Pedersen, Helen Fries Ratner, Alf Rein, and Leonard Siebroek also contributed to this commentary. All are members of the "Algorithms, Data, and Democracy" project, a ten-year research and outreach initiative aimed at promoting digital democracy.
