Corporate Artificial Intelligence Threatens Freedom
Eight years ago, Russian President Vladimir Putin stated that whoever masters artificial intelligence “will rule the world.” Since then, investments in this technology have skyrocketed, with major American tech companies (Microsoft, Google, Amazon, and Meta) spending over $320 billion in 2025 alone.
It is not surprising that the race for dominance in artificial intelligence has also generated significant backlash. There are growing concerns that smart machines could replace human jobs or create new security risks, such as empowering terrorists, hackers, and other bad actors. What if artificial intelligence were to escape human control entirely and overpower us in its quest for dominance?
However, the most urgent danger lies in the increasingly powerful and opaque algorithms of artificial intelligence, which threaten freedom itself. The more we allow machines to think for us, the less capable we become of exercising our own autonomy.
This freedom-threatening danger has two facets. On one hand, authoritarian regimes like Russia and China increasingly use artificial intelligence for mass surveillance and sophisticated forms of repression, stifling not only dissent but any source of information that might provoke opposition. On the other hand, private companies, particularly multinationals with access to vast amounts of capital and data, threaten human agency by integrating artificial intelligence into their products and systems. Their goal is to maximize profits, a pursuit that does not necessarily serve the public good, as the harmful social, political, and mental-health effects of social media platforms have shown.
Artificial intelligence poses an existential question for liberal democracies. If it remains under private control, how can government of the people, by the people, and for the people (to paraphrase Abraham Lincoln) not perish from the earth?
The public must understand that the true exercise of freedom depends on defending human will against incursions by machines designed to shape thought and feeling in ways that favor corporations rather than human flourishing.
This is not merely a hypothetical threat. A recent study involving nearly 77,000 people who discussed political issues with AI models found that chatbots trained for persuasion were up to 51% more effective than those that were not. In another study, conducted in Canada and Poland, about one in ten voters told researchers that conversations with smart chatbots had persuaded them to back candidates they had previously declined to support.
In free societies like the United States, companies' ability to monitor and influence behavior on a broad scale has long benefited from traditional legal limits on state regulation of the market, including the marketplace of ideas. The working assumption has been that, absent a credible threat of imminent violence, the best way to deal with words and images presumed to be harmful is to use more words and images to counteract their effects.
Yet this familiar free-speech doctrine does not fit a digital marketplace shaped by rapidly proliferating, AI-driven algorithms that operate covertly. Users of online services may think they are getting what they want, based on their previous viewing choices or purchases, for example. But the elaborate mechanisms that “push” users toward whatever a particular platform wants them to engage with remain opaque, buried deep in proprietary code. As a result, “counter-speech” is unlikely to breach these programmed barriers; the very perception of harm, and of the urgent need to address it, is suppressed at its source.
A similar distortion of the principle of free speech is evident in Section 230 of the Communications Decency Act of 1996, which protects owners of digital platforms (including the most popular social media sites) from liability for damages arising from online content. This corporate-friendly policy assumes that all this content is user-generated—just individuals exchanging ideas and expressing preferences. However, companies like Meta, TikTok, and X do not provide a neutral platform for users. Their existence hinges on the premise that monetizing attention is an exceedingly lucrative practice.
Now, companies are seeking to increase their profits not only by marketing various artificial intelligence services but also by deploying them in ways that maximize users’ time online, thereby increasing their exposure to targeted advertisements. If holding users’ attention means subtly surfacing certain kinds of information while withholding others, or serving up AI-generated flattery and uncritical encouragement, then so be it.
Governments betray their commitment to protecting the true practice of freedom when they fail to regulate online marketing designed to manipulate preferences covertly. Just as calculated lies constitute fraud when it comes to commercial products or services, covert or deliberately disguised behavioral manipulation by companies for profit falls outside what the U.S. Supreme Court considers “the fruitful exercise of the right to free speech.”
Laws and public policies must catch up to contemporary conditions and the threats posed by corporate artificial intelligence to freedom in the digital age. If artificial intelligence has indeed become powerful enough to rule the world, governments in free societies must ensure that it serves the public interest or at least does not harm it.
