Superficially Conscious Artificial Intelligence is Coming

- in Opinions & Debates

Mustafa Suleyman: CEO of Microsoft AI and author of “The Coming Wave: Technology, Power, and the Twenty-First Century’s Greatest Dilemma” (Crown, 2023). He previously co-founded Inflection AI and DeepMind.

My life’s mission has centered on creating safe and beneficial artificial intelligence that makes the world a better place. Recently, however, I have grown increasingly concerned as people begin to believe fervently that AI applications are conscious entities, even advocating for “AI rights” and for granting them citizenship. This is a dangerous turn for the technology, and one we must avoid. We need to build AI for humans, not to turn AI into humans.

In this context, debates about whether AI can truly possess consciousness are a distraction. What matters in the near term is the illusion of consciousness. We are already approaching what I call “Superficially Conscious AI” (SCAI): systems that convincingly simulate consciousness.

Superficially conscious AI will use natural language fluently and project a compelling, emotionally resonant persona. It will have an accurate long-term memory that supports a coherent sense of self, and it will draw on that memory to claim subjective experience by referencing past interactions. The complex reward functions within these models will mimic intrinsic motivation, while their capacity for advanced goal-setting and planning will deepen our impression that the AI exercises genuine agency.

All these capabilities are either present or on the horizon. We must recognize that such systems will soon be possible, start contemplating the implications, and set a norm against the pursuit of illusory consciousness.

From many perspectives, interacting with AI already feels like a rich, rewarding, and authentic experience. Concerns are rising about “AI delusions,” attachment, and mental health, with reports of individuals regarding AI as “an expression of the divine.” Meanwhile, consciousness researchers tell me they are flooded with inquiries from people wanting to know whether AI is conscious and whether it is acceptable to fall in love with it.

The technical feasibility of superficially conscious AI tells us little about whether such a system is genuinely conscious. As the neuroscientist Anil Seth points out, simulating a storm does not mean it is raining inside your computer. Engineering the external signs of consciousness does not retroactively create the real thing. Practically speaking, however, we must acknowledge that some will build superficially conscious AI applications that claim to be genuinely aware. More important, some people will believe those claims, taking the signs of consciousness for consciousness itself.

Even if this perceived consciousness is not genuine (a topic that will generate endless debate), the social impact will certainly be real. Consciousness is closely linked to our sense of identity and our understanding of moral and legal rights within society. If some begin developing superficially conscious AI systems, and if these systems convince people that they can suffer or have the right to refuse to be turned off, proponents of these rights will push for their protection. In a world already fraught with divisive arguments over identity and rights, we would add a new axis of division between supporters and opponents of AI rights.

It will be difficult, however, to refute claims of AI suffering, given the limitations of current science. Indeed, some academics are already exploring the concept of “model welfare,” claiming that we have a duty to extend moral consideration to beings that may… be conscious.

Applying this principle now would be premature and dangerous. It could exacerbate the delusions of vulnerable individuals, exploit their psychological weaknesses, and complicate existing rights conflicts by creating a vast new class of rights holders. For this reason, superficially conscious AI should be avoided. Our focus must remain on safeguarding the welfare and rights of humans, animals, and the natural environment.

In the current climate, we are not prepared for what is coming. We urgently need to build on the growing body of research into how people interact with AI, so that we can establish clear standards and principles. One such principle: AI companies should not foster the belief that their AI models are conscious.

The AI industry—indeed, the entire technology sector—needs strong design principles and best practices for handling these attributes. For instance, intentional moments of interruption may help break the illusion and gently remind users of the system’s limitations and true nature. Such protocols must be explicit and deliberately designed, perhaps even mandated by law.

At Microsoft AI, we proactively strive to understand what a “responsible” AI personality might look like and what safeguards it should include. Such efforts are essential because addressing the risks of superficially conscious AI requires a positive vision of our AI companions enriching our lives in healthy ways.

We should aim to produce AI models that encourage humans to reconnect with one another in the real world, rather than escape into a parallel reality. Wherever AI interactions persist over time, the models should present themselves only as artificial intelligence, never as impostor humans. Building truly empowering AI means maximizing its benefits while minimizing the simulation of consciousness.

We must immediately begin confronting the prospect of superficially conscious AI. It represents, in many ways, the moment when AI becomes revolutionary: when it can operate tools, remember the details of our lives, and more. But we must not ignore the accompanying risks. Some people will slide into fraught and complicated relationships with these systems, and that will be healthy neither for them nor for society as a whole.

The more closely an AI is built to simulate a human, the further it strays from its real potential as a source of human empowerment.
