Is General Artificial Intelligence Really the “Last Invention”?

- in Opinions & Debates

Carl Benedikt Frey, Associate Professor of AI & Work at the Oxford Internet Institute and Director of the Future of Work Programme at the Oxford Martin School, recently published “How Progress Ends: Technology, Creativity, and the Fate of Nations” (Princeton University Press, 2025).

In the mid-1960s, the Bletchley Park mathematician and cryptographer I. J. Good proposed a thought experiment that has since become secular gospel in Silicon Valley. Good argued that if we could build an “ultraintelligent machine,” that machine could design even better machines, igniting an intelligence explosion far beyond human comprehension. The first such machine would therefore be the last invention humanity would ever need to make.

Today, that prophecy, once a figment of science fiction, is the primary goal of the world’s most powerful institutions. For instance, Demis Hassabis of Google DeepMind speaks of “solving intelligence” to “solve everything else.” It’s an enticing narrative. However, even if we assume that future systems will be capable of learning, experimenting, and generating truly innovative solutions that far exceed current models, the premise of the “last invention” still rests on numerous dubious assumptions.

The first is that innovation resembles a frictionless race from idea to impact. It does not. The discovery process is more like a chain, whose strength is determined by its weakest link.

These weak links account for a significant portion of human progress. In 1986, the Space Shuttle Challenger exploded 73 seconds after launch, not because of a failure in its world-class engines or software, but because a small rubber O-ring failed when exposed to low temperatures (as Nobel laureate physicist Richard Feynman famously demonstrated during hearings on the disaster). Since then, the O-ring has become a metaphor for the small, critical bottlenecks that can sink even the most advanced systems.

Discovery operates similarly. Artificial general intelligence (AGI), broadly understood as a model capable of performing any cognitive task, may significantly accelerate early-stage medical research; but if a candidate treatment fails clinical trials, cannot be scaled for mass manufacturing, or never gains regulatory approval, the “breakthrough” will never become an invention that improves lives. When the early stages of discovery are automated, the human role does not disappear; it simply shifts to the remaining barriers and bottlenecks, where judgment, tacit knowledge, and practical skill become paramount.

These complications point to a larger problem: AGI will not only have to surpass humans; it will have to surpass humans who are themselves using AGI. For the “last invention” narrative to hold, humans must become obsolete even as partners and overseers of AI applications.

However, intelligence is not merely quantitative: “more” does not simply replace “less.” Even a capable AGI is likely to differ fundamentally from humans, excelling at speed and pattern recognition while remaining fragile in the face of rare events. Different strengths imply different blind spots, and as long as those blind spots do not overlap, the combination of human and machine judgment will continue to outperform either alone.

The game of Go provides a useful reminder. After Google DeepMind’s AlphaGo defeated Lee Sedol 4–1 in 2016, machine dominance over human players seemed assured. Yet in 2023, researchers showed that an amateur human player, armed with adversarial strategies discovered with modest computing resources, could consistently defeat top Go engines by steering them into unusual positions outside their training distribution. Apparent superiority can conceal systematic weaknesses, and it is precisely there that human contribution often adds the greatest value.

A third issue relates to knowledge itself. The “last invention” thesis assumes that all relevant information can be codified, but this is rarely the case. Few inventions have changed the world more than the Ford Model T, which turned the automobile into a mass-produced product. But Henry Ford’s achievement was not solely in the new design; it was also in the approach he adopted to organize production.

This is why delegations from Italy, Germany, the Soviet Union, and elsewhere traveled to study Ford’s factories closely. The crucial technical know-how could not be extracted from any preliminary design. It was embedded in routines, sequences, tool selection, and the daily problem-solving of the factory workers. Similarly, Toyota’s lean production system has been hard to replicate because it is deeply rooted in human routines and culture, not in a blueprint.

More intelligence does not automatically solve the “knowledge problem”: the fact that what makes complex systems work is often scattered, localized, and unarticulated information. If knowledge could be transferred without friction, industries would not cluster the way they do in Silicon Valley or London.

AI enthusiasts might respond, “Well, just put sensors, cameras, and microphones everywhere, and we will capture the missing knowledge.” But this strategy assumes that monitored individuals will communicate openly and share the knowledge they generate, and it disregards politics and law. Recording “everything, everywhere” runs counter to the EU General Data Protection Regulation, which has become a model for privacy regulations around the world.

Furthermore, the EU AI Act does not grant blanket permission for the intense monitoring necessary for large-scale human knowledge capture. Even if it did, it is not reasonable to assume that all human knowledge, let alone judgment, can be digitally translated that easily.

Ultimately, AGI may automate intelligence. But the process of invention relies on something far more complex. The hard part is often not conceiving a solution but translating it into practice: local knowledge, reliable routines, supply chains, and institutional capacity are what make something work dependably in the real world. More intelligence does not automatically produce those complementary components.

AGI will change the discovery process by making expertise cheaper and experimentation faster. But the claim of being “humankind’s last invention” is far stronger. For that to be true, we would need a world where practical knowledge can be fully transferred through digital channels and where responsibility can be automated along with cognition. That is not the world we live in.

As the cost of intelligence falls, the assets that command the most value will change. The advantage will go to those who can deliver results in the real world. Humans will not become redundant; they will become the system’s most critical bottlenecks.
