Radio ExpressTV
Jeffrey Wu: Director at MindWorks Capital
Over the past two years, the prevailing narrative around artificial intelligence has focused on limitless potential. Larger models, training runs involving trillions of tokens, and unprecedented capital expenditure cycles have fueled a persistent sense of acceleration. But technological change is rarely so simple, and this time is no exception. As AI moves from the experimental phase into real-world deployment, it is becoming evident that the constraints imposed by the physical world, capital markets, and political systems matter more than its theoretical capabilities.
The most urgent constraint is electricity, illustrated most clearly in the United States, where data center energy demand is projected to rise from roughly 35 gigawatts to 78 gigawatts by 2035. Northern Virginia, the world's largest cloud infrastructure hub, has already exhausted its available grid capacity. Utility companies in Arizona, Georgia, and Ohio warn that building new substations could take nearly a decade. Individual complexes may require between 300 and 500 megawatts, enough to power an entire city. Silicon can be manufactured quickly; high-voltage transmission cannot be built at the same pace.
Markets are responding with the speed and ambition one might expect. Major global cloud companies have become some of the world's largest long-term buyers of renewable energy. Solar and wind farms are being built specifically to serve cloud facilities, and some companies are exploring next-generation small modular reactors as a way to bypass slower municipal infrastructure.
Ultimately, these efforts will push the boundaries of what is possible, but they will not eliminate constraints; rather, they will redirect them. The next wave of AI capabilities is likely to concentrate not in Northern Virginia or Dublin, but in areas where land, energy, and water are still abundant: the American Midwest, the Nordic countries, parts of the Middle East, and western China. The geography of AI is dictated by physics, not preferences.
Silicon is the next constraint, and here the story becomes more complex. Nvidia once seemed like the backbone of all AI advancement, but that era is nearing its end. In a significant milestone, Google trained its latest large language model, Gemini 3, entirely on its own Tensor Processing Units, while Amazon is developing its Trainium2 chips, Microsoft is working on Maia, and Meta is building MTIA for similar purposes. In China, Huawei's Ascend platform has become the strategic backbone for training local models in the face of U.S. export controls.
Some of this shift reflects natural technological maturation. As workloads grow, specialized accelerators become more efficient than general-purpose graphics processors originally adapted for AI. But the timing is not coincidental. Scarcity, geopolitical tension, and cost pressure have pushed major cloud companies to assume roles previously reserved for semiconductor firms. Their willingness to bear the switching costs of moving away from Nvidia's CUDA ecosystem signals just how severe these constraints have become. The result will be an increasingly fragmented hardware landscape and a less unified AI ecosystem. Once silicon architectures diverge, they rarely converge again.
The third constraint, capital, operates more subtly. Capital expenditure plans by major global cloud companies for 2026 now exceed $518 billion, a figure that has risen by nearly two-thirds in just the past year. We are witnessing the largest expansion of private-sector infrastructure in modern history. Companies like Meta, Microsoft, and Google are revising their capital spending guidance so frequently that analysts struggle to keep up.
Yet it is still early to talk about economic returns. Baidu recently reported revenues of 2.6 billion yuan (approximately $369 million) from AI applications, driven largely by corporate contracts and infrastructure subscriptions, while Tencent says AI-driven efficiency has lifted profitability across several mature businesses. In the United States, however, most companies still fold AI revenues into broader cloud segments, obscuring the picture.
The gap between AI adoption and realized returns is wide but familiar. In previous technological waves, productivity gains typically lagged infrastructure spending by years. The bottleneck is not a lack of investor confidence but the strategic pressure that enthusiasm generates, as different companies chase different conceptions of value dictated by their business models and cost structures.
Many sectors cannot adopt AI at the speed at which new models are released. Banks, for instance, remain bound by security and compliance frameworks that require software deployed in isolation from external networks and auditable on-site. Such rules effectively cut them off from the most advanced models, which depend on cloud coordination and rapid iteration through new releases. Healthcare systems face similar constraints, and governments even more so. The issue is not AI's theoretical capability but the difficulty of integrating these tools into legacy systems built for a different era.
Taken together, these factors point to a future very different from the one suggested by the typical media narrative. AI is not converging on a single global frontier. Instead, distinct regional and institutional architectures are being shaped by different constraints: energy shortages in the U.S., land and cooling limits in Singapore and Japan, geopolitical scarcity in China (where Western export controls restrict access to advanced chips and cloud services), regulatory friction in Europe, and institutional inertia in the corporate world. The technology may be universal, but its implementation is local.
What is encouraging is that real-world constraints are not enemies of progress. They often provide the structure on which new systems take shape. The fiber-optic overbuilding of the late 1990s, mocked at the time as wasteful excess, later underpinned the rise of streaming, social media, and cloud computing.
Today's constraints will play a similar role. Energy scarcity is already reshaping the geography of AI. Silicon fragmentation is creating new national and corporate ecosystems. Differences in capital intensity are pushing companies toward different strategic trade-offs. Institutional constraints are shaping the first real use cases.
The coming decade of AI will not belong to the systems with the greatest theoretical capability, but to the ecosystems most adept at turning real-world constraints into design advantages. Possibility defines the horizon, but constraints will determine the path the world actually takes.
