It all starts with the portfolio of Berkshire Hathaway, the firm of legendary investor Warren Buffett.
In the early years of the AI wave that swept Wall Street, Google’s parent company Alphabet was widely regarded as a “laggard,” even saddled with the “original sin of AI”: the harder Google, the technological leader, pushed AI development, the faster it might erode its core search-advertising business. So when ChatGPT burst onto the scene, Google sounded an internal “code red,” treating it as an existential threat.
However, three years have been enough to reverse the perception of Wall Street traders.
Recently, with the release of Gemini 3 and a steady demonstration of impressive multimodal capabilities at the application layer—from real-time translation to complex logical reasoning—the market has begun to cast a firmer vote of confidence in Google. Even as AI stocks enter a phase of pullbacks and corrections, Google’s market value has risen against the trend: its momentum has caught up with Microsoft, once out of reach, and is now challenging NVIDIA, the reigning GPU hegemon.

This reflects a newer, more mature judgment by the capital markets on the trajectory of AI: the greatest competitive edge of the AI era—perhaps the ultimate moat—does not rest entirely on how advanced the underlying models’ technical benchmarks are, nor simply on how much computing power is stockpiled or how much money is poured into electricity and data centers.
If the first wave of the AI boom was an arms race over “who can build the strongest sports car (the underlying model),” the contest has now quietly shifted to “who can get that sports car onto the highway of real scenarios and applications.”
Updating AI Cognition: From Disruptor to Multiplier
A truth overlooked by early tech enthusiasts is that, like all technologies, AI must enter concrete user scenarios to create real value. AI is not free-standing “magic”; it takes effect only through “services.”
Three years ago, when ChatGPT suddenly rose, the market’s reaction was that AI, as an independent product form, would reshape the relationship between technology and users. That plunged Google into anxiety, since the new interaction model seemed to bypass the search box. Three years on, the market now tends to believe AI need not be a standalone product: it can be woven into existing products and services as an efficiency multiplier.
In other words, AI has transitioned from the “disruption theory” to the “multiplier theory”—perhaps less glamorous, but more pragmatic.
Shaped by this new understanding, the combination of large models, cloud services, and the scenarios and traffic supplied by products with billions of users has become Google’s unique, hard-to-replicate moat.

Look at OpenAI by contrast: however dazzling its technology, its biggest challenge has always been finding application scenarios for it—what people usually call “traffic entry points”—and getting ordinary people to use AI seamlessly every day, rather than occasionally opening a standalone chat box.
OpenAI’s fundamental challenge—the entry-point problem—simply does not exist for Google:
- Google Search: A giant entry point for global information flow and commercial monetization, where AI can directly serve trillions of queries.
- YouTube: The world’s largest video content library and traffic pool, where AI can directly empower content creation, review, and recommendation.
- Android Ecosystem: A mobile operating system covering billions of devices worldwide, where AI can be directly integrated into the device’s underlying layer.
- Gmail & Workspace: Powerful productivity tools, where every AI enhancement directly compounds user stickiness.
Unlike pure model companies, Google does not need to spend billions to “buy traffic” or teach users a new app. While OpenAI still craves entry points, Google has already embedded AI capabilities into the operating systems and toolchains people use every day.
In essence, the logic of “traffic is king” from the mobile Internet era has not been weakened in the AI era, but rather strengthened. This has transformed Google from initial panic to possessing an unparalleled advantage in AI penetration.
Market Value Trends and Anti-Fragility
The market’s judgment is cruel and direct, reflected in market value.
After a period of catching up, Google’s market value has at times matched or even surpassed Microsoft’s, and it is now challenging NVIDIA, the “center of the universe.” If Google’s market value approaches or even overtakes NVIDIA’s in the coming period, it would hardly count as a “black swan.”
Because under the growth narrative, Google has built a formidable moat; under the contraction narrative, Google’s business model highlights its anti-fragility amid storms.
Google CEO Sundar Pichai’s recent warning on the earnings call reads more like a prophecy: “No one can escape the collapse of the AI bubble.” But in the storm, not everyone’s resilience is the same.
- Pure model companies: The risk is that once model performance is overtaken by open-source alternatives or competitors, their valuations may collapse rapidly, and they lack stable revenue sources to support huge R&D investments.
- Computing power sellers: Although powerful, their core business is tied to the hardware cycle. Once AI capital expenditure enters an adjustment period or powerful alternative chips emerge, revenue volatility will be significant.
- Alphabet Inc.: Google’s parent company has a diversified revenue structure. Search advertising is the world’s largest and most stable cash cow; Google Cloud provides steady B2B revenue; and AI acts as an enabler that raises the efficiency and value of both.
This “cash cow + AI + cloud services” model gives Google stronger anti-fragility in the AI storm than pure large-model companies or hardware companies that only sell computing power: it earns a valuation premium when the market booms, and when the bubble bursts it can lean on the stable cash flow of its core businesses to keep funding R&D and absorb fallen competitors.
Reflecting on Moats: NVIDIA, Apple, and Google
Of course, we should not underestimate NVIDIA’s dominant position in the AI era.
NVIDIA is a company that almost monopolizes the “shovels” in the AI gold rush, with an unshakable status. But it is worth pondering: will NVIDIA’s moat collapse due to “improvements in computing power efficiency”?
The answer is no.
Take a new generation of models like DeepSeek as an example: they claim to sharply reduce the computing power needed for training and inference, which sounds negative for GPUs. The opposite is true: lower computing-power requirements unlock far broader application scenarios, letting thousands of small and medium-sized enterprises and independent developers afford AI—so total GPU demand surges, a textbook Jevons paradox.
Mainstream global large model, robot, and autonomous driving companies all rely on the CUDA ecosystem.
NVIDIA’s real moat is surprisingly similar to Google’s—it is not just hardware, but a combined software-hardware ecosystem:
- CUDA Ecosystem: A software, tooling, and developer ecosystem built up over nearly two decades, serving as the common language of AI scientists and engineers worldwide. Even if new entrants match NVIDIA’s chip performance, they would need years to build equivalent software compatibility.
- Innovative Infrastructure Provider: NVIDIA has actually moved beyond “selling chips”; it provides the “water, electricity, and coal” of the AI era—innovative infrastructure.
This model is highly consistent with the logic of Google and Apple: none of them built their advantages through a single product or technology, but through providing platform-level, indispensable innovative infrastructure to establish unique structural advantages.
Thus, after three years of large model performance competitions, the market has confirmed a basic common sense:
Pure leadership in underlying model technology is often short-lived. Once high-quality open-source models such as Llama, DeepSeek, and Qwen emerge, the technical gap closes quickly.
In an open ecosystem, technology itself can be caught up with; the ecosystem and traffic built on billions of users are the truly unshakable barriers.
The Moat in the AI Era: China and the United States
The logical evolution of the Silicon Valley AI arena will inevitably inspire Chinese pursuers across the ocean.
Currently, Chinese tech companies’ capital expenditure on AI lags well behind their American counterparts’. That has, to an extent, spared them the hidden risks of a US-style bubble, but it also means a relative disadvantage in base computing power. This asymmetric landscape forces Chinese companies to learn from Silicon Valley’s experience and lessons: the endgame of the AI race lies not in inflated benchmark rankings but in the depth of value creation.
Against the backdrop of narrowing technological gaps, the real test is how to efficiently inject AI capabilities into unique Chinese scenarios, transform the model’s “capabilities” into practical “applications,” and create irreplaceable commercial value. Chinese companies need to completely shift the focus of competition from the “arms race” of underlying models to “application efficiency and commercial monetization.”
The evolution of AI is essentially a long and cruel marathon. The ultimate winner is likely not a pure large model startup, but a company whose capabilities are close to Google’s:
It aggregates billions of user entry points to control traffic sovereignty, owns its own cloud service infrastructure to ensure cost and efficiency, and can deeply embed large models that efficiently match its own needs into core scenarios.
The moat in the AI era is not temporary technological leadership, but the ecological capability to transform technology into value.