Why Google might win (5/5)
On distribution
So far in this series, I have argued that Google has four supply-side moats.
Custom TPUs give it a cost edge on every token it serves.
Private infrastructure–efficient, integrated, owned end-to-end–eliminates the margin leakage every other lab incurs.
Proprietary data that is complete, multimodal, and compounding as the public internet runs dry.
A frontier model, Gemini, trained on top of all three and fairly competitive with the other players.
If the argument holds at four layers, the question becomes whether any of it actually reaches users. Cost advantages are inert without distribution. Better models are invisible without surfaces. The fifth layer is where structural advantage either compounds into a business or stays trapped as an interesting lab result.
In software, where distribution and transaction costs are zero, Ben Thompson’s Aggregation Theory, now a decade old, is the seminal thesis on who wins:
The most important factor determining success is the user experience: the best distributors/aggregators/market-makers win by providing the best experience, which earns them the most consumers/users, which attracts the most suppliers, which enhances the user experience in a virtuous cycle.
The question I had when I started this series was whether the theory would still hold in the AI era. I think it will.
I strongly believe that we are in an interim, transient state in the development of AI as a technology.
When ChatGPT broke out in late 2022, the chat interface was essentially an accident. OpenAI was sitting on GPT-3.5 and needed a way to demo it. A dialogue box was the obvious shell. What no one expected was that the shell itself would go viral, with one of the fastest consumer product adoption rates in history. Every other lab scrambled to ship a chat interface in response. That is how chat became the default.
But chat is not the end state. We are already seeing three stages unfolding in parallel.
Standalone chat: ChatGPT, Claude, Gemini as destinations. You go to the product, type, read, leave. This is where most attention currently sits.
Embedded copilots: Gemini in Docs, Copilot in Excel, Cursor in the IDE, Wispr Flow in the keyboard. You don’t go anywhere; the intelligence meets you inside the workflow.
Autonomous action: Agents that take a goal and execute across products–researching, drafting, scheduling, purchasing. The user specifies outcomes; the software handles the intermediate steps. e.g., Claude Code and Cowork.
As new long-run paradigms emerge, the constant underlying them is that users adopt technology to get a job done. Each major platform shift–the PC, the web, the smartphone–succeeded because it let people accomplish more with less friction. AI is the next layer in that sequence, and the one most likely to dissolve the boundary between “using an application” and “getting a thing done”.
Google has been laying the groundwork for this longer than any company competing in AI today. Cutting-edge machine learning has been quietly embedded across its products for over a decade, improving user experience in ways most people don’t recognise as AI at all.
Autocomplete and spelling correction in Search.
Magic Eraser and visual search in Google Photos.
Near-native translation in Google Translate.
Smart Compose finishing sentences in Gmail.
Live traffic and rerouting in Google Maps.
Call Screening and Circle to Search on Pixel.
Autonomous driving at Waymo.
All of it is downstream of foundational research by the Google Brain team, which has always focused on commercialisation–on embedding AI into products billions of people already use every day.
That is the point about distribution.
Getting a thing done entails three steps:
Capture
Understand
Serve
On capture, Google operates seven products with over two billion users each–Search, Android, Chrome, YouTube, Gmail, Maps, Photos. The nearest peer is Apple, only because the iPhone is itself a universal intent-capture device.
Muscle memory already routes users: Search for facts, Maps for places, YouTube for how-tos, Gmail for correspondence. As AI advances, the range of inputs will expand–voice, text, vision, ambient sensors, eventually neural. Google already controls most of the clients through which those inputs will flow.
On understanding, the proprietary data across services compounds. Location from Maps, queries from Search, calendar from Gmail, photos from Photos, viewing history from YouTube–all resolved to the same user account. That level of context is what turns a vague ask into a specific task.
On serving, the products needed to act on intent–calendar, email, maps, documents, browser, storage–are ones Google already ships. Every step of an agentic workflow is likely to reduce to an existing Google surface.
While Google is broadly dominant, advances in each of these three layers are coming, and will continue to come, from other companies. Wispr Flow shows what voice intent capture looks like when done natively across devices. Perplexity shows what retrieval-first search can be when built AI-first. And Claude Code shows what agentic reasoning looks like when embedded in a developer workflow.
But subsuming these layers, either through acquisition or by shipping a native feature, is not a stretch for Google. A platform advantage of this breadth can be brought to bear at any time.
However, caveats do exist. There are at least three places where this thesis could break.
The first is that an incumbent subsuming a challenger is not automatic. Microsoft spent years trying to absorb Slack with Teams and largely succeeded, but only because enterprise buyers chose bundling. Google spent similar years trying to subsume WhatsApp with a succession of products–Allo, Duo, Chat, Meet–and mostly failed, because consumer network effects sit with the incumbent. The bet in this essay is that AI belongs to the first pattern, not the second. The case: unlike social, where the network is the product, AI capability is increasingly a commoditised input that integrates most cleanly where the user already is. Distribution compounds.
The second is form factor. If the dominant interface of the next decade is hardware Google does not make–OpenAI’s collaboration with Jony Ive, Meta’s Ray-Bans, Apple’s on-device intelligence–then Google’s surface area stops being universal. Android is a hedge, not a guarantee. The ambient-compute assumption in this essay depends on Google retaining at least parity at the device layer.
The third is speed. Google is a public company with a billion-user Search business to defend and active antitrust cases in multiple jurisdictions. It will never move like a startup. It compensates with scale, bundling, and the ability to absorb a category through acquisition or feature launch. That trade–slower but broader–only holds if the next two or three years don’t produce a challenger large enough to refuse acquisition.
None of these is fatal on its own. But “Google might win” was always a probabilistic claim, not a foregone one.

