Google invented transformers. Now they might lose search because of them.
Google invented the transformer architecture—the foundation of every AI chatbot now threatening their search monopoly. When ChatGPT launched, Sergey Brin submitted his first code access request in years, targeting Google's internal language model.
The search market hasn't seen real competition in decades. Now it has.
ChatGPT became the fastest consumer product to reach 100 million users—then kept growing. By late 2025, ChatGPT had reached 900 million weekly active users. Microsoft integrated it into Bing, and a new paradigm emerged: instead of typing queries and clicking through results, users could ask questions and get direct answers.
For Google, this threatens the core business model. Search advertising generated $162 billion in 2022. If users bypass the search results page entirely, that revenue evaporates.
Google's research team published "Attention Is All You Need" in 2017—the paper that introduced transformers. Every modern LLM, including ChatGPT, descends from that architecture.
Google had the technology. They had LaMDA and Bard in development. They had more compute, more data, more AI talent than anyone.
They didn't ship first.
OpenAI did. Microsoft capitalized. Google scrambled to respond. The company that built the technology found itself playing catch-up on the product.
LLMs aren't ready to replace search entirely. The problems are well-documented:
Hallucination: LLMs confidently generate false information. Early estimates put hallucination rates at 10-20%. For queries requiring factual accuracy—medical information, legal questions, technical specifications—this is unacceptable.
Recency: Training data has cutoff dates. LLMs don't know what happened last week unless connected to real-time information sources.
Verification: Search shows sources. You can evaluate credibility. LLMs present answers without attribution, making verification harder.
Computation costs: Running LLM inference at Google-search scale would require massive infrastructure investment. The economics aren't proven yet.
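The cost gap is easier to feel with a back-of-envelope sketch. Every number below is an assumption for illustration—the query volume is a rough public estimate, and the per-query costs are placeholders, not reported figures:

```python
# Back-of-envelope estimate of LLM-vs-classic search serving costs.
# All figures are illustrative assumptions, not reported numbers.

QUERIES_PER_DAY = 8.5e9          # rough public estimate of daily Google searches
COST_PER_LLM_QUERY = 0.002      # assumed LLM inference cost, dollars per query
COST_PER_CLASSIC_QUERY = 0.0002  # assumed traditional search cost per query

def annual_cost(queries_per_day: float, cost_per_query: float) -> float:
    """Annual serving cost in dollars at a given per-query cost."""
    return queries_per_day * cost_per_query * 365

llm = annual_cost(QUERIES_PER_DAY, COST_PER_LLM_QUERY)
classic = annual_cost(QUERIES_PER_DAY, COST_PER_CLASSIC_QUERY)

print(f"LLM serving:     ${llm / 1e9:.1f}B/year")
print(f"Classic serving: ${classic / 1e9:.1f}B/year")
print(f"Cost multiple:   {llm / classic:.0f}x")
```

Even under generous assumptions, the multiple is roughly an order of magnitude—which is why per-query inference cost, not model quality, may decide whether LLMs can serve search-scale traffic.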
Technology adoption follows S-curves: slow start, rapid growth, eventual plateau. The question is where LLMs sit on that curve.
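The standard way to model that shape is a logistic function—slow start, rapid middle, plateau at a saturation cap. A minimal sketch (parameter values chosen only for illustration):

```python
import math

def s_curve(t: float, k: float = 1.0, t0: float = 0.0, cap: float = 1.0) -> float:
    """Logistic adoption curve: fraction adopted at time t.

    k  - steepness of the growth phase
    t0 - inflection point (fastest growth)
    cap - saturation level the curve plateaus toward
    """
    return cap / (1 + math.exp(-k * (t - t0)))

# Sample the curve around the inflection point t0 = 0:
for t in [-4, -2, 0, 2, 4]:
    print(f"t={t:+d}  adoption={s_curve(t):.2f}")
```

The bull and bear cases below are really a disagreement about where on this curve we are: before the inflection point (growth still accelerating) or past it (approaching the cap).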
Bull case: We're at the beginning. Accuracy improves, costs decline, use cases expand. LLMs become the primary interface for information retrieval.
Bear case: We're approaching a plateau. LLMs have trained on most available public data. Fundamental limitations (hallucination, reasoning failures) prove hard to solve. Progress slows.
AI has experienced this pattern before. The "AI winters" of 1974-1980 and 1987-1993 followed periods of rapid progress that hit ceilings. Whether current LLM development follows the same pattern remains unknown.
Google has advantages: more compute, more data, and deeper AI research talent than any competitor, plus distribution to billions of existing search users.
The question is execution speed. Incumbents typically move slower than startups. Google's ad revenue model creates incentives to preserve search rather than disrupt it. Internal politics and risk aversion may have delayed their response.
Microsoft, with little to lose in search, could move faster.
Regardless of who wins, the competitive dynamics have shifted:
For users: More options. Better answers for some queries. New interfaces emerging.
For Google: First real threat to core business in decades. Forced to accelerate AI deployment.
For Microsoft: Relevance in consumer tech they haven't had since the mobile transition.
For startups: New platforms create new opportunities. Perplexity, You.com, and others are building AI-native search experiences.
The search engine market, static for years, is suddenly dynamic. Google invented the technology that made this possible. Whether they maintain their position depends on whether they can out-execute the companies now using their invention against them.
Two years after this article was written, the competitive landscape has evolved significantly. ChatGPT's growth to 900 million weekly users exceeded most predictions. Google launched Gemini and integrated AI directly into search results. Perplexity emerged as a credible AI-native search alternative.
Google's search market share has declined modestly but remains dominant at approximately 89%. The "LLM replaces search" thesis hasn't fully materialized—hallucination remains a problem, and users still want verifiable sources for important queries. But the direction is clear: conversational AI interfaces are capturing a growing share of information-seeking behavior, particularly among younger users.