Few people capture the spirit behind Google’s current artificial intelligence (AI) resurgence as well as Rajan Anandan, managing director at venture capital firm Peak XV Partners.
Anandan, who led Google's business teams in India and Southeast Asia for nearly eight and a half years until 2019, has watched the company’s evolution up close as it made its early bets in AI and began laying the groundwork for much of the research and many of the breakthroughs that power today’s AI landscape.
Then came the generative AI explosion triggered by OpenAI’s ChatGPT in 2022, which pushed Google onto the back foot and led leaders inside the company to declare ‘Code Red’. While Google had spent years developing similar technologies, it had been slower to release them because of reputational risks.
Under pressure to respond quickly, Google had a shaky start to its generative AI foray, with underwhelming products such as Bard, AI search features that gave bizarre and erroneous results, from telling users to eat rocks to suggesting they put glue on pizza, and an image generator that produced historically inaccurate images. Some even questioned whether Sundar Pichai possessed the leadership to restore Google's momentum.
“Large companies take time because at Google's scale, models that hallucinate can become problematic," Anandan said.
‘Opportunity to reimagine products for new era’
At the start of 2025, Pichai sent an email to his employees. He called it a moment of urgency and said that technology was evolving so quickly it created a rare opportunity to reimagine their products and processes for a new era.
These comments came after Google faced one of its most challenging years in 2024, marked by increased competition from rivals like OpenAI, Microsoft, Meta and Perplexity, which seemed to be moving faster in the fiercely competitive AI race.
The tech giant was also facing the threat of a breakup by the US government, which could have had a profound impact on the way it delivers products and services to consumers.
Cut to November 2025 and the tech giant has mounted a strong comeback with powerful frontier models topping major AI benchmarks, impressive AI products like NotebookLM, Google Beam, and Flow, and features like Nano Banana image generation that are attracting everyone’s attention.
Gemini 3, the company's newest AI model, is earning praise from rivals and other tech leaders, including Tesla and xAI chief Elon Musk, OpenAI CEO Sam Altman, Box CEO Aaron Levie, and WordPress co-founder Matt Mullenweg.
In fact, Salesforce CEO Marc Benioff said he is switching to Gemini 3 after using ChatGPT daily for three years, adding that the "leap is insane... It feels like the world just changed, again."
Earlier this week, Google also surpassed Microsoft in market cap and is now closing in on the $4 trillion mark, a 63 percent jump from last year.
Anandan said he is confident Google will become the first company in the world to have a market cap of $10 trillion.
Neil Shah, co-founder of market research firm Counterpoint Research, said, "Google has arguably always been the 'dark horse' in this AI race. I think it has successfully 'crossed the chasm,' overcoming the 'Innovator’s Dilemma' to leapfrog back to the front of the pack."
Read: Google I/O 2025 was a litmus test. Did Sundar Pichai deliver?
What has led to Google's AI resurgence?
According to industry analysts and former employees, Google’s AI resurgence is being driven by its long-term bets paying off, its full-stack approach to AI, its pre-training and post-training prowess, and the return of co-founder Sergey Brin to a day-to-day role focused on AI development.
“Google has the highest talent density on planet Earth. No one comes close. It's the only company that's integrated - from chips to models to applications. They also have one of the healthiest balance sheets, which gives them long term orientation," Anandan said.
"Google is anything but vibe revenue. It's hardcore revenue," he added.
In October, Alphabet posted its first-ever $100 billion quarter, driven by double-digit gains in Google Cloud, Search, YouTube ads, and its subscription business.
Counterpoint Research’s Shah concurred, saying Google possesses the “ultimate foundation for gaining edge for its models” since the “freshest, most relevant proprietary data flows through Search, Android, and Ads”.
The company in particular has invested heavily over the years in making data more useful, whether it is text, video, images, or data across different sectors and verticals, through existing products and new software, said Soham Mondal, founding partner of Triveous, a product, design, and engineering studio.
Combined with custom Tensor Processing Units (TPUs), an AI software stack built to run efficiently on TPUs, and the expanding scale of data centre capacity, Google has "unprecedented distribution power to instantly test, optimise, and deploy highly capable models at scale," Shah said.
Gemini models are trained and run on TPUs, which reduces reliance on Nvidia’s GPUs and has helped the company avoid some of the supply constraints that competitors face.
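For readers curious what targeting TPUs looks like in practice, here is a minimal sketch using JAX, Google's open-source numerical framework that compiles the same program for CPUs, GPUs, or TPUs via the XLA compiler. The function and array shapes below are purely illustrative and have nothing to do with Gemini's actual training code.

```python
# Minimal, illustrative sketch: JAX programs are compiled by XLA for whatever
# accelerator is attached, including TPUs. This is not Gemini training code.
import jax
import jax.numpy as jnp

print(jax.devices())  # on a Cloud TPU VM this lists TPU devices

@jax.jit  # just-in-time compile for the available backend (CPU, GPU, or TPU)
def predict(weights, inputs):
    return jnp.tanh(inputs @ weights)

inputs = jnp.ones((8, 128))    # a toy batch of 8 examples
weights = jnp.ones((128, 16))  # a toy weight matrix
print(predict(weights, inputs).shape)  # (8, 16)
```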
Read: Gemini 3 launch gives Google an edge in the fiercely competitive AI race
Google’s vertical moat in developing its own TPU silicon instead of buying merchant chips gives it a major capital expenditure advantage, Shah said.
Alphabet raised its capital expenditures in October to between $91 billion and $93 billion for 2025, up from its July forecast of $85 billion.
Anandan said Google is a company that can stay at it for a long time. “They were not the first or second player in search, videos, maps, or operating systems. Waymo also took them years,” he said.
How did Google’s AI-first shift begin?
Pichai first outlined Google’s plan to become “AI-first” in 2016 and announced its first TPU AI chip to improve Search and meet the company’s growing compute demands. However, the company didn’t anticipate how readily people would adopt generative AI-infused products, something ChatGPT proved in 2022.
This led Alphabet to merge its competing AI research units, DeepMind and Google Brain, into a single division called Google DeepMind in April 2023, led by DeepMind CEO Demis Hassabis.
The move helped the company break down silos between the two divisions and unify compute resources that were fragmented across multiple independent efforts, Jeff Dean, co-founder of Google Brain and now chief scientist at Google DeepMind and Google Research, explained in a podcast earlier this year.
Alphabet also kickstarted work on Gemini, a name Dean coined to represent the union of the twin AI research units, and began ramping up its investments in data centres and TPUs.
The initiative was led by Dean and Oriol Vinyals, vice president of research and deep learning team lead at Google DeepMind.
“I thought it would be much better for us to work together on one unified effort that was trying to achieve the goals of what these different efforts were doing and to get everyone working together," Dean said in the podcast.
Brin came out of retirement in 2023, a return incidentally prompted by an OpenAI employee.
He works directly with engineers and research teams at Google DeepMind on the Gemini text models, getting involved in both the pre-training and post-training stages. Brin also periodically delves into some of the company's generative media models, such as the AI video generator Veo, the image generator Imagen, and the music generator Lyria.
Mondal said that while Google has always had the right ingredients for success, its focus has sharpened in recent times.
“They have a clear goal, they know they are behind and they need to get ahead. That will, that desire and that fire is there,” he said.
Read: Alphabet-backed Isomorphic Labs bags $600 million funding to build next-gen AI drug design engine
How did Google reorganise itself for faster execution?
After the merger of the two research units, Google went on to streamline its organisational structure, including removing several management layers, to push employees to move at a faster cadence and improve execution across the company.
This included consolidating the teams that build models under Google DeepMind, moving the Gemini app team there, and merging the Android, Chrome, and Pixel units into a new Platforms and Devices division. Google also unified its various Search teams under longtime executive Liz Reid.
How is Google DeepMind turning into Alphabet’s innovation engine?
Google DeepMind CEO Demis Hassabis now often describes the unified AI research lab as the "engine room" for the whole of Google and the broader Alphabet. It develops the frontier AI models and plugs them into Google’s suite of products, including Search, Cloud, and YouTube, which touch billions of users.
"Google now has 6-7 products with over a billion users globally. When I left Google in 2019, they had over six products with 100 million users in India. No one can match them in distribution," Anandan said.
Once Gemini was released, the company began exploring how to bring its capabilities into products like Search, Gmail, Workspace and YouTube, which reach billions of users, and iterate on them with the power of these models, Pichai explained in the company podcast.
For instance, AI Mode in Search now runs on Gemini 3, marking the first time Google has shipped Gemini in Search at launch. This has enabled the company to offer new generative UI experiences such as immersive visual layouts, interactive tools and simulations, all generated on the fly based on the user's query.
"When you have a full-stack approach, each layer, when it innovates, flows all the way to the top," Pichai said.
The Alphabet chief indicated this was one of the key reasons behind Gemini 3's significant gains during the pre-training phase, gains that surprised many AI researchers and dispelled concerns that AI scaling had hit a wall.
"Everyone thought pre-training had hit a wall and you couldn't eke out any more gains from exposing a model to more knowledge and documents. But Google changed the architecture to prove that there is still room for improvement. Similarly post-training, there was more emphasis on solving long horizon tasks," a source familiar with the developments told Moneycontrol.
Having a natively strong multimodal model was also a factor and helped pave the way for an image-generation model like Nano Banana Pro, the source said.
In the podcast, Pichai explained that some of the improvements took time because of their full stack approach. "When we had to respond to this generative AI moment, I remember we were short on capacity. So, then we had to invest to ramp up all these things to get it to the scale," Pichai said.
"If you were on the outside, it looked like we were quiet or we were behind but we were putting all the building blocks in place and then executing on top of it. We are on the other side now, which is why you can now see the pace at which teams are moving forward," Pichai added.
The accelerated speed is evident. Gemini 3 comes just eight months after Google released Gemini 2.5 and nearly a year after Gemini 2.0.
Gemini as the unifying layer
Pichai sees Gemini becoming the unifying AI layer across its products, from Search and YouTube to Cloud and Waymo, making each better.
The company has also found success in monetising its AI offerings at a time when investors are closely watching how major technology companies are turning their tens of billions of dollars in AI investments into profits.
During the company’s Q3 earnings call, Pichai said that Alphabet crossed 300 million paid consumer subscriptions, and noted that Google One, with subscription tiers that provide access to AI products and tools, has been a major growth driver.
Google is also bringing ads to its AI search results, expanding the rollout in AI Overviews while still testing them in AI Mode.
Read: Google's Gemini now has over 650 million monthly active users, says CEO Sundar Pichai
Has Google’s developer outreach changed?
Gemini is also central to Google's strategy of attracting developers to build applications on its models and expand the ecosystem.
Mondal said that the company is extending its full-stack approach to developer outreach as well. When a new Gemini model comes out, it conducts workshops, provides detailed guides, and offers India-specific solutions around it.
“It is like I am creating the hardware, I am building the software and I am explaining that software across India and around the world. Very few companies can match that,” he said.
More than 70 percent of Google Cloud customers use its AI offerings, and 13 million developers have used its generative models for development, Pichai recently said. He added that Google Cloud's customer backlog surged to $155 billion in Q3 2025.
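As a rough illustration of that developer on-ramp, the sketch below calls a Gemini model through Google's google-generativeai Python SDK. The model name and prompt are placeholders chosen for this example, and the API key is assumed to be available as an environment variable.

```python
# Illustrative sketch of calling a Gemini model via the google-generativeai
# Python SDK (pip install google-generativeai). The model name and prompt are
# placeholders; GOOGLE_API_KEY is assumed to be set in the environment.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

model = genai.GenerativeModel("gemini-1.5-flash")  # placeholder model choice
response = model.generate_content(
    "Summarise Google's full-stack approach to AI in one sentence."
)
print(response.text)
```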
In recent months, Google has signed major cloud deals with AI rivals OpenAI and Anthropic. OpenAI will use the cloud computing resources for ChatGPT, while Anthropic will get access to up to one million of Google’s custom TPUs, which are gaining momentum as a viable alternative to Nvidia’s dominant GPUs. Social networking giant Meta is also reportedly in talks to use Google’s TPUs in its data centres.
Is Google’s AI comeback worrying its competitors?
Google’s AI resurgence has created unease among rival tech firms.
In a memo to employees last month, OpenAI CEO Sam Altman reportedly said Google’s recent progress could “create temporary economic headwinds” for the company. Google’s pre-training gains have particularly worried OpenAI's researchers, who have struggled to eke out similar improvements with GPT-5.
“I expect the vibes out there to be rough for a bit,” he said. Despite that, ChatGPT still leads the market with 800 million weekly users, while Gemini has over 650 million monthly active users.
Earlier this week, Nvidia also took to X (formerly Twitter) to defend its AI chips amid reports that Meta is in talks to use Google’s TPUs in its data centres.
The world’s most valuable company said that while Google has “made great advances in AI”, Nvidia is “a generation ahead of the industry.”
Can Google keep its lead in the fast-moving AI race?
While Google has pulled ahead in the fiercely competitive AI race for now, maintaining that momentum will be crucial given the unprecedented pace at which the technology is evolving, experts say.
A week after the Gemini 3 launch, Anthropic debuted Claude Opus 4.5, which the AI research firm says is the "best model in the world for coding, agents, and computer use” and is "meaningfully better" at everyday tasks like deep research and working with productivity applications like slides and spreadsheets.
Shah said Gemini’s adoption among enterprises, which remains relatively weak for now, will be the final frontier for Google.
"Google is on the ascent right now, but data privacy, security and sustainability will be the key areas where it will have to continue to work hard if it wants to crack enterprise," he said.
Read: Google debuts Gemini Enterprise amid workplace AI race with Microsoft, OpenAI
Mondal said the company’s open-source play is also not particularly strong at the moment, and private and public companies that need to host large language models (LLMs) or build data centres struggle to do so with Google’s stack.
Another key problem is the ever-growing need for compute, a perennial challenge of the AI era. To tackle this, Google recently unveiled a ‘moonshot’ initiative called Project Suncatcher to explore building AI data centres in space.
As Pichai quipped on a recent company podcast, “In 2027, hopefully we will have a TPU somewhere in space, maybe we will meet a Tesla Roadster, which is going around there too. It will be fun.”