Google’s recent push around its Gemini 3.0 and Nano Banana Pro models has put the company firmly back at the centre of the artificial intelligence conversation. Search, productivity tools, advertising systems and developer platforms are now increasingly built around Gemini-powered intelligence. While this shift may look sudden, a resurfaced video from the year 2000 shows that the idea of Google as an AI-first company was articulated clearly more than two decades ago by its co-founder Larry Page.
The clip, recorded just two years after Google was founded, captures Page describing what he believed the “ultimate search engine” should look like. His definition is strikingly close to how generative AI systems work today. Page said that a truly advanced search engine would not simply index web pages, but would understand everything on the web, understand exactly what a user wants, and then deliver the right answer directly. He described this capability as artificial intelligence, adding that such a system would be able to answer almost any question because most human knowledge already exists online.
At the time, this was an ambitious and largely theoretical idea. Search engines in 2000 were still primarily focused on matching keywords and ranking links. Page openly admitted in the footage that Google was nowhere near achieving true artificial intelligence. Instead, he framed the company’s work as a gradual journey towards that goal, focused on getting incrementally closer over time.
What makes the clip particularly relevant today is how Page explained the building blocks required to make that future possible. He spoke about the massive scale of data Google was already handling, noting that if the company printed out its search index, it would form a stack roughly 70 miles high. He also pointed to Google’s growing computing power, mentioning thousands of computers and enough storage capacity to hold multiple copies of the entire web. In Page’s view, this combination of vast data and large-scale computation created a rare opportunity for new kinds of intelligent systems.
From today’s perspective, that description reads like a blueprint for modern large language models. Generative AI systems rely on enormous datasets, extensive computing infrastructure and sophisticated models that can synthesise information rather than simply retrieve it. The idea that a system should understand intent and generate answers instead of returning links is now at the core of Google’s AI strategy.
Fast forward 25 years, and Google is actively reshaping its products around this vision. Gemini models are being woven directly into Search, allowing users to ask complex questions and receive summarised, context-aware responses. The company’s AI tools are no longer limited to finding information. They are designed to reason, explain, create and assist across a wide range of tasks.
What is notable is how closely this aligns with Page’s original framing. He did not describe AI as a separate product or feature. Instead, he positioned it as the natural end point of search itself. In that sense, Google’s current transformation is less a pivot and more a delayed fulfilment of its founding philosophy.
The clip also highlights why Google has been investing so aggressively in infrastructure for decades. Long before AI became a mainstream buzzword, the company was building the data pipelines, storage systems and computing scale required to support intelligent systems. That long-term focus is now paying off as competitors scramble to assemble similar foundations.