Google is taking another major step toward making generative AI a native experience within its ecosystem. The company's latest rollout integrates the Nano Banana model (Google's popular name for Gemini 2.5 Flash Image) into Search and Google Lens, allowing users to generate, edit, and transform images without ever leaving these core apps. The feature aims to blend creativity and practicality in one seamless workflow, according to a report by 9to5Google.
The update introduces a new ‘Create Images’ option within Google Search’s AI Mode. When users open AI Mode, a plus icon now appears in the bottom-left corner of the prompt bar, while the previous carousel of suggested prompts has been replaced with a simpler list layout. Tapping the icon presents three new choices: Gallery, Camera, and Create Images — the last one marked with a distinctive banana emoji.
Selecting ‘Create Images’ prompts users to describe what they want to visualise. They can generate entirely new visuals or upload an existing photo for AI-based edits. Once processed, the results can be downloaded or shared directly, each image carrying a subtle Gemini spark watermark in the bottom-right corner. The watermark serves as Google’s signature indicator for AI-generated content, ensuring transparency while maintaining brand identity.
Meanwhile, Google Lens is receiving a similar upgrade. A new ‘Create’ tab is being added to Lens, expanding its functionality beyond object recognition and homework help. This tab, marked with the same banana emoji, encourages users to capture or create AI-generated selfies and live photos. When activated, it opens the front-facing camera by default, offering instant creative possibilities. After capturing an image, users can describe how they want it modified — whether changing styles, backgrounds, or artistic effects — and the AI instantly processes the request. The redesigned interface also features repositioned labels and a wider layout to accommodate more filters side-by-side.
For now, this feature is gradually rolling out to Android devices in the United States under Google’s AI Mode Search Lab. The company says it plans to expand access globally in phases and support more languages soon. The move reflects Google’s broader goal of embedding generative AI into everyday interactions, creating a unified ecosystem where tools like Search, Lens, and Gemini operate in sync.
By integrating the Nano Banana model into its most-used apps, Google is blurring the lines between search, creativity, and personal productivity. Users can now imagine, create, and edit visuals without relying on external platforms — a subtle but powerful shift that reinforces Google’s long-term strategy to keep AI creation native, intuitive, and trustworthy within its products.