Google is testing a macOS version of Gemini, bringing its AI assistant in line with the desktop apps for ChatGPT and Claude, with new features that could let it understand what’s happening on your screen.
Google is expanding Gemini-powered features in Chrome to India, Canada and New Zealand, adding support for 50 additional languages while bringing its AI sidebar, image generation and app integrations to more users.
Google has unveiled Gemini 3.1 Flash-Lite, its fastest and most cost-efficient Gemini 3 model yet, targeting developers who need high-volume AI workloads delivered quickly, reliably and at significantly lower cost.
Google has begun rolling out the ‘Past Chats’ feature to free users of the Gemini app globally. The update allows Gemini to reference previous conversations for more personalised responses, reducing the need to repeat context. Europe will receive the feature later.
The new model replaces Nano Banana Pro across most Gemini experiences while promising sharper visuals, improved instruction following and production-ready controls.
Originally launched in July 2025, ProducerAI allowed users to collaborate with an AI agent to generate tracks, workshop lyrics and remix music from text prompts. Until now, it relied on its own models. By joining Google Labs, the platform gains access to a far larger AI toolkit.
Alphabet folds former X moonshot into Google as it leans on Gemini and DeepMind to compete in industrial automation
Gemini’s automation feature signals that Google wants Android to be a frontline platform for this shift, embedding AI agents directly into the operating system experience rather than keeping them confined to a chat window.
Google has apologised after an AI-generated news alert included a racial slur in a push notification sent to users. The alert linked to coverage of a recent BAFTA Film Awards incident and was quickly removed following backlash online.
The global race to build bigger AI models is running into a hard physical limit: memory chips. As demand from companies such as Google, Meta and OpenAI surges, supply remains tight, pushing up costs and slowing research.
The generative AI rush created a startup gold rush almost overnight. But as the hype cools, some once-hot business models are starting to look fragile. According to a senior Google executive, LLM wrappers and AI aggregators now have their “check engine light” on.
The new model builds on recent upgrades to Gemini’s core intelligence and marks a shift in Google’s update cycle, replacing the usual mid-year “.5” release pattern.
With images, video, text, and now music all living inside Gemini, Google is steadily building an all-in-one creative AI system. Google is positioning Lyria 3 as a lightweight, creative companion rather than a professional music replacement.
Google is rolling out a new Gemini-powered feature in Google Docs that turns long documents into short, listenable audio summaries. Designed for multitaskers and accessibility use cases, the update lets users consume reports, manuals and meeting notes in podcast-like form, without reading a single line.
Google has updated the Deep Think mode in its Gemini 3 model, extending its multimodal reasoning capabilities into practical engineering workflows. The latest upgrade enables Gemini to turn sketches, images, and real-world objects into 3D-printable models, signalling a significant shift from theoretical AI assistance to real-world fabrication.
Google says its latest Deep Think upgrade is designed to tackle research-grade problems in maths, science, and engineering, with access expanding to the Gemini app and API.
Google is reportedly developing new agentic features that let Gemini act directly on common everyday tasks, such as booking rides or placing online orders, without requiring input from the user.
The milestone highlights how quickly Gemini has scaled as Google pushes AI deeper into Search and consumer products. While it still trails ChatGPT, the growth underscores Google’s broader bet on AI as a core revenue driver.
Google has started rolling out Project Genie, an experimental feature powered by its Genie 3 world model, to AI Ultra subscribers in the US. The tool lets users generate short, playable, photorealistic worlds from text descriptions, highlighting Google DeepMind’s broader push towards general-purpose world models for AGI research.
Google is expanding NotebookLM’s capabilities on Android and iOS with Video Overviews support and more granular controls for Infographics. The update also hints at upcoming slide deck customisation options that are not yet fully live.
Google has rolled out its more affordable AI Plus plan to all markets where its AI subscriptions are available, including the United States.
Google has detailed what users actually get with its new Gemini AI Plus plan in the US, outlining daily prompt caps, context limits, and feature access, while also rolling out NotebookLM integration for iPhone and Workspace users.
Google has introduced Agentic Vision for Gemini 3 Flash, a new capability that improves how the model understands and responds to image-based prompts.
The European Commission has launched specification proceedings under the Digital Markets Act to clarify how Google must give rival AI services and search engines fair access to Android features and Google Search data, warning that non-compliance could eventually lead to heavy fines.
Google is rolling out a new Search experience that lets users move directly from AI Overviews into AI Mode conversations. The update also makes Gemini 3 the default model powering AI Overviews worldwide, signalling Google’s push to turn Search into a more interactive, AI-driven product.