
Google says state-backed hacking groups are actively using its Gemini AI to speed up real-world cyberattacks, moving far beyond basic phishing emails and spam campaigns. In a new report, the Google Threat Intelligence Group describes attackers linked to China, Iran, North Korea, and Russia using Gemini across multiple stages of cyber operations.
According to Google, Gemini has been observed assisting with early-stage target research, social engineering copy, translation, coding help, vulnerability testing, and even debugging when tools fail mid-intrusion. None of this represents a new class of attack. What has changed is the pace.
AI as an accelerator, not a breakthrough
Google is careful not to oversell the threat. The report frames AI use as acceleration rather than transformation. Threat actors already conduct reconnaissance, write lures, modify malware, and troubleshoot broken exploits. Gemini simply reduces friction in that workflow.
In one example, Google describes China-linked activity where an operator adopted a cybersecurity expert persona and prompted Gemini to automate vulnerability analysis and generate targeted testing plans within a fictional scenario. In other cases, China-based actors repeatedly relied on Gemini for debugging, research, and technical guidance tied directly to intrusions.
The danger goes beyond phishing
The most concerning shift is timing. Faster targeting and tooling compress the window between early indicators and actual compromise. That leaves security teams with less opportunity to spot delays, inconsistencies, or human error that often surface in logs during manual operations.
Google also flags a separate but related risk: model extraction and knowledge distillation. In these cases, actors with legitimate API access issue large volumes of prompts to reverse-engineer how Gemini reasons, then use that insight to train competing models. One documented attempt involved more than 100,000 prompts focused on reproducing Gemini’s performance in non-English tasks.
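The extraction pattern Google describes can be sketched in outline: an actor with legitimate API access queries the model at scale, collects the prompt-response pairs, and trains a smaller "student" model on them. The sketch below is purely illustrative — `query_teacher` is a hypothetical stand-in, not a real API, and the "student" here merely memorises pairs rather than fitting a model:

```python
# Minimal sketch of knowledge distillation via API queries.
# All names are hypothetical; a real attempt would query a hosted
# model endpoint at scale -- which is exactly the volume defenders
# can watch for.

def query_teacher(prompt: str) -> str:
    """Stand-in for a hosted model endpoint (toy behaviour)."""
    return prompt.upper()

def distill(prompts):
    """Collect prompt/response pairs as a training set for a 'student'."""
    dataset = [(p, query_teacher(p)) for p in prompts]
    # A real distillation step would fit a smaller model on this data;
    # here the 'student' simply memorises the pairs.
    return dict(dataset)

student = distill(["hello", "world"])
```

The key point for defenders is not the training step but the query volume it requires: reproducing a model's behaviour across a task domain demands tens or hundreds of thousands of prompts, as in the attempt Google documents.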
Google characterises this as intellectual property abuse with broader security implications if scaled.
What it means for you
Google says it has disabled accounts and infrastructure tied to confirmed Gemini abuse and added targeted protections to its model classifiers. It also says it continues to test and refine guardrails.
For security teams, the message is pragmatic rather than alarmist. Expect AI-assisted attacks to move faster, not necessarily smarter. Watch for sudden improvements in social engineering quality, quicker tooling iteration, and unusual API usage patterns. Response plans need to assume attackers can now compress hours or days of work into minutes.
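One of the patterns above — unusual API usage — is also the simplest to operationalise. A minimal sketch of a rolling-window rate check follows; the class name, window, and threshold are illustrative assumptions, not a description of Google's own classifiers:

```python
import time
from collections import deque

class PromptRateMonitor:
    """Flags accounts whose prompt volume in a rolling window exceeds
    a threshold -- the kind of bulk-query pattern associated with
    extraction attempts. Parameters are illustrative."""

    def __init__(self, window_seconds=3600, threshold=1000):
        self.window = window_seconds
        self.threshold = threshold
        self.events = {}  # account_id -> deque of request timestamps

    def record(self, account_id, now=None):
        """Record one request; return True if the account exceeds the limit."""
        now = time.time() if now is None else now
        q = self.events.setdefault(account_id, deque())
        q.append(now)
        # Drop timestamps that have aged out of the rolling window.
        while q and now - q[0] > self.window:
            q.popleft()
        return len(q) > self.threshold
```

In practice this kind of check would feed an alerting pipeline rather than block outright, since legitimate heavy users exist; the point is that extraction-scale querying is loud if anyone is listening.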