Google has launched a new bug bounty programme focused entirely on artificial intelligence, expanding its long-running Vulnerability Reward Program to cover AI-specific threats. The company hopes the new incentives will help uncover “rogue actions” — AI-triggered exploits that can manipulate devices, leak data, or perform unintended tasks.
In a detailed breakdown, Google defines AI bugs as vulnerabilities that allow large language models or generative systems to cause harm or bypass security. That could include a poisoned prompt making Google Home unlock a door, or a malicious input that tricks an assistant into summarising someone’s email and forwarding it to an attacker’s account. The company cited past flaws such as a manipulated Google Calendar event that could remotely switch off lights or open smart shutters.
Simply making Gemini hallucinate doesn’t qualify. Google clarified that content-related issues — such as generating hate speech or copyright-infringing material — should be reported through in-product feedback instead, so its AI safety teams can retrain the models where needed.
Top-tier findings affecting Google Search, Gemini apps, Gmail, or Drive can earn researchers $20,000, with bonuses for report quality and originality pushing payouts up to $30,000. Lower rewards will apply to issues discovered in products like NotebookLM or the experimental Jules assistant.
The launch coincides with Google’s unveiling of CodeMender, a new AI tool designed to automatically patch security vulnerabilities in open-source software. So far, it has contributed 72 verified fixes after human review.
Google says the programme is part of its broader push to strengthen AI safety and security as its products increasingly depend on generative models. Since 2022, bug hunters have already earned more than $430,000 for exposing AI-related risks across Google’s platforms.