
YouTube has announced an expansion of its AI-powered likeness detection tool to a pilot group of civic leaders, journalists and political candidates. The company said the move is aimed at helping individuals who are often at the centre of public debate identify and manage AI-generated impersonations on the platform.
The update builds on a feature introduced last year for creators in the YouTube Partner Program, designed to detect AI-generated videos that replicate a person’s face or likeness.
How the deepfake detection tool works
The likeness detection system works much like YouTube’s Content ID technology, but instead of identifying copyrighted material, it scans uploaded content for visual matches to a person’s appearance.
If the system detects AI-generated content that includes a participant’s likeness, the individual can review the video and request its removal if it violates YouTube’s privacy guidelines. This process is intended to help public figures address cases where AI is used to create impersonations or misleading content.
YouTube clarified that detection alone does not guarantee removal. Requests are reviewed under existing policies, and the company considers factors such as whether the content falls under parody, satire or other forms of public interest expression.
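YouTube has not published implementation details for the system, but likeness matching of this kind is commonly built on face embeddings: each detected face is mapped to a numeric vector, and vectors that sit close together (by cosine similarity) are treated as the same person. The sketch below is purely illustrative; the function names, threshold value, and pipeline shape are assumptions, not YouTube's actual method.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two embedding vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def flag_likeness_matches(frame_embeddings, participant_embedding, threshold=0.85):
    """Hypothetical matcher: return indices of frames whose face embedding
    is similar enough to the enrolled participant's reference embedding.
    In a real system the embeddings would come from a trained face-recognition
    model; here they are just raw vectors, and 0.85 is an arbitrary cutoff."""
    return [
        i for i, emb in enumerate(frame_embeddings)
        if cosine_similarity(emb, participant_embedding) >= threshold
    ]

# Toy usage with made-up 3-dimensional "embeddings":
reference = np.array([1.0, 0.0, 0.0])          # enrolled participant
frames = [
    np.array([0.99, 0.10, 0.0]),               # near-match -> flagged
    np.array([0.0, 1.0, 0.0]),                 # unrelated face -> ignored
]
print(flag_likeness_matches(frames, reference))  # [0]
```

A flagged index would then feed the human review step described above, where the participant decides whether to request removal under the privacy guidelines.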
Focus on protecting public discourse
The company said the expansion is aimed at people who frequently appear in news coverage and public discussions. Journalists, elected officials and political candidates may face higher risks of AI-generated impersonation, particularly during election cycles or major public events.
By offering the detection tool to this group first, YouTube plans to evaluate how the system works in real-world scenarios and refine the process before expanding access more broadly.
Participants who enrol in the programme are required to verify their identity before they can use the likeness detection tool. YouTube said the information collected during this process will only be used for identity verification and to enable the feature.
The company added that the data will not be used to train Google’s generative AI models.
YouTube said it plans to expand access to the tool over the coming months as the pilot programme progresses. The company is also supporting regulatory efforts such as the proposed NO FAKES Act, which aims to establish legal protections against unauthorised digital replicas.
According to YouTube, combining detection tools with legal frameworks will help address the growing challenges posed by AI-generated impersonations while preserving space for legitimate expression online.