When MeitY released the draft amendments to the IT Rules this week, it sent a strong signal: the government is taking the threat of deepfakes seriously. The draft rules mark India’s first direct attempt to regulate synthetically generated information, meaning content that looks or sounds real but is created or altered by algorithms. From doctored videos of the Finance Minister falsely promoting investment apps to AI-cloned pornographic clips of influencers, India has recently seen a surge in high-risk deepfakes.
These draft amendments arrive amid growing public anxiety, judicial scrutiny, and electoral sensitivity. Courts have intervened on behalf of celebrities and influencers whose likenesses were exploited in AI-generated videos. The Election Commission has also warned political parties against deploying synthetic media ahead of the upcoming Bihar elections. Parliamentary committees have called for mandatory watermarking and labelling of AI-generated content. In short, deepfakes have moved from novelty to national concern. The question is no longer whether synthetic media should be regulated, but how.
A Step in the Right Direction
To its credit, the government’s draft rules acknowledge that trust in the information ecosystem is foundational to democracy and markets alike. If citizens cannot distinguish between truth and fabrication, the entire architecture of communication begins to erode. The amendments’ insistence on labelling is therefore a step in the right direction. By requiring intermediaries to embed visible or audible identifiers on synthetically generated content, and by directing significant social media intermediaries (SSMIs) to verify and label such content before publication, the rules attempt to restore a baseline of authenticity.
Well Meaning, But with Significant Misses
Yet, while the intent is laudable, the execution is overbroad. First, the definition of ‘synthetically generated information’ is so broad that it could capture everything from profile-picture edits to AI-assisted text. By covering all algorithmically generated or altered information, the rules risk making labelling so ubiquitous that it loses its meaning. Not every use of AI is harmful, and not every filter or edit needs a warning. A more calibrated definition would better balance creativity with accountability.
Second, the 10% labelling requirement for visual and audio content is too prescriptive. Mandating that labels cover 10% of screen space or play through the first 10% of audio may make sense for certain media, but it ignores context. For example, an advertisement with a mandatory “synthetic content” announcement in its opening seconds could lose narrative coherence; similarly, creative works or innocuous photo edits could become visually cluttered. It may be better instead to define outcomes, such as requiring labels to be ‘prominent and clearly identifiable’, and let intermediaries decide how to achieve them.
Third, the user declaration requirement could quickly devolve into compliance fatigue. Requiring every uploader to declare whether their content is synthetically generated may work on professional video platforms, but what about social media platforms like WhatsApp, where billions of messages and edited images circulate daily? The sheer volume of content makes recurring declarations impractical. If compliance becomes mechanical, users will stop paying attention.
Fourth, the rules’ reliance on automated verification tools overlooks their present limitations. Even the best AI-detection systems today yield both false positives and false negatives, flagging harmless AI-assisted text while missing sophisticated deepfakes. Combined with an overbroad definition, this could result in over-labelling benign content and under-detecting genuinely harmful material.
Globally, voluntary standards such as the Coalition for Content Provenance and Authenticity (C2PA) are emerging to verify content origins through cryptographic signatures and metadata. India’s amendments could serve as a useful nudge for domestic players to align with these standards and invest more in labelling. But for such efforts to succeed, regulatory space for experimentation is essential. Overly prescriptive rules risk locking the ecosystem into compliance mode rather than fostering innovation in provenance technology.
The Safe Harbour Conundrum
Moreover, the draft’s tendency to tie every new obligation to safe harbour continues a worrying pattern. Safe harbour was never meant to be a compliance lever; it was meant to shield intermediaries from liability if they acted responsibly. Frequent amendments linking compliance lapses to loss of immunity not only distort that intent but also signal that the IT Act itself is due for reimagination, a need further underscored by the inclusion of AI content creation tools within the intermediary framework. Unlike traditional intermediaries that host content, AI systems generate it, blurring the line between creator and facilitator. With even global consensus on this issue still unsettled, India’s decision to fold AI tools into the intermediary category risks creating confusion.
Labelling Won’t Solve the Problem
Finally, the draft’s deepest limitation is the assumption that labelling equals safety. Labelling is certainly necessary, but it is also insufficient. Deepfakes are dangerous not because they exist, but because, when created with malicious intent, they spread faster than truth can catch up. Unless platform incentives are redesigned to slow the circulation of harmful content, labelling alone will remain a half measure. India’s policy imagination must now move beyond detection to designing information systems that prize accuracy over attention.
The draft amendments are a valuable step toward a more trustworthy digital ecosystem, but as India stands at the frontier of AI governance, we need to broaden our regulatory toolkit. Trust cannot be watermarked into existence; it must be built through systems that reward truth over virality.
(Rohit Kumar is the Founding Partner at the public policy firm The Quantum Hub - TQH.)
Views are personal, and do not represent the stand of this publication.
