The Ministry of Electronics and Information Technology's newly notified amendments to the IT Rules, 2021, have introduced a three-hour deadline for intermediaries to remove or disable access to unlawful content upon receiving an order from a court or the government — a steep reduction from the earlier 36-hour window.
Notably, this compressed timeline applies across all categories of unlawful content, including synthetically generated information (SGI) such as deepfakes, rather than being confined to high-risk or narrowly defined cases.
Legal experts say this change could fundamentally alter how platforms moderate content.
"The reduction of takedown timelines from thirty-six hours to just three hours is one of the most consequential changes in the amendment. This was not part of the earlier consultation draft and it applies across all categories of content, without any risk-based gradation," Aman Taneja, partner at Ikigai Law, told Moneycontrol.
"In practice, determining illegality is often context-dependent and rarely lends itself to an immediate, bright-line assessment. Requiring platforms to act within three hours will demand a significant operational overhaul and a substantial compliance lift at scale, and it increases the likelihood of precautionary removals, raising real concerns about over-censorship," Taneja added.
Policy researchers also warned that the three-hour window leaves little room to distinguish between harmful deepfakes and lawful speech such as parody or satire.
Meghna Bal, director at the Esya Centre, said, "This basically leaves companies with no real time to discern whether something is actually SGI or not, meaning even non-SGI content could be taken down. It also seems that any SGI parody or satire is not exempt. The larger problem, of course, is that these rules presume that SGI is inherently bad, even though most SGI is benign."
Rohit Kumar, founding partner at public policy firm The Quantum Hub (TQH), said, "The significantly compressed grievance timelines, such as the two- to three-hour takedown windows, will materially raise compliance burdens and merit close scrutiny, particularly given that non-compliance is linked to the loss of safe harbour protections."
The final rules also narrow the definition of SGI compared to the consultation draft. Instead of covering all content that is "artificially or algorithmically created, generated, modified or altered," the notified version focuses on synthetic content that "reasonably appears to be authentic or true" and is likely to mislead users.
The amendments also relax earlier proposals around blanket visible labelling. Platforms are now required to make “reasonable efforts” to ensure that at least 10 per cent of SGI content is labelled or identified using appropriate technical means such as metadata or identifiers, rather than mandating universal visible watermarks.
Bal added that the rules could pose risks to satire and contextual speech, citing historical examples of parody being mistaken for fact.
"The amended IT Rules mark a more calibrated approach to regulating AI-generated deepfakes. By narrowing the definition of synthetically generated information, easing overly prescriptive labelling requirements, and exempting legitimate uses like accessibility, the government has responded to key industry concerns, while still signalling a clear intent to tighten platform accountability," Kumar from TQH added.
While the notified framework signals the government's intent to tighten platform accountability around deepfakes and synthetic media, experts say its success will depend on how the three-hour takedown mandate is implemented in practice.