
The government will decide on a case-by-case basis whether punitive action, such as withdrawal of safe harbour protection, for deepfake or unlawful AI-generated content should apply only to the AI application involved or extend to the larger platform hosting it, a source familiar with the matter said.
The question arises from the newly notified rules on synthetically generated information (SGI) and how they would apply to AI tools hosted on large platforms such as X, Google or Meta.
On February 11, the IT ministry notified amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, which, among other changes, cut the deadline to remove such content from 36 hours to three hours.
Safe harbour under Section 79 of the IT Act protects online platforms from legal liability for user-generated content, and its loss means an intermediary can be held directly responsible for unlawful material carried or generated on its service.
The source said liability would depend on the role played by each entity in carrying or failing to correct unlawful content.
"If the AI application is the primary event and the larger platform continues to carry it and doesn’t correct something, then the AI platform is the primary entity and the larger platform is the secondary entity," the same source attributed above, said.
The official added that outcomes would vary based on "the facts and circumstances in each particular case."
Under the amended IT Rules, intermediaries retain safe harbour under Section 79 of the IT Act only if they comply with due diligence obligations and act on court or government orders to remove unlawful content.
The new amendments add another layer to this duty by requiring platforms to identify and label synthetically generated information and ensure that such content does not misrepresent real persons or events in a manner likely to deceive.
The official said this does not mean labelling provides immunity.
"Labelling of synthetically generated content does not give you a licence to post content of this nature. You can’t produce unlawful content, put a label and say, 'here, take that'."
On whether failure to label or failure to meet takedown timelines could trigger loss of safe harbour, the official clarified that both fall within the broader due diligence framework.
Crucially, the official rejected the idea that safe harbour would be removed mechanically or across the board.
"The removal of safe harbour will go through a legal process. The case will have to be examined and it has to be decided whether they have violated the law and if they are still entitled to safe harbour or not,” the sources said.
The comments come amid growing scrutiny of AI tools embedded within major social media platforms and questions over whether misconduct by such tools could expose entire platforms to liability.