Safe-harbour loss will likely be case-specific, not for entire platform

On February 11, MeitY notified amendments to the Information Technology Rules, 2021, which reduced the deadline to remove AI-generated deepfake content from 36 hours to three hours

February 12, 2026 / 14:06 IST
The Ministry of Electronics and Information Technology notified the amendments to IT Rules targeted at addressing concerns over deepfakes
Snapshot
  • Government to decide liability for deepfake content on a case-by-case basis
  • Safe harbour removal will follow legal process, not applied mechanically
  • Platforms must label AI-generated content and act on unlawful material

The government will decide on a case-by-case basis whether punitive action, such as the withdrawal of safe-harbour protection, for deepfake or unlawful AI-generated content should apply only to the application involved or extend to the larger platform hosting it, a source in the know said.

These concerns stem from the newly notified rules on synthetically generated information (SGI) and how they would apply to AI tools hosted on large platforms such as X, Google or Meta.


On February 11, the IT ministry notified amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, which reduced the deadline for removing such content from 36 hours to three hours.

Safe harbour under Section 79 of the IT Act protects online platforms from legal liability for user-generated content, and its loss means an intermediary can be held directly responsible for unlawful material carried or generated on its service.