
OPINION | The Deepfake Rules: A reform in need of nuance

India’s draft amendments to the IT Rules target deepfakes through labelling and verification, a positive step that is nonetheless overbroad and impractical. True digital trust needs smarter definitions, balanced enforcement, and redesigned platform incentives

October 30, 2025 / 09:40 IST
India’s draft IT Rule amendments target deepfakes through labelling and verification

When MeitY released the draft amendments to the IT Rules this week, it sent a strong signal: the government is taking the threat of deepfakes seriously. The draft rules mark India’s first direct attempt to regulate synthetically generated information - content that looks or sounds real but is created or altered by algorithms. From doctored videos of the Finance Minister falsely promoting investment apps to AI-cloned pornographic clips of influencers, India has recently seen a surge in high-risk deepfakes.

These draft amendments arrive amid growing public anxiety, judicial scrutiny, and electoral sensitivity. Courts have intervened on behalf of celebrities and influencers whose likenesses were exploited in AI-generated videos. The Election Commission has also warned political parties against deploying synthetic media ahead of the upcoming Bihar elections. Parliamentary committees have called for mandatory watermarking and labelling of AI-generated content. In short, deepfakes have moved from novelty to national concern. The question is no longer whether synthetic media should be regulated - but how.


A Step in the Right Direction

To its credit, the government’s draft rules acknowledge that trust in the information ecosystem is foundational to democracy and markets alike. If citizens cannot distinguish between truth and fabrication, the entire architecture of communication begins to erode. The amendments’ insistence on labelling is therefore a step in the right direction. By requiring intermediaries to embed visible or audible identifiers on synthetically generated content, and by directing significant social media intermediaries (SSMIs) to verify and label such content before publication, the rules attempt to restore a baseline of authenticity.