Leading technology industry associations, including the Internet and Mobile Association of India (IAMAI), Nasscom, the Business Software Alliance (BSA), and the ITI Council (ITIC), have urged the Ministry of Electronics and Information Technology (MeitY) to narrow the scope of its proposed deepfake regulations under the Information Technology Rules, 2021.
They warned that the draft framework, aimed at tackling AI-generated misinformation and “synthetically generated information,” risks being overly broad, capturing harmless edits, and creating compliance burdens that may be difficult for intermediaries to meet.
MeitY deepfake rules
The draft amendments released by the Ministry of Electronics and Information Technology (MeitY) on October 22 define “synthetically generated information” (SGI) as any information “artificially or algorithmically created, generated, modified..." Intermediaries that host such content would be required to label or identify it, either through visible disclosures or embedded metadata.
They must also “make reasonable efforts to ensure that users are informed that the information has been synthetically generated.”
IAMAI, in its yet-to-be-submitted draft response, a copy of which Moneycontrol has reviewed, said the proposed requirements “may be impracticable and could create significant compliance costs for intermediaries, while also impacting users’ privacy and freedom of expression.”
It warned that “ordinary edits like AI-assisted grammar correction, image enhancement, or sound mixing could be captured under the broad definition of synthetically generated information.”
The association also argued that the draft “focuses excessively on the means of content creation rather than its intent or impact,” and urged MeitY to “adopt a harm-based, risk-tiered approach” so that obligations are triggered only where “synthetic content poses a reasonable likelihood of causing material harm.”
Nasscom’s submission dated November 8 echoed this view, stating that “almost all forms of digital content could risk being classified as ‘synthetically generated information’ and consequently all such content could be subject to labelling obligations.”
It said that the definition, as drafted, “does not distinguish between editing that carries no risk of deception and fabrication that is intended to mislead.” To address this, Nasscom recommended that the term “synthetically generated information” be replaced with “deceptive synthetic content,” defined as content “reasonably likely, in the context of its presentation and distribution, to materially mislead a person into believing false or manipulated information.”
The BSA — representing companies such as Adobe, Microsoft, and OpenAI — also urged MeitY to avoid “a one-size-fits-all approach to addressing SGI risks.”
It said the draft “should specify that upstream entities in the AI value chain are not responsible for transparency obligations that should be borne by companies providing AI tools and services to downstream users.”
The alliance further warned that visible labelling mandates “foreclose the use of globally supported metadata-based transparency methods,” calling instead for “flexible and appropriate technical solutions” like machine-readable provenance standards.
The ITIC, whose members include firms like Google, Apple, Microsoft, IBM and Amazon, recommended narrowing the definition of SGI to cover only audio or visual content, or content that is altered "by AI with a likelihood to deceive or mislead and cause material harm".
"As drafted, the amendments could capture a much broader range of routine AI-supported editing activities, such as brightness or contrast adjustments, sharpening, cropping, captioning, or background noise removal. To avoid imposing unnecessary regulations on creative and standard editing activities, ITI recommends introducing a clear exclusion for non-substantial alteration or assistive functions," their submission added.
Across all four submissions, a common theme emerges: industry bodies broadly support MeitY’s intent to curb the misuse of deepfakes and synthetic media but caution that the draft’s framing could “fragment the current due diligence framework” and “divert resources from genuinely harmful content.”
The draft amendments to the rules also say that significant social media intermediaries (SSMIs) would have an added responsibility to verify synthetic content, with the explanatory note stating that the objective is to ensure “users are able to easily distinguish between content that is AI-generated and content that is not.”
MeitY said the proposed rules seek to strengthen due diligence obligations for platforms “that enable the creation or modification of synthetically generated content” to make the internet “safe, trusted and accountable.”
Update, November 13, 11.37 am: Added ITI Council's submission on the draft amendment to IT Rules