India, like many other countries, is grappling with the evolution of artificial intelligence (AI) and its implications for public trust, creativity, and democratic discourse. In the past few months, regulatory efforts by the government, whether pertaining to synthetic media or AI-facilitated harms against women, signal a growing recognition of the genuine challenges posed by deepfakes, misinformation, and deceptive AI-generated content. More recently, the government notified amendments to the IT Rules, 2021, introducing regulations for Synthetically Generated Information (SGI), including labelling, provenance requirements, and accelerated compliance timelines, marking one of the more consequential policy interventions to emerge in 2026.
The notified framework is rooted in the principles of user safety and platform responsibility and provides a legal basis for labelling, traceability, and accountability related to SGI. At the same time, its approach is largely confined to intermediary liability, primarily operationalising obligations at the level of intermediaries and significant social media platforms. This is also an opportunity to reflect on whether this lens alone is sufficient to respond to the technical, economic, and institutional realities associated with generative AI systems.
While regulatory approaches will continue to evolve, this moment calls for a broader conversation that looks beyond any single instrument and instead asks what effective AI governance should entail in the Indian context, especially when it comes to safety and information integrity.
Misaligned Liability in a Fragmented AI Value Chain
Traditional internet regulation was built around relatively clear roles: users created content, platforms hosted it, and the law governed both. Generative AI complicates this model. A single piece of synthetic content today may involve a foundation model developed by a research lab, a deployment layer built by a startup, a tool used by an individual creator, and dissemination through multiple platforms. Each actor in this chain plays a distinct role, yet none has complete oversight over the entire lifecycle of the content. This shift has also been acknowledged in the Principal Scientific Advisor’s report on India’s AI Governance Guidelines, which noted that the existing legal categories of intermediary and publisher under the IT Act do not adequately reflect the structure of modern AI systems.
A more resilient governance approach would recognise differentiated responsibility: AI developers may be best placed to implement safety-by-design measures and provenance technologies, while creators and publishers may carry disclosure obligations where synthetic content intersects with elections, health, or impersonation. In turn, this would allow platforms to focus on enabling transparency tools and responding to substantiated complaints.
How Responsibility Assignment Shapes Institutional Responses
The framing of obligations under the law does more than assign responsibility to platforms; it actively shapes institutional responses. When obligations are framed broadly and uncertainty is high, platforms respond by prioritising legal risk mitigation over contextual judgment, which often translates into conservative enforcement practices, including expansive labelling, delayed publication, or removal of content that may be lawful but difficult to assess with certainty.
In the context of generative AI, where intent and harm cannot be reliably inferred through automated systems, responsibility design becomes a powerful determinant of how speech is governed in practice. If platforms are expected to proactively assess or verify synthetic content at scale, often within compressed compliance windows, the incentive structure inevitably favours over-compliance. In such cases, the result is not targeted intervention against harmful deepfakes, but a general tightening of content controls across the board.
Understanding this dynamic is essential as our governance choices at this point in time can have consequences for how information circulates online, how platforms moderate speech, and how users experience digital participation. Therefore, it will be important to account not only for formal legal responsibility, but also for the behavioural incentives that our regulations may create.
Protecting Innovation and India’s Creator Economy
India’s creator economy has emerged as a significant driver of digital inclusion, employment, and cultural expression. Millions of Indians rely on digital platforms to create content, educate audiences, and build sustainable livelihoods, often using AI-enabled tools to enhance quality and reduce production costs. AI today supports a wide range of legitimate creative uses: image enhancement, video editing, background generation, audio clean-up, subtitles, and translation. For many small creators and MSMEs, these tools are essential to remaining competitive.
Treating broad categories of synthetic media as presumptively high-risk may impose disproportionate compliance burdens, particularly on smaller creators, and may dilute the effectiveness of safeguards intended to identify serious harms. It is therefore essential that governance frameworks distinguish between AI-assisted creativity and genuinely harmful synthetic content. A risk-based approach would dedicate regulatory attention to deceptive impersonation, manipulated depictions of real events, and synthetic content in sensitive domains, while preserving space for legitimate creative expression and economic growth.
Toward a Coherent Vision of AI Governance
Regulating synthetic content is an important and necessary step, but it cannot, on its own, serve as a complete response to the challenges posed by generative AI. As these technologies become more deeply embedded across society, governance frameworks must be designed not only for today’s risks, but also for the pace and direction of technological change itself. While policies play an essential role in setting guardrails and accountability, their effectiveness ultimately depends on the ability of institutions to implement, interpret, and adapt them in real-world conditions. Where regulatory ambition outpaces operational capability, the result is often uneven enforcement, over-compliance, or unintended restrictions on lawful activity.
A coherent governance vision, therefore, requires layered responses rooted in clear legal standards, risk-based responsibility allocation, interoperable technical systems such as provenance infrastructure, and regulatory institutions equipped with the expertise and flexibility to respond as technologies evolve. This approach recognises that AI risks are dynamic, context-dependent, and often systemic, requiring governance that is equally adaptive.
India has the opportunity to shape an AI governance model that is principled and pragmatic, one that protects citizens and democratic integrity while enabling innovation, creativity, and economic growth. Achieving this balance will require moving beyond singular regulatory instruments toward a durable governance ecosystem that evolves alongside the technologies it seeks to guide.
(Krishna Prasad Tenneti is a member of the Lok Sabha representing TDP.)
(Views are personal, and do not represent the stand of this publication.)