In an era where digital manipulation is becoming increasingly sophisticated, deepfakes have emerged as a formidable social challenge. Yesterday, India’s revered cricket legend, Sachin Tendulkar, alerted his followers about deepfake videos circulating under his name. This incident, involving one of India’s most respected icons, is another reminder of the need to recognise and address a growing threat that jeopardises public trust and individual reputations.
Addressing these concerns, the IT Ministry has initiated a series of significant steps, from industry consultations to a recent advisory that obliges tech companies to regularly caution users against posting content that violates Rule 3(1)(b) of the IT Rules, 2021, which covers various categories of harmful digital material.
Simultaneously, there are indications that MeitY is contemplating amendments to the IT Rules to strengthen the advisory’s legal standing.
These amendments may include a definition of deepfakes under the IT Rules and lay down requirements such as reminding users every 15 days of the legal penalties under the IPC and the IT Act for posting harmful content, and making all user grievances appealable to the Grievance Appellate Committee.
While the intent behind MeitY’s move is commendable, its practicality and efficacy require careful deliberation. Is enhancing platform liability alone sufficient to tackle the multifaceted problem of deepfakes? Are frequent legal reminders and an overburdened grievance mechanism the most effective ways to tackle the deepfake menace? Would it not be more prudent to focus on enhancing and refining existing systems rather than imposing new, potentially cumbersome regulations?
Granularities Of Proposed Deepfake Regulation
Including a definition of deepfakes under the IT Rules is a critical step towards regulating this technology. However, the definition must be carefully crafted to acknowledge that, while “deepfakes” can be problematic and may refer to malicious content, “synthetic content” – that is, any content created by AI – has genuine applications. The definition must, therefore, focus on prohibiting unfair uses of the technology without stifling innovation and legitimate usage.
Moreover, the effectiveness of constant reminders is questionable. There is a risk of “legal fatigue”: users, overwhelmed by repetitive and intricate legal language, may choose to ignore the warnings altogether. This desensitisation could lead to a paradoxical situation in which increased warnings result in decreased attention and awareness. Research in psychology bears this out – a study titled Warning Fatigue: Insights from the Australian Bushfire Context found that repeated exposure to warnings can lead to diminished attention.
Further, legal scholars and behavioural scientists have studied the effectiveness of legal notices and warnings. For instance, the paper Simplifying Privacy Disclosures – An Experimental Test suggests that complex legal language is often less effective and that simplified disclosures are more likely to be understood by users. A more nuanced approach, perhaps involving simpler language and more engaging methods of communication, might yield better results in terms of user awareness and responsible use of platforms.
Likewise, the proposal to channel all grievances through a platform’s grievance officer is fraught with practical challenges. The current dual-route system – in-app reporting alongside the option to approach a grievance officer – offers flexibility and efficiency. Users should be allowed to choose the most appropriate path based on the nature of their grievance, ensuring quicker resolution of less complex issues through in-app mechanisms and preserving the option to resolve complaints without necessarily approaching the government.
Mandating that all complaints be processed through a grievance officer would lead to an overwhelming influx of cases, many of them minor and easily resolvable through automated processes. A more balanced approach, one that optimises both in-app reporting and grievance-officer channels while ensuring each operates efficiently and effectively, would better serve users by preserving multiple resolution channels.
Beyond Legal Mandates To Practical Response
The deepfake issue is multifaceted, and its solution demands a multidimensional approach that extends beyond legal mandates.
Refining and enhancing existing legal norms and processes is more pragmatic than instituting new regulations that may prove cumbersome. Under the current IT Rules, there is a blanket timeline of 36 hours for content takedown following court or government orders. A more nuanced approach would involve grading this timeline based on the severity of the content.
For critical issues like deepfakes, a shorter time frame for action may be considered in specific contexts – such as elections, where timely action is critical – while less severe offences could be allotted a more extended period. This gradation would allow intermediaries to prioritise their responses more effectively, leading to more efficient and targeted action where it is most needed. Similarly, the IT Rules mandate that platforms provide assistance to law enforcement authorities (LEAs) within 72 hours.
This mandate could be better operationalised through the establishment of a standardised format for LEA assistance requests. By setting a standard operating procedure and clearly defining which officials are authorised to request information from intermediaries, investigations into deepfake-related cyber crimes could be made swifter and more effective.
It is equally vital to invest in technical research that yields nuanced solutions for the effective prevention and detection of deepfakes. It is also paramount to envision greater guardrails and scrutiny for smaller third-party apps that allow the use of AI software without adequate checks and precautions.
By balancing legal measures with effective redressal systems, technical innovation, and proportionate oversight of all kinds of AI-based service providers, we can create a more resilient digital environment. This balanced strategy respects the complexity of emerging technologies like deepfakes, ensuring that our response is as sophisticated and adaptable as the challenges we face.
Shruti Shreya is Senior Programme Manager, Platform Regulation & Gender and Tech, The Dialogue. Views are personal, and do not represent the stance of this publication.