In the era of rapidly advancing technology, we are confronted with a perilous threat that transcends the boundaries of digital deception and breaches the sanctity of individual rights. Deepfakes, once confined to the realm of science fiction, have now emerged as a potent weapon capable of inflicting profound harm.
It is imperative to recognise that deepfakes are not merely a technological novelty but a form of Tech-Facilitated Gender-Based Violence (TFGBV). They are a menace that poses a severe threat to user safety, particularly for women, and a challenge that India cannot afford to overlook.
The Existing Legislative Gap
In India, there is currently no law that explicitly defines deepfakes or prohibits their misuse. Sections 66D and 66E of the IT Act criminalise cheating by personation using a computer resource and the violation of privacy through the capture or publication of private images, respectively, while Sections 67 and 67A penalise publishing or transmitting obscene material and material containing sexually explicit acts in electronic form.
Further, Section 500 of the IPC provides punishment for defamation. However, these provisions chiefly capture the misuse of deepfakes for sexually explicit or defamatory content and, in that sense, offer only a myopic view of the many other domains into which deepfakes can percolate.
Similarly, under the IT Rules of 2021, platforms are obligated to respond promptly to user complaints related to misinformation or privacy breaches, taking action within 72 hours of receiving such complaints.
Additionally, once a platform gains actual knowledge from the government or the courts about objectionable content, it must remove that content within 36 hours. While these provisions can be invoked in deepfake cases, given the associated misinformation and privacy concerns, they too fail to comprehensively address this deep-rooted menace.
Using AI To Unmask Deepfakes
The advent of artificial intelligence (AI) has undeniably amplified the risk posed by deepfakes. AI algorithms, particularly generative models, have ushered in a new era where the creation of hyper-realistic media, almost indistinguishable from authentic images, videos, or audio recordings, has become an attainable feat.
This transformative capability stems from AI's prowess in analysing and synthesising vast datasets, allowing for the generation of content that seamlessly mimics human expressions and voices. However, the implications of AI in the deepfake landscape are not limited to its technical capabilities alone. What truly compounds the issue is the accessibility and affordability of AI algorithms.
These sophisticated tools, once considered the domain of researchers and organisations with substantial resources, have become increasingly attainable. Now, individuals with minimal technical expertise can access and harness these AI algorithms.
This democratisation of technology has lowered the barrier for entry into the realm of deepfake creation, making it a potential tool for not only mischievous hobbyists but also cybercriminals. The consequences of such misuse are manifold, ranging from reputational damage and privacy violations to financial fraud and societal distrust.
As we grapple with the repercussions of this AI-fuelled deepfake landscape, it becomes imperative to prioritise the development and implementation of countermeasures.
Towards this, AI can be employed across diverse domains to thwart this emerging threat. First, in the development phase, AI can be instrumental in crafting robust detection algorithms that meticulously scrutinise media for subtle anomalies, including incongruities in facial expressions, voice patterns, and metadata, effectively unmasking fabricated content.
Second, in the deployment stage, AI can act as a sentinel on social media platforms and video-sharing websites, with automated systems equipped with AI algorithms scanning uploaded content in real-time to detect potential deepfakes. This not only serves as a deterrent but also allows for swift action to contain the spread of deceptive content.
Lastly, at the end-user level, raising awareness is vital. Public campaigns can educate users about the existence of deepfakes and equip them with the ability to discern the telltale signs of manipulation. Simultaneously, user-friendly AI tools can empower individuals to independently verify the authenticity of the content they encounter, providing an additional layer of defence in the hands of the public.
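The deployment-stage moderation described above can be sketched in simplified form. The code below is purely illustrative: the signal names, weights, and threshold are hypothetical stand-ins for what a trained detection model would compute, not any platform's actual system.

```python
from dataclasses import dataclass


@dataclass
class MediaSignals:
    """Hypothetical per-channel anomaly scores, each in the range 0-1."""
    facial_inconsistency: float  # incongruities in facial expressions
    voice_irregularity: float    # unnatural voice patterns
    metadata_mismatch: float     # e.g. editing-tool traces, timestamp gaps


def deepfake_risk(signals: MediaSignals) -> float:
    """Combine per-channel scores into one risk score (illustrative weights)."""
    weights = (0.5, 0.3, 0.2)
    values = (signals.facial_inconsistency,
              signals.voice_irregularity,
              signals.metadata_mismatch)
    return sum(w * v for w, v in zip(weights, values))


def moderate_upload(signals: MediaSignals, threshold: float = 0.6) -> str:
    """Flag uploads whose combined risk crosses the threshold for human review."""
    return "flag_for_review" if deepfake_risk(signals) >= threshold else "publish"


# High anomaly scores route the upload to review; low scores let it through.
print(moderate_upload(MediaSignals(0.9, 0.8, 0.7)))  # flag_for_review
print(moderate_upload(MediaSignals(0.1, 0.0, 0.2)))  # publish
```

In practice the scoring function would be a trained classifier and the threshold would be tuned against false-positive rates, but the flag-then-review pattern is the core of the "sentinel" role described above.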
How To Unite Against The Deepfake Threat
Deepfakes not only violate individual privacy but can also damage reputations, incite harassment, and propagate falsehoods. Accordingly, as we stand on the cusp of enacting a new IT law, it is critical to bolster our legal efforts against them.
It is equally essential to adopt a more granular lens towards addressing this problem, recognising that its impact and consequences extend beyond technological spheres.
While legislation and content removal are essential components of our fight against Tech-Facilitated Gender-Based Violence, we must equally invest in educating users about the existence and dangers of safety threats like deepfakes. By empowering individuals to recognise and protect themselves against these insidious harms, we can make a more tangible difference.
Research is an equally vital pillar of an effective response strategy. Collaborative efforts like the Deepfake Detection Challenge led by prominent tech companies emphasise the importance of pooling resources for technological solutions. It is important that we continue this momentum and invest in deeper research and capacity building to stay ahead of the deepfake curve.
Finally, and most importantly, we must ensure that our responses prioritise the rights and recovery of survivors. A survivor-centric approach isn’t just about legal action or removing content; it is about helping victims heal and confidently reengage online.
In addition to the interventions towards enhancing online safety and content integrity, it is also important to leverage the transformative potential of AI itself to combat the escalating threats of deepfakes. AI, with its potential to both create and detect synthetic media, emerges as a double-edged sword in this digital battleground.
The ongoing development of innovative detection technologies, empowered by AI's analytical prowess, offers a promising path to mitigate the insidious impact of deepfakes. Simultaneously, our unwavering commitment to education, raising awareness about the existence and potential threats of deepfake technology, can empower individuals to navigate this digital minefield with discernment and critical thinking.
And lastly, the ethical considerations that guide our technological advancements underscore the moral imperative of using AI for the greater good, thereby safeguarding the integrity of our digital world. Thus, as we stand on the precipice of this transformative era, the convergence of technology, education, and ethics paves the way for a future where we harness AI as our ally, not our adversary, in the relentless battle against deepfakes. In unity, we can preserve the authenticity of digital media, uphold trust, and ensure that the potential of AI serves the betterment of society.
Shruti Shreya and Jameela Sahiba are Senior Programme Managers at The Dialogue, a think-tank working at the intersection of tech, society and policy. Views are personal and do not represent the stance of this publication.