
India’s Misinformation Crisis: Deepfakes, bots, and broken trust

India faces a growing threat from digital misinformation, driven by deepfakes, bot factories, and poor media literacy. Urgent action is needed through regulation, education, tech innovation, and global cooperation

June 16, 2025 / 11:33 IST

A World Economic Forum report ranks misinformation and fake narratives as the biggest of the 34 major global risks facing India; no other country has this threat at the top of its risk list. The menace of digital misinformation is growing at unprecedented velocity, shaping perceptions, influencing decisions, and moulding societal norms.

The darker undercurrent of deepfake technology and fake news is threatening the very fabric of trust in social media. India, with over 70 crore (700 million) internet users, has much to worry about, as sophisticated bot factories operating across Eastern Europe offer bot campaigns for as little as Rs 16,000. Stopping this pervasive menace is not easy; it requires a multi-pronged approach of tech innovation, legislation, media literacy, and cross-border collaboration.

Deepfake technology employs AI to create highly realistic but manipulated images, audio, and video. News that is deliberately false or misleading spreads like wildfire through social media platforms before fact-checkers can intervene. The consequences? Electoral interference, as seen in India and the US, incitement of violence, defamation, financial scams, and the erosion of trust in democratic institutions. In some cases, deepfake videos have fabricated speeches by public figures, triggering diplomatic crises.

According to the Reuters Institute's Digital News Report and related studies, India accounts for as much as 40% of globally detected fake news instances. Research indicates that misinformation in India revolves mostly around politics, health (notably during COVID-19), and social issues. Platforms such as WhatsApp and Facebook enable its rapid dissemination, and poor media literacy exacerbates the problem further.

The “bot factories” – large-scale operations that generate and amplify false narratives – are central to this menace. With advanced cyber-infrastructure, they sell their services to political parties, business rivals, and malicious entities. Their bots post, share, and comment to artificially inflate content’s popularity, while big data analytics tailors messages to specific audiences for maximum impact.

These factories benefit from the cheap yet sophisticated infrastructure of Eastern Europe, where data centres, VPN services, and fast internet lower costs at scale. Vested parties in countries like India have benefited from peddling fake news. The cumulative effect of all this is the erosion of public trust: a recent Edelman Trust Barometer report finds that an alarming 45% of people worldwide do not trust the information they consume. In India, this scepticism is dampening democratic participation. Misinformation has also polarised societies, incited violence, and undermined governments’ credibility.

The EU has taken concrete steps by developing a comprehensive code of conduct for online platforms. The Digital Services Act (DSA) aims to regulate digital platforms more stringently. European bot factories have been targeted through coordinated law enforcement actions, disrupting operations and reducing the proliferation of artificial content.

To restore truth and trust, a multi-layered remedy is needed.

Strengthen regulatory frameworks: Enact comprehensive legislation that criminalises the malicious creation and dissemination of deepfakes and fake news, with clear penalties and a framework for accountability. India has taken some steps in the right direction, but these need tightening, specifically for bot-generated content and clandestine operations. The EU’s Code of Practice on Disinformation and the US’s proposed laws against AI-generated disinformation could provide guidance.

Enhance media literacy and public awareness: The strongest safeguard against misinformation is a well-informed populace. Governments, academia and civil society must work together to promote media literacy from an early age, with curriculum modules on critical thinking, source verification, and understanding AI-driven content. Awareness campaigns on social media can help. Encourage responsible sharing practices: pause before forwarding content with suspicious language, inconsistent visuals, or unverifiable sources.

Leverage technology for truth: Cutting-edge AI fact-checking tools can analyse and flag suspicious content in real time. Google has invested in automatic detection tools that evaluate video authenticity and identify bot activity. Blockchain promises verifiable content provenance: by timestamping and digitally signing authentic information, genuine content can be distinguished from manipulated copies (a simple sketch of this idea follows below). Deepfake detection, which analyses inconsistencies in visual and audio signals, has also seen significant advances; these tools should be made easily accessible for public use.
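To make the provenance idea concrete, here is a minimal, illustrative sketch in Python of how a publisher might timestamp and digitally sign a piece of content so that anyone can later check it has not been altered. This is not any platform's actual system: the third-party `cryptography` package and the `publish_record`/`verify_record` helpers are assumptions chosen for illustration, and a real deployment would anchor such records in a tamper-evident ledger.

```python
# Illustrative sketch only: timestamp + digital signature for content provenance.
# Assumes the third-party 'cryptography' package (pip install cryptography);
# the helper names below are hypothetical, not any platform's real API.
import hashlib
import json
import time

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def publish_record(private_key: Ed25519PrivateKey, content: bytes) -> dict:
    """Create a signed provenance record for a piece of content."""
    record = {
        "sha256": hashlib.sha256(content).hexdigest(),  # fingerprint of the content
        "timestamp": int(time.time()),                  # when it was published
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = private_key.sign(payload).hex()
    return record


def verify_record(public_key, content: bytes, record: dict) -> bool:
    """Check that the content matches the record and the signature is genuine."""
    if hashlib.sha256(content).hexdigest() != record["sha256"]:
        return False  # content was altered after signing
    payload = json.dumps(
        {"sha256": record["sha256"], "timestamp": record["timestamp"]},
        sort_keys=True,
    ).encode()
    try:
        public_key.verify(bytes.fromhex(record["signature"]), payload)
        return True
    except InvalidSignature:
        return False  # signature does not match: record is forged or tampered with


if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()
    original = b"Official statement released on 16 June 2025."
    record = publish_record(key, original)
    print(verify_record(key.public_key(), original, record))               # True
    print(verify_record(key.public_key(), b"Doctored statement", record))  # False
```

The point of signing only a hash of the content is that the provenance record stays small whether the item is a paragraph of text or a full video, while any alteration of the underlying content still breaks verification.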

International collaboration and data sharing: Since bot attacks originate across borders, a unified global approach is essential. Countries should share data, best practices, and intelligence about bot networks. The UN can facilitate this and create a collective security-style framework against misinformation. Joint programmes should include shared databases of known misinformation sources, coordinated takedown procedures, and standardised legal provisions.

Platform accountability: Social media giants are fertile ground for the spread of fake news and should bear primary responsibility. They must adopt transparent and strict content moderation policies, and resist the kind of dilution X undertook around the US elections. They should disclose how their algorithms function and submit to independent oversight. Implement “verified-user-driven” reporting mechanisms, alongside swift action on verified misinformation. Platforms should also be mandated to flag AI-manipulated content so that users know they are seeing synthetic material.

Further to the above, every stakeholder (governments, the private sector, civil society, and individual users) must be involved in restoring trust in the digital ecosystem. Yes, tech safeguards are vital, but they must be complemented by ethical standards and transparent governance.

Here are some best practices for each stakeholder:

* Governments: Enact robust anti-misinformation laws, promote digital media literacy, and facilitate international cooperation.

* Civil Society & Academia: Conduct awareness campaigns, develop educational tools, and empower citizens to critically evaluate information.

* Tech Platforms: Ensure transparency, invest in detection algorithms, and implement clear accountability frameworks.

* Individuals: Exercise scepticism, verify sources, and refrain from sharing unverified content.

The menace of deepfakes isn’t just a problem for India but an epidemic. As the (un)intended world capital of digital deception, India isn’t just breaking the internet, it is bending reality. Urgent, coordinated action is needed to quarantine the virus of misinformation. If we don’t fact-check the future, we’ll be fast-tracked to fiction. In other words, in the meme economy, if truth had a Tinder profile, humanity would keep swiping left. Isn’t it time we matched with reality?

Dr M Muneer is a global expert columnist and managing director of CustomerLab Solutions, an innovative consulting firm delivering measurable results to clients.

