
How to solve the problem of deepfake pornography? Code Dependent author Madhumita Murgia says regulation is key

'Code Dependent' author Madhumita Murgia explains that regulation will have to be a big part of how we respond to deepfake photos and videos globally. Plus, the responsibility must be shared between governments and tech companies - with the latter holding much of the data, infrastructure and knowhow around AI systems.

July 08, 2024 / 18:04 IST
Madhumita Murgia is AI editor at the Financial Times. Her first book, 'Code Dependent', looks at examples of AI's uses and harms around the world, in segments from healthcare to deepfake videos. (Image via Instagram/Madhumita Murgia)

"The entire promise of AI (artificial intelligence) is that it is going to be superior to humans, right? The point of building these systems is that we need to augment our own intelligence or that it is able to correct flaws in human decisions," Madhumita Murgia says over a video call. Her book 'Code Dependent' was shortlist for the Women's Prize for Non-fiction.

Murgia, the AI editor at 'Financial Times', drew on her experience of reporting on tech for 'Code Dependent', her first book. In the video interview from London, where she is now based, Murgia spoke about the problem of deepfakes, why AI isn't inherently a force for good or evil, and how AI is just as dependent on humans. Edited excerpts:

Why is the book called Code Dependent?

It's a pun on co-dependency. We think a lot about how our lives are going to be changed by AI systems, how we are becoming more dependent on technology, whether it is from the social media era to now as it is getting more and more automated, as AI replaces human decision-making and human creativity, as we are seeing with generative AI. But I have also found, in over a decade of reporting on this, that these systems have baked-in biases, baked-in perspectives on the world which are really designed by the people who build them (the AI systems), and often those people come from a very small pocket, a bubble, which is essentially California, and in particular San Francisco and Silicon Valley. So these AI systems only reflect the views of some of us. So just as we are dependent on AI, AI is also fully dependent on us in order to be trained, in order to be more broadly reflective. I wanted to show that through the examples and stories in the book. I also wanted it to be global, to show that the technology is ubiquitous and how it is affecting people all over the world.

You have stories in the book from India and Kenya...

I wanted to look at as many regions as possible. Of course, I wanted to look at India, being from India. I would say that that was one of the brighter spots for me in the book; the most positive example of how AI can play a role, and in that case, it was in healthcare.

Each chapter is focused on a wider theme. I had criminal justice and predicting crimes, and that one is based in Amsterdam. I look at data labelling and data labour - which is the labour that is hired to train AI systems behind the scenes - and for that I travelled to Bulgaria, Buenos Aires and Kenya.

With the healthcare chapter in India, it was really looking at the story of a doctor based in Nandurbar district, on the border of Maharashtra and Gujarat, who was helping to train a TB diagnostic AI system on the population of patients there. There you can see that if you can have a really accurate AI system that can bridge these gaps, then you can reach people whom you would otherwise not be able to reach.

You have this playing out in the US as well, where you have African American populations without access to healthcare. So this can be a life-changing use of this technology.

Another example is how AI was used to track and identify the Uighur Muslim population in China, and how a human rights activist brought this to light.

Each of these (examples) is about people who have either been victims of AI systems or have helped to implement and design them, and in some cases have started to fight back against the harms.

Nobel laureate Daniel Kahneman, who died earlier this year, said there may be some merit in letting algorithms dispense justice in courts to cut out the noise—including how a hot day can affect the outcome of a trial—that is typically a factor when humans make judgments. What's your take on that?

The entire promise of AI (artificial intelligence) is that it is going to be superior to humans, right? The point of building these systems is that we either augment our own intelligence or that it is able to correct course in human decisions. Daniel Kahneman was a pioneer in explaining human decision-making. Of course, we understand that humans are biased; we have a proclivity towards those who look like us or are like us compared to an 'other'. And AI is supposed to be more objective, and I can see how that would help in diagnostic healthcare, because you have a set of conditions that need to be taken into account irrespective of whether the doctor is tired that day. (But) there are situations where there isn't a statistically correct answer. When it's not just black and white. Say, you're trying to judge whether a minor is guilty. There are so many aspects that social workers, lawyers and judges bring to these decisions. So, yes, bias is a part of that but, at least at a human level, we all understand what human bias looks like because we have all experienced it. We can account for it and design for it. The issue with AI systems is that they are far from perfect; they are predictive statistical systems that are looking for probability. Of course you are going to have errors of judgement. But the difference in errors or biases among humans and AI is that on the AI front, you can scale that up to millions of people at once. It's not just one judge or his constituency.

You see that with healthcare, too. I give the example of a flawed (AI-driven) healthcare system that was deciding who should get extra chronic care. And that system was making errors and was biased against people of colour, African Americans in the US. And it didn't just affect one hospital; it was a system that was used, I think, on 70 million Americans and many millions of Europeans too.

The danger with AI systems is that we don't understand the biases because they don't come from a human behavioural or human cultural context, and secondly, they are scaled up way faster and at a much larger scale compared to humans. So any errors or harms that come out of them, it affects many more people so much more quickly. That is what we need to balance.

Do you have a solution for deepfake photos and deepfake videos?

Deepfakes are something that has been evolving for a few years, but particularly now with generative AI being so much more sophisticated at creating pictures and videos—we've seen this with systems like DALL-E, Midjourney, Google's Gemini, and Sora, which comes from OpenAI and can create videos—they are so accurate that they are largely indistinguishable from reality.

(In the book) I speak in particular to two ordinary women. One was a student; an Indian immigrant in Australia, an undergraduate. And the other was in Northern England, the mother of a young child, who had no idea this was happening. They both discovered deepfake pornography of themselves on the Internet. The reason I focused on these two women is that the data shows that about 98 percent of deepfakes online are pornographic. And of that, overwhelmingly, that same percentage—98-99 percent—is of women. So these are the victims, and this plays out again and again: the harms are actually felt by those who are marginalized.

So, in the criminal justice predictions, the harms are often felt in the immigrant community—in Amsterdam, it is often people who come from North Africa. In terms of gig workers—Uber, Swiggy, these types of apps—it's migrant workers, who are already precarious, who are likely to be harmed by the errors and biases of these systems. So it plays out again and again, and in the case of deepfakes, it's not just women who are celebrities, women who are speaking out—although it's them as well—but also ordinary women who don't have agency to change these circumstances because regulation is so weak globally.

One of the solutions that I talk about in the book is how we strengthen our voice, our agency as individuals, and regulation has to be a big part of it. We have to be able to hold someone to account when AI systems create harm or go wrong. Whether that is deepfake pornography, whether that is misinformation around election time, or a wrong diagnosis that kills someone, or something that puts someone in jail who shouldn't be there. In any of these examples, somebody has to hold the systems to account because they are such opaque black boxes, and to do that, we need governments to ask these companies to be accountable for what their systems create.

Where does the responsibility lie?

This is a huge challenge. We saw this with the last big shift we had online, which was the shift to mobile. It sparked the growth of the entire app ecosystem, and that included the growth of social media where, today, billions of us access Facebook, Instagram, TikTok, Snapchat and so on. And we have struggled to regulate those platforms here in the UK, in the US, all over the world. I think regulators came to it too late, at a point when it was already so embedded with users around the world, and these companies did not want to take responsibility for any of the harms that were precipitated by their platforms. And we have seen that in India, in Myanmar, and elsewhere.

We are seeing the next evolution, the next revolution even, in our online lives into an AI reality. The responsibility has to be multipronged - not with any one company or government.

We know that the power lies very much with the companies today; they are the only ones that have the data, the infrastructure, the actual chips to build these systems, the talent and knowhow to build them, and the knowledge of what even goes inside these systems.

Chanpreet Khurana Features and weekend editor, Moneycontrol