
We need to know how AI firms fight deepfakes 

AI companies have unparalleled freedom to conduct their work in secret. But if they want to earn the trust of the public, regulators and civil society, hiring more human reviewers, as Facebook did with its content moderators, wouldn’t be a bad idea. Too much focus on racing to make AI “smarter”, so that fake photos look more realistic, text reads more fluently and cloned voices sound more convincing, threatens to drive us deeper into a hazardous, confusing world.

February 12, 2024 / 18:13 IST
AI and tech companies aren’t investing enough in safety.

When people fret about artificial intelligence, it’s not just because of what they see in the future but what they remember from the past, notably the toxic effects of social media. For years, misinformation and hate speech evaded Facebook’s and Twitter’s policing systems and spread around the globe. Now deepfakes are infiltrating those same platforms, and while Facebook is still responsible for how such content gets distributed, the AI companies whose tools create it have a clean-up role too. Unfortunately, just like the social media firms before them, they’re carrying out that work behind closed doors.

I reached out to a dozen generative AI firms whose tools can generate photorealistic images, videos, text and voices to ask how they ensured their users complied with their rules. Ten replied, all confirming that they used software to monitor what their users churned out, and most said they had humans checking those systems too. Hardly any agreed to reveal how many humans were tasked with overseeing those systems.


And why should they? Unlike the pharmaceutical, auto and food industries, AI companies have no regulatory obligation to divulge the details of their safety practices. They, like social media firms, can be as mysterious about that work as they want, and that will likely remain the case for years to come. Europe’s upcoming AI Act touts “transparency requirements,” but it’s unclear whether it will force AI firms to have their safety practices audited the way car manufacturers and food makers do.

It took those other industries decades to adopt strict safety standards, but the world can’t afford to give AI tools free rein for that long when they’re evolving so rapidly. Midjourney recently updated its software to generate images so photorealistic they can show the skin pores and fine lines of politicians. At the start of a huge election year, when close to half the world will go to the polls, this gaping regulatory vacuum means AI-generated content could have a devastating impact on democracy, women’s rights, the creative arts and more.