
There’s too much money going to AI doomers

Nonprofit groups that fight AI harms today are getting far less funding than those preventing a theoretical AI apocalypse.  

August 16, 2023 / 16:43 IST

Imagine if during the Industrial Revolution, amid the explosion of looms, steam engines and other automated machinery, there had been safety organisations making sure that everything worked out in people’s best interests. One branch of these groups made sure the machines didn’t create faulty products and that workers weren’t exploited. The other worked on making sure the machines didn’t become sentient, rise up and kill humanity.

In hindsight, which safety organisations deserved the most funding? You’d think the former. Yet in today’s AI revolution, the opposite is true.

Research and advocacy groups that are working to address present-day harms from AI are getting a fraction of the funding that’s going to those studying existential risks from increasingly powerful machines.

Take, for example, the European Digital Rights Initiative. It's the largest network of nonprofit digital rights groups in Europe, and its campaigns around the use of facial recognition and biased algorithms helped put more civil rights protections into the region's AI Act. Its annual budget? About $2.2 million, according to the organisation's director.

Or take the AI Now Institute, based at New York University, which is pushing for scrutiny of how AI is used in healthcare, criminal justice and education. It operates on an annual budget of less than $1 million.

Now compare these with the Future of Life Institute, a nonprofit focused on the existential risks of AI getting access to the internet or weapons. In 2021, it announced a $25 million grant program via a donation from crypto magnate Vitalik Buterin.

And look at the Center for AI Safety, which says it carries out technical research to reduce existential risk from AI. Last year it got a $5.2 million grant from a single donor: Facebook co-founder Dustin Moskovitz, via his Open Philanthropy organisation. The Center for Human-Compatible AI in Berkeley, California, got an $11.4 million donation from Open Philanthropy, to be used over five years.

Open Philanthropy is perhaps the biggest donor backing research into rogue AI, having poured nearly half a billion dollars into various efforts to combat the threat, according to a Washington Post report in July.

There is nothing wrong with scrutinising AI systems to make sure they are aligned with human values. After all, AI has more potential to go off the rails than an 18th-century spinning jenny. But the enormous disparity in funding between theoretical risks of the future and real problems that exist today, which stand to get worse in the absence of regulation, makes no sense at all.

Why the disparity? One reason is ideological. Another may be commercial: existential-risk groups often say they need to build more powerful AI models in order to do their research. Over time, that can make them more valuable as investments.

A perfect example of this is OpenAI. It started life as a nonprofit in 2015 that took donations from billionaire benefactors to create safe AI tools that would benefit humanity. Over time, as its language models became more powerful and costly to maintain, it became a for-profit company — and its early donors found themselves holding a valuable investment.

Venture capitalists are keen to invest in startups on a similar journey to OpenAI's. When a group of OpenAI engineers split away in late 2020, feeling the company's commitment to safety had become warped, they created their own AI safety startup, Anthropic. It soon collected millions of dollars from a mix of sources: the usual AI safety donors, such as Moskovitz and the crypto exchange FTX (before it went bankrupt), as well as VC firms such as Spark Capital and the venture arm of Google. To date, it has raised $1.2 billion.

Of course, Anthropic is not a nonprofit organisation, and it sells a product. But it is also part of a grey area of groups and companies that position themselves as working to prevent catastrophic risk from AI by, bizarrely enough, racing to create more powerful AI.

And while all this money is pouring into their research, more examples of discrimination have been bubbling up from AI tools already in the wild. Rona Wang, a lab assistant at MIT, recently uploaded a photo of herself to Playground AI, an online tool that uses AI image generators to automatically edit images. She asked it to turn her picture into a "professional LinkedIn profile photo." Disturbingly, the tool made Wang, who is Asian, look Caucasian.

Playground AI’s founder responded to Wang’s post on Twitter, saying, “[The AI models] aren’t smart enough.”

But she is not the only one. Lana Denina, a painter, also tried generating a professional headshot with the AI editing tool Remini. After she uploaded photos of her face, the tool returned oversexualised portraits of her, according to the examples she posted on Twitter. Other AI tools have depicted women in the same egregious way when users simply wanted ordinary portraits; women of colour have been particularly affected. While hundreds of millions of dollars are being spent on fighting a theoretical AI apocalypse, AI models are perpetuating a long and appalling history of sexualising black women and girls.

“There’s a lot of issues that are in very bad need of attention and more robust support, and nowhere near enough is going into those spaces,” says Sarah Myers West, managing director of the AI Now Institute.

A strange divergence is happening to the donations aimed at taming this modern industrial revolution. The current race dynamic means that when startups say they’re creating safe AI, their ideals need to be taken with a grain of salt. More pressingly, those defending the rights and livelihoods of humans now — versus in a distant future — need all the help they can get.

Parmy Olson is a Bloomberg Opinion columnist covering technology. Views are personal and do not represent the stand of this publication. Credit: Bloomberg

