Some of the biggest names in tech are calling for a pause on training systems more powerful than OpenAI's newly launched model GPT-4. Tesla chief Elon Musk, Apple co-founder Steve Wozniak and Stability AI CEO Emad Mostaque are among the 1,344 signatories to an open letter urging a pause on giant AI experiments.
The letter calls on all AI labs to immediately pause, for at least six months, the training of AI systems more powerful than GPT-4, citing the risk of creating nonhuman minds that could eventually outsmart humans. It also says that AI labs have become locked in an “out-of-control” race to develop “ever more powerful digital minds” that not even their creators can understand or control.
“Contemporary AI systems are now becoming human-competitive at general tasks,” reads the open letter, posted on the website of Future of Life Institute, a non-profit. “Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?”
You can read the full text of the open letter by clicking on the link below:
We're calling on AI labs to temporarily pause training powerful models! Join FLI's call alongside Yoshua Bengio, @stevewoz, @harari_yuval, @elonmusk, @GaryMarcus & over a 1000 others who've signed: https://t.co/3rJBjDXapc
A short thread on why we're calling for this - (1/8)
— Future of Life Institute (@FLIxrisk) March 29, 2023
“This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities,” the letter said.
The letter has been signed by a host of high-profile names, including founders, CEOs and researchers. Besides Elon Musk and Steve Wozniak, the signatories include AI heavyweights Yoshua Bengio and Stuart Russell, author Yuval Noah Harari, Pinterest co-founder Evan Sharp, Getty Images CEO Craig Peters and Conjecture CEO Connor Leahy.
Reactions to the letter were mixed on social media.
Princeton professor Arvind Narayanan predicted that it would only serve to fuel the hype around AI:
This open letter — ironically but unsurprisingly — further fuels AI hype and makes it harder to tackle real, already occurring AI harms. I suspect that it will benefit the companies that it is supposed to regulate, and not society. Let’s break it down. https://t.co/akQozgMCya
— Arvind Narayanan (@random_walker) March 29, 2023
Others were glad the letter was written:
surely – *surely* – this open letter about existential AI risk signed by lots of well-known top AI researchers will put the "only people who don't work in AI worry about existential AI risk" canard to bed https://t.co/ryORYonUph
— Tom Chivers (@TomChivers) March 29, 2023
Given this open letter on the serious risks posed by AI (I have not signed it but thinking about it), it is time for a serious debate in India on this topic as well. I am worried enough that I spend most of my time figuring out Zoho's way ahead. https://t.co/Dc508lq5UI
— Sridhar Vembu (@svembu) March 30, 2023
"The letter isn't perfect, but the spirit is right: we need to slow down until we better understand the ramifications," said Gary Marcus, a professor at New York University who signed the letter. "They can cause serious harm... the big players are becoming increasingly secretive about what they are doing, which makes it hard for society to defend against whatever harms may materialize."
(With inputs from Reuters)