“Should we let machines flood our information channels with propaganda and untruth?” asks the open letter from the Future of Life Institute that has been all over the news this month for calling for a six-month halt on training new AI systems. “Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?”
Alarmist as it sounds, the letter has over 50,000 signatures, including from Tesla chief (and OpenAI cofounder) Elon Musk, Apple cofounder Steve Wozniak, the historian and author Yuval Noah Harari, and professors and intellectuals from universities and think tanks around the world.
The letter arrived shortly after OpenAI released GPT-4, a significantly more powerful model than the ones that came before it. The speed at which ChatGPT has developed—it first launched last November—is just one of the many reasons it is prompting calls for caution around the world.
What’s equally, if not more, shocking is the verisimilitude factor: just how closely ChatGPT approximates human interaction. ChatGPT can now process entire college essays, write covering letters, give relationship advice, write code, hold philosophical conversations, and make your grocery lists. One of its greatest threats is the loss of jobs. Paradoxically, a reel circulating on an Instagram account run by a “career educator” with 176K followers reveals the biggest “job search hack of all time”: “Go to ChatGPT and enter in the job you want to apply for.”
The Future of Life Institute’s letter acknowledges the power flex that OpenAI has demonstrated, and asks companies in the business to halt and consider the ramifications, so that this leap in technology works for us and not the other way round. As a nearly decade-old nonprofit with a mission “to steer transformative technologies away from extreme, large-scale risks and towards benefiting life”, it feels it is well-placed to issue such words of caution.
The Future of Life Institute wants to mitigate long-term existential risks to mankind that it says could emanate from highly advanced and potentially dangerous technologies such as nuclear technology, biotechnology and, now, artificial intelligence. Notably, the institute does not call for a ban or an end to any of these technologies.
Not quite refuseniks, the Future of Life Institute acknowledges that massive technological advancement has been intrinsic to the human race’s development. But now, they say, “we have entered an era in which life and intelligence will be engineered by life and intelligence, rather than by evolution.” In other words, that means incredible power, whose ramifications we may not fully understand.
Recently, in a CNN interview, after the anchor called the claims of the open letter “histrionic”, Max Tegmark, the Swedish-American cosmologist who is the Future of Life Institute’s president, clarified that it didn’t call for a pause on all AI research, only on training systems more powerful than GPT-4. “Having to share our planet with smarter entities than ourselves who do not care about us is something that can be quite inconvenient,” Tegmark said with a slight smile.
Fans of Hollywood sci-fi, and anyone who has engaged with the philosophical aspects of technological development, will know that he was putting it mildly.
The institute and Tegmark are being called out for furthering the AI hype with this letter. Beyond that counterproductive result, the “longtermism” of their worldview is also being called into question.
Longtermism is an ethical stance that gives priority to improving the long-term future. Embraced widely in Silicon Valley and by Big Tech, it is considered dubious by many because it can be used to justify present-day actions by the uber-wealthy.
And yet, Tegmark is another one in a long line of people who’ve worked closely with Silicon Valley to raise alarm. In 2014, the same year that the Future of Life Institute came into being, the late theoretical physicist Stephen Hawking told the BBC: “The development of full artificial intelligence could spell the end of the human race. It would take off on its own, and re-design itself at an ever-increasing rate.”
Elon Musk, virtual reality pioneer Jaron Lanier, philosopher Nick Bostrom, author James Barratt, even John Wick actor Keanu Reeves have been suspicious of AI. “People are growing up with these tools: We’re listening to music already that’s made by AI in the style of Nirvana, there’s NFT digital art,” Reeves said recently. “It’s cool, like, look what the cute machines can make! But there’s a corporatocracy behind it that’s looking to control those things.”
“Culturally, socially, we’re gonna be confronted by the value of real, or the non-value,” Reeves told Wired magazine. “And then what’s going to be pushed on us? What’s going to be presented to us? It’s this sensorium. It’s spectacle. And it’s a system of control and manipulation.”
AI isn’t the only technology with detractors. For years, a cross-section of Silicon Valley insiders and employees has been speaking out about the influence of the Internet and social media on human behaviour and society. In 2013, Tristan Harris, then a Google employee, famously sent a memo, “A Call To Minimise Distraction & Respect Users’ Attention,” to 10 close colleagues. It grew into a movement, and Harris came to be branded the “conscience” of Silicon Valley.
According to a 2017 piece in The Guardian, “Harris insists that billions of people have little choice over whether they use these now ubiquitous technologies, and are largely unaware of the invisible ways in which a small number of people in Silicon Valley are shaping their lives.” The same piece also mentions James Williams, another former Google employee, who co-founded the advocacy group Time Well Spent with Harris; Justin Rosenstein, who once helped create G Chat for Google and now speaks widely about office productivity; and Nir Eyal, the author of Hooked: How to Build Habit-Forming Products and creator of the Habit Summit, a “behavioral design conference”. From the allure of the attention economy to the decline of democracy, it is all related, and all dangerous, in their view.
Unlike with the Internet and social media, the conversation around responsible AI has been loud for decades. Dozens of organisations around the world—including Humanity+, Foresight Institute, Ought, Algorithmic Justice League, AI Now Institute and Electronic Frontier Foundation—have been considering ethical ways for AI to be used.
They work to find ways for AI and human life to co-exist, so that human civilisation isn’t at risk, and so that AI doesn’t become yet another tool in the hands of a few powerful humans to bend the fabric of society to their will. Undoubtedly, we are on the cusp of the era of artificial intelligence. For now, the rest of us will just have to wait and watch, and pray that mankind doesn’t come to regret AI the way Frankenstein regretted his monster.
