In a wide-ranging conversation with Tucker Carlson, OpenAI CEO Sam Altman admitted that running one of the world’s most influential AI companies comes with a burden that often keeps him from sleeping well. But it’s not the big moral questions that weigh most heavily on him—it’s the small ones.
“Every day, hundreds of millions of people talk to our model,” Altman said, referring to ChatGPT. “I don’t actually worry about us getting the big moral decisions wrong,” he continued; what weighs on him most is “probably nothing more than the very small decisions on model behaviour. Those can have massive repercussions.”
The example at the top of his mind is suicide. OpenAI is currently facing a lawsuit from a family who claim ChatGPT played a role in their teenage son’s death. Altman acknowledged the gravity of the issue.
“They probably talked about [suicide], and we probably didn’t save their lives,” he said. “Maybe we could have said something better. Maybe we could have been more proactive.”
For Altman, these are the moments where AI’s limitations collide with human vulnerability. He argued that no model is perfect, but OpenAI must constantly reassess how ChatGPT responds in sensitive contexts where words can mean the difference between life and death.
Carlson also pressed him on whether ChatGPT could be weaponised by the military. Altman admitted he doesn’t know exactly how it’s being used in that arena. “I suspect there’s a lot of people in the military talking to ChatGPT for advice,” he said, before conceding he wasn’t sure “exactly how to feel about that.”
The bigger picture is clear: while OpenAI debates AI’s place in society, its CEO is kept up at night by the smallest of decisions—what the model says, or doesn’t say, to people at their most vulnerable.
