
OpenAI CEO Sam Altman said artificial intelligence is not yet capable of making critical military decisions, commenting amid reports that an AI system may have assisted in the US operation to capture former Venezuelan President Nicolas Maduro.
"I don’t think AI systems should be used to make warfighting decisions. I don’t think they’re at a level of sophistication or reliability where this is a good idea," Altman told The Indian Express earlier this week.
"That said, we certainly want to support the government, and there are a lot of things we can do already. Someday, there will be really important applications of AI in defence. But right now, the models have clear limitations," he has been quoted by the newspaper.
Asked specifically whether AI was used in the Caracas raid, Altman said, "No, I just don't know. I'm sure it was used in some ways. There are things that AI can do a great job of today. I think using AI to analyse a huge amount of intelligence reports, probably a great use of AI, and maybe it was used in some ways like that."
According to reports in the Wall Street Journal earlier this month, Anthropic’s AI model Claude, deployed via data firm Palantir Technologies, may have supported the operation. Reuters has not independently verified the claims, and the Pentagon, White House, Anthropic, and Palantir did not immediately comment.
Anthropic’s policies explicitly prohibit using Claude for violence, weapons design, or surveillance.
Altman’s caution echoes previous statements by Chris Lehane, OpenAI’s Chief Global Affairs Officer, who at the NDTV Ind.AI Summit said, "We do build all sorts of safety mechanisms into our models before they are publicly released… how does society start to build resilience out there".
Not all experts share even Altman's measured optimism, though.
UC Berkeley professor Stuart Russell, a leading AI safety advocate, warned that AI poses grave risks to humanity if left unchecked. "Some of the CEOs, pretty much all the leading CEOs, have admitted there is enormous risk to humanity. Privately, they will say, 'I wish I could stop'. One said the scenarios are so grim that the best case would be a Chornobyl-scale disaster," Russell said, urging governments to proactively regulate AI risks.
In the meantime, the Pentagon is reportedly pushing AI firms, including OpenAI and Anthropic, to make their tools available on classified networks, easing standard restrictions to support military applications. While many AI tools remain on unclassified networks, Anthropic’s Claude is among the few accessible via classified channels, though under strict usage limits.