
AI not ready for military decisions, warns OpenAI CEO Sam Altman amid Maduro capture reports

"I don’t think AI systems should be used to make warfighting decisions," the OpenAI CEO remarked.

February 21, 2026 / 13:20 IST
Snapshot
  • Sam Altman says AI isn't ready for critical military decisions
  • AI may have aided US operation, but Altman urges caution
  • Experts warn AI poses grave risks if not properly regulated

OpenAI CEO Sam Altman said artificial intelligence is not yet capable of making critical military decisions, commenting amid reports that an AI system may have assisted in the US operation to capture former Venezuelan President Nicolas Maduro.

"I don’t think AI systems should be used to make warfighting decisions. I don’t think they’re at a level of sophistication or reliability where this is a good idea," Altman told The Indian Express earlier this week.

"That said, we certainly want to support the government, and there are a lot of things we can do already. Someday, there will be really important applications of AI in defence. But right now, the models have clear limitations," he was quoted as saying by the newspaper.

Asked specifically whether AI was used in the Caracas raid, Altman said, "No, I just don’t know. I’m sure it was used in some ways. There are things that AI can do a great job of today. I think using AI to analyse a huge amount of intelligence reports, probably a great use of AI, and maybe it was used in some ways like that".

According to reports in the Wall Street Journal earlier this month, Anthropic’s AI model Claude, deployed via data firm Palantir Technologies, may have supported the operation. Reuters has not independently verified the claims, and the Pentagon, White House, Anthropic, and Palantir did not immediately comment.

Anthropic’s policies explicitly prohibit using Claude for violence, weapons design, or surveillance.

Altman’s caution echoes previous statements by Chris Lehane, OpenAI’s Chief Global Affairs Officer, who at the NDTV Ind.AI Summit said, "We do build all sorts of safety mechanisms into our models before they are publicly released… how does society start to build resilience out there".

Not all experts share even Altman’s qualified optimism, though.

UC Berkeley professor Stuart Russell, a leading AI safety advocate, warned that AI poses grave risks to humanity if left unchecked. "Some of the CEOs, pretty much all the leading CEOs, have admitted there is enormous risk to humanity. Privately, they will say, 'I wish I could stop'. One said the scenarios are so grim that the best case would be a Chornobyl-scale disaster," Russell said, urging governments to proactively regulate AI risks.

In the meantime, the Pentagon is reportedly pushing AI firms, including OpenAI and Anthropic, to make their tools available on classified networks, easing standard restrictions to support military applications. While many AI tools remain on unclassified networks, Anthropic’s Claude is among the few accessible via classified channels, though under strict usage limits.

Moneycontrol News
first published: Feb 21, 2026 01:20 pm


