
The United States military used an advanced artificial intelligence system to help strike roughly 1,000 targets in the first 24 hours of its campaign against Iran, relying on technology developed by Palantir and Anthropic, according to a report by The Washington Post.
The system, known as the Maven Smart System, is built by data-mining company Palantir and processes large volumes of classified intelligence data from satellites, surveillance platforms and other sources, the report said, citing three people familiar with the system.
According to the report, the platform generated real-time targeting insights and prioritised strike locations during the campaign in Iran.
Claude AI integrated into Pentagon targeting platform
Embedded within the Maven system is Claude, a generative AI model developed by Anthropic.
The Washington Post reported that Claude was integrated into Maven to help analyse intelligence data, suggest targets and prioritise them based on operational importance.
Two people familiar with the system told the newspaper that Maven, powered by Claude, suggested hundreds of targets, generated location coordinates and ranked targets as US planners prepared the campaign.
The combined system has accelerated the pace of military planning by converting processes that previously took weeks into near real-time operations, one of the people told the newspaper.
According to the report, the AI tools are also used to evaluate the outcomes of strikes after they are initiated.
First major war deployment for Claude
While Claude has previously been used in security operations — including counterterrorism work and the raid that captured Venezuelan President Nicolás Maduro, according to two people cited by The Washington Post — the Iran campaign marks its first use in large-scale military combat operations.
Over the past year, military planners have expanded the system’s use across different branches of the armed forces.
Two people familiar with the matter told the newspaper that the technology is now used daily in many parts of the US military.
US government to phase out Anthropic tools after dispute
The deployment of the technology has come alongside a policy dispute between the US government and Anthropic.
Hours before the bombing campaign against Iran began, US President Donald Trump announced a ban on the use of Anthropic’s AI tools across government agencies, according to The Washington Post.
The administration has given agencies six months to phase out the company's technology, following disagreements over permitted uses of the systems, particularly mass domestic surveillance and fully autonomous weapons, the report said.
Two people familiar with the matter told the newspaper that the military will continue using the technology during the transition period while a replacement system is developed.
One person cited by The Washington Post said the Pentagon had become heavily dependent on the system and could invoke government authority to retain the technology temporarily if needed.
“We’re not going to let [Anthropic CEO Dario Amodei’s] decision making cost a single American life,” the person told the newspaper.
Pentagon expands use of Maven system
The Pentagon began integrating Anthropic’s Claude into Maven in late 2024, according to public announcements cited in the report.
The system is used to generate proposed targets, summarise intelligence from the battlefield and track logistics data.
The Trump administration has expanded Maven’s deployment across the military, with more than 20,000 military personnel using the system as of May last year, according to the report.
Rear Adm. Liam Hulin, deputy director of operations at US Central Command, said in a 2024 talk that the system pulls intelligence from 179 different data sources.
“Centcom is heavily using MSS,” Hulin said, referring to the Maven Smart System by its acronym.
US and Israel collaborated on target bank
The Washington Post reported that it was unclear whether Maven’s target lists were shared with Israel prior to the strikes, but the two countries had coordinated extensively in the lead-up to the operation.
In a statement released shortly after the attacks began, the Israel Defense Forces said it had worked closely with the US military for months to develop a large database of potential targets.
“The Israel Defense Forces, in close cooperation with the US Army, worked for thousands of hours to build as valuable and extensive a target bank as possible,” the statement said.
AI adoption expands across defence sector
The Pentagon has been moving rapidly to incorporate generative AI into defence operations.
Anthropic was among the first major AI companies to work with classified US government data, as defence agencies explored ways to use AI for intelligence analysis and operational planning.
Anthropic CEO Dario Amodei said last week that Claude had been “extensively deployed” across the US Department of Defense and other national security agencies.
“I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries,” Amodei wrote in a blog post during negotiations with the Pentagon.
NATO and rival AI firms entering defence space
The technology is also spreading among US allies.
NATO, which signed a contract with Palantir last year, has promoted its own version of the Maven system as enabling commanders to oversee battlefields in near real time.
A study by Georgetown University examining the US Army’s 18th Airborne Corps found that the system allowed one artillery unit to perform work previously requiring around 2,000 staff with a team of just 20 people.
At the same time, rival AI firms are positioning themselves to replace Anthropic’s role in Pentagon systems.
According to the report, Elon Musk’s xAI and OpenAI both signed agreements last week to work on classified US government systems.
Experts debate risks of AI-driven warfare
The growing use of generative AI in military operations has triggered debate among defence analysts about oversight and reliability.
Paul Scharre, executive vice president at the Center for a New American Security, told The Washington Post that AI allows the military to develop targeting packages “at machine speed rather than human speed”.
However, he warned that human oversight remains necessary.
“AI gets it wrong,” Scharre said. “We need humans to check the output of generative AI when the stakes are life and death.”