
OpenAI is under fire after swiftly securing a Department of Defense contract just days after Anthropic’s talks with the Pentagon collapsed.
CEO Sam Altman acknowledged the agreement was pushed through quickly, admitting online that it was “definitely rushed” and that the optics were far from ideal. The timing intensified scrutiny, particularly after President Donald Trump directed federal agencies to phase out Anthropic’s technology within six months and Defense Secretary Pete Hegseth labeled the rival AI firm a supply-chain risk.
Soon after, OpenAI announced it had reached its own arrangement to provide AI models for use in classified settings.
The contrast between the two companies raised immediate questions. Anthropic had publicly stated it would not permit its systems to be used in fully autonomous weapons or mass domestic surveillance. Altman has repeatedly said OpenAI maintains similar restrictions. Critics quickly asked why one company walked away while the other closed a deal.
In response to the backlash, OpenAI published a detailed blog post outlining what it described as firm boundaries. The company said its models cannot be used for mass domestic surveillance, autonomous weapon systems, or high-stakes automated decision-making such as social credit scoring.
OpenAI argued that its safeguards go beyond written policies. According to the post, the company retains control over its safety systems, delivers access through cloud-based APIs, keeps cleared personnel involved in oversight, and relies on contractual protections alongside existing U.S. law.
The company also said it was unclear why Anthropic could not finalize a similar agreement, adding that it hoped other AI labs would consider comparable arrangements.
The assurances did little to quiet critics. Techdirt founder Mike Masnick pointed to language referencing Executive Order 12333, a longstanding intelligence authority that governs surveillance conducted outside U.S. borders. Masnick argued that compliance with the order could still allow the collection of communications involving Americans if intercepted abroad, calling into question claims that domestic surveillance is off the table.
Katrina Mulligan, OpenAI’s head of national security partnerships, pushed back on that interpretation in a LinkedIn post. She suggested much of the criticism assumes a single contract clause is the primary safeguard against misuse.
“That’s not how any of this works,” Mulligan wrote, emphasizing that system architecture plays a more decisive role than policy wording. By limiting deployment to cloud infrastructure, she argued, OpenAI prevents its models from being directly embedded into weapons platforms, sensors, or operational hardware.
Altman, responding to critics on X, framed the agreement as a strategic calculation. He acknowledged that the move had triggered significant backlash — including a temporary surge that pushed Anthropic’s Claude ahead of ChatGPT in Apple’s App Store rankings.
But he defended the decision as an effort to ease mounting tension between the Pentagon and leading AI firms. If the agreement helps stabilize the relationship between government agencies and the AI industry, Altman suggested, the company’s gamble could ultimately be vindicated. If not, OpenAI may continue to face accusations that it moved too fast and underestimated the political fallout.