By Tannmay Kumarr Baid and Bharath Reddy
Trump recently announced America’s new AI Action Plan, with which the United States has defined its goal as global dominance in artificial intelligence. This focus, however, is crowding out two fundamentals: a competitive marketplace and safety. The race narrative is shaping export rules, investment flows, and regulatory choices in the United States, with downstream effects on the rest of the world. It may deliver gains in chips and models, but it also entrenches a small set of gatekeepers and shifts attention away from whether systems are safe and accountable.
The race narrative is setting the agenda
Washington’s plan sets three goals: accelerate AI innovation, build AI infrastructure, and lead global standards. It advances them through four levers. First, it tightens export controls on high-end compute and parts of chipmaking to limit rival access. Second, it eases rules across agencies to speed deployment through coordinated revisions and testbeds. Third, it drives a national buildout of data centres, power, and grid upgrades to meet AI demand. Fourth, it promotes an exportable AI stack for allies, from chips and cloud to models, applications, and standards.
The ripple effects of the race narrative are visible outside the United States as well. The Chinese government, for example, has announced massive venture capital investment in AI startups, and the EU is setting up AI “gigafactories”. The reduced focus on safety at global AI summits is also indicative of this mindset shift.
Market power risks
Frontier AI is capital-intensive. It needs large data centres, reliable power, and specialised chips. Only a few firms can finance and operate these systems at scale. A policy mix like the US AI Action Plan, which removes regulatory friction, channels public money into infrastructure, and imposes restrictions on inputs, will favour incumbents in the US.
Big tech platforms already benefit immensely from network effects and distribution integration: Gemini, for example, is integrated into Google Workspace and Google Search, and Grok into the microblogging platform X. Such integration provides a ready-made channel for widespread adoption, enhances the offering through exclusive access to user data, and lets the underlying models improve over time with user feedback.
Government backing also has a geopolitical effect. While it makes US companies more competitive against China, it creates single-point exposure to decisions by a few companies and by US regulators. India has already seen what dependence looks like: its access to advanced chips was limited under the now-withdrawn US diffusion rules, and existing export rules still bar China and other arms-embargoed countries from advanced chips. The plan also advocates exploring location-verification features on advanced AI chips to ensure they are not used in “countries of concern”. For allies that buy these stacks, including India, this is a significant geopolitical concern.
Safety risks
The plan funds evaluations, interpretability, and robustness research, which is useful. Yet it shifts attention away from guardrails and liability rules that might “unduly burden innovation” and moves towards aggressively promoting a dynamic “try-first” culture for AI across American industry. A weaker national baseline in the US also tends to become the global benchmark: European companies, for example, are already pushing back against the EU AI Act, arguing that it would make them less competitive globally.
Internationally, importing a full AI stack often means inheriting the regulatory standards under which it was developed. If the United States lightens its regulations while exporting its stack, the global baseline for safety will drift downward. Regulatory competition then becomes a race to the bottom in which obligations converge toward the minimum, and countries behind in the AI race are incentivised to cut regulations further to help their ecosystems catch up.
The path ahead
The diffusion of capabilities beyond the two dominant poles, the United States and China, is desirable. Open models offer a promising alternative to address both market power and safety concerns.
Open models, such as Meta’s Llama or BLOOM from BigScience, which openly release model weights and, to varying degrees, training data and research artefacts, let downstream developers study, use, and build on these systems. This not only reduces entry barriers and enables broader diffusion of the technology but also improves safety through collective scrutiny of risks. To realise this, governments should provide regulatory exemptions and funding support for open models.
While LLMs have impressed us with their capabilities, the real productivity gains will come from agentic AI, where autonomous agents bridge silos. Imagine an agentic system checking calendars, resolving conflicts, and making reservations for a meeting. Such interactions open up possibilities in sectors from healthcare to logistics. However, they require robust open protocols for secure and federated data exchange to ensure interoperability, privacy, and accountability.
Transparent and interoperable AI offers a pathway to ensuring that the benefits of AI are widely distributed and that the focus remains on public benefit rather than a race for power. This should feature on the agenda of the AI Summit in Delhi in February 2026, where leaders can commit to the steps needed to reach these goals.
(The authors are researchers with the high-tech geopolitics programme at the Takshashila Institution.)
Views are personal and do not represent the stand of this publication.