
AI's Economic Potential for India: A Banyan Tree Metaphor

India’s AI strategy blends regulation, infrastructure, and innovation to build a sovereign AI ecosystem. With a focus on transparency, talent, and responsible use, it aims to leverage AI for public good and inclusive growth

July 15, 2025 / 15:56 IST

By Vivek Agarwal 

Even cautious analysts whisper that Artificial Intelligence (AI) may nudge our GDP upward by something in the ballpark of $360–430 billion – roughly five to seven percent – by 2030. Flip the telescope: the home market for AI tools and services is forecast to touch $17 billion by 2027, compounding at a head‑spinning 30% or so each year. Set those figures against our software exports, which already generate approximately $320 billion a year, and the implication is clear: if the projections hold, AI’s contribution could rival the entire IT-services empire we have spent three decades building. Little wonder every government memo now treats AI as if it were electricity circa 1900.

In this context, India’s AI strategy summons the image of a banyan anchoring itself beside a twitchy river. The national AI mission is the trunk, plunging deep into native soil and anchoring the project firmly, while the varied strategies of individual states sprawl like aerial roots – widening the canopy – and remain in lively conversation with the swift river of global innovation that courses alongside.

Learning from the World, But on Our Terms

Look abroad, and you’ll spot wildly divergent recipes for tackling AI. Brussels has gone down the regulatory path: the EU AI Act slices systems by risk, forces human babysitters for “high‑risk” applications, and insists on digital watermarks so deepfakes can’t roam free. Washington, in contrast, decided to re-shore the silicon itself through the CHIPS and Science Act’s $52.7 billion for fabs, while a 2023 executive order requires frontier-model developers to submit their safety homework to the federal government. Beijing, never shy, wants every recommendation algorithm on a national ledger and stipulates that generative‑AI outputs must mirror its core values. Then there is Singapore, which rolled out AI Verify, an open-source compliance kit that says, in effect, “Trust, but test.”

New Delhi borrows a little from each playbook. Subsidise GPUs à la America, legislate some EU‑style transparency, and drape it all in a moral vocabulary of inclusion that China can’t convincingly imitate. Tricky balancing act. Squeeze too hard, and you’ll suffocate the garage inventor; turn laissez‑faire, and public trust evaporates. As the Tony Blair Institute’s crisp memo on “How Leaders in the Global South Can Devise AI Regulation That Enables Innovation” reminds us, well‑designed rules can serve as fertiliser, not shackles, for technological ingenuity.

Building Sovereign with Numbers, Not Slogans

In March 2024, the Union Cabinet green-lit the IndiaAI Mission, carving out ₹10,300 crore over five years. Half the funds strengthen hard muscle – compute clusters and desi multimodal models; the rest backs datasets, skilling, and a raft of “responsible‑AI” prototypes. Within a single season, the IndiaAI Compute Portal emerged, offering 14,000 GPUs and tendering for an additional 18,000. If bids clear, we leapfrog into the top echelon of publicly accessible AI clouds.

Yet, infrastructure is hollow without talent. Current head‑hunting data reveal a grim gap: for every ten vacancies labelled “generative‑AI engineer,” there is roughly one qualified candidate. Demand, depending on whom you ask, overshoots supply by perhaps 45–50%. NASSCOM pegs the AI workforce at 4.2 lakh today, predicting a sprint to 12.5 lakh by 2027. Uttar Pradesh’s new AI Pragya scheme, which vows to upskill a million youngsters in data analytics, is one of the few attempts to plug the chasm. Other states should watch and plagiarise.

On the intellectual‑property front, an inflection is underway. Sarvam AI – less than two years old – has been commissioned to craft India’s maiden sovereign large‑language model. Built entirely on local silicon and brainpower, the model promises to chat breezily in Khasi one moment and parse corporate law the next, all while keeping user data within our jurisdictional fences. The aspiration is explicit: an LLM that “feels familiar, not foreign.”

Data, that core raw material, is finally being corralled. The IndiaAI Datasets Platform is meant to serve as a national pantry of meticulously curated corpora, housed onshore. Alongside, programmes such as Bhashini push open‑source language stacks into the public domain, the unspoken logic being: if we give away the ladders, more folks will climb.

Making Responsible: From Principle to Practice

Responsibility is not a decorative tassel; it is constitutional cement. Under the Safe & Trusted AI banner, eight home‑grown projects – bias mapping, synthetic‑data engines, even “machine unlearning” apps – have already secured grants. Four are slated to debut on the AI Kosha portal this year. An AI Safety Institute is in gestation, and if it evolves as planned, it could stand as a democratic counterweight to both Big Tech swagger and state overreach. Why fuss? Because an opaque algorithm can sabotage Article 15 faster than any broken paper process ever could: deny a pension here, flag a neighbourhood as “high risk” there, and voilà – automated inequality.

Meanwhile, the start‑up feeding frenzy continues. The domestic AI‑venture pool has doubled inside twenty‑four months; roughly seventy percent of the money last year chased generative‑AI ideas. The geography, alas, is predictable: Bangalore, NCR, and Mumbai. If GPU credits and open datasets never reach Indore or Aizawl, we are reenacting colonial extractivism on a digital stage.

The task does not end with tidy legislation or a photogenic GPU farm. The harder part is cultural: cultivating habits of transparency and iterative self‑correction within a technology community conditioned to move fast and break things. That means writing sunset clauses into every AI rule so we revisit the evidence every few years; funding research that critiques algorithms; and insisting that public-private partnerships publish model cards in the languages of those they serve, not just in polished English.

So, where does all this leave us? Back at the banyan, admittedly. Deep roots, unruly offshoots, room for saints and sceptics alike. India’s rendezvous with AI will be judged by sterner metrics: How many GPUs land in a Coimbatore garage? Can a Jharkhand farmer appeal a botched crop‑insurance verdict? Will the next flagship language model recognise Santhali, Dogri, perhaps even the lilt of Tulu?

We have pledged to “build sovereign” and “make responsible”. Success hinges on disciplined policy, spirited private enterprise, and hawk‑eyed civil‑society watchdogs. Get the balance roughly right, and the banyan thrives as proof that the world’s largest democracy can bend avant‑garde technology to public purpose without bartering away responsible use or inclusive access.

(Vivek Agarwal is a global policy expert and Country Director – India at the Tony Blair Institute for Global Change.)

Views are personal and do not represent the stance of this publication.
