
IndiaAI Impact Summit: Proposed ‘AI Safety Commons’ to focus on Global South use cases

Proposed repository aims to adapt AI testing and safety frameworks to local contexts in developing economies

December 24, 2025 / 12:07 IST
IIT Madras professor Balaraman Ravindran is the chair of the Safe and Trusted AI Working Group of the IndiaAI Impact Summit
Snapshot
  • AI Safety Commons proposed as global hub for context-aware AI evaluation
  • Commons targets unique risks in the Global South, including caste and language bias
  • Platform will enable shared learning and locally relevant AI safety benchmarks

The Safe and Trusted AI working group at the India AI Impact Summit is proposing the creation of an AI Safety Commons — a shared, global repository aimed at improving how AI systems are tested, evaluated and deployed in the Global South, according to Balaraman Ravindran, chair of the working group.

Ravindran is the founding head of the Wadhwani School of Data Science and AI, the Robert Bosch Centre for Data Science & AI and the Centre for Responsible AI (CeRAI) at IIT Madras.

In a conversation with Moneycontrol, Ravindran said the commons is being conceived as an open repository of datasets, benchmarks, evaluation tools and documentation frameworks that reflect how AI systems are actually developed and used across the Global South.

Ravindran noted that the Global South has linguistic diversity, social structures and constraints distinct from those of advanced economies.

“Most of the safety tools, benchmarks and datasets we have today are created in and for very specific contexts,” Ravindran said. “When you take those and apply them directly in India or other parts of the Global South, they often fail to capture the risks that actually matter on the ground.”

The proposal has found support within the Safe and Trusted AI working group, which includes representatives from more than 50 countries and several UN organisations. Ravindran said interest has also come from advanced economies.

The AI Safety Commons is expected to feature in the working group’s chair’s summary to be submitted to the Indian government by the end of January.

Moving beyond Western bias frameworks

One of the core drivers behind the AI Safety Commons, Ravindran said, is the absence of context-aware bias and fairness testing in mainstream AI safety work.

Global discussions on bias tend to focus on categories such as race and gender, which reflect Western social realities, while overlooking other dimensions that are central in many developing societies.

“In India, bias may show up very differently, through caste, religion, region or language,” he said. “If our evaluation frameworks don’t even acknowledge these dimensions, then we are effectively blind to many of the harms that AI systems can cause in our societies.”

The proposed commons would allow countries and institutions to contribute locally relevant datasets and testing methodologies, making it easier to evaluate AI systems against risks that are specific to their social and cultural contexts.

“The idea is not that one country defines bias for everyone else,” Ravindran said. “It’s about creating a shared space where different societies can document what bias looks like for them, and how they test for it.”

From principles to practice

Ravindran also pointed to a widening gap between AI governance principles and day-to-day implementation.

While there is growing alignment among governments on ideas such as transparency, accountability and safety, developers and public agencies are often left without clear guidance on how to operationalise those ideas.

“We keep talking about safe and trusted AI, but when a startup or a government department asks, ‘How do I actually test my model? What data should I use? What benchmarks apply to my use case?’ there are very few clear answers,” he said.

The AI Safety Commons is intended to fill that gap by developing evaluation protocols, sector-specific benchmarks, question banks, and documentation tools such as model cards and data cards.

“Think of it as basic quality infrastructure for AI,” Ravindran said. “In traditional software, we have well-established testing and assurance practices. For AI, especially in our contexts, that infrastructure is still missing.”

Shared learning across similar contexts

The commons is being designed as a contributory platform. Ravindran said this approach is particularly relevant for Global South countries that face duplication challenges as they often work in silos.

“There is a lot of duplication happening today,” he said. “Different countries are independently trying to solve the same problems—how to test language models in low-resource languages, how to evaluate AI in agriculture or welfare delivery. The commons allows us to learn from each other instead of reinventing the wheel.”

“There is a shared understanding that if AI safety frameworks only work for a handful of countries, then they are incomplete,” he said. “What we are trying to build is something that reflects the diversity of how AI is actually used across the world.”

 


Aihik Sur covers tech policy, drones, space tech among other beats at Moneycontrol
first published: Dec 24, 2025 12:07 pm


