As India gears up to harness artificial intelligence (AI) across sectors, the government is also laying the groundwork to ensure that these technologies are developed and used responsibly.
One of the cornerstones of this strategy is the creation of the IndiaAI Safety Institute (IASI), an initiative under the broader IndiaAI Mission launched by the Ministry of Electronics and Information Technology (MeitY).
Think of the IndiaAI Safety Institute as a watchdog, think tank, research body, and policy lab all rolled into one.
The government is currently accepting applications for the safety institute till July 9. Here's what you need to know about the institute and its mandate.
What is the IndiaAI Safety Institute?
The IndiaAI Safety Institute (IASI) is a national-level institute established by MeitY as part of the “Safe & Trusted AI” pillar of the IndiaAI Mission.
It is designed to be the central body that will work on ensuring the safety, fairness, transparency, and accountability of AI systems deployed in the country.
What will it broadly do?
The IASI has a wide-ranging mandate. At a high level, it will:
- Establish and promote India’s own Safe and Trusted AI principles
- Research and develop technical tools and AI safety benchmarks
- Create policy frameworks, legal guidelines, and governance protocols
- Build and maintain risk inventories, testing frameworks, and evaluation standards
Rather than a physical institute at a single location, the IASI will operate as a decentralised network of institutions and experts across India, according to an IndiaAI Mission document.
These could be IITs, NITs, research institutes, universities, R&D labs, or even eligible startups and civil society organisations.
Each selected partner will set up an IndiaAI Safety Cell, which will be a dedicated team working on specific safety-related tasks.
These cells will be staffed with experts. Each cell must have a lead with at least 10 years of AI safety research experience and three full-time technical staff. One member must be physically stationed at MeitY in Delhi, while the others will work from the partner institution.
What kind of work will it do?
Areas of focus include:
- Technical AI safety research: Including red-teaming, metrology, and machine unlearning
- Legal and governance research: Crafting India-specific laws and policies for AI accountability
- Development of tools: Such as privacy-enhancing technologies and provenance systems
- Data governance: Defining metadata standards and managing high-quality datasets
- Evaluation and benchmarking: Testing AI models for safety, accuracy, and fairness
- Capacity building: Designing training modules and stakeholder engagement programs
- Taxonomy and risk mitigation: Building risk inventories and mitigation strategies
- Global cooperation: Engaging with other AI safety institutes and aligning with global norms
How will it be funded?
The IndiaAI Safety Institute will operate on a co-funding model:
- Partner institutions must contribute 50 per cent of the total project cost from their own funds.
- IndiaAI will fund the remaining 50 per cent, with strict guidelines on how this funding is to be used.
- IndiaAI’s funding cannot be used for hardware purchases.
- All project assets and outputs must be open source and made publicly available, unless classified for national security reasons.
- In-kind contributions (like offering infrastructure or staff time without financial accounting) are not accepted. All matching contributions must be monetary.
Is the IASI just a research body?
No. While research is central to the IASI, it is also meant to translate that research into policy and real-world impact. For example, the institute could help a government department adopt a safe facial recognition system or assist startups in evaluating the fairness of their algorithms.