Union minister of state for electronics and information technology Rajeev Chandrasekhar is focused on outcomes. And achieving them fast.
At a meeting with his counterparts from the UK and Japan on the sidelines of the three-day Global Partnership for Artificial Intelligence (GPAI) Summit in New Delhi on December 13, Chandrasekhar cautioned that time is running out when it comes to regulating artificial intelligence (AI).
He wants the 29 “like-minded” countries of the grouping and its techno-diplomats to come to an agreement on governing AI within the next 6-9 months.
"I am aggressive and I am an optimist... We can't wait for two years as AI is at an inflection point," he says.
Chandrasekhar has a long and important to-do list when it comes to AI. He is trying to engage with social media platforms to tackle the menace of deepfakes, help Indian startups and innovators ride the AI wave, set up large compute facilities locally to train AI models, and draw up new regulations to create a filter separating good AI from bad.
While Chandrasekhar paced up and down the slick hallways of Bharat Mandapam in the national capital, negotiating ground rules to govern AI internationally, Moneycontrol caught up with him to talk about the government's efforts to rein in the ills of AI while providing an impetus to innovate with the technology. Edited excerpts:
You have said that only trusted platforms will get access to Indian datasets for training their AI models. What is the framework to judge who is trusted?

That will be defined. That nomenclature of what trust is resembles the discussion on AI risks today. Nobody can predict the risks fully. But we know that deepfakes are a risk. We know there are 11 types of harms on the conventional internet and social media. Some of them carry over to AI. We know specific AI risks like malicious models or biased algorithms.
What the Digital Personal Data Protection (DPDP) Act, 2023, the India Datasets platform and the Digital India Act (DIA) effectively lay out is the following framework: personal data cannot be harvested by anybody without consent, and its purpose should be limited to the service that is being used. Non-personal or anonymised data will flow through the India Datasets platform or, as the proposed legislation of the DIA says, only to trusted platforms and models which are working on datasets that are trusted.
The DPDP Act prevents anybody from taking personal data for anything other than the purpose for which it has been given consent. So if I am Swiggy, Zomato or Flipkart, I can only take consent for the data that is needed to deliver you the product or service. I can't use all your data for my AI model.
The only way platforms can train models using India datasets is to use anonymised data. The proposed DIA says you cannot do it, unless it is through the India datasets platform. And the India datasets platform's access will be limited to Indian researchers, Indian startups and any other platform that is trusted.
What happens when AI models are trained on our tweets and blog posts that may not be personal data and are publicly available?

The proposed DIA will put the brakes on that for untrusted models. You cannot scrape the Indian internet if you are a model sitting in a country that has animosity towards India. You cannot take the data and train the model there.
PM Modi made suggestions like a watermark for AI software and testing models before deployment to regulate AI. Are these also being implemented by the government without waiting for an international agreement?

Any effort to regulate AI has to be global or at least near-global. Having a good AI in India doesn't mean there cannot be a bad AI somewhere else, which is accessed by people in India. However, the global agreement can't be abstract. It has to define safety, trust and harm with granularity. And it has to be done quickly. We can't wait for two years. We need to be able to do it within months, as AI has reached an inflection point globally.
In a meeting with social media companies on deepfakes recently, it was said that the government will either issue an advisory or make an amendment to the IT Act. Will it be an advisory or an amendment?

The answer is yes and yes. We will start with an advisory because there's nothing in the rules that needs to be changed. We will advise them that you are not following the rules, and if you want an understanding of the rules, we will lay it out in the advisory. If there's still some resistance to that, we will amend the rules. Our job is to make sure rules are followed.
When the government's AI report came out recently, it suggested setting up AI compute capacity of around 25,000 graphics processing units (GPUs). We are hearing that the number may have gone down. Can you shed some light on it?

I don't want to share it till the Cabinet approves it, but we will create substantive GPU compute capacity in the country. It is not an issue that we are overly worried about, because this shortage of GPU capacity is very short term. NVIDIA's domination of GPUs is short term because Intel and AMD are going to catch up very soon. They are about six to nine months behind NVIDIA. So, we are not looking at this as some insurmountable issue.
The broad idea is to do it in a public-private partnership (PPP) model. Also, the Centre for Development of Advanced Computing (CDAC) will create substantial GPU capacity with its own designed Rudra servers. So, there will be public sector capacity, public-private partnership and private sector capacity.
PM Modi recently talked about how deepfakes have been made of him doing garba. Are deepfakes that are not intended to harm, or are just for entertainment, going to be treated differently under laws?

If you have not taken my consent to make a deepfake of me and you put it out there, irrespective of whether it is harmless or a satire, I can be aggrieved and have the right to complain. Even if you make me look good in the deepfake, I can still be aggrieved. If I take you to court, you may argue that no harm was caused. And I will argue that some kind of harm was caused.
Social media platforms are opposing the advisory that if a user is not happy with a platform's first response to a content violation, he can directly go to the grievance appellate committee (GAC) under the IT Rules. They are saying they will get flooded with too many cases. What's your view?

When did that become your problem or my problem? Our job is not to make things easy and convenient for platforms. Our duty is to make things easy and convenient for the citizens who use the platforms. Agree or not, rules are rules. They have to follow them. The user will have the right to automatically go to the GAC after the first response on the platform. We don't intend to make life tough for platforms, but to create ease of living for our citizens.