The government’s recent advisory asking companies to take its permission before rolling out artificial intelligence (AI) products has triggered a huge debate, with the industry terming the move "regressive" and capable of "stifling innovation".
The Ministry of Electronics and Information Technology issued the advisory following a controversy over responses by generative AI platforms such as Google's Gemini and Ola's Krutrim.
While Gemini’s response to a query asking whether PM Narendra Modi was a fascist raised eyebrows, Krutrim was found to be returning inaccurate responses.
Moneycontrol has reviewed a copy of the advisory, issued on March 1 under the IT Act 2000 and IT Rules 2021.
Here are five takeaways:
1 Permission a must for AI models still in testing stage
The advisory says, "The use of under-testing/unreliable artificial Intelligence model(s) /LLM/ generative AI, software(s) or algorithm(s) and its availability to the users on Indian Internet must be done so with the explicit permission of the Government of India and be deployed only after appropriately labeling the possible and inherent fallibility or unreliability of the output generated."
This part of the one-page advisory has proved to be the most contentious, with startup founders slamming the directions. It has been described as a "bad move" and "demotivating".
The outcry prompted Minister of State for Electronics and Information Technology Rajeev Chandrasekhar to clarify that the direction would not apply to startups and was restricted to significant platforms.
2 AI platforms can’t threaten poll process, spread misinformation
The advisory says that platforms have to ensure that AI models do not permit users to publish or host any unlawful content as defined under Rule 3(1)(b) of the Information Technology Rules.
Platforms also have to ensure that their "computer resource" do not permit any bias or discrimination "or threaten the integrity of the electoral process including via the use of artificial intelligence model(s)/ LLM/Generative AI/software(s) or algorithm(s)".
3 ‘Permanent unique identifier’ for AI-generated content
The advisory says if a platform creates any synthetic content that can be used to spread misinformation or deepfake, "it is advised that such information… is labeled or embedded with a permanent unique metadata or identifier..." This metadata or identifier can be used to identify the "creator or first originator of such misinformation or deep fake", the advisory says.
The government has also advised platforms to use a "consent popup" mechanism to inform users about possible inaccuracies in any output generated by AI.
4 Users ‘dealing’ with unlawful information can be punished
The government has asked AI platforms to communicate to users that if they "deal" with unlawful information, it can lead to suspension of their account or removal from the platform, and may also incur punishment under applicable laws.
5 Non-compliance can lead to penal consequences
"It is reiterated that non-compliance to the provisions of the IT Act and/or IT Rules would result in potential penal consequences to the intermediaries or platforms or its users when identified, including but not limited to prosecution under IT Act and several other statutes of the criminal code," the advisory read.