
Sam Altman’s advice to exempt smaller companies and startups from AI regulation is problematic

It is important to nuance the conversation on “big vs small”, and look for an approach that identifies harmful use of AI. Regulators need a better understanding of AI architecture to gauge what can be regulated. Startups deploying AI and Indian use-cases of AI must be studied. India must strike the right balance between harm reduction and innovation. An impulse to hurry might backfire

June 14, 2023 / 16:36 IST

Sam Altman, chief executive of OpenAI, was in India last week, and met Prime Minister Modi and other officials at the PMO. The 38-year-old chief of the company that released ChatGPT has met officials in several countries as he attempts to shape the conversation around AI governance. Notably, he was in the EU before India, another jurisdiction that is discussing AI regulation. The EU recently adopted a negotiating mandate for the proposed EU AI Act, and India may include AI regulation in the proposed Digital India Act.

Sam Altman’s visit and public comments in India have elicited mixed reactions. Importantly, he called for more nuance in the conversation around AI regulation, saying that larger AI companies could be regulated but that smaller companies and startups should be exempted from scrutiny.

Graded Approaches In Tech Regulation

Placing more obligations on bigger companies and fewer on smaller ones is not new. There are several examples of this approach in India and in other jurisdictions. The 2021 Intermediary Guidelines and Digital Media Ethics Code created a new category of intermediaries, ‘significant social media intermediaries’, defined as those with more than 50 lakh registered users. They are subject to additional content filtering, user verification, and grievance redressal obligations.

The Digital Services Act in the EU created a class of ‘very large online platforms’ (VLOPs), those with more than 45 million monthly active users, along with ‘very large online search engines’. Once the law comes into effect, they must follow additional transparency requirements and publicly disclose more information about the algorithmic processes behind content recommendation and moderation.

Grading platforms on the basis of user thresholds does not address regulatory objectives effectively. Threshold-based regulation of social media platforms sought to curb the virality of violent or harmful content. But harmful content often circulates in smaller echo chambers, such as 4chan in the United States or the Bulli Bai app in India. Further, the EU’s VLOP list includes Wikipedia, whose content moderation practice is distinct from that of other social media platforms: volunteers moderate content on Wikipedia, while automated mechanisms and content policies do so on platforms like Facebook and Twitter.

Not As Simple As Big Vs Small

A graded approach to AI regulation that simply segregates the ‘big’ from the ‘small’ is likely to ignore key technical nuances, as the EU list shows, or prove ineffective. To address this, it is necessary to unpack AI regulation from both ends.

At one end, regulators need a better understanding of AI architecture to gauge what can be regulated. At the other, it is important to understand how startups deploy AI and to study Indian use-cases of AI.

For instance, a large food delivery platform like Swiggy or Zomato may integrate generative AI into its chatbot, but that chatbot is unlikely to be fed training data that could produce harmful content. Conversely, a small startup whose model is trained on police records to screen prospective tenants moving into an apartment complex is likely to reproduce the biases present in that data.

Don’t Rush Into Regulation

To this end, it is important to nuance the conversation on “big vs small” and look for an approach that identifies harmful uses of AI. The proposed EU AI Act is a step in this direction because it singles out use-cases like police facial recognition for further scrutiny.

The latest amendments to the Intermediary Guidelines follow the same principle: they segregate online games that involve money from other online games. Real-money games require additional verification, and that verification mechanism is meant to weed out illegal betting and gambling applications that lead to harms like financial addiction and money laundering.

India must strike the right balance between harm reduction and innovation in the case of AI governance. It is important to be patient, understand nuances of the underlying technology and study use-cases in India.

More research on AI technology and a market study of AI use-cases should be the first milestones of India’s AI regulation journey. Any impulse to hurry the process might backfire and stifle innovation as a consequence.

Varun Ramdas is a technology policy researcher working at the intersection of emerging technology, public policy and society. Views are personal and do not represent the stand of this publication.


