On May 11, 2023, the European Parliament’s Internal Market Committee and Civil Liberties Committee approved a draft text for an AI regulation. The draft could become binding rules governing specific Artificial Intelligence use-cases in the EU, such as biometric identification and categorisation, predictive policing, and general-purpose AI. Negotiations with the European Council and member countries to finalise the law will begin after the European Parliament endorses the text, and the law is expected in 2025.
Last week, OpenAI CEO Sam Altman said there are technical limits to his company’s ability to comply with the proposed obligations, and that it may halt services like ChatGPT in Europe if it cannot comply with the upcoming legislation. He walked back the statement after meeting lawmakers in the EU, but his earlier remarks raise several pertinent questions about the future of AI regulation and its impact on technological innovation in this space.
Technology companies are worried that the EU's AI Act may overregulate and curb innovation in a technology that is still evolving. OpenAI is not alone here. Google CEO Sundar Pichai also met EU officials last week to discuss AI, and they agreed to enter into a voluntary pact on AI.
In EU’s Sights: AI Foundational Models
The proposed regulation emphasises the need to regulate foundational AI models. Foundational AI models produce general and versatile outputs that can be customised for a specific use-case. For example, Large Language Models (LLMs) are deep learning foundational AI models trained to analyse and predict text. In simple terms, an LLM can understand a question and predict an answer based on the data it has seen.
LLMs are the foundational architecture behind use-cases like chatbots and translation software. A website that offers customer support through a chatbot only has to acquire an LLM and fine-tune it on a training data set, and it will develop the ability to engage with customers. In most cases, customers have to choose from a set of prompts to elicit a trained response from the chatbot. ChatGPT goes one step further because it is trained on a vast pool of data and is equipped to predict and answer queries beyond preset prompts.
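To make the chatbot example concrete, here is a minimal sketch of how a general-purpose language model can be adapted for customer support. It uses the open-source Hugging Face transformers library; the model choice, the FAQ text, and the prompt format are illustrative assumptions, not details of any product discussed in this article.

```python
# Minimal sketch: adapting a general-purpose language model for customer
# support by prompting it with domain context. Model name, FAQ text and
# prompt format are illustrative assumptions.
from transformers import pipeline

# Load a small, publicly available base model (a stand-in for a larger LLM).
generator = pipeline("text-generation", model="distilgpt2")

# Domain-specific context supplied by the website operator; in practice this
# role is often played by fine-tuning on a proprietary training data set.
faq_context = (
    "Q: How do I reset my password?\n"
    "A: Use the 'Forgot password' link on the login page.\n"
)

def answer(question: str) -> str:
    """Prepend the support context to the customer's question and let the
    base model continue the text with a plausible answer."""
    prompt = faq_context + f"Q: {question}\nA:"
    output = generator(prompt, max_new_tokens=40, do_sample=False)
    # The pipeline returns the prompt plus the generated continuation.
    return output[0]["generated_text"][len(prompt):].strip()

if __name__ == "__main__":
    print(answer("I forgot my password, what should I do?"))
```

The point of the sketch is that the website operator controls only the prompt and the fine-tuning data; the behaviour of the underlying model is set upstream by its developer, which is the concern the EU proposal addresses next.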
The EU proposal cites the complexity and unpredictability of foundational AI models, and the lack of control that stakeholders further down the AI value chain have over them, as the rationale for regulating these models. Put simply, a website providing customer support through a chatbot has limited control over the foundational LLM it is built on.
So, a harm-based approach that seeks to mitigate risks as and when they arise may prove ineffective when concerns stem from the foundational architecture itself. The EU proposal seeks to create guardrails at the level of the foundational model to address this.
Big Tech’s Regulation Worry: Disclosing Training Data
Technology companies are particularly worried about the transparency obligations under Article 52 of the EU proposal. Article 52 imposes transparency obligations on certain AI systems, including foundational models, and companies like OpenAI fear it may require them to disclose trade secrets and copyrighted material used to train their systems.
Training datasets could reveal sensitive information about the unique business strategies that give an AI model an advantage over its competitors. They are also intellectual property (IP) assets that companies develop with considerable investment of time and resources. Companies are apprehensive that disclosure and third-party assessments may dilute IP protection. Similar concerns were raised in India when the Telecommunications Engineering Centre (TEC) published a draft template for AI fairness assessment in December 2022.
AI companies have an uphill task ahead of them to restore and maintain public trust in AI. OpenAI did not disclose the data used to train its latest model, GPT-4, citing the “competitive landscape and safety implications”. The decision to be secretive about GPT-4’s training data drew criticism from several quarters.
OpenAI decided not to disclose the data in view of a changing competitive landscape, with Google unveiling Bard and other generative AI models, such as Baidu’s Ernie Bot and Alibaba’s chatbot, expected to enter the fray soon. However, it is also important to build trust with the public and regulators. Google’s decision to enter into a voluntary pact with EU lawmakers until the AI Act comes into force is a positive step towards building that trust.
A similar discourse may play out in India soon as the Ministry of Electronics and Information Technology (MeitY) looks to regulate AI under the upcoming Digital India Act. The Minister of State for Electronics and IT, Rajeev Chandrasekhar, has said that rules under the law will regulate high-risk AI systems, a category the EU AI Act also carves out.
A graded approach that segregates intermediaries based on their activity, combined with minimal regulatory guardrails and self-regulation in the form of voluntary pacts or codes of conduct, could balance AI risk mitigation with India’s ambition of becoming a one-trillion-dollar digital economy by 2026.
Varun Ramdas is a technology policy researcher working at the intersection of emerging technology, public policy and society. Views are personal and do not represent the stand of this publication.