The Learning Curve

Regulation of generative AI like ChatGPT and Bard mustn’t hinder their growth

Generative AI raises serious concerns about jobs, education, inaccuracy, biases and disinformation. Regulation based on principles like non-discrimination, keeping humans in the loop, veracity, inclusivity, risk-based content moderation and ex-post correctives for evolving harms can help achieve positive outcomes

February 09, 2023 / 13:20 IST
Generative AI blurs the line between human-produced and AI-proposed content. (Representational image)


ChatGPT, like the 90s television character Vicki from "Small Wonder," uses AI to process and convert information into knowledge. While it offers great opportunities, it also carries the potential for harm, making regulation crucial to minimising risk.

Inaccuracy, Biased Outcomes

AI is prone to biased outcomes. Generative AI, as a predictive tool, is trained on massive amounts of data to produce human-like responses and simulate critical thinking. However, there is little clarity on the integrity, quality, and diversity of the data used to train these language models. The premise of using past datasets to predict future outcomes is concerning, as these applications are prone to reproducing the same mistakes and patterns in the future, with real-life implications for humans.

For instance, ChatGPT's training data extends only to 2021, and if an individual has exercised their right to be forgotten in the recent past, this would not be captured by the system, adding to the inconsistency of the information it provides. However, it has been reported that Bing integration might make ChatGPT more up-to-date.

Outsourcing Labour, Knowledge

Beyond the question of whether AI will replace humans completely, generative AI raises further questions:

* What would happen to labour-intensive industries with high administrative and junior-level jobs?

* Would Generative AI depress the cost of labour, especially in semi-automated sectors?

* How would it impact the creative economy and jobs which require critical thinking, reporting and content production?

Generative AI blurs the line between human-produced and AI-proposed content, making it difficult for evaluators in educational institutions to check for plagiarism and discern whether students are exercising their intellectual capability. Moreover, such applications may raise long-term concerns about children's cognitive development, where outsourcing reasoning could impair intellectual growth.

Information-related Concerns

Since some Generative AI applications use Reinforcement Learning from Human Feedback (RLHF) without proper checks and balances, the feedback system can be manipulated, which could result in misinformation.

Additionally, it is unclear whether the training data fed into these language models equips them to distinguish between truth and falsehood. Their output might also risk infringing copyright, as they are trained on a pool of data including books, articles, and journals, which gets paraphrased and replicated in query responses.

Principles-based Regulation

It is crucial to minimise the impact and harms of Generative AI to make it a success. Countries across the globe are taking steps to regulate AI, such as the recent draft of Brazil’s AI Bill, the EU’s AI Act, and the US National Institute of Standards and Technology’s Artificial Intelligence (AI) Risk Management Framework (RMF). NITI Aayog too has produced a series of discussion papers putting forth various principles for the responsible use of emerging technologies. While these regulatory measures attempt to make AI systems trustworthy through risk management, there is little discussion of concerns specific to the content these systems generate.

A co-ordinated regulatory approach is required to establish principles-based regulation, incorporating new principles that cater to evolutions like generative AI alongside key existing principles such as non-discrimination and humans in the loop. The new principles must include veracity, quality and diversity of data, children’s safety, risk-based content moderation, authenticity, preservation of critical thinking, ethical utilisation, and inclusivity.

Besides, regulation must differentiate between ‘impact’ and ‘harm’ to map responsibilities accordingly. The former refers to the error rates of Generative AI applications, such as the rate of inaccurate information or disparate errors, whereas the latter covers tangible and intangible real-life implications, like the spread of dis/misinformation, unfair exclusion, and damage to the creative economy.

Other Regulatory Challenges

Some of the existing and upcoming global AI regulations tackle “impact” (which arises at the development stage) through ex-ante measures for the systematic identification of harms. However, gaps remain at the user level, where unanticipated “harms” emerge. These harms are a consequence of abuse and misuse of the technology, and must be addressed through ex-post regulations.

International-level regulatory consensus building is crucial as the impact and harms of Generative AI technologies move beyond borders. Also, consistency with other existing and upcoming regulatory frameworks at the domestic level like data protection, consumer protection, intellectual property rights, etc. is essential.

Besides, while establishing principles-based regulations, it is important to resolve ethical dilemmas. For instance, while we advocate improving data quality through more representative and diverse datasets, intellectual property rights must be protected at the same time.

The government must also help AI developers implement principles and codes of ethics within their processes by issuing operational guidelines, SOPs, awareness programmes, and private consultations. Companies should also hire AI developers from ethnically, culturally and socially diverse backgrounds to ensure the technology is as inclusive and unbiased as possible.

Kazim Rizvi is the Founding Director of The Dialogue, a think-tank working in the intersection of tech, society and policy. Kamesh Shekar is Programme Manager, The Dialogue, and leads the data governance vertical. Views are personal and do not represent the stand of this publication.



