
Regulation of generative AI like ChatGPT and Bard mustn’t hinder their growth

Generative AI raises serious concerns about jobs, education, inaccuracy, biases and disinformation. Regulation based on principles like non-discrimination, keeping humans in the loop, veracity, inclusivity, risk-based content moderation and ex-post correctives for evolving harms can help achieve positive outcomes

February 09, 2023 / 13:20 IST
Generative AI blurs the line between human-produced and AI-proposed content. (Representational image)

ChatGPT, like Vicki from the 90s show "Small Wonder," uses AI to process information and convert it into knowledge. While it offers great opportunities, it also poses potential harms, making regulation crucial to minimising risk.

Inaccuracy, Biased Outcomes


AI is prone to biased outcomes. Generative AI is a predictive tool, trained on massive amounts of data to produce human-like responses and an approximation of critical thinking. However, there is little clarity on the integrity, quality, and diversity of the data used to train these language models. The premise of using past datasets to predict future outcomes is concerning: such applications are prone to reproducing past mistakes and patterns, with real-life consequences for humans.

For instance, ChatGPT's training data extends only to 2021, so if an individual has exercised their right to be forgotten in the recent past, that change would not be reflected in the system, adding to the inconsistency of its information. However, it has been reported that integration with Bing might make ChatGPT more up-to-date.