Moneycontrol PRO

The awakening of generative AI: The genie can’t be bottled but policymakers must guide its safe and ethical usage

Generative AI technology is progressing faster than our ability to compute future and present harms. Co-operation at the global and public-private levels can bridge critical information gaps, which will help frame regulations for safe and ethical use of AI

February 13, 2023 / 15:08 IST
The potential of generative AI is arguably limitless. It already helps in the production of images, improvement of low-resolution images, audio-synthesis, text generation, and may even compete with search engines. (Representational image)


While performing live recently, world-famous DJ David Guetta played a song featuring the voice of Eminem, a renowned rapper. The “collaboration” buoyed the crowd; the oblivious cheers masked the inauthenticity of the moment. The voice they were listening to, and even the lyrics, did not belong to the rapper. Both were generated by AI tools.

What initially seemed like satire (it wasn’t) slowly became a metaphor for what could be described as a watershed moment for the world. Governments, businesses, and society are all confronting the ethical and legal implications of a technology known as generative AI.

Phenomenal Versatility

Powered by machine learning, generative AI refers to algorithms that are trained on vast amounts of data to produce content in response to human prompts. GPT-3 can generate complex texts, including emails, essays, articles, poetry, and responses to medical examination questions. DALL-E 2 produces original images of any nature, composition, or imaginable style.

Ask it to create images of “alien cats with guns painted by Picasso” and the output is remarkable. If you want to listen to an interview of Steve Jobs by Joe Rogan – worry not, Podcast.ai has you covered.

The potential of generative AI is arguably limitless. It can transform existing industries and create new ones. It already helps in the production of images, improvement of low-resolution images, audio synthesis, and text generation, and may even compete with search engines. The pharmaceutical sector is using generative AI to create proteins for medicines. It is used in manufacturing to design physical objects. Gaming developers are using it to optimise their world-building processes.

The Spectre Of Harm

At the same time, the technology is progressing at a rate that outpaces our ability to compute future and present harms. Concerns range across its ability to:

* Enable misinformation by producing deep-fakes

* Violate the intellectual property rights of creators on whose works it is trained

* Threaten jobs, especially in content writing and research

* Boost cybercrime by helping create malicious emails and code

* Increase plagiarism, among other harms.

These concerns are raising interesting questions for policymakers worldwide. Europe is in the process of finalising its AI Act, which takes a risk-based approach and outlaws “unacceptable risk” use cases of AI technology. Canada, the UK, the US, and India are developing their own approaches. But how these policy developments will interact with generative AI is unclear.

The fundamental reason for this is that such regulation is outcome-oriented, not technology-focused. It is typically pegged to certain harms (privacy, discrimination, surveillance) and specific decisions (hiring, lending, public service delivery). In contrast, it is hard to predict the risks associated with generative AI, given its unprecedented nature. Harms will emerge as users discover new applications, making regulation harder to plan.

Dilemmas For IP Law

The questions raised by the risks we already know of are equally – if not more – challenging to grapple with. For instance, it is unclear how intellectual property laws – especially notions of authorship and copyright over training data – can govern AI-generated works.

This raises other questions:

* How will regulation protect copyrighted works against AI models that leverage the works of others at scale?

* How would you compensate those artists?

* And more fundamentally, what is the meaning of creativity in a world where you can generate a seemingly original work in response to one-line prompts?

* How would you reimagine a legal architecture which never even contemplated this technology?

Similar questions emerge when considering how to affix accountability for harms related to deep-fakes and misinformation.

Different Approaches To Regulation

Certain countries have already taken a lead in regulating generative AI. China, in characteristic fashion, has recently introduced rules on “deep-synthesis” technology. These are in line with its general Internet governance approach, featuring state certification, user verification, and censorship and monitoring requirements for service providers. Where personal data is involved, providers must also train their models in line with the national privacy law. There are also specific provisions targeting misinformation, deep-fakes, and copyright concerns.

In contrast, in countries like the US, the private sector has developed its own best-practices. Developers adhere to responsible AI principles and mature risk-management frameworks. Other organisations provide tools to label or watermark AI-generated content to address copyright and misinformation related concerns.

Downstream deployers have also started to declare their use of these models to enhance transparency. Alignment techniques use human-in-the-loop feedback to train models to produce less offensive language and resist misuse for misinformation, among other goals. Other proposals target the scaling and standardisation of user-notification mechanisms, which would allow users to report harmful generations.

The existing business applications of, and regulatory approaches for, generative AI only scratch the surface of what this technology holds for society. While businesses stand to gain immensely from its use, ethical concerns will also rise. The world, as a whole, needs to understand these systems better to govern them effectively. Governments stand to learn from each other, and from the private sector. Co-operation at the global and public-private levels can bridge critical information gaps and guide the safe and ethical use of generative AI.

Vijayant Singh is Principal Associate and Aman Taneja is Lead-Emerging Technologies at tech-focused law and public policy firm, Ikigai Law. Views are personal and do not represent the stand of this publication