
AI and Extinction: Why are AI visionaries skirting transparency, audits and the real risks?

The Centre for AI Safety's terse statement, signed by the world's top AI entrepreneurs and researchers and warning of the risk of extinction from AI, achieved a rare consensus. But why aren't they addressing more pressing questions on the impact of AI models, the socio-economic risks they pose, and the framing of an effective auditing mechanism?

June 01, 2023 / 09:11 IST
The need of the hour is the framing of regulations governing AI, in addition to understanding the impact of such AI models

It is, without a doubt, the need of the hour for every human being to be made aware of the risks associated with Artificial Intelligence (AI). However, the risk of extinction from AI sits at an extreme end of the spectrum of discourse surrounding the topic. Without first addressing core issues like misinformation and fake news, data mining and privacy, the issuance and signing of statements, such as the one from the Centre for AI Safety signed by well-known figures in the field, which prioritises mitigating the risk of extinction from AI alongside global threats such as nuclear war and pandemics, may in fact be futile and facetious.

It cannot be denied that there is growing concern about the risks associated with AI. However, such concern must translate into actionable and auditable steps. Collectively agreeing that AI may lead to extinction, however plausible, without first determining the steps to be taken to prevent such an occurrence, may be seen as both ineffectual and an attempt at fearmongering.

A Disappointing Statement

Such fears and concerns notwithstanding, we must realise that AI and its associated technologies are at a very nascent stage relative to their potential for growth. The day when we may look up at a bright night sky lit up by a deadly blaze of incoming comets is in the distant future.

Therefore, would it not be wise to first agree on, or at least publish, a comprehensive analysis of predictions, or to deploy AI tools to map the situation and publish such doomsday scenarios?

The signing of such a statement by persons of eminence, including the respective CEOs of OpenAI and Anthropic, Sam Altman and Dario Amodei, without a plan or a road map is indeed disappointing.

The primary concern at this point should be to reach an agreement on the risks involved, and the stage that we are at. Further, can the risks be articulated, explained in context, and quantified?

How About Some Transparency?

The need of the hour is the framing of regulations governing AI, in addition to understanding the impact of such AI models, determining and understanding their primary and secondary functions, building an AI audit framework and developing an AI compliance checklist to be periodically updated and enforced.

It is of utmost importance to inquire whether leaders will demonstrate enhanced transparency regarding the mechanisms and underlying algorithms employed in these systems. Is it possible to disclose the intricacies and fundamental assumptions that propel the remarkable tools which may ultimately pose a significant existential risk? In order to address this profound threat, could we implement a rigorous audit mechanism, with the audit findings made available in a globally accessible format for public scrutiny?

It is crucial to consider the establishment of multiple audit teams, at both national and global levels, to address the gravity of this apparently “unprecedented threat”. These audit teams should consist of individuals from diverse backgrounds beyond the realm of computing. In particular, the inclusion of mental health professionals, lawyers, and historians would bring valuable perspectives.

An Invisible Adversary

It appears that nothing works like an invisible formless adversary that can be shaped or reshaped at will and convenience to suit the situation at hand. I concur that there are inherent risks associated with this phenomenon.

These risks manifest in various aspects, such as the potential loss of employment for individuals, the invasion of personal privacy due to extensive data collection and its sophisticated analysis, as well as the impact on the information ecosystem, particularly concerning misinformation and fake news.

To address these concerns, it is imperative to develop a comprehensive plan of action. Initiating this process with statements endorsed by respected figures in the technology industry would be a constructive starting point.

These statements can serve as a foundation for further discussions and actions aimed at mitigating the identified risks. Prioritising strategies to safeguard job security, protect personal privacy, and combat the detrimental effects of misinformation should be the initial focus.

The Good And The Bad

Considering the immense power and potential of AI as a tool, it would be reasonable to explore the possibility of relieving humans from the burden of continuous training. It is also worth questioning the motivations behind exaggerated concerns raised about AI, and who stands to benefit from amplifying these fears.

An actionable declaration on AI and inequality is the need of the hour. Can we have a plan for combating the risks of AI on income inequality or on sustaining and perpetuating a disturbing lack of integrity in the distribution and availability of information resources?

Can we address the risks that are present and tangible today? We need to confront critical issues like AI-enabled surveillance, both political and economic, AI-powered algorithms with unaudited bias and, above all, the critical enabling of misinformation and information asymmetry.

Visionaries And Their Hypocrisies

In the statement, it is surprising that a group of experts argues against a technology that they are breathlessly building without pause. Even as they continue to provide financial resources, personnel, and strategic impetus, they warn of grave risks and call for tighter regulation.

Could these visionary leaders advise us of the measures that they are presently taking and their impact? I hope we can begin by identifying and labelling risks honestly.

What is the real risk? Where is the long-term impact likely to be most devastating? This is what I, and I am sure quite a few others, would like to know. On the inevitable end, to paraphrase Robert Frost rather clumsily and may I add conveniently, whether with fire or with ice, it would be nice and for now certainly suffice.

Rodney D. Ryder is Founding and Senior Partner with Scriboard, a full-service law firm with an intellectual property, technology and media law practice. He is the co-author of “Artificial Intelligence and Law: Challenges Demystified”. Views are personal, and do not represent the stand of this publication.
first published: Jun 1, 2023 08:40 am