
Accountability For AI: Dependency on Artificial Intelligence will spawn chaos and mistrust without regulation

In the euphoria over ChatGPT’s rapid progress, we shouldn’t turn a blind eye to the risks posed by opaque logic that drives AI decision making. Absence of governance standards to address anti-competitive and malicious actions could lead to disastrous consequences

March 06, 2023 / 08:41 AM IST
AI-based decisioning has been touted as a transformative tool to eliminate human oversight and bias. (Representative image)

Advancements in natural language processing that enable conversational engagement with Artificial Intelligence (AI), coupled with open-source access to AI-driven tools, have chipped away at the traditional skepticism surrounding AI and its capabilities.

ChatGPT’s answers have been called “a proximate echo of human speech”, and this human-like engagement with AI has, to some extent, softened the defensiveness we once harboured against the deployment of AI. Our amusement at how ChatGPT mimics human artistic and abstract abilities has made us empathetic to its inaccuracies. However, these inaccuracies, viewed against the pervasiveness of AI as an aid to decision-making, underscore the need for regulation.

Trusting AI Too Much?

AI-based systems are already outperforming human specialists in tasks ranging from composing poetry to assisting in surgery. Naturally, reliance on AI to make critical decisions has been steadily growing – from determining creditworthiness and eligibility for financing to ratifying hiring decisions based on AI-powered background checks. However, reliance on AI without regulation undermines the accountability that we have come to expect from its designers.

Although it convincingly exhibits intelligence, AI interprets situations within the contours of the logic it has been trained on, and its intelligence is formed from data sets that may not apply to every situation. Accordingly, AI is guided by the logic it has learned, and applies it within tightly defined parameters.

Human decisions are constantly scrutinised based on varying parameters, ranging from ethics to reasonableness to professional prudence. In stark contrast, society has adopted a trusting reliance on AI-driven decisions, despite the logic used to arrive at such decisions being cloaked in confidentiality and often obfuscated in the face of scrutiny.

AI decisioning, although “data driven”, must be capable of being audited, so that malicious logic, prejudice or bias inherited from the architects and designers of that logic can be detected before it creeps into decision making.

Recognise AI’s Limitations

Whilst appearing ostensibly data-driven and bereft of human infirmities such as emotion or prejudice, AI decisioning tools may in fact exacerbate biases inherited from their human creators. For example, self-preserving or self-promotional logic taught to an AI-powered search engine may result in the elimination or suppression of results, articles or content that is critical of that search engine.

Another example is how app-based taxi-hailing services determine surge pricing using AI. Allegations have been rife that certain apps have used data to ascertain that users are more likely to accept surge pricing if their mobile phone battery is low and they are far from public transport.

Ironically, this is reminiscent of the price gouging by taxi drivers, based on extraneous factors perceptible to humans, that accelerated the adoption of app-based taxi services in the first place.

How AI May Undermine Itself

For unsavoury decisions we would rather not have on our conscience, there has been an increasing reliance on AI decisioning as a way of offsetting guilt. Yet in situations where the stakes are amplified, such as determining who should be prioritised in an emergency room, even the staunchest advocates of AI adoption would cringe at blind reliance on AI.

While AI-based decisioning has been touted as a transformative tool to eliminate human oversight and bias (with AI-driven outputs being predominantly data-driven decisions), the very biases that humans are fraught with often trickle into the logic that drives AI-based decision making.

These exaggerated examples merely underscore some of the insidious ways in which reliance on AI, without accountability, will eventually erode trust in AI-based decision making.

With this ever-increasing reliance come challenges in regulating and setting governance standards for AI in a manner that preempts and prevents adverse or illicit end-uses. AI-driven decisions are often based on opaque logic, and in the absence of regulation or auditability, ever-changing parameters allow anticompetitive or malicious logic to be introduced into AI and then retracted without consequence.

Liability For AI-based Decisions

Until a clear consensus emerges on accountability and the attribution of liability arising out of reliance on AI-driven decisions, government regulators are saddled with the unenviable responsibility of framing an evolving regulatory framework. The design and deployment of AI must be governed in a manner that balances the need to foster innovation with the need to ensure transparency and accountability.

Transparency in AI-based decision-making is predicated on the traceability of developer inputs, and the auditability of the source code used in developing AI is key to instilling confidence in it. Any statutory framework adopted must first address this need by holding AI developers accountable, ensuring they do not exploit the murkiness of the logic that drives their systems.

Regulating a technology when its end uses are not fully understood admittedly seems counterintuitive, even suppressive. However, in the wake of the crypto meltdown, it has become apparent that regulating tech products to prevent adverse outcomes can accelerate public confidence in them.

For example, adopting a framework that ensures AI is deployed with regard for personal privacy, ethical considerations, and the moral guardrails we have come to take for granted could build trust and reduce resistance to the adoption of AI. Rather than scrambling to legislate after an adverse incident, allowing AI to operate in an ecosystem where broad-based guidelines apply would limit creative avenues for abuse.

Akash Karmakar is a partner with the Law Offices of Panag & Babu and leads the firm’s fintech and regulatory advisory practice. He tweets @akashxkarmakar. Views are personal and do not represent the stand of this publication.
