Artificial Intelligence (AI) is no longer fiction or even a far-fetched possibility. With advances in technology and the growing need for automation, AI has become ever more present in day-to-day life. It is permeating all spheres of human activity, be it facial recognition, detecting cancer cells or even driving vehicles.
While the launch of AI tools like ChatGPT, Bard, DALL-E and DALL-E 2 has fuelled discussion among policymakers about regulating the use of AI, this increased dependency also brings with it a host of issues, including the ability of AI to directly or indirectly cause harm: loss of privacy, financial loss, loss due to inaccurate information, and physical damage to property or injury to humans caused by autonomous machines. These issues have become a matter of concern for regulators globally.
Ability to Take Decisions
This has triggered discussion in several jurisdictions about according AI the status of an artificial person or juristic personality, even though currently no law accords AI such a status. The strongest argument for according this status to AI, and not to traditional machines or even computers, is AI's ability to take autonomous decisions, interact independently and self-learn, and therefore its ability to cause harm or damage that cannot always be attributed entirely to its creator or user.
Computer-generated output may not always reflect the intention of the creator or the user; AI may function in a manner not contemplated in advance by either, and may cause physical or financial harm to third parties. Such ‘unintended’ output or operation may result from programming errors, interactions with other systems, external interference (such as hacking) or self-learning not contemplated in advance. The incorrect answer given by Google's Bard during its test run, which led to a dive in Alphabet's shares, is a recent example of such AI-caused inaccuracy. Scholars have stated that these AI tools “have a tendency to confidently state incorrect information since they are trained on massive amounts of text and analyse patterns to determine word sequences, but they do not query a database of proven facts”.
Providing artificial person or juristic personality status to AI would mean accepting AI under law as a legally recognised entity independent of its creator or user, one with its own rights and obligations, the ability to own assets and enter into contracts, liability for its actions, and the capacity to sue and be sued. This concept is not alien to Indian law, or to the laws of other countries, which have in the past accorded juristic personality to non-human entities including bodies corporate, ships, idols, rivers, forests and even pet animals.
Difficult to Apportion Liability
However, according artificial person or juristic personality status to AI would present its own challenges. For example, it would be difficult to attribute and apportion liability for a loss caused by AI between the AI, its creator and its user. Similarly, how would the AI discharge any liability, and from where would it derive the financial resources to settle it? For criminal actions, while fines and financial liability can be imposed on AI, how would punishments such as imprisonment be enforced against it? Unlike bodies corporate accorded juristic personality, where the ‘officer-in-default’ (a natural person responsible for the entity's affairs) is liable for criminal actions, there may be no identifiable natural person who could be held liable for the criminal actions of AI. In the case of a body corporate with a juristic personality, a clear line of liability can be traced to the persons in charge by lifting the corporate veil; no such line of liability is traceable in the case of AI.
On the other hand, if the creator or user is held liable as the ‘officer-in-default’, that would defeat the purpose of according artificial person or juristic personality status to AI. Similar issues of identifying the guilty person and tracing the line of liability would arise with tax liabilities flowing from AI's actions; in determining negligence, or the extent of negligence or contributory negligence, on the part of the AI, the creator and the user so as to fix their respective liability; and in establishing guilty intention, which is the cornerstone of criminal liability in many jurisdictions.
Regulators in the US faced a similar dilemma over accountability in a case currently under trial: after an Uber self-driving car was involved in a fatal crash in 2018, the operator of the self-driving car was charged over the crash while Uber was let off with a warning.
Debate in India
In India, the Ministry of Electronics & Information Technology (MeitY) constituted four committees to promote AI initiatives and develop a policy framework; their reports are available for public scrutiny. The draft report of the Committee on Cyber Security, Safety, Legal and Ethical Issues recommends granting AI artificial person or juristic personality status, but ultimately leaves the decision to stakeholders. It also recommends that legal personhood be accompanied by an insurance scheme or compensation fund to compensate for damages.
There is also strong opposition to granting artificial person or juristic personality status to AI, on the ground that such a status would help absolve owners, companies and founders of responsibility for their creations. A separate legal identity could allow owners to escape liability: the creator and user would always be tempted to attribute liability to the AI instead of taking responsibility for their own actions or omissions.
Though the discussions surrounding artificial personhood continue and have intensified recently, several issues require in-depth understanding and workable solutions before any framework is put in place. AI is a creation built to assist humans by simplifying tasks and increasing efficiency. Any legal identity granted to AI must be carefully thought through, so that we are not left grappling with the consequences of endowing it with human-like qualities that would entitle it to comparable recognition before the law.
(With contributions from Poorvisha Jindal, Associate at Phoenix Legal)
Pranav Srivastava is Partner and Kripi Kathuria is Associate Partner at Phoenix Legal. Views are personal and do not represent the stand of this publication.