
Is a Global Central Force needed for Artificial Intelligence regulation?

The debate over a global central governance body for AI raises important considerations. While centralisation can promote standardisation and address transnational challenges, a decentralised approach may better accommodate national sovereignty and diverse ethical frameworks

October 25, 2024 / 13:07 IST

By Vidhi Sharma and Sagar Vishnoi

The rapid advancement of Artificial Intelligence (AI) technology has raised a critical question: should governance rest with a global central force, a single global institution, or remain distributed across multiple forums? Drawing lessons from historical governance structures and considering the unique nature of AI, we argue that while a global force offers some benefits, it also comes with significant drawbacks that make a decentralised approach more suitable, especially in the early stages of AI governance.

Historically, the international community has debated the merits of centralised versus decentralised governance across various domains. As Peter Cihon, Matthijs M Maas, and Luke Kemp point out in their 2020 paper, "Should AI Governance Be Centralised? Design Lessons from History," centralised governance refers to the concentration of oversight, regulation, and coordination of policy issues under one global entity, such as the World Trade Organization (WTO) in the trade domain. Conversely, decentralisation is a fragmented structure where multiple bodies govern the issue with overlapping mandates and areas of jurisdiction. In the case of AI, this debate takes on particular urgency due to the technology's rapid evolution and far-reaching implications.

Challenges in Global Governance

The current global regulatory landscape for AI is already fragmented. For example, the European Union's AI Act is a "Regulation" that applies directly in all member states, whereas the United Kingdom relies on existing regulators to interpret AI principles within their domains. In the United States, AI governance is handled by a mix of federal and state initiatives, executive orders, and actions from agencies like the Federal Trade Commission (FTC). This divergence means compliance obligations vary significantly by location, making it difficult to create a unified global framework for AI governance.

Another obstacle to centralising AI governance is the lack of a universal definition of AI. Different regulatory bodies, countries, and international organisations define AI in varying ways. For example, the European Union AI Act defines AI differently from the OECD, and in the United States, various states have proposed different definitions. This definitional challenge is not unique to AI; similar issues have arisen in fields such as environmental regulation and energy governance. Yet, despite these challenges, there has been no significant push for centralised governance in these areas, raising doubts about the necessity of centralising AI governance.

A centralised AI governance body would have to navigate these definitional differences, potentially leading to disagreements among member states and further delays in policymaking.

Third, the current regulatory landscape for AI is characterised by inconsistent approaches across different regions. Regulations range from binding laws to non-binding guidelines, and enforcement is often left to existing regulators with varying levels of authority. For instance, in the UK, existing regulators implement AI principles without creating new legal obligations, which can lead to uneven interpretation across sectors. In contrast, the EU's AI Act establishes new regulations and enforcement bodies at the national and EU levels, including the creation of an AI Board and AI Office. In the US, agencies like the FTC, the Equal Employment Opportunity Commission (EEOC), and the Department of Justice (DOJ) claim authority over AI-related issues, with states potentially introducing additional regulations.

This fragmentation complicates the task of centralising AI governance. Different regions have different priorities, from data privacy in the EU to innovation promotion in the US. A one-size-fits-all approach would likely fail to address these diverse concerns adequately.

Historical Examples of Global Centralisation

Lessons from other international governance regimes reveal the potential pitfalls of centralisation. Environmental governance, for example, remains decentralised, with no single institution wielding absolute authority. This fragmentation has led to inefficiencies and conflicts, such as the challenges around regulating hydrofluorocarbons (HFCs), where ozone-protection agreements clashed with climate change efforts. Centralised institutions are also often slow to adapt to new developments, as seen with the Kyoto Protocol, which took three years to negotiate and eight to enter into force.

AI governance would likely face similar challenges. Establishing a centralised body would require extensive international negotiations, delaying its creation by years—during which time AI technologies would continue to evolve. In contrast, a decentralised system allows for more agile responses, as different countries and organisations can experiment with tailored regulations that address their specific concerns.

One argument in favour of centralisation, however, is that concentrating political power in a single body allows for stronger, more consistent governance. This model has proven effective in areas such as global trade, where the World Trade Organization (WTO) has established clear rules and standards that most nations adhere to. A centralised AI governance body could similarly enforce global norms, ensuring that AI technologies are developed and deployed ethically across jurisdictions.

Yet AI's diversity of applications, ranging from healthcare to military uses, makes it difficult to house all governance under one global institution.

These arguments themselves generate a separate "analogy debate". One common approach to regulating emerging technologies is to draw analogies to other governance frameworks. AI has been compared to nuclear power, particle physics, and even pharmaceuticals. For instance, Sam Altman and other leaders from OpenAI have suggested that AI governance could be modelled on the International Atomic Energy Agency (IAEA), which oversees nuclear safety. Similarly, the German Research Center for Artificial Intelligence has proposed that AI governance could follow the model of CERN, the intergovernmental body responsible for fundamental research in particle physics.

However, these analogies are imperfect. Nuclear power and particle physics are highly specialised fields with limited applications, whereas AI is a general-purpose technology with far-reaching implications across industries. As Nelson argues, AI is not merely a danger to be controlled; it is also a tool with immense potential for social good. Focusing too heavily on the risks, as we do with nuclear technology, could stifle innovation in areas such as healthcare, climate change, and education, where AI could bring transformative benefits.

Limitations of a Global Central Force

The decision on whether to establish a global central force for AI regulation will likely depend on a careful balance of competing interests, sovereignty, and priorities. A centralised approach would standardise frameworks, promote best practices, and address transnational challenges. But bureaucratic roadblocks would require strategic intervention, and any international cooperation should leave nations enough autonomy to hold consultations on their own frameworks for consistent AI ethics and safety standards. A balanced approach, granting authority to a centralised institution while preserving a window for sovereignty, may be the way forward.

(Vidhi Sharma runs an AI tracker of global news on governance and emerging technology, and Sagar Vishnoi is the Co-Founder of Future Shift Labs, a think tank working on AI and governance.)

Views are personal, and do not represent the stance of this publication.


Moneycontrol Opinion
first published: Oct 24, 2024 11:03 am

