The timing and theatrics of Geoffrey Hinton’s warning about the potential for artificial intelligence (AI) catastrophe, the necessity of regulation, and the White House’s response were intriguing and raised questions about their purpose. They were followed recently by another high-visibility Congressional hearing with Sam Altman of OpenAI. In the natural course of business, the US government’s exploration of AI monitoring and regulation would seem to be another endeavour by the US to take the lead on a global challenge, one that potentially found a voice in Hinton’s arguments. The almost integrated messaging emerging from Hinton’s series of interviews, the US government’s communication and Altman’s hearing made everything appear strategically scripted to a purpose.
The high-decibel moves in the US triggered a global discussion in which control and regulation emerged as the default means to safeguard against potential AI risks, which is too simplistic a view given the huge stakes involved. Hinton and the US government simultaneously changed their stance in favour of regulating AI, despite having long argued that regulation would stifle innovation. Meanwhile, the AI companies involved in the discussions chose to remain silent on the White House’s significant policy shift, in contrast to the scenario last year, when serious questions were raised after a whistle-blower made a disclosure about AI turning sentient. Next, Altman echoed Hinton’s argument but suggested that the government restrict the development of advanced AI to only a few licensed and regulated players, who would provide AI-based platforms to others. This contrasts with the regulation model of other sectors like healthcare, food and automobiles, which is competition- and innovation-friendly.
Shaping Public Opinion
The story emerging from the US may be successfully shaping public opinion internationally, which is critical for policymakers, but in this high-stakes power play, some key strategic elements offer a more holistic perspective.
Firstly, it is questionable why Hinton waited until AI products and solutions had landed in the market and built a global following. As the “Godfather of deep learning”, Hinton ought to have known about both the opportunities and the risks, since the technology had been in development for years. Sceptics are bound to question Hinton’s sudden push for AI regulation, and the argument for involving AI companies in regulating AI. Altman went a step further and advised how selecting a few AI companies would enable regulation.
Secondly, while the US government’s actions may support the process of building a global consensus on regulating AI to manage potential risks, they also catch other countries off guard, as most governments in other jurisdictions are still trying to build a better understanding of AI. While the Australian government has sought scientific advice, and the UK and the EU have argued for future mechanisms focussed on user and IPR protection, the Indian government had formally announced that it would not regulate AI.
US, China Approaches
Finally, the US and China, the primary contenders for AI supremacy, until recently had contrasting approaches. The US government advocated against the regulation of AI and focussed on enabling innovation by giving US companies a free hand while arguing for self-regulation. This helped foster a private sector-led AI ecosystem and the development of multiple AI platforms and applications, which now seem to have matured. Sectoral regulators did get involved when required, as with the Boeing 737 Max and self-driving cars.
The Chinese government, in contrast, has been very hands-on and deeply involved. Chinese companies like Baidu were seen rolling out their AI-based consumer products earlier this year, alongside extensive AI applications in cutting-edge defence and space projects. In contrast to its typically muted approach, China announced plans for regulation and has been rolling out rules.
With the White House also shifting its stance, the two countries dominating the AI ecosystem are now arguing for its regulation. However, the push by the US and China to regulate AI could limit access to AI technologies and compromise the development of domestic AI capabilities in other countries. Those countries would become increasingly dependent on solutions and technologies developed by American and Chinese companies, and geopolitical considerations would also come into play. This puts considerable pressure on other governments to act, and each country is bound to view it differently, especially in the current geopolitical scenario. Policymakers in other countries, including India, would be reminded of the sanctions restricting Russia from using the SWIFT banking system, and of the US’s protectionist approach to the export of AI technology. China is among the countries subject to US AI controls under the Export Control Reform Act (ECRA). While India’s decision to keep AI out of the scope of regulation was on expected lines, given its access to technical skills and capabilities and its desire to be independent, it may be pushed to rethink.
As with any critical sector, control and checks are needed for AI technology, but regulation of the technology itself would concentrate power and gains in the hands of a few. Since everyone would want to gain from AI technology, much like GPS, finding common ground is critical, at least until alternatives are available or domestic technology matures.
Rahul Gossain is a communications strategist and works at the intersection of business, law, policy, ESG and climate impact. Views are personal, and do not represent the stand of this publication.