Ever since the launch of OpenAI’s ChatGPT in November 2022, artificial intelligence (AI) in general, and generative AI in particular, has gone from strength to strength. According to CB Insights, generative AI funding has touched $14.1 billion across 86 deals in 2023 alone.
At one of Europe’s largest technology conferences, Internationale Funkausstellung, or IFA Berlin (September 1-5), this trend was discussed at length across various panels. Many home appliance giants also showcased products incorporating some form of AI. Samsung Electronics’ major launch at IFA Berlin wasn’t a smartphone or a TV but the Samsung Food app, which uses generative AI to help users decide on recipes and build shopping lists. On the sidelines of the gadget displays, however, the discussions on generative AI focused on more human aspects, such as job security, regulation, media coverage of AI and more.
AI taking over our jobs
Many reports from reputed institutions have already claimed that AI will take over millions of jobs. Earlier this year, Goldman Sachs predicted that AI could replace the equivalent of over 300 million full-time jobs.
Aljoscha Burchardt, principal researcher at the German Research Center for Artificial Intelligence (DFKI), noted in a panel on ‘The promise and potential of generative AI’ that despite the advances brought about by generative AI, it is humans who will drive the transformation.
“Fifteen years ago, translators compared the work around deep learning to the Manhattan Project. Translation technology has improved a lot since then, but translators still have their jobs and, in fact, are using machine translation as an assistant in their work,” said Burchardt, highlighting how other jobs will also evolve in the AI age.
Jonas Andrulis, founder and CEO of German AI startup Aleph Alpha, noted that Germany has a labour shortage, one that will only worsen given the country’s demographics. “I believe the big challenges of this generation can be solved by using technology. There’s no point waiting to develop AI technologies (to prevent job losses) as the US and China are already racing far ahead.”
Burchardt acknowledged the labour shortage in Germany and the fact that in many sectors machines will have to be used. He elaborated on the concept of externalising cognitive tasks, giving the example of how calculators freed us to do more meaningful work.
“Now we have a more flexible technology that can learn from how we learned and interacted in the past. We will also see hybrid systems. We will definitely externalize more cognitive tasks for machines,” said Burchardt.
AI used to be a topic relegated to the technology sections of newspapers, but since the arrival of ChatGPT, it has moved to the front page. According to Ivana Bartoletti, global data privacy officer at Wipro, when the media talks about AI, it often frames it in terms of The Terminator.
“We are having an uninformed conversation. There is no nuance. There is a need for better coverage of AI and a need to get beyond the dichotomy of AI regulation versus innovation,” said Bartoletti, noting that regulation in the AI sector doesn’t necessarily hinder innovation.
The European Union recently passed a draft of the EU AI Act, a first-of-its-kind regulation around AI that is expected to become law by next year. Within days of the draft being passed, an open letter was circulated with signatories ranging from large companies such as Airbus, Siemens and Renault to AI startups.
“In our assessment, the draft legislation would jeopardise Europe's competitiveness and technological sovereignty without effectively tackling the challenges we are and will be facing,” reads a line from the letter which calls for reconsideration of many aspects of the EU AI Act.
The Anglo-centric world view of AI foundation models was also pointed out by Burchardt. “People in Africa and India are working to ensure the problematic content from the generative AI services is taken care of. But a blanket Anglo-centric model cannot be applied everywhere,” he said, suggesting that machines and algorithms have to mirror their users and be compatible with geographical diversity.
“There are a lot of similarities when it comes to regulating AI, especially when it comes to fairness and transparency. For instance, India just released its data privacy law. China has strict regulations around the use of algorithms. I think we are seeing a lot of similarities in regulation but at the same time, there will be national sovereignty discourses as well,” said Bartoletti.
The Cyberspace Administration of China (CAC) passed regulations to vet recommendation algorithms back in March 2022. One effect of these regulations has been stronger protections for gig economy workers, whose pay depends on algorithmic decisions. More recently, on August 15, the “Interim Measures for the Management of Generative Artificial Intelligence Services” came into force, with the aim of ensuring security and transparency in generative AI services. But the fact remains that China can do this because it takes a highly top-down approach to much of its regulation, with little public debate. Such heavy-handed regulation wouldn’t work in democratic countries. The EU AI Act, for instance, will still be discussed among the EU Commission, EU Parliament and the EU Council before it becomes law.
OpenAI CEO Sam Altman visited India on his global tour in June 2023, and one of his comments made quite the news. When asked how Indian startups should go about building foundation AI models, he was blunt.
“The way this works is we’re going to tell you, it’s totally hopeless to compete with us on training foundation models (and) you shouldn’t try. And it’s your job to try anyway. And I believe both of those things. I think it is pretty hopeless,” Altman had said.
Foundation models, also called general-purpose AI systems, are capable of broad tasks such as text analysis, image manipulation and audio generation. OpenAI’s ChatGPT is built on a type of foundation model called a Large Language Model (LLM), designed for text-based conversation. In AI image generation, the foundation models are diffusion models, as seen in Midjourney or OpenAI’s DALL·E 2. Foundation models are important because they are the starting point for many general-purpose AI applications.
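As a loose illustration of why foundation models serve as that starting point: a single general-purpose LLM can be repurposed for many applications simply by changing the prompt, rather than by training a separate model for each task. The sketch below is plain Python with no network call; the model name and chat-style message format are assumptions that mirror common LLM APIs, not any specific product.

```python
# Sketch: two different "applications" built on one general-purpose
# foundation model, differing only in the prompt they send.
# "some-foundation-model" is a hypothetical placeholder name.

def build_request(task_instruction: str, user_input: str,
                  model: str = "some-foundation-model") -> dict:
    """Package a task as a chat-style request to a single shared LLM."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": task_instruction},
            {"role": "user", "content": user_input},
        ],
    }

# A translation app and a sentiment-analysis app, same underlying model:
translation = build_request(
    "Translate the user's text into German.", "Good morning")
sentiment = build_request(
    "Classify the sentiment of the user's text as positive or negative.",
    "The conference was fantastic.")

# Both requests target the identical foundation model; only the
# instruction changes, which is what makes the model "general purpose".
assert translation["model"] == sentiment["model"]
```

Fine-tuning and domain-specific models build on the same idea: the expensive, general training happens once, and applications specialise it afterwards.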
Altman’s jibe on foundation models development in India prompted Tech Mahindra CEO CP Gurnani to tweet, “challenge accepted”. In a writeup on Rest of World, Gurnani said, “I remain steadfast in my faith in the Indian tech ecosystem’s ability to create AI foundation models on par with — or above — global standards.”
India's Reliance Jio and Tata Communications this week announced collaborations with Nvidia to develop AI infrastructure and foundation models in Indian languages.
Prof Joanna Bryson, a professor of Ethics and Technology at the Hertie School, Berlin, told Moneycontrol that Altman would like people to believe that there’s no point competing with OpenAI on foundation models.
“I don’t think it’s game over. Some amazing things are being done with much smaller (AI) models. There is a lot of exaggeration and chest-thumping, specifically because America is afraid that digital is a leveler. What you need to do is think of how to innovate one or two tiers down,” said Prof Bryson.
Referencing how a lot of low-energy tech innovation is coming out of African countries, Prof Bryson feels India’s diversity and knowledge base will hold it in good stead.
“I am sure India will also be doing these scaled innovations and shouldn’t let anyone shut them down,” said Prof Bryson.