
AI's imminent risk is automation of critical decisions by organisations: Zerodha CTO

In an interview to Moneycontrol, Kailash Nadh talks about fintech companies selling AI 'snake oil', the receding fear of hallucinations, the AI infra vs use-case debate and how his team's mental health improved by going back to office

September 26, 2024 / 12:41 IST
Zerodha CTO Kailash Nadh

Imagine your insurance claim has been turned down and your insurer can’t explain why. Even after you have explained everything at length, your insurer can’t give you a reason for saying no. All you get to hear is, “It's not us, it's AI”.

This is what Zerodha chief technology officer Kailash Nadh fears: the looming loss of accountability within organisations as more and more work gets automated. The real artificial intelligence (AI) risk, he says, is not rogue robots taking over the world but organisations handing critical decisions to AI.

"The real risk is not AI just taking over the world but humans indiscriminately adding these systems in place of critical decision-making," he tells Moneycontrol.

Nadh would know. Since completing his doctorate in AI over a decade ago, he has been one of the core team members at Zerodha, the country's biggest discount stock broker by revenue.

Though he describes himself as an "absurdist" with a "bleak view of the future", Nadh still strives for the best possible outcome with AI: freeing up employees from tedious tasks so they can take on other work.

In a freewheeling conversation with Moneycontrol, Nadh talks about fintech companies selling AI “snake oil”, the receding of hallucination (industryspeak for misleading results generated by AI models) fears, the AI infra-vs-use case debate in India, how his team's mental health improved by going back to office, and more. Edited excerpts:

The last time we spoke, there were a lot of concerns about hallucinations and Generative Artificial Intelligence (genAI) being a black box. It seems like the magnitude of those fears has greatly reduced in the past year. Why?

Models, both closed and open, have improved significantly since we last spoke. Hallucinations — and this phenomenon really needs a better, more technical term — are inherent to how generative AI systems like LLMs work, so that is never going to fully go away with this particular class of technology. People are still talking about these issues, but as these systems have become commoditised and commonplace, the number of people discussing their functional and utilitarian aspects has exponentially outgrown those who talk about concerns and fears. LLMs seem to have fast become a baseline expectation in no time!

There's a debate on whether India should try developing local AI infra or focus resources on use-cases. Where do you stand?

When we say, "India should...", do we mean Indian tech communities, developers, industry, academia, or the state, or all of them together? All have different incentives, skills, and capabilities.

In my view — and there are mountains of evidence throughout history — decentralised, collaborative innovation is most effective and scalable. Take LLMs, for instance. Open-weight models are being used everywhere for innovation, including in India, already.

Should different actors in India attempt R&D and innovation based on their capabilities and resources? Absolutely. Should people build use-cases? Absolutely. Should people build infrastructure? Absolutely.

Logically, there is nothing to debate if innovation is happening freely, unless there is some top-down push for a specific outcome.

The lack of a deep, collaborative academia-industry-R&D ecosystem in India to attempt the more foundational, infrastructure problems is where this debate stems from. Like Carl Sagan said, 'If you wish to make an apple pie from scratch, you must first invent the universe'. And unfortunately, we do not have that universe.

There is a major piece of marketing in fintech happening around genAI helping improve fraud and malware detection. Is that something you agree with?

This is just the Nth iteration of marketing hype with little substance. At some point it was "big data", then ML/AI on and off, and now "genAI" is back in vogue.

In 2021, I had written a rather ranty article after being annoyed sufficiently by snake-oil sellers pitching nonsense "powered by AI/ML" fintech solutions. With the advent of generative AI, it has now just exploded.

Is there any estimate of how much genAI helped improve efficiency in tasks within Zerodha? 

We've seen massive gains in efficiency and reduction of tedium for people at Zerodha. For instance, we have a team of about 100 members who sample and analyse the quality of customer support calls, several lakh of which are recorded every month.

We have orchestrated new workflows using open-source models that now analyse, classify, annotate, tag, and structure calls and present them to the team. This provides 100 percent coverage of all calls instead of relying on random sampling. It eliminates 99.9 percent of mundane effort and drudgery for the team, where they can now just focus on evaluating the quality of automatically flagged calls rather than listening to thousands of hours of random calls.
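The interview does not detail Zerodha's actual pipeline; the following is a minimal sketch of how such a call-triage workflow might be orchestrated. A keyword rule stands in for the open-source LLM that would normally classify each transcript, and the tag set, function names, and escalation rule are all illustrative assumptions.

```python
from dataclasses import dataclass

# Illustrative taxonomy; a real deployment would define its own tags.
ESCALATION_TAGS = {"complaint", "compliance_risk"}

@dataclass
class CallRecord:
    call_id: str
    transcript: str

def classify_transcript(transcript: str) -> list[str]:
    """Stand-in for an open-weight LLM classifier.

    In a real pipeline this would prompt a local model to emit tags;
    a trivial keyword rule keeps the sketch self-contained.
    """
    text = transcript.lower()
    tags = []
    if "refund" in text or "charge" in text:
        tags.append("billing")
    if "unacceptable" in text or "complaint" in text:
        tags.append("complaint")
    return tags or ["general"]

def triage(calls: list[CallRecord]) -> dict[str, list[dict]]:
    """Tag every call (100 percent coverage) and queue flagged ones for humans."""
    flagged, routine = [], []
    for call in calls:
        tags = classify_transcript(call.transcript)
        entry = {"call_id": call.call_id, "tags": tags}
        # The pipeline only flags; evaluation stays with the human team.
        (flagged if ESCALATION_TAGS & set(tags) else routine).append(entry)
    return {"needs_review": flagged, "routine": routine}

calls = [
    CallRecord("c1", "I want a refund, this charge is unacceptable"),
    CallRecord("c2", "How do I update my bank details?"),
]
result = triage(calls)
print(result["needs_review"][0]["call_id"])  # c1
```

The point of the design, mirroring the description above, is that automation covers every call while humans spend their time only on the flagged subset.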

Have any roles in Zerodha become redundant as a result of this AI workflow?

In a way, the earlier roles are now fully redundant, but we have repurposed them into new roles. We have a clear policy internally that the decision-making agency for all critical processes will lie with humans, not AI. We are following this approach across use cases and departments.
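The policy that AI recommends but a named human decides can be sketched as a simple decision gate. This is not Zerodha's actual system; the case structure, statuses, and names below are assumptions for illustration.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Status(Enum):
    PENDING = "pending_human_review"
    APPROVED = "approved"
    BLOCKED = "blocked"

@dataclass
class Case:
    case_id: str
    ai_recommendation: str           # what the model suggests
    status: Status = Status.PENDING  # the model alone can never finalise
    decided_by: Optional[str] = None

def ai_flag(case_id: str, recommendation: str) -> Case:
    """The model only recommends; the case stays pending."""
    return Case(case_id, recommendation)

def human_decide(case: Case, reviewer: str, approve: bool) -> Case:
    """Final agency rests with a named human, preserving accountability."""
    case.status = Status.APPROVED if approve else Status.BLOCKED
    case.decided_by = reviewer
    return case

case = ai_flag("txn-42", "block")     # outlier flagged by the model
assert case.status is Status.PENDING  # nothing has been blocked yet
case = human_decide(case, reviewer="analyst_1", approve=True)
print(case.status.value, case.decided_by)  # approved analyst_1
```

Recording `decided_by` is the crux: it keeps a human name attached to every critical decision, which is exactly the accountability Nadh warns is lost when the gate is removed.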

It seems that agentic AI tools are gaining traction quickly. Do you think automating decision-making is a risky thing?

I don't like the term agentic AI. It's a proper marketing term. It's just an abstraction. It's a certain kind of workflow built on top of these more foundational things like language models. Is there a risk? Yes. But is it because of the agent-like workflow? No, it's just because of how people use these tools.

If they fully trust whatever an agent generates or if they fully trust whatever the LLM chatbot generates to run it, then there's a risk.

What happens when there is no human layer in between? For example, a fraud detection system in fintech may flag and block outlier events. What happens when outliers are not frauds?

That is a big risk and it's not just a fintech thing. The real risk is not these AI systems just taking over the world but humans indiscriminately adding these systems in place of critical decision-making.

We've heard the horror stories of people's Google accounts getting blacklisted by the automated systems, then you're lost in a hell of automated chatbots and bureaucracy and you can never get your account back because there's no human process.

These generative AI models now make it extremely easy to just slap in some automated decision-making.

If organisations now offload decision-making to AI, that is the real risk.

At some point, nobody will even remember why a certain decision was taken, because that agency, process, and hierarchy within an organisation are lost.

When your insurance claim gets denied, nobody handling your case will truly know why. If that happens, which I think is already happening in many organisations, then the organisational capacity and DNA to hold people accountable for things will slowly be lost. Everybody will just say ‘It's not me, it's the system’.

Have you hired any AI engineers? Has your tech team expanded or contracted in the past year?

We have not hired any AI engineers. One developer in the team who indicated a specific interest in exploring this area devoted time to experiments and R&D and we ended up making successful deployments. Now that knowledge is slowly spreading across the team.

We have added three people in the last two years. We are at 34 people now. Skills and capacity scale up but headcount doesn't have to; it's a myth.

Are you using generative AI to code?

Pretty much every developer in the world out there is using these tools to improve their workflows. I have stopped using search engines like Google for technical queries. I just depend on Claude or GPT-4 for technical debugging. The idea, especially for technical people, of web search as a medium for finding answers is dying.

Does it also mean that a lot of things a programmer had to know are now redundant?

A good engineer will use this as another tool to make their life easier. However, new engineers should not depend on GPT to generate code without understanding it. In the short term, it'll work because these things do produce great code but I don't think it’ll automatically make them good engineers without learning the fundamentals.

Generating code is just one aspect when you design a piece of technology. The other aspect is understanding the essence of why something is being built. You have to understand consumer expectations. You have to take bets. You have to factor in business considerations. You have to factor in human psychology.

In the very long run — decades or centuries — perhaps these things won't matter anymore as software and technology mature as disciplines but in the medium term, they will absolutely matter.

You recently said in your blog that the team's mental health has improved after coming back to the office. Can you share what happened?

Again, there are a lot of nuances and elements of cultural context which I have talked about in the aforementioned article.

The gist of it is that Zerodha's history, culture, and way of working is built on well-balanced interpersonal relationships within teams that fuel collaboration.

We have always hired people for these traits, and not the traits that are essential for remote work to be successful, such as effective async remote communication and articulation skills.

During our remote work years, this fundamental incompatibility took a big toll on the team's mental health and productivity, including mine.

Last year, over 80 percent of the team reported their mental health to be in a poor state. This year, after we started coming back to the office, more than 90 percent of the team reported that their mental health is in a positive place, mine included.


Deepsekhar Choudhury covers tech and startups at Moneycontrol. Tweets at @deepsekharc
first published: Sep 26, 2024 09:35 am


