In securities law specifically, and in personal finance more generally, questions have been raised about the impact of AI models such as ChatGPT, Google Bard and Bing Chat. Would AI replace financial advisors, or at least substantially reduce the value of their services? Should SEBI intervene?
Should SEBI treat such AI models as intermediaries, requiring registration and a code of conduct with disclaimers, due diligence and so on, as it does for Research Analysts and Investment Advisers? Should SEBI's proposed new guidelines for finfluencers also apply to these tools?
But the larger question is: have such publicly available models, at least in matters of personal finance, advanced that far?
They look formidable. I have used them for many purposes, with mixed results. For informal tasks, such as drafting a casual email or a greeting, the results were good. But for more serious work, such as drafting or editing an article, they were a waste of time, except that they fed my curiosity.
Querying AI On Personal Finance
On personal finance, I put simple queries to the three major AI models. Even these produced platitudes, albeit good ones. Shockingly, there were also basic calculation mistakes, and some important parameters were ignored.
I raised basic personal finance queries from the perspective of a 28-year-old youngster, a couple in their forties with a home mortgage, and a couple nearing retirement. I asked questions such as how to plan and allocate a portfolio, how to save for a house, and how large a home loan could advisably be taken. And, finally, how large a retirement fund a couple nearing retirement should have.
It is well known that AI models rely on published material from across the web, collated and digested, and this clearly showed in the answers. One model specifically cited sources for its replies; the others did not. The only value addition was instant, well-rewritten advice, wisdom culled from elsewhere. Without AI, one would have hunted through the web for hours, or even days. Yet a lack of depth was apparent. The answers seemed like substantially improved versions of the superficial material already available on the web.
A Good Start And Then Downhill
Worse, there were elementary mistakes and omissions that resulted in useless and even absurd answers. Take the query about the retirement fund needed. I asked about a couple in India, around 55 years of age, with current annual expenses of Rs 12 lakh. While Bing simply referred to a financial newspaper article and suggested Rs 5 crore, ChatGPT and Google Bard proceeded with fair assumptions about the inflation rate, life span and even a retirement age typical of India.
But after a series of bizarre mistakes, ChatGPT arrived at a total retirement fund of a measly Rs 87 lakh, which anyone reasonably well versed in personal finance will recognise as completely wrong, while Google Bard erred in the other direction by suggesting an intimidating Rs 10.50 crore. Basic errors, such as misplaced decimal points, calculating the inflation-adjusted amount only up to the date of retirement instead of over the whole retirement period, and multiplying where division was needed, led to these two figures.
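For readers who want to sanity-check such answers themselves, here is a minimal sketch of the kind of arithmetic involved. The inflation rate, retirement age, life expectancy and post-retirement return below are illustrative assumptions of my own, not figures from the AI models' replies, and a real plan would refine all of them.

```python
# Illustrative retirement-corpus arithmetic for a couple aged 55 with current
# annual expenses of Rs 12 lakh. All rates and ages below are assumptions.

current_annual_expenses = 12_00_000   # Rs 12 lakh per year today
current_age = 55
retirement_age = 60                   # assumed
life_expectancy = 85                  # assumed
inflation = 0.06                      # assumed 6% per annum
post_retirement_return = 0.07         # assumed 7% per annum earned on the corpus

# Step 1: inflate today's expenses to the first year of retirement
years_to_retirement = retirement_age - current_age
expenses_at_retirement = current_annual_expenses * (1 + inflation) ** years_to_retirement

# Step 2: value the stream of inflation-rising expenses over the full retirement
# period, using the real (inflation-adjusted) return on the corpus
real_return = (1 + post_retirement_return) / (1 + inflation) - 1
years_in_retirement = life_expectancy - retirement_age
corpus = expenses_at_retirement * (1 - (1 + real_return) ** -years_in_retirement) / real_return

print(f"Expenses in first year of retirement: Rs {expenses_at_retirement:,.0f}")
print(f"Indicative corpus needed at retirement: Rs {corpus:,.0f}")
```

Under these assumptions the corpus works out to roughly Rs 3.5 crore, somewhere between the two figures the models produced, which is exactly the sort of rough check that would have exposed their errors.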
What struck me was that human oversight would have caught all these errors. Interestingly, I asked the models the same question again and again. Each time, they gave a different method and a different figure, again riddled with mistakes and omissions.
Humans Ahead. For The Moment
One could go on and on. But, to me, the optimism about AI, and even the fear of it, seems overrated at this stage. Firstly, it helps with basic, laborious tasks quickly and intelligently, but since it draws on existing material and is necessarily generalised, more work is needed to refine the output. Secondly, basic calculation mistakes slip through repeatedly. Thirdly, important factors are left out of the calculations. Fourthly, and partly as a consequence of the previous two, the absence of human oversight before an answer is released lets absurd results through.
To conclude, let us return to the opening questions in light of this admittedly anecdotal experience. Firstly, in personal finance, the results are rudimentary and error-prone; at best they can be used for education, with the specific figures treated warily or ignored altogether. Secondly, it follows that SEBI does not need to worry much about regulating these tools: they come nowhere near a reasonably qualified intermediary. The best SEBI could do is educate the public about the pitfalls of such models.
This is not to say that AI will not improve, or that it does not already do stupendously good work in many areas. But it will take many more iterations before it can be taken seriously in areas like personal finance. Even then, whether it can compensate for the lack of human oversight remains to be seen. Humans may remain supreme after all, not replaced, even if many jobs are lost.
Jayant Thakur is a chartered accountant. Views are personal, and do not represent the stand of this publication.