It is a curious quirk of the age that, when confronted with the outputs of large language models or predictive systems, we slip into the language of personality. We say “the AI thinks”, “it refuses to answer”, “it believes”, as if a set of statistical weights and vector embeddings could possess will, conviction or conscience. After all, we don’t treat a calculator as a colleague.
It has become fashionable to sound AI-ready, to signal technological sophistication by talking of algorithms as if they were colleagues or advisers. Such phrasing can create the illusion of control and insight, even when the underlying understanding is superficial. This habit is increasingly creeping into corporate boardrooms, policy speeches, and even regulatory drafts. And therein lies the danger. The moment we start anthropomorphising what are, at their core, probability distributions over possible outputs, we open the door to flawed governance, misplaced accountability, and the wrong kind of trust.
An AI model, whether a credit risk predictor, a fraud detection engine, or a generative chatbot, does not and cannot decide in any human sense. It calculates, conditioned on the data and instructions it has been trained upon. When it denies a loan or flags a transaction, it is not exercising discretion; it is executing statistical inference. The output is shaped by prior probabilities, model architecture, and training bias, not by the empathy, intent, or malice we might ascribe to a human underwriter or compliance officer.
Why does this matter so acutely in Indian finance? Indian financial regulators such as the RBI bring exceptional expertise in their core domain, and it is this mastery, one hopes, that will let them engage with AI effectively without losing sight of the fundamentals that underpin sound regulation. The regulatory system is actively debating what AI regulation should look like across sectors, including financial services, and exploring where guidance, boundaries, and safeguards are needed. But, worryingly, much of the thinking in our regulatory corridors, including the RBI’s recently released committee report on FREE-AI, carries an implicit mental model of AI as a quasi-human agent: something that can be “trustworthy” like a bank officer or “transparent” like a Board.
But AI is neither. It is a probability machine. Mistake its nature, and we will regulate its shadow rather than its substance.
Financial markets know what it is to live with probability machines. No one asks the Black-Scholes formula what it “thinks” about volatility; they interrogate its assumptions, stress-test its parameters, and monitor its variance. A value-at-risk model is not “confident” or “nervous”; it has an expected loss distribution that can widen or narrow with inputs. These same disciplines of model validation, back-testing, and independent review are codified in model risk management norms. AI governance needs to extend, not replace, these principles.
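To see what that discipline looks like in code rather than narrative, here is a minimal sketch in Python of back-testing a 99% value-at-risk series against realised daily profit and loss, counting how often losses breach the model’s estimate. The data, the function name, and the flagging rule are purely illustrative assumptions, not a prescribed methodology.

```python
import numpy as np

def backtest_var(realised_pnl, var_series, confidence=0.99):
    """Count how often realised losses exceed the model's VaR estimate.

    A well-calibrated 99% VaR should be breached on roughly 1% of days;
    materially more breaches suggests the model understates tail risk.
    """
    realised_pnl = np.asarray(realised_pnl)
    var_series = np.asarray(var_series)
    breaches = realised_pnl < -var_series            # loss worse than the stated VaR
    observed_rate = float(breaches.mean())
    expected_rate = 1.0 - confidence
    return {
        "breaches": int(breaches.sum()),
        "observed_breach_rate": observed_rate,
        "expected_breach_rate": expected_rate,
        "understates_risk": observed_rate > 2 * expected_rate,  # illustrative flag
    }

# Illustrative synthetic data: 500 trading days of P&L and a flat daily VaR estimate.
rng = np.random.default_rng(0)
pnl = rng.normal(0.0, 1.0, 500)
var = np.full(500, 2.33)   # 99% VaR of a unit-variance normal book
print(backtest_var(pnl, var))
```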
We do not teach probability to be moral; we teach it to be precise, and AI is no different.
Anthropomorphising probability distributions distorts policy in three ways. First, it encourages the belief that bias or error can be “trained out” by appealing to values, as one might coach an employee. In reality, bias is a statistical property of data and architecture, and mitigation demands data audits, re-weighting, constraint optimisation, and post-deployment monitoring. Second, it tempts supervisors to use qualitative impressions — “is the AI behaving fairly?” — rather than measurable thresholds — “does the model’s false negative rate differ by more than X% across protected classes?” Third, it fosters a false sense of relational trust. Customers and staff may imagine they are dealing with an intelligent counterpart; in truth, they are interacting with a brittle calculator that can fail silently and at scale.
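The second point, measurable thresholds, is straightforward to operationalise. The sketch below, in Python, computes the false negative rate for each protected group and checks the gap against a supervisory bound; the 5% bound, the group labels, and the function name are illustrative assumptions rather than anything a regulator has prescribed.

```python
import numpy as np

def fnr_gap(y_true, y_pred, group, max_gap=0.05):
    """Compare false negative rates across groups against a supervisory bound.

    y_true, y_pred are 0/1 arrays (1 = the positive class, e.g. a creditworthy
    applicant); group holds the protected attribute; max_gap is the "X%" bound.
    """
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    rates = {}
    for g in np.unique(group):
        positives = (group == g) & (y_true == 1)           # actual positives in group g
        if positives.sum() == 0:
            continue
        rates[g] = float(np.mean(y_pred[positives] == 0))  # share wrongly rejected
    gap = max(rates.values()) - min(rates.values())
    return rates, gap, gap <= max_gap

# Hypothetical usage, with arrays supplied by the lender's own pipeline:
# rates, gap, within_bound = fnr_gap(y_true, scores > 0.5, applicant_group, max_gap=0.05)
```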
When we call AI “responsible” or “ethical”, we are not describing a genuine moral capacity; we are anthropomorphising a system that is, at its core, nothing more than a probability distribution generating outputs from learned patterns. A model has no intentions, values, or conscience. Any appearance of ethical behaviour is the result of human design and curation, training it to align with our frameworks and constraints. The responsibility and ethics in such systems reside entirely with the humans who conceive, deploy, and regulate them — because only they can exercise moral judgement. To forget this is to blur the line between tool and actor, and to build policy for an illusion.
In India’s digital financial infrastructure, these statistical truths are not abstractions. UPI, Account Aggregator, and Aadhaar-linked systems mean AI models operate in high-volume, low-latency environments. A miscalibrated fraud-detection model can lock thousands of accounts in seconds. A drift-affected credit scoring model trained on pre-DPI data can systematically misjudge borrowers in the post-DPI era because the input distributions have shifted. This is not a change of “mind”; it is a covariate shift problem.
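That shift can be measured rather than narrated. A common tool in credit scoring is the population stability index, sketched below on synthetic pre- and post-DPI income distributions; the data are invented, and the rule-of-thumb bands (below 0.1 stable, above 0.25 material drift) are industry conventions rather than regulatory thresholds.

```python
import numpy as np

def population_stability_index(expected, actual, n_bins=10):
    """PSI between a training-era feature distribution (expected) and the
    live, post-deployment distribution (actual). Larger values mean more drift.
    """
    expected = np.asarray(expected, dtype=float)
    actual = np.asarray(actual, dtype=float)
    # Bin edges come from the training-era distribution, widened to cover both samples.
    edges = np.quantile(expected, np.linspace(0.0, 1.0, n_bins + 1))
    edges[0] = min(expected.min(), actual.min()) - 1e-9
    edges[-1] = max(expected.max(), actual.max()) + 1e-9
    exp_frac = np.histogram(expected, bins=edges)[0] / len(expected)
    act_frac = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor the proportions to avoid division by zero and log(0).
    exp_frac = np.clip(exp_frac, 1e-6, None)
    act_frac = np.clip(act_frac, 1e-6, None)
    return float(np.sum((act_frac - exp_frac) * np.log(act_frac / exp_frac)))

# Illustrative example: the income distribution shifts after DPI-era data flows in.
rng = np.random.default_rng(1)
pre_dpi = rng.lognormal(10.0, 0.5, 50_000)
post_dpi = rng.lognormal(10.3, 0.6, 50_000)
print(round(population_stability_index(pre_dpi, post_dpi), 3))
```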
Fraud and AML models offer another caution. They are essentially anomaly detectors over transaction graphs, where false positives can flood back offices and false negatives can miss coordinated attacks. Supervisors must think in terms of Type I and Type II error trade-offs, detection thresholds, and tail-risk exposure — not in narratives about “AI vigilance”.
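That trade-off is mechanical and can be tabulated. The sketch below sweeps an alert threshold over hypothetical fraud scores and reports the resulting Type I and Type II error rates; the base rate, score distribution, and thresholds are all invented for illustration.

```python
import numpy as np

def error_tradeoff(scores, labels, thresholds):
    """Tabulate Type I (false positive) and Type II (false negative) rates
    of a fraud score at different alert thresholds. labels: 1 = confirmed fraud.
    """
    scores, labels = np.asarray(scores), np.asarray(labels)
    rows = []
    for t in thresholds:
        alerts = scores >= t
        fp_rate = float(np.mean(alerts[labels == 0]))   # legitimate customers flagged
        fn_rate = float(np.mean(~alerts[labels == 1]))  # frauds that slip through
        rows.append((t, fp_rate, fn_rate))
    return rows

# A lower threshold floods the back office; a higher one lets coordinated fraud through.
rng = np.random.default_rng(2)
labels = rng.binomial(1, 0.01, 100_000)
scores = rng.normal(labels * 1.5, 1.0)   # frauds score higher on average
for t, fpr, fnr in error_tradeoff(scores, labels, [0.5, 1.0, 1.5, 2.0]):
    print(f"threshold={t:.1f}  Type I rate={fpr:.3f}  Type II rate={fnr:.3f}")
```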
A regulator who approaches AI as the statistical machinery it is would frame policy very differently.
* Explainability would not be a story about “what the AI was thinking” but a documented chain of data transformations, feature weights, and performance metrics.
* Accountability would not mean holding “the AI” responsible but ensuring human owners of the lifecycle can trace, test, and, if necessary, kill-switch a model (a sketch of such a check follows this list).
* Risk classification would be driven by variance, drift rates, and uncertainty under stress scenarios, similar to how macroprudential stress tests shock economic variables to see portfolio impact.
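A minimal sketch of that kill-switch idea, with hypothetical field names and thresholds: the lifecycle record carries the bounds documented at approval and a named human owner, and the model is deactivated the moment monitored metrics breach those bounds.

```python
from dataclasses import dataclass

@dataclass
class ModelRecord:
    """Minimal lifecycle record: documented bounds plus an accountable human owner."""
    model_id: str
    owner: str        # the person answerable for the model, not "the AI"
    max_psi: float    # drift threshold agreed at approval
    min_auc: float    # performance floor agreed at approval
    active: bool = True

def review_and_kill(record: ModelRecord, observed_psi: float, observed_auc: float) -> ModelRecord:
    """Deactivate the model if monitored metrics breach the documented bounds."""
    if observed_psi > record.max_psi or observed_auc < record.min_auc:
        record.active = False   # kill-switch: downstream systems must stop scoring with it
        print(f"{record.model_id} disabled; escalate to owner {record.owner}")
    return record

# Hypothetical monthly review of a retail credit model.
record = ModelRecord("retail-credit-v7", owner="head.model.risk@bank.example",
                     max_psi=0.25, min_auc=0.70)
record = review_and_kill(record, observed_psi=0.31, observed_auc=0.73)
```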
This shift also changes consumer protection. If a customer is denied a loan, the explanation must point to the statistical factors behind the score, not a comforting anthropomorphic metaphor about “judgement”. If a model behaves erratically in volatile markets, the supervisory question is not “why did it change its mind?” but “what shifted in the input distribution, how was it detected, and how fast did the institution act?”. Confidence intervals matter here.
Embedding this probability-literate mindset means drafting rules that mandate measurable performance bounds, bias metrics, drift thresholds, and scenario-based robustness tests. It means mandating model cards, audit trails, and stress-scenario documentation as non-negotiable, akin to KYC or capital adequacy norms. It also means recognising systemic correlation risk: if multiple lenders adopt the same third-party model, their misclassifications will be correlated, amplifying shocks in the same way identical credit-risk models did before 2008.
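The correlation point is easy to demonstrate with a toy simulation. Under invented default and miss rates (5% and 20% respectively), lenders running independent models make independent mistakes, while lenders sharing one third-party model all make the same mistake on the same borrower; the average number of missed defaults is unchanged, but the tail of system-wide losses fattens.

```python
import numpy as np

rng = np.random.default_rng(3)
n_lenders, n_borrowers, n_sims = 5, 2_000, 1_000
true_default = rng.binomial(1, 0.05, (n_sims, n_borrowers))   # who actually defaults

def missed_defaults(shared_model: bool) -> np.ndarray:
    """System-wide missed defaults when lenders share one model vs run their own.
    Each model misses (approves) a defaulting borrower with probability 0.2.
    """
    if shared_model:
        miss = rng.binomial(1, 0.2, (n_sims, n_borrowers))            # one error pattern
        return (true_default * miss).sum(axis=1) * n_lenders          # every lender exposed
    misses = rng.binomial(1, 0.2, (n_sims, n_lenders, n_borrowers))   # independent errors
    return (true_default[:, None, :] * misses).sum(axis=(1, 2))

for shared in (False, True):
    losses = missed_defaults(shared)
    label = "shared model      " if shared else "independent models"
    print(label, "mean:", round(float(losses.mean()), 1),
          " 99th percentile:", round(float(np.percentile(losses, 99)), 1))
```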
The RBI’s committee report gets much right: its emphasis on governance, data controls, and systemic risk is overdue and welcome. But it is worrying when “responsible” and “ethical” are framed as if the machine itself can embody those traits. Can a probability distribution “be fair” in the moral sense, or does it merely produce outputs that meet a statistical definition of parity? Can an algorithm “act responsibly” when it has no agency, only parameters and loss functions? Who exactly is being trusted: the model, or the humans who deploy and monitor it? And if ethics are reduced to a comforting narrative about AI “doing the right thing”, how will we detect when a silent drift in data has quietly turned the right thing into the wrong one? By anthropomorphising, are we not risking the same regulatory complacency that comes from believing a glossy risk model’s output without interrogating its assumptions?
Our regulators are still early enough in their AI oversight journey to avoid the Western trap of “trust frameworks” built for imaginary digital colleagues. We can instead regulate AI for what it is — statistics in action — keeping the burden of proof and control squarely on accountable humans, and measuring outcomes in probabilities, not personalities.
We do not need to strip the romance from our conversations about technology. But we must strip it from our policy. The sooner we stop humanising algorithms, the sooner we can regulate them for what they are: probability distributions that can alter the stability and resilience of our financial system. That should be the foundation of sound, responsible, and ethical AI governance.