Artificial Intelligence (AI) models anchored in machine learning algorithms are exceptionally adept at spotting patterns embedded in vast amounts of data. Trained on massive datasets, these models have demonstrated the ability to learn complex relationships—both explicit and latent—among large sets of variables, thereby transforming the landscape of predictive techniques. Their capacity to explore combinatorially explosive search spaces has helped solve many hitherto intractable problems. AlphaFold2 by Google DeepMind is one such system that, by solving the long-standing protein-folding problem and predicting a protein’s three-dimensional structure, promises to revolutionise drug discovery, proteomics, and functional genomics.
Accuracy Without Understanding
However, as numerous studies have shown, these algorithms learn what is present in the data without understanding why or how these patterns occur. Experiments have revealed that AI systems reporting high accuracy in object recognition often do so correctly only when objects appear in familiar contexts. For instance, a dog may be recognised accurately when seen in a garden or inside a home, but not when it is swimming. Such findings indicate that AI models do not necessarily learn features intrinsic to a dog; instead, they learn contextual cues associated with the environment.
This raises serious concerns about deploying such algorithms for decision-making in real-world scenarios.
The Need for Explainability in High-Stakes Decisions
Accuracy is undoubtedly a desirable attribute of any computational algorithm. However, for systems whose outputs affect people's lives, such as health or financial risk assessments, explainability and transparency are non-negotiable. Understanding causal associations between predicted outcomes and underlying input variables is essential for end users to trust AI systems as decision-makers.
For example, when adopting an AI algorithm to detect the malignancy of brain tumours, medical practitioners must be assured that the system’s decisions are grounded in biologically plausible causal factors. Given the consequences of an incorrect diagnosis, clinicians need confidence that the outcome is not influenced by image artefacts, lighting conditions, or irrelevant obstructions.
Limitations in Reasoning and Inference
The inability of the current generation of AI models to solve simple counting or inference problems further highlights significant gaps in their reasoning capabilities. Human reasoning relies on a judicious combination of quantitative and qualitative knowledge, spanning domains such as world facts, causal understanding, and semantic associations. In contrast, the maximum-likelihood-based associations learned by contemporary AI models are insufficient to emulate this form of reasoning.
Trust, Accountability and Human Oversight
Traceability or explainability of an AI model is closely linked to its trustworthiness. Without the ability to understand how and why specific inputs produce particular outputs, decision-makers find it difficult to adopt these systems, as their results are neither explainable to end users nor actionable.
Consider an AI-based financial risk prediction system that rapidly evaluates hundreds of applications. While efficient, such a system—without human oversight to inspect, evaluate, and override predictions when necessary—can lead to accountability gaps and serious legal consequences.
Bias, Context and India’s Plurality
Foundational language and vision models often exhibit biases inherited from their training data. Models predominantly trained on Western datasets frequently fail to capture the Indian context. For example, a prompt from India asking for an image of a tribal woman fishing in a forest often yields a depiction of a Native American. Similarly, these models perpetuate historical biases: an image of a nurse tending to a patient is almost invariably depicted as female.
Failure to contextualise poses a major challenge for decision-making systems in a pluralistic society like India’s. In a country where language, culture, food habits, clothing, and traditions change every few kilometres, even a task as simple as prescribing a balanced meal must account for availability, affordability, and religious beliefs. While animal protein may be acceptable to one group, others may require plant-based diets or exclude certain food categories altogether. Similar diversity exists in agricultural, ecological, financial, and healthcare practices. For AI-driven digital wellness solutions to be effective, their fairness must be evaluated across all social groups.
The Road Ahead: From Correlation to Causation
Although fully automated AI-driven decision-making systems are not yet feasible, advances in conversational interfaces and voice-based technologies can now extend digital services to remote populations. This, however, will be possible only if the underlying models are trained in vernacular languages, a task requiring enormous volumes of local data. The coming year promises significant progress as multiple organisations invest in collecting indigenous content, which will also help address contextualisation challenges.
While foundational models have revolutionised predictive reasoning and driven advances in disease prediction, personalised healthcare, and financial risk assessment, it remains premature to expect them to replace human decision-making entirely. Major technological innovations are required to embed verifiability, transparency, and fairness into AI systems.
Causal reasoning models—still in their infancy—aim to learn cause-and-effect relationships rather than mere correlations. An ideal solution would integrate the scale and power of foundational models with causal frameworks to enhance robustness, transparency, and trust. Such systems would enable users to simulate multiple futures, explore counterfactuals, and understand how and why decision boundaries change. Ultimately, the success of AI systems will depend on their ability to deliver fair, accountable, and equitable outcomes for individuals and society at large.
(Lipika Dey, Professor of Computer Science, Ashoka University.)
Views are personal, and do not represent the stance of this publication.