
The rapid global push to democratise artificial intelligence may be overlooking a more fundamental challenge – whether these systems are technically correct in the first place. Subbarao Kambhampati, professor of computing and augmented intelligence at Arizona State University, said that wider access to AI models does not automatically translate into reliability, safety, or societal benefit.
Challenging the prevailing optimism surrounding AI democratisation, Kambhampati told Moneycontrol in an interview, “Democratisation is also frequently misunderstood. Simply making a system widely accessible does not automatically make it beneficial. You can democratise flawed systems just as easily as good ones.”
“Bad models can also be democratised. Broad distribution alone is not inherently virtuous — it depends entirely on the quality and reliability of what is being distributed,” he added.
Correctness before ethics
While debates around ethical AI, inclusiveness, and accessibility dominate public discourse, Kambhampati stressed that these concerns are separate from technical validity. “Questions of inclusiveness or ethics are separate issues. They are orthogonal to the technical challenge. Before debating whether a system is ethical or inclusive, one must first ask whether it is correct.”
He framed the distinction using a deliberately stark comparison. “Correctness is my primary concern. Whether a system should be built or used is an ethical question. But if you are building something, its behaviour must at least be correct. If I build a bomb, technically speaking, I want it to function as intended. Whether it should be built at all is a different matter.”
Flawed understanding
Explaining the conceptual foundations of how intelligent systems represent reality, Kambhampati said: “World models are not new. In essence, any intelligent agent needs an understanding of how actions taken from a given state lead to other states. But having such a model does not guarantee that the underlying conceptual understanding is correct.”
He pointed to research highlighting that models can appear accurate while harbouring deep structural errors. “Research has shown that a model can produce correct answers for a finite set of queries while still possessing a fundamentally flawed internal representation. For instance, a model may correctly describe routes between two points in Manhattan, yet fail completely when probed about the structure of the Manhattan map itself.”
This disconnect, he suggested, creates a hidden risk in evaluating model performance: systems may perform well on specific tasks without possessing a coherent internal model of the domain. “A system can be right about a limited number of things and still be wrong about an infinite number of things. The danger is that those errors may remain hidden until the model is examined more deeply — particularly if its internal world model is inaccurate.”
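The point can be made concrete with a toy sketch (a hypothetical illustration, not from the research Kambhampati cites): a "model" that memorises answers to a finite benchmark can score perfectly on those queries while its internal map of the domain is structurally wrong, and the flaw only surfaces when the map itself is probed.

```python
from collections import deque

# True adjacency of a tiny four-intersection "city": a cycle A-B-C-D-A.
true_map = {"A": {"B", "D"}, "B": {"A", "C"}, "C": {"B", "D"}, "D": {"C", "A"}}

# The model's internal map is flawed: it believes A connects directly to C.
model_map = {"A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B", "D"}, "D": {"C"}}

# A small cache of memorised question -> answer pairs (the "benchmark").
memorised = {("A", "B"): ["A", "B"], ("B", "D"): ["B", "C", "D"]}

def model_route(src, dst):
    """Answer a route query: return the memorised answer if available,
    otherwise run BFS over the model's (flawed) internal map."""
    if (src, dst) in memorised:
        return memorised[(src, dst)]
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in model_map.get(path[-1], set()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

def valid_in(world, path):
    """Check that every hop of a path is a real edge in a given map."""
    return all(b in world[a] for a, b in zip(path, path[1:]))

# Every benchmark query is answered correctly against the true map...
assert all(valid_in(true_map, model_route(s, d)) for s, d in memorised)
# ...yet a structural probe exposes the flawed internal map: the model
# proposes the nonexistent shortcut A -> C.
assert not valid_in(true_map, model_route("A", "C"))
```

Here the finite benchmark (two memorised routes) cannot distinguish the flawed map from the true one; only a query outside the memorised set reveals the bad shortcut, mirroring the Manhattan example above.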