A recent study has claimed that the National Institutional Ranking Framework (NIRF) contains several inconsistencies, raising concerns about its reliability.
The paper, titled ‘Unpacking inconsistencies in the NIRF rankings’, points to large year-to-year fluctuations in the rankings, an overemphasis on bibliometrics that neglects non-traditional research outputs, the subjective nature of the perception metric that introduces biases, and problems with the regional diversity metric, among other issues.
The paper, published in the peer-reviewed journal ‘Current Science’, is authored by V Ramgopal Rao, former IIT-Delhi director and current Vice-Chancellor of the Birla Institute of Technology and Science (BITS), and co-authored by Abhishek Singh of BITS.
NIRF was approved by the then Ministry of Human Resource Development (MHRD) in 2015. The framework outlines a methodology to rank institutions across the country, and the first report was released in 2016. The parameters broadly cover “Teaching, Learning and Resources,” “Research and Professional Practices,” “Graduation Outcomes,” “Outreach and Inclusivity,” and “Perception”.
Below is a breakdown of the major findings of the study:
Ranking fluctuations
The study suggests that while some fluctuations in performance rankings may reflect genuine changes, others could stem from factors beyond an institution's control, such as temporary data reporting issues or interpretation errors.
“Unlike some international ranking systems, such as the QS World University Rankings, which utilise a damping mechanism to spread large, interannual swings in data, the NIRF rankings lack a similar mechanism,” it said.
It added that while institutions in the top 20 positions tend to maintain stability, those outside this range experience significant fluctuations. These changes might be driven by minor adjustments rather than substantive improvements.
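To illustrate what such a damping mechanism could look like in principle, here is a minimal sketch in Python of an exponentially weighted blend of the current year's score with the previous year's. The weight and the scores are hypothetical; this is not QS's actual method or anything prescribed by the study, only a simple example of how damping softens one-off swings.

```python
# Illustrative sketch only: a simple exponential damping of year-on-year scores.
# The weight `alpha` and the example scores are hypothetical; neither QS's actual
# mechanism nor NIRF scoring is reproduced here.

def damped_score(previous_score: float, current_score: float, alpha: float = 0.5) -> float:
    """Blend the current year's raw score with last year's damped score.

    alpha = 1.0 means no damping (use the raw score as-is);
    alpha = 0.0 means the score never changes.
    """
    return alpha * current_score + (1 - alpha) * previous_score


# Example: a one-off jump from 55 to 80 moves the damped score only to 67.5,
# softening a large inter-annual swing caused by, say, a data-reporting issue.
print(damped_score(previous_score=55.0, current_score=80.0, alpha=0.5))
```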
Reliance on bibliometrics
The study said a close examination of the research and professional practice metric reveals a heavy reliance on bibliometrics, which offers a numerical view of research impact but fails to capture elements like relevance, innovation, and social impact.
“The overemphasis on bibliometrics raises concerns about the comprehensive evaluation process, especially within the research and professional practice metric. A truly effective assessment of institutional performance should consider a broader spectrum of characteristics to avoid an incomplete and skewed representation of the contributions of an institution to academia,” the report said.
The NIRF's dependence on commercial databases for bibliometric data further limits the scope and precision of the evaluation, potentially overlooking significant non-traditional research contributions and encouraging spurious publications, it added.
Regional diversity
In addition to issues with the perception metric, the study said the regional diversity metric, which measures the proportion of students from other states or countries, also poses a problem.
This metric, the study claimed, often works against institutions located in India's more populous states, which naturally draw a larger share of students from their home state. To address this, the study suggested that a normalisation method, similar to those used by international ranking agencies, could be employed.
“The normalization process may involve selecting a population measure, such as the overall population or ages 15–25, for each state. Utilizing the total population of each state, a logarithmic transformation may be applied to these numbers to mitigate size-related discrepancies,” the study said.
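A minimal sketch of the normalisation step described in the quote above, written in Python: pick a population measure for each state and apply a logarithmic transformation so that very large and very small states sit on a comparable scale. The state populations below are made-up figures, and the study does not prescribe this exact formula.

```python
import math

# Hypothetical illustration of the population-normalisation idea quoted above.
# Populations are invented; the study does not specify an exact formula.
state_population = {"StateA": 200_000_000, "StateB": 30_000_000, "StateC": 5_000_000}

# Logarithmic transformation to mitigate size-related discrepancies between states.
log_population = {state: math.log10(pop) for state, pop in state_population.items()}

# A 40x spread in raw population shrinks to roughly a 1.2x spread after the log
# transform; such a transformed measure could then be used to weight each
# institution's out-of-state student share.
print(log_population)
```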
Quality of teaching
While research is important, the study said educational institutions primarily aim to deliver high-quality education and equip students with essential career skills. The NIRF rankings, however, lack direct mechanisms to assess teaching quality, such as classroom observations, student evaluations, and alumni feedback, it said.
“The omission of these evaluation methods hinders a comprehensive assessment of teaching effectiveness, leading to an incomplete depiction of the educational prowess of an institution. Moreover, the NIRF rankings overlook the practical dimension of teaching, a crucial aspect in various disciplines,” the study said.
Data integrity
According to the study, the NIRF rankings' effectiveness hinges on the accuracy and reliability of data submitted by participating institutions. Since the rankings depend on self-reported information, questions arise about data precision and trustworthiness.
“Given the diverse landscape of educational institutions in India, encompassing a wide spectrum of universities, colleges and specialised institutions, ensuring a uniform standard for data reporting becomes a formidable challenge. The reliance on self-reported data raises pertinent questions regarding the consistency and accuracy of the information presented,” it said.
