The relationship between truth and information has become increasingly complicated in the digital age. Truth, a distinctive type of information that accurately reflects reality, now competes in a world overflowing with data, much of which is disconnected from reality. The philosopher and historian Yuval Noah Harari, in his latest book Nexus: A Brief History of Information Networks from the Stone Age to AI, explores the differences between truth and fiction, another type of information that is far more prevalent than truth.
In a free market of information, truth often loses out because it is costly to produce, complicated, and hard to digest. Fiction, by contrast, is cheap, can be made simple, and is more appealing. Uncovering truth requires rigorous research, thorough validation, and expert input, while fiction can be crafted to suit any narrative and made attractive for mass consumption. Harari's observation helps explain why misinformation and simplistic narratives spread more quickly than carefully verified facts and nuanced accounts.
This dynamic applies not only to the media and politics but also to the world of science, where the distinction between truth and information has become increasingly blurred. In academia, researchers face pressures to publish more papers, leading to the dominance of what Harari calls the “naive view” of information.
The Naive View: More Information Is Always Better
The naive view suggests that increasing the volume and speed of information naturally leads to more beneficial outcomes. This perspective is increasingly prevalent in academia, where researchers are driven to publish as much as possible. The idea is that more published research equates to more academic prestige and scientific advancement. However, the focus on quantity over quality has significant downsides.
A perfect example of how this naive view manifests in academia is the H-index, a popular metric used to measure the impact of a researcher's publications. The H-index quantifies both the productivity and the citation impact of published papers: it is the largest number h such that h of a researcher's papers have each been cited at least h times. For example, a researcher who has published ten papers that have each been cited at least ten times has an H-index of 10. On the surface, it seems like a reasonable indicator of influence. However, it reinforces the naive belief that more publications equate to scientific success.
The H-index favours scholars who publish frequently, even if their individual papers are only moderately cited. In contrast, a scholar who writes two ground-breaking works that are cited tens of thousands of times may have a much lower H-index. Such metrics can also be gamed by networks of researchers who strategically cite each other’s work, following an implicit understanding of “You cite my papers, and I’ll cite yours”.
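To make the arithmetic concrete, here is a minimal sketch of the H-index calculation in Python. It is an illustrative toy, not the code used by any citation database, and the citation counts in the two examples are invented for the purpose:

```python
# Toy H-index calculator: the H-index is the largest h such that
# h papers have each been cited at least h times.
def h_index(citations: list[int]) -> int:
    counts = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # the top `rank` papers all have >= rank citations
        else:
            break
    return h

# A prolific scholar: fifteen papers, each cited a moderate 15 times.
print(h_index([15] * 15))          # prints 15

# A scholar with two landmark works cited tens of thousands of times.
print(h_index([40_000, 25_000]))   # prints 2
```

The contrast between the two calls mirrors the point above: on this metric, fifteen moderately cited papers beat two landmark works by a wide margin, even though the landmark works carry vastly more citations in total.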
The emphasis on the H-index encourages a “publish or perish” culture, where researchers are pressured to produce more papers to increase their scores. This can lead to practices like “salami slicing,” where a single study is divided into multiple smaller papers to inflate publication counts. While this may boost a researcher’s H-index, it does little to advance meaningful scientific discovery. This constant race for higher scores dilutes the integrity of academic work and may even encourage researchers to focus on citation-friendly topics rather than pursuing ground-breaking, high-risk research.
University Rankings and the Pursuit of Quantity
The H-index is only part of a larger problem. Universities, too, are deeply invested in the naive view of information. Many academic institutions are ranked based on the volume of research they produce, and publication counts are often considered a key metric of a university’s prestige. As a result, researchers face immense pressure to publish new work continually. This has led to the rise of “predatory journals” and low-quality publications that prioritise quantity over quality.
The naive view of information creates a cycle of overproduction. It encourages the publication of incremental or redundant findings rather than high-impact discoveries that take time and effort to develop. As a result, the academic publishing industry has been flooded with papers, many of which add little value to the broader scientific community.
This obsession with metrics has also given rise to unethical practices in scientific research. Some researchers manipulate data or engage in fraudulent behaviour to secure publications in high-impact journals. These behaviours erode the credibility of scientific research and compromise the pursuit of truth.
Transcending the Naive View: Quality and Impact
To transcend the naive view of information in modern science, academic institutions need to rethink how they measure success. Institutions, researchers, and funding bodies should emphasise the quality and impact of research rather than relying on quantity-based metrics. Universities should reward researchers for producing rigorous, meaningful work rather than for how many papers they publish.
To measure impact within and beyond the scholarly community, academia also needs to consider a wider variety of scholarly output: books, case studies, policy papers, industry perspectives, opinion pieces and so on. Several pieces of seminal work, particularly in the social sciences, have been published as books, not journal papers. Consider the legendary sociologist and philosopher Pierre Bourdieu, whose works have been cited more than 11 lakh (1.1 million) times. His most cited work is a book, Distinction, which has been cited more than 1 lakh (100,000) times.
Whether it is a technical patent or a management framework, a clear indicator of impact is usefulness for, and adoption by, practitioners. Knowledge and innovations need to be created with real-world needs and problems in mind, and it takes effort and expertise to translate that knowledge into the real world through products, processes, and practices. For example, the numbers of patents filed and granted are metrics commonly used by universities and ranking frameworks. However, impact can be measured better by looking at how many corporations have licensed those patents, how much licensing revenue they generated, how many new products were built on them, whether they have changed the way businesses operate, and so on. Of course, impact metrics need not always be related to the business world.
By shifting the focus from output to impact, the academic world can reclaim its commitment to truth and rigour. Of course, moving from quantity to quality and impact is not an easy problem to solve. It will require the scholarly world to go back to first principles and reinvent the meaning of scientific impact in the age of artificial intelligence.
Rohan Chinchwadkar is an assistant professor of finance and entrepreneurship at IIT Bombay. Views are personal.
[This article has been reproduced with permission from Indian Institute of Technology Bombay, Mumbai]