Benefits and Misuse of Research Impact Indicators: Three New Books on Metrics

Does research output have any impact? Do the investments in personnel and other research resources actually advance understanding and contribute to societal and environmental well-being? These questions have captivated researchers, funders, and policy makers for decades. While some quantifiable indicators, such as citations, have been widely adopted, debates about the merits and misuse of these measures continue, and as new assessment methods are proposed to address the weaknesses of earlier ones, confusion arises about which indicators should be used. “Who knows what counts?” John Tregoning pointedly asked in a recent “spunky essay” published in Nature (Tregoning, 2018; see also reactions to Tregoning – Beattie, 2018; Tang & Hu, 2018; and Zaratin & Salvetti, 2018). Earlier this month, Jean Lebel and Robert McLean (2018) highlighted the “limitations of dominant research evaluation approaches” in the context of the global south and described Research Quality Plus (RQ+), a tool developed by Canada’s International Development Research Centre to tackle the problem.

The ability to measure information-related activities is essential for understanding the nuanced relationships of actors and knowledge pathways found at the science-policy interface. While citation analysis, or bibliometrics, can gauge levels of awareness of scientific information and uncover how information affects marine policy decisions, bibliometrics alone are not sufficient for understanding the use and influence of information. The Environmental Information: Use and Influence (EIUI) research team has found that bibliometrics are best used in conjunction with other measurement methods to develop a fuller understanding of information-policy interactions (Soomai et al., 2016).

Bibliometrics and other performance indicators do not provide a comprehensive assessment of the impact of information, and their accuracy and value have been vigorously debated. Altmetric scores produced by Google and other digital platforms now appear on résumés; applications for tenure and promotion include such scores along with citation counts and journal impact factors (JIF); and researchers are pressured to consider how their Scopus or Web of Science rankings affect their visibility among fellow academics and policy makers. In addition, the variety of metrics currently in use has prompted questions about the value of particular indicators. Three recent books offer insights about the history and use of methods developed over the past half century to measure the impact of research results and evidence. The authors emphasize that these metrics range from informative and functional to intrinsically biased and shallow, and their books provide an excellent overview of the meanings and manipulations that can underlie seemingly straightforward numbers.
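For readers who want a concrete sense of what lies behind one of these numbers, the JIF mentioned above is conventionally computed as a two-year citation ratio (a standard formulation, not drawn from the books under review):

\[
\mathrm{JIF}_{Y} = \frac{\text{citations received in year } Y \text{ to items published in years } Y{-}1 \text{ and } Y{-}2}{\text{number of citable items published in years } Y{-}1 \text{ and } Y{-}2}
\]

A journal whose 2016–2017 articles were cited 500 times in 2018, from 250 citable items, would have a 2018 JIF of 2.0 – a single number that compresses a great deal of context about what was cited, by whom, and why.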

Cassidy R. Sugimoto and Vincent Larivière – Measuring Research: What Everyone Needs to Know

As the title implies, Sugimoto and Larivière’s book provides an excellent starting point for developing a foundational understanding of the history and use of research impact measurements. By describing the who, what, when, where, and why of these indicators, the book offers an accessible account that runs from the mid-twentieth-century development of the indicators by librarians and other scholars to their more recent application in policy strategies and in assessing online social media exposure.

The bulk of the book falls within chapter 3 (The Indicators), which presents a broad-based introduction to what can be measured with bibliometrics, what stands to be gained, and the types of tools that provide these assessments (the h-index, the JIF, altmetrics, etc.). Of the three books reviewed here, Sugimoto and Larivière offer the most consistently neutral perspective on the use of performance metrics. Still, they gently draw readers’ attention to the potential biases intrinsic to the process of producing the impact indicators, and their discussion of the abuses of metrics is supported with helpful charts and well-researched evidence that allow readers to draw their own conclusions.
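To make one of these indicators concrete: the h-index is the largest number h such that a researcher has h publications cited at least h times each. A minimal sketch in Python (the function name and sample citation counts are illustrative, not taken from the book):

```python
def h_index(citation_counts):
    """Return the h-index: the largest h such that
    h papers have at least h citations each."""
    # Rank papers from most to least cited.
    ranked = sorted(citation_counts, reverse=True)
    h = 0
    for rank, citations in enumerate(ranked, start=1):
        # The paper in position `rank` must itself have >= rank citations.
        if citations >= rank:
            h = rank
        else:
            break
    return h

# Illustrative citation counts for one researcher's publications.
papers = [25, 8, 5, 4, 3, 1, 0]
print(h_index(papers))  # 4: four papers have at least 4 citations each
```

Even this simple calculation illustrates why context matters: the result depends entirely on which database supplies the citation counts, exactly the kind of production bias Sugimoto and Larivière ask readers to keep in mind.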

The final chapter offers a brief glimpse of “the big picture,” namely, the use of impact indicators such as bibliometrics in a variety of stakeholder and policy-making decision processes. Sugimoto and Larivière conclude with insights about the future of research measurements, predicting a continued trend towards larger data sets that can be more quickly and accurately contextualized to the needs of administrators, policy makers, or researchers.

Yves Gingras – Bibliometrics and Research Evaluation: Uses and Abuses

Like Sugimoto and Larivière, Gingras presents an interesting overview of the history of measuring the impact of research, tracing a similar path through the contributions of Alfred Lotka, Eugene Garfield, and Derek de Solla Price. However, Gingras is not reticent in expressing strong views about abuse that detracts from the potential credibility of the measurement tools. With an air of condemnation of academics swept up in the hype of bibliometrics and performance indicators, Gingras draws attention to the favoritisms that can exist within rankings. With helpful illustrations, he shows how impact measurements can overlook the advantages enjoyed by particular languages and disciplines, and how, left unexamined, these biases give impact numbers a false credibility that stems from a lack of context.

Gingras emphasizes that the assessment tools themselves need evaluation, and that scholars should strive to understand what lies behind the numbers. By offering criteria for assessing metrics, he acknowledges their potential value in light of recent policy demands, but he stresses that anyone using these tools must consider the objectives of the organizations producing or benefitting from the numbers. The message of this book is that we ultimately do ourselves a disservice by blindly accepting performance rankings without examining the intentions and context behind them. To be of value, impact measurements should accompany careful human judgment rather than replace it.

Jerry Z. Muller – The Tyranny of Metrics

By moving beyond the realm of academia, Muller presents a broader view of the far-reaching implications of performance metrics and their potential for corruption and manipulation when used improperly. Muller coined the term “metric fixation,” and he presents interesting insights about modern society’s pursuit of transparency to the point of dysfunction. For Muller, measurement itself is not the issue; rather, the problem is excessive and inappropriate use of metrics, which can cause goal displacement, particularly when metrics are tied to rewards or compensation.

Muller, like the authors of the other two books, outlines the origin and history of metrics, but he extends the discussion to their use in several sectors, ranging from education to medicine, policing, and philanthropy. He offers critiques and philosophical and psychological explanations of why academics continue to use metrics despite widespread criticism. His research and his explanations of behaviors in modern settings can prompt readers to question their own behaviors and to consider why we unknowingly crave the concrete, quantifiable simplicity that metrics promise but do not often deliver. This book is a fascinating read for anyone interested in exploring the application and influence of metrics beyond scholarly settings.

All three books echo the sentiment that performance metrics are tools to be used carefully, in combination with other indicators of impact. They are not inherently corrupt, manipulative, or faulty, but they must be contextualized and their limitations acknowledged. Ultimately, bibliometrics, altmetrics, impact factors, and other quantifiable research measurements cannot negate the need for human judgment. They can provide an objective lens to facilitate decision making, and, when used properly, performance indicators can help to filter non-credible or irrelevant information from the growing body of literature on environmental issues.

Although the three books approach metrics and the measurement of research in different ways and with different opinions, the overwhelming consensus is that the human mind cannot be fully removed from the equation. As the volume of research literature grows, it is tempting to leave decisions to the efficiency of algorithms that produce neatly packaged numbers, especially as timely decision making in the environmental policy realm becomes increasingly important. However, metrics cannot replace human reason or the tacit knowledge that comes from experience and expertise, even though they can help to support and validate critical decisions. With the literature available to policy makers growing rapidly, it is important to work towards answering the questions posed by Tregoning (2018), Lebel and McLean (2018), and others before them about the impact of research outputs. While these three books cannot provide all the answers, they raise informed and interesting questions and critiques, and they focus on the importance of contextualizing methods of measurement rather than discarding them from the suite of currently available options.

References

Beattie, A. (2018, July 19). Evaluation woes: We saw it coming. Nature, 559(7714), 331. https://doi.org/10.1038/d41586-018-05749-y

Gingras, Y. (2016). Bibliometrics and research evaluation: Uses and abuses. Cambridge, MA: MIT Press. ISBN 978-0-262-03512-5

Lebel, J., & McLean, R. (2018, July 5). A better measure of research from the global south. Nature, 559(7712), 23–26. https://doi.org/10.1038/d41586-018-05581-4

Muller, J. Z. (2018). The tyranny of metrics. Princeton: Princeton University Press. ISBN 978-0-691-17495-2

Soomai, S. S., Wells, P. G., MacDonald, B. H., De Santo, E. M., & Gruzd, A. (2016). Measuring awareness, use, and influence of information: Where theory meets practice. In B. H. MacDonald, S. S. Soomai, E. M. De Santo, & P. G. Wells (Eds.), Science, information, and policy interface for effective coastal and ocean management (pp. 253–279). Boca Raton, FL: CRC Press.

Sugimoto, C. R., & Larivière, V. (2017). Measuring research: What everyone needs to know. New York: Oxford University Press. ISBN 978-0-19-064012-5

Tang, L., & Hu, G. (2018, July 19). Evaluation woes: Metrics beat bias. Nature, 559(7714), 331. https://doi.org/10.1038/d41586-018-05751-4

Tregoning, J. (2018, June 21). How will you judge me if not by impact factor? Nature, 558(7710), 345. https://doi.org/10.1038/d41586-018-05467-5

Zaratin, P., & Salvetti, M. (2018, July 19). Evaluation woes: Start right. Nature, 559(7714), 331. https://doi.org/10.1038/d41586-018-05750-5


Author: Jillian Pulsifer
