Measuring the Use and Influence of Research-Based Information

Measuring the use of research-based information is a difficult task, as perhaps best summed up by Nutley, Walter and Davies: “we are unlikely any time soon to see . . . comprehensive evidence neatly linking research, research use, and research impacts” (2007, p. 271). Despite the challenge, it is important to try, for at least four reasons given by the same authors (p. 273):

  • Accountability – reporting back on activities and achievements to, for example, a funding agency, programme manager, or research broker organization;
  • Value for money – demonstrating that benefits are commensurate with the cost;
  • Learning – better understanding the process of use and impact in order to enhance future impact; and
  • Auditing evidence-based policy and practice – evaluating whether policy-makers are actually using research to support or challenge decision making.

Similarly, Soomai et al. (2016) note several incentives for optimal use of information by decision-makers, amongst them: the high cost of research, understanding the effect of responses, and ultimately continual improvement in decision-making.

Short Review of Models of the Research Process

The influence of the research process cannot be measured without first establishing what the process is. This is a complicated question in and of itself, one explored in earlier entries in this blog series, e.g., by AlSharif (2016). There are variations both in the reality and in the modelling of how research is conducted, and in how research results flow into the policy process. What is important to understand for this post is that research use may fall into one of three broad categories: direct, indirect/conceptual, and selective (Soomai et al., 2016).

Approaches to Measuring Research Use

Nutley, Walter and Davies (2007, pp. 275-282) note three main approaches to measuring research impact:

  • Forward tracking from research to consequences, i.e., starting with one or more studies, and assessing whether and how they have been used;
  • Understanding research use in user communities, i.e., starting with the user community of interest, and working backward to assess what research was used and how; and
  • Assessing initiatives aimed at increasing research impacts, i.e., making a higher level of assessment of attempts to increase the use of research in an organization, field or programme.

This last approach is still somewhat tentative, and the authors caution against digging too deeply in that area while several clear obstacles to research use remain. Leaving this third approach aside, the first two correspond fairly well with what Soomai et al. (2016, p. 254) call the “science push” and “science pull” perspectives on understanding information use.

Within those overall approaches, Soomai et al. (2016, p. 261) list six general “methods to measure awareness and use of information”:

  • Bibliometrics (e.g., citation analysis);
  • Webometrics and altmetrics, for example, web statistics based on link searches and web content;
  • Semi-structured interviews; online surveys;
  • Content analysis of print and digital sources, news media and social media, for example, blogs, Twitter records;
  • Direct observation of meetings; content analysis; discourse analysis;
  • Network analysis; social network analysis, for example, study of Twitter records.
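
As a minimal illustration of the last of these methods, the sketch below builds a small directed mention network from hypothetical Twitter records and ranks accounts by in-degree centrality. It assumes the networkx library is available; all account names and mentions are invented for this example.

```python
# A minimal sketch of social network analysis on hypothetical Twitter
# records, using the networkx library. The accounts and mentions below
# are invented for illustration.
import networkx as nx

# Each tuple is (author, mentioned_account) drawn from tweet records.
mentions = [
    ("policy_office", "marine_lab"),
    ("policy_office", "fisheries_ngo"),
    ("journalist_a", "marine_lab"),
    ("fisheries_ngo", "marine_lab"),
    ("journalist_a", "policy_office"),
]

# Directed graph: an edge u -> v means account u mentioned account v.
g = nx.DiGraph()
g.add_edges_from(mentions)

# In-degree centrality highlights the accounts most often pointed to,
# a rough proxy for whose output is being taken up in the conversation.
for account, score in sorted(
    nx.in_degree_centrality(g).items(), key=lambda kv: -kv[1]
):
    print(f"{account}: {score:.2f}")
```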

Quantitative Metrics for Impact

The use of bibliographic citations to measure the impact of research has become routine in the academic community. In general, however, citations can only tell us about the use of research in other research contexts, and little about its use in policy contexts: citations are not commonplace in the documents provided to decision-makers, and internal or “grey” literature is poorly indexed in citation databases like Web of Science (Soomai et al., 2016).
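
To make citation analysis concrete, here is a minimal sketch of one widely used bibliometric indicator, the h-index, computed from a list of per-paper citation counts. The counts in the example are invented for illustration.

```python
# A minimal sketch of one common bibliometric indicator: the h-index.
# The citation counts below are invented for illustration.

def h_index(citation_counts):
    """Largest h such that h papers each have at least h citations."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Five papers with these citation counts yield an h-index of 3:
# three papers have at least 3 citations each.
print(h_index([10, 7, 3, 1, 0]))  # -> 3
```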

Altmetrics – such as views, discussions, saves, and recommendations – may provide a more complete view of how a given publication has been used (Soomai et al., 2016). However, altmetrics have their own issues; a much-discussed paper may simply be controversial or infamous. It is also likely that, owing to barriers such as paywalls, technical language, and article length, most journal articles are not directly accessed by policy-makers. Altmetrics are therefore, at best, an indirect measure of information use in policy contexts.
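
As a small illustration of how such signals might be combined, the sketch below computes a composite altmetric score for one publication. The event types, weights, and counts are all invented for this example; real altmetric providers apply their own weightings, and, as noted above, a high score says nothing about whether the attention is positive.

```python
# A minimal sketch of combining altmetric signals into one composite score.
# The event types, weights, and counts are invented for illustration.

WEIGHTS = {"views": 0.1, "saves": 1.0, "discussions": 2.0, "recommendations": 3.0}

def composite_score(events):
    """Weighted sum of altmetric event counts for one publication."""
    return sum(WEIGHTS.get(kind, 0.0) * count for kind, count in events.items())

paper_events = {"views": 420, "saves": 15, "discussions": 8, "recommendations": 2}
print(composite_score(paper_events))  # 42.0 + 15.0 + 16.0 + 6.0 = 79.0
```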

Moving beyond just considering research papers, Thelwall et al. (2010) are part of a research community coalescing around “webometrics,” which uses links between sites (rather than only academic citations) to identify important connections between players in a policy field. They have found that webometrics may be better suited to fast-moving or new fields, to types of research less well covered by traditional citation databases (e.g., the social sciences and humanities), and to grey literature generally. Their work does not seem to engage with research from the web search engine community, which has done extensive work to improve the robustness of link-based ranking algorithms (see, e.g., Google, n.d.).
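
The following minimal power-iteration sketch of a PageRank-style score over a tiny, invented link graph illustrates the basic idea behind such link-based ranking. It is a toy example, not the algorithm any search engine actually runs; production systems layer many safeguards against link manipulation on top of this idea.

```python
# A minimal power-iteration sketch of a PageRank-style score over a tiny,
# invented link graph. Toy example only; real ranking systems add many
# robustness and anti-spam measures.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:  # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

links = {
    "ministry.example": ["institute.example"],
    "institute.example": ["ministry.example", "ngo.example"],
    "ngo.example": ["institute.example"],
}
for page, score in sorted(pagerank(links).items(), key=lambda kv: -kv[1]):
    print(f"{page}: {score:.3f}")
```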

Qualitative Analysis

In addition to quantitative, metric-based analyses, Soomai et al. (2016) note several ways to approach qualitative analysis of research use. For instance, one of the paper’s authors worked embedded within several fisheries organizations, enabling interviews and direct observation. The authors also give examples of studies tracking how frequently politicians mention certain issues, as well as certain evidence about those issues, as a way to measure how research has indirectly worked its way into the policy-making sphere. Surveys are perhaps the most straightforward way to reach the user community qualitatively (Nutley et al., 2007), but it is important to remember that respondents cannot report indirect use of information of which they are not consciously aware.

Acting on Measurement

Another important element is how research has been integrated directly into decision support tools. Eden (2011) laments that critical examination of decision support systems is lacking, and that “guidance based on study analysis is just beginning to appear in the literature” (p. 12). However, it is unclear if this is the case outside of the environmental field, as the journal Decision Support Systems has existed since 1985 (Klein & Hirschheim, 1985), and even within the environmental field one can find a book chapter considering environmental impact analyses in decision support tools as early as 1990 (Fedra & Reitsma, 1990).

In the area of evidence-based behaviour change, in March 2015 more than 40 scientists came together to express concern about how the lack of proper design and evaluation of “payment for ecosystem services” projects may be having negative consequences (Naeem et al., 2015). This is worrying because the intent of these projects is to change behaviour; if the science is done or used improperly, that behaviour change may be ineffective or even counter-productive. The authors identified thirty-three guidelines (twenty-one of them essential), grouped under six principles (four essential), and found that only 60% of the 118 projects they surveyed adhered to even the four essential principles.

Concluding Thoughts

There are many ways one may attempt to measure research use, each with its own drawbacks. Researchers therefore emphasize the importance of mixed methods (e.g., Soomai et al., 2016), or of bringing additional context to bear when applying these methods (e.g., Thelwall et al., 2010).

Nutley et al. (2007, p. 273) also note there is a danger of focusing too much on use and impact, as this may ignore or understate the importance of how research may diffuse in unexpected ways over long time periods, and ultimately caution against “too instrumentalist a view about what kind of ‘research’ is worth pursuing.” Importantly, it is not just research findings, but also the process of doing research that may have impact.

 

References

AlSharif, N. (2016). Information flow frameworks: Communication of research-based information into policy. Environmental information: Use and influence. Retrieved from https://eiui.ca/?p=2928

Eden, S. (2011). Lessons on the generation of usable science from an assessment of decision support practices. Environmental Science & Policy, 14(1), 11-19.

Fedra, K., & Reitsma, R. F. (1990). Decision support and geographical information systems. In H. J. Scholten & J. C. H. Stillwell (Eds.), Geographical information systems for urban and regional planning (pp. 177-188). Houten, The Netherlands: Springer Netherlands.

Google (n.d.). FAQ: Crawling, indexing & ranking. Google webmaster central. Retrieved from https://sites.google.com/site/webmasterhelpforum/en/faq--crawling--indexing---ranking

Klein, H. K., & Hirschheim, R. (1985). Fundamental issues of decision support systems: A consequentialist perspective. Decision Support Systems, 1(1), 5-23.

Naeem, S., Ingram, J. C., Varga, A., Agardy, T., Barten, G., Bloomgarden, E., … Wunder, S. (2015). Get the science right when paying for nature’s services. Science, 347(6227), 1206-1207.

Nutley, S. M., Walter, I., & Davies, H. T. O. (2007). Using evidence: How research can inform public services. Bristol: The Policy Press.

Soomai, S. S., Wells, P. G., MacDonald, B. H., De Santo, E. M., & Gruzd, A. (2016). Measuring awareness, use, and influence of information: Where theory meets practice. In B. H. MacDonald, S. S. Soomai, E. M. De Santo, & P. G. Wells (Eds.), Science, information, and policy interface for effective coastal and ocean management (pp. 253-279). Boca Raton, FL: CRC Press.

Thelwall, M., Klitkou, A., Verbeek, A., Stuart, D., & Vincent, C. (2010). Policy-relevant webometrics for individual scientific fields. Journal of the American Society for Information Science and Technology, 61(7), 1464-1475.

 

Author: Matthew R. MacLeod

 

This blog post is part of a series of posts authored by students in the graduate course “The Role of Information in Public Policy and Decision Making,” offered at Dalhousie University.
