Measuring Use and Influence of Research-Based Information

New and diverse research results are produced daily. However, once research-based information is made available to users, how do we know whether the information is relevant and helpful? Has the information been used in its intended form, or has it been misinterpreted? Because these questions relate to connections and disconnections in the science-policy interface, we reviewed them by considering methods of evaluation and assessment. The main argument we highlight in this post is that no single methodology adequately measures and evaluates information use and influence. Consequently, it is necessary to recognize carefully which methods are suitable in any particular evaluation of the extent of information use, alongside recognizing nuances of both the needs of the user and the intention of the producer.

Key Themes

Assessing the impact of research can be complex, and the approach to assessment will vary depending on the ultimate use of the research findings. Expected outcomes of research assessment can include understanding of knowledge production, research capacity building, policy or product development, sector benefits, or societal benefits. Using these assessments, one can begin to build models for effective research use. Models discussed by Nutley, Walter, and Davies (2007) include classic knowledge-driven models (a linear approach), as well as problem-solving, interactive (a feedback approach), political, tactical, and enlightenment frameworks.

Are Use and Influence of Research-Based Information Measurable?

Scientific methods of “measuring awareness, use and influence of information” are not straightforward and are dependent on context. A review of the information management literature suggests that no single measurement works in isolation (Soomai, Wells, MacDonald, & De Santo, 2016). For example, in measuring how information supports integrated coastal and ocean management (ICOM), Soomai and collaborators showed that measurement methods complement each other when used together. In the case of ICOM, they suggested a triangulation of qualitative approaches (surveys, interviews, discourse analysis, indirect observation, and content analysis), quantitative approaches, and mixed methods. Key quantitative methods include bibliometric analysis, along with webometric and altmetric techniques that have emerged in response to the growing use of the internet and related technologies such as social media.
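To make the contrast between bibliometric and altmetric measures concrete, the following is a minimal illustrative sketch. The publications, counts, scoring functions, and weights are all hypothetical, chosen only to show how the two kinds of quantitative signal differ rather than to reproduce any real altmetric formula:

```python
# Illustrative sketch: two complementary quantitative measures of the
# "movement" of information. All records and weights are hypothetical.

publications = {
    "ICOM policy brief": {"citations": 42, "tweets": 310, "news_mentions": 4},
    "Stock assessment report": {"citations": 87, "tweets": 12, "news_mentions": 1},
}

def bibliometric_score(record):
    """Traditional bibliometric indicator: scholarly citation count."""
    return record["citations"]

def altmetric_score(record, tweet_weight=0.25, news_weight=3.0):
    """Altmetric-style composite of online attention signals (assumed weights)."""
    return record["tweets"] * tweet_weight + record["news_mentions"] * news_weight

for title, record in publications.items():
    print(f"{title}: citations={bibliometric_score(record)}, "
          f"attention={altmetric_score(record):.1f}")
```

Note how the two scores can rank the same documents differently: the policy brief attracts far more online attention than citations, which is exactly why such measures are treated as complements rather than substitutes.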

In a demonstration of the strength of each technique, qualitative measures can “assess how information is communicated in policy and decision-making processes” (Soomai et al., 2016, p. 263). Conversely, quantitative methods are effective in determining the movement of information (Soomai et al., 2016). Although citations are useful indicators of information use, they are not an adequate measure of influence. Mixed methods address this gap by allowing the use of more than one dataset and method while building an understanding of the information behaviour of more than one stakeholder group (Soomai et al., 2016). By applying a mixed methods approach to examine socio-ecological systems, Holzer, Carmon, and Orenstein (2018) reported that they were able to use multiple data sets to generate “narrative, quantitative, and visualized data about problem context, research context, achievements and shortcomings of the research” (p. 815).

Measuring Credibility

Use of indicators is a popular technique for establishing and gauging the credibility of information. The term indicator continues to proliferate across numerous disciplines. Grounding the concept in information management is one approach, and Lehtonen defines the term as “a technique, scheme, device, or operation which can be used to collect, condense, and make sense of different kinds of policy relevant knowledge to perform some or all of the various inter-linked tasks of policy formulation” (2017, p. 163). By sorting indicators into three categories, namely descriptive, performance, and composite, Lehtonen highlights three functions of indicators: modelling facts and reasoning, assisting in conceptualizing the problem, and visioning or promoting a desired future.
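The composite category can be illustrated with a small sketch: several policy-relevant sub-indicators are normalized onto a common scale and condensed into one weighted number. The cities, sub-indicators, values, and weights below are entirely hypothetical, chosen only to show the mechanics:

```python
# Illustrative sketch of a composite indicator: normalize hypothetical
# sub-indicators to [0, 1], then condense them with assumed policy weights.

raw = {  # city -> (air quality score, % green space, transit access score)
    "City A": (60.0, 25.0, 7.0),
    "City B": (80.0, 10.0, 9.0),
}
weights = (0.5, 0.3, 0.2)  # assumed relative importance of each sub-indicator

def normalize(values):
    """Min-max normalize one sub-indicator across all cases to [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

def composite(raw, weights):
    """Weighted sum of normalized sub-indicators for each case."""
    cities = list(raw)
    columns = [normalize([raw[c][i] for c in cities]) for i in range(len(weights))]
    return {c: sum(w * columns[i][j] for i, w in enumerate(weights))
            for j, c in enumerate(cities)}

scores = composite(raw, weights)
```

The resulting ranking depends entirely on the assumed weights and normalization, which is precisely why composite indicators are contested constructions rather than neutral measurements.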

In one recent study, Elgert examined sustainability ratings to assess their application as informative indicators of “measurability and transparency and unexpected outcomes at the knowledge policy interface” (Elgert, 2018, p. 16). Elgert noted the caveat that reliance on sustainability ratings as indicators requires careful consideration during their operationalization to enhance the legitimacy of how the subject being assessed is represented. Misrepresentation in the science-policy interface is risky because it can skew a policy direction. For example, Elgert showed that using sustainability ratings contributed to skewed outcomes, thereby misleading understanding of policy interventions. Building on this observation, Elgert presented a critique of numeric indicators, which are assumed to be objective measures, and noted that “recent approaches to assessing ratings … illuminate the social and political influences on processes of quantification, and strongly suggest that neither knowledge nor policy are clearcut, uncontested concepts, as is suggested by conventional linear accounts of the knowledge-policy interface” (Elgert, 2018, p. 23).

In another study, Heink et al. (2015) encountered similar challenges when attempting to develop indicators to evaluate credibility, relevance, and legitimacy. In the end, Heink et al. emphasized that the efficiency of indicators depends on appropriate definition and measurability if they are to be functionally useful in understanding activities in the science-policy interface.


Limitations

There are challenges in determining the use and influence of research-based information. One of the main limitations is that methods of “measuring awareness, use and influence of information” are not straightforward. Articulating this concern, Soomai et al. (2016) emphasized the inherent ambiguity of what constitutes “use” and “influence.” Defining the parameters of “use and influence” can be problematic because measurement methods are applied inconsistently across the available spectrum of assessment techniques.

There are also limitations regarding the use of indicators, since they may not be adaptable (Lehtonen, 2017). In critiquing indicators, Lehtonen argued that they can be incomplete and are therefore prone to failure in meeting “their expectations in terms of their intended use, and generate most of their impact inadvertently, through indirect and often unforeseen pathways” (2017, p. 175). Lehtonen noted that operationalized indicators reinforce a notion of linearity in the policy formulation process and thereby neglect to account for uncertainties that may arise either before or after the formulation process.

Lehtonen also stated that indicators can be problematic because they cannot sufficiently account for all aspects of their subject; as a consequence, the results can be contested. The possibility that some perspectives may be omitted from indicators speaks to Lehtonen’s observation that indicators tend to be reductionist. To avoid this problem, Lehtonen, citing a 2002 paper by Cash et al., recommends that policy formulators use indicators that are “salient, credible and legitimate to their expected users” (Lehtonen, 2017, p. 171).

Another issue with measuring the impact of research is determining who is responsible for the assessments and how they are conducted. Assessments should focus on the primary use of the research to ensure they are the most relevant and effective for the intended use (Elgert, 2018). Finally, a major limitation affecting measurement of the use and influence of research is the funding available to carry out the assessment. Financial resources for such work vary, and funding constraints can prevent thorough study (Nutley, Walter, & Davies, 2007).


Conclusion

Assessing the use of research-based information can be complex. It is important to evaluate measurement methods and to develop a good grasp of the key indicators employed within the field. Despite the associated complexities, however, determining how research evidence is used is important for understanding activities in the science-policy interface. Such understanding can inform funding opportunities, provide accountability to stakeholders, and allow future research to continue meeting the needs of the ever-changing science-policy interface (Nutley, Walter, & Davies, 2007).



References

Elgert, L. (2018). Rating the sustainable city: “Measurementality,” transparency, and unexpected outcomes at the knowledge-policy interface. Environmental Science and Policy, 79, 16-24.

Heink, U., Marquard, E., Heubach, K., Jax, K., Kugel, C., Neßhöver, C., … Vandewall, M. (2015). Conceptualizing credibility, relevance and legitimacy for evaluating the effectiveness of science–policy interfaces: Challenges and opportunities. Science and Public Policy, 42, 676-689. doi:10.1093/scipol/scu082

Holzer, J. M., Carmon, N., & Orenstein, D. E. (2018). A methodology for evaluating transdisciplinary research on coupled socio-ecological systems. Ecological Indicators, 85, 808-819.

Lehtonen, M. (2017). Operationalizing information: Measures and indicators in policy formulation. In M. Howlett & I. Mukherjee (Eds.), Handbook of policy formulation (pp. 161-179). Cheltenham, UK; Northampton, MA: Edward Elgar Publishing.

Nutley, S. M., Walter, I., & Davies, H. T. O. (2007). How can we assess research use and wider research impact? In Using evidence. How research can inform public services (Chapter 9, pp. 271-296). Bristol: The Policy Press.

Soomai, S. S., Wells, P. G., MacDonald, B. H., & De Santo, E. M. (2016). Measuring awareness, use, and influence of information: Where theory meets practise. In B. H. MacDonald, S. S. Soomai, E. M. De Santo, & P. G. Wells (Eds.), Science, information, and policy interface for effective coastal and ocean management (pp. 253-279). Boca Raton, FL: CRC Press.


Authors: Rachael MacNeil & Micheal Ngara


This blog post is part of a series of posts authored by students in the graduate course “The Role of Information in Public Policy and Decision Making” offered at Dalhousie University.

