Measuring the Use and Influence of Research-Based Information: Finding Meaning in Complexity

While research-based information can be transmitted in a variety of ways, such as through publications or conference presentations, the question remains whether such information is actually used by, or influences, policymakers. The resulting interest in evaluating both the use and influence of research is driven by two groups: decision makers, who are increasingly being held accountable for the decisions they make with public funds (Soomai et al., 2016), and researchers, who are under mounting pressure to demonstrate tangible impacts from their research (Cvitanovic et al., 2021). This blog post therefore explores approaches to evaluating the use and influence of research-based information, demonstrating the variety of methods available to determine if and how this information is being used.

Evaluating the use and influence of research is a difficult undertaking because multiple barriers stand in the way. Bogenschneider et al. (2021) outline four contextual complexities that obstruct evaluation of the research-to-policy process: the variety and number of stakeholders, multiple potential outcomes, methodological dilemmas, and the time that elapses between research dissemination and obvious indicators of use. Stakeholder, outcome, and methodological complexities become problematic when opinions differ about what constitutes evidence, appropriate measurement methodologies, or effective outcomes. For example, researchers may tend to value quantitative statistical evidence, explicit policy outcomes, and randomized controlled trials, whereas policymakers may regard a constituent’s testimonial as evidence, or simply gaining access to research as an effective outcome.

Yet these difficulties have not deterred researchers from investigating the activities involved in measuring the research-to-policy process and exploring different measurement methods. Historical approaches, such as Weiss’s ideal model and Knott and Wildavsky’s seven-stage chain of utilization, have characterized the research-to-policy process as a sequence of activities beginning with the definition of a problem or the reception of information, and ending with the implementation of policy or the information providing tangible benefits (Soomai et al., 2016). Ultimately, such models promote the idea of the research-to-policy process as a linear pathway moving logically from one stage to the next.

Other researchers have notably diverged from this linear conceptualization; the literature we reviewed identifies academics who envision a more fluid and dynamic process of measuring information use. For example, Soomai et al. (2016) provide an overview of the variety of methods available to measure the use of research in policy, including methods that can be used concurrently. Qualitative methods, such as interviews with policymakers themselves, can identify whether scientific advice was acted upon and why, while direct observations of meetings help to identify who is involved, how much attention was given to specific issues, and existing biases (Soomai et al., 2016). Meanwhile, quantitative techniques such as webometrics and social network analysis assess “patterns of relations and information flows” among actors, including constituents, lobby groups, and governments (Soomai et al., 2016, p. 267). Most importantly, the authors conclude that “no method alone can fully answer questions about information use and influence” and that “multiple methods” therefore need to be employed to promote greater reliability and certainty, once again highlighting the plurality of techniques that can be used for measurement (Soomai et al., 2016, p. 268).
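To make one of these quantitative techniques more concrete, the sketch below illustrates how social network analysis might quantify information flows among actors. It is a minimal, hypothetical example rather than a method described by Soomai et al. (2016): the actors and connections are invented, and Python’s networkx library is assumed.

```python
# A minimal sketch of social network analysis for information flows.
# The actors and edges below are hypothetical, invented for illustration.
import networkx as nx

# Directed edges represent "shares research-based information with".
flows = [
    ("Research institute", "Government department"),
    ("Research institute", "Lobby group"),
    ("Lobby group", "Government department"),
    ("Constituents", "Elected representative"),
    ("Elected representative", "Government department"),
    ("Government department", "Policy advisory committee"),
]

G = nx.DiGraph()
G.add_edges_from(flows)

# Centrality scores suggest which actors sit at the hubs of information flow.
in_centrality = nx.in_degree_centrality(G)
betweenness = nx.betweenness_centrality(G)

for actor in G.nodes:
    print(f"{actor}: in-degree={in_centrality[actor]:.2f}, "
          f"betweenness={betweenness[actor]:.2f}")
```

In a real study, the edges would be derived from observed data, such as hyperlinks, citations, or meeting records, rather than assumed.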

Soomai et al. (2016) are not alone in advocating multiple measurement methods to facilitate more robust assessments of use and influence. Williams (2022) also recommends mixed methods, combining citation counts and altmetric attention scores to examine values, meanings, and, ultimately, impact. This bibliometric approach assesses use by counting citations in academic journals as well as the number of “hits,” or views, in blogs, social media posts, news articles, and policy documents. The author also employs qualitative interviews in which she asks users to describe the “worth” of the outputs of certain research papers (Williams, 2022, p. 523). This approach particularly exhibits the strength of mixed-method measurement of information use and influence: by involving both academics and non-academics, it considers multiple perspectives and thereby enhances objectivity.
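As a simple illustration of the quantitative half of such a bibliometric approach, the following sketch tallies citation counts alongside altmetric-style mentions for a handful of papers. The data structure, paper titles, and figures are all hypothetical assumptions for demonstration; this is not Williams’s (2022) actual procedure.

```python
# A minimal sketch of combining citation counts with altmetric-style mentions.
# All titles and figures below are invented for illustration.
from dataclasses import dataclass

@dataclass
class PaperMetrics:
    title: str
    citations: int          # citations in academic journals
    blog_mentions: int      # mentions in blog posts
    news_mentions: int      # mentions in news articles
    policy_mentions: int    # mentions in policy documents

    def academic_use(self) -> int:
        return self.citations

    def societal_attention(self) -> int:
        return self.blog_mentions + self.news_mentions + self.policy_mentions

papers = [
    PaperMetrics("Coastal management study (hypothetical)", 42, 5, 3, 2),
    PaperMetrics("Fisheries policy review (hypothetical)", 8, 20, 11, 6),
]

for p in papers:
    print(f"{p.title}: academic use = {p.academic_use()}, "
          f"societal attention = {p.societal_attention()}")
```

A paper with modest citation counts can still show substantial attention in news and policy documents, which is precisely the kind of pattern that counting citations alone would miss.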

Other researchers have taken broader approaches to capture the diversity of uses and influences that occur across the science-policy interface over longer time periods. Several approaches attempt to move beyond typical measures of impact or inclusion in policy to find more comprehensive ways to measure successful use (Cvitanovic et al., 2021). In one approach, Cvitanovic et al. (2021) suggest using five indicators, namely, individuals, organizations, policies, ecosystems, and science, to monitor and track research impact. The authors use both quantitative and qualitative metrics to assess the full range of impacts, such as measuring the number of advice requests received or examining informal verbal feedback, respectively. The Family Policy Education Theory of Change Model, created by experienced knowledge brokers running the Family Impact Seminars, also uses evaluation protocols to assess the use and influence of research at what they identify as early, intermediate, and long-term stages of the policy process (Bogenschneider et al., 2021). In particular, this model articulates how research evidence flows through the policy process and identifies measurable stages where research is used and where knowledge brokering efforts might have influence. For example, the authors identify “changing the attitudes of policy makers towards research” as a measurable intermediate stage in the policy process (Bogenschneider et al., 2021). Accompanying evaluation protocols for this outcome include pre- and post-testing to assess changes in participants’ perceptions of their understanding of the research presented (Bogenschneider et al., 2021), as illustrated in the hypothetical sketch below. Both the approach by Cvitanovic et al. (2021) and the Family Policy Education Theory of Change Model are beneficial because they can identify outcomes that might otherwise be overlooked as successful uses of research-based information (Bogenschneider et al., 2021).
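To show what such pre- and post-testing might look like in quantitative terms, here is a minimal sketch that compares hypothetical self-rated understanding scores collected before and after a seminar. The scores are invented, and the paired t-test (via SciPy) is an assumed analysis choice, not a protocol drawn from Bogenschneider et al. (2021).

```python
# A minimal sketch of pre- and post-test evaluation of a policy seminar.
# Scores are hypothetical self-ratings (1-7) of understanding of the research.
from scipy import stats

pre_scores = [3, 4, 2, 5, 3, 4, 3, 2, 4, 3]   # before the seminar
post_scores = [5, 5, 4, 6, 4, 6, 5, 4, 5, 5]  # after the seminar

# Paired t-test: did participants' self-rated understanding change?
result = stats.ttest_rel(post_scores, pre_scores)

mean_change = sum(p - q for p, q in zip(post_scores, pre_scores)) / len(pre_scores)
print(f"Mean change in self-rated understanding: {mean_change:.2f}")
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
```

In a real evaluation, such quantitative comparisons would complement, rather than replace, the qualitative evidence discussed above.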

Cvitanovic et al. (2021) also identify barriers that hinder the successful use and influence of research, as well as enablers that could increase the probability of success. An important practical enabler they identify is for researchers to collaborate with users, such as decision makers, to help those users understand and apply the researchers’ recommendations, since both groups share the goal of promoting evidence-based policies (Cvitanovic et al., 2021). Arnott and Lemos (2021) concur that collaborative approaches in which researchers and the users of their knowledge work together, in their case to enable scientific knowledge to inform sustainable coastal resource management, increase the use of research while simultaneously bridging the research-policy divide. The authors admit, however, that there is a “tension” between “the desire to understand use more concretely and the reality of its complex nature,” with researchers struggling to articulate the specific uses, user identities, attribution, and evidence of broader outcomes that comprise “actionable knowledge” (p. 223). Despite these difficulties, Arnott and Lemos encourage researchers, users of knowledge, and funders to continue working together on a co-produced approach to addressing the use of sustainability science, with funders playing a particularly important role as initiators of such collaborative efforts.

Englund et al. (2022) also aim to assess the use and influence of co-produced knowledge, albeit in the context of evaluating co-produced climate research, in contrast to Arnott and Lemos’ exploration of collaborative sustainability science. Englund et al. (2022) specifically propose an evaluation framework consisting of four methodological guidelines for determining whether co-produced climate services are conducive to information uptake: that developmental evaluation practices are present, because co-produced information is “complex, non-linear, and emergent” and measuring its use and influence requires constant re-evaluation of what “successful” use truly means; that a theory of change is built and refined; that stakeholders are consistently involved through participatory evaluation methods; and that visual products are included to ensure accessibility (p. 6). Ultimately, the authors argue that these guidelines can help determine whether co-produced climate information actually contributes to societal change while, more importantly, helping to gauge the utility of co-production itself, “captur[ing] the many, often intangible or unexpected, effects that emerge” when knowledge is co-produced by researchers and users (Englund et al., 2022, p. 11).

To conclude, measuring the use and influence of research-based information is a complex undertaking. The literature discussed above demonstrates that researchers hold different understandings of what to measure and how to measure it. As such, no single approach will yield a comprehensive understanding of information use and influence. However, as Arnott and Lemos (2021) argue, “no matter how complex the relationship between knowledge and use may be, understanding more about it is critical to harnessing the power of science to serve society” (p. 229). Future researchers and policymakers would therefore be well advised to consider the merits of employing multiple methods to facilitate an assessment of use and influence that more accurately captures the underlying complexities of this aspect of information in policy and decision-making.

 

References

Arnott, J. C., & Lemos, M. C. (2021). Understanding knowledge use for sustainability. Environmental Science & Policy, 120, 222-230. https://doi.org/10.1016/j.envsci.2021.02.016

Bogenschneider, K., Normandin, H., Onaga, E., Bowman, S., Wadsworth, S. M., & Settersten, R. A., Jr. (2021). Evaluating efforts to communicate research to policy makers: A theory of change in action. In K. Bogenschneider & T. J. Corbett, Evidence-based policymaking: Envisioning a new era of theory, research, and practice (2nd ed., Chapter 8, pp. 195-231). Routledge.

Cvitanovic, C., Mackay, M., Keenan, R. J., van Putten, E. I., Karcher, D. B., & Dickey-Collas, M. (2021). Understanding and evidencing a broader range of “successes” that can occur at the interface of marine science and policy. Marine Policy, 134, 104802. https://doi.org/10.1016/j.marpol.2021.104802

Englund, M., André, K., Gerger Swartling, Å., & Iao-Jörgensen, J. (2022). Four methodological guidelines to evaluate the research impact of co-produced climate services. Frontiers in Climate, 4, 909422. https://doi.org/10.3389/fclim.2022.909422

Soomai, S. S., Wells, P. G., MacDonald, B. H., De Santo, E. M., & Gruzd, A. (2016). Measuring awareness, use, and influence of information: Where theory meets practice. In B. H. MacDonald, S. S. Soomai, E. M. De Santo, & P. G. Wells (Eds.), Science, information, and policy interface for effective coastal and ocean management (Chapter 11, pp. 253-279). CRC Press, Division of Taylor & Francis.

Williams, K. (2022). What counts: Making sense of metrics of research value. Science and Public Policy, 49(3), 518-531. https://doi.org/10.1093/scipol/scac004

 

Authors: Brian Cheung, Laura Hardie, and Robin Willcocks Musselman

This blog post is part of a series of posts authored by students in the graduate course “Information in Public Policy and Decision Making” offered at Dalhousie University.
