Challenges in Measuring the Use and Influence of Research-Based Information

Conducting research and producing reports are important pursuits, but they are only some of the activities in research-to-policy processes. Once data have been collected and analysed and reports prepared, the next step, namely communication, can be fraught with challenges that impede use of the research. Some studies address questions posed directly to researchers by decision makers who seek evidence to inform particular practices or policies. Even when use of the results of these studies seems straightforward, the influence the research will have on an industry or a specific policy still depends on variables that affect the uptake of any research findings. As the literature on the science-policy interface selected for this blog post notes, three factors, namely credibility, relevance, and legitimacy (CRELE), of both the information and the individuals and organizations that produce it, are key indicators of the usability of information.

Heink et al. (2015) state that although the CRELE attributes are the main determinants of the effectiveness of science-policy interfaces (SPIs), different interpretations of the role(s) of evidence in particular decisions can lead to different conclusions about the value of the same SPI. The relationships among the three CRELE attributes may be adjusted to accommodate the expectations or requirements of a decision-making process without judging any one attribute to be more important than the others. In other words, trade-offs among the attributes can be accepted while keeping biased views of any of them to a minimum. Decisions regarding trade-offs, such as the “clarity-complexity” and “speed-quality” trade-offs, can therefore justifiably be made.

It is in these trade-offs that questions about knowledge mobilization also arise. The credibility of research findings is strengthened in the public’s mind when the research is understandable. Choosing clarity in the “clarity-complexity” trade-off may open the door to broader understanding of research, leading to wider use and therefore greater influence on the intended audience or policy. Levin (2013) explains that expert or veteran employees in a field may have extensive hands-on knowledge yet not be fluent in the jargon common in the research-based information of their field. This raises the question of whether the “clarity-complexity” trade-off is a trade-off at all: even if complex research findings are presented in the clear language of a discipline, knowledge transfer will falter if that language is not readily understood by the audience.

Simplifying reports of research findings makes it more likely that people will comprehend the information and find it easier to follow the decision-making and policy-implementation processes that drew on it. Research findings do not need to be presented in complex language in order to be viewed as credible and be utilized; in fact, the opposite can be more effective. When research findings are made accessible through clear presentation, experts, government agencies, and the general population find it easier to understand and adhere to the policies and guidelines that the research informed. This practice greatly increases the value of information.

The value of information can have a considerable impact on the way it is used in decision making. Much like the CRELE attributes, value depends on who determines it. Bolam et al. (2019) discuss the challenges of decision making when research evidence is uncertain. They also draw attention to the diverse mix of disciplines often involved in decision processes; in conservation decision making, for example, these include decision theory, cognitive psychology, operations research, economics, and statistics (Bolam et al., 2019). Opinions from various academic backgrounds can prove helpful in decision making, but they can also create added confusion and disputes, which increase the uncertainty.

Although uncertainty is a given, as 100% certainty is rare, some steps can be taken to tackle it. Bolam et al. (2019) note the importance of estimating the magnitude of the uncertainties, predicting their probable consequences, and deciding how to address each uncertainty in the final decision. Assessing the uncertainties surrounding information that may be used in policy decisions makes it easier to determine the value and CRELE attributes of that information and to decide on the best way, or ways, to mobilize it into policy. Because uncertainty can lead to policy consequences of varying severity, assessing the value of evidence and measuring indicators of CRELE often occupy a large portion of policy formation activity.

In a paper entitled “Operationalizing information: Measures and indicators in policy formulation,” Markku Lehtonen (2017) asserts that indicators can have powerful and unforeseen consequences in the development of policies, and he discusses the various roles that indicators can play in policy formulation. He defines indicators as “variables that summarize or otherwise simplify relevant information, make visible or perceptible phenomena of interest, and quantify, measure, and communicate relevant information” (Lehtonen, 2017, p. 163). Economic and social performance indicators are prime examples of commonly used indicators (Lehtonen, 2017). According to Lehtonen, “indicators have become an increasingly common policy tool in practically all sectors of policymaking, produced and used at all levels of governance by a multitude of policy actors, and for a wide range of purposes” (2017, p. 161). Utilising indicators is thus one way to make research more accessible. However, Lehtonen (2017) explains that, thus far, the role of indicators in public policy has received little attention within the research community. Indicators have traditionally been perceived as tools for fostering accountability and evidence-based policy, but the author also points to a further function they can serve in policy formulation, namely consensus-building or consolidation (Lehtonen, 2017).

These indicators are key when assessing the information that will be used in policy formation. In “Assessing and labelling evidence,” Nutley, Davies, and Hughes (2019) discuss ways that evidence-promoting organisations have responded to the challenge of reaching a workable consensus on appropriate ways to identify and label good evidence. Much like the indicators noted above, bodies around the world, such as government agencies and independent public bodies, aim to inform and guide policy by gathering, sifting, and synthesising evidence-based information (Nutley et al., 2019). In determining the validity of such information, however, some issues arise. The authors explain: “discussion of evidence standards can be confusing because there is no overall consensus on the definition of terms such as the ‘quality’ or the ‘strength’, or how these aspects should be translated into evidence standards that set appropriate and demonstrated levels of attainment” (Nutley et al., 2019, p. 229). The source of research-based evidence is of equal concern, because some sources are seen as more credible than others; peer-reviewed analyses from respected research institutions, for example, are generally preferred (Nutley et al., 2019). To conclude, the authors argue that one criterion for the quality of evidence is perhaps neglected, namely that “it can garner attention, be engaged with and influence change,” a point taken up in the subsequent chapter of the same book (Nutley et al., 2019, p. 246).

After the research and evidence are assessed, challenges remain in using research-based information. In “Using evidence,” Boaz and Nutley (2019) discuss these challenges and consider “whether our understanding of research has progressed to a point where it is possible to identify, with confidence, the key features of an effective evidence ecosystem” (p. 252). To be clear, it is important to distinguish between the “use” and the “influence” of research: use refers to the instrumental application of research-based evidence, whereas influence refers to gradual conceptual shifts and problem reframing (Boaz & Nutley, 2019). Essentially, the typologies of research use point to different ways of modelling the research-use process (Boaz & Nutley, 2019). Ultimately, “research on using evidence is often ambiguous about what is meant by evidence, with many studies focusing rather more narrowly on the use of research-derived findings” (Boaz & Nutley, 2019, p. 272).

Measuring the use and influence of research-based information is often very difficult because different sectors use different methods and indicators to track how information is used. Calculating the value of information can lead to disagreements, just as trying to settle on a single definition of relevance, credibility, or legitimacy can. Identifying the key indicators of the value of information for a particular sector, and mobilizing that information to that sector, can lead to greater use of information in policy. This outcome, in turn, will make policy makers better informed and allow them to make better decisions.

References

Boaz, A., & Nutley, S. (2019). Using evidence. In A. Boaz, H. Davies, A. Fraser, & S. Nutley (Eds.), What works now? Evidence-informed policy and practice (pp. 251-277). Bristol: Policy Press.

Bolam, F. C., Grainger, M. J., Mengersen, K. L., Stewart, G. B., Sutherland, W., Runge, M. C., & McGowan, P. K. (2019). Using the Value of Information to improve conservation decision making. Biological Reviews, 94, 629-647. https://doi.org/10.1111/brv.12471

Heink, U., Marquard, E., Heubach, K., Jax, K., Kugel, C., Nesshöver, C., … Vandewall, M. (2015). Conceptualizing credibility, relevance, and legitimacy for evaluating the effectiveness of science-policy interfaces: Challenges and opportunities. Science and Public Policy, 42, 676-689. https://doi.org/10.1093/scipol/scu082

Lehtonen, M. (2017). Operationalizing information: Measures and indicators in policy formulation. In M. Howlett & I. Mukherjee (Eds.), Handbook of policy formulation (pp. 161-179). Cheltenham, UK; Northampton, MA: Edward Elgar Publishing.

Levin, B. (2013). The relationship between knowledge mobilization and research use. In S. P. Young (Ed.), Evidence-based policy-making in Canada (pp. 45-66). Don Mills, ON: Oxford University Press.

Nutley, S., Davies, H., & Hughes, J. (2019). Assessing and labelling evidence. In A. Boaz, H. Davies, A. Fraser, & S. Nutley (Eds.), What works now? Evidence-informed policy and practice (pp. 225-249). Bristol: Policy Press.

Authors: James Ledger and Lauren Skabar

This blog post is part of a series of posts authored by students in the graduate course “The Role of Information in Public Policy and Decision Making” offered at Dalhousie University.
