Information Use in Policy Formulation: Determining Impact

Determining the impact of research in policy-making contexts is a challenging task, as recent literature highlights. This blog post surveys several publications that address this subject. We begin with Sandra Nutley, Isabel Walter, and Huw Davies, who draw attention to important issues in measuring the impact of research. They point out, for example, that determining how research is understood and used is difficult, since research publications can be complex to read and “use” is itself a multifaceted concept (Nutley, Walter, & Davies, 2007). Nonetheless, they note that when financial support for research is being contemplated, especially by governments, consideration should be given to the impact the research will have on policy development. In their view, assessing impact is central to determining the effectiveness of research.

Questions surround every method used to assess the impact of research. One widely accepted approach works from a forward-looking perspective, tracking research from its beginning to its use and then to its impact. Bibliometric analysis, for example, applies a quantitative measure, assigning impact factors to research literature as a proxy for its effectiveness; this technique is often seen in academic settings (Nutley, Walter, & Davies, 2007). However, researchers cite publications for many different reasons, which raises questions about how much this method can reveal about impact. A second approach takes a backward-looking perspective focused on research user communities: it starts with the research output and works backward, in a manner similar to a criminal investigation. Challenges with this approach include untangling various influences, occasional lack of clarity about who did what, shifts in focus during a research project, and frequent turnover of staff who could describe what happened. Simple and complex models alike present challenges when used to assess the impact of research. As Nutley, Walter, and Davies state, “research impact … is a somewhat elusive concept, difficult to operationalise, political in essence, and hard to assess in a robust and widely accepted manner” (p. 295).
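
To make the bibliometric approach concrete, the sketch below computes the classic two-year journal impact factor, one common bibliometric indicator. This is an illustration only: the figures are invented, and the formula shown is the standard one rather than anything specific to Nutley, Walter, and Davies.

```python
# Illustrative sketch only: the classic two-year journal impact factor,
# one common bibliometric indicator (not the authors' own method).
def impact_factor(citations_to_prev_two_years: int,
                  citable_items_prev_two_years: int) -> float:
    """Citations received in year Y to items published in Y-1 and Y-2,
    divided by the number of citable items published in Y-1 and Y-2."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# Hypothetical example: 210 citations in 2010 to articles published in
# 2008-2009, with 120 citable items published in those two years.
print(round(impact_factor(210, 120), 2))  # 1.75
```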

The view that public money should be spent on scientific research to support decision-making has gained momentum in recent decades (Eden, 2011). The multi-year American Sustainability of semi-Arid Hydrology and Riparian Areas (SAHRA) project, described by Susanna Eden, is a good example of an effort responding to this trend. Projects conducted within SAHRA are location-based and grouped around stakeholder-relevant questions in order to produce decision-relevant research. Eden notes that the “principle for producing science for decision support,” which involves research users directly, has been emphasized in the literature (Eden, 2011, p. 14). Where this principle was applied, both researchers and stakeholders reported better communication, increased trust, and mutual understanding. In SAHRA, adaptive learning was observed through cross-disciplinary programs that encouraged the connection of capabilities and the building of horizontal linkages. Transparent processes, deliberate relationship- and trust-building, and close stakeholder engagement were all employed to facilitate the adoption of scientific research by stakeholders.

Policy- and decision-makers increasingly recognize the importance of understanding environmental problems and the necessity of formulating strategies to mitigate or adapt to environmental change, which in turn requires an appreciation of both scientific and social processes. This perspective has led to a growing demand for scientific knowledge that can support decision making. Global environmental assessments have become an arena where science and policy interact, and formal efforts have been made to present assessments in a form intended to be useful for decision making (Clark, Mitchell, & Cash, 2006).

Clark, Mitchell, and Cash note that there are two models of information influence in decision making. The “rational actor” model sees policy makers analyzing the costs and benefits of available alternatives to further their objectives; in this view, policy makers are assumed to understand the problems they face and to be able to ask the right questions about the best course of action. The “rule of thumb” model, by contrast, holds that decision makers reduce the need to collect and process information and settle for “good enough” decisions (Clark et al., 2006, p. 9).

National policymakers may be suspicious of research generated by others, given the belief that governments disseminate information to manipulate and gain advantage. Clark et al. concluded that information is more influential in policy decision making when it is seen as credible, relevant, and legitimate: policy makers must believe the information was produced by a process that took account of the concerns and insights of relevant stakeholders and was procedurally fair.

Ouimet et al. (2010) explore the role that direct interaction with academics and researchers plays in decision-making (i.e., policy development) processes. The authors took an alternative approach, first conducting a historical analysis of the most prominent publications in the same vein and looking particularly for gaps or faults in methodologies and metrics. Ouimet et al. also employed data collected through a cross-sectional survey of public servants across a variety of Quebec ministries (with several exclusions). They posited that by targeting a narrow cross-section of lower-level policy analysts in specific roles and departments, they could show that scientific information moves upstream by way of briefing notes and press reviews, which contain carefully and succinctly synthesized information. These outputs, which they refer to as “translated research,” work their way upward through ministries and eventually factor into policy decisions.

Ouimet et al.’s survey was conducted by telephone and captured roughly 50% of the intended target audience, resulting in a larger sample size than any prior study of its kind. While the study found that direct interaction with scientific and academic researchers was the strongest correlate of research information being used in policy formation, the authors also noted that it was the primary factor by only a slim margin. Respondents’ age, education level, access to scientific and academic information, and other correlates were also closely related to the frequency of information use. Ouimet et al. carefully incorporated improvements to avoid errors made in prior studies with similar methodologies, and they explored in detail the potential weaknesses that remain in their methods. The conclusions they reached will undoubtedly inform future studies of the same sort and continue to sharpen understanding of how research is incorporated into policy formation.

In the appendix to their report, Prewitt, Schwandt, and Straf (2012) discuss methods used to mitigate the effects of confounding variables. Experiments, in particular Randomized Controlled Field Trials (RCFTs), in which the effects of a treatment are compared against a similar, untreated control group, are one example of research methods used to obtain findings that can frame policy. Because assignment in RCFTs is randomized, secondary variables are balanced across groups and are unlikely to distort the estimated effect of the treatment. In contrast to experiments, observational studies do not allow direct control by the researcher; instead, they involve observing subjects or outcomes. Observational studies risk selection bias due to the lack of randomization. Nonetheless, they remain important for revealing associations and guiding the development of theory and models, and they may uncover patterns or rich context that might otherwise be missed. Meta-analysis involves the statistical combination of the results of multiple studies to estimate the effect of a policy or program more precisely; it also provides an opportunity to explore how different contexts can affect the outcome of a policy. Qualitative research uses methods such as interviews, content analysis of documents, and comparative studies to assess the effectiveness of various policy interventions and to understand the conditions that favor that effectiveness. Alternatively, qualitative archival studies may support the development of models based on past evidence or decisions in order to predict future behaviors.
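
As a hedged illustration of the meta-analytic step described above, the sketch below pools hypothetical effect estimates using fixed-effect inverse-variance weighting, a standard technique; the numbers and the function are invented for this post and do not come from Prewitt, Schwandt, and Straf.

```python
# Illustrative sketch of a fixed-effect meta-analysis using
# inverse-variance weighting; the effects and standard errors below
# are hypothetical, not taken from any study discussed in this post.
import math

def pool_fixed_effect(effects, std_errors):
    """Combine per-study effect estimates into one pooled estimate.

    Each study is weighted by 1 / SE^2, so more precise studies
    (smaller standard errors) contribute more to the pooled result.
    """
    weights = [1.0 / se**2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Three hypothetical program evaluations (effect size, standard error):
effects = [0.30, 0.10, 0.22]
std_errors = [0.12, 0.08, 0.15]
estimate, se = pool_fixed_effect(effects, std_errors)
print(f"pooled effect = {estimate:.3f}, standard error = {se:.3f}")
```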

Thelwall et al. (2010) address the suitability of webometrics for measuring information flows across organizations and scientific fields. By analyzing hyperlinks and the connections among websites, researchers expect to reveal evidence of the flow of information from one organization to another and to identify key actors in that flow. Webometrics can be an appropriate method for identifying patterns of relationships among organizations that might not surface through other methods, or for identifying key nations and international connections (Thelwall et al., 2010). However, webometrics cannot capture all channels of knowledge transfer. In the study by Thelwall et al., interviewees identified seminars, workshops, and scientific conferences as important sources of information, yet these may leave no trace on the web and therefore go unmeasured. Webometric measures are not always accurate representations of what actually happens and should be used alongside other approaches (e.g., interviews). Thelwall et al. (2010) recommend that webometrics be seen as one way to understand research processes and suggest that it might be used to monitor emerging fields, target policy to enhance collaboration, identify missing links, and better understand the most effective communication channels. Although in some ways inferior to traditional methods (such as bibliometrics), webometrics can be useful for new, small fields and can “deliver policy-relevant…indicators to promote effective collaboration and communication” (Thelwall et al., 2010, p. 1473).
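
For readers curious what a webometric analysis might look like in practice, here is a minimal sketch using the networkx Python library. The sites and hyperlinks are hypothetical and are not drawn from Thelwall et al.’s data; in-degree centrality is used as one simple indicator of a “key actor” in the information flow.

```python
# A minimal sketch of the kind of hyperlink analysis webometrics
# relies on, using networkx; the sites and links are invented for
# illustration only.
import networkx as nx

# Directed edges: (source site, target site), meaning "source links to target".
hyperlinks = [
    ("university-a.example", "research-institute.example"),
    ("university-b.example", "research-institute.example"),
    ("ministry.example", "research-institute.example"),
    ("research-institute.example", "university-a.example"),
]

G = nx.DiGraph(hyperlinks)

# Sites attracting many inlinks are candidate "key actors" in the flow
# of information; in-degree centrality is one simple indicator of this.
for site, score in sorted(nx.in_degree_centrality(G).items(),
                          key=lambda item: item[1], reverse=True):
    print(f"{site}: {score:.2f}")
```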

The literature discussed in this blog post effectively surveys the challenges faced jointly by researchers and policy makers in measuring the influence of research on policy development. Gaps in research use, limitations in research methodologies, and changes in the operation of government departments and ministries are all cited as problems seeking solutions. The continued work to improve the application of research in policy formation processes is encouraging. Further study is merited; it should be supported by all levels of government and conducted in collaboration with academics and other organizations.

 

References

Clark, W. C., Mitchell, R. B., & Cash, D. W. (2006). Evaluating the influence of global environmental assessments. In R. B. Mitchell, W. C. Clark, D. W. Cash, & N. M. Dickson (Eds.), Global environmental assessments: Information and influence (pp. 1-28). Cambridge, MA: MIT Press.

Eden, S. (2011). Lessons on the generation of usable science from an assessment of decision support practices. Environmental Science & Policy, 14(1), 11-19.

Nutley, S. M., Walter, I., & Davies, H. T. O. (2007). Using evidence: How research can inform public services. Bristol: The Policy Press.

Ouimet, M., Bédard, P., Turgeon, J., Lavis, J. N., Gélineau, F., Gagnon, F., & Dallaire, C. (2010). Correlates of consulting research evidence among policy analysts in government ministries: A cross-sectional survey. Evidence & Policy, 6(4), 433-460.

Prewitt, K., Schwandt, T. A., & Straf, M. L. (Eds.). (2012). Using science as evidence in public policy (Appendix A: Selected major social science research methods: Overview, pp. 91-101). Washington, DC: The National Academies Press.

Thelwall, M., Klitkou, A., Verbeek, A., Stuart, D., & Vincent, C. (2010). Policy-relevant webometrics for individual scientific fields. Journal of the American Society for Information Science and Technology, 61, 1464-1475.

 

Authors: Melissa Archibald, Benjamin Palmer, Bi Ying (Michelle) Qui, and Sabrina Sullivan

This blog post is part of a series of posts authored by students in the graduate course “The Role of Information in Public Policy and Decision Making,” offered at Dalhousie University.

 
