What is “Evidence” in Evidence-Based Policy? Brief Discussion of Selected Recent Literature

In today’s post-pandemic world, where evidence-based decision making has gained extensive public prominence, understanding how evidence is defined helps to explain why its use is so frequently invoked in diverse settings and why evidence can be contested. In this blog post, we summarize and integrate the key findings of six recent academic articles that explore the question “what is evidence?” in the context of evidence-based policy (EBP). The post is structured around the key themes that emerged from our review of this select scholarship.

These articles make clear that “evidence” is not a singular referent but an umbrella category denoting different kinds of information that are classified in different ways. Whereas scholarship on EBP has historically focused narrowly on the use of research-based evidence, newer scholars and practitioners have pushed to broaden the evidence boundary to include additional types of information (e.g., personal testimony and by-products of scientific research) that, while not necessarily “valid” or “generalizable” in the scientific sense, can nonetheless inform policy in myriad useful ways (Choi et al., 2022). For example, in their study of 189 testimonies delivered during public hearings on single-payer healthcare in New York State, Choi et al. (2022) show how anecdotes, despite being an experiential kind of evidence based on single observations, can inform policymakers about the emotional impacts of specific health insurance harms, such as the familial devastation and financial ruin that can result from an uninsured person becoming ill (p. 651). Bozeman (2022) argues that the by-products of scientific research (e.g., analytical tools, data sets, and methods) are a kind of evidence that can usefully inform government responses to crises, a point made clear during the COVID-19 pandemic. Flemming & Noyes (2021) explain that, while qualitative evidence is not generalizable, methodologies such as qualitative evidence synthesis (QES), in which new evidence is synthesized from primary qualitative research, help make qualitative kinds of evidence applicable to a broader range of policy problems and contexts. Yet, despite broadening conceptions of what counts as evidence and emerging methodologies like QES, an evidence hierarchy that privileges randomized controlled trials and meta-analyses above all else persists in the minds of many researchers and policy actors.

Nutley et al.’s (2019) chapter on assessing and labelling evidence highlights the variety of ways that evidence is assessed, classified, and labelled. While their account is not exhaustive, evidence can be categorized by its potential uses, as with Cairney’s (2016) distinction between evidence about the extent of a problem and evidence about the effectiveness of different policy solutions; by the methods used to generate it, as with the qualitative/quantitative distinction (Bozeman, 2022; Flemming & Noyes, 2021); and by the observation period from which it emerged and its relative level of subjectivity (Choi et al., 2022). The scientific community has developed a process (peer review) and a set of standards (including validity and generalizability) for assessing the quality and integrity of research. The rigour and transparency of this process and these standards afford research-based evidence its “cream of the crop” status in the hegemonic EBP paradigm (Nutley et al., 2019, pp. 225, 238). However, Nutley et al. (2019) remark that the standards researchers use for determining what is good evidence may not be appropriate for research-using organisations, such as government departments or non-profits (p. 226). They assert that what constitutes “good evidence” is highly contextual. Consequently, organizations that use evidence do not have the luxury of pulling ready-made evidence standards off the shelf; they must decide for themselves how they will assess the quality and legitimacy of evidence. The editors of Nature (2022) discuss how evidence-reviewing organizations can build a reputation for quality and reliability, much as Cochrane has done within the field of medicine. Indeed, brand names can serve as useful heuristics for assessing evidence quality, especially for composite evidence types like meta-analyses and systematic reviews (Editors, 2022).

Geddes’ (2021) and Bozeman’s (2022) research demonstrates how, regardless of its integrity or type, evidence takes on a life of its own once it is published, a life influenced in part by how various policy and policy-adjacent actors interpret and use it. Geddes (2021) observes that actors within the UK political and legislative spheres use evidence in ways that promote their political goals, goals informed by their desire to advance their personal beliefs while appeasing the political arena (pp. 43, 51). Political actors gather, record, and interpret knowledge in ways that are “not entirely led by evidence,” raising concerns that the meanings of evidence will be skewed (Geddes, 2021, pp. 50-51). Bozeman (2022) discusses how citizens, policymakers, and scientists tend to rely on curated research and various second- or third-hand reports for evidence, rather than peer-reviewed science in its original form. Instead of directly “following the science” or being “entirely led by evidence,” most users of evidence, including scientific evidence, are actually “following the curation” (Bozeman, 2022, p. 7). Bozeman (2022) argues that while curation can improve access to otherwise exclusionary peer-reviewed literature, it often incorporates the biases of curators and their users. Social media, as a knowledge curator of sorts, enables users to consume information that aligns with their beliefs or that is produced by those they view as trustworthy (Bozeman, 2022). While scientists can usually discern differences in the quality and validity of the evidence that curators present, this task is much more difficult for most policymakers and citizens (Bozeman, 2022). Evidence curation thus has the potential to lead to the skewing or misuse of evidence, especially as evidence must increasingly compete with political self-interest and influential media claims (Bozeman, 2022).

In terms of evidence’s post-publication life, Bozeman (2022) also observes that science does not remain static. The dynamic nature of science is well acknowledged and understood within the scientific community; scientists expect findings to change or become obsolete as science evolves (Bozeman, 2022). Non-scientist users of scientific information, however, often overlook this dynamism (Bozeman, 2022). Bozeman (2022) suggests that understanding and embracing the limits of science is essential to effectively using scientific knowledge in a policy setting: policies must evolve with the science informing them, and policymakers must acknowledge uncertainties to build citizen trust. Bozeman (2022) and Oreskes (2021) contend that the research and theory surrounding the use of science in public policy fail to emphasize the dynamic, changeable, and constantly contestable nature of science, and that this neglect limits the effective use of science in public policy.

The literature we reviewed for this blog post did not provide a definitive answer to the question “what is evidence?” Instead, it emphasized the plurality of evidence types, making clear that each type has unique strengths and weaknesses. It outlined different evidence taxonomies and strategies for assessing the quality of evidence, and it documented how the meanings attached to evidence are mutable, liable to change as evidence is used, curated, and interpreted by different actors in the policy sphere. While an evidence hierarchy dominated by research-based evidence continues to reign within the traditional/hegemonic EBP paradigm, initiatives from many quarters aim to broaden the evidence boundary to allow for the consideration of less systematic and sometimes less objective kinds of information in the formation of evidence-informed public policy.


References

Bozeman, B. (2022). Use of science in public policy: Lessons from the COVID-19 pandemic efforts to “Follow the Science.” Science and Public Policy, scac026. https://doi.org/10.1093/scipol/scac026

Choi, Y., Fox, A. M., & Dodge, J. (2022). What counts? Policy evidence in public hearing testimonies: The case of single-payer healthcare in New York State. Policy Sciences, 55(4), 631-660. https://doi.org/10.1007/s11077-022-09475-1

Editors. (2022, November 17). Agriculture sorely needs a system for evidence synthesis. Nature, 611(7936), 425-426. https://doi.org/10.1038/d41586-022-03694-5

Flemming, K., & Noyes, J. (2021). Qualitative evidence synthesis: Where are we at? International Journal of Qualitative Methods, 20, 160940692199327. https://doi.org/10.1177/1609406921993276

Geddes, M. (2021). The webs of belief around “evidence” in legislatures: The case of select committees in the UK House of Commons. Public Administration, 99(1), 40-54. https://doi.org/10.1111/padm.12687

Nutley, S. M., Davies, H. T. O., & Hughes, J. (2019). Assessing and labelling evidence. In A. Boaz, H. T. O. Davies, A. Fraser, & S. M. Nutley (Eds.), What works now? Evidence-informed policy and practice (Chapter 11, pp. 225-249). Bristol: The Policy Press.

Oreskes, N. (2021). Why trust science? Perspectives from the history and philosophy of science. In Why trust science? (pp. 15-68). Princeton: Princeton University Press.


Authors: Julia Ackroyd, Therese Wilson, and Kendra Perrin

This blog post is part of a series of posts authored by students in the graduate course “Information in Public Policy and Decision Making” offered at Dalhousie University.
