Grounding the Science-Policy Interface in Empirical Study

When studying the science-policy interface, it is easy to become lost in abstractions and forget that the object of study is a living, breathing entity composed of institutions and people immersed in the ebb and flow of complex forces. In a recent paper entitled “Balancing credibility, relevance and legitimacy: A critical assessment of trade-offs in science-policy interfaces,” Sarkki et al. (2014) interviewed members of science-policy interfaces (SPIs) to examine the impacts of trading off the attributes of credibility, relevance, and legitimacy. The authors also explicitly address the need for grounded, empirical research that can move the study of SPIs from abstract theory toward the development of useful frameworks for individuals operating at the science-policy interface. The necessity of empirically driven research that can produce practical, applicable recommendations is a viewpoint shared by researchers in the Environmental Information: Use and Influence (EIUI) program, who regularly pursue real-world case studies of the science-policy interface.

 

SPIs and CRELE

While the science-policy interface is commonly understood as the point of intersection between science and policy, Sarkki et al. (2014) adopt a more tangible definition of SPIs as groups created to act as bridges between science and policy camps (e.g., the Millennium Ecosystem Assessment, the Intergovernmental Platform on Biodiversity and Ecosystem Services, and the Economics of Ecosystems and Biodiversity). In concert with existing literature (see McNie, 2007; Mitchell, Clark, & Cash, 2006), they argue that SPIs require the key attributes of credibility, relevance (sometimes referred to as salience), and legitimacy (CRELE) in order to be effective.

Credibility refers to the perceived quality of the information being exchanged, i.e., its scientific validity and accuracy. Relevance refers to the timeliness of the information and its appropriateness to the context for which it was produced. Legitimacy refers to the perception that the information is free from bias, i.e., that it was derived in a fair and balanced manner reflecting a diversity of perspectives. While CRELE has become an integral aspect of science-policy interface research, Sarkki et al. (2014) discuss the concept not as a panacea, but rather as an imperfect framework with sometimes unforeseen trade-offs that can produce tension and uncertainty in SPIs.

 

Trade-offs

The three CRELE attributes are not always mutually compatible. Certain attributes are often favoured over others for a variety of complex, contextually specific reasons. Trade-off is the term Sarkki et al. (2014) use for the impact of promoting some aspects of CRELE over others, e.g., simplifying information to increase clarity and uptake often necessitates omissions, which can reduce quality. They identified trade-offs commonly encountered in SPIs by systematically searching interview data for tensions and contradictions among incompatible but important issues. Four primary trade-offs were discovered and are summarized in the table below:

 

Personal time trade-off: Interfacing (spending time in science-policy work) versus focusing on the main role (conducting research).

Clarity-complexity trade-off: Simple, strong, clear messages (relevance) versus thorough treatment of uncertainties and systemic dimensions (credibility and legitimacy).

Speed-quality trade-off: Timely and rapid responses to policy needs (relevance) versus time-consuming quality assessment (credibility) and/or consensus building (legitimacy).

Push-pull trade-off: Following strong policy demand (relevance) versus more supply-oriented research strategies to enable identification of emerging issues or development of innovative solutions (credibility and legitimacy).

Table 1. Adapted from Sarkki et al. (2014, p. 197)

 

Recommendations

The primary take-away of the paper is that trade-offs, and the actions taken to mitigate their effects, are highly dynamic and context-dependent. Despite this caveat, the authors present several recommendations to help SPIs navigate the complexity:

 

  • Understanding the Context: Understanding the context in which a policy problem operates can help SPIs identify which aspects of CRELE are most lacking, so that steps can be taken towards improvement, e.g., where scientists suffer from a lack of credibility, trade-offs that promote quality assurance and the communication of uncertainties should be favoured.

 

  • Considering the Policy Cycle: The Policy Cycle, like the Information Life Cycle, divides policy creation into a repeating cycle of constituent components. The stages of the Policy Cycle can help to inform which trade-offs are most appropriate, e.g., the communication of uncertainties can be important in early stages, but less relevant near the end of the Policy Cycle.

 

  • Policy Problem Type: Policy problems can be well-structured, moderately structured, badly structured, or unstructured (Sarkki et al., 2014). The type of policy problem can indicate which CRELE trade-offs should be prioritized. For example, badly structured problems require legitimacy, so SPIs might employ strategies to increase stakeholder involvement.

 

  • Resource Dependent Trade-offs: Sometimes trade-offs will be resource dependent, i.e., limited resources preclude exploring all options. In these cases, knowing what resources are available and accepting limitations can help to reveal what aspects of CRELE are realistically achievable.

 

Filling a Gap

The notion that obtaining a perfect balance of credibility, relevance, and legitimacy is not always possible, or even desirable, is not new. In an earlier articulation, Mitchell, Clark, and Cash (2006) suggested that one of the keys to the framework is to find the appropriate CRELE balance for a given context. However, the value of Sarkki et al.’s (2014) paper is its shift away from theory towards practical, empirically-generated data that explore the perspectives of those working in the field. In so doing, the authors have highlighted a larger issue at the heart of science-policy interface research: namely, the absence of empirically-driven studies offering practical, applicable recommendations for individuals operating within a science-policy interface.

To help fill this gap, the EIUI research team has conducted several case studies that use empirical data, gleaned from interviews, surveys, and direct observation of science and decision-making bodies, e.g., the Food and Agriculture Organization of the United Nations. Recent studies include: a case study examining the awareness and use of the State of the Scotian Shelf Report; a case study investigating how the science-policy interface operates in the FAO, the Canadian federal Department of Fisheries and Oceans, and the Northwest Atlantic Fisheries Organization; a study that examines the perceptions of users, creators, and curators of digital coastal atlases; a study about the information products of the Gulf Watch monitoring program of the Gulf of Maine Council on the Marine Environment; and a case study probing inter-organisational communication networks of tidal power stakeholders operating in the Bay of Fundy. The results from several of these studies will be posted to this website in the coming weeks.

The EIUI program was developed to explore the use and influence of scientific information, particularly as it pertains to policy creation. In addition to advancing our own understanding of these complex phenomena, the knowledge gained from this initiative is being shared with our partner organizations and beyond to promote the effectiveness of science-policy interface processes. In highlighting the contextual nature of the science-policy interface, Sarkki et al. (2014) reiterate the need for context-specific research that grounds the discussion on this subject and generates recommendations that are credible, relevant, and legitimate for individuals living and working in the amorphous science-policy interface.

 

References

McNie, E. C. (2007). Reconciling the supply of scientific information with user demands: An analysis of the problem and review of the literature. Environmental Science & Policy, 10(1), 17–38. doi:10.1016/j.envsci.2006.10.004

Mitchell, R., Clark, W., & Cash, D. (2006). Information and influence. In R. Mitchell, W. Clark, D. Cash, & N. Dickson (Eds.), Global environmental assessments: Information and influence (pp. 307–338). Cambridge, MA: MIT Press.

Sarkki, S., Niemelä, J., Tinch, R., van den Hove, S., Watt, A., & Young, J. (2014). Balancing credibility, relevance and legitimacy: A critical assessment of trade-offs in science–policy interfaces. Science & Public Policy, 41(2), 194–206.

 

Author: Lee Wilson
