Disinformation, Misinformation, and Decision Making

The recent literature selected for review in this post examined the role that social media plays in propagating disinformation amongst the public and how this activity can influence decision-making. The following questions spurred our interest in this investigation: What is the role of the news media and contemporary social media in promoting awareness and use of research-based information? and What happens when the information is “fake”? Beginning with these questions, coupled with our analysis of the readings, we developed four key learning outcomes to take away from this post about disinformation:

  1. Understand how social media impacts the way people process information;
  2. Learn how misinformation and disinformation spread on social media;
  3. Discover how evidence-based information is used to influence decision making and public policies; and,
  4. Explore specific policies and cross-border collaboration initiatives, such as could be taken by Canada to combat disinformation.

We assigned the themes of each reading into these categories to facilitate our discussion. The main takeaways, derived from the learning outcomes, are addressed in the order of the list above.

Social networks, information ecosystems, and macro-level variables all need to be considered when developing an understanding of how people interact with information (Scheufele & Krause, 2019). Because information access today is primarily online, digital platforms play a major role in how people encounter and evaluate information. Misinformation operates at three levels: individual, group, and societal. Individuals may not understand the information they read, and they may also have trouble separating fact from fiction (Scheufele & Krause, 2019). Misinformation circulating within groups alters how those groups evaluate credibility, and it shapes the decisions people make.

Platforms like Facebook apply algorithms that create a continual feedback loop between computers and individuals: the platform suggests a connection, a person responds, and the algorithm adjusts its future suggestions (Bergstrom & Bak-Coleman, 2019). This human-machine feedback loop creates filter bubbles. Filter bubbles and information gerrymandering (purposefully skewing others’ perceptions of information) have the potential to create spaces for extremism. Today, misinformation can have a greater impact on society than it did in the past because it spreads quickly, and attempts to debunk a false claim often backfire: by drawing more attention to the claim, debunking efforts can lead more people to believe it.
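The suggest-respond-adjust feedback loop described above can be sketched as a toy simulation. This is our own illustrative model, not one drawn from the readings; the function names and parameters (`simulate_feedback_loop`, `drift`, and so on) are assumptions made purely for illustration. In the sketch, an "algorithm" repeatedly suggests the most like-minded other user, and each user's opinion drifts toward the suggestion, so the population tends to freeze into a few tight, separated clusters rather than converging on a shared view:

```python
import random

def simulate_feedback_loop(n_users=60, steps=40, drift=0.5, seed=7):
    """Toy filter-bubble model: each round, the 'platform' suggests the
    most like-minded other user, and the user's opinion (a number in
    [-1, 1]) moves partway toward that suggestion."""
    rng = random.Random(seed)
    opinions = [rng.uniform(-1.0, 1.0) for _ in range(n_users)]
    for _ in range(steps):
        for i in range(n_users):
            # Engagement-style suggestion: the closest-opinion other user.
            j = min((k for k in range(n_users) if k != i),
                    key=lambda k: abs(opinions[k] - opinions[i]))
            # The user responds; their view drifts toward the suggestion.
            opinions[i] += drift * (opinions[j] - opinions[i])
    return opinions

def cluster_count(opinions, tol=1e-3):
    """Count distinct opinion clusters (values closer than tol merge)."""
    count = 0
    last = None
    for o in sorted(opinions):
        if last is None or o - last > tol:
            count += 1
        last = o
    return count
```

Because each update moves a user toward another user's position, opinions never leave the initial range; the population does not polarize outward, but it does fragment into stable like-minded pockets, which is the "filter bubble" dynamic the readings describe.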

Continuing this exploration into how misinformation spreads, Stier, Schünemann, and Steiger (2018) suggest in their study, “Of activists and gatekeepers,” that people with extensive influence and a substantial social media presence can create interest in an issue, which they call an issue public. An issue public appears on social media in the form of a hashtag. Stier et al. studied how trends in hashtags can represent ways that issue publics are taken up, discussed, and potentially reach viral status, which could influence policy through the Advocacy Coalition Framework (ACF).

The ACF is a framework for representing how external parties, such as non-governmental organizations (NGOs) and advocacy groups, influence the decision-making process. The concept was originally introduced by Paul Sabatier in the late 1980s and later elaborated in the edited volume Theories of the Policy Process. Since then, much has changed in the way advocacy groups operate. For instance, Stier et al. suggest that the ability to create an issue public is a new method for NGOs and advocacy groups to influence decision-making processes, adding a new layer to the existing ACF model. In their study of Twitter posts about two policy debates (climate change and net neutrality), Stier et al. found that Twitter certainly influences decision-makers, and that decision-makers using Twitter also influence the public. However, the rate, timing, and total impact of this influence can fluctuate on a case-by-case basis (Stier et al., 2018).

Of course, if Twitter activity is able to influence policy processes, and disinformation spreads and goes viral on Twitter, then Twitter traffic may also allow disinformation to shape policy development. To address this situation, the European Commission created a High Level Expert Group (HLEG) on fake news and online disinformation. The HLEG was asked to assess a range of current and potential policies to combat disinformation for their effectiveness, with the objective of identifying a set of best practices for governments seeking to combat disinformation. The best practices that the HLEG identified could be applied in other jurisdictions. For example, the Canadian Radio-television and Telecommunications Commission (CRTC), which regulates broadcasting and telecommunications in Canada, could institute the following practices recommended by the HLEG:

  1. Enhance transparency of online news, involving an adequate and privacy-compliant sharing of data about the systems that enable their circulation online;
  2. Promote media and information literacy to counter disinformation and help users navigate the digital media environment;
  3. Develop tools for empowering users and journalists to tackle disinformation and foster a positive engagement with fast-evolving information technologies;
  4. Safeguard the diversity and sustainability of the … [Canadian] news media ecosystem; and
  5. Promote continued research on the impact of disinformation in … [Canada] to evaluate the measures taken by different actors and constantly adjust the necessary responses. (HLGoFNandOI, 2018, pp. 5–6)

We believe that if Canada were to implement this suite of measures, it could be a helpful step towards reducing the negative impacts of social media on the public, as well as on decision-makers. Alongside these practices, a finding in the Stier et al. paper suggests that the ability to distinguish false news from true stories, as well as how significant the public perceives this issue to be, varies with education level (Stier et al., 2018). With this point in mind, we believe it is essential that governments in Canada and elsewhere institute public education programs that reach citizens across all socioeconomic strata, in an effort to counter fake news and its spread through social media.



Bergstrom, C. T., & Bak-Coleman, J. B. (2019). Gerrymandering in social networks. Nature, 573(7772), 40–41. https://doi.org/10.1038/d41586-019-02562-z [Online version is entitled: Information gerrymandering in social networks skews collective decision-making].

High Level Group on Fake News and Online Disinformation [HLGoFNandOI]. (2018). A multi-dimensional approach to disinformation: Report of the independent High Level Group on Fake News and Online Disinformation (39 pp.). Luxembourg: European Union. Retrieved from https://ec.europa.eu/digital-single-market/en/news/final-report-high-level-expert-group-fake-news-and-online-disinformation

Scheufele, D. A., & Krause, N. M. (2019). Science audiences, misinformation, and fake news. Proceedings of the National Academy of Sciences, 116(16), 7662–7669. https://doi.org/10.1073/pnas.1805871115

Stier, S., Schünemann, W. J., & Steiger, S. (2018). Of activists and gatekeepers: Temporal and structural properties of policy networks on Twitter. New Media & Society, 20(5), 1910–1930. https://doi.org/10.1177/1461444817709282


Authors: Cora-Lynn Munroe-Lynds & Jean-Luc Lemieux


This blog post is part of a series of posts authored by students in the graduate course “The Role of Information in Public Policy and Decision Making” offered at Dalhousie University.
