“Current Literature” is a recurring feature highlighting recent publications of interest on the science-policy interface.
One of the most common ways people find and access information is through search engines such as Google and Bing. While search engines are useful tools for locating information, the way they are structured can pose challenges for the uptake of scientific information. Search engines are not impartial arbiters of information: each applies its own algorithms to match results to a user’s query, and many track a user’s search history to return results they determine to be relevant to that user.
How people use search engines and how search engine result pages (SERPs) present information are important to understand, especially given current concerns over fake news. It is necessary to determine how search engines organize search results and to understand how SERPs interact with the biases users bring into the search process (Novin & Meyers, 2017). Researchers should also aim to understand how SERPs present information and how that display primes users to accept or reject certain results based on the perceived credibility of the information (Unkel & Haas, 2017). This blog post focuses on three recent papers, by Segev and Sharon (2016), Unkel and Haas (2017), and Novin and Meyers (2017), discussing the information gathering process and the role SERPs play in how users treat results. The three papers emphasize different aspects of information seeking, but read together they provide a valuable synopsis of why people use search engines and how SERPs can influence their decisions about using particular information sources.
Why people search for scientific information
In addition to focusing on how individuals obtain information, it is important to understand why they were seeking information in the first place. Various studies have indicated that people frequently use the Internet to access scientific information, but little research has been conducted to ascertain what triggers this behaviour (Segev & Sharon, 2016). Segev and Sharon examined how two types of cues may trigger searches for scientific information: ad hoc cues, which tend to be tied to news coverage of specific topics, and cyclic cues, which tend to correspond with academic calendars. Using publicly available data from both Google and Wikipedia, Segev and Sharon theorized that users would apply different methods to find information depending on whether the cue was ad hoc or cyclic.
Segev and Sharon studied how long users search with certain terms and what the data reveal about users’ interest in scientific information. Of the four hypotheses put forward in their paper, one in particular is of interest to broader research into why individuals use search engines: that how long users search for a term varies depending on which cue triggers the information seeking behaviour. Segev and Sharon found that cyclic cues did lead to extended use of particular search terms, particularly when a term was related to a science curriculum subject. In comparison, an ad hoc cue such as a natural disaster or a scientific discovery tended to produce a short, intense burst of searching.
In their conclusions, Segev and Sharon (2016) advise policy makers to give attention to search engine optimization of their internet-based information products when making announcements or initiating public educational campaigns. This could include testing likely searches prior to a launch and ensuring relevant information is displayed in the search results. The authors argue that implementing this strategy before the rollout of a campaign, program, or policy announcement can extend the reach of the project and make the information easier for users to find. They also suggest that instructional sessions on information literacy be included in school science courses to help students understand how search engines work and how best to use them.
How SERPs influence information seeking behaviours
Where Segev and Sharon investigated why people search for information and tracked their searching patterns, Unkel and Haas (2017) and Novin and Meyers (2017) looked at how SERPs display information and how that visual presentation interacts with the biases users bring to the search process.
One way users react to SERPs is to look for credibility cues to guide their selection from the search results. Unkel and Haas (2017) identified three types of such cues: reputation, neutrality, and social recommendation. They posited that how results are ranked and displayed on a page influences users’ decisions; for many users, the higher a search result is ranked, the more credible it is believed to be. Unkel and Haas tested whether credibility cues were cumulative for users, and the extent to which previous Internet experience and use of search engines might affect their information seeking behaviour. To test their hypotheses, they gave university students a modified version of the DuckDuckGo search engine to search on issues relating to rent control and video streaming. They found that users were far more likely to select results based on the perceived reputation of a result, in addition to choosing higher ranked results. The results ranked first were the most likely to be clicked, and users rarely scrolled to the bottom of the results page. The credibility of a result, and how users determined that credibility, were secondary to where the result was positioned on the page.
In related research, Novin and Meyers (2017) studied how cognitive biases affect users seeking scientific information from SERPs. They identified four related cognitive biases when students were presented with mock SERPs containing information about biofuels: priming (a familiar layout or website directing users to certain results); anchoring (where a result falls on the page); framing (how the information is presented); and availability (how easily users can access the information). How much information each result presented also influenced users. Novin and Meyers found, for example, that search results for academic articles offering little background information or additional text were more likely to be skipped over, even when placed high in the SERP. In contrast, students were much more likely to select a site like Wikipedia that was optimized for search engines.
While all three papers highlight important issues about how the public interacts with search engines such as Google to acquire information, some limitations are worth noting, especially in the studies of user interactions with SERPs. Unkel and Haas focused on two non-scientific issues, whose credibility cues may differ from those of scientific topics. In their examination of how cognitive biases affected users’ searches for scientific information, Novin and Meyers used a relatively uncontroversial topic. These papers provide a good base, but further research could consider how users search for information on more controversial topics, such as climate change or vaccines, and how search engines present results to those queries.
The papers gave limited attention to how repeated use of a specific search engine may influence the results it returns. Novin and Meyers (2017) discussed the benefits of collaborative searching, where students work together to research a topic and discuss their findings. They frame this proposal as a way to bring different viewpoints into the search process, helping students find information they might otherwise miss due to search engine algorithms or the biases identified earlier. For controversial scientific issues, a user’s previous searches may affect the information the SERP provides or how it ranks the results. This subject is a rich area for future research, and one that will remain relevant as long as search engines are key tools for information seekers.
The three articles all examined information seeking behaviour: Segev and Sharon looked into why users turn to search engines such as Google, while Unkel and Haas and Novin and Meyers examined how SERPs affect interpretation of results. These papers advance understanding of how best to present accurate, relevant information through SERPs in order to reach the widest possible audiences. Understanding how the public gains access to scientific information, as well as what factors influence their decisions about the information presented in SERPs, is increasingly important for scientists and policymakers. If evidence-based policies are to succeed with public buy-in, especially on contentious issues, ensuring that accurate and easy-to-understand information is accessible is a necessary step.
Novin, A. & Meyers, E. (2017). Making sense of conflicting science information: Exploring bias in the search engine result page. In Proceedings of the 2017 Conference on Human Information Interaction and Retrieval (CHIIR ’17), Oslo, Norway (pp. 175-184). New York: ACM. https://doi.org/10.1145/3020165.3020185
Segev, E. & Sharon, A. J. (2016). Temporal patterns of scientific information-seeking on Google and Wikipedia. Public Understanding of Science, 1-17. https://doi.org/10.1177/0963662516648565
Unkel, J. & Haas, A. (2017). The effects of credibility cues on the selection of search engine results. Journal of the Association for Information Science and Technology, 68(8), 1850-1862. https://doi.org/10.1002/asi.23820
Author: Diana Castillo