The Transparency and Trust Paradox: More Accessible Knowledge and Weaker Accountability

As scientific evidence becomes more accessible and automated than ever before, why is it harder to trust the authenticity of research? Emerging tools in artificial intelligence (AI), open data portals, open science, and preprints have enabled faster, more efficient knowledge access and sharing. At the same time, these advances have created new challenges by facilitating the output of paper mills, AI-generated manuscripts, fabricated researchers, and deceptive reviewers, making it more difficult to discern the reliability of evidence used in public decisions (Ashkenazi & Browman, 2025; Naddaf, 2025; Reichmann & Wieser, 2022). Together, these authors emphasize that while access to scientific knowledge has increased, its reliability, understanding, and effective use by decision-makers are not automatically ensured.

Open science is an “umbrella term for a programme of reforming science by reforming scholarly communication, with a clearly normative thrust,” i.e., to make science better (Reichmann & Wieser, 2022, p. 2). Open science is often presented as a solution to problems in research and policy. This discourse assumes that removing access barriers will automatically increase the uptake of scientific knowledge in policy. However, Reichmann and Wieser (2022) found that access alone is far from sufficient for increasing use and recognition of research by decision-makers.

The tension between the growing abundance of relevant scientific knowledge and decision-makers’ limited recognition of that evidence is known as the evidence-policy gap (Reichmann & Wieser, 2022). Reichmann and Wieser argue that open science discourse is shaped by a linear view of knowledge transfer, in which making research openly available is assumed to boost its uptake in policy. However, the authors reject this linear model, arguing that it ignores the complexity of the science-policy interface and the ways in which knowledge actually enters policy: it overlooks timing, institutional procedures, informal networks, and competing priorities. Reichmann and Wieser’s (2022) review of the literature concludes that open science practices have not yet demonstrated success in bridging the evidence-policy gap.

Ashkenazi and Browman (2025) focused on evidence production and the impact of generative AI on manuscript writing. They explored how AI tools now draft literature reviews, methodology sections, and entire articles that appear rigorous and coherent. However, these articles often lack substance, showing diminished human inventiveness, weaker accountability for errors, and questionable authorship and scholarship (Ashkenazi & Browman, 2025). Ashkenazi and Browman argue that authentic human intellectual contributions, including responsibility for the work, are diminished as AI-generated content increases. When AI generates large portions of a manuscript through opaque processes that the approving authors cannot fully understand or reproduce, attaching their names to that manuscript ultimately undermines scholarly trust (Ashkenazi & Browman, 2025), even though those authors remain accountable and intellectually responsible for the work. Ashkenazi and Browman ultimately warn that heavy reliance on AI can turn scholarly writing into prompt-based work, eroding expertise, creativity, and authorship.

Naddaf (2025) adds to the discussion by exploring academic fraud, which is being magnified in the age of AI. Naddaf specifically highlighted the rise of paper mills and the growing number of fake authors and reviewers; these examples show how systems that rely on remote communication and good faith can be exploited. Naddaf began with a case of fabricated mathematicians and reviewers who manipulated peer review and published flawed work in reputable journals. Naddaf argues that traditional signals of expertise, such as authors’ names, affiliations, and institutional email addresses, have become less reliable, confounding the expectations that editors and readers have long placed in them.

To detect fraud and falsification, publishers have begun requiring institutional email addresses, Open Researcher and Contributor Identifiers (ORCIDs), and, in some cases, official documents such as passports or driver’s licenses. However, two recent reports from the International Association of Scientific, Technical and Medical Publishers (STM) caution that identity checks must allow for multiple solutions to avoid unfair exclusion. Stricter verification risks further marginalizing researchers without strong institutional ties, such as individuals in low- and middle-income countries or those pursuing non-traditional careers. Rigid identity barriers risk reinforcing existing inequalities in scholarly participation.

It is important to consider the limitations of the papers reviewed in this blog post. Reichmann and Wieser (2022) focused on health policy in their narrative review and did not offer new empirical research. They outlined valuable conceptual insights but presented few concrete or testable solutions, beyond synthesizing factors identified in the literature, for promoting the uptake of scientific knowledge by public decision-makers.

Ashkenazi and Browman (2025) took a similar approach in their editorial. Although they examined AI and declining scholarly values, they presented little systematic evidence on how researchers across fields actually use AI. Their editorial also focused on worst-case scenarios involving heavy AI use, without considering how modest, transparent uses of AI might support, rather than undermine, expertise and scholarship. Naddaf (2025) drew attention to dramatic cases of paper mills and fake identities, but, as Naddaf acknowledged, the true frequency and scale of such fraud remain uncertain. In addition, the trade-offs associated with identity safeguards were only partly explored.

Even with these limitations in mind, it is notable that the three articles converged on several key points about why trust feels more difficult to achieve in an era of accessibility and automation.

The first takeaway is that access to research literature is not a sufficient condition for better policymaking. While making publications and data available is necessary for evidence to influence policy, the use of data and information is also affected by other factors, such as the “quality of relationships and informants, resources and access to research, communication formats and policymakers’ research skills and the policy context and discrepancies in values and goals” (Reichmann & Wieser, 2022, p. 6). The second takeaway is that trust is undermined by two forces: generative AI undercuts traditional accountability mechanisms, while paper mills and fake identities exploit the publishing system’s reliance on good faith and remote communication (Ashkenazi & Browman, 2025; Naddaf, 2025).

The third takeaway is that authorship and expertise are being redefined. As AI is used to write content and to create fake authors and reviewers, traditional markers of credibility become less reliable, and policymakers and the public face greater difficulty in determining which evidence to trust (Ashkenazi & Browman, 2025; Naddaf, 2025). The final takeaway is that efforts to improve governance have had mixed results: tools like open science platforms, ORCID, and stricter identity checks can strengthen integrity, but they also risk reinforcing inequalities by favouring well-resourced institutions and established scholars (Naddaf, 2025; Reichmann & Wieser, 2022).

Going forward, the main goal should be to ensure responsible and accountable use of information, not just its automation and accessibility. The three papers reviewed for this blog post emphasized that strong policy decisions depend on evidence for which human accountability is kept intact and contextual factors are considered (Ashkenazi & Browman, 2025; Naddaf, 2025; Reichmann & Wieser, 2022). Accountability requires clearly identifiable individuals who take responsibility for their work, stay active in policy processes, avoid over-reliance on AI tools, and recognize that accessibility alone does not guarantee knowledge uptake (Ashkenazi & Browman, 2025; Naddaf, 2025; Reichmann & Wieser, 2022). As generating and obtaining knowledge becomes easier through open access initiatives, public trust will continue to depend on knowing who created the knowledge, how it was produced, and how it was used or ignored in public decision-making.


References

Ashkenazi, I., & Browman, H. I. (2025). What’s the point of generative artificial intelligence in science and scientific publishing? ICES Journal of Marine Science, 82(10), fsaf179. https://doi.org/10.1093/icesjms/fsaf179

Naddaf, M. (2025, October 23). The fake scientists infiltrating journals. Nature, 646(8086), 792–794. https://doi.org/10.1038/d41586-025-03341-9

Reichmann, S., & Wieser, B. (2022). Open science at the science-policy interface: Bringing in the evidence? Health Research Policy and Systems, 20, 70. https://doi.org/10.1186/s12961-022-00867-6

Authors: Anna Doyle and Marian Lawson

This blog post is part of a series of posts authored by students in the graduate course “Information in Public Policy and Decision Making” offered at Dalhousie University.

Tags: Information Use & Influence; Public Policy and Decision-Making; Science-Policy Interface; Scientific Communication; Student Submission
