With the increasing popularity of academic blogging, the emergence of Twitter as a tool for promoting and sharing scholarly resources, and the proliferation of open access journals, there are more ways for researchers to share their discoveries, and more ways for interested parties to learn about and engage with those findings, than ever before. A scholar’s work could be published in an open-access journal, discovered by an interested reader who relays her find on Twitter, disseminated through research and professional communities via retweets, and discussed, critiqued, and expanded upon by others on platforms like Blogger or WordPress before it receives even a single citation in another journal. Yet, when it comes time to evaluate academics for promotion and tenure, the traditional metric of scholarly influence—citation counts in peer-reviewed journals—prevails. This divide between actual and measured influence has given rise to a movement arguing for the use of alternative metrics (or “altmetrics”) in determining academic promotions and tenure.
These alternative metrics take a variety of forms, as start-up companies develop and refine methodologies for measuring a publication’s impact that do not rely on traditional citation counts or the Journal Impact Factor of the publishing journal. Some altmetrics providers, such as ImpactStory, combine citation counts with a wide variety of other measures, from web analytics to social bookmarking presence to Facebook likes. Others, like Altmetric, a company quick on the draw to claim the generic title as its own, eschew citations entirely to focus solely on an article’s social media and online news presence. Some researchers even opt to present their own altmetric quantifications of impact: a recent article in the Chronicle of Higher Education (Howard, 2013) highlighted the tenure package prepared by University of Washington professor Steven B. Roberts. Roberts supplemented a traditional CV with statistics revealing “how many people viewed his laboratory’s blog posts, tweeted about his research group’s findings, viewed his data sets on a site called Figshare, downloaded slides of his presentations from SlideShare, and otherwise talked about his lab’s work on social-media platforms” (para. 2). This unconventional effort to “quantify online scientific outreach” helped Roberts earn tenure (Howard, 2013, para. 2). What these approaches to altmetrics share is an attempt to reach beyond traditional citation impact measures to capture the broader societal influence of a scholar’s work.
While proponents of altmetrics share a sense that existing measures of research impact are inadequate, they differ in the degree of their zeal. During a recent presentation at Dalhousie University titled “From Science Communication to Altmetrics,” Jean Liu, data curator and blog editor for Altmetric, was quick to stress that she views altmetrics not as an alternative to traditional citation counts, but as an “alternative to only citations.” A glance at ImpactStory’s description of its altmetric reveals a similar philosophy, as its formula counts numerous citation measures alongside social media and web analytic data. This conciliatory tone is nowhere to be found in Jason Priem, Dario Taraborelli, Paul Groth, and Cameron Neylon’s (2010) “Altmetrics: a manifesto,” which proclaims that the entire peer-review process is “slow, encourages conventionality, and fails to hold reviewers accountable” (para. 3) and anticipates that the existing peer-review process will eventually be supplanted entirely by an as-yet-unspecified system based on altmetrics. Regardless of which camp is correct in its assessments, it is clear that the proliferation of altmetrics will have a significant effect on how we measure research impact, and by extension on how academic tenure is awarded.
Yet, at the end of the day, the range of individuals who will be directly affected by changes to the process governing tenure and academic promotion is relatively narrow. The most significant result of altmetrics, from a broader societal standpoint, may well be a change in the incentive structure surrounding academic publishing and scholarly communication. I have argued elsewhere on this blog that advocates of evidence-based policy must acknowledge that the effective communication of science to the broader public is an essential component of policy success, as without buy-in from the public even the most scientifically sound policy can fail. One notable element of the altmetrics approach to assessing impact is that its data tend to be drawn from media readily available to the public, in stark contrast to the high-priced journals that dominate academia. As altmetrics proliferate, and alternative forms of scholarly communication become more common as a means of evaluating tenure and promotion applications, academic scientists will have a strong incentive to develop publicly accessible venues for their research. With early adopters like Steven Roberts leading the charge in establishing altmetrics as legitimate elements of a well-rounded curriculum vitae, those less inclined to push the envelope will begin to follow suit; and as increasingly tech-savvy generations come of age, they will have access to a wealth of scientific information, with explanation, analysis, and commentary from scientific experts. Ultimately, the most important movement in Web 2.0 researchership may not be “From Science Communication to Altmetrics” so much as “From Altmetrics to Science Communication.”
By: James Ross
Howard, J. (2013, June 3). Rise of “altmetrics” revives questions about how to measure impact of research. The Chronicle of Higher Education. Retrieved from http://chronicle.com/article/Rise-of-Altmetrics-Revives/139557/
Priem, J., Taraborelli, D., Groth, P., & Neylon, C. (2010). Altmetrics: A manifesto. Retrieved from http://altmetrics.org/manifesto/