Recently, I have written on this blog about the rise of altmetrics: alternative measures of research influence and impact that have the potential to fundamentally transform the academic promotion process and encourage unprecedented levels of scholarly and scientific communication. However, for all their promise, and for all their proponents’ lofty proclamations that altmetrics will “distill communities impact judgements algorithmically” and extirpate existing systems of peer review and journal publication (Priem, 2013, pp. 437-438), developers face numerous challenges as they try to build truly effective alternative measures of impact. Which data sources are best suited to measuring influence? Are altmetrics vulnerable to being gamed by ambitious and dishonest researchers? Does the ever-growing user base of social media platforms such as Twitter bias comparisons in favour of newer papers? If altmetrics are truly going to usher in a new age of research dissemination, these are just a few of the questions developers will have to answer.
Impact and influence are multidimensional constructs, and, in accordance with the law of requisite variety, we need a battery of metrics to capture the full range of effects attributable to a scholar’s thinking and research over time. –Blaise Cronin (2013)
The first challenge facing altmetrics is fundamental: if, as Blaise Cronin argues in his inimitable way in the quote above, a battery of metrics must be assembled to accurately assess impact, then it must be determined which palette of metrics will paint the best picture of an article’s influence. As a moment’s glance at the burgeoning field of altmetrics providers will tell you, consensus is a long way off: whereas Altmetric relies entirely on measurements of social and mainstream media presence, rival ImpactStory tabulates social media data alongside traditional citation counts, download rates, and even the past-its-prime social bookmarking service Delicious. This variety must inevitably limit the effects altmetrics can have on existing procedures for academic promotion. Imagine two chemists competing for the same tenure-track position: one produces Altmetric data demonstrating that he is the rising star in his specialty of xenochemistry (the made-up-for-the-purpose-of-this-post study of alien chemicals), while the other has an ImpactStory assessment in his promotion package showing that his xenochemistry publications are cited more than any other scientist’s. Neither of them is lying or misrepresenting his data: they are simply using different metrics assembled by different providers and, as a result, leaving the selection committee to research each provider’s methodology and debate which is the more reliable. This issue is not lost on altmetrics developers: Altmetric’s Jean Liu and Euan Adie have argued recently that “common standards are required” in the field, but admit that for the time being “developing these standards has taken a back seat to developing the actual tools themselves” (2013, p. 32). Until standards for altmetrics can be agreed upon at least across entire disciplines of research, altmetrics cannot achieve the full potential of their influence on the academy.
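The two-chemists problem can be made concrete with a toy calculation. The weights and numbers below are invented for illustration (no provider publishes its formula this way): the same two candidates, scored over the same underlying indicators, come out in opposite order depending on which indicators a provider chooses to weight.

```python
def composite_score(metrics, weights):
    """Weighted sum over whichever indicators a provider tracks;
    indicators a provider ignores simply carry weight 0."""
    return sum(weights.get(name, 0) * value for name, value in metrics.items())

# Hypothetical candidates: one strong on social media, one on citations.
chemist_a = {"tweets": 900, "news_stories": 12, "citations": 40}
chemist_b = {"tweets": 150, "news_stories": 2, "citations": 220}

# A provider weighting social and mainstream media favours chemist A...
media_weights = {"tweets": 1.0, "news_stories": 50.0}
# ...while a citation-centred provider favours chemist B.
citation_weights = {"citations": 5.0, "tweets": 0.1}

assert composite_score(chemist_a, media_weights) > composite_score(chemist_b, media_weights)
assert composite_score(chemist_a, citation_weights) < composite_score(chemist_b, citation_weights)
```

Both rankings are internally consistent, which is exactly why a selection committee faced with reports from two providers has no neutral way to adjudicate between them without agreed-upon standards.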
During the Q&A portion of a recent talk at Dalhousie University, titled “From Science Communication to Altmetrics,” one of the first questions Altmetric data curator Jean Liu was asked was whether or not altmetrics were vulnerable to gaming. The concern is that, with the measures altmetrics providers utilize being public knowledge and the tools that generate these metrics (Twitter accounts, Mendeley databases, etc.) being publicly available, unscrupulous academic climbers may use fake accounts and other methods to enhance their work’s performance in altmetric appraisals. In fact, Wired’s Eric Steuer recently reported on a range of companies offering to “create fake users and even pay real account holders” to follow and “like” a customer’s social media content, artificially inflating the customer’s perceived social media impact (2013, para. 1). This risk is not lost on altmetrics providers: at the 2012 ACM Web Science Conference, Jennifer Lin of the Public Library of Science, developers of the Article-Level Metrics (ALM) altmetric, gave a presentation on emergent anti-gaming mechanisms for altmetrics. PLoS is currently developing an auditing tool for ALM called DataTrust, which “flags articles that experience activity beyond a set of parameters, defined by [PLoS] as incongruous behavior, and reports [the activity]” (Lin, 2012, para. 12). Lin emphasizes that human oversight is required even when using an application like DataTrust, and that ultimately a community norm where “willful manipulation of altmetrics comes to be treated as a professional offense and reported to respective institutions for further action” will have to take hold in academia (2012, para. 14).
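DataTrust’s actual parameters are not public, so the sketch below is only a guess at the kind of check Lin describes: flag an article whose activity on a given day deviates sharply from its own historical baseline, and leave the final judgment to a human reviewer.

```python
from statistics import mean, stdev

def flag_incongruous(daily_history, today_count, z_threshold=3.0):
    """Flag an article whose activity today is far outside its own
    historical baseline (a simple z-score test; threshold is invented)."""
    if len(daily_history) < 2:
        return False  # too little data to establish a baseline
    baseline = mean(daily_history)
    spread = stdev(daily_history)
    if spread == 0:
        return today_count > baseline  # any spike above a flat baseline
    return (today_count - baseline) / spread > z_threshold

# A paper with steady daily mention counts, then a sudden spike.
assert not flag_incongruous([4, 5, 6, 5, 4], 6)   # normal variation
assert flag_incongruous([4, 5, 6, 5, 4], 60)      # flagged for review
```

A real system would need far more nuance (legitimate virality looks like a spike too), which is precisely why Lin insists on human oversight rather than automatic penalties.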
Once again, the diffuse nature of the altmetrics field militates against a solution: numerous providers offer services to researchers, each drawing on a wide range of sources, which means many more potential approaches to gaming and a much greater demand on monitoring systems than traditional measures such as journal impact factor face. Particularly in these early years of altmetrics, careful attention will have to be paid to the development of effective anti-gaming measures.
Yet another challenge identified during Liu’s Q&A was the matter of stability: many of the social media tools relied on by various altmetrics providers are emergent technologies, liable to exponential short-term growth in their user bases. A comparison of Twitter presence, to choose a popular example, between two papers published only a year or two apart could be compromised if the second paper’s publication was greeted by an audience up to 40% larger. Unfortunately, Liu was unable to identify exactly what methods Altmetric’s developers use to deal with this problem, observing only that she knows they are aware of it and assumes they take measures to address it. Most likely, altmetrics that incorporate data from relatively new Web 2.0 tools will require constant tweaking as adoption becomes more widespread. While this constant micro-adjustment will surely level off in tandem with the growth rate of the various data sources’ user bases, in the short term there is a risk that altmetrics will be too unstable to provide an accurate comparison between the impact of two researchers’ bodies of work.
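One plausible correction, which I am sketching here as an assumption rather than anything Altmetric has confirmed doing, is to compare audience shares instead of raw counts: scale each paper’s mentions by the platform’s user base at publication time, expressed against a common reference.

```python
def normalized_mentions(mentions, users_at_publication, reference_users):
    """Rescale a raw mention count to a common reference user base,
    so papers published in different years compare on audience share
    rather than raw counts."""
    audience_share = mentions / users_at_publication
    return audience_share * reference_users

# Paper A: 100 tweets when the platform had 200M users.
# Paper B: 130 tweets a year later, to a 40% larger audience (280M users).
reference = 200_000_000
paper_a = normalized_mentions(100, 200_000_000, reference)
paper_b = normalized_mentions(130, 280_000_000, reference)

assert paper_a == 100.0
assert round(paper_b, 1) == 92.9  # B's raw lead disappears after adjustment
```

Paper B’s 30% raw advantage turns into a deficit once the larger audience is accounted for, which illustrates why unadjusted cross-year comparisons can mislead.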
Three serious issues face the developers of altmetrics, and by no means do they constitute an exhaustive list. Proponents of altmetrics can reasonably assert that many of these criticisms—particularly vulnerability to gaming and the validity of data sources for measuring influence—can be levelled at existing measures of impact. But the fact that one metric is entrenched as the standard of the academy while the other is a would-be disruptive upstart means, fairly or not, that any altmetric must withstand a greater degree of scrutiny if it is to gain wide enough acceptance to become a standard component of impact measurement. However, acknowledging these barriers should not diminish the excitement over altmetrics. There is a lot of work to be done by altmetrics developers, but this is surely true of any disruptive technology: the recurring theme of the problems outlined above is that the altmetrics movement, in its nascence, is both too diffuse and too reliant on emerging technologies to achieve its full potential. Besides the hard work of fine-tuning metrics and developing effective anti-gaming mechanisms, the key to overcoming these challenges may simply be time. As some combination of standardization, consolidation, and specialization inevitably occurs in the altmetrics field, the collective expertise of current start-ups will likely become concentrated in fewer, larger, more influential firms, potentially revolutionizing science communication, academic promotions, and our understanding of influence in the process.
Cronin, B. (2013). Metrics à la mode. Journal of the American Society for Information Science and Technology, 64(6), 1091. doi: 10.1002/asi.22989
Lin, J. (2012). A case-study in anti-gaming mechanisms for altmetrics: PLoS ALMs and DataTrust. Retrieved from http://altmetrics.org/altmetrics12/lin/
Liu, J., & Adie, E. (2013). Five challenges in altmetrics: A toolmaker’s perspective. Bulletin of the Association for Information Science and Technology, 39(4), 31-34.
Priem, J. (2013). Beyond the paper. Nature, 495, 437-440.
Steuer, E. (2013). How to buy friends and influence people. Wired. Retrieved from
Author: James D. Ross