There is plenty written about metrics, both comparing one type with another and also trying to decide whether they are any use or not. The function of this article is not to endorse any particular metric, but instead to give you some basics so that you can understand conversations about them.
What is the metric for?
If you publish a piece, your wish is for that piece to change practice or alter the world’s understanding of your area in some way. A metric is intended, therefore, to tell you something about how far your message may have spread. The bodies that fund research will be keen that the research is read and has an impact, which brings us to the first of the three major metrics we will discuss here.
The first is the impact factor (IF), a journal-level metric, meaning that it assesses the whole journal and not a specific article. It is described in a number of papers1 but, in short, it is the ratio of citations received in one year to articles published in the preceding period, divided by the number of citable articles in that period (2 years for the standard IF). So, imagine a journal that publishes 50 citable papers across 2013 and 2014. If, during 2015, those papers are cited 100 times, then the 2015 IF is 2. A moment or two of thought about this reveals many of the possible pitfalls: a paper published at the start of a year might contribute more to the IF; how do you define ‘citable’; could editors ask that you cite their own journal; and so on. Additionally, certain articles, for example highly cited review articles, can distort the IF away from the concept of ‘contribution to science’. Thomson Reuters, the company that compiles the IF and a number of other citation reports, has a number of ways to look for such gaming. Its other reports look at impact from different angles, but it remains true that your supervisor, your university and your grant-awarding body will be very interested in the IF of the journals your work is published in.
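The arithmetic above can be sketched in a few lines. The numbers are the hypothetical ones from the example, not real journal data:

```python
def impact_factor(citations_this_year, citable_items_prev_two_years):
    """2-year impact factor: citations received in one year to articles
    published in the previous two years, divided by the number of
    citable articles from those two years."""
    return citations_this_year / citable_items_prev_two_years

# 50 citable papers across 2013-2014, cited 100 times during 2015:
print(impact_factor(100, 50))  # 2.0
```

The pitfalls listed above live in the inputs, not the division: what counts as a ‘citable’ item, and which citations are counted, both move the result.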
The H-index is best thought of as an author's own IF. It looks at the publications of a particular author, ranks them in order of number of citations and assigns a number, the H-index, which is the largest h such that the author has h papers with at least h citations each. Calculating it this way limits the ability of a single highly cited paper to distort the score. There have been suggestions that the H-index be directly related to promotion in academia.
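A minimal sketch of that ranking rule, assuming nothing more than a list of per-paper citation counts:

```python
def h_index(citation_counts):
    """Largest h such that the author has h papers with
    at least h citations each."""
    ranked = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # this paper still clears the bar
        else:
            break
    return h

# Papers cited 10, 8, 5, 4 and 3 times give h = 4: four papers each
# have at least 4 citations, but there are not five with at least 5.
print(h_index([10, 8, 5, 4, 3]))  # 4
```

Note how one blockbuster paper barely moves the score: an author with citations [1000, 2, 1] still has an H-index of only 2.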
The Altmetric is an article-level metric, which means that it assesses an individual paper rather than the whole of a journal. It attempts to gauge how much discussion there is about a particular paper. The company that runs it monitors social networking sites, as well as the academic journals, and assigns a score which goes up as the paper is discussed, linked to and cited. There are obvious appeals to this. The author can watch their Altmetric score climb as they themselves create a social media buzz around the paper, or if the press are interested in a particular article. The Altmetric also gives you ‘compare and contrast’ information, so it will tell you that your paper has a higher score than, say, 50% of the other comparable papers in the journal. Certainly, you can imagine an individual feeling much more engaged and invigorated by a conversation that goes “My paper’s Altmetric has just passed 100” than by “The journal my paper was printed in has just had a rise in its impact factor.” There are many criticisms, though, not least that the social and media buzz might not be related to the scientific value of the paper. The ‘Skinny Jeans’ paper2 had, when accessed on 19 August 2015, a huge Altmetric of 610 just 2 months after being published online only, and not yet in print; to say that this was a scientifically groundbreaking paper would be generous. To put this into context, I looked, in August 2015, at the Altmetric for the 11 research papers published in the BMJ in August 2014. The Altmetrics ranged from 2 to 480: three were in single figures and only four made it above 100. Note that by citing the Wai paper, I have added to its Altmetric and to the IF of the journal that published it.
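The scoring algorithm itself is proprietary, but conceptually it is a weighted count of mentions, with some sources counting for more than others. A purely illustrative sketch, with invented source names and weights that are not Altmetric's actual values:

```python
# Hypothetical weights for illustration only; the real Altmetric
# weighting is set by the company and is more elaborate than this.
MENTION_WEIGHTS = {"news": 8.0, "blog": 5.0, "tweet": 1.0, "facebook": 0.25}

def attention_score(mentions):
    """Toy attention score: weighted sum of mention counts by source."""
    return sum(MENTION_WEIGHTS[source] * count
               for source, count in mentions.items())

# Two news stories, one blog post and thirty tweets:
print(attention_score({"news": 2, "blog": 1, "tweet": 30}))  # 51.0
```

Even this toy version shows the criticism above: thirty tweets from a publicity push outweigh a blog post engaging seriously with the science.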
There are a number of other metrics which attempt to measure similar things in different ways. For example, the Eigenfactor assesses the popularity of a journal by looking at the cumulative popularity and authority of the journals linking to it. With ‘journal pays’ models (Altmetric, for example, now charges journals to display these data), it is likely that alternatives will emerge (table 1).
This article has avoided endorsing IF or Altmetric in favour of describing both, so that authors can understand a little more what each is saying. To paraphrase the statistician George E P Box, ‘All models are wrong, but some are useful’. Our duty, as authors, editors and readers, is to understand a little about each of the models and decide for ourselves whether we find their message useful.
I would like to thank Dr Catherine Otto, MD, Editor in Chief of the journal Heart; she and I spoke at an editors’ retreat in a debate on IF versus Altmetric. She did most of the work, and she has graciously let me lift freely from her slides.
Competing interests None declared.
Provenance and peer review Commissioned; externally peer reviewed.