Hardly any topic in scientific publishing generates as much attention, debate and outrage as the Journal Impact Factor (JIF), calculated by Thomson Reuters. Although long recognized as inherently flawed for judging the quality of individual research articles or the contributions and performance of individual scientists, it is unfortunately still frequently used to evaluate researchers in hiring, promotion or funding decisions.
Understandably, this sad state of affairs continues to provoke a lot of outrage among people who find themselves passed over by major funding agencies because they cannot demonstrate that their work has ‘impact’. This particularly affects early-career researchers, who cannot afford the staff, specialized equipment and years of extensive experimentation required to publish in high-tier journals.
Flaws of the Journal Impact Factor
Even when assessing a journal’s impact, the JIF offers only a rough and often skewed view of research output. Citation counts are usually distributed very unevenly across the 2-year interval the JIF monitors. The metric can, for example, be greatly influenced by a small number of excessively cited papers, while the majority of papers in the journal receive far lower citation counts (as can be seen here for Nature, Science and PLOS ONE). Certain article types, such as reviews, generate proportionally more citations than primary articles and can therefore be used to artificially boost a journal’s impact.
Specific research areas also tend to have more impact than others and are more widely cited. Over the years – as new fields emerge and establish themselves or become saturated with data – these tendencies show up as declining JIFs for specialized journals with a more restricted scope. Depending on whether you focus on the 2-year or the 5-year JIF, you can also spot notable differences in journal performance. As nicely summarized in this editorial, novelty indeed takes time. Many important papers hardly generate any citations initially, but are rediscovered years after publication and suddenly transform a field. Dvorak’s paper establishing tumours as “wounds that do not heal” came out in 1986 and has, in the last decade, inspired a whole community of researchers to focus on the tumour micro-environment – a line of research his fellow scientists have rewarded with 3,395 citations so far.
And, finally, a surely not insignificant point of criticism is that the data used to calculate the JIF are not publicly accessible. How journals acquire and measure their ‘impact’ on the scientific community over the years remains entirely opaque.
Initiatives to improve research assessment
As there is so much blatantly wrong with the JIF, initiatives have been formed to end its ill-fated reign or, at least, to improve journal metrics in general. One of them, the San Francisco Declaration on Research Assessment (DORA), was founded by the American Society for Cell Biology together with a group of editors to improve the way research output is evaluated in terms of quality and impact. To date, their website lists 12,593 individual signers and 840 organizational signers who have declared their support for adopting a set of practices in research assessment.
DORA makes a number of recommendations, directed at different audiences; I will summarize only the most important ones below.
Funders and institutions
- Do not use the JIF to measure the quality of individual articles or researchers.
- Highlight that the scientific content of a paper is more important than publication metrics or journal identity.
- Consider the value and impact of all research outputs (including datasets and software) in addition to research publications.

Publishers

- Reduce the use of the JIF as a promotional tool, or at least present a variety of journal-based metrics.
- Make a range of article-level metrics available.
- Encourage responsible authorship practices and provide information about author contributions.
- Remove all reuse limitations on reference lists in research articles.
- Remove or reduce the constraints on the number of references; mandate citation of primary research rather than reviews, giving credit to the group(s) who first reported a finding.
Researchers

- In all decisions (when part of a committee for funding, reviews etc.), make assessments based on scientific content.
- Where possible, cite primary literature.
- Use a range of article metrics in personal/supporting statements.
- Challenge research assessment practices that rely inappropriately on the JIF.
In addition, they announced that they would in future also provide more information about aspects of the peer-review process for their papers, including key stages of the submission-to-publication workflow. Initial metrics, such as decision times, can already be found here.
Will this new set of guidelines finally enable us to retire the JIF forever?
That remains to be seen. Surveys by Nature indicate that most researchers still regard the JIF as very important and might not give it up readily. The same may apply to funders (no data are provided in this case). But I would like to think that if all the main stakeholders, researchers, publishers and institutions, collectively chose to adopt these new principles, funders would surely follow suit. Why not sign up for DORA today?