Ask anybody with a vague interest in scientific research, and they’ve probably heard of journals like Nature and Science. These journals claim to publish some of the most cutting-edge research in the natural sciences, from genes involved in human cancer to evidence that the axis of the Moon’s rotation has shifted. And those are just from the last week in Nature.
These journals are home to some of the most highly cited papers, meaning papers whose findings are referenced in other papers, which often signals work that has had a large impact on its field. The quest to place papers in these so-called “high-impact” journals has come to dominate many scientists’ research.
For some, this leads to a conundrum. Applications for faculty positions, post-doctoral jobs, and tenure reviews often privilege articles published in such high-impact journals over all others. It’s not uncommon to hear that if you don’t have a Nature or Science article as an early-career researcher, your chances of landing a long-term academic position are slim to none. This might sound like scaremongering, but in a job market saturated with PhD graduates it can ring far too true.
And so, is it really a surprise that many scientists will push to get their research into these prestigious journals? Yet this push for “high impact” has started to come under fire.
Could it be that this obsession with prestige, and with the name of the journal, is actually feeding a culture of quantity over quality? Where the drive to publish in a high-profile journal leads to the publication of results that haven’t been sufficiently vetted? And where preconceptions of what constitutes “cutting-edge research” are systematically biased in favour of certain fields?
It’s not uncommon now to hear about another high-profile paper that has been retracted or amended after further investigation (following publication, of course) has poked holes in the story. The best-known recent case involved the retraction of two papers in Nature which claimed to have induced stem-cell creation via the application of mechanical stress. But this is an unusual case: it’s still rare for papers to be retracted over alleged outright fabrication of results.
No, perhaps more worrying is the trend towards publishing flashy new science in high-profile journals, and away from publishing studies that attempt to replicate new findings. It might seem natural that papers reporting novel results should receive higher status. But it’s increasingly apparent that many results, particularly in biology, cannot be replicated. And with few journals willing to publish replication studies, there’s little incentive for scientists to “waste” time and resources double-checking another lab’s results.
Maybe if the academic establishment placed less importance on ostensibly “high-impact” research, a notion that in and of itself seems increasingly outdated, we might start to see a shift towards high-quality research that is less dependent on current scientific trends. I’m of the opinion that reliance on the name of the journal as a measure of quality has had its day. Why not focus on the number of citations of the paper itself, rather than those of the journal it’s in, if some quantification of a researcher’s impact is needed? Moves have already been made in this direction, such as ResearchGate’s RG Score and Google Scholar’s citation metrics.
Scientific journals originated centuries before the Internet upended the means of distributing research findings. Being published in a smaller, less renowned journal no longer prevents your research from reaching its intended audience. With a quick search on Google or Web of Science, papers from journals ranging from Nature to the Canadian Journal of Zoology can reach anyone, anywhere in the world. And the number of times a paper is cited can now be tracked automatically, making it easier than ever to measure a paper’s impact directly.
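To see just how direct that measurement can be, here’s a minimal sketch in Python. It assumes the `requests` library and the public Crossref REST API, which reports incoming citations for a DOI in its `is-referenced-by-count` field; the DOI shown is only a placeholder for illustration.

```python
import requests

def citation_count(doi: str) -> int:
    """Fetch the number of works citing `doi`, as recorded by Crossref."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    resp.raise_for_status()
    record = resp.json()["message"]
    # Crossref exposes incoming citations as "is-referenced-by-count".
    return record.get("is-referenced-by-count", 0)

if __name__ == "__main__":
    doi = "10.1000/example-doi"  # placeholder; substitute any real DOI
    print(f"{doi}: cited {citation_count(doi)} times (per Crossref)")
```

Of course, citation counts differ between databases (Crossref, Web of Science, and Google Scholar each index different sources), so any single number is an estimate rather than a definitive tally.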
Let’s stop relying on outdated means of judging scholarly output. Enough with the blind acceptance of “high-impact journals” as the be-all and end-all of a researcher’s career. Why not let our research speak for itself?
Featured Image from Amy | Flickr