32 Are researchers writing more, and is more better?

The idiom ‘publish or perish’ suggests that researchers will increase their output in order to obtain positions and promotions. And if a researcher’s productivity is measured by their publication output, shouldn’t we all be writing more papers? Certainly, it appears that more papers are being published (see Figure 32.1). The total number of scholarly articles in existence was estimated at 50 million in 2009, with more than half of these published since 1984 (Jinha, 2010).

Similarly, if we are all writing more, then wouldn’t some people start publishing two (or more) papers when one would be adequate? This idea of ‘salami-slicing’ to inflate outputs would be an understandable strategy if researchers were all trying to increase their output. Alternatively, the names of authors might be added to papers to which they made no significant contribution, via guest authorship or hyperauthorship (see Cronin, 2001, for an interesting historical perspective, and Part I).

FIGURE 32.1: The growth in the number of papers published in Life Sciences over time. The number of papers published (blue line) compared to a linear growth rate (black line) shows that the increase in Life Sciences is exponential, not linear. The data come from www.scopus.com.

A study by Fanelli and Larivière (2016) offered a new take on the above questions by asking whether researchers are actually writing more papers now than they did 100 years ago. They used the Web of Science to identify unique authors (more than half a million of them) and tested whether authors’ total number of publications increased with the year of their first publication. Fanelli and Larivière’s (2016) trend line for biology is very stable at around 5.5 publications, whether you started publishing in 1900 or 2000 (note that both earth science and chemistry do increase dramatically).

But it is possible that these figures are explained by the great change in publishing culture in the biological sciences. One hundred years ago, it was very unlikely that any postgraduate students would publish articles in peer-reviewed journals. It was also acceptable for advisors to take the thesis work of their students and write it up in monographs. This has certainly changed: the ranks of authors have swelled considerably, and many more authors are now likely to appear on only a single publication in which they participated (see Measey, 2011). I interpret this as the biological sciences becoming more democratic, with more of the people who contribute to the work receiving the credit.

There is a claim that despite the exponential growth in the literature, the number of ideas increases only linearly over the same period (Milojević, 2015; Fortunato et al., 2018). Although this is an approximation (based on counting unique phrases in titles), is it evidence of scientists finding the ‘least publishable unit’? Personally, I am optimistic and think that more evidence is accumulating on existing ideas thanks to the new publishing arena. My optimism is grounded in conducting reviews, where the amount of evidence published on a phenomenon does not stay static over time but increases. I think that more science is being conducted than ever before, and that a greater proportion of the scientific project is being published. Although this does throw up increasing difficulties for editors and reviewers, we need to be careful not to impose restrictions on what can be published. Nissen et al. (2016) show us how failing to publish negative results leads to the acceptance of faulty ideas.

32.0.1 At what rate is the literature increasing?

Using several databases (Web of Science, Scopus, Dimensions and Microsoft Academic), going back to the beginning of their collections at the start of scientific journals in the mid-1600s, Bornmann, Mutz & Haunschild (2021) calculated the growth rate of the scientific literature to run at 4.02% per year, such that the literature doubles every 16.8 years. This means that twice as much was published in 2020 as in 2003.

Although the early period of scientific publishing was notably slower than today, it is since the mid-1940s (following the end of World War II) that science has seen exponential growth in productivity, with annual growth of 5.1% and a doubling time of 13.8 years (Bornmann, Mutz & Haunschild, 2021).

The data (from Scopus) in Figure 32.1 suggest that in the Life Sciences the doubling time is as little as 10 years: twice as much was published in 2020 as in 2010. The previous doubling took 20 years, between 1990 and 2010. This exponential rate of increase appears higher than in other subjects.
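
These doubling times follow directly from compound growth: at an annual growth rate r, output doubles every ln(2)/ln(1 + r) years. A minimal sketch of the arithmetic (in Python; the function names are mine, and constant exponential growth is assumed):

```python
import math

def doubling_time(annual_growth_rate: float) -> float:
    """Years for output to double at a constant annual growth rate."""
    return math.log(2) / math.log(1 + annual_growth_rate)

def implied_growth_rate(doubling_years: float) -> float:
    """Annual growth rate implied by a given doubling time."""
    return 2 ** (1 / doubling_years) - 1

# Post-war annual growth of 5.1% (Bornmann, Mutz & Haunschild, 2021):
print(f"{doubling_time(0.051):.1f} years")         # ~13.9, close to the cited 13.8
# A 10-year doubling time (Life Sciences, Figure 32.1) implies:
print(f"{implied_growth_rate(10):.1%} per year")   # ~7.2% per year
```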

32.0.2 If more is being published, will Impact Factors increase?

Yes, there is inflation of Impact Factors. Even if a journal’s relative citation performance remains constant, its Impact Factor should increase annually at a mean rate of 2.6%, according to a study that considered the IF of all journals in the then Journal Citation Reports (JCR, now part of Web of Science) from 1994 to 2005 (Althouse et al., 2009). For the Biological Sciences, this inflation rate varies from 0.882 (Agriculture) through 1.55 (Ecology and Evolution) to 4.763 (Molecular and Cell Biology). In another study, looking at 70 ecological journals from 1998 to 2007, the inflation rate was found to be 0.23, with half of the journals failing to keep pace (Neff & Olden, 2010), showing that even within a field Impact Factor inflation can vary substantially, to the point of deflation (-0.05 to 0.70). This inflation is principally driven by the growing number of citations per article over time, but an important aspect is the number of cited journals covered in the index (much greater in cell biology than in agriculture; Althouse et al., 2009; Neff & Olden, 2010). Worryingly, the trends suggest that it is the top Impact Factor journals, and those published for profit, that are inflating above the mean rate, further differentiating themselves from other journals (Wilson, 2007; Neff & Olden, 2010). This also gives you insight into why some journals restrict the number of papers cited, and why what seems reasonable to you may be excessive to senior scientists in positions of power.
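
To see what that mean rate implies, compound it over the 1994–2005 window of the Althouse et al. (2009) study; a back-of-the-envelope sketch (the starting IF of 2.0 and the worked example are mine, not taken from the paper):

```python
# Compound the mean Impact Factor inflation of 2.6% per year
# (Althouse et al., 2009) over their 1994-2005 study window.
mean_inflation = 0.026
years = 2005 - 1994                      # 11 years
factor = (1 + mean_inflation) ** years   # ~1.33
print(f"A 1994 IF of 2.0 inflates to {2.0 * factor:.2f} by 2005")
# ~2.65: about a third higher, with no change in relative citation performance
```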

32.1 Are some authors unfeasibly prolific?

How many articles could you publish in a year before you could be described as ‘unfeasibly prolific’? This question was posed in a study that examined prolific authors in four fields of medicine (Wager, Singhvi & Kleinert, 2015). This publication piqued my interest because the authors decided that researchers with more than 25 publications in a year were unfeasibly prolific, as this would be the equivalent of “>1 publication per 10 working days”. Their angle was to suggest that publication fraud was likely, and that funders should be more circumspect when accepting researchers’ productivity as a metric. Looking back through the peer review of this article (which is a great aspect of many PeerJ articles), I’m astounded that only one reviewer questioned the premise that it’s infeasible to author that number of papers in a year.

I have published >25 papers and book chapters in a year, and I know other people who do this regularly. To me, there is no question that (a) it is possible and (b) they really are the authors, with no question of fraud. For a start, the idea that prolific authors constrain their activity to “working days” is naïve. Most will be working through a normal weekend, and in the early morning and late evening, especially in China (see Barnett, Mewburn & Schroter, 2019). A hallmark of a prolific author is emails sent early in the morning and/or late at night. These give you an indication of their working hours, and of how they struggle to keep up with correspondence on top of writing papers.

Authorship of a publication is often the result of several years of work, and it can come at many different levels of investment (see Part I). Thus, from my perspective, authoring a publication reflects the full span of activity: the initial concept for the work, raising the money, conducting the fieldwork or experiment, analysing the data and then writing it up (plus the subsequent submission and peer-review time; see Part I). Often, the publications led by students are the culmination, many years later, of a long investment of both time and money. And in the Biological Sciences, good study systems keep giving.

32.2 Salami-slicing

‘Salami-slicing’ means different things to different people. While many people refer to salami-slicing as the splitting of research results into as many papers as possible, or slicing research into ‘least publishable units’, others use it to mean publishing the same paper more than once in different journals. The latter should be considered self-plagiarism at best and fraud at worst, and if you find examples of dual publication, these should result in retractions. Instances don’t need to be exact copies. I edited a submission where one of the referees alerted me to the fact that the same data, with a very similar question, had already been published two years previously. In this instance, I passed the submission to the ethics panel of the journal, who rejected it and flagged the authors for scrutiny in the case of future submissions. Note that having a conference abstract printed in a journal does not prevent you from publishing a full paper on the work. I would maintain, however, that you should rewrite the abstract to prevent self-plagiarism (Measey, 2021).

During the production of any dataset, you are likely to find that you can answer more questions than you originally set out to ask when you first proposed the research (i.e. at preregistration; see Part I). The question you will be left with is whether you should add these post hoc questions (questions that arise after the study) to the manuscripts that you planned when you proposed the research, or whether they should be published separately, clearly identified as post hoc questions. The realities of publishing in scientific journals mean that in many instances you will be restricted by the number of words a journal will accept. This means that for certain outputs it will not be possible to ask additional post hoc questions, or potentially even all the questions you wanted to report from the preregistered plan.

There is nothing wrong with writing papers based purely on findings that you came across during the study: post hoc questions. There is a clear role in scientific publishing for natural history observations. But any publication (or part of a publication) that results must indicate that it stems from a post hoc study. To my way of thinking, it would be more useful if journals had separate sections for such studies, with other publications stemming only from those that can show a preregistered plan. This would clearly improve transparency in publishing, and avoid accusations of p-hacking or HARKing (hypothesising after the results are known).

Although ‘salami-slicing’ is the preoccupation of many authors and gatekeepers, there is evidence that more experimental data and analyses are now required to meet the standards of modern publishing (in some journals) than 30 years ago (Vale, 2015). As we move toward a more open and equitable way of sharing our data, we should see an increase in the total amount of evidence that can be put toward answering more questions. However, metric-based assessments will always tempt some individuals to game the system (Chapman et al., 2019), so we need to remain aware of potential ‘salami-slicing’.

At what point does the separation of research questions into different papers become ‘salami-slicing’? There is no simple answer to this question, and editors are likely to disagree (Tolsgaard et al., 2019). However, there are ways in which you, as an author, can make sure that your work is transparent, and therefore that you are not accused of ‘salami-slicing’. First, preregister your research plan. Second, preprint any unpublished papers that are referenced in your submission. There are also guidelines from COPE on the ‘Systematic manipulation of the publication process’ (COPE, 2018c). Last, be transparent when you publish post hoc research.

When a manuscript cites another very similar study that is not available to reviewers or editors, a ‘salami-slicing’ red flag should be raised. Obviously, when you produce a number of outputs from a research project, they are likely to be linked and therefore to cite each other. However, when these are not available to reviewers and editors (as preprints, or through preregistration of the questions), authors should expect to be asked for these manuscripts to demonstrate that they are not salami-slicing. Perhaps worse, however, is when authors deliberately hide any citation of another very similar work. In the end, we have to rely on the integrity of researchers not to be unethical or dishonest.

32.3 Is writing a lot of papers a good strategy?

This is a long-standing question, and one that you may find yourself asking at some point early in your career. I’d suggest that the answer has more to do with the sort of person you are, or the lab culture you experience, than with any strategy you might consciously decide on. If you tend toward perfectionism, this will likely result in fewer papers that (I hope) you’d consider to be of high quality. If, on the other hand, your desire is to finish projects and move on, you are more likely to tend toward more papers. It is clear that the current climate pushes towards the latter strategy, with increasing numbers of early-career researchers bewildered by the pressure to increase their publication metrics (Helmer, Blumenthal & Paschen, 2020). But what should you do?

Given that the ‘best’ personality type lies somewhere in the middle, you can decide for yourself whether you identify with one side more than the other. But which is the better strategy? Vincent Larivière and Rodrigo Costas (2016) tried to answer this question by considering how many papers unique authors wrote, and how this related to their share of papers in the top 1% of cited papers. Their result showed clearly that for researchers in the life sciences, writing a lot of papers was a good strategy if you started back in the 1980s. However, for those starting after 2009, the trend was reversed, with authors writing more papers less likely to have a ‘smash hit’ paper (in the top 1% of cited papers). Maybe the time scale was too short to know. After all, if you started publishing in 2009 and had >20 papers by 2013, then you have been incredibly (but not unfeasibly) prolific. Other studies continue to show that in the life sciences, writing more papers still provides returns in terms of citations: the more papers you author, the higher the chance of having a highly cited paper (Sandström & van den Besselaar, 2016).

One aspect not considered by Larivière and Costas (2016) is that becoming known as a researcher who finishes work (resulting in a publication) is likely to make you more attractive to collaborators. Thus, publishing work is likely to get you invited to participate in more work. Obviously, quality plays a part in invitations to collaborative work too, pulling the argument back to the centre ground. Moreover, many faculty in North America (and particularly female faculty) appear to believe that a greater number of publications per year is desirable for review, promotion and tenure (Niles et al., 2020). Whether or not this is the case within their institutions, the fact that faculty consider it desirable may be part of the trend to publish more.

There are other scenarios in which you might be encouraged to write more. In Denmark, for example, research funding is apportioned to universities based on the number of outputs their researchers generate, in a point system where higher-ranked journals earn more points. This resulted in researchers in the life sciences changing their publication strategy, with a notable increase in publications in the highest points bracket following the change (Deutz et al., 2021).

You may find yourself becoming preoccupied with which strategy is best for you, not because you want to, but because your institution is relying on you to pull your weight in its assessment exercise. University rankings are now very important, and big universities like to be ranked highly for research, which depends (in part) on the quantity and quality of their output.

32.3.1 Does all of this tell us that publishing more is bad?

Although I would agree that continuing the exponential trend in the number of publications is unsustainable, more publications are not bad per se. Instead, we need to be more careful about what we do and don’t publish, as well as about the reasons why we publish. As long as we are conducting science and communicating our results, there should be no problem. Our problems arise with publication bias, especially the failure to publish negative results (Nissen et al., 2016), and with chasing ever higher Impact Factors.

The direct result of a system driven by Impact Factors and author publication metrics is that we will have a generation of scientists at the top institutions who are trained not to conduct the best science, but to generate publications that can be sold to the best journals (see also Gross & Bergstrom, 2021). We should be deeply suspicious of any claimed link between top journals and quality (Brembs, Button & Munafò, 2013). Indeed, what we see increasingly is that the potential rewards of publishing in top Impact Factor journals lead not only to bad science, but increasingly to deliberate fraud. Continuing along this path threatens to undermine the entire scientific project, and leaves science and scientists as just another stakeholder in a system ruled by economic markets and their promotion of the fashion of the day (Casadevall & Fang, 2012; Brembs, Button & Munafò, 2013).