Chapter 31 When should you correct or retract your paper?

Once your Version of Record is published, any changes that you may wish to make will result in a separate correction publication. In extreme cases, you may even need to retract the paper. The different options available in many journals to correct your paper are shown in Table 31.1. If you are in doubt about exactly what kind of correction you need, read the guidelines from the Council of Science Editors and the Committee on Publication Ethics (COPE) (Barbour et al., 2009).

TABLE 31.1: Journals have a number of different ways in which to correct the published Version of Record. You may not need a full retraction to set your study straight. Source: Council of Science Editors.
Action | Example | Conclusions impacted | Issued by
Corrigendum / erratum / correction | Important typos; incorrect figure legends or tables; author name or address issues | No | Author
Expression of concern | Data appear unreliable; misconduct suspected | Undetermined | Editor (perhaps through information received)
Partial retraction | One aspect of the study is corrupted; inappropriate analysis | Overall findings remain | Author or Editor
Full retraction | Majority of the study is corrupted; evidence of misconduct; work is invalidated | Yes | Author or Editor

31.1 Making a correction to a published paper

It is very unlikely that you will ever be in a position where you need to think about retracting your paper. If you notice a mistake, especially one that changes how the results are presented, then you should approach the editor about publishing a correction (also termed a corrigendum or erratum - plural errata).

Because a corrigendum is a change to the VoR, it will result in what is in effect an additional separate publication (with its own DOI). This will succinctly point out the error and how this should be rectified. In the journal, this will only be a few lines. In addition, on the site of the original publication, the journal will place a notice that there has been a correction, and provide a permalink to the correction. However, it results in a lot of extra administrative work for everyone, so it’s best avoided if at all possible. This is another reason why it’s worth taking your time when checking your proofs.

Another way to avoid having to make a corrigendum is to ensure that all co-authors are happy with the original submission, the resubmission and the proofs (i.e. get it right before you submit it).

Mistakes do occur, and it is likely to be some time after the publication that you might find that there was an error. Errors such as typos, or mistakes in the introduction or discussion are unlikely to warrant a corrigendum. However, if the error is in the way that the results were calculated, or causes a change in the significance, then you should consider making a corrigendum. If you feel that it is necessary, do consult your co-authors before taking it to the editor. It is well worth having someone else check your new calculation, as the last thing you want is a compounded error.

If the mistake is systemic, and changes all of the results, their significance and/or the validity of the conclusion, then you need to consider a full retraction.

31.2 Expression of concern

An expression of concern lies between a correction and a retraction. An expression of concern can precede a retraction, but suggests that the editors are seriously worried about something being wrong with the publication. For example, a paper published in Proceedings of the National Academy of Sciences that used an unusual mutant strain of Chlamydomonas (a genus of green algae) was placed under an official ‘Editorial Expression of Concern’ when the editors learned that the authors would not share their strain with any other researchers (Berenbaum, 2021). Clearly, this notice could be removed if, for example, the authors agreed to share their strain and their findings were replicated. However, if they continue to refuse, the editors could also fully retract the paper.

This is an unusual case, but shows how the editors are prepared to take their journal’s policy seriously, as a requirement of submission is that authors must be prepared to share any unique reagents described in their papers, or declare restrictions up front. Failure of these authors to follow through with the transparency declared on submission could mean that these authors have their paper retracted. Finding out this sort of information takes time, and so there is often a lag in the retraction window (Figure 31.1).

31.3 A retraction is unusual

“A person who has committed a mistake, and doesn’t correct it, is committing another mistake.”

— Confucius

A retraction is when your paper is effectively ‘unpublished.’ This happens at the discretion of the editor (and often the entire editorial board), and is a very serious issue. Retractions are rare, and the reasons for them vary: a piece of equipment may later be found to have malfunctioned or been calibrated incorrectly (Anon, 2018), or a cell line may have been misidentified. Retractions can also happen through no fault of the authors. For example, Toro et al. (2019) had their manuscript rejected by Journal of Biosciences, but due to an administrative error the article was printed in an issue, and later retracted. However, the top reason for retraction is now misconduct (Fang, Steen & Casadevall, 2012; Brainard & You, 2018), which is hardly surprising given the perverse incentives that many scientists have to publish in journals with top impact factors. Retractions also appear to be more common in journals with higher impact factors (Brembs, Button & Munafò, 2013), and this should not surprise us either, as these journals are prone to publishing studies with confirmation bias (Forstmeier, Wagenmakers & Parker, 2017; Measey, 2021).

Although retractions are rare in the life sciences (0.06% of all papers published between 1990 and 2020), they have been on the increase over the last 30 years: from around 0.5 to over 15 papers per 10 000 published (see Figure 31.1). The mean time between notification of problems with a paper and the issuing of a correction or retraction is nearly 2 years (Grey, Avenell & Bolland, 2021), but this masks a bimodal distribution in retraction times. The first hump, within months of the original publication date, comes from self-correcting authors or clerical errors by journals; the second, usually associated with fraud, comes after several years of investigation by institutions, often with added legal frustrations.

FIGURE 31.1: The growth of retractions in life sciences journals over time. The three lines show when the original paper was published (blue line), when a correction, expression of concern or retraction was made (black line), and articles that were finally retracted (green line); after 2008, retracted articles are only a part of those with other issues. While the number of papers published that are later retracted appears to turn downward from 2014, this may simply represent the lag before they are retracted. The data come from the Retraction Watch database, selecting only records from Basic Life Sciences and Environment, and are normalised by dividing the total number of publications in this area for the year (taken from SCOPUS) by 10 000, and multiplying this by the numbers of Original or Retracted papers.

Retractions showed a steep uptick in 2011, when a number of laboratories made multiple retractions (see Fang & Casadevall, 2011; Brainard & You, 2018). A single publisher was responsible for pulling a great many of the retracted papers in 2011, and this spike in retractions isn’t seen in the life sciences (Oransky, 2011; Brainard & You, 2018). What is clear from Figure 31.1 is that the rising trend in retractions (black line) appears to have been unaffected by the 2011 spike in other subject areas. Claims that retractions are levelling off (Brainard & You, 2018) are not matched by the data.

31.3.1 How do you know if a paper you cited is later retracted?

Citations to retracted papers are not uncommon, and these citations are often positive even when the retraction was made for misconduct (Bar-Ilan & Halevi, 2017). This suggests that most authors are simply not aware of the retracted status of many publications. Of course, if you visit the publisher’s website, you should see a clear notification at the Version of Record (e.g. Figure 31.2) that points to the retraction notice (see Figure 31.3), but this is not always the case. In general, publishers seem very shy about their retractions, and it can be difficult to track down retractions that should be clear for everyone to see. Indeed, if publishers did their due diligence in notifying retractions, and these were entered into CrossRef, we wouldn’t need an independent database like the Retraction Watch database.

You should not cite a retracted paper. Once papers are retracted they don’t disappear. They continue to be available at the publishers’ website, but with a clear notice that they have been retracted (see below). In addition, a separate publication is made announcing the retraction of the work, as shown in Figure 31.3.

FIGURE 31.2: A retracted paper at the publisher’s website. The Version of Record remains available, but carries a clear notice that the paper has been retracted.

If you downloaded the article before it was retracted, then you will not be aware of what has happened unless you are following that particular publication. Similarly, if you get your search results from Google Scholar, there is no indication that a paper has been retracted. Scopus and Web of Science clearly indicate some articles that have been retracted, but the vast majority go unrecorded even on these databases. Perhaps this is why even highly publicised retractions continue to be cited by articles that follow (Piller, 2021). Clearly, the community is still responsible for watching what happens to the literature, even once a paper is cited. Of course, the publishers could be using items such as DOIs to track retracted papers and query their citations. So why don’t they?
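In principle, the plumbing involved is modest. The sketch below is a toy illustration, not any publisher's actual workflow: CrossRef metadata can link a retraction notice to the paper it retracts via an `update-to` field, so a tool holding your cited DOIs could query for such updates and flag matches. The response structure here is a trimmed, hypothetical stand-in with invented placeholder DOIs; the exact field names and the `updates` filter should be checked against the live API (`https://api.crossref.org/works?filter=updates:<doi>`).

```python
import json

# A trimmed, hypothetical CrossRef-style response for
#   GET https://api.crossref.org/works?filter=updates:<doi>
# The DOIs below are invented placeholders, not real records.
SAMPLE_RESPONSE = json.loads("""
{"message": {"items": [
  {"DOI": "10.9999/retraction.notice.1",
   "update-to": [{"DOI": "10.9999/original.paper.1", "type": "retraction"}]}
]}}
""")

def retraction_notices(response, doi):
    """Return the DOIs of any retraction notices that point at `doi`."""
    return [item["DOI"]
            for item in response["message"]["items"]
            for update in item.get("update-to", [])
            if update.get("type") == "retraction"
            and update.get("DOI", "").lower() == doi.lower()]
```

A reference manager could run such a check over every DOI in a library and flag any matches, which is essentially what Zotero does with the Retraction Watch data.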

Publishers are bad at sending through the correct metadata with their content: for the same 30-year period as Figure 31.1, Web of Science lists only 133 retractions. What this really means is that if you aren’t using Zotero, and you want to be sure that there hasn’t been a retraction in any of your source material, you need to run a search on the Retraction Watch database (better to use Zotero, says Ivan Oransky). Right now, the chance that any paper published in the last 30 years that you have cited will be retracted is still low (one in 1750), but if it was published in 2020 this rises to one in 650.

Some literature databases will notify you if a paper that you have in your database is retracted - but don’t count on this unless you use Zotero. Zotero takes DOIs from the Retraction Watch database and uses them to notify users of any retractions of articles in their libraries. A large red bar (that’s hard to ignore) is shown across the top of the citation window. This is a strong and very positive reason for using Zotero, leaving you to get on with your research.

31.3.2 Notification of retraction

The notification of retraction is supposed to explain exactly why a paper has been retracted. If you have cited or used the work, you should know whether it was retracted because data were fabricated or, more innocently, because there was a mistake with the equipment or another aspect of the investigation. However, it seems that some journals issue retraction notices that fall short of the COPE guidelines (Barbour et al., 2009), and the resulting delays are not in the interest of anyone involved (see Grey, Avenell & Bolland, 2021; Teixeira da Silva, 2021a). Clearly, there is need for improvement here on the part of the journals.

But we must be cautious about playing a blame game when it comes to journal retractions (Smith, 2021). We already know that peer review has shortcomings (see Part IV), and even the best peer reviewers and/or editors cannot be expected to spot potentially fatal errors, especially when these come about from deliberate deceit on the part of the authors. Retraction will remain a necessary part of the scientific publishing process, and as retractions become more commonplace among journals, we can hope that guidelines will be followed in a timely manner (Grey, Avenell & Bolland, 2021).

You can, however, cite the retraction, which is published under a separate citation string. You might want to do this as an example of the type of work that is retracted, like the one shown (Costa-Pereira & Pruitt, 2020). This retraction notice refers to the Version of Record seen in Figure 31.2.

FIGURE 31.3: The retraction notice (Costa-Pereira & Pruitt, 2020), published under its own citation string, referring to the Version of Record shown in Figure 31.2.

31.4 Retraction Watch

To learn more about retractions in science, I encourage you to read the blog at Retraction Watch (https://retractionwatch.com/). This will give you an idea of the reasons why retractions are made, and give you some perspectives about the practices (and malpractices) that go on in the scientific environment.

31.5 Fabrication of data

The fabrication of data does happen. An anonymous survey on research integrity in the Netherlands suggested that the prevalence of fabrication was 4.3% and of falsification 4.2%, while nearly half of those surveyed admitted to questionable research practices, most prevalent among PhD candidates and junior researchers (Gopalakrishna et al., 2021). A growing body of retractions and alleged evidence of the tampering of data in spreadsheets has led to the suspension of a top Canadian researcher, Jonathan Pruitt. Fraudulent (usually duplicated) data in spreadsheets are not too difficult to spot (e.g. by digit-frequency tests: Benford’s law predicts the frequencies of leading digits in naturally occurring numbers, while their last digits should approach uniformity), and finding such instances of fraud has become the specialty of some contract data scientists. To some extent, the automated assessment of fraudulent practices has been, or could be, implemented for many infringements (Bordewijk et al., 2021).
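As a toy illustration of the terminal-digit idea (not the tooling that forensic data specialists actually use), the sketch below computes a chi-square statistic for the last digits of a column of integer measurements. Genuinely measured values should give a small statistic, while hand-typed fabrications often favour particular digits:

```python
from collections import Counter

def terminal_digit_chi2(values):
    """Chi-square statistic for uniformity of the last digits of
    integer measurements. With 9 degrees of freedom, values much
    above ~16.9 (p = 0.05) suggest the digits are not uniform."""
    digits = [abs(v) % 10 for v in values]
    expected = len(digits) / 10          # uniform expectation, digits 0-9
    counts = Counter(digits)
    return sum((counts.get(d, 0) - expected) ** 2 / expected
               for d in range(10))

# Perfectly uniform last digits give a statistic of 0;
# fifty copies of the same reading give a very large one.
print(terminal_digit_chi2(list(range(100))))   # → 0.0
print(terminal_digit_chi2([1237] * 50))        # → 450.0
```

A real screen would of course need many values per column and careful handling of rounding, units and instrument precision before any digit pattern could be called suspicious.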

The existence of paper mills should also be mentioned at this point. These papers, produced commercially and to order, are entirely fabricated by third parties to improve the CVs of real, paying scientists (Teixeira da Silva, 2021a). Hundreds of such papers have been discovered that appear to come from the same source (Bik, 2020), but the true size of such additions to the scientific literature is unknown.

The pressure to publish is widely acknowledged as driving questionable research practices, including fraud (Gopalakrishna et al., 2021). Some have suggested that the additional pressure to obtain a permanent academic position is enough to drive some scientists to commit fraud (Fanelli, Costas & Larivière, 2015; Husemann et al., 2017; Kun, 2018). The idiom ‘publish or perish’ and the importance of publishing are discussed elsewhere in this book. However, I hope that by shedding some light on unethical practices, this book equips you to avoid them, together with those who may espouse them, and shows you that there is a better path to success.

Pruitt’s case highlights a good reason for increased transparency in the publication process. A blog post from someone caught up in the Pruitt retractions makes the point that journals that insisted on full data deposits for publication were well ahead of those that didn’t (Bolnick, 2021). Of growing concern in many areas of the Biological Sciences is the potential to manipulate results that are essentially images, for example blots on a gel. However, it turns out that manipulated images are also not too hard to discern.

Images are increasingly being used in journals to demonstrate results, and the manipulation of images in published papers appears to be rife. In a study of more than 20 000 papers from 40 journals (1995 to 2014), Bik et al. (2016) found that nearly 2% had features suggesting deliberate manipulation. These included simple duplication of an image from supposedly different experiments, duplication with manipulation (e.g. rotation, reversal, etc.) and duplication with alteration (including adding and subtracting parts of the copied image). The authors suggested that, as they only considered these types of manipulations in certain image types, the actual level of image fraud in scientific papers is likely much higher (Bik, Casadevall & Fang, 2016). An R package (FraudDetTools) is now available for checking manipulation of images (Koppers, Wormer & Ickstadt, 2017), but there are other steps that reviewers and editors can take themselves (Byrne & Christopher, 2020).
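A minimal sketch of how duplicate detection can work (a generic average-hash comparison, not the method used by FraudDetTools or by Bik et al.): each image is reduced to a pattern of bits by thresholding pixels against the image mean, and two images whose bit patterns sit within a small Hamming distance of each other are flagged for human inspection.

```python
def average_hash(pixels):
    """Bit-pattern 'fingerprint' of a grayscale image (a 2-D list of
    intensities): 1 where a pixel is brighter than the image mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(int(p > mean) for p in flat)

def hamming(h1, h2):
    """Number of differing bits between two equal-length hashes."""
    return sum(a != b for a, b in zip(h1, h2))

blot_a = [[12, 240], [235, 18]]   # toy 2x2 'blot' images
blot_b = [[12, 240], [235, 18]]   # exact duplicate of blot_a
blot_c = [[240, 12], [18, 235]]   # mirrored copy

print(hamming(average_hash(blot_a), average_hash(blot_b)))  # → 0
print(hamming(average_hash(blot_a), average_hash(blot_c)))  # → 4
```

Catching rotated or mirrored duplicates then amounts to comparing against the hashes of the transformed images as well; real screening tools add normalisation for cropping, contrast and compression artefacts.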

31.5.1 Who is responsible?

In the case of fraud, retraction statements should indicate who the perpetrator is in order to exonerate the other researchers. Some research into the likely source of fraud has been conducted. There are clearly serial fraudsters, and the presence of their names in an author list is a red flag for those investigating fraud. Data from papers that are known to be fraudulent suggest that the first author is the most likely to be responsible for the fraud committed, and middle authors the least (Hussinger & Pellens, 2019). This means you should be very careful about whom you collaborate with: while you might not be responsible, the discovery of (particularly large-scale) fraud might well harm your career.

A study looking for patterns in the types of authors involved in retractions (for all reasons) suggested that Early Career Researchers were particularly likely to be involved (Fanelli, Costas & Larivière, 2015), although exactly why remains obscure.

It is worth noting that while the journals (and ultimately the journal editor) are responsible for retractions from journals, this is not the same as punishing individuals who have committed fraud. As we have already seen, there can be many reasons for a retraction, and it is not up to editors or journals to punish researchers, as there will be innocent researchers who may also need to make retractions. Moreover, it should never be the role of the journal to take any punitive action against a researcher. There are other mechanisms for this through the employer and (where applicable) the academic society involved. Different institutions and governments will have different rules when it comes to fraud committed by their employees. These processes may be legal and take some time to finish. As we saw with the case of Pruitt, once lawyers get involved, the process may become mired in a lot more bureaucracy.

Whilst you may consider that your employers are not a group that you are particularly concerned about, you should consider the possibility that any scientific fraud could penetrate deeper than you are aware. For example, if you used fraudulently obtained data in order to make a grant application look better, then this would be deemed very serious by the funder (probably a government) that you were applying to. Essentially, this becomes financial fraud, as you are using false data in order to obtain money. While your employers may simply remove you from your position (and their employment), the government might prosecute, and you could find yourself in prison. This does happen in some countries, and so (obviously) it is a really bad idea to commit fraud.

31.6 What to do if you suspect others

If you suspect that someone in a lab in your department, faculty or university is fabricating data, find out whether your university has a research integrity officer (RIO); most universities in the US have one. Document your evidence if you can and approach the RIO or person in the equivalent position. If you can’t find such a person, then ask at your university library for the most relevant person. Libraries are usually neutral places where you can find out information without arousing suspicion. You do need to be careful that you do not place yourself in harm’s way when reporting, so be prudent about sharing until you are assured protection from any potential retaliation. It isn’t easy to be a whistleblower - but it is the right thing to do.

If the research is published, and you think it is fraudulent, approach the editor directly. If there is some conflict of interest (like the person is at your institution), then you can try to sort it out internally (as described above). Otherwise, you can approach the editor directly yourself, anonymously or by using a third party.

The Committee on Publication Ethics (COPE) has published some useful flowcharts to guide researchers who suspect fraud in manuscripts or published articles (COPE, 2017, 2018a, 2018b, 2018c; Wager, 2006a, 2006b, 2006c).

31.7 Confirmation Bias and the paradox of high-flying academic careers

Jonathan Pruitt had it all going for him. His studies of spider sociality were producing novel and significant results that opened the door to publications in high impact journals. In turn, this opened the door to prizes and funding. The funding allowed him to conduct more studies, and soon brought a prestigious chair in Canada with more funding to pursue his rocketing career. Things started to unravel for Pruitt when colleagues raised concerns about the data in some of his publications. Matters gathered pace very quickly, and doubt spread to more and more of his publications. Although much has been written about the Pruitt debacle on the internet, the blog by American Naturalist editor and former Pruitt fan and friend, Dan Bolnick, is particularly enlightening (Bolnick, 2021). Pruitt’s case has become increasingly untenable as more editors, backed by co-authors, retract papers to which he contributed data (see Marcus, 2020). For Pruitt, this has become a threat to his career and livelihood (Pennisi, 2020). Similarly, his university is facing the possibility that they hired a fraud. Consequently, this whole debacle has slipped into the legal world. Bolnick has clearly suffered personally from the affair, but has set out to provide as transparent an account as possible.

The harrowing part of Bolnick’s account is when he, co-authors and other editors started to receive letters from Pruitt’s lawyer (see one example on Bolnick’s blog). At the point that the lawyer stepped in, the academic community that had organised itself to address the concerns began to fall silent. Bolnick then makes an important point: the legal threats from Pruitt’s lawyer were stifling the freedom of academics (in this case the co-authors and editors) to publish, and therefore their academic freedom.

Another example of a rising star with high profile papers allegedly making a habit of fabricating data comes from the world of marine biology (Clark et al., 2020). The researchers in question, Danielle Dixson under the supervision of Philip Munday, made counter-claims that their detractors were unimaginative or were attempting to make a career from criticism. Meanwhile, students from their own labs continue to raise concerns about the culture of fraud (see Enserink, 2021). In this case, the tide of evidence against the marine biologists appears to have turned, with forensic data specialists finding multiple examples of suspect data.

Neither case is fully resolved as the cases against these scientists still rest with their institutions. The lives of co-authors, former students and colleagues are put on hold, until some unforeseen point in the future.

It is clear from these reports that there are systemic problems when high profile scientists are accused of fraud. Journals say that it’s the responsibility of the institutions, and the institutions have no impetus to find fraud, as that might lose them a very productive (think research income) and high profile scientist. What university would want to have its name dragged through the mud, and on top of this lose a large amount of grant income? Top researchers become untouchables in many institutions because they are essentially cash cows that no-one wants to disturb. Allegations against such individuals also include bullying and sexual misconduct. For those interested in reading more about high-profile misdemeanours in science, Stuart Ritchie (2020) has put together a popular book on the subject.

Another important issue that arises from (alleged) scientific fraud is that it creates a research culture that pushes towards extremely unlikely hypotheses, in the misbelief that such hypotheses are likely given the nature of the publications (also see Fanelli, Costas & Ioannidis, 2017). Indeed, this natural selection of ‘bad science’ has permeated the hiring system, so that researchers like this are more likely to be hired (Smaldino & McElreath, 2016). Forstmeier et al. (2017) have an excellent review outlining the problems with a culture that pushes towards increasingly unlikely hypotheses (see also Measey, 2021 on Type I errors).

At the heart of all of this is the cult of the Impact Factor and the research mentality that it generates.

References

Anon. 2018. Retraction. Behavioral Ecology 29:508–508. DOI: 10.1093/beheco/ary014.
Barbour V, Kleinert S, Wager E, Yentis S. 2009. Guidelines for retracting articles. Committee on Publication Ethics. DOI: 10.24318/cope.2019.1.4.
Bar-Ilan J, Halevi G. 2017. Post retraction citations in context: A case study. Scientometrics 113:547–565. DOI: 10.1007/s11192-017-2242-0.
Berenbaum MR. 2021. Editorial Expression of Concern: New class of transcription factors controls flagellar assembly by recruiting RNA polymerase II in Chlamydomonas. Proceedings of the National Academy of Sciences 118. DOI: 10.1073/pnas.2108930118.
Bik EM. 2020. The Tadpole Paper Mill. Science Integrity Digest.
Bik EM, Casadevall A, Fang FC. 2016. The Prevalence of Inappropriate Image Duplication in Biomedical Research Publications. mBio 7:e00809–16. DOI: 10.1128/mBio.00809-16.
Bolnick D. 2021. 17 months. Eco-Evo Evo-Eco.
Bordewijk EM, Li W, Eekelen R van, Wang R, Showell M, Mol BW, Wely M van. 2021. Methods to assess research misconduct in health-related research: A scoping review. Journal of Clinical Epidemiology 136:189–202. DOI: 10.1016/j.jclinepi.2021.05.012.
Brainard J, You J. 2018. What a massive database of retracted papers reveals about science publishing’s ‘death penalty.’ Science. DOI: 10.1126/science.aav8384.
Brembs B, Button K, Munafò MR. 2013. Deep impact: Unintended consequences of journal rank. Frontiers in Human Neuroscience 7:291. DOI: 10.3389/fnhum.2013.00291.
Byrne JA, Christopher J. 2020. Digital magic, or the dark arts of the 21st century—how can journals and peer reviewers detect manuscripts and publications from paper mills? FEBS Letters 594:583–589. DOI: 10.1002/1873-3468.13747.
Clark TD, Raby GD, Roche DG, Binning SA, Speers-Roesch B, Jutfelt F, Sundin J. 2020. Ocean acidification does not impair the behaviour of coral reef fishes. Nature 577:370–375. DOI: 10.1038/s41586-019-1903-y.
COPE. 2017. How to spot potential manipulation of the peer review process. Committee on Publication Ethics.
COPE. 2018a. What to do if you suspect image manipulation in a published article. Committee on Publication Ethics; Springer Nature.
COPE. 2018b. How to recognise potential authorship problems. Committee on Publication Ethics.
COPE. 2018c. Systematic manipulation of the publication process. Committee on Publication Ethics; Springer Nature.
Costa-Pereira R, Pruitt J. 2020. Retraction: Behaviour, morphology and microhabitat use: What drives individual niche variation? Biology Letters 16:20190266. DOI: 10.1098/rsbl.2020.0588.
Enserink M. 2021. Does ocean acidification alter fish behavior? Fraud allegations create a sea of doubt. Science 372:560–565.
Fanelli D, Costas R, Ioannidis JPA. 2017. Meta-assessment of bias in science. Proceedings of the National Academy of Sciences 114:3714–3719. DOI: 10.1073/pnas.1618569114.
Fanelli D, Costas R, Larivière V. 2015. Misconduct Policies, Academic Culture and Career Stage, Not Gender or Pressures to Publish, Affect Scientific Integrity. PLOS ONE 10:e0127556. DOI: 10.1371/journal.pone.0127556.
Fang FC, Casadevall A. 2011. Retracted Science and the Retraction Index. Infection and Immunity 79:3855–3859. DOI: 10.1128/IAI.05661-11.
Fang FC, Steen RG, Casadevall A. 2012. Misconduct accounts for the majority of retracted scientific publications. Proceedings of the National Academy of Sciences 109:17028–17033. DOI: 10.1073/pnas.1212247109.
Forstmeier W, Wagenmakers E-J, Parker TH. 2017. Detecting and avoiding likely false-positive findings–a practical guide. Biological Reviews 92:1941–1968. DOI: 10.1111/brv.12315.
Gopalakrishna G, Riet G ter, Cruyff MJLF, Vink G, Stoop I, Wicherts J, Bouter L. 2021. Prevalence of questionable research practices, research misconduct and their potential explanatory factors: A survey among academic researchers in The Netherlands. MetaArXiv. DOI: 10.31222/osf.io/vk9yt.
Grey A, Avenell A, Bolland M. 2021. Timeliness and content of retraction notices for publications by a single research group. Accountability in Research 0:1–32. DOI: 10.1080/08989621.2021.1920409.
Husemann M, Rogers R, Meyer S, Habel JC. 2017. Publicationism and scientists’ satisfaction depend on gender, career stage and the wider academic system. Palgrave Communications 3:1–10. DOI: 10.1057/palcomms.2017.32.
Hussinger K, Pellens M. 2019. Scientific misconduct and accountability in teams. PLOS ONE 14:e0215962. DOI: 10.1371/journal.pone.0215962.
Koppers L, Wormer H, Ickstadt K. 2017. Towards a Systematic Screening Tool for Quality Assurance and Semiautomatic Fraud Detection for Images in the Life Sciences. Science and Engineering Ethics 23:1113–1128. DOI: 10.1007/s11948-016-9841-7.
Kun Á. 2018. Publish and Who Should Perish: You or Science? Publications 6:18. DOI: 10.3390/publications6020018.
Marcus AA. 2020. Spider researcher uses legal threats, public records requests to prevent retractions. Retraction Watch.
Measey J. 2021. How to write a PhD in biological sciences: A guide for the uninitiated. Boca Raton, Florida: CRC Press.
Oransky AI. 2011. The Year of the Retraction: A look back at 2011. Retraction Watch.
Pennisi E. 2020. Embattled spider biologist seeks to delay additional retractions of problematic papers. Science.
Piller C. 2021. Disgraced COVID-19 studies are still routinely cited. Science 371:331–332. DOI: 10.1126/science.371.6527.331.
Ritchie S. 2020. Science Fictions: How Fraud, Bias, Negligence, and Hype Undermine the Search for Truth. Metropolitan Books.
Smaldino PE, McElreath R. 2016. The natural selection of bad science. Royal Society Open Science 3:160384. DOI: 10.1098/rsos.160384.
Smith EM. 2021. Reimagining the peer-review system for translational health science journals. Clinical and Translational Science. DOI: 10.1111/cts.13050.
Teixeira da Silva JA. 2021a. Abuse of ORCID’s weaknesses by authors who use paper mills. Scientometrics. DOI: 10.1007/s11192-021-03996-x.
Toro VP, Padhye AD, Biware MV, Ghaya NA. 2019. Retraction Note to: Larvicidal effects of GC-MS fractions from leaf extracts of Cassia uniflora Mill non Spreng. Journal of Biosciences 44:76. DOI: 10.1007/s12038-019-9892-4.
Wager E. 2006a. Suspected ghost, guest or gift authorship. Committee on Publication Ethics.
Wager E. 2006b. Suspected fabricated data in a submitted manuscript. Committee on Publication Ethics.
Wager E. 2006c. Suspected plagiarism in a submitted manuscript. Committee on Publication Ethics.