What hurts science - rejection of good or acceptance of bad?

Yesterday, Science published a story by John Bohannon about the acceptance of a fake, deeply flawed paper at open-access journals, despite peer review. Disturbingly, 157 journals accepted the bogus article and only 98 rejected it. Scientists and some journalists swiftly pointed out the grave problems with this attack on open access: the sting operation highlights problems with traditional peer review, but it says very little about open access, since the same experiment was not run on subscription journals [1].

To me, the stunning part is that a journal titled Science published a fake study, without a control, about the problem of accepting fake studies. But the bigger question is: how much damage does the publication of poor science actually do? Does anyone really read the journals that accepted this?

I took the 157 journals that accepted Bohannon’s fake paper and asked how many articles from them appear in the libraries of PubChase users. Out of over 75,000 articles in our users’ libraries, only 5 come from this set (all five from the single journal Bioinformation). In contrast, our users have 1,631 articles from 12 of the 98 journals that rejected the paper [2].
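The comparison above boils down to a set-membership count over a user library. Here is a minimal sketch of that calculation; the journal titles and library entries are hypothetical stand-ins, not the actual PubChase data.

```python
from collections import Counter

def count_articles_by_journal_set(library, journal_set):
    """Count how many articles in a user library come from a given set of journals.

    Journal names are normalized (trimmed, lowercased) before matching, since
    the same journal often appears with inconsistent capitalization.
    """
    normalized = {j.strip().lower() for j in journal_set}
    hits = Counter(
        a["journal"].strip().lower()
        for a in library
        if a["journal"].strip().lower() in normalized
    )
    return sum(hits.values()), hits

# Hypothetical stand-ins for the accepting/rejecting journal lists and a user library.
accepting = {"Bioinformation", "Journal of Obscure Results"}
rejecting = {"PLOS One", "mBio", "Carcinogenesis"}
library = [
    {"journal": "PLOS One"},
    {"journal": "mBio"},
    {"journal": "Bioinformation"},
    {"journal": "plos one"},
]

acc_total, _ = count_articles_by_journal_set(library, accepting)
rej_total, _ = count_articles_by_journal_set(library, rejecting)
print(acc_total, rej_total)  # 1 3
```

In practice the hard part is the name normalization: journal titles in reference metadata vary in abbreviation and capitalization, so a real version would match on ISSNs or NLM journal IDs rather than raw strings.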

The real problem in science is not that bad papers get published; that has always been and will continue to be the case. The real problem is that good and important papers are rejected and delayed from publication by journals such as Science. These delays hurt the progress of science and they demoralize and ruin careers.

Finally, when it comes to publishing bad research, Science is not the journal that should be pointing fingers. The 2011 editorial “Retracted Science and the Retraction Index” showed unambiguously that the higher a journal’s impact factor, the higher its retraction rate. Not surprisingly, Science had the second-worst retraction rate of all the journals considered in that editorial.

  1. Great list of the responses here.
  2. Of the 98 journals that rejected the fake paper, PubChase users have articles from 12: PLOS ONE, mBio, Neurosurgical Focus, International Journal of Biological Sciences, Chinese Medical Journal, American Journal of Nuclear Medicine and Molecular Imaging, Carcinogenesis, Yonsei Medical Journal, Current Issues in Molecular Biology, Anti-Cancer Drugs, Immunome Research, and Environmental Health Perspectives.
This entry was posted in PubChase.

20 Responses to What hurts science - rejection of good or acceptance of bad?

  1. Pingback: John Bohannon’s peer-review sting against Science | Sauropod Vertebra Picture of the Week

  2. Stephen Thorpe says:

    Costello, M.J.; Wilson, S.; Houlding, B. 2012: Predicting total global species richness using rates of species description and estimates of taxonomic effort. Systematic Biology, 61(5): 871–883. doi: 10.1093/sysbio/syr080

    The above article was not intended as a “hoax”, as such, but it is, IMHO, deeply flawed in just about every respect. The obvious hoaxes are not the problem; it is articles like the one above that are far more of a problem. The article makes no logical sense, its conclusions are not supported by the methods or results, and everything is vague and self-contradictory! The approach seems to have been to simply state the desired conclusion, and then throw up a smokescreen of overly complex and incomprehensible material that purports to support that conclusion.

    • Lenny Teytelman says:

      I am not entirely sure why highlighting this particular article is necessary in the context of the blog post. However, the comment underscores the real value of Bohannon’s sting: traditional peer review does not work. The Costello paper might have been rejected by Systematic Biology had it been sent to you for review. Costello would then have submitted elsewhere and eventually published. That is just a waste of time and effort for both the authors and the reviewers. We need to move away from pre-publication peer review. Discussions should happen in the open, post-publication, so that you, Costello, and others could have an open and public discourse about the paper.

      Michael Eisen nails this in his post about the Science sting: http://www.michaeleisen.org/blog/?p=1439

  3. Nice analysis. Is it possible to calculate how many papers in this same data set have been retracted, and whether there is any significant difference in the proportion of lax versus outright wrong papers in people’s libraries?

    • Lenny Teytelman says:

      Hmm, how do you gauge lax versus wrong? Also, is there a central listing of retractions? PubMed seems to show it for some papers, but I am not sure it does for all.
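On the central-listing question: PubMed does mark retractions via the publication-type field (the type “Retracted Publication”), though, as noted above, coverage may not be complete. Given records parsed from PubMed metadata, filtering is then straightforward. A minimal sketch, using hypothetical records:

```python
def find_retracted(records):
    """Return the records whose PubMed publication types mark them as retracted.

    Assumes each record carries a 'publication_types' list as parsed from
    PubMed metadata; records without the field are treated as not retracted.
    """
    return [
        r for r in records
        if "Retracted Publication" in r.get("publication_types", [])
    ]

# Hypothetical records shaped like parsed PubMed metadata.
records = [
    {"pmid": "111", "publication_types": ["Journal Article"]},
    {"pmid": "222", "publication_types": ["Journal Article", "Retracted Publication"]},
    {"pmid": "333", "publication_types": ["Review"]},
]

retracted = find_retracted(records)
print([r["pmid"] for r in retracted])  # ['222']
```

A real pipeline would fetch the metadata for each library PMID from PubMed first; the sketch only shows the filtering step once those records are in hand.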

  4. Arjun Raj says:

    Lenny, I think your point is right on. Who cares! Honestly, most science (even in fancy journals) is pretty stupid and pointless. Probably some of my work as well. The really bad stuff gets out no matter what, and it’s just not worth policing–that stuff does not drive any sort of scientific discussion. It’s the bad stuff in Science and Nature that is the most damaging, because it actually influences scientific discussion, often inflating entire fields of bogus science for decades…

    • Agreed. We can’t stop self-publishing, and there are so many journals that publish at least some papers worth reading that relying on pre-selected journal lists is not a good strategy (although if we restricted ourselves to journals passing a few simple tests, we wouldn’t lose much). Most people find papers via keyword search, surely. So the main issue now is filtering: most of the “good” stuff is indistinguishable from noise to most researchers, since it is irrelevant to their work, being in remote fields. What I don’t want to see is the herd behaviour caused by Science and Nature replaced by herd behaviour caused by Twitter. We should aim to improve search capability, so that it is easier to find papers based on ideas, even when you don’t know the idiosyncratic notation and terminology an author may be using. Post-publication review (with a very light pre-publication step to filter out computer-generated nonsense) is obviously the way to go, and the sooner we get serious about it, the better.

      • Eva Amsen says:

        “Post-publication review (with a very light pre-publication step to filter out computer-generated nonsense) is obviously the way to go”

        That’s pretty much the model we use at F1000Research!

        We do a bit more than just check for computer-generated text in our pre-publication check, though. That is also the point where we check papers for plagiarism, for example, and make sure all figures and data are present; the whole process takes just a few days, and then the paper is formally published with a DOI. When invited referee reports come in, they are published with the article (publicly, and with the referee’s name), and once a paper passes peer review, it is indexed in external databases.

        More about our model on our About Page and in FAQs.

        • I fully agree with and support the idea of decoupling the importance of a paper from its review and from the journal where it is published. The part that is essential to keep, and to raise in significance, is, in my opinion, scientific rigor: the claims in a scientific article must be supported by the data and analysis. I am not convinced that we can rely on random “experts” or majority lay opinion for that part. It is great to make the evaluation of a paper transparent and ongoing, but the first pass, I think, must involve people who have a high probability of correctly assessing whether the claims are supported by the data and analysis. Otherwise we may end up with numerous weak and incorrect but mutually supporting claims forming influential opinions, a phenomenon well studied and amply demonstrated in psychology research.

          • Eva Amsen says:

            Just to be clear: F1000Research invites reviewers in the appropriate fields (the same as other journals do). The only differences are that their names and reports are public, and that the paper is already online. It is not open for anyone to review. (Anyone can leave a comment, but comments are not part of the formal peer review.)

  5. Pingback: Bohannon’s Science Sting - playing devil’s advocate and proposing a solution | adamgdunn

  6. Pingback: Science Mag sting of OA journals: is it about Open Access or about peer review? | I&M / I&O 2.0

  7. Pingback: ¿Quién teme a la revisión por pares? Demos un voto de confianza al sistema | Psicoteca

  8. This is a good point, Lenny, and I love your analysis of the readership of the journals that accepted the flawed paper. It is quite clear that even a huge number of such flawed papers published in such “illustrious venues” is not going to influence the mainstream scientific community much.

    I think, however, that there is a cost to contaminating and diluting the mainstream scientific literature with flawed papers. It takes at least a few minutes to scan through a paper before I can confidently say it is flawed; I would rather not spend a few hours every week scanning through a few dozen obviously flawed papers. That extreme is undesirable as well, and it is a balancing act to decide how to implement the optimal filter. I do agree with you that the current system filters more aggressively than I would consider optimal. The better solution, in my opinion, is to implement a better and more efficient filter, rather than just changing the stringency of a filter that is likely to incur many false positives and/or many false negatives.

    • Lenny Teytelman says:

      I would love to be able to instantly skip deeply flawed papers. Even more importantly, I would love to know in good papers what I can and cannot trust. In most research, unless I am an expert in that particular topic, it is impossible for me to judge whether a figure/result/interpretation is valid.

      What John Bohannon showed very well is that traditional peer review and journal-based assessment of quality is broken. We can’t rely on it to weed out the bad and we can’t rely on it to select the good for us.

      The solution being tried by the likes of PeerJ and F1000Research is to make peer review post-publication and visible over the entire life of the paper, with more and more experts able to weigh in on the article after publication.

  9. I see shades of gray rather than black and white. It has been very clear to all active scientists I know, long before John Bohannon, that the peer-review system is not perfect, not to mention that the particular journals John targeted are, as you showed, out of the picture for most of us. Calling something a journal, or calling something peer review, does not make it one.

    On the other hand, I want to defend our imperfect peer-review system. Yes, I have received misguided feedback on papers, but I have received much constructive and thoughtful feedback as well. At the end of the day, the conditional probability that I think highly of an MBoC paper is very different from the conditional probability that I think highly of a paper that has undergone only “very light initial screening”. While we cannot rely on peer review at any journal to perfectly weed out the bad and select the good for us, I think that peer review at some journals does a good job of weeding out the bad and selecting the good for me.

    The current system has many flaws, needs revision, and is undergoing rapid revision. Hopefully we move to a lighter shade of gray, but I am not convinced that we should abandon what we have (eLife’s approach to peer review sounds quite good to me) and open the floodgates to papers evaluated by questionable experts. Calling somebody an expert does not make him or her an expert, just as calling something a peer-review journal does not make it one. I feel that John derived a conclusion from an experiment lacking controls and targeting a non-representative sector of journals, many of which, as you showed, are outside the reading list for most of us. Such a conclusion can hardly be an argument for ignoring peer review. Peer review has never been perfect, never will be, and we should never trust it blindly. Yet I think it has many virtues worth incorporating into whatever new system we come up with.

  10. Lenny Teytelman says:

    I am arguing for much more peer review, not less. F1000Research and PeerJ don’t do “light” screening; they do rigorous, traditional peer review. In fact, I trust the quality of a paper in these journals more than that of a random paper in Science.

    The difference is that PeerJ and F1000Research make referee reports public and encourage the accumulation of more reviews over the lifetime of the paper.

    • Eva Amsen says:

      The “light” screening that sometimes gets mistaken for the formal peer-review step at F1000Research is the editorial check done before the paper is sent to referees. Although papers go online after this check, their status at that point is clearly labeled “awaiting peer review” (on both the article page and the PDF). It is definitely not the review itself, just the initial step.

      And as I also indicated above, the peer review is carried out by experts in the field of the paper, not by random people.

      We also encourage authors to update their papers to address reviewer comments even after they pass peer review! Here’s an example of a paper being improved further after three referees had already approved it.

  11. Pingback: The Fake Open Access Scam | Pubchase Blog

  12. Pingback: We Can Fix Peer Review Now | Pubchase Blog
