
peer review goes under the microscope

Peer review isn't perfect. But with a few small but meaningful tweaks, it's still our best hope for quality science and for catching errors and fraud.
death by peer review

You've probably noticed that in this blog's posts and in comment threads, I often ask for peer-reviewed papers to back up an odd or outright incorrect argument. And yet, at the same time, quite a few posts here are rather critical of certain scientific papers even though they passed peer review, the supposed gold standard for all scientific publications. Likewise, you might remember that Andy Wakefield's attempt to cash in on a fear of MMR vaccines he manufactured with a fraudulent study and Bill Dembski's inane attempt to look like an expert in information theory were both peer-reviewed and published in scientific journals, along with various alt med woo and papers that could only be described as a violent assault on particle physics. That's why Michael Brooks is calling for a major overhaul of peer review on New Scientist's The S Word blog, and while he focuses on why papers that should never see publication are making it into journals, the fact that workable papers are actively being excluded as well makes his call even more urgent and important.

Generally speaking, not all journals are created equal, and the higher the impact, the better the quality of both the research and the experts' reviews. That's why, while the recklessly dangerous brand of crankery known as HIV denialism is uncritically regurgitated by Medical Hypotheses alongside various alt med woo trying to justify homeopathy, faith healing and archaic folk remedies, publications like Nature and Science showcase extensive research into quantum mechanics and potential future treatments for cancers based on solid, well designed experiments. Just saying that a paper has been reviewed isn't enough, since anyone could start a vanity journal which pushes a particular ideology with no regard for the science involved, or which publishes pretty much anything it's sent to collect a fee and let a submitter brag about having a publication, in much the same way a mail-order degree lets someone brag about having a college education. This is how the Medical Hypotheses journal stays in business, and how Ken Ham's lackeys at Answers in Genesis can claim to author peer-reviewed publications, despite the fact that the peer review in question consists of people who believe The Flintstones was an accurate documentary of prehistoric life patting each other on the back.

So, applying these basic rules, if we choose a study from a respectable, high-impact publication catering to highbrow research, we should be fine once real peer review kicks in, right? Hopefully yes, but the reality isn't as rosy. Wakefield's aforementioned study, paid for by a trial lawyer and meant to let him sell a measles vaccine of his own design, was published in a prestigious medical journal, and it took years and a huge public uproar before The Lancet finally retracted it, even though the vast majority of the paper's co-authors publicly disowned their work soon after its publication. Bad papers do appear in journals with high standards, and many decent papers get shot down by an anonymous reviewer or two who could've been using their position to carry out a personal vendetta, or who just stubbornly didn't believe the paper or its authors. On top of that, scientists trying to get their work published and gearing their papers towards editors can be biased towards submitting only positive results. But while a successful experiment makes for better headlines, a failed trial could be just as helpful, saving scientists from spending weeks or months recreating another team's mistake, or inviting a fresh look at a promising but supposedly flawed idea that could be made workable with a few tweaks.

All right, so we can be careful about which journals we choose as our sources to get a better evaluation of a scientific paper, but how can we fix the editorial biases and the groundless complaints from reviewers whose identities are often shielded from the public? Publishing more negative results would take up too much space for journals in paper format, but with today's vast online portals and knowledge bases, it should be easy to get write-ups of promising but failed experiments into some sort of peer-reviewed repository, one which would encourage researchers to collaborate on problems and identify dead ends. As for reviewers, journals could identify them so their credentials can be verified by the outside world, or allow more online open access papers which could be read and evaluated by more than three reviewers. Think of a fusion of PLoS and a wiki, only with expert readers contributing their input to a study's authors for consideration. These are simple and straightforward steps that could lead to marked improvements in today's peer review. Of course politics, personal grudges, competition for grants and prestige, and even outright mistakes on the part of reviewers will always affect the peer review process as long as it's done by humans. But we're not looking for perfection here. We're looking for better, more stringent reviews and a more transparent process.

[ cartoon from PHD Comics by Jorge Cham ]

# science // journal / scientific publications / scientific research

