REM:
"Scientists have to go through a rigorous "peer review" process to get their papers published"
LOL! Haaaahah! Wake up, man.....
Wow, you really think that politics and grant funding are not part of the game? Do you really think that there is no bias built into this system?
Criticisms of peer review
Peer review has been criticized because of several problems, including the fears that reviewers may:
- Take advantage of ideas in manuscripts that are not yet published and grant proposals that are not yet funded
- Be biased in favor of well-known researchers or institutions
- Review the work of a competitor unfairly
- Be insufficiently qualified to provide an authoritative review
Many attempts have been made to examine these assumptions about the peer review process, and most have found such problems to be, at worst, infrequent (e.g., Abby et al., 1994; Garfunkel et al., 1994; Godlee et al., 1998; Justice et al., 1998; van Rooyen et al., 1998; Ward and Donnelly, 1998). Nonetheless, problems do occur. For example, reviewers may be less likely to criticize work that is consistent with their own perceptions (Ernst and Resch, 1994), or less likely to award a fellowship to a woman than to a man (Wennerås and Wold, 1997). Not surprisingly, because the process of peer review is highly subjective, it is possible that some people will abuse the process or act based on intentional or unintentional biases.
In addition to concerns about bias, peer review of publications does not do well at detecting innovative research or filtering out fraudulent, plagiarized, or redundant publications (reviewed by Godlee, 2000). Although laudable, these goals are not the strengths of peer review. However, this does not mean that peer review has no value. As Godlee (2000) concludes: "...there is evidence that it contributes to maintaining standards in published science, both by ensuring that lower-quality research does not appear in the higher-impact journals and by improving the quality of accepted articles before publication."
Peer Review: Reform or Revolution?
Time to Open Up the Black Box of Peer Review
by Richard Smith
British Medical Journal
1997;315:759-760

As recently as 10 years ago we had almost no evidence on peer review, a process at the heart of science. Then a small group of editors and researchers began to urge that peer review could itself be examined using scientific methods. The result is a rapidly growing body of work, much of it presented at the third international congress on peer review held in Prague last week. The central message from the conference was that there is something rotten in the state of scientific publishing and that we need radical reform.
The problem with peer review is that we have good evidence on its deficiencies and poor evidence on its benefits. We know that it is expensive, slow, prone to bias, open to abuse, possibly anti-innovatory, and unable to detect fraud. We also know that the published papers that emerge from the process are often grossly deficient. Research presented at the conference showed, for instance, that reports of randomised controlled trials often fail to mention previous trials and do not place their work in the context of what has gone before; that routine reviews rarely have adequate methods and are hugely biased by specialty and geography in the references they quote (p 766); and that systematic reviews rarely define a primary outcome measure.
Perhaps because scientific publishing without peer review seems unimaginable, nobody has ever done what might be called a placebo controlled trial of peer review. It has not been tested against, for instance, editors publishing what they want with revision, and letting the correspondence columns sort out the good from the bad and point out the strengths and weaknesses of studies. Most studies have compared one method of peer review with another and used the quality of the review as an outcome measure rather than the quality of the paper. One piece of evidence we did have from earlier research was that blinding reviewers to the identity of authors improved the quality of reviews,(1) but three larger studies presented at the congress found that it did not. The new studies also found that blinding was successful in only about half to two thirds of cases. One of those studies - by Fiona Godlee from the BMJ and two colleagues - might also be interpreted as showing that peer review "does not work." The researchers took a paper about to be published in the BMJ, inserted eight deliberate errors, and sent the paper to 420 potential reviewers: 221 (53%) responded. The median number of errors spotted was two, nobody spotted more than five, and 16% didn't spot any.
How should editors - and those deciding on grant applications - respond to the growing body of evidence on peer review and the publishing of scientific research? The most extreme sometimes argue that peer review, journals, and their editors should be thrown into the dust bin of history and authors allowed to communicate directly with readers through the internet. Readers might use intelligent electronic agents ("knowbots" is one name) to help them find valid research that meets their needs. This position is being heard less often, and at the conference Ron LaPorte - an American professor of epidemiology who has predicted the death of biomedical journals(2) - took a milder position on peer review. He sees a future for it. Readers seem to fear the fire hose of the internet: they want somebody to select, filter, and purify research material and present them with a cool glass of clean water.
Peer review is unlikely to be abandoned, but it may well be opened up. At the moment most scientific journals, including the BMJ, operate a system whereby reviewers know the name of authors but authors don't know who has reviewed their paper. Nor do authors know much about what happens in the "black box" of peer review. They submit a paper, wait, and then receive a message either rejecting or accepting it: what happens in the meantime is largely obscure. Drummond Rennie - deputy editor (West) of JAMA and organiser of the congress - argued that the future would bring open review, whereby authors know who has reviewed their paper. Such a proposal was floated several years ago in Cardiovascular Research, and several of the editors who were asked to respond (including Dr Rennie; Stephen Lock, my predecessor; and me) said that open review would have to happen.(3) Indeed, several journals already use it. The argument for open review is ultimately ethical - for putting authors and reviewers in equal positions and for increasing accountability.
Electronic publishing can allow peer review to be open not only to authors but also to readers. Most readers don't care much about peer review and simply want some assurance that papers published are valid, but some readers, particularly researchers, will want to follow the scientific debate that goes on in the peer review process. It could also have great educational value. With electronic publishing we may put shorter, crisper versions in the paper edition of the journal and longer, more scientific versions on our website backed up by a structured account of the peer review process.
The Medical Journal of Australia and the Cochrane Collaboration have already made progress with using the internet to open up peer review. The Australians have been conducting a trial of putting some of their accepted papers on to their website together with the reviewers' comments some two months before they appear in print. They invite people to comment and give authors a chance to revise their paper before final publication. Contributors, editors, reviewers, and readers have all appreciated the process, although few changes have been made to papers. The Medical Journal of Australia now plans to extend its experiment and begin to use the web for peer review of submitted manuscripts. The Cochrane Collaboration puts the protocols of systematic reviews on the web together with software that allows anybody to comment in a structured way - so long as they give their names. Protocols have been changed as a result. The collaboration also invites structured responses to published reviews. These are particularly important because those who have contributed reviews are committed to keeping them up to date in response to important criticisms and new evidence. Dr Rennie predicted a future in which such a commitment to the "aftercare" of papers would apply also to those publishing in paper journals. At the moment papers are frozen at publication, even when destroyed by criticism in letters columns.
I believe that this conference will prove to have been an important moment in the history of peer review. The BMJ now intends to begin opening up peer review to contributors and readers and invite views on how we should do this. Soon closed peer review will look as anachronistic as unsigned editorials.
Richard Smith
Editor, BMJ
References
1 McNutt R A, Evans A T, Fletcher R H, Fletcher S W. The effects of blinding on the quality of peer review: a randomized trial. JAMA 1990;263:1371-6.
2 LaPorte R E, Marler E, Akazawa S, Sauer F, Gamboa C, Shenton C, et al. Death of biomedical journals. BMJ 1995;310:1387-90.
3 Fabiato A. Anonymity of reviewers. Cardiovascular Research 1994;28:1134-9.