Adventures in peer review Pippa Smart PSP Consulting Karachi, December 2010
So, what’s wrong with peer review? Where do I start … • It doesn’t find errors • It doesn’t spot plagiarism • It’s hard to find good reviewers • A lot of effort for little return? • It delays publication • Reviewers are biased • Poor quality comments • Good reviewers get overloaded • Not enough reviewers • It is censorship of new ideas
“… it is slow, expensive, largely a lottery, poor at detecting errors and fraud, anti-innovatory, biased, and prone to abuse” (Richard Smith, BMJ Blogs, March 22, 2010)
Researchers complain … 2009 survey of 4000 researchers: • Technology has improved things, but adequate guidance is missing • Why don’t journals provide better guidance? • Journals ask the wrong people • Why don’t they know who has adequate subject knowledge? (www.Senseaboutscience.org)
… and reviewers disagree with each other … • Study of 2264 papers receiving 5881 reviews from the Journal of General Internal Medicine • Reviewers’ agreement was little more than chance • Editors continue to place considerable weight on reviewers’ recommendations
“Peer review is neither necessary nor sufficient for scientific progress” • “Journal review procedures are merely a means to the end, and the end is a journal that serves a useful function in the dynamic process of science” (Bruce Charlton, ex-editor of Medical Hypotheses)
However … • … the editor left amid a storm over two papers on AIDS … • … and peer review was introduced … “Medical Hypotheses is now dead: killed by Elsevier 11 May 2010. RIP.” (Bruce Charlton)
“Lightweight” review • PLoS • “PLoS ONE will … publish all papers that are judged to be technically sound” • Why? • “Too often a journal's decision to publish a paper is dominated by what the Editor/s think is interesting and will gain greater readership …” • IF = 4.351, rejection rate c.30%, c.14,000 submissions in 2010, 14,250 articles published by 18 November 2010
Other supporters … • BMC Research Notes • “Publication … is dependent primarily on the [article’s] validity and coherence and whether the writing is comprehensible” • Why? • “…to reduce the loss suffered by the research community when results remain unpublished because they do not form a sufficiently complete story to justify the publication of a full research article” (280 articles since launch in January 2010)
“Post-publication review is spotty, unreliable, and may suffer from cronyism” (Phil Davis, Scholarly Kitchen Blog, July 14, 2010)
# 1: It takes too long … • Elsevier three-tier publication model • On acceptance – ‘Article in Press’ • When finalised – ‘Issue in Progress’ • Publication – by issue • BMC article-by-article publishing • No delay for issue, published as accepted • ArXiv.org and early drafts in repositories • Upload working paper, submission and pre-print
# 2: It's hard to find (good) reviewers • Built-in search tools: Elsevier provide editors with Scopus search • Guidelines, training - BMJ • PeerChoice for Chemical Physics Letters • Registered users alerted to new articles • Users download the articles with a promise to review them • Still in trial • Get the authors to do the hard work • Biology Direct: • Authors select 3 reviewers from the Editorial Board • Reviewers have 72 hours to agree to review or not • Agreement tacitly implies acceptance (with revisions)
# 3: Avoiding “cronyism” • Biology Direct • Reviewer-author pairing – no more than 4 articles each year • No more than 2 articles per year with the same three reviewers
# 4: Protecting against bias and censorship • Atmospheric Chemistry and Physics • Combined closed and open review • EMBO and Biology Direct • Published reviewer reports • BMJ • Make it all open (to author and reviewer, not reader) • Journal of Medical Internet Research • Reviewer names published under each accepted article Doesn’t that just change the problem?
# 5: Minimising reviewer overload • Nature Publishing Group • Simplified method to redirect rejected manuscripts to other Nature journals • Nature Communications at the bottom of the chain • BioMed Central • Method to cascade articles from high-ranking journals to others – sharing reviews
Neuroscience consortium • http://nprc.incf.org/ • Launched January 2008 • 42 participating journals (and growing) • Sharing reviews if article rejected by one journal and submitted to another • Voluntary, confidential, no central organisation • Slow take-up
Neuroscience consortium • J Neuroscience • Largest journal • Sends on c.5% of rejections (c.175 articles per year) • J Neurophysiology • Receives c.5% of articles • Sent on only 8 articles • J Comparative Neurology • Receives 3-5% of articles • Sends on 1-2% of articles
Positive effects? • J Neurophysiology • 10% of articles received were accepted with no re-review • (cf. <1% of new submissions) • 80% of redirects accepted • (cf. 45% of new submissions) • Time reduced from 82 to 73 days • Many articles re-reviewed and/or revised
Positive effects? • c.700 articles published each year • Each article receives 4-5 reviewer comments • 1 in 4 articles receives a public comment • 3 of 4 reviewer comments are posted anonymously • Rejection rates of 10-20% • Better quality submissions and greater evolution of articles • Journal growth of 20-50% in articles each year • 2010 IF 4.881
In summary? • More options for peer review • More experimentation • More acceptance of different methodologies
Thank you! Any questions? Pippa Smart Pippa.smart@gmail.com