This presentation discusses the various ways to evaluate scientific publications, including assessing their impact, reception, and integrity. It also explores the different levels of granularity at which evaluation can occur, from the journal level down to the article level. The emergence of MegaJournals and the importance of open peer review are also highlighted.
How Should We ‘Evaluate’ Scientific Publications Today? Pete Binfield, Co-Founder and Publisher, PeerJ. Samuel Merritt - 10/30/2013. @p_binfield pete@peerj.com @ThePeerJ https://peerj.com
What do we mean when we say ‘Evaluate’? • Evaluating ‘Impact’ or ‘Reception’ or ‘Reach’ or ‘?’ • Providing Subjective Opinions & Evaluations • Evaluating ‘Integrity’ • And at what level of granularity? • The journal? • The article? • The paragraph?
#1. Evaluating ‘Impact’ or ‘Reception’ or ‘Reach’ or ‘Interest’ or ‘Readership’ or, or, or
Open Access ‘MegaJournals’ • Online-only, peer-reviewed, open access journals • covering a very broad subject area • selecting content based only on ‘technical soundness’ (or similar) • with a business model which allows each article to cover its own costs
In addition, if we allow for narrow scope ‘megajournals’ then we should also include: • All of the “Frontiers in…” Series (part of Nature) • All of the “BMC Series” (~ half of BMC) • ~ 1/3 of Hindawi’s current output All of these titles refuse to pre-judge what the audience should be reading (other than determining that the content should join the literature).
An OA future containing MegaJournals [diagram showing PLoS ONE, SAGE Open, PeerJ, etc., alongside all other OA journals]
The Effect of the ‘MegaJournal’ • Rapidly approaching ~10% of all published content, spurring new developments • Require, and have given rise to, Article-Level Metrics • Publish negative results, replication studies, incremental articles • Dramatic improvement to the speed of the ecosystem • Dramatic improvement to the efficiency of the ecosystem
From “Article-Level Metrics, A SPARC Primer” - http://www.sparc.arl.org/sites/default/files/sparc-alm-primer.pdf
Screenshot from ~Nov 2009, but the Wayback Machine has examples from April 2008
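At its simplest, an article-level metric is just a per-article signal (views, downloads, citations, social mentions) gathered from an external service rather than a journal-wide average. Below is a minimal sketch of that idea in Python, assuming the public Crossref REST API and its is-referenced-by-count field; this is not the ALM stack PLoS ONE itself runs, and the DOI used is only a placeholder example.

```python
# Minimal sketch: fetch one article-level signal (Crossref's citation count) for a DOI.
# This is NOT the PLoS ALM system shown in the slide; it only illustrates gathering
# a metric per article instead of per journal. Assumes the public Crossref REST API.
import json
import urllib.request


def crossref_citation_count(doi: str) -> int:
    """Return the 'is-referenced-by-count' value Crossref reports for a DOI."""
    url = f"https://api.crossref.org/works/{doi}"
    with urllib.request.urlopen(url) as resp:
        record = json.load(resp)
    return record["message"].get("is-referenced-by-count", 0)


if __name__ == "__main__":
    example_doi = "10.7717/peerj.1"  # placeholder DOI used purely for illustration
    print(example_doi, "->", crossref_citation_count(example_doi), "citations")
```

A full article-level metrics service aggregates many such sources per article (usage, citations, social activity) and displays them alongside the paper, as the SPARC primer describes.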
The Effect of the ‘MegaJournal’ • Rapidly approaching ~10% of all published content, spurring new developments • Require (and have stimulated) Article-Level Metrics • Publish negative results, replication studies, incremental articles • Dramatic improvement to the speed of the ecosystem • Dramatic improvement to the efficiency of the way the ecosystem currently ‘filters’ content
“rejected from at least six journals (including Nature, Nature Genetics, Nature Methods, Science) and took a year to publish before going on to be my most cited research paper (150 last time I looked)” – Cameron Neylon
http://blog.rubriq.com/2013/06/03/how-we-found-15-million-hours-of-lost-time/ “…in a recent report Kassab and his colleagues estimated that Elsevier currently rejects 700,000 out of 1 million articles each year.” http://poynder.blogspot.co.uk/2013/10/media-research-analyst-at-exane-bnp.html
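To see why this matters for efficiency, a back-of-the-envelope calculation in the spirit of the Rubriq ‘lost time’ post is sketched below. The inputs are illustrative assumptions, not figures taken from either cited source (only the 700,000 rejections comes from the quote above), so the point is the scale, not the exact total.

```python
# Back-of-the-envelope sketch of the 'lost reviewer time' argument.
# All inputs are illustrative assumptions, NOT figures from the Rubriq report
# or the Exane BNP analysis cited above (except the 700,000 rejections quoted).
rejected_per_year = 700_000      # rejected submissions that go on to be resubmitted elsewhere
reviews_per_submission = 2.5     # assumed: average referee reports per review round
hours_per_review = 5             # assumed: hours a referee spends on one report

lost_hours = rejected_per_year * reviews_per_submission * hours_per_review
print(f"~{lost_hours:,.0f} reviewer-hours per year spent on submissions that are rejected")
# -> ~8,750,000 reviewer-hours under these assumptions; changing any input shifts
#    the total proportionally, but the scale stays in the millions of hours.
```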
#2. ‘Subjective’ Opinions & Evaluations (i.e. contextual, human evaluations)
Open Peer Review would make this problem disappear. Overnight.
Journals Practicing Open Peer Review • Atmospheric Chemistry and Physics - Reviewers’ comments published on a pre-publication discussion site. Reviewer names optional. • Biology Direct - Reviewer comments published, and reviewers named. • BMJ Open - All reviewers named, all reports public. • eLife - Decision letter published with articles, with author approval. Reviewers anonymous, but editor named. • EMBO Journal - Review process file published with articles. Reviewers anonymous, editor named. • F1000Research - All reviewers named, all reports public. • Frontiers journals - Reviewers named, but reports not public. • GigaScience - Pre-publication history published with articles, and reviewers named (encouraged, opt-out). • Medical journals in the BMC series - Pre-publication history published with articles, and reviewers named (encouraged). • PeerJ - Peer review history published with articles, with author approval. Reviewers encouraged to sign their report.
~40% of PeerJ Reviewers name themselves. • ~80% of PeerJ Authors reproduce their peer review history
What do we mean when we say ‘Evaluate’? • Evaluating ‘Impact’ or ‘Reception’ or… • Providing ‘Subjective’ Opinions & Evaluations • Evaluating ‘Integrity’ • And at what level of granularity? • The journal? • The article? • The paragraph?
Thank You. Pete Binfield, Co-Founder and Publisher, PeerJ. @p_binfield pete@peerj.com @ThePeerJ