Are they any use? Hazards of price-per-use comparison in e-journal management Jason S. Price, Ph.D. Claremont Colleges’ Libraries Los Angeles, California
General hazards -- Broad Strokes • Defining use narrowly • Vagaries of user behavior • Different dissemination styles in teaching • Granularity of usage reports * E-journal package-centric approach from an academic institution’s perspective
G1. A narrow definition of use COUNTER JR1: full-text article requests Additional use-related measures: • A–Z list click-throughs / web log files • Times cited at your institution in recent papers • ACS Livewire 8:2 • Impact Factor • Number of papers published by local researchers, by journal • Faculty/researcher surveys • Print use? • PageRank? (Bollen & Van de Sompel 2006)
G2. Vagaries of search/use habits Users may check for full text before judging relevance from abstracts (or even titles!) Google Web Accelerator: prefetching can inflate download counts
G3. Dissemination style in teaching • Prof. A downloads 1 PDF and makes copies for students → under-counts: 1 recorded use for many readers • Prof. B sends a link to the publisher PDF to her 40 students → over-counts: many recorded uses of 1 article • Prof. C posts the PDF on an electronic reserve site → under-counts: 1 recorded use for many readers
G4. Usage report granularity Title-level use statistics can’t separate frontfile use from backfile use, whether purchased or freely available
Specific Hazards – Cost per use • Determining cost • Comparison to ILL cost • Comparison across Publishers • Ignoring ‘by-title’ data • Lack of benchmarks
COUNTER briefing Counting Online Usage of NeTworked Electronic Resources: a standard and code of practice that enables comparison of usage statistics from different vendors Components: • Terminology & definitions • Layout & format of journal & database reports • Processing of user input • Delivery frequency & availability period • Testing & regular audits www.projectcounter.org · tinyurl.com/nxqvv
S1. Determining Cost Overall cost per use = 1 year’s cost / 1 year’s requests e.g. $58,600 publisher e-access fee ÷ 35,700 article views = $1.64 cost per use * But add $420,000 mandatory cost of subs (to agent) for a subset of these same titles: $420,000 + $58,600 = $478,600 ÷ 35,700 ≈ $13.40
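The arithmetic on this slide can be sketched directly, using the figures quoted above. The point is that the naive cost per use (e-access fee only) badly understates the true figure once mandatory subscription spend is included:

```python
# Cost-per-use sketch using the figures from the slide.
# Assumption: the mandatory print-subscription spend supports
# a subset of the same titles counted in the usage figure.
e_access_fee = 58_600    # publisher e-access fee ($)
article_views = 35_700   # COUNTER JR1 full-text article requests
mandatory_subs = 420_000 # required subscription spend via agent ($)

naive_cpu = e_access_fee / article_views
true_cpu = (e_access_fee + mandatory_subs) / article_views

print(f"Naive cost per use: ${naive_cpu:.2f}")  # $1.64
print(f"True cost per use:  ${true_cpu:.2f}")   # ~$13.41 (quoted as ~$13.40 on the slide)
```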
S2. Comparison with ILL Package CPV = $13.40 What does this tell us? • Is it high? Low? • Better than ILL?
S3. Cross-package comparison (chart: CPV by package) So Pkg 1 is a better value than Pkg 3? It might not be…
Variation in use by format (Davis and Price, 2006, JASIST)
HTML-to-PDF ratios vary widely for these packages How many PDFs in Pkg 1 are duplicates of HTML views?
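Why the ratio matters: a package whose interface routes users through HTML first will record an extra HTML view for many PDF downloads, inflating its totals relative to a PDF-direct package. A minimal sketch, with made-up counts (not figures from the talk), of how the comparison might be run:

```python
# Hypothetical per-package counts -- illustrative only.
packages = {
    "Pkg 1": {"html": 30_000, "pdf": 20_000},  # HTML-first interface?
    "Pkg 2": {"html": 10_000, "pdf": 25_000},  # PDF-direct interface?
}

for name, counts in packages.items():
    ratio = counts["html"] / counts["pdf"]
    # A high ratio suggests many HTML views may duplicate PDF downloads.
    print(f"{name}: HTML/PDF ratio = {ratio:.2f}")
```

Comparing PDF-only counts (or a deduplicated count, as below) partially controls for this interface effect.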
S3. Package value revisited (chart: CPU vs. CPP) Counting PDF requests alone tells a different story!
Response: COUNTER filter A unique-article filter provides a new metric: the number of successful unique article requests in a session It needs to be applied per institution / interface configuration
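The filter described above can be sketched as a session-level deduplication: an HTML view followed by a PDF download of the same article in the same session counts once. This is a minimal illustration of the idea, not the COUNTER reference implementation:

```python
# Sketch of a unique-article filter: count each article at most
# once per session, regardless of format or repeat requests.
def unique_article_requests(log):
    """log: iterable of (session_id, article_id, format) tuples."""
    seen = set()
    for session_id, article_id, _format in log:
        seen.add((session_id, article_id))  # format is ignored
    return len(seen)

log = [
    ("s1", "doi:10.1000/a", "html"),
    ("s1", "doi:10.1000/a", "pdf"),   # same article, same session -> 1 use
    ("s1", "doi:10.1000/b", "pdf"),
    ("s2", "doi:10.1000/a", "html"),  # new session -> counts again
]
print(unique_article_requests(log))  # 3
```

Because interfaces differ in how often they force an HTML view before a PDF, the filter's effect varies by institution and interface configuration, as the slide notes.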
Assumptions: a reality check Should we expect cost per use to be equivalent among packages? Packages differ in: • Quality • Scope • Business model (for-profit vs. cost recovery) • Exposure in Google Scholar
(Charts: Before Collaboration / After Collaboration)
S5. Lack of benchmarks (charts: consortium comparison data)
Recommendations • Ensure you have the right cost • Be wary of cross-publisher comparison • Consider both overall and PDF-only use • Should it be the same? • For single-package evaluation: • Look at patterns at the title level • Benchmark vs. consortium or peers • (Support efforts to ‘outsource’ CPU analysis to consortia staff)
Support from COUNTER • Indication of subscription type (subscription vs. lease) • Separation of backfile data • A unique-article filter to mitigate interface & linking effects • By-title data • Single-password consortium access to aggregate and by-institution statistics • Much more…