Aggregation and Analysis of Usage Data: A Structural, Quantitative Perspective
Johan Bollen
Digital Library Research & Prototyping Team, Los Alamos National Laboratory - Research Library
jbollen@lanl.gov
Acknowledgements: Herbert Van de Sompel (LANL), Marko A. Rodriguez (LANL), Lyudmila L. Balakireva (LANL), Wenzhong Zhao (LANL), Aric Hagberg (LANL)
Research supported by the Andrew W. Mellon Foundation.
NISO Usage Data Forum • November 1-2, 2007 • Dallas, TX
Usage data has arrived.
The value of usage data and statistics is undeniable:
• Business intelligence
• Scholarly assessment
• Monitoring of scholarly trends
• Enhanced end-user services
Production chain:
• Recording
• Aggregation
• Analysis
There are challenges and opportunities at every link in this chain, but of particular interest to this workshop:
• Recording: data models, standardization
• Aggregation: restricted sampling vs. representativeness
• Analysis: frequentist vs. structural
This presentation: an overview of lessons learned by the MESUR project at LANL. Technical project details follow later this afternoon.
Structure of presentation. • MESUR: introduction and overview • Aggregation: value proposition • Data models: • Usage data representation • Aggregation frameworks • Usage data providers: technical and socio-cultural issues, a survey • Sampling strategies: theoretical issues • Discussion: way forward, roadmap
The MESUR project.
Johan Bollen (LANL): Principal investigator.
Herbert Van de Sompel (LANL): Architectural consultant.
Aric Hagberg (LANL): Mathematical and statistical consultant.
Marko Rodriguez (LANL): PhD student (Computer Science, UCSC).
Lyudmila Balakireva (LANL): Database management and development.
Wenzhong Zhao (LANL): Data processing, normalization and ingestion.
"The Andrew W. Mellon Foundation has awarded a grant to Los Alamos National Laboratory (LANL) in support of a two-year project that will investigate metrics derived from the network-based usage of scholarly information. The Digital Library Research & Prototyping Team of the LANL Research Library will carry out the project. The project's major objective is enriching the toolkit used for the assessment of the impact of scholarly communication items, and hence of scholars, with metrics that derive from usage data."
Project data flow and work plan. [Diagram: project phases covering negotiation, ingestion, aggregation, a reference data set, metrics and a survey.]
Project timeline. [Timeline diagram marking the current stage: "We are here!"]
Structure of presentation. • MESUR: introduction and overview • Aggregation: value proposition • Data models: • Usage data representation • Aggregation frameworks • Usage data providers: technical and socio-cultural issues, a survey • Sampling strategies: theoretical issues • Discussion: way forward, roadmap
Lesson 1: aggregation adds significant value.
Individually, each institution focuses on institution-oriented services and analysis; multiple institutions simply repeat that institution-oriented focus in isolation. (Note: MESUR is not a services project.)
Aggregating usage data provides:
• Validation: reliability and span
• Perspective: group-based services and analysis
Aggregation value: validation.
Aggregating usage data provides:
• Overlaps and intersections between sources:
  • Validation: error checking
  • Reliability: repeated measurements
• Span:
  • Diversity: multiple communities
  • Representativeness: a sample of the scholarly community
Aggregation value: convergence.
• Open research questions:
  • Is convergence guaranteed?
  • Convergence to what? A common baseline?
• What we do know:
  • The institutional perspective can be contrasted to a baseline (sketched below).
  • As aggregation increases in size, so does its value.
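To make "contrasted to a baseline" concrete, here is a minimal Python sketch (scipy assumed available; all journal names and counts are invented for illustration) that compares one institution's journal ranking to an aggregate ranking via rank correlation. Convergence would show up as this correlation rising as more sources are pooled into the aggregate.

from scipy.stats import spearmanr

# hypothetical download counts per journal (illustrative numbers only)
institution = {"jrnl_a": 120, "jrnl_b": 90, "jrnl_c": 15, "jrnl_d": 60}
aggregate = {"jrnl_a": 9000, "jrnl_b": 14000, "jrnl_c": 2500, "jrnl_d": 7000}

journals = sorted(institution)
rho, p = spearmanr(
    [institution[j] for j in journals],
    [aggregate[j] for j in journals],
)
print(f"rank correlation with the aggregate baseline: rho={rho:.2f}, p={p:.2f}")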
We have COUNTER/SUSHI. How about the aggregation of item-level usage data? If there is value in aggregating COUNTER and other reports, there is considerable value in aggregating item-level usage data.
Structure of presentation. • MESUR: introduction and overview • Aggregation: value proposition • Data models: • Usage data representation • Aggregation frameworks • Usage data providers: technical and socio-cultural issues, a survey • Sampling strategies: theoretical issues • Discussion: way forward, roadmap
Lesson 2: we need a standardized representation framework for item-level usage data.
MESUR experience: ad hoc parsing is highly problematic (field semantics, field relations, varying data models).
Framework objectives:
• Minimize data loss:
  • Preserve event information
  • Preserve sequence information
  • Preserve document metadata
• Be "realistic": scalability and granularity
• Apply to a variety of usage data, i.e. no inherent bias towards a specific type of usage data
Requirements for a usage data representation framework.
It needs to minimally represent the following concepts:
• Event ID: distinguish usage events
• Referent ID: DOI, SICI, metadata
• User/Session ID: define groups of events related by user
• Date and time: identify the date and time of the event
• Request type: identify the type of request issued by the user
Implications:
• Sequence: session ID plus date/time preserves the sequence of events
• Privacy: the session ID groups events without a user ID
• Request types: filter on types of usage
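A minimal sketch of a record carrying these five concepts, in Python; the class and field names are illustrative assumptions, not a MESUR or standards-body schema.

from dataclasses import dataclass
from datetime import datetime

@dataclass
class UsageEvent:
    """One item-level usage event; field names are illustrative only."""
    event_id: str        # globally unique event identifier
    referent_id: str     # DOI, SICI, or other document identifier
    session_id: str      # groups events by session, not by user identity
    timestamp: datetime  # date and time of the event
    request_type: str    # e.g. "full-text", "abstract", "citation-export"

# example: identifiers borrowed from the ContextObject example later in this deck
event = UsageEvent(
    event_id="urn:uuid:58f202ac-22cf-11d1-b12d-002035b29062",
    referent_id="info:pmid/12572533",
    session_id="s-20071101-0042",
    timestamp=datetime(2007, 11, 1, 10, 22, 33),
    request_type="full-text",
)

Grouping such records by session_id and sorting by timestamp yields the click sequence without ever referencing an individual user identity.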
COUNTER reports: information loss.
From: www.niso.org/presentations/MEC06-03-Shepherd.pdf
• Event ID: distinguish usage events?
• Referent ID: DOI, SICI, metadata
• User/Session ID: define groups of events related by user?
• Date and time: identify the date and time of the event
• Request type: identify the type of request issued by the user
From the same usage data: journal usage graphs. [Figure: example usage graphs for journal1 and journal2.]
Usage graphs connect this domain to 50 years of network science:
• social network analysis
• small world graphs
• network science
• graph theory
• social modeling
Good reads:
• Barabasi (2003). Linked.
• Wasserman (1994). Social network analysis.
• Heer (2005). Large-Scale Online Social Network Visualization.
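A rough sketch of how session-grouped usage events become a journal usage graph that this network-science toolkit can analyze; networkx is assumed available and the session data is invented for illustration.

from itertools import combinations
import networkx as nx

# session_id -> journals used within that session (illustrative data only)
sessions = {
    "s1": ["journal1", "journal2", "journal1"],
    "s2": ["journal2", "journal3"],
    "s3": ["journal1", "journal2"],
}

graph = nx.Graph()
for journals in sessions.values():
    # journals co-used within one session are linked; edge weights count sessions
    for a, b in combinations(sorted(set(journals)), 2):
        if graph.has_edge(a, b):
            graph[a][b]["weight"] += 1
        else:
            graph.add_edge(a, b, weight=1)

# standard network measures then apply, e.g. degree centrality per journal
print(nx.degree_centrality(graph))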
Structure of presentation. • MESUR: introduction and overview • Aggregation: value proposition • Data models: • Usage data representation • Aggregation frameworks • Usage data providers: technical and socio-cultural issues, a survey • Sampling strategies: theoretical issues • Discussion: way forward, roadmap
Lesson 3: aggregating item-level usage data requires a standardized aggregation framework.
The standardization objectives are similar to the work done for COUNTER and SUSHI:
• Serialization (~COUNTER):
  • A standard to serialize usage data
  • Suitable for large-scale, open aggregation
  • Event provenance and identification
• Transfer protocol (~SUSHI):
  • Communication of usage data between log archive and aggregator
  • Allow open aggregation across stakeholders in the scholarly community
• Privacy standard:
  • Standards need to address privacy concerns
  • Should allow the emergence of trusted intermediaries: an aggregation ecology
LANL has made proposals based on community standards.
OpenURL ContextObject to represent usage data:

<?xml version="1.0" encoding="UTF-8"?>
<ctx:context-object timestamp="2005-06-01T10:22:33Z" ...
    identifier="urn:UUID:58f202ac-22cf-11d1-b12d-002035b29062" ...>
  ...
  <ctx:referent>
    <ctx:identifier>info:pmid/12572533</ctx:identifier>
    <ctx:metadata-by-val>
      <ctx:format>info:ofi/fmt:xml:xsd:journal</ctx:format>
      <ctx:metadata>
        <jou:journal xmlns:jou="info:ofi/fmt:xml:xsd:journal">
          ...
          <jou:atitle>Isolation of common receptor for coxsackie B ...
          <jou:jtitle>Science</jou:jtitle>
          ...
  </ctx:referent>
  ...
  <ctx:requester>
    <ctx:identifier>urn:ip:63.236.2.100</ctx:identifier>
  </ctx:requester>
  ...
  <ctx:service-type>
    ...
    <full-text>yes</full-text>
    ...
  </ctx:service-type>
  ...
  Resolver ...
  Referrer ...
  ...
</ctx:context-object>

Captured in the ContextObject:
• Event information: event datetime; globally unique event ID
• Referent: identifier; metadata
• Requester: user or user proxy (IP, session, ...)
• ServiceType
• Resolver: identifier of the linking server
Aggregation framework: existing standards.
[Diagram: log repositories 1-3 expose usage events serialized as OpenURL ContextObjects (CO); a log harvester collects them into an aggregated usage data log DB.]
• Serialization: OpenURL ContextObjects
• Transfer protocol: OAI-PMH
Johan Bollen and Herbert Van de Sompel. An architecture for the aggregation and analysis of scholarly usage data. In Joint Conference on Digital Libraries (JCDL 2006), pages 298-307, June 2006.
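A minimal sketch of the transfer leg of this framework: an OAI-PMH ListRecords harvest that follows resumption tokens. The base URL and metadataPrefix value are placeholders, not actual MESUR endpoints; each harvested record's metadata payload would carry an OpenURL ContextObject like the one above.

import xml.etree.ElementTree as ET
import requests

BASE_URL = "https://example.org/usage-oai"   # hypothetical log repository endpoint
OAI = "{http://www.openarchives.org/OAI/2.0/}"

def harvest(metadata_prefix="ctxo"):          # prefix name is an assumption
    """Yield OAI-PMH <record> elements, following resumptionTokens."""
    params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    while True:
        root = ET.fromstring(requests.get(BASE_URL, params=params).content)
        for record in root.iter(OAI + "record"):
            yield record
        token = root.find(".//" + OAI + "resumptionToken")
        if token is None or not (token.text or "").strip():
            break
        params = {"verb": "ListRecords", "resumptionToken": token.text.strip()}

# an aggregator would parse each record's ContextObject payload into usage events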
The issue of anonymization.
Privacy and anonymization concerns play at multiple levels that a standard needs to address:
• Institutions: where was the usage data recorded?
• Providers: who provided the usage data?
• Users: who is the user?
This goes beyond "naming" and masking identity:
• Simple statistics can reveal identity
• User identity can be inferred from activity patterns (cf. the AOL search data release)
• Law enforcement issues
MESUR's approach:
• Session ID: preserve sequence without any reference to individual users
• Negotiated filtering of usage data
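A hedged sketch of one way to realize the session-ID idea on this slide (not necessarily MESUR's exact scheme): hash the client identifier together with a secret salt and a coarse time window, so that grouping and sequence survive while the raw IP never reaches the aggregator.

import hashlib
import hmac
from datetime import datetime

SECRET_SALT = b"rotate-me-regularly"   # hypothetical; kept private and rotated

def session_id(client_ip: str, user_agent: str, when: datetime) -> str:
    """Opaque session token: same client within one hour maps to the same ID."""
    window = when.strftime("%Y-%m-%d-%H")          # coarse time bucket
    msg = f"{client_ip}|{user_agent}|{window}".encode()
    return hmac.new(SECRET_SALT, msg, hashlib.sha256).hexdigest()[:16]

# the aggregator only ever sees the token: enough to group and order one
# client's events, not enough to recover who the user was
print(session_id("63.236.2.100", "Mozilla/5.0", datetime(2007, 11, 1, 10, 22)))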
Structure of presentation. • MESUR: introduction and overview • Aggregation: value proposition • Data models: • Usage data representation • Aggregation frameworks • Usage data providers: technical and socio-cultural issues, a survey • Sampling strategies: theoretical issues • Discussion: way forward, roadmap
Lesson 4: socio-cultural issues matter.
MESUR negotiated with publishers, aggregators and institutions. Results:
• More than 1 billion usage events obtained
• Usage data per provider ranging from 500K to 100M events
• Spanning multiple communities and collections
BUT: this took 12 months, more than 550 email messages and countless teleconferences.
The correspondence reveals socio-cultural issues for each class of providers:
• Privacy
• Business models
• Ownership
• World-view
These were studied by means of a term analysis of the email correspondence (sketched below):
• Term frequencies: tag clouds
• Term co-occurrences: concept networks
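A small sketch of this kind of term analysis; the example messages, tokenizer and stop-word list are simplifications invented for illustration.

import re
from collections import Counter
from itertools import combinations

STOP = {"the", "a", "and", "of", "to", "we", "for", "in", "is", "are", "before"}

def terms(message):
    return [t for t in re.findall(r"[a-z]+", message.lower()) if t not in STOP]

emails = [
    "We need a license agreement before sharing the usage data",  # invented examples
    "The SFX log format and the transfer schedule are attached",
]

term_freq = Counter()   # term frequencies -> tag cloud weights
cooccur = Counter()     # term co-occurrences -> edges of a concept network
for msg in emails:
    tokens = set(terms(msg))
    term_freq.update(tokens)
    cooccur.update(combinations(sorted(tokens), 2))

print(term_freq.most_common(5))
print(cooccur.most_common(5))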
The world according to usage data providers: institutions. Note: strong focus on technical matters related to sharing SFX data.
The world according to usage data providers: publishers. Note: strong focus on procedural matters related to agreements and project objectives.
The world according to usage data providers: aggregators. Note: strong focus on procedural matters and project/research objectives.
The world according to usage data providers: Open Access publishers. Note: strong focus on technical matters related to sharing of data.
Lesson 5: size isn’t everything MESUR’s efforts: • 12 months • +550 email messages Good results but large variation: • Usage data provided ranging from 100M to 500K “Return on investment” analysis • Value generated relative to investment • Investment = correspondence • Value = usage events loaded Extracted from each provider correspondence with MESUR: • Investment: • Delay between first and last email • Number of emails • Number of bytes in emails • Timelines of email “intensity” • Value: • Number of usage events loaded • (bytes won’t work: different formats)
Return on investment (ROI).
value = usage events loaded
work = (1) days since first contact, (2) emails sent, (3) bytes communicated
ROI = value / work
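The ROI definition above as a small Python sketch, with invented provider figures: value = events loaded, and work measured per day, per email and per byte of correspondence.

# name: (events_loaded, days, emails, bytes) -- illustrative numbers only
providers = {
    "aggregator A": (100_000_000, 120, 80, 250_000),
    "publisher B": (5_000_000, 200, 140, 600_000),
    "institution C": (500_000, 60, 20, 40_000),
}

for name, (events, days, emails, size) in providers.items():
    roi = {
        "per day": events / days,
        "per email": events / emails,
        "per byte": events / size,
    }
    print(name, {k: round(v, 1) for k, v in roi.items()})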
Return on investment (ROI): survivor bias!
Another way to look at it: ROI = value / work, where value = the usage data obtained and work = the asking.
Return on investment: ETA? [Charts: panels for all providers, publishers and aggregators.]
Return on investment: timelines. [Chart: institution panel.] Note: same pattern, but much less traffic.
Structure of presentation. • MESUR: introduction and overview • Aggregation: value proposition • Data models: • Usage data representation • Aggregation frameworks • Usage data providers: technical and socio-cultural issues, a survey • Sampling strategies: theoretical issues • Discussion: way forward, roadmap
Lesson 6: sampling is (seriously) difficult.
• Usage data = a sample. A sample of whom?
  • Different communities
  • Different collections
  • Different interfaces
• Who to aggregate from?
  • Different providers
  • Different interfaces, systems and representation frameworks
• Result = difficult choices
A tapestry of usage data providers.
Main players: individual institutions, aggregators and publishers. Each represents a different, and possibly overlapping, sample of the scholarly community.
• Institutions: institutional communities, many collections
• Aggregators: many communities, many collections
• Publishers: many communities, the publisher's own collection
We will hear from several of them in this workshop.
Which providers to choose from this tapestry of usage data providers? Different classes of problems:
• A: Technical
  • Aggregation
  • Normalization
  • Deduplication
  (MESUR: significant project overhead here can be reduced by standards.)
• B: Theoretical
  • Who: sample characteristics?
  • What: resulting aggregate?
  • Why: sampling objective?
  (Unresolved, but MESUR's research and lessons may elucidate.)
• C: Socio-cultural
  • World views
  • Concerns
  • Business models
Structure of presentation. • MESUR: introduction and overview • Aggregation: value proposition • Data models: • Usage data representation • Aggregation frameworks • Usage data providers: technical and socio-cultural issues, a survey • Sampling strategies: theoretical issues • Discussion: way forward, roadmap
Conclusion.
There exists considerable interest in aggregating item-level usage data. This requires standardization on multiple levels:
• Recording and processing:
  • Field semantics: what do the various recorded usage items mean?
  • Requests: standardize on request types and semantics?
  • A standardized representation needs to be scalable yet minimize information loss
• Aggregation:
  • Serialization: a standardized serialization for the resulting usage data
  • Transfer: a standard protocol for exposing and harvesting usage data
  • Privacy: protect the identity and rights of providers, institutions and users
The technical issues can be resolved, and are greatly helped by existing standards and infrastructure for usage recording, processing and aggregation. What remains:
• Socio-cultural issues: worldviews, business models and ownership shape aggregation strategies.
• Theoretical questions of sampling: MESUR's research project will provide insights.
Some relevant publications.
Marko A. Rodriguez, Johan Bollen and Herbert Van de Sompel. A practical ontology for the large-scale modeling of scholarly artifacts and their usage. In Proceedings of the Joint Conference on Digital Libraries, Vancouver, June 2007.
Johan Bollen and Herbert Van de Sompel. Usage Impact Factor: the effects of sample characteristics on usage-based impact metrics. Journal of the American Society for Information Science and Technology, 59(1), pages 1-14 (cs.DL/0610154).
Johan Bollen and Herbert Van de Sompel. An architecture for the aggregation and analysis of scholarly usage data. In Joint Conference on Digital Libraries (JCDL 2006), pages 298-307, June 2006.
Johan Bollen and Herbert Van de Sompel. Mapping the structure of science through usage. Scientometrics, 69(2), 2006.
Johan Bollen, Marko A. Rodriguez, and Herbert Van de Sompel. Journal status. Scientometrics, 69(3), December 2006 (arXiv:cs.DL/0601030).
Johan Bollen, Herbert Van de Sompel, Joan Smith, and Rick Luce. Toward alternative metrics of journal impact: a comparison of download and citation data. Information Processing and Management, 41(6):1419-1440, 2005.