Research Outputs in Reputation Management Workshop
Edinburgh University Library, 23 June 2008
John MacColl, European Director, OCLC Programs & Research
Programme
• 09.15-09.30 Coffee/tea
• 09.30-10.15 Introduction: OCLC Programs & Research; Research outputs in reputation management overview (JMacC)
• 10.15-11.00 Library infrastructure: development history (Library staff)
• 11.00-11.20 Break
• 11.20-11.35 Library infrastructure continued
• 11.35-12.30 Research administration infrastructure: development history (Research Administration & Applications staff)
• 12.30-13.10 Lunch break
• 13.10-13.50 Facilitated discussion: objectives setting
• 13.50-15.30 Action plan development
• 15.30-15.50 Break
• 15.50-16.15 Outcomes (discussion)
• 16.15-16.30 Summing up (JMacC)
Overview • OCLC Programs & Research • Bad penny: an explanation • ROiRM/WRAP • RAE • RQF - ERA • Other international examples
The RLG Programs Partnership • RLG Programs provides a venue and focus for collaboration, problem-solving, and the development of new standards, products, and services among research institutions • Around 140 libraries, museums and archives across the world with • Deep, rich research collections • Mandate to make collections accessible • Commitment to exploit technology • Commitment to collaboration • Commitment and capability to contribute to ‘commons’ (collections, expertise, infrastructure) • We seek to provide a strategic guidance role
Programs & Research
• San Mateo, California: 12 Program Officers; Vice President; Administrative support
• Dublin, Ohio: Communications Team; Vice President; 20 Research Scientists; Research Assistants; Administrative Support
• Fife, Scotland: Director, Europe
RLG Programs Partners - Geography
• UK, Ireland & Continental Europe: 29
• North America: 100
• Japan: 1
• Middle East: 2
• Australia and New Zealand: 4
The Work Agenda • Shared print collections • Harmonising digitisation • Data-mining for management intelligence • Collection sharing beyond libraries • Scholarship in text aggregations • Libraries, Archives and Museums study • Research Outputs in Reputation Management (Bad penny) • Create new structures and service areas • Network-level ILS (INLS) • Build prototypes/software to showcase and demonstrate new service possibilities • Investigate where data is best exposed • Changing metadata creation processes • Leverage vocabularies for effective discovery • Tools and services to support metadata processes
ROiRM (or WRAP?): Problem statement • ‘due to the high cost of sustaining these frameworks on the basis of peer review of significant samples of research outputs produced by large numbers of institutions, there is discussion about moving parts of this evaluation from a peer-review process to one based on metrics (e.g. citation analysis). New metrics for assessment have recently been proposed ….’ • ‘These frameworks for assessment throw into sharp relief the interaction between university research administration and information management services potentially offered by the library. ‘ • ‘Good practice has not yet emerged, however, and the models for confronting these challenges are diverse due to the novelty of practice involved in libraries working on systems design with research administration units. There is no common view of the optimum architecture for managing research outputs comprehensively across the commercially provided component (for which a library will typically have metadata only) and the Open Access component (for which a library may provide metadata with full-text links to non-local outputs, and locally-held full-text). Nor is there yet any common agreement on an optimal set of metrics for the assessment of research such that individual researchers as well as their institutions can be easily profiled nationally and internationally.’
Project: Expertise profiling • ‘What are the ways in which universities are disclosing academic staff expertise and productivity? Do they intersect with existing library services? In some institutions, some of these services are provided by third parties. What are the advantages and disadvantages of this?’
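One hedged way to make the expertise-profiling question concrete: a toy aggregation, in Python, of subject keywords from repository records into per-researcher profiles. The records, the use of keywords as a proxy for expertise, and the top-three ranking are illustrative assumptions, not a description of any existing service.

    # Hypothetical sketch: build a simple expertise profile per researcher by
    # aggregating subject keywords from repository records. Data is invented.
    from collections import Counter

    records = [
        {"author": "J. Smith", "keywords": ["proteomics", "mass spectrometry"]},
        {"author": "J. Smith", "keywords": ["proteomics", "biomarkers"]},
        {"author": "A. Brown", "keywords": ["medieval history", "palaeography"]},
    ]

    profiles = {}
    for rec in records:
        profiles.setdefault(rec["author"], Counter()).update(rec["keywords"])

    # The most frequent keywords stand in for a researcher's declared expertise.
    for author, counts in profiles.items():
        top_terms = ", ".join(term for term, _ in counts.most_common(3))
        print(f"{author}: {top_terms}")

In practice the interesting question posed above is where such data comes from (library metadata, HR systems or a third-party supplier), not the aggregation itself.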
Project: Research output administration workflow • ‘What types of workflow are emerging to support research administration? Typically, libraries running research repositories are providing data which is amalgamated with financial data (e.g. on research grant income) and headcount data (e.g. on numbers of research staff and students) to compose discipline-specific profiles. Are models for efficient data-merging yet in evidence? Are third-party system vendors responding to this need? Can best practice approaches be identified and promulgated? What models have emerged in the relationship between institutional repositories and the managed publications lists that are required to drive assessment input?’
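A hedged sketch of the data-merging these questions describe, using Python/pandas: it joins a hypothetical repository extract of publications with grant-income and headcount extracts on a shared discipline code to produce discipline-specific profiles. The file names, column names and derived indicator are assumptions for illustration, not any institution's actual workflow.

    # Hypothetical sketch: merge repository, finance and HR extracts into
    # discipline-level profiles. File and column names are assumed.
    import pandas as pd

    pubs = pd.read_csv("repository_publications.csv")   # discipline, output_id, year
    income = pd.read_csv("research_grant_income.csv")   # discipline, grant_income
    staff = pd.read_csv("research_headcount.csv")       # discipline, fte_researchers, phd_students

    # Count outputs per discipline, then join the financial and headcount data.
    profile = (
        pubs.groupby("discipline").size().rename("output_count").reset_index()
            .merge(income, on="discipline", how="outer")
            .merge(staff, on="discipline", how="outer")
    )

    # Simple derived indicator: outputs per FTE researcher.
    profile["outputs_per_fte"] = profile["output_count"] / profile["fte_researchers"]
    print(profile)

The join itself is trivial; the harder institutional question is agreeing a shared discipline coding across library, finance and HR systems so that a merge like this is meaningful.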
Project: Network-level discovery of research outputs • ‘What steps are institutions taking to promote the discoverability of their research outputs on the web? In the context of providing content which is tuned to optimise discovery by third-party search engines, what is the minimum platform investment which research libraries need to make? • Models have been proposed to encourage quality thresholds for metadata in institutional repositories to be achieved via ‘club good’ cooperative metadata approaches. What is the value of a more highly regulated cooperative approach as opposed to the lesser regulated approach of producing more variable metadata and trusting to third-party search engines such as Google to perform effectively? How closely related are enhanced metadata quality standards and ‘gaming the system’ (or ‘search engine optimisation’) in the field of research outputs?’
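As one concrete, hedged reading of the ‘minimum platform investment’ question, the sketch below uses only the Python standard library to publish a sitemap of repository item pages so that third-party search engines can find them. The record URLs are invented, and a sitemap is only one of several possible exposure mechanisms, not a recommendation.

    # Illustrative sketch: build a sitemap.xml so that search engines can
    # discover repository item pages. Record URLs are hypothetical.
    import xml.etree.ElementTree as ET

    records = [
        {"url": "https://repository.example.ac.uk/handle/0001", "updated": "2008-05-01"},
        {"url": "https://repository.example.ac.uk/handle/0002", "updated": "2008-06-15"},
    ]

    urlset = ET.Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
    for rec in records:
        entry = ET.SubElement(urlset, "url")
        ET.SubElement(entry, "loc").text = rec["url"]
        ET.SubElement(entry, "lastmod").text = rec["updated"]

    ET.ElementTree(urlset).write("sitemap.xml", encoding="utf-8", xml_declaration=True)

Whether effort is better spent on richer ‘club good’ metadata or on this kind of lightweight exposure, trusting the search engines to do the rest, is exactly the trade-off posed above.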
The context for today’s discussions • National research assessment regimes • Impact measures • University ranking systems • Showcase services • New intra-institutional collaborations and efficiencies
International assessment approaches (Professor Sir Gareth Roberts: ‘Caution: Evaluation measures disturb the system they purport to evaluate’)
Research Excellence Framework • ‘This integrated system will combine bibliometrics and other quantitative indicators with light-touch peer review within a variable geometry of assessment. Bibliometric indicators of research quality will be a key element in quality assessment wherever this is appropriate, with light-touch review of research outputs operating where it is not. In all cases we will use other indicators of quality and impact appropriate to the disciplinary area and determined after advice from expert panels.’ - HEFCE, May 2008
REF assessment • Discipline-based • Spectrum of assessment, i.e. • ‘Some subjects may be assessed through bibliometrics in combination with other quantitative indicators only (where this approach is sufficiently robust to be used without the need for qualitative elements or expert review of outputs) • For subjects where bibliometrics and other quantitative indicators are partially informative, they would be used in combination with qualitative elements and possibly some expert review of outputs • For subjects where bibliometric indicators are not sufficiently mature to be informative, expert review of outputs would be used in combination with other applicable indicators and qualitative information.’
REF timetable • First assessment during 2013 ‘to drive quality-related research funding from academic year 2014-15’ • For subjects where bibliometric indicators play a leading role in quality assessment, these will start to influence HEFCE funding, phased in from 2011-12. • A full exercise to produce bibliometric indicators will take place, for appropriate subjects, in 2010.
REF and bibliometrics • A pilot exercise in constructing bibliometric indicators is starting now. • ‘It will inform decisions about which subjects bibliometric indicators should be used for, how such indicators can be used in creating quality profiles and the design of the process for producing them.’ • Has Edinburgh expressed interest in the pilot? • Results due in spring 2009
REF: still to be decided • Universal vs selective approach to including staff and papers • Whether papers should be accredited to the institution or the researcher (who may move around) • Timeframe and frequency • WoS, SCOPUS, or both? • Prior data cleaning requirements
REF: the role of citation analysis • How to define ‘fields’ for purposes of normalisation • How to handle self-citation • Profiles: how to construct them; whether internal profiles can be provided; how to benchmark them internationally
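Purely to make these issues concrete (and not as the method HEFCE or any other agency adopted), a small sketch: self-citations are stripped by comparing author sets, and the remaining citation count is normalised against an assumed world-average for the paper's field. The papers, baselines and normalisation formula are all invented.

    # Hypothetical sketch of self-citation handling and field normalisation.
    # Each paper carries the author sets of its citing papers.
    papers = [
        {"authors": {"Smith", "Jones"}, "field": "Chemistry",
         "citations_by": [{"Lee"}, {"Smith"}, {"Patel", "Kim"}]},
        {"authors": {"Brown"}, "field": "History",
         "citations_by": [{"Brown"}, {"Green"}]},
    ]

    # Assumed world-average citations per paper for each field (the baseline).
    field_baseline = {"Chemistry": 4.0, "History": 1.0}

    def non_self_citations(paper):
        """Count citations whose author set does not overlap the paper's authors."""
        return sum(1 for citing in paper["citations_by"] if not citing & paper["authors"])

    for paper in papers:
        score = non_self_citations(paper) / field_baseline[paper["field"]]
        print(paper["field"], round(score, 2))

Everything contentious on this slide sits in the assumed field_baseline table: how the fields are delimited, and against whom they are benchmarked, determines the scores far more than the arithmetic does.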
Differences between RQF and RAE2008 (Gareth Roberts) • UK process has 67 disciplinary sub-panels • Stronger emphasis on Research Environment and Esteem indicators in UK approach • Different descriptors for 5-point Research Quality Rating Scale • UK provides financial support for ‘Third Stream’ activities, i.e. provision of infrastructure for knowledge transfer (approx 10% of the funds distributed) • RQF grades both Research Quality and its Impact (latter against a 3-point scale: high; moderate; limited) • RQF recognises a broader range of Research Impact outputs, e.g. reports to Government and industry, and greater media and social impact.
ERA • 8 discipline clusters • ERA will not determine block grant allocations of research funding • Evaluations in each cluster will take 3-4 months • 3 categories of indicators • Research activity and intensity • Research income; research degree load and completions; FTE researcher counts • Research quality • Publications analysis (ranked outlets, citation analysis, percentile analysis) • Research income based on peer review • Applied Research • Still to be determined
ERA: Attribution • ‘It is proposed that irrespective of the decision on publications, all other indicators will be collected based on institution affiliation because attributing research income and other indicators to staff who have moved would be difficult to monitor and verify’ (AR Consultation Paper) • Two options being considered: • Staff affiliation at census date (attribution based on where researchers are presently working). May be difficult to collect the data. • Institution affiliation based on HERDC (HE Research Data Collection): indicators attributed to the institution in which the activity occurred during the reference period. Would take advantage of existing data collection processes within institutions. But the drawback is that it would not reflect the current or prospective state of affairs within an institution.
ERA ‘Outlet rankings’ • More than 17k journals have been ranked across 100 disciplines. More than either Thomson ISI or Scopus (they each rank ~ 15k indexed journals). Done by learned societies and other disciplinary lead bodies. • Draft ranking list will be released this month. • Will be used to derive journal sets from which discipline-specific citation benchmarks will be generated. Necessary because the discipline journal sets (‘subject categories’) used by ISI and Scopus are very broad, and contain overlaps. This skews centile analysis when disciplines with different citation practices are included in a single journal set • Total publication and citation counts for each institution will be assessed, and citation-per-publication rates for each determined. This will be presented to the relevant Research Assessment Committee. • The ARC will test the use of multiple citation data suppliers to determine the most appropriate supplier for each discipline. • Outputs which include research students as authors will be analysed separately
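To show the kind of citation-per-publication and centile analysis described here (with invented numbers, not ERA data), a short sketch: each institution's papers within one discipline-specific journal set yield a citations-per-publication rate, which is then placed against an assumed benchmark distribution for that discipline.

    # Illustrative sketch only: citation-per-publication rates for institutions
    # within one discipline journal set, plus a centile position against an
    # assumed benchmark distribution. All numbers are invented.
    institution_outputs = {
        "Institution A": [12, 3, 0, 7, 5],   # citation counts of its papers in the set
        "Institution B": [1, 0, 2, 4],
    }
    benchmark_rates = [0.5, 1.0, 1.5, 2.2, 3.0, 4.1, 5.5]   # assumed discipline benchmark

    def centile(value, distribution):
        """Share of the benchmark distribution at or below the given value."""
        return 100.0 * sum(1 for x in distribution if x <= value) / len(distribution)

    for name, citations in institution_outputs.items():
        cpp = sum(citations) / len(citations)   # citations per publication
        print(f"{name}: {cpp:.2f} citations/publication, centile {centile(cpp, benchmark_rates):.0f}")

With these invented figures the sketch prints centile 86 for Institution A (5.40 citations/publication) and centile 43 for Institution B (1.75); the slide's point about overly broad journal sets is that the benchmark distribution, not this arithmetic, is where distortion enters.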
Reporting • Ideally by discipline, to identify areas of national research strength • By institution: to allow for comparative research strengths across institutions; and • By academic unit: to provide flexibility for institutions
Dutch METIS system • Research information database • Offers secure access to universities, research institutes and individual researchers • Provides management information ‘necessary for the periodical evaluation of the research activities within the institution’ • Permits showcasing by individual researchers • Includes detailed information on publication outputs, with URLs • Also includes data on inputs, shown as FTEs • Used by all 13 Dutch universities + KNAW (Academy of Arts & Sciences) • Linked to all DARE repositories