Research Universities & Research Assessment Launch Event

Join us at the launch event on 19th June in Brussels with Dr. Mary Phillips from Academic Analytics to learn about research assessment in universities. Discover how assessment is used to measure research output, quality, and impact, and how it informs strategic planning and collaborations.

Presentation Transcript


  1. RESEARCH UNIVERSITIES & RESEARCH ASSESSMENT
     Launch Event, 19th June, Brussels
     Dr Mary Phillips, Academic Analytics

  2. What is assessment?
     • The compilation of data relating to:
       • Inputs – competitive grants won, human and physical infrastructure, research environment
       • Outputs – publications, PhDs trained, commercialisation (patents, licences, spin-outs, venture funds)
       • Outcomes – longer-term societal and economic “impacts”
     • Can also involve assessment of “process” – HR, tech transfer etc.

  3. Who needs assessment?
     • Universities – senior academics and administrators
     • Governments – to determine allocation of scarce resources
     • Charities – for competitive funding or philanthropy
     • Commercial entities – for commissioned research or investment
     • Researchers looking to relocate
     • Students – UG and PG, home and overseas
     • Potential collaborators – national and international
     • Researchers and groups – for internal intelligence

  4. How universities use assessment
     • To gauge research output, quality and impact, for allocation of funds, improving performance and maximising return on investment
     • To inform strategic planning in specific subject areas – investing in areas of strength or new directions, and exposing weaknesses
     • To identify and track individual accomplishments; to recruit, retain and reward top performers
     • To track (and possibly reward) departmental/faculty performance and leadership
     • To find and foster productive collaborations, including international ones, especially with rapidly developing economies
     • To benchmark against genuine institutional peers

  5. Problems and Dangers (1)
     • Dissatisfaction with current tools – a more sophisticated approach is needed to integrate data from disparate systems
     • Lack of shared definitions, e.g. what counts as a researcher
     • National differences, e.g. funding mechanisms
     • Data often determined by external agencies
     • “Bean counting not enough”, e.g. PhDs

  6. Problems and Dangers (2)
     • Disciplinary differences
     • Perverse incentives
     • Adequate time-frames
     • Culture of measurement and monitoring detracting from the academic mission
     • Academics feel threatened
     • Duplicative and time-consuming

  7. Examples of Assessment (1)
     • UK RAE – started 1986 – outcomes determine the allocation of QR, based on peer review by expert Panels; metrics include publications, grants won and PhDs produced. Has led to a concentration of funding on research-intensive universities which mirrors RC funding. The REF in 2014 will include “impact”
     • QR funding is not hypothecated and can be used by universities to fund infrastructure, recruitment, PhDs etc. – has had a beneficial effect on productivity

  8. Examples of Assessment (2)
     • Australia ERA (replacing the abandoned RQF) – relies mostly on indicators/metrics to inform Panels (no impact)
     • When the cycle is complete, outcomes will determine the allocation of funds through block grants for infrastructure, training and research

  9. Examples of Assessment (3)
     • Germany – Excellence Initiative, run by the DFG and the Council for Science and Humanities – started in 2005
     • 1.9 billion euros
     • Distributed through competition in 3 areas: Graduate Schools, Clusters of Excellence (HEIs, non-HEIs and industry) and Institutional Strategies. To qualify for the latter, an institution must have at least one Graduate School and one Cluster
     • Has focused funding and improved productivity

  10. Examples of Assessment (4)
     • In France, Initiatives d’Excellence
     • Non-selective universities, Grandes Ecoles and independent institutes (CNRS) working together in larger units
     • Funding is competitive – international jury
     • Based on research excellence, training, innovation, national and international partnerships and strategic management
     • Also AERES – evaluation of all types of institutions, covering research and teaching
     • In Italy, ANVUR – 7% of state support based on a new evaluation exercise

  11. From Assessment to Rankings
     • Lisbon Agenda 2002 – to make the EU the “most competitive and dynamic knowledge-based economy in the world”
     • Driven by Europe’s universities and research institutes
     • Carried forward in the Europe 2020 Strategy
     • With some exceptions, European universities do not do well in rankings compared to the USA

  12. How to improve EU rankings (1)
     • EC DGR convened an Expert Group in 2008 to look at assessment:
     • To identify users and purposes
     • Existing methodologies – strengths and weaknesses
     • Define a multidimensional methodology to address the diversity of users
     • Where along the research pathway to assess
     • Social and economic benefits

  13. How to improve EU rankings (2)
     • Answer: U-Multirank
     • Dimensions: teaching & learning, research, knowledge exchange, internationalisation and regional engagement
     • Coincides with the EC’s Modernisation of Europe’s HE systems – governance, autonomy and improved education

  14. LERU’s view
     • A misnomer – never meant to be a ranking
     • Lack of relevant data in several dimensions
     • Problems of comparability between countries, e.g. funding
     • No reality checks
     • Burden of collecting data
     • Thus lack of compliance

  15. Peer Review – Pros and Cons
     • Most agree it is the fairest and most efficient way to assess past work and future potential
     • Subjective and inconsistent
     • Bias (deliberate or inadvertent)
     • Group bias
     • Conservative
     • Problems with interdisciplinary research
     • Burdensome for reviewers and costly

  16. Bibliometrics – Pros and Cons (1)
     • “advanced bibliometric methodology provides the opportunities to carry out effective evaluations with low burdens for the objects of the evaluation” – A. Van Raan
     • Objective
     • Multiple authors
     • Variations in names of researchers and institutions
     • Over- and self-citation

  17. Bibliometrics – Pros and Cons (2)
     • Different patterns/norms in different disciplines
     • Arts and Humanities etc.
     • Grey literature
     • Open Access journals

  18. Impact – a new dimension
     • Wider social and economic outcomes – public policy, treatments, environmental, public engagement etc.
     • Metrics very challenging
     • UK REF – 20%, will not include academic impacts; assessed through case studies on “reach” and “significance”
     • Impacts in the past 5 years, based on research from the previous 20 years

  19. STAR METRICS (1)
     • USA – NIH and NSF
     • Uses universities’ own records – HR, grant data, expenditure – i.e. automated
     • First stage will calculate employment impact
     • Second stage – economic growth (patents, start-ups etc.); workforce (student mobility and workforce markers); knowledge (publications, citations); social (health and environment)

  20. STAR METRICS (2)
     • Federal government – to justify expenditure
     • Funding agencies – to locate experts and analyse gaps
     • Senior HE administrators – to identify strengths and weaknesses
     • Researchers – to locate collaborators and to see how they are funded

  21. Impact and Europe
     • Innovation Union – flagship initiative of the Europe 2020 Strategy
     • Focus on excellence, IMPACT and implementation
     • Contrasts with the ERC, where excellence is the sole criterion
     • Proof of Concept Grants – the ERC is not averse to exploitation

  22. Recommendations (1) – Universities
     • Need to reflect the views of those at the “coal face”
     • Transparent and explicit – a suite of methods covering inputs, outputs and longer-term impact, with the limitations of each
     • Information from universities – accurate and up-to-date (e.g. HR, publications etc.)
     • Researchers – unique personal and institutional names
     • A central database for all research data – grants, commercialisation, publications, esteem measures etc. – for multiple uses, avoiding duplication

  23. Recommendations (2) – External Agencies
     • External agencies should ensure consistency for reliable comparisons (nationally, within Europe and ideally beyond)
     • External agencies should avoid creating perverse incentives/behaviours
     • Recognise the broader role of universities

  24. “To measure or not to measure…”
     • “If you can measure that of which you speak, and can express it by a number, you know something of your subject; but if you cannot measure it, your knowledge is meagre and unsatisfactory” – William Thomson, Lord Kelvin
     • “Not everything that counts can be counted, and not everything that can be counted counts.” – Albert Einstein
