“A research-based approach to research evaluation of individuals and institutions”

Research Evaluation. Giovanni Abramo, “Laboratory for Studies of Research and Technology Transfer” at the Institute for System Analysis and Computer Science, National Research Council of Italy.

Presentation Transcript


  1. Research Evaluation. Giovanni Abramo, “Laboratory for Studies of Research and Technology Transfer” at the Institute for System Analysis and Computer Science, National Research Council of Italy. “A research-based approach to research evaluation of individuals and institutions”. Nov. 21, 2013 – CNR

  2. Outline • The main bibliometric indicators, methods and ranking lists • The Italian research assessment exercise, VQR • The FSS indicator as a proxy of research productivity • Measurement of FSS at various organizational levels

  3. Research evaluation goals • Stimulating higher research productivity • Allocating resources according to performance • Informing research policy (strategy) • Demonstrating that investment in research is effective and delivers public benefits • Reducing information asymmetry between supply and demand

  4. Research assessment problems • Proliferation of performance indicators • Doubtful assessment methods • Abundance of non-research-based rankings • Media fanfare for (invalid) world institution rankings • Do-it-myself practices

  5. Improvisation: ARWU by Shanghai Jiao Tong University http://www.shanghairanking.com/ARWU2012.html Pisa, Sapienza: 101-150; Milan, Padua: 151-200

  6. ARWU (Shanghai Jiao Tong University) methodology: weighting scheme for the total score. 90% of the score is size dependent!

  7. Via-academy (Italian Scientists and Scholars in UK): “… The top 50 research institutions in Italy. The institutions are ranked according to the sum of the h-indexes of their affiliated TIS (top 500 Italian scientists by h-index).” http://www.tisreports.com/products/4-Top_50_Italian_Institutes.aspx

  8. Research-based (?!) Leiden rankings http://www.leidenranking.com/ranking.aspx Mean citation score (MCS): the average number of citations of the publications of a university. Mean normalized citation score (MNCS): the average number of citations of the publications of a university, normalized for field differences, publication year, and document type. An MNCS value of two, for instance, means that the publications of a university have been cited at twice the world average. Proportion top 10% publications (PPtop 10%): the proportion of the publications of a university that, compared with other similar publications, belong to the top 10% most frequently cited. Publications are considered similar if they were published in the same field and the same year and have the same document type.

  9. Research-based 2013 Leiden ranking

  10. SCImago country rank Cites (without self cites) per Document: Italy vs US

  11. Validity of the most popular indicators • The CWTS new crown indicator (MNCS): the average number of citations of the publications of a university, normalized for field differences, publication year, and document type. Example: Univ. A = (10) => MNCS = 10; Univ. B = (10, 10, 10, …, 9) => MNCS < 10, so A outranks B despite B’s far greater overall impact (a worked sketch follows below).
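The distortion is easy to reproduce. Below is a minimal Python sketch (not the CWTS implementation; the publication lists are invented for illustration) that computes the MNCS for the two hypothetical universities above.

```python
# Minimal sketch of the MNCS distortion above; each list holds field-normalized
# citation scores and is invented for illustration (this is not CWTS code).

def mncs(normalized_scores):
    """Mean normalized citation score: the plain average of the
    field-normalized citation scores of a unit's publications."""
    return sum(normalized_scores) / len(normalized_scores)

univ_a = [10]                 # a single publication cited 10x the world average
univ_b = [10] * 99 + [9]      # 99 such publications plus one slightly below

print(mncs(univ_a))           # 10.0
print(mncs(univ_b))           # ~9.99 -> A outranks B despite far smaller output
```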

  12. Validity of the most popular indicators • The h-index: the maximum number h of works by a scientist that have at least h citations each. Example: John Doe I = (4, 4, 4, 4) => h = 4; John Doe II = (400, 400, 400, 400, 3, 3, …, 3) => h = 4, the same score despite vastly different impact (see the sketch below).
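A short Python sketch of the h-index computation makes the same point; the two citation records are invented.

```python
def h_index(citations):
    """h-index: the largest h such that at least h publications
    have at least h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

john_doe_1 = [4, 4, 4, 4]
john_doe_2 = [400, 400, 400, 400] + [3] * 50   # invented records
print(h_index(john_doe_1))   # 4
print(h_index(john_doe_2))   # 4 -> same h despite vastly greater impact
```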

  13. Official national research assessment exercises • UK: RAE series (peer review) up to 2010; REF, 2014 (informed peer review) • Italy: VTR, 2006 (peer review); VQR, 2011 (hybrid) • Australia: ERA, 2010 (bibliometrics) • …

  14. The Italian university system • 96 universities • 67 public (94.9% of total research staff) • 6 schools for advanced studies (0.5%) • 1.8% foreign staff • 16.8% unproductive (hard sciences) • 7.8% uncited • Government funding = 56% of total income • 3.9% based on VTR results

  15. The Italian VQR 2004-2010 • State universities; • legally-recognized non-state universities; • research institutions under the responsibility of the MIUR. • 3 products per researcher • 50% of score based on the quality of the research products submitted and 50% derived from a composite of six other indicators.

  16. VQR: quality of products • A = Excellent (score 1), if the product places in the top 20% on “a scale of values shared by the international community”; • B = Good (score 0.8), if the product places in the 60%-80% range; • C = Acceptable (score 0.5), if the product is in the 50%-60% range; • D = Limited (score 0), if the product is in the bottom 50%. • -0.5 for each missing product
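As a sketch only (the function name is illustrative and the thresholds merely restate the classes above), the scoring can be expressed as a simple mapping from a product's percentile position:

```python
def vqr_score(percentile):
    """Map a product's percentile position (0 = worst, 100 = best) on the
    shared international scale to its VQR merit class and score."""
    if percentile >= 80:
        return "A (Excellent)", 1.0
    if percentile >= 60:
        return "B (Good)", 0.8
    if percentile >= 50:
        return "C (Acceptable)", 0.5
    return "D (Limited)", 0.0

MISSING_PRODUCT_PENALTY = -0.5   # applied for each product not submitted

print(vqr_score(85))   # ('A (Excellent)', 1.0)
print(vqr_score(55))   # ('C (Acceptable)', 0.5)
```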

  17. The Italian VQR 2004-2010: classification matrix for products in Chemistry (IR = “evaluated by Informed Peer Review”)

  18. VQR: main limits • Robustness: how sensitive are rankings to the share of the output evaluated? • Reliability: do universities submit their best outputs? • Precision: how precise is the quality evaluation of products and institutions? • Functionality: how useful are national rankings for universities, students, companies, …? • Costs and time of execution: spending review

  19. Rankings sensitivity to the share of output: median and range of variation (max – min) of rankings in Physics, when varying the output share 8 times

  20. Reliability: how effective is the selection of outputs by universities? • Universities’ do-it-myself selection worsened the maximum score achievable in the hard sciences by 23% to 32%, compared to the score from an efficient selection*. • * Abramo G., D’Angelo C.A., Di Costa F., 2013. Inefficiency in selecting products for submission to national research assessment exercises. Scientometrics, DOI: 10.1007/s11192-013-1177-3

  21. Precision: VQR main problems • The use of the journal impact factor; • the failure to consider products’ quality values as a continuous range; • the full counting of the submitted publications regardless of the number of co-authors and their position in the byline.

  22. Back to the fundamentals of microeconomics (1/2): two investments A and B of 100€ each return 110€ and 120€ respectively, so IRR_A = 10% < IRR_B = 20%.

  23. Back to the fundamentals of microeconomics (2/2): research as a production process, with inputs L (labour) and K (scientific instruments, etc.) producing output Q (new knowledge).

  24. The Fractional Scientific Strength (FSS), individual level:

FSS_R = \frac{1}{w_R} \, \frac{1}{t} \sum_{i=1}^{N} \frac{c_i}{\bar{c}_i} \, f_i

Where: w_R = average yearly salary of the researcher; t = number of years of work of the researcher in the period of observation; N = number of publications of the researcher in the period of observation; c_i = citations received by publication i; \bar{c}_i = average of the distribution of citations received for all cited publications of the same year and subject category of publication i; f_i = fractional contribution of the researcher to publication i
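A minimal Python sketch of the formula above, with an invented publication record (the data structure and figures are assumptions for illustration, not the ORP implementation):

```python
from dataclasses import dataclass

@dataclass
class Publication:
    citations: float   # c_i
    field_avg: float   # average citations of cited publications of the same
                       # year and subject category (c-bar_i)
    fraction: float    # f_i, the researcher's fractional contribution

def fss_individual(pubs, avg_yearly_salary, years):
    """FSS_R = (1 / w_R) * (1 / t) * sum_i (c_i / c-bar_i) * f_i"""
    impact = sum(p.citations / p.field_avg * p.fraction for p in pubs)
    return impact / (avg_yearly_salary * years)

# Invented example: two publications over a five-year window,
# with salary expressed in normalized units.
pubs = [Publication(12, 6.0, 0.5), Publication(3, 6.0, 0.25)]
print(fss_individual(pubs, avg_yearly_salary=1.0, years=5))   # 0.225
```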

  25. Additional performance indicators • Output (O), number of publications; • Fractional Output (FO), number of publications, each divided by the number of co-authors*; • Scientific Strength (SS), number of field-normalized citations; • Average Impact (AI), average field-normalized citations per publication. * In the life sciences, the position of co-authors in the byline reflects the relative contribution to the project and is weighted accordingly.
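These four indicators can be read off a publication list directly; in the sketch below the records are invented and the life-science byline weighting is omitted.

```python
# Invented publication records: citations, field baseline, number of co-authors.
pubs = [
    {"citations": 12, "field_avg": 6.0, "n_authors": 4},
    {"citations": 3,  "field_avg": 6.0, "n_authors": 2},
]

O  = len(pubs)                                             # Output
FO = sum(1 / p["n_authors"] for p in pubs)                 # Fractional Output
SS = sum(p["citations"] / p["field_avg"] for p in pubs)    # Scientific Strength
AI = SS / O                                                # Average Impact

print(O, FO, SS, AI)   # 2 0.75 2.5 1.25
```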

  26. The Italian academic classification system • MIUR database of all academics • 370 (205) fields (SDS) • 14 (9) disciplines (UDA) • Authorship disambiguation algorithm

  27. The importance of researchers’ classification

  28. The importance of researchers’ classification

  29. The importance of researchers’ classification

  30. The ORP-based evaluation system • Assigns publications to each author (affiliation unification; authors’ name disambiguation) • Classifies authors by field • Classifies publications by subject category

  31. ORP database • Source: Web of Science (WoS) • Observation period: from 2001 • All Italian universities (95), research institutions (76), research hospitals (196) • 350,000 publications, 120,000 proceedings • 320,000 (66,000 university) authors • Publications classification: 245 (182) WoS subject categories; 12 (8) disciplines • Researchers classification: 370 (205) university disciplinary sectors (SDS); 14 (9) university disciplinary areas (UDA)

  32. What the ORP system does (1/2) • Measures the standardized impact (citations, IF) of each article, review and conference proceeding indexed in WoS. • Matches each publication to its real author (4% error) and institution. • Measures the productivity of individual researchers and ranks them within their field (SDS) on a national scale. • Measures the productivity of institutions in each field and ranks them on a national scale. • Based on individual or field productivity scores, measures the productivity of multi-field research units (disciplines, departments, institutions) and ranks them on a national scale.

  33. What the ORP system does (2/2) • Identifies non-productive researchers and ranks research units by their concentration of non-productive researchers. • Identifies top scientists and ranks research units by their concentration of top scientists. • Identifies highly-cited publications and ranks research units by their concentration of highly-cited publications. • Ranks research units on the basis of their productive researchers alone. • Rates and ranks the performance distribution (Gini coefficient) of research units. • Measures collaboration rates of research units with private companies, foreign institutions, etc.

  34. The performance of single researchers The national percentile ranking of researchers of the Biopathology Dept of university “X” (2006-2010).

  35. The Fractional Scientific Strength (FSS), field level (SDS):

FSS_S = \frac{1}{w_S} \sum_{i=1}^{N} \frac{c_i}{\bar{c}_i} \, f_i

Where: w_S = total salary of the research staff of the university in the SDS, in the observed period; N = number of publications of the research staff of the university in the SDS in the period of observation; c_i = citations received by publication i; \bar{c}_i = average of the distribution of citations received for all cited publications of the same year and subject category of publication i; f_i = fractional contribution of the research staff of the university in the SDS to publication i

  36. The performance in each field (SDS) The fields within the UDA “Mathematics” of university “X”

  37. The Fractional Scientific Strength (FSS), multi-field unit (1/2). Labor productivity of multi-field units (e.g. departments) based on FSS_R:

FSS_D = \frac{1}{RS} \sum_{j=1}^{RS} \frac{FSS_{R_j}}{\overline{FSS_R}}

Where: RS = research staff of the department, in the observed period; FSS_{R_j} = productivity of researcher j in the department; \overline{FSS_R} = average productivity of all national productive researchers in the same SDS as researcher j
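A sketch of this aggregation; the researcher scores, SDS codes and national averages below are invented for illustration.

```python
def fss_department(researchers, national_avg_by_sds):
    """FSS_D = (1 / RS) * sum_j FSS_R_j / (average FSS_R of national
    productive researchers in the SDS of researcher j)."""
    return sum(fss / national_avg_by_sds[sds]
               for fss, sds in researchers) / len(researchers)

# (researcher FSS_R, SDS code) -- invented figures
researchers = [(0.90, "MAT/05"), (1.40, "MAT/03"), (0.30, "MAT/05")]
national_avg_by_sds = {"MAT/05": 0.80, "MAT/03": 1.10}
print(fss_department(researchers, national_avg_by_sds))   # ~0.92
```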

  38. The Fractional Scientific Strength (FSS), multi-field unit (2/2). Labor productivity of multi-field units (e.g. UDAs) based on FSS_S:

FSS_U = \sum_{k=1}^{N_U} \frac{w_{S_k}}{w_U} \, \frac{FSS_{S_k}}{\overline{FSS_{S_k}}}

Where: w_{S_k} = total salary of the research staff of the university in the SDS k, in the observed period; w_U = total salary of the research staff of the university in the UDA U; N_U = number of SDSs of the university in the UDA U; \overline{FSS_{S_k}} = weighted average FSS_S of all universities with productivity above 0 in the SDS k
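The UDA score is thus a salary-weighted average of the university's normalized field scores; a sketch with invented figures:

```python
def fss_uda(fields):
    """FSS_U = sum over the N_U SDSs of
    (w_Sk / w_U) * (FSS_Sk / weighted national average FSS_Sk)."""
    w_u = sum(w for w, _, _ in fields)        # total UDA salary, w_U
    return sum(w / w_u * fss / nat_avg for w, fss, nat_avg in fields)

# (SDS salary w_Sk, university FSS_Sk, national weighted average FSS_Sk)
fields = [(300_000, 1.2, 1.0), (500_000, 0.7, 0.9)]
print(fss_uda(fields))   # ~0.94
```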

  39. The performance of university X in each discipline (UDA)

  40. Distortion of rankings by Leiden’s new crown indicator (MNCS)

  41. Distortion of university rankings by the h and g indexes

  42. Comparison of VQR and FSS quartile ranking lists

  43. Conclusions • Count only what counts and be aware of what you cannot count • The most popular research performance indicators are invalid • Field classification of scientists is absolutely required to compare performance at the individual level • Research performance at the individual level is absolutely required to measure performance at organizational level • Avoid the “do it myself” temptation

  44. Giovanni Abramo Laboratory for Studiesof Research and Technology Transfer at the Institute for System Analysis and Computer Science (IASI-CNR) National Research Council of Italy Viale Manzoni 30, 00185 Roma – ITALY Tel. +39 06 72597362 giovanni.abramo@uniroma2.it http://www.disp.uniroma2.it/laboratorioRTT/eng/index_eng.html
