
The use of bibliometric indicators in research evaluation and policy

Colloquium of the Académie des sciences, "Évolution des publications scientifiques - Le regard des chercheurs", 14-15 May 2007. The use of bibliometric indicators in research evaluation and policy. Henk F. Moed, Centre for Science and Technology Studies (CWTS), Leiden University, the Netherlands.


Presentation Transcript


  1. Colloquium of the Académie des sciences, "Évolution des publications scientifiques - Le regard des chercheurs", 14-15 May 2007. The use of bibliometric indicators in research evaluation and policy. Henk F. Moed, Centre for Science and Technology Studies (CWTS), Leiden University, the Netherlands

  2. Contents - 1 • Adequacy of ISI/WoS coverage • Data collection: Accuracy • Indicators: Sophistication • Analyse ‘in context’ • Be aware of ‘strategic’ behaviour

  3. Contents - 2 • What citations measure • Bibliometric indicators and peer review • Criteria for their proper use in research evaluation • Bibliometric indicators and funding parameters

  4. Citation Analysis in Research Evaluation. Springer, 2005, 350 pp. Henk F. Moed, CWTS, Leiden University

  5. Comments on issues addressed in earlier sessions

  6. Comparing the impact of Open Access (OA) vs. non-OA articles in the same journals [Figure 1 from: Harnad, S., Brody, T., D-Lib Magazine, Vol. 10, No. 6, June 2004]. OA advantage: 500%!

  7. Recent case study. Early View effect: ArXiv papers appear earlier. Quality bias: better authors use ArXiv.

  8. Conclusions from a case study of 24 condensed matter physics journals (Moed, arXiv:cs/0611060v1 [cs.DL]) • Correcting for early view and self-selection effects, there is no evidence of a general ‘open access’ (ArXiv) advantage

  9. Adequacy of ISI/WoS coverage • ISI/WoS coverage varies across research fields • Type of analysis depends upon field of inquiry • Alternative methods in (parts of) social sciences and humanities

  10. Measurement of internal WoS coverage: a matrix of citing/source documents (WoS vs. non-WoS) against cited/target documents (WoS vs. non-WoS); non-WoS items include journals, books, conference proceedings, reports, etc. The question is what percentage of the references cited in WoS source papers point to WoS-covered items versus non-WoS items.
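As a concrete illustration of this measurement (not part of the presentation), the sketch below computes an internal-coverage percentage, assuming a hypothetical list of cited source titles and a hypothetical set of WoS-covered journal titles:

```python
def internal_wos_coverage(cited_sources, wos_journals):
    """Percentage of cited references that point to WoS-covered journals.

    cited_sources: iterable of source titles found in the reference lists
    of WoS source papers (journals, books, proceedings, reports, ...).
    wos_journals: set of WoS-covered journal titles.
    Both inputs are illustrative assumptions, not an actual WoS export.
    """
    refs = list(cited_sources)
    if not refs:
        return float("nan")
    covered = sum(1 for title in refs if title in wos_journals)
    return 100.0 * covered / len(refs)

# Example: 3 of 4 cited sources are WoS journals -> 75% internal coverage.
print(internal_wos_coverage(
    ["Phys. Rev. B", "Nature", "Conference proceedings X", "Phys. Rev. Lett."],
    {"Phys. Rev. B", "Nature", "Phys. Rev. Lett."}))
```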

  11. Overall ISI coverage by main field

  12. Sub-disciplines (non-exhaustive list)

  13. Three types of studies, defined on the same citing/source vs. cited/target (WoS vs. non-WoS) matrix: 1. Pure WoS (WoS sources citing WoS targets); 2. Target expanded (the cited/target universe is extended to non-WoS items); 3. Source expanded (the citing/source universe is extended to non-WoS items).

  14. Alternative approaches for social sciences and humanities • Expand the WoS with additional sources • Classification of publications and sources based on scholars’ quality perceptions • Library collection analysis

  15. Data collection • Data collection must and can be accurate • Accurate citation links • Verification of publication lists

  16. Indicators • Publication counts have limited value • Journal impact factors are no surrogates of actual citation impact • Sophisticated indicators are available

  17. Normalised citation impact (1.0 = at world average): the average citation rate of a unit’s papers ÷ the world citation average in the subfields in which the unit is active. This corrects for differences in citation practices among fields, publication years and types of article.
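Read as a ratio of sums, the indicator on this slide might be computed as in the following minimal sketch; the data structures and baseline keys are illustrative assumptions, not the CWTS data model:

```python
def normalised_citation_impact(papers, world_baselines):
    """Field-normalised citation impact; 1.0 means 'at world average'.

    papers: list of dicts with keys 'citations', 'subfield', 'year',
            'doc_type' (one dict per paper of the unit under study).
    world_baselines: dict mapping (subfield, year, doc_type) to the
            world average citation rate for such papers.
    Both structures are hypothetical, for illustration only.
    """
    actual = sum(p["citations"] for p in papers)
    expected = sum(world_baselines[(p["subfield"], p["year"], p["doc_type"])]
                   for p in papers)
    return actual / expected if expected else float("nan")
```

Dividing total citations by total expected citations is only one possible aggregation choice; slide 42 returns to this point.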

  18. At the level of research groups, actual citation impact and journal prestige tend to show only weak correlations [set of 2,150 UK authors with > 10 articles per year]

  19. Other important issues • Analyse ‘in context’ • Be aware of ‘strategic’ behaviour

  20. % French papers declined after 1998

  21. JP, UK, DE, FR, CA decline; China No. 2 in 2006

  22. French citation impact increased

  23. Convergent trend in citation impact because of globalisation

  24. Effects of editorial self-citations upon journal impact factors [Reedijk & Moed, J. Doc., to be published, 2007] • Editorial self-citations: a journal editor cites, in his editorials, papers published in his own journal • Focus on ‘consequences’ rather than ‘motives’

  25. Another problem with the ISI/JCR journal impact factor: (citations to “citable” and “non-citable” items) ÷ (number of “citable” items). Citations to letters, editorials and other “non-citable” items are therefore “free”.
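A small numerical sketch (my own illustration, not from the slide) makes the asymmetry explicit:

```python
def jcr_style_impact_factor(cites_to_citable, cites_to_non_citable, n_citable):
    """Impact factor as described on the slide: the numerator counts
    citations to both 'citable' items (articles, reviews) and
    'non-citable' items (letters, editorials), while the denominator
    counts only the 'citable' items. Illustrative sketch, not JCR code.
    """
    return (cites_to_citable + cites_to_non_citable) / n_citable

# 100 citations to articles, 30 'free' citations to editorials/letters,
# 50 citable items: (100 + 30) / 50 = 2.6 instead of 100 / 50 = 2.0.
print(jcr_style_impact_factor(100, 30, 50))
```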

  26. Case: ISI/JCR Impact Factor of a Gerontology Journal (published in the journal itself)

  27. Decomposition of the IF of a Gerontology journal: the editor himself made his journal’s impact factor increase

  28. Timing effects and shifts in criteria in UK Research Assessment Exercises (RAE): trends in total publication counts, research-active staff, % UK articles and % UK authors around the 1992, 1995/6 and 2000 exercises, showing a shift from quantity to quality.

  29. What do citations measure? • Many studies showed positive correlations between citations and qualitative judgments • In principle it is valid to interpret citations in terms of intellectual influence • But its expression in the citing text may be vague or implicit • And the concepts of citation impact and intellectual influence do not coincide

  30. ‘Cold fusion’ case: citation impact of Fleischmann & Pons publications, showing the impact of the cold fusion paper and the high impact of their earlier work relative to the world citation average.

  31. Implications for the use of citation analysis in research evaluation • Its outcomes must be valued in terms of a framework that takes into account substantive content • The interpretation of citation impact involves a quest for possible biases or distortions • Level of aggregation and research questions addressed are crucial

  32. Citation Analysis and Peer Review • Tools for peers to assess research quality of research groups in (basic) science • Tools for policy makers to assess peer review processes • Tools for peers and policy makers to address complex, general, global issues (macro/meta studies)

  33. Affinity applicants – Committee: 0 = applicants are/were not a member of any Committee; 1 = co-applicant is/was a member of a Committee, but not of the one evaluating; 2 = first applicant is/was a member of a Committee, but not of the one evaluating; 3 = co-applicant is a member of the Committee(s) evaluating the proposal; 4 = first applicant is a member of the Committee(s) evaluating the proposal

  34. For 15 % of applications an applicant is a member of the evaluating Committee (Affinity=3, 4)

  35. The probability of being granted increases with increasing applicant-Committee affinity

  36. Logistic regression analysis: affinity applicant-Committee has a significant effect upon the probability of being granted.

Maximum-likelihood analysis-of-variance table (N = 2,499):

Source                            DF   Chi-Square   Prob
Intercept                          1        18.47   0.0000
Publ impact applicant              3        26.97   0.0000 **
Rel transdisc impact applicant     1         0.29   0.5926
Affinity applicant-Committee       2       112.50   0.0000 **
Sum requested                      1        45.47   0.0000 **
Institution applicant              4        25.94   0.0000 **
Likelihood ratio                 199       230.23   0.0638
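For readers who want to reproduce this type of analysis, here is a hedged sketch of how such a model could be fitted with standard tools; the file name and column names (granted, affinity, publ_impact_class, rel_transdisc_impact, sum_requested, institution) are hypothetical and not the variables of the original study:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-application data set; column names are assumptions.
df = pd.read_csv("applications.csv")

# Logistic regression of the grant decision on applicant-Committee
# affinity and the other covariates listed in the table above.
model = smf.logit(
    "granted ~ C(affinity) + C(publ_impact_class)"
    " + rel_transdisc_impact + sum_requested + C(institution)",
    data=df,
).fit()
print(model.summary())  # coefficient estimates and p-values per term
```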

  37. Conclusion • The future of research assessment exercises lies in the intelligent combination of metrics and peer review

  38. The use of citation analysis in research evaluation is more appropriate the more it is: • Formal • Open • Scholarly founded • Supplemented with expert knowledge • Carried out in a clear policy context with clear objectives • Stimulating users to explicitly state basic notions of scholarly quality • Enlightening rather than formulaic

  39. Intelligent combination of ‘metrics’ and peer review • Policy makers may let the type of peer review depend upon the outcomes of a bibliometric study • Peer committees may use citation analysis for initial rankings and explicitly justify why their final judgments deviate

  40. Metrics and funding parameters

Level          Data elaboration                                                Funding parameters
Central        Aggregate per institution                                       Across institutions
Institution    Indicators of groups / individuals, combined with peer review   Within an institution

  41. Use of metrics in allocation of research funds • At a central level: To distribute funds across institutions based on aggregate statistics • At an institutional level: Combined with peer review to evaluate groups and individuals; outcomes are used to distribute funds within the institution

  42. Aggregate statistics • Random errors to some extent cancel out • Various types of statistics possible, e.g., globalised vs. distributional
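The difference between a ‘globalised’ and a ‘distributional’ statistic can be sketched as follows; the data structures are illustrative assumptions, and the distributional variant shown here is only one simple possibility, the mean of per-paper ratios:

```python
def globalised_impact(papers, baselines):
    """Ratio of sums: total citations divided by total expected citations."""
    total_cites = sum(p["citations"] for p in papers)
    total_expected = sum(baselines[p["subfield"]] for p in papers)
    return total_cites / total_expected

def distributional_impact(papers, baselines):
    """Mean of per-paper ratios: average of citations / expected citations."""
    ratios = [p["citations"] / baselines[p["subfield"]] for p in papers]
    return sum(ratios) / len(ratios)
```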

  43. END
