Standards in science indicators
Vincent Larivière
EBSI, Université de Montréal
OST, Université du Québec à Montréal
Standards in science workshop, SLIS-Indiana University, August 11th, 2011
Current situation
Since the early 2000s, we are witnessing:
Increase in the use of bibliometrics in research evaluation;
Increase in the size of the bibliometric community;
Increase in the variety of actors involved in bibliometrics (e.g., no longer limited to the LIS or STS communities);
Increase in the variety of existing metrics for measuring research impact: H-index (with its dozen varieties); eigenvalue-based, SNIP and SCImago impact indicators, etc.;
No longer an ISI monopoly: Scopus, Google Scholar and several other initiatives (SBD, etc.).
Why do we need standardized bibliometric indicators?
Symptomatic of the immaturity of the research field: no paradigm is yet dominant;
Bibliometric evaluations are spreading at the levels of countries, institutions, research groups and individuals;
Worldwide rankings are spreading and often yield diverging results;
Standards show the consensus in the community and allow the various measures to be:
Comparable
Reproducible
Impact indicators
Impact indicators have been used for quite a while in science policy and research evaluation.
Until quite recently, only a handful of metrics were available or compiled by research groups involved in bibliometrics:
1) raw citations
2) citations per publication
3) impact factors
Only one database was used: ISI.
Only one normalization was made: by field (when it was done!).
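To make these three classic metrics concrete, here is a minimal Python sketch. The paper set and citation counts are invented for illustration, not taken from the talk; the impact factor follows the usual two-year definition.

# Each paper: (publication year, citations received in 2011); hypothetical numbers.
papers = [(2009, 12), (2009, 3), (2010, 7), (2010, 0), (2011, 1)]

# 1) raw citations: simple sum over all papers.
raw_citations = sum(c for _, c in papers)

# 2) citations per publication: raw citations divided by the number of papers.
citations_per_paper = raw_citations / len(papers)

# 3) two-year impact factor for 2011: citations received in 2011 by items
#    published in 2009-2010, divided by the number of items published in 2009-2010.
numerator = sum(c for y, c in papers if y in (2009, 2010))
denominator = sum(1 for y, _ in papers if y in (2009, 2010))
impact_factor_2011 = numerator / denominator

print(raw_citations, round(citations_per_paper, 2), round(impact_factor_2011, 2))
# -> 23 4.6 5.5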
Factors to take into account in the creation of a new standard
Field specificities: citation potential and aging characteristics.
Field definition: at the level of the journal or at the level of the paper? Interdisciplinary journals?
Differences in the coverage of databases.
Distributions vs. aggregated measures.
Skewness of citation distributions (use of logs?).
Paradox of ratios (0 → 1 → ∞).
Averages vs. medians vs. ranks.
Citation windows.
Unit vs. fractional counting (see the sketch below).
Equal or different weight for each citation?
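As an illustration of one item in this list, the following Python sketch contrasts unit (whole) counting with fractional counting. The unit names and the paper set are hypothetical, chosen only to show how credit assignment changes the totals.

from collections import defaultdict

# Each paper lists the units that contributed to it (hypothetical data).
papers = [
    ["UdeM", "UQAM"],
    ["UdeM"],
    ["UdeM", "UQAM", "McGill"],
]

unit_counts = defaultdict(float)        # unit (whole) counting: 1 credit per unit per paper
fractional_counts = defaultdict(float)  # fractional counting: 1/n credit when n units collaborate

for units in papers:
    contributors = set(units)
    for u in contributors:
        unit_counts[u] += 1.0
        fractional_counts[u] += 1.0 / len(contributors)

print(dict(unit_counts))        # UdeM: 3.0, UQAM: 2.0, McGill: 1.0
print(dict(fractional_counts))  # UdeM: ~1.83, UQAM: ~0.83, McGill: ~0.33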
Ex. 1: Impact indicators
Example of how a very simple change in the calculation method of an impact indicator can change the results obtained, even when very large numbers of papers are involved.
All things are kept constant here: same papers, same database, same subfield classification, same citation window.
The only difference is the order of operations leading to the calculation: average of ratios (AoR) vs. ratio of averages (RoA). Both methods are considered standards in research evaluation.
Four levels of aggregation are analyzed: individuals, departments, institutions and countries.
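The two orders of operations can be written out directly. The Python sketch below uses invented citation counts and field averages, not the data analyzed in the study, to show that AoR and RoA diverge even when computed on the exact same papers.

# Hypothetical papers: observed citations and the expected (field-average) citation rate.
papers = [
    {"citations": 10, "field_average": 4.0},
    {"citations": 0,  "field_average": 2.5},
    {"citations": 3,  "field_average": 6.0},
]

# Average of ratios (AoR): normalize each paper first, then average the ratios.
aor = sum(p["citations"] / p["field_average"] for p in papers) / len(papers)

# Ratio of averages (RoA): average the citations and the expected rates first, then divide.
mean_citations = sum(p["citations"] for p in papers) / len(papers)
mean_expected = sum(p["field_average"] for p in papers) / len(papers)
roa = mean_citations / mean_expected

print(round(aor, 3), round(roa, 3))  # -> 1.0 vs. ~1.04: same papers, different scores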
Figure 1. Relationship between RoA and AoR field-normalized citation indicators at the level of A) individual researchers (≥20 papers), B) departments (≥50 papers), C) institutions (≥500 papers) and D) countries (≥1,000 papers)
Figure 2. Relationship between (AoR – RoA) / AoR and the number of papers at the level of A) individual researchers, B) departments, C) institutions (≥500 papers), and D) countries
Ex. 2: Productivity measures
Typically, we count the research productivity of units by summing the number of distinct papers they produced and dividing it by the total number of researchers in the unit.
Another method is to assign papers to each researcher of the group and then take the average of their individual outputs.
Both counting methods are correlated, but nonetheless yield different results:
Difference in the results obtained for 1,223 departments (21,500 disambiguated researchers)
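The two productivity measures described in Ex. 2 can be sketched as follows in Python, with a hypothetical three-person department; co-authored papers are what drive the two methods apart.

# Hypothetical department: each researcher's set of papers (p1 is co-authored by A and B, etc.).
department = {
    "researcher_A": {"p1", "p2", "p3"},
    "researcher_B": {"p1", "p4"},
    "researcher_C": {"p2"},
}

# Method 1: number of distinct papers divided by the number of researchers.
distinct_papers = set().union(*department.values())
papers_per_researcher = len(distinct_papers) / len(department)

# Method 2: average of each researcher's individual output
# (co-authored papers are counted once per co-author, so the totals differ).
average_individual_output = sum(len(p) for p in department.values()) / len(department)

print(round(papers_per_researcher, 2), average_individual_output)  # -> 1.33 vs. 2.0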