Measuring Academic Research in Canada
Alex Usher, Higher Education Strategy Associates
IREG-7 Warsaw, Poland – May 17, 2013
The Problem
• When making institutional comparisons, biases can arise both from institutional size and from the distribution of fields of study
• Can we find a way to compare institutional research output that controls for both size and field of study?
Basic methodology
• Simple 2-indicator system: publications (H-index) and research income (granting councils)
• Data gathered at the level of the individual researcher, not the institution
• Every researcher is given a score for his/her performance relative to the average of his/her discipline; scores are then summed and averaged (sketched below)
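A minimal sketch of this scoring step, in Python, assuming the normalization is simply each researcher's value divided by the discipline-wide average; the institutions, disciplines, and values are hypothetical, and the actual HESA calculation may differ.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical records: (institution, discipline, h_index) per researcher.
researchers = [
    ("Univ A", "Physics",   12),
    ("Univ A", "Sociology",  4),
    ("Univ B", "Physics",    9),
    ("Univ B", "Sociology",  6),
]

# 1. Discipline averages across all researchers.
by_discipline = defaultdict(list)
for _, discipline, h in researchers:
    by_discipline[discipline].append(h)
discipline_avg = {d: mean(vals) for d, vals in by_discipline.items()}

# 2. Score each researcher relative to his/her discipline average,
# 3. then average the normalized scores by institution.
by_institution = defaultdict(list)
for inst, discipline, h in researchers:
    by_institution[inst].append(h / discipline_avg[discipline])

institution_score = {inst: mean(scores) for inst, scores in by_institution.items()}
print(institution_score)
```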
Publication Metric: H-Index
“A scientist has index h if h of his/her Np papers have at least h citations each, and the other (Np − h) papers have no more than h citations each.”
(i.e., the largest possible number N where a scientist has N papers with N or more citations)
Ex. 1: Publication 1: 5 citations; Publication 2: 4 citations; Publication 3: 3 citations; Publication 4: 2 citations (H-index: 3)
Ex. 2: Publication 1: 10 citations; Publication 2: 2 citations; Publication 3: 2 citations; Publication 4: 2 citations (H-index: 2)
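For illustration, a short function that computes the h-index from a list of citation counts, following the definition above (the function name and code are mine, not from the presentation).

```python
def h_index(citations):
    """Return the largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(ranked, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# The two examples from the slide:
print(h_index([5, 4, 3, 2]))   # Ex. 1 -> 3
print(h_index([10, 2, 2, 2]))  # Ex. 2 -> 2
```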
H-Index (pros and cons)
Pros
• Discounts publications with little or no impact
• Discounts sole publications with very high impact
Cons
• Requires a large, accurate, cross-referenced database (labour-intensive)
• Age bias (less of a concern in aggregates)
• Differences in publication cultures (can be corrected for)
• Not very useful in disciplines with low publication cultures
The HiBar Database: built from institutional faculty lists, with standardized discipline names.
Medicine
• We did not cover medical fields: the manner in which certain institutions list staff at associated teaching hospitals made it impossible to generate equivalent staff lists.
Research Income
• Collected data on peer-evaluated individual grants (i.e., excluding major institutional allocations for equipment, etc.) made by the two main granting councils (SSHRC and NSERC) over a period of three years
• Data then field-normalized as per the process used for the H-index (see the sketch below)
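The presentation does not say how the publication and income indicators are weighted when combined; the sketch below assumes equal weighting and the same divide-by-discipline-average normalization as above. All figures, names, and the weighting are illustrative assumptions.

```python
from statistics import mean

def field_normalize(value, discipline, discipline_avgs):
    """Express a researcher's value relative to his/her discipline average."""
    return value / discipline_avgs[discipline]

# Hypothetical discipline averages: 3-year granting-council income (CAD) and H-index.
income_avgs = {"Physics": 90_000, "Sociology": 30_000}
h_avgs = {"Physics": 10, "Sociology": 5}

# One hypothetical researcher: combine the two normalized indicators.
# Equal weighting is an assumption; the slides do not state the weights.
researcher = {"discipline": "Physics", "h_index": 12, "grant_income": 120_000}
pub_score = field_normalize(researcher["h_index"], researcher["discipline"], h_avgs)
income_score = field_normalize(researcher["grant_income"], researcher["discipline"], income_avgs)
combined = mean([pub_score, income_score])
print(round(combined, 2))  # -> 1.27
```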
Research Income (pros and cons)
Pros
• Publicly available, third-party data with personal identifiers
• Based on a peer-review system designed to reward excellence
Cons
• Issues with respect to cross-institutional awards
• Ignores income from private sources, which may be substantial
Controversies (1)
• The double-count issue: in an initial draft, we used a record count of staff rather than a head count (the former is higher because of cross-appointments). This led to questions.
• The part-time professor issue: many objected to our inclusion of part-time staff in the total, so we re-did the numbers without them…
Who is a university?
• Whose performance gets included in a ranking says something about who one believes embodies a university. Should it include:
• FT faculty only?
• PT faculty? Emeritus faculty?
• Graduate students?
• At the moment, most ranking systems' decisions are driven by data-collection methodology.
Do all subjects matter equally? • Field-normalization implies that they do. But is this correct? Are some fields more central to the creation of knowledge than others? Should some fields be privileged when making inter-institutional comparisons?
Does Size Matter? • Does aggregation of talent bring benefits of its own, independent of the quality of people being aggregated?
Where Does Greatness Lie?
• On whose work should institutional reputation be based: its best scholars, or all of its scholars?
• Norming for size implicitly rewards schools with good average professors; failing to norm is more likely to reward a few “top” professors.