Estimating the Quality of Business and Management Journals from the UK Research Assessment Exercise John Mingers, Paola Scaparra, Kate Watson Kent Business School Centre for the Evaluation of Research Performance IMEC 2011, Hong Kong
1. Journal rankings • Many different journal rankings, each with its own biases and prejudices • They are based on often arbitrary criteria, and can be by peer review or behavioural (e.g., impact factors) • The original Kent ranking was simply a statistical combination of other rankings • “Objectivity results from a combination of subjectivities” (Ackoff) Why are they so contentious?
• Paper quality is unknown unless we peer review it – hence the RAE; so is researcher quality – no little Lion mark. So we impute both from the journal ranking • THEORY 1: The quality of a journal purely reflects the quality of its papers (editors/publishers/common sense) • THEORY 2: Low-quality papers may be published in high-quality journals and vice versa (RAE) • It matters in terms of publication strategy and decision-making Is journal a good proxy for quality?
2. Reconstructing the 2008 RAE Grades Submission statistics for the last three RAEs. Adapted from Geary et al (2004), Bence and Oppenheim (2004), RAE (2009a). (a) Totals differ slightly between different sources; figures for 2008 are after data cleaning as described later
Number of publications by output type Adapted from Geary et al (2004), Bence and Oppenheim (2004), RAE (2009a). Categories with zero entries have been suppressed
Pareto curve for the number of entries per journal in the 2008 RAE
2.1 The LP Model Initial model (QP1) Let: j index the journals (j = 1 .. no. of journals) g index the grades 0* – 4* (g = 0 .. 4) i index the universities (i = 1 .. no. of institutions) e_ig be the estimated proportion of research at grade g for university i p_jg be the estimated proportion of the outputs of journal j graded at grade g u_ig be the actual proportion of research at grade g for university i n_ij be the number of entries of journal j submitted by university i
Minimise Σ_i Σ_g (e_ig − u_ig)²
s.t. e_ig = (Σ_j n_ij p_jg) / (Σ_j n_ij) for each institution (i) and grade (g)
Σ_g p_jg = 1, p_jg ≥ 0 for each journal (j)
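The estimation idea behind QP1 can be illustrated with a minimal numerical sketch. This is a deliberately simplified two-grade, linear least-squares version with invented toy data, not the paper's actual model or code: with only two grades the simplex constraint Σ_g p_jg = 1 reduces to a single free proportion per journal, so ordinary least squares suffices here.

```python
import numpy as np

# Toy data (invented for illustration): 3 universities, 2 journals,
# 2 grades (3* and 4*). n[i, j] = number of outputs in journal j
# submitted by university i.
n = np.array([[3.0, 1.0],
              [1.0, 3.0],
              [2.0, 2.0]])

# A[i, j] = share of university i's submission that went to journal j,
# so e = A @ p gives each university's estimated 4* proportion.
A = n / n.sum(axis=1, keepdims=True)

# u[i] = actual proportion of university i's research graded 4*
# (constructed here to be consistent with journal 4* shares p = (0.8, 0.2)).
u = np.array([0.65, 0.35, 0.50])

# Minimise sum_i (e_i - u_i)^2 over p: ordinary least squares.
# (The paper's QP1 additionally enforces p_jg >= 0 and sum_g p_jg = 1
# across all five grades; with two grades and exact toy data these hold
# automatically.)
p_hat, *_ = np.linalg.lstsq(A, u, rcond=None)
print(np.round(p_hat, 3))  # -> [0.8 0.2]
```

Because the toy u was generated exactly from p = (0.8, 0.2), the least-squares fit recovers the journal grade proportions exactly; with real RAE data the fit is only approximate and the full QP machinery is needed.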
TOP-20 JOURNALS BASED ON % 4* RECONSTRUCTED FROM RAE OUTPUTS
PROPORTIONS OF JOURNALS IN PARTICULAR RANKS COMPARING ABS WITH RAE
Conclusions from Table 9 • Overall RAE grades were higher than overall ABS grades (cols 1, 4), but this was because of the selectivity of submissions • This can be seen by comparing the ABS journals submitted with the ABS journals not submitted (cols 2, 3) • Comparing those journals that are in common, the level of grading is very similar (cols 3, 6) • In the RAE, ABS journals were graded more highly than non-ABS journals (cols 5, 6)
Scattergram showing association between GPA and proportion of an institution’s submitted journals that are in ABS
There are at least 3 possible explanations of this: • “RAE Bias”: higher % ABS journals → better RAE grades • “Better depts. more mainstream”: higher quality of department → higher % ABS journals (as well as better RAE grades) • “Greater selectivity”: higher quality of department → better RAE grades and, through more selective submission, higher % ABS journals
TOP JOURNALS IN OPERATIONAL RESEARCH RANKED BY IMPUTED RAE GRADE
3. Technical conclusions • Rankings are just a heuristic device and should not be taken as synonymous with quality • We can use the RAE data to reconstruct the judgements made by the RAE panels
4. Strategic questions • Current measurement regimes are hugely distorting to research: – Narrow focus on types of outputs – i.e. “4*” English-language journal articles – Narrow focus on types of measurements – Narrow focus on types of impact • Should we stop now and develop a system that aims to evaluate quality in a variety of forms, a variety of media, through a variety of measures, with the ultimate goal of answering significant questions? Adler, N. and Harzing, A.-W. (2009) “When Knowledge Wins: Transcending the Sense and Nonsense of Academic Rankings”, Academy of Management Learning and Education, 8(1), 72-95