Assessing regional engagement and knowledge transfer – ranking or benchmarking?
David Charles, EPRC, University of Strathclyde
KT and engagement
• Qualitatively different to assess than teaching and research
• Not the same consensus over the idea of quality
• Not simply in the control of the university
• Does not indicate institutional excellence
• Partly dependent on external demand and environment
• Subjective assessment depending on perspective
Different forms of KT and RE
• Different paths to KT – research exploitation or informal exchange
• KT as codified vs tacit knowledge – who benefits?
• Other forms of engagement – cultural, social, governance relationships etc.
• Varied possible forms of excellence, some easier to measure than others
Ranking
• Comparison across diverse activities
• No sensible means of weighting activities
• Are we assessing the university or the regional environment?
• Balance of private and community benefit
Simple exploitation measures
• Patents, licences, spin-offs, contract income
• Discipline-specific opportunities and partly demand-driven
• Examples: the HE-BCI survey in the UK, the AUTM survey in the US and Canada
• Different rankings of universities for different indicators
Benchmarking instead of ranking
• Comprehensive set of indicators
• Identify areas of strength and weakness
• University and partners to decide on prioritisation
• Benchmarking with other universities to learn how to improve those areas seen as important
• Differentiation as an objective, to better meet the needs of stakeholders
Issues for discussion
• Does it make sense to try to reduce engagement to one or two composite indicators?
• Why do we want to measure engagement, and how does this affect what we try to measure?
• What are the merits of benchmarking approaches that mix output and process indicators?
• Should we focus on mutual learning rather than ranking in this field?