The role of evaluation and ranking of universities in the quality culture Professor Jean-Marc Rapp EUA President 2 July 2009
Overview • Evaluation vs. Ranking – different concepts • Evaluation – an EUA perspective • EUA's quality policy • EUA's activities focusing on the development of a quality culture • Rankings – an EUA perspective • Reflection on current initiatives • Rankings and their impact on quality culture • Conclusion – the way forward
Ranking vs. Quality Assurance/evaluation
Quality Assurance/evaluation: • Judgement of strengths & concerns on a number of measures related to the input, process and output of HE -> quality enhancement • Data always obtained from the HEI as a self-evaluation report • Almost always involves a site visit by peers
Ranking: • Relative positions within the participating group, derived from mathematical "formulas" applied to performance on a number of selected measures (see the illustrative sketch below) • Data gathered independently, or obtained from and usually verified by the HEI • Usually no site visit
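To make the "mathematical formula" concrete, here is a minimal, purely illustrative Python sketch of how a ranking compiler might combine performance on a few selected indicators into relative positions. The indicator names, weights and scores are invented assumptions for illustration, not the methodology or data of any actual ranking.

# Minimal illustrative sketch (hypothetical data): turning performance on
# selected indicators into relative positions via a weighted formula.
# Indicator names, weights and scores are made up for illustration only.

weights = {"research_output": 0.5, "reputation_survey": 0.3, "staff_student_ratio": 0.2}

institutions = {
    "University A": {"research_output": 90, "reputation_survey": 70, "staff_student_ratio": 60},
    "University B": {"research_output": 60, "reputation_survey": 85, "staff_student_ratio": 80},
    "University C": {"research_output": 75, "reputation_survey": 75, "staff_student_ratio": 70},
}

def composite_score(scores, weights):
    """Weighted sum of normalised indicator scores -- the 'mathematical formula'."""
    return sum(weights[k] * scores[k] for k in weights)

# Relative positions: sort institutions by composite score, highest first
ranking = sorted(institutions, key=lambda name: composite_score(institutions[name], weights), reverse=True)
for position, name in enumerate(ranking, start=1):
    print(position, name, round(composite_score(institutions[name], weights), 1))

The point of the sketch is that the resulting positions depend entirely on which indicators are selected and how they are weighted, in contrast to the self-evaluation and peer-visit basis of QA described above.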
Key EUA activities in the field of QA • Institutional Evaluation Programme since 1994 • Quality Culture project 2002–2006 • Creativity project 2007 • Quality Assurance for Higher Education Change Agenda (QAHECA) 2008–2009 • European Quality Assurance Forum in co-operation with other E4 partners (ESU, ENQA and EURASHE) • Founding member of EQAR
EUA’s Policy Positions on QA/Evaluations • Main responsibility for quality assurance lies with the institutions • Context sensitive (institutional and disciplinary diversity) • Fitness for purpose approach • Enhancement oriented • Internal and external evaluations or QA processes should be complementary • Transparency and co-operation
The present landscape 1 - Global initiatives • Global rankings: Shanghai ARWU; Times-QS World University Ranking (newspaper driven); Leiden Ranking • Emerging Global Model (EGM) of a 'world class university' • Therefore, rankings increasingly reflect the prestige and reputation of HEIs according to one specific model • OECD feasibility study for the international assessment of HE Learning Outcomes: AHELO
The present landscape 2 - European initiatives • European Commission feasibility study to develop multi-dimensional university ranking • EU Commission supported statistical database on Higher Education (via Eurostat) • European Commission has supported projects to develop a Classification of European HEIs • DG Research Expert group is working on methodologies for University-Based Research Assessment • CHE (D) - various classification initiatives with different foci (universities, research rankings, departmental excellence, employability rating)
The present landscape – some observations • Significant limitations of existing rankings: • Not comprehensive: provide an incomplete & one-off snapshot of a small segment of a rapidly changing sector • 'One-size-fits-all' methodology: does not take account of the increasingly differentiated HE landscape in Europe • Lack of transparency in the way they are compiled • Compilers use available data rather than compiling data • Reflect largely reputational factors (40% of THES) • Dominance of research and metrics – little focus on other missions of the university • Therefore, existing rankings typically favour old, large, Anglo-Saxon research-intensive institutions with roughly 24,000 students and a $2 billion annual budget
The present landscape – some observations • Despite the commonly acknowledged limitations, rankings are increasingly used • Institutions: seek to influence compilers (HEFCE 2008); senior management KPIs are influenced (HEFCE 2008); promotion & marketing efforts change (HEFCE 2008) • Governments: argue for a "value added" approach; show increased interest in transparency instruments at the HE system level (Leuven Communiqué 2009); key priorities such as LLL and widening access are not accounted for in the rankings -> explore alternatives
Rankings – what is their purpose? • The common "politically correct" purpose: providing transparent information to students & reflecting the prestige of institutions • What are the "real" purposes? - to drive research and teaching performance - to allocate and lobby for resources - to identify stakeholders and partners - to promote other policy objectives
Rankings & quality issues • Rankings increasingly equated with quality standards, which is a danger, as: • externally definedindicators that are not necessarily linked to an institution’s core mission and objectives • some HEIs are tempted to chase rankings and focus on improving what can be measured/indicators rather than focus on their core mission • rankings are based on a one-size-fits-allmethodology that does not take account of diversity • Poor positioning in the rankings can have a negative impact on staff morale (HEFCE 2008)
Rankings – the way forward? • For those developing rankings: promote the use of the Berlin Principles (CEPES, CHE, IHEP, 2006): • Recognise the diversity of HEIs & take account of different missions & goals • Be transparent regarding methodology • Measure outcomes in preference to inputs • Use audited & verifiable data wherever possible • Provide consumers with a clear understanding of the factors involved & offer a choice in how they are displayed, i.e. to attach their own weightings
Rankings – EUA's response • The debate on rankings has been launched in EUA's policy bodies – Board and Council • There has also been discussion in the policy dialogue with Asian universities • EUA has established an internal working group on rankings to consider next steps – proposals will be made to the October 2009 Council meeting • EUA has commissioned a study on institutional diversity • EUA continues to advocate that rankings should not be used as a proxy for quality & thus for QA purposes
Conclusions ... • There is a fundamental difference between quality assurance and rankings: • QA processes should always be internally driven (even if there are external incentives) and aim at enhancing the quality of activities (usually through recommendations), thereby fostering a quality culture • rankings are externally driven and only state the current situation of an institution in comparison with other institutions on the basis of selected indicators
Conclusions .. • QA and evaluations usually take into account the variety of missions (the diversity of HE) and the processes behind the indicators • Rankings measure the performance of an institution against a certain (ideal) model of an institution, reflected in the compilers' choice of indicators • Whilst the compiler may use objective indicators, combining these indicators is always subject to judgement and hence subjective, as the sketch below illustrates
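A brief, hypothetical sketch of that last point: even with identical "objective" indicator scores, different (equally defensible) weightings can produce opposite orderings, so the combination step is exactly where the compiler's judgement enters. The institutions, indicators and weights below are invented for illustration.

# Hypothetical illustration: the same indicator scores, ranked under two different
# weighting schemes, yield opposite orderings -- the weights embody the compiler's judgement.

scores = {
    "University A": {"research": 90, "teaching": 55},
    "University B": {"research": 60, "teaching": 95},
}

def rank(weights):
    # Weighted composite per institution, then sort from highest to lowest
    composite = {name: sum(weights[k] * s[k] for k in weights) for name, s in scores.items()}
    return sorted(composite, key=composite.get, reverse=True)

print(rank({"research": 0.7, "teaching": 0.3}))  # ['University A', 'University B']
print(rank({"research": 0.3, "teaching": 0.7}))  # ['University B', 'University A']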