Module 5: Evaluating training effectiveness MOA – FAO – TCP Workshop on Managing Training Institutions Beijing, 11 July 2012
Objectives (I) • To understand a major international model for evaluating training effectiveness.
Objectives (II) • To review national and international examples of how training institutions evaluate training effectiveness.
Objectives (III) • To work in small groups to design evaluation activities relating to actual modules or courses of instruction from the FFRC and HHRRC.
The Kirkpatrick framework • Named for Donald Kirkpatrick, who developed this approach to evaluation in the 1950s. • Still the most common framework for the evaluation of training in North America.
1. Reaction • Frequently referred to as "happy face" evaluation, this level measures participant reaction to, and satisfaction with, the program and the learning environment. • If trainees are not satisfied with their training, the likelihood of learning is reduced.
2. Learning • Changes in knowledge, skills, and / or attitudes. • Learning is the direct goal of training – without it, the likelihood of behaviour change and impact is reduced.
3. Behaviour change • This level determines whether changes in behaviour have occurred as a result of the program. • Behaviour change may fail to occur even when trainees were satisfied with, and learned from, a training program.
4. Results • This level looks at the final results that occurred because the participants attended the program. • Results are "the bottom line" or impact of the program.
Evaluation beyond Kirkpatrick • The Kirkpatrick framework helps us to evaluate the core questions of training: did the trainees learn anything, and did that learning make any difference to their performance and their world? • There are other pertinent questions to ask when evaluating training programs.
Other evaluation questions • Were resources invested in the training program used efficiently to produce the desired outcomes? • Were major stakeholders to the training program satisfied with its process and outcomes?
Other evaluation questions • Was the training program developed and delivered in a manner which could be sustained over time? • What is the relationship between the training program and the strategic direction of the training organization?
Stages of evaluation • Planning for evaluation. • Formative (process) evaluation. • Summative (terminal / impact) evaluation.
Methods of evaluation • Review of documents and records associated with a training program. • Observation of program delivery by evaluators and experts.
Methods of evaluation • Surveys and interviews with trainees or informants able to provide feedback regarding the trainees.
Methods of evaluation • Focus groups with trainees or stakeholders. • Examinations or assignments to test the knowledge and skills of trainees (sometimes done as pre-test and post-test).
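The pre-test / post-test method above can be illustrated with a minimal sketch. The scores below are hypothetical, for illustration only; they are not taken from the workshop materials. The sketch simply compares mean scores before and after training to estimate learning gain (Kirkpatrick level 2).

```python
# Hypothetical pre-test / post-test scores for a small training cohort
# (illustrative numbers only, not real workshop data).
pre_scores = [52, 60, 45, 70, 58]
post_scores = [68, 75, 61, 82, 70]

def mean(scores):
    """Arithmetic mean of a list of test scores."""
    return sum(scores) / len(scores)

# Average learning gain: post-test mean minus pre-test mean.
gain = mean(post_scores) - mean(pre_scores)
print(f"Mean pre-test:  {mean(pre_scores):.1f}")
print(f"Mean post-test: {mean(post_scores):.1f}")
print(f"Mean gain:      {gain:.1f}")
```

In practice an evaluator would also check whether the gain is meaningful for the cohort size (for example with a paired statistical test), but a simple mean comparison is often the first step.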
Example of evaluation work • Standard, online course evaluation from Continuing Education at the University of Calgary. • http://fluidsurveys.com/surveys/reg/scott-s-test-june-14-2012/ • See the Training Manual for • Background • Survey questions, procedures, and management
Example of evaluation work • Systematic Program Review (SPR) from Continuing Education at the University of Calgary. • See the Training Manual for • Background • SPR criteria, methods, and indicators
Example of evaluation work • Canadian Agriculture Lifelong Learning (CALL) program • See the Training Manual for • Background • Detailed information on the four levels
Example of evaluation work • JICA evaluation of Chinese interns / students receiving agricultural training in Japan. • See the Training Manual for summary.
Thank you. Time for questions and discussion.