Training Evaluation
Training evaluation provides the data needed to demonstrate that training delivers benefits to the company.
What are the differences among: • Training effectiveness • Training outcomes • Training evaluation • Evaluation design
Types of Evaluation • Formative • Summative
Why Evaluate Training Programs? • Objectives • Satisfaction • Benefits • Comparison
Objectives = Foundation • Terminal behavior • Conditions under which the terminal behavior is expected • The standard below which performance is unacceptable → together, these form the criteria by which the trainee is judged
The Evaluation Process • 1. Conduct a needs analysis • 2. Develop measurable learning outcomes and analyze transfer of training • 3. Develop outcome measures • 4. Choose an evaluation strategy • 5. Plan and execute the evaluation
Training Outcomes: Kirkpatrick’s Four-Level Framework of Evaluation Criteria • Level 1 – Reactions: trainee satisfaction (aka affective) • Level 2 – Learning: acquisition of knowledge, skills, attitudes, behavior (aka cognitive) • Level 3 – Behavior: improvement of behavior on the job (aka skills) • Level 4 – Results: business results achieved by trainees
How do you know if your outcomes are good? Good training outcomes need to be: • Relevant • Reliable • Discriminative • Practical
Good Outcomes: Relevance • Criterion relevance – the extent to which training outcomes are related to the learned capabilities emphasized in the training program • Criterion contamination – the extent to which training outcomes measure inappropriate capabilities or are affected by extraneous conditions • Criterion deficiency – the failure to measure training outcomes that were emphasized in the training objectives
Criterion deficiency, relevance, and contamination (diagram): three overlapping sets – (1) outcomes identified by the needs assessment and included in the training objectives, (2) outcomes related to the training objectives, and (3) outcomes measured in the evaluation. Outcomes both related to the objectives and measured are relevance; outcomes measured but unrelated to the objectives are contamination; outcomes related to the objectives but not measured are deficiency.
Good Outcomes (continued) • Reliability – the degree to which outcomes can be measured consistently over time • Discrimination – the degree to which trainees’ performance on the outcome actually reflects true differences in performance • Practicality – the ease with which the outcome measures can be collected
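Reliability, for example, is often checked by correlating scores from two administrations of the same measure. Below is a minimal Python sketch of that check; the trainee scores are hypothetical, not from the source:

def pearson_r(x, y):
    # Pearson correlation between two equal-length lists of scores.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# Hypothetical test-retest data: the same six trainees measured twice.
time1 = [70, 82, 65, 90, 75, 88]
time2 = [72, 80, 66, 93, 74, 85]
print(f"test-retest reliability: r = {pearson_r(time1, time2):.2f}")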
Training Evaluation Practices (chart): percentage of courses using each type of outcome.
Utility Analysis: Selection [(Ns) × (T) × (r) × (SDy) × (Zs)] − [(N) × (C)] • Ns = number of applicants selected • T = tenure of selected group in years • r = correlation between predictor and job performance (VALIDITY) • SDy = standard deviation of job performance • Zs = average standard predictor score of selected group • N = number of applicants • C = cost per applicant
Utility Analysis: Training [(Nc) × (T) × (r) × (SDy) × (Zs)] − [(N) × (C)] • Nc = number of trainees who complete the program • T = duration of training benefit • r = correlation between training criterion and job performance (VALIDITY) • SDy = standard deviation of job performance • Zs = average standard criterion score of trainees • N = total number of trainees enrolled • C = cost per trainee
Training Costs • Direct • Indirect • Development • Overhead • Compensation for Trainees
For On-the-Job Training: $81,000 • Nc = 50 = number of trainees who complete the program • T = 1 = duration of training benefit (years) • r = .50 = correlation between training criterion and job performance (VALIDITY) • SDy = 4800 = standard deviation of job performance (assume 40% of base pay: $12,000 × .40) • Zs = .80 = average standard criterion score of trainees • N = 100 = total number of trainees enrolled • C = 150 = cost per trainee [(Nc) × (T) × (r) × (SDy) × (Zs)] − [(N) × (C)] = (50 × 1 × .50 × 4800 × .80) − (100 × 150) = $96,000 − $15,000 = $81,000
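As a quick check, the arithmetic can be scripted. A minimal Python sketch (the function and variable names are mine, not from the source) that reproduces the $81,000 figure and works for both the selection and training versions of the formula:

def utility(n_complete, t, r, sd_y, z_s, n_total, cost):
    # Utility = benefit generated by those who complete the program
    # minus the cost of everyone enrolled (or all applicants, for selection).
    benefit = n_complete * t * r * sd_y * z_s
    return benefit - n_total * cost

# On-the-job training example from the slide above.
print(utility(n_complete=50, t=1, r=0.50, sd_y=4800, z_s=0.80,
              n_total=100, cost=150))  # -> 81000.0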
Experimental Designs: Choices • Pretest/posttest • Control groups
Experimental Designs • 1: 1 group, posttest only • 2: 1 group, pretest/posttest • 3: Pretest/posttest control group • 4: Solomon four-group • 5: Time-series • 6: Nonequivalent control group
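Design 3 (pretest/posttest control group), for instance, typically estimates the training effect as the trained group’s gain minus the control group’s gain. A minimal Python sketch with made-up group means (all numbers hypothetical):

def training_effect(pre_trained, post_trained, pre_control, post_control):
    # Difference-in-differences: gain of trained group minus gain of controls.
    return (post_trained - pre_trained) - (post_control - pre_control)

# Hypothetical mean scores on the outcome measure.
print(training_effect(pre_trained=62.0, post_trained=78.0,
                      pre_control=61.0, post_control=66.0))  # -> 11.0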
Experimental Designs: Validity • Internal • External
Experimental Designs: Threats to Internal Validity • History • Maturation • Testing • Instrumentation • Regression toward the mean • Differential selection • Experimental mortality • Interactions • Diffusion/imitation of treatments • Compensatory equalization of treatments • Rivalry/desirability of treatments • Demoralization
Experimental Designs: Threats to External Validity • Reactive effect of pretesting • Interaction of selection & treatment • Reactive effects of experimental settings • Multiple-treatment interference
Issues in Training Validity • Training validity • Transfer validity • Intra-organizational validity • Inter-organizational validity