Exploring the Equivalence and Rater Bias in AC Ratings Prof Gert Roodt – Department of Industrial Psychology and People Management, University of Johannesburg Sandra Schlebusch – The Consultants ACSG Conference 17 – 19 March 2010
Presentation Overview • Background and Objectives of the Study • Research Method • Results • Discussion and Conclusions • Recommendations
Background • Construct Validity has Long been a Problem in ACs (Jones & Born, 2008) • Perhaps the Mental Models that Raters Use are Part of the Problem • However, Other Factors that Influence Reliability Should not be Neglected
Background Continued • To Increase Reliability, Focus on All Aspects of the Design Model (Schlebusch & Roodt, 2007): • Analysis • Design • Implementation • Context • Participants: • Process Owners (Simulation Administrator; Raters; Role-players)
Background Continued • Analysis (International Guidelines, 2009) • Competencies / Dimensions • Also Characteristics of Dimensions (Jones & Born, 2008) • Situations • Trends/Issues in Organisation • Technology
Background Continued • Design of Simulations • Fidelity • Elicit Behaviour • Pilot
Background Continued • Implementation • Context: • Purpose • Participants • Simulation Administration (Potosky, 2008) • Instructions • Resources • Test Room Conditions
Background Continued • Raters • Background • Characteristics • “What are Raters Thinking About When Making Ratings?” (Jones & Born, 2008)
Sources of Rater Bias • Rater Differences (background; experience, etc.) • Rater Predisposition (attitude; ability; knowledge; skills, etc.) • Mental Models
Objective of the Study The Focus of this Study is on Equivalence and Rater Bias in AC Ratings More specifically on: • Regional Differences • Age Differences • Tenure Differences • Rater Differences
Research Method Participants (Ratees) Region
Research Method (cont.) Participants (Ratees) Age
Research Method (cont.) Participants (Ratees) Tenure
Research Method (cont.) • Measurement: • In-Basket Test • Measuring Six Dimensions: • Initiative; • Information Gathering; • Judgement; • Providing Direction; • Empowerment; • Management Control • Overall In-Basket Rating
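For illustration only, a minimal sketch of how one ratee's in-basket record might be structured. The six dimension names follow the slide above; treating the Overall In-Basket Rating as the mean of the six dimensions is an assumption, since the deck does not state how the overall rating was derived:

```python
# Hypothetical record layout for one ratee's in-basket ratings.
# Aggregating the overall rating as a mean is an assumption for
# illustration, not the authors' documented scoring rule.
from statistics import mean

DIMENSIONS = [
    "initiative", "information_gathering", "judgement",
    "providing_direction", "empowerment", "management_control",
]

ratings = {
    "initiative": 3, "information_gathering": 4, "judgement": 3,
    "providing_direction": 2, "empowerment": 3, "management_control": 4,
}

overall = mean(ratings[d] for d in DIMENSIONS)  # assumed aggregation
print(f"Overall in-basket rating: {overall:.2f}")
```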
Research Method (cont.) Procedure: Ratings were Conducted by 3 Raters on 1057 Ratees
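As a hedged illustration (not the authors' reported analysis), differences between the three raters could be screened with a one-way ANOVA across raters. The file name and column names below are assumptions:

```python
# Minimal sketch: test whether mean ratings differ across the three
# raters, assuming a long-format table with hypothetical columns
# 'rater' and 'rating' (one row per ratee-rater pair).
import pandas as pd
from scipy import stats

df = pd.read_csv("inbasket_ratings.csv")  # hypothetical file name

groups = [g["rating"].to_numpy() for _, g in df.groupby("rater")]
f_stat, p_value = stats.f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```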
Results Initiative
Results (cont.) Initiative
Results (cont.) Information Gathering
Results (cont.) Information Gathering
Results (cont.) Judgement
Results (cont.) Judgement
Results (cont.) Providing Direction
Results (cont.) Providing Direction
Results (cont.) Empowerment
Results (cont.) Empowerment
Results (cont.) Control
Results (cont.) Control
Results (cont.) Overall In-Basket Rating
Results (cont.) Regional Differences
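One common way to test group differences on ordinal ratings (not necessarily the analysis used in this study) is the Kruskal-Wallis H-test. The data layout and column names below are assumed:

```python
# Sketch: Kruskal-Wallis H-test for regional differences on one
# dimension, assuming hypothetical columns 'region' and 'judgement'.
import pandas as pd
from scipy import stats

df = pd.read_csv("inbasket_ratings.csv")  # hypothetical file name

samples = [g["judgement"].to_numpy() for _, g in df.groupby("region")]
h_stat, p_value = stats.kruskal(*samples)
print(f"H = {h_stat:.2f}, p = {p_value:.4f}")
```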
Results (cont.) Age Differences
Results (cont.) Tenure Differences
Results (cont.) Rater Differences
Results (cont.) Post Hoc Tests: Judgement
Results (cont.) Post Hoc Tests: Providing Direction
Results (cont.) Post Hoc Tests: Empowerment
Results (cont.) Post Hoc Tests: Control
Results (cont.) Post Hoc Tests: In-Basket
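For readers unfamiliar with post hoc tests, a minimal sketch of a pairwise post hoc comparison between the three raters using Tukey's HSD. The slides do not state which post hoc procedure was used, and the column names are assumptions:

```python
# Sketch: pairwise post hoc comparison of the raters' in-basket
# ratings with Tukey's HSD; 'rating' and 'rater' columns are assumed.
import pandas as pd
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.read_csv("inbasket_ratings.csv")  # hypothetical file name
result = pairwise_tukeyhsd(endog=df["rating"], groups=df["rater"], alpha=0.05)
print(result.summary())
```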
Results (cont.) Non-Parametric Correlations
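A minimal sketch of a non-parametric (Spearman rank) correlation between two dimension ratings; the column names are assumptions:

```python
# Sketch: Spearman rank correlation between two hypothetical
# dimension columns, e.g. 'judgement' and 'empowerment'.
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("inbasket_ratings.csv")  # hypothetical file name
rho, p_value = spearmanr(df["judgement"], df["empowerment"])
print(f"Spearman rho = {rho:.2f}, p = {p_value:.4f}")
```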
Discussion • Clear Regional, Age, and Tenure Differences Do Exist among Participants • Possible Sources of the Differences: • Regional Administration of In-Basket • Thus Differences in Administration Medium (Potosky, 2008) • Different Administrators (Explaining Purpose; Giving Instructions; Answering Questions) • Different Resources • Different Test Room Conditions
Discussion (cont.) • Differences Between Participants Regionally: • English Language Ability (not tested) • Motivation to Participate in the Assessment (not tested) • Differences in Employee Selection Processes as well as Training Opportunities (Burroughs et al., 1973) • Simulation Fidelity (not tested)
Discussion (cont.) • Clear Regional, Age, and Tenure Differences Do Exist among Participants • Supporting Findings by Burroughs et al. (1973) • Age does Significantly Influence AC Performance • Participants from Certain Departments Perform Better
Discussion (cont.) • Appropriateness of In-Basket for Ratees • Level of Complexity • Situation Fidelity Recommendations: • Ensure Documented Evidence (Analysis Phase in Design Model) • Pilot In-Basket on Target Ratees (Design Phase of Design Model) • Shared Responsibility of Service Provider and Client Organisation
Discussion (cont.) • Context in Which the In-Basket is Administered • Purpose Communicated Recommendations: • Ensure Participants (Ratees) and Process Owners Understand and Buy into the Purpose
Discussion (cont.) • Consistent Simulation Administration: • Instructions Given Consistently • Interaction with Administrator • Appropriate Resources Available During Administration • Test Room Conditions Appropriate for Testing Recommendations: • Ensure All Administrators are Trained • Standardise Test Room Conditions
Discussion (cont.) • Rater Differences do Exist • Possible Sources of Rater Differences: • Background (All from a Psychology Background, with Management Experience) • Characteristics such as Personality (Bartels & Doverspike) • Owing to Cognitive Load on Raters • Owing to Differences in Mental Models (Jones & Born, 2008)