Faking in personnel selection: Does it matter and can we do anything about it?
Eric D. Heggestad, University of North Carolina - Charlotte
Educational Testing Service Mini-Conference, Oct 13th & 14th, 2006
Four Questions About Faking in Personnel Selection Contexts • Can people fake? • Do applicants fake? • Does faking matter? • I will talk about one project • What do we do about it? • I will talk about one project
Effects on Validity and Selection (Mueller-Hanson, Heggestad, & Thornton, 2003) • Participants completed personality and criterion measures in a lab setting • Personality measure • Achievement Motivation Inventory • Criterion measure • A speeded ability test administered with no time limit • Could leave when they wanted; opportunity for normative feedback • Groups • Honest (n = 240) vs. faking (n = 204)
Means & Standard Deviations

             Honest Group   Faking Group   Effect Size
Predictor    214.7          225.6           0.41
Criterion     40.5           40.1          -0.05
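The effect sizes in the table above look like standardized mean differences between the honest and faking groups. A minimal sketch of that computation, assuming Cohen's d with a pooled standard deviation (the function and score vectors below are illustrative, not taken from the study's materials):

```python
import numpy as np

def cohens_d(honest_scores, faking_scores):
    """Standardized mean difference between two independent groups,
    using the pooled standard deviation (Cohen's d)."""
    honest = np.asarray(honest_scores, dtype=float)
    faking = np.asarray(faking_scores, dtype=float)
    n1, n2 = len(honest), len(faking)
    pooled_sd = np.sqrt(((n1 - 1) * honest.var(ddof=1) +
                         (n2 - 1) * faking.var(ddof=1)) / (n1 + n2 - 2))
    # Positive values mean the faking group scored higher on average
    return (faking.mean() - honest.mean()) / pooled_sd

# Example (hypothetical vectors): d = cohens_d(honest_predictor, faking_predictor)
```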
Criterion-Related Validity

              Honest Group   Faking Group
Full Groups   .17*           .05
Upper third   .20*           .07
Lower third   .26*           .45*

* p < .05
But Validity is Only Skin Deep • Important to look at selection • Groups were combined and various selection ratios examined • Variables examined • Percent of selectees from each group • Performance of those selected
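A sketch of the combine-and-select procedure described above: pool both groups, select top-down on the predictor at a given selection ratio, and record who was selected and how they performed on the criterion. The column and function names are assumptions for illustration, not the study's actual analysis code.

```python
import pandas as pd

def top_down_selection(applicants, selection_ratio):
    """Select the top proportion of the pooled applicant pool on the predictor,
    then summarize group composition and criterion performance of selectees.

    `applicants` is assumed to have columns: 'group' ('honest' or 'faking'),
    'predictor', and 'criterion'.
    """
    n_selected = max(1, round(selection_ratio * len(applicants)))
    selected = applicants.nlargest(n_selected, "predictor")
    return {
        "selection_ratio": selection_ratio,
        "pct_honest_among_selectees": 100 * (selected["group"] == "honest").mean(),
        "mean_criterion_of_selectees": selected["criterion"].mean(),
    }

# Example sweep over several selection ratios:
# results = [top_down_selection(pooled_df, r) for r in (0.05, 0.10, 0.25, 0.50)]
```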
Effects on Selection: Percent hired at various selection ratios
[Figure: percent of selectees from each group plotted against selection ratio (%)]
Note: The honest group made up 54% of the sample
Effects on Selection: Group performance at various selection ratios
[Figure: mean performance of those selected plotted against selection ratio (%)]
Conclusions • Faking appears to have… • An impact on the criterion-related validity of our predictor • Most noticeably at the high end of the distribution • An impact on the quality of decisions • Low performing fakers more likely to be selected in top-down contexts
What Do We Do About Faking? • Approach 1: Detection and Correction • Tries to correct faking that has already occurred • Score corrections • Not successful (Ellingson, Sackett & Hough, 1999; Schmitt & Oswald, 2006) • IRT work • Retesting
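To make the "detect and correct" idea concrete, here is a generic sketch of one common form of score correction: residualizing a trait scale on a social-desirability (lie) scale. This is only an illustration of the general approach, not the specific procedures evaluated in the studies cited above.

```python
import numpy as np

def sd_corrected_scores(trait_scores, social_desirability_scores):
    """Remove the component of a trait scale that is linearly predictable
    from a social-desirability scale (simple regression residualization),
    keeping the corrected scores on the original trait metric."""
    trait = np.asarray(trait_scores, dtype=float)
    sd = np.asarray(social_desirability_scores, dtype=float)
    slope = np.cov(trait, sd, ddof=1)[0, 1] / np.var(sd, ddof=1)
    return trait - slope * (sd - sd.mean())

# Example (hypothetical arrays): corrected = sd_corrected_scores(consc, lie_scale)
```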
What Do We Do About Faking? • Approach 2: Prevention • Many prevention strategies • Warnings • Subtle items • Multidimensional forced-choice (MFC) response formats
What is an MFC Format? • Dichotomous quartet format • Item contains four statements • Each statement represents a different trait • 2 statements positively worded, 2 statements negatively worded • Indicate “Most Like Me” and “Least Like Me”
Example MFC Item

Statement                                       Most Like Me   Least Like Me
Avoid difficult reading material (-)
Only feel comfortable with friends (-)
Believe that others have good intentions (+)
Make lists of things to do (+)

The respondent places one X in each column, marking one statement as "Most Like Me" and one as "Least Like Me".
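A minimal sketch of how a response to a quartet like the one above might be scored: the "Most Like Me" statement adds a point to its trait (in the keyed direction) and the "Least Like Me" statement subtracts one. The trait assignments and the scoring rule here are illustrative assumptions, not the IPIP MFC measure's actual key.

```python
# Each statement in the quartet is tagged with an (assumed) trait and keying direction.
quartet = [
    {"text": "Avoid difficult reading material",         "trait": "Openness",          "key": -1},
    {"text": "Only feel comfortable with friends",       "trait": "Extroversion",      "key": -1},
    {"text": "Believe that others have good intentions", "trait": "Agreeableness",     "key": +1},
    {"text": "Make lists of things to do",               "trait": "Conscientiousness", "key": +1},
]

def score_quartet(item, most_idx, least_idx, scores):
    """'Most Like Me' adds +1 and 'Least Like Me' adds -1 to the statement's
    trait, with negatively keyed statements reversed; unmarked statements score 0."""
    scores[item[most_idx]["trait"]] += item[most_idx]["key"]
    scores[item[least_idx]["trait"]] -= item[least_idx]["key"]
    return scores

traits = ["Stability", "Extroversion", "Openness", "Agreeableness", "Conscientiousness"]
scores = dict.fromkeys(traits, 0)
# e.g., "Make lists of things to do" marked Most, "Avoid difficult reading material" marked Least:
scores = score_quartet(quartet, most_idx=3, least_idx=0, scores=scores)
```

Because every respondent distributes the same fixed "most/least" credit across the statements in each item, trait scores are constrained relative to one another, which is why formats like this tend to yield partially ipsative rather than fully normative scores (the issue raised below).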
MFC Formats • Appears to be faking resistant (Christiansen et al., 1998; Jackson et al., 2000) • Example from Jackson et al. (2000) • Likert-type format effect size = .95 • MFC format effect size = .32
However… • Normative vs. ipsative • MFC measures typically provide partially ipsative measurement • Selection settings require normative assessment • Also, evaluations have focused on group-level analyses
Forced-Choice as Prevention? Heggestad, Morrison, Reeve & McCloy (2006) • Two studies • Study 1 – Do MFC measures provide normative trait information? • Study 2 – Are MFC measures resistant to faking at individual level?
Study 1 Do MFC measures provide normative information? • Participants (n = 307) completed three measures under honest instructions • NEO-FFI • IPIP Likert measure • IPIP MFC measure • Conducted three data collections to create this measure
Study 1 Do MFC measures provide normative information? • Logic: If MFC provides normative information, then correspondence between … • IPIP-Likert and IPIP-MFC scales should be quite good • Each IPIP measure and the NEO-FFI should be similar
Study 1 Do MFC measures provide normative information?

Correlations
                    NEO & IPIP Likert   NEO & IPIP MFC   IPIP Likert & IPIP MFC
Stability           .81                 .68              .59
Extroversion        .87                 .67              .58
Openness            .75                 .76              .65
Agreeableness       .75                 .70              .64
Conscientiousness   .83                 .81              .71
Study 1 Do MFC measures provide normative information? • We also defined correspondence as mean percentile differences across the measures
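A sketch of how the two correspondence indices used in Study 1 might be computed for a single trait measured by two instruments: the correlation between the scale scores and the mean absolute difference in within-sample percentile ranks. Variable names are illustrative; this is not the study's analysis code.

```python
import numpy as np
from scipy.stats import rankdata

def correspondence(scale_a, scale_b):
    """Return (r, mean_percentile_difference) for two scorings of the same trait
    on the same respondents: a Pearson correlation and the mean absolute
    difference in within-sample percentile ranks."""
    a = np.asarray(scale_a, dtype=float)
    b = np.asarray(scale_b, dtype=float)
    r = np.corrcoef(a, b)[0, 1]
    pct_a = 100 * rankdata(a) / len(a)
    pct_b = 100 * rankdata(b) / len(b)
    return r, float(np.mean(np.abs(pct_a - pct_b)))

# Example: r, mean_diff = correspondence(neo_conscientiousness, mfc_conscientiousness)
```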
Study 1 Do MFC measures provide normative information?

Mean Percentile Rank Differences
                    NEO & IPIP Likert   NEO & IPIP MFC   IPIP Likert & IPIP MFC
Stability           14.00               18.29            21.13
Extroversion        11.38               18.61            20.49
Openness            15.22               15.28            18.58
Agreeableness       16.39               17.63            19.31
Conscientiousness   12.61               14.07            16.96
Study 1 Do MFC measures provide normative information? • Conclusions • MFC seems to do a reasonable job of capturing normative trait information • People can be compared directly!
Study 2 Are MFC measures resistant to faking at individual level? • Participants (n = 282) completed three measures • NEO-FFI (honest instructions) • IPIP Likert (faking instructions) • IPIP MFC (faking instructions)
Replication of Previous Findings

Effect Sizes
                    IPIP Likert   IPIP MFC
Stability           0.75          0.61
Extroversion        0.65          0.33
Openness            0.36          0.13
Agreeableness       0.65          0.07
Conscientiousness   1.23          1.20
Study 2 Are MFC measures resistant to faking at individual level? • Logic: If MFC is resistant to faking at the individual level, then… • NEO-FFI (honest) should correspond closely with IPIP-MFC, which should behave as if completed honestly • and • NEO-FFI (honest) should show weak correspondence with IPIP-Likert (fakeable), as should IPIP-MFC with IPIP-Likert
Study 2 Are MFC measures resistant to faking at individual level?

Correlations
                    IPIP Likert & IPIP MFC   NEO & IPIP Likert   NEO & IPIP MFC
Stability           .62                      .37                 .26
Extroversion        .61                      .37                 .36
Openness            .59                      .53                 .55
Agreeableness       .48                      .50                 .52
Conscientiousness   .68                      .40                 .39
Study 2 Are MFC measures resistant to faking at individual level?

Mean Percentile Rank Differences
                    IPIP Likert & IPIP MFC   NEO & IPIP Likert   NEO & IPIP MFC
Stability           20.23                    25.29               28.87
Extroversion        21.09                    24.23               26.12
Openness            20.44                    21.85               20.69
Agreeableness       24.33                    21.54               22.82
Conscientiousness   18.05                    23.47               23.75
Study 2 Are MFC measures resistant to faking at individual level? • Conclusion • MFC not a solution to faking • Can fake specific scales • Not faking resistant at individual level
Summary and Conclusion • Faking does impact scores • Changes the nature of the score • Not likely to have a big effect on criterion-related validity • Could have notable implications for selection • The dichotomous quartet response format does not offer a viable remedy