Understand the concepts of selection ratio, base rate, and utility in making hiring decisions. Learn about combining information and the difference between clinical and mechanical prediction methods.
Learning Objectives • Understand issues outside of predictive power that affect “validity” • To be able to understand and apply: • Selection Ratio • Base Rate • Understand the concept of Utility • Understand ways of combining information to make hiring decisions • Multiple regression vs. multiple hurdles • Clinical vs. mechanical combination
Validity: Concepts • First, let’s define what the terms mean: • Validity coefficient: the correlation of our predictor test with the outcome of interest (e.g., job performance) • Selection Ratio: the proportion of applicants who are actually selected into positions • Base Rate: the proportion of individuals in the population who can perform the job at at least a minimally proficient level
Quantifying Validity: Correct vs. Incorrect Decisions • When hiring, we make a correct decision when we select someone who performs acceptably well, AND when we reject someone who would not have • We make an incorrect decision when we select someone who does not perform acceptably well, AND when we fail to hire someone who would have
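To make the four outcomes concrete, here is a minimal Python sketch; the applicant scores, outcomes, and the cut score of 70 are all made up for illustration:

```python
# Tally the four hiring-decision outcomes: true/false positives and negatives.
# All data here are hypothetical.

applicants = [
    # (predictor score, would this person perform acceptably on the job?)
    (85, True), (78, False), (62, True), (91, True), (55, False),
]
CUT_SCORE = 70  # hypothetical hiring threshold on the predictor

counts = {"true_pos": 0, "false_pos": 0, "true_neg": 0, "false_neg": 0}
for score, competent in applicants:
    hired = score >= CUT_SCORE
    if hired and competent:
        counts["true_pos"] += 1    # correct decision: good hire
    elif hired and not competent:
        counts["false_pos"] += 1   # incorrect: hired a poor performer
    elif not hired and competent:
        counts["false_neg"] += 1   # incorrect: rejected a good performer
    else:
        counts["true_neg"] += 1    # correct decision: proper rejection

print(counts)  # {'true_pos': 2, 'false_pos': 1, 'true_neg': 1, 'false_neg': 1}
```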
Selection Ratios and Base Rates, Take 1 • Alright, so let’s say our firm has 10 openings in Widget Design. Our recruitment has netted us 100 applicants. • Our selection ratio is 10% • Now, widget design is a pretty complicated business, so only about 30% of people can do it well enough to meet our minimum standards. • Our base rate is 30%
Selection Ratio, Take 2 • Selection is just filling our job openings with people from the applicant pool • Usually we hire top down, best to worst • e.g., First we take the people with 4.0 GPAs, then the 3.9s, then the 3.8s, and so on (some of us would never get jobs if the world always worked this way…) • The selection ratio is just the ratio of the number of people we hire to the total number of applicants (hires/applicants) • Those 10 Widget Designers we have to hire • If we do have 100 applicants, our SR is 10% • But, if we’ve only got 20 applicants, our SR bloats to 50%
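The computation is just division, but a short sketch pins down the definition, using the widget numbers from the slide:

```python
def selection_ratio(hires: int, applicants: int) -> float:
    """Selection ratio = number hired / number of applicants."""
    return hires / applicants

print(selection_ratio(10, 100))  # 0.10 -- plenty of room to be picky
print(selection_ratio(10, 20))   # 0.50 -- the "bloated" 50% case
```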
Base Rate, Take 2 • The base rate is just the percentage of applicants who would be competent workers if we hired them • When the base rate is high, nearly everyone we might hire would be a competent worker • When the base rate is low, nearly everyone we might hire would not be • We’re not too useful when the base rate is high, because almost anyone off the street would be a good worker • We’re also not too useful when the base rate is low, because we’re more apt to make incorrect decisions (just because so few people would be good workers); the simulation sketch below illustrates this
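A simulation sketch of this point, with assumed values (validity .50, selection ratio 50%, normally distributed scores); the exact percentages are illustrative, but the pattern (the test buys the most at moderate base rates) holds:

```python
import numpy as np

rng = np.random.default_rng(0)

def success_rate(base_rate, validity=0.5, selection_ratio=0.5, n=100_000):
    """Fraction of hires who turn out competent, for a predictor that
    correlates `validity` with simulated job performance."""
    x = rng.standard_normal(n)                       # predictor (test) scores
    noise = rng.standard_normal(n)
    y = validity * x + np.sqrt(1 - validity**2) * noise   # job performance
    competent = y > np.quantile(y, 1 - base_rate)    # top `base_rate` can do the job
    hired = x > np.quantile(x, 1 - selection_ratio)  # hire the top test scorers
    return competent[hired].mean()

for br in (0.9, 0.5, 0.1):
    rate = success_rate(br)
    print(f"base rate {br:.0%}: competent hires {rate:.0%} "
          f"(gain over chance {rate - br:+.0%})")
```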
Bottom Line • So when can we do the most good? That is to say, when can we make the most effective decisions, all else being equal? • Make a good test • Make sure we can be picky (a favorable, i.e., low, selection ratio) • And the base rate is…? Moderate, near 50%, where a test has the most room to improve on chance
Utility: The Short Course • The basic utility equation: U = SDy × Zx × rxy • Where U is the expected utility (dollar gain per person hired) • SDy is the standard deviation of job performance in dollars • Zx is the average standardized score on the selection test of the applicants hired • rxy is the correlation between the selection test and the job performance criterion
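A direct translation of the slide’s equation into code, with hypothetical inputs (SDy = $10,000, hires averaging 1 SD above the applicant mean, validity .40):

```python
def expected_utility(sd_y: float, z_x: float, r_xy: float) -> float:
    """U = SDy * Zx * rxy: expected dollar gain per person hired."""
    return sd_y * z_x * r_xy

# Hypothetical numbers for illustration:
print(expected_utility(sd_y=10_000, z_x=1.0, r_xy=0.40))  # 4000.0 dollars per hire
```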
Cut Scores • When we have a predictor, we have to decide whom to hire • Set a cut score: a score on the predictor below which candidates will not be hired • Do this in one of two ways • Criterion-referenced: the cut score corresponds to minimally acceptable job performance (found via the regression equation) • Norm-referenced: based on candidates’ standing in the score distribution itself (e.g., hire only the top 10% of scorers)
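Sketches of both approaches; the regression coefficients, minimum performance level, and score distribution are all assumptions for illustration:

```python
import numpy as np

def criterion_referenced_cut(intercept, slope, min_acceptable_y):
    """Invert the regression line y_hat = intercept + slope * x to find the
    predictor score that predicts minimally acceptable performance."""
    return (min_acceptable_y - intercept) / slope

def norm_referenced_cut(scores, top_fraction):
    """Cut wherever the score distribution itself puts the top `top_fraction`."""
    return np.quantile(scores, 1 - top_fraction)

scores = np.array([55, 62, 70, 78, 85, 91])
print(criterion_referenced_cut(intercept=20, slope=0.5, min_acceptable_y=55))  # 70.0
print(norm_referenced_cut(scores, top_fraction=0.5))  # 74.0 -- admits the top half
```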
Combining Information • The simplest selection system has one predictor • Based on that predictor, we make an offer • Sometimes we have multiple predictors • Say, a formal test and a job interview • Combine the info, then make an offer • Multiple Hurdles • First we check to see if you have a college degree • Then we give you a test, take top 50% • Then we interview the remaining candidates • Make offer based only on the interview
Combining Information • Multiple regression – compensatory approach • Again, test + interview • A good interview will compensate for a weak test score • Multiple hurdles – non-compensatory • “Weed out” process • Have to pass each stage of the system
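A minimal sketch contrasting the two, with assumed z-scores, weights, and cut points; note that the same candidate can pass the compensatory system yet fail the hurdles:

```python
def compensatory(test_z, interview_z, w_test=0.5, w_interview=0.5, cut=0.0):
    """Multiple-regression style: a weighted composite, so a strong
    interview can offset a weak test score."""
    return w_test * test_z + w_interview * interview_z >= cut

def multiple_hurdles(has_degree, test_z, interview_z,
                     test_cut=0.0, interview_cut=0.5):
    """Non-compensatory: must clear every stage; failing any hurdle ends it."""
    return has_degree and test_z >= test_cut and interview_z >= interview_cut

# A weak test score (-0.5) paired with a great interview (+1.5):
print(compensatory(-0.5, 1.5))            # True  -- the interview compensates
print(multiple_hurdles(True, -0.5, 1.5))  # False -- failed the test hurdle
```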
Combining Information • Clinical vs. Mechanical prediction • Mechanical methods use a formula to combine the information (it may be just a sum or an average) • Clinical methods use a human judge who combines the information based on his/her judgment • Meehl’s “little black book” (1954): Clinical versus Statistical Prediction • Mechanical methods produce superior decisions • Except for “broken leg” cases, where a rare, decisive fact the formula can’t know changes the picture
Sawyer’s (1966) follow-up • Distinguished between the method of measurement and the method of data combination • Mechanical collection: objective tests (cognitive ability, personality, etc.) • Clinical collection: a human judge/rater (interview, simulation rating)
Combining info • Full clinical: clinical collection + clinical combination (all done by a human judge) • Full mechanical: mechanical collection + mechanical combination (all done actuarially) • Mechanical synthesis: mechanical and/or clinical collection + mechanical combination • Clinical synthesis: mechanical and/or clinical collection + clinical combination • Mechanical combination outperforms, even when the information was collected clinically
Bottom line • The key thing is to combine the information mechanically • People can be very good at collecting information (e.g., in an interview) • But, we’re unsystematic in putting it all together • This is one of the most robust findings in psychology, but a lot of people like to ignore it • Even if you just add the numbers up, with no fancy statistics, you’re probably way ahead of the game
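A minimal sketch of “just add the numbers up”: unit-weighted mechanical combination, standardizing each (hypothetical) predictor and averaging the z-scores:

```python
import statistics

def unit_weighted_composite(predictor_scores):
    """Mechanical combination at its simplest: standardize each predictor,
    then average the z-scores per applicant -- no clinical judgment."""
    z_columns = []
    for column in predictor_scores:  # one list of scores per predictor
        mu, sd = statistics.mean(column), statistics.stdev(column)
        z_columns.append([(s - mu) / sd for s in column])
    # zip(*...) regroups the z-scores applicant by applicant
    return [statistics.mean(zs) for zs in zip(*z_columns)]

test = [70, 85, 55, 91]   # hypothetical test scores
interview = [3, 4, 5, 2]  # hypothetical interview ratings (1-5 scale)
print(unit_weighted_composite([test, interview]))
```

Rank-ordering candidates on this composite is already a mechanical combination; no regression weights are required to beat unsystematic clinical judgment.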