PSY 6430, Unit 7: Survey of Tests • Schedule: Today and Wednesday: Lecture; Monday, 4/08: Exam
SO2 (this IS FE): Most important source for tests • Mental Measurements Yearbook (first published in 1938) • Now in its 19th edition, updated in 2012 • You can access it online for free through our library • I included the Szostek & Hobson article so you can get an idea of how important this resource is from a legal perspective – the article was published in 2011 and is up-to-date • Courts have acknowledged it as the “bible of testing” and the “authoritative source on testing” (Just a brief mention of the MMY – you should always check the MMY reviews for any off-the-shelf test an organization is planning to use – not going over the other study objectives on the article)
SO8 Intro: Achievement vs. Aptitude Tests: An Arbitrary Distinction • For decades, tests have been classified as either achievement tests or aptitude tests • Definitions of “achievement” and “aptitude” • Achievement: The act of accomplishing or finishing something successfully, especially by means of skill, practice, or perseverance • Aptitude: A natural or acquired talent or ability or inclination; quickness in learning and understanding - intelligence • The distinction represents the mind-body dualism typical of traditional testing (in this material, GFB argue that the terms “achievement test” and “aptitude test” are inappropriate and should be replaced with the term “ability test” - and I agree with them; excellent material)
SO8A, NFE: Typical distinction between achievement and aptitude tests • Achievement tests (supposedly) measure • What a person learned as a result of a specific structured educational/training experience/course • Scores are interpreted to be a measure of how much an individual knows as a result of the education or training • English grammar, math, science, social studies, etc. • These are the types of tests used in grade school and high school to measure student learning/proficiency • In Michigan, MEAP tests: Michigan Educational Assessment Program
SO8A, NFE: Typical distinction between achievement and aptitude tests • Aptitude tests (supposedly) measure • Accumulation of learning from a number of diverse and usually informal learning experiences • Although not emphasized by GFB, there is a genetic implication • You have artistic ability or you don’t • You have mechanical aptitude or you don’t • Women don’t have an aptitude for math • Men don’t have good spatial aptitude • Said to measure potential to learn, or the potential to develop new skills and acquire new knowledge • If you don’t have the aptitude you can’t be a good artist, mechanic, or mathematician • Intelligence tests, SATs, GREs, Artistic Aptitude • These are the tests that you are told you can’t study for (hogwash - most people don’t say that anymore) (Olympic athletes and musicians have “natural” ability – then we learn the parent was an Olympic athlete or musician – both parents were musicians…)
SO8B&C, FE: Why is the distinction arbitrary? • All tests measure what a person has learned up to the time he or she takes the test and that is the only thing a test can measure • They cannot and do not measure innate or unlearned potential (even if that existed) • Thus, the distinction between achievement tests and aptitude tests is arbitrary and • We should use the term “ability tests” for both types of tests • Ability in the sense of competence or proficiency, regardless of how you have acquired the ability/skill
SO8D: Tests can still be used to predict how well someone will perform. Why? • Tests can and do measure the prerequisites that are necessary for further learning in a specified area, and thus can predict future learning/performance • If students do not do well in PSY 3600, Concepts and Principles of Behavior Analysis, they cannot do well in PSY 4600, Survey of Behavior Analysis Research; thus a student’s grade in PSY 3600 can predict his or her performance in PSY 4600 • You can’t balance an equation in chemistry unless you know algebra, thus a test of algebra can predict performance in a chemistry class (not in text, but important to understand)
SO9: (NFE) Mental Ability and Cognitive Ability Tests = Intelligence Tests • Mental ability tests were at the center of early critical Supreme Court decisions regarding unfair discrimination • Thus, many companies stopped using them • However, there is a lot of research in selection that indicates that mental ability tests are related to almost all jobs • Validity correlations are often quite high, and higher than correlations of other tests • Many companies are now using them again • Remember, however, if you use one of these, you must conduct an empirical validity study (or use validity generalization - risky) (as a behavior analyst, I still have trouble with the term “mental” ability since it still implies mind-body dualism; I’m more comfortable, though not completely, with “cognitive” ability, but I haven’t been able to come up with anything different than that, and I certainly like those terms better than “intelligence” tests)
SO9: Why is it that all mental ability tests are not interchangeable? • A rose is not a rose is not a rose • A mental ability test is not a mental ability test is not a mental ability test • Mental ability tests measure a collection of abilities - a learned repertoire that typically includes: • Verbal, math, memory, and reasoning abilities • 14 different abilities are often measured in some combination by mental ability tests (next slide) • Different mental ability tests often measure a different set of these abilities • Thus a person may score differently on different tests of mental ability (FE: main abilities include some form of verbal, math, memory, and reasoning abilities)
(NFE) Abilities Measured by Various Mental Ability Tests • Memory span • Numerical fluency • Verbal comprehension • Conceptual classification • Semantic relations • General reasoning • Conceptual foresight • Figure classification • Spatial orientation • Visualization • Intuitive Reasoning • Ordering • Figure identification • Logical evaluation and deduction (that is why if you use the PAQ you must take great care in selecting tests that are similar to the GATB tests that are recommended)
SO11, NFE: Why should these tests be called “mental ability” rather than “intelligence” or “I.Q.” tests? • The term mental ability makes it explicit that these tests measure various cognitive abilities of the applicant (and not some innate, unlearned, hypothetical construct called “intelligence”) • These cognitive abilities are most directly identified by what is measured (some combination of the 14 abilities listed earlier) and by the content of the items themselves • They should be thought of in the same way as the other abilities discussed in the book • e.g., mechanical ability, clerical ability • In other words, the authors are resisting the traditional view that there is something called “intelligence”
(NFE) Popular mental ability tests • I am going to show you some examples of mental ability tests at the end of class, just to “de-mystify” them a bit • The authors describe the Wonderlic Personnel Test which is probably the most popular • Given to all players at the NFL Scouting Combine and scores are reported to NFL teams before the annual draft • For a moment, look at items in the text that are similar to the ones on the Wonderlic Personnel Test
Examples of items similar to those on the Wonderlic 1. Which of the following months has 30 days? (a) February (b) June (c) August (d) December 2. Alone is the opposite of: (a) happy (b) together (c) single (d) joyful 3. Which is the next number in this series: 1, 4, 16, 4, 16, 64, 16, 64, 256, (a) 4 (b) 16 (c) 64 (d) 1024 (Two slides - Note: all six items are different types of items: general knowledge, opposites - verbal comprehension and vocabulary, numerical reasoning and ordering)
Example items similar to those on the Wonderlic 4. Twilight is to dawn as autumn is to: (a) winter (b) spring (c) hot (d) cold 5. If Bob can outrun Junior by 2 feet in every 5 yards of a race, how much ahead will Bob be at 45 yards? (a) 5 yards (b) 6 yards (c) 10 feet (d) 90 feet 6. The two words relevant and immaterial mean: (a) the same (b) the opposite (c) neither same nor opposite (again, notice the type of questions: semantic or verbal reasoning, numerical fluency/reasoning, verbal comprehension - opposites)
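(Not in the original slides - a quick worked solution to item 5, just to make the arithmetic being sampled explicit; the keyed answer here is my own working, not taken from the Wonderlic.)

\[
\frac{45 \text{ yards}}{5 \text{ yards}} = 9 \quad\Rightarrow\quad 9 \times 2 \text{ feet} = 18 \text{ feet} = 6 \text{ yards} \quad \text{(choice b)}
\]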
SO12: Validity of mental ability tests • What have the validity studies uniformly concluded? Mental ability tests are among the most valid of all selection instruments (work samples are the only tests that seem to be as valid, and recent data suggest work samples have just as much adverse impact; next slide is on the validity of mental ability tests as well)
SO13: Validity of mental ability tests • Differences in the actual tasks that a person performs as part of a job have very little effect on the magnitude of the validity coefficients for mental ability tests • In other words, mental ability tests are valid predictors of performance for a wide variety of jobs
A problem with mental ability tests • They have repeatedly been shown to have adverse impact on protected classes, particularly blacks and Hispanics • This led to the notion that these types of tests might have differential validity - next
SO14: Differential Validity • 14A: What is meant by differential validity? • Notion/hypothesis that tests are less valid for minority groups than for non-minorities • That is, a test may be significantly more valid for whites than for blacks • Term is related to test bias regarding ability tests, particularly mental ability tests • This claim is made over and over again with respect to SATs and GREs - that those tests are more predictive of the performance of white students than they are of the performance of minority students (extremely important; mentioned often in selection as well as in admissions to colleges and universities - and it is still very controversial)
SO14: Differential Validity (this slide NFE) • The argument is that the content of ability tests is based on content/items related to the white middle class (e.g., vocabulary and grammar), and thus the scores of minorities are lower than they should be
SO14B: Differential Validity (FE) • The data are very clear about this issue: differential validity does not exist • That is, tests are equally valid for whites and other ethnic/racial groups • It makes sense • Verbal comprehension skills are verbal comprehension skills • Verbal reasoning skills are verbal reasoning skills • Math skills are math skills, etc. • Thus if any of these skills are required by the job, they should be “equally required” of whites and members of other ethnic/racial groups
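(Not from the text - a minimal sketch of what checking for differential validity amounts to computationally: estimate the test-criterion validity coefficient separately for each group and compare. All data below are simulated purely for illustration, and the shared validity of 0.5 is a hypothetical value, not a figure from GFB.)

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_group(n, validity):
    """Simulate test scores and a job-performance criterion that share
    a specified true correlation (the validity coefficient)."""
    test = rng.standard_normal(n)
    noise = rng.standard_normal(n)
    criterion = validity * test + np.sqrt(1 - validity**2) * noise
    return test, criterion

# Hypothetical data: both groups are generated with the SAME true validity,
# which is what the "no differential validity" finding implies.
test_a, perf_a = simulate_group(200, validity=0.5)
test_b, perf_b = simulate_group(200, validity=0.5)

# Validity coefficient = Pearson correlation between test score and criterion.
r_a = np.corrcoef(test_a, perf_a)[0, 1]
r_b = np.corrcoef(test_b, perf_b)[0, 1]
print(f"Group A validity: {r_a:.2f}")
print(f"Group B validity: {r_b:.2f}")
```

The empirical finding summarized above is that, with real selection data, the two estimated coefficients do not differ meaningfully by group.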
SO15: Cognitive ability tests - Differences among demographic groups • Meta-analyses have been consistent – there are significant differences in mean test scores among racial/ethnic groups • Ranking: Asians > whites > Hispanics > blacks
(NFE) Cognitive ability tests - Differences among demographic groups • Cognitive ability tests have a high correlation with job performance and academic performance • They have a disproportionate impact on Hispanics and blacks • Often result in adverse impact as legally defined when used for selection (an important, difficult issue arises)
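(Not from the text - “adverse impact as legally defined” is commonly operationalized with the EEOC Uniform Guidelines’ four-fifths rule: adverse impact is indicated when a group’s selection rate is less than 80% of the highest group’s rate. The numbers below are hypothetical, purely to show the arithmetic.)

```python
def impact_ratio(hired_group, applicants_group, hired_reference, applicants_reference):
    """Ratio of a group's selection rate to the reference (highest-rate) group's rate."""
    return (hired_group / applicants_group) / (hired_reference / applicants_reference)

# Hypothetical applicant flow: 30 of 100 minority applicants hired,
# 50 of 100 majority applicants hired.
ratio = impact_ratio(30, 100, 50, 100)
print(f"Impact ratio: {ratio:.2f}")  # 0.60, below the 0.80 threshold -> adverse impact indicated
```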
SO16: Adverse impact and cognitive ability tests • Remember, however, that adverse impact does not mean that unfair discrimination has occurred; if the tests are job related, then the discrimination is fair • SO16: Three things that make a defense against adverse impact likely: • Their overall validity – they are among the most valid and least expensive tests • Differential validity does not exist • Adverse impact cannot be overcome by using any other measure
SO16, NFE: Inappropriate conclusions from mean differences on test scores • It is not appropriate to conclude from these studies that differences are due to • genetic differences • educational differences • cultural differences • Studies do not address the reasons (the authors want to caution anyone against making general conclusions as to why the differences exist; they are particularly concerned about race-based genetic arguments such as those advanced in The Bell Curve, published a number of years ago, which re-opened the debate about race-based genetic intelligence)
SO19: Two factors that should be taken into account when deciding whether to use cognitive ability tests • Cognitive ability tests are among the most valid tests for a large number of jobs (and some selection specialists would say for all jobs) • Evidence also indicates that adverse impact is highly likely with these tests (skipping to SO19; cont. on next slide)
NFE: Cognitive Ability Tests • Because they are so valid, some selection specialists believe cognitive ability tests should be used extensively in selection • Some, however, have expressed deep reservations about using them because of the social implications of disqualifying larger proportions of minorities (very nice discussion of this in the text; directly quoting GFB here; cont. on next slide)
NFE: Cognitive Ability Tests • To some extent, the decision may reflect the values/goals of the organization • If the goal is to maximize individual performance with minimal cost, cognitive ability tests will do this • If the organization has multiple goals of sustaining high performance while maintaining a broad representation of minorities, then it would be better to limit the use of cognitive ability tests and use other, generally more expensive and almost equally valid instruments • biodata inventories (I don’t like these, as you will see next unit) • structured interviews • assessment centers *The authors include work samples in their list, but later in this chapter they present recent data indicating that work samples appear to have as much adverse impact as cognitive ability tests. (that’s the rub - the expense of those other instruments)
SO17: Diversity and use of cognitive ability tests • If an organization has diversity as a selection goal and wants to use cognitive ability tests because of their validity and the fact that other options are much more expensive, what is the main/best option? Vigorous recruitment of minority applicants (now back to SO17: remember, race norming is not legal; this is often a problem because selection specialists are typically not the ones who are responsible for recruitment – selection specialists really need to work with the HR staff)
SO20: (NFE) Popular mechanical ability, clerical, and physical ability tests • The authors describe several very popular tests • Refer to this material if you are ever looking for tests in these categories • I am not going to have you learn anything specific about these tests
SO21: Height & Weight Requirements • Height and weight requirements have often been challenged in court • Adverse impact on females and Asians • The courts have rarely let them stand • The rationale for using these measures is that they are substitute measures for strength • But courts have consistently held that if strength is the job requirement, then it should be measured directly (physical ability test) (used by a lot of organizations in the past, particularly police and fire departments)
Intro Personality Tests • The data and information on personality tests are difficult to summarize • For many years, companies used personality tests that were developed by clinical psychologists, and some of those tests are still popular and being used by organizations • One is the California Psychological Inventory • These tests have not had good validity historically • In prior editions of the book, GFB advised against their use • They remain cautious in this edition, but are “cautiously optimistic”
Intro Personality Tests • There is some good work going on right now; however, the field is in a bit of flux • Intuitively we know that “personality” influences how effective a person is at work; we just haven’t tapped into what the relevant KSAs really are, or what the relevant clusters of behaviors are • Even with the recent work, validity coefficients tend to be low, but they do appear to add independent predictive power (above and beyond cognitive ability tests and other types of ability tests)
SO23A: Personality Tests • There is some agreement in the field that personality characteristics can be grouped into five broad dimensions called the Five-Factor Model or Big Five • Conscientiousness • Being responsible, organized, dependable, planful, willing to achieve, and persevering • Emotional stability (only one described in negative terms) • Being emotional, tense, insecure, nervous, excitable, apprehensive, and easily upset • Agreeableness (relevant for team work) • Being courteous, flexible, trusting, good natured, cooperative, forgiving, softhearted, and tolerant • Extroversion • Being sociable, gregarious, assertive, talkative, and active • Openness to experience (also called intellect or culture) • Being imaginative, cultured, curious, intelligent, artistically sensitive, original and broad minded
SO23B: Personality Tests • Good news: to date there has been little or no adverse impact (a) across racial and ethnic groups and (b) between males and females
SO24: Traits as predictors • Two traits have been shown to be universal predictors, that is, valid across jobs • Conscientiousness • Emotional stability • The other three were found to be valid for only a few jobs or specific criteria • Extraversion (managers and training criteria) • Agreeableness (team work) • Openness to experience (training criteria)
SO 25: Personality Tests • If you do use a personality test, you must use a criterion-related validity study to support it because personality traits cannot be directly observed • Concurrent validity • Predictive validity • Validity generalization (in other words, you cannot use content validity; there are also some legal issues to be aware of)
SO26: Two thorny legal issues with personality tests • ADA (dealt with this previously in U3) • If a test can be and is used to diagnose mental/psychiatric disorders, then it will probably be considered a medical examination under the ADA • If it deals with other personality traits (the Big 5, for example), then it probably will not be considered a medical examination, although I don’t know how courts would/will handle “emotional stability” as it relates to the ADA • Nonetheless, my strong advice to you is to treat every personality test as a medical examination until things are clarified more by the courts • Which means you should only administer personality tests post-offer and keep the results in a file that is separate from the personnel file
SO26: This slide, NFE: Two thorny legal issues with personality tests • Clarifying court case, 2005, 7th Circuit Court • The MMPI is a medical examination and thus illegal for pre-employment use (certainly that was expected) • Psychological tests that measure personal traits such as honesty, integrity, preferences, and habits do not constitute medical examinations
SO26: Two thorny legal issues with personality tests • Right to privacy (be able to explain this as well) • Although a right to privacy is not explicitly guaranteed under the US Constitution, individuals are protected from unreasonable intrusions and surveillance • Personality tests, by their nature, reveal an individual’s thoughts and feelings • Several states have laws that explicitly guarantee a right to privacy • To date, litigation has occurred about questions relating to sexual inclinations and orientation and religious views (second thorny issue)
SO26: Right to privacy (this slide, NFE) • Soroka v. Dayton Hudson (1991) • California Court of Appeals stopped Dayton Hudson’s Target stores from requiring applicants for store security positions to take a personality test that contained questions about sexual practices and religious beliefs • The court also stated that employers must restrict psychological testing to job-related questions • The ruling was later dismissed because the parties reached a court-approved settlement • Dayton Hudson agreed to stop using the personality test • Divided $1.3 million among the estimated 2,500 members of the plaintiff class who had taken the test
Intro, Performance or work sample tests • Performance or work sample tests are excellent and I highly recommend their use when you can do them • Typing test • Having candidates write a computer program to solve a specific problem • Role playing a sales situation with an applicant for a sales position • Having mechanics troubleshoot a problem with an engine • You are getting an actual sample of behavior under controlled testing conditions (which permits you to easily compare performance across applicants) (this slide NFE)
Performance or work sample tests • From a technical perspective, they have high validity • They reduce two limitations of other selection procedures, and both are related to verbal behavior • Most selection procedures rely heavily on verbal behavior • Written answers to questions (ability tests) • Oral descriptions of abilities/skills (interviews, training and evaluation assessments) (This slide NFE)
SO28 (NFE): The two limitations that are reduced • Willful distortion and faking (people want to look good) • This varies depending on the selection procedure • Reports about past experiences (interviews, T&Es), where the information is difficult to confirm - most susceptible • Personality and honesty inventories - next most susceptible • Ability tests - least susceptible
SO28 (FE): The two limitations that are reduced • The relationship between verbal behavior and actual behavior is not perfect (as we behavior analysts well know) • Much of our behavior is contingency-shaped, not rule-governed • This is particularly a problem for exemplar performers who are not verbally fluent • Automobile mechanics • Plumbers • Machine operators • It can also be a problem for employees who are exemplar performers but can’t describe what makes them exemplary performers – sales representatives
SO29 (FE): Three limitations of work samples • Difficulty of accurately simulating job tasks that are representative of the job • Applicants must already have the KSAs being tested – the tests cannot cover specialized things that must be learned on the job • General sales skills are OK, but questions that deal with specific company-related products and pricing will not be • Very costly to develop and often to administer (many must be done one-on-one)
SO29 NFE: Example of a bad, yet common, work sampling test: Stress interviews • Many consulting firms use stress interviews • Stress interviews: The interviewer creates a stressful situation, often by asking many questions rapidly, not allowing much time for the applicant to respond, interrupting the applicant frequently, acting in a semi-hostile manner, or acting in a cool, aloof manner • Why bad? • Even if the job is one of high work demands that produce stress, rarely is the situation staged in the interview representative of the actual work demands that produce the stress • In very few jobs is the stress related to a semi-hostile or cool/aloof stranger rapidly firing questions • The behavior of the applicant doesn’t readily generalize to the job and thus should not be used as a predictor (maybe OK for a press secretary for a politician)
SO30: Performance tests vs. cognitive ability tests, validity, adverse impact, and cost • Validity • They both have high validity: they are two of the most valid types of selection instruments • Adverse impact • Equal adverse impact • Cost • Performance tests cost much more to develop and administer
Just a Word About Assessment Centers • Assessment centers, or even the use of some of the exercises often included in assessment centers, have been highly successful • In-basket tests • Leaderless group interaction tests • Case analyses • Main problem is the time and expense to both develop and administer them • You are unlikely to become involved in designing an assessment center, thus I am skipping them for the sake of time
Just a Word About Assessment Centers • I refer you to the Minnich & Komaki article in U7 of the course pack, from the OBM Network News. The article describes the use of a validated in-basket test to assess the effectiveness of managers based on Komaki’s Operant Supervisory Taxonomy and Index. This is one of the best examples I have ever seen of the intersection of behavior analysis and traditional I/O Psychology • Operant supervisory taxonomy and index • Assessed the difference between high performing and low performing managers • Found that work sampling and the type of consequence following performance distinguished between high and low performing managers (Gives a detailed description of the instrument, some of the actual items, and responses, along with an analysis of responses. Unfortunately, it is not commercially available – it was done as Minnich’s dissertation)
SO32: Graphology: Some companies are using it! • During the introduction to the course, I provided some information about graphology • Used as a selection tool by (very popular in Europe): • 5,000 US companies • 68% of Swiss companies • 50% of French companies • 80% of French selection consultants • 80% of Western European countries • I am appalled, as are the authors, that a section on graphology has to be included in a legitimate text on personnel selection and placement, but the good news is that its use appears to be declining, at least in this country (couldn’t resist including this; this slide NFE)