The Many Threats to Test Validity David Mott, Tests for Higher Standards and Reports Online Systems Presentation at the Virginia Association of Test Directors (VATD) Conference, Richmond, VA, October 28, 2009
The Many Threats to Test Validity In order for a test or assessment to have any value whatsoever, it must be possible to make reasonable inferences from the score. This is much harder than it seems. The test instruments, the testing conditions, the students, the score interpreters, and perhaps Fate ALL need to be working together to produce data worth using. Many specific threats will be delineated; a number of solutions suggested; and audience participation is strongly encouraged.
Validity and Value come from the same Latin root. The word has to do with being strong, well, good. Validity = Value
Initial Attitude Adjustment
Amassing Statistics: "The government are very keen on amassing statistics — they collect them, raise them to the nth power, take the cube root and prepare wonderful diagrams. But what you must never forget is that every one of those figures comes in the first instance from the village watchman, who just puts down what he damn well pleases." (J. C. Stamp (1929). Some Economic Factors in Modern Life. London: P. S. King and Son)
Distance from Data: "I have noticed that the farther one is from the source of data, the more likely one is to believe that the data could be a good basis for action." (D. E. W. Mott (2009). Quotations.)
The Examination, as shown by the Ghost of Testing Past
Validity — Older Formulations 1950s through 1980s • content validity • concurrent validity • predictive validity • construct validity Lee J. Cronbach
Content Validity — • Refers to the extent to which a measure represents all facets of a given social construct. Social constructs such as: Reading Ability, Math Computation Proficiency, Optimism, Driving Skill, etc. It is a more formal term than face validity, which refers not to what the test actually measures but to what it appears to measure. Face validity is whether a test "looks valid" to the examinees who take it, the administrative personnel who decide on its use, and to others.
Concurrent Validity — • Refers to a demonstration of how well a test correlates with a measure that has previously been validated. The two measures may be for the same construct, or for different, but presumably related, constructs.
Predictive Validity — • Refers to the extent to which a score on a scale or test predicts scores on some criterion measure. For example, how well do your final benchmarks predict scores on the state SOL Tests?
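Predictive validity is usually quantified as a correlation between the predictor and the criterion. A minimal sketch of the computation, using invented benchmark and SOL scores purely for illustration:

```python
# Minimal sketch: quantifying predictive validity as the correlation
# between benchmark scores and later state-test scores.
# The score lists below are hypothetical illustration data.
from statistics import correlation  # requires Python 3.10+

benchmark = [62, 71, 55, 88, 74, 90, 67, 81]   # fall benchmark scaled scores
sol_test  = [68, 75, 58, 91, 70, 94, 72, 85]   # spring SOL scaled scores

r = correlation(benchmark, sol_test)
print(f"Pearson r = {r:.2f}")  # closer to 1.0 = stronger evidence of prediction
```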
Construct Validity — • Refers to whether a scale measures or correlates with the theorized underlying psychological construct (e.g., "fluid intelligence") that it claims to measure. It is related to the theoretical ideas behind the trait under consideration, i.e. the concepts that organize how aspects of personality, intelligence, subject-matter knowledge, etc. are viewed.
Validity — New Formulation 1990s through now • Six aspects or views of Construct Validity • content aspect • substantive aspect • structural aspect • generalizability aspect • external aspect • consequential aspect Samuel Messick
Validity — New Formulation Six aspects or views of Construct Validity • Content aspect – evidence of content relevance, representativeness, and technical quality • Substantive aspect – theoretical rationales for consistency in test responses, including process models, along with evidence that the processes are actually used in the assessment tasks • Structural aspect – judges the fidelity of scoring to the actual structure of the construct domain • Generalizability aspect – the extent to which score properties and interpretations generalize to related populations, settings, and tasks • External aspect – includes converging and discriminating evidence from multitrait-multimethod comparisons as well as proof of relevance and utility. • Consequential aspect – shows the values of score interpretation as a basis for action and the actual and potential consequences of test use, especially in regard to invalidity related to bias, fairness, and distributive justice
Administration Validity • Administration Validity is my own term. A test administration or a test session is valid if nothing happens that causes a test, an assessment, or a survey to fail to reflect the actual situation. Test-session validity is an alternate term.
Administration Validity • Many things can come between the initial creation of an assessment from valid materials and the final uses of the scores that come from that assessment. • Imagine a chain that is only as strong as its weakest link. If any link breaks, the value of the whole chain is lost. • This session deals with some of those weak links.
Areas of Validity Failure • We create a test out of some "valid" items — Discuss some of the realities most of us face: We either have some "previously validated" tests or we have a "validated" item bank we make tests from. Let's assume that they really are valid, that is, the materials have good content matches with the Standards/Curriculum Frameworks/Blueprints, and so on.
Areas of Validity Failure Some examples of things that can creep in during the supposedly "mechanical" process of creating a test from a bank. • Here are two items from a Biology benchmark test we recently made for a client:
Two Biology Items (* marks the keyed answer)
Bio.3b 5. Which organic compound is correctly matched with the subunit that composes it?
A maltose – fatty acids
B starch – glucose
C protein – amino acids *
D lipid – sucrose
Bio.3b 6. Which organic compounds are the building blocks of proteins?
A sugars
B nucleic acids
C amino acids *
D polymers
Standard BIO.3b: The student will investigate and understand the chemical and biochemical principles essential for life. Key concepts include b) the structure and function of macromolecules.
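The apparent trouble: both items are keyed to BIO.3b, and item 5's keyed option hands an alert student the answer to item 6, since both are keyed to amino acids. A minimal sketch of an automated "clueing" screen for a drafted form; the item records below simply transcribe the two items above, and the field names are my own invention, not any real item-bank schema:

```python
# Hedged sketch: flag possible "clueing" between items on a form, where
# one item's keyed answer text appears verbatim inside another item's
# keyed option. Records transcribe the two Biology items on this slide.
items = [
    {"id": "Bio.3b-5", "key": "protein – amino acids",
     "options": ["maltose – fatty acids", "starch – glucose",
                 "protein – amino acids", "lipid – sucrose"]},
    {"id": "Bio.3b-6", "key": "amino acids",
     "options": ["sugars", "nucleic acids", "amino acids", "polymers"]},
]

for a in items:
    for b in items:
        if a is b:
            continue
        # Does item b's keyed answer appear inside item a's keyed option?
        if b["key"].lower() in a["key"].lower():
            print(f"{a['id']} may clue {b['id']}: keyed option "
                  f"'{a['key']}' contains '{b['key']}'")
```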
A Life Science Item
[Figure: an energy pyramid with its levels labeled A, B, C, and D]
LS.6c 12. In this energy pyramid, which letter would represent producers?
A A
B B
C C
D D
The same Life Science Item, "Randomized"
[Figure: the same energy pyramid, levels labeled A, B, C, and D]
LS.6c 12. In this energy pyramid, which letter would represent producers?
A C
B D
C A
D B
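A randomizer that shuffles every item's choices will scramble items like this one, whose options name labels in a figure. A minimal sketch of a shuffle that respects a lock flag; the "locked" field is an assumed item-bank attribute, not a real API:

```python
# Hedged sketch of choice randomization that skips items whose options
# refer to labels in a figure (like the energy-pyramid item above).
import random

def shuffle_choices(item):
    if item.get("locked"):          # options reference figure labels A-D:
        return item                 # leave the order alone
    item = dict(item)
    item["options"] = random.sample(item["options"], len(item["options"]))
    return item

pyramid_item = {"stem": "In this energy pyramid, which letter would "
                        "represent producers?",
                "options": ["A", "B", "C", "D"],
                "locked": True}     # shuffling these would scramble the key

print(shuffle_choices(pyramid_item)["options"])  # always ['A', 'B', 'C', 'D']
```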
Moving from test creation to test administration
What Can Fail in the Test Administration Process • Students aren’t properly motivated • Random responding • Patterning responses • Unnecessary guessing • Cheating Let’s look at what some of these look like:
What Can Fail in the Test Administration Process • Students or teachers make mistakes. • Stopping before the end of test • Getting off position on answer sheets • Giving a student the wrong answer sheet • Scoring a test with the wrong key Let’s look at what some of these look like:
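One of these failures, getting off position on the answer sheet, leaves a detectable fingerprint: the responses score near chance as marked but score very well when shifted by one item. A minimal sketch, with an invented key and response string:

```python
# Hedged sketch: a student who got "off position" on the answer sheet
# (bubbling item 2's answer in row 1, and so on) scores near chance as
# marked but very well when re-aligned. Key and responses are invented.
def matches(responses, key):
    return sum(r == k for r, k in zip(responses, key))

key       = "CBDACABDBCADCBAD"
responses = "BDACABDBCADCBADC"    # every answer bubbled one row early

as_marked = matches(responses, key)
realigned = matches(responses, key[1:])   # compare row i with item i+1
print(f"as marked: {as_marked}/16, re-aligned: {realigned}/15")
```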
The chain has many links • Nearly any of them can break • Try to find the weakest links in your organization's efforts • Fix them – one by one
What are some of my solutions to all of this? • To the problems of mistakes in test creation • Use test blueprints (a simple coverage check is sketched below) • Be very careful of automatic test construction • Read the test carefully yourself and answer the questions • Have someone else read the test carefully and answer the questions • Use "Kid-Tested" items * * Future TfHS initiative
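For the blueprint bullet, a minimal sketch of a coverage check on a drafted form; the blueprint counts and standard codes below are invented for illustration:

```python
# Hedged sketch: verify a drafted form against a blueprint before release.
from collections import Counter

blueprint = {"Bio.2a": 4, "Bio.3b": 2, "Bio.4c": 4}          # items required
form = ["Bio.2a", "Bio.2a", "Bio.3b", "Bio.3b", "Bio.3b",    # standards of
        "Bio.4c", "Bio.4c", "Bio.4c", "Bio.4c", "Bio.2a"]    # drafted items

actual = Counter(form)
for std, need in blueprint.items():
    got = actual.get(std, 0)
    if got != need:
        print(f"{std}: blueprint calls for {need}, form has {got}")
```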
What are some of my solutions to all of this? • Be careful when reading reports – look past the obvious • For problems of careless, unmotivated test taking by students (even cheating) — Make the test less of a contest between the system/teacher and the student and more of a communication device between them • Watch the students as they take the test, and realize that proctoring rules necessary for high-stakes tests are possibly not best for formative or semi-formative assessments • Look for/flag pattern marking and rapid responding * (a pattern-marking screen is sketched below) * Future TfHS/ROS initiative
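For the pattern-marking flag, a minimal sketch that looks for long runs of one letter or a repeating A-B-C-D cycle; the run threshold and response strings are invented, and a real screen would need calibration:

```python
# Hedged sketch: flag answer strings that look like pattern marking,
# e.g. a long run of one letter or a repeating A-B-C-D cycle.
from itertools import groupby

def longest_run(responses):
    return max(len(list(g)) for _, g in groupby(responses))

def looks_patterned(responses, run_limit=6):
    cycle = "ABCD" * (len(responses) // 4 + 1)
    return (longest_run(responses) >= run_limit or
            responses == cycle[:len(responses)])

print(looks_patterned("CCCCCCCCBDAC"))   # True: run of eight C's
print(looks_patterned("ABCDABCDABCD"))   # True: repeating cycle
print(looks_patterned("CBDACABDBCAD"))   # False: plausible responses
```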
Here is a graph showing the timing of student responses to an item
For online tests it is possible to screen for rapid responding * * Future TfHS/ROS initiative
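A minimal sketch of such a screen; the three-second floor is an assumed threshold, and a production version would calibrate it per item:

```python
# Hedged sketch: screen an online test session for rapid responding.
# Timings are invented; 3 seconds is an assumed floor below which a
# response is unlikely to reflect actually reading the item.
response_seconds = [14.2, 9.8, 1.1, 0.9, 1.4, 22.5, 0.8, 11.0]

RAPID = 3.0  # assumed threshold, in seconds
flags = [i + 1 for i, t in enumerate(response_seconds) if t < RAPID]
rate = len(flags) / len(response_seconds)

print(f"rapid responses on items {flags} ({rate:.0%} of the test)")
```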
A major new way of communicating! • Let the students tell you when they don't know or understand something – eliminate guessing • New multiple-choice scoring scheme: * • 1 point for each correct answer • 0 points for each wrong answer • ⅓ point for each unanswered question • Students mark where they run out of time * Future TfHS/ROS initiative
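A minimal sketch of the proposed scoring scheme; using "-" to mark an unanswered question is my own convention for the example:

```python
# Minimal sketch of the proposed scoring scheme: 1 point per correct
# answer, 0 per wrong answer, 1/3 point per question left blank.
from fractions import Fraction

def score(responses, key):
    pts = Fraction(0)
    for r, k in zip(responses, key):
        if r == "-":            # student admitted not knowing
            pts += Fraction(1, 3)
        elif r == k:            # correct answer
            pts += 1
    return pts                  # wrong answers add nothing

print(score("CBD--ABD", "CBDACABD"))  # 6 correct + 2 blanks = 20/3
```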
A major new way of communicating! Continued • Students have to be taught the new rules • Students need one or two tries to get the hang of it • Students need to know when the new scoring applies • It is better for students to admit not knowing than to guess
Humor
Time flies like an arrow; fruit flies like a banana.
We sometimes need to take a 90° turn in our thinking.