Quiz • Do random errors accumulate? • Name 2 ways to minimize the effect of random error in your data set.
Validity • In our last class, we began to discuss some of the ways in which we can assess the quality of our measurements. • We discussed the concept of reliability (i.e., the degree to which measurements are free of random error).
Why reliability alone is not enough • Understanding the degree to which measurements are reliable, however, is not sufficient for evaluating their quality.
Validity • In this example, the measurements appear reliable, but there is a problem... • Validity reflects the degree to which measurements are free of both random error, E, and systematic error, S. • O = T + E + S • Systematic errors reflect the influence of any non-random factor beyond what we're attempting to measure.
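To make the model concrete, here is a minimal sketch in Python of how an observed score decomposes under O = T + E + S. The specific numbers (a true score of 10, a constant bias of 2, unit-variance random error, and the seed) are assumptions chosen to echo the example that follows:

```python
import numpy as np

rng = np.random.default_rng(0)    # seed chosen arbitrarily for reproducibility

T = 10.0                          # true score (assumed value)
E = rng.normal(0.0, 1.0, size=5)  # random error: mean 0, varies per observation
S = 2.0                           # systematic error: the same constant every time

O = T + E + S                     # observed scores under the O = T + E + S model
print(O)                          # each observation centers near 12, not 10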
Validity: Does systematic error accumulate? • Question: If we sum or average multiple observations (i.e., using a multiple indicators approach), how will systematic errors influence our estimates of the “true” score?
Validity: Does error accumulate? • Answer: Unlike random errors, systematic errors accumulate. • Systematic errors exert a constant influence on measurements: if systematic error is present, we will always overestimate (or underestimate) T.
Note: Each measurement is 2 points higher than the true value of 10. The errors do not average out.
Note: Even when random error is present, E averages to 0 but S does not. Thus, we can end up with reliable measures that nonetheless have validity problems.
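A quick simulation illustrates both notes above. This is only a sketch; the true score of 10, constant bias of 2, error standard deviation of 1, and sample size are assumed values mirroring the example:

```python
import numpy as np

rng = np.random.default_rng(42)

T, S, n = 10.0, 2.0, 10_000        # true score, constant bias, sample size
E = rng.normal(0.0, 1.0, size=n)   # random errors: mean 0
O = T + E + S                      # observed scores

print(round(E.mean(), 3))          # ~0.0 -> random error averages out
print(round(O.mean() - T, 3))      # ~2.0 -> systematic error persists
```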
Validity: Ensuring validity • What can we do to minimize the impact of systematic errors? • One way is to use a variety of indicators. • Different kinds of indicators of a latent variable may not share the same systematic errors. • If true, then S will behave like random error across measurements (but not within measurements).
Example • As an example, let's consider the measurement of self-esteem. • Some methods, such as self-report questionnaires, may lead people to overestimate their self-esteem: most people want to think highly of themselves. • Other methods, such as clinical ratings by trained observers, may lead to underestimates of self-esteem: clinicians may be prone to assume that people are not as well off as they say they are.
[Table: scores from Method 1 (self-reports) vs. Method 2 (clinical ratings)] Note: Method 1 systematically overestimates T whereas Method 2 systematically underestimates T. In combination, however, those systematic errors cancel out.
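A sketch of why mixing methods can help. It assumes, purely for illustration, that the two methods carry equal and opposite biases of ±2 around a true score of 10; in practice the biases need not cancel so exactly:

```python
import numpy as np

rng = np.random.default_rng(7)

T, n = 10.0, 10_000
self_report = T + rng.normal(0.0, 1.0, n) + 2.0  # Method 1: overestimates T
clinical    = T + rng.normal(0.0, 1.0, n) - 2.0  # Method 2: underestimates T

combined = (self_report + clinical) / 2          # average across methods
print(round(combined.mean(), 3))                 # ~10.0: the biases cancel
```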
Another example • One problem with the use of self-report questionnaire rating scales is that some people tend to give high (or low) answers consistently (i.e., regardless of the question being asked).
1 = strongly disagree | 5 = strongly agree In this example, we have someone with relatively high self-esteem, but this person systematically rates questions one point higher than he or she should.
1 = strongly disagree | 5 = strongly agree If we "reverse key" half of the items, the bias averages out. Responses to reverse-keyed items are scored in the opposite direction: on a 1-to-5 scale the reversed score is 6 − response, because 1 (strongly disagree) + 5 (strongly agree) = 6. T: (4 + 4 + [6 − 2] + [6 − 2]) / 4 = 4. O: (5 + 5 + [6 − 3] + [6 − 3]) / 4 = 4.
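The same arithmetic as a short sketch. The response values come straight from the slide; the scoring rule (6 − response for reverse-keyed items on a 1–5 scale) is the reversal described above:

```python
# Observed ratings, each inflated by 1 by the respondent's response style
responses = [5, 5, 3, 3]
reverse_keyed = [False, False, True, True]   # second half is reverse keyed

# On a 1-5 scale the reversal constant is 1 + 5 = 6, so reversed = 6 - response
scored = [6 - r if rev else r for r, rev in zip(responses, reverse_keyed)]

print(sum(scored) / len(scored))   # 4.0 -- the +1 bias has averaged out
```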
Validity • To the extent that a measure has validity, we say that it measures what it is supposed to measure. • Question: How do you assess validity? ** Very tough question to answer! ** (But we'll give it a shot in our next class.)