Errors in Measurement. Psych 231: Research Methods in Psychology
Class Experiment: Turn in your class experiment results • Pass the results over • Pass the consent forms over
Variables: • Independent variables • Dependent variables • Measurement • Scales of measurement • Errors in measurement • Extraneous variables • Control variables • Random variables • Confound variables
Reliability & Validity. Example: measuring intelligence. • How do we measure the construct? • How good is our measure? • How does it compare to other measures of the construct? • Is it a self-consistent measure?
Errors in Measurement: • Reliability: if you measure the same thing twice (or have two measures of the same thing), do you get the same values? • Validity: does your measure really measure what it is supposed to measure (the construct)? Is there bias in our measurement?
Dartboard Analogy: Reliability = consistency; Validity = measuring what is intended; Bull's eye = the "true score". [Figure: three dartboards illustrating reliable & valid, unreliable & invalid, and reliable & invalid patterns.]
Reliability: • Observed score = true score + measurement error • A reliable measure will have a small amount of error • There are multiple "kinds" of reliability
Reliability: Test-retest reliability • Test the same participants more than once • Measurement from the same person at two different times • Should be consistent across different administrations (a minimal sketch follows) [Figure: example plots labeled "Reliable" and "Unreliable".]
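The sketch below, in Python with made-up numbers, illustrates both of the preceding slides: each observed score is simulated as a true score plus random error, and test-retest reliability is estimated as the correlation between two administrations. All values are hypothetical.

```python
# A minimal sketch of "observed score = true score + measurement error"
# and test-retest reliability, on simulated (hypothetical) data.
import random
import statistics

random.seed(231)

def administer(true_scores, error_sd):
    """One administration: each observed score is the person's true score
    plus random measurement error."""
    return [t + random.gauss(0, error_sd) for t in true_scores]

def pearson_r(xs, ys):
    """Pearson correlation between two score lists."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

true_scores = [random.gauss(100, 15) for _ in range(500)]  # IQ-like scores

for error_sd in (2, 15):  # small vs. large measurement error
    time1 = administer(true_scores, error_sd)
    time2 = administer(true_scores, error_sd)
    print(f"error sd = {error_sd:2d}: test-retest r = {pearson_r(time1, time2):.2f}")
```

With a small error SD, the two administrations correlate highly (a reliable measure); with a large error SD, the correlation drops (an unreliable one).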
Reliability: Internal consistency reliability • Multiple items testing the same construct • The extent to which scores on the items of a measure correlate with each other • Cronbach's alpha (α) • Split-half reliability: correlation of the score on one (randomly determined) half of the measure with the other half (see the sketch after this list)
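Below is a minimal sketch of Cronbach's alpha on hypothetical item scores; split-half reliability could be computed in the same spirit by correlating participants' totals on two randomly chosen halves of the items.

```python
# A minimal sketch of Cronbach's alpha for internal consistency.
# alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
import statistics

def cronbach_alpha(scores):
    """scores: one row per participant, one column per item."""
    k = len(scores[0])                      # number of items
    items = list(zip(*scores))              # one tuple of scores per item
    item_vars = [statistics.variance(item) for item in items]
    totals = [sum(row) for row in scores]   # each participant's total score
    return (k / (k - 1)) * (1 - sum(item_vars) / statistics.variance(totals))

# Hypothetical 5-item scale (1-5 ratings) answered by 6 participants
scores = [
    [4, 5, 4, 4, 5],
    [2, 2, 3, 2, 2],
    [5, 4, 5, 5, 4],
    [3, 3, 3, 4, 3],
    [1, 2, 1, 2, 1],
    [4, 4, 5, 4, 4],
]
print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")
```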
Reliability: Inter-rater reliability • At least 2 raters observe behavior • The extent to which the raters agree in their observations • Are the raters consistent? • Requires some training in judgment (an agreement sketch follows)
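The slide doesn't name a specific agreement statistic, so as one illustrative choice, here is a minimal sketch of Cohen's kappa, which corrects raw inter-rater agreement for chance agreement. The ratings are hypothetical.

```python
# A minimal sketch of Cohen's kappa for inter-rater reliability.
# kappa = (observed agreement - chance agreement) / (1 - chance agreement)
from collections import Counter

def cohens_kappa(rater1, rater2):
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    counts1, counts2 = Counter(rater1), Counter(rater2)
    chance = sum(counts1[c] * counts2[c] for c in counts1) / (n * n)
    return (observed - chance) / (1 - chance)

# Two raters coding the same 10 behaviors as Aggressive or Neutral
rater1 = ["A", "A", "N", "N", "A", "A", "N", "N", "A", "N"]
rater2 = ["A", "A", "N", "N", "A", "N", "N", "N", "A", "N"]
print(f"Cohen's kappa = {cohens_kappa(rater1, rater2):.2f}")  # 0.80
```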
Validity: • Does your measure really measure what it is supposed to measure? • There are many "kinds" of validity
Many Kinds of Validity: • Construct validity, including face validity and criterion-oriented validity (predictive, convergent, concurrent, discriminant) • Internal validity • External validity
Face Validity: At the surface level, does it look as if the measure is testing the construct? ("This guy seems smart to me, and he got a high score on my IQ measure.")
Construct Validity: Usually requires multiple studies and a large body of evidence supporting the claim that the measure really tests the construct.
Internal Validity: • Did the change in the DV result from the change in the IV, or did it come from something else? • The precision of the results
Threats to Internal Validity: • History: an outside event happens during the experiment • Maturation: participants get older (and change in other ways) • Selection: nonrandom selection may lead to biases • Mortality: participants drop out or can't continue • Testing: being in the study itself influences how the participants respond
External Validity: Are experiments "real life" behavioral situations, or does the process of control put too much limitation on the "way things really work"?
External Validity: • Variable representativeness: the relevant variables for the behavior studied along which the sample may vary • Subject representativeness: the characteristics of the sample and the target population along these relevant variables • Setting representativeness (ecological validity): are the properties of the research setting similar to those outside the lab?
Extraneous Variables: • Control variables: held constant; controls for excessive random variability • Random variables: allowed to vary freely, to spread variability equally across all experimental conditions • Randomization: a procedure that ensures each level of an extraneous variable has an equal chance of occurring in all conditions of observation (see the sketch after this list) • Confound variables: variables that haven't been accounted for (manipulated, measured, randomized, controlled) that can impact changes in the dependent variable(s); a confound co-varies with both the dependent AND an independent variable
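A minimal sketch of randomization as random assignment: once the participant roster is shuffled, any extraneous characteristic a participant brings along has an equal chance of ending up in each condition. Names and condition labels are hypothetical.

```python
# A minimal sketch of random assignment to conditions.
import random

random.seed(231)

participants = [f"P{i:02d}" for i in range(1, 13)]  # hypothetical roster
conditions = ["control", "treatment"]

# Shuffle, then deal participants out round-robin so group sizes stay equal.
random.shuffle(participants)
assignment = {c: participants[i::len(conditions)] for i, c in enumerate(conditions)}
print(assignment)
```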
"Debugging Your Study": • Pilot studies: a trial run-through; don't plan to publish these results, just try out the methods • Manipulation checks: an attempt to directly measure whether the IV really affects the DV; look for correlations with other measures of the desired effects
Sampling: Why do we use sampling methods? • We typically don't have the resources to test everybody, so we test a subset
Sampling: • Population: everybody that the research is targeted to be about • Sample: the subset of the population that actually participates in the research
Sampling: We sample to make data collection manageable; inferential statistics are then used to generalize from the sample back to the population. [Figure: diagram linking Population and Sample.]
Sampling: Why do we use sampling methods? Goals of "good" sampling: • Maximize representativeness: the extent to which the characteristics of those in the sample reflect those in the population • Reduce bias: a systematic difference between those in the sample and those in the population
Sampling Methods: • Probability sampling (has some element of random selection): simple random sampling, systematic sampling, stratified sampling • Non-probability sampling (susceptible to biased selection): convenience sampling, quota sampling
Simple Random Sampling: Every individual has an equal and independent chance of being selected from the population (a minimal sketch follows).
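A minimal sketch using Python's standard library; the roster is hypothetical.

```python
# Simple random sampling: every individual has an equal, independent
# chance of being selected (sampling without replacement).
import random

random.seed(231)

population = [f"person_{i}" for i in range(1, 101)]  # hypothetical roster
sample = random.sample(population, k=10)             # draw 10 of 100
print(sample)
```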
Systematic Sampling: Selecting every nth person (a minimal sketch follows).
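A minimal sketch, again on a hypothetical roster: choose a random starting point, then take every nth person.

```python
# Systematic sampling: select every nth person after a random start.
import random

random.seed(231)

population = [f"person_{i}" for i in range(1, 101)]
n = 10                              # sampling interval
start = random.randrange(n)         # random start within the first interval
sample = population[start::n]       # every nth person from there
print(sample)
```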
Stratified Sampling: • Step 1: identify groups (strata) • Step 2: randomly select from each group (a minimal sketch follows the steps)
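A minimal sketch of the two steps on a hypothetical population with two strata.

```python
# Stratified sampling: identify strata, then randomly sample within each.
import random

random.seed(231)

population = [("male", f"M{i}") for i in range(50)] + \
             [("female", f"F{i}") for i in range(50)]

# Step 1: identify the groups (strata).
strata = {}
for group, person in population:
    strata.setdefault(group, []).append(person)

# Step 2: randomly select from each group.
sample = [p for members in strata.values() for p in random.sample(members, 5)]
print(sample)
```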
Convenience Sampling: Use the participants who are easy to get.
Quota Sampling: • Step 1: identify the specific subgroups • Step 2: take participants from each group until the desired number of individuals is reached (a minimal sketch follows the steps)
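A minimal sketch on a hypothetical stream of volunteers: unlike stratified sampling, selection within each subgroup is not random; you simply take whoever arrives until each quota is filled.

```python
# Quota sampling: take participants from each subgroup until its quota fills.
quotas = {"freshman": 3, "senior": 3}   # Step 1: the specific subgroups
sample = []

# Hypothetical participants in the order they show up.
arrivals = [("freshman", "Ann"), ("senior", "Ben"), ("freshman", "Cal"),
            ("freshman", "Dee"), ("freshman", "Eli"), ("senior", "Fay"),
            ("senior", "Gus"), ("senior", "Hal")]

# Step 2: take from each group until the desired number is reached.
for group, person in arrivals:
    if quotas.get(group, 0) > 0:
        sample.append(person)
        quotas[group] -= 1

print(sample)  # ['Ann', 'Ben', 'Cal', 'Dee', 'Fay', 'Gus']
```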