Measurement
Experiment - the effect of the independent variable (IV) on the dependent variable (DV)
Independent Variable (2 or more levels)
MANIPULATED
• a) situational - features in the environment
• b) task - type of task performed
• c) instructional - type of instructions given
• control vs. experimental groups
NOT MANIPULATED
• Subject variable - existing differences among participants
• cannot infer causality, because the variable cannot be manipulated
• control vs. comparison group
Dependent Variable (measured)
• The usefulness of the experiment depends on what is measured and how well you make the measurements
• Uses an operational definition
Dependent variable defined operationally
• The construct is inferred from the measure.
• Examples: memory; attention; social dominance; anxiety; aggression; work ethic; workload; bonding; helping behavior; hunger
Measurement type
• Covert - gauges events that cannot be observed directly
• Empirical - based on directly observable events
• Self-reported - based on the feelings and perceptions of the subject
Scales of measurement
Nominal
• Categorical data
• No quantitative information
• E.g., males and females
Ordinal
• Ranked scores
• Know the relative position of scores
• E.g., affiliation ranking
Interval
• Constant separation between values of the scale, but no meaningful zero
• Know the relative difference between scores
• E.g., IQ, temperature
Ratio
• Meaningful zero point
• Know the absolute difference between scores
• E.g., height, reaction time
Reliability
• Results are repeatable when measured again
• No measure is 100% reliable (especially behavioral measures)
• Observed measurement = true score (hypothetical) + measurement error (see the sketch below)
• Reliability is most likely when you use a careful measurement procedure
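A minimal simulation (Python, with made-up numbers) to illustrate the classical-test-theory idea on this slide: each observed score is a true score plus random error, and reliability is the proportion of observed-score variance that comes from true scores. The specific means and standard deviations are arbitrary assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200                               # hypothetical number of participants

true_scores = rng.normal(100, 15, n)  # hypothetical "true" trait values
error = rng.normal(0, 5, n)           # random measurement error
observed = true_scores + error        # observed = true score + error

# Reliability = true-score variance / observed-score variance
reliability = true_scores.var() / observed.var()
print(f"estimated reliability: {reliability:.2f}")
```

With a larger error standard deviation the same code yields a lower reliability, which is the sense in which sloppy measurement reduces reliability.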
Test-retest reliability
• Scores vary between administrations due to situational changes or a sloppy measurement tool
• Assessed by correlating the two sets of scores (see the sketch below)
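A sketch of how test-retest reliability is typically assessed: correlate the scores from two administrations of the same measure. The scores below are hypothetical.

```python
import numpy as np

# Hypothetical scores from the same participants at time 1 and time 2
time1 = np.array([12, 15, 9, 20, 14, 17, 11, 18])
time2 = np.array([13, 14, 10, 19, 15, 16, 12, 17])

# Test-retest reliability = Pearson correlation between the two administrations
r = np.corrcoef(time1, time2)[0, 1]
print(f"test-retest reliability r = {r:.2f}")
```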
Internal consistency reliability: questionnaires
Measure each person one time but compare multiple answers
• Split-half reliability: correlate the scores on one half of the test with the scores on the other half
• Cronbach's alpha: calculate the correlation of each item with every other item; alpha is based on the average of these correlations (see the sketch below)
• Item-total: correlation of each item with the total score (can also assess individual questions)
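A hedged sketch of Cronbach's alpha using its standard variance form, alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). The item-response matrix is made up for illustration.

```python
import numpy as np

# Hypothetical responses: rows = participants, columns = questionnaire items
items = np.array([
    [4, 5, 4, 3],
    [2, 3, 2, 2],
    [5, 5, 4, 5],
    [3, 4, 3, 3],
    [1, 2, 2, 1],
])

k = items.shape[1]                         # number of items
item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
total_var = items.sum(axis=1).var(ddof=1)  # variance of each person's total score

alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")
```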
Inter-rater reliability
• The extent to which observers agree (see the sketch below)
• Reliability tells us about measurement error, but it does not indicate whether we are accurately measuring the variable of interest
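One common way to quantify observer agreement is Cohen's kappa, which corrects raw percent agreement for the agreement expected by chance. The category labels and ratings below are hypothetical.

```python
import numpy as np

# Hypothetical categorical codings of the same 10 behaviors by two observers
rater1 = np.array(["agg", "help", "agg", "none", "help", "agg", "none", "help", "agg", "none"])
rater2 = np.array(["agg", "help", "none", "none", "help", "agg", "none", "agg", "agg", "none"])

# Observed agreement: proportion of behaviors both raters coded the same way
p_o = np.mean(rater1 == rater2)

# Chance agreement, from each rater's marginal category proportions
categories = np.unique(np.concatenate([rater1, rater2]))
p_e = sum(np.mean(rater1 == c) * np.mean(rater2 == c) for c in categories)

# Cohen's kappa = (observed - chance) / (1 - chance)
kappa = (p_o - p_e) / (1 - p_e)
print(f"percent agreement = {p_o:.2f}, Cohen's kappa = {kappa:.2f}")
```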
Validity
• Are you measuring what you think you are measuring?
• Validity assumes reliability
• Construct validity - is it a valid construct to measure, and is the measuring instrument the best one? i.e., the adequacy of the operational definition
• Content/Face validity - a common-sense test: does there seem to be a relationship between the measure and the construct?
• Criterion validity - judged by outcome
How good is the measure?
• Predictive - does it accurately predict future behavior?
• Convergent - is it meaningfully related to other measures of the same thing?
• Concurrent - do people in groups known to differ on the construct also differ on the measure?
• Divergent (discriminant) - is the score on the measure unrelated to measures of theoretically different constructs? (see the sketch below)
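A sketch of how convergent and divergent (discriminant) validity are often checked: a new measure should correlate substantially with an established measure of the same construct and only weakly with a measure of a theoretically unrelated construct. All data here are simulated purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100

# Hypothetical scores on a new anxiety questionnaire
new_measure = rng.normal(50, 10, n)

# An established anxiety measure (should correlate: convergent validity)
established_same = new_measure * 0.8 + rng.normal(0, 6, n)

# A theoretically unrelated measure, e.g. height (should not: divergent validity)
unrelated = rng.normal(170, 8, n)

convergent_r = np.corrcoef(new_measure, established_same)[0, 1]
divergent_r = np.corrcoef(new_measure, unrelated)[0, 1]
print(f"convergent r = {convergent_r:.2f}  (expect high)")
print(f"divergent  r = {divergent_r:.2f}  (expect near zero)")
```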
• If you have no reliability, then your scores vary randomly and you cannot assess the impact of the IV
• If you have no validity, then your conclusions will be wrong