Chapter 17: Assessing Measurement Quality in Quantitative Studies
Measurement
• The assignment of numbers to represent the amount of an attribute present in an object or person, using specified rules
• Advantages:
  • Removes guesswork
  • Provides precise information
  • Is less vague than words
Errors of Measurement
Obtained score = True score + Error
• Obtained score: the actual data value for a participant (e.g., an anxiety scale score)
• True score: the score that would be obtained with an infallible measure
• Error: the error of measurement, caused by factors that distort measurement
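A minimal numeric sketch of this true-score model, assuming invented anxiety scores and a normally distributed error term (all names and parameter values here are illustrative, not from the chapter):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration: 100 participants' true anxiety scores,
# plus random measurement error, yield the obtained scores.
true_scores = rng.normal(loc=50, scale=10, size=100)
error = rng.normal(loc=0, scale=4, size=100)  # distortion from contaminants, etc.
obtained = true_scores + error

print(obtained[:5])
```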
Factors That Contribute to Errors of Measurement
• Situational contaminants
• Transitory personal factors
• Response-set biases
• Administration variations
• Problems with instrument clarity
• Item sampling
• Instrument format
Key Criteria for Evaluating Quantitative Measures
• Reliability
• Validity
Reliability
• The consistency and accuracy with which an instrument measures the target attribute
• Reliability assessments involve computing a reliability coefficient
• Most reliability coefficients are based on correlation coefficients
Correlation Coefficients
• Correlation coefficients indicate the direction and magnitude of relationships between variables
• Range:
  • from –1.00 (perfect negative correlation)
  • through 0.00 (no correlation)
  • to +1.00 (perfect positive correlation)
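A quick illustration of computing a Pearson correlation coefficient with NumPy; the data are invented for demonstration:

```python
import numpy as np

x = np.array([2.0, 4.0, 5.0, 7.0, 9.0])
y = np.array([1.5, 3.8, 5.2, 6.9, 9.1])

# Pearson correlation: covariance scaled by the two standard deviations,
# which bounds r between -1.00 and +1.00.
r = np.corrcoef(x, y)[0, 1]
print(round(r, 2))
```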
Three Aspects of Reliability Can Be Evaluated
• Stability
• Internal consistency
• Equivalence
Stability
• The extent to which scores are similar on two separate administrations of an instrument
• Evaluated by test–retest reliability (see the sketch below):
  • Requires participants to complete the same instrument on two occasions
  • A correlation coefficient between scores on the first and second administrations is computed
• Appropriate for relatively enduring attributes (e.g., self-esteem)
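A sketch of test–retest reliability, assuming hypothetical self-esteem scores from two administrations two weeks apart; the coefficient is simply the correlation between the two sets of scores:

```python
import numpy as np

# Hypothetical self-esteem scores from two administrations of the same scale.
time1 = np.array([31, 28, 40, 22, 35, 30, 27, 38])
time2 = np.array([30, 27, 41, 24, 34, 31, 25, 37])

# Test-retest reliability: the correlation between the two administrations.
r_test_retest = np.corrcoef(time1, time2)[0, 1]
print(round(r_test_retest, 2))
```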
Internal Consistency
• The extent to which all the instrument’s items measure the same attribute
• Evaluated by administering the instrument on one occasion
• Appropriate for most multi-item instruments
• Evaluation methods:
  • Split-half technique
  • Coefficient alpha
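Coefficient alpha (Cronbach's alpha) can be computed directly from a respondents-by-items matrix. A minimal sketch with invented responses to a hypothetical 4-item scale:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Coefficient alpha for an items matrix (rows = respondents, cols = items)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses of 6 people to a 4-item scale (1-5 ratings).
responses = np.array([
    [4, 5, 4, 5],
    [2, 3, 2, 2],
    [5, 5, 4, 4],
    [3, 3, 3, 4],
    [1, 2, 2, 1],
    [4, 4, 5, 5],
])
print(round(cronbach_alpha(responses), 2))
```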
Equivalence
• The degree of similarity between alternative forms of an instrument, or between multiple raters/observers using an instrument
• Most relevant for structured observations
• Assessed by comparing the observations or ratings of two or more observers (interobserver/interrater reliability)
• Numerous formulas and assessment methods exist
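A sketch of interrater agreement for structured observations, using invented binary codes from two observers. Of the many formulas available, this example shows simple percent agreement alongside Cohen's kappa, which corrects for chance agreement:

```python
import numpy as np

# Hypothetical codes assigned by two observers to the same 10 events
# (1 = behavior present, 0 = absent).
rater_a = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
rater_b = np.array([1, 0, 1, 0, 0, 1, 0, 1, 1, 1])

# Simplest index of equivalence: proportion of exact agreements.
p_o = (rater_a == rater_b).mean()

# Cohen's kappa adjusts the observed agreement for chance agreement.
p_yes = rater_a.mean() * rater_b.mean()
p_no = (1 - rater_a.mean()) * (1 - rater_b.mean())
p_e = p_yes + p_no
kappa = (p_o - p_e) / (1 - p_e)
print(round(p_o, 2), round(kappa, 2))
```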
Reliability Coefficients
• Represent the proportion of obtained variability that is true variability: r = V_T / V_O
• Should be at least .70; .80 is preferable
• Can be improved by making the instrument longer (adding items), as the sketch below shows
• Are lower in homogeneous samples than in heterogeneous samples
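The claim that adding items improves reliability is commonly quantified with the Spearman–Brown prophecy formula; a sketch (the formula is standard, but the numbers here are illustrative):

```python
def spearman_brown(r_current: float, length_factor: float) -> float:
    """Projected reliability after lengthening an instrument by length_factor."""
    return (length_factor * r_current) / (1 + (length_factor - 1) * r_current)

# Doubling the length of a scale with reliability .70 (length_factor = 2):
print(round(spearman_brown(0.70, 2.0), 2))  # about .82
```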
Validity
• The degree to which an instrument measures what it is supposed to measure
• Four aspects of validity:
  • Face validity
  • Content validity
  • Criterion-related validity
  • Construct validity
Face Validity
• Refers to whether the instrument looks as though it measures the appropriate construct
• Based on judgment; there are no objective criteria for assessment
Content Validity
• The degree to which an instrument has an appropriate sample of items for the construct being measured
• Evaluated by expert review, typically summarized with the content validity index (CVI); a sketch of the computation follows
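A sketch of a common CVI computation, assuming five hypothetical experts rate each item's relevance on a 1–4 scale and ratings of 3 or 4 count as relevant (the ratings below are invented):

```python
import numpy as np

# Hypothetical relevance ratings (1-4) from 5 experts for 4 items.
ratings = np.array([
    [4, 3, 2, 4],
    [4, 4, 3, 3],
    [3, 4, 2, 4],
    [4, 3, 3, 4],
    [4, 4, 2, 3],
])

relevant = ratings >= 3
i_cvi = relevant.mean(axis=0)  # item-level CVI: proportion of experts rating 3 or 4
s_cvi = i_cvi.mean()           # scale-level CVI: average of the item-level CVIs
print(i_cvi, round(s_cvi, 2))
```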
Criterion-Related Validity
• The degree to which the instrument correlates with an external criterion
• The validity coefficient is calculated by correlating scores on the instrument with scores on the criterion
Criterion-Related Validity (cont’d)
Two types of criterion-related validity:
• Predictive validity: the instrument’s ability to distinguish people whose performance differs on a future criterion
• Concurrent validity: the instrument’s ability to distinguish individuals who differ on a present criterion
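A sketch of a validity coefficient for predictive validity, using invented entrance-exam scores (the instrument) and a later criterion (first-year GPA); concurrent validity would be computed the same way with a criterion measured at the same time:

```python
import numpy as np

# Hypothetical instrument scores and a future criterion for 8 students.
exam_scores = np.array([62, 75, 58, 80, 70, 66, 85, 73])
later_gpa = np.array([2.8, 3.4, 2.5, 3.7, 3.1, 2.9, 3.9, 3.2])

# The validity coefficient is the correlation between instrument and criterion.
validity_coeff = np.corrcoef(exam_scores, later_gpa)[0, 1]
print(round(validity_coeff, 2))
```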
Construct Validity
Concerned with the questions:
• What is this instrument really measuring?
• Does it adequately measure the construct of interest?
Methods of Assessing Construct Validity
• Known-groups technique (see the sketch below)
• Relationships based on theoretical predictions
• Multitrait-multimethod matrix method (MTMM)
• Factor analysis
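A sketch of the known-groups technique, assuming a hypothetical anxiety scale administered to two groups expected on theoretical grounds to differ; the group labels and scores are invented:

```python
import numpy as np
from scipy import stats

# Hypothetical known-groups check: patients awaiting surgery (expected
# high anxiety) versus patients at a routine checkup (expected lower).
presurgical = np.array([42, 38, 45, 40, 44, 39, 41])
checkup = np.array([28, 31, 25, 30, 27, 33, 29])

# If the instrument measures anxiety, the groups should differ as predicted.
t_stat, p_value = stats.ttest_ind(presurgical, checkup)
print(round(t_stat, 2), round(p_value, 4))
```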
Multitrait-Multimethod Matrix Method
Builds on two types of evidence:
• Convergence
• Discriminability
Convergence
• Evidence that different methods of measuring a construct yield similar results
• Convergent validity comes from the correlations between two different methods of measuring the same trait
Discriminability
• Evidence that the construct can be differentiated from other, similar constructs
• Discriminant validity assesses the degree to which a single method of measuring two constructs yields different results (see the sketch below)
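A simulated fragment of an MTMM analysis: two methods measuring the same trait should correlate highly (convergence), while different traits should correlate weakly (discriminability). All variable names and data below are synthetic, constructed only to illustrate the pattern:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical MTMM fragment: self-esteem measured by self-report and by
# observer rating, plus anxiety measured by self-report.
self_esteem = rng.normal(size=200)
se_self_report = self_esteem + rng.normal(scale=0.5, size=200)
se_observer = self_esteem + rng.normal(scale=0.7, size=200)
anxiety_report = rng.normal(size=200) + rng.normal(scale=0.5, size=200)

# Convergent evidence: same trait, different methods -> high correlation.
convergent = np.corrcoef(se_self_report, se_observer)[0, 1]
# Discriminant evidence: different traits -> low correlation.
discriminant = np.corrcoef(se_self_report, anxiety_report)[0, 1]
print(round(convergent, 2), round(discriminant, 2))
```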