Research Methods in MIS
Dr. Deepak Khazanchi
Measurement of Variables: Scaling, Reliability and Validity
Major Sources of Errors in Measurement • Because perfectly precise and unambiguous measurement of variables is unattainable, error always occurs. Much potential error is SYSTEMATIC (it results from a bias), while the remainder is RANDOM (it occurs erratically). • Sources of Measurement Differences • Respondents • Situational factors • Measurer or researcher • Data collection instruments
Validity and Reliability • Validity: • Accuracy of measurement. The degree to which an instrument measures what it is supposed to measure. • Validity Coefficient: An estimate of the validity of a measure, usually in the form of a correlation coefficient. • Reliability: • Consistency of measurement. The degree to which an instrument measures the same way each time it is used under the same conditions with the same subjects. • Reliability Coefficient: An estimate of the reliability of a measure, usually in the form of a correlation coefficient.
Possible Conditions of Validity and Reliability • When examining an instrument for validity and reliability, remember that three conditions may exist. An instrument might show evidence of being: • Both valid and reliable, or • Reliable but not valid, or • Neither reliable nor valid • NOTE: An instrument that is valid will also have some degree of reliability; the fourth combination, valid but not reliable, is therefore impossible.
About Reliability and Validity Coefficients • Validity and reliability are estimated using correlation coefficients; these statistics estimate the degree of validity or reliability. • Thus, it is not a question of an instrument having or not having validity or reliability; • it is a question of the degree to which an instrument is valid for a specific purpose and the degree to which it shows specific types of reliability. • Reliability estimates are done after validity is assessed. • We will discuss the notions of internal and external validity in the context of experimental designs.
Types of Validity • Content Validity • How well the content of the instrument represents the objectives for which the instrument is used. • Usual Process: • 1. Examine the objectives; • 2. Compare the objectives to the content of the instrument.
Types of Validity (cont’d) • Criterion-Related Validity • Predictive: The degree to which a measure predicts a criterion measured at a later time (see the sketch below). • Usual Process: • 1. Assess validation sample on the predictor; • 2. Assess validation sample on the criterion at a later time; • 3. Correlate the scores. • Concurrent: The degree to which a measure correlates with an already validated measure of the same variable. • Usual Process: • 1. Assess validation sample on the new measure; • 2. Assess validation sample on the already validated measure of the same variable at about the same time; • 3. Correlate the scores.
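As a concrete illustration of the predictive-validity process, the sketch below correlates hypothetical predictor scores (say, an aptitude test taken at hiring) with criterion scores collected later (say, job-performance ratings). All data and variable names are invented for illustration; the validity coefficient is just a Pearson correlation, computed here directly from its definition.

```python
import numpy as np

# Hypothetical data: aptitude test at Time 1 (predictor) and
# job-performance ratings collected later (criterion).
predictor = np.array([52, 61, 70, 45, 80, 66, 58, 73])
criterion = np.array([3.1, 3.8, 4.2, 2.9, 4.6, 3.9, 3.4, 4.4])

def pearson_r(x, y):
    """Pearson correlation computed from its definition."""
    x, y = x - x.mean(), y - y.mean()
    return (x @ y) / np.sqrt((x @ x) * (y @ y))

# The validity coefficient: how well the predictor anticipates the criterion.
print(f"predictive validity coefficient r = {pearson_r(predictor, criterion):.2f}")
```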
Types of Validity (cont’d) • Construct Validity • The degree to which a measure relates to expectations formed from theory for hypothetical constructs (see the sketch below). • Usual Process • 1. Assess validation sample on the major variable; • 2. Assess validation sample on several hypothetically related variables; • 3. Analyze to see if the major variable differentiates subjects on the related variables.
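A minimal sketch of the construct-validation logic, using simulated data invented for illustration: a new measure should correlate substantially with a variable that theory says is related (convergent evidence) and weakly with a variable that theory says is unrelated (discriminant evidence).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100

# Hypothetical scores on the new measure and on two comparison variables.
new_measure = rng.normal(50, 10, n)
related = new_measure * 0.8 + rng.normal(0, 6, n)   # theory says: related
unrelated = rng.normal(50, 10, n)                   # theory says: unrelated

# Convergent evidence: expect a substantial correlation.
print("r(new, related)   =", round(np.corrcoef(new_measure, related)[0, 1], 2))
# Discriminant evidence: expect a correlation near zero.
print("r(new, unrelated) =", round(np.corrcoef(new_measure, unrelated)[0, 1], 2))
```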
Types of Reliability (Consistency) Estimates • VERY IMPORTANT: A MEASURE CAN BE RELIABLE BUT TOTALLY LACK VALIDITY. • Stability • Test-retest • Equivalence • Parallel forms • Internal Consistency • Split-half • KR20 (Kuder-Richardson) • Coefficient (Cronbach’s) alpha • Interrater reliability
Types of Reliability (Consistency) Estimates: STABILITY • Test-Retest Reliability • Used to assess the stability of a measure over time. • Usually indicated by a correlation coefficient. • Number of forms (of instrument): 1 • Number of administrations: 2 • Usual Process (see the sketch below): • Administer the instrument to the reliability sample at Time 1. • Wait a period of time (e.g., 2-4 weeks). • Administer the same instrument to the same sample at Time 2. • Correlate the scores from Time 1 and Time 2.
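A minimal sketch of the final correlation step, assuming scores for the same respondents at Time 1 and Time 2 (data invented for illustration); scipy's pearsonr is one convenient way to obtain the stability coefficient.

```python
from scipy.stats import pearsonr

# Hypothetical scores for the same eight respondents, 2-4 weeks apart.
time1 = [14, 18, 22, 9, 25, 17, 20, 12]
time2 = [15, 17, 21, 11, 24, 18, 19, 13]

r, p = pearsonr(time1, time2)
print(f"test-retest (stability) coefficient r = {r:.2f}")  # near 1.0 = stable
```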
Types of Reliability (Consistency) Estimates: EQUIVALENCE • Equivalent Forms Reliability (also known as Parallel Forms or Alternate Forms Reliability). • Used to assess the equivalence of two forms of the same instrument. • Usually indicated by a correlation coefficient. • Number of forms (of instrument): 2 • Number of administrations: 2 • Usual Process (see the sketch below): • Administer Form A of the instrument to the reliability sample. • Break the sample for a short rest period (10-20 minutes). • Administer Form B of the instrument to the same reliability sample. • Correlate the scores from Form A and Form B.
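A sketch of the equivalence check, again with invented scores. Beyond correlating the two forms, it is common to verify that the forms behave like parallel forms, i.e., that their means and variances are similar; the comparison below is purely descriptive.

```python
import numpy as np

# Hypothetical scores of the same reliability sample on Form A and Form B.
form_a = np.array([31, 28, 35, 22, 40, 27, 33, 25])
form_b = np.array([30, 29, 34, 24, 38, 28, 32, 26])

# Equivalence coefficient: correlation between the two forms.
r = np.corrcoef(form_a, form_b)[0, 1]
print(f"equivalent-forms coefficient r = {r:.2f}")

# Parallel forms should also show similar means and variances (descriptive check).
print("means:    ", form_a.mean(), form_b.mean())
print("variances:", form_a.var(ddof=1).round(1), form_b.var(ddof=1).round(1))
```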
Types of Reliability (Consistency) Estimates: INTERNAL CONSISTENCY • Split-Half Reliability • Used to assess the internal consistency or equivalence of two halves of an instrument. • Usually indicated by a correlation coefficient corrected with the Spearman-Brown prophecy formula. • Number of forms (of instrument): 1 • Number of administrations: 1 • Usual Process (see the sketch below): • Obtain or generate an instrument in which the two halves were formulated to measure the same variable. • Administer the instrument to the reliability sample. • Correlate the summed scores from the first half (often the odd-numbered items) with the summed scores from the second half (often the even-numbered items). • Apply the Spearman-Brown prophecy formula to correct for splitting one instrument into halves.
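A sketch of the split-half procedure with simulated item responses (invented for illustration): sum the odd-numbered and even-numbered items, correlate the halves, then apply the Spearman-Brown prophecy formula, r_full = 2 r_half / (1 + r_half), to estimate the reliability of the full-length instrument.

```python
import numpy as np

# Simulated responses: rows = 30 respondents, columns = 10 items that all
# tap a common trait (so the halves correlate positively).
rng = np.random.default_rng(1)
ability = rng.normal(0, 1, size=(30, 1))             # common trait
items = ability + rng.normal(0, 1, size=(30, 10))    # 10 noisy items measuring it

odd_half  = items[:, 0::2].sum(axis=1)               # items 1, 3, 5, ...
even_half = items[:, 1::2].sum(axis=1)               # items 2, 4, 6, ...

r_half = np.corrcoef(odd_half, even_half)[0, 1]

# Spearman-Brown prophecy formula corrects for halving the instrument.
r_full = 2 * r_half / (1 + r_half)
print(f"half-test r = {r_half:.2f}, Spearman-Brown corrected r = {r_full:.2f}")
```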
Types of Reliability (Consistency) Estimates: INTERNAL CONSISTENCY • KR20 (Kuder-Richardson Reliability) • Used to assess the internal consistency of items on an instrument when responses are dichotomous. • Usually indicated by the coefficient generated using the KR-20 formula (there are other forms of this formula; this one is used when there are two responses: correct or incorrect). • Number of forms (of instrument): 1 • Number of administrations: 1 • Usual Process (see the sketch below): • Generate or select an instrument. • Administer the instrument to the reliability sample. • Compute the variance (σ²) of the total scores. • Compute the proportion of correct and incorrect responses to each item. • Compute the KR-20 formula.
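A sketch of the KR-20 computation, assuming a matrix of dichotomous (1 = correct, 0 = incorrect) responses invented for illustration. With k items, item difficulty p_i, q_i = 1 - p_i, and total-score variance σ², KR-20 = [k / (k - 1)] · [1 - Σ p_i q_i / σ²].

```python
import numpy as np

# Hypothetical dichotomous responses: rows = respondents, columns = items.
scores = np.array([
    [1, 1, 0, 1, 1],
    [1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0],
    [1, 1, 1, 0, 1],
    [1, 1, 0, 1, 1],
])

k = scores.shape[1]                          # number of items
p = scores.mean(axis=0)                      # proportion correct per item
q = 1 - p                                    # proportion incorrect per item
# Sample variance of total scores; some texts use the population variance.
var_total = scores.sum(axis=1).var(ddof=1)

kr20 = (k / (k - 1)) * (1 - (p * q).sum() / var_total)
print(f"KR-20 = {kr20:.2f}")
```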
Types of Reliability (Consistency) Estimates: INTERNAL CONSISTENCY • Coefficient Alpha (Cronbach’s Alpha) • Used to assess the internal consistency of items on an instrument when responses are nondichotomous. • Usually indicated by the coefficient generated using Cronbach’s formula (a more general version of the KR-20 formula; see the sketch below). • Number of forms (of instrument): 1 • Number of administrations: 1 • Usual Process: Same as for KR-20.
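The KR-20 computation generalizes to nondichotomous items by replacing Σ p_i q_i with the sum of the item variances: α = [k / (k - 1)] · [1 - Σ σ²_item / σ²_total]. A sketch with invented Likert-type data:

```python
import numpy as np

# Hypothetical 5-point Likert responses: rows = respondents, columns = items.
scores = np.array([
    [4, 5, 4, 3],
    [3, 3, 2, 3],
    [5, 5, 4, 5],
    [2, 2, 3, 2],
    [4, 4, 5, 4],
])

k = scores.shape[1]
item_vars = scores.var(axis=0, ddof=1).sum()       # sum of item variances
total_var = scores.sum(axis=1).var(ddof=1)         # variance of total scores

alpha = (k / (k - 1)) * (1 - item_vars / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")
```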
Types of Reliability (Consistency) Estimates: INTERSUBJECTIVE • INTERRATER RELIABILITY • Used to assess the degree to which two or more judges (raters) rate the same variables in the same way. • Usually needed when two or more judges (raters) will be used in a research study. • Usual Process (see the sketch below): • Select or generate an instrument. • Randomly select a number of objects or events to be rated. • Train the raters. • Have each rater rate each object or event independently. • Correlate the scores of the raters.
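A sketch of the final step, with ratings invented for illustration: two raters independently score the same ten objects, and their scores are correlated. For categorical ratings, chance-corrected indices such as Cohen's kappa are often preferred, but the correlation below matches the process described on this slide.

```python
import numpy as np

# Hypothetical ratings of the same ten objects by two independent raters.
rater_a = np.array([7, 5, 8, 6, 9, 4, 7, 8, 5, 6])
rater_b = np.array([6, 5, 8, 7, 9, 5, 6, 8, 4, 6])

r = np.corrcoef(rater_a, rater_b)[0, 1]
print(f"interrater reliability r = {r:.2f}")
```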
Practicality of Measurement • Practicality has been defined in terms of three characteristics: economy, convenience, and interpretability. • Economy • Some trade-off between ideal needs and budget • Instrument length (limiting factor: cost) • Choice of data collection method • Need for fast and economical scoring
Practicality of Measurement (cont’d) • Convenience • A measuring device passes the convenience test if it is easy to administer. • Detailed and clear instructions, with examples if needed. • Pay close attention to design and layout. • Avoid crowding of material and carryover of items from one page to another. • Interpretability • Relevant when persons other than the test designers interpret the results. In that case, test designers should include: • A statement of the functions the test was designed to measure and the procedures by which it was developed • Detailed instructions for administering and scoring • Evidence of reliability, etc.