Types of Measures • Observational • Physiological and Neuroscientific • Self-report --majority of social & behavioral science research
Self-report measures • People’s replies to written questionnaires or interviews • Can measure: • thoughts (cognitive self-reports) • feelings (affective self-reports) • actions (behavioral self-reports)
Self-Report Self-reported momentary emotions: Positive and Negative Affect Schedule (PANAS) (Watson, Clark & Tellegen,1988)
Scales of Measurement [Slide graphic comparing scale types applied to the thing being measured. Examples shown: Nominal — arbitrary labels (Hot = 1, Warm = 3, Cold = 2); Ordinal — rank order (1st Place Sample through 5th Place Sample); with Interval and Ratio scales alongside for comparison.]
Scales of Measurement: Four Types Distinction between scales is due to the meaning of the numbers • Nominal Scale—numbers assigned are only labels. • Ordinal Scale—a rank ordering. • Interval Scale—each number is equidistant from the next, but there is no zero point (majority of measures). • Ratio Scale—each number is equidistant and there is a true zero point.
Scales of Measurement Type of Scale Determines Statistics and Power
Attributes of Good Measures • Valid: the measure assesses the construct it is intended to and is not influenced by other factors. • Reliable: the measure is consistent; it provides the same result repeatedly.
Reliability and Validity • Reliable but not valid: a dependable measure, but it doesn't measure what it should. Example: arm length as a measure of self-esteem. • Valid but not reliable: measures what it should, but not dependably. Example: a stone as a measure of weight in Great Britain.
Reliability vs. Validity [Visual: bullseye diagram; the central dot = the construct we are seeking to measure.]
Reliability Assessments 1 • Test-Retest Reliability The measure is administered at two points in time to assess consistency. Works best for things that do not change over time (e.g., intelligence). • Internal Consistency Reliability Assesses the consistency of results across items within the same test administration session. 1. Intercorrelation: Cronbach's α (> .65 is preferred) 2. Split-halves reliability
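To make the internal-consistency idea concrete, here is a minimal pure-Python sketch of Cronbach's α using the standard formula α = (k/(k−1))·(1 − Σ item variances / total-score variance). The three-item, five-respondent data set is hypothetical; in practice you would run this in SPSS or a stats package.

```python
from statistics import pvariance

def cronbach_alpha(items):
    """items: a list of per-item response lists, all the same length.
    Returns Cronbach's alpha, an index of internal consistency."""
    k = len(items)
    # Total score for each respondent (sum across items):
    totals = [sum(resp) for resp in zip(*items)]
    item_var = sum(pvariance(it) for it in items)
    total_var = pvariance(totals)
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical 3-item scale answered by 5 respondents:
scale = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 5, 2, 4, 3],
]
print(round(cronbach_alpha(scale), 2))  # -> 0.89
```

An α this high would comfortably clear the > .65 rule of thumb mentioned above; consistently worded items that tap the same construct tend to push α upward.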
Types of Validity • Content Validity Does the measure represent the full range of items it should cover, based on the meaning of the construct? • Predictive Validity Does the measure predict criterion measures assessed at a later time? Ex: Does an aptitude assessment predict later success? • Construct Validity Does the measure actually tap into the intended construct?
Developing Items for a New Measure • Guided spontaneous responses from individuals in the sample population (thought listings, essay questions…) • Face-valid items: develop items that appear to measure your construct. • Pilot test a larger set of items and choose those that are more reliable & valid. • Reverse-coded items indicate whether participants are paying attention.
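Reverse-coded items have to be flipped back onto the scale's direction before computing a total score; the standard formula is (scale minimum + scale maximum − response). A minimal sketch on a hypothetical 1–5 agreement scale:

```python
def reverse_code(response, scale_min=1, scale_max=5):
    """Flip a reverse-worded item back onto the scale's direction."""
    return scale_min + scale_max - response

# On a 1-5 scale, a response of 2 becomes a 4:
print(reverse_code(2))  # -> 4
# On a 0-9 scale (like the Likert example below), pass the bounds:
print(reverse_code(7, scale_min=0, scale_max=9))  # -> 2
```

A respondent who is paying attention should answer a reverse-worded item in the opposite direction, so after recoding, their responses line up with the rest of the scale.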
Use common response scale types • Likert Scale: To what extent do you agree with the following statement… (0 to 9, strongly disagree-strongly agree) • Semantic Differential: What is your response to (insert person, object, place, issue)? (-5 to +5, good-bad, like-dislike, warm-cold)
Pitfalls of New Measures • The measure already exists in the literature. • Restriction of range: responses cluster at the high or low end of the scale (skew). • Can you trust responses? Social desirability, demand characteristics & satisficing.
Simple things I have learned. 1. Develop subjective and objective versions of a new scale • Example: Contact with Blacks scale: Objective: % of your neighborhood growing up Subjective: No Blacks—a lot of Blacks 2. Using 5+ items worded similarly provides greatly increased reliability and likelihood of success. 3. Human targets are rarely evaluated below the midpoint of the scale, so use more scale points (9 instead of 5 points).
**Most Important** If you have a larger study ready and a great idea for a new scale comes up, build something and give it a shot!
A Few Types of Non-scale measures • Response time measures • Physiological measures • Neuroscience: fMRI and other brain imaging • Indirect measures: projective tests, etc. • Facial and other behavior coding schemes (verbal/nonverbal) • Cognitive measures: (memory, perception…) • Task performance: academic, physical… • Game theory: prisoner’s dilemma…
SPSS: Reliability Cronbach's α: Analyze → Scale → Reliability Analysis. Pull over all scale items. Click Statistics, select inter-item correlations, then OK. Try the Van Camp, Barden & Sloan (2010) data file, Centrality1–Centrality8; compare to the manuscript. Many other reliability analyses involve correlations (test-retest, split halves) or probabilities (inter-rater reliability).
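As a sketch of the correlation-based analyses just mentioned, here is split-half reliability in plain Python: the items are split into odd- and even-numbered halves, the half scores are correlated, and the correlation is stepped up to full test length with the Spearman-Brown formula. The item data are hypothetical.

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def split_half_reliability(items):
    """items: list of per-item response lists.
    Correlates odd- vs even-numbered item halves, then applies the
    Spearman-Brown correction: r_full = 2r / (1 + r)."""
    odd_scores = [sum(r) for r in zip(*items[0::2])]
    even_scores = [sum(r) for r in zip(*items[1::2])]
    r = pearson(odd_scores, even_scores)
    return 2 * r / (1 + r)

# Hypothetical 4-item scale answered by 4 respondents:
items = [[1, 2, 3, 4], [1, 2, 4, 4], [2, 3, 4, 5], [2, 3, 4, 4]]
print(round(split_half_reliability(items), 2))  # -> 0.97
```

The Spearman-Brown step matters because each half is only half as long as the full scale, and shorter scales are less reliable; the correction estimates what the full-length test would achieve.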
Advanced Scale Development Techniques • Factor Analysis: determines the factor structure of measures (does your measure assess one construct or multiple constructs? Is your proposed construct coherent?) • Multi-trait Multi-method Matrix: uses a combination of existing measures and manipulations to establish convergent/divergent validity with the measure.
Reliability Assessments 2 • Inter-rater Reliability Independent judges score participant responses and the % of agreement is assessed to indicate reliability. Used particularly for measures requiring coding (video coding, spontaneous responses…).
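The percent-of-agreement index described above can be sketched in a few lines. The facial-coding labels are hypothetical; note that chance-corrected indices such as Cohen's κ are often preferred when categories are few.

```python
def percent_agreement(rater_a, rater_b):
    """Fraction of cases two independent coders assigned
    the same category."""
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

# Two hypothetical coders categorizing the same 5 video clips:
coder1 = ["smile", "frown", "smile", "neutral", "smile"]
coder2 = ["smile", "frown", "neutral", "neutral", "smile"]
print(percent_agreement(coder1, coder2))  # -> 0.8
```

In practice, coders are trained on a shared codebook until agreement is acceptably high before the full data set is scored.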