Understanding what makes a “good” questionnaire: Reliability and validity: COURSE QUESTIONS
Dan Coletti, Ph.D., Assistant Investigator, Division of Psychiatry Research
Pediatric Subspecialty Fellows Research Course, March 19th, 2013
Scenario: A fellow in Developmental and Behavioral Pediatrics is interested in learning more about how parents understand the risks and benefits of childhood immunizations. An exhaustive literature review reveals that no such measure currently exists, so a decision is made to construct a new 10-item paper-and-pencil questionnaire, the “Risks And Benefits of Infant Disease prevention” (RABID), to learn more about this construct.
The fellow includes the following item in the measure:

Circle the number that describes how much you agree with this statement: “Parents who allow their innocent children to be subject to the ticking time-bomb of having a measles-mumps-rubella shot are:”

NOT SO FOOLISH                                        FOOLISH
0    1    2    3    4    5    6    7    8    9    10

Question 1: What’s the best way to describe the measurement scale this item uses?
A. Nominal
B. Ordinal
C. Temporal
D. Interval

Question 2: The wording of this question…
A. Might indicate a respondent is “faking good.”
B. Is “loaded” in content and tone.
C. Has poor test-retest reliability.
D. Will show floor effects.
Question 3: The fellow administers the RABID to a sample of 100 parents on a Friday morning. Then the same parents take the exact same test again later that day. The correlation between morning and afternoon scores is 0.86. This would be an example of acceptable:
A. Inter-rater reliability
B. Test-retest reliability
C. Internal consistency
D. Face validity
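As an aside not found in the original slides: test-retest reliability of this kind is typically quantified as the Pearson correlation between the two administrations. A minimal Python sketch, using made-up scores (not data from the scenario):

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two lists of scores."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    num = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    den = (sum((a - mean_x) ** 2 for a in x)
           * sum((b - mean_y) ** 2 for b in y)) ** 0.5
    return num / den

# Hypothetical morning/afternoon RABID totals for five parents
# (illustrative numbers only):
morning = [54, 61, 48, 72, 65]
afternoon = [56, 60, 50, 70, 67]

# A value near 1.0 indicates strong test-retest reliability;
# values around 0.8 or higher are conventionally considered acceptable.
r = pearson_r(morning, afternoon)
```

In practice a library routine such as `scipy.stats.pearsonr` would be used, but the hand-rolled version above makes the computation explicit.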
Question 4: Two groups of parents are administered the RABID before and after two different workshops.
• The first group attends the workshop entitled “Autism and Childhood Illness: Dispelling Myths.”
• The second group’s workshop is entitled “Decorating Ideas for Your Young Child’s Bedroom.”

Here are the data (higher scores indicate more positive attitudes):

Workshop                                               Mean BEFORE    Mean AFTER
“Autism and Childhood Illness: Dispelling Myths”         54.23          86.59
“Decorating Ideas for Your Young Child’s Bedroom”        53.87          53.96 (no change)

This set of findings is an example of:
A. Response sets.
B. Demand characteristics and social desirability.
C. Documenting convergent and divergent validity.
D. Alternate-form reliability.
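The logic above (scores should shift after a relevant workshop but stay flat after an irrelevant one) can be sketched as a simple pre/post comparison. A hypothetical illustration with invented individual scores, not the study’s actual analysis:

```python
def mean_change(before, after):
    """Average within-subject change from pre-test to post-test."""
    return sum(post - pre for pre, post in zip(before, after)) / len(before)

# Hypothetical individual RABID scores (made up for illustration):
myths_pre, myths_post = [50, 55, 58, 54], [84, 88, 87, 85]
decor_pre, decor_post = [52, 54, 56, 53], [53, 54, 55, 54]

# A large change after the relevant workshop and a near-zero change
# after the unrelated workshop is the pattern that supports
# convergent/divergent validity.
relevant_shift = mean_change(myths_pre, myths_post)    # large positive
irrelevant_shift = mean_change(decor_pre, decor_post)  # near zero
```

A formal analysis would also test whether these shifts are statistically significant (e.g., with paired and between-group tests), which the sketch omits.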
CORRECT ANSWERS:
1. D. Likert-type items with evenly spaced numerical values are conventionally treated as interval-level data.
2. B. This is a loaded question, written with an apparent bias against childhood immunizations; even the scale anchors are loaded.
3. B. Scores on the morning administration are highly correlated (r = 0.86) with scores on the afternoon administration of the same test.
4. C. Convergent/divergent validity: the measure is sensitive to change after participants learn more about the construct of interest, while mean scores do not change after an unrelated presentation.