This study explores the roles and qualities of interviewers in the survey process, including recruiting and administering surveys. It examines the impact of interviewer performance on survey error and the relationship between interviewer experience and data quality.
The Dual Tasks of Interviewers Ting Yan Colm O'Muircheartaigh Jenny Kelly Pat Cagney Rebecca Jessoe NORC at the University of Chicago Kenneth Rasinski University of Chicago Gary Euler Centers for Disease Control and Prevention
What do interviewers do? • Recruiting potential respondents • Introducing survey to potential respondents • Gaining cooperation • Screening for eligible respondents • Administering interviews • Reading questions • Recording answers • Probing • Providing definitions
Desired qualities of interviewers • When recruiting respondents • Adaptive and flexible (Converse & Schuman, 1974) • Tailoring (Groves & McGonagle, 2001; Houtkoop-Steenstra & van den Bergh, 2002; Maynard & Schaeffer, 2002) • Maintaining interaction (Groves & McGonagle, 2001) • Interviewers who developed their own approach had lower refusal and higher cooperation rates than those who followed a standard script • When administering interviews • Technician-like (Converse & Schuman, 1974) • Standardized interviewing (Fowler & Mangione, 1990) • Conflicting?
How do interviewers affect survey error? • Recruiting respondents • Nonresponse error • If interviewers consistently attract respondents with certain characteristics • Administering interviews • Measurement error • Interviewer bias • Interviewer variance • If interviewers consistently influence responses in a certain way
Research questions • Is there a relationship between interviewers’ performance at recruiting respondents and administering interviews? • Are interviewers who are good at recruiting respondents also good at collecting data of good quality? • How does interviewer experience mediate this relationship, if the relationship exists?
Data • National Immunization Survey (NIS) • Nationwide, list-assisted random-digit-dialing (RDD) survey conducted by NORC for the Centers for Disease Control and Prevention • Monitors the vaccination rates of children between 19 and 35 months of age • 2007 Q3 data • 712 interviewers worked • 499,490 telephone numbers dialed • 4,438 interviews obtained
Which interviewers were included in the analysis? • Interviewers who had completed interview(s) on first contact • 295 interviewers • 3,114 completes
Measures of recruitment task • (First contact) Refusal rate = # refusals / # first contact cases • (First contact) Completion rate = # completes / # first contact cases • (First contact) Eligibility rate = # eligibles / # first contact cases • Denominator: first contact cases • Virgin (fresh) cases or cases that were dialed by autodialers only • They had not been touched by a human interviewer before being sent to the current interviewer • Refusal conversion rate = # converted refusals / # refusals
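The four rate measures above amount to simple fractions; a minimal sketch with invented counts (not NIS production code) shows how they could be computed per interviewer:

```python
# Illustrative sketch (invented counts, not NIS code): the four
# recruitment-rate measures for one interviewer. The first three use
# first-contact cases as the denominator; refusal conversion uses refusals.
def recruitment_rates(first_contact, refusals, completes, eligibles,
                      converted_refusals):
    return {
        "refusal_rate": refusals / first_contact,
        "completion_rate": completes / first_contact,
        "eligibility_rate": eligibles / first_contact,
        "refusal_conversion_rate":
            converted_refusals / refusals if refusals else 0.0,
    }

# Invented example: 200 first-contact cases for one interviewer
rates = recruitment_rates(first_contact=200, refusals=40, completes=12,
                          eligibles=25, converted_refusals=6)
# rates["refusal_rate"] == 0.2, rates["refusal_conversion_rate"] == 0.15
```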
Measures of administration task • Interviewer effect (ρint) • Adherence to standardized interviewing (monitoring data) • Item nonresponse • Interview time (cost)
Good openers vs. Bad openers • Good openers: 3 out of 4 rates are above the median
Good openers vs. Bad Openers (II) • When experience is introduced • Median split on # of days worked at NORC
ρint • ρint: intra-interviewer correlation • Deffint = 1 + ρint × (m − 1), where m is the average number of interviews per interviewer • Hierarchical linear models • Respondent data as level-1 data • Interviewer data as level-2 data • Unconditional model with no explanatory variables at either level • ρint = between-interviewer variance / total variance
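As a sketch of how ρint and Deffint can be estimated: the study fits an unconditional hierarchical linear model, but the same quantities can be illustrated with a simpler one-way ANOVA (method-of-moments) estimator on invented, balanced data:

```python
# Sketch (not the study's code): estimating the intra-interviewer
# correlation rho_int and the interviewer design effect from
# respondent-level data grouped by interviewer, using a one-way ANOVA
# (method-of-moments) estimator for balanced workloads. Data are invented.
from statistics import mean

def rho_and_deff(groups):
    """groups: one list of responses per interviewer."""
    k = len(groups)                      # number of interviewers
    n = sum(len(g) for g in groups)      # total respondents
    m = n / k                            # average workload per interviewer
    grand = mean(x for g in groups for x in g)
    # between- and within-interviewer sums of squares
    ssb = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ssw = sum((x - mean(g)) ** 2 for g in groups for x in g)
    msb, msw = ssb / (k - 1), ssw / (n - k)
    # balanced-design variance components
    var_between = max((msb - msw) / m, 0.0)
    rho = var_between / (var_between + msw)       # rho_int
    deff = 1 + rho * (m - 1)                      # Deff_int
    return rho, deff

# Invented example: two interviewers with systematically different answers,
# so most of the variance is between interviewers and rho is high
rho, deff = rho_and_deff([[1, 2, 1, 2], [4, 5, 4, 5]])
```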
Monitoring scores • Monitoring items • Reads questionnaire verbatim • Probes without biasing or leading / probes for Don't Knows • Reads scales as directed, etc. • Scores • 1 = Error • 2 = No Error • 3 = Outstanding • Item-level monitoring score for each interviewer • Overall summary score for each interviewer
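A minimal sketch of how item-level and overall summary scores could be rolled up from the 1/2/3 monitor ratings; the averaging rule and the session data below are assumptions for illustration, not taken from the talk:

```python
# Sketch (invented ratings): roll up per-session monitor ratings
# (1 = Error, 2 = No Error, 3 = Outstanding) into an item-level score
# per monitoring item and one overall summary score per interviewer.
# Simple averaging is an assumed rule, not the talk's documented method.
from statistics import mean

sessions = [  # one dict of item ratings per monitored session
    {"verbatim": 2, "probing": 3, "scales": 2},
    {"verbatim": 1, "probing": 2, "scales": 2},
]

# item-level score: mean rating for each monitoring item
item_scores = {item: mean(s[item] for s in sessions)
               for item in sessions[0]}
# overall summary score: mean of the item-level scores
overall = mean(item_scores.values())
```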
Monitoring scores (II) • Good openers on average have higher mean scores than bad openers, but the difference is significant for only one monitoring item • Reads questionnaire verbatim • Verifies dates and confirms spelling • Properly obtains all provider information • Uses job aids as needed • Reads scales as directed • Records open-ended responses verbatim • Probes without biasing or leading / probes Don't Knows
Item Nonresponse • A set of 24 questions everyone had to answer • Item nonresponse rate = # of times R didn't provide an answer / 24
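The rate above is a simple fraction per respondent; a sketch with invented answers (treating refusals and don't-knows as missing is an assumption, not necessarily the study's exact coding rule):

```python
# Sketch (invented data): item nonresponse rate over the 24 questions
# every respondent had to answer. Counting None, "DK", and "REF" as
# missing is an illustrative assumption.
MISSING = {None, "DK", "REF"}

def item_nonresponse_rate(answers):
    """answers: the 24 recorded responses for one respondent."""
    assert len(answers) == 24
    return sum(a in MISSING for a in answers) / 24

# Invented respondent: 21 substantive answers, 3 missing
rate = item_nonresponse_rate(["yes"] * 21 + ["DK", None, "REF"])
# rate == 3 / 24 == 0.125
```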
Average Interview Duration (cost) • Time spent on completing an interview • The longer the interview time, the more costly
Provider consent rate: 79.8% vs. 74.9%
Conclusions and Discussion • Good-opener interviewers • More completes • Higher refusal conversion, completion, and eligibility rates • Lower refusal rate • Good-opener interviewers • Higher intra-interviewer correlation • But more adherence to standardized interviewing (higher monitoring scores) • More missing data • Are good openers also good at collecting data of good quality? • No one clear answer • Depends on which measure of the interviewing task is used • Experience didn't matter much
Limitations and Next Steps • Only used various rates to measure interviewers' performance at the recruitment stage • Demographic composition by interviewer status • Nonresponse error by interviewer status • Only used proxy measures of data quality • Direct measures of measurement bias • Interviewer and respondent characteristics not considered • Bringing interviewer and respondent characteristics into the picture • Examining the effect of matched interviewer and respondent characteristics
Thank You! Yan-ting@norc.org