Mixed Modes and Measurement Error Gerry Nicolaas
Mixed Modes and Measurement Error
Funded under the ESRC Survey Design and Measurement Initiative
3-year contract starting 1 Oct 2007
Collaboration between academics and a data collection organisation
Research Team
National Centre for Social Research (NatCen): Gerry Nicolaas, Steven Hope
Institute for Social & Economic Research (ISER): Peter Lynn, Annette Jäckle, Alita Nandi, Nayantara Dutt
Freelance Survey Methods Consultant: Pam Campanelli
Modes of data collection:
Interviewer-administered: face-to-face interview (PAPI, CAPI); telephone interview (PAPI, CATI)
Self-administered: postal questionnaire; self-completion questionnaires with interviewer present (PAPI, CASI, A-CASI); IVR and TDE in telephone surveys; web/email
Choice of mode = a trade-off between error sources: sampling error, coverage error, non-response error, measurement error, recording error
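One common way to formalise this trade-off, not stated on the slide but standard in the total survey error literature, is to write the mean squared error of a survey estimate as its sampling variance plus the squared sum of the bias contributions from the other error sources:

```latex
\[
\operatorname{MSE}(\hat{\theta})
  \;=\; \operatorname{Var}(\hat{\theta})
  \;+\; \bigl(B_{\mathrm{cov}} + B_{\mathrm{nr}} + B_{\mathrm{meas}}\bigr)^{2}
\]
% Var(\hat{\theta}) : sampling variance of the estimate
% B_cov             : coverage bias (units the mode's frame cannot reach)
% B_nr              : non-response bias
% B_meas            : measurement and recording bias
```

Choosing a single mode, or a mix of modes, shifts the relative size of these components rather than removing any of them, which is why the choice is a trade-off.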
Differences across modes:
Coverage error: different members of the population have a zero chance of inclusion depending on the mode
Non-response error: differential non-response bias
Measurement error: respondents give different answers in different modes
Increasing use of mixed modes:
To maximise response rates (unit & item): offer a choice of mode to respondents; use an alternative mode among non-respondents
To reduce measurement error: self-completion modes for sensitive items within an f2f or telephone interview
To reduce costs: sequential designs with the cheapest mode first; different modes at different waves of a longitudinal study
Mixing modes:
Different modes to collect different data items from the same respondents (e.g. CAPI with a CASI module): data comparability not affected; data quality may be improved
Different modes to collect the same data from different respondents (e.g. telephone follow-up among postal non-responders; panel survey with wave 1 = CAPI and wave 2 = CATI): potential for mode effects
For an overview of using mixed modes in surveys: Edith D. de Leeuw (2005), “To Mix or Not to Mix Data Collection Modes in Surveys”, Journal of Official Statistics, 21(2), pp. 233–255
One of de Leeuw's conclusions: hardly any theoretical or empirical knowledge is available on how to design optimal questionnaires for mixed-mode data collection.
Main Objective: practical advice on how to improve the portability of questions across modes
Which mode combinations are likely to produce comparable responses?
Which types of questions are more susceptible to mode effects?
Research Design
Literature review & framework of mixed modes: develop a theoretical framework; identify gaps in the evidence base and formulate hypotheses to address them
Quantitative data analysis: test hypotheses using existing datasets and new experimental data
Cognitive interviewing: explore how respondents process questions in different modes
The Quantitative Data
Existing datasets, e.g. 1999 Welsh Assembly Election Study; 2005 Social Capital Survey; European Social Survey mode experiments; 2006 Health Survey for England London boost
New experimental data: follow-up surveys to the BHPS & NatCen Omnibus, focusing on f2f, telephone and web comparisons
NatCen Omnibus Survey
Two rounds of face-to-face data collection: Jul/Aug 2008 and Sep/Oct 2008
Follow-up surveys after 6 months: Omnibus respondents who agreed to follow-up and have web access; random allocation = 400 f2f, 400 tel, 400 web (see the allocation sketch below)
Cognitive interviews after 6 months: purposively selected sample of 36 respondents from the follow-up surveys
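A minimal sketch of the kind of random allocation described above, assuming a simple list of consenting respondents with web access; the respondent IDs, quotas and seed are illustrative, not taken from the study:

```python
import random

def allocate_modes(respondent_ids, quotas, seed=42):
    """Randomly allocate respondents to follow-up modes.

    respondent_ids: IDs of respondents who agreed to follow-up and have web access.
    quotas: dict mapping mode name -> number of respondents to allocate.
    Returns a dict mapping respondent ID -> allocated mode.
    """
    if sum(quotas.values()) > len(respondent_ids):
        raise ValueError("Not enough respondents for the requested quotas")
    rng = random.Random(seed)      # fixed seed so the allocation is reproducible
    shuffled = list(respondent_ids)
    rng.shuffle(shuffled)

    allocation = {}
    start = 0
    for mode, n in quotas.items():
        for rid in shuffled[start:start + n]:
            allocation[rid] = mode
        start += n
    return allocation

# Illustrative use: 1,200 consenting respondents split 400/400/400.
ids = [f"R{i:04d}" for i in range(1200)]
alloc = allocate_modes(ids, {"f2f": 400, "tel": 400, "web": 400})
```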
British Household Panel Study
BHPS Wave 18 (all f2f): Sep 2008 – Dec 2008
Follow-up surveys after 6 months: BHPS respondents who agreed to follow-up and have web access; random allocation = 400 tel, 400 web; no separate f2f data collection at this stage
BHPS Wave 19 (all f2f): Sep 2009 – Dec 2009
[Design diagram] NatCen Omnibus (20 BHPS questions + other modules) → f2f, telephone and web follow-ups after 6 months, each carrying the 20 BHPS questions plus another 40 questions; cognitive interviews after 6 months. BHPS W18 (20 BHPS questions + other BHPS questions) → telephone and web follow-ups after 6 months, each carrying the 20 BHPS questions plus another 40 questions; BHPS W19 (12 months after W18) repeats the 20 BHPS questions alongside the other BHPS questions.
Key Features of Design
Repeated measures enable estimation of mode effects on measures of change (see the sketch below)
Random allocation to modes
Comparison of ‘seasoned’ panel members with ‘fresh’ survey sample members
Cost-efficient design to collect very rich experimental data
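A sketch of how a mode effect on a measure of change could be estimated with a repeated-measures design like this one, assuming a toy data frame with one row per respondent, a baseline (f2f) answer, a follow-up answer and the randomly allocated follow-up mode. The variable names and the simple two-group t-test are illustrative, not the project's actual analysis plan:

```python
import pandas as pd
from scipy import stats

# Toy repeated-measures data: the same item asked at baseline (f2f) and follow-up.
df = pd.DataFrame({
    "respondent": ["R0001", "R0002", "R0003", "R0004", "R0005", "R0006"],
    "mode_followup": ["tel", "web", "tel", "web", "tel", "web"],
    "y_baseline": [3, 4, 2, 5, 3, 4],   # e.g. a 1-5 attitude scale, asked f2f
    "y_followup": [3, 2, 2, 3, 4, 2],   # same item, asked in the allocated mode
})

# Within-person change removes stable respondent differences; with random
# allocation, a systematic difference in mean change between the telephone
# and web groups estimates the (relative) mode effect.
df["change"] = df["y_followup"] - df["y_baseline"]

tel_change = df.loc[df["mode_followup"] == "tel", "change"]
web_change = df.loc[df["mode_followup"] == "web", "change"]

t_stat, p_value = stats.ttest_ind(tel_change, web_change, equal_var=False)
print(f"Mean change (tel): {tel_change.mean():.2f}")
print(f"Mean change (web): {web_change.mean():.2f}")
print(f"Welch t-test: t = {t_stat:.2f}, p = {p_value:.3f}")
```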
Limitations
Restricted to respondents with web access: we rely primarily on randomisation within the sample as our basis for inference, but this is still a relatively broad basis for extrapolation to the general population compared with other mixed-mode studies
BHPS f2f follow-up is 12 months later rather than 6 months: addressed by comparing data from the Omnibus and its f2f follow-up after 6 months with data from the BHPS and its f2f follow-up after 12 months
Aims of the literature review
Review the evidence on differences in measurement due to mode for: different types of questions; different combinations of modes
Identify gaps (mode pairs and question types) in the evidence base
Formulate hypotheses to address these gaps
The literature review
Initially over 700 papers identified
Currently screening papers for relevance and summarising relevant papers
Criteria for inclusion: comparison of 2 or more modes; modes of survey data collection; measurement error
Classify existing evidence:
The question: question type (e.g. attitude, behaviour, other factual); question format (e.g. closed/open, scale, number of categories); task difficulty; sensitivity of the question
The mode comparison: interviewer presence (face-to-face, telephone, none); delivery of the question (visual, aural); response list (visual, aural); recording of responses (oral, written)
Continued on next slide
Classify existing evidence (continued):
The results: hypotheses tested; indicators and statistical methods; results (see the coding-frame sketch below)
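As a sketch of what such a coding frame might look like in practice, the field names and categories below simply mirror the dimensions listed on these two slides; they are not the project's actual coding instrument:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ModeComparisonStudy:
    """One screened paper, coded along the dimensions above."""
    # The question
    question_type: str                # e.g. "attitude", "behaviour", "other factual"
    question_format: str              # e.g. "closed", "open", "scale"
    task_difficulty: str              # e.g. "low", "medium", "high"
    sensitive: bool
    # The mode comparison
    interviewer_presence: List[str]   # e.g. ["face-to-face", "none"]
    question_delivery: List[str]      # "visual" and/or "aural"
    response_list: List[str]          # "visual" and/or "aural"
    response_recording: List[str]     # "oral" and/or "written"
    # The results
    hypotheses_tested: List[str] = field(default_factory=list)
    indicators_and_methods: str = ""
    results_summary: str = ""

# Illustrative entry for a hypothetical CAPI vs. web comparison.
example = ModeComparisonStudy(
    question_type="attitude",
    question_format="scale",
    task_difficulty="medium",
    sensitive=True,
    interviewer_presence=["face-to-face", "none"],
    question_delivery=["aural", "visual"],
    response_list=["visual", "visual"],
    response_recording=["oral", "written"],
    hypotheses_tested=["social desirability bias larger under interviewer administration"],
    indicators_and_methods="difference in proportions reporting the sensitive behaviour",
    results_summary="more socially desirable answers in the interviewer-administered mode",
)
```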
Synthesis of the literature
Causes of differential measurement error, e.g. interviewer presence, cognitive task
Nature of differential measurement error, e.g. social desirability bias, survey satisficing
Magnitude of differential measurement error
Causes of Mode Effects on Measurement (Roberts, Jäckle & Lynn, 2006)
[Diagram] Interviewer presence affects privacy/legitimacy (anonymity vs. rapport), shaping willingness to disclose and hence social desirability bias; it also sets the pace and introduces non-verbal communication and multitasking. The stimulus defines the cognitive task (comprehension, retrieval, judgement, response); task difficulty together with respondent motivation and ability determines whether sufficient effort is made or the task is shortcut.
Initial observations from the literature review:
Many experiments are not theory-driven: the focus is on descriptive comparisons of response distributions across modes, with a lack of generalisable inferences about causal mechanisms
Many papers provide insufficient information about the questions and modes being tested: question type & format, sensitivity of the question, task difficulty; interviewer presence, delivery of the question and response options, recording of responses
Next steps:
Complete the literature review and develop the theoretical framework
Identify gaps in the evidence base
Design the extra 40 questions for the mixed-modes experiment
RC-33 Conference in Naples, 1-5 Sept 2008
The research team is chairing a session on Mixed Modes and Measurement Error
Steven Hope is presenting a paper on preliminary results of the literature review