This presentation explores the significance of response rates in online panel surveys and their influence on data quality, focusing on the Citizen Panel at the University of Gothenburg, and investigates how cumulative response rates affect accuracy and representativity over time.
Do Response Rates Matter in Online Panels? Representativity at Different Levels of Cumulative Response Rates
Johan Martinsson, University of Gothenburg
www.lore.gu.se
About response rates
• RR used to be the main indicator of data quality
• Perhaps today more questionable (e.g. Groves & Peytcheva 2008)
• Moving to online surveys and panels, even more so
• Are participation rates for online panels meaningful?
Participation rates in 163 surveys from the Citizen Panel at the University of Gothenburg
• The most important predictor is the demographic composition of the invited sample
Moving to a more controlled comparison: the probability-based part of the Citizen Panel
• The emerging indicator for probability-based panels is the Cumulative Response Rate (CRR)
• CRR = recruitment rate × profile rate × participation rate
• Both recruitment rates and CRRs are usually low
• And it is not clear how important they are
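A minimal sketch of the CRR calculation under the definition above; the stage rates used here are invented for illustration, not figures from the Citizen Panel:

```python
# Cumulative response rate (CRR) for a probability-based panel:
# the product of the response rates at each stage of panel membership.

def cumulative_response_rate(recruitment_rate: float,
                             profile_rate: float,
                             participation_rate: float) -> float:
    """CRR = recruitment rate x profile rate x participation rate."""
    return recruitment_rate * profile_rate * participation_rate

# Hypothetical stage rates: even moderately good stage rates
# compound into a low CRR.
crr = cumulative_response_rate(0.15, 0.90, 0.60)
print(f"CRR = {crr:.1%}")  # CRR = 8.1%
```

This compounding is why CRRs are usually low even when each individual stage looks reasonable.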
Briefly about the Citizen Panel and LORE
• Started in 2010 with support from the University of Gothenburg
• The Citizen Panel has approx. 9,000 members from probability-based recruitment from population samples
• And approximately 57,000 members from opt-in recruitment
• You can read more at www.lore.gu.se
Response rates in web panels
• In 2012 LORE conducted a large probability-based recruitment with an experimental design
• This resulted in variation in recruitment rates across treatment groups
Questions
• What happens to response rates and CRRs over time in a panel?
• How are they related to accuracy/representativity?
How to examine representativeness/accuracy
• We calculate average absolute deviations (AAD) from a good benchmark (a sketch of the calculation follows after the indicator lists)
• 6 demographic indicators: gender, age, country of birth, marital status, education, labour market situation
• These benchmarks come from Statistics Sweden
• For four of these, register data are also available in the sample
How to examine representativeness/accuracy
• 2 political indicators (quasi-benchmarks)
• Interest in politics (from the SOM Institute survey, mail, n = 6,000)
• Vote intention (Statistics Sweden, RDD, n = 9,000)
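A minimal sketch of the AAD calculation, assuming each indicator's distribution is stored as category shares in percent; the gender figures below are invented for illustration, not the study's data:

```python
# Average absolute deviation (AAD) of sample estimates from a benchmark,
# computed for one indicator over its categories.

def aad(sample_shares: dict[str, float],
        benchmark_shares: dict[str, float]) -> float:
    """Mean absolute difference (percentage points) across categories."""
    diffs = [abs(sample_shares[c] - benchmark_shares[c])
             for c in benchmark_shares]
    return sum(diffs) / len(diffs)

# Hypothetical example: gender in the sample vs. a Statistics Sweden benchmark.
benchmark = {"women": 50.3, "men": 49.7}
sample = {"women": 46.0, "men": 54.0}
print(f"AAD = {aad(sample, benchmark):.1f} percentage points")  # AAD = 4.3

# A group's overall accuracy score is then the mean of the AADs
# over all 8 indicators (6 demographic + 2 political).
```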
1. What happens to response rates over time?
• Initial span of recruitment rates across groups: 8.4 percentage points
• After two years: 2.7 percentage points
• Higher RR => higher attrition
• Recruitment with incentives => higher attrition
2. How are RR and CRR related to accuracy/representativity?
• First, we examine accuracy at the recruitment stage
• Next, accuracy over time in the panel
2. How are RR and CRR related to accuracy/representativity?
• At recruitment we have 11 treatment groups with recruitment rates between 6 and 21 percent
• We examine the average AAD across the 8 indicators for each group (sketched below)
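A sketch of the group-level correlation reported in the figures that follow, assuming one recruitment rate and one mean AAD per treatment group; the per-group numbers are invented placeholders, not the study's data:

```python
# Pearson correlation between recruitment rate and accuracy (mean AAD)
# across treatment groups.
from statistics import correlation  # Python 3.10+

# Hypothetical values for 11 treatment groups
# (rates in percent, AAD in percentage points).
recruitment_rate = [6, 8, 9, 11, 12, 14, 15, 17, 18, 20, 21]
mean_aad = [3.1, 2.9, 3.0, 2.6, 2.8, 2.5, 2.7, 2.3, 2.4, 2.2, 2.1]

r = correlation(recruitment_rate, mean_aad)
print(f"r = {r:.2f}")  # negative: higher recruitment rate, smaller error
```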
[Figures: accuracy plotted against response rates across groups. All 8 indicators: r = -.61 and r = -.30; vote intention (8 parties): r = -.44 and r = -.14.]
2. How are RR and CRR related to accuracy/representativity?
• We observe a weak correlation at the recruitment stage
• But what about later on? For that purpose we have a smaller set of groups that allows strict comparisons
• We examine representativeness 2 years later (8 panel waves later)
Summing up
• High initial recruitment rates deteriorate substantially over time, and differences between groups diminish
• Higher cumulative response rates consistently yield lower average errors ...
• ... when comparing over time within a recruitment cohort
• ... and when comparing between cohorts at the same wave / panel age
• But panel attrition seems to play a part independent of cumulative response rates (because it is not at random)
• Overall, correlations between CRR and accuracy seem low
• Caveats: small samples for accuracy, unweighted data, limited variation in recruitment rates and CRRs
Read more at www.lore.gu.se (Laboratory of Opinion Research, LORE)