
Do Response Rates Matter in Online Panels?

Explore the significance of response rates in online panel surveys and their influence on data quality, with a focus on the Citizen Panel at the University of Gothenburg. Investigate how cumulative response rates affect accuracy and representativity over time.



Presentation Transcript


  1. Do Response Rates Matter in Online Panels? Representativity at Different Levels of Cumulative Response Rates • Johan Martinsson • University of Gothenburg • www.lore.gu.se

  2. About response rates • RR used to be the main indicator of data quality • Today this is perhaps more questionable (e.g. Groves & Peytcheva 2008) • With the move to online surveys and panels, even more so • Are participation rates for online panels meaningful?

  3. Yeager et al. 2011, p. 731

  4. Participation rates in 163 surveys from the Citizen Panel at the University of Gothenburg • The most important predictor is the demographic composition of the invited sample

  5. Moving to a more controlled comparison: the probability-based part of the Citizen Panel • The emerging indicator for probability-based panels is now the Cumulative Response Rate (CRR) • CRR = Recruitment rate × Profile rate × Participation rate • Both recruitment rates and CRRs are usually low • And it is not clear how important they are
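To make the compounding concrete, here is a minimal sketch of the CRR formula from slide 5. The function mirrors the formula directly; the three example rates are hypothetical, chosen only to illustrate why CRRs for probability-based panels typically end up low.

```python
# Minimal sketch of the cumulative response rate (CRR) formula on slide 5.
# The example rates below are hypothetical, not figures from the study.

def cumulative_response_rate(recruitment_rate: float,
                             profile_rate: float,
                             participation_rate: float) -> float:
    """CRR = recruitment rate * profile rate * participation rate."""
    return recruitment_rate * profile_rate * participation_rate

# E.g. 15% of the sample is recruited, 80% of recruits complete the profile
# survey, and 60% of those answer a given wave: CRR compounds down to 7.2%.
print(cumulative_response_rate(0.15, 0.80, 0.60))  # 0.072
```

Because the three rates multiply, even moderately good stage-wise rates produce a low CRR.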

  6. Briefly about the Citizen Panel and LORE • Started in 2010 with support from the University of Gothenburg • The Citizen Panel has approx. 9,000 members from probability-based recruitments from population samples • And approximately 57,000 members from opt-in recruitments • You can read more at www.lore.gu.se

  7. Response rates in web panels • In 2012 LORE conducted a large probability-based recruitment with an experimental design • This resulted in variation in recruitment rates

  8. Recruitment rates in 9 different treatment groups

  9. Questions • What happens to response rates and CRR over time in a panel? • How are they related to accuracy/representativity?

  10. How to examine representativeness/accuracy • We calculate average absolute deviations (AAD) from a good benchmark • 6 demographic indicators • Gender, age, country of birth, marital status, education, labour market situation • These benchmarks come from Statistics Sweden • For four of these, register data are also available for the sample

  11. How to examine representativeness/accuracy • 2 political indicators (quasi-benchmarks) • Interest in politics (from the SOM Institute survey, mail, n=6000) • Vote intention (Statistics Sweden, RDD, n=9000)
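As a concrete reading of slides 10-11, the sketch below computes an average absolute deviation (AAD) of sample category shares against benchmark shares. All numbers are invented for illustration; only the measure itself comes from the presentation.

```python
import numpy as np

# Sketch of the average absolute deviation (AAD) accuracy measure from
# slides 10-11: mean |panel share - benchmark share| across the categories
# of an indicator. The shares below are invented, not the study's data.

def average_absolute_deviation(sample_shares, benchmark_shares):
    """Mean absolute deviation across categories, in percentage points."""
    diffs = np.abs(np.asarray(sample_shares) - np.asarray(benchmark_shares))
    return float(diffs.mean()) * 100

# Hypothetical gender distribution: 55/45 in the panel vs. a 50/50
# benchmark gives an AAD of 5.0 percentage points.
print(average_absolute_deviation([0.55, 0.45], [0.50, 0.50]))  # 5.0
```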

  12. 1. What happens to CRR over time in a panel?

  13. Initial span: 8.4 pct • After two years: 2.7 pct • Higher RR => higher attrition • Incentive-based recruitment => higher attrition
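The narrowing span follows from simple compounding: if the cohort with the higher initial rate also loses members faster, the CRR curves converge over the waves. The sketch below reproduces that qualitative pattern; all rates are invented, none are from the study.

```python
# Illustration (with invented rates) of the dynamic on slide 13: the cohort
# with the higher recruitment rate attrits faster, so the span in CRR
# between cohorts narrows over the panel waves.

def crr_at_wave(recruitment_rate, profile_rate, retention_per_wave, wave):
    """CRR after `wave` waves, assuming a constant per-wave retention rate."""
    return recruitment_rate * profile_rate * retention_per_wave ** wave

cohorts = {"high-RR": (0.21, 0.85),  # hypothetical: recruits more, attrits faster
           "low-RR": (0.12, 0.90)}

for wave in (0, 4, 8):  # roughly two years of waves
    crrs = {name: round(crr_at_wave(rr, 0.9, retention, wave), 3)
            for name, (rr, retention) in cohorts.items()}
    print(wave, crrs)  # the gap between cohorts shrinks with each wave
```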

  14. 2. How are RR and CRR related to accuracy/representativity? • First, we examine accuracy at the recruitment stage • Next, accuracy over time in the panel

  15. 2. How are RR and CRR related to accuracy/representativity? • At recruitment we have 11 treatment groups with different recruitment rates, between 6 and 21 percent • We examine the average AAD across the 8 indicators
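The analysis sketched below pairs each treatment group's recruitment rate with its average AAD and computes a Pearson correlation, which is how a figure like r = -.61 on the following slides can be read. The eleven data points are placeholders invented for illustration, not the study's values.

```python
import numpy as np

# Sketch of the group-level analysis on slide 15: correlate each treatment
# group's recruitment rate with its average AAD. The eleven (rate, AAD)
# pairs are invented placeholders, not data from the presentation.

recruitment_rates = np.array([0.06, 0.08, 0.09, 0.11, 0.12,
                              0.14, 0.15, 0.17, 0.18, 0.20, 0.21])
average_aad = np.array([3.1, 2.9, 3.0, 2.6, 2.8,
                        2.4, 2.5, 2.2, 2.3, 2.0, 2.1])  # percentage points

r = np.corrcoef(recruitment_rates, average_aad)[0, 1]
print(f"Pearson r = {r:.2f}")  # negative: higher recruitment rate, lower error
```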

  16. All 8 indicators, r = -.61

  17. All 8 indicators, r = -.30

  18. Vote intention (8 parties), r = -.44

  19. Vote intention (8 parties), r = -.14

  20. 2. How are RR and CRR related to accuracy/representativity? • We observe a weak correlation at the recruitment stage • But what about later on? For that purpose we have a smaller set of groups that allow strict comparisons • We examine representativeness 2 years later (8 panel waves later)

  21. Standard postcards: all 8 indicators

  22. Standard postcards: all 8 indicators

  23. Standard postcards: 2 political indicators

  24. Incentive postcards: all 8 indicators

  25. Incentive postcards: all 8 indicators

  26. Incentive postcards: 2 political indicators

  27. Did you notice something peculiar?

  28. Did you notice something peculiar?

  29. Summing up • High initial recruitment rates deteriorate substantially over time and differences diminish • Higher cumulative response rates consistently yield lower average errors ... • ... when comparing over time within a recruitment cohort • ... and when comparing between cohorts at the same wave / panel age • But panel attrition seems to play a part independent of cumulative response rates (because it is not at random) • Overall, correlations between CRR and accuracy seem low • Caveats: small samples for accuracy, unweighted data, limited variation in recruitment rates and CRRs

  30. Read more at www.lore.gu.se • Laboratory of Opinion Research (LORE)
