Web Design Issues in a Business Establishment Panel Survey
Third International Conference on Establishment Surveys (ICES-III)
June 18-21, 2007, Montréal, Québec, Canada
Kerry Levin & Jennifer O’Brien, Westat
Overview of Presentation • A brief review of the web design system and its origins • Design issues we encountered • Opportunities for experimental investigation
Background • The Advanced Technology Program (ATP) at the National Institute of Standards and Technology (NIST) is a partnership between government and private industry to conduct high-risk research • Since 1990, ATP’s Economic Assessment Office (EAO) has performed rigorous and multifaceted evaluations to assess the impact of the program and estimate the returns to the taxpayer. One key feature of ATP’s evaluation program is the Business Reporting System (BRS).
General Description of the BRS • Unique series of online reports that gather regular data on indicators of business progress and future economic impact of ATP projects • ATP awardees must complete four BRS reports per calendar year: three short quarterly reports and one long annual report
General Description of the BRS • There are several different types of instruments (each with a for-profit and a nonprofit version): • Baseline • Annual • Closeout • Quarterly • The BRS instruments are hybrid survey/progress reports that ask respondents attitudinal questions as well as items designed to gather information on project progress. • The Baseline, Annual, and Closeout reports are between 70 and 100 pages in length. Due to this length and complexity, web administration is the most logical data collection mode.
Design issues: Online logic checks vs. back-end logic checks 1. Examples of online logic checks (i.e., hard edits) • Sum checking • Range checks 2. Examples of back-end logic checks • Frequency reviews • Evaluation of outliers
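To make the distinction concrete, below is a minimal sketch of the two hard edits named above (sum checking and range checks). The item values and limits are hypothetical, intended only to illustrate the technique rather than the BRS implementation.

```python
# Minimal sketch of the two kinds of hard edits named above: a sum check
# (component values must add to a reported total) and a range check.
# The field values and limits below are hypothetical, not actual BRS items.

def sum_check(components, reported_total, tolerance=0.01):
    """True if the component values add up to the reported total."""
    return abs(sum(components) - reported_total) <= tolerance

def range_check(value, low, high):
    """True if the value falls inside the allowed range."""
    return low <= value <= high

errors = []
# e.g., spending by category must sum to the reported total spending
if not sum_check([120_000, 45_000, 15_000], reported_total=180_000):
    errors.append("Spending categories do not sum to the reported total.")
# e.g., reported employee count must be within a plausible range
if not range_check(value=25, low=1, high=10_000):
    errors.append("Employee count is outside the allowed range.")

print(errors if errors else "No hard-edit errors; respondent may continue.")
```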
Back-end checking: Frequency reviews and outlier evaluations • At the close of each cycle of data collection, the data for each instrument are carefully reviewed for anomalies • Frequency reviews are conducted to ensure that there are no skip errors in the online instrument • Although the BRS includes range checks for certain variables, the ranges are sometimes quite large, so an evaluation of outliers is a regular part of our data review procedures
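A rough sketch of how the back-end review described above might look in code, assuming hypothetical column names and an arbitrary flagging rule; the slides do not specify the actual review criteria.

```python
# Illustrative back-end review: a frequency check on a skip pattern and a
# simple outlier flag. Column names and the 10x-median rule are assumptions
# for this sketch, not the BRS's actual review criteria.
import pandas as pd

data = pd.DataFrame({
    "has_revenue": ["yes", "no", "yes", "yes", "no"],
    "revenue_amount": [150_000, None, 2_500_000, 90_000, 45_000],
})

# Frequency review: respondents who answered "no" should have been skipped
# past the revenue amount item, so any value there signals a skip error.
skip_errors = data[(data["has_revenue"] == "no") & data["revenue_amount"].notna()]
print(f"Possible skip errors: {len(skip_errors)}")

# Outlier evaluation: flag unusually large values for analyst review.
amounts = data["revenue_amount"].dropna()
flagged = amounts[amounts > 10 * amounts.median()]
print("Values flagged for review:")
print(flagged)
```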
Use of pre-filled information in the BRS The BRS instruments make use of two types of pre-filled information: • Pre-filled information from sources external to the instrument (i.e., information gathered in previous instruments or information provided by ATP such as issued patents) • Pre-filled information from sources internal to the instrument (i.e., information provided by the respondent in earlier sections of the current report)
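As a sketch of the distinction above, the example below assembles pre-filled values from an external source (prior instruments or ATP records) and an internal source (an answer given earlier in the same report). All field names are hypothetical.

```python
# Sketch of assembling pre-filled values for a report. "External" pre-fill
# comes from prior instruments or ATP records (e.g., issued patents);
# "internal" pre-fill carries a respondent's earlier answer forward to a
# later section of the same report. All names here are hypothetical.

external_prefill = {
    "project_title": "Advanced Sensor Materials",  # from a previous instrument
    "patents_issued": 3,                           # provided by ATP
}

def build_report(external):
    report = dict(external)                      # external pre-fill
    report["lead_organization"] = "Example Co."  # answered by the respondent in Section A
    # Internal pre-fill: Section C re-displays the Section A answer so the
    # respondent confirms it rather than re-entering it.
    report["section_c_org_confirmation"] = report["lead_organization"]
    return report

print(build_report(external_prefill))
```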
Required items • While most items in the BRS instruments are not required, the few that are fall into two categories: • Items required for accurate skips later in the instrument • Items deemed critical by ATP staff
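A small sketch of why the first category matters: if a gating item is left blank, the instrument cannot route the respondent to the correct section. The item and section names below are hypothetical.

```python
# Hypothetical gating item: whether the project produced any licenses
# determines which follow-up section the respondent sees, so the item
# cannot be left blank without breaking the skip logic.

def next_section(answers):
    produced_licenses = answers.get("produced_licenses")  # required item
    if produced_licenses is None:
        raise ValueError("Required item missing: cannot determine the skip path.")
    return "licensing_detail" if produced_licenses else "next_topic"

print(next_section({"produced_licenses": True}))   # -> licensing_detail
print(next_section({"produced_licenses": False}))  # -> next_topic
```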
Administration issues in the BRS: Multiple respondents • Each ATP-funded project has multiple contacts associated with it • It is rarely the case that a single respondent can answer all items in the survey. However, Westat provides only one access ID per report, so respondents are responsible for managing who at their organization is given access to the BRS online system
Reducing item nonresponse: The Applicant Survey • The ATP’s Applicant Survey is not one of the BRS instruments, but is regularly administered via the web to companies and organizations that applied for ATP funding • In 2006, Westat embedded an experiment within the Applicant Survey to test which of two different types of nonresponse prompting would result in reduced item nonresponse
Reducing item nonresponse: The Applicant Survey 904 respondents were randomly assigned to one of three conditions: 1) Prompt for item nonresponse appeared (if applicable) at the end of the survey; 2) Prompt for item nonresponse appeared (if applicable) after each section; 3) No prompt (control group).
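For illustration only, a sketch of random assignment to the three conditions; the balanced shuffle and fixed seed are assumptions, since the slides do not describe the actual allocation procedure.

```python
# Sketch of assigning 904 respondents to the three prompt conditions.
# The balanced shuffle and fixed seed are illustrative assumptions; the
# slides do not describe the actual allocation procedure.
import random
from collections import Counter

conditions = ["prompt_end_of_survey", "prompt_after_each_section", "no_prompt"]
random.seed(0)

pool = (conditions * (904 // len(conditions) + 1))[:904]  # near-equal groups
random.shuffle(pool)
assignments = {f"respondent_{i:03d}": cond for i, cond in enumerate(pool, start=1)}

print(Counter(assignments.values()))  # roughly 301-302 respondents per condition
```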
Reducing item nonresponse: The Applicant Survey • [Screenshots: the item nonresponse prompt as displayed at the end of the survey and after each section]
Reducing item nonresponse: The Applicant Survey Both prompts for item nonresponse appeared effective, and to an equal degree.
Boosting response rates: The days of the week experiment • Literature suggests that there are optimal call times for telephone surveys. But are there also optimal days of the week to email survey communications? • Optimal day to email was measured by: • The overall response rate • The time it takes to respond
Boosting response rates: The days of the week experiment • Three experimental conditions: 1) Monday cohort; 2) Wednesday cohort; 3) Friday cohort • The invitation email and up to 3 reminders were all sent on the same day of the week: Monday, Wednesday, or Friday
[Charts: time to complete the survey and cumulative response rates, by cohort]
Boosting response rates: The days of the week experiment • The Friday cohort trends toward higher response rates, but all cohorts require the same amount of effort to achieve their respective response rates • Overall, there is some evidence that the day of the week does matter
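Below is a sketch of how the two outcome measures defined earlier (overall response rate and time to respond) might be computed by cohort; the records are placeholders to show the calculation, not the experiment's data.

```python
# Illustrative calculation of the two outcome measures (response rate and
# days from email invitation to completion) by cohort. The records below
# are placeholders to show the computation, not the experiment's data.
import pandas as pd

cases = pd.DataFrame({
    "cohort": ["Monday", "Monday", "Wednesday", "Wednesday", "Friday", "Friday"],
    "responded": [True, False, True, True, True, True],
    "days_to_respond": [2.0, None, 5.0, 7.0, 1.0, 3.0],
})

summary = cases.groupby("cohort").agg(
    response_rate=("responded", "mean"),
    median_days_to_respond=("days_to_respond", "median"),
)
print(summary)
```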
Conclusion • The BRS has presented us with various design and administration challenges • We have had the chance to fine-tune and address a variety of issues that have come to our attention • As researchers encounter new issues in the administration of web surveys, the BRS offers a place to study them