1. 1 Initial Plans for the Re-engineered SIPP: 2010 Electronic Prototype Field Test
Jason M. Fields,
Housing and Household Economic Statistics Division,
US Census Bureau
2. 2 Timeline for SIPP Development
[Timeline figure: SIPP development milestones from September 2007 through January 2013, including the 2008 paper EHC field test.]
3. 3 Re-engineered SIPP Instrument
Survey Instrument –
Designed for annual administration
Plan to continue to follow movers
Significant reduction in feedback compared with 2004
Programmed in Blaise and C#
Calendar –
Learning from the experience of past designs, integrating the calendar more closely with Blaise, and using a single Blaise database to store the data.
4. 4 Stakeholders and Content Review
Responding users indicated a broad need for most of the SIPP core content. About 40 stakeholders completed the matrix.
Select areas were added based on lost topical module content.
Held five subject area meetings to discuss specific content (Health, General income/Government programs, Assets and wealth, Labor force, Demographics and other items)
5. 5 Content Crosswalk
What was gained and lost between SIPP 2004 and Re-SIPP?
Frequency for topical content – 2004 vs. Re-SIPP
Blaise Demographics
Time of interview information
Collected for the whole household at once, similar to the 2004 panel
EHC
Launches from Blaise directly once the interviewer chooses which household member will be interviewed next
Post-EHC Blaise items
Person level (the person responding to the EHC section continues through the Blaise items before the next person is selected, returning to the EHC launch).
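As a rough illustration of this sequencing, the sketch below is written in C# (one of the languages named for the instrument). All of the names in it are hypothetical; it only traces the household-then-person flow described above and is not the actual Blaise/C# instrument code.

```csharp
// Hypothetical sketch of the interview sequencing described on this slide.
// None of these names come from the production instrument; they only
// illustrate the household -> per-person EHC -> post-EHC flow.
using System;
using System.Collections.Generic;

class InterviewFlowSketch
{
    static void Main()
    {
        var household = new List<string> { "Person 1", "Person 2", "Person 3" };

        // Blaise demographics are collected for the whole household at once.
        Console.WriteLine("Collect Blaise demographics for the full household.");

        // The interviewer then selects one member at a time; the EHC launches
        // from Blaise for that person, followed by the post-EHC Blaise items,
        // before control returns to the EHC launch point for the next person.
        foreach (var person in household)
        {
            Console.WriteLine($"Launch EHC for {person}.");
            Console.WriteLine($"Administer post-EHC Blaise items for {person}.");
        }

        Console.WriteLine("All household members complete; end the interview.");
    }
}
```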
6. 6 EHC Sections
7. 7 EHC Sections
8. 8 EHC Sections
9. 9 EHC Sections
10. 10 Pause for quick demo if time allows
11. 11 Re-engineered SIPP Electronic Prototype Field Test: Objectives
Systems testing
The production Case Management and Regional Office Survey Control systems will be evaluated as they will be required to handle additional log and program files.
Training development
The training used in the paper field test will be modified, based on lessons learned, and applied to the electronic test.
Evaluations will consist of focus group debriefings as well as summary evaluations with interviewers and trainers.
12. 12 Re-engineered SIPP Electronic Prototype Field Test: Objectives
Field data collection evaluation
As in the paper test, we will evaluate reactions to the interview with a sample of respondents and interviewers; headquarters staff will collect evaluation information as observers; and focus groups will be conducted with the field representatives involved in the project soon after field data collection concludes.
Processing Development
The data collected will be the foundation for the processing system development, enabling systems and edits to be tested and evaluated before going into production.
Wave 2+ instrument development
Gather the requirements and information necessary to develop the dependent interviewing systems for wave 2+ interviewing.
13. 13 Re-engineered SIPP Electronic Prototype Field Test: Objectives
Content Evaluation
Address the primary concern voiced by most stakeholders
How comparable are the estimates, patterns, and relationships in the data collected with the re-engineered SIPP instrument to those collected under the traditional SIPP data collection procedures?
Key Estimates from the EHC
Measurement of program receipt among the low-income population – Food Stamps as a key program – estimates and coverage units.
Social Security receipt and estimates, and the ability to provide the necessary inputs to stakeholders' models
Health Insurance Coverage – patterns of uninsurance, relationships between public and private insurance, and coverage units.
Poverty status during the reference period – ability to examine specific populations and transitions into and out of poverty.
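As one illustration of the kind of transition estimate the EHC is intended to support (no specific estimator is prescribed in these slides), a weighted monthly poverty exit rate could be computed as

$$\hat{r}_{\text{exit}}(m) = \frac{\sum_i w_i\, \mathbf{1}\{\text{poor}_{i,m-1}=1,\ \text{poor}_{i,m}=0\}}{\sum_i w_i\, \mathbf{1}\{\text{poor}_{i,m-1}=1\}},$$

where $w_i$ is the person weight and $\text{poor}_{i,m}$ is the month-$m$ poverty indicator; an entry rate is defined analogously, allowing month-by-month comparisons across data sources.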
14. 14 Re-engineered SIPP Electronic Prototype Field Test: Objectives
Content Evaluation – Key Estimates from Blaise Topic Sections
The measurement of assets and wealth and thereby eligibility for various government programs
Disability status with a new sequence of questions
Medical expenses and utilization – estimates of MOOP (medical out-of-pocket expenses).
Child and adult well-being
Work and commuting expenses by job, and how these could be applied to alternative poverty estimates
Annual program receipt and lump-sum payments
Evaluate content to develop and refine edit and recode specifications in advance of the implementation of the production instrument
15. 15 Re-engineered SIPP Electronic Prototype Field Test: Sample
Sample of 5,000 or more households (budget dependent)
Selected from the same frame as the current SIPP 2008 panel
Focused sample in selected areas with higher-than-average poverty rates – more financially efficient than a national sample
Sub-select similar cases from the SIPP 2008 panel to match the electronic prototype field test sample geographically and by poverty areas.
Ability to weight both samples comparably for monthly weights and the 2009 reference year.
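As a sketch of what comparable weighting could involve (the slides leave the estimator unspecified), a standard ratio adjustment would scale each sample's base weights to common control totals within the selected poverty strata:

$$w_i^{\text{adj}} = w_i \cdot \frac{T_h}{\sum_{j \in s_h} w_j}, \qquad i \in s_h,$$

where $w_i$ is the base weight, $s_h$ is the set of cases in stratum $h$ for a given sample, and $T_h$ is a shared control total, so that monthly estimates from the prototype and the matched SIPP 2008 cases refer to the same population over the 2009 reference year.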
16. 16 ASA-SRM Evaluation Recommendations: Questions
What makes a successful test?
The instrument collects and returns data
Usability in the field by FRs and regional office systems
Respondents and FRs are able to navigate and complete the instrument
What are the key content characteristics which indicate success or failure?
What are the indications that dictate an alternative instrument should be pursued?
17. 17 ASA-SRM Evaluation Recommendations: Questions
What are key comparisons?
We believe the primary comparisons will come from the 2008 SIPP panel – waves 2-5, collected during the same 2009 reference year.
Are there additional comparisons?
What methodology would you recommend as most informative for evaluating the level of differences between these data sources? (One illustrative approach is sketched after this list.)
Which differences should be considered acceptable?
Which comparisons are most meaningful?
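One illustrative way to quantify the level of difference (the slides leave the methodology open, so this is only a sketch) is a design-based comparison of a monthly estimate, for example the proportion receiving Food Stamps in month $m$:

$$z_m = \frac{\hat{p}^{\text{proto}}_m - \hat{p}^{\text{2008}}_m}{\sqrt{\widehat{\operatorname{Var}}\!\left(\hat{p}^{\text{proto}}_m\right) + \widehat{\operatorname{Var}}\!\left(\hat{p}^{\text{2008}}_m\right)}},$$

with the variances estimated under each survey's design (e.g., by replication) and the comparison repeated across the months of the 2009 reference year and across the key estimates listed on the preceding slides.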
18. 18 ASA-SRM Evaluation Recommendations: Questions
What about using our imperfect standard as a metric?
SIPP has known problems with
Sample attrition
Reporting inconsistencies within households across a calendar year
Seam bias (an illustrative diagnostic is sketched at the end of this slide)
Idiosyncrasies in measurement
Maybe different is good?
How can the evaluation using an imperfect standard indicate the success or failure of the new method?
What are your recommendations on:
how to evaluate the test?
the metrics of success?
the study design in light of these issues?
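For the seam bias item above, one conventional diagnostic (offered here only as an illustration, not as the evaluation plan) is a seam ratio comparing transition rates at wave boundaries with those observed within waves:

$$R_{\text{seam}} = \frac{\hat{t}_{\text{seam}}}{\hat{t}_{\text{off}}},$$

where $\hat{t}_{\text{seam}}$ is the weighted rate of month-to-month status changes across adjacent months that span a wave seam and $\hat{t}_{\text{off}}$ is the corresponding rate for month pairs within a wave; values near 1 would suggest the new instrument is not simply reproducing the seam effect of the existing design.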