Learn about how Connecticut distributed surveys to eligible families and used a census approach to ensure representativeness. Explore the analysis of the survey results and considerations for APR reporting.
The Basics
• A stakeholder group met to give the lead agency suggestions.
• We chose the NCSEAM survey and distributed it to families who were eligible on a specific date (just like 618 Table 1: a point-in-time count).
• Service Coordinators hand-delivered the surveys in sealed envelopes, along with a stamped return envelope.
• Surveys (each printed with a unique identifier) were returned directly to the lead agency and scanned in.
The Basics
• In Year One, we also mailed surveys to recently exited families.
• The response rate for hand-delivered surveys was much higher (26% vs. 19%), so…
• In Year Two we only sent surveys to families still participating in the program (not exited), but…
• Return rates were low! So we did a follow-up mailing ($$$$).
• We ended up with a response rate of 41%.
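(For a concrete sense of the arithmetic, using hypothetical counts rather than Connecticut's actual Ns: if 2,000 surveys went out and 820 came back after the follow-up mailing, the response rate would be 820 ÷ 2,000 = 41%.)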
The Basics
• THIS IS A CENSUS APPROACH for distribution of the surveys.
• We gave the surveys to ALL ELIGIBLE families who had been in Birth to Three for at least 6 months.
• After Year One, OSEP asked Connecticut to clarify the details of what we had done to assure representativeness.
• READY?
The Details
• We interpreted the “population of children with disabilities in the early intervention program” to mean our official 618 child count, which is the only official demographic data reported to OSEP.
• We called this our Target Group.
The Details
• Connecticut reported the percentages, by both Race/Ethnicity and Gender, for the…
• Target Group (FFY05 618 child count, Table 1)
• Census (children whose families were sent a survey)
• Respondent Pool (children whose families returned a completed survey)
• and… SURPRISE! The respondent pool wasn’t representative! (A comparison sketch follows.)
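Conceptually, the representativeness check is a side-by-side comparison of percentage distributions across the three groups. Below is a minimal sketch of that idea in Python/pandas; the file names and column names are hypothetical (Connecticut did this work in SPSS):

```python
import pandas as pd

# Hypothetical case-level files and column names, for illustration only.
target = pd.read_csv("child_count_618.csv")   # Target Group: official 618 child count
surveys = pd.read_csv("survey_status.csv")    # Census: every family sent a survey,
                                              # with a boolean 'returned' column

def pct(df, col):
    """Percentage distribution of one demographic column."""
    return (df[col].value_counts(normalize=True) * 100).round(1)

for col in ["race_ethnicity", "gender"]:
    print(pd.DataFrame({
        "Target Group": pct(target, col),
        "Census": pct(surveys, col),
        "Respondent Pool": pct(surveys[surveys["returned"]], col),
    }))  # a large gap between columns flags a nonrepresentative respondent pool
```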
Random and Representative
• So we used SPSS to randomly select cases in order to create groups that would match the 618 percentages (sketched below):
• By Race/Ethnicity
• By Gender
• A crosstab of both
• (We also did “Region” for our ICC.)
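Connecticut ran this selection in SPSS; the sketch below shows the same idea in Python/pandas. The function name, column names, and group size are assumptions for illustration, not CT's actual procedure:

```python
import pandas as pd

def representative_subsample(respondents, target, strata, n_total, seed=0):
    """Randomly draw cases from the respondent pool so each stratum's share
    (e.g., by race/ethnicity, gender, or a crosstab of both) matches the
    618 Target Group percentages."""
    target_shares = target.groupby(strata).size() / len(target)
    parts = []
    for stratum, share in target_shares.items():
        key = stratum if isinstance(stratum, tuple) else (stratum,)
        mask = (respondents[strata] == pd.Series(key, index=strata)).all(axis=1)
        pool = respondents[mask]
        n = min(int(round(share * n_total)), len(pool))  # can't draw more than exist
        parts.append(pool.sample(n=n, random_state=seed))
    return pd.concat(parts)

# One representative group per dimension, plus the crosstab of both:
# rep_race   = representative_subsample(respondents, target, ["race_ethnicity"], 400)
# rep_gender = representative_subsample(respondents, target, ["gender"], 400)
# rep_cross  = representative_subsample(respondents, target, ["race_ethnicity", "gender"], 400)
```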
Analysis
• Then we analyzed the results for each of the randomly selected representative groups.
• Then we reported EVERYTHING…
Analysis 4A – Know My Rights (77%)
Analysis 4B – Communicate About My Child (75%)
Analysis 4C – Help Me Help My Child (88%)
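For each representative group, the indicator value is simply the share of families whose survey result met the standard. A hedged sketch continuing the code above (the boolean column is a placeholder; the actual NCSEAM survey is scored against Rasch-scaled standards):

```python
import pandas as pd

def indicator_pct(rep_group: pd.DataFrame, flag: str = "met_standard") -> float:
    """Percent of families in one representative group meeting an indicator's
    standard; 'flag' is a placeholder boolean column, not NCSEAM's scoring."""
    return rep_group[flag].mean() * 100

# e.g., indicator_pct(rep_cross, "met_4c") would give the 4C percentage
# for the race/ethnicity-by-gender representative group
```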
Remaining Questions
• During the “Week of Clarity,” the preliminary APR table read that CT:
• Didn’t include the N’s, and
• Didn’t meet its target for Indicator 4C
• (They used the results from our respondent pool.)
• So we revised our APR to:
• Include the N’s, and
• Only report results that came from representative data
Original APR 4C – Help Me Help My Child (88%)
REVISED APR
• The respondent pool was N = 875; from it, the following representative groups were selected.
Remaining Questions
• The final APR response table read that CT:
• Didn’t report the total N for the respondent pool
• Still didn’t meet its target for Indicator 4C
• Because we are required to assure representativeness, which results should be reported to, and used by, OSEP?
Remaining Questions
• Dr. Paula LaLinda used Connecticut’s data for her dissertation.
• She analyzed many more variables, including language spoken in the home and insurance type (public or commercial).
• Those analyses were more illustrative.
Remaining Questions
• States may be using their family outcome data in ways that are a priority for stakeholders but not required for the APRs.
• What are the minimum requirements for the APRs as related to reporting representativeness?