SREE Annual Conference, March 6, 2010 Using RCTs to determine the impact of reading interventions on struggling readers Newark Public Schools Jennifer Hamilton, Senior Study Director, jenniferhamilton@westat.com Matthew Carr, Analyst, matthewcarr@westat.com
Overview of Presentation • Context – Striving Readers in Newark, NJ • Fidelity of implementation • Adherence • Exposure • Discussion • For more information…
Context – Newark, NJ • 35% of children living in poverty (compared to 18% nationally) • Largest school district in the state of NJ • A ‘district in need of improvement’ for the last 4 years • State took over the district in 1995 (limited control given back in 2008) • Only ~50% of students in grades 6, 7, & 8 are proficient readers
Importance of Fidelity Fidelity is the extent to which the intervention as implemented is faithful to the pre-stated model. The little black dress is in; the little black box is out. • Internal validity - helps to explain failure • External validity - helps to make the treatment more stable and replicable (the treatment has to be well defined) • Helps ensure the treatment is absent from the control condition
Components of Fidelity - Theory of Change • Adherence • Exposure
Establishing Fidelity (Adherence) 4 steps: (1) identify, (2) measure, (3) score, (4) analyze Step 1: Identify critical components • Adaptation issue Step 2: Measure • Multiple sources of data, range of methodologies • Extant data (training receipt, class size, SRI, computer use) • Classroom observations • Practical considerations - $$$$ • Qualifications of data-collection staff • Number of points in time (cost)
Establishing Fidelity (Adherence) Step 3: Score • Assign sub-scores (e.g., number of sessions per week using instructional software) • Combine to a single score • Equal weighting
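A minimal sketch of Step 3 in Python. The component names and values below are hypothetical (none come from the study); the point is only the equal-weighting step, where the single adherence score is the simple mean of the sub-scores.

```python
# Hypothetical adherence sub-scores, each expressed as a proportion
# of the pre-stated model that was actually delivered.
sub_scores = {
    "software_sessions_per_week": 0.80,  # proportion of weeks meeting the target
    "training_receipt": 0.90,
    "class_size": 0.85,
}

# Equal weighting: the single adherence score is the simple mean.
adherence = sum(sub_scores.values()) / len(sub_scores)
print(f"Single adherence score: {adherence:.0%}")  # prints "Single adherence score: 85%"
```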
Newark - Single Adherence Score • Year 1 = 88% • Year 2 = 82% • Year 3 = 89%
Establishing Fidelity (Adherence) Step 4: Analysis • Descriptive • But profoundly unsatisfying, given all the effort and expense • Generally, should not be used as a mediating variable • Fidelity usually related to error term as well as outcome • Error term contains unmeasured factors, such as teacher quality/charisma and student engagement • Non-experimental/exploratory • Fidelity as a predictor (with lots of covariates) • Correlational
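The "fidelity as a predictor" idea in Step 4 can be sketched as a simple least-squares slope. All numbers here are invented for illustration; a real exploratory analysis would include many covariates, and (as the slide notes) the result is correlational, not causal.

```python
# Hypothetical classroom-level data: fidelity score vs. outcome.
fidelity = [0.88, 0.82, 0.95, 0.70, 0.60, 0.91]
outcome = [640.0, 610.0, 655.0, 590.0, 575.0, 650.0]

# Simple least-squares slope and intercept (one predictor, no covariates).
mx = sum(fidelity) / len(fidelity)
my = sum(outcome) / len(outcome)
slope = (sum((x - mx) * (y - my) for x, y in zip(fidelity, outcome))
         / sum((x - mx) ** 2 for x in fidelity))
intercept = my - slope * mx
print(f"outcome ~= {intercept:.1f} + {slope:.1f} * fidelity")
```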
Exposure Student Receipt of Intervention -- Components • Attrition • Attendance • No-Shows
Exposure - Attrition WWC (2008) benchmarks for attrition tolerance • Newark: 19.6% overall, 5.6% differential
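The two quantities the WWC (2008) benchmarks assess can be sketched as follows. The group counts below are hypothetical, chosen only so the arithmetic reproduces the reported Newark rates (19.6% overall, 5.6% differential).

```python
# Hypothetical randomized and analyzed counts per condition.
t_randomized, t_analyzed = 500, 388  # treatment group
c_randomized, c_analyzed = 500, 416  # control group

# Attrition = share of randomized students missing from the analytic sample.
t_attr = 1 - t_analyzed / t_randomized
c_attr = 1 - c_analyzed / c_randomized
overall = 1 - (t_analyzed + c_analyzed) / (t_randomized + c_randomized)
differential = abs(t_attr - c_attr)

print(f"overall = {overall:.1%}, differential = {differential:.1%}")
# prints "overall = 19.6%, differential = 5.6%"
```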
Exposure - Attendance Number of unexcused absences by analytic group:
• Group 1 = 1 year of potential exposure (grades 6, 7, 8 in year 1; grade 6 in year 2)
• Group 2 = 1 year of potential exposure – 6th graders only (years 1, 2)
• Group 3 = 2 years of potential exposure – 7th graders only (year 2)
• Group 4 = 2 years of potential exposure – 8th graders only (year 2)
• Group 5 = 2 years of potential exposure – 7th + 8th graders (grades 7, 8 in year 2)
No significant differences between Treatment and Control students
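A "no significant difference" check like this one can be sketched with a Welch t statistic. The absence counts below are invented for illustration, not the study's data; the slide does not say which test was used, so treat this as one plausible approach.

```python
from math import sqrt
from statistics import mean, variance

# Hypothetical unexcused-absence counts per student.
treatment = [3, 5, 2, 8, 4, 6, 1, 7]
control = [4, 6, 3, 7, 5, 2, 8, 3]

# Welch's t: difference in means over the unpooled standard error.
se = sqrt(variance(treatment) / len(treatment)
          + variance(control) / len(control))
t_stat = (mean(treatment) - mean(control)) / se
print(f"t = {t_stat:.2f}")  # small |t| suggests no detectable difference
```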
Exposure – No-Shows Intention to Treat (ITT) vs. Treatment on the Treated (TOT) • Removing treatment students who didn't receive the treatment would bias the results • But keeping them in underestimates effects • Issue of real-world implementation vs. ideal implementation Policymakers want to know TOT; researchers need to report ITT Solution – the Bloom adjustment
The Bloom Adjustment Adjusts the effect of an intervention upwards by the treatment-group no-show rate:
AllSubjectEffect (AS) = γ × NoShowEffect + (1 − γ) × TreatSubjectEffect (TS)
Assuming the effect per no-show is zero:
AS = γ × 0 + (1 − γ) × TS
AS = (1 − γ) × TS
Therefore: TS = AS / (1 − γ)
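The adjustment on this slide, TS = AS / (1 − γ), is a one-liner in code. The effect size and no-show rate below are illustrative values, not the study's results.

```python
def bloom_adjust(all_subject_effect: float, no_show_rate: float) -> float:
    """Bloom (1984): inflate the ITT (all-subject) effect to a
    treated-subject effect, assuming zero effect on no-shows."""
    return all_subject_effect / (1 - no_show_rate)

# An illustrative ITT effect size of 0.10 with a 20% no-show rate:
print(bloom_adjust(0.10, 0.20))  # prints 0.125
```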
Example: Striving Readers Student sample divided into 5 analytic groups
Striving Readers Example ITT effect sizes compared to Bloom Adjusted (year 2)
Review • Adherence = receipt of materials + accurate delivery • 4 steps: identify, measure, score, analyze • Receipt • Attrition • Attendance • No-Shows – Bloom adjustment
For more information… • Bloom, H. (1984). Accounting for no-shows in experimental evaluation designs. Evaluation Review, 8, 225-246. • Durlak, J. A., & DuPre, E. P. (2008). Implementation matters: A review of research on the influence of implementation on program outcomes and the factors affecting implementation. American Journal of Community Psychology, 41, 327-350. • Hill, L. G., Maucione, K., & Hood, B. K. (2007). A focused approach to assessing program fidelity. Prevention Science, 8, 25-34. • Mowbray, C., Holter, M., Teague, G., & Bybee, D. (2003). Fidelity criteria: Development, measurement, and validation. American Journal of Evaluation, 24, 315-340. • What Works Clearinghouse. (2008). WWC Procedures and Standards Handbook. Available online at http://ies.gov/ncee/wwc/references
On the Web Department of Elementary and Secondary Education Striving Readers webpage http://www.ed.gov/programs/strivingreaders/index.html