Methods Matter
Grover J. (Russ) Whitehurst
Director, Institute of Education Sciences
United States Department of Education
Methods Matter: High Stakes
Make-or-Break Exams Grow, But Big Study Doubts Value
“Rigorous testing that decides whether students graduate, teachers win bonuses and schools are shuttered, an approach already in place in more than half the nation, does little to improve achievement and may actually worsen academic performance and dropout rates, according to the largest study ever on the issue.”
Methods Matter: High Stakes
New Ammunition for Backers of Do-or-Die Exams
“Two new studies make the case that do-or-die exams -- which decide whether students graduate, teachers are dismissed or schools are shut in more than half the states in the nation -- have brought about at least a modicum of academic progress, especially for minority students who may get scant attention otherwise.”
Methods Matter: High Stakes
Study Finds Higher Gains in States With High-Stakes Tests
"I've had a lot of people reanalyze our data … and each and every one of them have come up with different results.”
Methods Matter
• Methods match questions:
  • Description
  • Association
  • What works
  • Why
What Works: Causal Validity Meets Standards
• Randomized controlled trial with no randomization, attrition, or disruption problems
• Regression discontinuity study with no comparability, attrition, or disruption problems (see the sketch after this sequence of slides)
Causal Validity Meets Standards with Reservations
• Randomized controlled trial with a randomization, attrition, and/or disruption problem
• Regression discontinuity study with a comparability, attrition, or disruption problem
• Quasi-experimental design with equivalent groups and no problems with attrition or disruption
Causal Validity Does Not Meet Standards
• Anecdotes
• Case studies
• Pre-post studies
• Correlational studies
• Value-added studies
• Conceptual models
• Flawed comparison group studies
• Narrative summaries
• Meta-analyses of flawed studies
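To make the regression discontinuity design named above concrete, here is a minimal sketch, assuming treatment is assigned by a cutoff on a pretest score. The data are simulated and every name (pretest, outcome, bandwidth) is illustrative only; a real study would also need to demonstrate comparability at the cutoff and rule out attrition and disruption problems before meeting standards.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data: students scoring below a pretest cutoff receive the intervention.
rng = np.random.default_rng(0)
pretest = rng.uniform(0, 100, 1000)
cutoff = 50.0
treated = (pretest < cutoff).astype(int)
outcome = 20 + 0.5 * pretest + 5.0 * treated + rng.normal(0, 5, 1000)  # true effect = 5

df = pd.DataFrame({
    "outcome": outcome,
    "treated": treated,
    "centered": pretest - cutoff,  # running variable centered at the cutoff
})

# Local linear regression within a bandwidth around the cutoff; the coefficient
# on `treated` estimates the effect for students near the cutoff.
bandwidth = 10.0
local = df[df["centered"].abs() <= bandwidth]
model = smf.ols("outcome ~ treated + centered + treated:centered", data=local).fit()
print(model.params["treated"])
```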
Intervention Fidelity
• The intervention contains most of the key characteristics that commonly define it
• The author provides evidence of good implementation
• The intervention is documented well enough for others to replicate it
Outcome Measures
• Face validity
• Reliability
• Not over-aligned with the intervention
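As one concrete illustration of the reliability criterion, a sketch of a common internal-consistency estimate (Cronbach's alpha) computed on hypothetical item-level scores; the data are made up and no particular threshold is implied by the presentation.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) matrix of item scores."""
    x = np.asarray(items, dtype=float)
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1)          # variance of each item
    total_var = x.sum(axis=1).var(ddof=1)      # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: five students answering a four-item scale.
scores = [[3, 4, 3, 4],
          [2, 2, 3, 2],
          [4, 4, 5, 4],
          [1, 2, 1, 2],
          [3, 3, 4, 3]]
print(cronbach_alpha(scores))
```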
People, Settings, and Timing
• Samples are drawn from the people (units of interest) and settings that are the target of the intervention
• Outcomes are measured at an appropriate time
Statistical Analysis
• The analysis is conducted at the same level (for example, students, classes, schools) as the unit of assignment
• The analysis accounts for clustering effects (for example, when the intervention is delivered to all students in a classroom)
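A minimal sketch of what "accounting for clustering" can look like in practice, assuming a student-level file in which classrooms are the unit of assignment. The column names, the simulated data, and the use of statsmodels are illustrative, not part of the presentation.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical student-level data: classrooms are the unit of assignment,
# so every student in a classroom shares the same treatment status.
rng = np.random.default_rng(0)
n_classrooms, students_per_class = 40, 20
classroom_id = np.repeat(np.arange(n_classrooms), students_per_class)
treated = np.repeat(rng.integers(0, 2, n_classrooms), students_per_class)
class_effect = np.repeat(rng.normal(0, 3, n_classrooms), students_per_class)  # shared classroom shock
score = 60 + 4 * treated + class_effect + rng.normal(0, 8, n_classrooms * students_per_class)
df = pd.DataFrame({"score": score, "treated": treated, "classroom_id": classroom_id})

# Option 1: OLS with standard errors clustered on the unit of assignment,
# so students in the same classroom are not treated as independent observations.
ols = smf.ols("score ~ treated", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["classroom_id"]}
)
print(ols.summary())

# Option 2: a multilevel model with a random intercept for classroom,
# which models the clustering directly and matches the level of assignment.
mlm = smf.mixedlm("score ~ treated", data=df, groups=df["classroom_id"]).fit()
print(mlm.summary())
```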
Statistical Reporting
• Findings are reported for most outcome measures
• Effect sizes can be calculated
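For the effect-size criterion, a small sketch of the standardized mean difference (Cohen's d) computed from raw scores. The sample values are made up; in practice the same quantity can be recovered from reported group means, standard deviations, and sample sizes.

```python
import numpy as np

def cohens_d(treatment, control):
    """Standardized mean difference (Cohen's d) using the pooled standard deviation."""
    t = np.asarray(treatment, dtype=float)
    c = np.asarray(control, dtype=float)
    nt, nc = len(t), len(c)
    pooled_sd = np.sqrt(((nt - 1) * t.var(ddof=1) + (nc - 1) * c.var(ddof=1)) / (nt + nc - 2))
    return (t.mean() - c.mean()) / pooled_sd

# Hypothetical post-test scores for a treatment and a control group.
print(cohens_d([78, 82, 75, 90, 85], [70, 72, 68, 80, 74]))
```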
Rigorous Evaluations at IES
• Early Reading First National Evaluation
• Reading First Impact Study
• After-School Programs
• Remedial Reading Programs
• Teacher Preparation Models
• Professional Development Models
• Educational Technology Interventions
• Interventions for English Language Learners
Rigorous Evaluations at IES (continued)
• Charter School Strategies
• D.C. Choice Program
• Even Start
• Teacher Induction Programs
• Preschool Curriculum
• Character and Socialization Interventions
Rigorous Evaluations Planned
• Safe and Drug-Free School Programs
• Interventions for Adult ESL Students
• Many individual grants, including IERI
The Institute of Education Sciences
The home of evidence-based education