USING RESULTS FROM RIGOROUS STUDIES: CRITICAL ISSUES
David Myers
American Institutes for Research, Washington, DC
February 7, 2008
Presentation to the IES Regional Education Labs
BOTTOM LINE
• Evidence is still not a driver in many or most decisions
• There is a disconnect between our expectations and what happens in the “field”
• There is too much variability in what is accepted as “scientifically based evidence” in practice
• Politics and ideology interfere with evidence-based decision making
• Results (evidence) may make some difference at the national and state levels, but are of limited utility for school districts, schools, and teachers
• Schools of education are often a barrier to making progress
• IES’s and other rigorous studies are not providing answers for practitioners; instead, they are geared toward policymakers, as mandated by Congress
• We are still going after the “low-hanging fruit”
• WWC practice guides are a move in the right direction
• RELs have an opportunity to fill the needs of practitioners
CRITICAL ISSUES TO CONSIDER WHEN “USING RESULTS”
• Alignment of research questions and answers with the intended audience: policymakers, researchers, and practitioners
• Alignment of questions with the appropriate estimand, and a defensible estimator of that estimand (a small sketch of this distinction follows this slide)
• Use of appropriate populations: different answers for different populations (the case of Power4Kids)
• Can the results be generalized to “my context”?
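The estimand/estimator/population distinction can be made concrete with a small simulation. This is a minimal sketch, not from the presentation: all numbers are hypothetical, and the only point is that when individual effects vary with a contextual factor (here, an invented "struggling reader" indicator), two populations with different mixes of that factor have different estimands, so a single experiment answers the question only for the population it sampled.

```python
# Minimal sketch (hypothetical numbers): the estimand is a population average
# treatment effect; the estimator is the difference in sample means from a
# randomized experiment; different populations yield different answers.
import numpy as np

rng = np.random.default_rng(0)

def simulate_population(p_struggling, n=100_000):
    """Potential outcomes where the treatment effect is larger for struggling readers."""
    struggling = rng.random(n) < p_struggling
    effect = np.where(struggling, 0.40, 0.10)   # hypothetical individual effects
    y0 = rng.normal(0.0, 1.0, n)                # outcome without the intervention
    y1 = y0 + effect                            # outcome with the intervention
    return y0, y1

for p in (0.2, 0.6):                            # two populations, different mixes
    y0, y1 = simulate_population(p)
    estimand = (y1 - y0).mean()                 # population average treatment effect
    treated = rng.random(len(y0)) < 0.5         # randomized assignment
    estimate = y1[treated].mean() - y0[~treated].mean()
    print(f"share struggling = {p:.1f}  estimand = {estimand:.3f}  estimate = {estimate:.3f}")
```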
GETTING THE QUESTION RIGHT FOR THE APPROPRIATE AUDIENCE
• Policymakers
  • What works, and for whom?
• Researchers
  • Can the results inform the design of the next study?
  • Do the results build a pattern of findings that gives us more confidence in the results of a single study?
• Practitioners
  • What works and for whom, and can we be confident it will work for me?
  • Can the results of an experiment be generalized with confidence to my schools, teachers, and students?
  • Placing the intervention in context
MAKING RESULTS RELEVANT FOR PRACTITIONERS
• Use designs that better address the context in which policies and interventions operate
• As a simple example, take the education production model as a framework (a minimal formal sketch follows this slide):
  Student outcomes = f(student and family inputs)
  Student and family effects = f(classroom inputs, treatment, treatment × classroom interactions)
  Classroom effects = f(school and community inputs)
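One way to make this nested structure concrete is a two-level model in which the treatment is assigned at the classroom level and its effect is allowed to vary with a classroom- or school-level contextual factor. The notation below is an assumed sketch, not the specification used in the presentation:

```latex
% Minimal two-level sketch (assumed notation, not from the slides):
% student i in classroom j; X_{ij} = student and family inputs;
% T_j = treatment assigned to classroom j; W_j = classroom/school context.
\begin{align*}
Y_{ij}     &= \beta_{0j} + \beta_{1} X_{ij} + \varepsilon_{ij}
             && \text{(student level)} \\
\beta_{0j} &= \gamma_{00} + \gamma_{01} T_j + \gamma_{02} W_j
             + \gamma_{03}\,(T_j \times W_j) + u_j
             && \text{(classroom level)}
\end{align*}
```

The interaction coefficient, gamma_03, is the treatment-by-context term that the remaining slides focus on: it is what practitioners need in order to judge whether an impact will carry over to their setting, and it is what drives the design and sample-size implications that follow.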
MAKING RESULTS RELEVANT (cont.)
• Implies a better understanding of main and interaction effects of contextual factors
• More theory-dependent than most previous large-scale studies
• Implies different sample designs in which to embed RCTs or QEDs
• Implies larger sample sizes, or more purposeful samples, to maximize variation on critical factors (see the power sketch below)
• Can RELs coordinate on high-priority, cross-cutting issues and conduct simultaneous or overlapping studies?
  • Increased variability on key contextual factors
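To see why interaction effects drive up sample-size requirements, here is a minimal power sketch. It assumes a normal-approximation calculation, an illustrative standardized impact of 0.25, and no clustering (clustering would raise both numbers); the factor of four for a difference-of-differences (interaction) contrast is the standard result, but the specific figures are hypothetical.

```python
# Minimal sketch (illustrative effect size, not from the presentation):
# approximate total sample needed to detect a main effect vs. a
# treatment-by-context interaction of the same standardized magnitude.
from scipy.stats import norm

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Normal-approximation n per group for a two-group mean comparison."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * (z / effect_size) ** 2

delta = 0.25                        # hypothetical standardized impact
n_main = 2 * n_per_group(delta)     # total n for the overall (main) effect
n_interaction = 4 * n_main          # a difference-of-differences contrast of the
                                    # same size needs roughly four times as many
print(f"main effect: ~{n_main:.0f} students; interaction: ~{n_interaction:.0f} students")
```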
MAKING RESULTS RELEVANT (cont.)
• Consider alternative “experimental designs” (a small illustration follows this slide)
  • Full factorial designs with complete crossing of factors
  • Designs that allow some effects to be confounded when prior evidence suggests a full factorial design isn’t necessary (Kirk, 1995)
• Develop measures that tell practitioners the extent to which impacts can be generalized, or the confidence with which we can predict impacts in different settings
  • Borrow from the logic and statistical theory behind generalizability theory (see Hedges, 2006, as an introduction)
  • Reliability of results for particular contexts
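As a concrete illustration of complete crossing versus deliberate confounding, the sketch below enumerates a 2^3 design for three contextual factors and a half-fraction that aliases the three-way interaction with the mean (defining relation I = ABC), in the spirit of the confounded designs discussed by Kirk (1995). The factor names are invented for illustration and do not come from the presentation.

```python
# Minimal sketch (hypothetical factor names): a 2^3 full factorial crossing
# and a half-fraction with defining relation I = ABC.
from itertools import product

factors = ["small_group", "daily_30_min", "coaching_support"]       # illustrative only
full = list(product([-1, 1], repeat=len(factors)))                  # 8 cells: complete crossing
half = [cell for cell in full if cell[0] * cell[1] * cell[2] == 1]  # 4 cells: ABC aliased with the mean

print(f"full factorial: {len(full)} cells; half-fraction: {len(half)} cells")
for cell in half:
    print(dict(zip(factors, cell)))
```

In the half-fraction each main effect is aliased with a two-way interaction, which is only defensible when prior evidence suggests those interactions are negligible; that is the trade-off behind running fewer cells.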
USEFUL REFERENCES
• Hedges, L. V. (2006). Generalizability of treatment effects: Psychometrics and education. In B. Schneider & S.-K. McDonald (Eds.), Scale-up in education: Vol. 1. Ideas in principle. Lanham, MD: Rowman & Littlefield.
• Kirk, R. E. (1995). Experimental design: Procedures for the behavioral sciences. Pacific Grove, CA: Brooks/Cole.
• McDonald, S., Keesler, V., Kauffman, N., & Schneider, B. (2006). Scaling-up exemplary interventions. Educational Researcher, 35, 15–24.
Illustration: Effective Literacy and English Language Instruction for English Learners in the Elementary Grades
• Screen for reading problems and monitor progress
• Provide intensive small-group reading interventions
  • Ensure the program is implemented daily for 30 minutes in small, homogeneous groups
  • Provide training and ongoing support for teachers
• Provide extensive and varied vocabulary instruction
• Develop academic English
  • Teach it in the earliest grades
  • Devote a block of time to teaching academic English
• Schedule regular peer-assisted learning opportunities