Progressing Toward a Shared Set of Methods and Standards for Developing and Using Measures of Implementation Fidelity
Symposium Chair: Chris S. Hulleman, Ph.D., Center for Assessment and Research Studies, James Madison University

Presentation Transcript


  1. Progressing Toward a Shared Set of Methods and Standards for Developing and Using Measures of Implementation Fidelity. Symposium Chair: Chris S. Hulleman, Ph.D., Center for Assessment and Research Studies, James Madison University. Society for Research on Educational Effectiveness Annual Conference, March 5, 2010.

  2. Implementation vs. Implementation Fidelity • Fidelity: How faithful was the implemented intervention (tTx) to the intended intervention (TTx)? • Infidelity: TTx – tTx • Implementation Assessment Continuum: Descriptive (What happened as the intervention was implemented?) vs. a priori model (How much, and with what quality, were the core intervention components implemented?); most assessments include both
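
To make the notation concrete, here is a minimal sketch (not part of the symposium materials) that scores the intended intervention (TTx) and the implemented intervention (tTx) as component checklists and computes infidelity as the gap between them. The component names and scores are hypothetical.

```python
# Hypothetical illustration: intended intervention (T_Tx) vs. implemented intervention (t_Tx),
# each core component scored 0-1 for how fully it was delivered as designed.

intended    = {"component_a": 1.0, "component_b": 1.0, "component_c": 1.0}  # T_Tx: the model as designed
implemented = {"component_a": 1.0, "component_b": 0.6, "component_c": 0.3}  # t_Tx: what observers recorded

# Fidelity: implemented strength as a proportion of intended strength.
fidelity = sum(implemented.values()) / sum(intended.values())

# Infidelity: T_Tx - t_Tx, the gap between intended and implemented strength.
infidelity = 1.0 - fidelity

print(f"fidelity = {fidelity:.2f}, infidelity = {infidelity:.2f}")  # fidelity = 0.63, infidelity = 0.37
```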

  3. Linking Fidelity to Causal Models • Rubin’s Causal Model: the true causal effect of X is (YiTx – YiC); the RCT is the best approximation, with Tx – C = average causal effect • Fidelity assessment examines the difference between implemented causal components in the Tx and C conditions; this difference is the achieved relative strength (ARS) of the intervention • Theoretical relative strength = TTx – TC • Achieved relative strength = tTx – tC (the index of fidelity)
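
As a concrete, hypothetical illustration of the distinction between theoretical and achieved relative strength, the sketch below averages made-up fidelity scores for the same core components observed in treatment and control classrooms; none of the numbers come from the presentations.

```python
# Hypothetical illustration of achieved relative strength (ARS).
# Fidelity scores (0-1) for the same core causal components, observed in each condition.

tx_scores = [0.9, 0.7, 0.8, 0.6]   # t_Tx: implemented strength in treatment classrooms
c_scores  = [0.2, 0.1, 0.0, 0.3]   # t_C: the same components observed in control classrooms

T_Tx, T_C = 1.0, 0.0               # theoretical strength: full model in Tx, none of it in C

theoretical_rs = T_Tx - T_C                                                    # T_Tx - T_C
achieved_rs = sum(tx_scores) / len(tx_scores) - sum(c_scores) / len(c_scores)  # t_Tx - t_C

print(f"theoretical relative strength = {theoretical_rs:.2f}")  # 1.00
print(f"achieved relative strength    = {achieved_rs:.2f}")     # 0.60, the index of fidelity
```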

  4. Why is this Important? Construct Validity • Which is the cause? (TTx – TC) or (tTx – tC) • Degradation due to poor implementation, contamination, or similarity between Tx and C External Validity • Generalization is about tTx – tC • Implications for future specification of Tx Statistical Conclusion Validity • Variability in implementation increases error, and reduces effect size and power
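
The statistical-conclusion-validity point can be illustrated with a small simulation: when implemented strength varies across sites, the treatment-group mean is pulled down and outcome variance is inflated, so the standardized effect size shrinks. The outcome model, gain parameter, and fidelity distribution below are assumptions chosen only for illustration.

```python
# Hypothetical simulation: variability in implementation increases error and
# reduces the standardized effect size. All parameters are assumed for illustration.
import random
import statistics

random.seed(1)
N = 20000
GAIN = 1.0  # assumed outcome gain for a fully implemented intervention

def effect_size(fidelity_sd):
    tx, c = [], []
    for _ in range(N):
        f = min(1.0, max(0.0, random.gauss(0.8, fidelity_sd)))  # implemented strength at this site
        tx.append(GAIN * f + random.gauss(0, 1))                 # treatment outcome
        c.append(random.gauss(0, 1))                             # control outcome
    pooled_sd = ((statistics.pvariance(tx) + statistics.pvariance(c)) / 2) ** 0.5
    return (statistics.mean(tx) - statistics.mean(c)) / pooled_sd

print("d, uniform implementation: ", round(effect_size(0.0), 2))  # close to 0.8
print("d, variable implementation:", round(effect_size(0.4), 2))  # noticeably smaller
```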

  5. 5-Step Process • Specify the intervention model • Develop fidelity indices • Determine reliability and validity • Combine indices • Link fidelity to outcomes
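
The sketch below illustrates the last two steps with made-up classroom-level data; the unweighted averaging of indices and the simple correlation with outcomes are illustrative choices, not the procedure recommended in any of the papers.

```python
# Hypothetical data for steps 4 and 5: combine fidelity indices, then link them to outcomes.
import statistics  # statistics.correlation requires Python 3.10+

fidelity_indices = {                 # per-classroom fidelity indices, already scaled 0-1
    "class_1": [0.9, 0.8, 1.0],
    "class_2": [0.5, 0.4, 0.6],
    "class_3": [0.7, 0.9, 0.8],
}
outcomes = {"class_1": 0.42, "class_2": 0.10, "class_3": 0.35}  # hypothetical outcome gains

# Step 4: combine indices (here, a simple unweighted mean per classroom).
combined = {c: sum(scores) / len(scores) for c, scores in fidelity_indices.items()}

# Step 5: link fidelity to outcomes (here, a Pearson correlation across classrooms).
xs = [combined[c] for c in combined]
ys = [outcomes[c] for c in combined]
r = statistics.correlation(xs, ys)

print({c: round(f, 2) for c, f in combined.items()}, "r =", round(r, 2))
```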

  6. What do we measure? (1) Essential or core components (activities, processes) (2) Necessary, but not unique, activities, processes, and structures (supporting the essential components of Tx) (3) Ordinary features of the setting (shared with the control group)
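
For illustration, the three categories on this slide could be recorded as a simple component inventory before writing fidelity items; the specific activities listed are invented examples, not drawn from any of the instruments discussed.

```python
# Hypothetical component inventory organized by the three measurement categories.
intervention_model = {
    "essential_core_components": [      # unique to the intervention's causal model
        "daily small-group number-sense routine",
        "teacher uses the diagnostic questioning protocol",
    ],
    "necessary_but_not_unique": [       # support the essential components of Tx
        "weekly coaching meeting",
        "intervention materials available in the classroom",
    ],
    "ordinary_setting_features": [      # shared with the control group
        "regularly scheduled math instruction",
        "standard district assessments",
    ],
}
```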

  7. Presentations • Catherine Darrow • Measuring Fidelity in Preschool Interventions: A Micro-analysis of Fidelity Instruments Used in Curriculum Interventions • Michael Nelson et al. • A Procedure for Assessing Fidelity of Implementation in Experiments Testing Educational Interventions • Anne Garrison & Charles Munter • Evaluating Math Recovery: A Case of Measuring Implementation Fidelity of an Unscripted, Cognitively-Based Intervention • Chris Hulleman & David Cordray • Achieved Relative Intervention Strength: Models and Methods • Carol O’Donnell • Discussant
