A Procedure for Assessing Fidelity of Implementation in Experiments Testing Educational Interventions
Michael C. Nelson¹, David S. Cordray¹, Chris S. Hulleman², Catherine L. Darrow¹, & Evan C. Sommer¹
¹Vanderbilt University, ²James Madison University
Purposes of Paper: • To argue for a model-based approach for assessing implementation fidelity • To provide a template for assessing implementation fidelity that can be used by intervention developers, researchers, and implementers as a standard approach.
Presentation Outline • What is implementation fidelity? • Why assess implementation fidelity? • A five-step process for assessing implementation fidelity • Concluding points
A Note on Examples: • Examples are drawn from our review of (mainly) elementary math intervention studies, which we are currently deepening and expanding to other subject areas • Examples for many areas are imperfect or lacking • Because our argument depends on having good examples of the most complicated cases, we appreciate any examples you can refer us to (michael.nelson@vanderbilt.edu)
What is implementation fidelity? • Implementation fidelity is the extent to which the intervention has been implemented as expected • Assessing fidelity raises the question: Fidelity to what? • Our answer: Fidelity to the intervention model. • Background in “theory-based evaluations” (e.g., Chen, 1990; Donaldson & Lipsey, 2006)
Fidelity vs. the Black Box
The intent-to-treat (ITT) experiment identifies the effects of causes:
Assignment to Condition → Treatment "Black Box" (intervention's causal processes) → Outcome Measures → Outcomes
Assignment to Condition → Control "Black Box" (business-as-usual causal processes) → Outcome Measures → Outcomes
Fidelity vs. the Black Box
…While fidelity assessment "opens up" the black box to explain the effects of causes:
Assignment to Condition → Intervention Component → Mediator → Outcome
(Fidelity Measure 1 taps the intervention component, Fidelity Measure 2 taps the mediator, and the Outcome Measure taps the outcome)
Fidelity assessment allows us to: • Determine the extent of construct validity and external validity, contributing to the generalizability of results • For significant results, describe what exactly did work (the actual difference between Tx and C) • For non-significant results, explain why, beyond simply "the intervention doesn't work" • Potentially improve understanding of results and guide future implementation
Limitations of Fidelity Assessment: • Not a causal analysis, though it provides evidence for answering important questions • Addresses secondary questions • The field is still developing and validating methods and tools for measurement and analysis • Cannot be a specific, one-size-fits-all approach; it must be tailored to each intervention
A Five-Step Process for Assessing Fidelity of Implementation: 1. Specify the intervention model 2. Identify fidelity indices 3. Determine index reliability and validity 4. Combine fidelity indices* 5. Link fidelity measures to outcomes* (*Not always possible or necessary)
The Change Model • A hypothesized set of constructs, and relationships among those constructs, representing the core components of the intervention and the causal processes that produce outcomes • Should be based on theory, empirical findings, discussion with the developer, and actual implementation • Start with the change model because it is abstract enough to be generalizable yet still specifies the important components and processes, thus guiding operationalization, measurement, and analysis
Change Model: Generic Example
Intervention Component → Mediator → Outcome
Teacher training in use of educational software → Teachers assist students in using educational software → Improved student learning
Change Model: Project LINCS (adapted from Swafford, Jones, and Thornton, 1997)
Instruction in geometry → Increase in teacher knowledge of geometry → Improved teacher instructional practice
Instruction in student cognition of geometry → Increase in teacher knowledge of student cognition → Improved teacher instructional practice
The Logic Model • The set of resources and activities that operationalize the change model for a particular implementation • A roadmap for implementation • Derived from the change model with input from the developer and other sources (literature, implementers, etc.)
Logic Model: Project LINCS (adapted from Swafford, Jones, and Thornton, 1997)
Geometry content course → Instruction in geometry → Increase in teacher knowledge of geometry
Research seminar on the van Hiele model → Instruction in student cognition of geometry → Increase in teacher knowledge of student cognition
Both paths → Improved teacher instructional practice (what is taught, how it is taught, characteristics teachers display)
A Note on Models and Analysis: Recall that one can specify models for both the treatment and control conditions. The “true” cause is the difference between conditions, as reflected in the model for each. Using the change model as a guide, one may design equivalent indices for each condition to determine the relative strength of the intervention (Achieved Relative Strength, ARS). This approach will be discussed in the next presentation (Hulleman).
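To make the ARS idea concrete, a minimal sketch follows: it computes achieved relative strength as a small-sample-corrected standardized mean difference (Hedges' g) between conditions on the same fidelity index. The data and variable names are hypothetical, not drawn from any study cited here.

```python
import numpy as np

def achieved_relative_strength(tx_scores, c_scores):
    """Standardized mean difference (Hedges' g) between treatment and
    control groups measured on the same fidelity index."""
    tx = np.asarray(tx_scores, dtype=float)
    c = np.asarray(c_scores, dtype=float)
    n_tx, n_c = len(tx), len(c)
    # Pooled standard deviation across the two conditions
    pooled_var = ((n_tx - 1) * tx.var(ddof=1) +
                  (n_c - 1) * c.var(ddof=1)) / (n_tx + n_c - 2)
    d = (tx.mean() - c.mean()) / np.sqrt(pooled_var)
    # Hedges' small-sample correction
    return d * (1 - 3 / (4 * (n_tx + n_c) - 9))

# Hypothetical per-classroom fidelity: proportion of core steps observed
treatment = [0.90, 0.80, 0.95, 0.85]
control = [0.30, 0.40, 0.25, 0.35]
print(achieved_relative_strength(treatment, control))
```

An ARS near zero would indicate that the control condition received essentially the same "dose" of the core components as the treatment condition, diluting the contrast the experiment is testing.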
Steps 2 and 3: Develop Reliable and Valid Fidelity Indices and Apply to the Model
Examples of Fidelity Indices • Self-report surveys • Interviews • Participant logs • Observations • Examination of permanent products created during the implementation process
Index Reliability and Validity • Both are reported inconsistently • Report reliability at a minimum, because unreliable indices cannot be valid • Validity is probably best established from pre-existing information or side studies • We should be as careful in measuring the cause as we are in measuring its effects!
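Where a fidelity index is built from multiple items (e.g., the steps of an observation checklist), internal-consistency reliability can be estimated with Cronbach's alpha. A minimal sketch, assuming item-level scores arranged in a classrooms-by-items matrix; the data are illustrative only.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (observations x items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Rows: classrooms; columns: checklist items scored 0/1 by an observer
checklist = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
])
print(round(cronbach_alpha(checklist), 2))
```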
Selecting Indices • Guided foremost by the change model: identify core components as those that differ significantly between conditions and upon which the causal processes are thought to depend • Use the logic model to determine fidelity indicator(s) for each change component • Base the number and type of indices on the nature and importance of each component
Selecting Indices: Project LINCS (table of indices adapted from Swafford, Jones, and Thornton, 1997)
Why Combine Indices? • *May not be possible for the simplest models • *Depends on the particular questions • Combine within a component to assess fidelity to a construct • Combine across components to assess a phase of implementation • Combine across the model to characterize overall fidelity and facilitate comparison of studies
Some Approaches to Combining Indices: • Total percentage of steps implemented • Average number of steps implemented • HOWEVER: these approaches may underestimate or overestimate the importance of some components! • Weighting components based on the intervention model (a minimal sketch follows below) • Sensitivity analysis
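As a minimal sketch of model-based weighting, the composite below averages component fidelity scores (each scaled 0-1) using weights that reflect each component's hypothesized importance in the change model. The component names and weights are hypothetical.

```python
# Hypothetical fidelity scores (0-1) for one classroom, by core component
fidelity = {"training_attended": 1.0,
            "software_sessions": 0.6,
            "grouping_followed": 0.8}

# Model-based weights reflecting each component's hypothesized importance
weights = {"training_attended": 0.5,
           "software_sessions": 0.3,
           "grouping_followed": 0.2}

def composite_fidelity(scores, weights):
    """Weighted average of component fidelity scores."""
    total = sum(weights.values())
    return sum(scores[c] * weights[c] for c in weights) / total

print(composite_fidelity(fidelity, weights))                   # weighted: 0.84
print(composite_fidelity(fidelity, {c: 1 for c in fidelity}))  # unweighted: 0.80
```

A simple sensitivity analysis recomputes the composite under perturbed weights; if substantive conclusions change, the weighting scheme itself needs justification.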
MAP Example: weighting of training sessions for the MAP intervention (Cordray et al., unpublished)
Linking Fidelity and Outcomes • *Not possible in (rare) cases of perfect fidelity (no covariation without variation) • *Depends on the particular questions • Provides evidence supporting the model (or not) • Identifies "weak links" in implementation • Points to opportunities for "boosting" strength • Identifies incorrectly specified components of the model
Assessment to Instruction (A2i) • Teacher use of web-based software to differentiate reading instruction • Intervention model components: professional development; students use A2i; teachers use A2i recommendations for grouping and lesson planning; students improve learning • Measures: time teachers were logged in, observation of instruction, pre/post reading assessments (Connor, Morrison, Fishman, Schatschneider, & Underwood, 2007)
Assessment to Instruction (A2i) • Used hierarchical linear modeling (HLM) to analyze the data • Overall effect size of .25 for Tx vs. C • Pooling Tx and C, teacher time using A2i accounted for 15% of the variance in student performance • Because gains were greatest among teachers who both attended PD and logged more A2i time, the authors concluded both components were necessary for the outcome (Connor, Morrison, Fishman, Schatschneider, & Underwood, 2007)
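A hedged sketch of this kind of multilevel linking, with students nested within teachers and a teacher-level fidelity measure (A2i minutes) as a predictor. The file name and column names are hypothetical; this is not the authors' actual model.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per student, with the
# teacher-level fidelity score (a2i_minutes) repeated within class
df = pd.read_csv("a2i_students.csv")  # columns: post, pre, a2i_minutes, teacher_id

# Random-intercept model: student posttest predicted by pretest and
# teacher A2i use, with classrooms as the grouping factor
model = smf.mixedlm("post ~ pre + a2i_minutes", data=df, groups=df["teacher_id"])
print(model.fit().summary())
```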
Some Other Approaches to Linking from the Literature • Compare results of hypothesis testing (e.g., ANOVA) when "low-fidelity" classrooms are included or excluded • Correlate an overall fidelity index with each student outcome (a minimal example follows below) • Correlate each fidelity indicator with a single outcome • Calculate Achieved Relative Strength (ARS) and use HLM to link it to outcomes
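As a minimal sketch of the correlational approaches, assuming hypothetical classroom-level data:

```python
import numpy as np

# Hypothetical classroom-level data
fidelity_index = np.array([0.9, 0.6, 0.8, 0.4, 0.7])  # overall fidelity per classroom
outcome_gain = np.array([12.0, 5.0, 9.0, 3.0, 8.0])   # mean student gain per classroom

# Pearson correlation between overall fidelity and the outcome
r = np.corrcoef(fidelity_index, outcome_gain)[0, 1]
print(f"fidelity-outcome correlation: r = {r:.2f}")
```

With the handful of classrooms typical of small trials, such correlations are unstable; consistent with the limitations noted earlier, they are descriptive evidence, not causal estimates.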