True Single-Case Applications and the WWC Standards

Learn about the design and evidence standards used by the What Works Clearinghouse (WWC) to evaluate single-case applications. Discover the criteria for systematic manipulation of the independent variable, inter-assessor agreement, demonstrating intervention effects, and the minimum number of phases and data points per phase. Assess the quality of single-case designs against the WWC design and evidence standards.

Presentation Transcript


  1. True Single-Case Applications and the WWC Standards • What Works Clearinghouse Standards • Design Standards • Evidence Standards • Social Validity

  2. Evaluate the Design • Meets Design Standards • Meets with Reservations • Does Not Meet Design Standards • Evaluate the Evidence • Strong Evidence • Moderate Evidence • No Evidence • Effect-Size Estimation • Social Validity Assessment

  3. WWC Standards: Evaluating the Quality of Single-Case Designs

  4. WWC Single-Case Design Standards • Four Standards for Design Evaluation • Systematic manipulation of independent variable • Inter-assessor agreement • Three attempts to demonstrate an effect at three different points in time • Minimum number of phases and data points per phase, for phases used to demonstrate an effect • Standard 3 Differs by Design Type • Reversal / Withdrawal Designs (ABAB and variations) • Alternating Treatments Designs • Multiple Baseline Designs

  5. Standard 1: Systematic Manipulation of the Independent Variable • Researcher Must Determine When and How the Independent Variable Conditions Change. • If Standard Is Not Met, Study Does Not Meet Evidence Standards.

  6. Examples of Manipulation that is Not Systematic • Teacher/Consultee Begins to Implement an Intervention Prematurely Because of Parent Pressure. • Researcher Looks Retrospectively at Data Collected during an Intervention.

  7. Standard 2: Inter-Assessor Agreement • Each Outcome Variable for Each Case Must be Measured Systematically by More than One Assessor. • Study Needs to Collect Inter-Assessor Agreement: • In each phase • On at least 20% of the data points in each condition (e.g., baseline, intervention) • Rate of Agreement Must Meet Minimum Thresholds: • (e.g., 80% agreement or Cohen’s kappa of 0.60) • If No Outcomes Meet These Criteria, Study Does Not Meet Evidence Standards.
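
As a concrete illustration of the two statistics named above, the sketch below computes simple percent agreement and Cohen's kappa from paired assessor records. The session data are hypothetical examples, not drawn from any reviewed study.

    # Percent agreement and Cohen's kappa for one outcome variable,
    # measured independently by two assessors across the same sessions.

    def percent_agreement(a, b):
        """Proportion of sessions on which the two assessors agree."""
        return sum(1 for x, y in zip(a, b) if x == y) / len(a)

    def cohens_kappa(a, b):
        """Chance-corrected agreement for categorical records."""
        n = len(a)
        p_obs = percent_agreement(a, b)
        # Expected agreement under independence: products of marginal proportions.
        p_exp = sum((a.count(c) / n) * (b.count(c) / n) for c in set(a) | set(b))
        return (p_obs - p_exp) / (1 - p_exp)

    assessor_1 = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]  # 1 = behavior observed
    assessor_2 = [1, 1, 0, 1, 1, 1, 1, 0, 1, 1]

    print(f"{percent_agreement(assessor_1, assessor_2):.0%}")  # 90%
    print(f"{cohens_kappa(assessor_1, assessor_2):.2f}")       # 0.74

Against the example thresholds on this slide (80% agreement or a kappa of 0.60), this hypothetical pair of records would clear both.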

  8. Current Reviews: Author Queries Occur When Study Text Provides Insufficient IOA Information • Determine if Standard is Met Based on Response • If the result of the query indicates that the study does not meet standards, treat it as such. • If No Response, Assume Standard is Met if: • The minimum level of agreement is reached. • The study assesses IOA at least once in each phase. • The study assesses IOA on at least 20% of all sessions. • Footnote is added to WWC Product Indicating that IOA Not Fully Determined.
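
The no-response fallback reads as a three-part conjunction; a minimal sketch, with hypothetical values for a single reviewed study:

    # The three no-response conditions listed above, applied to one study.
    def assume_ioa_standard_met(agreement, threshold, phases_with_ioa,
                                total_phases, ioa_sessions, total_sessions):
        return (agreement >= threshold                      # minimum agreement reached
                and phases_with_ioa == total_phases         # IOA at least once per phase
                and ioa_sessions / total_sessions >= 0.20)  # IOA on >= 20% of sessions

    # Hypothetical study: 85% agreement, IOA in all 4 phases, 6 of 24 sessions.
    print(assume_ioa_standard_met(0.85, 0.80, 4, 4, 6, 24))  # True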

  9. Standard 3: Three Attempts to Demonstrate an Intervention Effect at Three Different Points in Time • “Attempts” Are about Phase Transitions • Designs that Could Meet This Standard Include: • ABAB design • Multiple baseline design with three baseline phases and staggered introduction of the intervention • Alternating treatment design • Designs Not Meeting this Standard Include: • AB design • ABA design • Multiple baselines with three baseline phases and intervention introduced at the same time for each case
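
For reversal/withdrawal designs this standard can be checked by counting phase transitions, since each A-to-B or B-to-A change is one attempt; the sketch below applies that count to the sequences named above. This is a simplification: multiple baseline and alternating treatment designs demonstrate effects through staggered introduction and repeated alternation rather than reversals, so they are judged differently.

    # Each phase transition in a reversal design is one attempt to
    # demonstrate an effect; three attempts requires at least ABAB.
    def demonstration_attempts(phase_sequence):
        return sum(prev != curr
                   for prev, curr in zip(phase_sequence, phase_sequence[1:]))

    for design in ["AB", "ABA", "ABAB"]:
        n = demonstration_attempts(design)
        verdict = "could meet" if n >= 3 else "does not meet"
        print(f"{design}: {n} attempt(s) -> {verdict} Standard 3")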

  10. Standard 4: Minimum Number of Phases and Data Points per Phase (for Phases in Std 3)

  11. Meets Evidence Standards with Reservations (MESWR) • 3 Attempts at 3 Different Points in Time • 4 Phases with At Least 3 Data Points per Phase (Adapted from Horner and Spaulding, 2010)

  12. Meets Evidence Standards (MES) • 3 Attempts at 3 Different Points in Time • 6 Phases with At Least 5 Data Points per Phase (Source: Kern et al., 1994)

  13. Ratings Differ by Research Question • MESWR – Int 1 v. Int 2 • DNotMES (Does Not Meet Evidence Standards) – Int 1 v. Int 3 • DNotMES – Int 2 v. Int 3 (Source: Horner and Spaulding, 2010)

  14. Extensions of Core Designs • Changing Criterion Designs • Researcher pre-schedules changes in the criterion or intensity of the intervention • Can meet evidence standards with at least 3 criterion shifts (for Standard 3) • Non-concurrent Multiple Baseline • Completely non-concurrent MBDs have baselines that do not overlap when examined vertically • Designs with NO vertical overlap at baseline do not meet standards because of the history threat • Multiple Probe • Multiple Probe (Days) • Multiple Probe (Conditions)

  15. Design Evaluation • Meets Standards • IV manipulated directly • IOA documented (at least 80% agreement or Kappa of .60) • On at least 20% of data points in each phase • Design allows opportunity to assess basic effect at three different points in time • Five data points per phase (or design equivalent) • ATD (four comparison option) • Meets with Reservations • All of the above, except at least three data points per phase • Does Not Meet Standards
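
Pulling the thresholds together (the four-phase minimum from slide 11, the five- and three-point cutoffs from this slide), the rating logic for a simple reversal design can be sketched as follows. This assumes the IV-manipulation, IOA, and three-demonstrations checks have already passed, and it omits the ATD four-comparison option and other design equivalents.

    # Rate a reversal design from the number of data points in each phase,
    # assuming Standards 1-3 are already met.
    def rate_design(points_per_phase):
        if len(points_per_phase) < 4 or min(points_per_phase) < 3:
            return "Does Not Meet Standards"
        if min(points_per_phase) >= 5:
            return "Meets Standards"
        return "Meets with Reservations"

    print(rate_design([5, 5, 6, 5]))  # Meets Standards
    print(rate_design([4, 3, 5, 4]))  # Meets with Reservations
    print(rate_design([5, 5, 2, 5]))  # Does Not Meet Standards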
