Evaluation/Research Design Overall plan that describes all of the elements of a research or evaluation study; ideally, the plan allows the researcher or evaluator to reach valid conclusions
Single-Case Design
• Family of designs characterized by:
  • Systematic repeated measurement of a client’s outcome(s) at regular, frequent, pre-designated intervals under different conditions (baseline and intervention)
  • Evaluation of outcomes over time and under different conditions in order to monitor client progress, identify intervention effects, and, more generally, learn when, why, how, and the extent to which client change occurs
A-B Design Two-phase single-case design consisting of a pre-intervention baseline phase (A) followed by an intervention phase (B)
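As a minimal illustration of the A-B structure (the outcome scale, values, and phase lengths below are invented for this sketch, not taken from any real case), the two phases can be summarized by comparing the outcome level in each:

```python
# Hypothetical A-B design data: weekly anxiety ratings on a 0-10 scale
# (higher = worse). All values are invented for illustration.
baseline = [8, 7, 8, 9, 8]      # A phase: pre-intervention measurements
intervention = [7, 6, 5, 4, 4]  # B phase: measurements during intervention

def phase_mean(scores):
    """Average outcome level within a single phase."""
    return sum(scores) / len(scores)

# A drop in level from A to B is consistent with (but does not prove)
# an intervention effect -- see the threats to internal validity below.
change = phase_mean(baseline) - phase_mean(intervention)
print(f"Baseline (A) mean:     {phase_mean(baseline):.1f}")
print(f"Intervention (B) mean: {phase_mean(intervention):.1f}")
print(f"Change in level:       {change:.1f}")
```

A simple between-phase mean comparison like this is only a starting point; in practice, trend and variability within each phase also matter when judging client change.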
Strengths of the A-B Design
• Baseline can be used to:
  • Confirm or disconfirm that the problem exists
  • Establish the extent of the problem
  • Develop and explore hypotheses useful for case conceptualization and intervention planning
  • Determine whether the problem is getting better or worse, and the pace of change
  • Estimate what would happen to the client’s outcome without intervention
Strengths of the A-B Design (cont’d)
• In general, the A-B design can be used to:
  • Determine whether your client is changing over time, whether changes are for the better or worse, whether the pace of change is satisfactory, and whether the amount of change is sufficient
  • Determine the extent to which your intervention is related to client change
Potential Limitations of the A-B Design
• Can’t use it to determine the extent to which client change is lasting
• Can’t use it to determine whether your intervention will have the same effect with different clients, different problems, or under different circumstances
• Can’t use it to determine the extent to which your intervention causes client change
Cause and Effect
• Cause: A variable (e.g., intervention) that produces an effect or is responsible for events or results (e.g., outcome)
• Effect: Change in one variable (e.g., outcome) that occurred at least in part as the result of another variable (e.g., intervention)
Intervention Effect Portion of an outcome change that can be attributed uniquely to an intervention rather than to other influences
Criteria for Causality • Cause must precede the effect • Cause must covary with the effect • Knowledge must be available of what would have happened in the absence of the cause • Alternative explanations must be ruled out
Alternative Explanations Plausible reasons for a relationship between an intervention and an outcome, other than that the intervention caused the outcome
Internal Validity Accuracy of conclusions based on evidence and reasoning about causal relationships between variables (e.g., extent to which an intervention, as opposed to other factors, caused a change in an outcome)
Threats to Internal Validity
• Reasons why it might be partly or completely wrong (i.e., invalid) to conclude that one variable (e.g., an intervention) caused another (e.g., an outcome):
  • History effect
  • Instrumentation effect
  • Maturation effect
  • Regression effect
  • Testing effect (e.g., fatigue, practice)
  • Ambiguous temporal precedence
History Effect Potential threat to internal validity in which change in an outcome could be misinterpreted as an intervention effect, when in fact it is caused by an external event that occurs at the same time as the intervention
Instrumentation Effect Potential threat to internal validity in which an apparent change in an outcome could be misinterpreted as an intervention effect, when in fact it is caused by a change in how the outcome is measured
Maturation Effect Potential threat to internal validity in which change in an outcome could be misinterpreted as an intervention effect, when in fact it is caused by naturally occurring changes in clients over time
Testing Effect Potential threat to internal validity in which change in an outcome could be misinterpreted as an intervention effect, when in fact it is caused by repeated measurement of the outcome
• Fatigue effect: Deterioration in an outcome caused by fatigue associated with repeated measurement of the outcome
• Practice effect: Improvement in an outcome caused by repeated measurement of the outcome
Regression Effect Potential threat to internal validity in which change in an outcome could be misinterpreted as an intervention effect, when in fact it is caused by the tendency of an individual with unusually high or low scores on a measure to subsequently have scores closer to the mean
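Regression to the mean can be demonstrated with a short simulation (the trait scale, group size, and noise levels below are arbitrary choices for illustration): individuals selected for extreme first scores tend, as a group, to score closer to the mean on a second measurement, even with no intervention at all.

```python
import random

random.seed(0)  # fixed seed so the simulation is reproducible

# Each observed score = stable true level + independent measurement noise.
# Extreme first scores partly reflect noise that does not recur at retest.
n = 10_000
true_level = [random.gauss(50, 10) for _ in range(n)]
score1 = [t + random.gauss(0, 10) for t in true_level]  # first measurement
score2 = [t + random.gauss(0, 10) for t in true_level]  # second measurement

# Select individuals with unusually high first scores (top decile),
# mimicking clients referred because a problem measure was extreme.
cutoff = sorted(score1)[int(0.9 * n)]
extreme = [i for i in range(n) if score1[i] >= cutoff]

mean1 = sum(score1[i] for i in extreme) / len(extreme)
mean2 = sum(score2[i] for i in extreme) / len(extreme)
print(f"Extreme group, measurement 1: {mean1:.1f}")
print(f"Same group,    measurement 2: {mean2:.1f}")
```

The group’s second mean falls back toward the overall mean of 50 with no intervention applied, which is exactly the pattern that can be mistaken for an intervention effect when clients enter treatment at their worst.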
Ambiguous Temporal Precedence Potential threat to internal validity in which it is not clear whether one variable (e.g., intervention) occurred before or after another (e.g., outcome), making it difficult to distinguish the cause from the effect
A-B-A Design Three-phase single-case design consisting of: a pre-intervention baseline phase (A1); an intervention phase (B); and a second baseline phase (A2) in which the intervention is withdrawn to determine if the outcome “reverses” to the initial baseline pattern
A-B-A-B Design Four-phase single-case design consisting of: a pre-intervention baseline phase (A1); an intervention phase (B1); a second baseline phase (A2) in which the intervention is withdrawn to determine if the outcome “reverses” to the initial baseline pattern; and a reintroduction of the intervention (B2) to see whether the initial intervention effects are replicated
Multiple Baseline Designs
• Multiple baseline across settings
• Multiple baseline across subjects (clients)
• Multiple baseline across behaviors (problems)
Multiple Baseline Across Settings Design Single-case design that begins with a baseline during which the same problem is measured for a single client in two or more settings at the same time. Baseline is followed by the application of the intervention in one setting while baseline conditions remain in effect for other settings, then the intervention is applied sequentially across the remaining settings to see whether intervention effects are replicated across different settings.
Multiple Baseline Across Subjects (Clients) Design Single-case design that begins with a baseline during which the same problem is measured for two or more clients at the same time in a particular setting. Baseline is followed by the application of the intervention to one client, while baseline conditions remain in effect for other clients, then the intervention is applied sequentially to remaining clients to see whether intervention effects are replicated across different clients
Multiple Baseline Across Behaviors (Problems) Design Single-case design that begins with a baseline during which two or more problems are measured at the same time for a single client in a particular setting. Baseline is followed by the application of the intervention to one problem with baseline conditions remaining in effect for other problems, then the intervention is applied sequentially to the remaining problems to see whether intervention effects are replicated across different problems
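The staggered logic shared by all three multiple baseline variants can be sketched as a phase schedule (the settings, start weeks, and study length below are invented for this example): the intervention is introduced at a different point for each series, so the series still in baseline act as controls for the ones already receiving the intervention.

```python
# Hypothetical multiple-baseline-across-settings schedule for one client.
# Intervention start weeks are staggered across settings (invented values).
settings = ["home", "school", "clinic"]
start_week = {"home": 4, "school": 7, "clinic": 10}
total_weeks = 12

for setting in settings:
    # "A" = baseline week, "B" = intervention week for this setting
    phases = ["A" if week < start_week[setting] else "B"
              for week in range(1, total_weeks + 1)]
    print(f"{setting:>6}: {' '.join(phases)}")
```

If the outcome improves in each setting only after the intervention is introduced there, and not before, alternative explanations such as a history effect become much less plausible, because an external event would be unlikely to strike each setting at exactly its staggered start week.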
Going from A to B Decisions about when to end the baseline phase (A) and introduce the intervention (B) are based on the pattern of the baseline data
Where Do You Go After B? Decisions about what phase comes next are based on the pattern of the outcome data during the intervention phase (B)
A-B-C Design Three-phase single-case design consisting of: a pre-intervention baseline (A); an intervention phase (B); and a second intervention phase (C) in which a new intervention is introduced in response to the failure of the first intervention to produce sufficient improvement in the outcome
A-B-BC Design Three-phase single-case design consisting of: a pre-intervention baseline (A); an intervention phase (B); and a second intervention phase in which a new intervention (C) is added to the first intervention in response to the failure of the first intervention to produce sufficient improvement in the outcome
Follow-up Phase
• Period of time after an intervention has ended during which outcome data are collected to determine the extent to which a client’s progress has been maintained
• Also known as a “maintenance phase”