NCSER Summer Institute on Single Case Intervention Design and Analysis: Logic and Foundations of Group Intervention Research

David J. Francis
University of Houston
Texas Institute for Measurement, Evaluation, and Statistics
Overview
• The overarching focus of this Summer Institute is the conduct of rigorous single-subject experiments.
• However, the organizers felt it would be beneficial to include some background on group interventions.
• Group interventions are the focus of a second IES-sponsored institute.
• Much of what I will present comes courtesy of Mark Lipsey (Vanderbilt) and Larry Hedges (Northwestern), organizers of that institute.
Overview
• Causal Inference as Scientific and Statistical Hypotheses
• Conditions for Meaningful Investigation of Educational Interventions
• Moving from Scientific Question to Experimental Design and Statistical Analysis
Learning Objectives
(1) Develop a conceptual understanding of causal inference in education science and the role of statistical inference in testing scientific hypotheses.
(2) Become familiar with Rubin's Causal Model (RCM), potential outcomes, and average treatment effects.
(3) Be able to delineate the major preconditions for meaningful scientific investigation of the efficacy of educational interventions.
Learning Objectives (cont.)
(4) Be able to describe the major dimensions underlying a theory of change.
(5) Be able to describe the stages of intervention development and testing, and the differences in their requirements for design and analysis.
(6) Be able to apply this knowledge to critique the development of a familiar intervention in one's area and propose a plan to advance its development.
Causal Inference and Rubin's Causal Model
• Causal hypotheses are scientific hypotheses.
• Causal hypotheses are not fundamentally statistical hypotheses, but they have statistical implications.
• Science is in the business of explanation; this is true irrespective of the particular science.
• Prediction and control are consequences of explanation.
Causal Inference (cont.)
• While prediction may be of interest in its own right, we are generally more interested in explanation than in prediction.
• Prediction is a consequence of explanation, but the converse is not necessarily true.
• When causes are understood, prediction is more precise.
• Experimentation leads to explanation.
Causal Inference (cont.)
• In observational studies, there are always many plausible rival explanations for an observed effect.
• These rival explanations can often be operationalized as additional variables to be included in the system of variables under observation.
• In fact, R.A. Fisher's development of research design was motivated by his experiences with observational data.
A Little History
• In 1919, Fisher was hired at the Rothamsted agricultural station to identify the effects of treatments by examining observational data on crop yields.
• Fisher used regression analyses to identify the effects of confounding variables (rainfall, temperature, drainage, weather, etc.), with qualitative methods in support.
• He concluded that the effects of the confounding variables were much larger than the systematic effects he was trying to study.
Hedges, IES/NCER 2010, Summer Institute on RCTs
Fisher Develops a Science of Experimental Design
Fisher invents:
• The basic principles of experimental design
• Control of variation by randomization
• Analysis of variance
• Analysis of covariance
Hedges, IES/NCER 2010, Summer Institute on RCTs
Experimental Design: What It Is and Why We Need It
• Experimental design includes both:
  • Strategies for organizing data collection, and
  • Data analysis procedures matched to those data collection strategies
• Experimental design is necessary because of variability.
• If experimental units were identical AND responded identically to treatment, we would not need a science of experimental design. (A minimal simulation sketch of how randomization tames this variability follows below.)
Hedges, IES/NCER 2010, Summer Institute on RCTs
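To make "control of variation by randomization" concrete, here is a minimal Python sketch, not from the slides: the unobserved unit characteristic u and all numbers are illustrative assumptions. The point it demonstrates is that random assignment balances even unmeasured unit differences across groups on average, so they cannot systematically confound the treatment comparison.

```python
import random
from statistics import mean

# Units differ on an unobserved characteristic u (illustrative values).
rng = random.Random(1)
u = [rng.gauss(0, 1) for _ in range(1000)]

# Randomly assign each unit to treatment (True) or control (False).
assign = [rng.random() < 0.5 for _ in u]

mean_u_treat = mean(ui for ui, a in zip(u, assign) if a)
mean_u_ctrl = mean(ui for ui, a in zip(u, assign) if not a)

# With random assignment, the group means of u differ only by chance.
print(f"mean u (treatment): {mean_u_treat:+.3f}")
print(f"mean u (control):   {mean_u_ctrl:+.3f}")
```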
• Statistical methods do not "discover" causality.
• Statistical methods support causal inferences through the management of uncertainty in scientific observations.
• Statistical inferences are derived from models for observations.
• These models vary in complexity and in their assumptions. "All models are wrong; some models are useful!" (attributed in various forms to G.E.P. Box)
• It is important to understand the ways in which the model, including its assumptions, may be wrong, and the implications of such misspecifications for our scientific conclusions.
Causality
• We all have some implicit understanding of what it means to say that X causes Y.
• It is more difficult to provide a philosophical definition of causality, particularly one that everyone can agree on.
Causality (cont.)
• "Causal laws are relations in nature that reveal what one would have to be able to do to effect specific kinds of outcomes unambiguously. And functional relations are an appropriate form for expressing these kinds of relationships." (S. Mulaik, 1985)
• A functional relation is any relation in which each input is associated with a unique output.
Causality (cont.)
• Note that causal laws are asymmetric: effect values can be determined from the values of the causes, but not necessarily vice versa.
• We can distinguish two types of functional relations: deterministic and probabilistic (see the formal sketch below).
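To make the two kinds of functional relation concrete, a small formal sketch; the notation and example functions are illustrative additions, not from the slides:

```latex
% Deterministic functional relation: each value of the cause X
% determines a unique value of the effect Y.
Y = f(X), \qquad \text{e.g. } Y = 2X + 3.

% Probabilistic functional relation: each value of X determines a
% unique probability distribution over Y, not a unique value of Y.
Y \mid X = x \;\sim\; P(\,\cdot \mid x\,),
\qquad \text{e.g. } Y \mid X = x \sim \mathrm{Bernoulli}\bigl(p(x)\bigr).
```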
Causality (cont.)
• For X to be a cause of Y, there must be at least two distinct probability distributions for Y across the values of X, holding all other causes of Y constant.
• Otherwise, Y does not vary as a function of X, and therefore X does not cause Y.
Simple example of probabilistic causation:
• Take a six-sided fair die. Weight the "6" face with different weights (X). Observe p(Y | X = x).
• As X increases, p(Y = 1) increases, since the heavier "6" face tends to land down and "1" is the opposite face. (A simulation sketch follows.)
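A minimal Monte Carlo sketch of this example in Python. The linear loading scheme (shifting probability mass from the "6" face to the opposite "1" face as the weight grows) is an illustrative assumption, not a physical model of a loaded die; what it demonstrates is one conditional distribution of Y for each value of X.

```python
import random

def weighted_die_roll(weight, rng):
    """Roll a six-sided die whose '6' face carries extra weight.

    Illustrative loading scheme: as weight grows (0 = fair, 1 = maximally
    loaded here), probability mass shifts from face 6 to the opposite
    face 1, since a heavy face tends to land down.
    """
    probs = [1 / 6] * 6                        # fair baseline, faces 1..6
    shift = min(max(weight, 0.0), 1.0) * (1 / 6)
    probs[5] -= shift                          # face 6 shows less often
    probs[0] += shift                          # face 1 shows more often
    return rng.choices(range(1, 7), weights=probs, k=1)[0]

def estimate_p_one(weight, n=100_000, seed=0):
    """Estimate p(Y = 1 | X = weight) by Monte Carlo simulation."""
    rng = random.Random(seed)
    return sum(weighted_die_roll(weight, rng) == 1 for _ in range(n)) / n

if __name__ == "__main__":
    # A different conditional distribution of Y for each value of X:
    for w in (0.0, 0.25, 0.5, 1.0):
        print(f"weight X = {w:.2f} -> p(Y = 1) ~ {estimate_p_one(w):.3f}")
```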
Rubin's Causal Model: Potential Outcomes and Average Treatment Effects
• Rubin (1974) introduced the idea of potential outcomes to education research and defined the causal effect of a treatment in terms of "average causal effects."
• The effect τj = yj(E) − yj(C) of treatment E cannot be observed for an individual, because it is defined as the difference between what would happen if person j received the treatment (E) at time 1 and what would have happened if person j received the control (C) at time 1. (The definitions are restated in symbols below.)
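In symbols, restating the slide's definitions with notation common in RCM presentations:

```latex
% Unit-level causal effect: never fully observable for any single j,
% because only one of the two potential outcomes is ever realized.
\tau_j = y_j(E) - y_j(C)

% Average treatment effect over the population of units:
\mathrm{ATE} = \mathbb{E}\left[\, y_j(E) - y_j(C) \,\right]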
Rubin's Causal Model – Potential Outcomes
[Table of potential outcomes reproduced from Rubin, D. (2005), JASA, p. 323]
Rubin's Causal Model: Potential Outcomes and Average Treatment Effects
• Each potential outcome is observable, but we cannot observe them all.
• Because either yj(E) or yj(C) can be observed for a particular unit j, but not both, treatment effects must be defined in terms of summary causal effects.
• One such summary is the average treatment effect, found by taking the difference in outcome means across units receiving E and C.
• This notion of average treatment effects is central to the design and analysis of group randomized trials. (A minimal worked example follows.)
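To tie the pieces together, a minimal Python sketch of RCM bookkeeping. The six potential-outcome pairs are made-up illustrative numbers, and the estimator is the simple difference in observed group means described above.

```python
import random
from statistics import mean

# Hypothetical potential outcomes for 6 units: (y_j(E), y_j(C)).
# Both values are listed for illustration; in a real study only
# one of the two is ever observed for any unit j.
potential = [(14, 10), (11, 9), (16, 12), (9, 8), (13, 10), (12, 11)]

# True average treatment effect: the mean of the unit-level effects
# tau_j = y_j(E) - y_j(C), computable only because this is a toy table.
true_ate = mean(ye - yc for ye, yc in potential)

# One randomized experiment: assign half the units to E, half to C,
# and observe only the corresponding potential outcome for each unit.
rng = random.Random(42)
units = list(range(len(potential)))
rng.shuffle(units)
treated, control = units[:3], units[3:]

est_ate = (mean(potential[j][0] for j in treated)
           - mean(potential[j][1] for j in control))

print(f"true ATE      = {true_ate:.2f}")
print(f"estimated ATE = {est_ate:.2f}  (difference in observed means)")
```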