
Establishing a Cause-Effect Relationship


Presentation Transcript


  1. Establishing a Cause-Effect Relationship

  2. Internal Validity • Is the relationship causal between the "treatment" and the "outcomes" (the independent and dependent variables)? • Diagram: Treatment ("what you do") leads to Outcomes ("what you see") in this study, with alternative causes surrounding that observed relationship.

  3. Establishing Cause and Effect Temporal precedence

  4. Establishing Cause and Effect • Temporal precedence: the cause must come before the effect in time. • This can get complicated through sloppiness (e.g., campaign contributions) and through chicken-and-egg cyclical relationships (e.g., democracy and GDP).

  5. Establishing Cause and Effect • Temporal precedence (cause, then effect, over time) • Covariation of cause and effect

  6. Establishing Cause and Effect • Temporal precedence • Covariation of cause and effect: if X, then Y; if not X, then not Y. If the treatment is given, the outcome is observed (usually); if the program is not given, the outcome is not observed.

  7. Establishing Cause and Effect • Temporal precedence • Covariation of cause and effect: if X, then Y; if not X, then not Y; if the program is given, the outcome is observed; if the program is not given, the outcome is not observed. • Dosage effects, or comparative statics: if more of the treatment is given, more of the outcome is observed; if less of the treatment is given, less of the outcome is observed. (See the covariation sketch below.)
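
A minimal sketch of these covariation checks, in Python with simulated data (the variable names, effect sizes, and dosage scale are invented purely for illustration):

```python
import numpy as np

# Simulated data for a hypothetical program with a true positive effect.
# Every number here is invented for illustration only.
rng = np.random.default_rng(0)
n = 1_000

dose = rng.integers(0, 4, size=n)            # 0 = no program, 1-3 = increasing dosage
outcome = 2.0 * dose + rng.normal(0, 3, n)   # outcome rises with dosage, plus noise
treated = dose > 0

# Covariation, presence/absence form: "if X, then Y; if not X, then not Y"
print("mean outcome, treated:  ", round(float(outcome[treated].mean()), 2))
print("mean outcome, untreated:", round(float(outcome[~treated].mean()), 2))

# Covariation, dosage form: more treatment, more outcome
for d in range(4):
    print(f"dose {d}: mean outcome = {outcome[dose == d].mean():.2f}")

# One-number summary of the dose-response relationship
print("dose-outcome correlation:", round(float(np.corrcoef(dose, outcome)[0, 1]), 2))
```

Covariation alone is not causation; the remaining slides turn to the third criterion, ruling out alternative explanations.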

  8. Establishing Cause and Effect • Temporal precedence • Covariation of cause and effect (if X, then Y; if not X, then not Y) • No alternative explanations: the path runs from treatment to outcome (micromediation), not through something else.

  9. Establishing Cause and Effect • Temporal precedence • Covariation of cause and effect • No alternative explanations: alternative causes, both substantive and nuisance, surround the treatment-to-outcome path (micromediation) and must be ruled out.

  10. In Lab or Field Experiments… • Temporal precedence is taken care of because you intervene before you measure the outcome. • Covariation of cause and effect is measured by comparing treated and untreated groups. • No alternative explanations is the central issue of internal validity; it is usually taken care of through random assignment.

  11. Single-Group Threats to Internal Validity

  12. The Single Group Case • Two designs:

  13. The Single Group Case • First design: the "post-test-only single-group design," notated X O. • X is the treatment (administer the program); O is the observation (measure outcomes).

  14. The Single Group Case • Second design: the "pre-test, post-test single-group design" (or "interrupted time series"), notated O X O. • Measure a baseline (O), administer the program (X), then measure outcomes (O).

  15. The Single Group Case • In both designs (X O and O X O), alternative explanations surround the program-outcome link: something other than the program could account for what is observed. (See the pre-post sketch below.)
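
A rough illustration of why the single-group design invites alternative explanations, in Python with simulated data (the effect size and the "secular trend" are invented): the naive pre-post difference in an O X O design mixes the program effect with any change that would have happened anyway.

```python
import numpy as np

# Simulated single-group O X O design. All numbers are invented for illustration.
rng = np.random.default_rng(1)
n = 200

true_program_effect = 1.0
secular_trend = 0.8     # everyone drifts in this direction anyway (history/maturation)

pretest = rng.normal(50, 10, n)                     # O: measure baseline
posttest = (pretest + secular_trend                 # X: administer program, then O
            + true_program_effect + rng.normal(0, 2, n))

naive_estimate = (posttest - pretest).mean()
print(f"naive pre-post estimate: {naive_estimate:.2f}")
print(f"true program effect:     {true_program_effect:.2f}")
# The gap between the two numbers is exactly the kind of alternative
# explanation (history, maturation, etc.) that this design cannot rule out.
```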

  16. Example • After the 2003 recall election, did Democrats in the California Assembly move to the center? • California ran a full legislative "season" before the October 2003 election, then ran another "season" afterward. • We can look at roll-call voting behavior.

  17. Example: What Kind of Design?

  18. History Threat (O X O: pretest, program, posttest) • Any other event that occurs between pretest and posttest. • Perhaps the nation was just shifting to the center at this time. • How might we rule it out?

  19. Maturation Threat (O X O: pretest, program, posttest) • Normal growth between pretest and posttest. • Coming into an election year, state legislators always shift to the center.

  20. Ruling Out a Maturation Threat

  21. Testing Threat (O X O: pretest, program, posttest) • The effect on the posttest of taking the pretest. • Legislators may have learned that the state was watching them. When real tests are given, this is a big problem.

  22. Instrumentation Threat (O X O: pretest, program, posttest) • Any change in the measurement instrument between pretest and posttest. • The "test" would differ if a different roll-call estimation technology were used.

  23. Mortality Threat (O X O: pretest, program, posttest) • Nonrandom dropout between pretest and posttest. • If some legislators had been recalled along with Gray Davis, this would be a problem.

  24. Regression Threat (O X O: pretest, program, posttest) • The group is a nonrandom, extreme subgroup of the population, so its scores tend to drift back toward the mean when remeasured. • If the 2003 session was particularly extreme, any other session would look more centrist. (See the regression-to-the-mean sketch below.)
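
The regression threat is easy to see in a short simulation, in Python with purely illustrative numbers: select an extreme group on a noisy pretest and its posttest mean moves back toward the population mean even though nothing was done to it.

```python
import numpy as np

# Regression to the mean with NO treatment at all. Numbers are illustrative only.
rng = np.random.default_rng(2)
n = 10_000

true_score = rng.normal(0, 1, n)             # stable underlying tendency
pretest = true_score + rng.normal(0, 1, n)   # noisy measurement, wave 1
posttest = true_score + rng.normal(0, 1, n)  # noisy measurement, wave 2

# Select an extreme subgroup on the pretest (like an unusually extreme session)
extreme = pretest > 2.0

print(f"extreme group, pretest mean:  {pretest[extreme].mean():.2f}")
print(f"extreme group, posttest mean: {posttest[extreme].mean():.2f}")
# The posttest mean falls back toward 0 without any intervention; a pre-post
# design on an extreme group would misread this drift as a program effect.
```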

  25. Multiple-Group Threats to Internal Validity

  26. The Central Issue • When you move from single-group to multiple-group research, the big concern is whether the groups are comparable. • Usually this has to do with how you assign units (for example, persons) to the groups, or how they select themselves into groups. • If you are not careful, you may mistake a selection effect for a treatment effect.

  27. The Multiple Group Case • Treatment group: O X O (measure baseline, administer treatment, measure outcomes). • Comparison group: O O (measure baseline, do not administer treatment, measure outcomes). • Alternative explanations still surround both groups. (See the comparison-group sketch below.)
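
A minimal sketch of the multiple-group logic, in Python with simulated data (the selection gap, shared trend, and effect size are all invented): comparing the pre-post change in the program group with the change in the comparison group, sometimes called a difference-in-differences comparison, removes a trend the groups share, though it does not remove every selection threat.

```python
import numpy as np

# Simulated pre-post program-comparison group design (O X O vs. O  O).
# All numbers are invented for illustration.
rng = np.random.default_rng(3)
n = 500

true_effect = 2.0
shared_trend = 1.5                       # history/maturation common to both groups

pre_program = rng.normal(40, 5, n)       # program group starts lower (selection)
pre_compare = rng.normal(50, 5, n)

post_program = pre_program + shared_trend + true_effect + rng.normal(0, 2, n)
post_compare = pre_compare + shared_trend + rng.normal(0, 2, n)

change_program = (post_program - pre_program).mean()
change_compare = (post_compare - pre_compare).mean()

print(f"program group change:    {change_program:.2f}")
print(f"comparison group change: {change_compare:.2f}")
print(f"difference in changes:   {change_program - change_compare:.2f}")
# The difference in changes recovers the true effect here only because the groups
# share the same trend; selection-history or selection-maturation would break that.
```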

  28. Example • Suppose USAID looked before and after at countries where it did and didn’t run governance programs in the last decade • Pre-post program-comparison group design • Measures (O) are all of the things Clark hates, but let’s set that aside for now.

  29. Selection Threats (O X O vs. O O) • Any factor other than the program that leads to posttest differences between the groups. • USAID did not randomly select the countries in which it ran programs; it sent aid to those with the lowest-rated governments.

  30. Selection-History Threat (O X O vs. O O) • Any other event that occurs between pretest and posttest that the groups experience differently. • For example, countries that began with more stable democracies faced fewer challenges over the past decade.

  31. Selection-Maturation Threat (O X O vs. O O) • Differential rates of normal growth between pretest and posttest for the groups. • It is easier to move from a semi-democracy to a full democracy than to move from a non-democracy to a semi-democracy.

  32. Selection-Testing Threat (O X O vs. O O) • Differential effect on the posttest of taking the pretest. • At least these measures are "unobtrusive," so this probably is not a grave threat.

  33. Selection-Instrumentation Threat (O X O vs. O O) • Any differential change, between pretest and posttest, in the test used for each group. • For example, the Polity measures may give some countries credit for having a USAID program.

  34. Selection-Mortality Threat (O X O vs. O O) • Differential nonrandom dropout between pretest and posttest. • Perhaps the countries with weak governments were more likely to cease being countries over the past decade.

  35. Selection-Regression Threat (O X O vs. O O) • Different rates of regression to the mean because the groups differ in extremity. • For example, the countries that USAID chooses may have nowhere to go but up.

  36. “Social Interaction” Threats to Internal Validity

  37. What Are "Social" Threats? • All are related to social pressures in the research context, which can lead to posttest differences that are not directly caused by the treatment itself. • Most of these can be minimized by isolating the two groups from each other, but this creates other problems (for example, it is hard to randomly assign and then isolate groups, and isolation may reduce generalizability).

  38. Types of Designs

  39-47. Types of Designs (the decision tree is built up one element per slide across these slides) • Random assignment? Yes: a randomized, or "true," experiment. No: ask the next question. • Control group or multiple measures? Yes: a quasi-experiment. No: a nonexperiment. (See the small helper function below.)
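
The decision tree above can be encoded as a tiny helper, shown here in Python; the function name and return labels are invented for illustration:

```python
def classify_design(random_assignment: bool,
                    control_group_or_multiple_measures: bool) -> str:
    """Classify a study design using the two questions from the slides."""
    if random_assignment:
        return "randomized (true) experiment"
    if control_group_or_multiple_measures:
        return "quasi-experiment"
    return "nonexperiment"

# Examples
print(classify_design(True, True))     # randomized (true) experiment
print(classify_design(False, True))    # quasi-experiment
print(classify_design(False, False))   # nonexperiment
```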

  48. Design Notation Example
  R O X O
  R O   O
  The Os indicate different waves of measurement; R indicates random assignment to the groups.

  49. Elements of a Design • Observations and measures • Treatments • Groups • Assignment to group • Time

  50. Design Notation Example
  R O X O
  R O   O
  Vertical alignment of the Os shows that the pretest and posttest are measured at the same time in both groups. (See the sketch below.)
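
To tie the notation together, here is a sketch in Python with simulated data (the group size, trend, and effect are invented) of the design notated above: units are randomly assigned (R), both groups are measured at the same pretest and posttest waves (the vertically aligned Os), and only one group receives the treatment (X).

```python
import numpy as np

# Simulated R O X O / R O  O design. All numbers are invented for illustration.
rng = np.random.default_rng(4)
n = 1_000

baseline = rng.normal(100, 15, n)            # first O, same wave for both groups
assigned_to_treatment = rng.random(n) < 0.5  # R: random assignment

true_effect = 5.0
shared_trend = 2.0                           # affects both groups equally

posttest = (baseline + shared_trend                     # second O, same wave
            + true_effect * assigned_to_treatment       # X only for one group
            + rng.normal(0, 5, n))

# Random assignment makes the pretest means comparable...
print(f"pretest means:  {baseline[assigned_to_treatment].mean():.1f} vs "
      f"{baseline[~assigned_to_treatment].mean():.1f}")

# ...so the posttest difference estimates the treatment effect.
diff = (posttest[assigned_to_treatment].mean()
        - posttest[~assigned_to_treatment].mean())
print(f"posttest difference (estimated effect): {diff:.2f}")
```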
