
Maximizing Internal Validity: Strategic Experimental Design for Causation

Learn how to safeguard against threats to internal validity and infer causality through clear, controlled experimental designs. Discover the key components of experimental variance partitioning to ensure robust statistical analysis.



Presentation Transcript


  1. Chapter 10 - Lecture 9: Internal Validity – Control through Experimental Design • Tests the effects of the IV on the DV • Protects against threats to internal validity • Allows inference of causation

  2. Experimental Design • Highest constraint • Comparisons between groups • Random sampling • Random assignment → Infer causality

  3. Experimental Design (5 characteristics) • One or more hypotheses • Includes at least 2 “levels” of the IV • Random assignment • Procedures for testing the hypotheses • Controls for major threats to internal validity

  4. Experimental Design • Develop the problem statement • Define the IV & DV • Develop the research hypothesis • Identify a population of interest • Random sampling & random assignment • Specify procedures (methods) • Anticipate threats to validity • Create controls • Specify statistical tests • Ethical considerations → a clear experimental design

  5. Experimental Design – 2 sources of variance: 1. Between-groups variance (systematic; e.g., drug vs. no drug) 2. Within-groups variance (nonsystematic; error variance). Remember sampling error: differences are significant when the variability between means is larger than expected on the basis of sampling error (chance) alone.

  6. Variance – need it! Without it, no go. “Partitioning of the variance”: total variance splits into between-group and within-group components. Between-group variance = experimental variance (due to your treatment) + extraneous variance (confounds, etc.). Within-group variance = error variance (not due to the treatment; chance differences among subjects within the control and treatment groups).

  7. Variance: important for the statistical analysis. F = between-groups variance / within-groups variance = (systematic effects + error variance) / error variance. With no systematic effects, F = 1.00 → no differences between groups.
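The F ratio on this slide can be computed by hand from the two sums of squares. A minimal sketch for a drug vs. no-drug comparison; all scores are made up for illustration:

```python
import numpy as np
from scipy import stats

# Hypothetical scores (invented for illustration): control vs. drug group
control = np.array([5.0, 6.0, 4.0, 5.5, 6.5])
drug = np.array([8.0, 7.5, 9.0, 8.5, 7.0])

groups = [control, drug]
grand_mean = np.mean(np.concatenate(groups))

# Between-groups sum of squares: n_g * (group mean - grand mean)^2, summed
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
df_between = len(groups) - 1

# Within-groups sum of squares: squared deviations from each group's own mean
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
df_within = sum(len(g) for g in groups) - len(groups)

F = (ss_between / df_between) / (ss_within / df_within)

# scipy's one-way ANOVA partitions the variance the same way
F_scipy, p = stats.f_oneway(control, drug)
```

Because the group means here are far apart relative to the spread within each group, F comes out well above 1.00.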

  8. Variance • Your experiment should be designed to • Maximize experimental variance • Control extraneous variance • Minimize error variance

  9. Maximize “Experimental” Variance • At least 2 levels of the IV (do the IVs really vary?) • Manipulation check: make sure the levels (experimental conditions) differ from each other • Ex: anxiety levels (low anxiety / high anxiety) → performance on a math task, checked with an anxiety scale

  10. Control “Extraneous” Variance • Make sure the experimental & control groups are similar to begin with • Within-subjects design (watch for carryover effects) • If need be, limit the population of interest (e.g., males vs. females) • Make the extraneous variable an IV (age, sex, socioeconomic status) = factorial design. Factorial design (2 IVs), e.g. for your proposals:
      Lo Anxiety: M-low | F-low
      Hi Anxiety: M-hi | F-hi

  11. Control through Design – Don’ts • Ex post facto • Single-group, posttest only • Single-group pretest-posttest • Pretest-posttest natural control group. 1. Ex post facto (“after the fact”): Group A → naturally occurring event → measurement. No manipulation.

  12. Control through Design – Don’ts. Single-group posttest only: Group A → TX → Posttest. Single-group pretest-posttest: Group A → Pretest → TX → Posttest (compare pretest with posttest).

  13. Control through Design – Don’ts. Pretest-posttest naturalistic control group (TX is naturally occurring): Group A → Pretest → TX → Posttest; Group B → Pretest → no TX → Posttest; compare.

  14. Control through Design – Do’s – Experimental Design • Manipulate the IV • Control group • Randomization. 4 basic designs testing one IV: 1. Randomized posttest-only, control group 2. Randomized pretest-posttest, control group 3. Multilevel completely randomized between-groups 4. Solomon four-group

  15. Randomized Posttest-Only, Control Group (the most basic experimental design): R Group A → TX → Posttest (Ex); R Group B → no TX → Posttest (Con); compare. (R = random assignment)

  16. Randomized, Pretest-Posttest, Control Group Design: R Group A → Pretest → TX → Posttest (Ex); R Group B → Pretest → no TX → Posttest (Con); compare.

  17. Multilevel, Completely Randomized, Between-Subjects Design (more than 2 levels of the IV): R Group A → Pretest → TX1 → Posttest; R Group B → Pretest → TX2 → Posttest; R Group C → Pretest → TX3 → Posttest; R Group D → Pretest → TX4 → Posttest; compare.

  18. Solomon Four-Group Design (an extension of the multilevel between-subjects design): R Group A → Pretest → TX → Posttest; R Group B → Pretest → no TX → Posttest; R Group C → (no pretest) → TX → Posttest; R Group D → (no pretest) → no TX → Posttest; compare. A powerful design!

  19. What stats do you use to analyze experimental designs? Depends on the level of measurement (testing differences between groups): Nominal data → chi-square (frequency/categorical); Ordered data → Mann-Whitney U test; Interval or ratio → t-test / ANOVA (F test)
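This mapping translates directly into `scipy.stats` calls. A sketch with made-up data; the counts and scores are hypothetical:

```python
import numpy as np
from scipy import stats

# Nominal (frequency/categorical) data: chi-square on a 2x2 contingency table
table = np.array([[20, 10],   # group A: yes / no counts (hypothetical)
                  [12, 18]])  # group B: yes / no counts (hypothetical)
chi2, p_chi, dof, expected = stats.chi2_contingency(table)

# Ordered (ordinal) data: Mann-Whitney U test
a_ranks = [3, 1, 4, 2, 5]
b_ranks = [7, 6, 9, 8, 10]
u, p_u = stats.mannwhitneyu(a_ranks, b_ranks)

# Interval/ratio data, 2 groups: independent-samples t-test
t, p_t = stats.ttest_ind([5.1, 4.8, 5.5, 5.0], [6.2, 6.0, 5.9, 6.4])

# Interval/ratio data, 3+ groups: one-way ANOVA (F test)
F, p_f = stats.f_oneway([1, 2, 3], [2, 3, 4], [5, 6, 7])
```

The right test follows from the measurement level of the DV, not from the design itself.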

  20. t-Test: compares 2 groups. Independent samples (between subjects): evaluate differences between 2 independent groups. One sample / related samples (within subjects): evaluate differences between two conditions in a single group.
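The two variants on this slide correspond to two different scipy calls. A sketch with invented scores:

```python
from scipy import stats

# Hypothetical scores (invented for illustration)
group_a = [4.2, 5.1, 4.8, 5.5, 4.9]   # e.g., control
group_b = [6.1, 5.8, 6.5, 6.0, 6.3]   # e.g., treatment

# Between subjects: two independent groups -> independent-samples t-test
t_ind, p_ind = stats.ttest_ind(group_a, group_b)

# Within subjects: the same people measured under two conditions
before = [4.2, 5.1, 4.8, 5.5, 4.9]
after  = [5.0, 5.9, 5.2, 6.1, 5.4]
t_rel, p_rel = stats.ttest_rel(before, after)
```

The within-subjects version pairs each person's two scores, so it removes individual differences from the error term.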

  21. Assumptions for using the t-Test: 1. The test variable (DV) is normally distributed in each of the 2 groups 2. The variances of the normally distributed test variable are equal (homogeneity of variance) 3. Random assignment to groups

  22. t-distribution: represents the distribution of t values that would be obtained if t were calculated for the sample mean of every possible random sample of a given size from some population.

  23. Degrees of freedom (df): when we use samples, we approximate the true population mean & SD. Sample variability (SS = sum of squared deviations) tends to underestimate population variability. A restriction is placed on the sample; we make up for the underestimate mathematically by using n − 1 in the denominator.

  24. s² (variance) = SS (sum of squares) / df (degrees of freedom) = Σ(x − x̄)² / (n − 1). Degrees of freedom (df) = n − 1: the number of values (scores) that are free to vary, given the mathematical restrictions on a sample of observed values used to estimate some unknown population; it is the price we pay for sampling.
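The formula can be checked against Python's standard library, which uses the same n − 1 denominator for the sample variance. The scores are made up:

```python
import statistics

# Hypothetical sample (invented for illustration)
x = [8, 6, 7, 5, 9]
n = len(x)
mean = sum(x) / n                            # mean = 7

ss = sum((xi - mean) ** 2 for xi in x)       # sum of squared deviations = 10
s2 = ss / (n - 1)                            # sample variance: 10 / 4 = 2.5

# statistics.variance uses n - 1; statistics.pvariance uses n (population)
print(s2, statistics.variance(x), statistics.pvariance(x))  # 2.5 2.5 2.0
```

Dividing by n − 1 instead of n makes the estimate slightly larger, correcting the underestimate mentioned on slide 23.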

  25. Degrees of freedom (df) = n − 1: the number of scores free to vary. Example: a data set with n = 2 and a mean of 6 (you know the mean because you use it to compute the variance). If the first score is x = 8, then getting a mean of 6 with n = 2 requires a sum of 12, so the second score must be 4. That second score is restricted by the sample mean; it is not free to vary.
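The slide's n = 2 example is small enough to compute in two lines:

```python
# With n = 2 and a known mean of 6, only one score is free to vary
n, mean = 2, 6
first_score = 8
second_score = n * mean - first_score  # the sum must be n * mean = 12
print(second_score)  # 4
```

Once the mean is fixed, the last score is fully determined, which is exactly why one degree of freedom is lost.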

  26. Standard Error of the Mean (SEM). Standard deviation: s = √s² (where s² is the variance); it describes how much single observations jump around from one observation to the next. The SEM describes how much sample means jump around from one sample to the next: divide s by the square root of N (SEM = s / √N).
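A short sketch of the SEM formula, with an invented sample:

```python
import math
import statistics

# Hypothetical sample (invented for illustration)
x = [10, 12, 9, 11, 13, 8, 14, 11]

s = statistics.stdev(x)        # sample SD (n - 1 denominator)
sem = s / math.sqrt(len(x))    # standard error of the mean: s / sqrt(N)

# Means are more stable than single scores, so SEM < s for N > 1
print(round(s, 3), round(sem, 3))
```

Because the denominator grows with √N, larger samples give more stable (less jumpy) sample means.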

  27. Two-tailed test: when we do not know how our TX will affect scores. One-tailed test: when we know how our TX will affect scores, one way or the other.

  28. Analysis of Variance (ANOVA): for two or more groups (it can be used on two groups, in which case t² = F). Variance is calculated more than once because of the multiple levels (a combination of differences). Several sources of variance: SS-between, SS-within, SS-total. Sum of squares: the sum of squared deviations from the mean. This is partitioning the variance.
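The t² = F identity for two groups is easy to verify numerically; the scores below are made up:

```python
from scipy import stats

# Hypothetical data (invented for illustration): with exactly 2 groups,
# the one-way ANOVA F equals the squared independent-samples t
a = [4.0, 5.0, 6.0, 5.5, 4.5]
b = [6.5, 7.0, 8.0, 7.5, 6.0]

t, p_t = stats.ttest_ind(a, b)   # equal-variance t-test
F, p_f = stats.f_oneway(a, b)    # one-way ANOVA

print(round(t ** 2, 6), round(F, 6))  # the two values are identical
```

The p-values also match, so for two groups the two tests are interchangeable; ANOVA is needed once there are three or more levels.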

  29. Assumptions for using ANOVA: 1. The test variable (DV) is normally distributed 2. The variances of the normally distributed test variable are equal (homogeneity of variance) 3. Random assignment to groups

  30. F = between-groups variance / within-groups variance = (systematic effects + error variance) / error variance. F = 1.00 → no differences between groups. F = 21.50 → about 22 times as much variance between the groups as we would expect by chance alone.
