
Experiments: Part 1 Overview - Experimental versus Observational Research, Variables, Designs

This overview compares experimental and observational research, discusses variables and designs in experiments, and explains the similarities and differences among between-group, within-subject, and mixed-model designs.


Presentation Transcript


  1. Experiments: Part 1

  2. Overview • Experimental versus observational research • Variables • Designs • Between-group • Within-subject • Similarities and differences • Mixed-model

  3. Background on Experiments • Study in which a researcher systematically manipulates one variable in order to examine its effect(s) on one or more other variables • Two components • Includes two or more conditions • Participants are randomly assigned to conditions by the researcher • Random = Equal odds of being in any particular condition • Examples • People with GAD randomly assigned to three treatments so the researchers can examine which one best reduces anxiety • Students assigned to a “mortality salience” or control condition so the researchers can examine the impact on “war support”
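
A minimal sketch of random assignment as described on this slide, in Python: every participant gets an equal chance of landing in any condition. The condition labels, participant IDs, and the block-randomization variant are illustrative assumptions, not part of the slides.

```python
import random

# Illustrative condition labels (e.g., the three GAD treatments from the example)
conditions = ["CBT", "Psychodynamic", "Control"]
participants = [f"P{i:02d}" for i in range(1, 31)]  # 30 hypothetical participants

# Simple random assignment: equal odds of being in any particular condition
assignment = {p: random.choice(conditions) for p in participants}

# Block-randomization variant: shuffle a balanced list so group sizes come out equal
balanced = conditions * (len(participants) // len(conditions))
random.shuffle(balanced)
blocked_assignment = dict(zip(participants, balanced))

print(assignment)
print(blocked_assignment)
```
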

  4. Variables • Independent Variable • Manipulated by the researcher • Typically categorical • Also called a “factor” that has “levels” • Factor = Type of anxiety treatment • Level = CBT (or Psychodynamic or Control) • Dependent Variable • Outcome variable that is presumably influenced by (depends on the effects of) the independent variable • Behavior frequencies, mood, attitudes, symptoms • Typically continuous

  5. Variables • Confounds (extraneous variables, 3rd variables) • Occur when there are unwanted differences (age, gender, researchers, environments, etc.) across experimental conditions • Plan: Think of potential confounds up front • Control for them methodologically • Measure them to examine whether they have an effect • Control for them statistically
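
To make "control for them statistically" concrete, here is a hedged sketch using ordinary least squares with the measured confound entered as a covariate. The variable names (treatment, age, anxiety), the made-up data, and the choice of statsmodels are illustrative assumptions, not part of the slides.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: anxiety scores by treatment condition, with age as a potential confound
df = pd.DataFrame({
    "treatment": ["CBT"] * 4 + ["Control"] * 4,
    "age":       [22, 35, 41, 29, 24, 33, 45, 27],
    "anxiety":   [10, 12, 14, 11, 16, 18, 20, 17],
})

# Entering the measured confound (age) as a covariate adjusts the
# estimated treatment effect for age differences across conditions
model = smf.ols("anxiety ~ C(treatment) + age", data=df).fit()
print(model.summary())
```
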

  6. Experimental Designs • Three main designs • Between-group design • Also called a “between-subjects design,” or “randomized controlled trial” (if clinically focused) • Within-subject design • Also called a “repeated-measures design” • Mixed-model design • Combines both of the above

  7. Between-group Design • IV: 2 or more randomly-assigned groups of people • DV: Usually a continuous variable

  8. Within-subject Design • Any study that assesses participants on the DV on more than one occasion • Example: Participants go through more than one experimental condition
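
As a hedged illustration of the repeated-measures idea, the usual analysis when the same participants provide the DV under two conditions is a paired-samples test; the scores below are made up.

```python
import numpy as np
from scipy import stats

# Hypothetical DV scores from the same six participants under two conditions
condition_1 = np.array([7.2, 6.8, 7.5, 6.9, 7.1, 7.4])
condition_2 = np.array([6.5, 6.1, 7.0, 6.4, 6.6, 6.9])

# Paired-samples t-test: each participant serves as their own control
t_stat, p_value = stats.ttest_rel(condition_1, condition_2)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```
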

  9. Similarities • Both designs use the same types of analyses • p-values obtained from t-tests (if two conditions) or F-tests/ANOVA (if more than two conditions) • Is the result statistically significant, reliable, trustworthy? • Cohen’s d used to compute effect size • Tells the number of standard deviations by which two groups differ (kind of like r but on a scale from -∞ to ∞)
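
A minimal sketch of the analyses named on this slide, using SciPy: a t-test for two conditions and a one-way ANOVA (F-test) for three. The scores are made up for illustration.

```python
import numpy as np
from scipy import stats

# Hypothetical DV scores for three conditions
group_a = np.array([5.1, 6.2, 5.8, 6.5, 5.9, 6.1])
group_b = np.array([6.8, 7.1, 6.4, 7.5, 6.9, 7.2])
group_c = np.array([5.5, 5.9, 6.0, 6.3, 5.7, 6.2])

# Two conditions: independent-samples t-test and its p-value
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# More than two conditions: one-way ANOVA (F-test)
f_stat, p_anova = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_anova:.4f}")
```
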

  10. Cohen’s d • Calculator • http://www.psychmike.com/calculators.php • Usually use the first formula, which requires M, SD, and n • Can calculate by hand with a simple formula, but it doesn’t account for differences in sample size across conditions, so it is less accurate • d = (Mean difference) / standard deviation • s = average standard deviation across groups
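
A sketch of both versions of d described on this slide: the simple hand formula that averages the two standard deviations, and a pooled-SD version that weights by sample size (the kind of formula a calculator asking for M, SD, and n can use). The function names are illustrative.

```python
import math

def cohens_d_simple(m1, sd1, m2, sd2):
    """d = (mean difference) / s, where s is the plain average of the two SDs.
    As the slide notes, this ignores differences in sample size across conditions."""
    s = (sd1 + sd2) / 2
    return (m1 - m2) / s

def cohens_d_pooled(m1, sd1, n1, m2, sd2, n2):
    """Pooled-SD version: weights each group's variance by its sample size,
    so unequal ns are handled properly."""
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    return (m1 - m2) / math.sqrt(pooled_var)
```
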

  11. Calculation Example: Does athletic involvement improve physical health? • M1 = 6.47, M2 = 6.75 • s = (1.87 + 1.94) / 2 = 1.91 • d = (6.47 – 6.75) / 1.91 = -0.28 / 1.91 = -0.15 • |d| = 0.15: weak effect! • +/- sign is arbitrary, so usually just dropped
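
Reproducing the slide's arithmetic with the simple-average formula (a self-contained snippet; the rounding matches the slide):

```python
m1, sd1 = 6.47, 1.87
m2, sd2 = 6.75, 1.94

s = (sd1 + sd2) / 2   # 1.905, shown as 1.91 on the slide
d = (m1 - m2) / s     # about -0.15
print(f"|d| = {abs(d):.2f} (sign dropped): weak effect")
```
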

  12. 2014 article in The Lancet (impact factor: 45.2) • Take-home from the abstract

  13. Differences • Between-group design is required when it is impossible or impractical to put participants through more than one condition • Within-subject design is more powerful • More likely to yield a significant p-value and bigger effect sizes. Why? It allows each participant to serve as their own control, canceling out a lot of cross-participant variability • Between-group design requires more people • Within-subject design is prone to ordering effects (the order of conditions can affect results), such as progressive effects or carryover effects • Solution: Counterbalancing
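
A minimal sketch of counterbalancing: condition orders are rotated across participants so ordering effects do not systematically favor any one condition. This version uses full counterbalancing (every possible order appears equally often); the labels are illustrative.

```python
from itertools import permutations

conditions = ["A", "B", "C"]

# Full counterbalancing: all 6 possible orders of three conditions
all_orders = list(permutations(conditions))

# Cycle through the orders so each is used equally often across participants
participants = [f"P{i:02d}" for i in range(1, 13)]
orders = {p: all_orders[i % len(all_orders)] for i, p in enumerate(participants)}

for p, order in orders.items():
    print(p, " -> ".join(order))
```
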

  14. Mixed-model Design • Many different types, but requires • Random assignment of people to different groups • Repeated measurement of dependent variable over time • Benefits of both designs • Example: Pre-post between-group design
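
A hedged sketch of the pre-post between-group example: participants are randomized to two groups and the DV is measured before and after. For simplicity, the group-by-time question is approximated here by comparing pre-to-post change scores between groups with a t-test; a full mixed-model (mixed ANOVA) analysis would be the standard choice, and all numbers are made up.

```python
import numpy as np
from scipy import stats

# Hypothetical pre/post DV scores for two randomly assigned groups
treat_pre  = np.array([14, 15, 13, 16, 14, 15])
treat_post = np.array([10, 11,  9, 12, 10, 11])
ctrl_pre   = np.array([14, 13, 15, 14, 16, 15])
ctrl_post  = np.array([13, 13, 14, 13, 15, 14])

# Change scores capture the within-subject (repeated-measures) part
treat_change = treat_post - treat_pre
ctrl_change  = ctrl_post - ctrl_pre

# Comparing change across groups captures the between-group part
t_stat, p_value = stats.ttest_ind(treat_change, ctrl_change)
print(f"Group difference in pre-to-post change: t = {t_stat:.2f}, p = {p_value:.4f}")
```
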
