Lessons from Social Psychology for New Experimental Disciplines (and vice versa)
Roger Giner-Sorolla, University of Kent
NCRM Festival 2008, University of Oxford
Experiments in social psychology • First experiment: Triplett (1898)
Famous social psychology experiments since then • Zimbardo’s prison experiment • Milgram’s obedience studies
Don’t get the wrong impression … Experiments in social psychology are not usually so flamboyant • Giner-Sorolla (2001): effect of primed emotion words on dieters’ eating • Words like ‘guilty’ and ‘happy’: ate less • Words like ‘proud’ and ‘depressed’: ate more
Experiments in other fields of study • Behavioural economics: apply psychological factors to modify classical economic theory • Kahneman & Tversky (Kahneman, Nobel Prize 2002) • Dan Ariely, MIT … others … (but they publish in psychology journals…)
Experimental philosophy (for critical review, Kauppinen, 2007) • Survey research to gauge, e.g., lay understanding of terms and assertions • Example vignette: “John is a psychopathic criminal. He is an adult of normal intelligence, but he has no emotional reaction to hurting other people. John has hurt and indeed killed other people when he has wanted to steal their money. He says that he knows that hurting others is wrong, but that he just doesn’t care if he does things that are wrong.” • “Does John really understand that hurting others is morally wrong?” (Nichols, 2002) • 85% of participants say yes
Behavioural experimental anthropology • Economic experiments (e.g., ultimatum game; Tracer, 2003) in hard-to-reach societies • Papua New Guinea highlanders: low offer rate, high rejection rate
Behavioural experiments in biology • Curtis & Biran (2001): what is disgusting and why?
Experiments in law • e.g., Kahan, Hoffman, & Braman (in press) • Video of a car chase presented as support of a US Supreme Court decision that the police did not act unreasonably … did the general public agree?
What is an experiment? • Setting up a situation • Observing results More precise usage in social psychology: • Manipulation of an independent variable (IV) • Minimizing confounds • Observing dependent variable (DV)
IV causes DV if… 1. IV covaries with the DV 2. IV precedes the DV in time (not necessarily in measurement) 3. No combination of 3rd variables can fully account for the relationship (no full mediation) Example: “Expertise causes greater persuasion”
In a correlational study: Simple correlation can establish #1 (covariance)… but common sense can’t always establish #2 (priority in time)… and we can establish #3 (independence from 3rd variables) only for those 3rd variables that are known and measured, using multivariate analysis. Difficult to test all possible 3rd variables!
Requirements to show causation in an experiment • The IV must be manipulated as cleanly as possible – no confounding third variables that provide a plausible alternative explanation for the manipulation’s effects. Examples: “expert” and “nonexpert” give different speeches – confounded! “expert” and “nonexpert” are different people in ways other than expertise – confounded!
Heavy metal confounds [image slide: Exhibit A vs. Exhibit B]
Requirements to show causation • Participants must be randomly assigned to conditions; otherwise participant choice is a confound. Ideally, only the IV and random error contribute to differences among conditions.
Sources of variance besides the IV • Systematic: can lead to biasing effects (if in the direction of your hypotheses) or weakening effects (if against that direction) • Ex.: More persuadable people sign up for the condition with the expert (biasing); more persuadable people sign up for the condition with the non-expert (weakening) • Extremely important to control these.
Sources of variance besides the IV • Random: always leads to weakening effects; not a credible alternative explanation for significant results • Ex.: People have a variety of opinions before being randomly assigned; this will of course increase the variance of their final opinion, but has nothing necessarily to do with which condition they are in.
Random error Desirable to control this – how? • Standardisation (only look at neutral people) • Matching (run pairs of people with the same initial opinion; one hears an expert, the other a non-expert) • Analysis (measure prior opinion and factor it out – sketched below)
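As a concrete illustration of the third option, here is a minimal Python sketch of an ANCOVA-style analysis that factors prior opinion out as a covariate. All data and variable names (condition, prior_opinion, final_opinion) are invented for illustration rather than taken from any study mentioned in the talk.

```python
# Minimal sketch: factoring out prior opinion as a covariate (ANCOVA-style).
# Data and variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 60  # 30 per condition
condition = np.repeat(["expert", "nonexpert"], n // 2)
prior_opinion = rng.normal(0, 1, n)                  # measured before the manipulation
effect = np.where(condition == "expert", 0.5, 0.0)   # simulated expertise effect
final_opinion = prior_opinion + effect + rng.normal(0, 1, n)

df = pd.DataFrame({"condition": condition,
                   "prior_opinion": prior_opinion,
                   "final_opinion": final_opinion})

# The covariate absorbs variance due to pre-existing opinions,
# leaving a cleaner test of the manipulation itself.
model = smf.ols("final_opinion ~ C(condition) + prior_opinion", data=df).fit()
print(model.summary())
```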
An additional consideration • The manipulation should be effective in influencing the independent variable. Manipulation → Expertise → Persuasion • A manipulation check can establish this: ‘How expert did Prof. X appear to be in the field of educational policy?’ (Most useful if the experiment doesn’t work!)
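A hedged sketch of how such a manipulation check might be analysed: an independent-samples t-test comparing perceived expertise across conditions. The ratings and the 1-7 scale below are invented for illustration.

```python
# Minimal manipulation-check sketch: did the "expert" speaker actually seem
# more expert? Ratings (1-7 scale) are invented for illustration.
from scipy import stats

expert_ratings    = [6, 7, 5, 6, 6, 7, 5, 6]
nonexpert_ratings = [3, 2, 4, 3, 2, 3, 4, 3]

t, p = stats.ttest_ind(expert_ratings, nonexpert_ratings)
print(f"Manipulation check: t = {t:.2f}, p = {p:.4f}")
# If this check fails, a null result on persuasion says little about the
# expertise -> persuasion link: the manipulation itself did not take.
```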
Other terms • Operationalization: how you manipulate or measure a conceptual IV or DV. • Conceptual replication: same experiment, different operationalizations.
Other terms • Quasi-experiment: experiment with non-randomly-assigned groups (ex., people in and out of treatment). • Control group: condition with no manipulation; useful for establishing baseline level of the DV.
Other terms • Experimental realism: how much does the procedure create the desired psychological state within the participant? Ex.: ‘emergency’ via smoke coming from duct. • Ecological validity: how much does the procedure resemble the real-life version of the phenomenon you are studying? Ex.: person perception using phrases vs. video
Limits of the experiment • Experiments favour internal validity (exactitude) over external validity (generalization to real life) – reduce confounding factors and random variance • Manipulation favours studying single factors over multiple factors • Many interesting variables can’t be manipulated ethically or practically
Advantages of the experiment • Best way to establish causal relationships decisively • Can study ‘basic’ questions, minimising influence of context • Conclusions can be confirmed by more applied studies
True experiments across disciplines Gutierrez & Giner-Sorolla, 2007, Experiment 2
True experiments across disciplines • Manipulation (happy/sad music) → Mediator (self-report of mood) → Outcome (eating)
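To make the mediation logic above concrete, here is a minimal Python sketch of the classic regression steps for that path. All data are simulated and the effect sizes are arbitrary assumptions, not results from Gutierrez & Giner-Sorolla (2007).

```python
# Mediation sketch (simulated data):
# manipulation (sad vs. happy music) -> mediator (self-reported mood) -> outcome (eating).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 100
music = rng.integers(0, 2, n)                 # 0 = sad music, 1 = happy music
mood = 0.8 * music + rng.normal(0, 1, n)      # mediator: self-reported mood
eating = 0.6 * mood + rng.normal(0, 1, n)     # outcome: amount eaten

total = sm.OLS(eating, sm.add_constant(music)).fit()              # path c: music -> eating
direct = sm.OLS(eating, sm.add_constant(
    np.column_stack([music, mood]))).fit()                        # paths c' and b
a = sm.OLS(mood, sm.add_constant(music)).fit().params[1]          # path a: music -> mood
b = direct.params[2]                                              # path b: mood -> eating, controlling music

print(f"total effect c   = {total.params[1]:.2f}")
print(f"direct effect c' = {direct.params[1]:.2f}")
print(f"indirect a*b     = {a * b:.2f}")
```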
Why true experiments are great for external communication 1. Manipulation and results are simple to understand (unlike SEM)
Why true experiments are great for external communication 2. Underlying experimental method justifies the causal inference people draw anyway
Four-way tradeoffs: Investment • Strength • Internal Validity • External Validity
Making results strong … while not sacrificing too much validity • Starting strong: a ‘no result’ is then more likely to mean there really is nothing out there • Manipulation, setting and measures are all important.
Strong manipulations • Value strength over subtlety; grab attention • Example: happy mood • Are credible (possible trade-off with research investment: for example, written scenario vs. “news video”?) • Are clearly expressed (participant input & feedback in a pilot run) • Within-participants designs? Efficiency and strength … sometimes
Strong settings • Focus participants’ attention (lab vs. in the street … or web?) • Mood lighting! • Reduce error variance by: eliminating or matching experimenter characteristics; using pre-measures of the DV; using covariates related to the DV
Strong measures • Are easily noticed (not subtle) • Are clearly written and administered: pre-test on the participant population for clarity • Watch for floor or ceiling effects (example: charity study) • Don’t leave random assignment to chance – use a randomisation method that ensures equal N (sketched below) • The 10-participant peek-in – a basic check on item use and participant comments
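One simple way to honour the ‘equal N’ point is blocked randomisation: fix the number of slots per condition in advance and shuffle them. The sketch below is a generic Python illustration; the condition labels are hypothetical.

```python
# Blocked random assignment: every condition gets exactly the same N,
# but which participant lands where is still left to chance.
import random

def blocked_assignment(n_per_condition, conditions=("expert", "nonexpert")):
    """Return a shuffled schedule with exactly n_per_condition slots per condition."""
    slots = [c for c in conditions for _ in range(n_per_condition)]
    random.shuffle(slots)
    return slots

schedule = blocked_assignment(20)
print(schedule[:6])                                    # conditions for the first six participants
print({c: schedule.count(c) for c in set(schedule)})   # equal N per condition
```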
Pilot testing for strength • Often a good idea if the manipulation is new or being tried in a new context • Include only the main DV and the manipulation check, and enough participants to detect a strong effect on the check (12-20 per condition)
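A quick power calculation can show why 12-20 per condition is roughly enough for this purpose. The sketch below assumes a large effect on the manipulation check (Cohen’s d ≈ 1.0 – an assumption, not a figure from the talk) and uses statsmodels.

```python
# Rough power sketch: participants per condition needed to detect a *large*
# effect (d = 1.0, assumed) on the manipulation check with 80% power.
from statsmodels.stats.power import TTestIndPower

n_per_condition = TTestIndPower().solve_power(effect_size=1.0, power=0.80, alpha=0.05)
print(f"~{n_per_condition:.0f} participants per condition")  # lands in the 12-20 range
```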
Excluding participants On the basis of: • Not ‘getting’ the manipulation on a basic level • Expressing suspicion If more than 10% are excluded, rethink procedures in future experiments
Internally valid manipulations Precision: the manipulation varies only the variable of interest; the measures tap only the variable they’re supposed to. After establishing a strong effect (without glaring validity problems!), tighten up validity. Some ways to get validity from day 1 …
Validity without sacrifice I: Parallelism • In procedure and materials • Example (materials shown on slide): this could be better …
Validity without sacrifice I: Parallelism Manipulations now vary only at the key word.
Validity without sacrifice II: Pilot test for validity • Check that experimental stimuli are equivalent on possible confounds, not just on strength • Example: attractiveness of “White male” facial stimuli
Validity without sacrifice III: Include tests for confounds • A confound is really only an unwanted mediator • Example: Is your effect of goal priming “only” a mood effect? Include a measure of mood to find out!
Sacrifices for validity I: Add conditions Add control condition to see where the “action” is
Sacrifices for validity I: Add conditions • Sometimes the “right” control condition is not obvious; experiments with multiple control conditions may be needed • Example: What’s the right control condition for failure feedback – no feedback (lack of parallelism) or neutral performance feedback (gives people information)?
Sacrifices for validity I: Add conditions • Added experimental conditions can also be used to establish a more precise causal story (Spencer, Zanna & Fong, 2005) • Example: the no-responsibility condition in Castano & Giner-Sorolla (2006) – more infrahumanization when reading about a killing your group has committed: is it just-world belief, or defence against guilt? Add an “accidental killing” condition to find out.
Sacrifices for validity II: Trade strength for subtlety Demand characteristics might explain results (but unsubtlety can also hinder results!) • From a within-participants manipulation, go between-participants • Go from an obvious manipulation to an incidental one (e.g., Gilovich, 1981: war and metaphor) • Go from obvious measures to implicit measures
Sacrifices for validity III: Buy validity with resources • Move from “scenario” studies to more compelling manipulations (possibly with deception) • Include a suspicion debriefing with a “funnel” structure – a must when using deception or subtle manipulations!
External validity • Generalizability In the later stages of an article or research programme: • Add conditions to test boundary conditions and moderators (see the sketch below) • Vary the procedures and materials to see how robust the underlying concept is (example: manipulating power)
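A moderator or boundary condition is typically tested as an interaction term between the manipulation and the moderator. The sketch below uses simulated Python data; the moderator (‘involvement’) and all effect sizes are hypothetical illustrations.

```python
# Moderation sketch (simulated data): does the expertise effect depend on
# participants' involvement with the topic? All variables are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 120
df = pd.DataFrame({
    "condition": np.repeat(["expert", "nonexpert"], n // 2),
    "involvement": rng.normal(0, 1, n),
})
# Simulate a boundary condition: expertise persuades mainly at low involvement.
df["persuasion"] = (0.5 * (df.condition == "expert")
                    - 0.4 * (df.condition == "expert") * df.involvement
                    + rng.normal(0, 1, n))

# The interaction term C(condition):involvement is the moderation test.
model = smf.ols("persuasion ~ C(condition) * involvement", data=df).fit()
print(model.summary())
```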
External validity: Generalizability Extending populations beyond the easily obtained … • General population • Children • Other cultures
External validity: Ecological validity To what extent do your procedures resemble real life? Most objections to the “experiment” are actually objections to the lab. Can good experiments be done outside the lab in real world conditions?