Laboratory Experiments in the Social Sciences: An Introduction
Stephen Benard, Department of Sociology, Indiana University
Overview • A sample question • What is an experiment? • Basics of experimental design • What can we learn from experiments? • Ethics of experiments
An (Encouraging) Disclaimer • This talk covers only a small sample of the possible: • Questions • Experimental designs • Independent and dependent variables • Many, many possibilities exist
Sample Question: What predicts helping in an emergency? • If we notice someone in need: • Are we more likely to help when alone, or in the presence of others? • “Diffusion of responsibility”
Are people less likely to help others when in a group? • Challenging to study through observation • Emergency events are rare and hard to predict • May vary in countless ways • Many alternative explanations • People in groups may be less likely to notice • Groups are more common at busier times of day, when people have less time • Unhelpful people may be more likely to travel in groups
Studying Helping in an Experiment • Would be useful to repeatedly observe responses to the same emergency under different conditions • E.g., when many or few people observe the emergency • Could be staged in a laboratory • (Darley and Latané 1968, JPSP) • Laboratory discussion group • One person appears to have a seizure • Manipulate the number of people present • Measure the proportion who helped and how quickly
A few more examples • Does violent media make people aggressive, or do aggressive people prefer violent media (Bandura, Ross, and Ross 1961)? • Does intergroup contact reduce or exacerbate intergroup conflict (Sherif 1958)? • Does positive mood make people more altruistic, or are more altruistic people happier (Isen et al 1978)? • Do our attitudes determine our behavior, or does our behavior determine our attitudes (Festinger and Carlsmith 1959)? • Does the gender/race/age/criminal record/other characteristic of a job applicant affect the likelihood of being hired (e.g., Pager 2003)? • Is support for a policy determined by the content of the policy, or the identity of the party supporting it (Cohen 2003)? • Does lack of control over our environment turn us into conspiracy theorists (Whitson and Galinsky 2008)? • Does the status of an author’s institution affect their chances of having an article accepted (Peters and Ceci 1982)?
Why conduct an experiment? • Identifying causes • Addressing alternative explanations • Identifying moderators and mediators • Examining hard-to-observe or rare events
What is an experiment? Three Principles • Manipulation of the independent variable • Random assignment to condition • Controlled measurement
Manipulation • In experiments you must manipulate an independent variable (IV) • This creates 2 (or more) levels of the IV • The levels of the IV are called conditions • Conditions identical except for the manipulated IV • E.g., number of people present when an emergency occurs
Random Assignment • How do we distinguish the effects of our IV from extraneous variables? • Perhaps personal interest in helping others is confounded with group size • The experimenter places people into experimental conditions by chance • Equal likelihood of being in each condition • Individual differences cancel out
[Diagram: Random assignment. Colors symbolize any differentiating attribute among the individuals (e.g., personal interest in helping others). Before random assignment the attributes are mixed; after random assignment they are spread evenly across the Small crowd and Large crowd experimental groups.]
[Diagram: Self-selected groups. If people chose their own condition, the differentiating attribute (colors) would cluster in one condition, producing systematic error; compare the simulation sketch below.]
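A minimal simulation sketch of the contrast in the two diagrams above. Everything here (participant count, the "helpfulness" attribute, the self-selection rule) is an illustrative assumption, not part of any actual study:

```python
import random

random.seed(42)

# Hypothetical participants, each with a "helpfulness" score standing in for
# the colored attribute in the diagrams above (an illustrative assumption).
participants = [{"id": i, "helpfulness": random.random()} for i in range(200)]

def mean_helpfulness(group):
    return sum(p["helpfulness"] for p in group) / len(group)

# Random assignment: each participant is placed in a condition by chance.
for p in participants:
    p["condition"] = random.choice(["small crowd", "large crowd"])
random_small = [p for p in participants if p["condition"] == "small crowd"]
random_large = [p for p in participants if p["condition"] == "large crowd"]

# Self-selection: suppose more helpful people prefer the small-crowd condition.
selected_small = [p for p in participants if p["helpfulness"] > 0.5]
selected_large = [p for p in participants if p["helpfulness"] <= 0.5]

print("Random assignment:   ",
      round(mean_helpfulness(random_small), 2), "vs",
      round(mean_helpfulness(random_large), 2))   # roughly equal
print("Self-selected groups:",
      round(mean_helpfulness(selected_small), 2), "vs",
      round(mean_helpfulness(selected_large), 2))  # systematically different
```

Under random assignment the average helpfulness is nearly identical across conditions, so any difference in helping can be attributed to crowd size; under self-selection the conditions already differ before the manipulation even starts.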
Controlled Measurement • Systematically observe changes in the dependent variable as a function of changes in the independent variable • Important to avoid bias in recording the DV • Participant blind to hypotheses • Experimenter blind to hypotheses • Experimenter blind to condition (see the sketch below)
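One common-sense way to keep the experimenter blind to condition is to separate condition labels from the data until coding is finished. The sketch below is a hypothetical illustration of that idea, not a procedure from the slides:

```python
import random

random.seed(11)

# Store only opaque codes with the data; keep the code-to-condition key
# separate so the coder never sees which condition a participant was in.
key = dict(zip(["A", "B"], random.sample(["alone", "group"], k=2)))

blinded_records = [
    {"participant": "P01", "condition_code": "B", "helped": True},
    {"participant": "P02", "condition_code": "A", "helped": False},
]

# ... the DV is recorded here without the coder ever consulting `key` ...

# Unblind only after measurement is complete.
for record in blinded_records:
    record["condition"] = key[record["condition_code"]]
print(blinded_records)
```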
A Simplified Helping Study (based on Darley and Latané 1968) • Experimental setting: a laboratory discussion group • Simulate an emergency (seizure) • Manipulate the number of other people present in the group • E.g., zero vs. three • Randomly assign participants to the “alone” condition or the “group” condition • Measure proportion helping and time to help (simulated in the sketch below)
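A sketch of how the two-condition design and its dependent variables could be simulated and summarized. The helping probabilities and timing ranges below are made-up illustration values, not the published Darley and Latané results:

```python
import random

random.seed(1)

# Illustrative helping probabilities per condition (assumptions for the sketch).
HELP_PROB = {"alone": 0.85, "group": 0.35}

def run_trial(condition):
    """Simulate one participant: did they help, and if so, how fast (seconds)?"""
    helped = random.random() < HELP_PROB[condition]
    low, high = (20, 60) if condition == "alone" else (40, 160)
    return helped, random.uniform(low, high) if helped else None

def run_study(n_per_condition=50):
    results = {}
    for condition in HELP_PROB:
        trials = [run_trial(condition) for _ in range(n_per_condition)]
        helpers = [seconds for helped, seconds in trials if helped]
        results[condition] = {
            "proportion_helping": len(helpers) / n_per_condition,
            "mean_seconds_to_help": (round(sum(helpers) / len(helpers), 1)
                                     if helpers else None),
        }
    return results

print(run_study())
```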
Experimental Design • Two-condition, treatment-control design • Similar to a medical study with a placebo • Simplest possible design • Often very effective, but also limited • Additional treatment conditions • Factorial designs
Additional Treatment Conditions • Perhaps group size has a non-linear effect • Add an additional condition with six total group members • Sometimes it is useful to have a “baseline” condition • E.g., a study of whether a text is evaluated more positively when the author is a man vs. a woman • May wish to compare to a condition with no author information • Is it that men receive a boost relative to the baseline, or that women receive a penalty?
Factorial Designs • Multiple IVs • Every combination of every level of each IV • Interaction effects • Predict an interaction • Or evaluate generality • [Table: A 2 x 2 factorial design; see the sketch below]
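A minimal sketch of what “every combination of every level of each IV” means for the 2 x 2 case. The two IVs named below (author gender and evaluator gender) are hypothetical stand-ins, since the slide does not specify the factors:

```python
from itertools import product

# Hypothetical IVs and levels for illustration only.
iv_levels = {
    "author_gender": ["man", "woman"],
    "evaluator_gender": ["man", "woman"],
}

# A factorial design crosses every level of each IV with every level of the
# others: 2 x 2 = 4 conditions (a third two-level IV would give 2 x 2 x 2 = 8).
conditions = list(product(*iv_levels.values()))
for author_gender, evaluator_gender in conditions:
    print(f"condition: author_gender={author_gender}, "
          f"evaluator_gender={evaluator_gender}")
```

An interaction effect would mean the effect of one IV (e.g., author gender) differs across levels of the other (e.g., evaluator gender).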
Between vs. Within-Subjects Designs • Between-subjects design: Each participant is exposed to one level of the independent variable • E.g., study of helping
Between vs. Within-Subjects Designs • Within-subjects design: Each participant is exposed to multiple levels of the independent variable • E.g., the text evaluation study • More efficient • But possibly easier to guess hypotheses • Requires counterbalancing (see the sketch below) • Rarely possible in high-impact designs
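A small sketch of counterbalancing presentation order in a within-subjects design. The stimuli and participant count are hypothetical, loosely modeled on the text-evaluation example above:

```python
import itertools

# Hypothetical within-subjects study: each participant evaluates one text
# attributed to a male author and one attributed to a female author.
levels = ["male_author_text", "female_author_text"]

# All possible presentation orders of the within-subjects levels.
orders = list(itertools.permutations(levels))  # [(A, B), (B, A)]

# Counterbalance: cycle through the orders so each is used equally often,
# keeping order/practice effects from being confounded with the IV.
participants = [f"P{i:02d}" for i in range(8)]
assignment = {p: orders[i % len(orders)] for i, p in enumerate(participants)}

for participant, order in assignment.items():
    print(participant, "->", " then ".join(order))
```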
Independent Variables • How do we know the operational independent variable accurately reflects the theoretical independent variable? • E.g., positive mood • How do we know the manipulation had the expected effect on participants? • Manipulation checks (see the sketch below)
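A sketch of the logic of a manipulation check, using a hypothetical positive-mood manipulation and simulated post-manipulation mood ratings (none of these numbers come from the slides or from Isen et al.):

```python
import random
from statistics import mean

random.seed(3)

# Simulated self-reported mood ratings (roughly a 1-7 scale) collected after a
# hypothetical positive-mood manipulation; invented purely for illustration.
neutral_condition = [random.gauss(4.0, 1.0) for _ in range(40)]
positive_condition = [random.gauss(5.2, 1.0) for _ in range(40)]

# Manipulation check: did the "positive mood" condition actually report
# higher mood than the neutral condition?
print("neutral mean mood: ", round(mean(neutral_condition), 2))
print("positive mean mood:", round(mean(positive_condition), 2))
# In a real study the difference would also be tested statistically
# (e.g., with a t-test) before interpreting effects on the DV.
```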
What Can We Learn From Experiments? • High degree of control provides high internal validity • Experiments provide the strongest possible evidence for causality • But the external validity of laboratory experiments is often criticized • Settings don’t always resemble the “real world” • Participants don’t resemble other populations • Samples are generally non-random • Small samples, at least by survey data standards • Participants are often college undergraduates • Participants are often WEIRD: Western, educated, industrialized, rich, and democratic
Mundane vs. Experimental Realism • Mundane realism: the extent to which an experiment looks like the real world • Experimental realism: the extent to which the experience is psychologically real and important to participants • E.g., people rarely come to a lab for a group discussion (low mundane realism), yet a staged emergency can still feel psychologically real (high experimental realism)
Generalizing From…? • Should not generalize directly from an experiment to a real-world situation • Experiments test theories • Theory bridges empirical studies and the real world • See Zelditch (1969), “Can You Really Study an Army in the Laboratory?”
Scope Conditions • Example: College students discriminate against women in a hiring simulation • Maybe more than real managers: less experience • Maybe less than real managers: more egalitarian • Criticism that the findings won’t generalize • Theories often explicitly or implicitly signal possible scope conditions • These can be tested to further refine the theory
Convergent Validity • Useful to think of different methods as complementary, not competing • Survey data • May have high external validity, but limited ability to show causality • Experiments • High internal validity, but limited generality
Experimental Ethics • Three core principles for all research • Respect for persons • Beneficence • Justice • Deception • Necessary to test some hypotheses • But should be used only as a last resort • And fully explained to participants • Debriefing
Summary • Experiments are excellent for answering questions about causality, exploring alternative explanations, and examining rare or hard-to-observe events • There are many different types of and approaches to experiments, which can (and must) be tailored to the research question • They facilitate systematic replication and theory development • Their strengths and weaknesses complement those of other methods
Thank you! Stephen Benard Department of Sociology Indiana University