Discover the power of social experiments in various fields such as education, criminal justice, journalism, marketing, nursing, political science, psychology, social work, and sociology. Learn about the experimental method, its strengths and limitations, and how to answer research questions using this approach.
The Experiment Chapter 7
Experiments • Simple social experiments allow us to test targeted hypotheses about a specific social process • Experiments give us clear evidence about a particular causal relationship • Social experiments in education, criminal justice, journalism, marketing, nursing, political science, psychology, social work, and sociology use the same basic logic that guides natural science experiments
DOING EXPERIMENTS IN EVERYDAY LIFE • An experiment refers to two basic situations • before-and-after comparison: you modify something and then compare an outcome to what existed before the modification • side-by-side comparison: you have two similar things, and then you modify one but not the other and compare the two
You do three things in an experiment: • Start with a cause-effect hypothesis • Modify a situation or introduce a change • Compare outcomes with and without modification
WHAT QUESTIONS CAN YOU ANSWER WITH THE EXPERIMENTAL METHOD? • Research questions most appropriate for an experiment fit its strengths and limitations, which include: • The experiment has a clear and simple logic • It has the ability to isolate a causal mechanism • It is targeted on two or three variables and is narrow in scope • It is limited by the practical and ethical aspects of the situations you can impose on humans
Comparison is central to experiments • In experimental social research, you want to compare things that are fundamentally alike • To make a valid comparison, you want to compare participants who do not differ with regard to variables that could be alternative explanations to your hypothesis
WHY ASSIGN PEOPLE RANDOMLY? • Many experiments use randomization to ensure that participants are sorted into groups in an unbiased way • random assignment: sort research participants into two or more groups in a mathematically random process
Random sampling & random assignment • When you randomly sample, you use a random process to select a smaller subset of people (sample) from a larger pool (population) • When you randomly assign, you use a random process to sort a collection of participants into two or more groups • You can both randomly sample and randomly assign • this is the ideal in social experiments
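The distinction can be made concrete with a short sketch. This is a minimal illustration, assuming a hypothetical pool of 1,000 volunteers; the function names, group labels, and sizes are invented for the example and are not from the chapter:

```python
import random

def random_sample(population, sample_size, seed=42):
    """Randomly SELECT a smaller subset of people from a larger population."""
    rng = random.Random(seed)
    return rng.sample(population, sample_size)

def random_assignment(participants, groups=("experimental", "control"), seed=42):
    """Randomly SORT participants into two or more groups."""
    rng = random.Random(seed)
    shuffled = participants[:]
    rng.shuffle(shuffled)
    assignment = {g: [] for g in groups}
    for i, person in enumerate(shuffled):
        assignment[groups[i % len(groups)]].append(person)
    return assignment

# Hypothetical pool of volunteers (the "population")
population = [f"person_{i}" for i in range(1000)]

# Step 1: random sampling -- draw 40 people from the population
sample = random_sample(population, sample_size=40)

# Step 2: random assignment -- sort the 40 sampled people into groups
groups = random_assignment(sample)
print(len(groups["experimental"]), len(groups["control"]))  # 20 20
```

Random sampling decides who gets into the study; random assignment decides which condition each participant experiences once they are in.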
DO YOU SPEAK THE LANGUAGE OF EXPERIMENTAL DESIGN? • We can divide an experiment into 7 parts • Independent variable • Dependent variable • Pretest • Posttest • Experimental group • Control group • Random assignment
The independent variable (IV) in experimental research • In experiments, the IV is something a researcher does or introduces, modifying or altering a condition • e.g., giving participants different instructions, showing them different situations, using different physical settings, staging contrived social situations • Also called: the treatment, manipulation, stimulus, or intervention
Researchers measure dependent variables (DVs) in many ways • e.g., response times, percent accurate scores, social behaviors, attitudes, feelings and beliefs
In many experiments, you measure the DV more than once – before introduction of the IV (pretest) and after introduction of the IV (posttest) • pretest: a measure of the dependent variable prior to introducing the independent variable in an experiment • posttest: a measure of the dependent variable after the independent variable has been introduced in an experiment
When introducing an IV, you can use 2 or more groups or a single participant group • independent group design: experimental designs in which you use two or more groups and each gets a different level of the IV • repeated measures design: an experimental design with a single participant group that receives different levels of the independent variable • See Example: Was it a Gun or a Tool?
Experimental and control groups • experimental group: in an experiment with multiple groups, a group of participants that receives the IV or a high level of it • control group: in an experiment with multiple groups, a group of participants that does not receive the IV or receives a very low level of it
Managing Experiments • You want to isolate the effects of the IV and eliminate alternative explanations • Any aspect of an experiment that is not controlled may become an alternative explanation for changes in the DV • Sometimes, deception is used to manipulate how participants define a situation, by misleading them w/ instructions, by staging settings, or by using helpers or confederates • confederates: people who work for an experimenter and mislead participants by pretending to be another participant or uninvolved bystander
Types of Experimental Design • experimental design: how parts of an experiment are arranged, often in one of the standard configurations • classical experimental design: has all key elements that strengthen its internal validity: • random assignment, control & experimental groups, and pretest & posttest
The tip study can be done in different ways, using the standard experimental designs • Goal: design experiment(s) to explain the size of tips • Hypothesis: servers receive bigger tips if they introduce themselves by first name before taking an order and return 8-10 minutes after delivering food to ask, "Is everything fine?" • IV = server behavior / DV = size of tip received • See the analysis sketch after the design examples below
Example Using Classical Experimental Design • Group 1 – Month 1: pretest (amount of tips), serve food w/o intro or check-in; Month 2 (IV present): self-intro & check-in, posttest (amount of tips) • Group 2 – Month 1: pretest (amount of tips), serve food w/o intro or check-in; Month 2 (IV absent): serve food w/o intro or check-in, posttest (amount of tips)
Ex. w/ 2-Group Posttest-Only Experimental Design • Group 1 – Month 1: serve food w/o intro or check-in (no pretest); Month 2 (IV present): self-intro & check-in, posttest (amount of tips) • Group 2 – Month 1: serve food w/o intro or check-in (no pretest); Month 2 (IV absent): continue to serve w/o intro or check-in, posttest (amount of tips)
Ex. w/ Solomon 4-Group Experimental Design • Group 1 – Month 1: pretest (amount of tips), serve food w/o intro or check-in; Month 2 (IV present): self-intro & check-in, posttest (amount of tips) • Group 2 – Month 1: pretest (amount of tips), serve food w/o intro or check-in; Month 2 (IV absent): continue serving w/o intro or check-in, posttest (amount of tips) • Group 3 – Month 1: no pretest, serve food with intro & check-in; Month 2 (IV present): continue self-intro & check-in, posttest (amount of tips) • Group 4 – Month 1: no pretest, serve food w/o intro or check-in; Month 2 (IV absent): serve food w/o intro or check-in, posttest (amount of tips)
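To make the comparison logic of the classical design concrete, here is a rough analysis sketch using made-up average tip amounts; the numbers, variable names, and the simple difference-of-changes calculation are illustrative assumptions, not results from the tip study:

```python
# Hypothetical average tips (in dollars) under the classical design.
# Month 1 = pretest period, Month 2 = posttest period.
group1 = {"pretest": 4.10, "posttest": 5.30}   # IV present: self-intro & check-in
group2 = {"pretest": 4.05, "posttest": 4.20}   # IV absent: no intro or check-in

# Change within each group from Month 1 to Month 2
change_experimental = group1["posttest"] - group1["pretest"]   # +1.20
change_control      = group2["posttest"] - group2["pretest"]   # +0.15

# The effect attributed to the IV is the difference between the two changes;
# the control group's change captures history, maturation, and similar threats.
effect_of_iv = change_experimental - change_control
print(f"Estimated effect of self-intro & check-in: ${effect_of_iv:.2f} per tip")
```

Because the control group experiences everything except the IV, its Month 1 to Month 2 change estimates the influence of history, maturation, and other threats, and subtracting it helps isolate the effect of the server's behavior.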
Factorial designs • Some research questions suggest you look at the simultaneous effects of multiple IVs • factorial design: an experimental design in which you examine the impact of combinations of two or more independent variable conditions
Main vs. Interaction Effects • main effects: the effect of a single independent variable on a dependent variable • interaction effects: the effect of two or more independent variables in combination on a dependent variable that is beyond or different from the effect that each has alone
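A small numeric sketch can show the difference between main and interaction effects in a hypothetical 2 x 2 factorial version of the tip study; the factors, cell means, and calculations below are invented for illustration:

```python
# Hypothetical cell means for a 2 x 2 factorial design:
# IV A = self-introduction (no / yes), IV B = check-in visit (no / yes),
# DV = average tip in dollars.
cell_means = {
    ("no_intro", "no_checkin"): 4.00,
    ("no_intro", "checkin"):    4.40,
    ("intro",    "no_checkin"): 4.50,
    ("intro",    "checkin"):    5.80,
}

def mean(values):
    return sum(values) / len(values)

# Main effect of self-introduction: compare intro levels, averaging over check-in levels.
intro_mean    = mean([cell_means[("intro", b)] for b in ("no_checkin", "checkin")])
no_intro_mean = mean([cell_means[("no_intro", b)] for b in ("no_checkin", "checkin")])
main_effect_intro = intro_mean - no_intro_mean          # +0.95

# Main effect of check-in: compare check-in levels, averaging over intro levels.
checkin_mean    = mean([cell_means[(a, "checkin")] for a in ("no_intro", "intro")])
no_checkin_mean = mean([cell_means[(a, "no_checkin")] for a in ("no_intro", "intro")])
main_effect_checkin = checkin_mean - no_checkin_mean    # +0.85

# Interaction effect: does the check-in help more when combined with an intro?
checkin_effect_with_intro    = cell_means[("intro", "checkin")] - cell_means[("intro", "no_checkin")]        # +1.30
checkin_effect_without_intro = cell_means[("no_intro", "checkin")] - cell_means[("no_intro", "no_checkin")]  # +0.40
interaction = checkin_effect_with_intro - checkin_effect_without_intro                                       # +0.90

print(f"Main effect of intro:    {main_effect_intro:+.2f}")
print(f"Main effect of check-in: {main_effect_checkin:+.2f}")
print(f"Interaction effect:      {interaction:+.2f}")
```

In this made-up example the nonzero interaction means the combined effect of the self-introduction and the check-in visit is larger than what each produces alone, which is exactly what an interaction effect captures.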
Fig 7.7: Blame, Resistance, and Schema: Interaction Effect – blame (vertical axis) by victim resistance (victim submits vs. tries to fight off the rapist, horizontal axis), plotted separately for the sex schema and the power schema
Experimental Validity • internal validity: the ability to state that the IV was the one sure cause that produced a change in the DV • external validity: the ability to generalize experimental findings to events and settings beyond the experimental setting itself
Pre-experimental designs • pre-experimental designs: experimental designs that lack one or more parts of the classical experimental design, e.g., • One-Shot Case Study Design • One-Group Pretest-Posttest Design • Static Group Comparison
Quasi-Experimental Designs • quasi-experimental designs: experimental designs that approximate the strengths of the classical experimental design, but do not contain all of its parts, e.g., • Interrupted Time Series • Equivalent Time Series
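As a rough illustration of the interrupted time series logic, consider a sketch with invented monthly tip averages before and after a policy change; all numbers and names are assumptions for illustration:

```python
# Hypothetical monthly average tips; the "interruption" (e.g., a new
# self-intro policy) happens between month 6 and month 7.
monthly_tips = [4.0, 4.1, 3.9, 4.2, 4.0, 4.1,   # before the interruption
                5.0, 5.2, 5.1, 4.9, 5.3, 5.2]   # after the interruption

before = monthly_tips[:6]
after = monthly_tips[6:]

mean_before = sum(before) / len(before)
mean_after = sum(after) / len(after)

# In an interrupted time series, you look for a clear shift in the DV's trend
# at the point of interruption, not just a single before/after difference.
print(f"Mean before: {mean_before:.2f}, mean after: {mean_after:.2f}")
print(f"Shift at interruption: {mean_after - mean_before:+.2f}")
```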
Threats to internal validity • Selection bias • History • Maturation • Testing • Experimental mortality • Contamination or diffusion of treatment • Experimenter expectancy
Experimenter expectancy • Experimenter expectancy: how researchers may accidentally and indirectly communicate the desired findings to participants • This is not purposeful, unethical behavior • Researchers control experimenter expectancy by using double-blind experiments • double-blind experiment: an experimental design to control experimenter expectancy in which the researcher does not have direct contact with participants; all contact is through assistants from whom some details are withheld • placebo: a false or noneffective independent variable given to mislead participants
Threats to External Validity • Participants are not representative • Artificial setting • Artificial treatment • Reactivity
Reactivity • Research participants respond differently than they would in real life situations b/c they’re aware they’re part of a study • reactivity: a threat to external validity due to participants modifying their behavior because they are aware that they are in a study • Hawthorne effect: a type of experimental reactivity in which participants change due to their awareness of being in a study and the attention they receive from researchers
Field Experiments • The amount of control researchers have over an experiment varies, from tightly-controlled laboratory experiments in specialized settings to field experiments • field experiment: an experiment that takes place in a natural setting and over which experimenters have limited control
Natural experiments • Unlike intentional, planned experiments, natural experiments, or ex post facto (after the fact) control group comparisons, are field experiments that people did not intentionally plan as experiments but that had the features of an experiment • natural experiments: events that were not initially planned to be experiments but permitted measures and comparisons that allowed the use of an experimental logic
HOW TO BE ETHICAL IN CONDUCTING EXPERIMENTS • Some researchers use deception, misleading or lying to participants • While dishonesty is never condoned, deception may be acceptable under limited conditions • If it is the only way to achieve a clear research goal • The amount and type of deception do not go beyond what is minimally necessary • Participants are debriefed as soon as possible • debrief: an interview or talk with participants after an experiment ends in which you remove deception if used and try to learn how they understood the experimental situation