Social Research Methods: Experiments
Experiments
• One of the three research strategies:
  • Experiment
  • Survey
  • Case Study
Experimental Design
• Assignment of subjects to different conditions
  (N.B. "subject", "participant" and "respondent" are used interchangeably)
• Manipulation of one or more independent variables (IVs) by the experimenter
• Measurement of effects on one or more dependent variables (DVs)
• Control of other variables
• All in order to make causal judgements about what causes variation in the dependent variables.
Control
• Direct control - e.g. focus on certain characteristics by excluding others. But there is the problem of finding individuals with obscure combinations of characteristics.
Control by randomisation - the usual alternative, giving a "true experiment" (as defined by Ronald Fisher)
• Randomised selection of participants from known populations. Theoretically required but seldom done. Done to give external validity = generalisability.
• Randomised allocation to the different conditions. Done to give internal validity = confidence that changes in the IV actually cause changes in the DV.
• N.B. there is still the possibility of bias, but randomisation lets us estimate its size and probability.
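The distinction between the two kinds of randomisation above can be sketched in Python. This is a minimal illustration, not part of the original slides: the population, sample size and condition names are all invented.

```python
import random

random.seed(0)  # reproducible for illustration only

# Hypothetical sampling frame standing in for a "known population"
population = [f"person_{i}" for i in range(1000)]

# Randomised SELECTION from the population -> external validity
sample = random.sample(population, 30)

# Randomised ALLOCATION of the sample to conditions -> internal validity
random.shuffle(sample)
conditions = {
    "control": sample[0::3],
    "treatment_1": sample[1::3],
    "treatment_2": sample[2::3],
}
```

In practice the selection step is the one that is "theoretically required but seldom done": most studies start from a convenience sample and randomise only the allocation.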
Example
• Does noise affect the amount that people can remember when learning? Is noise good or bad?
• Task: respondents given a standard list of items to remember in a given time
• Groups:
  • Control - no noise at all
  • Treatment group 1 - loud, unpredictable noise
  • Treatment group 2 - soft, classical music
Procedure
• Randomly allocate participants to the 3 groups
• All groups given the same task ("memorise the list") under different conditions (loud noise, classical music, no noise)
• All participants given the same test of memory recall after the noise etc. has ended
• Compare the mean recall score for each group
• Use statistics (e.g. ANOVA) to see if the differences are significant
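The final two steps - comparing group means and testing them with ANOVA - can be sketched with a hand-rolled one-way ANOVA F statistic. The recall scores below are invented for illustration; in real analysis you would use a statistics package rather than compute F by hand.

```python
import statistics

def one_way_anova_f(groups):
    """F = mean square between groups / mean square within groups."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = statistics.mean(x for g in groups for x in g)
    ss_between = sum(len(g) * (statistics.mean(g) - grand_mean) ** 2
                     for g in groups)
    ss_within = 0.0
    for g in groups:
        m = statistics.mean(g)
        ss_within += sum((x - m) ** 2 for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical recall scores (items remembered) for the three groups
control = [14, 15, 13, 16, 15]     # no noise
loud_noise = [9, 10, 8, 11, 10]    # treatment 1
soft_music = [13, 14, 12, 15, 14]  # treatment 2

f_stat = one_way_anova_f([control, loud_noise, soft_music])
# A large F relative to the F(2, 12) distribution suggests the group
# means genuinely differ, i.e. noise condition affected recall.
```

A significant F only says that *some* means differ; follow-up comparisons are needed to say which groups differ from which.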
Field Experiments
• Experiments are by tradition done in the laboratory, where it is easy to control variables.
• There has been a move to use experiments in the field. BUT:
  • Random allocation may be difficult
  • Ethical issues of control
  • Threats to validity
  • Likely lack of control
Field Experiments cont.
Advantages:
• Easier generalisability (N.B. not statistical), i.e. ecological validity from the more natural setting
• Validity can be improved
• Improved participant availability
However, field experiments are not often done because we need to know, in advance, what the relevant variables are - and this is often not known in field conditions.
Quasi-experimental designs
Two pioneering books:
• Campbell, D. T. and Stanley, J. C. (1966) Experimental and Quasi-experimental Designs for Research. Chicago: Rand McNally.
• Cook, T. D. and Campbell, D. T. (1979) Quasi-experimentation: Design and Analysis Issues for Field Settings. Chicago: Rand McNally.
• Like true experiments in general style and approach
• BUT no randomised allocation of participants to the different conditions
• Cook and Campbell argue that you can still make judgements about causal connections
Designs to avoid: 1
• There are some quasi-experimental designs to avoid.
• The one-group post-test-only design
• Very little warrant for deduction of a causal effect - beyond the pale
Designs to avoid: 2
Post-test only - non-equivalent groups
• There can be history effects on group X1 only (e.g. if the groups are tested at different times); but if X1 and X2 are treated similarly, history may be excludable
• Mortality problems (e.g. the treatment causes drop-out of lower-scoring leavers)
• Maturation problems, especially if the groups are not run in parallel
• Selection effects differentiate X1 and X2 and produce possible interactions
Designs to avoid: 3
Before-after single-group design
• Test effects - people learn something from the first measurement (O1) and get better at O2
• Instrumentation - a change in the measurement scale between O1 and O2
• Selection effects
• BUT despite these dangers to validity, particular circumstances may help protect against the threats
• Need to ask whether the threats are plausible in the case at hand.
Regression threat
• e.g. a disadvantaged group vs. a comparison group, pre-tested on the DV
• Temptation to match the samples on this DV. Problem if high-scoring disadvantaged participants are matched with low-scoring comparison participants
• BUT regression to the mean
• Because any observed score is due to factor + error.
Regression threat cont.
Some high scorers are there because they were lucky; some low scorers are there because they were unlucky. On retest, luck changes: some high scorers move down and some low scorers move up. The net result is the appearance of the disadvantaged group doing worse.
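The "score = factor + error" account above can be checked with a small simulation: generate stable abilities plus test-day luck, pick the highest scorers on a first test, and watch their average fall on retest. All numbers here are invented; only the direction of the effect matters.

```python
import random

random.seed(0)
N = 10_000
# Each person's score = stable ability ("factor") + test-day luck ("error")
ability = [random.gauss(0, 1) for _ in range(N)]
test1 = [a + random.gauss(0, 1) for a in ability]
test2 = [a + random.gauss(0, 1) for a in ability]

# Match on high first-test scores: take the top 10% at test 1
cutoff = sorted(test1)[int(0.9 * N)]
top = [i for i in range(N) if test1[i] >= cutoff]

mean_at_test1 = sum(test1[i] for i in top) / len(top)
mean_at_test2 = sum(test2[i] for i in top) / len(top)
# mean_at_test2 comes out markedly lower than mean_at_test1:
# the lucky scorers' luck does not repeat, so the selected group
# regresses towards the mean on retest.
```

This is why matching a high-scoring subset of one group with a low-scoring subset of another is dangerous: both subsets drift back towards their own group means, creating a spurious "effect".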
Good designs: 1
Pre-test post-test non-equivalent groups design
• Selection bias is possible
• Selection/treatment interaction
• But interpretation depends on the outcome…
Good designs: 1 cont.
• The treatment group increases more than the comparison group. But this pattern may reflect a problem of scales. [Figure: outcome graph not preserved]
Good designs: 1 cont.
[Figures: progressively stronger outcome patterns; the graphs are not preserved]
Good designs: 2
Interrupted time series:
X1:  O1  O2  O3  O4  T  O5  O6  O7  O8
• Look for a different pattern before the treatment (T) compared with after.
[Figure: example series plotted against time; not preserved]
Good designs: 2 cont.
[Figure: another example series plotted against time; not preserved]
Good designs: 2 cont.
• But there are problem cases, e.g. a series that trends steadily throughout: the average before and the average after differ, yet this is no evidence of a treatment effect, because nothing changes at the treatment point. [Figure not preserved]
Good designs: 2 cont.
• Danger of a maturation effect or a testing effect, or of a premature effect (change beginning before the treatment point).
• Plus: instrumentation effects, participant mortality. [Figure not preserved]
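A minimal simulated interrupted time series shows what genuine evidence looks like: a shift at the treatment point that is large relative to the series' ordinary variability. The series lengths, levels and noise below are all invented for illustration.

```python
import random
import statistics

random.seed(1)
# Hypothetical outcome measured 8 times before and 8 times after treatment T
pre = [50 + random.gauss(0, 2) for _ in range(8)]   # O1..O4-style baseline
post = [60 + random.gauss(0, 2) for _ in range(8)]  # O5..O8-style follow-up

shift = statistics.mean(post) - statistics.mean(pre)
spread = statistics.stdev(pre)  # ordinary pre-treatment variability
# Evidence for an effect: the shift at T dwarfs the pre-treatment spread.
# Contrast with a steadily trending series, where before/after means also
# differ but there is no discontinuity at T.
```

In practice a formal interrupted-time-series analysis would also model trend and autocorrelation, not just compare means; this sketch only illustrates the before/after logic.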
Good designs: 3
Regression discontinuity design
• Need a pre-test score or measure.
• All participants below a certain score are given one condition - e.g. no treatment
• All above that score are given the second condition - e.g. treatment
• Demonstration of an effect depends on a discontinuity at the cutoff
Good designs: 3 cont.
• This is quite a powerful design.
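The logic of the design can be sketched with simulated data: fit a line to the outcome on each side of the cutoff and measure the jump between their predictions at the cutoff. The cutoff, effect size and linear model below are assumptions made for the illustration, not from the slides.

```python
import random

def fit_line(xs, ys):
    """Least-squares intercept and slope for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope

random.seed(3)
CUTOFF, EFFECT = 50.0, 5.0
# Outcome depends smoothly on the pre-test, plus a jump for treated cases
pretest = [random.uniform(0, 100) for _ in range(2000)]
outcome = [0.5 * x + (EFFECT if x >= CUTOFF else 0.0) + random.gauss(0, 1)
           for x in pretest]

below = [(x, y) for x, y in zip(pretest, outcome) if x < CUTOFF]
above = [(x, y) for x, y in zip(pretest, outcome) if x >= CUTOFF]
a0, b0 = fit_line(*zip(*below))
a1, b1 = fit_line(*zip(*above))

# The estimated treatment effect is the discontinuity at the cutoff
jump = (a1 + b1 * CUTOFF) - (a0 + b0 * CUTOFF)
```

The design works because, without a treatment effect, the outcome should vary smoothly with the pre-test score across the cutoff; a jump exactly at the cutoff is hard to explain any other way.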
Lessons from quasi-experimental designs
• If it is difficult to run a field experiment, consider a quasi-experiment. It gives some of the power of experiments to determine causality.
• Include experiment-like features in other approaches (e.g. questionnaire surveys done as an interrupted time series).
• Always consider the general issues of threats to validity (e.g. in surveys, single-case experiments).