Types of Research and Designs • This week and next week… • Covering • Research classifications • Variables • Steps in Experimental Research • Validity • Research Designs • Common Sources of Error
Types of Designs • Research designs are ways of structuring and conducting research so as to control threats to internal and external validity. • Pre-experimental designs • True experimental designs • Quasi-experimental designs
Pre-experimental Designs • Fewest controls • No random sampling • No random assignments • Controls few threats to validity
True Experimental Designs • Most controls • Random sampling • Random assignments • All threats to internal validity controlled
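Because random sampling and random assignment are what set true experiments apart, a minimal sketch may help. Everything specific below (the size of the sampling frame, the sample size, the two group labels) is a hypothetical illustration, not something from the slides.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical sampling frame of 1,000 participant IDs
population = [f"P{i:04d}" for i in range(1000)]

# Random sampling: draw 40 participants from the population
sample = random.sample(population, k=40)

# Random assignment: shuffle the sample, then split it into two groups
random.shuffle(sample)
treatment_group = sample[:20]
control_group = sample[20:]

print("Treatment n =", len(treatment_group))
print("Control   n =", len(control_group))
```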
Quasi-experimental Designs • Some controls • (maybe) random sampling • (maybe) random assignments • Not all threats to internal validity controlled, but attempts are made to control some of them.
Design Complexity • X = treatment administered • O = data collected • Example: • O1 – X – O2 • One-group pre-test/post-test design (pre-test, treatment, post-test) • Look at Table 7.2…
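As a rough illustration of the O1 – X – O2 notation, the snippet below simulates a one-group pre-test/post-test study. The sample size, the assumed treatment effect, and the noise levels are made-up numbers used only to show how O1, X, and O2 line up in time.

```python
import random
import statistics

random.seed(1)

n = 25                   # hypothetical sample size
treatment_effect = 5.0   # assumed average gain produced by the treatment X

# O1: pre-test observation for each participant
pretest = [random.gauss(50, 10) for _ in range(n)]

# X is administered, then O2: post-test observation for the same participants
posttest = [score + treatment_effect + random.gauss(0, 3) for score in pretest]

print("Mean O1 (pre):  %.1f" % statistics.mean(pretest))
print("Mean O2 (post): %.1f" % statistics.mean(posttest))
```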
Methods of Control • Physical manipulation • Researcher controls all aspects • Selective manipulation • Selecting certain participants controls threats to internal and external validity • Matched pairs and block designs • Participants with similar scores on a dependent variable are matched into pairs (for two groups) or into blocks if more than two groups are needed. • The participants are still randomly assigned to the treatment groups • This procedure allows comparisons between or among groups that started equally
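A minimal sketch of matched-pair assignment, assuming participants have already been measured on the matching variable (the names and scores below are hypothetical): sort by score, pair adjacent participants, then randomly assign one member of each pair to each group so the groups start out roughly equal.

```python
import random

random.seed(7)

# Hypothetical participants with their scores on the matching variable
participants = {"A": 62, "B": 48, "C": 71, "D": 50, "E": 69, "F": 47}

# Sort by score and pair adjacent participants (similar scores end up together)
ranked = sorted(participants, key=participants.get)
pairs = [ranked[i:i + 2] for i in range(0, len(ranked), 2)]

group_1, group_2 = [], []
for pair in pairs:
    random.shuffle(pair)      # random assignment within each matched pair
    group_1.append(pair[0])
    group_2.append(pair[1])

print("Group 1:", group_1)
print("Group 2:", group_2)
```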
Methods of Control • Counterbalanced design • All participants receive all treatments, but the order of treatments varies across participants (e.g., a random order for each person) • Example: see the order-generation sketch below
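One way to implement this kind of counterbalancing, sketched under the assumption of three treatments labelled A, B, and C and four hypothetical participants: every participant receives every treatment, but the order is independently shuffled for each person.

```python
import random

random.seed(3)

treatments = ["A", "B", "C"]                 # hypothetical treatment labels
participants = ["P01", "P02", "P03", "P04"]  # hypothetical participant IDs

orders = {}
for person in participants:
    order = treatments[:]    # each person receives every treatment...
    random.shuffle(order)    # ...but in a randomly chosen order
    orders[person] = order

for person, order in orders.items():
    print(person, "->", " ".join(order))
```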
Methods of Control • Statistical techniques • Used when physical and selective manipulation of variables cannot be accomplished • Groups differ on a known variable, but you can’t do anything about it in terms of the design • There are many ways to statistically control for these differences • ANCOVA (analysis of covariance) • Adjusts group means on the dependent variable (the variate) for group differences on a related variable (the covariate) • Normalization • A technique commonly used to reduce between-subject and between-group variability • Limited to looking at the “patterns” in the data
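A rough sketch of statistical control along these lines, assuming pandas and statsmodels are available: the data are synthetic, and the variable names (pre, post, group) are invented for illustration. The ANCOVA is fit as an ordinary regression of the post-test on group plus the covariate; a simple percent-of-baseline normalization is shown afterwards as an alternative.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
n = 30

# Synthetic data: two groups that already differ on the covariate 'pre'
pre = np.concatenate([rng.normal(45, 8, n), rng.normal(55, 8, n)])
group = np.repeat(["control", "treatment"], n)
post = pre + rng.normal(0, 4, 2 * n) + (group == "treatment") * 5

df = pd.DataFrame({"group": group, "pre": pre, "post": post})

# ANCOVA: compare group means on 'post' while adjusting for the covariate 'pre'
model = smf.ols("post ~ C(group) + pre", data=df).fit()
print(anova_lm(model, typ=2))

# Normalization alternative: express the post-test as a percentage of baseline
df["post_pct_of_pre"] = 100 * df["post"] / df["pre"]
print(df.groupby("group")["post_pct_of_pre"].mean())
```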
Common Sources of Error • Hawthorne Effect • Placebo Effect • “John Henry” Effect • Rating Effect • Experimenter Bias Effect • Participant-Researcher Interaction Effect • Post Hoc Error
Hawthorne Effect • Named after studies conducted at Western Electric’s Hawthorne Works plant in the 1920s. • It was observed that the group of workers who participated in the study acted differently because they “felt special.” • Therefore, participants should… • Be unaware that they are participating in a study • Be unaware of the hypotheses being tested • Within the confines of human-subjects research restrictions
Placebo Effect • If participants believe that a change is supposed to occur as a result of a treatment, they will respond with a change in performance (no matter what the treatment is). • Therefore, participants should… • Be randomly assigned to treatment and placebo groups
“John Henry” Effect • Participants know that they are in a control group and that the experimental group is supposed to do better, so the control group tries harder to outperform the experimental group. • Therefore, participants should… • Not be aware of the group they are in • If they all think they are in a control group, they will all try hard.
Rating Effect • Several kinds of rating effects: • Halo effect: • The tendency to let initial impressions or ratings of a participant or group influence future ratings. • Over-rater/under-rater error • When researchers tend to over- or under-rate subjects • Central tendency error • When researchers rate subjects toward the middle of the scale • When subjects rate issues toward the middle of the scale
Experimenter Bias Effect • The bias of a researcher can affect the outcome of a study. • Therefore, studies should be blinded. • Single-blind studies • Only the subjects are blinded to the treatments • Double-blind studies • Both the subjects and the investigators are blinded to the treatments
Participant-Researcher Interaction Effect • How participants respond to different researchers. • Sometimes, males and females respond differently to male vs. female researchers. • Therefore, researchers should… • Act and respond to subjects exactly the same way each time (professionally)
Post Hoc Error • Error that is introduced by assuming a cause-and-effect relationship between two variables • Example: • More people die in a bed than any other place; therefore, beds are dangerous. • Therefore, researchers should… • Not assume cause-and-effect relationships without sufficient evidence