Experimental Design The Gold Standard?
Today’s Goals • Identify issues of internal and external validity with various experimental designs • Design an experiment for a given topic • Critique advantages and disadvantages of different designs
To Review • Why does most educational research rely on non-experimental research designs? Ethical and logistical considerations.
To Review • What is the purpose of non-experimental research? It describes the existing characteristics of the topic under study.
To Review • How does the independent variable function in non-experimental research? It is not manipulated.
To Review • Can non-experimental research claim causality? NO!
An example • Read the example given in class and in pairs respond to the questions
Experimental Research • Purpose • To make causal inferences about the relationship between the independent and dependent variables • Characteristics • Direct manipulation of the independent variable • Control of extraneous variables • Eliminate the variable from the study • Statistically adjust for the effect of the variable
Experimental Designs • Single Group Post-test • Single Group Pre-test Post-test • Non-Equivalent Groups Post-test • Quasi-Experimental Design • Randomized Post-test only • Randomized Pre-test Post-test • Factorial
Experimental Validity • Internal validity • The extent to which the independent variable, and not other extraneous variables, produced the observed effect on the dependent variable • External validity • The extent to which the results are generalizable
Internal Validity • Threats that reduce the level of confidence in any causal conclusions • Key Question: Is this a plausible threat to the internal validity of the study?
Threats to Internal Validity • History • Extraneous events have an effect on the subjects’ performance on the dependent variable • The crash of the stock market, 9-11, the invasion of Iraq, etc. • Selection • Groups that are initially not equal due to differences in the subjects in those groups • Positive and negative attitudes, high and low achievers, etc.
Threats to Internal Validity • Maturation • Changes experienced within the subject over time • Pretesting • The effect of having taken a pretest • Instrumentation • Poor technical quality (e.g., low validity or reliability) or changes in instrumentation
Threats to Internal Validity • Subject attrition • Differential loss of subjects from groups • Statistical regression • The natural movement of extreme scores toward the mean (illustrated in the sketch below) • Diffusion of treatment • The treatment spreads to, or is inadvertently given to, the control group • Experimenter effects • Different characteristics or expectations of those implementing the treatments across groups
Threats to Internal Validity • Subject effects • The effects of being aware that one is involved in a study • Types include • Hawthorne effect • John Henry effect • Novelty effect
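The statistical regression threat listed above is easy to see in a small simulation. The Python sketch below is a minimal illustration added for clarity, not part of the original slides; the population mean, error sizes, and cutoff are all invented. It selects an "extreme" low-scoring group on a noisy pretest and then retests everyone with no treatment at all.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each subject has a stable "true score" plus independent measurement
# error on each testing occasion (a classical test theory view).
n_subjects = 10_000
true_score = rng.normal(loc=50, scale=10, size=n_subjects)
pretest = true_score + rng.normal(scale=5, size=n_subjects)
posttest = true_score + rng.normal(scale=5, size=n_subjects)  # no treatment given

# Select the lowest 10% on the pretest, as a remedial program might,
# and compare that group's means on the two testing occasions.
low_group = pretest < np.percentile(pretest, 10)

print(f"Population pretest mean: {pretest.mean():5.1f}")
print(f"Low group pretest mean:  {pretest[low_group].mean():5.1f}")
print(f"Low group posttest mean: {posttest[low_group].mean():5.1f}  (drifts back toward the mean)")
```

The low group "improves" with no intervention at all, which is why a single-group study of subjects selected for extreme scores can look effective when nothing actually happened.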
Internal Validity • Key Point: Ultimately, validity is a matter of judgment. Ask whether it is reasonable that possible threats are likely to affect the results.
External Validity • The extent to which results can be generalized from a sample to a particular population. • Question – Why would really good internal validity often result in poor external validity?
External Validity • Factors affecting external validity • Subjects • Representativeness of the sample in comparison to the population • Personal characteristics of the subjects • Situations - characteristics of the setting • Specific environment • Special situation • Particular school
External Validity • The importance of clearly describing the sampling procedures
Experimental Designs • Single Group Post-test • Single Group Pre-test Post-test • Non-Equivalent Groups Post-test • Quasi-Experimental Design • Randomized Post-test only • Randomized Pre-test Post-test • Examples
Your Task • Based on the topic of your proposal, design an experimental study using the design you were assigned. • Write a research question and hypothesis. • Sketch out the methods. • Identify strengths and weaknesses of the design.
Experimental Designs • Notation • R indicates random selection or random assignment • O indicates an observation (e.g., a test, an observation score, a scale score) • X indicates a treatment • A, B, C, ... indicate groups
Pre-Experimental Designs • No pre-experimental design controls threats to internal validity well • Single group post-test only • A X O • Internal validity threats • History, maturation, attrition, experimenter effects, subject effects, and instrumentation are all viable threats • Useful only when the researcher is sure of the status of the knowledge, skill, or attitude being changed and there are no extraneous variables affecting the results
Pre-Experimental Designs • Single group pretest post-test • A O X O • Internal validity threats • Maturation and pretesting are threats • History and instrumentation are potential threats • Useful when subject effects will not influence the results, history effects can be minimized, and multiple pretests and post-tests are used
Pre-Experimental Designs • Non-equivalent groups post-test only
A   X   O
B       O
• Internal validity threats • Definite threat: Selection • Potential threats: History, maturation, and instrumentation • Useful when groups are comparable and subjects can be assumed to be about the same at the beginning of the study
Quasi-Experimental Designs • Types • Non-equivalent pretest/post-test, experimental and control groups
A   O   X   O
B   O       O
• Non-equivalent pretest/post-test, multiple treatment groups
A   O   X1   O
B   O   X2   O
• Useful when subjects are in pre-existing groups (e.g. classes, schools, teams, etc.)
Quasi-Experimental Designs • Threats to internal validity • Selection is the major concern • Controls for statistical regression • Likely to control for most other threats, provided the groups are not significantly different from one another • See Table 9.2 for specific threats related to each design
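One simple way to see what the pretest buys you in these non-equivalent-group designs is to compare gain scores (post-test minus pretest) rather than post-test scores alone. The sketch below uses invented scores for two hypothetical intact classes; in practice a researcher would more likely use ANCOVA or a similar statistical adjustment, so treat this only as an illustration of the logic.

```python
import numpy as np

# Hypothetical scores for two intact classes (non-equivalent groups).
pre_a  = np.array([52, 48, 55, 60, 47, 51])   # class A receives the treatment
post_a = np.array([61, 58, 63, 70, 55, 60])
pre_b  = np.array([58, 62, 57, 65, 60, 59])   # class B is the control and starts higher
post_b = np.array([60, 64, 58, 67, 61, 60])

gain_a = post_a - pre_a
gain_b = post_b - pre_b

# Comparing post-test means alone hides the treatment effect because the
# groups were not equivalent at the start; gains partly adjust for that.
print(f"Post-test means: A = {post_a.mean():.1f}, B = {post_b.mean():.1f}")
print(f"Mean gains:      A = {gain_a.mean():.1f}, B = {gain_b.mean():.1f}")
```

In this made-up example the post-test means are nearly identical, yet class A gained far more than class B, a difference a post-test-only comparison would miss.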
True Experimental Designs • Important terminology • Random assignment • Subjects are placed into groups at random • Ensures equivalency of the groups • Random selection of subjects • Subjects are chosen from the population at random • Ensures generalizability to the population from which the subjects were selected (i.e. external validity)
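The difference between random selection and random assignment is easy to make concrete in code. Below is a minimal sketch; the population size, sample size, and student labels are all invented for illustration.

```python
import random

random.seed(42)

# A hypothetical accessible population of 500 students.
population = [f"student_{i:03d}" for i in range(500)]

# Random selection: draw the sample from the population
# (supports external validity / generalizability).
sample = random.sample(population, 60)

# Random assignment: split the sample into groups
# (supports internal validity / equivalent groups).
random.shuffle(sample)
treatment_group = sample[:30]   # receives the treatment X
control_group = sample[30:]     # receives no treatment

print(len(treatment_group), len(control_group))  # 30 30
```

A study can use one without the other: intact classes randomly assigned to treatments have random assignment but not random selection, while a random sample whose members self-select into treatments has the reverse.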
True Experimental Designs • Types • Randomized post-test only, experimental and control groups
R   A   X   O
R   B       O
• Randomized post-test only, multiple treatment groups
R   A   X1   O
R   B   X2   O
True Experimental Designs • Types (continued) • Randomized pretest/post-test, multiple treatment groups
R   A   O   X1   O
R   B   O   X2   O
• Randomized pretest/post-test, experimental and control groups
R   A   O   X   O
R   B   O       O
True Experimental Designs • Threats to internal validity • Controls for selection, maturation, and statistical regression • Likely to control for most other threats • See Table 9.2 for specific threats related to each design
Factorial Designs • Research designs containing two or more independent variables • Example: A study of the effects of two instructional strategies on male and female students’ math achievement • Examples of factorial designs
Types of Effects • Main effects • One for each independent variable • i.e., one main effect for instructional strategy and one main effect for gender
Types of Effects • Interaction effects • Consider the vitamins you take. • Iron decreases fatigue. • Vitamin C decreases stress. • Vitamin C boosts the absorption of iron. • If you are fatigued and stressed, you may want to take both iron and Vitamin C. • Because Vitamin C boosts iron absorption, the effect of taking iron depends on whether you are also taking Vitamin C; when the effect of one variable depends on the level of another, that is an interaction.
Types of Effects • Interaction effects • A different effect for the levels of the first independent variable across the levels of the second independent variable • i.e., the first instructional strategy could be effective for males but not females, whereas the second instructional strategy could be effective for females but not males • One cannot state the effectiveness of the treatment (i.e., instructional strategy) without qualifying it relative to the other independent variable (gender).
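The strategy-by-gender example is a 2 x 2 factorial, so it can be summarized with four cell means. The sketch below uses invented achievement scores and shows the crossover case described above: each strategy works for one gender, the main effects wash out, and only the interaction is nonzero.

```python
import numpy as np

# Rows: instructional strategy (1, 2); columns: gender (male, female).
# Cell values are hypothetical mean math-achievement scores.
means = np.array([[75.0, 60.0],    # strategy 1 works better for males
                  [60.0, 75.0]])   # strategy 2 works better for females

grand_mean = means.mean()
strategy_main = means.mean(axis=1) - grand_mean   # main effect of strategy
gender_main = means.mean(axis=0) - grand_mean     # main effect of gender
interaction = (means - grand_mean
               - strategy_main[:, None] - gender_main[None, :])

print("Grand mean:           ", grand_mean)
print("Strategy main effects:", strategy_main)   # [0. 0.]  no overall strategy effect
print("Gender main effects:  ", gender_main)     # [0. 0.]  no overall gender effect
print("Interaction effects:\n", interaction)     # nonzero: the effect depends on gender
```

Averaged over gender, neither strategy looks better, yet the effect within each gender is large, which is exactly why the treatment effect must be qualified by the other independent variable.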
Evaluating Experimental Designs • Criteria for evaluating experimental research • The primary purpose is to test causal hypotheses • There should be direct manipulation of the independent variable • There should be clear identification of the specific research design
Evaluating Experimental Designs • Criteria for evaluating experimental research • The design should provide maximum control of extraneous variables • Treatments should be substantively different from one another • The appropriate number of subjects depends on, and ideally equals, the number of independent treatment replications