EVALUATING YOUR RESEARCH DESIGN EDRS 5305 EDUCATIONAL RESEARCH & STATISTICS
Campbell and Stanley (1966) • Two general criteria of research designs: • Internal validity • External validity
INTERNAL VALIDITY • Definition: the extent to which the changes observed in the dependent variable (DV) are caused by the independent variable (IV).
Internal Validity • Questions of internal validity cannot be answered positively unless the design provides adequate control of extraneous variables. • Internal validity is essentially a problem of control: anything that contributes to the control of a research design contributes to its internal validity.
Internal Validity • History: specific events or conditions, other than the treatment, that occur between the first and second measurements of the participants and produce changes in the DV. • Maturation: processes that operate within the participants simply as a function of the passage of time (e.g., growing older, more tired, or more practiced).
Internal Validity • Pretesting: exposure to a pretest may affect participants’ performance on a second test, regardless of the IV. • Measuring instruments (instrumentation): changes in the measuring instruments, the scorers, or the observers may produce changes in the obtained measures.
Internal Validity • Statistical regression: if groups are selected on the basis of extreme scores, statistical regression (regression toward the mean) may produce an apparent change that could be mistakenly interpreted as an experimental effect.
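A minimal simulation, not from the original slides, sketched here in Python with NumPy: a group selected for extremely low pretest scores appears to “improve” on a retest even though no treatment was given. All names and numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

true_ability = rng.normal(50, 10, size=10_000)           # stable underlying trait
pretest = true_ability + rng.normal(0, 5, size=10_000)   # trait + measurement noise
posttest = true_ability + rng.normal(0, 5, size=10_000)  # independent noise on the retest

# Select a "remedial" group purely on the basis of extreme (low) pretest scores.
low = pretest < np.percentile(pretest, 10)

print("Pretest mean of low scorers: ", round(pretest[low].mean(), 1))
print("Posttest mean of low scorers:", round(posttest[low].mean(), 1))
# The posttest mean drifts back toward 50 with no treatment at all --
# a "gain" that could be mistaken for an experimental effect.
```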
Internal Validity • Differential selection of participants: important differences may exist between the groups before the IV is applied. • Experimental mortality: a differential loss of participants from the comparison groups.
Internal Validity • Selection-maturation interaction: threats such as selection and maturation may combine when preexisting groups mature or change at different rates; this frequently arises when volunteers are compared with nonvolunteers.
Internal Validity • Implementation: the way the IV is implemented can itself threaten internal validity, as in the experimenter bias effect. • Participants’ attitudes: the Hawthorne effect (participants respond to the positive attention they receive) and the John Henry effect (control participants exert extra effort to match the treatment group).
Controlling for Threats to Internal Validity • Random assignment: each participant has an equal chance of being placed in any condition. • Randomized matching: match participants on as many relevant variables as possible, then randomly assign one member of each pair to the treatment group and the other to the control group.
• Homogeneous selection: select samples that are as similar as possible on some extraneous variable (e.g., IQ, age). • Building variables into the design: include the extraneous variable as one of the IVs examined (e.g., gender). • Analysis of covariance: statistically remove the portion of performance that is systematically related to an extraneous variable (see the sketch following this list). • Using participants as their own controls: each participant serves in every experimental condition, one at a time.
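A hedged sketch, not part of the original slides, of two of the controls above: random assignment to conditions and an ANCOVA-style adjustment for a pretest covariate. The column names, effect sizes, and the use of pandas and statsmodels are illustrative assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200

df = pd.DataFrame({"pretest": rng.normal(50, 10, n)})

# Random assignment: every participant has an equal chance of either condition,
# so extraneous variables are expected to balance across groups in the long run.
df["group"] = rng.permutation(np.repeat(["control", "treatment"], n // 2))

# Simulated posttest: pretest carries over, plus a modest treatment effect.
df["posttest"] = (0.8 * df["pretest"]
                  + np.where(df["group"] == "treatment", 3.0, 0.0)
                  + rng.normal(0, 5, n))

# Analysis of covariance: the pretest covariate absorbs variance that is
# systematically related to it, sharpening the estimate of the group effect.
model = smf.ols("posttest ~ C(group) + pretest", data=df).fit()
print(model.params)  # the adjusted group effect should land near 3.0
```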
External Validity of Research Designs • Refers to the generalizability or representativeness of the findings. • The question addressed here is: to what groups, settings, experimental variables, and measurement variables can these findings be generalized?
Types of External Validity • Population external validity: concerned with identifying the population to which the results can be generalized. • Ecological external validity: concerned with generalizing experimental effects to other environmental conditions (i.e., settings).
Types of External Validity • External validity of operations: concerned with how well the operational definitions and the experimental procedures represent the constructs of interest. Would the same relationships be found if a different researcher used different operations (i.e., measures) in investigating the same question?