Research Methods
Descriptive Methods • Observation • Survey Research • Experimental Methods • Independent Groups Designs • Repeated Measures Designs • Complex Designs • Applied Research • Single-Case Designs and Small-n Research • Quasi-Experimental Designs and Program Evaluation
Experimental Methods: Independent Groups Designs • PSYCHOLOGICAL EXPERIMENTS • LOGIC OF EXPERIMENTAL RESEARCH • RANDOM GROUPS DESIGN • Block Randomization • Threats to Internal Validity • ANALYSIS AND INTERPRETATION OF EXPERIMENTAL FINDINGS • The Role of Data Analysis in Experiments • Describing the Results • Confirming What the Results Reveal • What Data Analysis Can’t Tell Us • ESTABLISHING THE EXTERNAL VALIDITY OF EXPERIMENTAL FINDINGS • MATCHED GROUPS DESIGN • NATURAL GROUPS DESIGN
Psychological Experiments • Experiments serve several purposes: • Empirical testing of hypotheses • Testing of contemporary theories • Identification of the causes of behavior • Testing of interventions
Logic of Experiments • Manipulation • The IV is manipulated to observe its effect on the DV (behavior) • Experimental control • Control permits causal inference (that the IV caused the observed changes in the DV) • Control is an essential ingredient • Gained through manipulation, holding conditions constant, and balancing • Causal inference requires three conditions • Covariation, a time-order relationship, and elimination of plausible alternative causes • When confounding occurs, a plausible alternative explanation for the observed covariation exists, and therefore the experiment lacks internal validity. Plausible alternative explanations are ruled out by holding conditions constant and balancing.
Random Groups Design • Each group of subjects participates in only one condition of the IV • Comparable groups are formed by random assignment of subjects to conditions (e.g., block randomization, sketched below) • Manipulation of the IV across the independent groups • Holding conditions constant • Balancing subject characteristics (individual differences) by averaging them across the levels of the independent variable • Example: Dittmar et al. (2006), with Barbie, Emme, and neutral conditions
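Block randomization, named in the outline above, is one common way to carry out random assignment while keeping group sizes equal. A minimal Python sketch, using the three Dittmar et al. (2006) condition labels only as placeholders:

```python
import random

def block_randomize(conditions, n_blocks, seed=None):
    """Build an assignment schedule: each block contains every condition
    exactly once in a freshly shuffled order, so group sizes stay equal
    and the order of assignment is unpredictable."""
    rng = random.Random(seed)
    schedule = []
    for _ in range(n_blocks):
        block = list(conditions)
        rng.shuffle(block)          # randomize condition order within the block
        schedule.extend(block)
    return schedule

# Example: 3 conditions, 5 blocks -> an assignment order for 15 subjects
print(block_randomize(["Barbie", "Emme", "neutral"], n_blocks=5, seed=42))
```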
Threats to Internal Validity • Intact groups: potential confounding due to preexisting group differences • Balancing extraneous variables (e.g., experimenter, observer) • Subject loss: selective subject loss threatens internal validity; mechanical subject loss does not • Demand characteristics • Controlled with placebo control groups and double-blind experiments
ANALYSIS AND INTERPRETATION OF EXPERIMENTAL FINDINGS • A good research question leads to a good experiment • The Role of Data Analysis in Experiments • Statistics as Principled Argument (1995) by Robert Abelson: the “primary goal of data analysis is to determine whether observations support a claim about behavior” • Replication establishes reliability • Data analysis and statistics serve as an alternative to replication
Describing the Results • Descriptive statistics • Mean (central tendency) • Standard deviation (variation/individual differences) • Measures of effect size • Indicate the strength of the relationship and are not affected by sample size • Cohen’s d: more than a mean difference; the difference between two group means relative to the average variability (see the sketch below) • Benchmarks for small, medium, and large effects: .20, .50, and .80 • Meta-analysis • Uses measures of effect size to summarize the results of many experiments investigating the same independent variable or dependent variable
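A minimal sketch of computing Cohen’s d as the difference between two group means divided by the pooled standard deviation; the scores below are placeholder values for illustration only, not data from any study:

```python
import numpy as np

def cohens_d(group1, group2):
    """Cohen's d: mean difference relative to the pooled standard deviation."""
    g1, g2 = np.asarray(group1, float), np.asarray(group2, float)
    n1, n2 = len(g1), len(g2)
    # Pooled variance weights each sample variance by its degrees of freedom.
    pooled_var = ((n1 - 1) * g1.var(ddof=1) + (n2 - 1) * g2.var(ddof=1)) / (n1 + n2 - 2)
    return (g1.mean() - g2.mean()) / np.sqrt(pooled_var)

# Placeholder scores for two conditions (illustrative values only)
condition_a = [3.1, 2.8, 3.5, 2.9, 3.2, 3.0]
condition_b = [3.6, 3.9, 3.4, 3.8, 3.7, 3.5]
d = cohens_d(condition_a, condition_b)
print(f"Cohen's d = {d:.2f}")   # compare against the .20 / .50 / .80 benchmarks
```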
Confirming What the Results Reveal • Inferential statistics • Is the effect of the IV on the DV reliable? • Used to infer characteristics of the population from the results of the sample • Is the difference due to chance (error variance)? • Two methods: null hypothesis significance testing (NHST) and confidence intervals • NHST • Uses probability theory to decide whether a difference is due to error variance • t-test, F-test, etc. • A statistically significant result has a small likelihood (< 5%) of occurring if the null hypothesis is true • Confidence intervals • Confidence level of .95 • Width of the interval (the narrower, the more precise) • The degree of overlap between intervals indicates whether sample means differ reliably (see the sketch below)
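A brief sketch of both approaches using SciPy: an independent-samples t-test for NHST and a 95% confidence interval around each group mean. The scores are placeholder values for illustration only:

```python
import numpy as np
from scipy import stats

# Placeholder scores for two independent groups (illustrative values only)
group_a = np.array([3.1, 2.8, 3.5, 2.9, 3.2, 3.0])
group_b = np.array([3.6, 3.9, 3.4, 3.8, 3.7, 3.5])

# NHST: independent-samples t-test; p < .05 is the conventional cutoff
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# 95% confidence interval for each group mean, based on the t distribution
for name, g in [("A", group_a), ("B", group_b)]:
    sem = stats.sem(g)                      # standard error of the mean
    lo, hi = stats.t.interval(0.95, len(g) - 1, loc=g.mean(), scale=sem)
    print(f"Group {name}: mean = {g.mean():.2f}, 95% CI = [{lo:.2f}, {hi:.2f}]")
```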
What Data Analysis Can’t Tell Us • Whether the results of a study have practical value, or even whether they are meaningful • No certainty regarding conclusions • Errors: • A Type I error is like a false alarm: saying there is a fire when there is not (rejecting a true null hypothesis) • A Type II error occurs when the null hypothesis is, in fact, false but we have insufficient evidence to reject it
ESTABLISHING THE EXTERNAL VALIDITY OF EXPERIMENTAL FINDINGS • External validity • The extent to which findings apply to other individuals, settings, and conditions • Theory testing emphasizes internal validity over external validity • Field experiments increase external validity • Partial replications help establish external validity • Generalization of conceptual relationships
MATCHED GROUPS DESIGN • A matched groups design is used when too few subjects are available for random assignment to work effectively • Matching • The best matching task is the dependent variable task itself (or one closely related to it) • After the matching task, subjects are randomly assigned to the conditions (see the sketch below)
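One way such matching might be implemented, sketched under the assumption that subjects are ranked on a matching-task score, grouped into matched sets of one subject per condition, and then randomly assigned within each set; the pretest scores below are placeholders:

```python
import random

def matched_groups_assign(scores, conditions, seed=None):
    """Rank subjects on the matching-task score, slice them into matched
    sets (one subject per condition), then randomly assign within each set."""
    rng = random.Random(seed)
    k = len(conditions)
    # Sort subject IDs by score so each consecutive slice is a matched set.
    ranked = sorted(scores, key=scores.get)
    assignment = {}
    for i in range(0, len(ranked) - len(ranked) % k, k):
        matched_set = ranked[i:i + k]
        shuffled = list(conditions)
        rng.shuffle(shuffled)            # random assignment within the matched set
        for subject, cond in zip(matched_set, shuffled):
            assignment[subject] = cond
    return assignment

# Placeholder pretest scores on the matching task (illustrative values only)
pretest = {"S1": 12, "S2": 18, "S3": 11, "S4": 17, "S5": 14, "S6": 15}
print(matched_groups_assign(pretest, ["treatment", "control"], seed=1))
```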
NATURAL GROUPS DESIGN • Individual differences variables (or subject variables) are selected rather than manipulated to form the groups in a natural groups design. • The natural groups design represents a type of correlational research in which researchers look for covariations between natural groups variables and dependent variables. • Causal inferences cannot be made regarding the effects of natural groups variables because plausible alternative explanations for group differences exist.