Name that tune. Song title? Performer(s)?
Scientific Method (continued) “Finding New Information” 3/24/2010
Objectives • I want to arm you with a scientist’s skepticism, and a scientist’s tools to conduct research and evaluate others’ research. • Randolph – remember to take roll.
Operational Definitions • Explains a concept solely in terms of the operations used to produce and measure it. • Bad: “Smart people.” • Good: “People with an IQ over 120.” • Bad: “People with long index fingers.” • Good: “People with index fingers at least 7.2 cm long.” • Bad: “Ugly guys.” • Good: “Guys rated as ‘ugly’ by at least 50% of the respondents.”
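One way to make this concrete: an operational definition can be written as a rule over a measurable quantity. A minimal sketch in Python, with made-up function names, assuming the IQ and finger-length cutoffs from the slide above:

```python
# Hypothetical sketch: operational definitions expressed as checks on measured values.
# "Smart people" is vague; "people with an IQ over 120" can be verified directly.

def is_smart(iq_score: float) -> bool:
    """Operationalized as: IQ score greater than 120."""
    return iq_score > 120

def has_long_index_finger(length_cm: float) -> bool:
    """Operationalized as: index finger at least 7.2 cm long."""
    return length_cm >= 7.2

print(is_smart(131))                # True
print(has_long_index_finger(6.8))   # False
```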
Validity and Reliability • Validity: the “truthfulness” of a measure. Are you really measuring what you claim to measure? “The validity of a measure . . . the extent that people do as well on it as they do on independent measures that are presumed to measure the same concept.” • Reliability: a measure’s consistency. • A measure can be reliable without being valid, but not vice versa.
Theory and Hypothesis • Theory: a logically organized set of propositions (claims, statements, assertions) that serves to define events (concepts), describe relationships among these events, and explain their occurrence. • Theories organize our knowledge and guide our research • Hypothesis: A tentative explanation. • A scientific hypothesis is TESTABLE.
Goals of Scientific Method • Description • Nomothetic approach – establish broad generalizations and general laws that apply to a diverse population • Versus idiographic approach – interested in the individual and his or her uniqueness (e.g., case studies) • Prediction • Correlational study – when scores on one variable can be used to predict scores on a second variable. (Doesn’t necessarily tell you “why.”) • Understanding – continued on the next slide • Creating change • Applied research
Understanding • Three important conditions for making a causal inference: • Covariation of events. (IV changes, and the DV changes.) • A time-order relationship. (First the scientist changes the IV – then there’s a change in the DV.) • The elimination of plausible alternative causes.
Confounding • When two potentially effective IVs are allowed to covary simultaneously. • Poor control! • Men, overall, did a better job of remembering the 12 “random” letters. But the men had received a different “clue.” • So GENDER (what type of IV? A SUBJECT variable, or individual-differences variable) was CONFOUNDED with “type of clue” (an IV).
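A hedged simulation sketch of the confound described above, with made-up numbers: every man gets clue “A” and every woman gets clue “B,” so gender and clue type covary perfectly and their effects cannot be separated.

```python
# Hypothetical data: gender and clue type are perfectly confounded.
import random

random.seed(1)
participants = (
    [{"gender": "M", "clue": "A"} for _ in range(20)]
    + [{"gender": "F", "clue": "B"} for _ in range(20)]
)

# Simulated recall scores: suppose the clue ALONE drives performance.
for p in participants:
    p["recall"] = random.gauss(8 if p["clue"] == "A" else 6, 1)

mean_m = sum(p["recall"] for p in participants if p["gender"] == "M") / 20
mean_f = sum(p["recall"] for p in participants if p["gender"] == "F") / 20
print(mean_m, mean_f)  # Men score higher -- but is it gender or the clue? We can't tell.
```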
A bit more about theories • Good theories provide “precision of prediction” • The “rule of parsimony” is followed • The simplest alternative explanations are accepted • A good scientific theory passes the most rigorous tests • Testing will be more informative when you try to DISPROVE (falsify) a theory
Populations and Samples • Population: the set of all cases of interest • Sample: the subset of the population that we choose to study.
Experimental Design • Description and Prediction are crucial to the scientific study of behavior, but they’re not sufficient for understanding the causes. We need to know WHY. • Best way to answer this question is with the experimental method. • “The special strength of the experimental method is that it is especially effective for establishing cause-and-effect relationships.”
If results of an experiment . . . • . . . (a well-run experiment!) are consistent with theory, we say we’ve supported the theory. (NOT that it is “right.”) • Otherwise, we modify the theory. • Testing hypotheses and revising theories based on the outcomes of experiments – the long process of science.
Logic of Experimental Research • Researchers manipulate an independent variable in an experiment to observe the effect on behavior, as assessed by the dependent variable.
Independent Groups Design • Each group represents a different condition as defined by the independent variable.
Random . . . • Random Selection vs. Random Assignment • Random Selection = every member of the population has an equal chance of being selected for the sample. • Random Assignment = every member of the sample (however chosen) has an equal chance of being placed in the experimental group or the control group. • Random assignment allows for individual differences among test participants to be averaged out.
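A minimal sketch of the distinction, with hypothetical names and group sizes: selection draws the sample from the population; assignment then splits that sample into conditions.

```python
# Random selection vs. random assignment (illustrative only).
import random

random.seed(42)
population = [f"person_{i}" for i in range(1000)]

# Random selection: every member of the population has an equal chance
# of ending up in the sample.
sample = random.sample(population, 40)

# Random assignment: every member of the sample (however it was chosen)
# has an equal chance of landing in the experimental or the control group.
shuffled = sample[:]
random.shuffle(shuffled)
experimental, control = shuffled[:20], shuffled[20:]

print(len(experimental), len(control))  # 20 20
```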
Let’s step back a minute • An experiment is “personkind’s way of asking nature a question.” • I want to know if one variable (factor, event, thing) has an effect on another variable – does the IV systematically influence the DV? • I manipulate some variables (IVs), control other variables, and count on random assignment to wash out the effects of all the rest of the variables.
Challenges to Internal Validity • Testing intact groups. (Why is the group a group? There might be some systematic differences.) • Extraneous variables. (Balance ‘em.) (E.g., the experimenter.) • Subject loss • Mechanical loss: OK. • Selective loss: not OK. • Demand characteristics (cues and other info participants pick up on) – use a placebo and a double-blind procedure • Experimenter effects – use a double-blind procedure
Notice • Many things influence how easy or hard it is to discover a difference. • How big the real difference is. • How much variability there is in the population distribution(s). • How much error variance there is. • Let’s talk about variance.
Sources of variance • Systematic vs. Error • Real differences • Error variance • What would happen to the DV if our measurement apparatus were a little inconsistent? • There are OTHER sources of error variance, and the whole point of experimental design is to try to minimize ‘em. Get this: The more error variance there is, the harder it is for real differences to “shine through.”
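A hedged simulation sketch of that last point, with made-up numbers: the true difference between conditions is fixed at 2 points, but it is easy to see when error variance is small and gets buried when error variance is large.

```python
# Same real difference, different amounts of error variance.
import random

random.seed(0)

def observed_difference(noise_sd, n=30, true_diff=2.0):
    control = [random.gauss(50, noise_sd) for _ in range(n)]
    treatment = [random.gauss(50 + true_diff, noise_sd) for _ in range(n)]
    return sum(treatment) / n - sum(control) / n

print(round(observed_difference(noise_sd=1), 2))   # close to 2: the difference "shines through"
print(round(observed_difference(noise_sd=15), 2))  # may be far from 2, or even reversed in sign
```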
One way to reduce the error variance • Matched groups design • If there’s some variable that you think MIGHT cause some variance, • Pre-test subjects on some matching test that equates the groups on a dimension that is relevant to the outcome of the experiment. (Must have a good matching test.) • Then assign matched groups. This way the groups will be similar on this one important variable. • STILL use random assignment to the groups. • Good when there are a small number of possible test subjects.
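A minimal sketch of matched-group assignment, with hypothetical subject IDs and pre-test scores: rank subjects by the matching test, pair off adjacent scores, then randomly assign one member of each pair to each group.

```python
# Matched groups: match on a pre-test, then randomly assign within each pair.
import random

random.seed(7)
pretest = {f"s{i}": random.randint(60, 100) for i in range(12)}  # made-up pre-test scores

ranked = sorted(pretest, key=pretest.get)                 # order subjects by pre-test score
pairs = [ranked[i:i + 2] for i in range(0, len(ranked), 2)]

group_a, group_b = [], []
for pair in pairs:
    random.shuffle(pair)        # random assignment WITHIN each matched pair
    group_a.append(pair[0])
    group_b.append(pair[1])

print(group_a)
print(group_b)
```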
Role of Data Analysis in Exps. • Primary goal of data analysis is to determine if our observations support a claim about behavior. Is that difference really different? • We want to draw conclusions about populations, not just the sample. • Two different ways – statistics and replication.
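A sketch of the statistics route, assuming SciPy is available and using made-up scores: an independent-samples t-test asks whether the observed difference between two groups is larger than error variance alone would be likely to produce.

```python
# "Is that difference really different?" -- illustrative t-test on fabricated data.
from scipy import stats

control = [12, 15, 11, 14, 13, 16, 12, 15]
treatment = [16, 18, 15, 17, 19, 16, 18, 17]

t_stat, p_value = stats.ttest_ind(treatment, control)
print(t_stat, p_value)  # a small p-value suggests the difference is unlikely to be chance alone
```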
Another design (in addition to the Independent Groups Design) • Natural Groups design • Based on subject (or individual-differences) variables. • Selected, not manipulated. • Remember: This will give us description and prediction, but not understanding (cause and effect).
We’ve been talking about . . . • Making two groups comparable, so that the ONLY systematic difference is the IV. • CONTROL some variables. • Match on some. • Use random assignment to wash out the effects of the others. • What would be the best possible match for one subject, or one group of subjects?
Themselves! • When each test subject is his/her own control, then that’s called a • Repeated measures design, or a • Within-subjects design. (And the independent groups design is called a “between subjects” design.)
Repeated Measures • If each subject serves as his/her own control, then we don’t have to worry about individual differences, across experimental and control conditions. • EXCEPT for newly introduced sources of variance – order effects: • Practice effects • Fatigue effects
Counterbalancing • ABBA • Used to overcome order effects. • Assumes practice/fatigue effects are linear. • Some incomplete counterbalancing ideas are offered in the text.
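A minimal sketch of ABBA counterbalancing, with hypothetical condition labels: each subject sees condition A, then B, then B, then A, so linear practice or fatigue effects fall equally on both conditions.

```python
# ABBA counterbalancing for a repeated-measures design.
def abba_order(cond_a="A", cond_b="B"):
    """Return the ABBA presentation order for one subject."""
    return [cond_a, cond_b, cond_b, cond_a]

for subject in ["s1", "s2", "s3"]:
    print(subject, abba_order())   # every subject: ['A', 'B', 'B', 'A']
```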
Which method when? • Some questions DO lend themselves to repeated measures (within-subjects) design • Can people read faster in condition A or condition B? • Is memorability improved if words are grouped in this way or that? • Some questions do NOT lend themselves to repeated measures design • Do these instructions help people solve a particular puzzle? • Does this drug reduce cholesterol?
References • Hinton, P. R. Statistics Explained. • Shaughnessy, Zechmeister, & Zechmeister. Experimental Methods in Psychology.