This guide covers the importance of variables such as independent, dependent, and extraneous in psychological experiments. Learn to identify and avoid biases, floor and ceiling effects, demand characteristics, and more.
Experiment Basics: Variables Psych 231: Research Methods in Psychology
Labs: • For labs, download the Class Experiment Exercise from the lab webpage • Reminder: the APA title page exercise is due in labs this week (pg 109) • My office hours are cancelled tomorrow Announcements
Independent variables (explanatory) • Dependent variables (response) • Extraneous variables • Control variables • Random variables • Confound variables Many kinds of Variables
These are things that you want to try to avoid by careful selection of the levels of your IV (may be issues for your DV as well). • Floor and ceiling effects • Demand characteristics • Experimenter bias • Reactivity Identifying potential problems
A value below which a response cannot be made • As a result, the effects of your IV (if there are indeed any) can't be seen • Imagine a task that is so difficult that none of your participants can do it Floor effects
When the dependent variable reaches a level that cannot be exceeded • So while there may be an effect of the IV, that effect can't be seen because everybody has "maxed out" • Imagine a task that is so easy that everybody scores 100% • To avoid floor and ceiling effects, pick levels of your IV that produce middle-level performance on your DV Ceiling effects
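A minimal simulation may make the ceiling effect concrete. The group means, sample size, and the 100% cap below are hypothetical, chosen only to show how a ceiling on the DV compresses a real difference produced by the IV.

```python
# Hypothetical sketch (values invented): a ceiling on the DV hides a real IV effect.
import numpy as np

rng = np.random.default_rng(231)
n = 50

# True (unobservable) accuracy: group B is genuinely about 5 points better.
group_a = rng.normal(loc=97, scale=6, size=n)
group_b = rng.normal(loc=102, scale=6, size=n)

# Observed scores cannot exceed 100% (ceiling) or fall below 0% (floor).
obs_a = np.clip(group_a, 0, 100)
obs_b = np.clip(group_b, 0, 100)

print(f"True difference:     {group_b.mean() - group_a.mean():.1f} points")
print(f"Observed difference: {obs_b.mean() - obs_a.mean():.1f} points (compressed by the ceiling)")
```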
Characteristics of the study that may give away the purpose of the experiment • May influence how the participants behave in the study • Examples: • Experiment title: The effects of horror movies on mood • Obvious manipulation: Ten psychology students looking straight up • Biased or leading questions: Don’t you think it’s bad to murder unborn children? Demand characteristics
Experimenter bias (expectancy effects) • The experimenter may influence the results (intentionally and unintentionally) • E.g., Clever Hans • One solution is to keep the experimenter (as well as the participants) “blind” as to what conditions are being tested Experimenter Bias
Knowing that you are being measured • When people know they are in an experimental setting, they don't always respond the way that they "normally" would • Cooperative • Defensive • Non-cooperative Reactivity
Independent variables (explanatory) • Dependent variables (response) • Extraneous variables • Control variables • Random variables • Confound variables Variables
The variables that are measured by the experimenter • They are "dependent" on the independent variables (if there is a relationship between the IV and DV, as the hypothesis predicts) • Consider our class experiment • Conceptual level: Memory • Operational level: Recall test • Present a list of words; participants make a judgment for each word • 15 sec. of filler (counting backwards by 3's) • Measure the accuracy of recall Dependent Variables
How to measure your construct: • Can the participant provide a self-report? • Introspection – specially trained observers of their own thought processes; the method fell out of favor in the early 1900s • Rating scales – strongly agree, agree, undecided, disagree, strongly disagree • Is the dependent variable directly observable? • Choice/decision • Is the dependent variable indirectly observable? • Physiological measures (e.g., GSR, heart rate) • Behavioral measures (e.g., speed, accuracy) Choosing your dependent variable
Scales of measurement • Errors in measurement Measuring your dependent variables
Scales of measurement – the correspondence between the numbers we assign and the properties that we're measuring • The scale that you use will (partially) determine what kinds of statistical analyses you can perform Measuring your dependent variables
Categorical variables (qualitative) • Nominal scale • Quantitative variables Scales of measurement
Nominal Scale: Consists of a set of categories that have different names • Label and categorize observations • Do not make any quantitative distinctions between observations • Example: Eye color: brown, blue, green, hazel Scales of measurement
Categorical variables (qualitative) • Nominal scale (categories) • Ordinal scale • Quantitative variables • Interval scale • Ratio scale Scales of measurement
Ordinal Scale: Consists of a set of categories that are organized in an ordered sequence • Rank observations in terms of size or magnitude • Example: T-shirt size: Small, Med, Lrg, XL, XXL Scales of measurement
Categorical variables • Nominal scale (categories) • Ordinal scale (categories with order) • Quantitative variables • Interval scale • Ratio scale Scales of measurement
Interval Scale: Consists of ordered categories where all of the categories are intervals of exactly the same size • Example: the Fahrenheit temperature scale • With an interval scale, equal differences between numbers reflect equal differences in magnitude: going from 20º to 40º is the same 20º increase as going from 60º to 80º • However, ratios of magnitudes are not meaningful: 40º is not "twice as hot" as 20º Scales of measurement
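A quick arithmetic sketch (temperatures chosen arbitrarily) of why ratios are not meaningful on an interval scale: the apparent 2:1 ratio between 80º and 40º Fahrenheit disappears when the same temperatures are re-expressed in Celsius, because both scales place zero arbitrarily.

```python
# Hypothetical check: ratios are not preserved under a change of interval scale.
def f_to_c(f):
    return (f - 32) * 5 / 9

low_f, high_f = 40.0, 80.0
print(high_f / low_f)                  # 2.0  -> numerically "twice as hot"
print(f_to_c(high_f) / f_to_c(low_f))  # ~6.0 -> the ratio is not preserved
```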
Categorical variables • Nominal scale (categories) • Ordinal scale (categories with order) • Quantitative variables • Interval scale (ordered categories of the same size) • Ratio scale Scales of measurement
Ratio Scale: An interval scale with the additional feature of an absolute zero point • Ratios of numbers DO reflect ratios of magnitude • It is easy to get ratio and interval scales confused • Example: measuring your height with playing cards Scales of measurement
Ratio scale: 8 cards high; 0 cards high means "no height" • Interval scale: 5 cards high; 0 cards high means "as tall as the table" Scales of measurement
Categorical variables • Nominal scale (categories) • Ordinal scale (categories with order) • Quantitative variables • Interval scale (ordered categories of the same size) • Ratio scale (ordered categories of the same size with a zero point) • "Best" scale? Given a choice, usually prefer the highest level of measurement possible Scales of measurement
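One way to make the four scales concrete in analysis code is to encode each with a data type that documents which operations are meaningful. The sketch below uses invented values and pandas; nothing here comes from the slides.

```python
# Hypothetical sketch: representing the four scales of measurement in pandas.
import pandas as pd

df = pd.DataFrame({
    # Nominal: categories with no order (eye color)
    "eye_color": pd.Categorical(["brown", "blue", "hazel", "brown"]),
    # Ordinal: ordered categories (t-shirt size)
    "shirt_size": pd.Categorical(["S", "XL", "M", "L"],
                                 categories=["S", "M", "L", "XL", "XXL"],
                                 ordered=True),
    # Interval: equal spacing, arbitrary zero (temperature in Fahrenheit)
    "temp_f": [40.0, 80.0, 60.0, 20.0],
    # Ratio: equal spacing with a true zero point (reaction time in ms)
    "rt_ms": [540.0, 612.0, 498.0, 705.0],
})

print(df["eye_color"].mode())    # counting/mode is meaningful for nominal data
print(df["shirt_size"].max())    # ordering is meaningful for ordinal data
print(df["temp_f"].mean())       # means and differences are fine for interval data
print(df["rt_ms"].mean() / 2)    # ratios are also meaningful for ratio data
```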
Scales of measurement • Errors in measurement • Reliability & Validity Measuring your dependent variables
Example: Measuring intelligence? • How do we measure the construct? • How good is our measure? • How does it compare to other measures of the construct? • Is it a self-consistent measure? Measuring the true score
In search of the “true score” • Reliability • Do you get the same value with multiple measurements? • Validity • Does your measure really measure the construct? • Is there bias in our measurement? (systematic error) Errors in measurement
Bull’s eye = the “true score” Dartboard analogy
Bull's eye = the "true score" • Reliability = consistency • Validity = measuring what is intended • Panels: reliable & valid, unreliable & invalid, reliable but invalid Dartboard analogy
Observed score = true score + measurement error • A reliable measure will have a small amount of error • There are multiple "kinds" of reliability Reliability
Test-retest reliability • Test the same participants more than once • Measurement from the same person at two different times • Should be consistent across different administrations Reliability
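The "true score plus measurement error" idea and test-retest reliability can be sketched in a few lines. The score distribution and error sizes below are invented; the point is only that smaller measurement error yields a higher correlation between two administrations.

```python
# Hypothetical simulation: observed score = true score + measurement error.
import numpy as np

rng = np.random.default_rng(0)
true_score = rng.normal(loc=100, scale=15, size=200)   # latent "true score"

def administer(error_sd):
    """One administration of the test: true score plus fresh random error."""
    return true_score + rng.normal(scale=error_sd, size=true_score.size)

for error_sd in (3, 20):                               # small vs. large error
    time1, time2 = administer(error_sd), administer(error_sd)
    r = np.corrcoef(time1, time2)[0, 1]
    print(f"error SD = {error_sd:>2}: test-retest r = {r:.2f}")
```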
Internal consistency reliability • Multiple items testing the same construct • Extent to which scores on the items of a measure correlate with each other • Cronbach’s alpha (α) • Split-half reliability • Correlation of score on one half of the measure with the other half (randomly determined) Reliability
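Cronbach's alpha and a split-half correlation can both be computed directly from an item-by-participant score matrix. The data below are simulated, and the alpha formula is the standard one (alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)); this is a sketch, not a required course procedure.

```python
# Hypothetical sketch: internal consistency for a made-up 6-item measure.
import numpy as np

rng = np.random.default_rng(1)
n_participants, n_items = 100, 6
trait = rng.normal(size=n_participants)
# Each item reflects the same construct plus item-specific noise.
items = trait[:, None] + rng.normal(scale=0.8, size=(n_participants, n_items))

def cronbach_alpha(scores):
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()     # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)      # variance of the total score
    return k / (k - 1) * (1 - item_var / total_var)

# Split-half: correlate the total on a randomly chosen half of the items
# with the total on the remaining half.
half = rng.permutation(n_items)[: n_items // 2]
other = np.setdiff1d(np.arange(n_items), half)
split_r = np.corrcoef(items[:, half].sum(axis=1), items[:, other].sum(axis=1))[0, 1]

print(f"Cronbach's alpha: {cronbach_alpha(items):.2f}")
print(f"Split-half r:     {split_r:.2f}")
```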
Inter-rater reliability • At least 2 raters observe behavior • Extent to which raters agree in their observations • Are the raters consistent? • Requires some training in judgment • (Figure: two raters' readings, 5:00 and 4:56) Reliability
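Inter-rater agreement is often reported both as raw percent agreement and as Cohen's kappa, which corrects for chance agreement. The two raters' codes below are invented to keep the example small.

```python
# Hypothetical sketch: two raters code 12 observed behaviors into two categories.
import numpy as np

rater1 = np.array(["on", "on", "off", "on", "off", "off", "on", "on", "off", "on", "off", "on"])
rater2 = np.array(["on", "off", "off", "on", "off", "on", "on", "on", "off", "on", "off", "on"])

agreement = np.mean(rater1 == rater2)

# Chance agreement: probability that both raters pick the same category by accident.
categories = np.union1d(rater1, rater2)
p_chance = sum(np.mean(rater1 == c) * np.mean(rater2 == c) for c in categories)
kappa = (agreement - p_chance) / (1 - p_chance)

print(f"Percent agreement: {agreement:.2f}")
print(f"Cohen's kappa:     {kappa:.2f}")
```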
Does your measure really measure what it is supposed to measure? • There are many “kinds” of validity Validity
Kinds of validity: construct, internal, external, face, criterion-oriented, predictive, convergent, concurrent, discriminant Many kinds of Validity
At the surface level, does it look as if the measure is testing the construct? “This guy seems smart to me, and he got a high score on my IQ measure.” Face Validity
Usually requires multiple studies, a large body of evidence that supports the claim that the measure really tests the construct Construct Validity
Did the change in the DV result from the change in the IV, or did it come from something else? • The precision of the results Internal Validity
Experimenter bias & reactivity • History – an event happens during the experiment • Maturation – participants get older (and change in other ways) • Selection – nonrandom selection may lead to biases • Mortality (attrition) – participants drop out or can't continue • Regression to the mean – extreme performance is often followed by performance closer to the mean • The SI cover jinx Threats to internal validity
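Regression to the mean is easy to see in a simulation: pick the most extreme scorers at time 1 and, with no intervention at all, their time-2 average falls back toward the population mean. All numbers below are invented.

```python
# Hypothetical simulation of regression to the mean.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
true_score = rng.normal(loc=50, scale=10, size=n)
time1 = true_score + rng.normal(scale=10, size=n)   # observed = true + error
time2 = true_score + rng.normal(scale=10, size=n)   # fresh error at retest

extreme = time1 > np.percentile(time1, 95)          # the top 5% at time 1
print(f"Population mean:            {time1.mean():.1f}")
print(f"Extreme group, time 1 mean: {time1[extreme].mean():.1f}")
print(f"Extreme group, time 2 mean: {time2[extreme].mean():.1f}  (closer to the mean)")
```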
Are experiments “real life” behavioral situations, or does the process of control put too much limitation on the “way things really work?” External Validity
Variable representativeness • Relevant variables for the behavior studied along which the sample may vary • Subject representativeness • Characteristics of sample and target population along these relevant variables • Setting representativeness • Ecological validity - are the properties of the research setting similar to those outside the lab External Validity
Scales of measurement • Errors in measurement • Reliability & Validity • Sampling error Measuring your dependent variables
Errors in measurement • Sampling error • Population: everybody that the research is targeted to be about • Sample: the subset of the population that actually participates in the research Sampling
Sampling from the population makes data collection manageable • Inferential statistics are used to generalize from the sample back to the population • This allows us to quantify the sampling error Sampling
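Sampling error can be seen by drawing many samples from the same population: each sample mean differs a little from the population mean, and the spread of those sample means (the standard error) is what inferential statistics quantify. The population parameters and sample size below are made up.

```python
# Hypothetical sketch: quantifying sampling error by repeated sampling.
import numpy as np

rng = np.random.default_rng(7)
population = rng.normal(loc=100, scale=15, size=10_000)   # "everybody"

sample_means = [rng.choice(population, size=30, replace=False).mean()
                for _ in range(1_000)]

print(f"Population mean:              {population.mean():.2f}")
print(f"Mean of sample means:         {np.mean(sample_means):.2f}")
print(f"SD of sample means (SE):      {np.std(sample_means):.2f}")
print(f"Theoretical SE, 15/sqrt(30):  {15 / np.sqrt(30):.2f}")
```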
Goals of "good" sampling: • Maximize representativeness: the extent to which the characteristics of those in the sample reflect those in the population • Reduce bias: a systematic difference between those in the sample and those in the population • Key tool: random selection Sampling
Probability sampling (has some element of random selection) • Simple random sampling • Systematic sampling • Stratified sampling • Non-probability sampling (susceptible to biased selection) • Convenience sampling • Quota sampling Sampling Methods
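The three probability sampling methods can be sketched against a made-up sampling frame; the column names and frame size below are invented for illustration.

```python
# Hypothetical sketch: simple random, systematic, and stratified sampling.
import numpy as np
import pandas as pd

rng = np.random.default_rng(231)
frame = pd.DataFrame({
    "person_id": np.arange(1000),
    "class_year": rng.choice(["freshman", "sophomore", "junior", "senior"], size=1000),
})
n = 100

# Simple random sampling: every person has an equal chance of selection.
simple = frame.sample(n=n, random_state=231)

# Systematic sampling: a random start, then every k-th person on the list.
k = len(frame) // n
start = int(rng.integers(k))
systematic = frame.iloc[start::k]

# Stratified sampling: random sampling within each stratum (class year),
# proportional to each stratum's share of the population.
stratified = frame.groupby("class_year").sample(frac=n / len(frame), random_state=231)

print(len(simple), len(systematic), len(stratified))
print(stratified["class_year"].value_counts(normalize=True))  # mirrors the population
```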