Experiment Basics: Variables (Psych 231: Research Methods in Psychology)
Print out the Class Experiment exercise (listed on the syllabus page) and bring it to labs this week • Class experiment results are now posted; you will discuss these in labs this week or next • Labs • Exercise swap: the Variables exercise on LM pg 25 will be assigned (instead of the one on pg 35) in labs this week and is due in labs next week • Group project introduction sections due this week • Quiz 5 is due Friday • Journal Summary #1 is due in labs next week Reminders
Independent variables (explanatory) • Dependent variables (response) • Extraneous variables • Control variables • Random variables • Confound variables Many kinds of Variables
Choosing the right levels of your independent variable • Review the literature • Do a pilot experiment • Consider the costs, your resources, your limitations • Be realistic • Pick levels found in the “real world” • Pay attention to the range of the levels • Pick a large enough range to show the effect • Aim for the middle of the range Choosing your independent variable
These are things that you want to try to avoid by careful selection of the levels of your IV (may be issues for your DV as well). • Demand characteristics • Experimenter bias • Reactivity • Floor and ceiling effects (range effects) Identifying potential problems
Characteristics of the study that may give away the purpose of the experiment • May influence how the participants behave in the study • Examples: • Experiment title: The effects of horror movies on mood • Obvious manipulation: Having participants see lists of words and pictures and then later testing to see if pictures or words are remembered better • Biased or leading questions: Don’t you think it’s bad to murder unborn children? Demand characteristics
Experimenter bias (expectancy effects) • The experimenter may influence the results (intentionally and unintentionally) • E.g., Clever Hans • One solution is to keep the experimenter (as well as the participants) “blind” as to what conditions are being tested Experimenter Bias
Knowing that you are being measured • Just by being in an experimental setting, people don't always respond the way they "normally" would. • Cooperative • Defensive • Non-cooperative Reactivity
Floor: A value below which a response cannot be made • As a result, the effects of your IV (if there are indeed any) can't be seen. • Imagine a task so difficult that none of your participants can do it. • Ceiling: When the dependent variable reaches a level that cannot be exceeded • So while there may be an effect of the IV, that effect can't be seen because everybody has "maxed out" • Imagine a task so easy that everybody scores 100% • To avoid floor and ceiling effects, pick levels of your IV that result in middle-level performance on your DV Range effects
Independent variables (explanatory) • Dependent variables (response) • Extraneous variables • Control variables • Random variables • Confound variables Variables
The variables that are measured by the experimenter • They are “dependent” on the independent variables (if there is a relationship between the IV and DV as the hypothesis predicts). Dependent Variables
How to measure your construct: • Can the participant provide a self-report? • Introspection – specially trained observers of their own thought processes; the method fell out of favor in the early 1900s • Rating scales – strongly agree, agree, undecided, disagree, strongly disagree • Is the dependent variable directly observable? • Choice/decision • Is the dependent variable indirectly observable? • Physiological measures (e.g., GSR, heart rate) • Behavioral measures (e.g., speed, accuracy) Choosing your dependent variable
Scales of measurement • Errors in measurement Measuring your dependent variables
Scales of measurement – the correspondence between the numbers and the properties that we're measuring • The scale that you use will (partially) determine what kinds of statistical analyses you can perform Measuring your dependent variables
Categorical variables (qualitative) • Nominal scale • Ordinal scale • Quantitative variables • Interval scale • Ratio scale Scales of measurement
Nominal Scale: Consists of a set of categories that have different names. • Labels and categorizes observations • Does not make any quantitative distinctions between observations • Example: Eye color: brown, blue, green, hazel Scales of measurement
Categorical variables (qualitative) • Nominal scale • Ordinal scale • Quantitative variables • Interval scale • Ratio scale Categories Scales of measurement
Ordinal Scale: Consists of a set of categories that are organized in an ordered sequence. • Ranks observations in terms of size or magnitude • Example: T-shirt size: Small, Med, Lrg, XL, XXL Scales of measurement
Categorical variables • Nominal scale • Ordinal scale • Quantitative variables • Interval scale • Ratio scale Categories Categories with order Scales of measurement
Interval Scale: Consists of ordered categories where all of the categories are intervals of exactly the same size. • Example: Fahrenheit temperature scale • With an interval scale, equal differences between numbers on the scale reflect equal differences in magnitude. • However, ratios of magnitudes are not meaningful. • [Diagram: a 20º increase from 20º to 40º is the same amount of increase as a 20º increase from 60º to 80º, but 40º is not "twice as hot" as 20º] Scales of measurement
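The "not twice as hot" point can be checked directly: a quick sketch (the temperatures are illustrative) showing that a ratio of interval-scale values changes when you convert between scales, because the zero point is arbitrary.

```python
# Fahrenheit and Celsius are both interval scales: their zero points are
# arbitrary, so ratios of scale values are not meaningful and are not
# preserved across the (linear) conversion between them.

def f_to_c(f):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (f - 32) * 5 / 9

ratio_f = 80 / 40                    # "80ºF is twice 40ºF"?  -> 2.0
ratio_c = f_to_c(80) / f_to_c(40)    # same two temperatures in Celsius -> 6.0

# The "twice as hot" claim depends entirely on where the scale puts zero.
print(ratio_f, ratio_c)  # 2.0 6.0
```

On a true ratio scale (e.g., Kelvin, with its absolute zero), ratios of values do reflect ratios of magnitude.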
Categorical variables • Nominal scale • Ordinal scale • Quantitative variables • Interval scale • Ratio scale Categories Categories with order Ordered Categories of same size Scales of measurement
Ratio scale: An interval scale with the additional feature of an absolute zero point. • Ratios of numbers DO reflect ratios of magnitude. • It is easy to get ratio and interval scales confused • Example: Measuring your height with playing cards Scales of measurement
[Diagram: Ratio scale – 8 cards high; 0 cards high means 'no height'. Interval scale – 5 cards high; 0 cards high means 'as tall as the table'] Scales of measurement
Categorical variables • Nominal scale • Ordinal scale • Quantitative variables • Interval scale • Ratio scale Categories Categories with order Ordered Categories of same size Ordered Categories of same size with zero point “Best” Scale? • Given a choice, usually prefer highest level of measurement possible Scales of measurement
Scales of measurement • Errors in measurement • Reliability & Validity • Sampling error Measuring your dependent variables
Example: Measuring intelligence • How do we measure the construct? • How good is our measure? • How does it compare to other measures of the construct? • Is it a self-consistent measure? Measuring the true score
In search of the “true score” • Reliability • Do you get the same value with multiple measurements? • Validity • Does your measure really measure the construct? • Is there bias in our measurement? (systematic error) Errors in measurement
Bull’s eye = the “true score” e.g., a person’s Intelligence • Dart Throw = a measurement • e.g., trying to measure that person’s Intelligence Dartboard analogy
Bull's eye = the "true score" for the construct • Reliability = consistency • Validity = measuring what is intended • Measurement error • Estimate of true score = average of all of the measurements • Unreliable and invalid: the dots are spread out, and the estimate of the true score and the true score are different Dartboard analogy
Bull’s eye = the “true score” • Reliability = consistency • Validity = measuring what is intended • Measurement error • Estimate of true score • [Diagram: three dartboards – reliable and valid (tight cluster on the bull's eye); unreliable and invalid (scattered throws); reliable but invalid (tight cluster off the bull's eye: biased)] Dartboard analogy
True score + measurement error • A reliable measure will have a small amount of error • Multiple “kinds” of reliability • Test-retest • Internal consistency • Inter-rater reliability Reliability
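The "true score + measurement error" idea can be sketched with a small simulation (all numbers are invented for illustration): each observed score is the true score plus random error, and averaging many measurements recovers a good estimate of the true score.

```python
import random
import statistics

random.seed(1)  # reproducible illustration

TRUE_SCORE = 100   # the unobservable "true" value of the construct
NOISE = 5          # maximum size of the (unsystematic) measurement error

# Each observed measurement = true score + random measurement error
measurements = [TRUE_SCORE + random.uniform(-NOISE, NOISE) for _ in range(1000)]

estimate = statistics.mean(measurements)
# With unbiased (non-systematic) error, the average converges on the true
# score even though any single measurement may be off by up to 5 points.
print(round(estimate, 2))
```

A reliable measure corresponds to a small `NOISE`; a biased measure would add a systematic offset that no amount of averaging removes.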
Test-retest reliability • Test the same participants more than once • Measurement from the same person at two different times • Should be consistent across different administrations Reliability
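Test-retest reliability is typically quantified as the correlation between the two administrations; a minimal sketch with made-up scores for five participants:

```python
import math

# Hypothetical scores for five participants, measured twice
time1 = [10, 12, 9, 15, 11]
time2 = [11, 13, 9, 14, 12]

def pearson_r(x, y):
    """Pearson correlation coefficient between two score lists."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson_r(time1, time2)
# r near 1 means participants kept roughly the same rank and level
# across administrations, i.e., the measure is consistent over time.
print(round(r, 2))
```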
Internal consistency reliability • Multiple items testing the same construct • Extent to which scores on the items of a measure correlate with each other • Cronbach’s alpha (α) • Split-half reliability • Correlation of score on one half of the measure with the other half (randomly determined) Reliability
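Cronbach's alpha can be computed directly from its definition (the item scores below are invented for illustration):

```python
from statistics import variance

# Rows = participants, columns = four items intended to tap one construct
scores = [
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [1, 2, 1, 1],
    [5, 5, 4, 5],
    [3, 3, 3, 3],
]

def cronbach_alpha(rows):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(rows[0])
    items = list(zip(*rows))  # transpose: one tuple per item
    item_vars = sum(variance(col) for col in items)
    total_var = variance([sum(row) for row in rows])
    return (k / (k - 1)) * (1 - item_vars / total_var)

alpha = cronbach_alpha(scores)
# Alpha is high (near 1) when the items correlate strongly with each
# other; for these strongly correlated items it comes out around 0.97.
print(round(alpha, 2))
```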
Inter-rater reliability • At least 2 raters observe behavior • Extent to which raters agree in their observations • Are the raters consistent? • Requires some training in judgment Reliability
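Inter-rater agreement is often reported as Cohen's kappa, which corrects raw percent agreement for agreement expected by chance; a sketch with two hypothetical raters coding ten clips as funny ("F") or not ("N"):

```python
from collections import Counter

# Two raters' judgments of the same ten video clips (hypothetical data)
rater1 = ["F", "F", "N", "F", "N", "N", "F", "N", "F", "F"]
rater2 = ["F", "F", "N", "N", "N", "N", "F", "N", "F", "F"]

def cohens_kappa(a, b):
    """kappa = (observed agreement - chance agreement) / (1 - chance)."""
    n = len(a)
    p_obs = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    # Chance agreement: product of each rater's marginal proportions
    p_chance = sum((ca[c] / n) * (cb[c] / n) for c in set(a) | set(b))
    return (p_obs - p_chance) / (1 - p_chance)

kappa = cohens_kappa(rater1, rater2)
# 1.0 = perfect agreement; 0 = agreement no better than chance.
print(kappa)  # 0.8
```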
Does your measure really measure what it is supposed to measure? • There are many “kinds” of validity Validity
VALIDITY • Construct: Face, Criterion-oriented (Predictive, Concurrent), Convergent, Discriminant • Internal • External Many kinds of Validity
At the surface level, does it look as if the measure is testing the construct? “This guy seems smart to me, and he got a high score on my IQ measure.” Face Validity
Usually requires multiple studies, a large body of evidence that supports the claim that the measure really tests the construct Construct Validity
Did the change in the DV result from the changes in the IV or does it come from something else? • The precision of the results Internal Validity
Experimenter bias & reactivity • History – an event happens during the experiment • Maturation – participants get older (and change in other ways) • Selection – nonrandom selection may lead to biases • Mortality (attrition) – participants drop out or can't continue • Regression to the mean – extreme performance is often followed by performance closer to the mean • e.g., the SI cover jinx Threats to internal validity
Are experiments “real life” behavioral situations, or does the process of control place too many limitations on the “way things really work”? External Validity
Variable representativeness • Relevant variables for the behavior studied along which the sample may vary • Subject representativeness • Characteristics of the sample and target population along these relevant variables • Setting representativeness • Ecological validity – are the properties of the research setting similar to those outside the lab? External Validity