Defining and Measuring Variables
Definitions
Variables • Characteristics or conditions that change or have different values for different individuals • Age (young, old) • Gender (male, female) • Score (1-100) • Hours of studying (0-14)
What variables are in this research? Mr. Fox compared the effectiveness of alcohol and opium on the reaction speed of teenagers and adults to three types of insects.
Variables & Constructs • Variables are well defined, easily observed, and easily measured: age, time, gender, score. • Constructs are intangible, abstract attributes such as intelligence, guilt, motivation, self-esteem, academic achievement, and conservatism. • Such attributes are not directly observable, and the process of measuring them is more complicated; they are called constructs, or hypothetical constructs.
Operational Definition An operational definition specifies a measurement procedure (a set of operations) for measuring a construct.
Validity & Reliability Researchers have developed two general criteria for evaluating the quality of any measurement procedure: validity and reliability.
Validity • To establish validity, you must demonstrate that the measurement procedure is actually measuring what it claims to be measuring. • How do we know if these tests actually measure intelligence?
Face validity • Face validity is the simplest and least scientific definition of validity. • Face validity concerns the superficial appearance, or face value, of a measurement procedure. • For example, an IQ test ought to include questions that require logic, reasoning, background knowledge, and good memory.
Concurrent Validity • The scores obtained from the new measurement technique are directly related to the scores obtained from another, better-established procedure for measuring the same variable. Examples: • Teacher’s agreement • Standardized tests
Predictive Validity • When the measurements of a construct accurately predict behavior (according to the theory), the measurement procedure is said to have predictive validity. • Example: a medical science test that predicts passing the Medical Board Exam.
Construct Validity • A measure has construct validity if you can demonstrate that it matches what theories and other studies say about that variable. • Example: you would need to show that your measurement scores are in accordance with everything that is known about the construct “aggression.”
Convergent / Divergent • Convergent validity involves measuring the correlation between scores from two different tools measuring the same construct.
Convergent / Divergent • Divergent validity, on the other hand, involves demonstrating that we are measuring one specific construct and not combining two different constructs in the same measurement process: a self-esteem scale, for example, should be distinguishable from measures of IQ or math ability. (A small numeric sketch follows.)
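A minimal numeric sketch of the convergent/divergent idea, using Python with NumPy; the score arrays (two hypothetical self-esteem scales and a math test) are invented purely for illustration and are not from the slides. Convergent validity shows up as a high correlation between the two tools that claim to measure the same construct, while divergent validity shows up as a much weaker correlation with the unrelated construct.

```python
import numpy as np

# Hypothetical scores for 8 participants (illustration only)
scale_a = np.array([12, 15, 9, 20, 17, 11, 14, 18])   # self-esteem scale A
scale_b = np.array([13, 16, 10, 19, 18, 10, 15, 17])  # self-esteem scale B
math    = np.array([64, 62, 56, 63, 58, 55, 65, 57])  # math test (a different construct)

# Convergent validity: two tools measuring the SAME construct should correlate highly
r_convergent = np.corrcoef(scale_a, scale_b)[0, 1]

# Divergent validity: the tool should NOT correlate strongly with an unrelated construct
r_divergent = np.corrcoef(scale_a, math)[0, 1]

print(f"convergent r = {r_convergent:.2f}")  # high, close to +1 for this toy data
print(f"divergent  r = {r_divergent:.2f}")   # noticeably weaker for this toy data
```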
Summary: Types of Validity • Face validity • Content validity • Criterion-based validity (concurrent validity, predictive validity) • Construct validity (convergent, divergent)
Reliability A measurement procedure is said to have reliability if it produces identical (or nearly identical) results when it is used repeatedly to measure the same individual under the same conditions.
Three types of reliability • Successive measurements (test-retest, parallel-forms reliability) • Simultaneous measurements (inter-rater reliability) • Internal consistency (split-half reliability, Kuder-Richardson, and Cronbach’s alpha)
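As a concrete illustration of the internal-consistency family, here is a short sketch (a hypothetical example, not from the slides) that computes Cronbach's alpha from a small respondents-by-items score matrix using the standard formula α = k/(k−1) · (1 − Σ item variances / variance of total scores). The data and the helper name cronbach_alpha are assumptions made up for this example.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix."""
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 6 respondents answering 4 Likert-type items
scores = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [1, 2, 1, 2],
])

# Values around .70 or higher are conventionally read as acceptable internal consistency
print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")
```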
The Relationship Between Reliability and Validity • These two factors are partially related and partially independent. • Reliability is a prerequisite for validity. • However, the consistency of measurement is no guarantee of validity.
SCALES OF MEASUREMENT • In very general terms, measurement is a procedure for classifying individuals. The set of categories used for classification is called the scale of measurement. • Nominal: categories simply represent qualitative (not quantitative) differences in the variable measured. • Ordinal: a series of ranks, or verbal labels such as small, medium, and large. • Interval & Ratio: the categories are organized sequentially, and all categories are the same size.
How does each scale help? • Nominal (gender, ethnicity, major, religion) • Ordinal (age groups, grade level, ranks) • Interval (temperature in Fahrenheit or Celsius: yesterday 25°C = 77°F, today 50°C = 122°F, so “twice as hot” is not meaningful) • Ratio (time, counts, age, height, weight: 25 vs. 50 inches is 63.5 vs. 127 centimeters, so the 2:1 ratio is preserved across units)
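To make the interval-versus-ratio point explicit, the snippet below (a small illustration added here, not part of the original slides) shows that the ratio 50/25 is meaningless for Celsius temperatures because it disappears when the same two days are expressed in Fahrenheit, whereas the 2:1 ratio between two lengths survives conversion from inches to centimeters.

```python
# Interval scale: Celsius has no true zero, so ratios are not meaningful
c_yesterday, c_today = 25, 50
f_yesterday = c_yesterday * 9 / 5 + 32    # 77.0
f_today = c_today * 9 / 5 + 32            # 122.0
print(c_today / c_yesterday)              # 2.0   -> looks like "twice as hot"
print(f_today / f_yesterday)              # ~1.58 -> same two days, different ratio

# Ratio scale: length has a true zero, so ratios survive unit conversion
inches_a, inches_b = 25, 50
cm_a, cm_b = inches_a * 2.54, inches_b * 2.54   # 63.5, 127.0
print(inches_b / inches_a)                # 2.0
print(cm_b / cm_a)                        # 2.0 -> ratio preserved
```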
Modes of Measurement The external expressions of a construct are traditionally classified into three categories: • Self-report • Physiological • Behavioral
Self-report Advantage: no one knows more about individuals than the individuals themselves. Disadvantages: • A participant may deliberately lie to create a better self-image. • Responses may be subtly influenced by the presence of the researcher. • The wording of the questions. • Other aspects of the research situation.
Physiological Measures • Fear, for example, reveals itself through increased heart rate. • Brain imaging techniques such as positron emission tomography (PET). • https://www.youtube.com/watch?v=ZieFvCtMCc0
Advantage One advantage of physiological measures is that they are extremely objective.
Disadvantage • One disadvantage of such measures is that they typically require equipment that may be expensive or unavailable. • In addition, the presence of monitoring devices creates an unnatural situation that may cause participants to react differently. • Example: sleep monitoring.
Behavioral The behaviors may be completely natural events such as laughing, playing, eating, sleeping, arguing, or speaking. Disadvantages: bias, measurement errors.
Multiple Measures • One method of obtaining a more complete measure of a construct is to use two (or more) different procedures to measure the same variable. • For example, we could record both heart rate and behavior as measures of fear. • Compare this with qualitative measurements such as interviews, focus groups, etc.
Experimenter Bias Typically, a researcher knows the predicted outcome of a research study and is in a position to influence the results, either intentionally or unintentionally.
How? Even the most highly trained interviewers can bias results • by verbally reinforcing expected or desired responses • by paralinguistic cues (variations in tone of voice) that influence participants to give the expected or desired responses • by kinesthetic cues (body posture or facial expressions) • by misjudging participants’ responses (was it an accident, or did he mean it?) • by not recording participants’ responses
Participant Reactivity • If we observe or measure an inanimate object such as a table or a block of wood, we do not expect the object to have any response such as “Whoa! I’m being watched. I had better be on my best behavior.” • Unfortunately, this kind of reactivity can happen with human participants.
Four types of subjects Four different subject roles have been identified: • The good subject role (tries to give the responses the researcher wants). • The negativistic subject role (works against the researcher). • The apprehensive subject role (tries to appear socially desirable). • The faithful subject role (follows instructions faithfully for the sake of science).
Sensitivity and Range Effects In general, if we expect fairly small, subtle changes in a variable to be important, then the measurement procedure must be sensitive enough to detect the changes.
Example Which one is more sensitive? • Pass-Fail • A-B-C-D • 1-10 • 1-100
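A brief sketch of why granularity matters, assuming hypothetical scores of 72 before and 78 after extra studying (both numbers and the grading cut points are invented for illustration): the improvement is invisible on a pass/fail or letter-grade scale but is detected by the 1-100 scale.

```python
def pass_fail(score, cutoff=60):
    # Coarsest scale: only two categories
    return "Pass" if score >= cutoff else "Fail"

def letter_grade(score):
    # Illustrative cut points (an assumption, not a universal scheme)
    if score >= 90: return "A"
    if score >= 80: return "B"
    if score >= 70: return "C"
    if score >= 60: return "D"
    return "F"

before, after = 72, 78   # hypothetical scores before/after extra studying

print(pass_fail(before), pass_fail(after))        # Pass Pass -> change undetected
print(letter_grade(before), letter_grade(after))  # C C       -> change undetected
print(before, after)                              # 72 78     -> change detected on the 1-100 scale
```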