This text discusses the definitions, characteristics, and measurement of variables in research. It explores the concepts of validity, reliability, and different scales of measurement.
Variables
• Characteristics or conditions that change or have different values for different individuals
• Examples: age, gender, score, elapsed time
Variables in Research
Usually, researchers are interested in how variables are affected by different conditions or how variables differ from one group of individuals to another.
• How do depression scores change in response to therapy?
• How much difference is there in reading scores between third-grade and fourth-grade children?
Variables & Constructs
• Variables are well defined, easily observed, and easily measured: age, time, gender, score.
• Constructs are intangible, abstract attributes such as intelligence, motivation, or self-esteem.
Operational Definition
An operational definition specifies a measurement procedure (a set of operations) for measuring a construct.
Validity & Reliability
Researchers have developed two general criteria for evaluating the quality of any measurement procedure: validity and reliability.
Validity
• To establish validity, you must demonstrate that the measurement procedure is actually measuring what it claims to be measuring.
Types of Validity
1. Face validity
2. Criterion-based validity
   • Concurrent validity
   • Predictive validity
3. Construct validity
   • Convergent validity
   • Divergent validity
Face Validity
• Face validity is the simplest and least scientific definition of validity.
• It concerns the superficial appearance, or face value, of a measurement procedure.
Concurrent Validity
• The scores obtained from the new measurement technique are directly related to the scores obtained from another, better-established procedure for measuring the same variable.
• Examples: agreement with teachers' ratings; agreement with standardized tests.
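A minimal sketch of how such a relationship might be checked, reporting concurrent validity as the correlation between the new measure and the established one. The scores and variable names below are hypothetical, and the example assumes Python 3.10+ for the standard-library correlation function:

```python
# Minimal sketch: concurrent validity as the correlation between a new measure
# and an established one. All scores below are hypothetical.
from statistics import correlation  # available in Python 3.10+

new_test = [12, 15, 9, 20, 17, 11, 14]       # scores from the new procedure
established = [48, 55, 40, 70, 63, 45, 52]   # scores from the established test

r = correlation(new_test, established)       # Pearson r; values near 1.0 suggest
print(f"Concurrent validity coefficient r = {r:.2f}")  # strong agreement
```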
Predictive Validity
• When the measurements of a construct accurately predict behavior (according to the theory), the measurement procedure is said to have predictive validity.
• Example: a medical science test that predicts passing the Medical Board Exam.
Construct Validity
• A measure has construct validity if you can demonstrate that it matches what theories and other studies say about that variable.
• Example: you would need to study all the past research on aggression and show that the measurement procedure produces scores that behave in accordance with everything that is known about the construct "aggression."
Aggression
• You search for all the symptoms or questions that have been used in earlier research to measure aggression, and then you run a factor analysis to see which questions are not related to the construct.
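Below is a rough sketch of that idea using scikit-learn's FactorAnalysis on simulated item responses. The data and item structure are made up for illustration; the point is simply that items whose loadings fall near zero would be candidates for removal:

```python
# Rough sketch (hypothetical data): use factor analysis to check which
# questionnaire items relate to a single "aggression" construct.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_respondents, n_items = 200, 6
latent = rng.normal(size=(n_respondents, 1))          # unobserved aggression level

# Items 0-4 are built to reflect the latent factor; item 5 is unrelated noise.
loadings = np.array([[0.9, 0.8, 0.7, 0.8, 0.9, 0.0]])
responses = latent @ loadings + rng.normal(scale=0.5, size=(n_respondents, n_items))

fa = FactorAnalysis(n_components=1).fit(responses)
for i, loading in enumerate(fa.components_[0]):
    print(f"item {i}: loading = {loading:+.2f}")      # items near 0 don't fit the construct
```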
Convergent / Divergent Validity
• Convergent validity involves creating two different methods to measure the same construct, then showing a strong relationship between the measures obtained from the two methods.
• Divergent validity, on the other hand, involves demonstrating that we are measuring one specific construct and not combining two different constructs in the same measurement process, for example, showing that a self-esteem measure is not also picking up IQ or math ability.
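A minimal sketch of both kinds of evidence, with entirely made-up data: two different methods of measuring self-esteem should correlate strongly with each other (convergent evidence) but only weakly with a measure of an unrelated construct such as IQ (divergent evidence):

```python
# Hypothetical illustration of convergent vs. divergent evidence.
import numpy as np

rng = np.random.default_rng(1)
true_self_esteem = rng.normal(size=100)

questionnaire = true_self_esteem + rng.normal(scale=0.3, size=100)      # method 1
interview_rating = true_self_esteem + rng.normal(scale=0.3, size=100)   # method 2
iq_score = rng.normal(size=100)                                         # unrelated construct

print("convergent r:", round(np.corrcoef(questionnaire, interview_rating)[0, 1], 2))  # high
print("divergent  r:", round(np.corrcoef(questionnaire, iq_score)[0, 1], 2))          # near 0
```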
Reliability
• A measurement procedure is said to have reliability if it produces identical (or nearly identical) results when it is used repeatedly to measure the same individual under the same conditions.
Three Types of Reliability
• Successive measurements: test-retest and parallel-forms reliability
• Simultaneous measurements: inter-rater reliability
• Internal consistency: split-half reliability, Cronbach's alpha, and the Kuder-Richardson formulas
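As a concrete illustration of internal consistency, the sketch below computes Cronbach's alpha for a small, hypothetical set of item responses using the standard formula α = k/(k−1) · (1 − Σ item variances / variance of the total score):

```python
# Hypothetical sketch of Cronbach's alpha (rows = respondents, columns = items).
import numpy as np

scores = np.array([
    [4, 5, 4, 5],
    [2, 3, 2, 2],
    [5, 5, 4, 5],
    [3, 3, 3, 4],
    [1, 2, 2, 1],
])

k = scores.shape[1]                               # number of items
item_variances = scores.var(axis=0, ddof=1)       # variance of each item
total_variance = scores.sum(axis=1).var(ddof=1)   # variance of the summed scale score

alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print(f"Cronbach's alpha = {alpha:.2f}")          # values near 1 indicate consistency
```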
The Relationship Between Reliability and Validity
• These two factors are partially related and partially independent.
• Reliability is a prerequisite for validity.
• The consistency of measurement is no guarantee of validity.
Scales of Measurement
In very general terms, measurement is a procedure for classifying individuals. The set of categories used for classification is called the scale of measurement.
• Nominal: categories simply represent qualitative (not quantitative) differences in the variable measured.
• Ordinal: a series of ranks, or verbal labels such as small, medium, and large.
• Interval & Ratio: the categories are organized sequentially and all categories are the same size; a ratio scale additionally has a true zero point.
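A quick, hypothetical illustration of the four scale types and the kind of comparison each one supports (the example variables are arbitrary):

```python
# Hypothetical examples of the four scales and the comparisons each supports.
examples = {
    "nominal":  ("gender",             "categories only; can check equality"),
    "ordinal":  ("t-shirt size",       "ordered categories; can rank, but differences are not meaningful"),
    "interval": ("temperature in C",   "equal intervals; differences are meaningful, no true zero"),
    "ratio":    ("reaction time in s", "equal intervals and a true zero; ratios are meaningful"),
}

for scale, (variable, meaning) in examples.items():
    print(f"{scale:>8}: e.g. {variable:<18} -> {meaning}")
```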
Modes of Measurement
The external expressions of a construct are traditionally classified into three categories:
• Self-report
• Physiological
• Behavioral
Self-Report
Advantage
• No one knows more about an individual than the individual does.
Disadvantages
• A participant may deliberately lie to create a better self-image.
• Responses may be subtly influenced by the presence of the researcher, by the wording of the questions, and by other aspects of the research situation.
Physiological Measures
• Fear, for example, reveals itself through an increased heart rate.
• Brain imaging techniques such as positron emission tomography (PET) can also be used.
Advantage
• One advantage of physiological measures is that they are extremely objective.
Disadvantages
• One disadvantage of such measures is that they typically require equipment that may be expensive or unavailable.
• In addition, the presence of monitoring devices creates an unnatural situation that may cause participants to react differently.
• Example: a lie detector.
Behavioral Measures
The behaviors may be completely natural events such as laughing, playing, eating, sleeping, arguing, or speaking.
Multiple Measures
• One method of obtaining a more complete measure of a construct is to use two (or more) different procedures to measure the same variable.
• For example, we could record both heart rate and behavior as measures of fear.
Sensitivity and Range Effects
In general, if we expect fairly small, subtle changes in a variable, then the measurement procedure must be sensitive enough to detect the changes.
Example: Which scale is more sensitive? (See the sketch below.)
• Pass/Fail
• A-B-C-D
• 1-10
• 1-100
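A tiny, hypothetical sketch of the point: the same three-point improvement is invisible on a pass/fail or letter-grade scale but detectable on a 0-100 scale (the scores and cutoffs below are arbitrary):

```python
# Hypothetical before/after scores for one student on a 0-100 scale.
before, after = 72, 75

def pass_fail(score):           # coarsest scale: only two categories
    return "Pass" if score >= 60 else "Fail"

def letter_grade(score):        # slightly finer: four categories
    cutoffs = [(90, "A"), (80, "B"), (70, "C"), (0, "D")]
    return next(grade for cutoff, grade in cutoffs if score >= cutoff)

print(pass_fail(before), "->", pass_fail(after))        # Pass -> Pass  (no change detected)
print(letter_grade(before), "->", letter_grade(after))  # C -> C        (no change detected)
print(before, "->", after)                              # 72 -> 75      (change detected)
```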
Experimenter Bias
• Typically, a researcher knows the predicted outcome of a research study and is in a position to influence the results, either intentionally or unintentionally.
How? Even the most highly trained interviewers can bias results:
• through paralinguistic cues (variations in tone of voice) that influence participants to give the expected or desired responses
• through kinesthetic cues (body posture or facial expressions)
• through verbal reinforcement of expected or desired responses
Participant Reactivity
• If we observe or measure an inanimate object such as a table or a block of wood, we do not expect the object to have any response such as "Whoa! I'm being watched. I had better be on my best behavior."
• Unfortunately, this kind of reactivity can happen with human participants.
Four Types of Subjects
Four different subject roles have been identified:
• The good subject role (tries to give the researcher what the researcher wants)
• The negativistic subject role (works against the researcher)
• The apprehensive subject role (tries to respond in a socially desirable way)
• The faithful subject role (follows instructions faithfully, for the good of science)