
Chapter 9 Scaling, Reliability and Validity


Presentation Transcript


  1. Chapter 9 Scaling, Reliability and Validity

  2. Chapter Objectives • Know how and when to use the different forms of rating scales and ranking scales • Explain stability and consistency, and how they are established • Explain the different forms of validity • Discuss what ‘goodness’ of measures means, and why it is necessary to establish it in research

  3. Rating and Ranking Scales • Rating Scales • have several response categories and are used to elicit responses with regard to the object, event or person studied. • Ranking Scales • make comparisons between or among objects, events or persons, and elicit the preferred choices and ranking among them.

  4. Rating Scales • dichotomous scale • category scale • Likert scale • numerical scales • semantic differential scale • itemised rating scale • fixed or constant sum rating scale • Stapel scale • graphic rating scale • consensus scale

  5. Dichotomous Scale Used to elicit a Yes or No answer, eg: Do you own a car? Yes No

  6. Category Scales

  7. Likert Scale Indicate the extent to which you agree or disagree with the following statements (1 = strongly disagree, 5 = strongly agree): • My work is very interesting 1 2 3 4 5 • Life without my work would be dull 1 2 3 4 5
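
A quick sketch (not part of the original slides) of how Likert-type responses are typically summated into a single attitude score, assuming a 5-point scale; the third, negatively worded item and all response values are hypothetical:

```python
# Summated (Likert) scoring sketch: invented 1-5 responses; negatively worded
# items are reverse-coded so a high score always indicates a favourable attitude.
responses = {
    "My work is very interesting": 4,
    "Life without my work would be dull": 5,
    "I often find my work tedious": 2,  # hypothetical reverse-worded item
}
reverse_items = {"I often find my work tedious"}

def score(item: str, value: int, points: int = 5) -> int:
    """Reverse-code negatively worded items on a 1..points scale."""
    return (points + 1 - value) if item in reverse_items else value

total = sum(score(item, value) for item, value in responses.items())
print(f"Summated attitude score: {total} (possible range 3-15)")
```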

  8. Semantic Differential Scale Respondents rate the object on a set of bipolar adjective pairs, eg: Responsive / Unresponsive, Good / Bad, Timid / Courageous
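
An illustrative sketch, with invented ratings, of recoding a 7-point semantic differential so that the favourable pole always scores high:

```python
# Semantic differential scoring sketch: each pair is rated 1 (left adjective)
# to 7 (right adjective); pairs whose favourable pole is on the left are
# reverse-coded so 7 always represents the favourable end. Ratings are invented.
ratings = {
    ("Responsive", "Unresponsive"): 2,  # favourable pole on the left
    ("Good", "Bad"): 3,                 # favourable pole on the left
    ("Timid", "Courageous"): 5,         # favourable pole on the right
}
favourable_on_left = {("Responsive", "Unresponsive"), ("Good", "Bad")}

scores = {pair: (8 - r if pair in favourable_on_left else r) for pair, r in ratings.items()}
for (left, right), s in scores.items():
    print(f"{left} / {right}: {s}")
print(f"Overall attitude score: {sum(scores.values()) / len(scores):.2f}")
```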

  9. Numerical Scale How pleased are you with your new car? Extremely pleased 7 6 5 4 3 2 1 Extremely displeased

  10. Itemised Rating Scale Respondents choose from a set of ordered response categories; the example shown on this slide is an unbalanced rating scale which does not have a neutral point.

  11. Fixed or Constant Sum Rating Scale Respondents are asked to distribute a given number of points across various items, eg: Fragrance ___ Colour ___ Shape ___ Size ___ Texture of lather ___ Total points 100
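
A small sketch, not from the slides, of how constant sum data might be checked and converted to relative weights; the point values are invented:

```python
# Constant sum sketch: verify the allocation totals 100 points, then express
# each attribute as a relative importance weight. Values are illustrative only.
allocation = {"Fragrance": 30, "Colour": 20, "Shape": 10, "Size": 15, "Texture of lather": 25}

total = sum(allocation.values())
if total != 100:
    raise ValueError(f"Points must total 100, got {total}")

weights = {attribute: points / total for attribute, points in allocation.items()}
for attribute, weight in weights.items():
    print(f"{attribute}: {weight:.0%}")
```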

  12. Stapel Scale Measures the direction and intensity of the attitude towards the items under study, eg by rating each item on a numerical scale from +3 to -3 placed around it, with no neutral point.

  13. Graphic Rating Scale

  14. Ranking Scales • paired comparison • forced choice • comparative scale

  15. Paired Comparison • Respondents are asked to choose between two objects at a time; used when the number of objects is small. • The number of paired choices for n objects is n(n - 1)/2.
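
A small sketch, using hypothetical brand names, of generating the paired choices and confirming the n(n - 1)/2 count:

```python
# Paired comparison sketch: enumerate every pair of objects and check the count
# against n(n - 1)/2. The brand names are hypothetical.
from itertools import combinations

objects = ["Brand A", "Brand B", "Brand C", "Brand D"]
pairs = list(combinations(objects, 2))

n = len(objects)
assert len(pairs) == n * (n - 1) // 2  # 4 objects -> 6 paired choices
for first, second in pairs:
    print(f"Which do you prefer: {first} or {second}?")
```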

  16. Forced Choice Rank your preferences among the following magazines, 1 being your preferred choice and 5 being your least preferred: Australian Financial Review __ Business Review Weekly __ Playboy __ The Economist __ Time __
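
An illustrative sketch, with invented rankings from three respondents, of one common way to summarise forced-choice data (the mean rank per magazine; a lower mean rank means more preferred):

```python
# Forced choice summary sketch: average the ranks each magazine receives across
# respondents. The individual rankings below are invented for illustration.
rankings = [
    {"Australian Financial Review": 1, "Business Review Weekly": 3, "Playboy": 5, "The Economist": 2, "Time": 4},
    {"Australian Financial Review": 2, "Business Review Weekly": 1, "Playboy": 5, "The Economist": 3, "Time": 4},
    {"Australian Financial Review": 1, "Business Review Weekly": 2, "Playboy": 4, "The Economist": 3, "Time": 5},
]

magazines = rankings[0].keys()
mean_rank = {m: sum(r[m] for r in rankings) / len(rankings) for m in magazines}
for magazine, rank in sorted(mean_rank.items(), key=lambda kv: kv[1]):
    print(f"{magazine}: mean rank {rank:.2f}")
```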

  17. Comparative Scale In a volatile financial environment, compared with shares, how useful is it to invest in government bonds? More useful 1 2 3 4 5 Less useful (3 = about the same)

  18. Goodness of Measures • Reliability measures • How stable and consistent is the measuring instrument? • Validity measures • Are we measuring the right thing?

  19. Reliability and Validity in Target Shooting A tight cluster of shots illustrates reliability (consistency); shots centred on the bullseye illustrate validity (hitting what you aim at); only a tight cluster on the bullseye is both reliable and valid.

  20. Forms of Reliability and Validity

  21. Reliability • Stability • refers to the ability of a measure to remain stable over time, despite uncontrollable testing conditions or changes in the state of the respondents themselves • Internal consistency • indicates how well the items ‘hang together as a set’ and can independently measure the same concept, so that respondents attach the same overall meaning to each of the items

  22. Stability of Measures • Test-retest reliability • the reliability coefficient obtained by repeating the same measure on a second occasion • Parallel-form reliability • the correlation between responses to two comparable sets of measures (with changes in wording and question order) tapping the same construct
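
A minimal sketch, using invented scores, of how a test-retest (or parallel-form) reliability coefficient can be computed as the correlation between two administrations of the measure:

```python
# Test-retest sketch: the reliability coefficient is the correlation between
# scores from the same respondents on two occasions. Scores are invented.
import numpy as np

time_1 = np.array([12, 18, 15, 20, 9, 14])   # first administration
time_2 = np.array([13, 17, 16, 19, 10, 15])  # second occasion (or parallel form)

r = np.corrcoef(time_1, time_2)[0, 1]
print(f"Test-retest reliability coefficient: {r:.2f}")
```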

  23. Internal Consistency of Measures • Inter-item consistency reliability • a test of the consistency of respondents’ answers to all the items in a measure • usually tested by Cronbach’s coefficient alpha • Split-half reliability • reflects the correlation between two halves of an instrument
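
A minimal sketch, with invented item scores, of computing Cronbach's coefficient alpha and a split-half estimate (odd versus even items, with the Spearman-Brown correction):

```python
# Internal consistency sketch: Cronbach's alpha and split-half reliability for a
# respondents x items matrix of invented Likert-type scores.
import numpy as np

scores = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 4, 5],
    [3, 3, 2, 3],
    [4, 4, 5, 4],
], dtype=float)

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def split_half(items: np.ndarray) -> float:
    """Correlate odd- and even-item half scores, then apply Spearman-Brown."""
    half_a = items[:, ::2].sum(axis=1)
    half_b = items[:, 1::2].sum(axis=1)
    r = np.corrcoef(half_a, half_b)[0, 1]
    return 2 * r / (1 + r)

print(f"Cronbach's alpha: {cronbach_alpha(scores):.2f}")
print(f"Split-half reliability: {split_half(scores):.2f}")
```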

  24. Types of Validity
