Measuring Two Scales for Quality Performance Authors: Winston G. Lewis, Kit F. Pun and Terrence R.M. Lalla
Stages of Scale Development and Variable Measurement • Item development – generation of individual items • Scale development – manner in which items are combined to form scales • Scale evaluation – psychometric examination of the new measure
Stage 1: Item Generation • Items must adequately capture the specific domain of interest • Items must not contain extraneous content • Two approaches are used - deductive - inductive
Deductive Approach • Utilizes a classification schema • Requires an understanding of the phenomenon, gained through literature review • A theoretical definition of the construct is developed • This definition is used as a guide for item development
Inductive Approach • Little theory is involved at the onset • Researchers develop items based on ethnography
Stage 2: Scale Development Step 1 – Design of Developmental Study Step 2 – Scale Construction Step 3 – Reliability Assessment
Step 1: Design of Developmental Study • The researcher has identified a potential set of items for the construct(s) under consideration • Administration of these items is required to determine how well they confirm expectations about the structure of the measure
Administration of Scale Items • Adequate sample that is representative of the population - description, sampling, response rates, questionnaire administration • Wording of items e.g. reverse scoring • Number of items per measure • Scaling of items e.g. Likert scales • Sample size
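The wording concern above (reverse scoring) can be illustrated with a minimal sketch in Python with NumPy. The responses and the 5-point scale are hypothetical; on a scale from lo to hi, a response x is reverse-scored as (lo + hi) − x.

```python
import numpy as np

# Hypothetical responses to a negatively worded 5-point Likert item.
responses = np.array([1, 2, 4, 5, 3])

# Reverse scoring: a response x on a scale from lo to hi becomes (lo + hi) - x,
# so agreement with the negatively worded item aligns with the other items.
lo, hi = 1, 5
reversed_scores = (lo + hi) - responses
print(reversed_scores)  # [5 4 2 1 3]
```

Reverse-scored items help detect respondents who answer without reading, but all such items must be recoded before reliability or factor analysis.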
Step 2: Scale Construction • Involves data reduction and refining constructs • Two main techniques are used - Exploratory Factor Analysis - Confirmatory Factor Analysis
Exploratory Factor Analysis (EFA) • Technique used for uncovering the underlying structure (constructs) of a large set of items (variables) • Reduces a large set of variables to a smaller set of constructs • Easy to use • Useful when there are many survey questions • Provides a basis for other analyses e.g. regression analysis with factor scores • Easy to combine with other techniques e.g. confirmatory factor analysis
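A minimal sketch of one common EFA extraction approach (principal components of the item correlation matrix, with the Kaiser eigenvalue-greater-than-one retention criterion), using NumPy and synthetic data. The data, loadings, and sample size are all assumptions for illustration, not from the source.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic data: 200 respondents, 6 items driven by 2 latent factors.
n = 200
f = rng.normal(size=(n, 2))                      # latent factor scores
loadings_true = np.array([[0.8, 0.0], [0.7, 0.1], [0.9, 0.0],
                          [0.0, 0.8], [0.1, 0.7], [0.0, 0.9]])
items = f @ loadings_true.T + 0.4 * rng.normal(size=(n, 6))

# Correlation matrix of the items.
R = np.corrcoef(items, rowvar=False)

# Eigen-decomposition; the Kaiser criterion keeps factors with eigenvalue > 1.
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
n_factors = int(np.sum(eigvals > 1.0))

# Unrotated loadings: eigenvectors scaled by the square root of the eigenvalues.
loadings = eigvecs[:, :n_factors] * np.sqrt(eigvals[:n_factors])
print(n_factors)  # number of factors retained
```

In practice a rotation (e.g. varimax) would follow to make loadings interpretable, and items loading weakly on all factors would be candidates for deletion.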
Confirmatory Factor Analysis (CFA) • Seeks to statistically test the significance of an a priori specified theoretical model • Works best when you have measures that have been carefully developed and have been subjected to (and survived) EFA • Researcher specifies a certain number of constructs, which constructs are correlated, and which items measure each construct
Step 3: Reliability Assessment (1) • The degree to which the instrument measures the “true” value and is free from measurement error • A reliable measure provides consistent results when administered repeatedly to the same group of people • Usually considered part of the testing stage of a newly developed measure
Step 3: Reliability Assessment (2) • However, many researchers delete items to increase coefficient alpha values, so it is also considered part of the development stage • Two basic concerns - Internal consistency of items within a construct - Stability of the construct over time
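The second concern, stability over time, is typically checked with test-retest reliability: the correlation between scores from two administrations of the same scale. A minimal sketch with NumPy and hypothetical scale totals:

```python
import numpy as np

# Hypothetical scale totals for the same six respondents at two points in time.
time1 = np.array([20, 15, 18, 22, 17, 19])
time2 = np.array([19, 14, 18, 23, 16, 20])

# Test-retest reliability: Pearson correlation between the two administrations.
r = np.corrcoef(time1, time2)[0, 1]
print(round(r, 3))  # 0.97
```

A high correlation suggests the construct is stable over the retest interval; a low one may reflect either an unreliable measure or genuine change in the trait.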
Internal Consistency Reliability • Commonly called ‘inter-item reliability’ • Uses the Cronbach alpha coefficient • Cronbach’s alpha is computed from the number of items and the average correlation of each item with every other item • If alpha is less than 0.7, consider deleting weak items
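Cronbach's alpha can be computed directly from a respondents-by-items score matrix using its standard variance-based formula. A minimal sketch with NumPy; the Likert responses are hypothetical:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (respondents x items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the scale total
    # Standard formula: alpha = k/(k-1) * (1 - sum of item variances / total variance)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses: four respondents, three items.
scores = np.array([[4, 5, 4],
                   [2, 3, 2],
                   [5, 5, 4],
                   [3, 3, 3]])
print(round(cronbach_alpha(scores), 3))  # 0.962
```

Here alpha is well above the 0.7 threshold, so no item deletion would be indicated; note that alpha also rises mechanically with the number of items, so a long scale can look reliable even with weak inter-item correlations.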
Stage 3: Scale Evaluation The scale could be further evaluated by testing its validity
Validity • Extent to which a measure or set of measures correctly represents the concept of study • There are three types of validity • Content validity • Criterion-related validity • Construct validity
Content Validity • Adequacy with which a measure assesses the domain of interest • It is a judgment by experts, based on item content, of the extent to which a scale truly measures what is intended • Scales are based on theory derived from an extensive literature review, or • Utilize existing scales
Criterion Validity • Pertains to the relationship between a measure and another independent measure • Examines the empirical relationship between the scores on the test instrument (predictor) and an objective outcome (criterion) • A high multiple correlation coefficient between predictor and criterion indicates that the scale has criterion validity
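The multiple correlation coefficient mentioned above is the correlation between the criterion and its value predicted from the scale scores by regression. A minimal sketch with NumPy; the predictors, criterion, and coefficients are synthetic assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical predictors: two scale scores for 50 respondents.
X = rng.normal(size=(50, 2))
# Hypothetical criterion driven by both predictors plus noise.
y = 1.5 * X[:, 0] + 0.8 * X[:, 1] + rng.normal(scale=0.5, size=50)

# Ordinary least squares fit with an intercept column.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ coef

# Multiple correlation R: correlation between fitted and observed criterion values.
R = np.corrcoef(y_hat, y)[0, 1]
print(round(R, 3))  # a high R supports criterion validity
```

R squared from this fit is the usual coefficient of determination, i.e. the proportion of criterion variance the scale scores explain.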
Construct Validity • Concerned with the relationship of the measure to the underlying attributes it is attempting to assess • Provides psychometric evidence of convergent validity, discriminant validity, and trait and method effects
Convergent Validity • Correlations between measures of the same trait (construct) obtained using different methods (instruments) • Should be in the range of 0.85 to 0.95 or higher
Discriminant Validity • Correlations between measures of different constructs using the same instrument • Should be lower than the convergent validity coefficients
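The convergent/discriminant comparison above can be sketched with synthetic multitrait-multimethod style data in NumPy. The two traits, the two measurement methods, and the noise level are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
# Two hypothetical, independent latent traits for 100 respondents.
trait_a = rng.normal(size=n)
trait_b = rng.normal(size=n)

# Each measurement adds method-specific noise to the underlying trait.
a_method1 = trait_a + 0.3 * rng.normal(size=n)
a_method2 = trait_a + 0.3 * rng.normal(size=n)
b_method1 = trait_b + 0.3 * rng.normal(size=n)

# Convergent validity: same trait, different methods -> should be high.
convergent = np.corrcoef(a_method1, a_method2)[0, 1]
# Discriminant validity: different traits, same method -> should be low.
discriminant = np.corrcoef(a_method1, b_method1)[0, 1]

print(round(convergent, 3), round(discriminant, 3))
```

Evidence for construct validity requires both patterns at once: the convergent correlations must be high and must clearly exceed the discriminant correlations.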