Understanding methods to test for factorial invariance and its importance in comparing constructs across groups or time points. Learn about the Selection Theorem, the Classical Measurement Theorem, and the levels of invariance. Explore the models and equations used to evaluate invariance with SEM.
Factorial Invariance: Why It's Important and How to Test for It Todd D. Little University of Kansas Director, Quantitative Training Program Director, Center for Research Methods and Data Analysis Director, Undergraduate Social and Behavioral Sciences Methodology Minor Member, Developmental Psychology Training Program crmda.KU.edu Colloquium presented 5-24-2012 @ University of Turku, Finland Special Thanks to: Mijke Rhemtulla & Wei Wu crmda.KU.edu
Comparing Across Groups or Across Time • In order to compare constructs across two or more groups OR across two or more time points, the equivalence of measurement must be established. • This need is at the heart of the concept of Factorial Invariance. • Factorial Invariance is assumed in any cross-group or cross-time comparison • SEM is an ideal procedure to test this assumption.
Comparing Across Groups or Across Time • Meredith provides the definitive rationale for the conditions under which invariance will (or will not) hold: the Selection Theorem. • Note: Pearson originated the selection theorem at the turn of the 20th century.
Which posits: if the selection process affects only the true score variances of a set of indicators, invariance will hold.
Classical Measurement Theorem X_i = T_i + S_i + e_i where X_i is a person's observed score on an item, T_i is the 'true' score (i.e., what we hope to measure), S_i is the item-specific, yet reliable, component, and e_i is random error, or noise. Note that S_i and e_i are assumed to be normally distributed (with a mean of zero) and uncorrelated with each other. And, across all items in a domain, the S_i's are uncorrelated with each other, as are the e_i's.
Selection Theorem on Measurement Theorem: X_1 = T_1 + S_1 + e_1; X_2 = T_2 + S_2 + e_2; X_3 = T_3 + S_3 + e_3. [Diagram: the selection process operates on the true-score (T) components.]
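To make the Selection Theorem concrete, here is a minimal numpy sketch (hypothetical loadings, unique variances, and selection rule, not the data used later in this talk): when selection operates only on the common true-score component, the indicator covariances in the selected subgroup stay proportional to the product of the loadings, so the loading pattern is preserved even though the factor variance (and mean) changes.

import numpy as np

rng = np.random.default_rng(1)
n = 200_000
lam = np.array([0.57, 0.61, 0.63])      # hypothetical loadings
theta = np.array([0.10, 0.12, 0.10])    # hypothetical unique (S + e) variances

eta = rng.normal(0.0, 1.0, n)                        # common true-score factor
uniq = rng.normal(0.0, np.sqrt(theta), size=(n, 3))  # item-specific + error parts
X = eta[:, None] * lam + uniq                        # X_j = lambda_j * eta + unique_j

selected = eta > 0.5                    # selection acts ONLY on the true score
S_full = np.cov(X, rowvar=False)
S_sel = np.cov(X[selected], rowvar=False)

# Off-diagonal covariances equal lambda_j * lambda_k * Var(eta) in each group,
# so their ratios across the full and selected groups are (approximately) one constant:
print(S_sel[0, 1] / S_full[0, 1], S_sel[0, 2] / S_full[0, 2], S_sel[1, 2] / S_full[1, 2])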
Levels Of Invariance • There are four levels of invariance: 1) Configural invariance - the pattern of fixed & free parameters is the same. 2) Weak factorial invariance - the relative factor loadings are proportionally equal across groups. 3) Strong factorial invariance - the relative indicator means are proportionally equal across groups. 4) Strict factorial invariance - the indicator residuals are exactly equal across groups (this level is not recommended).
The Covariance Structures Model: Σ = ΛΨΛ′ + Θ, where... Σ = matrix of model-implied indicator variances and covariances, Λ = matrix of factor loadings, Ψ = matrix of latent variable (common factor) variances and covariances, Θ = matrix of unique factor variances (i.e., S + e; the unique covariances are usually 0)
The Mean Structures Model: μ = τ + Λα, where... μ = vector of model-implied indicator means, τ = vector of indicator intercepts, Λ = matrix of factor loadings, α = vector of factor (latent) means
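As a quick numerical illustration (one factor, three indicators, with hypothetical parameter values that only roughly echo the Positive Affect example later in the talk), the model-implied moments can be computed directly from these matrices:

import numpy as np

Lam = np.array([[0.57], [0.61], [0.63]])   # Lambda: factor loadings
Psi = np.array([[1.00]])                   # Psi: factor variance
Theta = np.diag([0.10, 0.12, 0.10])        # Theta: unique factor variances
tau = np.array([3.14, 2.99, 3.07])         # tau: indicator intercepts
alpha = np.array([0.00])                   # alpha: factor mean

Sigma = Lam @ Psi @ Lam.T + Theta          # model-implied covariance matrix (Sigma)
mu = tau + Lam @ alpha                     # model-implied indicator means (mu)

print(Sigma)
print(mu)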
Factorial Invariance • An ideal method for investigating the degree of invariance characterizing an instrument is multiple-group (or multiple-occasion) confirmatory factor analysis, also known as mean and covariance structures (MACS) modeling. • MACS models involve specifying the same factor model in multiple groups (occasions) simultaneously and then sequentially imposing a series of cross-group (or cross-occasion) constraints.
Some Equations • Configural invariance: same factor loading pattern across groups, no equality constraints. • Weak (metric) invariance: factor loadings proportionally equal across groups. • Strong (scalar) invariance: loadings & intercepts proportionally equal across groups. • Strict invariance: additionally, unique variances exactly equal across groups.
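A sketch of these nested constraint sets in equation form (LaTeX; the subscript g indexes groups or occasions), consistent with the covariance and mean structure models given above:

\begin{aligned}
\text{Configural:} \quad & \Sigma_g = \Lambda_g \Psi_g \Lambda_g' + \Theta_g, \qquad \mu_g = \tau_g + \Lambda_g \alpha_g \\
\text{Weak (metric):} \quad & \Lambda_1 = \Lambda_2 = \dots = \Lambda \\
\text{Strong (scalar):} \quad & \Lambda_g = \Lambda \ \text{and} \ \tau_1 = \tau_2 = \dots = \tau \\
\text{Strict:} \quad & \Lambda_g = \Lambda, \ \tau_g = \tau, \ \text{and} \ \Theta_1 = \Theta_2 = \dots = \Theta
\end{aligned}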
Models and Invariance • It is useful to remember that all models are, strictly speaking, incorrect. Invariance models are no exception. "...invariance is a convenient fiction created to help insecure humans make sense out of a universe in which there may be no sense." (Horn, McArdle, & Mason, 1983, p. 186).
Measured vs. Latent Variables • Measured (Manifest) Variables • Observable • Directly Measurable • A proxy for intended construct • Latent Variables • The construct of interest • Invisible • Must be inferred from measured variables • Usually ‘Causes’ the measured variables (cf. reflective indicators vs. formative indicators) • What you wish you could measure directly
Manifest vs. Latent Variables • “Indicators are our worldly window into the latent space” • John R. Nesselroade
Manifest vs. Latent Variables: [Path diagram: one latent factor ξ1 with variance ψ11 and loadings λ11, λ21, λ31 on indicators X1, X2, X3, each with unique variance θ11, θ22, θ33.]
Selection Theorem: [Path diagram: the same one-factor model (loadings λ11, λ21, λ31; unique variances θ11, θ22, θ33; indicators X1-X3) shown for Group (Time) 1 and Group (Time) 2, with the selection influence operating on the factor variance ψ11.]
Estimating Latent Variables • To solve for the parameters of a latent construct, it is necessary to set a scale (and make sure the parameters are identified). [Path diagram: latent factor ξ1 with variance ψ11, loadings λ11, λ21, λ31, and indicators X1-X3 with unique variances θ11, θ22, θ33.]
Scale Setting and Identification • Three methods of scale-setting (part of the identification process): • Arbitrary metric methods: 1) Fix the latent variance at 1.0 and the latent mean at 0 (reference-group method); 2) Fix a loading at 1.0 and that indicator's intercept at 0 (marker-variable method). • Non-arbitrary metric method: 3) Constrain the average of the loadings to 1 and the average of the intercepts to 0 (effects-coding method; Little, Slegers, & Card, 2006).
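A sketch of the three identification constraint sets in equation form (for a construct j with p indicators; the constraints are placed in a reference group, or carried by the equated parameters once invariance is imposed):

\begin{aligned}
\text{Reference-group (fixed factor):} \quad & \psi_{jj} = 1, \quad \alpha_j = 0 \\
\text{Marker-variable:} \quad & \lambda_{1j} = 1, \quad \tau_1 = 0 \\
\text{Effects-coding:} \quad & \frac{1}{p}\sum_{i=1}^{p} \lambda_{ij} = 1, \quad \frac{1}{p}\sum_{i=1}^{p} \tau_i = 0
\end{aligned}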
1. Fix the Latent Variance to 1.0 (and the Latent Mean to 0.0): [Path diagram: the latent variance ψ11 is fixed at 1.0*; loadings λ11, λ21, λ31 are freely estimated for indicators X1-X3 with unique variances θ11, θ22, θ33.]
2. Fix a Marker Variable to 1.0 (and its intercept to 0.0): [Path diagram: the marker loading λ11 is fixed at 1.0*; λ21, λ31 and the factor variance ψ11 are freely estimated; unique variances θ11, θ22, θ33.]
3. Constrain Loadings to Average 1.0 (and the intercepts to average 0.0): [Path diagram: the factor variance ψ11 is freely estimated; the loadings are constrained so that λ11 = 3 - λ21 - λ31; unique variances θ11, θ22, θ33.]
Configural invariance: [Path diagrams, both groups: two factors (variances fixed at 1*), each measured by three indicators, with all loadings, residuals, and the factor correlation freely estimated within each group. Group 1: loadings .57, .61, .63 and .63, .59, .60; residuals .10, .12, .10, .11, .10, .07; factor correlation -.07. Group 2: loadings .64, .66, .71, .59, .55, .57; residuals .11, .09, .07, .11, .07, .06; factor correlation -.32. A second build of the slide shows the pair of loadings .61/.66 changed to .51/.76.]
Weak factorial invariance (equate λs across groups): [Path diagram in LISREL notation: loadings LY(1,1)-LY(6,2) are estimated in Group 1 and constrained equal (=LY) in Group 2; factor variances PS(1,1), PS(2,2) are fixed at 1* in Group 1 but are now freed in Group 2; the factor covariance PS(2,1) and residuals TE(1,1)-TE(6,6) are estimated in both groups.]
F: Test of Weak Factorial Invariance (9.2.1.TwoGroup.Loadings.FactorID): [Path diagram, factor-identification method: equated loadings .58, .59, .64 (Positive: Great+Glad, Cheerful+Good, Happy+Super) and .62, .59, .61 (Negative: Terrible+Sad, Down+Blue, Unhappy+Bad); Group 1 factor variances fixed at 1*, Group 2 variances estimated at 1.22 and .85; factor correlations -.07 (Group 1) and -.33 (Group 2).] Model Fit: χ2(20, n=759)=49.0; RMSEA=.062 (.040-.084); CFI=.99; NNFI=.99
M: Test of Weak Factorial Invariance (9.2.1.TwoGroup.Loadings.MarkerID): [Path diagram, marker-variable method: marker loadings fixed at 1*, with the remaining equated loadings estimated at 1.02, 1.11 and .95, .97; factor variances (.33, .39, .41, .33) and factor correlations (-.03, -.12) freely estimated.] Model Fit: χ2(20, n=759)=49.0; RMSEA=.062 (.040-.084); CFI=.99; NNFI=.99
EF: Test of Weak Factorial Invariance (9.2.1.TwoGroup.Loadings.EffectsID): [Path diagram, effects-coding method: equated loadings estimated at .98, 1.00, 1.06, .96, 1.03, .97 (constrained to average 1.0 within each construct); factor variances (.36, .37, .44, .31) and factor correlations (-.03, -.12) freely estimated.] Model Fit: χ2(20, n=759)=49.0; RMSEA=.062 (.040-.084); CFI=.99; NNFI=.99
Results: Test of Weak Factorial Invariance • The results of the two-group model with equality constraints on the corresponding loadings provide a test of the proportional equivalence of the loadings. Nested significance test: (χ2(20, n=759) = 49.0) - (χ2(16, n=759) = 46.0) = Δχ2(4, n=759) = 3.0, p > .50. The difference in χ2 is non-significant and therefore the constraints are supported: the loadings are invariant across the two age groups. "Reasonableness" tests: RMSEA: weak invariance = .062 (.040-.084) versus configural = .069 (.046-.093); the two RMSEAs fall within one another's confidence intervals. CFI: weak invariance = .99 versus configural = .99; the CFIs are virtually identical (one rule of thumb is that ΔCFI <= .01 is acceptable). (9.2.TwoGroup.Loadings)
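The nested test and the ΔCFI check are easy to reproduce; a small scipy/Python sketch using the fit values reported above:

from scipy.stats import chi2

chi2_config, df_config = 46.0, 16   # configural model
chi2_weak, df_weak = 49.0, 20       # loadings equated (weak invariance)

delta_chi2 = chi2_weak - chi2_config          # 3.0
delta_df = df_weak - df_config                # 4
p_value = chi2.sf(delta_chi2, delta_df)       # about .56, i.e., p > .50
print(f"delta chi2({delta_df}) = {delta_chi2:.1f}, p = {p_value:.3f}")

# "Reasonableness" check: change in CFI (rule of thumb: <= .01 is acceptable)
cfi_config, cfi_weak = 0.99, 0.99
print("delta CFI =", round(cfi_config - cfi_weak, 3))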
Adding information about means • When we regress indicators onto constructs, we can also estimate the intercept of each indicator. • This information can be used to estimate the latent mean of a construct. • Equivalence of the indicator intercepts across groups is, in fact, a critical criterion to pass in order to say that one has strong factorial invariance.
Adding information about means: [Path diagram in LISREL notation: latent means AL(1), AL(2) and indicator intercepts TY(1)-TY(6) are added to the two-factor model (factor variances fixed at 1*), all regressed on the constant term.]
Adding information about means (9.3.0.TwoGroups.FreeMeans): [Path diagram: latent means fixed at 0* in both groups; indicator intercepts freely estimated. Group 1: 3.14, 2.99, 3.07, 1.70, 1.53, 1.55; Group 2: 3.07, 2.85, 2.98, 1.72, 1.58, 1.55.] Model Fit: χ2(20, n=759) = 49.0 (note that model fit does not change)
Strong factorial invariance (a.k.a. intercept or scalar invariance) - Factor Identification Method (9.3.1.TwoGroups.Intercepts.FactorID): [Path diagram: Group 1 latent variances fixed at 1* and latent means at 0*; Group 2 latent variances estimated at 1.22 (Positive) and .85 (Negative) and latent means at -.16 and .04; equated loadings .58, .59, .64, .62, .59, .61; equated intercepts 3.15, 2.97, 3.08, 1.70, 1.55, 1.54; factor correlations -.07 and -.33.] Model Fit: χ2(24, n=759) = 58.4, RMSEA = .061 (.041; .081), NNFI = .986, CFI = .989
Strong factorial invariance (a.k.a. intercept or scalar invariance) - Marker-Variable Identification Method (9.3.1.TwoGroups.Intercepts.MarkerID): [Path diagram: marker loadings fixed at 1* (remaining equated loadings 1.03, 1.11 and .95, .97) and marker intercepts fixed at 0* (remaining equated intercepts -.28, -.43 and -.06, -.12); latent means estimated as 3.15 and 1.70 in Group 1 and 3.06 and 1.72 in Group 2; latent variances .39, .33, .40, .33; factor correlations -.03 and -.12.] Model Fit: χ2(24, n=759) = 58.4, RMSEA = .061 (.041; .081), NNFI = .986, CFI = .989
Strong factorial invariance (a.k.a. intercept or scalar invariance) - Effects-Coding Identification Method (9.3.1.TwoGroups.Intercepts.EffectsID): [Path diagram: equated loadings .95, .98, 1.06, 1.03, .97, 1.00 constrained to average 1.0 per construct; equated intercepts .23, -.05, -.18 and .06, -.00, -.06 constrained to average 0; latent means estimated as 3.07 and 1.59 in Group 1 and 2.97 and 1.62 in Group 2; latent variances .37, .36, .44, .31; factor correlations -.03 and -.12.] Model Fit: χ2(24, n=759) = 58.4, RMSEA = .061 (.041; .081), NNFI = .986, CFI = .989
How Are the Means Reproduced? Indicator mean = intercept + loading(latent mean), i.e., mean of Y = intercept + slope(X). For Positive Affect then, Y = τ + λ(α):
Group 1 (7th grade): 3.14 ≈ 3.15 + .58(0); 2.99 ≈ 2.97 + .59(0); 3.07 ≈ 3.08 + .64(0).
Group 2 (8th grade): 3.07 ≈ 3.15 + .58(-.16) = 3.06; 2.85 ≈ 2.97 + .59(-.16) = 2.88; 2.97 ≈ 3.08 + .64(-.16) = 2.98.
Note: in the raw metric the observed difference would be -.10: 3.14 vs. 3.07 = -.07, 2.99 vs. 2.85 = -.14, and 3.07 vs. 2.97 = -.10, which gives an average of -.10 observed (i.e., averaging the indicator means: 3.07 - 2.96 = -.10).
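A small numpy sketch that reproduces this arithmetic with the equated Positive Affect estimates shown above (Group 1 latent mean fixed at 0; Group 2 latent mean estimated at -.16):

import numpy as np

tau = np.array([3.15, 2.97, 3.08])   # equated intercepts (Positive Affect)
lam = np.array([0.58, 0.59, 0.64])   # equated loadings
alpha = {"Group 1 (7th grade)": 0.00, "Group 2 (8th grade)": -0.16}

for group, a in alpha.items():
    # model-implied indicator means: tau + lambda * alpha
    print(group, np.round(tau + lam * a, 2))
# Group 1 -> [3.15 2.97 3.08]; Group 2 -> [3.06 2.88 2.98], as on the slide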
The complete model with means, std's, and r's (9.7.1.Phantom variables.With Means.FactorID): [Path diagram: lower-order factors Positive 1 and Negative 2 (variances 1*, means 0*) load on phantom constructs Positive 3 and Negative 4; the phantom paths (latent standard deviations) are fixed at 1.0* in Group 1 and estimated at 1.11 and .92 in Group 2; latent mean differences are estimated only in Group 2 (Group 1 = 0): -.16 (z=2.02) and .04 (z=0.53); equated loadings .58, .59, .64, .62, .59, .61; equated intercepts 3.15, 2.97, 3.08, 1.70, 1.54, 1.54; phantom-construct correlations -.07 and -.32.] Model Fit: χ2(24, n=759) = 58.4, RMSEA = .061 (.041; .081), NNFI = .986, CFI = .989
The complete model with means, std's, and r's (9.7.2.Phantom variables.With Means.MarkerID): [Path diagram: latent standard deviations estimated as .58 and .62 in Group 1 and .57 and .64 in Group 2; latent means 3.15 and 1.70 (Group 1) and 3.06 and 1.72 (Group 2); marker loadings fixed at 1* and marker intercepts at 0*, with the remaining loadings 1.03, 1.11, .95, .97 and intercepts -.28, -.43, -.06, -.12; phantom-construct correlations -.07 and -.32.] Model Fit: χ2(24, n=759) = 58.4, RMSEA = .061 (.041; .081), NNFI = .986, CFI = .989
The complete model with means, std's, and r's (9.7.3.Phantom variables.With Means.EffectsID): [Path diagram: latent standard deviations estimated as .60 and .61 in Group 1 and .56 and .67 in Group 2; latent means 3.07 and 1.59 (Group 1) and 2.97 and 1.62 (Group 2); effects-coded loadings .96, .98, 1.00, 1.03, .97, 1.06 and intercepts .23, -.05, -.18, .06, -.00, -.06; phantom-construct correlations -.07 and -.32.] Model Fit: χ2(24, n=759) = 58.4, RMSEA = .061 (.041; .081), NNFI = .986, CFI = .989
Effect size of latent mean differences: Cohen's d = (M2 - M1) / SDpooled, where SDpooled = √[(n1·Var1 + n2·Var2)/(n1+n2)]. The latent analogue is Latent d = (α2j - α1j) / √ψpooled, where √ψpooled = √[(n1·ψ1jj + n2·ψ2jj)/(n1+n2)]. For Positive Affect: dpositive = (-.16 - 0) / 1.05 = -.152, where √ψpooled = √[(380·1 + 379·1.22)/(380+379)].
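The same computation in Python, using the sample sizes and factor-identification estimates from the slides (Group 1 latent variance fixed at 1.0, Group 2 estimated at 1.22; latent means 0 and -.16 for Positive Affect):

import numpy as np

n1, n2 = 380, 379              # group sizes
alpha1, alpha2 = 0.00, -0.16   # latent means for Positive Affect
psi1, psi2 = 1.00, 1.22        # latent variances for Positive Affect

sd_pooled = np.sqrt((n1 * psi1 + n2 * psi2) / (n1 + n2))   # about 1.05
latent_d = (alpha2 - alpha1) / sd_pooled                   # about -.15

print(round(sd_pooled, 3), round(latent_d, 3))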
Comparing parameters across groups: 1. Configural invariance - inter-ocular/model-fit test. 2. Invariance of loadings - RMSEA/CFI difference test. 3. Invariance of intercepts - RMSEA/CFI difference test. 4. Invariance of the variance/covariance matrix - χ2 difference test. 5. Invariance of variances - χ2 difference test. 6. Invariance of correlations/covariances - χ2 difference test. 3b or 7. Invariance of latent means - χ2 difference test.
The 'Null' Model • The standard 'null' model assumes that all covariances are zero - only variances are estimated. • In longitudinal research, a more appropriate 'null' model is to assume that the variances of each corresponding indicator are equal at each time point and that their means (intercepts) are also equal at each time point (see Widaman & Thompson). • In multiple-group settings, a more appropriate 'null' model is to assume that the variances of each corresponding indicator are equal across groups and that their means are also equal across groups.
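A sketch of the two 'null' specifications in equation form (indicators j = 1, ..., p measured in occasions or groups t = 1, ..., T; this paraphrases the slide, not Widaman & Thompson's exact notation):

\begin{aligned}
\text{Standard null:} \quad & \Sigma_0 = \mathrm{diag}(\sigma_{11}, \dots, \sigma_{pp}), \quad \text{means free} \\
\text{Longitudinal / multiple-group null:} \quad & \Sigma_0 \ \text{diagonal, with } \sigma_{jj,t} = \sigma_{jj} \ \text{and} \ \mu_{j,t} = \mu_j \ \text{for all } t
\end{aligned}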
References
Byrne, B. M., Shavelson, R. J., & Muthén, B. (1989). Testing for the equivalence of factor covariance and mean structures: The issue of partial measurement invariance. Psychological Bulletin, 105, 456-466.
Cheung, G. W., & Rensvold, R. B. (1999). Testing factorial invariance across groups: A reconceptualization and proposed new method. Journal of Management, 25, 1-27.
Gonzalez, R., & Griffin, D. (2001). Testing parameters in structural equation modeling: Every "one" matters. Psychological Methods, 6, 258-269.
Kaiser, H. F., & Dickman, K. (1962). Sample and population score matrices and sample correlation matrices from an arbitrary population correlation matrix. Psychometrika, 27, 179-182.
Kaplan, D. (1989). Power of the likelihood ratio test in multiple group confirmatory factor analysis under partial measurement invariance. Educational and Psychological Measurement, 49, 579-586.
Little, T. D., Slegers, D. W., & Card, N. A. (2006). A non-arbitrary method of identifying and scaling latent variables in SEM and MACS models. Structural Equation Modeling, 13, 59-72.
MacCallum, R. C., Roznowski, M., & Necowitz, L. B. (1992). Model modification in covariance structure analysis: The problem of capitalization on chance. Psychological Bulletin, 111, 490-504.
Meredith, W. (1993). Measurement invariance, factor analysis and factorial invariance. Psychometrika, 58, 525-543.
Steenkamp, J.-B. E. M., & Baumgartner, H. (1998). Assessing measurement invariance in cross-national consumer research. Journal of Consumer Research, 25, 78-90.
Update Dr. Todd Little is currently at Texas Tech University Director, Institute for Measurement, Methodology, Analysis and Policy (IMMAP) Director, “Stats Camp” Professor, Educational Psychology and Leadership Email: yhat@ttu.edu IMMAP (immap.educ.ttu.edu) Stats Camp (Statscamp.org) www.Quant.KU.edu