
Structural Equation Modeling Using Mplus

Presentation Transcript


  1. Structural Equation Modeling Using Mplus Chongming Yang Research Support Center FHSS College

  2. Structural? • Structuralism • Components • Relations

  3. Objectives • Introduction to SEM • The model • Parameters • Estimation • Model evaluation • Applications • Estimate simple models with Mplus

  4. Continuous Dependent Variables Session I

  5. Information of a Variable • Mean • Variance • Skewness • Kurtosis

  6. Variance & Covariance
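
As a reminder, the standard sample definitions of variance and covariance, for variables X and Y with N observations, can be written as:

    \mathrm{Var}(X) = \frac{1}{N-1}\sum_{i=1}^{N}(x_i - \bar{x})^2, \qquad
    \mathrm{Cov}(X,Y) = \frac{1}{N-1}\sum_{i=1}^{N}(x_i - \bar{x})(y_i - \bar{y})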

  7. Covariance Matrix (S)
            x1      x2      x3
      x1    V1
      x2    Cov21   V2
      x3    Cov31   Cov32   V3

  8. Statistical Model • A probabilistic statement about relations among variables • An imperfect but useful representation of reality

  9. Structural Equation Modeling • A system of regression equations for latent variables to estimate and test direct and indirect effects without the influence of measurement errors. • To estimate and test theories about interrelations among observed and latent variables.

  10. Latent Variable (Construct / Factor / Trait) • A hypothetical variable • cannot be measured directly • No objective measurement unit • inferred from observable manifestations • Multiple manifestations (indicators) • Normally distributed interval dimension

  11. How Is Depression Distributed? • In BYU students • In patients in therapy

  12. Normal Distributions

  13. Levels of Analyses • Observed • Latent

  14. Test Theories • Classical True Score Theory: Observed Score = True score + Error • Item Response Theory • Generalizability (Raykov & Marcoulides, 2006)

  15. Graphic Symbols of SEM • Rectangle – observed variable • Oval -- latent variable or error • Single-headed arrow -- causal relation • Double-headed arrow -- correlation

  16. Graphic Measurement Model of a Latent Variable — path diagram: a latent variable ξ with loadings λ1, λ2, λ3 pointing to indicators X1, X2, X3, each with a measurement error δ1, δ2, δ3.

  17. Equations • Specific equations: X1 = λ1ξ + δ1; X2 = λ2ξ + δ2; X3 = λ3ξ + δ3 • Matrix symbols: X = Λξ + δ • True Score Theory?
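
A minimal Mplus input for this one-factor, three-indicator measurement model might look like the sketch below; the data file name, variable names, and factor name are placeholders rather than part of the original slides.

    TITLE:    One-factor measurement model (xi measured by x1-x3)
    DATA:     FILE IS example.dat;     ! hypothetical data file
    VARIABLE: NAMES ARE x1 x2 x3;
    MODEL:    xi BY x1 x2 x3;          ! loadings lambda1-lambda3; lambda1 is fixed at 1 by default
    OUTPUT:   SAMPSTAT STANDARDIZED;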

  18. Relations of Variances • VX1 = λ1²·Var(ξ) + θ1 • VX2 = λ2²·Var(ξ) + θ2 • VX3 = λ3²·Var(ξ) + θ3 • θ = measurement error / uniqueness
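
A hypothetical numeric illustration (the values are made up for this example): with λ2 = 0.8, Var(ξ) = 1.5, and θ2 = 0.4,

    V_{X_2} = \lambda_2^2\,\mathrm{Var}(\xi) + \theta_2 = 0.8^2 \times 1.5 + 0.4 = 1.36, \qquad
    R^2 = 0.96 / 1.36 \approx .71

so about 71% of the variance of X2 is explained by the latent variable and the rest is measurement error.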

  19. Unknown Parameters • λ1, λ2, λ3, Var(ξ), and θ1, θ2, θ3 in: VX1 = λ1²·Var(ξ) + θ1; VX2 = λ2²·Var(ξ) + θ2; VX3 = λ3²·Var(ξ) + θ3

  20. Sample Covariance Matrix (S)
            x1      x2      x3
      x1    V1
      x2    Cov21   V2
      x3    Cov31   Cov32   V3

  21. Variance of ξ • Variance of ξ = the covariance common to X1, X2, and X3 (path diagram: ξ with loadings λ1, λ2, λ3 to the three indicators)

  22. Unstandardized Parameterization (scaling) • λ1 = 1 (fix the loading of X1 at 1; X1 is called the reference indicator) • Variance of ξ = the variance common to X1, X2, and X3 • λ² × Variance of ξ = explained variance of X (R² = explained / total) • Variance of δ = unexplained variance (error) • Total variance = explained variance + δ variance
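
In Mplus this marker parameterization is the default: the first indicator listed after BY has its loading fixed at 1. A sketch, using the same placeholder names as above:

    MODEL:
      xi BY x1 x2 x3;    ! loading of x1 fixed at 1 (x1 = reference indicator);
                         ! loadings of x2, x3 and the variance of xi are estimated freely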

  23. Just Identified Model — the same three-indicator path diagram (ξ with loadings λ1, λ2, λ3 to X1, X2, X3 and errors δ1, δ2, δ3).

  24. Reference Indicator (marker) • Choose the conceptually best indicator • Small variance → non-convergence • Different markers → different parameter estimates and standard errors • Affects measurement invariance tests • Does not affect standardized estimates
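
If a different indicator is conceptually better as the marker, the Mplus default can be overridden; a hedged sketch under the same placeholder names:

    MODEL:
      xi BY x1* x2@1 x3;    ! free the loading of x1 (*) and fix the loading of x2 at 1,
                            ! so x2 becomes the reference indicator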

  25. Standardized Parameterization (scaling) • Variance of ξ = 1 (= the variance common to X1, X2, and X3) • Squared λ = explained variance of X (R²) • Variance of δ = 1 − λ² • Mean of ξ = 0 • Mean of δ = 0
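
The standardized parameterization corresponds to freeing all loadings and fixing the factor variance at 1; fully standardized estimates can also be requested in the output. A sketch under the same placeholder names:

    MODEL:
      xi BY x1* x2 x3;    ! free all loadings (the * frees the first one)
      xi@1;               ! fix the variance of xi at 1
    OUTPUT:
      STANDARDIZED;       ! print standardized estimates (STDYX)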

  26. Two Kinds of Parameters • Fixed at 0, 1, or other values • Freely estimated

  27. Structural Equation Model in Matrix Symbols • X = Λxξ + δ (exogenous measurement) • Y = Λyη + ε (endogenous measurement) • η = Bη + Γξ + ζ (structural model) Note: The measurement models reflect the true score theory.

  28. Structural Equation Model in Matrix Symbols • X = τx + Λxξ + δ (measurement) • Y = τy + Λyη + ε (measurement) • η = α + Bη + Γξ + ζ (structural) Note: SEM with mean structure.

  29. Model-Implied Covariance Matrix (Σ) Note: This covariance matrix is a function of the unknown parameters in the equations; (I − B) must be non-singular.
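
For the single-factor measurement model used above, the implied matrix takes the familiar factor-analytic form (the full expression for a model with B and Γ is more involved):

    \Sigma(\theta) = \Lambda\,\mathrm{Var}(\xi)\,\Lambda' + \Theta_{\delta}

so every element of Σ is a function of the loadings, the factor variance, and the error variances, and estimation amounts to choosing those parameters so that Σ reproduces S as closely as possible.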

  30. Estimation / Fit Functions • Hypothesis: Σ = S, i.e., Σ − S = 0 • Maximum likelihood: FML = log|Σ| + trace(SΣ⁻¹) − log|S| − (p + q), where p + q = the number of observed variables

  31. Convergence — Reaching the Limit • Minimize F while adjusting the unknown parameters through an iterative process • Convergence value: the difference in F between the last two iterations • Default convergence criterion = .0001 • Increase it to help convergence (e.g., .001 or .01), e.g. Analysis: convergence = .01;
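
A sketch of an ANALYSIS command that relaxes the convergence criterion and raises the iteration limit (the values shown are illustrative):

    ANALYSIS:
      ESTIMATOR = ML;
      CONVERGENCE = 0.01;    ! relax the default criterion of 0.0001
      ITERATIONS = 1000;     ! allow more iterations before stopping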

  32. No Convergence • No unique parameter estimates • Lack of degrees of freedom → under-identification • Variance of the reference indicator too small • Parameters that should be fixed are left freely estimated • Misspecified model

  33. Absolute Fit Index • χ² = F(N − 1) (N = sample size) • df = p(p+1)/2 − q, where p(p+1)/2 = the number of observed variances and covariances (plus the means when a mean structure is modeled) and q = the number of free parameters to be estimated • prob = ? (a nonsignificant χ² indicates good fit — why?)
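
For example, for the three-indicator model above without a mean structure: three observed variables give 3(3+1)/2 = 6 variances and covariances, and with λ1 fixed at 1 the free parameters are λ2, λ3, Var(ξ), θ1, θ2, θ3, so q = 6 and

    df = \frac{p(p+1)}{2} - q = 6 - 6 = 0

which is exactly the just-identified model of slide 23.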

  34. Sample Information
              x1      x2      x3      x4     …
      x1      v1
      x2      cov21   v2
      x3      cov31   cov32   v3
      x4      cov41   cov42   cov43   v4
      …
      Means   Mean1   Mean2   Mean3   Mean4  …
  Total information = p(p+1)/2 + number of means

  35. Absolute Fit — SRMR • Standardized Root Mean Square Residual • SRMR = the difference between observed and implied covariances in a standardized metric • Desirable when < .08, but there is no consensus on the cutoff

  36. Relative Fit: Relative to a Baseline (Null) Model • All unknown parameters are fixed at 0 • Variables are not related (all loadings, regressions, and covariances = 0) • Model-implied covariances = 0 • Fit this model to the sample covariance matrix S • Obtain χ²b, dfb (prob ≈ .0000)

  37. Relative Fit Indices • CFI = 1 − (χ² − df)/(χ²b − dfb), where b = baseline model (Comparative Fit Index; desirable ≥ .95, i.e., 95% better than the baseline model) • TLI = (χ²b/dfb − χ²/df) / (χ²b/dfb − 1) (Tucker–Lewis Index; desirable ≥ .90) • RMSEA = √[(χ² − df)/(n·df)] (Root Mean Square Error of Approximation; desirable ≤ .06; penalizes large models with more unknown parameters)
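
To make the formulas concrete, suppose (hypothetically) χ² = 15 with df = 10, a baseline χ²b = 300 with dfb = 15, and N = 200, so n = N − 1 = 199 under the χ² = F(N − 1) convention above:

    CFI = 1 - \frac{15-10}{300-15} \approx .982, \qquad
    TLI = \frac{300/15 - 15/10}{300/15 - 1} = \frac{18.5}{19} \approx .974, \qquad
    RMSEA = \sqrt{\frac{15-10}{199 \times 10}} \approx .050

All three values would suggest acceptable fit under the cutoffs listed on this slide.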

  38. Special Case A

  39. Special Case A • Assumption: x = ξ (exogenous variables measured without error) • y = Λyη + ε • η = α + Γx + ζ

  40. Special Case B

  41. Special Case B • Assumption: y = η (endogenous variables measured without error) • x = τx + Λxξ + δ • y = α + Γξ + ζ

  42. Other Special Cases of SEM • Confirmatory Factor Analysis (measurement model only) • Multiple & Multivariate Regression • ANOVA / MANOVA (multigroup CFA) • ANCOVA • Path Analysis Model (no latent variables) • Simultaneous Econometric Equations… • Growth Curve Modeling • …
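
Tying these cases back to the general model of slides 27–28, a hedged Mplus sketch of a simple full SEM (one exogenous and one endogenous factor; the data file and variable names are placeholders):

    TITLE:    Simple structural model: eta regressed on xi
    DATA:     FILE IS example.dat;     ! hypothetical data file
    VARIABLE: NAMES ARE x1-x3 y1-y3;
    MODEL:
      xi  BY x1 x2 x3;                 ! measurement model, exogenous factor
      eta BY y1 y2 y3;                 ! measurement model, endogenous factor
      eta ON xi;                       ! structural regression (gamma)
    OUTPUT:   STANDARDIZED MODINDICES;

Dropping the ON statement leaves a two-factor CFA (the factors are then simply correlated); replacing the BY statements with observed variables gives a path analysis model.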

  43. EFA vs. CFA

  44. Multiple Regression

  45. ANCOVA

  46. Multivariate Normality Assumption • The observed data are completely summarized by the covariance matrix S (plus the mean vector M); S is thus an estimator of the population covariance matrix Σ

  47. Consequences of Violation • Inflated χ² and deflated CFI and TLI → plausible models are rejected • Inflated standard errors • Attenuated factor loadings and relations among latent variables (structural parameters) (Cause: the sample covariances are underestimated)
