
Pre-Processing & Item Analysis

Learn the methods for pre-processing and item analysis to ensure accurate and meaningful data. Understand how to handle missing data and transform responses into usable numbers. Analyze response range and directionality for better inference.


Presentation Transcript


  1. DeShon - 2005 Pre-Processing & Item Analysis

  2. Pre-Processing • The method of pre-processing depends on the type of measurement instrument used • General issues: • Are responses within range? • Missing data • Item directionality • Scoring: transforming responses into numbers that are useful for the desired inference

  3. Checking response range • First step… • Make sure there are no observations outside the range of your measure. • If you use a 1-5 response measure, you can’t have a response of 6. • Histograms and summary statistics (min, max)
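A minimal R sketch of this first check, assuming a data frame d with an item item1 scored on a 1-5 scale (all names here are hypothetical):

    # Min and max should fall inside the legal 1-5 range
    summary(d$item1)
    range(d$item1, na.rm = TRUE)

    # Flag impossible values and treat them as missing
    bad <- which(d$item1 < 1 | d$item1 > 5)
    d$item1[bad] <- NA

    # Histogram for a visual check of the response distribution
    hist(d$item1, breaks = seq(0.5, 5.5, by = 1), main = "item1")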

  4. Reverse Scoring • Used when combining multiple measures (e.g., items) into a composite • All items should refer to the target trait in the same direction • Algorithm: reversed score = (highest scale score + 1) – raw score
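In R the algorithm is a one-liner; this sketch assumes a 1-5 response scale and a hypothetical negatively keyed item item3:

    high <- 5                          # highest scale score
    d$item3_r <- (high + 1) - d$item3  # maps 1->5, 2->4, ..., 5->1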

  5. Missing Data • Huge issue in most behavioral research! • Key issues: • Why are the data missing? Planned, missing randomly, response bias? • What’s the best analytic strategy with missing data? Statistical power, biased results • Collins, L. M., Schafer, J. L., & Kam, C. M. (2001). A comparison of inclusive and restrictive strategies in modern missing data procedures. Psychological Methods, 6, 330-351. • Schafer, J. L., & Graham, J. W. (2002). Missing data: Our view of the state of the art. Psychological Methods, 7, 147-177.

  6. Causes of Missing Data • Common in social research: • nonresponse, loss to follow-up • lack of overlap between linked data sets • social processes (dropping out of school, graduation, etc.) • survey design (“skip patterns” between respondents)

  7. Missing Data • Step 1: Do everything ethically feasible to avoid missing data during data collection • Step 2: Do everything ethically possible to recover missing data • Step 3: Examine amount and patterns of missing data • Step 4: Use statistical models and methods that replace missing data or are unaffected by missing data

  8. Missing Data Mechanisms • Missing Completely at Random (MCAR) • Missing at Random (MAR) • Not Missing at Random (NMAR) • Notation: X = variables not subject to nonresponse (e.g., age); Y = variables subject to nonresponse (e.g., income)

  9. Missing Completely at Random • MCAR • Probability of response is independent of X & Y • Ex: Probability that income is recorded is the same for all individuals regardless of age or income

  10. Missing at Random • MAR • Probability of response is dependent on X but not Y • Probability of missingness does not depend on unobserved information • Ex: Probability that income is recorded varies according to age but it is not related to income within a particular age group

  11. Not Missing at Random • NMAR • Probability of missingness does depend on unobserved information • Ex: Probability that income is recorded varies according to income and possibly age
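A small simulation can make the three mechanisms concrete. This R sketch uses the slides’ age/income running example; all distributions and coefficients are invented for illustration:

    set.seed(1)
    n      <- 1000
    age    <- round(runif(n, 20, 70))
    income <- 20000 + 800 * age + rnorm(n, sd = 10000)

    # MCAR: every case equally likely to be missing
    inc_mcar <- ifelse(runif(n) < 0.2, NA, income)

    # MAR: missingness depends on observed age, not on income itself
    inc_mar  <- ifelse(runif(n) < plogis(-4 + 0.06 * age), NA, income)

    # NMAR: missingness depends on the unobserved income value
    inc_nmar <- ifelse(runif(n) < plogis(-6 + income / 15000), NA, income)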

  12. How can you tell? • Look for patterns • Run a logistic regression with your IVs predicting a dichotomous missingness indicator (1 = missing; 0 = nonmissing):

    Logistic regression coefficients:
                     Estimate Std. Error z value Pr(>|z|)
    (Intercept)     -5.058793   0.367083 -13.781  < 2e-16 ***
    NEW.AGE          0.181625   0.007524  24.140  < 2e-16 ***
    SEXMale         -0.847947   0.131475  -6.450 1.12e-10 ***
    DVHHIN94         0.047828   0.026768   1.787   0.0740 .
    DVSMKT94        -0.015131   0.031662  -0.478   0.6327
    NEW.DVPP94 = 0   0.233188   0.226732   1.028   0.3037
    NUMCHRON        -0.087992   0.048783  -1.804   0.0713 .
    VISITS           0.012483   0.006563   1.902   0.0572 .
    NEW.WT6         -0.043935   0.077407  -0.568   0.5703
    NEW.DVBMI94     -0.015622   0.017299  -0.903   0.3665
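The printout above has the shape of R’s summary(glm()) output; a hedged sketch of the call that would produce it, with the modeled variable y invented (the predictor names follow the printout):

    d$miss <- as.numeric(is.na(d$y))   # y: variable whose missingness we model
    fit <- glm(miss ~ NEW.AGE + SEX + DVHHIN94 + DVSMKT94 + NEW.DVPP94 +
                 NUMCHRON + VISITS + NEW.WT6 + NEW.DVBMI94,
               family = binomial, data = d)
    summary(fit)   # strong predictors of missingness argue against MCAR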

  13. Missing Data Mechanisms • If MAR or MCAR, the missing data mechanism is ignorable for full information likelihood-based inferences • If MCAR, the mechanism is also ignorable for sampling-based inferences (OLS regression) • If NMAR, the mechanism is nonignorable – thus any statistic could be biased

  14. Missing Data Methods • Always bad methods: • Listwise deletion • Pairwise deletion (a.k.a. available-case analysis) • Person or item mean replacement • Often good methods: • Regression replacement • Full-Information Maximum Likelihood (FIML; as in SEM software, which must be given the full raw data set) • Multiple Imputation

  15. Listwise Deletion • Assumes that the data are MCAR • Only defensible for small amounts of missing data • Can lower power substantially • Inefficient • Now very rarely justified • Don’t do it!
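For reference, listwise deletion in base R is na.omit(); this sketch only makes the cost visible (d is a hypothetical data frame):

    d_complete <- na.omit(d)     # drops every row with any missing value
    nrow(d) - nrow(d_complete)   # cases, and power, lost to deletion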

  16. FIML - AMOS

  17. Imputation-based Procedures • Missing values are filled-in and the resulting “Completed” data are analyzed • Hot deck • Mean imputation • Regression imputation • Some imputation procedures (e.g., Rubin’s multiple imputation) are really model-based procedures.

  18. Mean Imputation • Technique: • Calculate the mean over cases that have values for Y • Impute this mean where Y is missing • Ditto for X1, X2, etc. • Implicit models: Y = μY, X1 = μ1, X2 = μ2 • Problems: • Ignores relationships among X and Y • Underestimates covariances
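A base-R sketch of mean imputation, shown only to illustrate the problem the slide names (d$y is hypothetical):

    y_imp <- ifelse(is.na(d$y), mean(d$y, na.rm = TRUE), d$y)

    # Imputed cases sit exactly at the mean, so the filled-in variable
    # has less spread than the observed data:
    var(d$y, na.rm = TRUE)   # observed-case variance
    var(y_imp)               # smaller: variability is understated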

  19. Regression Imputation • Technique & implicit models: • If Y is missing, impute the mean of cases with similar values for X1, X2: Y = b0 + X1 b1 + X2 b2 • Likewise, if X2 is missing, impute the mean of cases with similar values for X1, Y: X2 = g0 + X1 g1 + Y g2 • If both Y and X2 are missing, impute means of cases with similar values for X1: Y = d0 + X1 d1 and X2 = f0 + X1 f1 • Problem: ignores the random component (no e term), so variances and standard errors are underestimated
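A sketch of the Y-missing case in R, with the slide’s caveat built in: predict() returns conditional means with no error term e, so the imputed values are too regular (all names hypothetical):

    fit_y     <- lm(y ~ x1 + x2, data = d)            # fit on complete cases
    need      <- is.na(d$y) & complete.cases(d[, c("x1", "x2")])
    d$y[need] <- predict(fit_y, newdata = d[need, ])  # conditional means, no e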

  20. Little and Rubin’s Principles • Imputations should be • Conditioned on observed variables • Multivariate • Draws from a predictive distribution • Single imputation methods do not provide a means to correct standard errors for estimation error.

  21. Multiple Imputation • Context: multiple regression (in general) • Missing values are replaced with “plausible” substitutes drawn from a distribution or model • Construct m > 1 simulated complete versions of the data • Analyze each of the m simulated complete data sets by standard methods • Combine the m estimates; get confidence intervals using Rubin’s rules (micombine) • ADVANTAGE: sampling variability is taken into account by restoring error variance
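In R, the mice package implements exactly this impute-analyze-combine cycle; a minimal sketch (the data frame d and the regression formula are illustrative):

    library(mice)

    imp  <- mice(d, m = 5, seed = 1)      # m = 5 imputed data sets
    fits <- with(imp, lm(y ~ x1 + x2))    # analyze each completed data set
    summary(pool(fits))                   # combine estimates via Rubin's rules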

  22. Multiple Imputation (Rubin, 1987, 1996) • Point estimate: the average of the m per-imputation estimates • Total variance = within-imputation variance + (1 + 1/m) × between-imputation variance
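A hand-rolled sketch of these rules in R (mice::pool and SAS PROC MIANALYZE do this for you), taking the m per-imputation estimates and their standard errors:

    pool_rubin <- function(est, se) {
      m    <- length(est)
      qbar <- mean(est)              # pooled point estimate
      ubar <- mean(se^2)             # within-imputation variance
      b    <- var(est)               # between-imputation variance
      tot  <- ubar + (1 + 1/m) * b   # total variance, Rubin's correction
      c(estimate = qbar, se = sqrt(tot))
    }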

  23. Another View • IMPUTATION: Impute the missing entries of the incomplete data set M times, resulting in M complete data sets. • ANALYSIS: Analyze each of the M completed data sets using weighted least squares. • POOLING: Integrate the M analysis results into a final result. Simple rules exist for combining the M analyses. [Flow diagram: incomplete data → imputation → imputed data → analysis → analysis results → pooling → final results]

  24. How many Imputations? • Usually about 5 • Efficiency of an estimate: (1 + γ/m)^-1, where γ = fraction of missing information and m = number of imputations • If 30% of information is missing: 3 imputations → 91% efficiency, 5 imputations → 94%, 10 imputations → 97%
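The slide’s percentages can be checked directly in R:

    eff <- function(gamma, m) 1 / (1 + gamma / m)
    round(eff(0.30, c(3, 5, 10)), 2)   # 0.91 0.94 0.97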

  25. Imputation in SAS • PROC MI • By default generates 5 imputed values for each missing value • Imputation method: MCMC (Markov chain Monte Carlo) • The EM algorithm determines initial values • MCMC repeatedly simulates the distribution of interest, from which the imputed values are drawn • Assumption: data follow a multivariate normal distribution • PROC REG • Fits five weighted linear regression models to the five complete data sets obtained from PROC MI (via the BY _imputation_ statement) • PROC MIANALYZE • Reads the parameter estimates and associated covariance matrix from the analyses performed on the multiply imputed data sets and derives valid statistics for the parameters

  26. Example • Case 1 is missing weight • Given case 1’s sex and age, generate a plausible distribution for case 1’s weight • At random, sample 5 (or more) plausible weights for case 1 • Impute Y! • For case 6, sample from the conditional distribution of age • Use Y to impute X! • For case 7, sample from the conditional bivariate distribution of age & weight

  27. Example • SAS code for the imputation and per-imputation analysis steps:

    proc mi data=missing_weight_age out=weight_age_mi;
      var years_over_20 weight maleness;
    run;

    proc reg data=weight_age_mi;
      model weight = maleness years_over_20;
      by _imputation_;
    run;

  28. Example – Standard Errors • Total variance in b0 = variation due to sampling + variation due to imputation: Mean(s²b0) + Var(b0) • Actually, there’s a correction factor of (1 + 1/M) for the number of imputations M (here M = 5) • So total variance in estimating b0 is Mean(s²b0) + (1 + 1/M) Var(b0) = 179.53 + (1.2)(511.59) = 793.44 • Standard error is √793.44 = 28.17

  29. Example • SAS code and output for the pooling step:

    proc mianalyze data=parameters;
      var intercept maleness years_over_20;
    run;

    Multiple Imputation Parameter Estimates
    Parameter       Estimate    Std Error  95% Confidence Limits     DF
    intercept      178.564526   28.168160   70.58553  286.5435   2.2804
    maleness        67.110037   14.801696   21.52721  112.6929   3.1866
    years_over_20   -0.960283    0.819559   -3.57294    1.6524   2.9910

• Other software: www.stat.psu.edu/~jls/misoftwa.html

  30. Item Analysis • Relevant for tests with a right/wrong answer • Score each item so that 1 = right and 0 = wrong • Where do the answers come from? • Rational analysis • Empirical keying
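A sketch of such scoring in R against a rational key (the key and responses are invented):

    key <- c("B", "D", "A", "C")                    # correct answers
    responses <- rbind(c("B", "D", "A", "A"),       # examinee 1
                       c("C", "D", "A", "C"))       # examinee 2
    scored <- 1 * sweep(responses, 2, key, "==")    # 1 = right, 0 = wrong
    rowSums(scored)                                 # total test scores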

  31. Item Analysis • Goals of item analysis: • Determine the extent to which the item is useful in differentiating individuals with respect to the focal construct • Improve the measure for future administrations

  32. Item Analysis • Item analysis provides information on the effectiveness of the individual items, to guide their use in future administrations

  33. Typology of Item Analysis • Item analysis divides into two families: • Classical (classical test theory) • Item response theory: Rasch (one-parameter), IRT2 (two-parameter), IRT3 (three-parameter)

  34. Item Analysis • Classical analysis is the easiest and most widely used form of analysis • The statistics can be computed by generic statistical packages (or by hand) and need no specialist software • The item statistics apply only to that group of testees on that collection of items • Sample dependent!

  35. Classical Item Analysis • Item difficulty = proportion correct (1 = correct; 0 = wrong); the higher the proportion, the easier the item • In general, you need a wide range of item difficulties to cover the range of the trait being assessed • For a mastery test, item difficulties should cluster around the cut score • Very easy (p = 1.0) or very hard (p = 0.0) items are useless • Item variance p(1 - p) is greatest at p = .5
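With items scored 0/1 in a matrix scored (one row per examinee, as sketched above), difficulty in R is just a column mean:

    p <- colMeans(scored, na.rm = TRUE)   # proportion correct per item
    p                                     # near 1 = easy, near 0 = hard
    p * (1 - p)                           # item variance, maximal at p = .5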

  36. Classical Item Analysis • Item discrimination – two methods: • Difference in proportion correct between the high and low total-score groups (conventionally the top and bottom 27%) • Item-total correlation (output by Cronbach’s alpha routines) • No negative discriminators: check the key or drop the item • Zero discriminators are not useful • Item difficulty and discrimination are interdependent
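A base-R sketch of the corrected item-total correlation for a 0/1 scored matrix with many examinees; the item is removed from the total so it does not correlate with itself:

    total <- rowSums(scored)
    disc  <- sapply(seq_len(ncol(scored)), function(j)
      cor(scored[, j], total - scored[, j]))
    disc   # negative: check the key or drop; near zero: not useful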

  37. Classical Item Analysis
