
Data Forensics: A Compare and Contrast Analysis of Multiple Methods




Presentation Transcript


  1. Data Forensics: A Compare and Contrast Analysis of Multiple Methods Christie Plackner

  2. Outlier Score • Applied to most of the methods • Statistical probabilities were transformed into a score ranging from 0 to 50 • A score of 10 indicates a statistically unusual result
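
The slides do not give the exact transformation, but a minimal sketch, assuming the score is the negative log10 of each method's probability capped at 50 (so a score of 10 would correspond to p = 1e-10), could look like this; the function name and cap are illustrative:

```python
import numpy as np

def outlier_score(p_value: float) -> float:
    """Map a statistical probability onto a 0-50 outlier score.

    Hypothetical mapping: smaller probabilities yield larger scores,
    capped at 50. Under this assumption a score of 10 corresponds
    to p = 1e-10.
    """
    p = max(p_value, 1e-50)               # floor at the cap; avoids log(0)
    return float(min(50.0, -np.log10(p)))

print(outlier_score(1e-10))  # 10.0 -> statistically unusual
print(outlier_score(0.05))   # ~1.3 -> unremarkable
```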

  3. Erasure Analysis • Wrong-to-right (WR) erasure rate higher than expected from random events • The baseline for the erasure analysis is the state average • One-sample t-test
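
A minimal sketch of this test, assuming each school's per-student WR erasure rates are compared to the state-average rate with a one-sample t-test; the data and the one-sided decision rule are illustrative. The same t-test pattern underlies the scale score change and subject regression methods below.

```python
import numpy as np
from scipy import stats

# Hypothetical data: per-student WR erasure rates in one school.
wr_rates = np.array([0.02, 0.05, 0.00, 0.08, 0.03, 0.10, 0.04])
state_avg = 0.03  # baseline: state-average WR erasure rate

# One-sample t-test: is the school's mean WR rate above the state average?
t_stat, p_two_sided = stats.ttest_1samp(wr_rates, popmean=state_avg)
p_one_sided = p_two_sided / 2 if t_stat > 0 else 1 - p_two_sided / 2

print(f"t = {t_stat:.2f}, one-sided p = {p_one_sided:.3f}")
```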

  4. Scale Score Changes • Scale score changes that are statistically higher or lower than in the previous year • Cohort and non-cohort comparisons • One-sample t-test

  5. Performance Level Changes • Large changes in the proportions of students at each performance level across years • Cohort and non-cohort comparisons • Log odds ratio • adjusted to accommodate small sample sizes • z-test
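
A sketch of the log odds ratio test, assuming a 2x2 table of students at or above versus below a given performance level in two adjacent years; adding 0.5 to every cell is one standard small-sample adjustment (Haldane-Anscombe), and the counts are hypothetical.

```python
import math

# Hypothetical counts: at/above vs. below proficiency in two years.
a, b = 60, 40   # year 1: at/above, below
c, d = 85, 15   # year 2: at/above, below

# Add 0.5 to every cell to accommodate small (or zero) counts.
a, b, c, d = a + 0.5, b + 0.5, c + 0.5, d + 0.5

log_or = math.log((c * b) / (a * d))          # change in odds across years
se = math.sqrt(1/a + 1/b + 1/c + 1/d)         # standard error of the log OR
z = log_or / se

print(f"log OR = {log_or:.2f}, z = {z:.2f}")  # |z| > 1.96 -> unusual at alpha = .05
```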

  6. Measurement Model Misfit • Schools performed better or worse than the measurement model expects • Rasch residuals summed across operational items • Adjusted for unequal school sizes
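
A sketch of the misfit statistic for dichotomous items under the Rasch model; the ability and difficulty estimates are synthetic, and dividing by the square root of the number of student-item pairs is one plausible (assumed) adjustment for unequal school sizes.

```python
import numpy as np

def school_misfit(responses, thetas, b_params):
    """Sum Rasch residuals (observed - expected) over operational items.

    responses: (students x items) 0/1 matrix for one school
    thetas:    student ability estimates
    b_params:  item difficulty estimates
    """
    expected = 1.0 / (1.0 + np.exp(-(thetas[:, None] - b_params[None, :])))
    residuals = responses - expected
    # Assumed size adjustment: standardize by student-item pairs.
    return residuals.sum() / np.sqrt(responses.size)

rng = np.random.default_rng(0)
thetas = rng.normal(size=200)                  # 200 students
b = rng.normal(size=40)                        # 40 operational items
p = 1.0 / (1.0 + np.exp(-(thetas[:, None] - b[None, :])))
x = (rng.random((200, 40)) < p).astype(float)  # model-consistent responses
print(school_misfit(x, thetas, b))             # near 0 when the model fits
```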

  7. Subject Regression • Large deviations from expected scores • Within year: reading and mathematics • Across year: a cohort's scores within a subject • One-sample t-test
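
A sketch of the within-year variant, assuming a statewide regression of mathematics scores on reading scores, with each school's mean residual then tested against zero; all data are synthetic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
reading = rng.normal(500, 50, size=300)                 # synthetic scores
math_sc = 0.8 * reading + rng.normal(0, 20, size=300)

# Statewide regression: predict mathematics from reading.
slope, intercept, *_ = stats.linregress(reading, math_sc)
residuals = math_sc - (intercept + slope * reading)

# For one school (illustratively the first 30 students), test whether
# its mean deviation from the expected scores differs from zero.
t_stat, p_val = stats.ttest_1samp(residuals[:30], popmean=0.0)
print(f"t = {t_stat:.2f}, p = {p_val:.3f}")
```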

  8. Modified Jacob and Levitt • The only method that does not produce a score for each school • Combination of two indicators: • unexpected test score fluctuations across years using a cohort of students, and • unexpected patterns in student answers • Modified application of Jacob and Levitt (2003) to accommodate two years of data and smaller sample sizes
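
Following Jacob and Levitt's (2003) approach of flagging units that rank high on both indicators rather than assigning a score, a minimal sketch of the combination, with synthetic indexes and an illustrative 95th-percentile cut:

```python
import numpy as np

rng = np.random.default_rng(2)
index1 = rng.normal(size=500)  # hypothetical score-fluctuation index per school
index2 = rng.normal(size=500)  # hypothetical answer-pattern index per school

def pct_rank(x):
    """Percentile rank of each value, scaled to [0, 1]."""
    return np.argsort(np.argsort(x)) / (len(x) - 1)

# Flag schools that are unusual on BOTH indicators.
flagged = (pct_rank(index1) > 0.95) & (pct_rank(index2) > 0.95)
print(f"{flagged.sum()} of {len(flagged)} schools flagged")
```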

  9. Principal Component Analysis • Does each method contribute to the overall explained variance? • Can the methods be reduced for a more efficient approach?

  10. Multiple Methods • Erasure Analysis (mER) • Scale score changes using non-cohort groups (mSS) • Scale score changes using cohort groups (mSC) • Performance level changes using non-cohort groups (mPL) • Performance level changes using cohort groups (mPLC) • Model misfit using Rasch residuals (mRR) • Across-subject regression using reading scores to predict mathematics scores (mRG) • Within-subject regression using a cohort's previous-year score to predict the current score (mCR) • Index 1 of the Modified Jacob and Levitt evaluating score changes (mMJL1) • Index 2 of the Modified Jacob and Levitt evaluating answer sheet patterns (mMJL2)

  11. Principal Component Analysis • Grade 4 mathematics exam • 10 methods
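
A minimal sketch of the analysis, assuming each school's ten method scores form the rows of a matrix that is standardized and decomposed with scikit-learn's PCA; the data here are synthetic placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

methods = ["mER", "mSS", "mSC", "mPL", "mPLC",
           "mRR", "mRG", "mCR", "mMJL1", "mMJL2"]

rng = np.random.default_rng(3)
scores = rng.normal(size=(400, 10))   # synthetic: 400 schools x 10 methods

pca = PCA()
pca.fit(StandardScaler().fit_transform(scores))

print(pca.explained_variance_)                  # eigenvalues per component
print(pca.explained_variance_ratio_.cumsum())   # cumulative share of variation
```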

  12. Method Correlations

  13. Principal Component Statistics

  14. Scree Plot

  15. Loading Matrix

  16. Simplified Loading Matrix • +/-: loading greater than 1/2 the maximum value in the component • (+)/(-): loading between 1/4 and 1/2 the maximum
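
A sketch of that simplification rule applied to one component's loadings; the thresholds follow the slide, and the example values are illustrative.

```python
import numpy as np

def simplify(loadings):
    """Replace a component's loadings with '+', '-', '(+)', '(-)', or ''."""
    cap = np.max(np.abs(loadings))
    out = []
    for v in loadings:
        mag = abs(v)
        if mag > cap / 2:                     # beyond half the maximum
            out.append("+" if v > 0 else "-")
        elif mag >= cap / 4:                  # between a quarter and a half
            out.append("(+)" if v > 0 else "(-)")
        else:
            out.append("")
    return out

print(simplify(np.array([0.81, -0.55, 0.30, -0.12, 0.21])))
# ['+', '-', '(+)', '', '(+)']
```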

  17. Principal Component Statistics

  18. Scree Plot

  19. Reducing Variable Set • Determine how many components to retain • Cumulative percentage of total variation • Eigenvalues • The scree plot
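
A sketch of two of these retention criteria, assuming eigenvalues like those a PCA would produce: the Kaiser rule (eigenvalue above 1) and a cumulative-variation threshold, where the 80% cut is illustrative.

```python
import numpy as np

# Hypothetical eigenvalues for ten components.
eigenvalues = np.array([3.1, 1.8, 1.2, 0.9, 0.7, 0.6, 0.5, 0.5, 0.4, 0.3])

# Kaiser criterion: retain components with eigenvalue > 1.
kaiser_k = int(np.sum(eigenvalues > 1))

# Cumulative variation: retain enough components to reach 80% (illustrative cut).
cum = np.cumsum(eigenvalues) / eigenvalues.sum()
cum_k = int(np.searchsorted(cum, 0.80) + 1)

print(f"Kaiser: {kaiser_k} components; 80% cumulative: {cum_k} components")
```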

  20. Reducing Variable Set • Select one method to represent each component • Selecting methods within components: • Positive selection: retain the highest-loading method within each retained component • Discarded principal components: remove the highest-loading method within each discarded component
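
A sketch of the positive-selection step, assuming a loading matrix with methods as rows and retained components as columns; the method names reuse the abbreviations from slide 10, and the loadings are synthetic.

```python
import numpy as np

methods = ["mER", "mSS", "mSC", "mPL", "mPLC",
           "mRR", "mRG", "mCR", "mMJL1", "mMJL2"]

rng = np.random.default_rng(4)
loadings = rng.normal(size=(10, 4))  # synthetic: 10 methods x 4 retained components

# Positive selection: one representative method per retained component,
# chosen by largest absolute loading.
representatives = [methods[i] for i in np.abs(loadings).argmax(axis=0)]
print(representatives)
```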

  21. Reducing Variable Set • Cohort regression* • Modified J&L, Index 1* • Non-cohort scale score change • Model misfit

  22. Conclusion • All methods appear to account for variation in detecting test-taking irregularities • Accounting for the most variation: • Cohort regression • Cohort scale score change • Cohort performance level change • Method reduction yields the same results

  23. Discussion • Different component selection methodologies • Closer examination of variables • Remove cohort regression or cohort scale score change • Combine the J&L indexes • Remove erasures
