Presentation Transcript


  1. A District-initiated Appraisal of a State Assessment’s Instructional Sensitivity: HOLDING ACCOUNTABILITY TESTS ACCOUNTABLE. Stephen C. Court. Presented in Symposium, American Educational Research Association (AERA) Annual Meeting, May 2, 2010, Denver, Colorado

  2. Accountability Basic premise: Teaching → Learning → Proficiency. High proficiency rates = Good schools; Low proficiency rates = Bad schools

  3. Accountability Basic Assumption: State assessments distinguish well-taught students from not-so-well-taught students with enough accuracy to support accountability decisions.

  4. Accountability Q: Is the assumption warranted? A: Only if the tests are instructionally sensitive. When tests are insensitive, accountability decisions are based on the wrong things, e.g., socioeconomic status (SES).

  5. Kansas: SES

  6. Kansas: Test Scores

  7. Kansas: Exemplary by SES

  8. The Situation in Kansas Basic Question: Can the instruction in low-poverty districts truly be that much better than the instruction in high-poverty districts? Or, do instructionally irrelevant factors (such as SES) distort or mask the effects of instruction?

  9. Multi-district Study • Purpose • To compare instructional sensitivity appraisal models and methods • To appraise the instructional sensitivity of the Kansas state assessments • District-initiated because no state-level study had been initiated • Indicator-level analysis • Loss/gain because no indicator-level cut scores • Based initially on empirical approach recommended by Popham (2008)

  10. Tactical Variations • A variety of practical constraints and preliminary findings raised several conceptual and methodological issues. • The original design underwent several revisions. • Several tactical variations were introduced, involving • data collection • data array, analysis, and interpretation

  11. Tactical Variations See the paper for details… • discusses the issues and design revisions • provides exegesis of the item-selection criteria and test-construction practices that yield instructional insensitivity • describes, demonstrates, and compares the tactical variations employed in the collection, array, and analysis of the data, as well as in the interpretation of the results Due to time constraints, let’s focus just on the “juiciest jewels”…

  12. Study Participants 575 teachers responded • 320 teachers (grades 3-5 reading and math) • 129 reading teachers (grades 6-8) • 126 math teachers (grades 6-8) ~14,000 students • Only Grade 5 reading included in this study. • To be reported in June at CCSSO in Detroit: • other reading results (grades 3-8) • all math results (grades 3-8)

  13. A Gold Standard By recommending that teachers be asked to identify their best-taught indicators, Popham (2008) transformed the instructional sensitivity issue in a fundamental way – both conceptually and operationally: For the first time since IS inquiries began about 40 years ago, there now could be a gold standard independent of the test itself – a huge breakthrough!

  14. Old and New Model Old model: A = Non-Learning, B = Learning, C = Slip, D = Maintain. New model: A = True Fail, B = False Pass = II-E, C = False Fail = II-D, D = True Pass.
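
To make the cell assignments concrete, here is a minimal Python sketch (not from the presentation) that places a student into one of the four cells of the new model; it assumes the gold standard is the teacher’s best-taught designation and the test outcome is a simple pass/fail on the indicator.

```python
def classify(best_taught: bool, passed: bool) -> str:
    """Assign a student to a cell of the new model (illustrative only)."""
    if not best_taught and not passed:
        return "A"  # True Fail: not among the best-taught, did not pass
    if not best_taught and passed:
        return "B"  # False Pass: not among the best-taught, yet passed
    if best_taught and not passed:
        return "C"  # False Fail: among the best-taught, yet did not pass
    return "D"      # True Pass: among the best-taught, and passed
```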

  15. Initial Analysis Scheme Initial logic: If best-taught students outperform other students, indicator is sensitive to instruction. If mean differences are small or in the wrong direction, indicator is insensitive to instruction.

  16. Problem But significant performance differences between best-taught and other students do not necessarily represent instructional sensitivity. • affluent students who receive ineffective instruction typically end up in Cell B • challenged students who receive effective instruction typically end up in Cell C

  17. Problem Thus: Means-based and DIF-driven (differential item functioning) approaches that evaluate between-group differences are not appropriate for appraising instructional sensitivity. Instead: Focus on the degree to which indicators accurately distinguish effective from ineffective instruction, without confounding from instructionally irrelevant easiness or difficulty.

  18. Conceptually Correct Rather than comparing group differences in terms of means, let’s look instead at the combined proportions of true fail and true pass. That is, (A + D) / (A + B + C + D), which can be shortened to (A + D) / N = Malta Index.
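
A minimal Python sketch of that arithmetic, assuming the four cell counts have already been tallied for an indicator:

```python
def malta_index(a: int, b: int, c: int, d: int) -> float:
    """Malta Index = (A + D) / N: proportion of true fails plus true passes."""
    n = a + b + c + d
    if n == 0:
        raise ValueError("at least one student is required")
    return (a + d) / n
```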

  19. Malta Index (A + D) / N Ranges from 0 to 1 (Completely Insensitive to Totally Sensitive) In practice: A value of .50 = chance Equivalent to random guessing

  20. Totally Sensitive (A + D) / N = (50 + 50) / 100 = 1.0 A perfectly sensitive item or indicator would cluster students into Cell A or Cell D.

  21. Totally Insensitive (A + D) / N = (0 + 0) / 100 = 0.0 A perfectly insensitive test clusters students into Cell B or Cell C.

  22. Useless (A + D) / N = (25 + 25) / 100 = 0.50 0.50 = mere chance An indicator that cannot distinguish true fail or pass from false fail or pass is totally useless, no better than random guessing.
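
Reusing the malta_index sketch above, the benchmark cases from these three slides come out as expected:

```python
# Benchmark cases (cell counts a, b, c, d for 100 students)
print(malta_index(50, 0, 0, 50))    # 1.0  -> totally sensitive
print(malta_index(0, 50, 50, 0))    # 0.0  -> totally insensitive
print(malta_index(25, 25, 25, 25))  # 0.5  -> mere chance, useless
```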

  23. Malta Index Parallels The Malta Index is similar conceptually to: • Mann-Whitney U • Wilcoxon rank-sum statistic • Area Under the Curve (AUC) in Receiver Operating Characteristic (ROC) curve analysis But its interpretation is embedded in the context of instructional sensitivity appraisal.
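
One way to see the AUC parallel (an illustration of mine, not a computation from the paper): for a single pass/fail indicator scored against the best-taught gold standard, AUC reduces to the average of the true-pass and true-fail rates, which coincides with the Malta Index whenever the best-taught and other groups are equal in size. Reusing malta_index from the sketch above:

```python
def auc_binary(a: int, b: int, c: int, d: int) -> float:
    """AUC of one pass/fail indicator: mean of its true-pass and true-fail rates."""
    return (d / (c + d) + a / (a + b)) / 2

# Equal-sized instruction groups: AUC and Malta Index agree (both 0.70 here).
print(auc_binary(40, 10, 20, 30), malta_index(40, 10, 20, 30))

# Unequal groups: the two can diverge (0.6875 vs. 0.80 here).
print(auc_binary(70, 10, 10, 10), malta_index(70, 10, 10, 10))
```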

  24. Malta Index Compared to these other approaches, the Malta Index is easier to… • compute • understand • interpret Thus, it is more accessible conceptually to measurement novices, such as • teachers • reporters • policy-makers

  25. ROC Analysis Malta Index values can be depicted graphically as ROC curves.
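
As a rough illustration of what such a depiction could look like (the cell counts below are invented, not results from the study), a single pass/fail indicator produces a one-point ROC curve that can be plotted against the chance diagonal:

```python
import matplotlib.pyplot as plt

# Hypothetical cell counts for one indicator (not data from the study)
a, b, c, d = 30, 20, 15, 35
tpr = d / (c + d)  # true-pass rate among best-taught students
fpr = b / (a + b)  # false-pass rate among the other students

plt.plot([0, fpr, 1], [0, tpr, 1], marker="o", label="indicator")
plt.plot([0, 1], [0, 1], linestyle="--", label="chance (value = .50)")
plt.xlabel("False-pass rate")
plt.ylabel("True-pass rate")
plt.title("One-point ROC curve for a pass/fail indicator")
plt.legend()
plt.show()
```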

  26. Informal Evaluation Malta Index values can be evaluated informally via acceptability criteria (Hosmer & Lemeshow, 2000): • .90-1.00 = excellent (A) • .80-.90 = good (B) • .70-.80 = acceptable (C) • .60-.70 = poor (D) • .50-.60 = fail (F)
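
A small helper for applying these cutoffs (my own rendering of the slide’s grading scheme; how values falling exactly on a boundary are handled is an assumption):

```python
def acceptability_grade(value: float) -> str:
    """Map a Malta Index (or AUC) value to the letter grades listed above."""
    if value >= 0.90:
        return "excellent (A)"
    if value >= 0.80:
        return "good (B)"
    if value >= 0.70:
        return "acceptable (C)"
    if value >= 0.60:
        return "poor (D)"
    if value >= 0.50:
        return "fail (F)"
    return "below chance"

print(acceptability_grade(0.73))  # acceptable (C)
```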

  27. Summary and Interpretations • AUC and the Malta Index yield very similar but not identical results • Identical conclusions overall: Grade 5 reading indicators lack instructional sensitivity • No indicator was graded better than a “C” • Most were in the “Poor” to “Useless” range • Averages ranged from “Poor” to “Useless”

  28. Summary and Interpretations Low instructional sensitivity values for grade 5 reading were disappointing, especially given: • Local contractor (CETE) • Guidance from TAC (including Popham and Pellegrino) • Concerns from the KAAC (including Court) If Kansas assessments lack instructional sensitivity, what about other states’ assessments?

  29. Conclusion Dear U.S. Department of Education: Please make instructional sensitivity… • An essential component in reviews of RTTT funding applications • A critical element in the approval process of state and consortia accountability plans When the Department revised its Peer Review Guidance (2007) to include alignment as a critical element of technical quality, states were compelled to conduct alignment studies that they otherwise would not have conducted. Instructional sensitivity deserves similar Federal endorsement.

  30. Presenter’s email: scourt1@cox.net Questions, comments, or suggestions are welcome.
