A District-initiated Appraisal of a State Assessment's Instructional Sensitivity
HOLDING ACCOUNTABILITY TESTS ACCOUNTABLE
Stephen C. Court
Presented in Symposium
American Educational Research Association (AERA) Annual Meeting
May 2, 2010
Denver, Colorado
Accountability
Basic premise: Teaching → Learning → Proficiency
High proficiency rates = Good schools
Low proficiency rates = Bad schools
Accountability
Basic assumption: State assessments distinguish well-taught students from not-so-well-taught students with enough accuracy to support accountability decisions.
Accountability
Q: Is the assumption warranted?
A: Only if the tests are instructionally sensitive. When tests are insensitive, accountability decisions are based on the wrong things, e.g., SES.
The Situation in Kansas
Basic question: Can the instruction in low-poverty districts truly be that much better than the instruction in high-poverty districts? Or do instructionally irrelevant factors (such as SES) distort or mask the effects of instruction?
Multi-district Study
• Purpose:
  • To compare instructional sensitivity appraisal models and methods
  • To appraise the instructional sensitivity of the Kansas state assessments
• District-initiated because no state-level study had been undertaken
• Indicator-level analysis
• Loss/gain analysis because there are no indicator-level cut scores
• Based initially on the empirical approach recommended by Popham (2008)
Tactical Variations
• A variety of practical constraints and preliminary findings raised several conceptual and methodological issues.
• The original design consequently underwent several revisions, producing tactical variations in:
  • data collection
  • data array, analysis, and interpretation
Tactical Variations
See the paper for details. It:
• discusses the issues and design revisions
• provides an exegesis of the item-selection criteria and test-construction practices that yield instructional insensitivity
• describes, demonstrates, and compares the tactical variations employed in the collection, array, and analysis of the data, as well as in the interpretation of the results
Due to time constraints, let's focus on just the "juiciest jewels"...
Study Participants
575 teachers responded:
• 320 teachers (grades 3-5, reading and math)
• 129 reading teachers (grades 6-8)
• 126 math teachers (grades 6-8)
14,000 students
• Only grade 5 reading is included in this study.
• To be reported in June at CCSSO in Detroit:
  • other reading results (grades 3-8)
  • all math results (grades 3-8)
A Gold Standard
By recommending that teachers be asked to identify their best-taught indicators, Popham (2008) transformed the instructional sensitivity issue in a fundamental way, both conceptually and operationally: for the first time since instructional sensitivity inquiries began about 40 years ago, there could be a gold standard independent of the test itself. A huge breakthrough!
Old and New Model
Cell   Old Model      New Model
A      Non-Learning   True Fail
B      Learning       False Pass (II-E)
C      Slip           False Fail (II-D)
D      Maintain       True Pass
Initial Analysis Scheme
Initial logic:
If best-taught students outperform other students, the indicator is sensitive to instruction.
If mean differences are small or in the wrong direction, the indicator is insensitive to instruction.
Problem
But significant performance differences between best-taught and other students do not necessarily represent instructional sensitivity:
• affluent students provided ineffective instruction typically end up in Cell B (false pass)
• challenged students provided effective instruction typically end up in Cell C (false fail)
Problem
Thus, means-based and DIF-driven (differential item functioning) approaches that evaluate between-group differences are not appropriate for appraising instructional sensitivity.
Instead, focus on the degree to which indicators accurately distinguish effective from ineffective instruction, without confounding from instructionally irrelevant easiness or difficulty.
Conceptually Correct
Rather than comparing group differences in terms of means, let's look instead at the combined proportion of true fails and true passes. That is,
(A + D) / (A + B + C + D)
which can be shortened to
(A + D) / N = Malta Index
Malta Index
(A + D) / N
Ranges from 0 to 1 (completely insensitive to totally sensitive).
In practice, a value of .50 = chance, equivalent to random guessing.
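[Aside: for concreteness, a minimal Python sketch of the computation. The function name malta_index and the inline cell glosses are ours, not part of the study's materials; the formula itself is from the slide above.]

```python
def malta_index(a: int, b: int, c: int, d: int) -> float:
    """Malta Index = (A + D) / N, from the four cells of the new model:

    A = true fail   (ineffectively taught student fails)
    B = false pass  (ineffectively taught student passes; II-E)
    C = false fail  (effectively taught student fails; II-D)
    D = true pass   (effectively taught student passes)
    """
    n = a + b + c + d
    if n == 0:
        raise ValueError("empty 2x2 table")
    return (a + d) / n
```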
Totally Sensitive
A perfectly sensitive item or indicator would cluster students into Cell A or Cell D:
(A + D) / N = (50 + 50) / 100 = 1.0
Totally Insensitive
A perfectly insensitive test clusters students into Cell B or Cell C:
(A + D) / N = (0 + 0) / 100 = 0.0
Useless
An indicator that cannot distinguish true fail or pass from false fail or pass is totally useless, no better than random guessing:
(A + D) / N = (25 + 25) / 100 = 0.50
0.50 = mere chance
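[Aside: the three scenarios above can be checked directly with the sketch from the earlier slide; the cell counts are the slides' own worked examples.]

```python
print(malta_index(50, 0, 0, 50))    # 1.0  -> totally sensitive
print(malta_index(0, 50, 50, 0))    # 0.0  -> totally insensitive
print(malta_index(25, 25, 25, 25))  # 0.5  -> mere chance; useless
```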
Malta Index Parallels
The Malta Index is similar conceptually to:
• the Mann-Whitney U statistic
• the Wilcoxon rank-sum statistic
• Area Under the Curve (AUC) in Receiver Operating Characteristic (ROC) curve analysis
But its interpretation is embedded in the context of instructional sensitivity appraisal.
Malta Index
Compared to these other approaches, the Malta Index is easier to:
• compute
• understand
• interpret
Thus, it is more accessible conceptually to measurement novices, such as:
• teachers
• reporters
• policy-makers
ROC Analysis
Malta Index values can be depicted graphically as ROC curves.
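[Aside: a sketch of the AUC parallel. It rebuilds student-level labels from an illustrative 2x2 table (these counts are ours, not data from the Kansas study) and compares the two statistics; it assumes scikit-learn is available.]

```python
from sklearn.metrics import roc_auc_score

# Illustrative cell counts only; not data from the study.
a, b, c, d = 30, 10, 20, 40

# y_true:  1 = effectively taught (the teacher-nominated gold standard), else 0
# y_score: 1 = student passed the indicator, 0 = student failed it
y_true = [0] * (a + b) + [1] * (c + d)
y_score = [0] * a + [1] * b + [0] * c + [1] * d

auc = roc_auc_score(y_true, y_score)  # for a binary score: (TPR + TNR) / 2
malta = (a + d) / (a + b + c + d)     # plain proportion of true fails + true passes
print(f"AUC = {auc:.3f}, Malta Index = {malta:.3f}")  # AUC = 0.708, Malta = 0.700
```

The two statistics coincide exactly only when the effectively and ineffectively taught groups are the same size, which is one way to see why the summary reports "very similar but not identical" results.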
Informal Evaluation
Malta Index values can be evaluated informally via acceptability criteria (Hosmer & Lemeshow, 2000):
• .90-1.0 = excellent (A)
• .80-.90 = good (B)
• .70-.80 = acceptable (C)
• .60-.70 = poor (D)
• .50-.60 = fail (F)
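[Aside: the rubric as code. The grade() helper is our own hypothetical naming; values below the lowest band fall outside the published rubric and are labeled here as worse than chance, consistent with the "Useless" slide.]

```python
def grade(malta: float) -> str:
    """Map a Malta Index value to the Hosmer & Lemeshow (2000) bands."""
    bands = [(0.90, "A (excellent)"), (0.80, "B (good)"),
             (0.70, "C (acceptable)"), (0.60, "D (poor)"), (0.50, "F (fail)")]
    for cutoff, label in bands:
        if malta >= cutoff:
            return label
    return "below chance (worse than guessing)"

print(grade(0.74))  # C (acceptable)
```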
Summary and Interpretations
• AUC and the Malta Index yield very similar but not identical results
• Identical conclusions overall: grade 5 reading indicators lack instructional sensitivity
• No indicator was graded better than a "C"
• Most indicators, and the averages, fell in the "Poor" to "Useless" range
Summary and Interpretations
The low instructional sensitivity values for grade 5 reading were disappointing, especially given:
• a local contractor (CETE)
• guidance from the TAC (including Popham and Pellegrino)
• concerns raised by the KAAC (including Court)
If the Kansas assessments lack instructional sensitivity, what about other states' assessments?
Conclusion
Dear U.S. Department of Education: Please make instructional sensitivity...
• an essential component in reviews of Race to the Top (RTTT) funding applications
• a critical element in the approval process of state and consortia accountability plans
When the Department revised its Peer Review Guidance (2007) to include alignment as a critical element of technical quality, states were compelled to conduct alignment studies that they otherwise would not have conducted. Instructional sensitivity deserves similar federal endorsement.
Presenter's email: scourt1@cox.net
Questions, comments, or suggestions are welcome.