Making Data Driven Decisions: Cut Points, Curve Analysis, and Odd Balls Robert Rosenthal, David Lillenstein, Jason Pedersen, Laura Lent, Richard Hall, Joe Kovaleski, and Edward Shapiro
Agenda • To hear how decisions about intervention and evaluation are made in schools across Pennsylvania that implement Response to Intervention and Instruction models • To learn about some outcomes that resulted from these decision strategies
Grade Level Team Meetings • Examine data every 6 weeks • Include all data on an Excel spreadsheet • Use DIBELS progress monitoring charts • Calculate slope (rate of progress; see the sketch below) • Generally follow the DIBELS recommended instructional levels • Teams must present data to justify not following the recommended levels
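The slope bullet above can be made concrete with a small sketch: a rate of progress computed as the least-squares slope of weekly DIBELS progress monitoring scores. The scores, the six-week window, and the ordinary-least-squares choice are illustrative assumptions; the slides do not specify the presenters' exact calculation method.

```python
# Minimal sketch (not the presenters' exact method): compute a student's
# rate of progress as the least-squares slope of weekly DIBELS progress
# monitoring scores. The scores below are hypothetical.

def rate_of_progress(scores):
    """Least-squares slope of scores over weeks, in points gained per week."""
    n = len(scores)
    weeks = range(n)
    mean_week = sum(weeks) / n
    mean_score = sum(scores) / n
    num = sum((w - mean_week) * (s - mean_score) for w, s in zip(weeks, scores))
    den = sum((w - mean_week) ** 2 for w in weeks)
    return num / den

# Six weeks of hypothetical oral reading fluency scores (words correct/minute)
orf = [42, 45, 44, 49, 52, 54]
print(f"Rate of progress: {rate_of_progress(orf):.2f} words/week")  # ~2.46
```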
Data Examined at Team Meetings • Universal screening (DIBELS) • Unit (curriculum) test scores • Unit (curriculum) weekly assessments • 4-Sight scores (3 times) • PSSA (annual state assessment) • Rate of progress (slope of progress monitoring data) • Length of time at a tier level • Instructional program at Tiers 2 and 3 • Behavior infractions
Tier Assignment Decisions • Start with the DIBELS recommendation • K-2: then examine unit/weekly test scores; for students in Tier 2 or 3, form sub-groups by decoding/fluency/comprehension • Grades 3-6: then examine unit/weekly tests, PSSA, and 4-Sight; form sub-groups by fluency/decoding/comprehension/writing (a rule sketch follows below)
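As a rough illustration of the sequence on this slide, the sketch below encodes it as a rule: start from the DIBELS recommendation, then let the other data sources adjust it toward more support. The 80% unit-test cutoff and the function signature are invented; the slides give no numeric cut points.

```python
# Hypothetical sketch of the tier-assignment sequence; the 80% unit-test
# cutoff is invented, and "BB"/"B" are PSSA Below Basic / Basic levels.

def assign_tier(grade, dibels_rec, unit_test_pct, pssa_level=None):
    # Step 1: start from the DIBELS instructional recommendation.
    tier = {"benchmark": 1, "strategic": 2, "intensive": 3}[dibels_rec]
    # Step 2: let unit/weekly test scores (and, in grades 3-6, PSSA and
    # 4-Sight results) argue for more support than DIBELS indicated.
    if grade <= 2:
        if tier == 1 and unit_test_pct < 80:
            tier = 2
    else:
        if tier == 1 and (unit_test_pct < 80 or pssa_level in ("BB", "B")):
            tier = 2
    return tier

print(assign_tier(grade=4, dibels_rec="benchmark",
                  unit_test_pct=72, pssa_level="B"))  # 2
```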
Making Sub-Groups • Every 6 weeks, once the tier-level decision is made: • Group by high vs. low • Group by decoding vs. fluency vs. comprehension • Group by program (Fundations)
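A toy version of the grouping step, assuming each student record carries a primary need area; the roster and the tags are invented for illustration.

```python
from collections import defaultdict

# Hypothetical roster of Tier 2/3 students tagged with a primary need.
roster = [("Ana", "decoding"), ("Ben", "fluency"),
          ("Cal", "decoding"), ("Dee", "comprehension")]

# Collect students into sub-groups by shared need.
groups = defaultdict(list)
for name, need in roster:
    groups[need].append(name)

for need, students in sorted(groups.items()):
    print(need, "->", students)
```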
Percent of Time We Followed DIBELS Instructional Recommendations
When We Didn't Follow the Instructional Recommendation • 43 times (10% of total students) we gave more support than indicated • 31 times (8%) we gave less support • Reasons: • Unit test scores • Behavior/emotional issues (gave more) • Borderline scores: we looked at other data • Not a fluency problem (gave more) • A fluke (gave less; other indicators OK) • Resources: grouped students when needs were similar
Decision to Evaluate • Rate of progress is below both the target rate and the typical rate (unless the issue is not a fluency problem) • History of failure in the curriculum • In targeted instructional support for at least 6 months, with multiple data-driven changes using research-proven techniques and programs • Progress monitoring shows performance significantly below peers • Below Basic (BB) or Basic (B) on the PSSA
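Read as a checklist, the criteria above could be combined as in the sketch below. Treating them as a strict conjunction is an assumption: the slide lists them as indicators, not a formula, and all names and example values here are invented.

```python
# Hypothetical sketch: combine the evaluation criteria above into one
# check. Whether every criterion must hold jointly is an assumption.

def should_evaluate(slope, target_slope, typical_slope,
                    months_in_support, history_of_failure,
                    pm_well_below_peers, pssa_level):
    below_both_rates = slope < target_slope and slope < typical_slope
    enough_time = months_in_support >= 6
    low_pssa = pssa_level in ("BB", "B")  # Below Basic or Basic
    return (below_both_rates and history_of_failure and enough_time
            and pm_well_below_peers and low_pssa)

# Example: slope 0.6 vs. target 1.5 (below the half-target threshold noted
# on the next slide), 7 months of support, and low PSSA performance.
print(should_evaluate(0.6, 1.5, 1.0, 7, True, True, "BB"))  # True
```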
Eligibility Decisions • The LEA decided to use discrepancy • Augment the Evaluation Report (ER) with RtI data • Slope scores can help support the decision (especially when the attained slope is half the target rate or less) • They help in making recommendations: type and quantity of program • Sometimes the data conflict: • It used to be that discrepancy ruled • Now any sign of success makes the decision difficult • RtI data always help with ED (emotional disturbance) classification
Conclusions • Must include special education students • Teachers need more training • We see a reduction in testing, and school referrals are more accurate • At team meetings, staff now ask about interventions rather than evaluations • Must continually remind staff to look at the data when making decisions; we need to move students to less support more often