1. RTI: Using Student-Centered Data to Make Intervention and Eligibility Decisions Laguna Cliffs Institute
Sopris West Educational Services
Dr. George M. Batsche
Co-Director, Institute for School Reform
Florida Problem-Solving/RtI Statewide Project
University of South Florida
Tampa, Florida
3. Steps in the Problem-Solving Process PROBLEM IDENTIFICATION
• Identify replacement behavior
• Data- current level of performance
• Data- benchmark level(s)
• Data- peer performance
• Data- GAP analysis
PROBLEM ANALYSIS
• Develop hypotheses (brainstorming)
• Develop predictions/assessment
INTERVENTION DEVELOPMENT
• Develop interventions in those areas for which data are available and hypotheses verified
• Proximal/Distal
• Implementation support
• Intervention Fidelity/Integrity
Response to Intervention (RtI)
• Frequently collected data
• Type of Response- good, questionable, poor
4. Criteria for Evaluating Response to Intervention Is the gap between the student's current level/slope and the benchmark level/slope converging? If yes, this is a POSITIVE RtI
Is performance improving but the gap not closing (e.g., the slopes are parallel)? If yes, this is a QUESTIONABLE RtI
If the rate/slope remains unchanged, or there is improvement but no evidence of the gap closing, this is a POOR RtI
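The three response categories above can be expressed as a simple decision rule. A minimal sketch in Python, assuming weekly growth rates (slopes) for the student and the benchmark; the function name and the slope tolerance are illustrative, not from the source:

```python
def classify_rti(student_slope, benchmark_slope, tol=0.05):
    """Classify response to intervention from weekly growth rates.

    POSITIVE:     student growing faster than the benchmark (gap converging)
    QUESTIONABLE: student improving, but slopes roughly parallel (gap not closing)
    POOR:         no growth, or the gap is widening

    The tolerance `tol` for calling two slopes "parallel" is an assumption.
    """
    diff = student_slope - benchmark_slope
    if diff > tol:
        return "POSITIVE"       # gap is converging
    if student_slope > 0 and abs(diff) <= tol:
        return "QUESTIONABLE"   # improvement, but the gap is not closing
    return "POOR"               # unchanged slope or widening gap

print(classify_rti(1.85, 1.20))  # POSITIVE
print(classify_rti(1.20, 1.20))  # QUESTIONABLE
print(classify_rti(0.00, 1.20))  # POOR
```

The same rule works whether the slopes come from ORF progress monitoring or behavior data; only the units change.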
5. Data For Each Tier - Where Do They Come From? Tier 1: Universal Screening, accountability assessments, grades, classroom assessments, referral patterns, discipline referrals
Tier 2: Universal Screening - Group Level Diagnostics (maybe), systematic progress monitoring, large-scale assessment data and classroom assessment
Tier 3: Universal Screenings, Individual Diagnostics, intensive and systematic progress monitoring, formative assessment, other informal assessments
6. How Does it Fit Together? Group-Level Diagnostic Std. Treatment Protocol
7. How Does it Fit Together? Uniform Standard Treatment Protocol
8. “Universals” 85% of “referrals” or “requests for assistance” are for 5-7 reasons
Phonics, fluency, comprehension
Written language fluency
Failure to complete work
Inability to sustain on-task attention
Non-compliance
etc
9. Therefore…. Building principals can predict, with 85% accuracy, next year's referral types
Annual referrals (or office referrals and teacher surveys) are a primary source of data to predict building needs
Teachers refer students whose needs they believe they lack the skills or resources to meet
CPD should focus on these building issues to enhance “capacity”
10. Planning Ahead: Predicting Who Will Be Referred Code referrals (reasons) for past 2-3 years
Identifies problems teachers feel they do not have the skills/support to handle
Referral pattern reflects skill pattern of the staff, the resources currently in place and the “history” of what constitutes a referral in that building
Identifies likely referral types for next 2 years
Identifies focus of Professional Development Activities AND potential Tier II and III interventions
Present data to staff. Reinforces “Need” concept
11. Data-Driven Infrastructure: Identifying Needed Interventions Assess current “Supplemental Interventions”
Identify all students receiving supplemental interventions
For those interventions, identify
Type and Focus (academic, direct instruction, etc.)
Duration (minutes/week)
Provider
Aggregate
Identifies instructional support types in building
This constitutes Tier II and III intervention needs
12. Steps in the Problem-Solving Process PROBLEM IDENTIFICATION
• Identify replacement behavior
• Data- current level of performance
• Data- benchmark level(s)
• Data- peer performance
• Data- GAP analysis
13. Example- ORF Current Level of Performance:
40 WCPM
Benchmark
92 WCPM
Peer Performance
88 WCPM
GAP Analysis: 92/40 = 2.3x difference (more than 2x): SIGNIFICANT GAP
Is instruction effective? Yes, peer performance is at benchmark.
14. Example- Behavior Current Level of Performance:
Complies 35% of time
Benchmark (set by teacher)
75%
Peer Performance
40%
GAP Analysis: 40/35 = 1.1x difference: NO SIGNIFICANT GAP
Is behavior program effective? No, peers have significant gap from benchmark as well.
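The arithmetic in the two examples above can be sketched as a small helper that reports each gap ratio used on the slides; the function and key names are illustrative, not from the source:

```python
def gap_analysis(current, benchmark, peer):
    """Gap ratios as computed in the ORF and behavior examples.

    A ratio of roughly 2x or more is read as a significant gap; the
    peers-vs-benchmark ratio shows whether core instruction (or the
    behavior program) is working for the group as a whole.
    """
    return {
        "student_vs_benchmark": round(benchmark / current, 2),
        "student_vs_peers": round(peer / current, 2),
        "peers_vs_benchmark": round(benchmark / peer, 2),
    }

# ORF example: 40 WCPM vs. benchmark 92, peers at 88
print(gap_analysis(40, 92, 88))   # student_vs_benchmark 2.3 -> significant gap
# Behavior example: 35% vs. benchmark 75%, peers at 40%
print(gap_analysis(35, 75, 40))   # student_vs_peers 1.14 -> no significant gap
```

Note how the behavior example compares the student to peers rather than to the benchmark: peers are themselves well below the benchmark, so the classwide program, not the individual student, is the first problem to address.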
15. Outline – Implementing An RtI System Tier 1 Decision Making
Collect and evaluate universal screening data against criterion for successful Core (many suggest 80% proficiency based on Core instruction)
If modification of the Core is needed
Conduct curriculum diagnostic assessment – compare core curriculum against a standard if available (e.g., Kame’enui & Simmons) or evaluate core using problem analysis procedures
Create hypotheses and predictions
Modify curriculum and instruction
Evaluate curriculum and instruction modifications
Monitor sufficiency of core each time universal screening is completed – modify as necessary
16. Tier 1 Data Example
17. Tier 1 Data Example
20. Screening indicates math problem grades 3 to 5
21. Screening indicates math problem grades 3 to 5
22. Screening indicates math problem grades 3-5
24. Analyze Discipline Referrals Gender
Grade Level
Type
Frequency
Race
SES
ELL
Time
Schedule
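Tallying referrals along dimensions like these is a simple aggregation. A sketch with Python's collections.Counter over hypothetical referral records (all field names and values here are made up for illustration):

```python
from collections import Counter

# Hypothetical referral records; fields mirror the dimensions on the slide
referrals = [
    {"grade": 3, "gender": "M", "type": "defiance",   "time": "11:30"},
    {"grade": 3, "gender": "F", "type": "disruption", "time": "11:45"},
    {"grade": 5, "gender": "M", "type": "defiance",   "time": "13:10"},
    {"grade": 3, "gender": "M", "type": "defiance",   "time": "11:20"},
]

# Count referrals along any single dimension
by_type = Counter(r["type"] for r in referrals)
by_grade = Counter(r["grade"] for r in referrals)

print(by_type.most_common())   # [('defiance', 3), ('disruption', 1)]
print(by_grade.most_common())  # [(3, 3), (5, 1)]
```

The same pattern extends to race, SES, ELL status, time of day, or schedule; cross-tabulating two dimensions just means counting tuples such as `(r["grade"], r["type"])`.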
25. Tier 1 or Tier 2?: Behavior Example Replacement Behavior “Waiting Turn”
Current Level of Performance
27% Accuracy (success/opportunity)
Peer Performance
58% Average
Benchmark
75%
27. Intervention Decision? Is the student significantly below benchmark performance?
75/27 = 2.8x GAP (more than 2x)
Is the peer group significantly below benchmark performance?
75/58 = 1.3x GAP
Not a 2x gap, but not acceptable either
DECISION?
29. Outcome? Rate of Peer Performance?
82 - 58 = 24 points gained; 58/24 = 2.42
Rate of Target Student Performance?
42 - 27 = 15 points gained; 27/15 = 1.80
Type of Response to Intervention?
Peer??
Student??
Intervention Effectiveness Decision?
31. Analyze Data Tier 1: Type of RtI
Positive, Questionable, Poor?
Intervention Decision?
Keep As Is?
Modify Existing?
Change Completely?
32. Outline – Implementing An RtI System Tier 2 Decision Making – Dx Assm’t Option
Identify less than proficient students
Administer additional brief assessments to examine performance profiles
Group students with like performance profiles for supplemental instruction
Provide supplemental instruction based on skill needs
Monitor progress
Review student progress monitoring data at scheduled intervals
How successful are students in response to Tier 2 Interventions?
70% is a good criterion
Modify supplemental instruction as necessary
Move students across tiers as data warrant
33. Tier 2 Decision-Making: Small Group 11 Students
High Risk: Initial Sounds Fluency
Additional 30 Minutes Direct Instruction
Wilson’s Fundations
Fluency
34. Tier 2
36. A Smart System Structure
40. Decision Model at Tier 1- General Education Instruction Step 1: Screening
ORF = 50 wcpm, fall benchmark for some risk = 44 wcpm
Her teacher judges her comprehension skills to be at a level consistent with her ORF
Is this student at risk?
Training Notes:
This is the decision model at Tier 1. Lisa’s ORF is above benchmark for some risk, her comprehension skills are judged as at levels equal to that reflected in her ORF. Therefore, she is not at risk, which means that the instruction within the Core Curriculum (in this case, Open Court) is working and she is making the expected level of progress. She is NOT a student with a disability.
You would continue to maintain her at Tier 1 (core curriculum) instruction.
42. Decision Model at Tier 1- General Education Instruction Step 1: Screening
ORF at end of 2nd grade is 93 wcpm, end of 2nd benchmark for some risk is 90 wcpm
Reading comprehension skills are judged as adequate by her teacher.
Is this student at risk?
Training Notes:
This is the decision model at Tier 1. Latana earned a CBM score of 93, above the level of the at risk benchmarks. Her teacher believes she is an above average reader in her class. Therefore, she is not at risk, which means that the instruction within the Core Curriculum is working and she is making the expected level of progress. She is NOT a student with a disability.
You would continue to maintain her at Tier 1 (core curriculum) instruction.
43. Rita Second grade student
Beginning of school year
Regular Education
Scores at 20 wcpm in second grade material
Teacher judges (based on in-class observation/evaluation) comprehension to not be substantially different from ORF
Training Notes:
Rita is in the same class, and at the fall benchmark screening she scores 20 wcpm. The teacher judges that her comprehension skills are equally low and not substantially different from her ORF.
44. Training Notes
This slide graphically illustrates the difference. Visual displays such as this one are valuable ways to efficiently talk about student performance at team meetings. The boxes represent the 25th to 75th percentile of the normative group.
Additional Info on Comprehension (John Delgaddo offered this language which I think works):
In reading fluency we have specific targets that we know result in improved comprehension, and we have data on thousands of students to indicate this relationship or correlation. There is no one set number in reading fluency where we can guarantee comprehension, so there are ranges of reading fluency within which we believe a student should fall in order to have the greatest opportunity to comprehend the text. In the area of comprehension we do not have a count such as words read correct, as we do in fluency. We do, however, have certain long-standing targets for mastering information; for example, we often define mastery on specific skills as being 90-100%. Therefore, in reading comprehension, we know that we want 100% comprehension as our target. When students fall below the 100% mark in total comprehension, or on one of its subcomponents, they will have difficulty with the meaning of narrative and expository text.
Traditionally, when large-scale achievement tests are administered, it is advisable to consider scores in the bottom 50% on a particular subtest to be at risk of academic failure and deserving of additional attention. When assessing students for reading comprehension and its subcomponents, we want students to be at 100%. Even 50% for comprehension is not adequate, but we often use it as an indicator in large-scale assessments. When we look at a student's reading fluency and compare it with responses on reading comprehension subcomponents, it is reasonable to find that if a student only reads 30% of the words, his comprehension may only be 30%. However, it is possible that this student may recognize specific words in other parts of the text that he was not able to read fluently, and subsequently answer some comprehension questions correctly. Therefore, there is no direct one-to-one correspondence between reading fluency scores and the percentages for reading comprehension subcomponents, but it is clearly understood that we want comprehension to approach 100%. We become concerned any time it is not near 100%, and even more concerned when specific subcomponents of comprehension are deficient even though a student can read the text.
45. Decision Model at Tier 1- General Education Instruction Step 1: Screening
ORF = 20 wcpm, fall benchmark for some risk = 44 wcpm
Comprehension deficits in all 4 of 5 areas are noted
Is this student at risk?
Training Notes:
Decision making at Tier 1 shows that Rita is NOT making sufficient progress in general education setting, and that the instruction within the core curriculum is not sufficient for her to meet benchmarks. As such, you move to a Tier 2, Strategic intervention.
46. Data-Based Determination of Expectations Data- Current Level of Performance
Data- Benchmark Level
Data- # of Weeks to Benchmark
Calculate-
Difference between current and benchmark level
Divide by # Weeks
Result: Rate per week of growth required
REALISTIC? Compare to Peer Group Rate
47. Data-Based Determination of Expectations: Rita Benchmark Level: 54 WCPM
Current Level: 20 WCPM
Difference to Feb Benchmark (Gap): 34 WCPM
Time to Benchmark: 20 Weeks
Rate of Growth Required:
34/20= 1.70 WCPM for Rita
Peer Group Rate = 1.20 WCPM growth (at benchmark), 1.40 WCPM (for “some risk” benchmark)
REALISTIC? Not unless you increase AET
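The calculation on this slide (the gap divided by the weeks remaining to the benchmark, then compared against peer growth) can be sketched as follows; the function name is illustrative:

```python
def required_growth_rate(current, benchmark, weeks):
    """Weekly growth needed to reach the benchmark in `weeks` weeks
    (the slide's gap-divided-by-weeks calculation)."""
    return (benchmark - current) / weeks

# Rita: 20 WCPM now, February benchmark of 54 WCPM, 20 weeks away
rate = required_growth_rate(20, 54, 20)
peer_rate = 1.20  # typical weekly WCPM growth at benchmark, per the slide

print(f"required: {rate:.2f} WCPM/week")  # required: 1.70 WCPM/week
# Per the slide, a required rate above peer growth is not realistic
# without increasing academic engaged time (AET).
print("realistic without more AET:", rate <= peer_rate)  # False
```

The same two-line calculation applies to any target: plug in the current level, the benchmark for the chosen date, and the number of weeks between them.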
49. Decision Model at Tier 2- Strategic Interventions & Instruction Supplemental, small group instruction (3-4 students with similar skill levels)
Standard protocol intervention
3x per week, 30 minutes each
Team selects PALS (Peer Tutoring Strategy)
Implemented by 2 different available instructional personnel
Implemented for 8 weeks
Progress monitoring once every 2 weeks
Training Notes
The data based decision making team decides that the PALS program, a peer tutoring program, would be an excellent method for Rita to improve her reading. The specifics of PALS are described by clicking the link and showing the audience the following information off the link:
Click PALS Manual, Sample, PALS Student Question Card. This illustrates two examples of the strategies of the PALS program, paragraph shrinking and prediction relay.
2. Return to click DEMO VIDEO, PALS, and show the two video clips from the PALS. Although the images are small and cannot be enlarged, they illustrate the nature of a standard protocol intervention.
The intervention will be implemented in small groups from the second and third grade where students have similar skill levels and needs. The team puts the intervention into place 3 times per week for 30 minutes each, two different instructional personnel (the special ed teacher and an instructional aide) are available to facilitate the intervention, and the intervention will be in place for 8 weeks with PM conducted every 2 weeks.
Given that this is a strategic intervention, one increases the intensity of the supplemental instruction AND the PM over the benchmark, Tier 1 level, but not to the level that will occur at Tier 3. Remind the audience that it is the variation in intensity of instruction and the frequency of monitoring that are the key variables that change as one moves up the tiers.
50. Intervention Implementation Find additional time
Ensure that supplemental and intensive interventions are integrated with core instruction/behavior plan
Intervention support available
Frequent meetings with teacher(s)
Data review
Review intervention steps
51. Intervention Implementation Identify number of intervention support personnel available
Identify the number of students needing supplemental and intensive support
See if the ratios make sense!
Example
600 students, 300 making benchmarks
30 teachers, 6 support personnel
30 teachers for 300 students
6 support staff for 300 students
DOES NOT MAKE SENSE
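The ratio check in this example is simple arithmetic; a sketch (function and key names are illustrative):

```python
def support_ratios(total_students, at_benchmark, teachers, support_staff):
    """Students-per-adult ratios for the two groups on the slide:
    teachers serve the students making benchmarks; support personnel
    serve the students needing supplemental or intensive help."""
    needing_support = total_students - at_benchmark
    return {
        "students_per_teacher": at_benchmark / teachers,
        "students_per_support": needing_support / support_staff,
    }

# 600 students, 300 making benchmarks, 30 teachers, 6 support personnel
print(support_ratios(600, 300, 30, 6))
# {'students_per_teacher': 10.0, 'students_per_support': 50.0}
```

A 10:1 ratio for students who are already succeeding versus 50:1 for students who need extra help is the mismatch the slide calls out.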
52. Intervention Development and Support Intervention Development
Proximal (Immediate)
Increase Supervision
Lower Difficulty Level
Distal (Longer Term)
Teach skills
Shape Behavior
Empirically Supported
53. Intervention Development and Support Intervention Support (G. Noell, 2006)
Initial Week Teacher Meeting
2 or more times
Subsequent: weekly (6-8 week minimum)
Agenda for Meetings
Review Data
Review Intervention Steps
Problem Solve Barriers
55. Training Notes:
This graph depicts the outcomes of the Tier 2 intervention. The aimline, based on the decided level of expected progress, is shown in blue dots; the actual outcomes of the intervention are shown by the trendline. The dashed black line shows the anticipated outcomes over time based on the intervention implementation.
A question will arise as to whether one now continues the intervention until benchmarking occurs again in winter. I believe that IF the resources are available to continue the process until benchmarking, that would be the right decision. The student would then be exited from Tier 2 as long as they met benchmarks. IF a resource problem develops and the team cannot sustain the effort, the child would be exited to Tier 1 at the end of 8 weeks but obviously would be carefully examined as the second benchmark period was implemented.
56. Decision Model at Tier 2- Strategic Intervention & Instruction ORF = 34 wcpm, winter benchmark (still 8 weeks away) for some risk = 52 wcpm
Target rate of gain over Tier 1 assessment is 1.70 words/week
Actual attained rate of gain was 1.85 words/week
Gains above benchmark in 4 of 5 comprehension areas
Student on target to attain benchmark
Step 2: Is student responsive to intervention?
Training Notes:
The decision-making process at Tier 2 shows that Rita was responsive to the intervention and would not be considered any further as a student with a disability. Given her data, she would probably exit the Tier 2 intervention and return to Tier 1, core instruction only.
57. Elsie Second grade student
End of School Year
Regular Education
Scores at 62 wcpm in second grade material
Teacher judges (based on in-class observation/evaluation) comprehension to not be substantially different from ORF – not great, not terrible
At the end of second grade, Elsie is a reader, but not a good reader. Her fluency is below benchmark, and it is also below that of her peers.
59. Decision Model at Tier 1- General Education Instruction Step 1: Screening
ORF = 62 wcpm, end of second grade benchmark for at risk is 70 wcpm (see bottom of box)
Compared to other Heartland students, Elsie scores around the 12th percentile + or -
Elsie’s teacher reports that she struggles with multisyllabic words and that she makes many decoding errors when she reads
Is this student at risk?
Training Notes:
This is an example from a school that collected DIBELS data for a time, but did not use it. The spring of this school year is when they first started using their data to help make instructional decisions. Elsie’s historical data throughout second grade (fall and winter) are included for illustrative purposes.
These are Tier 1 screening data indicating an intense problem.
60. Decision Model at Tier 2- Supplemental Instruction Supplemental, small group instruction will be provided to Elsie
She will participate in two different supplemental groups, one focused on Decoding (Phonics for Reading; Archer) and one focused on fluency building (Read Naturally; Imholt)
She will participate in small group instruction 3x per week, 30 minutes each – and she will also continue with her core instruction
Supplemental instruction implemented by certified teachers in her school (2 different teachers)
Progress monitoring about every 2 weeks
Training Notes
Additional data were collected on Elsie’s performance using additional reading fluency passages to look at both fluency and accuracy (a proxy for decoding); she also completed some maze comprehension assessments. Because her reading accuracy was below the preset cutoff in her school (95%), Elsie was asked to read some additional passages to elicit a set of errors. Her errors were typified and summarized. There were a number of patterns present in her errors (including multisyllabic words, compound word errors, and leaving off suffixes). It was noted that Elsie’s fluency also decreases as she is given even slightly more difficult passages.
The literacy team in her school placed her in two existing supplemental groups as described above.
62. Data-Based Determination of Expectations: Elsie Benchmark Level: 90 WCPM
Current Level: 47 WCPM
Difference to June Benchmark (Gap): 43 WCPM
Time to Benchmark: 41 Weeks
Rate of Growth Required:
43/41 ≈ 1.05 WCPM per week for Elsie
NOT VERY AMBITIOUS!
What would happen if we moved the target to the middle of the “some risk box?”
64. Data-Based Determination of Expectations: Elsie Benchmark Level: 100 WCPM
Current Level: 47 WCPM
Difference to June Benchmark (Gap): 53 WCPM
Time to Benchmark: 41 Weeks
Rate of Growth Required:
53/41= 1.29 WCPM for Elsie
Peer Group Rate = about 1.1 WCPM/week growth (at benchmark); 1.2 WCPM/week (for the “some risk” benchmark)
REALISTIC? Not unless you increase AET (Academic Engaged Time)
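The gap analysis on this slide reduces to a short calculation. The sketch below uses the slide’s numbers (the “some risk” target of 100 WCPM, 41 weeks to benchmark); the function and variable names are illustrative, not from the presentation.

```python
# Sketch of the slide's gap analysis: the growth rate Elsie needs
# to reach the June benchmark, compared against typical peer growth.

def required_growth_rate(benchmark_wcpm, current_wcpm, weeks_to_benchmark):
    """WCPM gained per week needed to close the gap by the benchmark date."""
    gap = benchmark_wcpm - current_wcpm
    return gap / weeks_to_benchmark

# Numbers from the slide: target 100 WCPM, current 47 WCPM, 41 weeks out.
rate = required_growth_rate(benchmark_wcpm=100, current_wcpm=47,
                            weeks_to_benchmark=41)
peer_rate = 1.1  # approximate peer growth (WCPM/week) at benchmark

print(round(rate, 2))    # 1.29
print(rate > peer_rate)  # True: Elsie must grow faster than typical peers
```

This is why the slide asks whether the target is realistic: a required rate above the peer rate means Elsie must outgrow typical peers, which is unlikely without more engaged instructional time.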
66. Tier 2- Supplemental Instruction - Revision The intervention appeared to be working. The teachers concluded that what was needed was increased time in supplemental instruction.
They worked together and found a way to give Elsie 30 minutes of supplemental instruction, on phonics and fluency, 5x per week.
Training Notes:
The revised intervention made a huge difference. Elsie was now getting two hours of reading instruction a day (between her core and her supplemental instruction) and making gangbusters progress.
67. Data-Based Determination of Expectations: Elsie Benchmark Level: 100 WCPM
Current Level: 56 WCPM
Difference to June Benchmark (Gap): 44 WCPM
Time to Benchmark: 27 Weeks
Rate of Growth Required:
44/27 ≈ 1.63 WCPM per week for Elsie
Peer Group Rate = 1.1 WCPM/week growth (at benchmark); 1.2 WCPM/week (for the “some risk” benchmark)
REALISTIC? Not unless you increase AET
70. By the Spring of Third Grade Elsie’s reading accuracy had improved significantly. Her average accuracy hovers around 95 percent.
She still struggles with multisyllabic words
Normatively, at periodic and annual review time, she is now performing at about the 19th percentile compared to peers from Heartland AEA. She is catching up!
Elsie is not a student with a disability
71. Decision Model at Tier 1- General Education Instruction Step 1: Screening
ORF = on track for 100 wcpm, end of third grade benchmark for some risk is 110 wcpm (see top of box)
Compared to other Heartland students, Elsie scores around the 19th percentile, give or take
Is this student at risk?
Still a bit of risk, maintain Tier II instruction for another benchmark period, if progress continues, move to tier 1
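The percentile comparisons used at these decision points can be sketched as a local percentile-rank computation. The peer WCPM scores below are hypothetical placeholders, not actual Heartland AEA norms.

```python
# Sketch: computing a student's local percentile rank against peers.
# Peer scores here are made-up illustrations, not real norm data.

def percentile_rank(score, peer_scores):
    """Percent of peers scoring strictly below the student's score."""
    below = sum(1 for s in peer_scores if s < score)
    return 100.0 * below / len(peer_scores)

peers = [88, 92, 95, 97, 100, 102, 105, 110, 115, 120]  # hypothetical WCPM
print(percentile_rank(96, peers))  # 30.0
```

With real local norms, the same calculation would place a student like Elsie relative to her district peers at each benchmark period.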
72. Steven Second grade student
Beginning of school year
Regular Education
Scores at 20 wcpm in second grade material
Teacher judges (based on in-class observation/evaluation) comprehension to not be substantially different from ORF
Training Notes
Steven is a third student from the same class. All the same things from previous slides introducing students apply here.
74. Decision Model at Tier 1- General Education Instruction Step 1: Screening
ORF = 20 wcpm, fall benchmark for some risk = 44 wcpm
Comprehension screen also shows deficits in all 5 areas
Current Gen Ed Instruction is NOT Working
Is this student at risk?
Training Notes:
The decision process at Tier 1 shows that instruction in the core curriculum alone is not working, so one moves to a Tier 2 strategic intervention.
75. Decision Model at Tier 2- Strategic Interventions & Instruction Supplemental, small group instruction in Rita’s group (3-4 students with similar skill levels)
Standard protocol implementation
3x per week, 30 minutes each
Team selects PALS (Peer Tutoring Strategy)
Implemented by 2 different available instructional personnel
Implemented for 8 weeks
Progress monitoring once every 2 weeks
Training Notes:
Steven is placed into the same group with Lisa for PALS. This slide just reiterates that this is a Tier 2 strategic intervention.
76. Training Notes:
Steven’s performance on ORF is shown here. While he is making some progress, his rate of improvement over the 8 weeks is about three times slower than the target, which is shown on the aimline. One can see that if the same trend were maintained, he is unlikely to meet the winter benchmark.
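An attained rate of gain like the one quoted in these notes is typically estimated as the slope of a line fit to the progress-monitoring scores. The sketch below uses a minimal ordinary-least-squares slope with made-up biweekly ORF data, chosen only to illustrate a rate roughly three times slower than a 1.5 WCPM/week aimline; it is not Steven’s actual data.

```python
# Sketch: estimating a student's attained rate of gain (WCPM/week)
# from biweekly progress-monitoring scores via an OLS slope.

def ols_slope(weeks, scores):
    """Ordinary-least-squares slope of scores regressed on weeks."""
    n = len(weeks)
    mean_w = sum(weeks) / n
    mean_s = sum(scores) / n
    num = sum((w - mean_w) * (s - mean_s) for w, s in zip(weeks, scores))
    den = sum((w - mean_w) ** 2 for w in weeks)
    return num / den

weeks  = [0, 2, 4, 6, 8]       # biweekly monitoring across 8 weeks
scores = [20, 21, 22, 23, 24]  # hypothetical ORF scores (WCPM)

attained = ols_slope(weeks, scores)
target   = 1.5                 # aimline rate (WCPM/week) from the slides

print(round(attained, 2))           # 0.5
print(round(target / attained, 1))  # 3.0 -- about 3x slower than the aimline
```

Fitting a trendline rather than comparing only the first and last scores makes the estimate less sensitive to any single noisy data point.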
77. Decision Model at Tier 2- Strategic Intervention & Instruction Step 2: Is student responsive to intervention?
ORF = 24 wcpm, winter benchmark (still 8 weeks away) for some risk = 52 wcpm
Target rate of gain over Tier 1 assessment is 1.5 words/week
Actual attained rate of gain was 0.55 words/week
Below comprehension benchmarks in 4 of 5 areas
Student NOT on target to attain benchmark
Is student responsive to intervention at Tier 2?
Training Notes:
The decision-making process at Tier 2 shows that while gains were present, they were far below the expected level. As such, Steven needs to move to Tier 3 interventions, which would be greater in intensity and come with more frequent progress monitoring. At this point he has NOT been responsive to intervention.
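The three response categories introduced earlier (positive, questionable, poor) can be sketched as a simple rule comparing the attained rate of gain to the target rate. The 0.9 and 0.5 cutoffs below are illustrative assumptions, not values specified anywhere in the presentation; real teams make this judgment from the full data picture.

```python
# Sketch of an RtI response-classification rule: compare the
# attained growth rate to the target (aimline) rate.
# The cutoff fractions are illustrative assumptions.

def classify_response(attained_rate, target_rate,
                      good_frac=0.9, questionable_frac=0.5):
    """Label response to intervention as positive, questionable, or poor."""
    ratio = attained_rate / target_rate
    if ratio >= good_frac:          # gap converging toward benchmark
        return "positive"
    if ratio >= questionable_frac:  # improving, but gap not converging
        return "questionable"
    return "poor"                   # gap not closing

# Steven's Tier 2 numbers from the slide: 0.55 attained vs. 1.5 target.
print(classify_response(0.55, 1.5))  # poor
```

The same rule applied to his later Tier 3 numbers (2.32 attained vs. 1.5 target) would label the response positive, matching the decision on the following slides.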
78. Outline – Implementing An RtI System Tier 3 Decision Making
Conduct additional, instructionally relevant diagnostic assessments to determine more precisely student performance profile
Create individual hypotheses and predictions based on student performance
Match intensive instruction to student performance needs (identify resources within the school to support intensive instruction, e.g., title 1, ELL, SPED)
Monitor progress at least once a week
Modify intensive instruction as necessary based on progress monitoring data
Move students across tiers as data warrant
79. Decision Model at Tier 3- Intensive Interventions & Instruction Supplemental, 1:3, pull-out instruction
Individualized Problem-Solving, Targeted Instruction
Specific decoding and analysis strategies
Emphasis on comprehension strategies
5x per week, 30 minutes each
Implemented by 2 different available instructional personnel
Implemented for 8 weeks
Progress monitoring once every week
Training Notes:
At Tier 3, the team looks specifically at Steven’s skill development and pulls together an individualized plan that emphasizes specific decoding and analysis strategies, increases the intensity of instruction (5x per week), and increases the frequency of progress monitoring (once per week). Two additional peers with a similar level of need at Tier 3 are grouped with Steven; these students come from a different class within his grade within his school. Again, two different personnel are selected to implement the strategies: a regular ed teacher and a special ed teacher.
80. Training Notes
This slide depicts the outcomes of the intensive strategic intervention effort. As seen in the trendline, Steven shows substantial improvement and reaches a rate of gain that is likely to lead to a successful winter benchmark. Again, the question will be raised of whether he would continue until the winter benchmark. Given that this would now be 16 weeks after starting Tier 2 intervention, the winter benchmark is about to occur, so the team would likely leave him in the strategy until the benchmark was taken. Assuming he meets the benchmark, the team may ease him back to Tier 2 strategies to see whether a less intense effort will sustain his progress.
81. Decision Model at Tier 3- Intensive Intervention & Instruction Step 3: Is student responsive to intervention at Tier 3?
ORF = 45 wcpm, winter benchmark (still 4 weeks away) for some risk = 52 wcpm
Target rate of gain over Tier 2 assessment is 1.5 words/week
Actual attained rate of gain was 2.32 words/week
At or above comprehension benchmarks in 4 of 5 areas
Student on target to attain benchmark
Step 3: Is student responsive to intervention?
Move student back to Strategic intervention
Training Notes:
Decision making at Tier 3 shows that Steven is responsive to intervention, and there should be no need to move toward a special ed determination decision. Given his data, one would probably continue monitoring through the end of the benchmark period and, if he maintains his progress, return to a Tier 2 intervention.
82. Bart Second grade student
Beginning of school year
Regular Education
Scores at 20 wcpm in second grade material
Teacher judges (based on in-class observation/evaluation) comprehension to not be substantially different from ORF
Training Notes:
Bart is the 4th student from the same class.
83. Training Notes:
Bart moves through the same process as Rita and Steven, but with far less success. As one can see, even with intensive tier 3 intervention, Bart’s progress does not reach the level that will likely lead him to meet winter benchmarks. Given that we have moved through tier 2 and tier 3 interventions, and these interventions have been done with integrity, Bart is referred for consideration for special education eligibility.
84. Decision Model at Tier 3- Intensive Intervention & Instruction Step 3: Is student responsive to intervention at Tier 3?
ORF = 31 wcpm, winter benchmark (still 4 weeks away) for some risk = 52 wcpm
Target rate of gain over Tier 2 assessment is 1.5 words/week
Actual attained rate of gain was 0.95 words/week
Below comprehension benchmarks in all areas
Student NOT on target to attain benchmark
Training Notes
Bart is NOT responsive to intervention at the level that will lead to successful outcomes, even with intensive interventions. As such, one moves to a special ed eligibility determination.
86. HOW DO WE DOCUMENT THIS?
87. Problem-Solving Process ADDITIONAL DATA ADDED THROUGH THE PROBLEM-SOLVING PROCESS AND MEETING - It is extremely important to set a goal, as well as a follow-up date, to evaluate the intervention process.
88. Criteria for Special Education Eligibility I: Establish NEED
•Significant gap exists between student and benchmark/peer performance.
•The Response to Intervention is insufficient to predict attaining benchmark
•Student is not a functionally independent learner
II: Student Possesses CHARACTERISTICS
•Complete comprehensive evaluation
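The “Establish NEED” criteria above can be expressed as a simple predicate for illustration. The boolean inputs and function name below are hypothetical; in practice each criterion is a team judgment grounded in gap analysis, RtI data, and classroom evidence, not a formula.

```python
# Sketch of the slide's three NEED criteria as a predicate.
# All three must point toward need for eligibility consideration.

def establishes_need(significant_gap, rti_sufficient,
                     functionally_independent):
    """True only if gap is significant, RtI is insufficient to predict
    attaining benchmark, and the student is not an independent learner."""
    return (significant_gap
            and not rti_sufficient
            and not functionally_independent)

# A Bart-like profile: large gap, insufficient RtI, not independent.
print(establishes_need(True, False, False))  # True
```

If NEED is established, the process then moves to the second criterion on the slide: a comprehensive evaluation to confirm the student possesses the relevant characteristics.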
89. IDEIA Comprehensive Evaluation Problem Identification
Oral Expression
Listening Comprehension
Written Expression
Basic Reading Skill
Reading Fluency Skills
Reading Comprehension
Mathematics Calculation
Mathematics Problem-Solving
90. IDEIA Comprehensive Evaluation Relevant behavior noted during the observation, and the relationship of that behavior to academic functioning
Data from required observation
91. IDEIA Comprehensive Evaluation The child does not achieve adequately for the child’s age or to meet state-approved grade-level standards
GAP Analysis from Tier 1
AND
92. IDEIA Comprehensive Evaluation The child does not make sufficient progress to meet age or state-approved grade-level standards when using a process based on the child’s response to scientific, research-based intervention
RtI Data from Tiers 2 and 3
OR
93. IDEIA Comprehensive Evaluation The child exhibits a pattern of strengths and weaknesses in performance, achievement, or both, relative to age, state-approved grade-level standards, or intellectual development, that is determined by the group to be relevant to the identification of an SLD, using appropriate assessments
Differential Academic Performance Levels
NOTE: Requirement for a severe discrepancy between ability and achievement was removed.
94. IDEIA Comprehensive Evaluation The findings are not primarily the result of:
Sensory or Motor Disability
Mental Retardation
Assess Adaptive Behavior First
Emotional Disturbance
Data from observation
Observation and performance data
Cultural Factors
AYP Data for Race (NCLB)
Comparative AYP for Culture (Local Norms)
Environmental or Economic Disadvantage
AYP Data for Low SES
Limited English Proficiency
AYP Data for LEP