Fall 2011 Child Find and Eligibility Determination for AEA Special Education Support Staff Day 2
Overview of Day 2 Discrepancy Needs Exclusionary Factors Decision Making
Discrepancy The difference between the individual’s current level of performance compared to peers’ level of performance or other expected standards at a single point in time. -- Iowa Special Education Eligibility Standards, 2006
Rigor of Decisions • A basic tenet of problem solving is that as the intensity of a problem rises, the amount of resources we use in solving the problem also rises. • Similarly, as the intensity of a problem rises, the rigor of our discrepancy information also needs to increase.
What Makes Data More or Less Rigorous? • Technical Adequacy • Objectivity • Amount • Directness of Measure
Technical Adequacy
• Standardized administration: administered under conditions that specify where, when, how, and for how long children may respond to the questions or "prompts"
• Reliability and validity of the data source
High rigor: number of letters written when all students receive 10 minutes to write the alphabet on standard paper during writing class at their desks
Low rigor: number of letters written when students use their own writing paper to write all letters during the day
Technical Adequacy
• Reliability: consistency or repeatability
• Validity: the test measures what it is intended to measure
• Meaningful measure of a targeted skill
High rigor: assessing math skills for multiplication using the permanent product of a classroom multiplication test administered in a standardized fashion
Low rigor: assessing math skills for the unit using total grades for the unit made up of homework, tests, and quizzes
Technical Adequacy: So What?
Data must be able to be compared to a performance standard to be useful for making decisions.
High rigor: peer comparison
Low rigor: teacher expectation
Technical Adequacy: DIBELS Oral Reading Fluency (High)
Technical Adequacy: Teacher-designed math rubric (Low)
Objectivity
• Data that refer to observable and measurable characteristics of the problem
• Objective data can be assessed quantitatively or qualitatively
High rigor: Aggression measured as the number of incidents of aggression
Low rigor: Aggression measured as the number of suspensions (suspensions ≠ aggression)
Objectivity: BASC (behavior rating scale) used to measure a specific behavior of concern (Low)
Objectivity: Parent log of the number of ounces consumed (High)
Amount
• Multiple data sources
• Consistent data collected at different times
• Consistency across data provides more confidence in our decisions
High rigor: a median of 20 blurt-outs during 30-minute observation periods across multiple settings with multiple teachers
Low rigor: 1 data point indicating 20 blurt-outs during a 30-minute period
Amount: Baseline collected once (Low)
Amount: Stable and representative baseline collected 3 times across 3 days (High)
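To make the "amount" element concrete, here is a minimal sketch (in Python) of summarizing repeated observations with a median, as in the blurt-out example above; the counts and variable names are illustrative, not data from the AEA materials.

```python
from statistics import median

# Hypothetical blurt-out counts from three 30-minute observations
# across different settings and teachers (made-up baseline data).
observations = [22, 18, 20]
print(f"Median blurt-outs per 30 minutes: {median(observations)}")   # 20
```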
Directness
• Measures what you intend and need to measure
• Skill specific
• Example methods: direct observation/assessment, review of permanent products, parent checklist, teacher rating, teacher/parent report
High rigor: Health assessed through blood pressure, urinalysis, blood level assessments, etc.
Low rigor: Health assessed through WebMD's checklist of symptoms
Directness: Teacher tallied the student's incidences of hitting based on the definition in the intervention plan (High)
Directness: Aggression measured through a teacher's report completed at the end of the day based on memory (Low)
Checklist Not all data sources will meet all elements of TOAD. Multiple measures and data sources help ensure that all elements of TOAD can be addressed. Some measures weigh more heavily in decisions than others.
Checklist: Let’s Do One Together Ms. K made a checklist of morning routine tasks. She asked the TA to complete it based on Katie's independence with her morning routine. The TA completed it one time, during her break at lunchtime.
Comparing Data • Is the 1st ranked team twice as good as the 2nd ranked team? • Is the difference between the skills of the 3rd and 4th ranked teams the same as the difference between the skills of the 18th and 19th ranked teams?
Four Types of Data: Nominal, Ordinal, Interval, Ratio
Nominal • A scale of measurement in which numbers stand for names • Allows only for classification Examples: 1 = proficient, 2 = non-proficient; 1 = true, 2 = neutral, 3 = false
Think About “Our school meets the unique needs of all of its students.” On a Likert scale of 1-5 (1=strongly disagree and 5=strongly agree), the average score was 2.8. Thus, more than ½ of the community stakeholders have positive perceptions regarding this question. Is this an accurate statement?
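One way to see why this statement is not accurate: a mean near 2.8 can come from a response distribution in which well under half of respondents chose 4 or 5. The sketch below uses hypothetical response counts of my own, not survey data from the slide.

```python
from statistics import mean

# Hypothetical Likert responses: 1 = strongly disagree ... 5 = strongly agree
responses = [1]*12 + [2]*25 + [3]*40 + [4]*17 + [5]*6   # made-up counts, n = 100

print(f"Mean rating: {mean(responses):.1f}")                  # 2.8
positive = sum(r >= 4 for r in responses) / len(responses)
print(f"Share answering 4 or 5: {positive:.0%}")              # 23%
```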
Ordinal A way of measuring that ranks (orders) observations on a variable. The differences between the ranks need not be equal (unequal intervals between units of measure). • Examples: • Percentile rank • Class rank • Rubric scores • Grade and age equivalents
Grade/Age Equivalent Scores • If a 5th grade student receives a grade equivalent score of 7.4 this DOES NOT mean that student can perform 7th grade work. • It suggests that a typical 7th grader in the fourth month of school would receive the same score if 7th graders had taken the 5th grade test.
Interval A scale of measurement that describes variables in such a way that the distance between any two adjacent units of measure (or intervals) is the same, but in which there is no meaningful zero point. • Examples: • Year (A.D.) • Fahrenheit • Celsius • Standard Scores
Ratio A scale of measurement in which any two adjoining values are the same distance apart and in which there is a meaningful zero point • Examples: • ITBS National Standard Score (NSS) • MAP-RIT scores • Percent • Frequency, duration (raw scores) • Lexile scores
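A quick way to feel the difference between interval and ratio scales, tying back to the "twice as good" questions above: ratios of scores are only meaningful when the scale has a true zero. The numbers below are my own illustrations, not values from the slides.

```python
# Illustrative numbers only (not from the slides).
fluency_a, fluency_b = 90, 45   # words read correctly per minute (ratio scale: true zero)
temp_a, temp_b = 80, 40         # degrees Fahrenheit (interval scale: no true zero)

print(fluency_a / fluency_b)    # 2.0 -> reading twice as many words is meaningful
print(temp_a / temp_b)          # 2.0 -> but 80 F is not "twice as hot" as 40 F
```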
Thumbs Up or Thumbs Down? Using national norms, the average 2nd grade student in the fall of the school year reads at a rate of 44 correct words per minute. In the spring of the year, the average 2nd grade student reads at a rate of 90 correct words per minute. To meet this goal in 24 weeks, the student must gain approximately 1.9 words per minute per week.
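A quick check of the arithmetic in the statement above; the fall and spring rates and the 24 weeks come from the slide, while the variable names are mine.

```python
fall_wcpm = 44      # average 2nd grade rate in fall (from the slide)
spring_wcpm = 90    # average 2nd grade rate in spring (from the slide)
weeks = 24          # from the slide

gain_per_week = (spring_wcpm - fall_wcpm) / weeks
print(f"Required gain: {gain_per_week:.1f} words per minute per week")   # 1.9
```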
Thumbs Up or Thumbs Down? Beth obtained an ITED National Standard Score (NSS) of 250 during her 8th grade year. She obtained a NSS of 260 during her 9th grade year. Given that average students are to grow 10 NSS points between these two years, Beth demonstrated average growth during this time.
What Types of Data Do You Use? Rubric scores, %, ITBS NSS, percentile rank, CBM scores What type of data do you frequently use? Are you using data appropriately? Are you reporting data appropriately?
Discrepancy During Evaluation AEA Special Education Procedures Manual (July, 2011), p.44
Discrepancy Multiple Methods and Data Sources
Multiple Sources of Data Using RIOT Methods There must be at least two sources of data for each area of concern.
Discrepancy During Evaluation AEA Special Education Procedures (July, 2011), p.44
Discrepancy Peer/Expected Performance
Performance Standards A performance standard (or standard of comparison) is used as a rule or basis of comparison in measuring or judging performance. Data Based Decision-Making Manual, Heartland AEA 11, 2008
Performance Standards First Consider: • Iowa Core Curriculum Essential Concepts and Skills • Iowa Early Learning Standards • Iowa Core Content Standards Then Consider: • District measure of peer performance • District/AEA/state/national norms • Developmental norms • Classroom expectations • School policies
Performance Standards Norm Referenced Comparisons • Individual’s performance is compared with the performance of a normed group • e.g. local, national, user Criterion Referenced Comparisons • Individual's performance is compared to an established standard of performance • e.g. research, developmental, parent, medical, teacher
Expected Level of Performance • The individual would be able to perform at the “floor” of an expected range. • When using percentiles, the expected range would be one standard deviation below or above the “middle” = 16th percentile to the 84th percentile. • Record the range, not just a score.
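For readers who want to verify the 16th and 84th percentile boundaries, here is a minimal sketch assuming a normal distribution and a standard-score scale with mean 100 and standard deviation 15; that scale is my assumption, the slide itself only states the percentile range.

```python
from statistics import NormalDist

dist = NormalDist(mu=100, sigma=15)   # assumed standard-score scale, assumed normal distribution
for score in (85, 100, 115):          # -1 SD, mean, +1 SD
    print(f"Standard score {score}: percentile rank ≈ {dist.cdf(score):.0%}")
# 85 -> ~16%, 100 -> 50%, 115 -> ~84%
```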
Discrepancy During Evaluation AEA Special Education Procedures (July, 2011), p.44
Discrepancy Magnitude of Discrepancy
Determining Magnitude Read the Magnitude of Discrepancy information in the Portfolio. This is a section of the Special Education Procedures Manual. Discuss the reading with your table partners.
Magnitude of the Discrepancy • Size of the difference between the standard and current performance • Ways to measure the magnitude: • Absolute difference • Percentile ranks • Discrepancy ratios
Absolute Difference • Absolute difference is the difference between the current performance and the performance standard • Percentage of points earned on an objectively defined behavior point sheet • Peers: 95%; Student: 65% • 95 – 65 = 30 percentage points (Absolute difference)
Absolute Difference Magnitude Q: How do you determine if an absolute difference is significant? A: Convert the absolute difference into a percentage, then use the guideline of 25% or more difference.
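Putting the point-sheet example and the 25% guideline together, here is a minimal sketch; the values come from the preceding slides, but dividing the absolute difference by the performance standard is one reasonable reading of "convert the absolute difference into a percentage," not a quote from the manual, and the discrepancy-ratio line is likewise my own illustration.

```python
peer_pct = 95       # standard: peers' percentage of behavior points earned
student_pct = 65    # student's percentage of behavior points earned

absolute_difference = peer_pct - student_pct           # 30 percentage points
relative_difference = absolute_difference / peer_pct   # ~0.32 of the standard

print(f"Absolute difference: {absolute_difference} percentage points")
print(f"Difference as a percentage of the standard: {relative_difference:.0%}")
print("Meets the 25%-or-more guideline" if relative_difference >= 0.25
      else "Below the 25% guideline")

# One common way to express a discrepancy ratio (my assumption, not a quote
# from the manual): the standard divided by the student's performance.
print(f"Discrepancy ratio: {peer_pct / student_pct:.2f}")   # ~1.46
```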
Percentile Rank • Describes how a score compares to other scores on the same assessment • Examples: • 30th percentile means a student scored as well or better than 30% of the comparison group • 50th percentile means a student scored as well or better than 50% of the comparison group
Percentile Rank vs. Percent
Percent: a portion of the whole; answers the question, "how much?" or "what part of 100?"; absolute
Percentile rank: describes how a score fits into the distribution of scores; answers the question, "how well compared to…?"; relative (dependent on how everyone else performs)
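To make the contrast concrete, here is a minimal sketch with hypothetical scores; the data and the helper function are mine, not from the slides. The same raw score yields a 60% (percent correct) but only the 50th percentile rank in this comparison group.

```python
def percentile_rank(score, group):
    """Percent of the comparison group scoring at or below the given score."""
    return 100 * sum(s <= score for s in group) / len(group)

student_correct, total_items = 18, 30                        # hypothetical test result
class_scores = [12, 15, 16, 18, 18, 20, 22, 24, 25, 27]      # hypothetical peer scores

print(f"Percent correct: {100 * student_correct / total_items:.0f}%")   # 60%
print(f"Percentile rank: {percentile_rank(18, class_scores):.0f}")      # 50
```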