Formative Assessment: Specific Tools to Measure Student Academic Skills. Jim Wright. www.interventioncentral.org
Effective Formative Evaluation: The Underlying Logic… • What is the relevant academic or behavioral outcome measure to be tracked? • Is the focus the core curriculum or system, subgroups of underperforming learners, or individual struggling students? • What method(s) should be used to measure the target academic skill or behavior? • What goal(s) are set for improvement? • How does the school check up on progress toward the goal(s)?
Use Time & Resources Efficiently By Collecting Information Only on ‘Things That Are Alterable’ “…Time should be spent thinking about things that the intervention team can influence through instruction, consultation, related services, or adjustments to the student’s program. These are things that are alterable. …Beware of statements about cognitive processes that shift the focus from the curriculum and may even encourage questionable educational practice. They can also promote writing off a student because of the rationale that the student’s insufficient performance is due to a limited and fixed potential.” p. 359 Source: Howell, K. W., Hosp, J. L., & Kurns, S. (2008). Best practices in curriculum-based evaluation. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology V (pp. 349-362). Bethesda, MD: National Association of School Psychologists.
School Instructional Time: The Irreplaceable Resource “In the average school system, there are 330 minutes in the instructional day, 1,650 minutes in the instructional week, and 56,700 minutes in the instructional year. Except in unusual circumstances, these are the only minutes we have to provide effective services for students. The number of years we have to apply these minutes is fixed. Therefore, each minute counts and schools cannot afford to support inefficient models of service delivery.” p. 177 Source: Batsche, G. M., Castillo, J. M., Dixon, D. N., & Forde, S. (2008). Best practices in problem analysis. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology V (pp. 177-193).
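The arithmetic in the quoted passage can be checked in a few lines of Python. This is only a quick sketch; the 56,700-minute figure is simply the value quoted above, not an independently derived number.

```python
# Instructional minutes in the "average school system" quoted above.
minutes_per_day = 330
minutes_per_week = minutes_per_day * 5    # 5 school days -> 1,650 min/week
minutes_per_year = 56_700                 # figure quoted in the passage

# How many instructional weeks the quoted yearly total implies.
weeks_of_instruction = minutes_per_year / minutes_per_week

print(minutes_per_week)                   # 1650
print(round(weeks_of_instruction, 1))     # 34.4
```

The point of the exercise is simply that the weekly and yearly totals are fixed multiples of the 330-minute day, which is why the passage argues every minute counts.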
Summative data is static information that provides a fixed ‘snapshot’ of the student’s academic performance or behaviors at a particular point in time. School records are one source of data that is often summative in nature—frequently referred to as archival data. Attendance data and office disciplinary referrals are two examples of archival records, data that is routinely collected on all students. In contrast to archival data, background information is collected specifically on the target student. Examples of background information are teacher interviews and student interest surveys, each of which can shed light on a student’s academic or behavioral strengths and weaknesses. Like archival data, background information is usually summative, providing a measurement of the student at a single point in time.
Formative assessment measures are those that can be administered or collected frequently—for example, on a weekly or even daily basis. These measures provide a flow of regularly updated information (progress monitoring) about the student’s progress in the identified area(s) of academic or behavioral concern. Formative data provide a ‘moving picture’ of the student; the data unfold through time to tell the story of that student’s response to various classroom instructional and behavior management strategies. Examples of measures that provide formative data are Curriculum-Based Measurement probes in oral reading fluency and Daily Behavior Report Cards.
Formative Assessment Defined “Formative assessment [in academics] refers to the gathering and use of information about students’ ongoing learning by both teachers and students to modify teaching and learning activities. …Today…there are compelling research results indicating that the practice of formative assessment may be the most significant single factor in raising the academic achievement of all students—and especially that of lower-achieving students.” p. 7 Source: Harlen, W. (2003). Enhancing inquiry through formative assessment. San Francisco, CA: Exploratorium. Retrieved on September 17, 2008, from http://www.exploratorium.edu/ifi/resources/harlen_monograph.pdf
Formative Assessment: Essential Questions… 1. What is the relevant academic or behavioral outcome measure to be tracked? Problems identified for formative assessment should be: • Important to school stakeholders. • Measurable & observable. • Stated positively as ‘replacement behaviors’ or goal statements rather than as general negative concerns (Batsche et al., 2008). • Based on a minimum of inference (T. Christ, 2008). Sources: Batsche, G. M., Castillo, J. M., Dixon, D. N., & Forde, S. (2008). Best practices in problem analysis. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology V (pp. 177-193). Christ, T. (2008). Best practices in problem analysis. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology V (pp. 159-176).
Academic or Behavioral Targets Are Stated as ‘Replacement Behaviors’ “The implementation of successful interventions begins with accurate problem identification. Traditionally, the student problem was stated as a broad, general concern (e.g., impulsive, aggressive, reading below grade level) that a teacher identified. In a competency-based approach, however, the problem identification is stated in terms of the desired replacement behaviors that will increase the student’s probability of successful adaptation to the task demands of the academic setting.” p. 178 Source: Batsche, G. M., Castillo, J. M., Dixon, D. N., & Forde, S. (2008). Best practices in problem analysis. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology V (pp. 177-193).
Inference: Moving Beyond the Margins of the ‘Known’ “An inference is a tentative conclusion without direct or conclusive support from available data. All hypotheses are, by definition, inferences. It is critical that problem analysts make distinctions between what is known and what is inferred or hypothesized….Low-level inferences should be exhausted prior to the use of high-level inferences.” p. 161 Source: Christ, T. (2008). Best practices in problem analysis. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology V (pp. 159-176).
Examples of High vs. Low Inference Hypotheses. The results of grade-wide benchmarking in reading show that a target 2nd-grade student can read aloud at approximately half the rate of the median child in the grade. • Low-Inference Hypothesis. The student needs to build reading fluency skills to become more proficient in decoding. • High-Inference Hypothesis. The student has an auditory processing issue that prevents success in reading. The student requires a multisensory approach to reading instruction to address reading deficits. [Slide graphic: each hypothesis is placed along a continuum running from the ‘known’ (available data) into the ‘unknown’ (inference).]
Adopting a Low-Inference Model of Reading Skills • 5 Big Ideas in Beginning Reading • Phonemic Awareness • Alphabetic Principle • Fluency with Text • Vocabulary • Comprehension Source: Big ideas in beginning reading. University of Oregon. Retrieved September 23, 2007, from http://reading.uoregon.edu/index.php
Formative Assessment: Essential Questions… 2. Is the focus the core curriculum or system, subgroups of underperforming learners, or individual struggling students? Apply the ‘80-15-5’ Rule (T. Christ, 2008): • If fewer than 80% of students are successfully meeting academic or behavioral goals, the formative assessment focus is on the core curriculum and general student population. • If no more than 15% of students are not successful in meeting academic or behavioral goals, the formative assessment focus is on small-group ‘treatments’ or interventions. • If no more than 5% of students are not successful in meeting academic or behavioral goals, the formative assessment focus is on the individual student. Source: Christ, T. (2008). Best practices in problem analysis. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology V (pp. 159-176).
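The decision logic of the ‘80-15-5’ rule can be sketched as a small function. This is a hypothetical helper for illustration only, not part of any cited source; the thresholds follow the three bullets above.

```python
def assessment_focus(percent_meeting_goals: float) -> str:
    """Map the percent of students meeting academic/behavioral goals
    to a formative assessment focus, per the '80-15-5' rule."""
    if percent_meeting_goals < 80:
        # Fewer than 80% successful: examine the core curriculum
        # and general student population.
        return "core curriculum / general student population"
    elif percent_meeting_goals < 95:
        # No more than 15% unsuccessful: small-group interventions.
        return "small-group interventions"
    else:
        # No more than 5% unsuccessful: individual students.
        return "individual student"

print(assessment_focus(72))   # core curriculum / general student population
print(assessment_focus(88))   # small-group interventions
print(assessment_focus(97))   # individual student
```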
RTI Literacy: Assessment & Progress-Monitoring To measure student ‘response to instruction/intervention’ effectively, the RTI Literacy model measures students’ reading performance and progress on schedules matched to each student’s risk profile and intervention Tier membership. • Benchmarking/Universal Screening. All children in a grade level are assessed at least 3 times per year on a common collection of literacy assessments. • Strategic Monitoring. Students placed in Tier 2 (supplemental) reading groups are assessed 1-2 times per month to gauge their progress with this intervention. • Intensive Monitoring. Students who participate in an intensive, individualized Tier 3 reading intervention are assessed at least once per week. Source: Burns, M. K., & Gibbons, K. A. (2008). Implementing response-to-intervention in elementary and secondary schools: Procedures to assure scientific-based practices. New York: Routledge.
LOCAL NORMS EXAMPLE: Baylor Elementary School. Twenty-three 4th-grade students were administered oral reading fluency Curriculum-Based Measurement passages at the 4th-grade level in their school. Group Norms: Correctly Read Words Per Min: Book 4-1: Raw Data (N = 23): 31 34 34 39 41 43 52 55 59 61 68 71 74 75 85 89 102 108 112 115 118 118 131 • In their current number form, these data are not easy to interpret. • So the school converts them into a visual display—a box-plot—to show the distribution of scores and to convert the scores to percentile form. • When Billy, a struggling reader, is screened in CBM reading fluency, he shows a SIGNIFICANT skill gap when compared to his grade peers.
[Box-plot: Baylor Elementary School, Grade 4, January benchmarking. Group norms, Correctly Read Words per Min, Book 4-1 (N = 23): Low value = 31; 1st quartile = 43; Median (2nd quartile) = 71; 3rd quartile = 108; High value = 131. National reading norm = 112 CRW per min; Billy = 19 CRW per min.] Source: Tindal, G., Hasbrouck, J., & Jones, C. (2005). Oral reading fluency: 90 years of measurement (Technical report #33). Eugene, OR: University of Oregon.
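The quartiles shown in the box-plot can be reproduced from the 23 raw scores. The sketch below uses the simple median-of-halves method, which matches the values on the slide; a plotting library such as matplotlib would be used to draw the actual box-plot.

```python
# Raw CBM oral reading fluency scores from the slide (Book 4-1, N = 23).
scores = sorted([31, 34, 34, 39, 41, 43, 52, 55, 59, 61, 68, 71,
                 74, 75, 85, 89, 102, 108, 112, 115, 118, 118, 131])

def median(xs):
    """Middle value (or mean of the two middle values) of a sorted list."""
    n = len(xs)
    mid = n // 2
    return xs[mid] if n % 2 else (xs[mid - 1] + xs[mid]) / 2

q2 = median(scores)                          # median of all 23 scores
q1 = median(scores[:len(scores) // 2])       # median of the lower half
q3 = median(scores[len(scores) // 2 + 1:])   # median of the upper half

# Billy's screening score of 19 CRW falls below every peer score,
# i.e., below the 1st percentile of these local norms.
billy = 19
pct_at_or_below = sum(s <= billy for s in scores) / len(scores) * 100

print(q1, q2, q3, pct_at_or_below)  # 43 71 108 0.0
```

Converting each score to a percentile rank in this way is what lets the school say how large Billy's skill gap is relative to grade peers.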
Team Activity: Formative Assessment and Your Schools • At your tables, discuss: • What kinds of formative measures your schools tend to collect most often. • How ‘ready’ your schools are to collect, interpret, and act on formative assessment data.
Formative Assessment: Essential Questions… 3. What method(s) should be used to measure the target academic skill or behavior? Formative assessment methods should be as direct a measure as possible of the problem or issue being evaluated. These assessment methods can: • Consist of General Outcome Measures or Specific Sub-Skill Mastery Measures • Include existing (‘extant’) data from the school system Curriculum-Based Measurement (CBM) is widely used to track basic student academic skills. Daily Behavior Report Cards (DBRCs) are increasingly used as one source of formative behavioral data. Source: Burns, M. K., & Gibbons, K. A. (2008). Implementing response-to-intervention in elementary and secondary schools: Procedures to assure scientific-based practices. New York: Routledge.
Formal Tests: Only One Source of Student Assessment Information “Tests are often overused and misunderstood in and out of the field of school psychology. When necessary, analog [i.e., test] observations can be used to test relevant hypotheses within controlled conditions. Testing is a highly standardized form of observation. …The only reason to administer a test is to answer well-specified questions and examine well-specified hypotheses. It is best practice to identify and make explicit the most relevant questions before assessment begins. …The process of assessment should follow these questions. The questions should not follow assessment.” p. 170 Source: Christ, T. (2008). Best practices in problem analysis. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology V (pp. 159-176). Bethesda, MD: National Association of School Psychologists.
Curriculum-Based Measurement: Assessing Basic Academic Skills
Curriculum-Based Measurement: Advantages as a Set of Tools to Monitor RTI/Academic Cases • Aligns with curriculum goals and materials • Is reliable and valid (has ‘technical adequacy’) • Is criterion-referenced: sets specific performance levels for specific tasks • Uses standard procedures to prepare materials, administer, and score • Samples student performance to give objective, observable ‘low-inference’ information about student performance • Has decision rules to help educators interpret student data and make appropriate instructional decisions • Is efficient to implement in schools (e.g., training can be done quickly; the measures are brief and feasible for classrooms, etc.) • Provides data that can be converted into visual displays for ease of communication Source: Hosp, M. K., Hosp, J. L., & Howell, K. W. (2007). The ABCs of CBM. New York: Guilford.
CBM Student Reading Samples: What Difference Does Fluency Make? • 3rd Grade: 19 Words Per Minute • 3rd Grade: 70 Words Per Minute • 3rd Grade: 98 Words Per Minute
CBM techniques have been developed to assess: • Phonemic awareness skills • Reading fluency • Reading comprehension • Early math skills • Math computation • Math applications & concepts • Writing • Spelling
CBM Math Measures: Selected Sources • AimsWeb (http://www.aimsweb.com) • Easy CBM (http://www.easycbm.com) • iSteep (http://www.isteep.com) • EdCheckup (http://www.edcheckup.com) • Intervention Central (http://www.interventioncentral.org)
Measuring General vs. Specific Academic Outcomes • General Outcome Measures: Track the student’s increasing proficiency on general curriculum goals such as reading fluency. Example: CBM-Oral Reading Fluency (Hintze et al., 2006). • Specific Sub-Skill Mastery Measures: Track short-term student academic progress with clear criteria for mastery (Burns & Gibbons, 2008). Example: Letter Identification. Sources: Burns, M. K., & Gibbons, K. A. (2008). Implementing response-to-intervention in elementary and secondary schools: Procedures to assure scientific-based practices. New York: Routledge. Hintze, J. M., Christ, T. J., & Methe, S. A. (2006). Curriculum-based assessment. Psychology in the Schools, 43, 45-56.
Formative Assessment: Essential Questions… 4. What goal(s) are set for improvement? Goals are defined at the system, group, or individual student level. Goal statements: • Are worded in measurable, observable terms. • Include a timeline for achieving those goals. • Are tied to the formative assessment methods used to monitor progress toward the goal(s).
Writing CBM Goals in Student IEPs (Wright, 1992) Source: Wright, J. (1992). Curriculum-based measurement: A manual for teachers. Retrieved on September 4, 2008, from http://www.jimwrightonline.com/pdfdocs/cbaManual.pdf
IEP Goals for CBA/CBM: Reading. In [number of weeks until Annual Review], when given a randomly selected passage from [level and name of reading series] for 1 minute, Student will read aloud at [number] correctly read words with no more than [number] decoding errors.
IEP Goals for CBA/CBM: Written Expression. In [number of weeks until Annual Review], when given a story starter or topic sentence and 3 minutes in which to write, Student will write a total of: [number] of words, or [number] of correctly spelled words, or [number] of correct word/writing sequences.
IEP Goals for CBA/CBM: Spelling. In [number of weeks until Annual Review], when dictated randomly selected words from [level and name of spelling series or description of spelling word list] for 2 minutes, Student will write [number of correct letter sequences].
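The fill-in-the-blank goal templates above lend themselves to simple string formatting. The sketch below is illustrative only: the template text follows the reading-goal slide, but the field names and the sample values filled in are hypothetical, not from the source.

```python
# Hypothetical CBM reading-goal template based on the IEP goal slide above.
READING_GOAL = (
    "In {weeks} weeks, when given a randomly selected passage from "
    "{series} for 1 minute, the student will read aloud {words} "
    "correctly read words with no more than {errors} decoding errors."
)

goal = READING_GOAL.format(
    weeks=30,                                     # weeks until Annual Review (example)
    series="[level and name of reading series]",  # left as a placeholder
    words=90,                                     # target correctly read words (example)
    errors=3,                                     # allowed decoding errors (example)
)
print(goal)
```

Keeping the goal as a template in this way makes the measurable parts (time frame, passage source, fluency target, error ceiling) explicit, which is exactly what the goal-writing guidelines above call for.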
Single-Subject (Applied) Research Designs “Single-case designs evolved because of the need to understand patterns of individual behavior in response to independent variables, and more practically, to examine intervention effectiveness. Design use can be flexible, described as a process of response-guided experimentation…, providing a mechanism for documenting attempts to live up to legal mandates for students who are not responding to routine instructional methods.” p. 71 Source: Barnett, D. W., Daly, E. J., Jones, K. M., & Lentz, F.E. (2004). Response to intervention: Empirically based special service decisions from single-case designs of increasing and decreasing intensity. Journal of Special Education, 38, 66-79.
[Progress-monitoring chart: Jared, Intervention Phase 1, Weeks 1-6. CBM oral reading fluency data points: W 1/22 = 71 CRW; W 1/29 = 77 CRW; M 2/3 = 75 CRW; Th 2/13 = 75 CRW; Th 2/27 = 79 CRW; F 3/7 = 82 CRW.]
Formative Assessment: Essential Questions… 5. How does the school check up on progress toward the goal(s)? The school periodically checks the formative assessment data to determine whether the goal is being attained. Examples of this progress evaluation process include the following: • System-Wide: A school-wide team meets on a monthly basis to review the frequency and type of office disciplinary referrals to judge whether those referrals have dropped below the acceptable threshold for student behavior. • Group Level: Teachers at a grade level assemble every six weeks to review CBM data on students receiving small-group supplemental instruction to determine whether students are ready to exit (Burns & Gibbons, 2008). • Individual Level: A building problem-solving team gathers every eight weeks to review CBM data on a student’s response to an intensive reading fluency plan. Sources: Burns, M. K., & Gibbons, K. A. (2008). Implementing response-to-intervention in elementary and secondary schools: Procedures to assure scientific-based practices. New York: Routledge. Shinn, M. R. (1989). Curriculum-based measurement: Assessing special children. New York: Guilford.
Effective Formative Evaluation: The Underlying Logic… • What is the relevant academic or behavioral outcome measure to be tracked? • Is the focus the core curriculum or system, subgroups of underperforming learners, or individual struggling students? • What method(s) should be used to measure the target academic skill or behavior? • What goal(s) are set for improvement? • How does the school check up on progress toward the goal(s)?
Team Activity: Data ‘Decision Points’ • At your tables: • Discuss what opportunities are available at the school, group, or individual student level to discuss data on student performance and make decisions about the effectiveness of your instructional or intervention programs.
School-Wide Case Example: Using Data to Evaluate Appropriateness of Core Reading Program
“Risk for reading failure always involves the interaction of a particular set of child characteristics with specific characteristics of the instructional environment. Risk status is not entirely inherent in the child, but always involves a ‘mismatch’ between child characteristics and the instruction that is provided.” (Foorman & Torgesen, 2001, p. 206) Source: Foorman, B. R., & Torgesen, J. (2001). Critical elements of classroom and small-group instruction promote reading success in all children. Learning Disabilities Research & Practice, 16, 203-212.
Direct / Indirect Instruction Continuum “Literature-based instruction emphasizes use of authentic literature for independent reading, read-alouds, and collaborative discussions. It stands in contrast to skills-based programs that are typically defined as traditional programs that use a commercially available basal reading program and follow a sequence of skills ordered in difficulty.” (Foorman & Torgesen, 2001, p. 204) Points along the continuum: • “implicit instruction in the alphabetic principle while reading trade books (implicit code)” • “less direct instruction in sound-spelling patterns embedded in trade books (embedded code)” • “direct instruction in letter-sound correspondences practiced in controlled vocabulary texts (direct code)” (Foorman & Torgesen, 2001, p. 204) Source: Foorman, B. R., & Torgesen, J. (2001). Critical elements of classroom and small-group instruction promote reading success in all children. Learning Disabilities Research & Practice, 16, 203-212.
RTI Core Literacy Instruction: Elements. Use Benchmarking/Universal Screening Data to Verify that the Current Core Reading Program is Appropriate. The school uses benchmarking/universal screening data in literacy to verify that its current reading program can effectively meet the needs of its student population at each grade level. • In grades K-2, if fewer than 80% of students are successful on phonemic awareness and alphabetics screenings, the core reading program at that grade level is patterned after direct instruction (Foorman & Torgesen, 2001). • In grades K-2, if 80% or more of students are successful on phonemic awareness and alphabetics screenings, the school may choose to adopt a reading program that provides “less direct instruction in sound-spelling patterns embedded in trade books (embedded code)” (Foorman & Torgesen, 2001, p. 205).
Comparison of Sunnyside & Baylor Schools: Winter Benchmarking: Gr 1 Source: DIBELS Website. Retrieved on May 8, 2007, from https://dibels.uoregon.edu/