How Do I Know If They Are Getting It?: Measuring Student Responsiveness to Reading and Writing Instruction Dr. Michael Faggella-Luby Dr. Natalie Olinghouse Dr. Michael Coyne
Research
• Conduct school-based research on developing and evaluating evidence-based practices in literacy, behavior supports, and assessment
Translating Research to Practice
• Support schools, districts, and states in adopting, implementing, and sustaining evidence-based practices
Overview • Introduction • Results from CT Reading Summit • RtI (SRBI) • Assessment • Early Reading • Adolescent Reading • Writing • Summary & Discussion
Original logic: Public health & disease prevention (Larson, 1994)
• Tertiary (FEW): Reduce complications, intensity, and severity of current cases
• Secondary (SOME): Reduce current cases of students with literacy difficulties
• Primary (ALL): Reduce new cases of students with literacy difficulties
Four Purposes for Assessment
• Screening – assessments administered to determine which children are at risk for reading/writing difficulty and will need additional intervention.
• Diagnosis – assessments that help teachers plan instruction by providing in-depth information about students' skills and instructional needs.
• Progress Monitoring – assessments that determine whether instruction or intervention is enabling students to make adequate progress.
• Evaluation – assessments that provide a bottom-line evaluation of the effectiveness of the reading/writing program.
Oral Reading Fluency
CBM: Curriculum-Based Measurement (http://dibels.uoregon.edu) (http://aimsweb.com)
Count the number of words read correctly while a student reads aloud from grade-level text for 1 minute.
"Because oral reading fluency reflects the complex orchestration of many different reading skills, it can be used in an elegant and reliable way to characterize overall reading expertise." (Fuchs, Fuchs, Hosp, & Jenkins, 2002)
Measures of oral reading fluency are highly correlated with reading comprehension in the primary grades.
ORF scoring example:
• Total Words Read: 89
• Errors: 4
• Words Read Correctly: 85
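A minimal sketch of this scoring arithmetic (the function name is hypothetical; words correct per minute is total words read minus errors, divided by the timed minutes):

```python
def words_correct_per_minute(total_words_read: int, errors: int,
                             minutes: float = 1.0) -> float:
    """Score a timed oral reading fluency (ORF) probe.

    Words read correctly = total words attempted minus errors;
    dividing by elapsed minutes yields WCPM. With the standard
    1-minute probe, WCPM equals the words-correct count itself.
    """
    words_correct = total_words_read - errors
    return words_correct / minutes

# The worked example from the slide: 89 total words, 4 errors -> 85 WCPM
print(words_correct_per_minute(89, 4))  # 85.0
```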
Oral Reading Fluency
Relationship between reading fluency and comprehension:
He had never seen dogs fight as these w___ c____ fought, and his first ex___ t__t him an unf___able l__n. It is true, it was a vi__ ex____, else he would not have lived to pr__it by it. Curly was the v___. They were camped near the log store, where she, in her friend__ way, made ad___ to a husky dog the size of a full-__ wolf, the ___not half so large as __he. __ere was no w__ing, only a leap in like a flash, a met__ clip of teeth, a leap out equal__ swift, and Curly's face was ripped open from eye to jaw. (London)
Oral Reading Fluency
A student who does not read fluently:
• Even if she has good understanding, she will have difficulty with reading comprehension
• If she also has difficulty with understanding, she will have even more difficulty with reading comprehension
A student who does read fluently:
• If she has good understanding, her reading comprehension will be good
• If she has difficulty with understanding, she will have difficulty with reading comprehension
ORF: Screening (a classification sketch follows)
• Low Risk (>77 WCPM): 50% of students
• Some Risk (53-76 WCPM): 30% of students
• High Risk (<53 WCPM): 20% of students
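A minimal sketch of applying these screening cutoffs, using the values shown on the slide (77 and 53 WCPM); the function name and student data are hypothetical, and real cutoffs vary by grade and season:

```python
def orf_risk_category(wcpm: int) -> str:
    """Classify an ORF screening score using the slide's cutoffs.

    >77 WCPM -> Low Risk, 53-76 -> Some Risk, <53 -> High Risk.
    The slide leaves a score of exactly 77 unspecified; this sketch
    assigns it to Some Risk.
    """
    if wcpm > 77:
        return "Low Risk"
    elif wcpm >= 53:
        return "Some Risk"
    else:
        return "High Risk"

scores = {"Ana": 92, "Ben": 61, "Cal": 40}  # hypothetical students
for name, wcpm in scores.items():
    print(name, orf_risk_category(wcpm))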
ORF: Progress Monitoring
[Chart: student WCPM scores plotted over time against an aimline, with a marked change in intervention]
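A minimal sketch of computing an aimline, under the common CBM convention that the aimline runs linearly from a student's baseline score to a goal score over the monitoring period (the function name and example numbers are illustrative assumptions):

```python
def aimline(baseline_wcpm: float, goal_wcpm: float, n_weeks: int) -> list[float]:
    """Return the expected WCPM for each week of progress monitoring.

    The aimline rises linearly from the baseline score (week 0)
    to the goal score (final week).
    """
    slope = (goal_wcpm - baseline_wcpm) / n_weeks
    return [baseline_wcpm + slope * week for week in range(n_weeks + 1)]

# Hypothetical example: baseline 40 WCPM, goal 60 WCPM over 10 weeks
print(aimline(40, 60, 10))  # [40.0, 42.0, 44.0, ..., 60.0]
```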
ORF: Evaluation
[Chart: first grade reading outcomes before school changes]
ORF: Evaluation
[Chart: first grade reading outcomes after school changes]
ORF: Summary
• Screening – yes
• Progress monitoring – yes, for code-based skills
• Diagnosis – no
• Evaluation – yes, but more for internal evaluation
How do you make ORF useful?
• Coordinate administration at a school-wide level
• Assess all students 3 times per year; assess students who are at risk more often
• Organize and manage data at the building level
• Supplement ORF with other diagnostic measures
• Use consistent data-based decision rules to make instructional decisions (fail-safe procedures; a sketch of one common rule follows this list)
  • Screening (who gets intervention)
  • Progress monitoring (how to intensify intervention)
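A minimal sketch of one commonly cited data-based decision rule, the "four-point rule": if the last four consecutive data points fall below the aimline, change or intensify the intervention; if all four fall above it, consider raising the goal. The choice of rule and the function names here are illustrative assumptions, not a prescription from the presenters:

```python
def decision(recent_scores: list[float], aimline_values: list[float]) -> str:
    """Apply a four-point decision rule to the latest progress data.

    Compares the last four observed WCPM scores with the aimline
    values expected for the same weeks.
    """
    last4 = list(zip(recent_scores[-4:], aimline_values[-4:]))
    if all(score < expected for score, expected in last4):
        return "Change or intensify the intervention"
    if all(score > expected for score, expected in last4):
        return "Consider raising the goal"
    return "Continue current intervention and keep monitoring"

# Hypothetical data: observed scores vs. aimline expectations
print(decision([41, 42, 42, 43], [44, 46, 48, 50]))
```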
Gates-MacGinitie Reading Tests
• Definition: group-administered vocabulary and reading comprehension achievement assessment
• Characteristics:
  • Word meaning & passage-level comprehension
  • Norm-referenced (PR, GE, etc.)
  • Developmentally appropriate measures, K-Adult
  • Two forms (S & T) for pre- and posttesting
  • 55 minutes (most levels)
  • Multiple scoring options
Summary
• Screening – yes, but must be administered in a timely and reliable manner
• Progress monitoring – no
• Diagnosis – yes, with regard to adding reading achievement information
• Evaluation – yes, allowing norm comparisons and grade-equivalent scores
How do you make the Gates-MacGinitie useful?
• Must be scored and used to make instructional placement decisions in a timely manner
• Data should be clearly organized and easily summarized for student grouping and instructional decision making
• Data should be available to classroom teachers and a school-wide data team
• Ensure standardized administration and scoring so that results are reliable and valid
Cloze
• Definition: a timed sentence-level reading comprehension measure in which every nth word is removed (a construction sketch follows this list)
• Example: AIMSweb Maze
  • 3-minute individual OR group administration
  • Standardized and normed*
  • Multiple-choice for easy scoring
  • Includes grade-level fall, winter, and spring benchmarks
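A minimal sketch of constructing a maze-style probe, assuming the common maze convention of replacing every 7th word with a three-option choice; real maze probes typically leave the first sentence intact and select distractors more carefully, and all names below are hypothetical:

```python
import random

def build_maze(text: str, n: int = 7, seed: int = 0) -> str:
    """Turn a passage into a simple maze-style probe.

    Replaces every nth word with a bracketed three-option choice:
    the correct word plus two distractors drawn from elsewhere in
    the passage.
    """
    rng = random.Random(seed)
    words = text.split()
    out = []
    for i, word in enumerate(words, start=1):
        if i % n == 0:
            distractors = rng.sample([w for w in words if w != word], 2)
            options = [word] + distractors
            rng.shuffle(options)
            out.append("[" + " / ".join(options) + "]")
        else:
            out.append(word)
    return " ".join(out)

passage = ("He had never seen dogs fight as these wolfish creatures fought, "
           "and his first experience taught him an unforgettable lesson.")
print(build_maze(passage))
```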
Summary
• Screening – yes, but perhaps not in universal assessment
• Progress monitoring – yes, but creating a usable system for interpretation/presentation is essential
• Diagnosis – maybe, with regard to adding additional reading ability information
• Evaluation – yes, allowing norm comparisons and an overall growth picture
How do you make CBM Maze useful?
• Decide what kind of information you hope to glean from the measures, whom to assess, and how often
• Careful organization of materials, administration schedules, data collection, scoring, and interpretation is essential
• Data should be available to classroom teachers and a school-wide data team in a timely manner
KU Descriptive Study Measures

Assessment Area               Measure
Alphabetics
  Decoding                    Woodcock Language Proficiency Battery-Revised (WLPB-R): Word Attack
  Word identification         WLPB-R: Word Identification
Fluency
  Pace/Rate                   Test of Word Reading Efficiency (TOWRE); TOWRE: Phonetic Decoding Efficiency
  Accuracy                    Gray Oral Reading Tests-4 (GORT-4)
Vocabulary
  Expressive                  Peabody Picture Vocabulary Test-III (PPVT-III)
  Reading                     WLPB-R: Reading Vocabulary subtest
Comprehension
  Reading comprehension       WLPB-R: Passage Comprehension subtest; GORT-4
  Listening comprehension     WLPB-R: Listening Comprehension subtest
The Learner
  Motivation                  The Motivation for Reading Questionnaire (MRQ)
  Hope                        The Hope Scale
Achievement                   Kansas State Assessment (KSA): Reading subtest
Reading Component Profile
[Chart: mean standard scores (70-115) for Proficient (∆) and ASRS (◊) groups across reading components: Alphabetics (Word ID, Word Attack), Fluency (Rate, Accuracy, SWE, PDE), Vocabulary (PPVT, WLPB-R Reading Vocabulary, Listening Comprehension), and Comprehension (Passage Comprehension, Reading Comprehension). Scores from WLPB-R, GORT, TOWRE, and PPVT subtests; * marks statistically different pairs.]
Writing Assessments: Overview
• Text level
  • Objective measures (CBM, research) – a scoring sketch follows this list
    • Number of words written
    • Number of correctly spelled words
    • Correct word sequences
  • Subjective measures (portfolio, norm-referenced tests, large-scale assessments, research)
    • Holistic
    • Analytic (e.g., 6 Traits)
    • Primary Trait
• Sentence/word level (norm-referenced tests, research, large-scale assessments)
  • Word spelling
  • Editing
  • Sentence fluency (syntax + production); sentence combining
  • Vocabulary
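A minimal sketch of the objective text-level counts, assuming a tiny hypothetical word list standing in for a full spelling dictionary. True correct-word-sequence (CWS) scoring also weighs grammar and punctuation by hand; it is reduced here to adjacent-spelling checks for illustration:

```python
# Tiny stand-in lexicon; a real scorer would use a full dictionary.
LEXICON = {"the", "dog", "ran", "fast", "down", "road", "a"}

def score_writing_sample(text: str) -> dict:
    """Compute simple CBM-style writing counts.

    - total words written (TWW)
    - correctly spelled words (CSW), checked against the lexicon
    - a rough proxy for correct word sequences (CWS): adjacent
      pairs in which both words are spelled correctly
    """
    words = text.lower().split()
    spelled = [w.strip(".,!?") in LEXICON for w in words]
    return {
        "total_words_written": len(words),
        "correctly_spelled_words": sum(spelled),
        "cws_proxy": sum(a and b for a, b in zip(spelled, spelled[1:])),
    }

print(score_writing_sample("The dog ran fst down the road."))
# {'total_words_written': 7, 'correctly_spelled_words': 6, 'cws_proxy': 4}
```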
Norm-Referenced Tests of Writing • Test of Written Language (TOWL-3) • Woodcock-Johnson III (WJ III) Writing Cluster • Wechsler Individual Achievement Test-2 (WIAT-2) Written Language Composite
Summary
• Screening – no, tests can be lengthy to administer and score
• Progress monitoring – no, tests can be lengthy to administer and score
• Diagnosis – yes, the primary use of norm-referenced writing tests
• Evaluation – no, tests can be lengthy to administer and score
How do you make norm-referenced writing tests useful?
• Choose an instrument that provides the type of information you need; different writing tests measure different skills
• Ensure standardized administration and scoring so that results are reliable and valid
• Carefully follow the scoring guidelines in the manual
• If multiple people are scoring, provide training sessions
• Include other assessments of writing to provide a complete picture of a student's writing abilities
Large-Scale Writing Assessments
• 49 states currently have direct writing assessments
  • Grades 3-5: 37 states
  • Grades 6-8: 40 states
  • Grades 9-12: 36 states
• Typical format
  • An oral, written, or pictorial prompt introduces a topic for the written response
  • Most often a 'stand-alone' test; if combined with a subject area, writing quality often is not measured