Assessment for Diagnosis
EDUC 602 Week 1
Benedictine University
Principles of Effective Assessment
• Assessment should…
  • Inform instruction
  • Be realistic/authentic: the best assessments are those closest to the skills and strategies being used in classrooms
  • Be interactive: measure both top-down and bottom-up factors
  • Be multidimensional: all major areas are assessed (general cognitive abilities, hearing, spelling, writing)
  • Be dynamic: measure what the student can do now and predict the student's potential for change
  • Cover many categories
Principles of Assessment
• Five major factors must be considered when assessing students:
  1. Reader
  2. Text
  3. Techniques being used
  4. Reading or writing tasks
  5. Situational context in which reading or writing is performed
• All of the "five interlocking aspects of reading" should be considered because they influence assessment results
Assessment Categories
• Assessments fall into the following categories:
  • Screening: identify students who appear to be "at risk" and require additional assessment
  • Diagnostic: determine areas of need
  • Monitoring: are students making adequate progress? (formative assessment)
  • Outcome: administered at the end of a semester or year (summative assessment)
• Think about the assessments you give your students:
  • Into what categories do these assessments fit?
  • Be prepared to explain one or more assessments, their categories, and why each assessment best fits its category during the Session 3 Assessment
Assessment Types
• As you read pages 66-72 in Assessing and Correcting Reading and Writing Difficulties (Gunning) and pages 322-329 in Reading Diagnosis for Teachers: An Instructional Approach (Barr), compare and contrast the following assessment types:
  • Norm-referenced vs. criterion-referenced
  • Survey vs. diagnostic tools
  • Formal vs. informal
• Be prepared to respond to the Short Answer Assignment on the following slide
• Portfolio Artifact Case Study information: you will choose one norm-referenced test and one criterion-referenced test to use in the case study that is included in your Professional Portfolio Artifact
• Begin to consider the tests you might use with your student
EDUC 602 Week 1
Evaluating Assessment Devices
Based on: Reliability, Validity, and Standard Error of Measurement
Unless stated otherwise, the content of this section is based on Chapter 3 of Gunning, T. G. (2010). Assessing and correcting reading and writing difficulties. Boston, MA: Pearson Education, Inc.
Evaluating Assessment Devices
• Assessments must be reliable and valid to make useful educational decisions about and for students
• Therefore, as a Reading Specialist making educational decisions, you must carefully consider the following areas before using assessment data:
  • Reliability
  • Validity
  • Standard Error of Measurement (SEM)
Gunning, T. G. (2010). Assessing and correcting reading and writing difficulties. Boston, MA: Pearson Education, Inc. (pp. 73-75)
Evaluating Assessment Devices
• Reliability is the consistency of an assessment device
• Test reliability is high if you give the same test to the same (or matched) students on two different occasions and the test shows similar results
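To make this concrete, here is a minimal sketch (not from the Gunning text) of how test-retest reliability is commonly quantified: the Pearson correlation between scores from two administrations of the same test. The score lists below are hypothetical.

```python
# Test-retest reliability as the Pearson correlation between two
# administrations of the same test (hypothetical scores).
from statistics import correlation  # available in Python 3.10+

# Raw scores for the same ten students on two occasions (made-up data)
first_administration = [42, 55, 61, 48, 70, 66, 53, 59, 45, 64]
second_administration = [44, 53, 63, 50, 68, 67, 51, 60, 47, 66]

r = correlation(first_administration, second_administration)
print(f"Test-retest reliability coefficient: r = {r:.2f}")
# Coefficients near 1.0 indicate highly consistent (reliable) results;
# published tests typically report reliability coefficients of .90 or higher.
```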
Evaluating Assessment Devices
• Validity is the degree to which an assessment device measures what it intends to measure and the degree to which it can be used to make educational decisions
  • Construct validity is the degree to which a test measures a theoretical trait, such as critical reading or learning ability
  • Content (or curricular) validity is the degree to which the content of a test reflects reading or tasks as they are taught in schools
• For commercially produced tests, information about reliability and validity is usually found in the test administration manual
Evaluating Assessment Devices
• Standard Error of Measurement (SEM) is an estimate of the difference between a student's obtained score and the score the student would earn if the test were perfectly reliable
• Example:
  • A norm-referenced test has an SEM of 5 percentile points, and a student scores at the 50th percentile
  • The student's score would most likely fall between the 45th and 55th percentiles on a retest
• SEM is a measure of reliability: the more reliable the test, the smaller the SEM
Gunning, T. G. (2010). Assessing and correcting reading and writing difficulties. Boston, MA: Pearson Education, Inc. (p. 75)
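The link between reliability and SEM can be illustrated with the standard psychometric formula SEM = SD × √(1 − r), where SD is the test's standard deviation and r its reliability coefficient. A small sketch follows; the SD, reliability, and score values are hypothetical, not drawn from the Gunning text.

```python
# Standard Error of Measurement from a test's standard deviation (SD)
# and reliability coefficient (r): SEM = SD * sqrt(1 - r).
# The SD, reliability, and obtained score below are hypothetical.
import math

def standard_error_of_measurement(sd: float, reliability: float) -> float:
    return sd * math.sqrt(1 - reliability)

sd, reliability = 15.0, 0.91         # e.g., a scale with SD = 15
sem = standard_error_of_measurement(sd, reliability)
print(f"SEM = {sem:.1f} points")     # about 4.5 points

# Interpreting an obtained score of 100 with a +/- 1 SEM band:
# the student's true score most likely lies within this range.
obtained = 100
print(f"Likely range: {obtained - sem:.1f} to {obtained + sem:.1f}")
```

Note how a higher reliability coefficient shrinks the SEM, which is why SEM is described above as a measure of reliability.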
Evaluating Assessment Devices
• Fairness can be undermined by biases caused by differences in background knowledge
• Tests can be biased with respect to:
  • Geography
  • Gender
  • Socioeconomic status
  • Ethnicity
  • Race
Evaluating Assessment Devices
• Read pages 73-76 in Assessing and Correcting Reading and Writing Difficulties to learn more about how validity, reliability, and standard error of measurement (SEM) are expressed
• Then, follow the link provided and choose the latest version of the Illinois State Assessments Technical Manual
  • The Illinois Standards Achievement Test Manual is also located in the course resource folder
• Use the Table of Contents to locate reliability and SEM data for the ISAT test