
2015–16 CAASPP Post-Test Workshops: Connecting Assessments to Instruction

Discover how CAASPP student data can inform instruction in this informative workshop. Learn about 2015–16 test results, assessment components, scoring principles, and computer adaptive testing.



Presentation Transcript


  1. 2015–16 CAASPP Post-Test Workshops: Connecting Assessments to Instruction. Webcast, May 19, 2016. Presented by ETS, WestEd, and the CDE Assessment Fellows

  2. Overall Purpose The purpose of the in-person workshop is to inform local educational agencies (LEAs) of the various components of the 2015–16 California Assessment of Student Performance and Progress (CAASPP) student results and reports and to discuss how this information might be used to inform instruction. “California’s testing system makes improving instruction a priority.” —Tom Torlakson, State Superintendent of Public Instruction 2015–16 Post-Test Workshops: Connecting Assessments to Instruction

  3. Focus of this Workshop • Overview of the reporting system for all CAASPP operational summative assessments: • Smarter Balanced English Language Arts/Literacy (ELA) and Mathematics • California Standards Tests (CSTs) for Science, California Modified Assessment (CMA) for Science, and California Alternate Performance Assessment (CAPA) for Science • Standards-based Tests in Spanish (STS) for Reading/Language Arts (RLA) • Focus on Smarter Balanced ELA and Mathematics

  4. Morning Session: CAASPP Score Report Resources and Tools

  5. Agenda—Morning Session • Purpose of the Workshop Morning Session • Goals of the Workshop Morning Session • Principles of Scoring • Understanding the Reports • What’s New for 2015–16 • Accessing the Test Results • Overview of the Reporting Timeline • Future Reporting Tools • Using Your Test Results

  6. Purpose of the Morning Session • Prepare LEA CAASPP coordinators to access the 2015–16 test results • Provide supplementary resources and reference materials to assist LEA CAASPP coordinators with their local training efforts

  7. Goals By the end of this morning session, LEA CAASPP coordinators will know • how to access their 2015–16 and 2014–15 test results; and • how to access supplementary resources and reference materials to assist with their local training efforts.

  8. Principles of Scoring

  9. Goals of the Section • Provide an overview of computer adaptive testing (CAT) and scoring • Describe the relative contribution of the performance tasks (PTs) and CAT to the overall scores • Describe: • Score scale • Achievement levels • Error bands • Claims • Assessment targets

  10. Computer Adaptive Testing: Philosophy “An important innovation in assessment is the trend toward computerized adaptive tests, which tailor questions to students' individual ability levels. This means assessment can be customized for each student, making the test more effective…The strength of adaptive tests lies in their ability to provide a clear picture of a student's strengths and weaknesses in a given subject. Since computerized test scores are immediate, teachers and students can discover academic weaknesses — and begin addressing them — right away.” Shorr, P. W. (2002). A look at tools for assessment and accountability. Administrator Magazine. Retrieved from http://www.scholastic.com/browse/article.jsp?id=463

  11. How Does a CAT Work? • Each student is administered a set of test questions that is appropriately challenging. • The student’s performance on each test question determines whether subsequent questions are harder or easier. • The test adapts to the student item by item, not in stages. • Fewer test questions are needed than on a fixed form to obtain precise estimates of students’ ability. • The test continues until the test content outlined in the blueprint is covered.
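The adaptive loop described in slide 11 can be sketched in Python. This is a minimal illustration, not the operational algorithm: the item-pool shape, the closest-difficulty selection rule, and the shrinking step-size update are all simplifying assumptions (a real CAT also enforces blueprint and exposure constraints and uses IRT-based estimation).

```python
def run_cat(answer_fn, item_pool, n_items=10):
    """Minimal CAT sketch (illustrative, not the operational algorithm).

    Picks the unused item whose difficulty is closest to the current
    ability estimate, then nudges the estimate up after a correct
    response and down after an incorrect one, with a shrinking step
    size as the estimate stabilizes.
    """
    ability = 0.0  # start from an average-ability assumption
    administered = []
    for step in range(n_items):
        # Select the unused item best matched to the current estimate.
        item = min((i for i in item_pool if i not in administered),
                   key=lambda i: abs(i["difficulty"] - ability))
        administered.append(item)
        correct = answer_fn(item)
        # Harder next item after a right answer, easier after a wrong one.
        ability += (0.5 if correct else -0.5) / (1 + 0.2 * step)
    return ability, administered
```

A student who keeps answering correctly is routed toward harder items and ends with a higher ability estimate; a student who keeps missing is routed downward, which is the item-by-item adaptation the slide describes.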

  12. How Does a CAT Work? Example: A Student of Average Ability [Chart: a sequence of ten test questions with answers R R W R W W W W R R (R = right, W = wrong), shown moving between ability bands from Expanded Very Low through Low, Med–Low, Medium, Med–High, High, and Expanded Very High]

  13. Computer Adaptive Testing: Behind the Scenes • Requires a large pool of test questions statistically calibrated on a common scale with ability estimates (e.g., from the field test or previous administrations) • Uses an algorithm to select questions based on a student’s responses, to score responses, and to iteratively estimate the student’s performance • Bases final scale scores on item pattern scoring

  14. Computer Adaptive Testing: Practical Considerations • Each student’s test is constrained to ensure coverage of the full range of appropriate grade-level content (e.g., the ELA test cannot consist of only reading informational items). • The level of test-question exposure is constrained to maintain test security. • Sets of test questions based on a common passage or stimulus constrain the ability to adapt within the set. • Responses must be machine-scored to select the next question. • Human-scored performance task responses are combined later with the CAT results.

  15. Scoring the CAT • As a student progresses through the test, his or her pattern of responses is tracked and revised estimates of the student’s ability are calculated. • Successive test questions are selected to increase the precision of the student’s level-of-achievement estimate, given the current understanding of his or her ability. • Resulting scores from the CAT portion of the test are based on the specific test questions selected as a result of the student’s responses and NOT on the sum of the number answered correctly. • The test question pools for a particular grade level are designed to include an enhanced pool of test questions that are more or less difficult for that grade but still match the grade’s test blueprint.

  16. Human-Scored Items in the CAT • Some items administered on the Smarter Balanced CAT component require human scoring. • The adaptive algorithm selects these items based on the student’s performance on prior items. • Because these items cannot be human-scored in real time, a student’s performance on them does not affect subsequent item selection.

  17. Performance Tasks (PTs) • In every Smarter Balanced test, a PT with a set of stimuli on a given topic is administered in addition to the CAT. • PTs are administered at the classroom/group level, so they are not targeted to students’ specific ability levels. • The items associated with the PTs may be scored by machine or by human raters.

  18. Final Scoring • For each student, the responses from the PT and CAT portions are merged for final scoring. • The resulting ability estimates are based on the specific test questions that a student answered and not the total number of items answered correctly. • Higher ability estimates are associated with test takers who correctly answer difficult and more discriminating items. • Lower ability estimates are associated with test takers who correctly answer easier and less discriminating items. • Two students will have the same ability estimate if they have the same set of test questions with the same responses. • It is possible for students to have the same ability estimate through different response patterns. • This type of scoring is called “Item Pattern Scoring.”
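Item pattern scoring, as described in slide 18, can be illustrated with a simple two-parameter IRT model and a grid-search maximum-likelihood estimate. The item parameters and the estimator here are illustrative assumptions; the operational calibration and scoring engine differ.

```python
import math

def twopl_prob(theta, a, b):
    """2PL IRT probability of a correct response
    (a = discrimination, b = difficulty)."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def pattern_score(responses, items, grid=None):
    """Grid-search maximum-likelihood ability estimate.

    `responses` are 0/1 scores; `items` are (a, b) parameter pairs.
    Which items were answered correctly matters, not just how many.
    """
    if grid is None:
        grid = [g / 100.0 for g in range(-400, 401)]
    def loglik(theta):
        total = 0.0
        for r, (a, b) in zip(responses, items):
            p = twopl_prob(theta, a, b)
            total += math.log(p) if r else math.log(1.0 - p)
        return total
    return max(grid, key=loglik)

# Two students each answer 2 of 3 items correctly (illustrative items).
items = [(0.8, -1.0), (1.2, 0.0), (1.6, 1.5)]
easy_right = pattern_score([1, 1, 0], items)  # missed the hardest item
hard_right = pattern_score([0, 1, 1], items)  # got the hardest item right
```

`hard_right` comes out higher than `easy_right` even though both students answered the same number of items correctly, which is exactly the property the slide attributes to item pattern scoring.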

  19. Final Scoring: Contribution of the CAT and PT Sections

  20. Final Scoring: Contribution of the CAT and PT Sections (cont.) • Based on the test blueprint, the CAT section is emphasized because there are more CAT items/points than PT items/points. • Claims with more items/points are emphasized. • Mathematics: Concepts and Procedures > Problem Solving/Modeling and Data Analysis > Communicating Reasoning • ELA: Reading > Writing > Speaking/Listening > Research • Because scores are based on pattern scoring, groups of items that are more difficult and discriminating have a larger contribution to final scores. • Therefore, there is no fixed weight associated with either the PT or CAT section.

  21. Final Scoring: Mapping • After the student’s overall ability (θ) is estimated, it is mapped onto the reporting scale through a linear transformation. • Mathematics: Scaled Score = 2514.9 + 79.3 × θ • ELA: Scaled Score = 2508.2 + 85.8 × θ • The mapping is limited by the grade level’s lowest and highest obtainable scaled scores.
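The linear transformation on slide 21 can be applied directly. The slope and intercept are the constants from the slide; the LOSS/HOSS clamp bounds vary by grade level and are passed in by the caller, which is a simplifying assumption about the interface, not the operational scoring code.

```python
def to_scaled_score(theta, subject, loss, hoss):
    """Map an ability estimate (theta) to the reporting scale using the
    slide-21 constants, then clamp to the lowest (LOSS) and highest
    (HOSS) obtainable scaled scores supplied by the caller."""
    slope, intercept = {
        "math": (79.3, 2514.9),  # slide-21 mathematics constants
        "ela": (85.8, 2508.2),   # slide-21 ELA constants
    }[subject]
    return round(min(max(intercept + slope * theta, loss), hoss))
```

For example, an ability estimate of θ = 0 in mathematics maps to about 2515, and estimates beyond the bounds are clamped to LOSS or HOSS.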

  22. Properties of the Reporting Scale • Scores are on a vertical scale. • Expressed on a single continuum for a content area • Allows users to describe student growth over time across grade levels • Scale score range • ELA: 2114–2795 • Mathematics: 2189–2862 • For each grade level and content area, there is a separate scale score range.

  23. Smarter Balanced Scale Score Ranges by Grade Level

  24. Achievement Levels • Achievement level classifications based on overall scores • Level 1—Standard Not Met • Level 2—Standard Nearly Met • Level 3—Standard Met • Level 4—Standard Exceeded

  25. Achievement Levels by Grade

  26. Smarter Balanced Scale Score Ranges for ELA

  27. Achievement Levels by Grade

  28. Smarter Balanced Scale Score Ranges for Mathematics

  29. Measurement Precision: Error Bands • Every scale score estimated for a student has measurement error associated with it. An error band is a useful tool that describes the measurement error associated with a reported scale score. • Error bands are used to construct an interval estimate corresponding to a student’s true ability/proficiency for a particular content area with a certain level of confidence. • The error bands used to construct interval estimates are based on one standard error of measurement. • If the same test were given to a student multiple times, the student would score within this band about 68 percent of the time.
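The one-standard-error band described above amounts to a simple interval around the reported scale score; a minimal sketch:

```python
def error_band(scale_score, sem):
    """Interval of one standard error of measurement (SEM) around a
    reported scale score; on repeated testing the student would score
    inside this band about 68 percent of the time."""
    return (scale_score - sem, scale_score + sem)

# Illustrative values: a scale score of 2500 with an SEM of 25
# gives the band (2475, 2525).
```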

  30. Achievement Levels for Claims • Achievement levels for claims are very similar to subscores; they provide supplemental information regarding a student’s strengths or weaknesses. • No achievement level–setting occurred for claims. • Because there are fewer items within each claim, only three achievement levels for claims were developed. • Achievement levels for claims are based on the distance between a student’s performance on the claim and the Level 3 “Standard Met” criterion. • A student must complete all items within a claim to receive an estimate of his or her performance on that claim.

  31. Achievement Levels for Claims (cont.) • A student’s ability on each claim, along with the corresponding standard error, is estimated. • Using the standard error, an interval estimate corresponding to the student’s true performance on the claim is constructed. • The interval is defined as the claim score plus or minus a multiple of its standard error. • This interval is compared against the Level 3 Standard Met criterion. • If the interval does not contain the Level 3 Standard Met criterion value for a particular claim, it indicates a strength or weakness.
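The interval-versus-cut comparison on slide 31 can be sketched as follows. The multiplier `k` on the standard error is an assumption for illustration; the slide does not state the exact interval width used operationally.

```python
def claim_level(claim_score, se, level3_cut, k=1.0):
    """Classify claim performance by comparing the interval
    [score - k*se, score + k*se] with the Level 3 (Standard Met) cut.
    k is an assumed multiplier; the operational width may differ."""
    low, high = claim_score - k * se, claim_score + k * se
    if low > level3_cut:
        return "Above Standard"   # whole interval above the cut
    if high < level3_cut:
        return "Below Standard"   # whole interval below the cut
    return "Near Standard"        # interval contains the cut
```

Only when the whole interval clears (or falls short of) the cut is the claim flagged as a strength or weakness; otherwise the result is Near Standard, matching the slide's "interval does not contain the criterion" rule.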

  32. Achievement Levels for Claims (cont.) [Diagram: examples of Near Standard and Below Standard intervals]

  33. Achievement Levels for Claims (cont.) [Diagram: example of an Above Standard interval]

  34. Understanding the Reports

  35. Simplified Data Flow • CALPADS sends student demographic data to TOMS (test assignments and settings). • TOMS sends test settings and basic student data to the Test Delivery System. • The Test Delivery System sends student responses to the Scoring System. • The Scoring System sends test results to the ORS and to TOMS (Reporting).

  36. Partial Results Available Now • Test results are available three weeks after a student completes both parts—CAT and PT—of a content area. • Test results are added nightly. Use Caution: The results available in this reporting system are partial and may not be a good representation of your school’s or LEA’s final aggregate results. These partial results are not appropriate for public release. Final data will be released publicly by the California Department of Education (CDE) by August 15, 2016.

  37. What’s New in 2015–16 • Additional features in the Online Reporting System (ORS) • Access to the previous year’s results • Assessment Target Reports • Access to ORS test results by additional users • Access to Student Score Reports in the Test Operations Management System (TOMS) • Redesigned Student Score Reports • Student Score Reports in Spanish • Earlier release of score results

  38. Available Reports • † PDFs of the Student Score Reports will be available in TOMS. • †† LEAs must forward or mail the copy of the CAASPP Student Score Report to each student’s parent/guardian within 20 working days of its delivery to the LEA.

  39. Test Results in the Online Reporting System (ORS) CAASPP Portal: http://www.caaspp.org/

  40. Important Reminder The results available in this reporting system are partial and may not be a good representation of your school’s or LEA’s final aggregate results. As a real-time system, these results will change as additional data are received and relevant appeals and rescores are processed. Your school’s or LEA’s final aggregate results may be higher or lower than the partial results posted to this system and, therefore, are not appropriate for public release. Final data will be released publicly by the CDE in September.

  41. ORS Summary • The Online Reporting System (ORS) is a Web-based system that displays score reports and completion data for each student who has taken the following California assessments: • Smarter Balanced for ELA or Mathematics • CSTs for Science • CMA for Science • CAPA for Science • STS for RLA • Note: All score report data, except for individual students’ score reports, can be disaggregated into subgroups for detailed analysis.

  42. Users with Access to Score Reports NEW! For additional information about user roles available within TOMS, refer to the 2015–16 User Role Guidance document on the CAASPP Portal.

  43. Features and Reports in ORS • Home Page Dashboard • Subject Detail Report • Claim-Level Detail Report • Assessment Target Report • Student Listing Report • Student Detail Report • Manage Rosters The Accessing the ORS Quick Start Guide is available at http://www.caaspp.org/administration/reporting/index.html

  44. Home Page Dashboard

  45. NEW! Home Page Dashboard: Select Test, Administration, and Enrollment Status • Select Test • Select Administration (2014–15 data are final; 2015–16 data are partial) • Select Enrollment Status View

  46. Home Page Dashboard: Select Grade and Subject • While students are in the process of completing tests, results are not yet available. • Test results are available for students who have completed both parts (CAT and PT).

  47. Subject Detail Report • Report Name • Legend • Show/Hide Comparison Data • Updated Demographic Subgroups • Show/Hide Columns • Exploration Menu • Time Stamp

  48. UPDATED! Available Demographic Subgroups • Same subgroups as reported on the CDE CAASPP public Web reporting site • Based on data received from the California Longitudinal Pupil Achievement Data System (CALPADS) at the time of the demographic snapshot taken at the end of the selected testing window

  49. UPDATED! Exploration Menu

  50. Claim-Level Detail Report
