
Analyzing Assessment Data



Presentation Transcript


  1. Using A Data System To Inform Instruction. Presented by: Tony Tripolone, Technology Leadership Institute for School District Administrators, Lower Hudson Regional Information Center, Westchester Marriott, January 9, 2006

  2. Analyzing Assessment Data • Purpose of Assessments • Appropriate Data Comparison • Systemic Change • Need to Narrow Focus

  3. The Benchmark • “Snapshot” • Caution: Use Multiple Measures • Item Analysis Data - Identifies strengths and weaknesses - Helps align curriculum - Indicates need for parallel tasks • “Cut Points” • Graphing will show “Gaps”

  4. P Value or Item Difficulty • Definition • Why P Values Are Used • What It Means (High & Low P) • How To Calculate

  5. P Value or Item Difficulty Calculation • P Value is the proportion of students in an identified norm group who answer a test item correctly; it is usually referred to as the difficulty index. • The reason for using a P Value graph is to allow a district to compare student performance on each assessment item to a larger reference group as a benchmark. • A high P Value indicates a very easy question, while a low P Value indicates an extremely difficult question purposely designed to discriminate among performance levels. • How a P Value is calculated: Multiple Choice questions are either right or wrong and receive a score of 1 or 0. These scores are added and then divided by the number of students taking the assessment, resulting in a P Value between 0 and 1. Constructed Response questions have rubric scores greater than 1, so the points received on each question are added and then divided by the number of students taking the assessment. The resulting average score is then divided by the number of possible points in the rubric, again resulting in a P Value between 0 and 1.
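The two calculations described above are straightforward to express in code. The sketch below is a minimal illustration, not part of any actual data system; the function names and sample scores are invented for the example.

    def p_value_multiple_choice(scores):
        """P value for a multiple-choice item: each score is 1 (correct) or 0 (incorrect)."""
        return sum(scores) / len(scores)

    def p_value_constructed_response(scores, rubric_max):
        """P value for a constructed-response item scored on a rubric worth rubric_max points."""
        average = sum(scores) / len(scores)   # average rubric score across students
        return average / rubric_max           # scale to a value between 0 and 1

    # 7 of 10 students answer a multiple-choice item correctly -> 0.70
    print(p_value_multiple_choice([1, 1, 0, 1, 1, 0, 1, 1, 0, 1]))

    # A 4-point rubric item with an average score of 2.5 -> 0.625
    print(p_value_constructed_response([3, 2, 4, 1, 2, 3], rubric_max=4))

Either way, the result is a number between 0 and 1 that can be compared directly across multiple-choice and constructed-response items.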

  6. [P Value graph: gap where the Region outperforms the District] P Value or Item Difficulty is the proportion of students who answered the item correctly. A low P Value reflects a difficult question; a high P Value reflects an easier question.

  7. Items Grouped by: Key Idea and Performance Indicators

  8. Midrange of Item Difficulty • Definition • Why Midrange Is Used • What It Means • How To Calculate

  9. Narrowing The Focus: What is the Midrange? • The identified “gaps” provide a focus for improvement. • It is important, when analyzing the data, to understand that the questions were designed to discriminate between what a student needs to know and be able to do at the various accountability levels. • The data are meaningful only when district results are compared to a larger reference group as a benchmark. • Determining P Value or Item Difficulty is critical in distinguishing between easy and difficult questions. • The “midrange” narrows the range of assessment scores to provide a more reasonable and manageable focus for teachers to evaluate their program. • If there is a highlighted “gap” within the “midrange,” i.e., the region outperformed your district on questions you would expect your students to answer correctly, then it is reasonable to make decisions that effectively address instructional and curricular changes (a sketch of this decision rule follows below).
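As a rough illustration of the decision rule in the last bullet above, the following sketch flags items where the regional P value falls inside the midrange and the region outperforms the district by some margin. The field names, the 0.05 margin, and the function name are all assumptions made for the example; the midrange bounds themselves are calculated as shown on the next slide.

    def gaps_in_midrange(items, mid_low, mid_high, min_gap=0.05):
        """Flag questions where the region outperforms the district and the
        regional P value lies inside the midrange (mid_low .. mid_high).
        Each item is a dict like {'question': 3, 'district_p': 0.52, 'region_p': 0.70}."""
        flagged = []
        for item in items:
            in_midrange = mid_low <= item['region_p'] <= mid_high
            gap = item['region_p'] - item['district_p']
            if in_midrange and gap >= min_gap:
                flagged.append((item['question'], round(gap, 2)))
        return flagged

    # Using the midrange .58 - .82 worked out on the next slide:
    items = [
        {'question': 3, 'district_p': 0.52, 'region_p': 0.70},  # gap inside the midrange -> flagged
        {'question': 7, 'district_p': 0.30, 'region_p': 0.35},  # difficult item below the midrange -> ignored
    ]
    print(gaps_in_midrange(items, mid_low=0.58, mid_high=0.82))  # -> [(3, 0.18)]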

  10. Midrange of Item Difficulty How To Calculate: • Determine the range of scores by subtracting the lowest regional P Value from the highest on an assessment • Multiply that difference by 20% • Add this product to the lowest score • Subtract this product from the highest score For example: If the scores range from .50 (the lowest) to .90 (the highest), then: .90 - .50 = .40; .40 x .20 = .08; .50 + .08 = .58; .90 - .08 = .82. The midrange is .58 to .82 on the P Value graph.
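A minimal sketch of that calculation (the function name is illustrative), reproducing the worked example above:

    def midrange(regional_p_values, trim=0.20):
        """Trim 20% of the range of regional P values off each end,
        as described on the slide above."""
        low, high = min(regional_p_values), max(regional_p_values)
        margin = (high - low) * trim
        return round(low + margin, 2), round(high - margin, 2)

    # Lowest regional P value .50, highest .90 -> midrange (0.58, 0.82)
    print(midrange([0.50, 0.63, 0.77, 0.90]))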

  11. Midrange of item difficulty is .44 - .82. Highlight Key Idea, PI, & Question Number of “GAP” above.

  12. Midrange of item difficulty is .44 - .82. Highlight Key Idea, PI, & Question Number of “GAP” above.

  13. Trend Summary Chart Displays 5 Years of Assessment Items For Collaborative Discussion

  14. The Trend Summary Chart • Captures the proportion of questions asked in relation to the identified Standards and Performance Indicators or Subskills assessed over a five-year period. • Highlights identified “Gaps” or areas of weakness. • Reflects only those weaknesses, identified from the midrange on the assessment data charts, that could reasonably be considered in need of attention. • Shows whether multiple choice or extended response questions contributed to the weakness. • Reveals curriculum balance issues.
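A trend summary of this kind is essentially a cross-tabulation of items by Performance Indicator and year. The sketch below shows how the proportions might be tallied; the indicator codes and field names are hypothetical, and a real chart would also separate multiple choice from constructed response.

    from collections import defaultdict

    def trend_summary(items):
        """Proportion of each year's assessment items that address each
        Performance Indicator. Each item is a dict like {'year': 2005, 'pi': '7.M.1'}."""
        counts = defaultdict(lambda: defaultdict(int))
        totals = defaultdict(int)
        for item in items:
            counts[item['pi']][item['year']] += 1
            totals[item['year']] += 1
        return {pi: {yr: n / totals[yr] for yr, n in by_year.items()}
                for pi, by_year in counts.items()}

    items = [
        {'year': 2004, 'pi': '7.M.1'},
        {'year': 2004, 'pi': '7.M.2'},
        {'year': 2005, 'pi': '7.M.1'},
    ]
    print(trend_summary(items))  # -> {'7.M.1': {2004: 0.5, 2005: 1.0}, '7.M.2': {2004: 0.5}}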

  15. Considerations After Viewing Trend Data • Number and frequency of items asked • Performance Indicators that seem to be targeted • Any emerging patterns • Consistent areas of weakness on assessments • Consistent areas of weakness in student work throughout the year • Appropriate balance and emphasis in your curriculum as indicated by the assessments • Curriculum alignment issues • Instructional changes that address the considerations above

  16. Narrowing The Focus • Select Three Areas Of Weakness • Look At Student Work • Work Collaboratively • Determine Root Cause • Align and Map Curriculum (Horizontally and Vertically) • Develop Parallel Tasks • Establish Periodic Benchmarks • Analyze Data • Begin Again

  17. Collaboration Activity

  18. The Collaborative Effort 1. Look at the actual questions to: Identify SKILLS needed; Determine STRATEGIES necessary. 2. Look at student work to evaluate: RANGE of RESPONSE; What’s missing? DECLARATIVE or PROCEDURAL Knowledge. 3. Develop Parallel Tasks to: Provide BENCHMARK experiential opportunities; Increase RIGOR and RELEVANCE of expectations.

  19. The Collaborative Process • Determine what you know or don’t know by doing a “Gap Analysis” • Come to the discussion prepared by viewing “dataMentor” • Alignment of curriculum and instruction to standards is the agenda • Discussion should be collegial and supportive • Decision making is best with active participation by everyone • Freedom to share identified weaknesses is encouraged • Examine instructional methods to determine “best practices” or strategies • Seek suggestions for improvement • Remember: we want to improve student achievement for all students

  20. Does the test behave? • Look at the “Range of Responses” distribution • Determine number and selection at each accountability level • Identify item difficulty by correct responses for multiple choice • Maximum points on constructed response items increase as the accountability level increases
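A “range of responses” check can start from a simple frequency count of the score each student earned on an item. The sketch below is only an illustration; the function name is invented.

    from collections import Counter

    def range_of_responses(item_scores):
        """How many students earned each possible score on one item:
        0s and 1s for multiple choice, each rubric point for constructed response."""
        return dict(sorted(Counter(item_scores).items()))

    # A 4-point constructed-response item scored for ten students
    print(range_of_responses([0, 2, 2, 3, 4, 4, 4, 1, 2, 3]))
    # -> {0: 1, 1: 1, 2: 3, 3: 2, 4: 3}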

  21. Range Of Responses

  22. Range Of Responses

  23. Range Of Responses

  24. Range Of Responses

  25. Go to: DataMentor.org to see how this data system can seamlessly facilitate the process of using data to inform instruction (Follow Handout Materials). Contact Information: Tony Tripolone, Administrator for Data Management and Analysis, Wayne-Finger Lakes BOCES, ttripolone@wflboces.org, 585-394-9239 ext. 1045
