
Sensitivity of Teacher Value-Added Estimates to Student and Peer Control Variables



Presentation Transcript


  1. Sensitivity of Teacher Value-Added Estimates to Student and Peer Control Variables
  March 2012 Presentation to the Association of Education Finance and Policy Conference
  Matt Johnson, Stephen Lipscomb, Brian Gill

  2. VAMs Used Today Differ in Their Specifications

  3. Research Questions
  • How sensitive are teacher value-added model (VAM) estimates to changes in the model specification?
    • Student characteristics
    • Classroom characteristics
    • Multiple years of prior scores
  • How sensitive are estimates to the loss of students from the sample due to missing prior scores?

  4. Preview of Main Results
  • Teacher value-added estimates are not highly sensitive to inclusion of:
    • Student characteristics (correlation ≥ 0.990)
    • Multiple years of prior scores (correlation ≥ 0.987)
  • Estimates are more sensitive to inclusion of classroom characteristics (correlation = 0.915 to 0.955)
  • Estimates are not very sensitive to loss of students with missing prior test scores from the sample (correlation = 0.992)
  • Precision increases when two prior scores are used, but fewer teacher VAM estimates are produced

  5. Baseline Model
  • Explore sensitivity to several specifications:
    • Exclude the score from two years prior (Y_{i,t-2})
    • Exclude student characteristics (X_{i,t})
    • Include class average characteristics
  • Student data from a medium-sized urban district for the 2008–2009 to 2010–2011 school years
  • All models run using the same set of student observations
  • Instrument using the opposite-subject prior score to control for measurement error (see the equation sketch below)
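The slides do not write out the baseline specification, so the equation below is only a minimal sketch of a standard covariate-adjustment VAM consistent with the bullets above; the notation (Y, X, the teacher effect τ, the class index) and the exact functional form are assumptions, not taken from the presentation.

```latex
% Illustrative baseline VAM: student i's score in year t regressed on two
% prior-year scores, student characteristics, and a teacher effect.
% The alternative specifications drop Y_{i,t-2} or X_{i,t}, or add class
% average characteristics \bar{X}_{c(i,t)}.
\[
Y_{i,t} = \beta_{1} Y_{i,t-1} + \beta_{2} Y_{i,t-2} + X_{i,t}'\gamma
          + \tau_{j(i,t)} + \varepsilon_{i,t}
\]
% Per the last bullet, the prior scores are instrumented with the student's
% prior scores in the opposite subject to control for measurement error.
```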

  6. Student and Class Characteristics

  7. Correlation of 6th-Grade Teacher Estimates Relative to Baseline VAM Specification
  Baseline: Student Characteristics and Prior Scores from t-1 and t-2
  Note: Findings are based on VAM estimates from 2008–2009 to 2010–2011 on the same sample of students.
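For a sense of how a correlation like the ones reported on this and the following slides could be computed, here is a minimal Python sketch; the data frame, teacher IDs, and column names are hypothetical placeholders, not the authors' code or data.

```python
import pandas as pd

# Hypothetical teacher value-added estimates: one row per teacher,
# one column per VAM specification (names and values are illustrative).
estimates = pd.DataFrame({
    "teacher_id":       [1, 2, 3, 4, 5],
    "baseline":         [0.12, -0.05, 0.30, -0.22, 0.01],
    "no_student_chars": [0.11, -0.06, 0.28, -0.20, 0.02],
    "with_class_avgs":  [0.15, -0.01, 0.25, -0.30, 0.05],
})

# Pearson correlation of each alternative specification with the baseline,
# mirroring the "correlation relative to baseline" comparison on the slide.
for spec in ["no_student_chars", "with_class_avgs"]:
    r = estimates["baseline"].corr(estimates[spec])
    print(f"{spec}: correlation with baseline = {r:.3f}")
```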

  8. Percentage of 6th-Grade Reading Teachers in Effectiveness Quintiles, by VAM Specification
  Note: Findings are based on VAM estimates for 99 reading teachers in grade 6 from 2008–2009 to 2010–2011 for a medium-sized urban district. Correlation with baseline = 0.996.

  9. Percentage of 6th-Grade Reading Teachers in Effectiveness Quintiles, by VAM Specification
  Note: Findings are based on VAM estimates for 99 reading teachers in grade 6 from 2008–2009 to 2010–2011 for a medium-sized urban district. Correlation with baseline = 0.915.
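The quintile comparisons on slides 8 and 9 can be reproduced in outline with pandas; the sketch below uses a hypothetical set of teacher estimates under the baseline and one alternative specification and is not the authors' code.

```python
import pandas as pd

# Hypothetical teacher estimates under the baseline and one alternative VAM.
estimates = pd.DataFrame({
    "baseline":    [0.12, -0.05, 0.30, -0.22, 0.01, 0.18, -0.10, 0.07, -0.33, 0.25],
    "alternative": [0.10, -0.02, 0.28, -0.25, 0.03, 0.20, -0.12, 0.05, -0.30, 0.22],
})

# Assign each teacher to an effectiveness quintile under each specification.
for col in ["baseline", "alternative"]:
    estimates[col + "_quintile"] = pd.qcut(estimates[col], 5, labels=[1, 2, 3, 4, 5])

# Cross-tabulate quintile placements: row percentages show where teachers in a
# given baseline quintile land when the specification changes, and the diagonal
# shows the share whose quintile is unchanged.
table = pd.crosstab(estimates["baseline_quintile"],
                    estimates["alternative_quintile"],
                    normalize="index") * 100
print(table.round(1))
```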

  10. One or Two Years of Prior Scores?
  • Benefits of including two prior years:
    • More accurate measure of student ability
    • Increase in precision of estimates
  • Costs of using two prior years:
    • Students with missing prior scores dropped
    • Some teachers dropped from sample
  • Relative magnitude of costs/benefits?

  11. One or Two Years of Prior Scores?
  • Estimate two VAMs using one year of prior scores:
    • The first VAM includes all students
    • The second VAM restricts the sample to students with a nonmissing second prior year of scores
  • Correlation between teacher estimates: 0.992
  • Percentage of students dropped: 6.2
  • Percentage of teachers dropped: 3.9
  • Net increase in precision from using two prior years (rough combination sketched below):
    • Average standard error of estimates increases by 2.3% when students with missing scores are dropped
    • Average standard error of estimates decreases by 7.6% when the second year of prior scores is added
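Reading the last two bullets together, and assuming the two percentage changes compound multiplicatively (an assumption; the slides do not say how they combine), the net change in the average standard error is roughly:

```latex
% Back-of-the-envelope combination of the two reported changes; the
% multiplicative compounding is an assumption, not stated on the slide.
\[
1.023 \times (1 - 0.076) \approx 0.945
\]
% i.e., about a 5-6 percent net reduction in the average standard error,
% consistent with the slide's claim of a net increase in precision.
```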

  12. For More Information
  Please contact:
  • Matt Johnson, MJohnson@mathematica-mpr.com
  • Stephen Lipscomb, SLipscomb@mathematica-mpr.com
  • Brian Gill, BGill@mathematica-mpr.com
