This article presents a comprehensive model for evaluating Supplemental Educational Services (SES) providers along three dimensions: effectiveness (student achievement), customer satisfaction, and service delivery and compliance. For each dimension, the model offers evaluation designs and measures of varying rigor.
Evaluating SES Providers. Steven M. Ross, Allison Potter. Center for Research in Educational Policy, The University of Memphis. http://www.memphis.edu/crep
Supplemental Educational Services (SES) • Required under No Child Left Behind (NCLB) for Title I schools that have not made Adequate Yearly Progress (AYP) for three consecutive years. • Low-income students from identified Title I schools are eligible to receive free tutoring services. • Students are prioritized by greatest academic need if district funds are limited.
Service Providers • Potential service providers apply to serve students and may be approved by the State Department of Education. • Providers contract with Local Educational Agencies (LEAs) to provide tutoring services to students. • Providers are paid for their services, at an amount not to exceed the Title I per-pupil allotment.
Determining Evaluation Measures Effectiveness: Increased student achievement in reading/language arts or mathematics. Customer satisfaction: Positive perceptions by parents of SES students. Service delivery and compliance: Positive perceptions by principals, teachers, LEA staff, etc.
Overall Provider Assessment
Figure 1. Components of a Comprehensive SES Evaluation Model (diagram). Three strands feed the overall provider assessment: Service Delivery and Compliance, Customer Satisfaction, and Effectiveness (Student Achievement). Data sources shown in the figure: District Coordinator Survey, Principal/Liaison Survey, Provider Survey, Teacher Survey, Parent Survey, State Tests, and Additional Tests.
Effectiveness Measures 1. Student-level test scores from state-mandated assessments • Considerations: • Scores may be available only for certain grades (e.g., grade 3 and higher) • Lack of pretest scores prevents gains from being determined
Effectiveness Measures 2. Supplementary individualized assessments in reading/language arts or math • Considerations: • Without pretest scores and comparison students, SES gain cannot be determined • Validity may be suspect if assessments not administered by trained independent testers
Effectiveness Measures 3. Provider-developed assessments in reading/language arts or math • Considerations: • Test results may not be valid or suitable for the state’s evaluation purpose • Tests may favor the provider’s strategies
Customer Satisfaction Measures 1. Parent and family perceptions • Considerations: • Parent respondents may not be representative of the population served by the provider • Sample sizes will vary with provider size • Comparisons are limited because parents are typically familiar with only one provider
Customer Satisfaction Measures 2. Student perceptions • Considerations: • Young students may have difficulty judging the quality of services and communicating their impressions • Collecting student responses is time-consuming and may require parent permission
Service Delivery and Compliance Measures 1. Records of services provided, student attendance rates, and costs • Considerations: • States may obtain data from a variety of sources, including providers, teachers, principals, and district staff • Corroborating data from multiple sources can increase the accuracy of evaluation conclusions
Service Delivery and Compliance Measures 2. Feedback from SES customers • Considerations: • First-hand impressions or observations may be lacking • Translation may be needed to reach parents who do not speak English • Obtaining representative samples may be difficult
Service Delivery and Compliance Measures 3. Feedback from district staff • Considerations: • Districts may lack first-hand impressions or observations of tutoring services • Some districts may also be SES providers, creating a potential conflict of interest
Service Delivery and Compliance Measures 4. Feedback from school staff • Considerations: • Teachers may also be SES instructors or may lack first-hand impressions of providers • Teachers may need to provide information on multiple providers, which can be confusing and time-consuming • Identifying which teachers to solicit for responses may be difficult
Evaluation Designs: Student Achievement A. Benchmark Comparison Rating = ++ (Low to Moderate rigor) Percentage of SES students by provider attaining “proficiency” on state assessment
Evaluation Designs: Student Achievement A. Benchmark Comparison Upgrades • Percentage of SES students in all performance categories (“Below Basic”, “Basic”, etc.) • Comparison of performance relative to the prior year and to state norms • Comparison to a “control” sample
Evaluation Designs: Student Achievement Benchmark Comparison • Advantages • Inexpensive and less demanding • Easily understood by practitioners and public • Linked directly to NCLB accountability • Disadvantages • Doesn’t control for student characteristics • Doesn’t control for schools • Uses broad achievement indices
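To make the benchmark comparison concrete, here is a minimal sketch in Python/pandas that tabulates proficiency rates by provider. The file name, column names, and performance-category labels are hypothetical placeholders for illustration, not part of the original model.

```python
# Benchmark comparison sketch: percent of SES students attaining
# "proficiency" on the state assessment, tabulated by provider.
# All file and column names below are hypothetical.
import pandas as pd

df = pd.read_csv("ses_students.csv")  # hypothetical student-level file

# Upgrade: share of each provider's students in every performance category
category_pcts = (
    df.groupby("provider")["performance_level"]
      .value_counts(normalize=True)
      .mul(100)
      .rename("pct")
      .reset_index()
)
print(category_pcts)

# Basic benchmark: percent "Proficient" or better, by provider
df["proficient"] = df["performance_level"].isin(["Proficient", "Advanced"])
benchmark = df.groupby("provider")["proficient"].mean().mul(100).round(1)
print(benchmark.sort_values(ascending=False))
```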
Evaluation Designs: Student Achievement B. Multiple Linear Regression Design Rating = +++ (Moderate rigor) Compares actual gains to predicted gains for students enrolled in SES, using district data to control for student variables (e.g., income, ethnicity, gender, ELL status, special education status).
Evaluation Designs: Student Achievement Multiple Linear Regression Design • Advantages • More costly than Benchmark, but relatively economical • Student characteristics are statistically controlled • Disadvantages • Doesn’t control for school effects • Less understandable to practitioners and public • Effect sizes may be less stable than for Model C.
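As an illustration of the regression design, the sketch below fits a model on non-SES students and compares SES students’ actual scores to the scores predicted from their characteristics; positive residuals suggest gains beyond expectation. The column names and the use of statsmodels are assumptions for illustration, not the authors’ specification.

```python
# Multiple linear regression sketch: compare actual to predicted
# scores for SES students, controlling for student variables.
# File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("district_scores.csv")  # hypothetical district file

# Fit on non-SES students so predictions reflect expected growth
# for students with the same characteristics
baseline = df[df["ses"] == 0]
model = smf.ols(
    "post_score ~ pre_score + low_income + C(ethnicity) + C(gender)"
    " + ell + special_ed",
    data=baseline,
).fit()

# Residual gain = actual minus predicted, averaged by provider
ses = df[df["ses"] == 1].copy()
ses["residual_gain"] = ses["post_score"] - model.predict(ses)
print(ses.groupby("provider")["residual_gain"].agg(["mean", "count"]))
```

Fitting on non-SES students is one common way to define “predicted gains”; a state could instead fit on all students and examine an SES indicator directly. Categorical levels in the SES sample must also appear in the baseline sample for prediction to work.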
Evaluation Designs: Student Achievement C. Matched Samples Design Rating = ++++ (High Moderate to Strong rigor) Match and compare SES students to similar students attending the same school (or, if not feasible, a similar school). Use multiple matches if possible.
Evaluation Designs: Student Achievement C. Matched Samples Design • Advantages • Some control over school effects • Easily understood by practitioners and public • Highest potential rigor of all designs • Disadvantages • More costly and time consuming • Within-school matches may be difficult to achieve
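The following sketch shows one simple way to implement within-school matching: each SES student is paired (with replacement) to the non-SES student in the same school with the closest pretest score, and the mean gain difference is summarized by provider. Column names and the single-covariate nearest-neighbor rule are illustrative assumptions; a real evaluation would match on more covariates.

```python
# Matched-samples sketch: within-school nearest-neighbor matching
# on pretest score, comparing gains of SES students to matches.
# File and column names are hypothetical.
import pandas as pd

df = pd.read_csv("district_scores.csv")  # hypothetical district file
df["gain"] = df["post_score"] - df["pre_score"]

matched_gains = []
for school, grp in df.groupby("school"):
    ses = grp[grp["ses"] == 1]
    pool = grp[grp["ses"] == 0]
    if pool.empty:
        continue  # no within-school match available
    for _, student in ses.iterrows():
        # Nearest neighbor on pretest score, matched with replacement
        match = pool.loc[(pool["pre_score"] - student["pre_score"]).abs().idxmin()]
        matched_gains.append(
            {"provider": student["provider"],
             "ses_gain": student["gain"],
             "match_gain": match["gain"]}
        )

result = pd.DataFrame(matched_gains)
result["diff"] = result["ses_gain"] - result["match_gain"]
print(result.groupby("provider")["diff"].agg(["mean", "count"]))
```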
Evaluation Designs: Student Achievement D. Combination (Hybrid) Design • Uses a mixture of the three main designs to meet special data situations within the State • The state-level analysis may be a benchmark comparison for most districts and matched samples for the largest district(s) • Accommodates differences in student-level data and statistical staff resources across districts
Data Collection Tools • Surveys for LEAs, principals/site coordinators, teachers, parents, and providers. • A common core set of questions across all groups to permit triangulation. • An open-ended “Additional comments” question.
Decision Tree for SES Providers (flowchart). The tree screens each provider on compliance (serious violations lead directly to removal; minor violations affect standing), then on achievement results (positive, negative, or indeterminable) and implementation quality, with further checks for providers on probation (whether it is the last probation year and whether achievement or implementation has improved). Possible statuses: Full Standing, Satisfactory Standing, Probation I, Probation II, and Removal.
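Purely as an illustration of how such a decision tree could be operationalized, the sketch below encodes one plausible reading of the flowchart. The exact branch order in the original figure is ambiguous in this text version, so every rule here is an assumption rather than the authors’ published logic.

```python
# Illustrative sketch only: one plausible encoding of the provider
# decision tree. Every rule below is an assumption based on a
# reading of the flowchart, not the authors' published logic.
from dataclasses import dataclass

@dataclass
class ProviderReview:
    compliance_violation: bool
    serious_violation: bool
    achievement: str        # "positive", "negative", "indeterminable"
    implementation: str     # "positive", "negative"
    on_probation: bool
    last_probation_year: bool
    improved: bool

def provider_status(r: ProviderReview) -> str:
    # Serious compliance violations end the review immediately
    if r.compliance_violation and r.serious_violation:
        return "Removal"
    # Providers on probation must show improvement by the final year
    if r.on_probation and r.last_probation_year:
        return "Probation I" if r.improved else "Removal"
    if r.achievement == "positive" and r.implementation == "positive":
        return "Full Standing"
    if r.achievement == "positive":
        return "Satisfactory Standing"
    if r.achievement == "indeterminable":
        return "Probation I"
    # Negative achievement: implementation quality sets the severity
    return "Probation I" if r.implementation == "positive" else "Probation II"

print(provider_status(ProviderReview(
    compliance_violation=False, serious_violation=False,
    achievement="positive", implementation="positive",
    on_probation=False, last_probation_year=False, improved=False,
)))
```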
CONCLUSION • Each state should begin its SES evaluation planning process by identifying: • a) the specific questions that its SES evaluation needs to answer, and • b) the resources that can reasonably be allocated to support further evaluation planning, data collection, analysis, reporting, and dissemination.
CONCLUSION • Work through the hierarchy of evaluation designs presented here and select the design that allows the highest level of rigor the state’s data and resources can support. • States may wish to engage third-party evaluation experts to help plan and conduct these evaluations.