Standards, Assessment and Accountability: Administration of Environmental Rating Scales by EEC Regional Staff
Board of Early Education and Care
December 8, 2009
Early Education and Care System Components: Training EEC Staff on Environmental Rating Scales
• Governance (FS, C, I)
• Regulations (Q, WF, C)
• Workforce and Professional Development (Q, WF)
• Standards, Assessment and Accountability (Q, FS, WF)
• Informed Families and Public (FS, C, I)
• Finance (Q, FS, WF, I)
EEC Strategic Directions: Q = Quality; FS = Family support, access, and affordability; WF = Workforce; C = Communications; I = Infrastructure
Background Information
• Purpose: Develop a system of accountability for measuring the quality of early education and care and out-of-school-time programs as part of QRIS. Implement a statewide system for rating the quality of all licensed programs (center-based, family child care, and after-school care), either informally as technical assistance to a program or as an official rating to support a program's application for a QRIS rating level.
• Alignment with QRIS: MA QRIS standards include measurement using the Environmental Rating Scales (ERS).
• At Level 3, programs use the ERS as a self-assessment tool.
• At Level 4, programs or providers are reviewed by an external reviewer to demonstrate evidence of meeting quality criteria.
Overview of Environmental Rating Scales
• The Environmental Rating Scales (Frank Porter Graham Child Development Institute, UNC) are used by individual programs, in state QRIS systems, and by researchers in many major studies to measure quality in early education and care programs.
• These scales, which have sound psychometric properties of validity and reliability, assess aspects of process quality (interactions with teachers, peers, materials, etc.). Process quality has been cited as more critical to child outcomes than structural quality (e.g., group size or teacher-child ratio). There are four (4) instruments: ECERS, ITERS, FDCRS, and SACERS.
Use of Environmental Rating Scales
• Self-Improvement - the director receives an orientation to the process and completes a self-assessment of some or all classrooms in order to plan for program improvement; scores are not reported.
• Informal Rating - a classroom or a sample of classrooms is rated by a reliable outside rater (such as a licensing staff person), and scores are not reported but are used by the program, coordinated with technical assistance from an outside consultant/licensor, for program improvement. This could be part of Level 3 of QRIS and be used to plan for movement to Level 4; or
Use of Environmental Rating Scales, Cont'd.
• Formal Rating - the program is rated by a trained rater who meets reliability requirements, with scores reported to EEC.
• The program would be required to develop a program improvement plan, using a designated format that includes actions, persons responsible, and timelines, to submit to its regional office, with a follow-up visit after the action plan is completed.
• This could be designed to include a small incentive grant (e.g., $500) to meet the costs of improvement and a follow-up visit by a reliable rater to rate the program and report on its improvement. The follow-up could occur either at the completion of the action plan or at the next site visit, whichever comes first.
• A formal rating would occur as part of QRIS at Level 4.
Proposed Model: Train-the-Trainers
• Staff selected to be Trainers must complete training with FPG staff over a 5-day period.
• The first day is an introduction to the scales and an overview training with video, followed by 4 days of practice visits in order to achieve 85% reliability with FPG staff (an agreement-rate sketch follows this list).
• The training has a maximum of 2-3 trainees per FPG staff member.
• Costs would vary depending on the number of individuals trained and the implementation option selected.
• Depending on the number of FPG staff needed, FPG has indicated that the first available training would be early March.
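For reference, ERS inter-rater reliability is often computed as the percentage of items on which two observers' scores agree within one point on the 7-point scale; the exact criteria FPG applies during training may differ, so the sketch below is illustrative only, with hypothetical scores.

```python
# Minimal sketch: percent agreement between a trainee's and an anchor rater's
# item scores, counting scores within one point of each other as agreement.
# Assumption: the "within one point on the 7-point scale" rule is a common
# ERS reliability convention; FPG's actual training criteria may differ.

def percent_agreement(trainee_scores, anchor_scores, tolerance=1):
    if len(trainee_scores) != len(anchor_scores):
        raise ValueError("Both raters must score the same set of items.")
    agreements = sum(
        1 for t, a in zip(trainee_scores, anchor_scores) if abs(t - a) <= tolerance
    )
    return 100.0 * agreements / len(trainee_scores)

# Hypothetical example: 10 items scored 1-7 by a trainee and an anchor rater.
trainee = [5, 4, 6, 3, 7, 5, 4, 6, 2, 5]
anchor  = [5, 5, 6, 4, 6, 5, 3, 6, 4, 5]
print(f"Agreement: {percent_agreement(trainee, anchor):.0f}%")  # 90% here; 85% is the target
```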
Implementation Option One
• 4 staff from each of the (5) EEC regional offices, 1 per scale (ECERS, ITERS, FDCRS, SACERS) = a total of 20 people statewide (4 Trainers per office).
• These 20 would become the state Trainers and would train other staff to become additional raters, so that each office would have at least two raters for each scale.
• The Trainers would be responsible for conducting the required reliability checks for the raters in their office.
• Trainers would be responsible for doing their own quarterly reliability checks with one of the other statewide Trainers of their scale.
COST: Approximately $120,000
OUTCOME: 20 staff in state who are trained and able to reliably train reviewers within EEC or the field
Implementation Option Two
• Across the regional offices, at least 3 people trained per scale (ECERS, ITERS, FDCRS, SACERS) = 12 people statewide trained.
• A collaboration plan between offices would be developed to ensure each office has at least 2 Trainers and that they work with staff in both offices (e.g., Springfield might have 1 ECERS Trainer and 1 FDCRS Trainer, while Worcester might have 1 ITERS Trainer and 1 SACERS Trainer).
• The licensing staff in each office would be trained on an instrument so that each office would have raters for all 4 instruments (e.g., if an office had a total of 14 licensors, at least 3 people would be trained to do ratings for each scale).
• Trainers would then be responsible for training staff in both offices as well as doing the required reliability checks for staff in those offices.
• Trainers would be required to do their own quarterly reliability checks with one of the other two Trainers in the state for their scale.
COST: Approximately $110,000
OUTCOME: 12 staff in state who are able to reliably train reviewers within EEC or the field
Implementation Option Three
• Across the regional offices, train two staff on each scale = a total of 8 people statewide.
• Designate two staff as responsible for the training and reliability checks of staff in the other regional offices (e.g., one to cover the Springfield and Worcester offices and one in the Quincy office who would also cover Lawrence and Taunton).
COST: Approximately $100,000 (Note that costs do not decrease substantially, since there are 4 scales and a minimum of 4 FPG staff would still be needed.)
OUTCOME: 8 staff in state who are able to reliably train reviewers within EEC or the field
Additional Costs Included in All Options
The following additional costs are factored into each option:
• Funds to purchase software or create reporting forms that allow off-site supervisors to review assessments, provide feedback, monitor raters, and maintain inter-rater reliability
• 15% indirect rate
• Travel costs
• Follow-up reliability training as needed and an annual reliability training refresher
EEC Recommendation
EEC recommends Option One:
• 20 staff statewide
• One staff member in each of the 5 EEC regional offices dedicated to each instrument/program type
• Implement the Train-the-Trainers model
• The Trainers would be responsible for conducting the required reliability checks for the raters in their office or region.
• Trainers would be responsible for doing their own quarterly reliability checks with one of the other statewide Trainers of their scale.
COST: Approximately $120,000
OUTCOME: 20 staff in state who are trained and able to reliably train reviewers within EEC or the field