The Program Assessment Rating Tool (PART) Mary Cassell Office of Management and Budget April 28, 2011
Overview • What is the PART? • How was it developed? • What are the components? • Quality controls • How was the PART used? • Budget • Program Improvements • Lessons learned
“In God we trust… …all others, bring data.” -W. Edwards Deming
Introduction • The PART was a component of the Bush Administration’s Management Agenda that focused on Budget and Performance Integration • The PART promoted efforts to achieve concrete and measurable results • The PART supported program improvements
What is the Program Assessment Rating Tool (PART)? • A set of questions that evaluates program performance in four critical areas: • Program Purpose and Design • Strategic Planning • Program Management • Program Results and Accountability • A tool to assess performance using evidence • Provides a consistent, transparent approach to evaluating programs across the Federal government
Why PART? • Measure and diagnose program performance • Evaluate programs in a systematic, consistent, and transparent manner • Inform agency and OMB decisions on resource allocations • Focus on program improvements through management, legislative, regulatory, or budgetary actions • Establish accountability for results
How did the PART work? • Answers to questions generated scores, which were weighted and tallied into a total score • Answers were based on evidence, evaluations, and data • Ratings were based on total scores: Effective, Moderately Effective, Adequate, or Ineffective • Results Not Demonstrated was assigned to programs that did not have performance measures or data, regardless of overall score
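The mechanics described above amount to a simple mapping from a weighted total score to a rating, plus an override for programs without measures or data. Below is a minimal Python sketch under those assumptions; the cutoffs follow the published PART rating bands (Effective 85-100, Moderately Effective 70-84, Adequate 50-69, Ineffective below 50), and the function name and inputs are hypothetical.

```python
# Minimal sketch: map a 0-100 weighted total score to a PART rating.
# Cutoffs follow the published PART bands; names are hypothetical.

def part_rating(total_score: float, has_measures_and_data: bool) -> str:
    # Programs lacking performance measures or data were rated
    # "Results Not Demonstrated" regardless of their numeric score.
    if not has_measures_and_data:
        return "Results Not Demonstrated"
    if total_score >= 85:
        return "Effective"
    if total_score >= 70:
        return "Moderately Effective"
    if total_score >= 50:
        return "Adequate"
    return "Ineffective"

print(part_rating(78.5, True))   # Moderately Effective
print(part_rating(78.5, False))  # Results Not Demonstrated
```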
PART Questions and Process • Roughly 25-30 analytical questions; explanations and evidence were required • Standards of evidence held programs to a high bar • Question weights could be tailored to reflect program specifics • Interactions between questions were considered • Yes/No answers in the diagnostic sections; four levels of answers in the results section • Collaborative process with agencies; OMB had the pen
How was the PART developed? • Designed by 12 OMB career staff, including one representative from each division • Piloted with about 60 programs • The pilot generated extensive input from agencies that resulted in several revisions: changes in scoring and elimination of a question about whether the program served an appropriate federal role • Conducted trial runs with research institutions • Agency roll-out: • OMB training • Agency meetings • Agency trainings • Incorporation into 2002 budget decisions and materials • Development, pilot, and revision process took about 6 months, including development of guidance and training
PART Program Types • Direct Federal • Competitive Grant • Block/Formula Grant • Regulatory Based • Capital Assets and Service Acquisition • Credit • Research and Development
PART Questions • Section I: Program Purpose & Design (20%) • Is the program purpose clear? • Does the program address an existing problem or need? • Is the program unnecessarily duplicative? • Is the program free of major design flaws? • Is the program targeted effectively? • Section II: Strategic Planning (10%) • Does the program have strong long-term performance measures? • Do the long-term measures have ambitious targets? • Does the program have strong annual performance targets? • Does the program have baselines and ambitious targets? • Do all partners agree to the goals and targets? • Are independent evaluations conducted of the program? • Are budgets tied to performance goals? • Has the program taken steps to correct strategic planning deficiencies?
PART Questions • Section III: Program Management (20%) • Does the program collect timely performance information and use it to manage? • Are managers and partners held accountable for program performance? • Are funds obligated in a timely manner? • Does the program have procedures (IT, competitive sourcing, etc.) to improve efficiency? • Does the program collaborate with related programs? • Does the program use strong financial management practices? • Has the program taken meaningful steps to address management deficiencies? • Additional questions for specific types of programs. • Section IV: Program Results (50%) • Has the program made adequate progress in achieving its long-term goals? • Does the program achieve its annual performance goals? • Does the program demonstrate improved efficiencies? • Does the program compare favorably to similar programs, both public and private? • Do independent evaluations show positive results?
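Putting the section weights above together with the answer types from the process slide, the tally can be sketched as follows. This assumes equal question weights within each section (the guidance allowed weights to be tailored per program), and the partial-credit fractions for the four-level results answers are assumptions, not published values.

```python
# Sketch of the weighted tally across the four PART sections.
# Section weights come from the slide above; partial-credit
# fractions for the four-level results answers are assumptions.

SECTION_WEIGHTS = {
    "Purpose & Design": 0.20,
    "Strategic Planning": 0.10,
    "Program Management": 0.20,
    "Program Results": 0.50,
}

ANSWER_CREDIT = {
    "Yes": 1.0,
    "Large Extent": 0.67,   # assumed fraction
    "Small Extent": 0.33,   # assumed fraction
    "No": 0.0,
}

def section_score(answers: list[str]) -> float:
    # Average credit across a section's questions, scaled to 0-100;
    # equal question weights within the section are assumed here.
    return 100 * sum(ANSWER_CREDIT[a] for a in answers) / len(answers)

def total_score(answers_by_section: dict[str, list[str]]) -> float:
    # Weighted sum of section scores using the 20/10/20/50 weights.
    return sum(
        SECTION_WEIGHTS[name] * section_score(answers)
        for name, answers in answers_by_section.items()
    )

example = {
    "Purpose & Design": ["Yes", "Yes", "No", "Yes", "Yes"],
    "Strategic Planning": ["Yes"] * 6 + ["No"] * 2,
    "Program Management": ["Yes"] * 7,
    "Program Results": ["Large Extent", "Yes", "Small Extent",
                        "No", "Large Extent"],
}
print(round(total_score(example), 1))  # 70.2 -> Moderately Effective
```

A total computed this way would then feed the rating bands sketched earlier, with the Results Not Demonstrated override applied independently of the number.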
Performance Measures, Data and Evaluations • Strong focus on performance measures: they should capture the most important aspects of a program’s mission and priorities • Key issues to consider: 1) performance measures and targets; 2) a focus on outcomes whenever possible; 3) annual and long-term timeframes • Efficiency measures required • Rigorous evaluations strongly encouraged
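As a concrete illustration of what a well-formed measure carries, a hypothetical record might bundle the measure type, a baseline, and both annual and long-term targets. The sketch below is illustrative only, not an OMB data format, and the "ambitious target" threshold is an assumption.

```python
# Hypothetical record for a PART-style performance measure, carrying
# the elements the guidance emphasizes: an outcome focus, a baseline,
# and both annual and long-term targets. Not an actual OMB format.
from dataclasses import dataclass

@dataclass
class PerformanceMeasure:
    name: str
    kind: str            # "outcome" preferred; "output" or "efficiency"
    baseline: float
    annual_target: float
    long_term_target: float

    def is_ambitious(self) -> bool:
        # Crude proxy: the long-term target should improve meaningfully
        # on the baseline (the 10% threshold is an assumption).
        return abs(self.long_term_target - self.baseline) >= 0.1 * abs(self.baseline)

reading = PerformanceMeasure(
    name="Percent of migrant students proficient in reading",
    kind="outcome",
    baseline=40.0,
    annual_target=43.0,
    long_term_target=55.0,
)
print(reading.is_ambitious())  # True
```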
Quality Controls • The PART was a tool used to guide a collective analysis, not a valid and reliable evaluation instrument, so it required other mechanisms to promote consistent application: • Guidance and standards of evidence • Training • On-going technical assistance • Consistency check • Appeals process • Public transparency
How was the PART used? A Focus on Improvement • Every program developed improvement plans • Plans focused on findings in the PART assessments • Programs implemented plans and reported on progress • Reassessments occurred once a program had made substantive changes
The Use of the PART in the Budget Process • Informed budget decisions (funding, legislative, and management) • Increased prominence of performance in the Budget • Increased accountability and focus on data and results
Example: Migrant Education and the PART • Collaborative process between OMB and program office. • Program office provided evidence to back up PART answers (such as monitoring instruments, State data, action plans, etc.) • OMB and ED met to discuss evidence • OMB and ED shared PART drafts • ED developed follow-up actions.
Migrant Education PART
PART Findings:
• Program is well-designed and has a good strategic planning structure
• Program is well-managed
• Issues relating to possible inaccuracies in the eligible student count are being addressed
• States are showing progress in providing data and in improving student achievement
Results section:
• Ensure all States report complete and accurate data
• Continue to improve student achievement outcomes
• Improve efficiencies, in particular in the migrant student records transfer system
• Complete a program evaluation
Areas for Improvement and Action Steps for Migrant Education
• Complete national audit of child eligibility determinations
• Implement and collect data on the Migrant Student Information Exchange (MSIX)
• Use data, in particular on student achievement, to improve performance
The Process: Distribution of Ratings Government-wide [chart]
The Process: Department of Education Cumulative Ratings [chart]
Lessons Learned • Pros • Focus on results, data, performance measurement, evaluation • Program improvements • Common analysis • Transparency • Cross-program and cross-agency comparisons between similar programs • Identification of best practices • Informed budget decisions
Lessons Learned • Cons • Not consistent enough to allow trade-offs between unlike programs • Better for program improvement than accountability, unless coupled with strong evaluation • Became too burdensome • Not fully embraced by agencies or Congress