PPA 502 – Program Evaluation Lecture 3b – Outcome Monitoring
Introduction • The routine and periodic monitoring of outcomes is an important development in the evolution of performance monitoring systems. • Outcome monitoring requires the routine measurement and reporting of important indicators of outcome-oriented results.
What Is Outcome Monitoring? • Outcome monitoring is the regular (periodic, frequent) reporting of program results in ways that stakeholders can use to understand and judge those results. • The indicators measured should have validity, that is, meaning closely tied to performance expectations. • The ways in which they are reported should also have utility, that is, they should be easily interpreted and focus attention on the key points.
Other Forms of Monitoring • Program monitoring – site visits by experts for compliance-focused reviews of program operations, designed to remedy procedural deficiencies. • Outcome monitoring, by contrast, is outcome-focused or results-oriented. • It is built into the routines of data reporting within program operations. • It provides frequent and public feedback on performance. • Outcome monitoring is also not impact assessment, which measures whether and how the program itself produced the observed outcomes.
Why Do Outcome Monitoring? • The accountability mandate. • Modern demands for accountability require proof. • Examples: local government (North Carolina), human services (Florida). • http://www.iog.unc.edu/programs/perfmeas/. • http://www.oppaga.state.fl.us/reports/pdf/HealthHS_2007.pdf. • The Government Performance and Results Act. • The U.S. Government Accountability Office (GAO).
Why Do Outcome Monitoring? • Directed performance improvements. • A tool for making more efficient use of resources. • The essence of continuous quality improvement: diagnose the barriers to better performance, design alternatives that remove or circumvent those barriers, run trials to test the alternatives, and expand the successful efforts to raise performance levels while shrinking variability in performance. • Florida example. • http://www.oppaga.state.fl.us/default.asp.
Why Do Outcome Monitoring? • Commitment to continuous performance improvement. • A comparative snapshot of performance for all those who are responsible for outcomes. • Stimulates competition and unleashes creativity. • More efficient use of support resources. • Performance assessment focuses diagnostic skills on specific, underperforming elements of the program. • Increases efficiency in the conduct of program evaluations. • Provides raw data for evaluation. • Focuses the evaluator's attention on the programs most relevant to stakeholders.
Why Do Outcome Monitoring? • Growing confidence in organizational performance. • No system creates a PR nirvana. Critics will always find ammunition. • But a good outcome monitoring system can limit the damage by underscoring ongoing improvement efforts. • Internally, outcome monitoring provides perspective to officials burdened with program details.
Design Issues for Outcome Monitoring • What to measure? • Measures must be appropriate. • Measures must sufficiently cover the range of intended outcomes. • Stakeholders should be involved in identifying the outcome measures. • How many measures? • A small number of highly relevant measures for upper management. • A more comprehensive set of measures to supplement the key indicators.
Design Issues for Outcome Monitoring • How (and how often) should performance be measured? • Automated measures allow more frequent assessment than labor-intensive data collection and reporting systems. • Some measures cannot be drawn from automated systems; those must be built into program operations. • But they cannot put too many burdens on program staff. • Sampling can lighten the burden, but precision depends on sample size (see the sketch after this list). • Contract requirements. • Mobilization of outside groups. • Final answer: whatever it takes.
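To make the sample-size tradeoff concrete, here is a minimal sketch (not from the lecture; Python and hypothetical numbers assumed) showing how the margin of error of a sampled outcome indicator shrinks as the sample grows:

    import math

    def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
        """95% margin of error for a sampled proportion (e.g., the share
        of clients achieving an outcome), via the normal approximation."""
        return z * math.sqrt(p * (1 - p) / n)

    # Hypothetical indicator: 60% of sampled clients report success.
    for n in (50, 100, 400, 1600):
        print(f"n = {n:4d}: 60% +/- {margin_of_error(0.60, n):.1%}")

Note that quadrupling the sample only halves the margin of error (roughly +/-13.6% at n = 50 versus +/-4.8% at n = 400 here), so a low-burden sampling plan still needs enough cases to support the comparisons stakeholders care about.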
Design Issues for Outcome Monitoring • How should the results be presented? • Presentation varies by message, sender, and receiver. • Different levels of aggregation and different emphases. • Graphics. • Data should be comparative (as sketched below). • Review presentation standards periodically.
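As one illustration of comparative graphics, here is a minimal sketch (not from the lecture; Python with matplotlib and invented site data assumed) that plots an outcome rate for several sites against a shared target:

    import matplotlib.pyplot as plt

    # Hypothetical data: outcome rates for four service sites vs. a target.
    sites = ["Site A", "Site B", "Site C", "Site D"]
    rates = [0.72, 0.58, 0.65, 0.81]  # share of clients achieving the outcome
    target = 0.70

    fig, ax = plt.subplots()
    ax.bar(sites, rates)
    ax.axhline(target, linestyle="--", color="gray", label="Target (70%)")
    ax.set_ylabel("Clients achieving outcome")
    ax.set_ylim(0, 1)
    ax.set_title("Outcome rate by site (hypothetical data)")
    ax.legend()
    plt.tight_layout()
    plt.show()

Plotting every unit against the same target line makes the comparison itself the focus of attention, which is the point of the "data should be comparative" guideline.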
Pitfalls in Outcome Monitoring • Unrealistic expectations. • Outcome monitoring is not a panacea. • Data collection is not easy; its size and scope are often underestimated. • Avoiding a clear focus on outcomes. • It is easier to measure inputs, processes, and outputs than outcomes. • Some outcomes may not be directly measurable. • Persistence, good communication, and group facilitation skills can overcome resistance.
Pitfalls in Outcome Monitoring • Irrelevance. • Measures far removed from program reality. • Changes in policy priorities without the requisite changes in performance measures. • Unwarranted conclusions. • Responding to measures by re-targeting the program rather than by improving performance.