PPA 502 – Program Evaluation
Lecture 2b – Evaluability Assessment
Evaluability Assessment
Problems confronting evaluation:
• Evaluators and users fail to agree on the goals, objectives, side effects, and performance criteria to be used in evaluating the program.
• Program goals and objectives are unrealistic given the resources committed to them and the program activities underway.
• Relevant information on program performance is often unavailable.
• Administrators at the policy or operating level are unable or unwilling to change the program on the basis of evaluation information.
Evaluability Assessment
By contrast, a program is ready for useful evaluation when:
• Program goals, objectives, important side effects, and priority information needs are well-defined.
• Program goals and objectives are plausible.
• Relevant performance data can be obtained.
• The intended users of the evaluation results have agreed on how they will use the information.
Key Steps in Evaluability Assessment
• Involve the intended users of evaluation information.
• Clarify the intended program from the perspectives of policymakers, managers, staff, and other key stakeholders.
• Explore program reality, including the plausibility and measurability of program goals and objectives.
• Reach agreement on needed changes in program activities and objectives.
• Explore alternative evaluation designs.
• Agree on evaluation priorities and intended uses of information on program performance.
Gaining and Holding the Support of Managers
• Form a policy group and a work group to involve policymakers, managers, and key staff in evaluation.
• Clarify the types of products and results expected.
• Use briefings to present:
  • The perspectives of policymakers and managers.
  • The reality of program operations.
  • Options for changes in program activities or the collection and use of information on program performance.
Clarifying Program Intent
• Develop program design models documenting program resources, program activities, important intended program outcomes, and assumed causal linkages from the perspectives of key policymakers, managers, and interest groups.
• Develop program design models at varying levels of detail.
• Use more detailed program design models to ensure that evaluators and managers have a common understanding of the intended program, including negative side effects to be minimized.
• Use less detailed program design models to focus briefings and discussions on key issues.
• Develop lists of currently agreed-on performance indicators and possible new performance indicators to ensure a common understanding of the goals, objectives, important side effects, and performance indicators to be used in subsequent evaluation work.
Exploring Program Reality
• Focus on descriptions of actual program activities and outcomes, reviews of performance measurement systems currently in use, and descriptions of especially strong project performance and of problems inhibiting effective program performance.
• Use site visits and prior reports to make preliminary estimates of the likelihood that program objectives will be achieved.
• Identify feasible measures of program performance.
Reaching Agreement on Any Needed Changes in Program Design
• If appropriate, suggest changes in program design that appear likely to improve program performance.
• Proceed by successive iterations, spelling out the likely costs and likely consequences of the program change options of greatest interest to program managers.
Exploring Alternative Evaluation Designs
• Spell out the costs and intended uses of evaluation options: measurements of specific variables or tests of specific causal assumptions.
• Present examples of the type of data to be produced.
• Interact with intended evaluation users at frequent intervals.
• Hold managers’ interest by providing early evaluability assessment products.
• Brief key managers and policymakers on evaluability assessment findings and options.
• Explain the implications of the “status quo option” (no further evaluation) and the costs and potential uses of various evaluation options.
• Ensure that a mechanism is available for speeding initiation of follow-on evaluation procedures.
Documenting Policy and Management Decisions
• Conclude each phase of an evaluability assessment with a brief memorandum documenting significant decisions made in meetings with managers and policymakers.
Proceeding by Successive Iterations
• Do the entire evaluability assessment once, early in the assessment.
• Obtain tentative management decisions on program objectives, important side effects, evaluation criteria, and intended uses of evaluation information.
• Redo portions of the evaluability assessment as often as necessary to obtain informed management decisions.
Reducing Evaluability Assessment Costs
• Minimize production of intermediate written products.
• Use briefings that present the information required for management decisions.