ROLE OF EVALUATION IN POLICY DEVELOPMENT AND IMPLEMENTATION
PRESENTATION BY ARDEN HANDLER, DrPH
April 10, 2002
RELATIONSHIP BETWEEN EVALUATION AND POLICY • Multiple program evaluations can lead to the development of policy • Programs are often thought of as expressions of policy, so when we do program evaluation we may in fact be evaluating a policy (e.g., Head Start = day care policy for low-income children)
RELATIONSHIP BETWEEN EVALUATION AND POLICY • Population-based programs (e.g., Medicaid) are often thought of as policies; when we evaluate population-based programs, we are usually using program evaluation methods to examine a policy
PROGRAM EVALUATION VERSUS POLICY ANALYSIS • Program evaluation uses research designs with explicit designation of comparison groups to determine effectiveness
PROGRAM EVALUATION VERSUS POLICY ANALYSIS • Policy analysis uses a variety of different frameworks to answer one or more questions about a policy: • HISTORICAL FRAMEWORK • VALUATIVE FRAMEWORK • FEASIBILITY FRAMEWORK
PROGRAM EVALUATION VERSUS POLICY ANALYSIS • Policy analysis often relies on policy/program evaluations • The tools of program evaluation can be used to evaluate the effectiveness of policies; however, this is not policy analysis
Purposes of Evaluation/ Evaluation Questions • Produce information in order to enhance management decision-making • Improve program operations • Maximize benefits to clients: to what extent and how well was the policy/program implemented?
Purposes of Evaluation/ Evaluation Questions • Assess systematically the impact of programs/policies on the problems they are designed to ameliorate • How well did the program/policy work? • Was the program worth its costs? • What is the impact of the program/policy on the community?
Two Main Types Of Evaluation • Process or formative • Outcome or summative
Process or Formative Evaluation • Did the program/policy meet its process objectives? • Was the program/policy implemented as planned? • What were the type and volume of services provided? • Who was served among the population at risk?
Why Do We Do Process Evaluation? • Process evaluation describes the policy/program and the general environment in which it operates, including: • Which services are being delivered • Who delivers the services • Who are the persons served • The costs involved
Why Do We Do Process Evaluation? • Process evaluation as program monitoring • Charts progress towards achievement of objectives • Systematically compares data generated by the program with targets set by the program in its objectives
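As a concrete illustration, program monitoring often reduces to comparing the data a program generates against the targets set in its process objectives. A minimal Python sketch, with all objectives, targets, and counts hypothetical:

```python
# Minimal sketch of process monitoring: compare program-generated data
# against the targets set in the program's process objectives.
# All objectives, targets, and counts below are hypothetical.

targets = {"prenatal_visits": 1200, "home_visits": 400, "clients_enrolled": 300}
actuals = {"prenatal_visits": 950, "home_visits": 410, "clients_enrolled": 240}

for objective, target in targets.items():
    achieved = actuals[objective]
    pct = 100 * achieved / target
    status = "on track" if pct >= 90 else "behind target"
    print(f"{objective}: {achieved}/{target} ({pct:.0f}% of target) - {status}")
```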
Why Do We Do Process Evaluation? • Process evaluation • Provides feedback to the administrator regarding the program • Allows others to replicate the program if it looks attractive • Provides information to the outcome evaluation about program implementation and helps explain findings
Outcome or Summative Evaluation • Did the program/policy meet its outcome objectives/goals? • Did the program/policy make a difference?
Outcome or Summative Evaluation • What change occurred in the population participating in or affected by the program/policy? • What are the intended and unintended consequences of this program/policy? • Requires a comparison group to judge success
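To make the comparison-group requirement concrete, here is a minimal sketch with hypothetical numbers: success is judged by the difference between participants and a comparison group, not by the participant rate alone.

```python
# Hypothetical sketch: outcome evaluation compares the participant group
# to a comparison group; the program "effect" is the difference in rates,
# not the participant rate by itself.

program_events, program_n = 36, 400        # e.g., adverse outcomes among participants
comparison_events, comparison_n = 60, 420  # adverse outcomes in the comparison group

rate_program = program_events / program_n
rate_comparison = comparison_events / comparison_n

print(f"Participant rate: {rate_program:.3f}")
print(f"Comparison rate:  {rate_comparison:.3f}")
print(f"Rate difference:  {rate_program - rate_comparison:+.3f}")
# A negative difference suggests fewer adverse outcomes among participants;
# whether it can be attributed to the program depends on the design.
```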
Outcome or Summative Evaluation • What impact did the program/policy have on the target community? • Requires information about coverage
Why Do We Do Outcome Evaluation? • We want to know if what we are doing works better than nothing at all • We want to know if something new works better than what we usually do
Why Do We Do Outcome Evaluation? • We want to know which of two or more programs/policies works better • We want to know if we are doing what we are doing efficiently
What Kind of Outcomes Should We Focus on? • Outcomes which can clearly be attributed to the program/policy • Outcomes which are sensitive to change and intervention • Outcomes which are realistic; can the outcomes be achieved in the time frame of the evaluation?
Efficiency Analysis • Once outcomes have been selected and measured, an extension of outcome evaluation is efficiency analysis: • cost-efficiency • cost-effectiveness • cost-benefit
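A worked example of the arithmetic, with all figures hypothetical: cost-effectiveness divides the incremental cost of the program by the incremental outcome it produces relative to the comparison condition, while cost-benefit expresses both sides in dollars.

```python
# Hypothetical cost-effectiveness sketch: incremental cost per additional
# outcome achieved, relative to the comparison condition.

cost_program, cost_comparison = 500_000, 300_000   # dollars
outcomes_program, outcomes_comparison = 220, 140   # e.g., smokers who quit

icer = (cost_program - cost_comparison) / (outcomes_program - outcomes_comparison)
print(f"Incremental cost-effectiveness: ${icer:,.0f} per additional quit")

# Cost-benefit analysis instead expresses the outcomes in dollars:
benefits_program = 900_000   # hypothetical dollar value of averted illness
net_benefit = benefits_program - cost_program
print(f"Net benefit: ${net_benefit:,.0f}")
```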
Evaluation Success Whether an evaluation will demonstrate a positive impact of a policy or program depends on other phases of the planning process as well as on adequate evaluation design and data collection
Evaluation Success Whether an evaluation will show a positive effect of a policy/program depends on: • Adequate Program Theory • Adequate Program Implementation • Adequate Program Evaluation
Reasons Why Evaluations May Demonstrate No Program Effect • Theory failure
Program Theory • What is a program’s theory? • Plausible model of how program/policy works • Demonstrates cause and effect relationships • Shows links between a program’s/policy’s inputs, processes and outcomes
Means-Ends Hierarchy (M.Q. Patton) • Program theory links the program means to the program ends • Theory of Action
Means-Ends Hierarchy (M.Q. Patton) • Constructing a causal chain of events forces us to make explicit the assumptions of the program/policy • What series of activities must take place before we can expect that any impact will result?
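One way to force the assumptions into the open is simply to write the chain down, link by link. A hypothetical illustration for an invented clinic-based smoking-cessation program:

```python
# Hypothetical causal chain for an invented smoking-cessation program.
# Each link is an assumption that must hold before the next step,
# and therefore any impact, can occur.

causal_chain = [
    "Clinic hires and trains cessation counselors",       # inputs
    "Counselors deliver sessions to enrolled smokers",    # activities
    "Smokers attend enough sessions to receive the dose", # outputs
    "Smokers attempt to quit and sustain the attempt",    # intermediate outcome
    "Smoking prevalence among clients declines",          # outcome
]

for step_before, step_after in zip(causal_chain, causal_chain[1:]):
    print(f"IF   {step_before}")
    print(f"THEN {step_after}\n")
```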
Theory Failure • Evaluations may fail to find a positive impact if program/policy theory is incorrect • The program/policy is not causally linked with the hypothesized outcomes (sometimes because the true cause of the problem was not identified)
Theory Failure • Evaluation may fail to find a positive impact if program/policy theory is not sufficiently detailed to allow for the development of a program plan adequate to activate the causal chain from intervention to outcomes
Theory Failure • Evaluations may fail if the program/policy was not targeted to an appropriate population (the theory about who will benefit is incorrect) • These three issues are usually under the control of those designing the program/policy
Other Reasons Why Evaluations May Demonstrate No Policy/Program Effect • Program/policy failure
Program/policy Failure • Program/policy goals and objectives were not fully specified during the planning process
Other Program Reasons • Program/policy was not fully delivered • Program/policy delivery did not adhere to the specified protocol
Other Program Reasons • Delivery of treatment deteriorated during program implementation • Program/policy resources were inadequate (which may explain the above)
Other Program Reasons • The program/policy as delivered under prior experimental conditions was not representative of the treatment that can be delivered in practice • e.g., translation from a university to a "real-world" setting, or from a pilot to a full state program
Non-Program Reasons Why Evaluations May Demonstrate No Program Effect • Evaluation Design and Plan • Are the design and methods used appropriate for the questions being asked? • Is the design free of bias? • Is measurement reliable and valid?
Conducting an Outcome Evaluation How do we choose the appropriate evaluation design to assess system, service, program or policy effectiveness?
Tools for Assessing Effectiveness • Multiple paradigms exist for examining system, service, program, and policy effectiveness • Each has its own rhetoric and analytic tools, which ultimately provide the same answers
Tools for Assessing Effectiveness • Epidemiology • e.g., Are cases less likely to have had exposure to the program than controls? • Health Services Research • e.g., Does differential utilization of services by enrollees and non-enrollees lead to differential outcomes?
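As a hypothetical illustration of the epidemiologic framing above, the comparison of program exposure between cases and controls reduces to an exposure odds ratio:

```python
# Hypothetical case-control sketch: are cases (e.g., poor birth outcomes)
# less likely than controls to have been exposed to the program?

cases_exposed, cases_unexposed = 30, 70
controls_exposed, controls_unexposed = 55, 45

# Odds ratio of exposure among cases vs. controls; an OR below 1 is
# consistent with a protective program effect.
odds_ratio = (cases_exposed / cases_unexposed) / (controls_exposed / controls_unexposed)
print(f"Exposure odds ratio: {odds_ratio:.2f}")
```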
Tools for Assessing Effectiveness • Evaluation Research • e.g., Are outcomes for individuals in the program (intervention) different from those in the comparison or control group?
Mix and Match • The evaluator uses a mix and match of methods/paradigms
Depending on: 1. Whether the program/service/policy and/or system change covers: • the entire target population in a state/city/county • the entire target population in several counties/community areas
Depending on: 2. Whether program/service/policy and/or system change includes an evaluation component at initiation 3. Whether adequate resources are available for evaluation 4. Whether it is ethical/possible to manipulate exposure to the intervention
Outcome Evaluation Strategies • Questions to consider: Is service/program/policy and/or intervention population based or individually based?
Outcome Evaluation Strategies • Population Based • e.g., Title V, Title X, Medicaid • From the point of view of evaluation, these programs/policies can be considered "universal" since all individuals of a certain eligibility status are entitled to receive services
Outcome Evaluation Strategies • Population Based: issues • With coverage aimed at entire population, who is the comparison or the unexposed group? • What are the differences between eligibles served and not served which may affect outcomes? Between eligibles and ineligibles?
Outcome Evaluation Strategies • Population Based: issues • How do we determine the extent of program exposure or coverage? (need population-based denominators and quality program data)
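A minimal sketch of the coverage arithmetic, with hypothetical counts: the numerator comes from program data, while the denominator must be population-based (e.g., from vital records or census data).

```python
# Hypothetical coverage sketch: program exposure requires a population-based
# denominator, not just counts of people served.

eligible_population = 12_500  # e.g., Medicaid-eligible pregnant women in a county
served_by_program = 8_100     # from program/claims data

coverage = served_by_program / eligible_population
print(f"Coverage: {coverage:.1%} of the eligible population")
# Low coverage limits the impact a program can have on community-level
# outcomes, no matter how effective it is for those actually served.
```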
Outcome Evaluation Strategies • Population Based: issues • Which measures to use? • Measures are typically derived from population data sets, e.g., Medicaid claims data, surveillance, vital records, census data
Outcome Evaluation Strategies • Individually Based • e.g., AIDS/sex education program in two schools; smoking cessation program in two county clinics • Traditional evaluation strategies can be more readily used; designs are more straightforward
Outcome Evaluation Strategies • Questions to Consider: • Is the evaluation Prospective or Retrospective? • Retrospective design limits options for measurement and for selection of comparison groups • Prospective design requires evaluation resources committed up front
Outcome Evaluation Strategies • Questions to consider • Which design to choose? • Experimental, quasi-experimental, case-control, retrospective cohort? • What biases are inherent in one design versus another? • What are the trade-offs and costs?