Introduction to Program Evaluation (Ex-Post Policy Analysis) • Why It's NOT Ex-Ante Policy Analysis • Why It's NOT Research • Current Jumble of Approaches
Before and After • What we do before a policy is passed is generally referred to as "ex-ante" policy analysis • What we do after a policy is passed and programs are established and operating can be referred to as "ex-post" policy analysis, but is most often referred to as "program evaluation"
Ex-ante policy analysis "for policy making" - Analyzing policies. programs, and projects BEFORE they are implemented Commonly referred to as policy analysis Policy formulation Research for ….
Ex-post policy analysis"of policy" - Analyzing policies, programs, and projects AFTER they have been implemented Often called program evaluation Research during and after policy implementation
Ex-ante policy analysis • Ex-ante seems straightforward, doesn't it? • Figure out what you want to do • Figure out ways to do it • Compare the ways • Choose the best one • But it's not: it is incredibly chaotic
Policy Tower of Babel [slide graphic: a jumble of competing policy-model labels: Process, Rational, Institutional, Choice, Incremental, Implementation, Group, Elite, Agenda, Evaluation] • The policy analysis field is currently home to a babble of tongues • Dozens of "approaches," "methodologies," and "frameworks" are discussed throughout the literature • Almost always without reference to the other half-dozen nearly identical "approaches"
No unification of thought coming “The policy field is currently marked by an extraordinary variety of technical approaches, reflecting the variety of research traditions in contemporary social science. That variety is likely to persist for the foreseeable future, for the reductionist dream of a unified social science under a single theoretical banner is dead.” Davis Bobrow and John Dryzek, 1987
Frames • While there are literally dozens (if not hundreds) of "methods" to conduct "policy analysis," they can be summarized under some larger headings based on their underlying assumptions (beliefs about truth) • Of course, many policy professors have attempted just that, and now we have many different "frameworks" of "methods"
Frames • The one I find most conceptually clear is that of Bobrow and Dryzek, who see all of these different approaches falling into one of five overarching frameworks • Welfare Economics • Public Choice • Social Structure • Information Processing • Political Philosophy
Frames • Welfare economics • has the greatest number of policy field practitioners and is manifested in such familiar techniques as cost-benefit analysis and cost-effectiveness analysis
Frames • Public Choice • Straddles the disciplines of microeconomics, political science, and public administration and concerns itself mostly with the analysis and design of decision structures
Frames • Social structure • Rooted in sociology, has some crucial subdivisions, most notably those focusing on individual endowments vs. those focusing on group endowments
Frames • Information processing • Mostly focuses on the limits inherent in any participant in the policy process (although some optimistic practitioners see a chance for change through recognition of the situation)
Frames • Political philosophy • Practitioners focus on applying moral reasoning to the content of policy and the process of policy making • A key idea to remember is that policy analyses conducted within these frames, by and large, are not directly concerned with the final results in terms of program outcomes
So why is the policy analysis field so chaotic? • What do you think? • What does policy result in? • Given that, is there any way that a completely "rational" view can be achieved?
Wamsley and Zald • If we think of every program, organization, network, etc., as having an external/internal dimension and a political/economic dimension, the difference between policy analysis and program evaluation becomes clearer
Wamsley & Zald Remapped [slide diagram: a 2x2 grid crossing internal/external with economy/polity] • External Economy: Economic • Internal Economy: Technical • Internal Polity: Social • External Polity: Political
Wamsley & Zald Remapped [same 2x2 diagram] • External to the program itself is a political and economic environment trying to decide WHAT TO DO!
Wamsley & Zald Remapped [same 2x2 diagram] • Internal to the program itself is a social and technical environment trying to decide IF WE DID IT WELL!
One Policy provides environment for many Programs [slide diagram: a single policy's political/economic environment surrounding many programs, each with its own technical/social core; Policy Analysis addresses the outside, Program Evaluation the inside]
What is Program Evaluation (Ex-post Policy Analysis)? • Early definition • "…determining the worth or merit of something." Scriven (1967) • Contemporary definition • "…the identification, clarification, and application of defensible criteria to determine an evaluation object's value (worth or merit) in relation to those criteria" Fitzpatrick, et al. (2004) • What's the difference?
What is Program Evaluation (Ex-post Policy Analysis)? • One educator may like a new reading curriculum because of the love of reading it instills • Another educator may not like the same curriculum because it doesn’t move the child along as rapidly as other curricula in terms of letter interpretation, word interpretation, or sentence meaning • They are looking at the same program using different “criteria”
Program Evaluation is not Research • Research and Evaluation differ in their purposes and, as a result, in the roles of the researcher and evaluator in their work, their preparation, the generalizability of their results, and the criteria used to judge their work.
Program Evaluation is not Research (Research vs. Evaluation) • Purpose: develop knowledge/theory vs. help make judgments/decisions • Who sets the agenda: the researcher vs. the stakeholders • Generalizability of results: widespread vs. specific to the evaluation object • Criteria for judging quality: internal validity (causality) and external validity (generalizability) vs. accuracy, utility, feasibility, propriety • Disciplinary base: one discipline vs. interdisciplinary • Which approach seems more amenable to the likely roles of the public administrator? Why?
“Research seeks to prove, evaluation seeks to improve…” M.Q. Patton
Formal vs. Informal Evaluation • Evaluation is not new! • Neanderthals used it in determining which saplings made the best spears
Formal vs. Informal Evaluation • English yeomen abandoned their own crossbows in favor of the Welsh longbow • No GAO report has been found, but we assume an informal evaluation was conducted at some point • Result: the English clobbered the French, who tried the longbow but went back to the crossbow (BAD EVALUATION)
Formal vs. Informal Evaluation • As humans we informally evaluate things every day • Administrators make quick judgments on personnel, programs, budgets, etc. These judgments lead to decisions • A policy maker may make a judgment leading to a voting decision on a policy based on a single speech
Formal vs. Informal Evaluation • Informal evaluation may result in poor or wise decisions • The point is that they are characterized by an absence of breadth and depth because they lack systematic procedures and formally collected evidence • Program evaluation is about “formalizing” our approaches in forming judgments and making decisions
Formal Evaluation Process • Determine standards for judging quality • Collect relevant information • Apply the standards to determine value, quality, utility, effectiveness or significance • Identify recommendations to optimize the evaluation object (program)
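To make the four steps concrete, here is a minimal sketch in Python. It is only an illustration: the criteria names, weights, thresholds, and scores are all hypothetical, not part of any standard evaluation method.

```python
# Minimal sketch of the formal evaluation process above.
# All criteria, weights, thresholds, and scores are hypothetical.

# Step 1: determine standards for judging quality
standards = {
    "reach":        {"weight": 0.3, "threshold": 3.0},
    "cost":         {"weight": 0.3, "threshold": 3.0},
    "satisfaction": {"weight": 0.4, "threshold": 3.0},
}

# Step 2: collect relevant information (here, ratings on a 1-5 scale)
collected = {"reach": 4.2, "cost": 2.8, "satisfaction": 3.9}

# Step 3: apply the standards to determine value
overall = sum(standards[c]["weight"] * collected[c] for c in standards)
print(f"Weighted overall rating: {overall:.2f} / 5")

# Step 4: identify recommendations to optimize the program
for c, s in standards.items():
    if collected[c] < s["threshold"]:
        print(f"Recommend improvement on '{c}' "
              f"(scored {collected[c]}, standard is {s['threshold']})")
```

The weighting scheme here is just one way to "apply defensible criteria"; the point is that the standards are written down before the judgment is made, which is what separates formal from informal evaluation.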
Evaluation’s Purposes • Typical Purpose • determine merit or worth of something; render judgments about the value of whatever is being evaluated • Alternative purposes • Serve political functions • Facilitate learning • Social betterment • Foster deliberative democracy
Why Evaluate Programs? • To gain insight about a program and its operations – to see where we are going and where we are coming from, and to find out what works and what doesn't • To improve practice – to modify or adapt practice to enhance the success of activities • To assess effects – to see how well we are meeting objectives and goals, how the program benefits the community, and to provide evidence of effectiveness • To build capacity – to increase funding, enhance skills, and strengthen accountability
What Can be Evaluated? • Direct service interventions • Community mobilization efforts • Research initiatives • Surveillance systems • Policy development activities • Outbreak investigations • Laboratory diagnostics • Communication campaigns • Infrastructure-building projects • Training and educational services • Administrative systems
When to Conduct Evaluation? [slide timeline running from Conception to Completion] • Planning a NEW program • Assessing a DEVELOPING program • Assessing a STABLE, MATURE program • Assessing a program after it has ENDED • The stage of program development influences the reason for program evaluation.
Two basic types of Evaluation • Formative (Process) • Provide information for program improvement, typically to judge the merit and worth of a part of a program • Audience is generally the people delivering the program or those close to it. • Typically qualitative in nature
Two basic types of Evaluation • Summative (Impact or Outcomes) • Summative evaluation is a process of identifying larger patterns and trends in performance and judging these summary statements against criteria to obtain performance ratings • Provide information for making decisions about program adoption, continuation, or expansion • Audience is generally potential consumers (students, teachers, employees, managers, etc.) • Mostly quantitative in nature
Three subtypes: Needs Assessment, Process, and Outcome Evaluations • Needs assessment • Does a problem/need exist? • Recommend ways to reduce the problem • Process/Monitoring • Description of program delivery • Outcome • Descriptions of changes in recipients or other secondary audiences based on program delivery
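As a concrete illustration of the outcome subtype, here is a minimal pre/post comparison sketch in Python. The scores are invented, and a real outcome evaluation would also want a comparison group and significance testing before attributing any change to the program.

```python
# Minimal sketch of an outcome evaluation: describing changes in
# recipients after program delivery. All scores are invented.
from statistics import mean

pre  = [52, 61, 48, 55, 67, 59]   # hypothetical scores before the program
post = [58, 66, 50, 63, 70, 64]   # the same recipients afterward

changes = [after - before for before, after in zip(pre, post)]
print(f"Mean change: {mean(changes):+.1f} points")
print(f"Recipients who improved: {sum(c > 0 for c in changes)} of {len(changes)}")
```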
So that’s it? • No way! • The way an evaluator looks at Truth vs. truth creates another dimension! • Each one of those study types can be conducted in a manner focusing on replication with a lot of data or focusing on deep understanding of very little data
Objectivist vs. Subjectivist Epistemology Objectivism • Requires an evaluation study to utilize data collection and analysis techniques that yield results that are reproducible and verifiable by other competent persons using the same techniques. Subjectivism • Bases its validity claims on “an appeal to experience rather than to the scientific method”
A Typology of Evaluation Studies [slide table with an example evaluation question] • Such a question could be answered by a survey of all employees (quantitative-objective) or by convening a panel of "experts" in the field (qualitative-subjective)
Evaluation Approaches • The Objective-Subjective Dimension creates a broader set of “approaches” • Any of the “types” of evaluation could fall within any of these “approaches” • It all depends on the methodologies employed (how you collect and analyze your data)
Evaluation Approaches [slide continuum running from Objectivism (mostly quantitative) to Subjectivism (mostly qualitative)] • Objectives-Oriented Approaches: focus on specifying goals and objectives and determining the extent to which they have been attained • Management-Oriented Approaches: the central concern is identifying and meeting the information needs of managerial decision makers • Consumer-Oriented Approaches: the central issue is developing evaluative information on "products" and accountability for consumers • Expertise-Oriented Approaches: depend on the direct application of professional expertise to judge the quality of whatever is being evaluated • Participant-Oriented Approaches: the involvement of participants (primarily stakeholders) is central in determining the values, criteria, needs, data, and conclusions for the evaluation