
Chapter 11 Evaluation and Policy Research



  1. Chapter 11 Evaluation and Policy Research

  2. Evaluation Research • Evaluation research is not a method of data collection, like survey research or experiments, nor is it a unique component of research designs, like sampling or measurement. • Instead, evaluation research is social research that is conducted for a distinctive purpose: to investigate social programs (e.g., substance abuse treatment programs, welfare programs, criminal justice programs, or employment and training programs).

  3. Evaluation research may use one or more methods (experiments, surveys, observation, intensive interviews, focus groups, etc.) to collect and analyze data.

  4. Evaluation Research, cont. • For each project, an evaluation researcher must select a research design and method of data collection that are useful for answering the particular research questions posed and appropriate for the particular program investigated. • The development of evaluation research as a major enterprise followed on the heels of the expansion of the federal government during the Great Depression and World War II.

  5. Evaluation Research, cont. • Large Depression-era government outlays for social programs stimulated interest in monitoring program output, and the military effort in World War II led to some of the necessary review and contracting procedures for sponsoring evaluation research. • In the 1960s, criminal justice researchers began to use experiments to test the value of different policies (Orr 1999:24).

  6. Evaluation Research, cont. • In the early 1980s, after this period of rapid growth, many evaluation research firms closed in tandem with the decline of many Great Society programs. • However, the demand for evaluation research continues, due in part to government requirements. • The growth of evaluation research is also reflected in the social science community: the American Evaluation Association, formed in 1986 by the merger of two previous associations, is a professional organization for evaluation researchers and publishes an evaluation research journal.

  7. Evaluation Basics • First, clients, customers, students, or some other persons or units—cases—enter the program as inputs (people functioning, in effect, as raw materials to be processed). • Resources and staff required by a program are also program inputs.

  8. Evaluation Basics, cont. • Next, some service or treatment is provided to the cases. This may be attendance in a class, assistance with a health problem, residence in new housing, or receipt of special cash benefits. • The program process may be simple or complicated, short or long, but it is designed to have some impact on the cases.

  9. Evaluation Basics, cont. • The direct product of the program’s service delivery process is its output. • Program outputs may include clients served, case managers trained, food parcels delivered, or arrests made. • The program outputs may be desirable in themselves, but they primarily serve to indicate that the program is operating.

  10. Evaluation Basics, cont. • Program outcomes indicate the impact of the program on the cases that have been processed. • Outcomes can range from improved test scores or higher rates of job retention to fewer criminal offenses and lower rates of poverty. • Any social program is likely to have multiple outcomes, some intended and some unintended, some positive and others that are viewed as negative.

  11. Evaluation Basics, cont. • Variation in both outputs and outcomes, in turn, influences the inputs to the program through a feedback process. • If not enough clients are being served, recruitment of new clients may increase. • If too many negative side effects result from a trial medication, the trials may be limited or terminated. • If a program does not appear to lead to improved outcomes, clients may go elsewhere.

  12. Evaluation Basics, cont. • Evaluation research is simply a systematic approach to feedback: It strengthens the feedback loop through credible analyses of program operations and outcomes. • Evaluation research also broadens this loop to include connections to parties outside of the program itself. • A funding agency or political authority may mandate the research, outside experts may be brought in to conduct the research, and the evaluation research findings may be released to the public, or at least to funders, in a formal report.

  13. Evaluation Basics, cont. • The evaluation process as a whole, and feedback in particular, can be understood only in relation to the interests and perspective of program stakeholders. • Stakeholders are those individuals and groups who have some basis of concern with the program. • They might be clients, staff, managers, funders, or the public. • Who the program stakeholders are and what role they play in the program evaluation will have tremendous consequences for the research.

  14. Evaluation Basics, cont. • Unlike explanatory social science research, evaluation research is not designed to test the implications of a social theory; the basic issue often is: What is the program’s impact? • Process evaluation, for instance, often uses qualitative methods, just as traditional social science does, but unlike exploratory research, the goal is not to induce a broad theoretical explanation for what is discovered.

  15. Evaluation Basics, cont. • Instead, the question is: How does the program do what it does? • Unlike social science researchers, evaluation researchers cannot design studies simply in accord with the highest scientific standards and the most important research questions; instead, it is program stakeholders who set the agenda. • But there is no sharp boundary between the two. • In their attempt to explain why the program has an impact, and whether the program is needed, evaluation researchers often bring social theories into their projects, but for immediately practical aims.

  16. Questions for Evaluation Research • Evaluation projects can focus on several questions related to the operation of social programs and the impact they have: • Is the program needed? • Can the program be evaluated? • How does the program operate? • What is the program’s impact? • How efficient is the program?

  17. Needs Assessment • Is a new program needed or an old one still required? Is there a need at all? • A needs assessment attempts to answer these questions with systematic, credible evidence. • Need may be assessed by social indicators such as the poverty rate or the level of home ownership, by interviews of such local experts as school board members or team captains, by surveys of populations in need, or by focus groups with community residents (Rossi & Freeman 1989).

  18. Evaluability Assessment • Evaluation research will be pointless if the program itself cannot be evaluated. • Yes, some type of study is always possible, but a study specifically to identify the effects of a particular program may not be possible within the available time and resources. • So researchers may conduct an evaluability assessment to learn this in advance, rather than expend time and effort on a fruitless project.

  19. Evaluability Assessment, cont. • Why might a social program not be evaluable? • Management only wants to have its superior performance confirmed and does not really care whether the program is having its intended effects. This is a very common problem. • Staff are so alienated from the agency that they don’t trust any attempt sponsored by management to check on their performance.

  20. Evaluability Assessment, cont. • Program personnel are just “helping people” or “putting in time” without any clear sense of what the program is trying to achieve. • The program is not clearly distinct from other services delivered by the agency and so can’t be evaluated by itself (Patton 2002:164).

  21. Evaluability Assessment, cont. • An evaluability assessment can help to solve the problems identified. • Discussion with program managers and staff can result in changes in program operations. • The evaluators may use the evaluability assessment to “sell” the evaluation to participants and sensitize them to the importance of clarifying their goals and objectives. • Knowledge about the program gleaned through the evaluability assessment can be used to refine evaluation plans.

  22. Evaluability Assessment, cont. • Because they are preliminary studies to “check things out,” evaluability assessments often rely on qualitative methods. • Program managers and key staff may be interviewed in depth, or program sponsors may be asked about the importance they attach to different goals. • These assessments also may have an “action research” aspect, because the researcher presents the findings to program managers and encourages changes in program operations.

  23. Process Evaluation • What actually happens in a social program? • Finding this out would be process analysis or process evaluation—research to investigate the process of service delivery. • Process evaluation is even more important when more complex programs are evaluated. Many social programs comprise multiple elements and are delivered over an extended period of time, often by different providers in different areas.

  24. Process Evaluation, cont. • The term formative evaluation may be used instead of process evaluation when the evaluation findings are used to help shape and refine the program. • Formative evaluation procedures that are incorporated into the initial development of the service program can specify the treatment process and lead to changes in recruitment procedures, program delivery, or measurement tools.

  25. Process Evaluation, cont. • Process evaluation can employ a wide range of indicators. • Program coverage can be monitored through program records, participant surveys, community surveys, or utilizers versus dropouts and ineligibles. • Service delivery can be monitored through service records completed by program staff, a management information system maintained by program administrators, or reports by program recipients (Rossi & Freeman, 1989).
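
As a minimal sketch of one such indicator, program coverage can be expressed as the share of the eligible population actually served. All figures below are hypothetical, invented only to make the arithmetic concrete.

```python
# Hypothetical coverage indicator computed from program records:
# coverage = participants actually served / estimated eligible population.
eligible_population = 1_800   # hypothetical estimate from a community survey
participants_served = 630     # hypothetical count from program records

coverage_rate = participants_served / eligible_population
print(f"Program coverage: {coverage_rate:.0%}")  # -> Program coverage: 35%
```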

  26. Process Evaluation, cont. • Qualitative methods are often a key component of process evaluation studies because they can be used to elucidate and understand internal program dynamics—even those that were not anticipated. • Qualitative researchers may develop detailed descriptions of how program participants engage with each other, how the program experience varies for different people, and how the program changes and evolves over time.

  27. Impact Analysis • The core questions of evaluation research are “Did the program work?” and “Did it have the intended result?” • This part of the research is variously called impact analysis, impact evaluation, or summative evaluation. • Formally speaking, impact analysis compares what happened after a program with what would have happened had there been no program.
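
To make this counterfactual logic concrete, here is a minimal sketch in Python with hypothetical outcome scores. The control group stands in for what would have happened without the program, so the difference in group means estimates the impact; this comparison is credible only if the groups are comparable, e.g., through random assignment.

```python
# Minimal impact-analysis sketch with hypothetical data.
from statistics import mean

treatment_scores = [72, 68, 75, 80, 77, 70]  # hypothetical participant outcomes
control_scores = [65, 70, 66, 71, 63, 69]    # hypothetical comparison-group outcomes

# The control-group mean approximates the counterfactual: what would
# have happened to similar cases had there been no program.
estimated_impact = mean(treatment_scores) - mean(control_scores)
print(f"Estimated program impact: {estimated_impact:+.1f} points")  # -> +6.3 points
```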

  28. Impact Analysis, cont. • Rigorous evaluations often lead to the conclusion that a program does not have the desired effect. • Depending on political support for the program and its goals, the result may be efforts to redesign the program (as with D.A.R.E.) or reduction or termination of program funding.

  29. Efficiency Analysis • Whatever the program’s benefits, are they sufficient to offset the program’s costs? • Are the taxpayers getting their money’s worth? • What resources are required by the program? • These efficiency questions can be the primary reason that funders require evaluation of the programs they fund. As a result, efficiency analysis, which compares program effects to costs, is often a necessary component of an evaluation research project.

  30. Efficiency analysis: A type of evaluation research that compares program costs to program effects. It can be either a cost-benefit analysis or a cost-effectiveness analysis.

  31. Cost-benefit analysis: A type of evaluation research that compares program costs to the economic value of program benefits.

  32. Efficiency Analysis, cont. • A cost-benefit analysis must identify the specific program costs and the procedures for estimating the economic value of specific program benefits. • This type of analysis also requires that the analyst identify whose perspective will be used in order to determine what can be considered a benefit rather than a cost. • Program clients will have a different perspective on these issues than do taxpayers or program staff.

  33. Cost-effectiveness analysis: A type of evaluation research that compares program costs to actual program outcomes.

  34. Efficiency Analysis, cont. • A cost-effectiveness analysis focuses attention directly on the program’s outcomes rather than on the economic value of those outcomes. • In a cost-effectiveness analysis, the specific costs of the program are compared to the program’s outcomes, such as the number of jobs obtained, the extent of improvement in reading scores, or the degree of decline in crimes committed.
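
A worked sketch with hypothetical figures may help separate the two approaches: cost-benefit analysis monetizes outcomes, while cost-effectiveness analysis reports the cost per unit of outcome. The job-training program, its cost, and the assumed dollar value per job are all invented for illustration.

```python
# Hypothetical efficiency analysis for a job-training program.
program_cost = 500_000.0   # total program cost (dollars)
jobs_obtained = 120        # program outcome, in its natural units
benefit_per_job = 6_000.0  # assumed economic value of each job (dollars)

# Cost-benefit analysis: compare costs to the monetized value of benefits.
total_benefit = jobs_obtained * benefit_per_job
net_benefit = total_benefit - program_cost
benefit_cost_ratio = total_benefit / program_cost

# Cost-effectiveness analysis: cost per unit of outcome, no monetizing needed.
cost_per_job = program_cost / jobs_obtained

print(f"Net benefit: ${net_benefit:,.0f}")              # -> $220,000
print(f"Benefit-cost ratio: {benefit_cost_ratio:.2f}")  # -> 1.44
print(f"Cost per job obtained: ${cost_per_job:,.0f}")   # -> $4,167
```

Note how the cost-effectiveness result is usable even when, as is often the case, no defensible dollar value can be assigned to an outcome.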

  35. Design Decisions • Once we have decided on, or identified, the goal or focus for a program evaluation, there are still important decisions to be made about how to design the specific evaluation project.

  36. Design Decisions, cont. • The most important decisions are the following: • Do we care how the program gets results? • Whose goals matter most: the researcher’s or the stakeholders’? • Which methods provide the best answers: qualitative, quantitative, or both? • How complicated should the findings be?

  37. Quantitative or Qualitative Methods • Evaluation research that attempts to identify the effects of a social program typically is quantitative: • Did the response times of emergency personnel tend to decrease? • Did the students’ test scores increase? • Did housing retention improve?

  38. Quantitative or Qualitative Methods, cont. • It’s fair to say that when there’s an interest in comparing outcomes between an experimental and a control group, or tracking change over time in a systematic manner, quantitative methods are favored. • But qualitative methods can add much to quantitative evaluation research studies, including more depth, detail, nuance, and exemplary case studies. • Perhaps the greatest contribution qualitative methods can make in many evaluation studies is investigating the program process itself.

  39. Quantitative or Qualitative Methods, cont. • Although it is possible to track service delivery with quantitative measures like staff contact and frequency of complaints, finding out what is happening to clients and how clients experience the program can often best be accomplished by observing program activities and interviewing staff and clients intensively. • Another good reason for using qualitative methods in evaluation research is the importance of learning how different individuals react to the program. • Qualitative methods can also help reveal how social programs actually operate.

  40. Simple or Complex Outcomes • Does the program have only one outcome? Unlikely. • The decision to focus on one outcome rather than another, on a single outcome or on several, can have enormous implications. • In spite of the additional difficulties introduced by measuring multiple outcomes, most evaluation researchers attempt to do so. The result usually is a much more realistic, and richer, understanding of program impact.

  41. Ethics in Evaluation • Evaluation research can make a difference in people’s lives while the research is being conducted, as well as after the results are reported. • Job opportunities, welfare requirements, housing options, treatment for substance abuse, and training programs are each potentially important benefits, and an evaluation research project can change both the type and availability of such benefits. • This direct impact on research participants, and potentially their families, heightens the attention that evaluation researchers must give to human subjects’ concerns.

  42. Ethics in Evaluation, cont. • It is when program impact is the focus that human subjects considerations multiply. • What about assigning persons randomly to receive some social program or benefit?

  43. Ethics in Evaluation, cont. • One justification given by evaluation researchers has to do with the scarcity of these resources. • If not everyone in the population who is eligible for a program can receive it, due to resource limitations, what could be a fairer way to distribute the program benefits than through a lottery? • Random assignment also seems like a reasonable way to allocate potential program benefits when a new program is being tested with only some members of the target recipient population.
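
As an illustration of the lottery rationale, random allocation of scarce slots might be sketched as follows; the applicant names and the program capacity are hypothetical.

```python
# Sketch of a fair lottery for scarce program slots: every eligible
# applicant has the same chance of receiving the benefit.
import random

eligible = [f"applicant_{i}" for i in range(1, 201)]  # 200 hypothetical applicants
slots = 50                                            # hypothetical program capacity

random.seed(42)  # fixed seed only to make the illustration reproducible
recipients = set(random.sample(eligible, slots))
comparison_group = [a for a in eligible if a not in recipients]

# Under scarcity, those not drawn would not have been served anyway;
# they can serve as the control group for an impact analysis.
print(f"{len(recipients)} recipients, {len(comparison_group)} in comparison group")
```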

  44. Ethics in Evaluation, cont. • However, when an ongoing entitlement program is being evaluated and experimental subjects would normally be eligible for program participation, it may not be ethical simply to bar some potential participants from the programs. • Instead, evaluation researchers may test alternative treatments or provide some alternative benefit while the treatment is being denied.

  45. Ethics in Evaluation, cont. • There are many other ethical challenges in evaluation research: • How can confidentiality be preserved when the data are owned by a government agency or are subject to discovery in a legal proceeding? • Is it legitimate for research decisions to be shaped by political considerations?

  46. Ethics in Evaluation, cont. • Must evaluation findings be shared with stakeholders rather than only with policy makers? • Will the results actually be used?

  47. Ethics in Evaluation, cont. • The problem of maintaining subject confidentiality is particularly thorny, because researchers in general are not legally protected from requirements to provide evidence requested in legal proceedings, particularly through the process known as “discovery.” • However, it is important to be aware that several federal statutes have been passed specifically to protect research data about vulnerable populations from legal disclosure requirements.

  48. Ethics in Evaluation, cont. • Ethical concerns must also be given special attention when evaluation research projects involve members of vulnerable populations as subjects. • In order to conduct research on children, parental consent usually is required before the child can be approached directly about the research. • Adding this requirement to an evaluation research project can dramatically reduce participation, because many parents simply do not bother to respond to mailed consent forms. • Similar consent issues arise with other vulnerable populations, such as mentally disabled persons.

  49. Conclusions • Hopes for evaluation research are high: Society could benefit from the development of programs that work well, accomplish their policy goals, and serve people who genuinely need them.

  50. Conclusions, cont. • Because social programs and the people who use them are complex, evaluation research designs can easily miss important outcomes or aspects of the program process. • Because the many program stakeholders all have an interest in particular results from the evaluation, researchers can be subject to an unusual level of cross-pressures and demands. • Because the need to include program stakeholders in research decisions may undermine adherence to scientific standards, research designs can be weakened.
