
Evaluation Research


Presentation Transcript


  1. Evaluation Research Pierre-Auguste Renoir: La Grenouillère, 1869

  2. Evaluation Research • Introduction • Evaluation research refers to a research purpose rather than to a specific method. • Evaluation research can include many different types of methods aimed at understanding the effectiveness of a social program that is intended to bring about desired change. • This form of research helps sociologists complete the tasks of identifying social problems and assessing the efficacy and consequences of social change programs.

  3. Evaluation Research • Evaluation research includes: • Needs assessment studies. • Determine the existence, extent, and awareness of social problems. • Cost-benefit studies. • Assess the extent to which the outcomes of social change programs justify their costs (a minimal calculation sketch follows slide 4). • Monitoring studies. • Provide information about ongoing social problems.

  4. Evaluation Research • Evaluation research includes: • Program evaluation (outcome assessment). • Determine the extent to which social programs are reducing social problems.
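The cost-benefit studies listed in slide 3 come down to a simple comparison once program outcomes have been assigned a monetary value, which is itself the hard part. A minimal sketch in Python, using entirely hypothetical figures:

```python
# Minimal sketch of a cost-benefit calculation for a social program.
# All figures are hypothetical and for illustration only; in practice,
# assigning a dollar value to outcomes is the difficult step.

def benefit_cost_ratio(total_benefits: float, total_costs: float) -> float:
    """Return the ratio of program benefits to program costs.

    A ratio above 1.0 suggests the measured outcomes justify the costs;
    a ratio below 1.0 suggests they do not.
    """
    if total_costs <= 0:
        raise ValueError("total_costs must be positive")
    return total_benefits / total_costs

# Hypothetical job-training program: estimated dollar value of reduced
# public-assistance payments and added tax revenue, against the budget.
benefits = 1_250_000.0
costs = 900_000.0
print(f"Benefit-cost ratio: {benefit_cost_ratio(benefits, costs):.2f}")  # 1.39
```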

  5. Formulating the Problem • Issues of Measurement • One cannot measure efficacy unless one knows specifically which outcomes a social program or policy is expected to produce, and within what time frame. • Sometimes, goals are not initially well-defined, change over time, or broaden in scope over time. • Sometimes, intended outcomes require a long time to materialize, but funding guidelines require early evaluation of programs or policies.

  6. Formulating the Problem • Specifying Outcomes • The response variable, or outcome, must be clearly defined. • Sometimes, outcomes are defined by the guidelines of the funding agency. • Ideally, definitions of outcomes are specified prior to the implementation of the program or policy being evaluated. • But, things change….

  7. Formulating the Problem • Measuring Experimental Contexts • Obviously, to assess the efficacy of a program or policy, one needs to know and be able to measure its characteristics. • Sometimes, characteristics are easy to identify (e.g., hours of contact, labor hours, funding, time, guidelines for behavior). • In some cases, characteristics are more difficult to identify (e.g., quality of contact, expertise of labor, timing of funding, flexibility in guidelines).

  8. Formulating the Problem • Specifying Interventions • Evaluation research often does not enjoy the level of control available in a laboratory experiment. • Thus, specifying the independent variables, the “interventions,” is not necessarily a straightforward task. • People participate differentially in programs. • People come and go within programs. • Program delivery varies over time and space.

  9. Formulating the Problem • Specifying the Population • Specifying the participants in a program is not always straightforward. • People vary in the characteristics they bring into a social change program. • People vary in the extent to which they have adopted and adapted to the desired changes of the program.

  10. Formulating the Problem • New versus Existing Measures • Specifying new or existing measures affects the validity and reliability of the evaluation. • The use of new or existing measures also can affect the extent of acceptance of an evaluation by funding agencies, the public, and the community of scholars. • Standardized measures, often specified by funding agencies, can have advantages and disadvantages for evaluation of programs and policies.
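One place the new-versus-existing choice becomes concrete is reliability checking: an existing standardized measure usually has published reliability figures, while a newly constructed multi-item measure must be checked by the evaluator. Below is a minimal sketch of Cronbach's alpha, a common internal-consistency statistic; the scale items and responses are hypothetical:

```python
# Minimal sketch of Cronbach's alpha for a newly constructed multi-item
# measure. The three items and five respondents below are hypothetical.
from statistics import pvariance

def cronbach_alpha(items: list[list[float]]) -> float:
    """items: one list of respondent scores per questionnaire item."""
    k = len(items)
    item_variances = sum(pvariance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    return (k / (k - 1)) * (1 - item_variances / pvariance(totals))

responses = [
    [4, 3, 5, 2, 4],  # item 1, one score per respondent
    [4, 2, 5, 3, 4],  # item 2
    [3, 3, 4, 2, 5],  # item 3
]
# Values near or above the conventional 0.70 threshold are usually read
# as acceptable internal consistency.
print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")  # 0.87
```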

  11. Formulating the Problem • Operationalizing Success and Failure • Specifying what constitutes success or failure can be challenging. • How much change is success? • What types of change are success? • Are unanticipated changes success? • When should success happen, immediately or over a long period of time? • Which measures indicate success? • What happens when some measures indicate success and others indicate failure?
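Because different measures can point in different directions, evaluators often pre-specify a decision rule before the program starts. The sketch below shows one simple way to operationalize success across several outcome measures; the measure names and thresholds are hypothetical:

```python
# Minimal sketch of a pre-specified success rule across several outcome
# measures. Measure names and thresholds are hypothetical; ideally they
# would be fixed before the program begins.

def evaluate_outcomes(observed: dict[str, float],
                      thresholds: dict[str, float]) -> dict[str, bool]:
    """Compare each observed measure against its pre-specified threshold."""
    return {name: observed[name] >= target for name, target in thresholds.items()}

observed = {"employment_rate": 0.62, "recidivism_drop": 0.08, "wage_gain": 0.15}
targets = {"employment_rate": 0.60, "recidivism_drop": 0.10, "wage_gain": 0.10}

results = evaluate_outcomes(observed, targets)
print(results)  # mixed verdicts: two successes, one failure
print("Success on all measures:", all(results.values()))  # False
```

A mixed result like this one is exactly the situation the last bullet describes: the decision rule (all measures? a majority? a weighted score?) has to be chosen, and defended, in advance.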

  12. Types of Evaluation Research Designs • Experimental Designs • Typically, evaluation research involves assessments of programs and policies in field (i.e., natural) experiments. • One does not have the level of control available within the laboratory. • Unless evaluation is planned within the context of social change programs and policies, one might not be able to conduct a classical experiment.

  13. Types of Evaluation Research Designs • Quasi-Experimental Designs • Subjects are not randomly assigned to experimental and control conditions. • Assessments do not necessarily occur at both Time 1 and Time 2 (i.e., pretest and posttest for all subjects).

  14. Types of Evaluation Research Designs • Quasi-Experimental Designs (Continued) • Time-Series Designs: If time-series evaluations do not involve classical experiments, it can be challenging to infer an effect of the treatment. • Consider this situation: • An instructor introduces the use of “controversial discussion topics” midway through the semester, and then observes the level of classroom participation. • Which of the following patterns of classroom participation support a treatment effect?

  15. Types of Evaluation Research Designs • Quasi-Experimental Designs (Continued) • Pattern One: • Classroom participation is low at the beginning of the semester, but steadily increases at a constant rate throughout the semester. • Pattern Two: • Classroom participation has a random pattern of low and high levels of interaction throughout the semester.

  16. Types of Evaluation Research Designs • Quasi-Experimental Designs (Continued) • Pattern Three: • Classroom participation is low and stable during the first half of the semester, then increases sharply when the controversial discussion topics are introduced and remains at the higher level.

  17. Types of Evaluation Research Designs • Quasi-Experimental Designs (Continued) • Time-Series Designs • In observing Pattern 1, the researcher might conclude that participation increases throughout the semester, regardless of the introduction of a treatment. • In observing Pattern 2, the researcher might conclude that participation is erratic and not related to the introduction of a treatment. • Pattern 3 supports a treatment effect: participation is stable before the intervention and shifts sharply upward at the point where the treatment is introduced.
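The three patterns can be made concrete with a small simulation. The sketch below generates hypothetical weekly participation counts for a 14-week semester, with the treatment introduced at week 8, and compares pre- and post-intervention means:

```python
# Minimal sketch of the three participation patterns discussed above.
# Weekly counts are hypothetical; the treatment arrives at week 8 of 14.
import random

random.seed(1)
weeks = range(1, 15)
INTERVENTION_WEEK = 8

pattern_1 = [2 * w for w in weeks]                    # steady rise all semester
pattern_2 = [random.randint(5, 25) for _ in weeks]    # erratic, no trend
pattern_3 = [6 if w < INTERVENTION_WEEK else 20 for w in weeks]  # jump at treatment

def pre_post_means(series):
    """Mean participation before and after the intervention week."""
    cut = INTERVENTION_WEEK - 1
    pre, post = series[:cut], series[cut:]
    return sum(pre) / len(pre), sum(post) / len(post)

for name, series in [("Pattern 1", pattern_1),
                     ("Pattern 2", pattern_2),
                     ("Pattern 3", pattern_3)]:
    pre, post = pre_post_means(series)
    print(f"{name}: pre-mean = {pre:.1f}, post-mean = {post:.1f}")
```

Note that a naive pre/post comparison of means flags Pattern 1 just as strongly as Pattern 3; only an inspection of the trend, looking for a discontinuity at the intervention point itself, separates a pre-existing trend from a treatment effect.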

  18. Types of Evaluation Research Designs • Quasi-Experimental Designs (Continued) • Nonequivalent Control Groups • Researchers seek naturally-occurring control groups with similar characteristics to the experimental group. • Multiple Time-Series Designs • Comparison of trends across naturally-occurring groups, wherein one group experiences the treatment and the others do not.
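A multiple time-series comparison can be sketched as a simple difference-in-differences: the treated group's pre/post change is compared against the change in a nonequivalent comparison group over the same period. The figures below are hypothetical:

```python
# Minimal difference-in-differences sketch for a multiple time-series
# design. Weekly participation counts are hypothetical; the treatment is
# introduced in the treated group only, after week 7.

def mean(xs):
    return sum(xs) / len(xs)

treated_pre, treated_post = [6, 7, 5, 6, 7, 6, 6], [18, 20, 19, 21, 20, 19, 22]
control_pre, control_post = [6, 6, 7, 5, 6, 7, 6], [7, 8, 6, 7, 8, 7, 8]

treated_change = mean(treated_post) - mean(treated_pre)
control_change = mean(control_post) - mean(control_pre)

# The comparison group's change estimates what would have happened anyway.
print(f"Treated change:   {treated_change:.1f}")                   # ~13.7
print(f"Control change:   {control_change:.1f}")                   # ~1.1
print(f"Estimated effect: {treated_change - control_change:.1f}")  # ~12.6
```

The subtraction is the point of the design: any semester-wide drift shared by both groups cancels out, and only the change unique to the treated group remains as the estimated effect.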

  19. Types of Evaluation Research Designs • Qualitative Evaluations • Qualitative methods can be as effective as quantitative methods in evaluating programs and policies. • The most effective evaluation research often combines quantitative and qualitative methods.

  20. The Social Context • Logistical Problems • Evaluation research implies an assessment of employee performance. • Employees of organizations and agencies being evaluated, therefore, often are reluctant to reveal problems with a program or policy. • Motivating personnel to participate fully in an evaluation can be a challenge. • Administrators, in particular, might feel threatened by evaluation research. • Administrators might hinder the quality of the evaluation research.

  21. The Social Context • Ethical Issues • Evaluation research implies becoming involved in the programs being conducted. Hence, the evaluator might disturb the normal functioning of the program. • The results of an evaluation sometimes reveal a need for immediate change to protect human subjects. But the aims of the evaluation argue for nonintervention to best complete the evaluation.

  22. The Social Context • Use of Research Results • Evaluation research sometimes is funded with the goal of applauding or discrediting a program or policy. • When the purpose is biased, the research itself is more likely to be biased. • When the results of evaluation research do not support the sponsor's goals, they might be criticized or suppressed.

  23. Social Indicators Research • Social Indicators • Social indicators are aggregated statistics that reflect various forms of societal well-being. • Consumer price index • Poverty levels • Levels of illiteracy • Infant mortality statistics • Divorce rates • Although such indicators provide only rough approximations of societal health, they are part of common practice.

  24. Social Indicators Research • Computer Simulation • High-speed, large-capacity computers allow for complex simulations using many indicators of societal conditions to forecast trends or predict the outcomes of suggested programs or policies. • Simulations are restricted by knowledge of current technologies and conditions, which might change dramatically over the course of the simulation period.
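In spirit, such a simulation projects indicators forward under competing assumptions. The sketch below forecasts a single hypothetical indicator (a poverty rate) under a baseline trend and under an assumed program effect; real simulations track many interacting indicators, and the assumed rates of change are exactly the part that may not hold over the simulation period:

```python
# Minimal sketch of an indicator-forecasting simulation. The starting
# rate, annual changes, and program effect are all hypothetical.

def project(rate: float, annual_change: float, years: int) -> list[float]:
    """Apply a constant proportional change to an indicator each year."""
    series = [rate]
    for _ in range(years):
        rate = max(0.0, rate * (1 + annual_change))
        series.append(rate)
    return series

baseline = project(rate=0.14, annual_change=0.01, years=10)       # slow worsening
with_program = project(rate=0.14, annual_change=-0.02, years=10)  # assumed effect

for year, (b, p) in enumerate(zip(baseline, with_program)):
    print(f"Year {year:2d}: baseline {b:.3f} vs. with program {p:.3f}")
```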
