
Evaluation Theory for Program Success: Paradigms and Practices

Learn about evaluation paradigms, theories, and their application in program assessment. Understand positivist, interpretive, and critical-emancipatory approaches. Develop logic models and evaluation plans to address complex problems.



Presentation Transcript


  1. Matching Evaluation to the Theory Guiding Your Program • Linda K. Larkey, PhD • Scottsdale Healthcare Professor of Biobehavioral Oncology Research • Arizona State University College of Nursing and Health Innovation • Linda.larkey@asu.edu

  2. overview • evaluation paradigms • what is an “evaluation theory”? • what does a theory buy you? • how do you build theoretical constructs into evaluation? • practice

  3. positivist approach • assumes there are “objective”, observable and measurable aspects of a program • preference for predominantly quantitative evidence • needs assessment • assessment of program theory • assessment of program process • impact assessment • efficiency assessment (Rossi, Lipsey and Freeman, 2004)

  4. three basic paradigms • positivist • interpretive • critical-emancipatory • you don’t have to join a club; feel free to blend • Potter, C. (2006). Program evaluation. In M. Terre Blanche, K. Durrheim, & D. Painter (Eds.), Research in practice: Applied methods for the social sciences (2nd ed.)

  5. interpretive approaches • hold that it is essential for the evaluator to develop an understanding of the perspective, experiences, needs, and expectations of all stakeholders • key words: “understanding,” “meaning,” “rich description” • this understanding is considered crucial before one is able to make judgments about the merit or value of a program • the evaluator’s contact with the program over time is important • observation • interviews • focus groups

  6. critical-emancipatory approaches • based on action research, with social transformation as the purpose • particularly useful in developing countries or with underserved populations • concerns: empowering populations to create change from within, providing “voice”

  7. so what? • it matters whether you think your goal is to achieve some measurable state or reality… • or, if you think that the process along the way and people’s opinions and perceptions are most important • or, if you see the program’s goals as empowering, raising consciousness, giving voice

  8. What is the result expected? Typical logic: • Process: We will reach X number of female, single heads of household with food stamp information by…. • Short-term outcome: We will distribute $X within Maricopa County by… • Longer-term outcome: We will increase the use of food stamps by $X • Impact: We will reduce food shortage by X%
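
The logic chain above can also be written down in a structured form. The following is a minimal sketch (Python; the LogicStep class and its field names are illustrative assumptions, not part of the presentation) showing how each level of the food stamp example pairs an expected result with something measurable:

    from dataclasses import dataclass

    @dataclass
    class LogicStep:
        level: str      # process, short-term outcome, long-term outcome, or impact
        result: str     # what we expect to happen
        indicator: str  # what we would count or measure
        target: str     # how much, by when

    # the food stamp example from the slide, one step per level of the logic chain
    food_stamp_logic = [
        LogicStep("process",
                  "reach female, single heads of household with food stamp information",
                  "number of women reached", "X women by the target date"),
        LogicStep("short-term outcome",
                  "distribute food stamp dollars within Maricopa County",
                  "dollars distributed", "$X by the target date"),
        LogicStep("long-term outcome",
                  "increase the use of food stamps",
                  "change in food stamp use", "+$X"),
        LogicStep("impact",
                  "reduce food shortage",
                  "food shortage rate", "-X%"),
    ]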

  9. what “causes” the desired end result? how would we… • reduce food instability in our state • reduce flu epidemics • increase the number of servings of fruits and vegetables (F & V) eaten by school-age children in school lunch • increase cancer screening rates • what else?

  10. Pick one “impact” level goal • What causes it? (Needs assessment, what is the problem, what contributes?) • What do you think might fix it? One thing or many? • How would you know if it were fixed?

  11. That is your Evaluation Theory • Once you name the causes, purported improvement strategies, and expected outcome, you have a LOGIC MODEL • Your construal of the problem (antecedents) is YOUR THEORY • This theory, PLUS your assumptions about what is important to change to get a result, is your EVALUATION THEORY

  12. Simpler “Logic Model”: ATM

  13. Example problem map: low rates of colorectal cancer screening • system-level antecedents: lack of funding for the uninsured (screening/treatment); inadequate lobbying (for policy and/or funding); patients can’t access or are discouraged from accessing services; existing health care plans inadequate; facilities lacking; long waits for services; inappropriate utilization; not enough trained providers • patient-level antecedents: patients not aware of the need/importance/value of screening; new and complicated message (not your average bumper sticker); patients not demanding screening; cultural factors; uncomfortable topic (distaste for “things scatological”) • provider-level antecedents: providers not making referrals; conflicting research evidence on whether screening saves lives, or on the best method; providers not clear on guidelines (who/what/when/risk); mixed/unclear financial impact (will referred patients return? reimbursement?); lack of office reminder and/or flagging systems; competing priorities during the patient/provider appointment: too little time

  14. Such a complex map of problems… what’s the solution?

  15. building your own evaluation plan… • share an example of one of your programs • map the “antecedent conditions”: what are the causes? (theory of the problem) • so, if those are the causes, what could fix this problem? (theory of solution) • what is the final outcome expected? • then… • measure what along the way? at the end? • things you can count? meaning/perception? • what does the community say? remember the “critical” view
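
One way to hold these questions together is to keep the theory of the problem, the theory of the solution, and the planned measures in a single structure. The sketch below (Python; every name and entry is a hypothetical placeholder, loosely echoing the screening example on slide 13 rather than any actual program) illustrates the mapping:

    # hypothetical evaluation plan, organized as the slide suggests
    evaluation_plan = {
        "theory_of_problem": [             # antecedent conditions: what causes it?
            "patients not aware of the value of screening",
            "providers lack office reminder / flagging systems",
        ],
        "theory_of_solution": [            # if those are the causes, what could fix it?
            "community education sessions",
            "chart-flagging system in provider offices",
        ],
        "expected_outcome": "increased colorectal cancer screening rates",
        "process_measures": [              # things you can count along the way
            "education sessions delivered",
            "charts flagged per month",
        ],
        "outcome_measures": [              # measured at the end
            "change in screening rate",
        ],
        "interpretive_and_critical_measures": [  # meaning, perception, community voice
            "stakeholder interviews and focus groups",
            "community members' views on whether the program creates change from within",
        ],
    }

The process, outcome, and interpretive/critical entries correspond to the slide’s “things you can count,” “meaning/perception,” and “what does the community say,” keeping all three paradigms visible in one plan.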

  16. holding your vision while negotiating the terrain…
