SOCI 4466 PROGRAM & POLICY EVALUATION
LECTURE #8
1. Evaluation projects
2. Take-home final
3. Questions?
Strategies for Impact Assessment
• impact: the net effects of a program - the effects that can be uniquely attributed to the program intervention, controlling for the confounding effects of other variables/sources of change
• impact assessments can be carried out at virtually any stage of the program - piloting, program design, implementation, monitoring, outcome evaluation
• all impact assessments are comparative - comparing the net effect on those who got the program against some other group: the same targets at an earlier point, a control group, those in an alternative program, etc.
• the strongest approach to assessing impact is the randomized experimental model (R = random assignment, O = observation/measurement, X = the intervention):

Exp - R   O   X   O
Con - R   O       O
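To see how this design isolates the net effect, here is a minimal simulation in Python (not from the lecture; the sample size, effect sizes, and noise levels are all invented for illustration). The control group's pre-post change absorbs endogenous change, so the difference in gains recovers the program effect:

import numpy as np

rng = np.random.default_rng(42)
n = 500  # targets per group (hypothetical)

# Random assignment makes the two groups comparable at pre-test (the first O).
pre_exp = rng.normal(50, 10, n)
pre_con = rng.normal(50, 10, n)

natural_change = 2.0   # endogenous change affecting everyone (assumed)
program_effect = 5.0   # the "true" net effect we hope to recover (assumed)

post_exp = pre_exp + natural_change + program_effect + rng.normal(0, 5, n)
post_con = pre_con + natural_change + rng.normal(0, 5, n)

# The control group's gain removes endogenous change from the comparison.
net_effect = (post_exp - pre_exp).mean() - (post_con - pre_con).mean()
print(f"estimated net effect: {net_effect:.2f}")  # close to 5.0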
pre-requisites for assessing impacts:
1. clearly defined goals and objectives that can be operationalized
2. proper implementation of the intervention
• note the considerable difficulties evaluators face in ensuring these two criteria are met
the three criteria of causality:
1. correlation
2. temporal asymmetry
3. non-spuriousness
• note the difficulty in demonstrating that a program intervention is the "cause" of a specific outcome:
- the issue of causation versus correlation
- bias in selection of targets
- "history"
- intervention (Hawthorne) effects
- poor measurement
Campbell versus Cronbach: perfect versus good-enough impact assessments
- lack of experimental control
- inability to randomize
- "history"
- time/money constraints
- balancing the importance and impact of the program against practicality

gross versus net outcomes:

Gross outcome = Effects of intervention (net effect)
              + Effects of other processes (extraneous factors)
              + Design effects
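A hypothetical arithmetic illustration of the decomposition (all numbers invented):

# Components an evaluator cannot observe separately:
net_effect = 5.0      # change uniquely attributable to the intervention
extraneous = 2.0      # endogenous change, secular drift, history, etc.
design_effects = 1.0  # measurement error, sampling artifacts, etc.

# What the evaluator actually observes is only their sum:
gross_outcome = net_effect + extraneous + design_effects
print(gross_outcome)  # 8.0 - a raw change that overstates the program's true effect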
extraneous confounding factors:
- uncontrolled selection (selection bias) - both agency and self-selection (see the sketch after this list)
- "deselection" processes - the drop-out problem
- endogenous change (naturally occurring change processes, like healing, learning)
- secular drift
- interfering effects (history)
- maturational and developmental effects
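A minimal sketch of how self-selection alone can manufacture an apparent program effect (the scenario and numbers are invented; the program here truly does nothing):

import numpy as np

rng = np.random.default_rng(7)
n = 10_000

motivation = rng.normal(0, 1, n)                  # unobserved confounder
joins = motivation + rng.normal(0, 1, n) > 0.5    # self-selection into the program
outcome = 2.0 * motivation + rng.normal(0, 1, n)  # driven by motivation only

naive_gap = outcome[joins].mean() - outcome[~joins].mean()
print(f"naive participant vs. non-participant gap: {naive_gap:.2f}")  # well above 0

The gap reflects who joined, not what the program did - exactly the comparison a randomized or statistically-equated design is meant to rule out.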
design effects:
- stochastic effects: chance fluctuations - the difference between real change and random change
- the importance of sampling here, allowing the use of inferential statistics
- statistical significance and statistical power:
  alpha: Type I error (false positive)
  beta: Type II error (false negative)
- significance here of cell sizes and sample size (illustrated after this list)
- note differential concern with Type I or II error depending on program type
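To make the interplay of alpha, power (1 - beta), and sample size concrete, a sketch using statsmodels' power routines (the effect size and thresholds are chosen only for illustration):

from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Cases per group needed to detect a small standardized effect (Cohen's d = 0.2)
# with alpha = .05 (Type I risk) and power = .80 (i.e., beta = .20).
n_needed = analysis.solve_power(effect_size=0.2, alpha=0.05, power=0.8)
print(round(n_needed))  # roughly 394 per group

# With only 50 cases per group the same test is badly underpowered,
# so the Type II (false negative) risk becomes very large.
low_power = analysis.solve_power(effect_size=0.2, alpha=0.05, nobs1=50)
print(f"power with n = 50 per group: {low_power:.2f}")  # roughly 0.17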
design effects (continued):
- measurement reliability (qualitative/quantitative)
- measurement validity (domain, internal consistency, predictive, concurrent)
- experimenter/evaluator effects
- missing data
- sampling biases
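Internal consistency is commonly summarized with Cronbach's alpha; a minimal implementation with fabricated scale data (the formula is standard, but the data are not from any real instrument):

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents-by-items matrix of scale scores."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Fabricated 5-item scale, 200 respondents, items sharing one underlying trait.
rng = np.random.default_rng(1)
trait = rng.normal(0, 1, (200, 1))
items = trait + rng.normal(0, 1, (200, 5))
print(f"alpha = {cronbach_alpha(items):.2f}")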
choice of outcome measures:
- back to the measurement model, and reliability and validity
- must be feasible to employ, responsive, exhaustive, mutually exclusive and, ideally, quantitative
- multiple measures best
- direct versus indirect
isolating the effects of extraneous factors:
- randomized controls
- regression-discontinuity controls (pre-determined selection variables)
- matched constructed controls
- statistically-equated controls (see the sketch after this list)
- reflexive controls (pre-post)
- repeated-measures reflexive controls (e.g. panel)
- time-series reflexive controls
- generic controls (established norms, standards)
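One common reading of "statistically-equated controls" is regression adjustment: model the outcome on program participation plus the measured differences between groups. A sketch with fabricated data (the variable names and effect sizes are invented):

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 2_000

prior_need = rng.normal(0, 1, n)                   # measured selection variable
in_program = (prior_need + rng.normal(0, 1, n) > 0).astype(float)
outcome = 3.0 + 1.5 * in_program - 2.0 * prior_need + rng.normal(0, 1, n)

# Equate the groups statistically by controlling for the measured covariate.
X = sm.add_constant(np.column_stack([in_program, prior_need]))
fit = sm.OLS(outcome, X).fit()
print(fit.params[1])  # coefficient on in_program: close to the true 1.5

A naive mean comparison here would be biased downward, because needier targets both join the program more often and fare worse on the outcome.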
Full versus partial-coverage programs:
- if a program is delivered to virtually all targets (full coverage), it is more difficult to find a design to assess impact (e.g. government-funded pension plans; OHIP)
- partial-coverage programs are not delivered to all targets, so there is opportunity to identify reasonable control/comparison groups
judgmental impact assessments:
- expert or "connoisseurial" assessments
- administrator assessments
- participants' judgments
• the use of qualitative versus quantitative data
inference validity issues:
- reproducibility of the evaluation design and results
- generalizability
- pooling evaluations - meta-analysis (see the sketch below)
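A minimal sketch of pooling evaluations by fixed-effect, inverse-variance weighting (the five study estimates below are fabricated):

import numpy as np

# Fabricated effect estimates and standard errors from five evaluations.
effects = np.array([0.30, 0.10, 0.25, 0.05, 0.40])
ses = np.array([0.12, 0.08, 0.15, 0.10, 0.20])

weights = 1 / ses**2                                # precision weights
pooled = (weights * effects).sum() / weights.sum()  # inverse-variance average
pooled_se = np.sqrt(1 / weights.sum())
print(f"pooled effect = {pooled:.2f} (SE = {pooled_se:.2f})")

More precise studies pull the pooled estimate toward themselves; a real meta-analysis would also examine heterogeneity across studies before trusting a fixed-effect pool.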