Evaluation Planning II: Setting Boundaries and Analyzing the Evaluation Context
Dr. Suzan Ayers, Western Michigan University (courtesy of Dr. Mary Schutten)

Overview: Learn how to identify evaluation audiences, set boundaries on what is evaluated, analyze evaluation resources (including personnel and technology), and analyze the political context impacting evaluations; topics include audience identification, program description, characterizing the evaluand, and developing a program theory.
Four considerations • Identifying evaluation audiences • Setting boundaries on whatever is evaluated • Analyzing evaluation resources • Analyzing the political context
1. Audience Identification • An evaluation is adequate only if it collects information from, and reports to, all legitimate evaluation audiences • Primary audience: the sponsor and client • Secondary audiences: depend on how the evaluator defines the constituents • A common error is limiting the evaluation to too narrow an audience • See Figure 11.1 (p. 202) • Return to the list of audiences periodically • Knowing who will use the results, and how, is key to outlining the study
Potential Secondary Audiences • Policy makers • Managers • Program funders • Representatives of program employees • Community members • Students and their parents (or other program clients) • Retirees • Representatives of influence groups
2. Setting the Boundaries • Starting point: a detailed description of the program being evaluated • Program description: describes the critical elements of the program (goals, objectives, activities, target audiences, physical setting, context, personnel) • The description must be thorough enough to convey the program’s essence
Characterizing the Evaluand • What problem was program designed to correct? • Of what does the program consist? • What is the program’s setting and context? • Who participates in the program? • What is the program’s history? Duration?
• When and under what conditions is the program implemented? • Are there unique contextual events (contract negotiations, budgets, elections…) that may distort the evaluation? • What resources (human, material, time) are consumed by the program? • Has there been a previous evaluation?
Program Theory • A specification of what must be done to achieve the desired goals, what other important impacts may be anticipated, and how these goals and impacts would be generated (Chen, 1990) • Serves as a tool for: • Understanding the program • Guiding the evaluation • Evaluators must understand the assumptions that link the problem to be resolved with the program’s actions and characteristics, and those actions and characteristics with the desired outcomes
Hypotheses helpful in developing program theory (Rossi, 1971): 1. Causal hypothesis: links the problem to a cause 2. Intervention hypothesis: links program actions to that cause 3. Action hypothesis: links the program activities with reduction of the original problem
Sample Problem • Declining fitness levels in children • Causal hypothesis? • Intervention hypothesis? • Action hypothesis? (one illustrative answer follows)
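One illustrative way to fill in the sample problem (a sketch, not from the original slides): the causal hypothesis might hold that children’s fitness is declining because they get too little regular physical activity; the intervention hypothesis, that a school-based activity program will increase the amount of physical activity children get; and the action hypothesis, that the added activity produced by the program will raise, or at least arrest the decline in, children’s fitness levels.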
Methods for Describing the Evaluand • Descriptive documents • Program documents, proposals for funding, publications, minutes of meetings, etc. • Interviews • Stakeholders and all relevant audiences • Observations • Observe the program in action to get a “feel” for what is really going on • Observations often reveal differences between how the program actually runs and how it is supposed to run
Challenge of balancing different perspectives • Minor differences may reflect stakeholder values or positions and can be informative • Major differences require that the evaluator attempt to achieve some consensus description of the program before initiating the evaluation • Redescribing the evaluand as it changes • Changes may be due to: • Responsiveness to feedback • Implementation not quite aligned with the designers’ vision • Natural historical evolution of an evaluand
3. Analyzing Evaluation Resources: $ • Cost-free evaluation: cost savings realized through the evaluation may pay for the evaluation over time (see the illustration below) • If budget limits are set before the evaluation process begins, they will affect the planning decisions that follow • Often the evaluator has no input into the budget • Offer two or three levels of service (a Chevy versus a BMW) • Budgets should remain somewhat flexible so the evaluation can pursue new insights that emerge during the process
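A hypothetical illustration of the cost-free idea (the figures are invented for the example): if an evaluation costs $15,000 and identifies process changes that save the program $20,000 per year, the savings recoup the evaluation’s cost within the first year.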
Analyzing Resources: Personnel • Can the evaluator use “free” staff on site? • Program staff could collect data • Secretaries can type and search records • Graduate students can help through internships or course-related work • PTA volunteers • It is key that the evaluator ORIENT, TRAIN, and QUALITY-CHECK such volunteers to maintain the evaluation’s integrity • Supervision and spot-checking are useful practices • Task selection is essential to maintaining the study’s validity and credibility
Analyzing Resources: Technology, Other Resources, and Constraints • The more information that must be generated by the evaluator, the costlier the evaluation • Are existing data, records, evaluations, and other documents available? • Newer technology can make data collection less expensive • Web-based surveys, e-mail, conference calls, posting final reports on websites • Time (avoid setting unrealistic timelines)
4. Analyzing the Political Context • Politics begin with the decision to evaluate and influence the entire evaluation process • Who stands to gain or lose the most from different evaluation scenarios? • Who holds the power in this setting? • How is the evaluator expected to relate to different groups? • From which stakeholders will cooperation be required? Are they willing to cooperate? • Who has a vested interest in the outcomes? • Who will need to be informed along the way? • What safeguards need to be formalized (e.g., IRB approval)?
Variations Caused by the Evaluation Approach Used • Variations in the evaluation plan will occur based on the approach taken by the evaluator • Each approach has strengths and limitations • Review Table 9.1 for the characteristics of each • Relying on a single approach tends to be limiting
To Proceed or Not? • Based on information about the context, program, stakeholders, and resources, make the “go/no-go” decision • Ch. 10 describes conditions under which evaluation is inappropriate: • The evaluation would produce trivial information • The evaluation results will not be used • The evaluation cannot yield useful, valid information • The evaluation is premature for the stage of the program • The motives for the evaluation are improper • Also weigh ethical considerations and professional standards (utility, feasibility, propriety, accuracy)