Summative Program Evaluations: The External Evaluation • Used with the permission of John R. Slate
Definitions: • The use of data to determine the effectiveness of a unit, course, or program AFTER it has been completed. • An evaluation that provides information about the overall effectiveness, impact, and/or outcomes of a program.
The goal of summative evaluation . . . • to collect and present information needed for summary statements and judgments about the program and its value.
Used in . . . • making terminal end-of-experience judgments of: • worth & value • appropriateness of the experience • goodness • assessing the end results of an experience
Examples • Termination of employment • Final grade in a course • Final report for a program that is ending • Board report
The role of the evaluator: • to provide findings about the program which can be generalized to other contexts beyond the program being evaluated. • to focus the evaluation on the primary features and outcomes of the program and on the policy questions which may underlie the program.
to educate the audience about what constitutes good and poor evidence of program success. • to admonish the audience about the foolishness of basing important decisions on a single study. • to be a program advocate when merited.
to convey to your audience as complete a depiction of the program’s crucial characteristics as possible. • to express opinions about the quality of the program. • to be able to defend your conclusions.
Phase A: Set the Boundaries of the Evaluation • Research the program • Encourage trust, cooperation, and ownership • Identify the audience/stakeholders • Identify programmatic goals
Questions of sponsors & audiences . . . • Is the program worth continuing? • How effective was it? • What does the program look like and accomplish? • What conclusions could you draw about program effectiveness?
Step 2: Find out as much as you can about the program. • Collect and scrutinize written documents that describe the program. • Talk to people
Questions on the mind of the evaluator . . . • What are the goals and objectives of the program? • Does the program lead to goal achievement? • How effective is the program? • Are there more effective alternative programs available? • What are the most important characteristics, activities, services, staffing, and administrative arrangements of the program? • Did the planned program occur?
Questions to be asked of the stakeholders . . . • What are the most important outcomes of the program, including planned, serendipitous, and unanticipated? • Which aspects of the program do you think wield greatest influence in producing program outcomes? • What are the most important organizational and administrative aspects of the program?
Which parts of the program do you consider its most distinctive characteristics, those that make it unique among programs of its kind? • With what types of students/clients, participants, staff do you think the program is most/least effective?
What is the theory of action behind the program? • What are the policy alternatives if the program is found effective? • How much expansion is possible? • How might expansion sites be selected? • What are the possible markets, communities, or sites for future expansion?
What are the policy alternatives if the program is found ineffective? • Would the program be cut back, eliminated, and/or refined?
Step 3: Develop a written description of the program as you understand it.
Step 4: Focus the evaluation • Judge the adequacy of your written documents for describing the program • Visualize what you might do as the evaluator • Assess your own strengths and preferences
Step 5: Negotiate your role • Agree generally about the basic outline of the evaluation • Verify with the evaluation sponsor your general agreement about services and responsibilities
Phase B: Select Appropriate Evaluation Methods • Establish a common understanding with your study sponsor(s) and program staff about the purposes of the evaluation and about the nature of the activities.
Step 1: Data Collection • Determine appropriate sources for data collection • Select data collection instruments • Develop instruments where necessary
Step 2: Consolidate your concerns • Time • Money • Availability of data collection sources • Availability of staff and/or students/clients
Step 3: Plan the construction and purchase of instruments • Schedule, schedule, schedule • Field-testing
Step 4: Plan the data analysis you will perform • Mostly quantitative: statistical analysis (e.g., in SPSS) • Mostly qualitative: coding and analysis of themes
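For a mostly quantitative analysis, a minimal sketch of the kind of group comparison that might otherwise be run in SPSS is shown below. The score lists, group labels, and choice of test (an independent-samples t-test with Cohen's d) are illustrative assumptions, not part of the original slides.

```python
# Illustrative sketch only: a quantitative group comparison analogous to an SPSS analysis.
# The posttest scores below are hypothetical placeholders.
import statistics
from scipy import stats

control_scores = [72, 68, 75, 70, 74, 69, 71]        # hypothetical control-group posttest scores
experimental_scores = [78, 81, 76, 83, 79, 80, 77]   # hypothetical experimental-group posttest scores

# Independent-samples t-test comparing the two group means
result = stats.ttest_ind(experimental_scores, control_scores)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")

# Effect size (Cohen's d) to convey practical significance, not just statistical significance
n1, n2 = len(experimental_scores), len(control_scores)
s1, s2 = statistics.stdev(experimental_scores), statistics.stdev(control_scores)
pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
cohens_d = (statistics.mean(experimental_scores) - statistics.mean(control_scores)) / pooled_sd
print(f"Cohen's d = {cohens_d:.2f}")
```

A qualitative strand would instead be organized around coded themes; the statistical sketch above applies only to the quantitative portion of the plan.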
Step 5: Choose evaluation design • True Control Group • Identify all participants • Pretest all participants • Randomly assign each participant to one of two groups (control or experimental) • Avoid confounding and contaminating variables • Posttest both groups simultaneously.
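As a rough illustration of the random-assignment step in a true control-group design, the sketch below shuffles a hypothetical participant roster and splits it in half. The roster, seed, and group sizes are assumptions for demonstration only.

```python
# Illustrative sketch only: random assignment for a true control-group design.
# The participant roster is a hypothetical placeholder.
import random

participants = [f"participant_{i:02d}" for i in range(1, 41)]  # hypothetical roster of 40

random.seed(2024)              # fixed seed so the assignment is reproducible and auditable
random.shuffle(participants)   # randomize the order before splitting

midpoint = len(participants) // 2
control_group = participants[:midpoint]
experimental_group = participants[midpoint:]

# Per the design above, both groups would be pretested before the program begins
# and posttested simultaneously after it ends.
print(len(control_group), "in control;", len(experimental_group), "in experimental")
```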
True Control Group with Posttest only • Same as True Control Group, BUT no pretest is given. • Hope that randomization ensures equivalence of groups.
Non-equivalent Control Group • Find a group similar to your experimental group to serve as the control • Pretest both groups • Investigate differences • Posttest both groups
Single Group Time Series • Collection of scores from same group • Several occasions prior to experiment • Several occasions during experiment • Several occasions after experiment
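The sketch below is one hypothetical way to organize and summarize the repeated measurements in a single-group time series. The phase labels and scores are invented placeholders, and a real analysis would model changes in level and trend rather than simply comparing phase means.

```python
# Illustrative sketch only: organizing repeated measures for a single-group time-series design.
# All scores are hypothetical placeholders.
from statistics import mean

observations = {
    "before": [61, 63, 62, 64],   # several occasions prior to the program
    "during": [66, 68, 70, 71],   # several occasions during the program
    "after":  [73, 74, 72, 75],   # several occasions after the program
}

for phase, scores in observations.items():
    print(f"{phase:>6}: mean = {mean(scores):.1f}")
# A shift in level or trend between phases is the evidence of interest.
```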
Time Series with Non-Equivalent Control Groups • Groups are not randomly assigned • Same procedure as the single-group time series, applied to both groups
Before and After Design • Informal comparisons • Compare the experimental group with national sample norms • Examine school records • Evaluate performance against predetermined standards
Step 6: Choose sampling strategy for conducting data collection • Step 7: Estimate the cost of the evaluation • Step 8: Come to final agreement about services and responsibilities
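For Step 6, one common option is simple random sampling without replacement. The sketch below illustrates it with a hypothetical client roster and sample size, which are assumptions rather than recommendations from the slides.

```python
# Illustrative sketch only: simple random sampling for data collection.
# The roster and sample size are hypothetical.
import random

roster = [f"client_{i:03d}" for i in range(1, 201)]   # hypothetical population of 200 clients

random.seed(7)                       # reproducible draw, useful for documenting the evaluation
sample = random.sample(roster, 30)   # select 30 clients without replacement
print(sample[:5], "...")
```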
Phase C: Collect and Analyze Information • Step 1: Set deadlines • Step 2: Set up the evaluation design • Step 3: Administer instruments, score, and record data • Step 4: Conduct the data analysis
Phase D: Reporting the Findings • Step 1: Plan the report • Step 2: Choose a method of presentation
Be careful to . . . • apply standards and criteria appropriately. • use valid and reliable instruments. • be objective rather than subjective (as in formative evaluation). • make sure that program implementation has been completed.
Realize that . . . • the program documentation that you generate may be used for accountability, creating a lasting description/impression of the program, and/or creating a list of the possible causes of program effects.
The critical characteristic is: • to provide the best information that could have been collected under the circumstances, and to ensure that this information meets the credibility requirements of the audience.
In closing, remember . . . • The summative evaluation is most often conducted at the conclusion of the program to provide potential consumers with judgments about the program’s worth or merit. • The more skeptical your audience, the greater the necessity for providing formal backup data.