S519: Evaluation of Information Systems
Evaluation Report Template
The steps involved (D-p4)
• Step 1: Understanding the basics of evaluation (ch1)
• Step 2: Defining the main purposes of the evaluation and the "big picture" questions that need answers (ch2)
• Step 3: Identifying the evaluative criteria (ch3)
• Step 4: Organizing the list of criteria and choosing sources of evidence (collecting data) (ch4)
The steps involved (D-p4)
• Step 5: Analyzing data
• Dealing with the causation issue (what causes what, and why), to avoid "subjectivity" (ch5+6)
• Importance weighting: weighting the results (ch7)
• Merit determination: how well your evaluand has done on the criteria (good? unacceptable?) (ch8)
• Synthesis methodology: systematic methods for condensing evaluation findings (ch9)
• Statistical analysis: Salkind (2007)
The steps involved (D-p4)
• Step 6: Results
• Putting it all together: fitting the pieces into the KEC framework (ch10)
• Step 7: Feedback
• Meta-evaluation: how to figure out whether your evaluation is any good (ch11)
The Key Evaluation Checklist (Davidson, 2005, p. 6-7)
I. Executive Summary
II. Preface
III. Methodology
1. Background & Context
2. Descriptions & Definitions
3. Consumers
4. Resources
5. Values
6. Process Evaluation
7. Outcome Evaluation
8 & 9. Comparative Cost-Effectiveness
10. Exportability
11. Overall Significance
12. Recommendations & Explanations
13. Responsibilities
14. Reporting & Follow-up
15. Meta-evaluation
Step 1: Understand the basics of evaluation
• Identify the evaluand
• Background and context of the evaluand
• Why did this program or product come into existence in the first place?
• Descriptions and definitions
• Describe the evaluand in enough detail that virtually anyone can understand what it is and what it does
• How: collect background information, pay a firsthand visit, or conduct a literature review
Step 1: Output report
• Output: a one- or two-page overview of the evaluand and findings
• What is your evaluand?
• Background and context of your evaluand
• Description of your evaluand
• Try to be as detailed as possible
Step 2: Defining the Purpose of the Evaluation (D-Ch2)
• Who asked for this evaluation, and why?
• What are the main evaluation questions?
• Who is the main audience?
• Absolute merit or relative merit?
Step 2: Output report
• Your Step 2 output report should answer the following questions:
• Define the evaluation purpose
• Do you need to demonstrate to someone (or to yourself) the overall quality of something?
• Or do you need to find areas for improvement?
• Or do you need to do both?
• Once you answer the questions above, figure out your big picture questions:
• Is your evaluation about the absolute merit of your evaluand?
• Or the relative merit of your evaluand?
Step 3: Defining evaluative criteria
• To build a criterion list, consider the following procedures:
• A needs assessment
• A logic model linking the evaluand to the needs
• An assessment of other relevant values, such as process, outcomes, and cost
• A strategy to organize your criterion checklist
Make sure that you go into the evaluation with a well-thought-out plan, so that you know what you need to know, where to get that information, and how you are going to put it together when you write up your report.
Needs assessment
• Understand the true needs of your evaluation's end users (consumers or impactees)
• Who are your end users?
• They are the people or entities who buy or use a product or service, enroll in a training program, etc.
• Upstream stakeholders (i.e., people at upper levels of the structure: managers, designers)
• Immediate recipients (i.e., people who directly consume your product or service: consumers, trainees)
• Downstream consumers (i.e., people who are indirectly affected by the evaluand)
Understanding needs
• Needs vs. wants
• The difference, and why it matters
• A need is something without which unsatisfactory functioning occurs.
• Different kinds of needs
• Context dependence
• Conscious needs vs. unconscious needs
• Needs we know about and needs we do not know about
• Met needs vs. unmet needs
• Building a factory (increases jobs, but creates pollution)
• Performance needs vs. instrumental needs
• The "need to do" something for satisfactory functioning (actual problems) vs. proposed solutions
• Accessing email vs. a lightweight laptop
• In most cases, performance needs are considered, but instrumental needs are not
Needs assessment method
• Two phases:
• Identifying and documenting performance needs
• Investigating the underlying causes of performance needs
Example logic model: Training program → Improved skills → Improved performance
Step 3: Output report
• Needs assessment
• Identify consumers or impactees (e.g., Table 3.2)
• Identify different needs (e.g., Table 3.3)
• Logic model (e.g., Exhibit 3.6 and Exhibit 3.7)
• An assessment of other relevant values, with consideration of process, outcomes, and cost (e.g., Table 3.4)
• Organizing your criteria
• See the Step 4 output report
Step 4: Organizing criteria and identifying sources of evidence
• When organizing your criteria, always keep the following in mind (a data-structure sketch follows below):
• Process
• How good are the evaluand's content and implementation?
• Outcomes
• How good are the impacts on immediate recipients and other impactees?
• Comparative cost-effectiveness
• How costly is it? Excessive, quite high, acceptable, or reasonable?
• Exportability
• How can we extend this to other settings?
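One way to keep these four checkpoints organized while collecting evidence is a small data structure. The sketch below is a minimal Python illustration under assumed names (Criterion, CriteriaChecklist) and invented sample entries; it is not part of the KEC itself.

```python
# A minimal sketch (not from Davidson, 2005) of one way to hold criteria
# and their evidence sources under the four checkpoints named above.
from dataclasses import dataclass, field

@dataclass
class Criterion:
    name: str
    evidence_source: str  # where the supporting data will come from

@dataclass
class CriteriaChecklist:
    process: list = field(default_factory=list)
    outcomes: list = field(default_factory=list)
    cost_effectiveness: list = field(default_factory=list)
    exportability: list = field(default_factory=list)

checklist = CriteriaChecklist(
    process=[Criterion("Content quality", "expert review")],
    outcomes=[Criterion("Skill improvement", "pre/post test scores")],
    cost_effectiveness=[Criterion("Cost per trainee", "budget records")],
    exportability=[Criterion("Reusable course design", "stakeholder interviews")],
)
print(checklist.outcomes[0].name)  # -> Skill improvement
```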
The process evaluation checkpoint
• Process evaluation
• Content
• What the evaluand consists of (i.e., its basic components or design)
• Implementation
• How well or efficiently the evaluand was implemented or delivered to the consumers who needed it
• Other features
• Any other features that make the program good or bad, which are not covered by the first two and are not outcome- or cost-related criteria
The outcome evaluation checkpoint
• What is an outcome?
• Things that happen as a result of the program
• Outcomes can affect anyone listed as a consumer
• How to do it
• Base it on the logic model from Step 3 (e.g., Exhibit 3.6 and Exhibit 3.7)
• Organize outcomes into subcategories
• See Table 4.3 (D-p60)
The comparative cost-effectiveness checkpoint
• Any evaluation has to take cost into account
• What are costs?
• Money
• Time
• Effort
• Space
• Opportunity costs
The exportability checkpoint
• What elements of the evaluand (e.g., an innovative design or approach) might make it potentially valuable, or a significant contribution or advance, in another setting?
Step 4: Output report
• Checkpoints for:
• Process (e.g., Tables 4.1, 4.2)
• Outcomes (e.g., Table 4.3)
• Comparative cost-effectiveness (e.g., cost cube table)
• Exportability
• A short summary of potential areas for exportability
Mid-term report
• Covers Step 1 through Step 4
Step 5: Analyzing data
• 5.1 Inferring causation
• 5.2 Determining importance
• 5.3 Merit determination
• 5.4 Synthesis
5.1 Certainty about causation (D-ch5)
• Each decision-making context requires a different level of certainty
• Quantitative or qualitative analysis
• All-quantitative or all-qualitative
• Choosing the sample
• Sample size
• A mix of both
• More in statistical analysis
Inferring causation: 8 strategies
• 1. Ask observers
• 2. Check whether the content of the evaluand matches the outcome
• 3. Look for other telltale patterns that suggest one cause or another
• 4. Check whether the timing of outcomes makes sense
• 5. Check whether the "dose" is related logically to the "response"
• 6. Make comparisons with a "control" or "comparison" group
• 7. Control statistically for extraneous variables (see the sketch below)
• 8. Identify and check the underlying causal mechanism(s)
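Strategy 7 lends itself to a small worked example. The sketch below, using purely hypothetical data and variable names, regresses a performance outcome on a program indicator plus a covariate, so the program's coefficient is estimated with the extraneous variable held constant. It illustrates the idea only, not a recommended analysis for any particular evaluand.

```python
# Sketch of strategy 7: controlling statistically for an extraneous
# variable with a simple linear regression. All data are simulated.
import numpy as np

rng = np.random.default_rng(0)
n = 200
prior_skill = rng.normal(size=n)        # extraneous variable
attended = rng.integers(0, 2, size=n)   # 1 = took the training program
# The simulated outcome depends on both the program and prior skill.
performance = 2.0 * attended + 1.5 * prior_skill + rng.normal(size=n)

# Regress the outcome on the program indicator AND the covariate, so the
# program coefficient is estimated holding prior skill constant.
X = np.column_stack([np.ones(n), attended, prior_skill])
coef, *_ = np.linalg.lstsq(X, performance, rcond=None)
print(f"estimated program effect (adjusted): {coef[1]:.2f}")
```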
5.2 Determining importance (D-ch7)
• Importance determination is the process of assigning labels to dimensions or components to indicate their importance.
• Different evaluations:
• Dimensional evaluation
• Component evaluation
• Holistic evaluation
Determining importance: 6 strategies
• 1. Having stakeholders or consumers "vote" on importance (see the sketch below)
• 2. Drawing on the knowledge of selected stakeholders
• 3. Using evidence from the literature
• 4. Using specialist judgment
• 5. Using evidence from the needs and values assessments
• 6. Using program theory and evidence of causal linkages
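Strategy 1 can be made concrete in a few lines. The sketch below (dimension names and vote counts are invented) normalizes stakeholder vote counts into importance weights that can later feed a quantitative synthesis.

```python
# Sketch of strategy 1: turning stakeholder "votes" on importance into
# normalized weights. Dimensions and counts are hypothetical.
votes = {"content": 14, "implementation": 9, "outcomes": 22, "cost": 5}

total = sum(votes.values())
weights = {dim: count / total for dim, count in votes.items()}
for dim, w in weights.items():
    print(f"{dim}: weight = {w:.2f}")
```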
5.3 Merit determination
• It is the process of setting "standards" (definitions of what performance counts as "satisfactory", "good", etc.) and applying those standards to descriptive data, to draw explicitly evaluative conclusions about performance on a particular dimension or component.
Descriptive facts about performance → Quality or value determination guide → Evaluative conclusions
Rubric
• A rubric is a tool that provides an evaluative description of what performance or quality "looks like".
• It has two levels:
• A grading rubric is used to determine absolute quality or value (e.g., Table 8.2)
• A ranking rubric is used to determine relative quality or value
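To show how a grading rubric turns descriptive data into merit labels, here is a minimal sketch; the cut-points and labels are assumptions for illustration and are not the ones in Davidson's Table 8.2.

```python
# Sketch of a grading rubric: map a descriptive performance score onto
# absolute quality labels. Cut-points are invented for illustration.
def grade(score: float) -> str:
    """Return an absolute merit label for a 0-100 performance score."""
    if score >= 85:
        return "excellent"
    if score >= 70:
        return "good"
    if score >= 50:
        return "satisfactory"
    return "unacceptable"

for dim, score in {"content": 88, "implementation": 64}.items():
    print(dim, "->", grade(score))
```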
5.4 Synthesis methodology
• Synthesis is the process of combining a set of ratings or performances on several components or dimensions into an overall rating.
• Quantitative synthesis
• Using numerical weights
• Qualitative synthesis
• Using qualitative labels
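A quantitative synthesis with numerical weights is essentially a weighted average. The sketch below uses hypothetical ratings and weights (the weights sum to 1) to roll several dimension ratings into one overall rating.

```python
# Sketch of quantitative synthesis: combine per-dimension ratings into
# one overall rating using numerical importance weights (hypothetical).
ratings = {"content": 4.0, "implementation": 3.0, "outcomes": 5.0}
weights = {"content": 0.3, "implementation": 0.2, "outcomes": 0.5}

overall = sum(weights[d] * ratings[d] for d in ratings)
print(f"overall rating: {overall:.2f}")  # -> overall rating: 4.30
```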
Qualitative (nonnumerical) weighting example 2
• Dimension by dimension
• Layer by layer
Sub-dimension 1 + Sub-dimension 2 → Dimension 1; Sub-dimension 3 + Sub-dimension 4 → Dimension 2; Dimensions 1 and 2 → Overall rating
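The same rollup can be done qualitatively, layer by layer, as in the diagram above. The sketch below uses an ordered label scale and a conservative "weakest link" rule; both the labels and the rule are illustrative assumptions, not Davidson's own synthesis rules.

```python
# Sketch of a layer-by-layer qualitative rollup: sub-dimensions combine
# into dimensions, dimensions into an overall rating.
ORDER = ["poor", "satisfactory", "good", "excellent"]

def roll_up(labels):
    """Combine qualitative labels by taking the weakest one."""
    return min(labels, key=ORDER.index)

dim1 = roll_up(["good", "excellent"])     # sub-dimensions 1 and 2
dim2 = roll_up(["satisfactory", "good"])  # sub-dimensions 3 and 4
overall = roll_up([dim1, dim2])
print(dim1, dim2, overall)  # -> good satisfactory satisfactory
```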
Step 6: Results
• Putting it all together: fitting the pieces into the KEC framework (ch10)
• Now we are ready to write our evaluation report.
Step 7: Feedback (optional)
• Meta-evaluation: how to figure out whether your evaluation is any good (ch11)
Related links
• KEC
• http://www.wmich.edu/evalctr/checklists/kec.htm
• http://www.wmich.edu/evalctr/checklists/kec_feb07.pdf
• Questionnaire examples
• http://www.go2itech.org/HTML/TT06/toolkit/evaluation/forms.html
• http://enhancinged.wgbh.org/formats/person/evaluate.html
• http://www.dioceseofspokane.org/policies/HR/Appendix%20II/SampleForms.htm
• http://njaes.rutgers.edu/evaluation/