Designing the Evaluation FCS 5470 Summer 2005
An early thought… • “If we don’t know where we are going, we will likely end up somewhere else.”
Steps in Designing the Evaluation Plan • Assess the evaluation needs • Focus the evaluation • Select an evaluation design • Determine data collection methods • Determine data analysis methods • Collect and analyze data • Communicate and report findings Russ-Eft & Preskill, 2001; Posavac & Carey, 2003
Step 1: Assess the Evaluation Needs • Who wants the evaluation? • Why is it wanted? • When is it needed? • Who is going to do it? • What resources are available? • Is the program evaluable? • What is the focus? Posavac & Carey, 2003
Step 2: Focus the Evaluation • Develop a complete description of the evaluand • Why is this important? • What are the background and history of the program? • How can you learn more? Weiss, 1998
Step 2: Focus the Evaluation • Rationale and Purpose of Evaluation • Builds from history with the problems identified early on • Ends with a clear purpose that includes a statement of how the results will be utilized Russ-Eft & Preskill, 2001
Step 2: Focus the Evaluation • Identify the stakeholders • Anyone who has a direct or indirect interest (stake) in the evaluand or its evaluation (Weiss, 1998) • Primary, secondary, and tertiary levels • Critical to identify all, categorizing isn’t as critical Russ-Eft & Preskill, 2001
Stakeholder Examples • Primary: funding agencies, designers, implementers, staff • Secondary: managers, administrators, students, participants, customers/clients, trainers, parents • Tertiary: potential users or adopters, professional colleagues, professional organizations, community members, governing boards, legislators Russ-Eft & Preskill, 2001
Step 2: Focus the Evaluation • Develop key evaluation questions • They form the boundary and scope of the evaluation • May need to prioritize or group questions into themes • Distinguish need-to-know from nice-to-know questions • Consider the types of questions asked Russ-Eft & Preskill, 2001; Weiss, 1998
Types of Evaluation Questions • Program process • Program outcomes • Attributing outcomes to the program • Links between processes and outcomes • Explanations Weiss, 1998
Step 3: Select an Evaluation Design • One-shot design • Retrospective pre-test design • One-group pretest-posttest design • Posttest-only control group design • Pretest-posttest control-group design • Time series design • Case study design Russ-Eft & Preskill, 2001
One-Shot Design • Sample → Intervention → Posttest Russ-Eft & Preskill, 2001
Retrospective Pretest Design • Sample → Intervention → Posttest (including retrospective KAB items) Russ-Eft & Preskill, 2001
One-Group Pretest-Posttest Design • Sample → Pretest → Intervention → Posttest Russ-Eft & Preskill, 2001
Posttest-Only Control Group Design • Sample → random assignment → Intervention → Posttest • Sample → random assignment → Control (no intervention) → Posttest Russ-Eft & Preskill, 2001
Pretest-Posttest Control-Group Design • Sample → random assignment → Pretest → Intervention → Posttest • Sample → random assignment → Pretest → Control (no intervention) → Posttest Russ-Eft & Preskill, 2001
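As an illustrative sketch (not part of the slides), a pretest-posttest control-group design can be analyzed by comparing gain scores (posttest minus pretest) between the two groups; all scores below are made-up example data:

```python
# Hypothetical analysis of a pretest-posttest control-group design.
# Each participant is a (pretest, posttest) score pair; data are invented.
from statistics import mean

intervention = [(52, 70), (48, 66), (55, 75), (60, 78)]
control = [(50, 54), (47, 49), (58, 60), (53, 55)]

def gain_scores(group):
    """Posttest minus pretest for each participant in a group."""
    return [post - pre for pre, post in group]

mean_gain_intervention = mean(gain_scores(intervention))  # 18.5
mean_gain_control = mean(gain_scores(control))            # 2.5

# The difference in mean gains is a simple estimate of program effect;
# a full analysis would also test whether it is statistically significant.
effect_estimate = mean_gain_intervention - mean_gain_control  # 16.0
```

Because both groups are pretested and randomly assigned, a difference in gains is more plausibly attributable to the intervention than in the one-group designs above.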
Time Series Design • Similar to a course, with several points of assessment within the intervention • Advantage: provides evidence of change over time • Disadvantage: can be costly Russ-Eft & Preskill, 2001
Case Study Design • In-depth descriptive data collection and analysis of individual cases • Most useful for answering "how" and "why" questions Russ-Eft & Preskill, 2001
Triangulation • Data • Methods • Investigator • Theory Russ-Eft & Preskill, 2001
Step 4: Data Collection Methods • Archival Data • Observation Data • Surveys and Questionnaires • Knowledge tests • Individual interviews • Focus groups
Factors influencing selection • Evaluation key questions • Evaluator skill • Available resources • Stakeholders’ preferences • Level of acceptable intrusiveness • Availability of data Russ-Eft & Preskill, 2001
Factors influencing selection • Objectivity • Timeliness • Degree of desired structure • Validity and reliability issues Russ-Eft & Preskill, 2001
Validity • Accuracy of data collection • Measures what it claims to measure • Types of validity • Content, construct, face, criterion, and predictive Leedy & Ormrod, 2001
Threats to Validity • Time or history • Maturation • Testing effects • Statistical regression • Instrumentation • Mortality • Selection • Diffusion/imitation of treatments • Bias Leedy & Ormrod, 2001
Sources of Potential Bias • Sample selection • Concealment of the truth • Lack of knowledge • Non-response • Processing errors • Conceptual problems Leedy & Ormrod, 2001
Ensuring Validity • Random assignment in comparison groups • Ample number of survey items • Reduce response bias • Control over confounding variables • Use of multiple measures Russ-Eft & Preskill, 2001
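The first technique above, random assignment to comparison groups, can be sketched with a small hypothetical helper (the function name and participant labels are illustration only, not from the slides):

```python
# Minimal sketch of random assignment to two comparison groups.
import random

def randomly_assign(participants, seed=None):
    """Shuffle participants and split them into two equal-sized groups.

    A fixed seed makes the assignment reproducible for documentation;
    a real evaluation would omit it so the split is truly random.
    """
    rng = random.Random(seed)
    pool = list(participants)
    rng.shuffle(pool)
    half = len(pool) // 2
    return pool[:half], pool[half:]

treatment, control = randomly_assign(["P1", "P2", "P3", "P4", "P5", "P6"], seed=42)
```

Because chance alone decides group membership, pre-existing differences between participants are spread across both groups, which is what makes the control-group designs above interpretable.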
Reliability • An instrument that gives approximately the same results across repeated measurements • Results can be replicated • Types of reliability • Inter-rater, internal consistency, equivalent forms, and test-retest Leedy & Ormrod, 2001
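For example, test-retest reliability is commonly estimated as the Pearson correlation between two administrations of the same instrument. The sketch below uses invented scores (not from the slides) and a hand-rolled correlation so it needs only the standard library:

```python
# Illustrative test-retest reliability: Pearson correlation between
# scores from two administrations of the same instrument (made-up data).
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

time1 = [12, 15, 11, 18, 14]  # first administration
time2 = [13, 14, 12, 19, 13]  # same participants, second administration
reliability = pearson_r(time1, time2)  # near 1.0 suggests a stable instrument
```

A coefficient near 1.0 indicates that participants kept roughly the same relative standing across administrations, which is the intuition behind "results can be replicated."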
Threats to Reliability • Fluctuations in mental alertness of participants • Variations in conditions under which instrumentation occurred • Differences in interpreting the results • Personal motivation of participants • Length of instrument Leedy & Ormrod, 2001
Techniques for Ensuring Validity and Reliability • Pilot testing • Checking different types of validity • Making repeated and persistent observations • Triangulation (use of three different sources) Russ-Eft & Preskill, 2001
Qualitative, Quantitative, and Mixed Methods • Qualitative: ethnographic or naturalistic; evaluators can't separate themselves from what is evaluated; purpose is to understand; inductive analysis • Quantitative: empirical; evaluator independent and apart from what is evaluated; statistics commonly used; deductive analysis • Mixed methods: combines both Russ-Eft & Preskill, 2001
Steps in Designing the Evaluation Plan • Assess the evaluation needs • Focus the evaluation • Select an evaluation design • Determine data collection methods • Determine data analysis methods • Collect and analyze data • Communicate and report findings Russ-Eft & Presill, 2001; Posavac & Carey, 2003