Learn how to develop a comprehensive evaluation plan using logic maps and performance indicators to guide your TAH program assessment. The process focuses on identifying goals, using authentic assessment methods, and generating reports for evaluation purposes.
Creating a TAH Evaluation Plan Using Logic Maps and Performance Indicators to Guide Program Evaluation
Jeff Sun • jsun@sun-associates.com • Zora Warren • zwarren@sun-associates.com • www.sun-associates.com • www.sun-associates.com/taheval.htm • www.sun-associates.com/evalws • www.edtechevaluation.com/logicmaphow2.htm
Our Basic Evaluation Model • Based on the authentic assessment component of project-based learning
This Evaluation Process • Helps clarify project goals, processes, products • Revolves around indicators of success written for this particular project’s goals • Is highly qualitative and formative • Qualitatively, are you achieving your goals? • What adjustments can be made to your project to realize greater success? • Makes use of a variety of data sources • Generates the necessary reports for the U.S. Department of Education
The Basic Process • Develop the Project Logic/Plan • A part of the proposal-writing process! • Identify Evaluation Questions • Derived from the RFP and the stated goals in the proposal • Are we doing what we need to do to support the purposes for which we were funded? • Create Performance Rubrics • These allow for authentic, qualitative, and holistic evaluation • Conduct Data Collection • Tied to indicators in the rubrics • Report • Formatively and summatively
What are Logic Maps? • A graphic organizer for cause and effect • More about linking concepts than process flow • Not really the same as a flow chart • Details how your project will… • organize resources • in response to needs • to fulfill its ultimate goal • But actually…not in that order • Needs → Responses → Goals
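As a rough illustration, the need → response → goal links of a logic map can be sketched as a simple data structure. The Python below is a minimal sketch: the goal is taken from the sample objectives on the next slide, while the need and responses are invented placeholders, not the project's own.

# Minimal sketch of a logic map: each need links to the responses that address it
# and to the goal those responses serve. Entries are hypothetical examples.
logic_map = {
    "gaps in teacher content knowledge": {
        "responses": ["summer content institutes", "historian-led seminars"],
        "goal": "strengthen teacher content knowledge in American history",
    },
}

def trace(need):
    """Print the need -> response -> goal chain for one entry of the map."""
    entry = logic_map[need]
    for response in entry["responses"]:
        print(f"{need} -> {response} -> {entry['goal']}")

trace("gaps in teacher content knowledge")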
Sample Project Objectives (aka “Goals”) • Strengthen teacher content knowledge in American history • Help teachers help students achieve Historical Thinking Standards • Create a collaboration between participating districts and content providers • These will be the things we evaluate, because these are the things that we do. • Program evaluation is about evaluating the project’s work and progress • It is not about testing the underlying research hypothesis!
Sample Evaluation Questions • These come from the basic indicators that were specified in the proposal… • To what extent has Our Project strengthened teachers’ knowledge of traditional American history? • To what extent has Our Project increased the capacity of high-need districts to provide high-quality American history instruction?
Basic Performance Indicators • Teachers in project districts will demonstrate increased knowledge of traditional American history content • Participating districts will provide increased opportunities for students to participate in high-quality American history courses
Basic Indicator - Q1 • Teachers in project districts will demonstrate increased knowledge of traditional American history content • Teachers - particularly those from high-need districts - will show gains on pre/post tests of content knowledge • There is a connection between these gains and the particular professional development offered by the project’s consortium • An analysis of participant deliverables - the outputs of the professional development - shows increased teacher knowledge and skills
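For the pre/post evidence named above, a gain score is simply the difference between a teacher's post-test and pre-test results, averaged across participants. The sketch below uses invented teacher labels and scores purely to show the arithmetic; the project's actual content-knowledge tests are not represented here.

# Hypothetical pre/post content-knowledge scores; gain = post minus pre.
pre_scores  = {"teacher_a": 62, "teacher_b": 55, "teacher_c": 71}
post_scores = {"teacher_a": 78, "teacher_b": 70, "teacher_c": 80}

gains = {t: post_scores[t] - pre_scores[t] for t in pre_scores}
mean_gain = sum(gains.values()) / len(gains)

print(gains)                           # per-teacher gains
print(f"mean gain: {mean_gain:.1f}")   # average gain across participants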
Basic Indicator - Q2 • Participating districts will provide increased opportunities for students to participate in high-quality American history courses • Increases in the demand for, and availability of, AP American history courses • Students (of participants) will show increased mastery of Historical Thinking Standards • Participants will increase their use of improved tools for learning, such as information technology • Participants will create lessons, courses, and units of study that support the development of student historical thinking skills
Evidence - Question 1 Indicator • What evidence would we need to gather to demonstrate that we are seeing what the indicator describes? • Increased interest in the program as a result of participant testimonies (recruitment for the 2nd year) • Increased collaboration between participants – sharing of documents, peer collaboration • Participants can refer to the specific standards and can use the language of these standards in high-level discussions with students and each other • Increased use of instructional technology • Wider variety of primary sources used, increased comfort level, increased familiarity • Types of questions that teachers ask in the classroom – do they reflect analytical thinking? • Types of answers that teachers can give to student questions • Types of resources teachers can direct students toward • How engaged students are, how frequently they participate, etc. • Ask teachers how their evaluations of students will change after their PD experience • How students are able to transfer knowledge (access prior knowledge, etc.) – ask teachers about this as well • Looking at the teachers’ materials (their products)
Evidence - Question 2 Indicator • What evidence would we need to gather to demonstrate that we are seeing what the indicator describes? • Have teachers piqued student interest in the junior year so that there is greater demand for AP in the senior year? • Increased awareness of history among administrators, leading to more higher-level courses being offered (could be long term) • Survey of student interest in history as a discipline – interest in more classes? • Increased enrollment in high school history electives • Much of what we are looking for under Question 1 applies here as well • Participation in history-related after-school activities • Increase in the “value” given to history as a subject in districts (among teachers, administrators – scheduling – and parents?)
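One way to tie the evidence above back to the indicators is to organize each evaluation question as a rubric entry: the indicator, the evidence sources that inform it, and the qualitative performance levels used when judging progress. The sketch below is only an assumed structure with illustrative level names, not the project's finalized rubric.

# Hypothetical rubric entry linking one evaluation question to its indicator,
# evidence sources, and qualitative performance levels.
rubric_q1 = {
    "question": "To what extent has the project strengthened teachers' knowledge "
                "of traditional American history?",
    "indicator": "Teachers demonstrate increased knowledge of traditional "
                 "American history content",
    "evidence": [
        "pre/post content-knowledge gains",
        "analysis of participant deliverables",
        "classroom observation of teacher questioning",
    ],
    "levels": ["not yet evident", "emerging", "meets indicator", "exceeds indicator"],
}

def formative_finding(level):
    """Return one line for a formative report, given a judged performance level."""
    assert level in rubric_q1["levels"]
    return f"Q1 indicator: {level} (based on {len(rubric_q1['evidence'])} evidence sources)"

print(formative_finding("emerging"))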
Data • Needs to support/confirm the established indicators • Needs to be formative and qualitative • Can’t just be the results of a “test” at the end • Needs to draw from a wide variety of sources
Next Steps? • Finalize the rubrics • Establish data collection “schedule” • Establish meeting schedule • Review performance against rubrics • Reporting