Background and History EDN 568 Program Design and Evaluation
The Birth of Evaluation In the beginning, God created Heaven and Earth… And God saw everything he had made. “Behold,” God said, “it is very good.”
And the evening and the morning were the sixth day. And on the seventh day, God rested from all his work. And on the seventh day, God’s ArchAngel came unto Him asking, “God, how do you know what you created is ‘good?’ What are your criteria? On what data do you base your judgment? Just exactly what results were you expecting to attain? And God, aren’t you a little too close to the situation to make a fair and unbiased evaluation?”
God thought very carefully about these questions the ArchAngel had asked him all that day…and His rest was greatly disturbed. So on the eighth day God rose up, faced the ArchAngel, and said…
Well, Lucifer… “YOU CAN JUST GO TO HELL!” And thus, evaluation was born in a blaze of glory
So, now that we know how evaluation was conceived, what do we need to know about it? • Evaluation is a tool that helps us judge whether a school program or instructional approach is being implemented as planned • It allows us to assess the extent to which our goals and objectives are being achieved.
Helps Answer These Questions • Are we doing the things for our students and teachers that we said we would? • Are our students learning what we set out to teach them? • How can we make improvements to the curriculum, teaching methods, and school programs?
Evaluation Myths • Many people believed evaluation was a useless activity that generated lots of boring data and useless conclusions. • This was a problem with evaluations in the past, when program evaluation methods were chosen largely on the basis of achieving complete scientific accuracy, reliability, and validity. • This approach often generated extensive data from which very carefully chosen conclusions were drawn.
Many people used to believe that evaluation was about proving the success or failure of a program. • This myth assumed that success meant implementing the perfect program and never having to deal with it again: the program would now run itself perfectly. • This doesn't happen in real life. • Success is remaining open to continuing feedback and adjusting the program accordingly. • Evaluation gives you this continuing feedback.
Evaluation Myths • Many people believed that evaluation was a highly unique and complex process that occurred at a certain time in a certain way, and almost always had to include the use of outside experts. • Many people also believed they must always demonstrate validity and reliability.
Evaluation Myths • For many years, people did not make generalizations and recommendations following program evaluations. • As a result, evaluation reports usually stated the obvious and left program administrators disappointed and confused. • Thanks to Michael Patton's development of utilization-focused evaluation, program evaluation has come to focus on utility, relevance, and practicality.
Qualitative Case Study Analysis Patton, M. Q. (2002). Qualitative research and evaluation methods (3rd ed.). Thousand Oaks, CA: Sage. The Guru of Qualitative Program Evaluation & Analysis
NCLB & Qualitative Studies • Case study analysis allows us to measure the extent to which students' educational performance and future potential have improved. • Case study analysis is context specific and therefore a “good fit” for educational program evaluation (when coupled with quantitative data).
Quantitative versus Qualitative Quantitative evaluation is based on the principles of the scientific method. The goal is to “prove” something based on valid and reliable outcome measures. Qualitative evaluation is context-based and analyzes a program's comprehensive impact on participants.
Program Evaluation Defined • Public school programs are organized methods to provide certain related services to students and other educational stakeholders • Public school programs must be evaluated to decide if the programs are indeed useful for our stakeholders.
Program Evaluation Defined • Program evaluation is a process by which society learns about itself. • Program evaluations contribute to enlightened discussion of alternative plans for social action. • Program evaluation must also be as much about political interaction as it is about determining facts.
The Politics of Program Evaluation It is very important to remember in debates over controversial educational programs that sometimes liars figure & figures lie. The educational program evaluator has the responsibility to protect students and schools from both types of deception.
The Politics of Educational Program Evaluation The evaluator is an educator; his or her success is to be judged by what others learn. Those who shape policy should reach decisions with their eyes wide open…not half shut. It is the evaluator's task to illuminate the situation, not to dictate the decision.
History of Program Evaluation During WW II, Program Evaluation developed at light speed to monitor soldier morale, evaluate personnel policies, and develop propaganda techniques.
The 1950s Following the evaluation boom after WWII, program evaluation became a common tool for measuring all kinds of things in organizations. Investigators assessed medical treatments, housing programs, educational activities, etc.
The 1960s Dramatic increase in articles and books on program evaluation in the 60s. President Johnson’s War on Poverty and Great Society Initiative provided a need to evaluate new social programs throughout the country.
The 1970s By the 70s, evaluation research became a specialty field in all the social sciences. The first journal in evaluation, Evaluation Review, began in 1976. After that we were well on our way into the 1980s, 1990s, and the 21st Century to….
Surfing Data & Evaluating Programs EDN 568 Program Design & Evaluation
Understanding Program Design and Evaluation In the words of Stephen Covey, first things first…. • What is a program? • What is program evaluation?
What Exactly Is a Program? • Organizations usually try to identify several goals which must be reached to accomplish their mission. • In nonprofit organizations, each of these goals often becomes a program.
Organizations & Program Evaluation If programs represent the goals of an organization, then there are two (2) basic ingredients required for program evaluation: 1. An organization & 2. A program (goal)
Program Evaluation • An assessment of an organization’s goals • An assessment of the worth, value, or merit of an intervention, innovation, or service • An assessment that identifies change – past and future • An assessment that tries to solve a problem • An assessment that carefully collects information about a program or some aspect of a program in order to make necessary decisions about the program.
Purpose & Uses of Program Evaluation Evaluations of educational programs have expanded considerably over the past 40 years. Title I of the Elementary and Secondary Education Act (ESEA) of 1965 represented the first major piece of federal social legislation that included a mandate for evaluation.
Title I Program Evaluation Legislation The 1965 legislation was passed with the evaluation requirement stated in very general language. This created quite a bit of controversy: state and local school systems were allowed considerable flexibility for interpretation and discretion.
Title I Evaluation Act This evaluation requirement had two purposes: (1) to ensure that the funds were being used to address the needs of disadvantaged children; and (2) to provide information that would empower parents and communities to push for better education.
Title I Program Evaluation: A Means to Upgrade Schools Many people saw the use of evaluation information on Title I programs and their effectiveness as a means of encouraging schools to improve performance. Federal staff in the U.S. Department of Health, Education, and Welfare (HEW) welcomed the opportunity to have information about programs, populations served, and educational strategies used. The Secretary of HEW in 1965 promoted the evaluation requirement as a means of finding out "what works" as a first step to promoting the dissemination of effective practices.
1965 Viewpoints on Title I Program Evaluation • Expectation of reform and the view that evaluation was central to the development of change. • A common assumption that evaluation activities would generate objective, reliable, and useful reports, and that findings would be used as the basis of decision-making and improvement.
What Happened? • Unfortunately, the Title I program evaluations did not produce any of the intended results due to local schools' failing to comply with the mandate. • Widespread support for evaluation did not take place at the local level. • There was a concern that federal requirements for reporting would eventually lead to more federal control over schooling.
The 1970s & Title I Program Evaluation • It became clear in the 70s that the evaluation requirements in federal education legislation were not generating their desired results. • The reauthorization of Title I of the federal Elementary and Secondary Education Act (ESEA) in 1974 strengthened the requirement for collecting information and reporting data by local grantees. • It required the U.S. Office of Education to develop evaluation standards and models for state and local agencies. • It also required the Office to provide technical assistance nationwide so exemplary programs could be identified and evaluation results could be disseminated.
TIERS The Title I Evaluation and Reporting System (TIERS) was part of a continuing development effort to improve accountability and program services. In 1980, the U.S. Department of Education created general administration regulations which established criteria for judging evaluation components of grant applications. These various changes in legislation and regulation reflected a continuing federal interest in evaluation data.
Use of Evaluation Results A system at the federal, state, or local levels was not in place to collect, analyze, and use the evaluation results to effect program or project improvements. In 1988, amendments to ESEA reauthorized the Title I program, and strengthened the emphasis on evaluation and local program improvement.
1988 Amendments The legislation required that state agencies identify programs: • That did not show aggregate student achievement gains • That did not make substantial progress toward the goals set by the local school district. Programs identified as needing improvement were required to write program improvement plans. If, after one year, improvement was not sufficient, the state agency was required to work with the local program to develop a program improvement process to raise student achievement.
The 1990s There was a further call for school reform, improvement, and accountability in the 1990s. The National Education Goals were formalized through the Goals 2000: Educate America Act of 1994. The new law issued a call for "world class" standards, assessment, and accountability to challenge the nation's educators, parents, and students.
2001 No Child Left Behind Act The Elementary and Secondary Education Act (ESEA), renamed "No Child Left Behind" (NCLB) in 2001, established goals of high standards, accountability for all, and the belief that all children can learn, regardless of their background or ability.
NCLB A Law of Accountability for Public Schools On January 8, 2002, President Bush signed into law the No Child Left Behind (NCLB) Act, the most sweeping education reform legislation in decades.
No Child Left Behind A New Level of Educational Program Evaluation All educational programs are evaluated on the basis of student achievement and school performance measures. This is an almost totally “quantitative” evaluation of public schools.
Purpose of NCLB To ensure that all children have fair and equal opportunities to reach proficiency on state academic achievement standards. Note the emphasis on the word “opportunities.”
NCLB and Program Evaluation Under NCLB, the goal, worth, merit, and value of public schools shall be determined by student achievement and school performance measures. The law sets a series of performance targets that states, school districts, and schools must achieve each year to meet the proficiency requirements. To make Adequate Yearly Progress (AYP), schools must show yearly progress toward the 2013-14 NCLB proficiency goal.
AYP and Proficiency • The ultimate goal of NCLB is to bring all students to PROFICIENCY (Level III or Above) as defined by North Carolina, no later than 2013-14. • For the purpose of school, district and state accountability, the interim benchmark for progressing toward the goal is Adequate Yearly Progress (AYP) in raising student achievement.
Target Groups and Subgroups • AYP focuses on all students and subgroups of students in schools, school districts, and states, with a goal of closing achievement gaps and increasing proficiency to 100 percent. • Ten student subgroups are defined in NC public schools.
Ten Student Subgroups in NC 1. School as a whole (all students) 2. American Indian 3. Asian 4. Black 5. Hispanic 6. Multi-racial 7. White 8. Economically Disadvantaged (Free & Reduced Lunch) 9. Limited English Proficient (LEP) 10. Students with Disabilities (SWD)
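The AYP rule described in the preceding slides can be sketched in a few lines: a school makes AYP only if every reported subgroup meets the state's annual proficiency target on its way to 100 percent by 2013-14. This is a minimal illustration, not NC's actual accountability formula; the subgroup names and rates below are invented for the example, and real AYP calculations also involve minimum subgroup sizes, participation rates, and other indicators not modeled here.

```python
# Hypothetical sketch of the AYP subgroup rule: a school makes
# Adequate Yearly Progress only if EVERY reported subgroup meets
# the state's annual proficiency target. Names and rates are
# invented for illustration; real NC targets and subgroup rules
# are set by the state accountability system.

def makes_ayp(subgroup_rates: dict, annual_target: float) -> bool:
    """Return True only if every subgroup's proficiency rate
    meets or exceeds the annual target."""
    return all(rate >= annual_target for rate in subgroup_rates.values())

rates = {
    "All Students": 0.82,
    "Economically Disadvantaged": 0.74,
    "Students with Disabilities": 0.61,
}

# With a hypothetical 70% annual target, one subgroup (61%)
# falls short, so the school as a whole does not make AYP.
print(makes_ayp(rates, 0.70))  # False
```

The design point the sketch captures is that AYP is conjunctive: strong school-wide averages cannot offset a single subgroup below target, which is how the law ties accountability to closing achievement gaps.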