The Nuts & Bolts of Assessment Scott Jackson Dantley, Ph.D. Coppin State University, Baltimore, Maryland NCATE's Institutional Orientation, Fall 2009
What is Assessment? Assessment is the ongoing process of: • Establishing clear, measurable expected outcomes of student learning. • Systematically gathering, analyzing, and interpreting evidence to determine how well student learning matches our expectations. • Ensuring that students have sufficient opportunities to achieve those outcomes. • Using the resulting information to understand and improve student learning. (Suskie, 2004)
Definition of Assessment • An evaluated activity or task used by a program or unit to determine the extent to which specific learning proficiencies, outcomes, or standards have been mastered by candidates. Assessments usually include an instrument that details the task or activity and a scoring guide used to evaluate the task or activity. (NCATE Professional Standards, 2008)
Goal(s) of Assessment: • The goal of assessment is to transform the institution into one that creates the best conditions for learning, encourages best practices, and inspires creativity and innovation. • Assessment can serve more than one entity, such as NCATE, university needs, and regional and state requirements (e.g., Middle States, SAAC).
Assessment Types • Traditional: paper-and-pencil instruments such as true-false, fill-in-the-blank, multiple-choice, and short-answer tests • Standardized: the administration, scoring, and test itself are fixed, which allows for comparability; usually multiple-choice (e.g., large-scale assessments) • Alternative: tend to be criterion-referenced; can include teacher observations of student performance, student interviews, student self-assessments, presentations, projects, and concept maps • Authentic: based on activities that represent actual progress toward instructional goals and reflect tasks typical of classrooms and real-life settings (e.g., a test for driving a car in which candidates actually drive a car) • Performance: work assignments or tasks are used to obtain information about how well a student has learned; may be alternative assessments
Direct vs. Indirect Measures • Direct Measures: evidence that demonstrates precisely what students have or have not learned. The evidence is generally highly visible and self-explanatory. • Clear evidence demonstrates that actual learning has occurred related to specific content or skills (MSCHE, 2007). • Indirect Measures: provide less direct evidence of what students are learning; they are less clear and less persuasive without additional evidence. • Reveal characteristics associated with learning, but only imply that learning has occurred (MSCHE, 2007).
Direct Measures • Pass rates on licensure exams (e.g., Praxis, NCLEX); can be used as key assessments for program reviews* • Capstone experiences such as class projects, research papers, dissertations, theses, and exhibits, all with valid rubrics for judging quality • Performance projects with rubrics* • Portfolios (electronic)* • School-level assessments such as employer evaluations of candidate performance • Other related assessments that directly measure defined student learning outcomes *(key assessment)
Indirect Measures (Student Learning) • Alumni perceptions of career satisfaction • Alumni satisfaction with college experiences or their learning, collected through focus groups, exit interviews, and/or surveys • Recognitions, awards, and honors of students and candidates • Graduate school placement rates • Course grades • End-of-course evaluations examining students' perceptions of their learning (Handout: Direct vs. Indirect Measures)
Good Assessments • Give us useful information. • Give us reasonably accurate, truthful information. • Are fair to all students. • Are ethical and protect the privacy and dignity of those involved. • Are systematized and planned (i.e., linked to institutional goals). • Are cost-effective, yielding value that justifies the time and expense we put into them. (Suskie, 2004)
Key Assessments (Example #1): Program Review (Science) • National licensure results (Assessment #1) • Content knowledge assessment: general knowledge of the discipline (Assessment #2) • Pedagogical knowledge, skills, and dispositions (Assessment #3) • Student teaching assessments: field experiences (Assessment #4) • Student learning (Assessment #5) • Ethics and safety assessments (Assessment #6) • Research and investigation assessments (Assessment #7) • Content knowledge: contextual content (Assessment #8)
Key Assessments (Example #2): Program Review (Science) • Praxis II content (Assessment #1) • Science content G.P.A.: final grades in science content (Assessment #2) • Unit plan and lesson plans with evaluation rubric (Assessment #3) • Field experiences with rubrics (Assessment #4) • Portfolio project with rubric (Assessment #5) • Methods project with rubric (Assessment #6) • Praxis II pedagogy (Assessment #7) • Action research project with rubrics (Assessment #8)
Assessment as a Four-Step Continuous Cycle 1. Establish learning goals. 2. Provide learning opportunities. 3. Assess student learning: direct measures evaluate student work (e.g., exams, papers, projects); indirect measures include asking students or alumni how well they thought they learned and tracking graduate school or job placement rates. 4. Use the results for improvement. (Suskie, 2004)
Assessment Systems: Features • Includes assessment plans and timelines for data collection and analysis related to candidates and unit operations. • Requires unit collaboration with the professional community. • Candidate assessments align with professional, state, and institutional standards. • Uses multiple indicators (e.g., general education knowledge, content mastery, life and work experiences), all aligned to the point of entry into programs (i.e., freshman, junior, etc.). • Has multiple decision points (e.g., at entry, prior to clinical practice, and at completion). • The unit administers multiple assessments in a variety of forms, aligned with candidate proficiencies: for example, end-of-course evaluations, written essays or topical papers, and, for instructional purposes, projects, journals, observations by faculty, and comments from cooperating teachers. • Establishes scoring guides and rubrics. • Ensures credibility: free of bias, fair, consistent, and accurate. • Data from the system are used to make programmatic improvements for the unit and its programs in the areas of instruction, field experiences, practices, and assessments.
Why Assess Student Learning? • To improve teaching and learning by answering questions such as: • Are our students learning what we think is important? • Are they learning what they need to succeed in their future endeavors? • Should our curriculum and/or teaching strategies be modified? • To demonstrate the effectiveness of teaching and learning efforts to governing boards and external audiences (accrediting organizations and foundations).
Driving Forces of Assessment • Federal Requirements for Regional Accreditation • Disciplinary Accreditation Requirements • Calls for Accountability • Different learning styles of students • The development of thinking and performance skills and attitudes is increasingly stressed
Benefits of Assessment • Students benefit because assessment information gives them documentation of what they have learned, which they can use to apply for jobs, awards, and programs. • Assessment activities help faculty see how their courses link together to form coherent programs and how the courses they teach contribute to student success in subsequent pursuits. • Administrators benefit because assessment can help ensure that institutional resources are being spent in the most effective ways possible, where they will have the greatest impact on student learning.
Promoting an Assessment Culture • Campus leaders must be on board • Focus on teaching and learning rather than assessment • Empower faculty and staff working on assessment • Provide opportunities to learn about assessment • Set clear expectations and be flexible • Make assessment relevant
References • Middle States Commission on Higher Education. (2007). Student learning assessment: Options and resources (2nd ed.). Philadelphia, PA. • National Council for Accreditation of Teacher Education. (2008). Professional standards for the accreditation of teacher preparation institutions. • Suskie, L. (2004). Assessing student learning: A common sense guide. Bolton, MA: Anker Publishing Company.
The Nuts & Bolts of Assessment Scott Jackson Dantley, Ph.D. Associate Vice President of Institutional Effectiveness and Planning; Professor of Chemistry sdantley@coppin.edu 410.951.3828 Coppin State University www.coppin.edu