
Assessment of Student Learning: Expanding the Toolbox


Presentation Transcript


  1. Assessment of Student Learning: Expanding the Toolbox Sarah Bunnell Ohio Wesleyan University

  2. A Framework for Thinking about Assessment • Backward Design (Wiggins & McTighe): • 1. Identify desired results. • What do you want to know? • 2. Determine acceptable evidence and appropriate methods for acquiring evidence. • 3. Then and only then… develop learning experiences and instruction. NOTE: It’s called backward design because you start with the goal FIRST.

  3. Different Types of “What do you want to know?” Questions 1. What works? Do my students learn better when I do X? How would I know? 2. What is? What is happening when students are learning or trying to learn? Where are they getting stuck? 3. What could be? “Visions of the possible”: What is the ideal outcome, and how might I help students get there? (From the Carnegie Academy for the Scholarship of Teaching and Learning; CASTL)

  4. Overarching Goals of EREN 1. Students will be able to apply scientific methodology (including hypothesis generation and experimental design) and recognize the importance of uncertainty for experiments at single and multiple sites. 2. Students will be able to identify factors that vary among sites across geographic or temporal scales, describe how these factors interact, and how they may affect ecological processes.   3. Students will be able to describe the value and techniques of scientific collaboration. 4. Students will demonstrate best practices in the accurate collection, recording, and ethical management of multi-site, multi-participant datasets. 5. Students will be able to analyze, interpret, and draw conclusions from data collected in multi-site studies.

  5. What works? What is? What could be? • Students will be able to apply scientific methodology (including hypothesis generation and experimental design) and recognize the importance of uncertainty for experiments at single and multiple sites.

  6. Revisit the Framework • Backward Design (Wiggins & McTighe): • 1. Identify desired results. • 2. Determine acceptable evidence and appropriate methods for acquiring evidence. • 3. Then and only then… develop learning experiences and instruction.

  7. Determining Acceptable Evidence – Where to look? • Where to find evidence of Product: • Exam, Quiz, in-class performances • Pre vs. post-test shifts in learning • Group presentations, lab reports • What about evidence of Process?

  8. Adapted from Bass & Elmendorf, 2007 [Diagram: visible Products at the Novice and Expert ends, with the invisible learning process in between labeled “MIRACLE”] • How can we better understand these intermediate, invisible processes? • How might we capture them? Foster them?

  9. Dimensions of Assessment [Diagram: assessments located along the question asked (What Works / What Is / What’s Possible), the grain size (Fine Grain to Holistic), and a Temporal Dimension] • Can be applied to questions of Product as well as questions of Process

  10. What Works? Example • Impact of teaching with “Two Step Process” on students’ ability to analyze graphical data • Pre- and post-tests; 4 institutions (240 students) • Competency rubric (“Improved”; “No change, satisfactory”; “No change, unsatisfactory”; or “Worsened”) • Improved ability to describe and create graphs • Consistent difficulty understanding IVs and DVs, detecting trends in noisy data, and interpreting interactions • Picone et al. (2007). Teaching Issues and Experiments in Ecology, Vol. 5
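As a rough illustration of how pre/post shifts like these can be tallied into the four competency categories, here is a minimal Python sketch. The category labels come from the slide; the rubric scale, the “satisfactory” cutoff, and the example scores are assumptions made for the example, not values from the Picone et al. study.

```python
# Sketch: tally pre/post rubric scores into the four competency categories
# named on the slide. The 1-5 scale, the satisfactory cutoff, and the data
# below are illustrative assumptions.
from collections import Counter

SATISFACTORY = 3  # assumed cutoff on an assumed 1-5 rubric scale

def classify(pre, post):
    if post > pre:
        return "Improved"
    if post < pre:
        return "Worsened"
    return "No change, satisfactory" if post >= SATISFACTORY else "No change, unsatisfactory"

# Hypothetical (student_id, pre_score, post_score) records
scores = [("s01", 2, 4), ("s02", 3, 3), ("s03", 2, 2), ("s04", 4, 3)]

tally = Counter(classify(pre, post) for _, pre, post in scores)
for category, count in tally.items():
    print(f"{category}: {count}")
```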

  11. What is? Example • Jim Sandefur - Foundations Mathematics course • Students were struggling with problem-solving • Used video-taped think-alouds • Observed that students who were struggling: • Got stuck right out of the gate • Didn’t try multiple strategies • Didn’t test examples

  12. What could be? Example: Fostering Uncertainty and Community • References to not knowing • Rates and qualities of questions • Student-to-student discussions

  13. Scales of Assessment: Fine-Grained Analysis Example • Lexical analysis (applied to discussions or journals) using Linguistic Inquiry and Word Count (LIWC; Pennebaker et al.) • Discussion Boards vs. Blogs • Students use discussion boards for clarification, and blogs for metacognitive reflection and connection
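LIWC itself is a commercial tool, so as a hedged stand-in, the sketch below shows the same kind of word-category counting applied to two sets of student posts. The category word lists and sample texts are invented for illustration and are not LIWC's actual dictionaries.

```python
# Sketch of a LIWC-style lexical analysis: count words from a few hand-picked
# categories in two corpora of student writing. Categories and texts are
# invented placeholders.
import re
from collections import Counter

CATEGORIES = {
    "uncertainty": {"maybe", "perhaps", "unsure", "might", "guess"},
    "insight": {"realize", "understand", "connect", "because", "therefore"},
}

def category_counts(texts):
    counts = Counter()
    for text in texts:
        for word in re.findall(r"[a-z']+", text.lower()):
            for name, vocab in CATEGORIES.items():
                if word in vocab:
                    counts[name] += 1
    return counts

discussion_posts = ["I'm unsure which test to use here, maybe a t-test?"]
blog_entries = ["I realize this connects to last week's lab because the trend repeats."]

print("discussion boards:", category_counts(discussion_posts))
print("blogs:", category_counts(blog_entries))
```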

  14. Scales of Assessment: Mid-Level Analyses • Secondary Analysis of High- or Low-Stakes Assessments • On what content area(s) or types of questions did most students perform well? Struggle? • For areas of struggle: Can you identify the bottleneck? • Functional, Affective, or Conceptual
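One minimal way to do this kind of secondary analysis is to group item-level scores from an existing exam by content area and rank the areas by average performance. The content-area tags and scores below are hypothetical.

```python
# Sketch: reuse item-level scores from an existing assessment, grouped by
# content area, to spot likely bottlenecks. Tags and scores are hypothetical.
from collections import defaultdict
from statistics import mean

# (content_area, fraction_of_points_earned) for each exam item
item_scores = [
    ("hypothesis generation", 0.82),
    ("experimental design", 0.74),
    ("graph interpretation", 0.41),
    ("graph interpretation", 0.38),
    ("uncertainty", 0.55),
]

by_area = defaultdict(list)
for area, score in item_scores:
    by_area[area].append(score)

# Lowest-performing areas first: candidates for a closer bottleneck analysis
for area, scores in sorted(by_area.items(), key=lambda kv: mean(kv[1])):
    print(f"{area}: mean = {mean(scores):.2f} ({len(scores)} items)")
```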

  15. Scales of Assessment: Holistic Analyses • Categorical judgments of performance • Capturing students’ ways of thinking (e.g., Marcia Baxter-Magolda) • Absolute, Transitional, Independent, and Contextual • Multiple Components of Performance → Rubrics • Often not the same as grading rubrics • Some very good rubrics already exist for use • AAC&U VALUE Rubrics

  16. Creating Your Own Rubrics • Some helpful initial steps: • As experts, complete the task to set the high end • Do an initial sort to clarify categories • Then, look for similarities within piles • Recommend focusing on 3-5 dimensions • Goal is clear, non-double-barreled descriptions of levels of performance, from novice to expert • Ideal reliability between raters is ~80%
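The ~80% target can be checked with simple percent agreement; Cohen's kappa adds a chance correction if some categories are used much more than others. A minimal sketch with invented ratings:

```python
# Sketch: percent agreement and Cohen's kappa between two raters scoring the
# same student work with a shared rubric. The ratings below are invented.
from collections import Counter

rater_a = ["novice", "developing", "proficient", "proficient", "expert", "developing"]
rater_b = ["novice", "proficient", "proficient", "proficient", "expert", "novice"]

n = len(rater_a)
agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Chance-corrected agreement (Cohen's kappa)
pa, pb = Counter(rater_a), Counter(rater_b)
p_chance = sum(pa[c] * pb[c] for c in set(rater_a) | set(rater_b)) / n**2
kappa = (agreement - p_chance) / (1 - p_chance)

print(f"percent agreement: {agreement:.0%}")  # target is roughly 80%
print(f"Cohen's kappa: {kappa:.2f}")
```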

  17. Some Final Assessment Factors to Consider • Temporal dimension of assessment • Rapid experience sampling, semester-long assessments, longer assessments? • Structure of Comparison Groups • Within- vs. Between-Groups • Links back to “What do you want to know?” questions • Not a “clean sample” • Individual difference variables of interest?
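The within- vs. between-groups choice maps onto paired vs. independent comparisons in the analysis. A hedged sketch of that distinction, assuming SciPy is available and using made-up scores:

```python
# Sketch: within- vs. between-groups comparisons in code. Paired tests compare
# the same students before and after; independent tests compare two different
# groups. Scores are made up; SciPy is assumed to be installed.
from scipy import stats

# Within-groups: same students, pre vs. post
pre = [2, 3, 2, 4, 3, 2]
post = [3, 4, 3, 4, 4, 3]
print(stats.ttest_rel(pre, post))  # paired t-test

# Between-groups: section that used the new activity vs. one that did not
section_a = [3, 4, 4, 5, 3, 4]
section_b = [2, 3, 3, 4, 3, 3]
print(stats.ttest_ind(section_a, section_b))  # independent-samples t-test
```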

  18. Final Thoughts and Recommendations • Sustainability and scale are key • Not only for faculty quality of life • “When I changed X, Y changed.” • Use what you already have • Meaningful, contextualized assessments • You already have a TON of data on student learning… don’t throw it away!

  19. Questions? Comments? Reflections?
