
Research in Practice: Using Better Research Design and Evidence to Evaluate the Impact of Technology on Student Outcomes

Gail Wisan, Ph.D., University Director of Assessment, Institutional Effectiveness and Analysis, Florida Atlantic University. Presented at the FAU Faculty Technology Learning Community, Boca Raton, FL, November 12, 2010.


Presentation Transcript


  1. Research in Practice: Using Better Research Design and Evidence to Evaluate the Impact of Technology on Student Outcomes. Gail Wisan, Ph.D., University Director of Assessment, Institutional Effectiveness and Analysis, Florida Atlantic University. Presented at the FAU Faculty Technology Learning Community, Boca Raton, FL, November 12, 2010.

  2. Department of Education Evaluates Evidence

  3. Perspective/point of view: Evaluation research should drive outcomes assessment because: • it helps identify what works; • it provides direct evidence; • it helps improve educational outcomes.

  4. Overview of Presentation: Benefits/Learning Outcomes. After this presentation, you should be able to: • explain evaluation research; • identify the benefits of evaluation research; • explain the use of experimental and quasi-experimental design evaluation research in education assessment; • apply evaluation research strategies to outcomes assessment at your institution to improve student learning outcomes. Assessment should seek systematic evidence of the effectiveness of existing programs, pedagogies, methodologies, and approaches to improve student learning outcomes and instill a cycle of continuous improvement.

  5. Outcomes Assessment and Evaluation Research • Outcomes assessment, at its most effective, incorporates the tools and methods of evaluation research: 1. Outcomes Evaluation Research; 2. Field Experiment Research

  6. Outcomes Assessment and Evaluation Research Field Experiment Research assesses the effects of new programs, pedagogies, and educational strategies on students’ learning, competencies, and skills

  7. Outcomes Assessment and Evaluation Research Outcomes Evaluation Research assesses the effects of existing programs, pedagogies, and educational strategies on students’ learning, competencies, and skills

  8. Outcomes Assessment and Evaluation Research •  Evaluation Research can answer the question: How can Assessment Improve Education?

  9. Research Design Examples: Overview Notation: X, O, R • Experimental Design • Pre-Experimental Design and its problems in educational research 1. Threats to internal validity (Is X really having an effect?) 2. Threats to external validity (generalizability)

  10. Research Design Examples: Quasi-Experimental Designs Versus Pre-Experimental Designs • Quasi-experimental designs give better answers: 1. better solutions to internal validity threats (Is X really having an effect?); 2. better solutions to external validity threats (generalizability)

  11. Notation on Diagrams • An X will represent the exposure of a group to an experimental variable or teaching method, the effects of which are to be measured. • O will refer to an observation or measurement. • R refers to random assignment.

  12. Research Design • How Quasi-experimental Design helps to solve the problems of Pre-experimental Design

  13. Experimental Designs • Pretest-Posttest Control Group Design: random assignment to two groups
    R O X O
    R O   O

  14. Experimental Designs • Pretest-Posttest Control Group Design
    R O X O
    R O   O
  • Sources of Invalidity • External • Interaction of Testing and X • Interaction of Selection and X ? • Reactive Arrangements ?

  15. Experimental Designs • Posttest-Only Control Group Design
    R X O
    R   O

  16. Experimental Designs • Posttest-Only Control Group Design
    R X O
    R   O
  • Sources of Invalidity • External • Interaction of Selection and X ? • Reactive Arrangements ?

  17. Pre-Experimental Designs • One-Shot Case Study X O • Sources of Invalidity • Internal • History • Maturation • Selection • Mortality • External • Interaction of Selection and X

  18. Pre-Experimental Designs • One-Group Pretest-Posttest Design O X O • Sources of Invalidity • Internal • History • Maturation • Testing • Instrumentation • Interaction of Selection and Maturation, etc. • Regression ? • External • Interaction of Testing and X • Interaction of Selection and X • Reactive Arrangements ?

  19. Pre-Experimental Designs • Static-Group Comparison
    X O
      O
  • Sources of Invalidity • Internal • Selection • Mortality • Interaction of Selection and Maturation, etc. • Maturation ? • External • Interaction of Selection and X

  20. Threats to Internal Validity • History, the specific events occurring between the first and second measurement in addition to the experimental variable. • Maturation, processes within the respondents operating as a function of the passage of time per se (not specific to the particular events), including growing older, growing hungrier, growing more tired etc. • Testing, the effects of taking a test upon the scores of a second testing.

  21. Threats to Internal Validity • Instrumentation, in which changes in the calibration of a measuring instrument or changes in the observers or scorers used, may produce changes in the obtained measurements. • Regression. This operates where groups have been selected on the basis of their extreme scores.

  22. Threats to External Validity • Interaction of Testing and X. A pretest might increase/decrease the respondent’s sensitivity or responsiveness to the experimental variable, making the results obtained for a pretested population unrepresentative for the unpretested universe from which the respondents were selected. • Interaction of Selection and X

  23. Threats to External Validity • Reactive Arrangements. This would preclude generalization about the effect of the experimental variable upon persons being exposed to it in nonexperimental settings. • Multiple-X Interference. This is likely to occur whenever multiple treatments are applied to the same respondents, because the effects of prior treatments are not usually erasable.

  24. Threats to Internal Validity • Selection. There could be biases resulting in differential selection of respondents for the comparison groups. • Mortality. This refers to differential loss of respondents from the comparison groups. • Interaction of Selection and Maturation, etc., which in certain of the multiple-group quasi-experimental designs might be mistaken for the effect of the experimental variable.

  25. Quasi-Experimental Designs: • Nonequivalent Control Group Design
    O X O
    O   O

  26. Quasi-Experimental Designs: • Nonequivalent Control Group Design: Comparing Math Classes Example
    O X O
    O   O

  27. Quasi-Experimental Designs • Nonequivalent Control Group Design
    O X O
    O   O
  • Sources of Invalidity • Internal • Interaction of Selection and Maturation, etc. • Regression ? • External • Interaction of Testing and X • Interaction of Selection and X ? • Reactive Arrangements ?

  28. Examples of Other Quasi-Experimental Designs • Time Series
    O O O O X O O O O
  • Multiple Time Series
    O O O O X O O O O
    O O O O   O O O O

  29. Quasi-Experimental Designs • Time Series
    O O O O X O O O O
  • Sources of Invalidity • Internal • History • Instrumentation ? • External • Interaction of Testing and X • Interaction of Selection and X ? • Reactive Arrangements ?

  30. U.S. Dep’t. of Ed Focuses on Level of Evidence • U.S. Department of Education highlights “What Works” in educational strategies; • “What works” is based upon assessment of level of evidence provided by educational research: evaluation research

  31. Department of Education Evaluates Evidence

  32. General Education & Learning Outcomes Assessment: The National Context. At the National Symposium on Student Success, Secretary of Education Margaret Spellings and others called on colleges to measure and provide evidence of student learning. "Measuring Up" national report cards by state: little data on whether students are learning. • Outcomes assessment has two purposes: • Accountability (standardized national tests?) • Assessment/Effectiveness: Are students learning? How much?

  33. Performing Assessment as Research in Practice. Assessment should seek systematic evidence of the effectiveness of existing programs, pedagogies, methodologies, and approaches to improve student learning outcomes and instill a cycle of continuous improvement. Implementation strategy: aim for quasi-experimental designs (or experimental designs).

  34. Revitalizing Assessment: Consider These Next Steps • Encourage comparing teaching strategies when faculty are teaching more than one section of the same course • Communicate and use results

  35. QUESTIONS? Please email gwisan@fau.edu
