
Towards Collaborative Learning @ Scale



Presentation Transcript


  1. Towards Collaborative Learning @ Scale Marti A. Hearst UC Berkeley Joint work with Bjorn Hartmann, Armando Fox, Derrick Coetzee, Taek Lim Sponsored in part by a Google Social Interactions Grant

  2. 20 Million Minds Foundation

  3. MOOC Drawbacks • Retention • Learning (?) • Isolation (?)

  4. Collaborative Learning “Quick Thinks” Structured Groups

  5. Active & Peer Learning: The Evidence (Large Courses) • Pausing frequently during lecture for 2 minute discussions leads to better comprehension (1-2 grade points higher) • [Ruhl et al, Jrnl Teacher Ed. 1987] • A meta-analysis over 60 physics courses and 6,500 students found improvements of almost 2 std.dev. • [Hake, Am. J. Physics, 1998] • Controlled experiment with > 500 physics students found improved attendance, engagement, and more than twice the learning. • [Deslaurieset al., Science 2011]

  6. Active & Peer Learning: The Evidence (Large Courses) Even if no one in the group knows the answer, discussing improves results (genetics) [Smith et al, Science 323, Jan 2, 2009]

  7. Peer Learning Example • From Deslauriers et al.: • Pre-class reading assignments and quizzes • (CQ) In-class clicker questions with student-student discussion • (GT) Small-group active learning tasks • Turn in individual written response • (IF) Targeted in-class instructor feedback • Typical schedule for 50-min class: • CQ1, 2 min; IF, 4 min. • CQ2, 2 min; IF, 4 min; CQ2 (continued), 3 min; IF, 5 min; Revote CQ2, 1 min. • CQ3, 3 min; IF, 6 min. • GT1, 6 min; IF with a demonstration, 6 min; GT1 (continued), 4 min; and IF, 3 min.
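
As a quick check on the schedule above, the segment lengths sum to 49 minutes, which fits the 50-minute class. A minimal sketch (the labels mirror the slide; nothing here comes from the study's own materials):

```python
# Sum the class plan from Deslauriers et al. and confirm it fits a 50-minute slot.
# Segment labels and durations (in minutes) are copied from the schedule above.
schedule = [
    ("CQ1", 2), ("IF", 4),
    ("CQ2", 2), ("IF", 4), ("CQ2 cont.", 3), ("IF", 5), ("Revote CQ2", 1),
    ("CQ3", 3), ("IF", 6),
    ("GT1", 6), ("IF + demo", 6), ("GT1 cont.", 4), ("IF", 3),
]

total = sum(minutes for _, minutes in schedule)
print(f"Planned time: {total} of 50 minutes")  # prints 49
```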

  8. Results for Controlled Experiment • From Deslauriers et al., for a one-week intervention

  9. Peer Learning (Smaller Classes)

  10. Peer Learning Core Ideas • Students learn better by explaining to others • Extended group work must be structured • Must promote both: • Positive Interdependence • Individual Accountability • Group makeup: • Best if heterogeneous • Groups can change frequently

  11. In-Person Course: Applied NLP

  12. In-Person Course: Applied NLP

  13. In-Person Course: Applied NLP

  14. After 4 Weeks

  15. After 12 Weeks

  16. What Can Be Improved? More short assignments!

  17. Project goal: MOOCs + Peer Learning. How to do it?

  18. First Step: Try MTurk • Hypothesis: • People in groups will get answers right more often than those working alone • Expectations: • The chats will be on topic • People will try to solve the problems

  19. First Step: Try MTurk • Issues? • How to motivate the workers? • How to coordinate the workers? • What kinds of questions to use? • How to structure the conversation?

  20. How To Motivate? • Experimental Manipulation: • If entire group gets the right answer, everyone gets a bonus • Control Group: • No mention of a bonus (no incentive for helping others)
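
To make the incentive concrete, here is a minimal sketch of the bonus rule described above; the function name, data shapes, and bonus amount are illustrative assumptions, not details from the experiment.

```python
def group_bonus(final_answers: dict[str, str], correct: str,
                bonus: float = 0.50) -> dict[str, float]:
    """Pay each worker the bonus only if the entire group answered correctly.

    final_answers maps worker_id -> chosen option; `correct` is the right option.
    The 0.50 bonus amount is a placeholder, not the figure used in the study.
    """
    all_correct = all(answer == correct for answer in final_answers.values())
    return {worker: (bonus if all_correct else 0.0) for worker in final_answers}

# One wrong answer means nobody in the group earns the bonus,
# which is what creates the incentive to help other group members.
print(group_bonus({"w1": "B", "w2": "B", "w3": "C"}, correct="B"))
# {'w1': 0.0, 'w2': 0.0, 'w3': 0.0}
```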

  21. MOOC Arrival Times, First Question, First Lecture

  22. MOOC Arrival Times, Last Question, Last Lecture

  23. Question Type: GMAT Critical Reasoning

  24. System Workflow • Real-Time Crowdsourcing: Lasecki et al., CSCW 2013; Bernstein et al., UIST 2011
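
The transcript does not include the workflow diagram itself, so the following is only an illustrative sketch of the real-time crowdsourcing pattern cited above: hold arriving workers in a lobby, form a chat group when enough are waiting, and let anyone who waits too long proceed alone. The names, group size, and timeout are all assumptions, not the system's actual implementation.

```python
import time
from collections import deque

GROUP_SIZE = 3      # target group size (assumption, consistent with the sizes reported later)
WAIT_SECONDS = 60   # maximum lobby wait before a worker proceeds alone (hypothetical)

lobby = deque()     # queue of (worker_id, arrival_time)

def on_worker_arrival(worker_id):
    """Add a worker to the lobby; return a full group if one can be formed."""
    lobby.append((worker_id, time.time()))
    if len(lobby) >= GROUP_SIZE:
        return [lobby.popleft()[0] for _ in range(GROUP_SIZE)]
    return None  # keep waiting

def flush_timeouts():
    """Workers who have waited past the timeout are released to work solo."""
    now = time.time()
    solo = []
    while lobby and now - lobby[0][1] > WAIT_SECONDS:
        solo.append(lobby.popleft()[0])
    return solo
```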

  25. Interaction: Small-Group Chat • The computer-mediated communication (CMC) literature suggests the affordances are appropriate • Video on next slide

  26. Experimental Setup • 226 worker sessions lasting 12.8 minutes on average (15.0 minutes excluding solo workers), with 169 solo workers, 25 discussions of size 2, and 73 discussions of size 3. • Each session consisted of 2 questions: 2 minutes alone, 5 minutes in discussion, 20 seconds for the final answer choice. • 56% of the 452 attempts to answer questions were answered correctly.
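
The session structure is simple enough to encode directly; a small sketch (constant names are mine) that also confirms the attempt count implied above: 226 sessions with 2 questions each gives the 452 attempts the slide reports.

```python
# Per-question phases as described above (durations in seconds).
PHASES = [("solo attempt", 120), ("group discussion", 300), ("final answer", 20)]

SESSIONS = 226
QUESTIONS_PER_SESSION = 2

print(SESSIONS * QUESTIONS_PER_SESSION)            # 452 attempts, as reported
per_question_min = sum(t for _, t in PHASES) / 60
print(f"{per_question_min:.1f} min per question")  # ~7.3 min, so ~14.7 min for a
                                                   # 2-question group session, close
                                                   # to the 15.0-minute average above
```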

  27. Results • All hypotheses confirmed • Engaging in discussion leads to more correct answers. • The bonus incentive leads to more correct changed answers. • The participants have substantive discussions. • Of interest, but not a result: • More discussion is correlated with more correct answers

  28. Results • 138 workers (61%) kept their original choices unchanged on both questions, 74 (33%) changed one answer after the discussion, and 14 (6%) changed both. • 50% of workers who changed their answers improved their score and 18% lowered it; 86% of workers who changed both answers improved their score.

  29. Results • Engaging in Discussion Leads to More Correct Answers • The mean percentage of correct responses is higher in chatrooms with more than one student (Fisher’s exact test, p < 0.01).

  30. Results • Bonus Incentive Leads to More Correct Changed Answers: • In the control condition, participants changed 33 out of 121 answers (27%); in the bonus condition they changed 44 out of 139 answers (32%). No significant difference (Fisher’s exact test, two-tailed p = 0.50). • However, 14 answers (12% of 121) changed from incorrect to correct in the control condition, while 31 (22% of 139) changed from incorrect to correct in the bonus condition, a significant difference (Fisher’s exact test, two-tailed p < 0.04).
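
For readers who want to re-run the two comparisons above, here is a sketch using scipy.stats.fisher_exact, assuming the denominators are the total answers per condition (121 control, 139 bonus). The slide's reported values (p = 0.50 and p < 0.04) are what these tables should reproduce, but the exact cell counts are my reading of the text.

```python
from scipy.stats import fisher_exact

# Changed vs. unchanged answers, control vs. bonus condition.
changed = [[33, 121 - 33],   # control: 33 of 121 answers changed
           [44, 139 - 44]]   # bonus:   44 of 139 answers changed
_, p_changed = fisher_exact(changed, alternative="two-sided")

# Answers that changed from incorrect to correct, out of all answers per condition.
improved = [[14, 121 - 14],
            [31, 139 - 31]]
_, p_improved = fisher_exact(improved, alternative="two-sided")

print(f"change-rate p = {p_changed:.2f}")   # slide reports p = 0.50 (not significant)
print(f"improvement p = {p_improved:.3f}")  # slide reports p < 0.04 (significant)
```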

  31. Results • Participants Have Substantive Discussions • 3 independent raters scored each discussion on a scale of 1 to 4 • 73 of 98 discussions (74%) were rated 4 by all raters • 80 (82%) had a median rating of 4 (Spearman’s rho = 0.65)

  32. Next Steps • Put this into MOOCs! • We have an experiment underway right now.

  33. Other MOOC Projects • Forum Usage • Role of Instructor • Untangling Correlation from Causation • MOOC Instructor Dashboards

  34. Thank you! Marti A. Hearst UC Berkeley Joint work with Bjorn Hartmann, Armando Fox, Derrick Coetzee, Taek Lim Sponsored in part by a Google Social Interactions Grant
