Developing a Reading Strategy ITS: Competing Constraints from Theory, Technology, Pedagogy, and Experiments
Danielle S. McNamara, Irwin Levinstein, Chutima Boonthum, and Srinivasa Pillarisetti
University of Memphis Psychology / Institute for Intelligent Systems
Funded by the IES Reading Program and the NSF IERI Program
iSTART Investigators
• Co-PIs and Senior Researchers: Irwin Levinstein (ODU), Keith Millis (NIU), Joe Magliano (NIU), Grant Sinclair, Katja Wiemer-Hastings (NIU), Max Louwerse, Art Graesser
• Postdocs & Staff: Cedrick Bellissens, Rachel Best, Chutima Boonthum, Zhiqiang Cai, David Dufty, Joyce Kim, Chris Kurby, Phil McCarthy, Tenaha O'Reilly, Yasuhiro Ozuru, Margie Petrowski, Srinivasa Pillarisetti, Roger Taylor
• Many students at Memphis, ODU, and NIU
Interactive Strategy Training for Active Reading and Thinking (iSTART)
• Currently provides self-explanation reading strategy training that
  • combines training to self-explain text and training to use active reading strategies
  • is adaptive
  • engages the trainee in an interactive learning environment using animated agents
• Goal is to provide, via the internet, a variety of empirically supported interventions to improve strategies for reading and thinking
McNamara, Levinstein, & Boonthum, 2004
iSTART Modules
• Introduction Module
  • Teacher-Agent and two Student-Agents discuss reading strategies
• Demonstration Module
  • Genie self-explains a text
  • Merlin provides feedback
  • Trainee is asked to identify the strategies used
• Practice Module
  • Trainee types self-explanations of a science text
  • Merlin guides the trainee and provides feedback
Based on SERT: Self-Explanation Reading Training
• Training to self-explain text using reading strategies (e.g., paraphrasing, bridging, elaborating)
• Self-explanation: say aloud or type an explanation of the text
McNamara, 2004
History
• 1996–2002: Funded by McDonnell and ODU
  • Develop and test SERT
• 2000–2006: Funded by NSF IERI
  • Test SERT in high-school classrooms
  • Develop iSTART
• 2004–2008: Funded by IES Reading
  • Increase adaptivity: add texts, modules, student model
  • Develop teacher interface
• 2000–2002: Developed iSTART v1.0
• 2002–2004: Developed iSTART v2.0
• 2004–2006: Developed iSTART v3.0
• Currently developing the Teacher Interface
Overarching Goals and Considerations
• Follow the original SERT script as closely as possible
• But take advantage of the computer environment, which
  • facilitates individualized interaction
  • enables more fine-tuned feedback
  • increases time for practice
  • escapes (some) social dynamics of the classroom
• Anticipated older computers in high schools
• Anticipated recursive development and frequent revisions
Initial Decision Making (2000–2001)
• Architecture (e.g., software on the server)
• Programs – nonproprietary: Java, MySQL
• Agents vs. Text
• Pedagogical Agents vs. Real People
• Synthesized Voices vs. Human Voices
• Full-bodied Agents vs. Talking Heads
• Cartoon-like vs. Human-like
• Develop Agents vs. Use Microsoft Agents
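To make the nonproprietary server-side choice concrete, the sketch below shows how a Java program might log a trainee's typed self-explanation to a MySQL table over standard JDBC. This is a minimal illustration under stated assumptions, not the actual iSTART code: the class, table, and column names are invented for this example, and it assumes the MySQL Connector/J driver is on the classpath.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Hypothetical sketch of server-side response logging in a Java + MySQL
// architecture. Schema and names are illustrative, not the iSTART schema.
public class ResponseLogger {

    // Assumed connection URL; a real deployment would read this from config.
    private static final String DB_URL = "jdbc:mysql://localhost:3306/istart";

    // Store one typed self-explanation so it can be scored later and
    // used to adapt feedback to the individual trainee.
    public static void logSelfExplanation(String studentId, int textId,
                                          String explanation) throws SQLException {
        try (Connection conn = DriverManager.getConnection(DB_URL, "user", "password");
             PreparedStatement stmt = conn.prepareStatement(
                     "INSERT INTO self_explanations (student_id, text_id, explanation) "
                   + "VALUES (?, ?, ?)")) {
            stmt.setString(1, studentId);
            stmt.setInt(2, textId);
            stmt.setString(3, explanation);
            stmt.executeUpdate();
        }
    }
}
```

Keeping trainee data on the server in this fashion is what makes the individualized interaction, fine-tuned feedback, and later student model described above possible.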
Version 2.0 Changes
• Presentation order of the five strategies
• Mini-demonstration
• Dialogue scripts (e.g., more examples, short dialogues)
• Multiple-choice quizzes revised
• Synthesized voices improved
• Revised interface
Changed background colors to reduce distraction (probably not one of our best moves)
Developed pause and repeat buttons; scroll bars replaced buttons.
Boxes, characters, text, and speech bubbles no longer overlap
Motivation for Changes
• Human factors
• Observations during testing
• Data and theory
Theory vs. Pedagogy vs. Data
• Come up with more ideas than we can test
  • But have to avoid the kitchen sink
  • Can't make every modification you think of
• Progress is made by relating ideas to theory
  • But testing them remains complicated
  • And not all ideas turn out to be good ones
• Testing the revisions
  • We ain't in Experimental Psychology Land anymore
  • So, hard to know if each revision 'works'
• Time constraints
Two Examples
• Data indicated that there was a problem
• Theory pointed to solutions
• Testing told the tale
Demonstration Section
• Students were asked to do a wide range of tasks
  • Identify and locate strategies
  • Locate the text that is the source of each strategy
  • Point, click, highlight
• College students did fine; high-school students, not so much
• Revised to increase scaffolding and reduce working-memory (WM) load
• Data indicate that the changes were effective: students increase in levels
Increase Paraphrasing
• Hypothesized that less skilled students needed more practice at basic skills
• Developed a Paraphrasing Practice module
• Conducted an experiment
  • Students received dedicated practice in paraphrasing (without self-explanation)
  • Predicted that less skilled students would benefit from the extra paraphrasing practice
• Au contraire: the more skilled students benefited from the extra practice, and the less skilled students benefited more from the version without it
Can you test it?
• The new Demonstration included a host of changes
  • scaffolding, a reduced number of choices, visually chunked self-explanations, etc.
• The Paraphrase module consisted of a single modification
Considerations
• The fun factor
  • Must be inherently interesting or challenging, but not too much
• The boredom factor
  • Can't be too repetitious or too long
• The embarrassment factor
  • e.g., 'Genie to the rescue' failed because other students saw the rescued students singled out
• Avoiding distractions
  • Developed a means for students to adjust the volume, pitch, and speed of the voice, but never used it because students would just play with it
• Theory vs. intuition (stay the path)
Thanks!
For more information and papers: http://csep.psyc.memphis.edu