C2Learn: Efficacy of a Direct Learning Tool for Deaf and Hard of Hearing Students
Brian Trager and Raja Kushalnagar
National Technical Institute for the Deaf, Rochester Institute of Technology
FITL 2011 - May 25, 2011
“Pictures, beside the pleasure they give, act as definers of the text, and convey far more correct ideas than could be gained from words alone.” – James H. Logan (1870)
Learning Styles
• Three different learning styles: auditory, visual, kinesthetic
• Deaf students are inherently visual learners due to sensory compensation
• Visual learners want to see the process of how things are done: “Can I see that again?”
• Presentations should show clear demonstrations: concrete examples, graphs, charts, and visual representations of abstract concepts
See to Learn
• Deaf students rely solely on vision to gather information
• Adequate time is needed to gather all of that information
• Several visual sources of information compete for attention:
  • Visual presentation (PowerPoint, whiteboard, video without CC)
  • Interpreter / C-Print
  • Instructor
• The result is visual dispersion: attention must be divided among these sources
Direct vs. Mediated Instruction
• Direct instruction: information from the instructional source is presented directly to the audience
• Mediated instruction: information is presented through an interpreter, usually in a mainstream environment
Direct vs. Mediated Instruction – Cont’d.
• Comprehension test of lecture content (Marschark & Sapere, 2004)
• A highly qualified interpreter was provided
• Deaf students consistently scored lower than their hearing peers
• Access services are not at fault
• Direct instruction cannot be replicated with mediated instruction, even under optimal conditions
Relational vs. Item-Specific Processing
• Relational processing: the ability to relate distinct concepts and ideas
• Item-specific processing: focusing on individual ideas and concepts
• Deaf individuals appear to favor item-specific processing (Marschark, 2002)
• Programming concepts such as objects and classes require relational processing (see the Java sketch after this slide)
• Deaf and hearing students score equally when recalling individual items
• Deaf students lag behind hearing peers when recalling ideas in relation to each other
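To make the relational-processing demand concrete, here is a minimal Java sketch; it is not taken from C2Learn, and the class names are illustrative. Understanding it means relating Dog to the Animal class it extends and relating the rex object back to both definitions, rather than memorizing each line in isolation.

// Hypothetical sketch (not from C2Learn): class names are illustrative.
class Animal {
    String name;
    Animal(String name) { this.name = name; }
    String speak() { return name + " makes a sound"; }
}

class Dog extends Animal {                  // Dog is defined in relation to Animal
    Dog(String name) { super(name); }
    @Override
    String speak() { return name + " barks"; }   // replaces the inherited behavior
}

public class RelationalExample {
    public static void main(String[] args) {
        Animal rex = new Dog("Rex");        // the object ties together both class definitions
        System.out.println(rex.speak());    // prints "Rex barks" through dynamic dispatch
    }
}

Recalling any single line here is item-specific; predicting the program's output requires relating the pieces to each other, which is the kind of processing the slide refers to.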
Instructional Tool Study
• Study conducted by Dowaliby and Lang (1999)
• Various multimedia strategies examined across 11 lessons on the human eye
• 144 deaf participants, split into three categories based on reading skill (low, middle, high)
Instructional Tool Study – Cont’d.
• Text only: deaf participants scored 6.9 on average
• Adjunct questions proved to be the most effective single condition (a 2.8-point increase over text only)
• The full condition (all adjunct aids) produced a 3.7-point increase over text only
• Participants with low reading skills who received full adjunct aids scored higher than high-skill readers who received text only
• The instructional tool proved effective, with scores increasing from pretest to posttest
• Adjunct questions were most effective for low- to moderate-skill readers
• Sign movies were most effective for highly skilled readers
C2Learn Application
• User-driven application
• Four lessons, focusing on decisions and advanced decisions in Java (a representative sketch of this material follows this slide)
• An average of 12 “slides” per lesson
• 50+ minutes of video
• 30+ adjunct questions
• 10+ animated examples
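As a rough indication of what the lessons cover (this snippet is not taken from C2Learn itself; the scenario and thresholds are illustrative), the decisions lessons deal with Java constructs such as the following if / else-if chain:

// Illustrative only: an if / else-if decision of the kind the lessons teach.
public class DecisionExample {
    public static void main(String[] args) {
        int score = 83;
        char grade;

        if (score >= 90) {            // simple decision
            grade = 'A';
        } else if (score >= 80) {     // chained ("advanced") decision
            grade = 'B';
        } else if (score >= 70) {
            grade = 'C';
        } else {
            grade = 'F';
        }

        System.out.println("Score " + score + " earns grade " + grade);
    }
}

Each condition is evaluated in order and the first true branch determines the grade; running the example prints "Score 83 earns grade B".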
Methodology
• A research study was undertaken to investigate the effectiveness of the C2Learn software
• 41 participants: deaf and hard-of-hearing students registered in either an introductory programming course or a similar bridge course
Methodology – Cont’d.
• Test #1: administered before the C2Learn software was given to participants
• Learning tool: participants were instructed to start with the if statement module
• Test #2: administered after participants completed all modules in the C2Learn software
• Learning tool survey: gathered participants’ thoughts and feedback
Test Results
• Each test has 14 questions: a combination of multiple-choice and fill-in-the-blank questions
• One point is awarded for each question answered correctly
[Chart: Comparison of average test scores (percentage) among participants]
Gain Scores

Hearing Instructor
  CP*      Mean    Std. Error
  ASL      2.000   0.672
  SimCom   3.000   1.778
  Oral     4.000   1.778
  Cohen’s d = -0.61

Deaf Instructor
  CP*      Mean    Std. Error
  ASL      1.267   0.459
  SimCom   0.833   0.513
  Oral     2.000   0.759

* CP = Communication Preference
Further Analysis
• Participants were split into two groups based on their scores: a low-scoring group and a high-scoring group
• The high-scoring group showed a medium effect size: Cohen’s d = 0.58618572
• The low-scoring group showed a large effect size: Cohen’s d = 0.86822513 (the conventional formula appears after this slide)
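For readers unfamiliar with the statistic, the conventional two-sample Cohen's d is the difference in group means divided by the pooled standard deviation; the slides do not state which exact variant was used, so take this as the standard definition rather than the study's own formula:

d = \frac{\bar{x}_1 - \bar{x}_2}{s_p},
\qquad
s_p = \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}}

By the usual rule of thumb, |d| around 0.5 counts as a medium effect and |d| around 0.8 as a large effect, which matches the labels used on this slide.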
Test Results – Conclusion
• Low-performing learners benefit most from C2Learn
• Communication preference had no effect on the results
• This approach may be applicable to ESL students and visual learners in general
References
• Dowaliby, F., & Lang, H. (1999). Adjunct aids in instructional prose: A multimedia study with deaf college students. Journal of Deaf Studies and Deaf Education, 4(4), 270-282. doi: 10.1093/deafed/4.4.270
• Lang, H. G. (2002). Higher education for deaf students: Research priorities in the new millennium. Journal of Deaf Studies and Deaf Education, 7(4), 267-280. doi: 10.1093/deafed/7.4.267
• Marschark, M., Pelz, J. B., Convertino, C., Sapere, P., Arndt, M. E., & Seewagen, R. (2005). Classroom interpreting and visual information processing in mainstream education for deaf students: Live or Memorex(R)? American Educational Research Journal, 42(4), 727-761. doi: 10.3102/00028312042004727
• Marschark, M., Leigh, G., Sapere, P., Burnham, D., Convertino, C., Stinson, M., et al. (2006). Benefits of sign language interpreting and text alternatives for deaf students’ classroom learning. Journal of Deaf Studies and Deaf Education, 11(4), 421-437. doi: 10.1093/deafed/enl013
• Marschark, M., Sapere, P., Convertino, C., & Pelz, J. (2008). Learning via direct and mediated instruction by deaf students. Journal of Deaf Studies and Deaf Education, 13(4), 546-561. doi: 10.1093/deafed/enn014
• McKinney, D., & Denton, L. (2004). Houston, we have a problem: There’s a leak in the CS1 affective oxygen tank. Proceedings of the 35th SIGCSE Technical Symposium on Computer Science Education (pp. 236-239). doi: 10.1145/971300.971386
• Paul, P. V., & O’Rourke, J. P. (1988). Multimeaning words and reading comprehension: Implications for special education students. Remedial and Special Education, 9(3), 42-52. doi: 10.1177/074193258800900308
• Thomas, L., Ratcliffe, M., Woodbury, J., & Jarman, E. (2002). Learning styles and performance in the introductory programming sequence. Proceedings of the 33rd SIGCSE Technical Symposium on Computer Science Education (pp. 33-37). ACM. doi: 10.1145/563517.563352
• Traxler, C. B. (2000). The Stanford Achievement Test, 9th edition: National norming and performance standards for deaf and hard-of-hearing students. Journal of Deaf Studies and Deaf Education, 5(4), 337-348. doi: 10.1093/deafed/5.4.337