Towards Lifelike Interfaces That Learn
Jason Leigh, Andrew Johnson, Luc Renambot, Steve Jones, Maxine Brown
The Electronic Visualization Laboratory
• Established in 1973
• Jason Leigh, Director; Tom DeFanti, Co-Director; Dan Sandin, Director Emeritus
• 10 full-time staff
• Interdisciplinary: Computer Science, Art & Communication
• 30 students, 15 of them funded
• Research in:
  • Advanced display systems
  • Visualization and virtual reality
  • High-speed networking
  • Collaboration & human-computer interaction
• 34 years of collaboration with science, industry & the arts, applying new computer science techniques to these disciplines
• Major support from NSF and ONR
Goal in 3 Years
• A life-sized avatar capable of reacting to speech input with naturalistic facial and gestural responses
• A methodology for capturing and translating human verbal and non-verbal communication into an interactive digital representation
• A deeper understanding of how to create believable, credible avatars
System Components (from slide diagram; a hypothetical wiring sketch follows the list)
• Knowledge Capture: Natural Language Processing, Speech Recognition
• AlexDSS Knowledge Processing: Textual & Contextual Information
• Facial Expression Recognition
• Responsive Avatar Engine: Responsive Avatar, Eye-tracking, Speech Synthesis, Lip Synch, Gestural Articulation, Facial Articulation
• Facial & Body Motion / Performance Capture
• Phonetic Speech Sampling
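To make the data flow among these components concrete, here is a minimal Python sketch of one plausible wiring: speech in, AlexDSS knowledge processing, responsive avatar out. Every class and method name here (SpeechRecognizer, AlexDSS, ResponsiveAvatarEngine, and so on) is a hypothetical stand-in for illustration, not the project's actual API.

# Hypothetical sketch of the component pipeline; all names are
# illustrative stand-ins, not the project's real interfaces.

class SpeechRecognizer:
    """Knowledge Capture: converts user audio into text."""
    def transcribe(self, audio: bytes) -> str:
        return "hello alex"  # placeholder transcript

class AlexDSS:
    """Knowledge Processing: maps an utterance to textual/contextual info."""
    def respond(self, utterance: str, context: dict) -> str:
        return "Hello! What would you like to know?"  # placeholder answer

class ResponsiveAvatarEngine:
    """Drives speech synthesis, lip synch, and facial/gestural articulation."""
    def lip_synch(self, text: str) -> list:
        # Crude stand-in: one viseme slot per word.
        return ["viseme"] * len(text.split())

    def speak(self, text: str) -> None:
        visemes = self.lip_synch(text)
        print("avatar says %r with %d visemes" % (text, len(visemes)))

def interaction_loop(audio: bytes) -> None:
    recognizer, dss, rae = SpeechRecognizer(), AlexDSS(), ResponsiveAvatarEngine()
    question = recognizer.transcribe(audio)      # Knowledge Capture
    answer = dss.respond(question, context={})   # AlexDSS processing
    rae.speak(answer)                            # responsive avatar output

interaction_loop(b"")  # demo run with empty audio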
EVL Year 1
• Digitize facial images and audio of Alex
• Shadow Alex to capture information about his mannerisms
• Create a 3D Alex, focusing largely on facial features
• Prototype the initial Responsive Avatar Engine (RAE) and merge the initial avatar, speech recognition, AlexDSS, and pre-recorded voices (a minimal sketch follows this list)
• Validate the provision of non-verbal avatar cues and evaluate their efficacy
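One way the Year 1 merge of AlexDSS output with pre-recorded voices could work is to key recorded clips of Alex to canonical response IDs, with a fallback clip when nothing matches. This is a hypothetical sketch; the file paths, IDs, and choose_clip logic are invented for illustration.

# Hypothetical Year 1 glue: map AlexDSS response IDs to pre-recorded
# audio clips of Alex; fall back to a generic clip when none matches.
# File paths and response IDs are invented for illustration.

PRERECORDED_CLIPS = {
    "greeting": "clips/alex_greeting.wav",
    "farewell": "clips/alex_farewell.wav",
}
FALLBACK_CLIP = "clips/alex_fallback.wav"

def choose_clip(response_id: str) -> str:
    """Return the path of the clip to play for a given response."""
    return PRERECORDED_CLIPS.get(response_id, FALLBACK_CLIP)

def play(path: str) -> None:
    # Real playback would hand the file to an audio library;
    # printing keeps the sketch self-contained.
    print("playing", path)

play(choose_clip("greeting"))   # clips/alex_greeting.wav
play(choose_clip("weather"))    # falls back to clips/alex_fallback.wav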
EVL Year 2
• Full-scale motion and performance capture to create gestural responses to AlexDSS
• Speech synthesis using Alex's voice patterns to create verbal responses to AlexDSS
• Use eye-tracking to begin experimenting with aspects of non-verbal communication (a minimal sketch follows this list)
• Evaluate the merging of verbal and non-verbal information in users' understanding of:
  • avatar believability and credibility (ethos)
  • information retrieved
  • avatar emotional appeals (pathos)
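As one illustration of how eye-tracker output might drive a non-verbal cue, the sketch below has the avatar return eye contact when the user's gaze point falls inside the avatar's on-screen region. The region bounds, gaze samples, and behavior hooks are all hypothetical.

# Hypothetical use of eye-tracker output: if the user's gaze lands
# inside the avatar's screen region, the avatar returns eye contact.
# Coordinates and the behavior callbacks are invented.

AVATAR_REGION = (400, 100, 880, 620)  # (x_min, y_min, x_max, y_max) in pixels

def gaze_on_avatar(gx: float, gy: float) -> bool:
    """True if the gaze point (gx, gy) falls on the avatar's region."""
    x0, y0, x1, y1 = AVATAR_REGION
    return x0 <= gx <= x1 and y0 <= gy <= y1

def update_avatar(gx: float, gy: float) -> None:
    if gaze_on_avatar(gx, gy):
        print("mutual gaze: avatar makes eye contact")
    else:
        print("user looking away: avatar glances idly")

for sample in [(500, 300), (50, 50)]:  # fake eye-tracker samples
    update_avatar(*sample)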
EVL Year 3
(Slide illustration: life-sized projection, with camera and microphone facing the user)
• Utilize camera-based recognition of facial expressions as additional non-verbal input (a minimal sketch follows this list)
• Conduct user studies:
  • on believability and credibility (ethos)
  • to correlate attention to non-verbal communication with comprehension and retention
  • to assess the value of avatar emotional appeals (pathos)
  • to address longer-term relationship formation between avatar and user
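A minimal sketch of the camera-based input, assuming the opencv-python package for face detection; the classify_expression step is a placeholder, since the slides do not say which expression recognizer the project uses.

# Minimal sketch of camera-based facial-expression input, assuming
# OpenCV (cv2). Face detection uses OpenCV's bundled Haar cascade;
# classify_expression is a placeholder for a trained recognizer.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def classify_expression(face_pixels) -> str:
    return "neutral"  # placeholder: a trained model would go here

def read_expression(frame) -> str:
    """Detect the largest-first face in a frame and label its expression."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return "no-face"
    x, y, w, h = faces[0]
    return classify_expression(gray[y:y + h, x:x + w])

cap = cv2.VideoCapture(0)  # webcam facing the user
ok, frame = cap.read()
if ok:
    print("user expression:", read_expression(frame))
cap.release()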
Thanks!
• This project was supported by grants from the National Science Foundation
• Awards CNS 0703916 and CNS 0420477