Transparent control of avatar gestures: A prototype. Francesca Barrientos. GUIR Meeting, 28 April 2000
Review of interface elements • Tracking (restricted) hand motion • Free form • Does not require watching the widget, except to put the pen down • Facilitates proprioception • Detects rests between gestures (used for calibration) • Design of mapping • Qualitative control of motion • No positional accuracy in output • Does not require input accuracy
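Rest detection between gestures could be implemented in several ways; below is a minimal sketch assuming a rest is declared when the pen's speed stays under a threshold for a full sliding window. The `RestDetector` name, threshold, and window length are illustrative assumptions, not details from the prototype.

```python
from collections import deque

# Sketch of rest detection between gestures (assumed approach): flag a rest
# when the pen's speed stays under a threshold for an entire sliding window.

REST_SPEED = 2.0      # pixels per sample; illustrative threshold
WINDOW_SAMPLES = 30   # about 0.5 s at 60 Hz; illustrative window length


class RestDetector:
    def __init__(self, speed_threshold=REST_SPEED, window=WINDOW_SAMPLES):
        self.speed_threshold = speed_threshold
        self.speeds = deque(maxlen=window)

    def update(self, prev_pos, pos):
        """Feed one pen sample; return True while the hand appears to rest."""
        dx, dy = pos[0] - prev_pos[0], pos[1] - prev_pos[1]
        self.speeds.append((dx * dx + dy * dy) ** 0.5)
        window_full = len(self.speeds) == self.speeds.maxlen
        return window_full and all(s < self.speed_threshold for s in self.speeds)
```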
Kinematic mapping New joint angle = old joint angle + (mouse position * scaling factor) • For small movements, the hand moves in the horizontal plane • Moving the pen closer to the user moves the avatar's hand closer to its body
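A minimal sketch of this incremental mapping, assuming the "mouse position" term is read as the pen's displacement since the last sample and that it drives two joint angles (yaw and pitch). The function name and the scaling constant are illustrative, not taken from the prototype.

```python
# Sketch of the incremental kinematic mapping described above (assumed names).
# Each pen displacement nudges the avatar's joint angles rather than setting
# an absolute pose, so control is relative and qualitative.

SCALE = 0.01  # radians per pixel of pen motion; illustrative value


def update_joint_angles(old_yaw, old_pitch, dx, dy, scale=SCALE):
    """Return new (yaw, pitch) given the pen displacement (dx, dy).

    new joint angle = old joint angle + (pen displacement * scaling factor)
    Moving the pen toward the user (dy > 0) increases the pitch angle,
    bringing the avatar's hand closer to its body.
    """
    new_yaw = old_yaw + dx * scale
    new_pitch = old_pitch + dy * scale
    return new_yaw, new_pitch


# Example: a small rightward, toward-the-user pen motion
yaw, pitch = update_joint_angles(0.0, 0.0, dx=12, dy=5)
```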
Things that worked • Within a small range, tracking is intuitive • Can produce free form gestures • Movement seems expressive • Control is transparent
Limitations • Large motions are not intuitive • Hard to form gestures based on proximity to other parts of the body • Mapping may behave differently on different systems • Limited range of motion • Want the hand to be somewhat independent • Can only move one body part at a time
Project goals • Build a desktop VR system for controlling avatar gestures • In particular, for gesticulation • Control by tracking a part of the user's body • Study use of the system • Show that it enables some kinds of gestures not possible with other systems • Understand which features contribute to the communicative power of the system
Gesticulation • Gesticulation is gesture that co-occurs with speech • Meaning of the utterance is divided up between the words and gestures • Gestures derive meaning from timing with respect to speech
Main problems • Limited input and complex output • Control interface divides user’s attention
Other solutions for nonverbal communication • Discrete choices (menus) of expressions • Usually affective (happy, sad, angry…) • Usually facial • Usually used with chat environments • Examples: • Emotion wheel in ComicChat • Palace • Gesture/Mimic panel in Vlnet
Other solutions continued • Analysis of text • ComicChat uses keywords, acronyms, punctuation, etc. • Semi-autonomous behaviors • BodyChat by Vilhjálmsson • Simple kinematic controls • Sliders and similar widgets (e.g., Slater) • Full body motion capture
Where I fit in • Forms and functions • Emphasis and color • Discourse marking • Personality? • Features • Free form • Co-occur with speech • Intention and awareness • User has voluntary control • Continuous control
Design informed by gesture research • Stroke is the most effortful phase • Resting position is part of the prototype • Space utilization • Design/choice of mapping may be based on the type of gesture • Beats have a favored direction and location • Generating a beat gesture should be easily repeatable
Next step • 6DOF tracker input • Track the position of the wrist in space instead of a position on a plane • Still designing the kinematic mapping • Proprioception • Detect rest and preparation phases of a gesture • Independence of hand orientation • Qualitative control • Forward kinematic mapping (see the sketch below)
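A minimal sketch of how the incremental mapping above could extend to 3D tracker input, assuming the wrist's displacement drives shoulder and elbow angles while the hand's orientation is passed through from the tracker so it stays independent. All joint names and the scaling constant are illustrative assumptions, since the mapping is still being designed.

```python
# Sketch of an incremental mapping from 6DOF tracker input to arm joint angles.
# Assumptions: wrist displacement drives shoulder yaw/pitch and elbow flexion;
# the hand's orientation is taken directly from the tracker.

SCALE = 0.5  # radians per metre of wrist displacement; illustrative value


def update_arm_pose(pose, wrist_delta, hand_orientation, scale=SCALE):
    """pose: dict of joint angles; wrist_delta: (dx, dy, dz) in metres."""
    dx, dy, dz = wrist_delta
    pose = dict(pose)
    pose["shoulder_yaw"] += dx * scale       # side-to-side motion
    pose["shoulder_pitch"] += dz * scale     # raising or lowering the arm
    pose["elbow_flexion"] += dy * scale      # toward or away from the body
    pose["hand_orientation"] = hand_orientation  # pass through from tracker
    return pose
```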
And next • Networked virtual environment • User interface features • Navigation • Other gesture feedback (since user will have avatar point of view) • User studies • …
Summary • Presented a prototype gesture control system • Reminded you of my research goals • Suggested a framework for selecting controls suitable for different types of gestures • Described how design is informed by gesture research • Next steps