Real-Time Virtual Humans • Norman Badler • Armin Bruderlin • Ken Perlin • Athomas Goldberg • Nadia Magnenat-Thalmann • Dimitris Metaxas SIGGRAPH '98 Course 28
Schedule • 8:30 - 9:45 Badler • 9:45 - 10:00; 10:15 - 11:00 Bruderlin • 11:00 - 12:00; 1:30 - 1:45 Perlin/Goldberg • 1:45 - 3:00 Magnenat-Thalmann • 3:15 - 4:30 Metaxas • 4:30 - 5:00 Panel (all)
Virtual Humans • Norman I. Badler • Center for Human Modeling and Simulation • University of Pennsylvania • Philadelphia, PA 19104-6389 • 215-898-5862 phone; 215-573-7453 fax • http://www.cis.upenn.edu/~badler
What are Virtual Humans? • Computer models of people that can be used as substitutes for “the real thing” in: • Evaluating ergonomics prior to actual construction of some system. • Representing ourselves or other live or constructed participants in virtual environments.
Applications for Virtual Humans: • Engineering Ergonomics. • Maintenance Assessment. • Games/Special Effects. • Military Simulations. • Job Education/Training. • Medical Simulations.
Virtual Human “Dimensions” • Appearance • Function • Time • Autonomy • Individuality
Appearance: • 2D drawings > 3D wireframe > • 3D polyhedra > curved surfaces > freeform deformations > • accurate surfaces > muscles, fat > biomechanics > clothing, equipment > physiological effects (perspiration, irritation, injury)
Function: • cartoon > jointed skeleton > • joint limits > strength limits > • fatigue > hazards > injury > skills > effects of loads and stressors > psychological models > • cognitive models > roles > teaming
Time (~Number): • off-line animation > • interactive manipulation > • real-time motion playback > parameterized motion synthesis > multiple agents > • crowds > coordinated teams
Autonomy: • drawing > scripting > • interacting > reacting > • making decisions > • communicating > intending > • taking initiative > leading
Individuality: • generic character > • hand-crafted character > • cultural distinctions > personality > psychological-physiological profiles > gender and age > • specific individual
Comparative Virtual Humans

  Application   Appear.  Function  Time  Autonomy  Individ.
  Cartoons      high     low       high  low       high
  Games         high     low       low   med       med
  Sp. Effects   high     low       high  low       med
  Medical       high     high      med   med       med
  Ergonomics    med      high      med   med       low
  Education     med      low       low   med       med
  Tutoring      med      low       med   high      low
  Military      med      med       low   med       low
What Makes a Virtual Human Human? • Degree of human-like appearance is relative to application. • Apparent decision-making and emotional reactions yield believability. • Communicating intentions builds community.
Control for Interactivity • Now mostly point-and-click. • Need language-like commands (text or speech). • Move the human interface toward a command or instructional view -- as if the Virtual Human were another real person.
Control for Autonomy • Provide human-like reactions and decision-making (“AI”). • Personality, roles, culture, skills, perceptions, and intelligence affect interaction with the environment, situation, and other agents.
Human-Like Appearance • Require human-like structure • Spine and neck • Shoulder, clavicle, and arm • Require human-like surfaces • Smooth skin • Face • Synthetic clothing
Basic Human Movement Capabilities • Gesture / Reach / Grasp. • Walk / Orient / Locomote. • Visual Attention / Search. • Pull / Lift / Carry. • Motion playback (previously scripted or stored, e.g. parameterized motion capture).
Arms and Hands • Object-specific reasoning for approach, grasping, and use. • Gesture artifacts during communication. • Fast position and orientation inverse kinematics. • Two-handed grasps.
Synthesized Motions • Inverse kinematics for arms, legs, spine. • Paths or footsteps driving locomotion. • Balance constraint on whole body. • Dynamics control from forces and torques. • Facial expressions • Secondary motions to enhance simpler forms (feet, blinks, eye gaze, breathing).
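To make the "position inverse kinematics" above concrete, here is the standard analytic solution for a planar two-link limb (e.g. shoulder plus elbow projected into a plane). This is a textbook formulation, not Jack's actual solver, and the function name is illustrative.

```python
import math

def two_link_ik(x, y, l1, l2):
    """Analytic inverse kinematics for a planar two-link limb.

    Given a target (x, y) and link lengths l1, l2, return the
    (shoulder, elbow) joint angles in radians that place the end
    effector on the target, or None if the target is out of reach.
    """
    d2 = x * x + y * y
    d = math.sqrt(d2)
    if d > l1 + l2 or d < abs(l1 - l2):
        return None  # target outside the reachable annulus
    # Law of cosines gives the elbow bend.
    cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    elbow = math.acos(max(-1.0, min(1.0, cos_elbow)))
    # Shoulder angle: direction to the target, minus the offset
    # the bent elbow introduces.
    shoulder = math.atan2(y, x) - math.atan2(
        l2 * math.sin(elbow), l1 + l2 * math.cos(elbow))
    return shoulder, elbow
```

For full-body limbs with joint limits, a numeric solver is used in practice; the closed form above only covers the two-link planar case.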
Locomotion • Agent given attributes such as motivation, directedness, speed. • Sensors available: attractor, repulser, range, terrain, low obstacle, humans, etc. • Anticipation (prediction). • Chasing, hiding, evading, searching, climbing, etc.
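A minimal sketch of attractor/repulser steering of the kind the sensor list implies, assuming one steering step per simulation frame; the field shapes and parameter names are illustrative, not Jack's actual locomotion model.

```python
import math

def steer(pos, attractors, repulsers, speed=1.0, repel_radius=2.0):
    """One step of attractor/repulser steering for a locomoting agent.

    pos, attractors, and repulsers are (x, y) tuples.  Attractors pull
    the agent toward goals; repulsers (obstacles, other humans) push it
    away when inside repel_radius.  Returns the new position.
    """
    fx = fy = 0.0
    for ax, ay in attractors:
        dx, dy = ax - pos[0], ay - pos[1]
        d = math.hypot(dx, dy) or 1e-9
        fx += dx / d                  # unit pull toward each goal
        fy += dy / d
    for rx, ry in repulsers:
        dx, dy = pos[0] - rx, pos[1] - ry
        d = math.hypot(dx, dy) or 1e-9
        if d < repel_radius:          # push grows as the agent nears
            fx += (dx / d) * (repel_radius - d)
            fy += (dy / d) * (repel_radius - d)
    norm = math.hypot(fx, fy) or 1e-9
    return (pos[0] + speed * fx / norm, pos[1] + speed * fy / norm)
```

Behaviors such as chasing or evading fall out of sign changes: a chaser treats the target as an attractor, an evader treats it as a repulser.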
Attention • Is not usually stated explicitly. • Is a resource. • Monitors a task queue. • Attention and other sensing actions consume time. • Produces proactive behaviors. 
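The "attention is a resource" idea can be sketched as a per-frame time budget spent on a queue of sensing tasks; task names and costs below are hypothetical.

```python
from collections import deque

def attend(task_queue, budget):
    """Attention as a limited resource over a sensing-task queue.

    Each task is a (name, cost) pair; attending to a task consumes
    time from the per-frame budget.  Tasks that fit are serviced in
    order; the rest stay queued for a later frame, which is what
    makes monitoring (and hence proactive behavior) possible.
    Returns (serviced task names, remaining queue).
    """
    serviced = []
    pending = deque(task_queue)
    while pending and budget > 0:
        name, cost = pending[0]
        if cost > budget:
            break                 # not enough attention left this frame
        pending.popleft()
        budget -= cost            # sensing consumes time
        serviced.append(name)
    return serviced, list(pending)
```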
Parallel Transition Networks (PaT-Nets) • A Virtual Parallel Execution Engine for virtual human actions: • Processes are nodes. • Instantaneous (conditional or probabilistic) transitions are edges. • Message passing and synchronization. • Lisp and C++ versions.
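A toy version of the structure just described: processes as nodes, instantaneous edges guarded by a condition or taken probabilistically, with a shared blackboard dictionary loosely standing in for message passing. The class and field names are illustrative, not the original Lisp/C++ API.

```python
import random

class PatNet:
    """Minimal Parallel Transition Network (PaT-Net) sketch."""

    def __init__(self, start):
        self.node = start
        self.actions = {}   # node name -> callable(blackboard)
        self.edges = {}     # node name -> [(guard, prob, target)]

    def add_node(self, name, action):
        self.actions[name] = action
        self.edges.setdefault(name, [])

    def add_edge(self, src, dst, guard=None, prob=1.0):
        self.edges.setdefault(src, []).append((guard, prob, dst))

    def step(self, blackboard, rng=random.random):
        """Run the current node's process, then take the first enabled
        transition (guard true on the blackboard, coin flip <= prob)."""
        self.actions[self.node](blackboard)
        for guard, prob, dst in self.edges.get(self.node, []):
            if (guard is None or guard(blackboard)) and rng() <= prob:
                self.node = dst
                return
```

Several such nets stepped in the same loop, reading and writing one blackboard, gives the "parallel" part: each net controls one aspect of the character (gaze, gesture, locomotion) concurrently.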
PaT-Net Applications • Gesture, head motion, and eye gaze during conversation. (SIGGRAPH 94) • Hide and seek. (VRAIS ‘96) • Physiological state under trauma and treatment. (Presence J. 96) • Jack Presenter. (AAAI-97 Workshop) • JackMOO. (WebSim ‘98)
“Smart Avatars” (3rd person VR) • Provide REAL-TIME human-like appearance, reactions, and decision-making. • Ultimately personality, roles, culture, skills, perceptions, and intelligence should affect interaction with the environment, situation, and other agents.
JackMOO Components • Jack: virtual human creation and motion authoring system. • lambdaMOO: multi-user, network-accessible, programmable, interactive system from Xerox PARC. • Client: Java applet providing user interface and control-flow mediation.
Architecture
Experiment 1 • Handshaking Scenario: • Two avatars “TJ” and “Norm” get each other’s attention; shake hands.
Experiment 2 • Agent-Object-Preposition Interaction: • Agent walks to and sits in chair, wherever placed; agent walks around chair.
Experiment 3 • Relationship Scenario: • One follows the other out of the room.
Observations • lambdaMOO verbs (simple imperative sentences) linked to multi-user, distributed Jack capability. • Jack PaT-Net programs provide API for lambdaMOO verbs. • Need a representation for agent actions more compatible with language.
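The verb linkage above can be sketched as a dispatch table from a verb word to an animation procedure. The verb table and handlers here are hypothetical stand-ins for the PaT-Net API exposed to lambdaMOO.

```python
def dispatch(sentence, verbs):
    """Link a simple imperative sentence (a lambdaMOO-style verb) to
    an animation procedure.  `verbs` maps a verb word to a callable
    that takes the remaining words as arguments."""
    words = sentence.lower().split()
    if not words or words[0] not in verbs:
        return f"I don't understand '{sentence}'."
    return verbs[words[0]](*words[1:])

# Hypothetical verb table: real handlers would invoke PaT-Net
# programs; these just report the action for illustration.
verbs = {
    "walk": lambda *args: f"walking {' '.join(args)}",
    "sit":  lambda *args: f"sitting {' '.join(args)}",
}
```

Usage: `dispatch("walk to chair", verbs)` returns `"walking to chair"`. The limits of this word-level scheme (no prepositions, manner, or conditions) are exactly what motivates the richer representation introduced next.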
Connecting Language and Animation • Design a representation which is able to bridge concepts from both language and animation. • Implement semantics for human actions to create SMART AVATARS.
Parameterized Action Representation (PAR) • Representation derived from BOTH NL analyses and animation requirements: • Agent, Objects, Sub-Actions • Preconditions, Postconditions • Applicability and Culmination conditions. • Spatio-temporal terms. • Agent Manner parameters. SIGGRAPH '98 Course 28
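One way to picture a PAR record is as a structured type whose fields follow the list above; the exact schema here is an assumption, not the published one.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class PAR:
    """Sketch of a Parameterized Action Representation record.

    Fields mirror the slide's list (agent, objects, sub-actions,
    pre/postconditions, culmination condition, manner); names and
    types are illustrative.
    """
    action: str
    agent: str
    objects: List[str] = field(default_factory=list)
    preconditions: List[Callable[[dict], bool]] = field(default_factory=list)
    postconditions: List[Callable[[dict], None]] = field(default_factory=list)
    culmination: Optional[Callable[[dict], bool]] = None
    manner: dict = field(default_factory=dict)  # e.g. {"carefully": True}
    sub_actions: List["PAR"] = field(default_factory=list)

    def applicable(self, world):
        """An action applies only when every precondition holds."""
        return all(p(world) for p in self.preconditions)
```

An execution engine would expand `sub_actions` recursively, check `applicable` before starting, and run until `culmination` reports true, with `manner` modulating how the motion is synthesized.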
Software Architecture (layers, top to bottom) • NL Commands / NL Generation • PAR and database/lexicon • Execution Engine and global clock • Agent Procedures and Motion Synthesis • OpenGL and Transom Jack Toolkit
Role of Linguistics in Knowledge Modeling (1) • Linguistic classifications based on distributional analysis can provide: • Typical properties of animate agents • Typical modifiers of animate agents • Dimensions along which agent behavior can vary • e.g., AGENT: +DELIBERATE, +CAREFUL, +STRONG
Role of Linguistics in Knowledge Modeling (2) • Typical actions of animate agents • Typical modifiers for actions • Dimensions along which action execution can vary • e.g., AN ACTION CAN BE PERFORMED CASUALLY, CAREFULLY, GENTLY, etc. • Agent manner executed via “Effort” basis functions. • Basing an agent and action ontology on linguistic evidence ensures extensibility.
Work in Progress • PAR development and implementation. • Various human model improvements (walking, motion models). • Smart avatars. • Multi-user virtual environments. • Agent models with culture, roles, manner, and personality.
Conclusions • Reactive, proactive, and decision-making behaviors needed for action execution. • Individuals vary in perceptions of context. • Language interfaces (through PAR) will improve access and usability. • Hardware improvements alone will not yield intelligence.