A Task Definition Language for Virtual Agents WSCG’03 Spyros Vosinakis, Themis Panayiotopoulos Informatics Dept., University of Piraeus Presenter : Sie-kyung Jang, VR Lab., KAIST
Outline • Introduction • Approach • Related Work • 3D Environment with Virtual Agent • The Task Definition Language • Actions • Arguments • Defining a Task • Example • Conclusion
Introduction (1/2) • A virtual environment as a user interface can be important for certain types of applications • Virtual environments are more attractive when they are populated by virtual agents • Virtual Agent • An autonomous entity in a virtual environment • Looks like and behaves as a living organism • To enhance a virtual agent's autonomy, model the agent's functionality and behavior so as to resemble a real one
Introduction (2/2) • Strong dependence between action definitions and their context • Lack of general-purpose tools for designing and implementing intelligent virtual environments • No tool to describe the action combinations and sequences needed to achieve specific tasks
Approach • Lack of standard architectures, methodologies and general-purpose tools → context-independent task definition • No tool to define the action combinations and sequences needed to achieve specific tasks → use of a procedural language • Context-independent definition of tasks using a high-level language
Related Work • Task execution of agents is an important issue • Three Important Approaches • Parameterized Action Representation [Bad00] • Smart Object approach [Kal02] • Improv system [Per96]
3D Environment with Virtual Agent • SimHuman • A tool for the creation of 3D environments with virtual agents • Consists of a programming library • Embedded characteristics: Inverse Kinematics, Physically Based Modeling, Collision Detection and Response, Vision • Defines and animates 3D scenes with an arbitrary number of objects and virtual agents • Comes with two utilities • Designing the environments • Designing the agents' animation sequences
3D Environment with Virtual Agent • SimHuman • Entities • Type • Agent : performs actions and perceives the current state; has an autonomous operation • Object : world, … • Tree-structured hierarchy • Geometry • A set of attributes • <name, type, value>
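A minimal Python sketch of how such an entity model could be represented. The class, method names and the example entities are assumptions for illustration; the slide only specifies that entities have a type, a geometry, a tree-structured hierarchy, and a set of <name, type, value> attributes.

```python
# Hypothetical sketch of the entity model described above: each entity has a
# type (agent or object), a geometry placeholder, child entities (tree-structured
# hierarchy), and a set of <name, type, value> attributes.
from dataclasses import dataclass, field
from typing import Any, Dict, List, Tuple


@dataclass
class Entity:
    name: str
    kind: str                      # "agent" or "object"
    geometry: Any = None           # mesh / bounding volume, not modeled here
    children: List["Entity"] = field(default_factory=list)
    # attributes stored as name -> (type, value)
    attributes: Dict[str, Tuple[str, Any]] = field(default_factory=dict)

    def set_attribute(self, name: str, type_name: str, value: Any) -> None:
        self.attributes[name] = (type_name, value)

    def get_attribute(self, name: str) -> Any:
        return self.attributes[name][1]


# Example: a world object containing a table, and an agent with a position attribute.
world = Entity("world", "object")
world.children.append(Entity("table", "object"))

agent = Entity("John", "agent")
agent.set_attribute("position", "vector", (0.0, 0.0, 0.0))
world.children.append(agent)
```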
The Task Definition Language • Fills the gap between higher-level decision processes and the agent's motion and interaction in the environment • Combines numerous built-in functions and commands to describe complex tasks • Advantages • Easier for the user to specify action combinations and scenarios for virtual agents • Tasks can easily be reused with different agents and environments
Task Structure • Structure: <task definition> #Variables <variable declaration> #Body <block of commands>
Actions • A process that causes changes to the world and its entities • Has a duration (fixed or variable) • Executed over continuous timeframes • Primitive action • A set of commands that an agent can perform in one timeframe • In SimHuman • Changing the geometric properties of entities • Adding / removing entities to / from the world • Sending messages to other agents • An agent can execute more than one primitive action in a single timeframe
Action Representation • Implemented as a sequence of primitive action sets • Predefined • a1, a2, …, an with duration = n·∆t • ai : the set of primitive actions performed in timeframe i • Goal-oriented • ai = f(ai−1, I, G) • I : the set of information about objects and their properties that the agent perceives • G : the goal
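A toy Python sketch of the goal-oriented formulation ai = f(ai−1, I, G): each timeframe, the next primitive-action set is computed from the perceived information and the goal. The "translate" primitive and the 0.1 step size are assumptions for illustration, not SimHuman's actual primitives.

```python
# Toy illustration of a goal-oriented action: the primitive-action set for the
# next timeframe is a function of the perceived info (here: current position)
# and the goal (target position). Names and the 0.1 step size are assumptions.
from typing import List, Tuple

Vec = Tuple[float, float, float]


def next_primitive_actions(perceived_pos: Vec, goal_pos: Vec,
                           step: float = 0.1) -> List[tuple]:
    """Return the set of primitive actions to execute in the next timeframe."""
    dx, dy, dz = (g - p for g, p in zip(goal_pos, perceived_pos))
    dist = (dx * dx + dy * dy + dz * dz) ** 0.5
    if dist < 1e-6:
        return []                                  # goal reached, nothing to do
    scale = min(step, dist) / dist
    move = ("translate", (dx * scale, dy * scale, dz * scale))
    return [move]                                  # one primitive action this frame


# Executing the action frame by frame until the goal position is reached.
pos, goal = (0.0, 0.0, 0.0), (0.5, 0.0, 0.0)
while (actions := next_primitive_actions(pos, goal)):
    (_, delta), = actions
    pos = tuple(p + d for p, d in zip(pos, delta))
```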
Actions • Keyframing • The ability to execute predefined animation sequences • Defined by selecting the agent's body parts and adjusting their rotations • Animation library • A set of user-defined animation sequences • Invoked as anim <name>
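A small Python sketch of how a user-defined animation sequence could be played back by interpolating joint rotations between keyframes. The joint names and the linear interpolation are assumptions for illustration, not SimHuman's actual keyframing code.

```python
# Hypothetical keyframe playback: each keyframe maps body-part names to Euler
# rotations; playback linearly interpolates between consecutive keyframes.
from typing import Dict, List, Tuple

Rotation = Tuple[float, float, float]          # Euler angles in degrees
Keyframe = Tuple[float, Dict[str, Rotation]]   # (time, {joint: rotation})


def pose_at(sequence: List[Keyframe], t: float) -> Dict[str, Rotation]:
    """Interpolate the joint rotations of an animation sequence at time t."""
    for (t0, pose0), (t1, pose1) in zip(sequence, sequence[1:]):
        if t0 <= t <= t1:
            w = (t - t0) / (t1 - t0)
            return {j: tuple(a + w * (b - a) for a, b in zip(pose0[j], pose1[j]))
                    for j in pose0}
    return sequence[-1][1]                     # past the last keyframe: hold pose


# Example "wave" sequence for a single joint (times in seconds).
wave = [(0.0, {"r_elbow": (0.0, 0.0, 0.0)}),
        (0.5, {"r_elbow": (0.0, 0.0, 90.0)}),
        (1.0, {"r_elbow": (0.0, 0.0, 0.0)})]
print(pose_at(wave, 0.25))                     # halfway into the first segment
```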
Actions • Locomotion • The ability to walk inside the environment • Uses a state machine • Ensures that the movement and rotation of the body take place in a correct and believable manner • Inverse Kinematics • IK <chain> <position> • A continuous correction sequence • At every step, tests for the best rotation of each joint to reach the target • Works with moving targets and avoids collisions
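A minimal 2D Python sketch of the continuous-correction idea behind the IK action: at every step, each joint is rotated toward the current target, so the chain keeps tracking even a moving target. This is a generic CCD-style iteration written for illustration, not SimHuman's actual solver, and the collision checks mentioned above are omitted.

```python
# CCD-style continuous correction in 2D: every step, each joint is rotated so
# the end effector moves toward the (possibly moving) target.
import math
from typing import List, Tuple


def forward_positions(angles: List[float], lengths: List[float]) -> List[Tuple[float, float]]:
    """Joint positions of a planar chain rooted at the origin."""
    pts, x, y, a = [(0.0, 0.0)], 0.0, 0.0, 0.0
    for ang, ln in zip(angles, lengths):
        a += ang
        x, y = x + ln * math.cos(a), y + ln * math.sin(a)
        pts.append((x, y))
    return pts


def ccd_step(angles: List[float], lengths: List[float],
             target: Tuple[float, float]) -> List[float]:
    """One correction pass: rotate each joint to best point the effector at the target."""
    for i in reversed(range(len(angles))):
        pts = forward_positions(angles, lengths)
        jx, jy = pts[i]
        ex, ey = pts[-1]
        to_effector = math.atan2(ey - jy, ex - jx)
        to_target = math.atan2(target[1] - jy, target[0] - jx)
        angles[i] += to_target - to_effector
    return angles


# Tracking a target: repeat the correction step every timeframe.
angles, lengths, target = [0.3, 0.3, 0.3], [1.0, 1.0, 1.0], (1.5, 1.0)
for _ in range(20):
    angles = ccd_step(angles, lengths, target)
print(forward_positions(angles, lengths)[-1])   # effector close to the target
```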
Arguments – Variables • Variable types • {boolean, integer, float, string, entity, list of entities, vector, list of vectors, relation} • relation : a composition type, e.g. person('John', 28, 1.80, 'single') • Variable sources • The agent's attributes • Task arguments • The task's internal variables • Another entity's attributes, referenced as [<entity name>] <variable name>
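A small Python sketch of how the relation (composition) type from the slide, e.g. person('John', 28, 1.80, 'single'), could be represented. The functor/arguments split is the only structure the slide implies; the class name is hypothetical.

```python
# Hypothetical representation of the "relation" (composition) variable type:
# a functor name plus an ordered tuple of arguments, mirroring
# person('John', 28, 1.80, 'single') from the slide.
from typing import Any, NamedTuple, Tuple


class Relation(NamedTuple):
    functor: str
    args: Tuple[Any, ...]


person = Relation("person", ("John", 28, 1.80, "single"))
print(person.functor, person.args[0])   # person John
```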
Arguments – Functions • Using functions as arguments allows actions to be called with values that are adapted to different environments • One can define tasks that may be executed in a dynamic world • Functions track the current state of the world and the possible relations between entities
Arguments – Functions • Some functions detect spatial relations • Used for conditional execution of actions and for managing the agent's beliefs about the world • Evaluated from the current geometric properties (position, size, orientation) • near, on, front_of, behind, left_of, right_of, above, below • Some functions deal with logical properties • Useful for defining complex conditions • and, or, not
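A minimal Python sketch of how spatial-relation functions such as near or above could be evaluated from current geometric properties. The bounding-sphere simplification and the thresholds are assumptions, not the paper's definitions.

```python
# Illustrative spatial-relation predicates evaluated from positions and sizes.
# Distances use bounding-sphere radii; the margins are arbitrary assumptions.
import math
from typing import Tuple

Vec = Tuple[float, float, float]


def near(pos_a: Vec, radius_a: float, pos_b: Vec, radius_b: float,
         margin: float = 0.1) -> bool:
    """True if the two bounding spheres are within `margin` of touching."""
    return math.dist(pos_a, pos_b) <= radius_a + radius_b + margin


def above(pos_a: Vec, pos_b: Vec, horizontal_tolerance: float = 0.5) -> bool:
    """True if A is higher on the vertical (y) axis and roughly aligned with B."""
    horizontal = math.dist((pos_a[0], pos_a[2]), (pos_b[0], pos_b[2]))
    return pos_a[1] > pos_b[1] and horizontal < horizontal_tolerance


# Example: a cup sitting on a table.
print(near((0.0, 1.0, 0.0), 0.05, (0.0, 0.9, 0.0), 0.6))   # True
print(above((0.0, 1.0, 0.0), (0.0, 0.9, 0.0)))             # True
```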
Task Structure • Structure: <task definition> #Variables <variable declaration> #Body <block of commands> • <task definition> : TASK name (type1 arg1, …, typen argn) • #Variables : defines local variables • #Body : c1; c2; …; cn, where each ci is a task command • Possible commands • <action> • PAR(<block b1>, <block b2>) • DO(<block b>) UNTIL <bool c> • IF <bool c> THEN (<block b1>) ELSE (<block b2>)
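A toy Python sketch of one possible reading of the control commands' semantics (PAR, DO-UNTIL, IF-THEN-ELSE): a block is modeled as a generator that yields one primitive-action set per timeframe, so PAR can interleave two blocks frame by frame. The function and action names are assumptions; this is not the paper's actual interpreter.

```python
# Toy semantics for the TDL control commands. A "block" is a callable returning
# a generator that yields one set of primitive actions per timeframe; PAR merges
# two blocks frame by frame, DO_UNTIL repeats a block until a condition holds,
# and IF selects one of two blocks.
from itertools import zip_longest
from typing import Callable, Iterator, List

Frame = List[str]                      # primitive actions for one timeframe
Block = Callable[[], Iterator[Frame]]  # re-runnable block of commands


def PAR(b1: Block, b2: Block) -> Iterator[Frame]:
    """Execute two blocks in parallel, merging their per-frame actions."""
    for f1, f2 in zip_longest(b1(), b2(), fillvalue=[]):
        yield f1 + f2


def DO_UNTIL(b: Block, cond: Callable[[], bool]) -> Iterator[Frame]:
    """Repeat a block until the condition becomes true (checked each frame)."""
    while not cond():
        for frame in b():
            yield frame
            if cond():
                return


def IF(cond: Callable[[], bool], then_b: Block, else_b: Block) -> Iterator[Frame]:
    """Choose one of two blocks based on a boolean condition."""
    yield from (then_b() if cond() else else_b())


# Example: in parallel, wave the arm and step forward, until three frames elapse.
frame_count = 0

def finished() -> bool:
    return frame_count >= 3

def wave_and_step() -> Iterator[Frame]:
    return PAR(lambda: iter([["rotate r_elbow"]]),
               lambda: iter([["translate forward"]]))

for frame in DO_UNTIL(wave_and_step, finished):
    frame_count += 1
    print(frame)   # ['rotate r_elbow', 'translate forward'], three times
```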
Example: Interacting with an Object • Catch an object with the hand < Without grasping > • An end-effector on the surface of the hand • A spot on the surface of the object • Use the inverse kinematics action to bring the end-effector to the spot • Suitable for large environments that need simplified agent models
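A hedged Python sketch of the "without grasping" interaction: drive the hand's end-effector spot onto the object's surface spot with successive IK steps, then attach the object to the hand so it follows the arm. The functions ik_step and pick_up and the tolerance are hypothetical stand-ins, not SimHuman's API.

```python
# Sketch of the "no grasping" interaction: move the hand end-effector to a
# spot on the object's surface via IK, then parent the object to the hand so
# it follows the arm without any finger articulation.
import math
from typing import Tuple

Vec = Tuple[float, float, float]


def ik_step(effector: Vec, target: Vec, step: float = 0.05) -> Vec:
    """Stand-in for one IK timeframe: move the effector a little toward the target."""
    d = math.dist(effector, target)
    if d < 1e-6:
        return target
    s = min(step, d) / d
    return tuple(e + s * (t - e) for e, t in zip(effector, target))


def pick_up(hand_spot: Vec, object_spot: Vec, tolerance: float = 0.01) -> Vec:
    """Run IK frames until the hand spot reaches the object spot, then 'attach'."""
    while math.dist(hand_spot, object_spot) > tolerance:
        hand_spot = ik_step(hand_spot, object_spot)
    # At this point the object would be parented to the hand in the scene graph.
    return hand_spot


print(pick_up((0.2, 1.1, 0.3), (0.5, 0.9, 0.4)))
```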
Example: Interacting with an Object < With grasping > • Needs a complex agent model and skeleton • Option 1: define many spots and use parallel IK motions via the PAR command • Option 2: constantly rotate all fingers using the DO-UNTIL command • Avoids the definition of spots and is more general • But the constant collision detection checks increase the computational cost
Example: Observing the Environment • Observation is needed when the agent does not know which object to interact with
A Complete Scenario • An agent visiting a bar • Walking around until there is a free table • Sitting on a chair • Calling a waiter • Ordering a drink • Drinking from a bottle • The drink quantity decreases every time the agent brings the bottle to the mouth • Communication between the agent and the waiter
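A hypothetical Python outline of how the bar scenario could decompose into subtasks. The subtask names and the world/waiter stand-ins are invented for illustration; only the overall control flow mirrors the scenario described in the slide.

```python
# Hypothetical decomposition of the bar scenario into subtasks. The world dict
# is a stand-in for perception and agent/waiter communication.
import random


def visit_bar(world: dict) -> None:
    # Walk around until a free table is perceived (a DO ... UNTIL-style loop).
    while not world["free_table"]:
        world["free_table"] = random.random() < 0.3      # stand-in for perception
    sit_on_chair(world)
    call_waiter(world)                                   # message to the waiter agent
    order_drink(world, "juice")
    # Drink: the bottle's quantity decreases each time it reaches the mouth.
    while world["bottle_quantity"] > 0:
        world["bottle_quantity"] -= 1


def sit_on_chair(world: dict) -> None:
    world["sitting"] = True


def call_waiter(world: dict) -> None:
    world["waiter_called"] = True


def order_drink(world: dict, drink: str) -> None:
    world["bottle_quantity"] = 3 if world["waiter_called"] else 0


visit_bar({"free_table": False, "sitting": False,
           "waiter_called": False, "bottle_quantity": 0})
```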
Conclusion • Context-independent definition of tasks using a high-level language • Future Work • Addition of more complex object interactions (e.g., facial animation for expressing the agents' emotions) • Improving action execution and the world functionality (e.g., adding more bio-mechanical characteristics to the agent)