
Versatile Human Behavior Generation via Dynamic, Data-Driven Control

Explore the generation of versatile human behaviors through dynamic, data-driven control. This approach combines motion capture, key-framing, data-driven synthesis, physics-based animation, and hybrid techniques to create lifelike, dynamic virtual characters.


Presentation Transcript


  1. Versatile Human Behavior Generation via Dynamic, Data-Driven Control

  2. Motivation • Motion of virtual characters is prevalent in: • Games • Movies (visual effects) • Virtual reality • And more… (Examples: FIFA 2006 (EA); NaturalMotion endorphin)

  3. Motivation What virtual characters should be able to do: • Lots of behaviors - leaping, grasping, moving, looking, attacking • Exhibit personality - move “sneakily” or “aggressively” • Awareness of environment - balance/posture adjustments • Physical force-induced movements (jumping, falling, swinging)

  4. Outline • Motion generation techniques • Motion capture and key-framing • Data-driven synthesis • Physics-based animation • Hybrid approaches • Dynamic motion controllers • Quick ragdoll introduction • Controllers • Transitioning between simulation and motion data • Motion search – when and where • Simulation-driven transition – how

  5. Mocap and Key-framing (+) Captures style and subtle nuances (+) Absolute control: “wyciwyg” (what you capture is what you get) (-) Difficult to adapt, edit, and reuse (-) Not physically reactive, especially for highly dynamic motion

  6. Data-driven synthesis • Generate motion from examples • Blending, displacement maps • Kinematic controllers built upon existing data • Optimization / learned statistical models (+) Creators retain control: they define all rules for movement (-) Violates the “checks and balances” of motion: motion control abuses its power over physics (-) Limits emergent behavior

  7. Physics-based animation • Ragdoll simulation • Dynamic controllers (+) Interacts well with the environment (-) “Ragdoll” movement is lifeless (-) Difficult to develop complex behaviors

  8. Hybrid approaches • Mocap: stylistic realism; physical simulation: physical realism • Combine the best of both approaches • Activate either one when most appropriate • Add life to ragdolls using control systems (only simulate behaviors that are manageable)

  9. A high-level example

  10. Outline • Motion generation techniques • Motion capture and key-framing • Data-driven synthesis • Physics-based animation • Hybrid approaches • Dynamic motion controllers • Quick ragdoll introduction • Controllers • Transitioning between simulation and motion data • Motion search – when and where • Simulation-driven transition – how

  11. Overview of dynamic controllers • Decision making: objectives, current state x[t] → desired motion xd[t] • Motion control: desired motion xd[t], current state x[t] → motor forces u[t] • Physics: current state x[t], forces u[t] → next state x[t+1] • xd[t] = Goal(x[t]); u[t] = MC(xd[t] − x[t]); x[t+1] = P(x[t], u[t]) [Block diagram: objectives → Decision Making → xd[t] → Motion Control → u[t] → Physics → x[t+1]]
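The three-stage loop above can be sketched in a few lines. `goal`, `motion_control`, and `physics` below are toy stand-ins (a scalar state driven by a proportional controller), not the actual implementation:

```python
def goal(x):
    """Decision making: map the current state to a desired motion target."""
    return 0.0  # e.g. a desired joint angle

def motion_control(error, gain=5.0):
    """Motion control: map the tracking error to a motor force/torque."""
    return gain * error

def physics(x, u, dt=0.01):
    """Physics: advance the state one step under the applied force."""
    return x + dt * u

x = 1.0
for _ in range(1000):
    xd = goal(x)                 # xd[t]   = Goal(x[t])
    u = motion_control(xd - x)   # u[t]    = MC(xd[t] - x[t])
    x = physics(x, u)            # x[t+1]  = P(x[t], u[t])
```

In a real system, x would be the full ragdoll state and `physics` would be a step of the dynamics engine.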

  12. Physics: setting up ragdolls • Given a dynamics engine • Set a primitive for each body part • Mass and inertial properties • Create 1-, 2-, or 3-DOF joints between parts • Set joint-limit constraints for each joint • External forces (gravity, impacts, etc.) • The dynamics engine supplies • Updated positions/orientations • Collision resolution with the world

  13. Controller types • Basic joint-torque controller • Low-level control • Sparse pose control (may be specified by an artist) • Continuous control (e.g., tracking mocap data) • Hierarchical controller • Layered controllers • The higher-level controller determines the desired values for the lower level • Derived from sensor or state info: support polygon, center of mass, body contacts, etc.

  14. Joint-torque controller Proportional-Derivative (PD servo) controller • Actuates each joint toward a desired target: τ = ks (θdes − θ) − kd θ̇ • Acts like a damped spring attached to the joint (rest position at the desired angle) • θdes is the desired joint angle and θ is the current angle; ks and kd are the spring and damper gains
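A minimal sketch of the PD servo on a single 1-DOF joint with unit inertia; the gains and time step are chosen for illustration:

```python
import math

def pd_torque(theta_des, theta, theta_dot, ks=50.0, kd=10.0):
    """Damped spring: tau = ks * (theta_des - theta) - kd * theta_dot."""
    return ks * (theta_des - theta) - kd * theta_dot

# Drive a unit-inertia joint from 0 toward pi/2 using semi-implicit Euler.
theta, theta_dot, dt = 0.0, 0.0, 0.001
for _ in range(10000):
    tau = pd_torque(math.pi / 2, theta, theta_dot)
    theta_dot += dt * tau          # unit inertia: angular accel equals torque
    theta += dt * theta_dot
```

With kd close to 2·sqrt(ks) the joint is near critically damped and settles on the target without oscillating.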

  15. Live demo Created with the Open Dynamics Engine (http://www.ode.org)

  16. Outline • Motion generation techniques • Motion capture and key-framing • Data-driven synthesis • Physics-based animation • Hybrid approaches • Dynamic motion controllers • Quick ragdoll introduction • Controllers • Transitioning between simulation and motion data • Motion search – when and where • Simulation-driven transition – how

  17. Simulating falling and recovering behavior [Mandel 2004]

  18. Transitioning between techniques • Motion data → Simulation • When: significant external forces are applied to the virtual character • How: initialize the simulation with the pose and velocities extracted from the motion data • Simulation → Motion data • When and where: some appropriate pose is reached (hard to decide); the motion frame closest to the simulated pose • How: drive the simulation toward the matched motion data using a PD controller

  19. Motion state spaces • State space of data-driven technique: • Any pose present in the motion database • State space of dynamics-based technique: • Set of poses allowable by physical constraints • The latter is larger because it: • can produce motion difficult to animate or capture • includes large set of unnatural poses • Correspondence must be made to allow transitions between the two

  20. Motion searching • Problem: find the nearest matches in the motion database to the current simulated motion. Approach: • Data representation • Joint positions • Process into a spatial data structure • kd-tree / bbd-tree (balanced box decomposition) • Search the structure at runtime • Query pose comes from the simulation • Approximate nearest-neighbor search (ANN)

  21. Data Representation: Joint Positions • Need a representation that allows numerical comparison of body posture • Joint angles are not as discriminating as joint positions • Ignore root translation and align about the vertical axis • May also want to include joint velocities • Joint velocity is accounted for by including surrounding frames in the distance computation

  22. Distance metric [Figure: original vs. aligned joint positions] D(F, F′) = Σj wj ‖pj − T p′j‖² • J – number of joints • wj – joint weight • p – global position of a joint • T – transformation to align the first frame
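A sketch of this metric in Python, assuming y-up joint positions and reducing the alignment transformation T to removing each root's horizontal offset (the vertical-axis rotation is omitted for brevity):

```python
import numpy as np

def align_to_root(frame, root_index=0):
    """Remove the root's horizontal (x, z) translation.
    frame: (J, 3) array of global joint positions; y is assumed up."""
    offset = frame[root_index].copy()
    offset[1] = 0.0                      # keep height, drop horizontal offset
    return frame - offset

def pose_distance(frame_a, frame_b, weights):
    """Weighted sum of squared distances between aligned joint positions:
    D = sum_j w_j * ||p_j - p'_j||^2 after root alignment."""
    diff = align_to_root(frame_a) - align_to_root(frame_b)
    return float(np.sum(weights * np.sum(diff * diff, axis=1)))
```

Two poses that differ only by a horizontal root translation get distance 0, which is exactly what the alignment is for.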

  23. Searching process • Approximate Nearest Neighbor (ANN) search • First finds the cell containing the query point in a spatial data structure built over the input points; a randomized search then examines surrounding cells for points within the given ε threshold of the actual nearest neighbors • Results are guaranteed to be within a factor of (1 + ε) of the actual nearest-neighbor distance • O(log³ n) expected query time and O(n log n) space requirement • Much better in practice than exact nearest-neighbor search as the dimensionality of the points increases
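A sketch of offline tree construction and runtime query using SciPy's kd-tree, whose `eps` parameter provides exactly this (1+ε)-approximate guarantee; the database size and pose dimensionality below are made up for illustration:

```python
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical database: N poses, each flattened to a D-dim joint-position vector.
rng = np.random.default_rng(0)
database = rng.standard_normal((10000, 45))   # e.g. 15 joints x 3 coordinates

tree = cKDTree(database)                      # built once, offline

query = database[123] + 0.01                  # simulated pose at runtime
dist, idx = tree.query(query, k=1, eps=0.1)   # (1+eps)-approximate NN
```

The tree is built once over the motion database; only the query runs per frame of simulation.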

  24. Speeding up the search • Curse of dimensionality • Search each joint position separately • Pair more joints together to increase accuracy • n 3-DOF searches are faster than one 3n-DOF search…

  25. Simulating behavior • Model the reaction to impacts causing loss of balance • Two controllers handle the before- and after-contact phases respectively • Ensure a transition to a balanced posture in the motion data

  26. Fall controller • Aim: produce biomechanically inspired, protective behaviors in response to the many different ways a human may fall to the ground.

  27. Fall controller • Continuous control strategy • 4 controller states according to falling direction: backward, forward, right, left • During each state one or both arms are controlled to track the predicted landing position of the shoulders • The goal of the controlled arm is to have the wrist intersect the line between the shoulder and its predicted landing position • A small natural bend is added at the elbow, and the desired angles for the rest of the body are set to the initial angles at the time the fall controller is activated

  28. Fall controller • Determine the controller state from θ, the facing direction of the character, and V, the average velocity of the limbs [Equation/figure omitted: the state is chosen by comparing the direction of V with θ]
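One plausible way to implement this state selection: project the horizontal limb velocity onto the character's facing and lateral axes and pick the dominant component. The frame convention (yaw measured from +z toward +x) and the tie-breaking are assumptions for illustration, not necessarily Mandel's exact rule:

```python
import math

def fall_direction(theta, velocity_xz):
    """Classify the fall as forward/backward/left/right.
    theta: facing direction (yaw from the +z axis toward +x) -- an assumption.
    velocity_xz: average horizontal limb velocity (vx, vz)."""
    vx, vz = velocity_xz
    fwd = vx * math.sin(theta) + vz * math.cos(theta)    # along the facing axis
    right = vx * math.cos(theta) - vz * math.sin(theta)  # perpendicular to it
    if abs(fwd) >= abs(right):
        return "forward" if fwd > 0 else "backward"
    return "right" if right > 0 else "left"
```

The returned state then selects which arm(s) the controller tracks toward the predicted shoulder landing positions.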

  29. Fall controller • Determine the target shoulder joint angles • Targets can change as the simulation steps forward • The gains ks and kd are tuned appropriately

  30. Settle controller • Aim: drive the character to a similar motion clip at an appropriate time • Begins when the hands impact the ground • Two states • Absorb impact: • gains are adjusted to reduce hip and upper-body velocity • lasts half a second before the next state • ANN search: • find a frame in the motion database that is close to the currently simulated posture • use the found frame as the target while continuing to absorb the impact • The simulated motion is smoothly blended into the motion data. Final results demo

  31. An alternative for response motion synthesis [Zordan 2005] Problem: generating dynamic response motion to an external impact. Insight: • Dynamics is often needed only for a short time (a burst) • After that, the utility of the dynamics decreases due to the lack of good behavior control • Return to mocap once the character becomes “conscious” again

  32. Generating dynamic response motion • Transition to simulation when an impact takes place • Search the motion data for a transition-to sequence similar to the simulated response motion • Run a second simulation with joint-torque controllers actuating the character toward the matching motion • Final blending eliminates the discontinuity between the simulated and transition-to motions

  33. Motion selection • Aim: find a transition-to motion • Frame windows are compared between the simulation and the motion data • Frames are aligned so that the root position and orientation of the start frame in each window coincide • Distance between windows: D = Σi wi Σb ( wpb ‖pb − p′b‖ + wθb dist(θb, θ′b) ) • pb, θb: body-part position and orientation • wi: window weight, a quadratic function with its highest value at the start frame, decreasing for subsequent frames • wpb, wθb: linear and angular distance scales for each body part
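A sketch of the window-weighted comparison. The array layout is an assumption for illustration: each frame stores, per body part, 3 position coordinates plus a 3-vector orientation (e.g., an exponential map), and orientation distance is taken as a plain vector norm:

```python
import numpy as np

def window_weights(n):
    """Quadratic window weights: highest at the start frame, decreasing
    for subsequent frames, normalized to sum to 1."""
    t = np.linspace(0.0, 1.0, n)
    w = (1.0 - t) ** 2
    return w / w.sum()

def window_distance(sim_window, data_window, w_lin=1.0, w_ang=1.0):
    """sim_window, data_window: (W, B, 6) arrays -- W frames, B body parts,
    3 position coords then 3 orientation coords per part.
    Returns the window-weighted sum of position and orientation distances."""
    w = window_weights(sim_window.shape[0])
    pos = np.linalg.norm(sim_window[..., :3] - data_window[..., :3], axis=-1)
    ang = np.linalg.norm(sim_window[..., 3:] - data_window[..., 3:], axis=-1)
    per_frame = (w_lin * pos + w_ang * ang).sum(axis=1)   # sum over body parts
    return float((w * per_frame).sum())
```

Weighting early frames most heavily means the search favors clips that match the character right at the moment of transition.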

  34. Transition motion synthesis • Aim: generate the motion to fill the gap between the beginning of the interaction and the found motion data • Realized in 2 steps: • Run a second simulation to track an intermediate sequence • Blend the physically generated motion into the transition-to motion data

  35. Transition motion synthesis • Simulation 2 • An inertia-scaled PD servo is used to compute the torque at each joint • The tracked sequence is generated by blending the start and end frames using SLERP with an ease-in/ease-out • A deliberate delay in tracking is introduced to make the reaction realistic
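A self-contained sketch of the blending ingredients named above: quaternion SLERP plus a smoothstep ease-in/ease-out weight (the inertia-scaled PD servo itself is omitted):

```python
import math

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions (w, x, y, z)."""
    dot = sum(a * b for a, b in zip(q0, q1))
    if dot < 0.0:                      # take the shorter arc
        q1, dot = [-c for c in q1], -dot
    if dot > 0.9995:                   # nearly parallel: lerp + renormalize
        q = [a + t * (b - a) for a, b in zip(q0, q1)]
        n = math.sqrt(sum(c * c for c in q))
        return [c / n for c in q]
    omega = math.acos(dot)
    s0 = math.sin((1.0 - t) * omega) / math.sin(omega)
    s1 = math.sin(t * omega) / math.sin(omega)
    return [s0 * a + s1 * b for a, b in zip(q0, q1)]

def ease_in_out(t):
    """Smoothstep weighting: zero slope at both endpoints."""
    return t * t * (3.0 - 2.0 * t)
```

Each joint rotation in the tracked sequence would then be `slerp(q_start, q_end, ease_in_out(t))` for t running from 0 to 1 over the transition.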

  36. Conclusion • Hybrid approaches • Complex dynamic behaviors are hard to model physically • A viable option for synthesizing character motion under a wider range of situations • Able to incorporate unpredictable interactions, especially in games • Making it more practical • Automatic computation of motion-controller parameters [Allen 2007] • Speeding up search via a pre-learned model [Zordan 2007]

  37. References • Mandel, M. 2004. Versatile and interactive virtual humans: Hybrid use of data-driven and dynamics-based motion synthesis. Master's thesis, Carnegie Mellon University. • Zordan, V. B., Majkowska, A., Chiu, B., Fast, M. 2005. Dynamic response for motion capture animation. ACM Trans. Graph. 24, 3, 697–701. • Allen, B., Chu, D., Shapiro, A., Faloutsos, P. 2007. On the Beat! Timing and Tension for Dynamic Characters. ACM SIGGRAPH/Eurographics Symposium on Computer Animation. • Zordan, V. B., Macchietto, A., Medina, J., Soriano, M., Wu, C.-C. 2007. Interactive Dynamic Response for Games. ACM SIGGRAPH Sandbox Symposium.
