Animation CS 551 / 651 Dynamics Modeling and Culling Chenney, Ichnowski, and Forsyth
The world is full of moving things • Cars, people, clouds, leaves on a tree • Lasseter believes everything must be moving to look “right” • Dynamics • The equations that define how they move • Simulation • The process of computing the dynamics
Simulation makes the world go ‘round • Simulation is expensive • Small timesteps for dynamics computations • Lots of moving limbs • Flexible objects like hair/cloth • Collision checks (O(n²))
Reduce costs of simulation • Perception permits simplification • What simulation fidelity is needed? • Out-of-view • No need to render correct movements • What happens when object returns to view? • Distant or in periphery • Some part of simulation must be accurate • Other parts can be approximated
Building simplifications • How is simplification constructed? • Cull DOFs • Reduce temporal resolution • Permit more collisions • Current technology: simplify by hand
Preserving accuracy • Graceful degradation • Suspension of disbelief • If simplified thing looks unrealistic, belief in “virtual” world may be jeopardized • Accuracy of outcome • If simplified thing behaves differently, outcome of game or training application may be wrong
Related work • Geometric level of detail (LOD) • Cost of rendering geometry must be justified • Perceptually based metrics • Geometric simplification algorithms • Visibility culling • Do these translate to simulation? Funkhouser and Sequin, 1993
In a perfect world • For each frame • Compute effect on realism vs. all simplifications • Set “reality” dial on each object to suit its importance
Simplifying periodic systems • What does periodicity buy us? • Object’s description is a function of where it is relative to one “cycle” • Find “t”, where it is in the cycle • Build f(t), a function mapping t to system state • Predicting where the blue-line bus is vs. predicting where Osama is
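A minimal sketch of this idea (not from the slides), assuming one full cycle of the true motion has been recorded at known sample times; the names `period`, `recorded_times`, and `recorded_states` are illustrative. Simple interpolation of the recorded cycle stands in here for the neural network introduced on the following slides.

```python
import numpy as np

def periodic_state(elapsed, period, recorded_times, recorded_states):
    """Approximate a periodic system's state from one recorded cycle.

    recorded_times  -- sample times within [0, period), shape (n,)
    recorded_states -- sampled DOF values at those times, shape (n, n_dofs)
    """
    recorded_states = np.asarray(recorded_states)
    t = elapsed % period  # where we are within the cycle
    # Interpolate each DOF independently between the recorded samples,
    # wrapping so the end of the cycle connects back to the start.
    return np.array([np.interp(t, recorded_times, recorded_states[:, d],
                               period=period)
                     for d in range(recorded_states.shape[1])])
```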
Roller coaster • Where is the car and what is its orientation?
Roller coaster • Build mapping, f(t) • Observe position/orientation of car during one cycle • How long is a cycle? • Train neural network to correctly predict mapping • f(t) = x, y, z, roll, pitch, yaw • Neural net is just a function approximator, so it can do this!
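A sketch of this training step under stated assumptions: scikit-learn’s `MLPRegressor` is used as a generic function approximator in place of the paper’s network, and a made-up circular track stands in for the observed cycle.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Stand-in for one observed cycle: a circular track traced over `period` seconds.
period = 30.0
cycle_times = np.linspace(0.0, period, 300, endpoint=False).reshape(-1, 1)
theta = 2 * np.pi * cycle_times[:, 0] / period
cycle_poses = np.stack([np.cos(theta), np.sin(theta), 0.1 * np.sin(2 * theta),
                        np.zeros_like(theta), np.zeros_like(theta), theta],
                       axis=1)  # (x, y, z, roll, pitch, yaw)

# Train a small network as the function approximator f(t) -> pose.
f = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
f.fit(cycle_times, cycle_poses)

# Playback: just track t, wrap it into the cycle, and evaluate the net.
def coaster_pose(elapsed):
    return f.predict(np.array([[elapsed % period]]))[0]
```

Playback then only needs to track t and increment it each frame, exactly as the next slide describes.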
Roller coaster • Using the simplified model • Replace true dynamics with neural network • Just keep track of t and increment • A lot like motion capture
Roller coaster • Are there shortcomings with using motion capture? • Not responsive to changes in environment • Not alterable • Does it matter? • Use this simplification when responsiveness and flexibility are not required
Simplifying non-periodic systems • What does non-periodicity buy us? • People aren’t good at predicting future states • There is room for error/noise/approximation • People get worse at predicting as time elapses • Short lapses are predicted using extrapolation • Longer lapses are predicted using generalization • Really long lapses lack preconceptions • Examples of these?
Tilt-a-whirl • Where are all the cars? • A chaotic system where physics matters
Tilt-a-whirl • Short time lapses • Use previous state as a basis for prediction of future states • Extrapolation of accelerations and velocities
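A minimal sketch of this extrapolation, assuming the last known per-DOF position, velocity, and acceleration are available (all names and numbers are illustrative):

```python
import numpy as np

def extrapolate_state(position, velocity, acceleration, dt):
    """Predict the state a short time dt after the last known state
    by constant-acceleration extrapolation."""
    new_velocity = velocity + acceleration * dt
    new_position = position + velocity * dt + 0.5 * acceleration * dt ** 2
    return new_position, new_velocity

# Example: one car's angle around the platform (made-up values).
pos, vel = extrapolate_state(np.array([1.2]), np.array([0.8]),
                             np.array([-0.1]), dt=0.1)
```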
Tilt-a-whirl • Medium time lapses • Use previous state as a basis for prediction of future states • Extrapolation only works for small dt • Use neural network to model change in state after dt seconds have passed • f(state_t) = state_(t+dt)
Tilt-a-whirl • Medium time lapses • Training a neural network • Sample system at time t • Sample system at time t + dt • Network has one input for each DOF • Network has one output for each DOF • Train the network to predict the state at t + dt
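A sketch of this data-collection step under stated assumptions: a toy damped oscillator stands in for the real tilt-a-whirl dynamics, and `MLPRegressor` again stands in for the paper’s network, with one input and one output per DOF.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def collect_pairs(step, initial_state, dt, n_samples, h=0.001):
    """Run the true simulation and record (state_t, state_t+dt) pairs."""
    inputs, targets = [], []
    state = np.asarray(initial_state, dtype=float)
    substeps = int(round(dt / h))
    for _ in range(n_samples):
        start = state.copy()
        for _ in range(substeps):
            state = step(state, h)      # advance the true dynamics
        inputs.append(start)
        targets.append(state.copy())
    return np.array(inputs), np.array(targets)

# Toy stand-in for the real dynamics: a damped oscillator with state
# (position, velocity); the real system would have more DOFs.
def toy_step(state, h):
    x, v = state
    return np.array([x + h * v, v + h * (-4.0 * x - 0.1 * v)])

X, Y = collect_pairs(toy_step, [1.0, 0.0], dt=0.25, n_samples=500)
nn_025 = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000,
                      random_state=0).fit(X, Y)   # one network per dt
```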
Tilt-a-whirl • Medium time lapses • A particular neural network only predicts state after dt seconds • What if object pops back into view after ½ dt seconds? • Build a second neural network for ½ dt • Build a third neural network for ¼ dt • …
Tilt-a-whirl • Medium time lapses • Any point in time is approximated by a series of neural networks • Ex: Approximate 3.75 seconds • Let NNs exist for dt = .25, .5, and 1.0 • state_1 = NN_1.0(state_0) • state_2 = NN_1.0(state_1) • state_3 = NN_1.0(state_2) • state_3.5 = NN_0.5(state_3) • state_3.75 = NN_0.25(state_3.5)
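A sketch of that chaining scheme (the `nets` dictionary name is illustrative): apply the largest-dt network as many times as it fits, then fall through to smaller ones.

```python
import numpy as np

def advance(state, elapsed, nets, tol=1e-9):
    """Approximate the state after `elapsed` seconds by chaining
    fixed-dt networks, largest dt first (e.g. dt = 1.0, 0.5, 0.25)."""
    state = np.asarray(state, dtype=float)
    remaining = elapsed
    for dt in sorted(nets, reverse=True):
        while remaining >= dt - tol:
            state = nets[dt].predict(state.reshape(1, -1))[0]
            remaining -= dt
    return state  # any residual smaller than the smallest dt is ignored

# Slide's example: 3.75 s = three steps of 1.0, one of 0.5, one of 0.25.
# state_375 = advance(state_0, 3.75, {1.0: nn_100, 0.5: nn_050, 0.25: nn_025})
```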
Medium time lapses • [Plots: position and velocity after dt, true dynamics vs. neural net approximation]
Medium time lapses • Difference image masked by stationary distribution image
Results • Neural Network Prediction
Tilt-a-whirl • Long time lapses • Previous state is not a starting point for prediction… stochastic • What does the traffic on I-29 look like at 5:00 this afternoon? • I have a basic model, but no bias to previous states • Obviously if an accident happened at 4:00, my prediction would be wrong
Tilt-a-whirl • Long time lapses • How do I build a basic model? • Based on observations • I am more likely to expect system states that occurred frequently in my observations • Some system states will be implausible because of limits on feasibility that I determine
Tilt-a-whirl • Long time lapses • How do I build a basic model? • State of world is defined by DOFs • DOFs define n-dimensional space • Reduce the space to a finite volume • Limits on feasibility • What are min/max for each DOF? • Example: state space of two-joint arm
Tilt-a-whirl • Long time lapses • Discretize state-space volume into cells • Run the simulation for a while • At each timestep, record which cell the system is in • Accumulate counters in each cell • Each cell is assigned a value corresponding to the probability that the system is in that state
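A sketch of this cell-counting step, assuming per-DOF feasibility limits `lows` and `highs` are known and `states` holds the DOF values recorded at each timestep of a long run (all names are illustrative). The normalized visit counts form a stationary distribution that can be sampled when an object has been out of view for a long time.

```python
import numpy as np

def build_stationary_distribution(states, lows, highs, bins_per_dof=20):
    """Accumulate visit counts over a discretized state space.

    states       -- array of shape (n_timesteps, n_dofs) from a long run
    lows, highs  -- per-DOF feasibility limits bounding the volume
    """
    states = np.asarray(states)
    counts, edges = np.histogramdd(
        states,
        bins=[bins_per_dof] * states.shape[1],
        range=list(zip(lows, highs)))
    return counts / counts.sum(), edges   # probability of each cell

def sample_state(probs, edges, rng=np.random.default_rng()):
    """Draw a plausible state: pick a cell, then a point inside it."""
    flat = rng.choice(probs.size, p=probs.ravel())
    idx = np.unravel_index(flat, probs.shape)
    return np.array([rng.uniform(e[i], e[i + 1]) for e, i in zip(edges, idx)])
```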
Which model do we use? • Extrapolation vs. NN vs. Stochastic • NN is accurate for designated dt • Start using NNs at smallest dt • At some point, knowing exact state at time t doesn’t help • As time passes, state of system begins to match the basic prediction of stationary distribution
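Putting the three models together, a hedged sketch of the switching policy this slide suggests; the thresholds are made up, and `extrapolate_state`, `advance`, and `sample_state` refer to the earlier sketches.

```python
def predict_on_return(last_state, last_vel, last_acc, elapsed,
                      nets, probs, edges,
                      extrapolation_limit=0.25, memory_limit=10.0):
    """Pick a prediction method based on how long the object was unseen.

    Thresholds are illustrative; in practice they would be tuned so the
    cheapest adequate model is used at each gap length.
    """
    if elapsed <= extrapolation_limit:
        # Short lapse: constant-acceleration extrapolation is good enough.
        return extrapolate_state(last_state, last_vel, last_acc, elapsed)[0]
    if elapsed <= memory_limit:
        # Medium lapse: chain the fixed-dt neural networks.
        return advance(last_state, elapsed, nets)
    # Long lapse: the last known state no longer helps; draw from the
    # stationary distribution of observed states.
    return sample_state(probs, edges)
```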