INFOMCANIM - Research paper presentation
Combining Recurrent Neural Networks and Adversarial Training for Human Motion Modelling, Synthesis and Control
Cengizhan Can, Phoebe de Nooijer
Motivation of the paper
• Focus: Generative model for human motion synthesis and control
• Motion synthesis: the algorithmic generation of new motion sequences
• Control: how effectively and quickly animations can be changed
• Generative model: a generative model learns the joint probability distribution p(x, y), whereas a discriminative model learns the conditional probability distribution p(y|x)
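As a quick aside (not from the paper itself, just the standard product rule), the two distributions are related as follows, which is why a generative model implicitly contains a discriminative one at the cost of also modelling p(x):

p(x, y) = p(y \mid x)\, p(x) \quad\Longrightarrow\quad p(y \mid x) = \frac{p(x, y)}{\sum_{y'} p(x, y')}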
Motivation of the paper
What is wrong with current approaches?
• Current deep RNN-based methods often have difficulty obtaining good performance for long-term motion generation
• Specifically, long-term results suffer from occasional unrealistic artifacts
High-level overview
The key idea is to combine recurrent neural networks and adversarial training for human motion modelling:
• Constructing a generative deep learning model from a large set of prerecorded motion data
• Using a "refiner network" with an adversarial loss
• The result can randomly generate an infinite number of high-quality motions of unbounded length
Problem statement: How can Recurrent Neural Networks for Human Motion Modelling, Synthesis and Control be improved?
Related work and background information
Shrivastava et al.:
• Adversarial network to improve the realism of synthetic images using unlabeled real image data
• GAN and RNN
Technical details of the approach
Feature representation:
• Joint angle poses
• Character states
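A minimal sketch of what such a per-frame feature could look like in code; the exact layout (which quantities go into the pose and character-state parts) is an assumption for illustration, not taken from the paper:

from dataclasses import dataclass
import numpy as np

@dataclass
class FrameFeature:
    """One per-frame motion feature (layout is an assumption for illustration)."""
    joint_angles: np.ndarray   # e.g. flattened joint rotations of the skeleton
    root_state: np.ndarray     # e.g. root translation/rotation relative to the previous frame

    def to_vector(self) -> np.ndarray:
        # Concatenate everything into the flat vector the RNN consumes per time step.
        return np.concatenate([self.joint_angles, self.root_state])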
Technical details of the approach
• Input: the hidden state and the current frame's feature
• Output: a probabilistic distribution over the next frame's feature
Technical details of the approach
• Hidden states
• Back-Propagation Through Time (BPTT)
• Long Short-Term Memory (LSTM) cells
• Probabilistic distribution modelled as a Gaussian Mixture Model (GMM)
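A minimal PyTorch sketch of such a generator: an LSTM whose output parameterises a Gaussian Mixture Model over the next frame's feature vector. This is an illustrative reconstruction, not the authors' code; the layer sizes, mixture count, and diagonal-covariance GMM are assumptions.

import torch
import torch.nn as nn

class MotionGenerator(nn.Module):
    """LSTM that outputs GMM parameters for the next-frame feature (illustrative sketch)."""
    def __init__(self, feature_dim: int, hidden_dim: int = 512, n_mixtures: int = 8):
        super().__init__()
        self.lstm = nn.LSTM(feature_dim, hidden_dim, batch_first=True)
        self.n_mixtures = n_mixtures
        self.feature_dim = feature_dim
        # Per mixture component: one weight, a mean vector and a (diagonal) log-variance vector.
        self.head = nn.Linear(hidden_dim, n_mixtures * (1 + 2 * feature_dim))

    def forward(self, x, hidden=None):
        # x: (batch, time, feature_dim); `hidden` carries the recurrent state between calls.
        out, hidden = self.lstm(x, hidden)
        params = self.head(out)
        logits, means, log_var = torch.split(
            params,
            [self.n_mixtures,
             self.n_mixtures * self.feature_dim,
             self.n_mixtures * self.feature_dim],
            dim=-1,
        )
        weights = torch.softmax(logits, dim=-1)          # mixture weights
        means = means.view(*means.shape[:-1], self.n_mixtures, self.feature_dim)
        std = torch.exp(0.5 * log_var).view_as(means)    # per-dimension standard deviations
        return weights, means, std, hidden

At synthesis time one samples a frame from this mixture and feeds it back in as the next input, which is what allows generation of arbitrarily long sequences.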
Technical details of the approach
RNN training strategies:
• Adding noise
• Downsampling
• Optimization method
• Training data set size
(Figure: process of this model)
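One common way to realise the "adding noise" strategy (an assumption here: perturbing the inputs during training so long rollouts stay robust to the model's own prediction errors) is sketched below, reusing the hypothetical MotionGenerator above and a GMM negative log-likelihood loss:

import torch

def training_step(model, optimizer, batch, noise_std=0.05):
    """One training step with Gaussian noise added to the inputs (illustrative sketch).

    batch: (B, T, D) clean motion features; the model predicts frame t+1 from frames <= t.
    """
    inputs = batch[:, :-1] + noise_std * torch.randn_like(batch[:, :-1])
    targets = batch[:, 1:]
    weights, means, std, _ = model(inputs)

    # Negative log-likelihood of the target frames under the predicted Gaussian mixture.
    comp = torch.distributions.Independent(
        torch.distributions.Normal(means, std), 1)
    mix = torch.distributions.MixtureSameFamily(
        torch.distributions.Categorical(probs=weights), comp)
    loss = -mix.log_prob(targets).mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()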
Technical details of the approach
• Refiner network
• Discriminative model
• Motion regularization
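A rough sketch of how these three parts could fit together, in the spirit of the SimGAN-style setup of Shrivastava et al. that the paper builds on. Network sizes, the residual refiner, the per-frame discriminator, and the L1 regularization term are all assumptions for illustration:

import torch
import torch.nn as nn

class Refiner(nn.Module):
    """Refines a synthesized motion window so it looks more like real motion capture (sketch)."""
    def __init__(self, feature_dim: int, hidden_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feature_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, feature_dim),
        )

    def forward(self, motion):                 # motion: (B, T, D)
        return motion + self.net(motion)       # residual refinement keeps output close to input

class Discriminator(nn.Module):
    """Scores whether a motion window comes from real data or from the refiner (sketch)."""
    def __init__(self, feature_dim: int, hidden_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feature_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, motion):
        return self.net(motion).mean(dim=1)    # one realism score per window

def refiner_loss(disc, synthetic, refined, reg_weight=10.0):
    # Adversarial term: fool the discriminator into calling the refined motion "real" (label 1).
    adv = nn.functional.binary_cross_entropy_with_logits(
        disc(refined), torch.ones(refined.size(0), 1))
    # Regularization term: the refined motion should stay close to the generator's output.
    reg = (refined - synthetic).abs().mean()
    return adv + reg_weight * reg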
Technical details of the approach
GAN training strategies:
• Training the generative model more
• Using a history of refined motions
• Adjusting the training strategy when one of the models becomes too strong
(Figure: process of this model)
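The "history of refined motions" strategy follows Shrivastava et al.: the discriminator is also shown refined samples from earlier iterations, so the refiner cannot drift and then re-fool it with artifacts it had already been caught on. A minimal sketch of such a buffer (the capacity and half-and-half mixing are assumptions):

import random

class RefinedHistoryBuffer:
    """Keeps refined motion windows from earlier training iterations (illustrative sketch)."""
    def __init__(self, capacity: int = 1000):
        self.capacity = capacity
        self.buffer = []

    def push_and_sample(self, refined_batch, half: int):
        """Store new refined windows and return `half` old ones to mix into the
        discriminator's next batch (half freshly refined, half from history)."""
        old = random.sample(self.buffer, k=min(half, len(self.buffer)))
        for window in refined_batch:
            if len(self.buffer) < self.capacity:
                self.buffer.append(window.detach())
            else:
                # Overwrite a random old entry so the buffer keeps rotating.
                self.buffer[random.randrange(self.capacity)] = window.detach()
        return old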
Technical details of the approach
Motion model in use:
• Random motion generation
• Offline motion design
• Online motion control
• Motion denoising
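For the first of these, random motion generation, the model is rolled out autoregressively: sample a frame from the predicted mixture at each step and feed it back in as the next input. A sketch using the hypothetical MotionGenerator above (the sampling details are assumptions):

import torch

@torch.no_grad()
def generate_motion(model, seed_frame, n_frames: int):
    """Autoregressively roll out a motion of arbitrary length (illustrative sketch).

    seed_frame: (1, 1, D) starting pose feature.
    """
    frames, x, hidden = [], seed_frame, None
    for _ in range(n_frames):
        weights, means, std, hidden = model(x, hidden)
        # Pick a mixture component, then sample a Gaussian frame around its mean.
        k = torch.distributions.Categorical(probs=weights[0, -1]).sample()
        x = torch.normal(means[0, -1, k], std[0, -1, k]).view(1, 1, -1)
        frames.append(x.squeeze())
    return torch.stack(frames)          # (n_frames, D)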
Critical analysis of the approach & evaluation
• Highly successful approach
• Demo only uses 'stick figures'
• Might not work well on aperiodic motions?
• Effectiveness only researched in the area of motion synthesis and control
• What about motion tracking, motion recognition, etc.?
Possible improvements of the paper (future work)
Two largest disadvantages:
• Limited set of animations: more are needed, e.g. sprinting, crawling, toe walking
• Not able to run on mobile applications
Discussion & Questions
What are your questions?
• Could future implementations significantly reduce video game development time?
• Could this technique be useful for simulations?