
Apprenticeship Learning for Robotics, with Application to Autonomous Helicopter Flight




Presentation Transcript


  1. Apprenticeship Learning for Robotics, with Application to Autonomous Helicopter Flight Pieter Abbeel Stanford University Joint work with: Andrew Y. Ng, Adam Coates, J. Zico Kolter and Morgan Quigley

  2. Outline • Preliminaries: reinforcement learning. • Apprenticeship learning algorithms. • Experimental results on various robotic platforms.

  3. Reinforcement learning (RL) [Diagram: system dynamics P_sa take the state from s0 through s1, s2, …, sT under actions a0, …, aT-1.] Reward: R(s0) + R(s1) + … + R(sT-1) + R(sT). Example reward function: R(s) = −||s − s*||. Goal: pick actions over time so as to maximize the expected score: E[R(s0) + R(s1) + … + R(sT)]. Solution: a policy π which specifies an action for each possible state for all times t = 0, 1, …, T.
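The expected score above can be estimated by Monte Carlo rollouts. A minimal sketch, with a made-up one-dimensional system standing in for the dynamics P_sa (the dynamics, policy, and reward here are illustrative, not from the talk):

```python
import numpy as np

def expected_return(policy, dynamics, reward, s0, T, n_rollouts=1000):
    """Monte Carlo estimate of E[R(s0) + R(s1) + ... + R(sT)]."""
    rng = np.random.default_rng(0)
    total = 0.0
    for _ in range(n_rollouts):
        s = np.array(s0, dtype=float)
        score = reward(s)
        for t in range(T):
            a = policy(s, t)
            s = dynamics(s, a, rng)       # sample s_{t+1} ~ P_sa
            score += reward(s)
        total += score
    return total / n_rollouts

# Toy 1-D system drifting toward a target s* = 0 (all of this is illustrative).
s_star = np.zeros(1)
reward = lambda s: -np.linalg.norm(s - s_star)              # R(s) = -||s - s*||
dynamics = lambda s, a, rng: s + a + 0.01 * rng.standard_normal(1)
policy = lambda s, t: -0.5 * s                              # move halfway to target

ret = expected_return(policy, dynamics, reward, s0=[1.0], T=10)   # ≈ -2
```

A policy that does nothing scores far worse here, since the state never approaches the target.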

  4. Model-based reinforcement learning • Run the RL algorithm in a simulator (the learned dynamics model) to obtain the control policy π.

  5. Reinforcement learning (RL) • Apprenticeship learning algorithms use a demonstration to help us find: • a good reward function, • a good dynamics model, • a good control policy. [Diagram: dynamics model P_sa and reward function R feed into reinforcement learning, which outputs the control policy π.]

  6. Apprenticeship learning: reward [Diagram: dynamics model P_sa and reward function R feed into reinforcement learning, which outputs the control policy π; here the reward function R is the learned component.]

  7. Many reward functions: complex trade-off • Reward function trades off: • Height differential of terrain. • Gradient of terrain around each foot. • Height differential between feet. • … (25 features total for our setup)
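A reward that trades off many terrain features is typically represented as a linear combination R(s) = wᵀφ(s), with the weights w recovered from the demonstration by inverse RL. A toy sketch; the three feature names and the weight values are invented for illustration (the actual setup uses 25 features):

```python
import numpy as np

# Hypothetical terrain features phi(s) for one footstep; the talk's setup uses
# 25 such features, and the names/weights below are invented for illustration.
def features(state):
    return np.array([
        state["height_diff_terrain"],   # height differential of terrain
        state["terrain_gradient"],      # gradient of terrain around the foot
        state["height_diff_feet"],      # height differential between feet
    ])

def reward(state, w):
    """Reward as a learned linear trade-off: R(s) = w . phi(s)."""
    return float(w @ features(state))

w = np.array([-1.0, -0.5, -2.0])        # weights a learner might recover
s = {"height_diff_terrain": 0.1, "terrain_gradient": 0.3, "height_diff_feet": 0.05}
r = reward(s, w)                        # -0.1 - 0.15 - 0.1 = -0.35
```

Inverse RL then searches for w such that the demonstrated behavior scores at least as well as any alternative.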

  8. Example result [ICML 2004, NIPS 2008]

  9. Reward function for aerobatics? • Compact description: reward function ~ trajectory (rather than a trade-off).

  10. Reward: Intended trajectory • Perfect demonstrations are extremely hard to obtain. • Multiple trajectory demonstrations: • Every demonstration is a noisy instantiation of the intended trajectory. • The noise model captures (among other effects): • Position drift. • Time warping. • If different demonstrations are suboptimal in different ways, they can capture the “intended” trajectory implicitly. • [Related work: Atkeson & Schaal, 1997.]

  11. Example: airshow demos

  12. Probabilistic graphical model for multiple demonstrations

  13. Learning algorithm • Step 1: find the time-warping and the distributional parameters. • We use EM, with dynamic time warping, to alternately optimize over the different parameters. • Step 2: find the intended trajectory.
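The time-alignment in Step 1 can be illustrated with classic dynamic time warping on its own. A self-contained sketch (plain DTW on scalar sequences; the talk's version runs inside EM with a richer probabilistic noise model):

```python
import numpy as np

def dtw_align(x, y):
    """Dynamic time warping: minimal cumulative cost to align sequences x, y,
    plus the warping path (pairs of aligned indices)."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack from the corner to recover the warping path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return D[n, m], path[::-1]

# Two demos of the same motion, one lagging the other in time.
a = [0.0, 0.0, 1.0, 2.0, 1.0, 0.0]
b = [0.0, 1.0, 2.0, 1.0, 0.0, 0.0]
cost, path = dtw_align(a, b)    # cost 0: the sequences match after warping
```

After alignment, corresponding points of the demos can be averaged (or, as in the talk, fed into the graphical model) to estimate the intended trajectory.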

  14. After time-alignment

  15. Apprenticeship learning for the dynamics model [Diagram: dynamics model P_sa and reward function R feed into reinforcement learning, which outputs the control policy π; here the dynamics model P_sa is the learned component.]

  16. Apprenticeship learning for the dynamics model • Algorithms such as E3 (Kearns and Singh, 2002) learn the dynamics by using exploration policies, which are dangerous/impractical for many systems. • Our algorithm: • Initializes the model from a demonstration. • Repeatedly executes “exploitation policies” that try to maximize rewards. • Provably achieves near-optimal performance (compared to the teacher). • Machine learning theory: • Complicated non-IID sample-generating process. • Standard learning-theory bounds are not applicable. • The proof uses a martingale construction over relative losses. [ICML 2005]
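The exploitation-only loop can be sketched on a toy linear system: initialize the dynamics model from demonstration data, then alternate planning greedily against the current model and refitting on the states actually visited. Everything below (the scalar dynamics, the least-squares model class, the planner) is an illustrative stand-in, not the paper's setup:

```python
import numpy as np

def apprenticeship_model_learning(demo_data, fit_model, plan, execute, n_iters=5):
    """Schematic no-exploration loop: initialize the dynamics model from a
    demonstration, then repeatedly exploit and refit on the data collected."""
    data = list(demo_data)
    model = fit_model(data)
    for _ in range(n_iters):
        policy = plan(model)             # exploitation policy for current model
        data.extend(execute(policy))     # real-system rollout of (s, a, s') tuples
        model = fit_model(data)          # refit on all data so far
    return model, policy

# Toy instantiation: scalar linear system s' = A s + B a (A, B unknown to learner).
rng = np.random.default_rng(0)
true_A, true_B = 0.8, 1.0

def fit_model(data):
    X = np.array([[s, a] for s, a, _ in data])
    y = np.array([sp for _, _, sp in data])
    theta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return theta                          # estimated [A, B]

def plan(model):
    A, B = model
    return lambda s: -A / B * s           # drive the model-predicted next state to 0

def execute(policy, s0=1.0, T=10):
    traj, s = [], s0
    for _ in range(T):
        a = policy(s) + 0.01 * rng.standard_normal()   # small actuation noise
        sp = true_A * s + true_B * a
        traj.append((s, a, sp))
        s = sp
    return traj

demo = execute(lambda s: -0.5 * s)        # stand-in for the teacher demonstration
model, policy = apprenticeship_model_learning(demo, fit_model, plan, execute)
```

Because the demonstration already visits the relevant part of the state space, the greedy policies keep the system in well-modeled regions and no explicit exploration is needed.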

  17. Learning the dynamics model • Details of algorithm for learning dynamics from data: • Exploiting structure from physics. • Lagged learning criterion. [NIPS 2005, 2006]

  18. Related work • Bagnell & Schneider, 2001; LaCivita et al., 2006; Ng et al., 2004a; Roberts et al., 2003; Saripalli et al., 2003; Ng et al., 2004b; Gavrilets, Martinos, Mettler and Feron, 2002. • The maneuvers presented here are significantly more difficult than those flown by any other autonomous helicopter.

  19. Autonomous nose-in funnel

  20. Accuracy

  21. Non-stationary maneuvers • Modeling is extremely complex: • Our dynamics model state: position, orientation, velocity, angular rate. • True state: air (!), head speed, servos, deformation, etc. • Key observation: in the vicinity of a specific point along a specific trajectory, these unknown state variables tend to take on similar values.

  22. Example: z-acceleration

  23. Local model learning algorithm 1. Time-align trajectories. 2. Learn locally weighted models in the vicinity of the trajectory, with weights W(t′) = exp(−(t − t′)² / (2σ²)).
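Step 2 amounts to weighted least squares with the Gaussian time kernel above. A sketch on synthetic data, where a gain drifting along the trajectory mimics the effect of the unmodeled "hidden" state variables (the data, kernel width, and model class are all illustrative):

```python
import numpy as np

def local_model(t_query, times, X, y, sigma=5.0):
    """Fit an affine model y ~ theta . [x, 1] by weighted least squares, with
    weights W(t') = exp(-(t_query - t')**2 / (2 * sigma**2)) so that data
    collected near the same point along the trajectory dominate the fit."""
    w = np.exp(-(times - t_query) ** 2 / (2.0 * sigma ** 2))
    Xb = np.hstack([X, np.ones((len(X), 1))])
    sw = np.sqrt(w)
    theta, *_ = np.linalg.lstsq(sw[:, None] * Xb, sw * y, rcond=None)
    return theta

# Synthetic data: an input-to-z-acceleration gain that drifts along the
# trajectory, standing in for the unmodeled state (airflow, head speed, ...).
times = np.arange(100.0)
X = np.random.default_rng(0).uniform(-1, 1, size=(100, 1))
true_gain = 1.0 + 0.02 * times
y = true_gain * X[:, 0]

theta_early = local_model(10.0, times, X, y)   # recovers the local gain near t = 10
theta_late = local_model(90.0, times, X, y)    # a distinctly larger gain near t = 90
```

Each query point along the trajectory thus gets its own local linear model, rather than one global model that averages over the whole maneuver.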

  24. Autonomous flips

  25. Apprenticeship learning: RL algorithm [Diagram: dynamics model P_sa and reward function R feed into reinforcement learning, which outputs the control policy π.] • (Crude) model. [None of the demos is exactly equal to the intended trajectory.] • (Sloppy) demonstration or initial trial. • Small number of real-life trials.

  26. Algorithm Idea • Input to algorithm: approximate model. • Start by computing the optimal policy according to the model. [Figure: the real-life trajectory deviates from the target trajectory.] • The policy is optimal according to the model, so no improvement is possible based on the model.

  27. Algorithm Idea (2) • Update the model such that it becomes exact for the current policy.


  29. Algorithm Idea (2) • The updated model perfectly predicts the state sequence obtained under the current policy. • We can use the updated model to find an improved policy.

  30. Algorithm • Find the (locally) optimal policy π for the model. • Execute the current policy π and record the state trajectory. • Update the model such that the new model is exact for the current policy π. • Use the new model to compute the policy gradient ∇ and update the policy: π := π + α∇. • Go back to Step 2. Notes: • The step-size parameter α is determined by a line search. • Instead of the policy gradient, any algorithm that provides a local policy improvement direction can be used. In our experiments we used differential dynamic programming.
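The loop can be sketched on a deliberately crude model of a drifting 1-D system: time-indexed bias terms make the model exact along the current real-life trajectory, and a numerical policy gradient through the corrected model improves the policy. All specifics below (the dynamics, the quadratic cost, the fixed step size in place of a line search) are illustrative:

```python
import numpy as np

H = 5
drift = 0.3                                  # unmodeled effect in the real system
real_step = lambda s, a: s + a + drift       # true dynamics (unknown to learner)
model_step = lambda s, a: s + a              # crude model misses the drift

def rollout(actions, step, s0=0.0):
    states, s = [s0], s0
    for a in actions:
        s = step(s, a)
        states.append(s)
    return np.array(states)

def model_cost(actions, bias):
    """Cost under the bias-corrected model; the target trajectory is s_t = 0."""
    s, c = 0.0, 0.0
    for t in range(H):
        s = model_step(s, actions[t]) + bias[t]
        c += s ** 2
    return c

actions = np.zeros(H)      # locally optimal for the uncorrected model
alpha, eps = 0.02, 1e-4    # fixed step size (a line search in the original)
for _ in range(1000):
    real = rollout(actions, real_step)
    # Add time-indexed bias terms so the model reproduces the real-life
    # state trajectory exactly under the current policy.
    bias = np.array([real[t + 1] - model_step(real[t], actions[t])
                     for t in range(H)])
    # Policy gradient through the corrected model (finite differences).
    base = model_cost(actions, bias)
    grad = np.zeros(H)
    for t in range(H):
        pert = actions.copy()
        pert[t] += eps
        grad[t] = (model_cost(pert, bias) - base) / eps
    actions -= alpha * grad
```

After convergence the open-loop actions cancel the unmodeled drift and the real system tracks the target trajectory, even though the nominal model never learned the drift.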

  31. Performance Guarantees • Let the local policy improvement algorithm be policy gradient. Notes: • These assumptions are insufficient to give the same performance guarantees for model-based RL. • The constant K depends only on the dimensionality of the state, action, and policy parameterization, the horizon H, and an upper bound on the first and second derivatives of the transition model, the policy and the reward function.

  32. Experimental Setup • Our expert pilot provides 5–10 demonstrations. • Our algorithm: • aligns trajectories, • extracts the intended trajectory as the target, • learns local models. • We repeatedly run the controller and collect model errors until satisfactory performance is obtained. • We use receding-horizon differential dynamic programming (DDP) to find the controller.

  33. Airshow • [Switch to Quicktime for HD airshow.]

  34. Airshow accuracy

  35. Tic-toc

  36. Chaos • [Switch to Quicktime for HD chaos.]

  37. Conclusion • Apprenticeship learning algorithms help us find better controllers by exploiting teacher demonstrations. • Algorithmic instantiations: • Inverse reinforcement learning • Learn trade-offs in reward. • Learn “intended” trajectory. • Model learning • No explicit exploration. • Local models. • Control with crude model + small number of trials.

  38. Current and future work • Automate more general advice taking. • Guaranteed safe exploration: safely learning to outperform the teacher. • Autonomous helicopters: • Assist in wildland firefighting. • Auto-rotation landings. • Fixed-wing formation flight: potential savings for even a three-aircraft formation: 20%.

  39. References • Apprenticeship Learning via Inverse Reinforcement Learning, Pieter Abbeel and Andrew Y. Ng. In Proc. ICML, 2004. • Learning First Order Markov Models for Control, Pieter Abbeel and Andrew Y. Ng. In NIPS 17, 2005. • Exploration and Apprenticeship Learning in Reinforcement Learning, Pieter Abbeel and Andrew Y. Ng. In Proc. ICML, 2005. • Modeling Vehicular Dynamics, with Application to Modeling Helicopters, Pieter Abbeel, Varun Ganapathi and Andrew Y. Ng. In NIPS 18, 2006. • Using Inaccurate Models in Reinforcement Learning, Pieter Abbeel, Morgan Quigley and Andrew Y. Ng. In Proc. ICML, 2006. • An Application of Reinforcement Learning to Aerobatic Helicopter Flight, Pieter Abbeel, Adam Coates, Morgan Quigley and Andrew Y. Ng. In NIPS 19, 2007. • Hierarchical Apprenticeship Learning with Application to Quadruped Locomotion, J. Zico Kolter, Pieter Abbeel and Andrew Y. Ng. In NIPS 20, 2008.
