Apprenticeship Learning for Robotics, with Application to Autonomous Helicopter Flight. Pieter Abbeel, Stanford University. Joint work with: Andrew Y. Ng, Adam Coates, J. Zico Kolter and Morgan Quigley.
Outline • Preliminaries: reinforcement learning. • Apprenticeship learning algorithms. • Experimental results on various robotic platforms.
Reinforcement learning (RL) • System dynamics Psa: starting from state s0, each action at moves the system to the next state, producing the state sequence s0, s1, s2, …, sT under actions a0, a1, …, aT-1. • Accumulated reward: R(s0) + R(s1) + R(s2) + … + R(sT). • Example reward function: R(s) = −||s − s*||. • Goal: pick actions over time so as to maximize the expected score: E[R(s0) + R(s1) + … + R(sT)]. • Solution: a policy which specifies an action for each possible state, for all times t = 0, 1, …, T.
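The setup above can be sketched in a few lines of code. This is a toy illustration, not from the talk: a hypothetical 1-D system stands in for the dynamics Psa, a proportional controller stands in for the policy, and the expected score E[R(s0) + … + R(sT)] is estimated by Monte-Carlo rollouts.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D system standing in for the dynamics P_sa: the state drifts,
# the action pushes it, and Gaussian noise perturbs it.
def step(s, a):
    return 0.9 * s + a + 0.1 * rng.standard_normal()

s_star = 1.0                       # target state s*
def reward(s):
    return -abs(s - s_star)        # R(s) = -||s - s*||

def policy(s):
    return 0.2 * (s_star - s)      # one action per state: proportional control

# Monte-Carlo estimate of the expected score E[R(s_0) + ... + R(s_T)]
def expected_return(T=20, n_rollouts=500):
    total = 0.0
    for _ in range(n_rollouts):
        s = 0.0
        ret = reward(s)
        for _t in range(T):
            s = step(s, policy(s))
            ret += reward(s)
        total += ret
    return total / n_rollouts
```

Reinforcement learning then amounts to searching over policies for one that maximizes this expected return.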
Model-based reinforcement learning • Run the RL algorithm in a simulator built from the learned dynamics model to obtain a control policy.
Reinforcement learning (RL) • Apprenticeship learning algorithms use a demonstration to help us find: • a good reward function, • a good dynamics model, • a good control policy. [Diagram: Reward Function R + Dynamics Model Psa → Reinforcement Learning → Control policy π]
Apprenticeship learning: reward [Diagram: Reward Function R + Dynamics Model Psa → Reinforcement Learning → Control policy π]
Many reward functions: complex trade-off • Reward function trades off: • Height differential of terrain. • Gradient of terrain around each foot. • Height differential between feet. • … (25 features total for our setup)
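A reward function built from such a trade-off is typically a linear combination of terrain features, R(s) = w·φ(s); inverse reinforcement learning recovers the weights w from demonstrations. A minimal sketch, with hypothetical feature values and weights standing in for the ~25 features mentioned above:

```python
import numpy as np

# Hypothetical features phi(s) for one candidate foot placement
# (stand-ins for: height differential, local gradient, inter-foot
# height difference, ...).
features = np.array([0.05, 0.30, 0.10])

# Hypothetical learned trade-off weights w (inverse RL would recover
# these from expert demonstrations).
weights = np.array([-2.0, -5.0, -1.0])

# Linear reward: R(s) = w . phi(s)
reward = weights @ features
```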
Example result [ICML 2004, NIPS 2008]
Reward function for aerobatics? • Compact description: the reward function is essentially a target trajectory (rather than a trade-off of features).
Reward: Intended trajectory • Perfect demonstrations are extremely hard to obtain. • Multiple trajectory demonstrations: • Every demonstration is a noisy instantiation of the intended trajectory. • Noise model captures (among other effects): • Position drift. • Time warping. • If different demonstrations are suboptimal in different ways, they can capture the “intended” trajectory implicitly. • [Related work: Atkeson & Schaal, 1997.]
Learning algorithm • Step 1: find the time-warping and the distributional parameters. • We use EM and dynamic time warping to alternately optimize over the different parameters. • Step 2: find the intended trajectory.
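The time-alignment step can be illustrated with classic dynamic time warping. This is a generic DTW sketch, not the exact algorithm from the talk (the full method alternates alignment with EM updates of the trajectory distribution):

```python
import numpy as np

def dtw(x, y):
    """Classic dynamic time warping cost between two 1-D sequences.
    D[i, j] holds the minimal cumulative alignment cost of x[:i] and y[:j]."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            # Either advance x, advance y, or advance both.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

Because warping lets one demonstration dwell longer on a maneuver segment than another, a time-stretched copy of a trajectory aligns to the original at zero cost.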
Apprenticeship learning for the dynamics model [Diagram: Reward Function R + Dynamics Model Psa → Reinforcement Learning → Control policy π]
Apprenticeship learning for the dynamics model • Algorithms such as E3 (Kearns and Singh, 2002) learn the dynamics by using exploration policies, which are dangerous/impractical for many systems. • Our algorithm: • Initializes the model from a demonstration. • Repeatedly executes “exploitation policies” that try to maximize rewards. • Provably achieves near-optimal performance (compared to the teacher). • Machine learning theory: • Complicated non-IID sample generating process. • Standard learning theory bounds not applicable. • Proof uses a martingale construction over relative losses. [ICML 2005]
Learning the dynamics model • Details of algorithm for learning dynamics from data: • Exploiting structure from physics. • Lagged learning criterion. [NIPS 2005, 2006]
Related work • Bagnell & Schneider, 2001; LaCivita et al., 2006; Ng et al., 2004a; Roberts et al., 2003; Saripalli et al., 2003; Ng et al., 2004b; Gavrilets, Martinos, Mettler and Feron, 2002. • Maneuvers presented here are significantly more difficult than those flown by any other autonomous helicopter.
Non-stationary maneuvers • Modeling extremely complex: • Our dynamics model state: • Position, orientation, velocity, angular rate. • True state: • Air (!), head-speed, servos, deformation, etc. • Key observation: • In the vicinity of a specific point along a specific trajectory, these unknown state variables tend to take on similar values.
Local model learning algorithm 1. Time-align trajectories. 2. Learn locally weighted models in the vicinity of the trajectory, with weights W(t′) = exp(−(t − t′)² / 2).
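The locally weighted step can be sketched as weighted least squares around each query time, with the Gaussian time weights above. The system below is a made-up scalar example (a single time-varying dynamics coefficient), not the helicopter model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Time-indexed data along an aligned trajectory: next state generated by
# a slowly time-varying linear dynamics coefficient a(t).
T = 100
t_grid = np.arange(T, dtype=float)
s = rng.standard_normal(T)
a_true = 0.5 + 0.3 * np.sin(t_grid / 20)          # unknown local dynamics
s_next = a_true * s + 0.01 * rng.standard_normal(T)

def local_model(t_query, sigma=5.0):
    """Weighted least squares around t_query with Gaussian time weights
    W(t') = exp(-(t - t')^2 / (2 sigma^2))."""
    w = np.exp(-(t_grid - t_query) ** 2 / (2 * sigma ** 2))
    # Closed-form weighted least squares for the scalar fit s_next ~ a * s.
    a_hat = np.sum(w * s * s_next) / np.sum(w * s * s)
    return a_hat

a50 = local_model(50.0)
```

Points near the query time dominate the fit, so the local model tracks the unknown state variables (airflow, head speed, ...) that tend to take similar values at the same point along the trajectory.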
Apprenticeship learning: RL algorithm [Diagram: Reward Function R + Dynamics Model Psa → Reinforcement Learning → Control policy π] • (Crude) model. [None of the demos exactly equal to the intended trajectory.] • (Sloppy) demonstration or initial trial. • Small number of real-life trials.
Algorithm Idea • Input to algorithm: approximate model. • Start by computing the optimal policy according to the model. [Figure: real-life trajectory vs. target trajectory] • The policy is optimal according to the model, so no improvement is possible based on the model.
Algorithm Idea (2) • Update the model such that it becomes exact for the current policy.
Algorithm Idea (2) • The updated model perfectly predicts the state sequence obtained under the current policy. • We can use the updated model to find an improved policy.
Algorithm • 1. Find the (locally) optimal policy for the model. • 2. Execute the current policy and record the state trajectory. • 3. Update the model such that the new model is exact for the current policy. • 4. Use the new model to compute the policy gradient and update the policy: θ := θ + α ∇θ U(θ). • 5. Go back to Step 2. Notes: • The step-size parameter α is determined by a line search. • Instead of the policy gradient, any algorithm that provides a local policy improvement direction can be used. In our experiments we used differential dynamic programming.
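Steps 2 and 3 can be sketched concretely: adding time-indexed bias terms to the crude model makes it reproduce the observed state sequence exactly under the current policy. This is an illustrative scalar example with a made-up system and policy, not the helicopter controller:

```python
import numpy as np

# Real (unknown) dynamics vs. a crude model with the wrong input gain.
def f_real(s, u):  return 0.8 * s + 1.2 * u
def f_model(s, u): return 0.8 * s + 1.0 * u

policy = lambda s: -0.5 * s        # current policy (e.g. computed on the model)

def rollout(f, s0=1.0, T=5):
    """Roll out time-indexed dynamics f(s, u, t) under the current policy."""
    s, traj = s0, [s0]
    for t in range(T):
        s = f(s, policy(s), t)
        traj.append(s)
    return np.array(traj)

# Step 2: execute the current policy on the real system; record the trajectory.
real_traj = rollout(lambda s, u, t: f_real(s, u))

# Step 3: time-indexed bias terms make the model exact along the
# observed trajectory for the current policy.
bias = [real_traj[t + 1] - f_model(real_traj[t], policy(real_traj[t]))
        for t in range(len(real_traj) - 1)]

# The corrected model now reproduces the real state sequence exactly,
# so it can be trusted to evaluate a local policy improvement direction.
corrected_traj = rollout(lambda s, u, t: f_model(s, u) + bias[t])
```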
Performance Guarantees • Let the local policy improvement algorithm be policy gradient. Notes: • These assumptions are insufficient to give the same performance guarantees for model-based RL. • The constant K depends only on the dimensionality of the state, action, and policy (θ), the horizon H, and an upper bound on the 1st and 2nd derivatives of the transition model, the policy, and the reward function.
Experimental Setup • Our expert pilot provides 5-10 demonstrations. • Our algorithm: • aligns trajectories, • extracts the intended trajectory as the target, • learns local models. • We repeatedly run the controller and collect model errors until satisfactory performance is obtained. • We use receding-horizon differential dynamic programming (DDP) to find the controller.
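The receding-horizon idea can be sketched with a linear-quadratic stand-in for DDP: at every step, re-solve a short-horizon optimal control problem and apply only the first action. The scalar system and costs below are made-up placeholders, not the helicopter setup:

```python
# Scalar linear system s' = A s + B u with quadratic cost Q s^2 + R u^2,
# regulating toward s = 0.
A, B, Q, R, H = 1.0, 1.0, 1.0, 0.1, 10   # H = receding-horizon length

def lqr_gain(A, B, Q, R, H):
    """First-step feedback gain from a finite-horizon Riccati recursion --
    the linear-quadratic analogue of a DDP backward pass."""
    P, K = Q, 0.0
    for _ in range(H):
        K = (B * P * A) / (R + B * P * B)
        P = Q + A * P * (A - B * K)
    return K

# Receding-horizon loop: re-solve over horizon H at every step,
# apply only the first action u = -K s.
s = 5.0
for _t in range(30):
    K = lqr_gain(A, B, Q, R, H)
    s = A * s + B * (-K * s)
```

For a time-varying trajectory-tracking problem like aerobatics, the gain would be recomputed around the current segment of the target trajectory rather than being constant.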
Airshow • [Switch to Quicktime for HD airshow.]
Chaos • [Switch to Quicktime for HD chaos.]
Conclusion • Apprenticeship learning algorithms help us find better controllers by exploiting teacher demonstrations. • Algorithmic instantiations: • Inverse reinforcement learning • Learn trade-offs in reward. • Learn “intended” trajectory. • Model learning • No explicit exploration. • Local models. • Control with crude model + small number of trials.
Current and future work • Automate more general advice taking. • Guaranteed safe exploration---safely learning to outperform the teacher. • Autonomous helicopters • Assist in wildland fire fighting. • Auto-rotation landings. • Fixed-wing formation flight. • Potential savings for even three aircraft formation: 20%.
References • Apprenticeship Learning via Inverse Reinforcement Learning, Pieter Abbeel and Andrew Y. Ng. In Proc. ICML, 2004. • Learning First Order Markov Models for Control, Pieter Abbeel and Andrew Y. Ng. In NIPS 17, 2005. • Exploration and Apprenticeship Learning in Reinforcement Learning, Pieter Abbeel and Andrew Y. Ng. In Proc. ICML, 2005. • Modeling Vehicular Dynamics, with Application to Modeling Helicopters, Pieter Abbeel, Varun Ganapathi and Andrew Y. Ng. In NIPS 18, 2006. • Using Inaccurate Models in Reinforcement Learning, Pieter Abbeel, Morgan Quigley and Andrew Y. Ng. In Proc. ICML, 2006. • An Application of Reinforcement Learning to Aerobatic Helicopter Flight, Pieter Abbeel, Adam Coates, Morgan Quigley and Andrew Y. Ng. In NIPS 19, 2007. • Hierarchical Apprenticeship Learning with Application to Quadruped Locomotion, J. Zico Kolter, Pieter Abbeel and Andrew Y. Ng. In NIPS 20, 2008.