
Apprenticeship Learning via Inverse Reinforcement Learning


Presentation Transcript


  1. Apprenticeship Learning via Inverse Reinforcement Learning Pieter Abbeel, Stanford University [Joint work with Andrew Ng.]

  2. Overview • Reinforcement Learning (RL) • Motivation for Apprenticeship Learning • Proposed algorithm • Theoretical results • Experimental results • Conclusion

  3. Example of Reinforcement Learning Problem • Highway driving.

  4. RL formalism • Assume that at each time step, our system is in some state s_t. • Upon taking an action a_t, our state randomly transitions to some new state s_{t+1}. • We are also given a reward function R. • The goal: pick actions over time so as to maximize the expected score E[R(s_0) + R(s_1) + … + R(s_T)]. [Diagram: states s_0, s_1, s_2, …, s_{T-1}, s_T linked by the system dynamics; R(s_0) + R(s_1) + R(s_2) + … + R(s_{T-1}) + R(s_T) = overall score]

  5. RL formalism • Markov Decision Process (S, A, P, s_0, R) • W.l.o.g. we assume R(s) = w^T φ(s) for a known feature mapping φ : S → R^k and unknown weights w with ||w||_2 ≤ 1. • Policy π : S → A. • Utility of a policy π for reward R = w^T φ: U_w(π) = E[Σ_t R(s_t) | π] = w^T μ(π), where μ(π) = E[Σ_t φ(s_t) | π] is the vector of feature expectations.
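
The feature expectations μ(π) defined on this slide can be estimated by Monte Carlo rollouts. Below is a minimal sketch, not code from the authors; the environment interface (env.reset and env.step returning the next state), the feature map phi, and the horizon T are my assumptions.

    import numpy as np

    def feature_expectations(env, policy, phi, T, n_rollouts=100):
        """Monte Carlo estimate of mu(pi) = E[ sum_{t=0..T} phi(s_t) | pi ]."""
        mu = np.zeros(len(phi(env.reset())))
        for _ in range(n_rollouts):
            s = env.reset()
            for t in range(T + 1):
                mu += phi(s)                    # accumulate phi(s_t) for t = 0 .. T
                if t < T:
                    s = env.step(policy(s))     # s_{t+1} sampled from P(. | s_t, a_t)
        return mu / n_rollouts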

  6. Motivation for Apprenticeship Learning • Reinforcement learning (RL) gives powerful tools for solving MDPs. • However, it can be difficult to specify the reward function. • Example: highway driving.

  7. Apprenticeship Learning • Learning from observing an expert. • Previous work: • Learn to predict the expert’s actions as a function of states. • Usually lacks strong performance guarantees. • (E.g., Pomerleau, 1989; Sammut et al., 1992; Kuniyoshi et al., 1994; Demiris & Hayes, 1994; Amit & Mataric, 2002; Atkeson & Schaal, 1997; …) • Our approach: • Based on inverse reinforcement learning (Ng & Russell, 2000). • Returns a policy whose performance is as good as the expert’s, as measured by the expert’s unknown reward function.

  8. Algorithm • For i = 1, 2, … • Inverse RL step: • Estimate the expert’s reward function R(s) = w^T φ(s) such that under R(s) the expert performs better than all previously found policies {π_j}. • RL step: • Compute the optimal policy π_i for the estimated reward R(s) = w^T φ(s).
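
A minimal sketch of this alternation is shown below. The helpers rl_solve (returns an optimal policy for given reward weights), estimate_mu (returns a policy's feature expectations), and irl_step (picks new reward weights and a margin) are hypothetical placeholders for the sub-steps described on the surrounding slides, not the authors' code; the tolerance eps corresponds to the ε used in the later theorems.

    import numpy as np

    def apprenticeship_learning(mu_expert, rl_solve, estimate_mu, irl_step, eps=0.1, max_iter=100):
        """Alternate the Inverse RL step and the RL step until the expert is nearly matched."""
        pi = rl_solve(np.zeros_like(mu_expert))      # start from an arbitrary policy
        policies, mus = [pi], [estimate_mu(pi)]
        for i in range(max_iter):
            w, margin = irl_step(mu_expert, mus)     # Inverse RL step: new reward weights w
            if margin <= eps:                        # some mixture of found policies is within eps of the expert
                break
            pi = rl_solve(w)                         # RL step: optimal policy for reward w^T phi
            policies.append(pi)
            mus.append(estimate_mu(pi))
        return policies, mus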

  9. Algorithm: Inverse RL step

  10. Algorithm: Inverse RL step Quadratic programming problem (same as for an SVM).
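
The transcript does not show the optimization itself; in the paper this step is the max-margin problem: maximize t over (w, t) with ||w||_2 ≤ 1, subject to w^T μ(π_E) ≥ w^T μ(π_j) + t for all previously found policies π_j. A hedged cvxpy sketch of that step (usable as the irl_step placeholder above) follows; the use of cvxpy is my choice, not something the slides mention.

    import cvxpy as cp
    import numpy as np

    def irl_step(mu_expert, mus):
        """Max-margin Inverse RL step: find weights w separating the expert from past policies."""
        k = len(mu_expert)
        w = cp.Variable(k)
        t = cp.Variable()
        constraints = [cp.norm(w, 2) <= 1]
        for mu in mus:
            constraints.append(w @ mu_expert >= w @ np.asarray(mu) + t)
        cp.Problem(cp.Maximize(t), constraints).solve()
        return w.value, t.value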

  11. Feature Expectation Closeness and Performance • If we can find a policy π such that • ||μ(π_E) − μ(π)||_2 ≤ ε, • then for any underlying reward R*(s) = w*^T φ(s) with ||w*||_2 ≤ 1, • we have that • |U_w*(π_E) − U_w*(π)| = |w*^T μ(π_E) − w*^T μ(π)| • ≤ ||w*||_2 ||μ(π_E) − μ(π)||_2 • ≤ ε.
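
Written out in LaTeX, the chain on this slide is just linearity of U_w in μ plus the Cauchy-Schwarz inequality:

    \[
    |U_{w^*}(\pi_E) - U_{w^*}(\pi)|
      = |{w^*}^\top \mu(\pi_E) - {w^*}^\top \mu(\pi)|
      \le \|w^*\|_2 \, \|\mu(\pi_E) - \mu(\pi)\|_2
      \le \varepsilon
      \quad \text{(using } \|w^*\|_2 \le 1\text{)}.
    \]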

  12. Algorithm 2 (E) (2) w(3) (1) w(2) w(1) Uw() = wT() (0) 1 Pieter Abbeel and Andrew Y. Ng

  13. Theoretical Results: Convergence • Theorem. Let an MDP (without reward function), a k-dimensional feature vector φ and the expert’s feature expectations μ(π_E) be given. Then after at most • k T² / ε² • iterations, the algorithm outputs a policy π that performs nearly as well as the expert, as evaluated on the unknown reward function R*(s) = w*^T φ(s), i.e., • U_w*(π) ≥ U_w*(π_E) − ε.

  14. Proof (sketch) [Figure: feature-expectation space with axes φ_1, φ_2, showing μ(π_E), μ(π^(0)), μ(π^(1)), the weight vector w^(1), and the distances d_0, d_1 to the expert’s feature expectations]

  15. Proof (sketch)

  16. Algorithm (projection version) [Figure: feature-expectation space with axes φ_1, φ_2, showing μ(π_E), the iterates μ(π^(0)), μ(π^(1)), μ(π^(2)), the projections μ̄^(1), μ̄^(2), and the weight vectors w^(1), w^(2), w^(3)]

  17. Theoretical Results: Sampling • In practice, we have to use sampling to estimate the feature expectations of the expert. We still have ε-optimal performance with high probability if the number of observed samples is at least • O(poly(k, 1/ε)). • Note: the bound has no dependence on the “complexity” of the policy.
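
Concretely, the estimate of μ(π_E) is just the empirical average of the summed features over the m observed expert trajectories, with m = O(poly(k, 1/ε)) per the slide. A small sketch; the trajectory format (a list of state sequences) is my assumption.

    import numpy as np

    def estimate_expert_mu(trajectories, phi):
        """Empirical feature expectations of the expert from m observed state trajectories."""
        k = len(phi(trajectories[0][0]))
        mu_hat = np.zeros(k)
        for traj in trajectories:                  # each traj is [s_0, s_1, ..., s_T]
            mu_hat += sum(phi(s) for s in traj)
        return mu_hat / len(trajectories)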

  18. Gridworld Experiments • The reward function is piecewise constant over small regions. • The features φ for IRL are indicator functions of these small regions. • 128x128 grid, small regions of size 16x16.
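
With a 128x128 grid split into 16x16 regions there are 8x8 = 64 regions, so φ(s) is a 64-dimensional one-hot indicator. A sketch of such a feature map; the (x, y) state encoding is my assumption.

    import numpy as np

    GRID, REGION = 128, 16
    N_PER_SIDE = GRID // REGION                    # 8 macro-regions per side
    K = N_PER_SIDE * N_PER_SIDE                    # 64 indicator features

    def phi(state):
        """One-hot indicator of the 16x16 region containing state = (x, y)."""
        x, y = state
        region = (x // REGION) * N_PER_SIDE + (y // REGION)
        feat = np.zeros(K)
        feat[region] = 1.0
        return feat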

  19. Gridworld Experiments

  20. Gridworld Experiments

  21. Gridworld Experiments

  22. Gridworld Experiments

  23. Case study: Highway driving [Video panels. Input: Driving demonstration (left). Output: Learned behavior (right).] The only input to the learning algorithm was the driving demonstration (left panel); no reward function was provided.

  24. More driving examples In each video, the left sub-panel shows a demonstration of a different driving “style”, and the right sub-panel shows the behavior learned from watching the demonstration.

  25. More driving examples In each video, the left sub-panel shows a demonstration of a different driving “style”, and the right sub-panel shows the behavior learned from watching the demonstration.

  26. Car driving results

  27. Different Formulation • LP formulation for the RL problem • max_λ Σ_{s,a} λ(s,a) R(s) • s.t. • ∀s: Σ_a λ(s,a) = Σ_{s',a} P(s|s',a) λ(s',a) • QP formulation for Apprenticeship Learning • min_{λ,μ} Σ_i (μ_{E,i} − μ_i)² • s.t. • ∀s: Σ_a λ(s,a) = Σ_{s',a} P(s|s',a) λ(s',a) • ∀i: μ_i = Σ_{s,a} φ_i(s) λ(s,a)
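
A hedged cvxpy sketch of this occupancy-measure QP for a small tabular MDP is below. The transition array P[s, a, s'], the feature matrix phi[s, i], and the added total-mass constraint (fixing the visitation mass to horizon + 1) are my assumptions; the slide only states the flow constraint.

    import cvxpy as cp
    import numpy as np

    def apprenticeship_qp(P, phi, mu_expert, horizon):
        """QP over state-action occupancies lam(s, a); P has shape (S, A, S), phi has shape (S, k)."""
        S, A, _ = P.shape
        lam = cp.Variable((S, A), nonneg=True)
        mu = phi.T @ cp.sum(lam, axis=1)                    # mu_i = sum_{s,a} phi_i(s) lam(s,a)
        flow = [cp.sum(lam[s, :]) ==
                cp.sum(cp.multiply(P[:, :, s], lam)) for s in range(S)]   # flow constraint from the slide
        mass = [cp.sum(lam) == horizon + 1]                 # total visitation mass (my assumption)
        cp.Problem(cp.Minimize(cp.sum_squares(mu_expert - mu)), flow + mass).solve()
        return lam.value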

  28. Different Formulation (ctd.) • Our algorithm is equivalent to iteratively • linearizing the QP at the current point (Inverse RL step), • solving the resulting LP (RL step). • Why not solve the QP directly? Typically this is only possible for very small toy problems (curse of dimensionality). [Our algorithm makes use of existing RL solvers to deal with the curse of dimensionality.]
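
To see why the linearized problem is an ordinary RL problem, note that linearizing the quadratic objective at the current feature expectations μ^(i) leaves an objective that is linear in μ, with reward weights pointing from μ^(i) toward μ_E (up to scaling):

    \[
    \nabla_{\mu}\,\|\mu_E - \mu\|_2^2 \Big|_{\mu = \mu^{(i)}} = -2\,(\mu_E - \mu^{(i)}),
    \qquad\text{so the linearized problem is}\qquad
    \max_{\mu\ \text{feasible}} \ (\mu_E - \mu^{(i)})^{\top}\mu ,
    \]
    i.e., an RL problem with reward weights \(w \propto \mu_E - \mu^{(i)}\).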

  29. Conclusions • Our algorithm returns a policy with performance as good as the expert’s, as evaluated according to the expert’s unknown reward function. • The algorithm is guaranteed to converge in poly(k, 1/ε) iterations. • Sample complexity is poly(k, 1/ε). • The algorithm exploits reward “simplicity” (vs. policy “simplicity” in previous approaches).

  30. Additional slides for poster • (slides to come are additional material, not included in the talk, in particular: projection (vs. QP) version of the Inverse RL step; another formulation of the apprenticeship learning problem, and its relation to our algorithm)

  31. Simplification of Inverse RL step: QP → Euclidean projection • In the Inverse RL step: • set μ̄^(i−1) = the orthogonal projection of μ_E onto the line through {μ̄^(i−2), μ(π^(i−1))}, • set w^(i) = μ_E − μ̄^(i−1). • Note: the theoretical results on convergence and sample complexity hold unchanged for the simpler algorithm.
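
A small numpy sketch of this projection update; the argument names are mine (mu_bar_prev corresponds to μ̄^(i−2) and mu_new to μ(π^(i−1))).

    import numpy as np

    def projection_update(mu_expert, mu_bar_prev, mu_new):
        """Project mu_expert onto the line through mu_bar_prev and mu_new; return (mu_bar, w)."""
        d = mu_new - mu_bar_prev
        step = np.dot(d, mu_expert - mu_bar_prev) / np.dot(d, d)
        mu_bar = mu_bar_prev + step * d           # orthogonal projection of mu_expert onto the line
        w = mu_expert - mu_bar                    # new reward weights for the next RL step
        return mu_bar, w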

  32. Algorithm (projection version) [Figure: feature-expectation space (axes φ_1, φ_2) after one iteration: μ_E, μ(π^(0)), μ(π^(1)), and w^(1)]

  33. Algorithm (projection version) [Figure: after two iterations: μ_E, μ(π^(0)), μ(π^(1)), μ(π^(2)), the projection μ̄^(1), and w^(1), w^(2)]

  34. Algorithm (projection version) [Figure: after three iterations: μ_E, μ(π^(0)), μ(π^(1)), μ(π^(2)), the projections μ̄^(1), μ̄^(2), and w^(1), w^(2), w^(3)]

  35. Appendix: Different View • Bellman LP for solving MDPs • min_V c'V s.t. • ∀s,a: V(s) ≥ R(s,a) + γ Σ_{s'} P(s,a,s') V(s') • Dual LP • max_λ Σ_{s,a} λ(s,a) R(s,a) s.t. • ∀s: c(s) − Σ_a λ(s,a) + γ Σ_{s',a} P(s',a,s) λ(s',a) = 0 • Apprenticeship Learning as QP • min_λ Σ_i (μ_{E,i} − Σ_{s,a} λ(s,a) φ_i(s))² s.t. • ∀s: c(s) − Σ_a λ(s,a) + γ Σ_{s',a} P(s',a,s) λ(s',a) = 0
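
For completeness, a cvxpy sketch of the primal Bellman LP above for a tabular MDP; the discrete arrays and the presence of the discount gamma and state-relevance weights c follow the standard formulation, which is what the slide appears to use.

    import cvxpy as cp
    import numpy as np

    def bellman_lp(P, R, c, gamma):
        """Primal Bellman LP: min c'V s.t. V(s) >= R(s,a) + gamma * sum_s' P(s,a,s') V(s')."""
        S, A, _ = P.shape
        V = cp.Variable(S)
        constraints = [V[s] >= R[s, a] + gamma * (P[s, a, :] @ V)
                       for s in range(S) for a in range(A)]
        cp.Problem(cp.Minimize(c @ V), constraints).solve()
        return V.value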

  36. Different View (ctd.) • Our algorithm is equivalent to iteratively • linearizing the QP at the current point (Inverse RL step), • solving the resulting LP (RL step). • Why not solve the QP directly? Typically this is only possible for very small toy problems (curse of dimensionality). [Our algorithm makes use of existing RL solvers to deal with the curse of dimensionality.]

  37. Slides that are different for poster • (slides to come are slightly different for the poster, but already “appeared” earlier)

  38. Algorithm (QP version) [Figure: feature-expectation space (axes φ_1, φ_2) after one iteration: μ(π_E), μ(π^(0)), μ(π^(1)), w^(1); U_w(π) = w^T μ(π)]

  39. Algorithm (QP version) [Figure: after two iterations: μ(π_E), μ(π^(0)), μ(π^(1)), μ(π^(2)), w^(1), w^(2); U_w(π) = w^T μ(π)]

  40. Algorithm (QP version) [Figure: after three iterations: μ(π_E), μ(π^(0)), μ(π^(1)), μ(π^(2)), w^(1), w^(2), w^(3); U_w(π) = w^T μ(π)]

  41. Gridworld Experiments

  42. Case study: Highway driving [Video panels. Input: Driving demonstration. Output: Learned behavior.] (Videos available.)

  43. More driving examples (Videos available.)

  44. Car driving results (more detail)

  45. Proof (sketch)

  46. Apprenticeship Learning via Inverse Reinforcement Learning Pieter Abbeel and Andrew Y. Ng, Stanford University
