Space-Indexed Dynamic Programming: Learning to Follow Trajectories
J. Zico Kolter, Adam Coates, Andrew Y. Ng, Yi Gu, Charles DuHadway
Computer Science Department, Stanford University
July 2008, ICML
Outline • Reinforcement Learning and Following Trajectories • Space-indexed Dynamical Systems and Space-indexed Dynamic Programming • Experimental Results
Trajectory Following • Consider the task of following a trajectory in a vehicle such as a car or helicopter • The state space is too large to discretize, so tabular RL/dynamic programming cannot be applied
Trajectory Following • Dynamic programming algorithms with non-stationary policies seem well suited to the task • Policy Search by Dynamic Programming (Bagnell et al.), Differential Dynamic Programming (Jacobson and Mayne)
Dynamic Programming t = 1, 2, 3, 4, 5 Divide the control task into discrete time steps
Dynamic Programming Proceeding backwards in time, learn policies for t = T, T-1, …, 2, 1
Dynamic Programming Key Advantage: Policies are local (they only need to perform well over a small portion of the state space)
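To make the backward pass concrete, here is a minimal Python sketch (not the authors' code) of time-indexed backward policy learning in the spirit of PSDP: policies are learned from the last step backwards, and each is evaluated by rolling out the later, already-fixed policies. The helpers `sample_states_at`, `simulate_step`, `cost`, and `fit_local_policy`, and the finite `candidate_actions` set, are hypothetical stand-ins for the simulator, cost function, and supervised learner.

```python
# Minimal sketch (not the authors' code) of backward, time-indexed policy
# learning in the spirit of PSDP. All helpers are hypothetical:
#   sample_states_at(t)    -> states the vehicle might occupy at step t
#   simulate_step(s, a)    -> next state under action a (time-indexed dynamics)
#   cost(s, t)             -> scalar cost of being in state s at step t
#   fit_local_policy(X, y) -> supervised learner mapping states to actions
def learn_time_indexed_policies(T, sample_states_at, simulate_step, cost,
                                fit_local_policy, candidate_actions):
    policies = [None] * T               # policies[t] maps a state to an action

    def cost_to_go(state, t):
        """Cost accumulated from step t onwards, following the later policies."""
        total = 0.0
        for step in range(t, T):
            state = simulate_step(state, policies[step](state))
            total += cost(state, step)
        return total

    for t in range(T - 1, -1, -1):      # proceed backwards: t = T-1, ..., 0
        states_t = sample_states_at(t)  # sampled distribution over states at step t
        labels = []
        for s in states_t:
            # Label s with the action whose one-step outcome, followed by the
            # already-learned later policies, incurs the lowest total cost.
            best = min(candidate_actions,
                       key=lambda a: cost(simulate_step(s, a), t)
                                     + cost_to_go(simulate_step(s, a), t + 1))
            labels.append(best)
        # Fit a *local* policy: it only needs to be good near these states.
        policies[t] = fit_local_policy(states_t, labels)
    return policies
```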
Problems with Dynamic Programming Problem #1: Policies from traditional dynamic programming algorithms are time-indexed
Problems with Dynamic Programming Suppose we learned a policy assuming this distribution over states
Problems with Dynamic Programming But, due to the natural stochasticity of the environment, the car is actually somewhere else at t = 5
Problems with Dynamic Programming The resulting policy will perform very poorly
Problems with Dynamic Programming Partial Solution: Re-indexing. Execute the policy closest to the current location, regardless of time
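A minimal sketch of this re-indexing heuristic, under hypothetical inputs: `waypoints[t]` is the reference position associated with step t, and `policies[t]` the corresponding learned policy.

```python
import numpy as np

def reindexed_action(state, position, waypoints, policies):
    """Re-indexing: execute the policy whose reference point is nearest to the
    vehicle's current position, ignoring the nominal time index."""
    dists = np.linalg.norm(np.asarray(waypoints) - np.asarray(position), axis=1)
    nearest = int(np.argmin(dists))     # index of the closest trajectory point
    return policies[nearest](state)
```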
Problems with Dynamic Programming Problem #2: Uncertainty over future states makes it hard to learn any good policy
Problems with Dynamic Programming Dist. over states at time t = 5: due to stochasticity, there is large uncertainty over states in the distant future
Problems with Dynamic Programming Dist. over states at time t = 5: DP algorithms require learning a policy that performs well over the entire distribution
Space-Indexed Dynamic Programming • Basic idea of Space-Indexed Dynamic Programming (SIDP): Perform DP with respect to space indices (planes tangent to trajectory)
Difficulty with SIDP • No guarantee that taking a single action will move the vehicle to the next plane along the trajectory • Introduce the notion of a space-indexed dynamical system
Time-Indexed Dynamical System • Creating time-indexed dynamical systems: given the current state s_t and a control action u_t, the model gives the time derivative of the state, ṡ = f(s_t, u_t), and Euler integration over a fixed time step Δt gives the next state, s_{t+1} = s_t + f(s_t, u_t) Δt
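As a concrete illustration, here is a minimal Python sketch of one time-indexed step under these definitions; the dynamics model `f` and the step size `dt` are hypothetical stand-ins.

```python
import numpy as np

def time_indexed_step(s_t, u_t, f, dt=0.05):
    """One time-indexed transition: Euler integration of the state derivative.

    s_t : current state, u_t : control action, f(s, u) : time derivative of
    the state under a (hypothetical) dynamics model, dt : fixed time step.
    """
    s_dot = np.asarray(f(s_t, u_t))       # time derivative of the state
    return np.asarray(s_t) + dt * s_dot   # s_{t+1} = s_t + dt * f(s_t, u_t)
```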
Space-Indexed Dynamical Systems • Creating space-indexed dynamical systems: starting from a state on the plane at space index d, simulate forward under the chosen action until the vehicle hits the next tangent plane, i.e. choose the integration time so that the resulting state lies on the plane at space index d+1 • (A positive solution exists as long as the controller makes some forward progress)
Space-Indexed Dynamical Systems • The result is a dynamical system indexed by the spatial index d rather than by time • Space-indexed dynamic programming runs DP directly on this system
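A minimal sketch of one space-indexed transition under these definitions: integrate the time-indexed dynamics with small Euler substeps until the position crosses the tangent plane at index d+1, then interpolate onto the plane. The dynamics model `f`, the `position_of` accessor, and the plane description (a point plus a unit normal pointing along the trajectory) are hypothetical stand-ins for the paper's quantities.

```python
import numpy as np

def space_indexed_step(s_d, u_d, f, plane_point, plane_normal,
                       position_of, dt=0.01, max_steps=10000):
    """Simulate forward under action u_d until the state crosses the plane at
    space index d+1, then return the (interpolated) state on that plane."""
    def signed_dist(state):
        # Negative before the plane, zero on it, positive once crossed.
        return float(np.dot(position_of(state) - plane_point, plane_normal))

    s = np.asarray(s_d, dtype=float)
    prev, prev_dist = s, signed_dist(s)
    for _ in range(max_steps):
        s = s + dt * np.asarray(f(s, u_d))        # Euler substep with action u_d
        dist = signed_dist(s)
        if dist >= 0.0:                           # crossed the d+1 plane
            # Linearly interpolate between the last two states onto the plane.
            frac = prev_dist / (prev_dist - dist) if prev_dist != dist else 1.0
            return prev + frac * (s - prev)
        prev, prev_dist = s, dist
    # No positive crossing found: the controller made no forward progress.
    raise RuntimeError("vehicle never reached the next tangent plane")
```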
Space-Indexed Dynamic Programming d = 1, 2, 3, 4, 5 Divide the trajectory into discrete space planes
Space-Indexed Dynamic Programming Proceeding backwards, learn policies for d = D, D-1, …, 2, 1
Problems with Dynamic Programming Recall Problem #1: policies from traditional dynamic programming algorithms are time-indexed
Space-Indexed Dynamic Programming Time-indexed DP: can execute a policy learned for a different location. Space-indexed DP: always executes the policy associated with the current spatial index
Problems with Dynamic Programming Recall Problem #2: uncertainty over future states makes it hard to learn any good policy
Space-Indexed Dynamic Programming Time-indexed DP: wide distribution over states at time t = 5. Space-indexed DP: much tighter distribution over states at space index d = 5
Experimental Domain Task: following a race-track trajectory in an RC car with randomly placed obstacles
Experimental Setup • Implemented a space-indexed version of the PSDP algorithm • Policy chooses a steering angle using an SVM classifier (constant velocity) • Used a simple textbook model simulator of the car dynamics to learn policies • Evaluated PSDP time-indexed, time-indexed with re-indexing, and space-indexed
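For illustration, here is a minimal sketch (not the authors' implementation) of the kind of policy class described above: a multiclass SVM that picks one of a discrete set of steering angles from features of the current state, with velocity held constant. The feature representation and the steering-angle grid are hypothetical.

```python
import numpy as np
from sklearn.svm import SVC

STEERING_ANGLES = np.linspace(-0.5, 0.5, 11)   # hypothetical candidate angles (radians)

def fit_steering_policy(state_features, best_angle_indices):
    """Fit one local policy: an SVM classifier over discretized steering angles.

    state_features     : array of shape (n_samples, n_features)
    best_angle_indices : label for each sample, an index into STEERING_ANGLES
    """
    clf = SVC(kernel="rbf")
    clf.fit(state_features, best_angle_indices)
    # Returned policy maps a feature vector to a concrete steering angle.
    return lambda feats: STEERING_ANGLES[int(clf.predict(np.atleast_2d(feats))[0])]
```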