50th Anniversary of The Curse of Dimensionality • Continuous States: Storage cost: resolution^dx; Computational cost: resolution^dx • Continuous Actions: Computational cost: resolution^du
Beating The Curse Of Dimensionality • Reduce dimensionality (biped examples) • Use primitives (Poincaré section) • Parameterize V, policy (future lecture) • Reduce volume of state space explored • Use greater depth search • Adaptive/Problem-specific grid/sampling • Split where needed • Random sampling – add where needed • Random action search • Random state search • Hybrid approaches: combine local and global optimization
Use Brute Force • Deal with the computational cost by using a cluster supercomputer. • The main issue is minimizing communication between nodes.
Cluster Supercomputing • (8) cores w/ small local memory (cache) • (100) nodes w/ shared memory (16 GB) • (4–16 Gb/s) network • (100 TB) disks
Q(x,u) = L(x,u) + V(f(x,u)) • c = L(x,u): computed as in the desktop case • x_next = f(x,u): computed as in the desktop case • V(x_next): uniform grid; multilinear interpolation when all neighboring values are available, distance-weighted averaging when some values are bad
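To make the interpolation step concrete, here is a minimal Python sketch of the 2-D case, assuming a hypothetical `valid` mask that marks the "bad values"; the function and fallback details are illustrative, not the original cluster code.

```python
import numpy as np

def interp_value(V, valid, x, lo, hi):
    """Bilinear interpolation of V on a uniform 2-D grid spanning [lo, hi].
    If any corner of the enclosing cell is invalid, fall back to
    distance-weighted averaging over the valid corners (sketch only)."""
    n = np.array(V.shape)
    g = (np.asarray(x, dtype=float) - lo) / (hi - lo) * (n - 1)  # grid coords
    i0 = np.clip(np.floor(g).astype(int), 0, n - 2)              # cell origin
    f = g - i0                                                   # fractions
    corners = [(0, 0), (0, 1), (1, 0), (1, 1)]
    weights = [(1 - f[0]) * (1 - f[1]), (1 - f[0]) * f[1],
               f[0] * (1 - f[1]), f[0] * f[1]]
    idx = [(i0[0] + a, i0[1] + b) for a, b in corners]
    if all(valid[i] for i in idx):
        return sum(w * V[i] for w, i in zip(weights, idx))
    # Distance-weighted averaging over the corners that hold good values.
    num = den = 0.0
    for (a, b), i in zip(corners, idx):
        if valid[i]:
            d = np.hypot(f[0] - a, f[1] - b) + 1e-9
            num += V[i] / d
            den += 1.0 / d
    return num / den if den > 0 else np.inf
```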
So what does this all mean for programming? • On a node, split grid cells among threads, which execute on cores. • Share updates of V(x) and u(x) within a node almost for free using shared memory. • Pushing updated V(x) and u(x) to other nodes uses the network, which is relatively slow.
Dealing with the slow network • Organize grid cells into packet-sized blocks and send each block as a unit. • Threshold updates: if a change is too small, don't send it. • Send only 1/N of the updates for each block, subject to a maximum skip time. • Tolerate packet loss (UDP) rather than paying for verification (TCP/MPI).
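A hedged sketch of that update scheme in Python; the packet format, threshold, and skip bound are illustrative placeholders rather than the original code.

```python
import socket
import struct
import numpy as np

THRESHOLD = 1e-3   # minimum change worth sending (illustrative)
MAX_SKIPS = 8      # maximum skip time, in update rounds (illustrative)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # UDP: loss tolerated

def maybe_send_block(block_id, V_block, last_sent, skips, peers):
    """Send one packet-sized block of V values to peer nodes, but only if it
    changed enough or has been skipped too many rounds in a row."""
    if (np.max(np.abs(V_block - last_sent[block_id])) < THRESHOLD
            and skips[block_id] < MAX_SKIPS):
        skips[block_id] += 1          # change too small: skip this round
        return
    payload = struct.pack("!I", block_id) + V_block.astype(">f4").tobytes()
    for addr in peers:
        sock.sendto(payload, addr)    # no verification, unlike TCP/MPI
    last_sent[block_id] = V_block.copy()
    skips[block_id] = 0
```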
Use Adaptive Grid • Reduce computational and storage costs by using an adaptive grid. • Generate the adaptive grid using random sampling.
Trajectory-Based Dynamic Programming
Full Trajectories Help Reduce the Resolution Needed • [Figure: SIDP vs. trajectory-based results]
Global Planning: Propagate Value Function Across Trajectories in Adaptive Grid
Growing the Explored Region: Adaptive Grids
Bidirectional Search Closeup
Growing the Explored Region: Spine Representation
One-Link Swing-Up Needed Only 63 Points
Trajectories For Each Point
Random Sampling of States • Initialize with a point at the goal, with local models based on LQR. • Choose a random new state x. • Use the nearest stored point's local model of the value function to predict the value of the new point (V_P). • Optimize a trajectory from x to the goal: at each step, use the nearest stored point's local model of the policy to create an action, and use DDP to refine the trajectory. V_T is the cost of the trajectory starting from x. • Store a point at the start of the trajectory if |V_T − V_P| > λ (surprise), V_T < V_limit, and V_P < V_limit; otherwise discard it. • Interleave re-optimization of all stored points; only update if V_new < V (V is an upper bound on the value). • Gradually increase V_limit. A sketch of this loop follows.
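A sketch of this sampling loop in Python; every helper here (sample_state, nearest, rollout_to_goal, ddp_refine, make_point) is a hypothetical stand-in for the pieces the slide describes, not actual code from the lecture.

```python
def build_point_set(sample_state, nearest, rollout_to_goal, ddp_refine,
                    make_point, goal_point, V_limit, lam, n_iter):
    """Hypothetical sketch: grow a sparse set of points, each carrying local
    value and policy models, by sampling states and keeping the surprises."""
    points = [goal_point]                  # goal point with LQR local models
    for _ in range(n_iter):
        x = sample_state()                 # choose a random new state
        p = nearest(points, x)             # nearest stored point
        V_P = p.value_model(x)             # predicted value from local model
        traj = ddp_refine(rollout_to_goal(x, points))  # optimize to goal
        V_T = traj.cost                    # cost of trajectory from x
        # Keep only surprising, plausibly-valued points.
        if abs(V_T - V_P) > lam and V_T < V_limit and V_P < V_limit:
            points.append(make_point(traj))
    return points
```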
Two-Link Pendulum • Criterion: [equation shown as an image in the original slide]
[Figure: ankle angle, hip angle, ankle torque, and hip torque trajectories]
Convergence? • Because we create trajectories to the goal, each value function estimate at a point is an upper bound on the value at that point. • Eventually all value function entries will be consistent with their nearest neighbor's local model, and no new points can be added. • We are using more aggressive acceptance tests for new points: V_B < λV_P with λ < 1, and V_P < V_limit, versus |V_B − V_P| < ε and V_B < V_limit. • It is not clear whether needed new points can be blocked.
Use Local Models • Try to achieve a sparse representation using local models.
Regulator tasks • Examples: balance a pole, move at a constant velocity. • A reasonable starting point is a Linear Quadratic Regulator (LQR controller). • We might have nonlinear dynamics x_{k+1} = f(x_k, u_k), but since the state stays near x_d we can locally linearize: x_{k+1} = A x_k + B u_k. • We might have a complex scoring function c(x,u), but we can locally approximate it with a quadratic model: c ≈ x^T Q x + u^T R u. • See dlqr() in MATLAB.
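For reference, a minimal numpy/scipy equivalent of MATLAB's dlqr() (assuming scipy is available); it returns the gain K for the convention u = −Kx.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def dlqr(A, B, Q, R):
    """Infinite-horizon discrete-time LQR: u = -K x."""
    P = solve_discrete_are(A, B, Q, R)                # steady-state V_xx
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    return K, P
```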
Linearization Example • Pendulum dynamics: I θdd = −mgl sin(θ) − μ θd + τ (θd = dθ/dt, θdd = d²θ/dt²) • Linearize (sin θ ≈ θ) • Discretize time (step T) • Vectorize: [θ; θd]_{k+1} = (1, T; −mglT/I, 1 − μT/I) [θ; θd]_k + (0; T/I) τ_k
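The same linearization and discretization written out as a small Python helper; the parameter values in the usage comment are illustrative only.

```python
import numpy as np

def pendulum_AB(I, m, g, l, mu, T):
    """Euler-discretized linearization of I*thdd = -m*g*l*sin(th) - mu*thd + tau
    about th = 0 (sin(th) ~ th), with state [th, thd] and input tau."""
    A = np.array([[1.0,                 T               ],
                  [-m * g * l * T / I,  1.0 - mu * T / I]])
    B = np.array([[0.0],
                  [T / I]])
    return A, B

# Example (illustrative parameters):
# A, B = pendulum_AB(I=1.0, m=1.0, g=9.81, l=1.0, mu=0.1, T=0.01)
```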
LQR Derivation • Assume V() is quadratic: V_{k+1}(x) = x^T V_{xx:k+1} x • C(x,u) = x^T Q x + u^T R u + (Ax+Bu)^T V_{xx:k+1} (Ax+Bu) • Want ∂C/∂u = 0 • B^T V_{xx:k+1} A x = −(B^T V_{xx:k+1} B + R) u • u = Kx (linear controller) • K = −(B^T V_{xx:k+1} B + R)^{−1} B^T V_{xx:k+1} A • V_{xx:k} = A^T V_{xx:k+1} A + Q + A^T V_{xx:k+1} B K
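The derivation translates directly into a backward recursion over time; a minimal sketch following the slide's conventions (u = Kx, finite horizon N):

```python
import numpy as np

def lqr_backward(A, B, Q, R, Vxx_final, N):
    """Backward recursion from the derivation above:
         K   = -(B^T V B + R)^{-1} B^T V A
         V_k = A^T V A + Q + A^T V B K
    Returns gains K_0..K_{N-1} (u_k = K_k x_k) and the initial V_xx."""
    Vxx = Vxx_final
    gains = []
    for _ in range(N):
        BtV = B.T @ Vxx
        K = -np.linalg.solve(BtV @ B + R, BtV @ A)
        Vxx = A.T @ Vxx @ A + Q + A.T @ Vxx @ B @ K
        gains.append(K)
    gains.reverse()   # gains[0] applies at the first time step
    return gains, Vxx
```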
Trajectory Optimization (closed loop) • Differential Dynamic Programming (local approach to DP).
Q function • x: state, u: control or action • Dynamics: x_{k+1} = f(x_k, u_k) • Cost function: L(x,u) • Value function: V(x) = ∑ L(x,u), the total cost along the trajectory from x • Q function: Q(x,u) = L(x,u) + V(f(x,u)) • Bellman's equation: V(x) = min_u Q(x,u) • Policy/control law: u(x) = argmin_u Q(x,u)
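These definitions give the basic computational step used throughout: a one-step Bellman backup over a discretized set of actions. A minimal sketch:

```python
import numpy as np

def bellman_backup(x, actions, f, L, V):
    """One Bellman backup at state x: Q(x,u) = L(x,u) + V(f(x,u)).
    Returns the new value min_u Q(x,u) and the greedy action argmin_u Q(x,u)."""
    q = np.array([L(x, u) + V(f(x, u)) for u in actions])
    k = int(np.argmin(q))
    return q[k], actions[k]
```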
Propagating Local Models Along a Trajectory: Differential Dynamic Programming, Gradient Version • V_{x:k−1} = Q_x = L_x + V_x f_x • Δu = −ε Q_u, where Q_u = L_u + V_x f_u (step the action against the gradient)
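A minimal sketch of this gradient backward pass, assuming hypothetical callables Lx, Lu, fx, fu that return cost gradients and dynamics Jacobians at (x, u), a zero terminal cost, and a step size eps:

```python
import numpy as np

def ddp_gradient_pass(xs, us, Lx, Lu, fx, fu, eps=0.1):
    """Propagate V_x backward along the trajectory (xs, us) and nudge each
    action against the gradient Q_u = L_u + V_x f_u."""
    Vx = np.zeros_like(xs[-1])      # terminal gradient (zero terminal cost)
    new_us = list(us)
    for k in reversed(range(len(us))):
        Qx = Lx(xs[k], us[k]) + Vx @ fx(xs[k], us[k])   # Q_x = L_x + V_x f_x
        Qu = Lu(xs[k], us[k]) + Vx @ fu(xs[k], us[k])   # Q_u = L_u + V_x f_u
        new_us[k] = us[k] - eps * Qu                    # step against gradient
        Vx = Qx                                         # V_{x,k} = Q_x
    return new_us
```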