Optimal Min-max Pursuit Evasion on a Manhattan Grid
Krishna Kalyanam (Infoscitex Corp.), in collaboration with S. Darbha (TAMU), P. P. Khargonekar (UF, ARPA-E), M. Pachter (AFIT/ENG), P. Chandler and D. Casbeer (AFRL/RQCA)
AFRL/RQCA UAV Team meeting, Oct 31, 2012
Scenario
[Figure: scenario map showing UGS sensor range, valid intruder path, UAV communication range, UGS communication range, and BASE]
Pursuit-Evasion Framework
• Pursuer engaged in search and capture of an intruder on a Manhattan road network
• Intersections of the road network are instrumented with Unattended Ground Sensors (UGSs)
• Pursuer has a 2x speed advantage over the evader
• Pursuer has no on-board sensing capability
• Evader triggers a UGS; the event is time-stamped and stored in the UGS
• Pursuer interrogates UGSs to get evader location information
• Capture occurs when pursuer and evader are co-located at a UGS location
Manhattan Grid (3-row corridor)
[Figure: 3-row grid with rows t (top), c (center), b (bottom) and columns 0, 1, 2, ..., n]
• All edges of the grid are of the same length
• Pursuer arrives at node (t/c/b, 0) with delay D > 0 (time steps) behind the evader
• Evader dynamics: move North, East, or South, but cannot re-visit a node
• Pursuer actions: move North, East, or South, or loiter/wait at the current location
• Pursuer has a 2x speed advantage over the evader (see the movement-model sketch below)
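A minimal sketch of this movement model in Python; the (row, column) node encoding, the function names, and the convention that the 2x speed advantage means two unit moves per evader move are illustrative assumptions, not taken from the slides:

```python
# Minimal sketch of the 3-row Manhattan-grid movement model (assumptions:
# nodes are (row, col) with row in {'t', 'c', 'b'} and col in 0..n; the
# pursuer's 2x speed advantage = two unit moves per evader move).

ROWS = ["t", "c", "b"]

def neighbors(node, n):
    """Nodes reachable in one unit move: North, East, or South."""
    row, col = node
    i = ROWS.index(row)
    out = []
    if i > 0:                # North
        out.append((ROWS[i - 1], col))
    if i < len(ROWS) - 1:    # South
        out.append((ROWS[i + 1], col))
    if col < n:              # East
        out.append((row, col + 1))
    return out

def evader_moves(node, visited, n):
    """Evader may move N/E/S but may not re-visit a node."""
    return [v for v in neighbors(node, n) if v not in visited]

def pursuer_moves(node, n):
    """Pursuer may move N/E/S or loiter (stay at the current node)."""
    return neighbors(node, n) + [node]
```

Capture is then checked by testing whether pursuer and evader occupy the same UGS node after the pursuer's two unit moves in a time step.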
Governing Equations
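A hedged LaTeX formalization consistent with the dynamics on the previous slide; the symbols e_k, p_k, and N(.) are our assumptions, not necessarily the slide's actual equations:

```latex
% Illustrative formalization only; symbols are assumptions, not the slide's.
% N(v): nodes reachable from v in one N/E/S unit move;
% N^{\le 2}(v): nodes reachable in at most two unit moves (includes loiter).
\begin{align*}
  e_{k+1} &\in N(e_k) \setminus \{e_0, \dots, e_k\}
      && \text{evader: one move, no node re-visit} \\
  p_{k+1} &\in N^{\le 2}(p_k)
      && \text{pursuer: up to two unit moves (2x speed) or loiter} \\
  \text{capture at } k &\iff p_k = e_k \text{ at a UGS node}
\end{align*}
```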
Problem Framework
• Pose the problem as a Partially Observable Markov Decision Process (POMDP)
• Unconventional POMDP, since observations give delayed intruder location information with random time delays!
• Use observations to compute the set of possible intruder locations (see the sketch below)
• Dual control problem: the pursuer's action, in addition to aiding capture, also affects the future uncertainty associated with the evader's location (exploration vs. exploitation)
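A minimal sketch of the belief-set computation, reusing neighbors() from the movement-model sketch above; the function name and signature are our assumptions, and relaxing the no-revisit rule makes this a conservative over-approximation (a superset of the true set of possible intruder locations):

```python
# Sketch: set of nodes the evader could occupy now, given that it triggered
# `ugs_node` at `trigger_time`. Relaxing the no-revisit rule yields a
# superset of the true belief set (conservative over-approximation).

def belief_now(ugs_node, trigger_time, current_time, n):
    """Forward-reachable evader locations after the delayed UGS reading."""
    frontier = {ugs_node}
    for _ in range(current_time - trigger_time):
        frontier = {v for u in frontier for v in neighbors(u, n)}
    return frontier
```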
Partial and delayed state information
Optimization Problem
[Figure: 3-row grid (rows t, c, b; columns 0, 1, 2, ...) with the pursuer entering at column 0 with delay D]
Bellman recursion
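A hedged sketch of a min-max Bellman recursion for worst-case time to capture, with our own symbols (information state I, pursuer action u, evader move v, transition f); this is illustrative, not necessarily the slide's exact recursion:

```latex
% Illustrative min-max Bellman recursion (symbols are assumptions).
% I: information state (belief set + observed delays); U(I): pursuer actions;
% V(I): worst-case number of steps to capture from information state I.
\[
  V(I) \;=\; 1 \;+\; \min_{u \in U(I)} \; \max_{v \in \mathcal{V}(I)} \; V\big(f(I, u, v)\big),
  \qquad V(I) = 0 \ \text{on capture states.}
\]
```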
Induction - Motivation
[Figure: single-row and two-row corridors with the evader D steps ahead of the pursuer]
• Single row: capture in exactly D steps: T(D) = 1 + T(D-1), T(1) = 1, so T(D) = D
• Two rows: capture in exactly D+2 steps: T(D) = 1 + T(D-1), T(1) = 3, so T(D) = D + 2
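A quick unrolling check of the two recursions, in Python (the function name is ours):

```python
def unroll(T1, D):
    """Solve T(D) = 1 + T(D-1) with base case T(1) = T1."""
    T = T1
    for _ in range(D - 1):
        T = 1 + T
    return T

assert unroll(1, 7) == 7       # single row: T(D) = D
assert unroll(3, 7) == 7 + 2   # two rows:   T(D) = D + 2
```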
A Feasible Policy (upper bound)
[Figure: 3-row grid (rows t, c, b; columns 0, 1, 2, ...) illustrating the policy for initial delay D]
Bottom/Top row - delay 1
[Figure: evader and pursuer trajectories; UGS timestamps 0, 1]
Bottom/Top row - delay 2
[Figure: evader and pursuer trajectories; UGS timestamps 0, 1, 2]
Center row - delay 1
[Figure: evader and pursuer trajectories; UGS timestamps 0, 1, 2, 3]
Center row - delay 2
[Figure: evader and pursuer trajectories; UGS timestamps 0, 1, 2, 3, 4]
Center row - delay 3; Bottom row - delay 3
[Figure: 3-row grid (rows t, c, b; columns 0, 1, 2, ...) trajectories for delay D = 3]
Specification of the policy μ
bottom row:
center row:
Induction argument for D >= 4
Basic step: Tμ(r, 3) = 13
Induction hypothesis: Tμ(r, d) = d + 10 for 3 <= d <= D-1 (cf. the conservative bound D-1+11 = D+10 in the extra slides)
Center row, delay D >= 4
[Figure: pursuer makes (D-3) moves East along the center row; UGS timestamps k = D, D+1, 2D-4, 2D-2, 2D, 2D+2 at columns 0, 1, ..., D-4, D-3, D-2, D-1, D]
Center row, delay D >= 4 (contd.)
[Figure: continued trajectories; UGS timestamps k = 0, 2, 4, D, D+1, 2D-4, 2D-2, 2D, 2D+2; (D-3) moves East]
Center row, delay D >= 4 (contd.)
[Figure: continued trajectories; UGS timestamps k = 0, D, D+1, 2D-4, 2D-2, 2D, 2D+2]
Center row, delay D >= 4; Bottom row, delay D >= 4
[Figure: end-game trajectories; UGS timestamps k = 0, 4, D, D+1, D+2]
Lower Bound on Steps to Capture
[Figure: 3-row grid (rows t, c, b; columns 0, 1, 2, ...) with initial delay D]
Lower bound on optimal time to capture
Optimal (min-max) Steps to Capture
East is optimal at a red UGS
Sketch of proof:
Optimal trajectory
There is an optimal trajectory, referred to as a "turnpike", which both the pursuer and the evader strive to reach and stay on for most of the encounter. Here, the turnpike is the center row of the symmetric 3-row grid. The pursuer, after initially going East, immediately heads toward the turnpike if not already on it. The evader initially heads to the turnpike, unless it is already on it, and stays there until the "end game", when it swerves off the turnpike to avoid immediate capture. The pursuer stays on the turnpike, monitoring the delays, until he observes delay 1. At that point he also executes the "end game" maneuver and captures the evader in exactly 11 more steps.
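A schematic of the pursuer's side of this policy in Python; the function name, the action strings, and the ordering of the initial East move versus heading to the turnpike are simplifying assumptions, and the actual end-game maneuver is not reproduced:

```python
# Schematic turnpike policy for the pursuer (illustrative only; the exact
# end-game maneuver from the slides is not reproduced here).

def turnpike_policy(row, observed_delay, end_game_started):
    """row: 't'/'c'/'b'; observed_delay: delay at the last red UGS."""
    if end_game_started or observed_delay == 1:
        return "END_GAME"   # end-game maneuver: capture in exactly 11 more steps
    if row == "t":
        return "SOUTH"      # off the turnpike: head to the center row
    if row == "b":
        return "NORTH"
    return "EAST"           # on the turnpike: continue East, monitoring delays
```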
Summary
• Advantages
• Policy depends only on the delay at, and the time elapsed since, the last red UGS (a sufficient statistic?)
• Policy is optimal despite not relying on the pursuer's entire information history
• Disadvantages
• Policy is not in analytical form, i.e., not a function from the information state to the action space (and so not extendable to other graphs)
• What is the intuition? (exploration vs. exploitation; does separation exist?)
• Extension(s)
• Can the policy be approximated by a feedback policy that minimizes a suitable norm of the error (distance to evader + size of uncertainty)? (see the sketch below)
• Capture can no longer be guaranteed by a single pursuer if the number of rows exceeds 3
• With 2 pursuers, capture can be guaranteed in D+4 steps on any number of rows (including infinity)!
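One way the error-norm heuristic proposed in the extensions could look, purely as an illustration; here nodes are encoded as (row index, column), and the weight w, the distance measure, and the caller-supplied belief predictor are all assumptions:

```python
# Illustrative greedy feedback policy: choose the pursuer move minimizing a
# weighted sum of (distance to the nearest possible evader location) and
# (size of the predicted belief set). All names/weights are assumptions.

def grid_dist(a, b):
    """Manhattan distance between nodes encoded as (row index, column)."""
    (ri, ci), (rj, cj) = a, b
    return abs(ri - rj) + abs(ci - cj)

def feedback_action(candidate_moves, belief, predict_belief, w=0.5):
    """One-step lookahead: `predict_belief(move, belief)` models how
    interrogating UGSs along `move` would prune the belief set."""
    def score(move):
        b = predict_belief(move, belief)
        d = min(grid_dist(move, e) for e in b) if b else 0
        return d + w * len(b)
    return min(candidate_moves, key=score)
```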
Extras
Center row, delay D >= 4 (contd.)
conservative bound: D-1+11 = D+10 (see extra slide)
[Figure: trajectories; UGS timestamps k = 0, D, D+1, 2D-4, 2D-2, 2D, 2D+2]
steps to capture: D-1+3 = D+2; conservative bound (per policy) = D-1+11 = D+10
[Figure: trajectories; UGS timestamps k = 0, 2, 4, D, D+1, 2D-4, 2D-2, 2D]