An Introduction to the Soft Walls Project Adam Cataldo Prof. Edward Lee University of Pennsylvania Dec 18, 2003 Philadelphia, PA
Outline • The Soft Walls system • Objections • Control system design • Current Research • Conclusions
Introduction • On-board database with “no-fly-zones” • Enforce no-fly zones using on-board avionics • Non-networked, non-hackable
Autonomous control [Diagram: an autonomous controller, not the pilot, commands the aircraft]
Soft Walls is not autonomous control [Diagram: the pilot's control input passes through Soft Walls, which adds a bias before it reaches the aircraft]
Relation to Unmanned Aircraft • Not an unmanned strategy: the pilot retains authority • Closer in spirit to collision avoidance
A deadly weapon? • Project started September 11, 2001
Design Objectives Maximize Pilot Authority!
Unsaturated Control [Diagram: control applied near the no-fly zone in three cases: the pilot lets off the controls, the pilot tries to fly into the no-fly zone, the pilot turns away from the no-fly zone]
Objections • Reducing pilot control is dangerous: it reduces the pilot's ability to respond to emergencies • There is no override switch in the cockpit • Localization technology could fail: GPS can be jammed
Localization Backup • Inertial navigation • Integrator drift limits the range over which the estimate stays accurate
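The integrator-drift limitation above can be illustrated with a toy sketch (the bias value and step size are assumptions, not flight data): a tiny constant accelerometer bias, integrated twice, produces a position error that grows quadratically with time.

```python
# Illustrative sketch: why integrator drift limits inertial navigation.
# A constant accelerometer bias, double-integrated, gives quadratic
# position-error growth.

def inertial_drift(bias, dt, steps):
    """Double-integrate a constant accelerometer bias; return position error."""
    velocity_error = 0.0
    position_error = 0.0
    for _ in range(steps):
        velocity_error += bias * dt      # first integration: velocity drifts linearly
        position_error += velocity_error * dt  # second integration: position drifts quadratically
    return position_error

# A 0.001 m/s^2 bias over 600 s already yields roughly
# 0.5 * bias * t^2 = 0.5 * 0.001 * 600^2 = 180 m of error.
```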
Objections (cont.) • Deployment could be costly: software certification? retrofitting older aircraft?
Deployment • Fly-by-wire aircraft: a software change • Older aircraft: implement at the autopilot level • Phase in: prioritize airports
Objections (cont.) • Complexity: software certification
Not Like Air Traffic Control • Much Simpler • No need for air traffic controller certification
Objections (cont.) • Deployment could take too long: software certification • Fully automatic flight control is possible: throw a switch on the ground and take over the plane
Potential Problems with Ground Control • Human-in-the-loop delay on the ground • authorization for takeover • delay recognizing the threat • Security problem on the ground • hijacking from the ground? • takeover of the entire fleet at once? • coup d'état? • Requires radio communication • hackable • jammable
What We Want to Compute • Backwards reachable set: the states from which the aircraft can reach the no-fly zone even with the Soft Walls controller • Outside this set, the controller can prevent the aircraft from entering the no-fly zone
What We Create • The terminal payoff function l: X -> Reals • The further from the no-fly zone, the higher the terminal payoff • The payoff is constant over the heading angle [Plot: payoff over eastward and northward position, lowest at the no-fly zone]
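A terminal payoff of this shape can be sketched as follows; the circular no-fly zone, its center and radius, and the function names are assumptions for illustration, not the project's actual l:

```python
import math

# Illustrative sketch: a terminal payoff l(x) that grows with distance from
# the no-fly zone and is constant over the heading angle.

NFZ_CENTER = (0.0, 0.0)  # assumed no-fly-zone center (east, north)
NFZ_RADIUS = 1.0         # assumed no-fly-zone radius

def terminal_payoff(east, north, heading):
    """Signed distance to the no-fly-zone boundary; ignores heading."""
    dist_to_center = math.hypot(east - NFZ_CENTER[0], north - NFZ_CENTER[1])
    return dist_to_center - NFZ_RADIUS  # negative inside the zone

# Farther from the zone -> higher payoff; heading does not matter.
```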
What We Compute [Plot: terminal payoff and optimal payoff over the state space; the backwards reachable set surrounds the no-fly zone]
Our Control Input • Derived from the optimal payoff function • Optimal control at the boundary of the backwards reachable set • Dampened away from the boundary
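The damping idea above can be sketched as a blend between the pilot's input and the optimal control; the linear ramp shape, ramp width, and names are assumptions, not the project's control law:

```python
# Illustrative sketch: apply full optimal control only at the boundary of the
# backwards reachable set (payoff == 0) and dampen it away from the boundary,
# so the pilot keeps maximum authority far from the no-fly zone.

def blended_control(pilot_input, optimal_input, payoff, ramp_width=1.0):
    """Weight the optimal control by closeness of the optimal payoff to 0."""
    weight = max(0.0, min(1.0, 1.0 - payoff / ramp_width))
    return (1.0 - weight) * pilot_input + weight * optimal_input

# Far from the boundary (payoff >= ramp_width) the pilot is untouched;
# at the boundary (payoff == 0) the optimal control takes over.
```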
How We Compute the Optimal Payoff (Analytically) • We solve this equation for J*: Reals^n x (-∞, 0] -> Reals:  ∂J*/∂t + min{ 0, max over u in U of min over d in D of ∇xJ* · f(x, u, d) } = 0, with J*(x, 0) = l(x)  (time derivative; spatial gradient; dynamics; terminal payoff) • J* is the viscosity solution of this equation • J* converges pointwise to the optimal payoff as T -> ∞ • (Tomlin, Lygeros, Pappas, Sastry)
How We Compute the Optimal Payoff (Numerically) • (Mitchell) • Computationally intensive: with n states the cost grows as O(2^n) [Plot: computation on a grid over eastward position, northward position, and heading angle, stepped through times 0, 1, …, M]
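The grid-based idea can be sketched with a toy dynamic program; the 1-D dynamics, the grids, and the payoff are assumptions for illustration, not Mitchell's level-set scheme. Each backward step takes the max over controls and the min over disturbances, keeping the running minimum of the terminal payoff:

```python
# Illustrative sketch: approximate the optimal finite-duration payoff J* by
# dynamic programming on a 1-D position grid. The no-fly zone edge is at
# x = 1; the controller maximizes, the disturbance minimizes.

GRID = [0.1 * i for i in range(51)]   # positions 0.0 .. 5.0
U, D = (-0.3, 0.3), (-0.1, 0.1)       # control / disturbance choices
DT = 1.0

def l(x):
    return x - 1.0                    # terminal payoff: distance past the zone edge

def nearest(x):
    """Clamp x to the grid and return its nearest index."""
    return int(max(0, min(len(GRID) - 1, round(x / 0.1))))

def backward_steps(n):
    J = [l(x) for x in GRID]          # J at horizon 0 is the terminal payoff
    for _ in range(n):
        J = [min(l(x),
                 max(min(J[nearest(x + (u + d) * DT)] for d in D) for u in U))
             for x in GRID]
    return J

J = backward_steps(10)
# States with J < 0 can be dragged into the zone despite the controller.
```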
Current Research • Model predictive controller • Linearize the system • Discretize time • Compute an optimal control input for the next N steps • Use the optimal input at the current step • Recompute at the next step • Feasible in real time
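The receding-horizon loop above (compute an optimal input for the next N steps, apply only the first, recompute) can be sketched as follows; the 1-D linear model, quadratic cost, finite input set, and horizon are assumptions for illustration, not the project's controller:

```python
from itertools import product

# Toy sketch of model predictive control: brute-force search over length-N
# input sequences, apply the first input of the best sequence, repeat.

A, B = 1.0, 1.0            # assumed linearized, discretized dynamics: x' = A*x + B*u
INPUTS = (-1.0, 0.0, 1.0)  # assumed finite input set
N = 3                      # prediction horizon

def cost(x):
    return x * x           # stay near x = 0 (assumed safe reference)

def mpc_step(x):
    """Return the first input of the lowest-cost length-N input sequence."""
    best_seq, best_cost = None, float("inf")
    for seq in product(INPUTS, repeat=N):
        xk, total = x, 0.0
        for u in seq:
            xk = A * xk + B * u
            total += cost(xk)
        if total < best_cost:
            best_seq, best_cost = seq, total
    return best_seq[0]

# Closed loop: recompute from the new state at every step.
x = 2.5
for _ in range(5):
    x = A * x + B * mpc_step(x)
```

The exhaustive search is only feasible for tiny input sets and horizons; it stands in for the optimization solver a real-time implementation would use.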
Current Research • Discrete Abstraction • Discretize time • Discretize space into a finite set • Feasible for verification [Diagram: the (eastward position, northward position, heading angle) space partitioned into numbered cells]
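Once space is discretized into a finite set of cells, backward reachability becomes a fixed-point computation on a graph. The 4-cell line, its transitions, and the "unsafe" terminology below are assumptions for illustration: a cell is unsafe if, for every control choice, some disturbance can lead to an unsafe cell.

```python
# Illustrative sketch: game-style backward reachability on a discrete
# abstraction. successors[cell][control] = set of cells a disturbance can
# force next. Cell 4 is the no-fly zone; cell 3 is too close to escape.

successors = {
    1: {"left": {1}, "right": {2}},
    2: {"left": {1}, "right": {2, 3}},
    3: {"left": {3, 4}, "right": {4}},  # disturbance can force entry
    4: {"left": {4}, "right": {4}},     # absorbing no-fly zone
}
NO_FLY = {4}

def unsafe_cells():
    """Fixed point: grow the unsafe set until no cell is added."""
    unsafe = set(NO_FLY)
    changed = True
    while changed:
        changed = False
        for cell, moves in successors.items():
            if cell in unsafe:
                continue
            # unsafe if every control can be pushed into the unsafe set
            if all(nxt & unsafe for nxt in moves.values()):
                unsafe.add(cell)
                changed = True
    return unsafe
```

Because the state set is finite, this fixed point always terminates, which is what makes the abstraction feasible for verification.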
Conclusions • Embedded control system challenge • Control theory identified • Future design challenges identified
Acknowledgements • Ian Mitchell • Xiaojun Liu • Shankar Sastry • Steve Neuendorffer • Claire Tomlin
Backup Slides—Problem Model • State space X, a subset of Reals^n • Control space U and disturbance space D, compact subsets of Reals^u and Reals^d [Diagram: Soft Walls supplies the control input u (bank left/right), the pilot supplies the disturbance d; the state is eastward position, northward position, and heading angle]
Actions in Continuous Time • Pilot (Player 1) action d: Reals -> D, a Lebesgue measurable function • The set of all pilot actions is Dt [Plot: bank angle (bank left/bank right) versus time]
Actions in Continuous Time • Controller (Player 2) action u: Reals -> U, a Lebesgue measurable function • The set of all controller actions is Ut [Plot: bank angle (bank left/bank right) versus time]
Dynamics • The state trajectory x: Reals -> X • The dynamics equation f: X x U x D -> Reals^n • (d/dt) x(t) = f(x(t), u(t), d(t)) [Plot: a trajectory through eastward position, northward position, and heading angle]
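The dynamics equation can be sketched numerically; the kinematic aircraft model (constant speed, heading-rate input) and forward-Euler integration are assumptions for illustration, not the project's aircraft model:

```python
import math

# Illustrative sketch: forward-Euler integration of dx/dt = f(x, u, d),
# with state x = (east, north, heading).

V = 1.0  # assumed constant speed

def f(state, u, d):
    east, north, heading = state
    return (V * math.sin(heading),   # eastward velocity
            V * math.cos(heading),   # northward velocity
            u + d)                   # turn rate: Soft Walls bias + pilot input

def simulate(state, u_fn, d_fn, dt, steps):
    """Euler-integrate the trajectory from `state`; return all sampled states."""
    trajectory = [state]
    for k in range(steps):
        t = k * dt
        dx = f(state, u_fn(t), d_fn(t))
        state = tuple(s + dt * ds for s, ds in zip(state, dx))
        trajectory.append(state)
    return trajectory

# Straight flight north when neither the pilot nor Soft Walls turns:
traj = simulate((0.0, 0.0, 0.0), lambda t: 0.0, lambda t: 0.0, 0.1, 10)
```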
Information Set • At each time t, the controller knows the pilot's current action and the aircraft state • We say s: Dt -> Ut is the controller's nonanticipative strategy • S is the set of nonanticipative strategies [Diagram: the controller sees the pilot's bank input over time and the trajectory through eastward position, northward position, and heading angle]
Information Set • At each time t, the pilot knows only the aircraft state • We say the pilot uses a feedback strategy [Diagram: the pilot sees only the trajectory through eastward position, northward position, and heading angle]
What We Want to Compute • Backwards reachable set: the states from which the aircraft can reach the no-fly zone even with the Soft Walls controller • Outside this set, the controller can prevent the aircraft from entering the no-fly zone
The Terminal Payoff Function • The terminal payoff function l: X -> Reals • The further from the no-fly zone, the higher the terminal payoff • The payoff is constant over the heading angle [Plot: payoff over eastward and northward position, lowest at the no-fly zone]
The Payoff Function • Given the pilot action d and the control action u, the payoff is V: X x Ut x Dt -> Reals • The aircraft state at time t is x(t) • For a state trajectory that terminates at y given u and d as inputs, the payoff at y is the minimum distance from the no-fly zone attained over time
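The payoff of a sampled trajectory can be sketched as the minimum terminal payoff along it; the circular zone, its parameters, and the hand-picked trajectory are assumptions for illustration:

```python
import math

# Illustrative sketch: the payoff of a trajectory is the minimum of the
# terminal payoff l (distance from the no-fly zone) over every sampled state.

NFZ_CENTER, NFZ_RADIUS = (0.0, 0.0), 1.0  # assumed circular no-fly zone

def l(east, north):
    return math.hypot(east - NFZ_CENTER[0], north - NFZ_CENTER[1]) - NFZ_RADIUS

def payoff(trajectory):
    """Minimum of l over every (east, north, heading) state of the trajectory."""
    return min(l(e, n) for e, n, _heading in trajectory)

# A fly-by whose closest point is 2 units from the zone center:
traj = [(-3.0, 2.0, 0.5 * math.pi),
        (0.0, 2.0, 0.5 * math.pi),
        (3.0, 2.0, 0.5 * math.pi)]
```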
The Optimal Payoff • At state y, the optimal payoff V*: X -> Reals comes from the control strategy s*: X -> S which maximizes the payoff against a disturbance which minimizes the payoff [Diagram: the optimal control strategy versus the optimal disturbance strategy near the no-fly zone]
The Finite Duration Payoff • We look back no more than T seconds to compute the finite duration payoff J: X x Ut x Dt x (-∞, 0] -> Reals and the optimal finite duration payoff J*: X x (-∞, 0] -> Reals • The finite horizon is the only difference from V and V*
Computing V* • Assuming J* is continuous, it is the viscosity solution of ∂J*/∂t + min{ 0, max over u in U of min over d in D of ∇xJ* · f(x, u, d) } = 0, with J*(x, 0) = l(x)  (spatial gradient; dynamics; finite duration optimal payoff) • J* converges pointwise to V* as T -> ∞ • (Tomlin, Lygeros, Pappas, Sastry)
What Our Computation Gives Us [Plot: terminal payoff and optimal payoff over the state space; the backwards reachable set surrounds the no-fly zone]
Control from Optimal Payoff Function • Optimal control at the boundary of the backwards reachable set • Dampened away from the boundary
Numerical Computation of V* • (Mitchell) • Computationally intensive: with n states the cost grows as O(2^n) [Plot: computation on a grid over eastward position, northward position, and heading angle, stepped through times 0, 1, …, M]
What About Discrete-Time? • Same state space X • Same control space U and disturbance space D [Diagram: Soft Walls supplies u (bank left/right), the pilot supplies d; the state is eastward position, northward position, and heading angle]
Actions in Discrete Time • Pilot (Player 1) action d: Integers -> D • Control (Player 2) action u: Integers -> U • The set of all pilot actions is Dk • The set of all control actions is Uk [Plots: piecewise-constant bank inputs (bank left/bank right) versus time]
Dynamics • The state trajectory x: Integers -> X • The dynamics equation f: X x U x D -> Reals^n • x(k+1) = f(x(k), u(k), d(k)) [Plot: a discrete trajectory through eastward position, northward position, and heading angle]
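The discrete-time dynamics can be sketched as a simple rollout; the turn-then-move kinematic model and step length are assumptions for illustration, not the project's discretization:

```python
import math

# Illustrative sketch: in discrete time the trajectory is produced by
# iterating x(k+1) = f(x(k), u(k), d(k)).

STEP = 0.5  # assumed distance travelled per discrete step

def f(state, u, d):
    east, north, heading = state
    new_heading = heading + u + d          # turn first, then move one step
    return (east + STEP * math.sin(new_heading),
            north + STEP * math.cos(new_heading),
            new_heading)

def rollout(state, inputs):
    """Iterate the dynamics over a sequence of (u, d) pairs."""
    trajectory = [state]
    for u, d in inputs:
        state = f(state, u, d)
        trajectory.append(state)
    return trajectory

# Four quarter-turns add up to a full revolution and bring the aircraft
# back near its starting position:
traj = rollout((0.0, 0.0, 0.0), [(0.5 * math.pi, 0.0)] * 4)
```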