An algorithmic approach to air path computation Devan SOHIER LDCI-EPHE (Paris) 23/11/04
Outline • Introduction • Situation • Modeling • Markov decision processes • Conclusion and perspectives
Problem • Find safe trajectories for all aircraft in a given portion of the airspace • Taking into account stochastic events: • Temporary flyover interdiction (due to meteorological conditions or some other reason) • Deviation • …
Levels of ATC • ATC can be divided into several levels: • Strategic level, for mid-term planning of flights: • many aircraft • meteorological uncertainties • Tactical level, for short-term management: • few aircraft (2 or 3) • uncertainties about the location (deviation)
Outline • Introduction • Situation • Modeling • Markov decision processes • Conclusion and perspectives
Aircraft crossing • Two aircraft x and y • go from xd and yd to xf and yf • risks of conflict • Find a "good" trajectory for each of them (safe and cheap)
Aircraft crossing • Minimize: ∫ (x′² + y′²) dt • Under the safety constraint d(x, y) > ds, and some constraints on speed • We work on (x, y) ∈ R⁶ • The trajectory of (x, y) is composed of straight-line segments and arcs of ellipses in R⁶
Stochastic aircraft crossing • A stochastic deviation (d1, d2) is added to the model: • minimize: E[∫ ((x+d1)′² + (y+d2)′²) dt] • under the constraint: d(x+d1, y+d2) > ds
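On discretized trajectories, the expected cost above can be approximated numerically. A minimal sketch, assuming a Monte Carlo average over sampled deviation paths (the function names, discretization step, and sampling interface are illustrative assumptions, not from the talk):

```python
def expected_cost(x, y, sample_deviation, n_samples=1000, dt=1.0):
    """Monte Carlo estimate of E[∫ ((x+d1)'² + (y+d2)'²) dt] for
    discretized trajectories x, y (lists of positions, one per step).
    sample_deviation(n) returns one deviation path of length n."""
    total = 0.0
    for _ in range(n_samples):
        d1 = sample_deviation(len(x))          # deviation on x's axis
        d2 = sample_deviation(len(y))          # deviation on y's axis
        xs = [xi + di for xi, di in zip(x, d1)]
        ys = [yi + di for yi, di in zip(y, d2)]
        # sum of squared discrete derivatives over the time steps
        cost = sum(((xs[i + 1] - xs[i]) / dt) ** 2
                   + ((ys[i + 1] - ys[i]) / dt) ** 2
                   for i in range(len(xs) - 1)) * dt
        total += cost
    return total / n_samples
```

The safety constraint d(x+d1, y+d2) > ds would be checked per sample in the same loop; it is omitted here to keep the cost estimate readable.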
Problems • Continuous modeling of the deviation: • difficult to determine • difficult to exploit (a continuous-time Markovian modeling cannot be adequate) • Moreover, would it provide useful information? → discretization
Outline • Introduction • Situation • Modeling • Markov decision processes • Conclusion and perspectives
Our Modeling • Existing modelings use: • Continuous space • Continuous time • We propose a discrete modeling, better suited to programming
Bricks • Discretization of the airspace: • Bricks (parallelepipeds) • Size = safety distances
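A brick can be identified by integer coordinates obtained by dividing each coordinate by the brick size. A minimal sketch (the 3-D sizes are placeholders to be set from the actual safety distances):

```python
def brick_index(pos, brick_size):
    """Map a 3-D position (x, y, z) to the integer index of its brick.
    brick_size: (dx, dy, dz), chosen equal to the safety distances."""
    return tuple(int(c // s) for c, s in zip(pos, brick_size))
```

Because the brick size equals the safety distance, two aircraft closer than that distance necessarily lie in the same or in adjacent bricks, so conflict detection reduces to a brick-adjacency check.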
Modeling of the airspace • To improve the modeling: • Use of a honeycomb paving • Discrete time
Voronoi paving • Introduction of dynamic safety distances by the use of a Voronoi paving
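A Voronoi cell assignment can be sketched by mapping each point of the airspace to its nearest site (here, the nearest aircraft), so the cells deform as the aircraft move; this nearest-site function is illustrative only:

```python
def voronoi_cell(point, sites):
    """Return the index of the site whose Voronoi cell contains point,
    i.e. the nearest site (squared Euclidean distance)."""
    def d2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    return min(range(len(sites)), key=lambda i: d2(point, sites[i]))
```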
The graph • Allowed movements are modeled by a graph
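With the brick discretization, one illustrative way to build this graph is to connect each brick to its adjacent bricks; the sketch below assumes all 26 neighbours are permitted moves, whereas a realistic movement graph would be restricted by aircraft dynamics:

```python
def neighbors(cell):
    """Edges of the movement graph from one brick: every one of the
    26 adjacent bricks in the 3-D integer grid (illustrative)."""
    x, y, z = cell
    return [(x + dx, y + dy, z + dz)
            for dx in (-1, 0, 1)
            for dy in (-1, 0, 1)
            for dz in (-1, 0, 1)
            if (dx, dy, dz) != (0, 0, 0)]
```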
Statistics • Markov (resp. semi-Markov) processes are a simple, general and well-known modeling • All the information is contained in the most recent observation(s): the deviation evolves in a memoryless way, so the deviation at time t+1 depends only on the deviation at time t (resp. t, t−1, …, t−k)
Statistics • Preliminary Markov tests on the deviation: • highlight a different behaviour of transversal and longitudinal deviations • suggest a semi-Markov model with a dependence on a history of about 5 observations
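Such tests rest on empirical transition frequencies. A minimal sketch of estimating the transition probabilities of a (semi-)Markov chain of a given order from a sequence of discretized deviation observations (names illustrative):

```python
from collections import Counter, defaultdict

def transition_probs(states, order=1):
    """Estimate P(next state | last `order` states) from an observed
    sequence of discretized deviations. Returns {history: {state: prob}}."""
    counts = defaultdict(Counter)
    for i in range(len(states) - order):
        history = tuple(states[i:i + order])
        counts[history][states[i + order]] += 1
    return {h: {s: c / sum(cnt.values()) for s, c in cnt.items()}
            for h, cnt in counts.items()}
```

Comparing how much these estimates change as `order` grows from 1 to 5 is one way to probe the depth of the history dependence mentioned above.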
Outline • Introduction • Situation • Modeling • Markov decision processes • Conclusion and perspectives
Static vs. dynamic • Static solutions: • worst-case analysis • loss of airspace • Dynamic solutions: • adapt the solution to the current situation • use all the available information • But dynamicity requires more computing power
Dynamic programming • An optimal path (xt)t≥0 is such that for all t0, (xt)t≥t0 is also optimal starting from the situation xt0 • In continuous time this is difficult to apply • Through discretization we obtain an adequate framework
Markov Decision Process • Dynamic programming against a Markov "opponent" • Find rules giving the decision to make in each situation, taking into account the probabilities of evolution under constraints • Safe: in each safe situation, a safe reaction is proposed
Markov Decision Process • We define, for each deviation d and situation s:
dk,d(s, g) = min{ d(s, s′) + Σd2 pd,d2 · dk−1,d2(s′, g) : s → s′ }
Nextk,d(s) = argmin{ d(s, s′) + Σd2 pd,d2 · dk−1,d2(s′, g) : s → s′ }
with g = (xf, yf) the final situation, and for all k: dk,d(s1, s2) = ∞ if s1 + d is forbidden, and dk,d(s, s) = 0 • When these quantities no longer evolve, we obtain the optimization rules.
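These fixed-point equations amount to value iteration over (deviation, situation) pairs. A minimal sketch under simplifying assumptions (finite state set, explicit transition table for the deviations; all names are illustrative, not the talk's implementation):

```python
INF = float("inf")

def mdp_value_iteration(states, moves, step_cost, goal,
                        dev_states, p_dev, forbidden, n_iter=100):
    """Iterate the equations d_{k,d}(s, g) until they stabilize.
    moves(s): allowed next situations s' with s -> s'
    step_cost(s, s'): cost d(s, s')
    p_dev[d][d2]: probability that deviation d evolves to d2
    forbidden(s, d): True if s + d is forbidden
    Returns (V, policy): V[(d, s)] ~ d_{k,d}(s, goal); policy = Next."""
    V = {(d, s): (INF if forbidden(s, d) else (0.0 if s == goal else INF))
         for d in dev_states for s in states}
    policy = {}
    for _ in range(n_iter):
        newV = dict(V)
        for d in dev_states:
            for s in states:
                if s == goal or forbidden(s, d):
                    continue
                best, best_s = INF, None
                for s2 in moves(s):
                    # expected cost-to-go over the deviation's evolution
                    exp = sum(p_dev[d][d2] * V[(d2, s2)]
                              for d2 in dev_states if p_dev[d][d2] > 0)
                    c = step_cost(s, s2) + exp
                    if c < best:
                        best, best_s = c, s2
                if best < newV[(d, s)]:
                    newV[(d, s)] = best
                    policy[(d, s)] = best_s
        V = newV
    return V, policy
```

Starting from ∞ everywhere except the goal, the values decrease monotonically, and the loop can be stopped as soon as an iteration changes nothing, which matches the stopping rule stated above.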
Complexity • The complexity of this MDP grows with the size of the history of the Markov chain (5 in this case) • It is much more efficient than the computation of exact optimal solutions
Outline • Introduction • Situation • Modeling • Markov decision processes • Conclusion and perspectives
Conclusions • Dynamic computation of air trajectories may save much airspace without decreasing safety • Markovian (memoryless) discrete modelings provide an efficient and adequate framework that allows the solution to be programmed
Work in Progress • Work in collaboration with L. El Ghaoui (Berkeley) and A. d'Aspremont (Princeton) on the strategic level
Perspectives • Statistical validation of the modeling • Use of continuous modeling and decision rules, with discretization of the solution • Use of pretopological tools to refine the notion of conflict • Decentralization of the decision through negotiation protocols • Introduction of equity constraints