
Optimization Based Approaches to Autonomy


Presentation Transcript


  1. Optimization Based Approaches to Autonomy SAE Aerospace Control and Guidance Systems Committee (ACGSC) Meeting Harvey’s Resort, Lake Tahoe, Nevada March 3, 2005 Cedric Ma Northrop Grumman Corporation

  2. Outline • Introduction • Level of Autonomy • Optimization and Autonomy • Autonomy Hierarchy and Applications • Path Planning with Mixed-Integer Linear Programming • Optimal Trajectory Generation with Nonlinear Programming • Summary and Conclusions

  3. Autonomy in Vehicle Applications [Diagram of application areas: team tactics, pack-level coordination, navigation, landing, formation flying, cooperative search, rendezvous & refueling, obstacle avoidance.]

  4. Autonomy: Boyd’s OODA “Loop” [Diagram of Boyd’s Observe-Orient-Decide-Act loop: unfolding circumstances, outside information, and the unfolding interaction with the environment feed Observe; cultural traditions, genetic heritage, previous experience, new information, and analyses & synthesis shape Orient; Orient leads to Decision (hypothesis) and Action (test), with implicit guidance & control, feed-forward, and feedback paths closing the loop.] Note how orientation shapes observation, shapes decision, shapes action, and in turn is shaped by the feedback and other phenomena coming into our sensing or observing window. Also note how the entire “loop” (not just orientation) is an ongoing many-sided implicit cross-referencing process of projection, empathy, correlation, and rejection. From “The Essence of Winning and Losing,” John R. Boyd, January 1996. Defense and the National Interest, http://www.d-n-i.net, 2001

  5. Level of Autonomy • Ground Operation • Activities performed off-line • Tele-Operation • Awareness of sensor / actuator interfaces • Executes commands uploaded from the ground • Reactive Control • Awareness of the present situation • Simple reflexes, i.e. no planning required • A condition triggers an associated action • Responsive Control • Awareness of past actions • Remembers previous actions • Remembers features of the environment • Remembers goals • Deliberative Control • Awareness of future possibilities • Reasons about future consequences • Chooses optimal paths / plans [Scale diagram: levels 1–5, from Ground Operation through Tele-Operation, Reactive Control, and Responsive Control up to Deliberative Control, the goal of optimization-based autonomy.]

  6. Optimization and Autonomy [Block diagram: an Optimizer takes the objective/reward function, the constraints/rules (i.e. dynamics/goal), and the vehicle state, and outputs the optimal control/decision.] Formulation of the problem shapes the “Orient” mechanism. The optimizer determines the best course of action based on the current objective, while meeting constraints.

  7. Autonomy Hierarchy • Mission Planning (time scale ~1 hr): planning & scheduling, resource allocation & sequencing; task sequencing, auto routing • Cooperative Control (time scale ~1 min): multi-agent coordination, pack-level organization; formation flying, cooperative search & electronic warfare, conflict resolution, task negotiation, team tactics • Path Planning (time scale ~10 s): “navigation,” motion planning; obstacle/collision/threat avoidance • Trajectory Generation (time scale ~1 s): “guidance,” contingency handling; landing, rendezvous, refueling • Trajectory Following / Inner Loop (time scale ~0.1 s): “control,” disturbance rejection; applications: stabilization, adaptive/reconfigurable control, FDIR

  8. Path Planning with Mixed-Integer Linear Programming (MILP)

  9. Overview: Path Planning • Path Planning bridges the gap between the Mission Planner/AutoRouter and Individual Vehicle Guidance • Acts on an “intermediate” time scale between that of the mission planner (minutes) and guidance (sub-second) • Short reaction time [Illustration: mission waypoints, nap-of-the-earth flight, multi-vehicle coordination, terrain navigation, obstacle avoidance, collision avoidance.]

  10. Path-Planning with MILP • Mixed-Integer Linear Programming: Linear Programs (LP) with integer variables • COTS MILP solver: ILOG CPLEX • Vehicle dynamics as linear constraints: limit velocity, acceleration, climb/turn rate • Resulting path is given to 4-D guidance • Integer variables can model: obstacle collision constraints (binary); control modes, threat exposure; nonlinear functions (RCS, dynamics, min. time, acceleration, altitude, threat) • Objective function includes terms for: acceleration, non-arrival, terminal, altitude, threat exposure

  11. Basic Obstacle Avoidance Problem • Vehicle Dynamic Constraints • Double Integrator dynamics • Max acceleration • Max velocity • Objective Function (summed over each time step) • Acceleration (1-norm) in x, y, z • Distance to destination (1-norm) • Altitude (if applicable) • Obstacle Constraints (integer) • One set per obstacle per time step • No cost associated with obstacles • Big-M constraints for a rectangular obstacle with corners (x1, y1) and (x2, y2): x - M·b1 ≤ x1, x + M·b2 ≥ x2, y - M·b3 ≤ y1, y + M·b4 ≥ y2, b1 + b2 + b3 + b4 ≤ 3, so that at least one avoidance side is enforced at every time step (a code sketch follows below)
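To make the big-M formulation above concrete, here is a minimal sketch of the single-obstacle path-planning MILP, written with the open-source PuLP modeler rather than the ILOG CPLEX solver named on slide 10. The horizon length, objective weights, velocity/acceleration limits, and obstacle corners are illustrative values, not taken from the original work.

```python
# Minimal MILP path-planning sketch (assumes: PuLP with its bundled CBC solver,
# illustrative horizon, weights, limits, and obstacle geometry).
from pulp import LpProblem, LpMinimize, LpVariable, LpBinary, lpSum

def plan_milp(start, goal, obstacle, N=12, dt=1.0, a_max=1.0, v_max=2.0, M=1e3):
    """Plan N steps of a 2-D double-integrator path around one rectangular
    obstacle, minimizing a 1-norm acceleration + distance-to-goal objective."""
    xo1, xo2, yo1, yo2 = obstacle              # obstacle corners: xmin, xmax, ymin, ymax
    prob = LpProblem("milp_path", LpMinimize)

    px = [LpVariable(f"px{k}") for k in range(N + 1)]
    py = [LpVariable(f"py{k}") for k in range(N + 1)]
    vx = [LpVariable(f"vx{k}", -v_max, v_max) for k in range(N + 1)]
    vy = [LpVariable(f"vy{k}", -v_max, v_max) for k in range(N + 1)]
    ax = [LpVariable(f"ax{k}", -a_max, a_max) for k in range(N)]
    ay = [LpVariable(f"ay{k}", -a_max, a_max) for k in range(N)]
    ua = [LpVariable(f"ua{k}", 0) for k in range(N)]      # bounds |ax| + |ay|
    dg = [LpVariable(f"dg{k}", 0) for k in range(N + 1)]  # bounds 1-norm distance to goal

    prob += lpSum(ua) + 10 * lpSum(dg)                    # effort + non-arrival penalty

    prob += px[0] == start[0]
    prob += py[0] == start[1]
    prob += vx[0] == 0
    prob += vy[0] == 0

    for k in range(N):
        # Double-integrator dynamics written as linear equality constraints.
        prob += px[k + 1] == px[k] + dt * vx[k]
        prob += py[k + 1] == py[k] + dt * vy[k]
        prob += vx[k + 1] == vx[k] + dt * ax[k]
        prob += vy[k + 1] == vy[k] + dt * ay[k]
        for sx in (1, -1):
            for sy in (1, -1):                            # ua >= |ax| + |ay|
                prob += ua[k] >= sx * ax[k] + sy * ay[k]

    for k in range(N + 1):
        for sx in (1, -1):
            for sy in (1, -1):                            # dg >= |px-gx| + |py-gy|
                prob += dg[k] >= sx * (px[k] - goal[0]) + sy * (py[k] - goal[1])
        # Big-M obstacle avoidance: b_i = 0 means side i is enforced,
        # and at most three of the four sides may be relaxed.
        b = [LpVariable(f"b{k}_{i}", cat=LpBinary) for i in range(4)]
        prob += px[k] - M * b[0] <= xo1
        prob += px[k] + M * b[1] >= xo2
        prob += py[k] - M * b[2] <= yo1
        prob += py[k] + M * b[3] >= yo2
        prob += lpSum(b) <= 3

    prob.solve()
    return [(px[k].value(), py[k].value()) for k in range(N + 1)]

# Example: fly from the origin to (9, 0) around a box spanning x in [3, 6], y in [-1, 2].
print(plan_milp(start=(0, 0), goal=(9, 0), obstacle=(3, 6, -1, 2)))
```

Each time step carries its own set of four binaries, matching the "one set per obstacle per time step" note on the slide.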

  12. Receding Horizon MILP Path-Planning • Path is computed periodically, with the most current information • Planning horizon and replan period are chosen based on problem type, computational requirements, and environment • Only a subset of the current plan is executed before replanning • RH reduces computation time: shorter planning horizon; does not plan all the way to the destination • RH introduces robustness to path planning: pop-up obstacles; unexpected obstacle movement (a loop sketch follows below)
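A schematic receding-horizon loop, reusing the plan_milp sketch above: only the first few points of each short-horizon plan are flown before replanning with fresh obstacle information. The get_obstacle callback and the restart-from-rest simplification (plan_milp fixes zero velocity at each replan) are illustrative shortcuts, not features of the original implementation.

```python
# Receding-horizon wrapper (assumes plan_milp from the previous sketch is in scope).
import math

def receding_horizon(start, goal, get_obstacle, horizon=8, exec_steps=2, tol=0.5):
    """Repeatedly solve a short-horizon MILP and execute only its first steps."""
    state, flown = tuple(start), [tuple(start)]
    while math.dist(state, goal) > tol:
        obstacle = get_obstacle()                     # most current information
        plan = plan_milp(state, goal, obstacle, N=horizon)
        executed = plan[1:exec_steps + 1]             # partial execution of the plan
        flown.extend(executed)
        state = executed[-1]                          # replan from the reached state
    return flown

# Example with a static obstacle; a real get_obstacle() would return updated
# (possibly pop-up) obstacle bounds on every call.
path = receding_horizon((0, 0), (9, 0), get_obstacle=lambda: (3, 6, -1, 2))
```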

  13. Obstacle Avoidance [Examples: nap-of-the-earth flight at treetop level; urban low-altitude operations.]

  14. Collision Avoidance • Problem is formulated identically to obstacle avoidance in MILP • Air vehicles are moving obstacles • Path calculation is based on the expected future trajectory of the other vehicles • Dealing with uncertainty: vehicles of uncertain intent can be enlarged with time (see the sketch below); receding horizon; frequent replanning • [Illustration: change in the planned path (blue) in response to changes in the intruder’s movement.]
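A small sketch of the "enlarge uncertain vehicles with time" idea: the intruder's predicted position at each future step is wrapped in a box whose half-width grows with the look-ahead time, and each box can then be fed to the MILP as a moving obstacle (one set of big-M constraints per obstacle per time step). The growth rate, nominal size, and constant-velocity prediction are illustrative assumptions, not values from the original work.

```python
# Time-growing uncertainty box around a predicted intruder position
# (assumes: illustrative half_size and growth_rate, constant-velocity prediction).
def inflated_obstacle(pred_pos, k, dt=1.0, half_size=0.5, growth_rate=0.3):
    """Return (xmin, xmax, ymin, ymax) for the intruder at future step k."""
    margin = half_size + growth_rate * k * dt     # uncertainty grows with look-ahead time
    x, y = pred_pos
    return (x - margin, x + margin, y - margin, y + margin)

# One inflated box per future time step, usable as moving-obstacle constraints.
predicted = [(5.0 + 0.4 * k, 1.0) for k in range(8)]      # assumed straight-line intruder
boxes = [inflated_obstacle(p, k) for k, p in enumerate(predicted)]
```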

  15. Coordinated Conflict Resolution • 3-D Multi-Vehicle Path-Planning problem • Centralized version • “Decentralized Cooperative Trajectory Planning of Multiple Aircraft with Hard Safety Guarantees” by MIT • Loiter maneuvers can be used to produce provably safe trajectories • Minimum separation distance is specified in problem formulation • No limit to number of vehicles • Non-cooperative vehicles are treated as moving obstacles

  16. Threat Avoidance • Purpose: to avoid detection by known threats by planning the trajectory behind opaque obstacles • Shadow-like “safe zones”: one per threat/obstacle pair; well defined for convex obstacles; nice topological properties • Patent pending: Docket No. 000535-030 [Illustration: vehicle hiding behind a building from a threat, with on-time arrival at the destination.]

  17. Summary: MILP Path Planning • MILP provides fast global optimization: no suboptimal local minima; branch & bound provides fast tree search; commercial solver on an RTOS • Tractability trade-off: time discretization (constraints active only at discrete points in time; time-scale refinement) • Linear dynamics/constraints: the formulation should properly capture the nonlinearity of the solution space; the true global minimum is in a neighborhood of the MILP optimal solution

  18. Optimal Trajectory Generation with Nonlinear Programming (NLP)

  19. Problems & Goal of Trajectory Generation • Currently, the primary method is pre-generated waypoint routes with little/no adaptation or reaction to threats or condition changes • Even the latest vehicles have low autonomy levels and are doing exactly what they are told, largely indifferent to the world around them • What are the potential gains of Near Real Time Trajectory Generation? • Improved Effectiveness • Reduced operator workload – force multiplier • Mission planning / re-planning • Account for range and time delays • Improved Survivability? • UAV trades success/risk • Limp-home capability • Autonomous threat mitigation (RCS, SAM, Small Arms, AA Fire) • Air/Air Engagement • Accurate release of cheap ‘dumb’ ordnance • GOAL-DRIVEN AUTONOMY: Command ‘What’ not ‘How’ • How best can we mimic (or improve on?) human skill and speed at trajectory generation in complex environments?

  20. Classical Trajectory Optimization Problem • Cost and constraints: [equations not transcribed; a standard form is given below] • Issues: becomes the traditional two-point constrained boundary value problem; computationally expensive due to equality constraints from the system, environment, and actuation dynamics; currently intractable in the required time for effective control • Hope? Perhaps our systems contain a structure which allows all solutions of the system (trajectories) to be smoothly mapped from a set of free trajectories in a reduced-dimensional space. Algebraic solutions in this reduced space would implicitly satisfy the dynamic constraints of the original system.
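For reference, the classical fixed-time trajectory optimization problem this slide describes is usually stated in the Bolza form below; this is the generic textbook statement, not necessarily the exact cost and constraint functions shown on the original slide.

```latex
\begin{align*}
\min_{x(\cdot),\,u(\cdot)} \quad & \Phi\bigl(x(t_f)\bigr) + \int_{t_0}^{t_f} L\bigl(x(t),u(t)\bigr)\,dt \\
\text{subject to} \quad & \dot{x}(t) = f\bigl(x(t),u(t)\bigr) \quad \text{(system and actuation dynamics)} \\
& g\bigl(x(t),u(t)\bigr) \le 0 \quad \text{(environment and actuation constraints)} \\
& x(t_0) = x_0, \qquad \psi\bigl(x(t_f)\bigr) = 0 \quad \text{(boundary conditions).}
\end{align*}
```

The two-point boundary value problem mentioned on the slide arises when the necessary conditions for optimality (costate dynamics plus the terminal constraint) are imposed on this formulation.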

  21. e e   Trajectory Generation: Current Methods Iteration 1: • Brute force numerical method solution of the dynamic and constraint ODE’s • Solution Method • Guess control e(t) • Propagate dynamics from beginning to end (simulate) • Propagate constraints from beginning to end (simulate) • Check for constraint violation • Modify guess e(t) • Repeat until feasible/optimal solution obtained. (optimize) • Vast complexity and extremely long solution times are addressed by either/both: • Very simple control curves • All calculations performed offline (selected/looked-up online) • Much of previous work in subject devoted to improving ‘wisdom’ of next guess Iteration 2:

  22. Differential Systems Suggest an Elegant Solution • Perhaps our systems contain a structure which allows all solutions of the system (trajectories) to be smoothly mapped by a set of free trajectories in a reduced-dimensional space; algebraic solutions in this reduced space would implicitly satisfy the dynamics and constraint ODEs of the original system • Constraints are mapped into the flat space as well and also become time independent • Direct solutions! We are modifying the same curve we are optimizing! • Local support: every solution is only affected by the trajectory near it • Basically a curve-fit problem

  23. Differential Flatness • Definition: a system is said to be differentially flat if there exist variables z1,…,zm, functions of the state, the input, and finitely many derivatives of the input, such that (x, u) can be expressed in terms of z and its derivatives (the standard form is given below) • Example: point-to-point steering • Differential constraints are reduced to algebraic equations in the flat space! • Note: dynamic feedback linearization via endogenous feedback is equivalent to differential flatness
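The defining equations on the original slide were not captured in the transcript; the standard form from the differential flatness literature, with r and q finite numbers of derivatives, is:

```latex
\begin{align*}
z &= h\bigl(x,\,u,\,\dot{u},\,\ldots,\,u^{(r)}\bigr), \\
x &= \varphi\bigl(z,\,\dot{z},\,\ldots,\,z^{(q)}\bigr), \qquad
u = \psi\bigl(z,\,\dot{z},\,\ldots,\,z^{(q)}\bigr).
\end{align*}
```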

  24. Instinct Autonomy: Now Using Flatness • Simply find any curve that satisfies the constraints in the flat space • Solution method: map the system to the flat space using the inverse mapping ‘w-1’; guess a trajectory of the flat output zn; compare against the constraints (in the flat space); optimize over control points; when completed, apply the ‘w’ function to convert back to the normal space • Much simpler control space, no simulation required: very simple to manipulate curves; all calculations performed on-line on the vehicle [Illustration: a candidate curve in the flat-output plane (z1, z2).] A worked example follows below.
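A minimal worked example of the recipe above for a system whose flat output is easy to identify, a double integrator (x_ddot = u) with flat output z = x: choose a smooth curve for z that meets the boundary conditions, then recover the state and input purely by differentiation, with no simulation. The quintic parametrization, horizon, and input limit are illustrative assumptions; a real application would optimize the curve's control points against richer constraints.

```python
# Flatness-based point-to-point trajectory for a double integrator (x_ddot = u).
# Assumes: quintic flat-output curve, illustrative horizon and input limit.
import numpy as np

t0, tf, u_max = 0.0, 4.0, 2.0
# Boundary conditions on (z, z_dot, z_ddot) at t0 and tf: rest-to-rest, move 1 unit.
bc = np.array([0, 0, 0, 1, 0, 0], dtype=float)

def rows(t):
    """Rows enforcing z, z_dot, z_ddot of the quintic z(t) = sum c_i t^i at time t."""
    return np.array([[t**i for i in range(6)],
                     [i * t**(i - 1) if i >= 1 else 0 for i in range(6)],
                     [i * (i - 1) * t**(i - 2) if i >= 2 else 0 for i in range(6)]])

c = np.linalg.solve(np.vstack([rows(t0), rows(tf)]), bc)   # algebraic, no simulation

# Recover the trajectory directly from the flat output: x = z, v = z_dot, u = z_ddot.
t = np.linspace(t0, tf, 101)
z = sum(c[i] * t**i for i in range(6))
u = sum(i * (i - 1) * c[i] * t**(i - 2) for i in range(2, 6))
assert np.all(np.abs(u) <= u_max), "input constraint violated; reshape the flat curve"
print(z[0], z[-1], np.max(np.abs(u)))      # endpoints and peak input
```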

  25. Too Good to be True? What Did We Lose? • It seems reasonable that such a reduction in complexity would result in some sort of approximation • Many systems lose nothing at all! • Linear models that are controllable (including non-minimum phase) • Fully-flat nonlinear models • Some systems require reasonable assumptions • Conventional A/C make the same assumptions as dynamic inversion • Some systems are much less obvious and more complicated • This is one of the hardest questions of Differential Flatness – identifying the flat output can be very difficult • Modern configurations are very challenging! • After one stabilization loop, most systems become differentially flat (or very close to it)

  26. Autonomous Trajectory Generation [SEC demonstration: the vehicle is driven by high-level commands such as GO TO WP_D, GO TO WP_A, GO TO Rnwy_3, GO TO WP_C, GO TO WP_S.]

  27. Summary • MILP Path Planning: MILP provides fast global optimization (no suboptimal local minima; branch & bound provides fast tree search; commercial solver on an RTOS); the tractability trade-off is time discretization (constraints active only at discrete points in time; time-scale refinement); with linear dynamics/constraints, the formulation should properly capture the nonlinearity of the solution space, so that the true global minimum is in a neighborhood of the MILP optimal solution • Optimal Trajectory Generation: OTG provides fast nonlinear optimization (optimal control for full nonlinear systems); the differential flatness property allows the problem to be mapped to a lower-dimensional space for the NLP solver; the absence of dynamics in the new space speeds optimization and eases constraint propagation; problem setup should focus on the right “basin of attraction”, since the NLP solver seeks locally optimal solutions via SQP methods and needs a good initial guess; use in conjunction with global methods, i.e. MILP

  28. Conclusions • Optimization-based approaches help achieve a higher level of autonomy by enabling autonomous decision making • Cast autonomy applications into standard optimization problems, to be solved using existing optimization tools and frameworks • Benefits: no need to build a custom solver, an existing body of theory, continued improvement in solver technology • Future: a broad range of complex autonomy applications enabled by a wide, continuous spectrum of powerful optimization engines and approaches • Challenge: advanced development of V&V, sensing, & fusion technology, leading to widespread certification and adoption • Thanks/Credits: • NTG/OTG Approach: Mark Milam/NGST, Prof. R. Murray/Caltech • MILP Approach: Prof. Jonathan How/MIT • Autonomy Slides: Jonathan Mead/NGST • OTG Slides: Travis Vetter/NGIS
