Vehicle Autonomy and Intelligent Control J. A. Farrell Department of Electrical Engineering University of California, Riverside
Intelligence & Autonomy [Architecture diagram: Sensor Processing, World Model, Value Judgment, and Behavior Generation, linking Sensors and Actuators to the World] Increasingly capable autonomous vehicles: a worthy challenge necessitating increased ability along various dimensions of intelligence
AV Examples • Phoenix Mars Lander • Artist: Corby Waste, JPL
AV Examples Stanford/Volkswagen’s Stanley: 2005 DARPA Grand Challenge
AV Examples CMU: TARTAN Racing – Boss 2007 DARPA Urban Challenge “… to autonomously navigate in town and in traffic. Boss uses perception, planning and behavioral software to reason about traffic and take appropriate actions while proceeding safely to a destination.”
IAV: Control Impact [Architecture diagram: Sensor Processing, World Model, Value Judgment, and Behavior Generation, linking Sensors and Actuators to the World]
Enabling Technological Advances • Computational Hardware • Sensors and Sensor Processing • Computational Reasoning • Control Theoretic Advances • Software Engineering Principles This talk: One perspective on how such advances enable advancing AV capability Topics: • Deliberative & reactive planning • Behaviors & nonlinear control • Discrete event & hybrid systems • Theory & practicality: Cognitive mapping
Computational Reasoning: AI “The science and engineering of making intelligent machines” – John McCarthy, 1956 Benchmark Intelligent (Human) Capabilities: • Deduction, reasoning, problem solving • Natural language understanding • Knowledge representation • Planning & scheduling • Learning • Vision • … Intelligence: the ability of a system to act appropriately in an uncertain environment, where appropriate action is that which increases the probability of success, and success is the achievement of behavioral subgoals that support the system’s ultimate goal. – J. S. Albus
Computational Reasoning: Planning Discovery of an action sequence to achieve a goal • Formulation: Initial state, final state, action set, cost • Implementation: Search (e.g. A*), hierarchical tasks, heuristics • Challenges: Dealing w/ real world • Dimensionality • Model error • Lack of determinism
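As a sketch of the search-based implementation mentioned above, here is a minimal A* in Python over an implicit graph; the `neighbors` and `heuristic` callables (and the toy graph) are hypothetical placeholders for a problem-specific action model and an admissible cost-to-go estimate.

```python
import heapq
import itertools

def a_star(start, goal, neighbors, heuristic):
    """Minimal A* search over an implicit graph.

    neighbors(s)       -> iterable of (next_state, step_cost)
    heuristic(s, goal) -> admissible estimate of the remaining cost
    Returns a lowest-cost state sequence from start to goal, or None.
    """
    tie = itertools.count()          # tie-breaker so states are never compared
    frontier = [(heuristic(start, goal), next(tie), 0.0, start, [start])]
    best_cost = {start: 0.0}
    while frontier:
        _, _, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        for nxt, cost in neighbors(state):
            g_next = g + cost
            if g_next < best_cost.get(nxt, float("inf")):
                best_cost[nxt] = g_next
                f = g_next + heuristic(nxt, goal)
                heapq.heappush(frontier, (f, next(tie), g_next, nxt, path + [nxt]))
    return None

# Tiny illustrative graph (hypothetical); a zero heuristic reduces A* to Dijkstra
graph = {"A": [("B", 1.0), ("C", 4.0)], "B": [("C", 1.0)], "C": []}
print(a_star("A", "C", lambda s: graph[s], lambda s, g: 0.0))   # -> ['A', 'B', 'C']
```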
AI Early Successes Games, Theorem proving, Planning, etc. R. Brooks (1987) questions status: “Replication of human intelligence in a machine” • Success achieved on AI components via: • abstraction • symbolic processing w/ simple semantics • no uncertainty • Neglected Hard Issues: • Recognition • Spatial understanding • Uncertainty & Noise • Model error • …
Traditional “Mobile Robot Control” Traditional approach: -- Decompose human intelligence into (right) subpieces, -- Progress on each subpiece, -- Define (right) interfaces between subpieces -- Reassemble subpieces Criticism: Insufficient experience and knowledge to … --R. Brooks (1987)
Behavior Based Control • Capabilities of Intelligent Systems • Built incrementally via task-achieving behaviors • Complete functional systems at each step: • to ensure pieces are valid • to ensure interfaces are valid
Planning & Reaction Hierarchical Planning: Formulates action sequences for long-range goals • Deliberation • Time consuming • Model based • Adaptability for general tasks Reactivity: Activates automatically to ensure vehicle safety • Direct reflexive perception-action links • Tradeoff: optimality for safety • Well-tested for fixed tasks • Many opportunities for control theoretic contributions: • Behaviors provide the interface • Finite alphabet of discrete actions/events for planning • Continuous desired trajectories to controllers • Behaviors included control, but were not control theoretic • Higher performance/robustness • Behavior switching requires analysis • Domains of attraction, controlled invariant sets • Switching stability • Adaptability requires stable performance feedback • Environmental models • Behavior models: closed-loop performance • Etc. Discrete Event Systems Nonlinear control Hybrid systems Adaptation & learning
Behavior based ‘control’ design via DES Specify the set of events Σ, the set of behaviors Q, and the transition function δ to solve a given problem. Σ – set of switching events e(t) Q – set of behaviors i(t) δ – behavioral switching logic in response to events: i(t+) = δ(e(t), i(t)) The resulting automaton can be represented as a graph. • Discrete Event Controller, δ(e(t), i(t)) • Switches among behaviors • Interface: • Event generator • Library of behaviors: Q = {Bi}, i = 1,…,N • Trajectory generator • Controller
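As a rough illustration of the transition structure above (not the controller used on any vehicle in this talk), the map i(t+) = δ(e(t), i(t)) can be coded as a table-driven switcher; the behavior and event names below are hypothetical placeholders.

```python
class DiscreteEventController:
    """Table-driven behavior switcher implementing i(t+) = delta(e(t), i(t)).

    delta is a dict mapping (current_behavior, event) -> next_behavior;
    pairs not listed leave the active behavior unchanged.
    """
    def __init__(self, delta, initial_behavior):
        self.delta = delta
        self.behavior = initial_behavior

    def on_event(self, event):
        self.behavior = self.delta.get((self.behavior, event), self.behavior)
        return self.behavior

# Hypothetical two-behavior automaton (names are illustrative only):
delta = {
    ("search", "target_detected"): "track",
    ("track", "target_lost"): "search",
}
controller = DiscreteEventController(delta, initial_behavior="search")
print(controller.on_event("target_detected"))   # -> "track"
```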
DES: Chemical Plume Tracing • Design behaviors Q, event definitions Σ, and transition function δ such that an autonomous underwater vehicle (AUV) will • Proceed from a home location to a region of operation • Search for a chemical plume • Track a chemical plume in a turbulent flow to its source • Declare the source location • Return home Q: Behaviors GT – go to point P US – uninformed search IS – informed search MI – maintain: in MO – maintain: out PD – post-declaration maneuvers Σ: Events f – finish c – detect chemical at t = td n1 – no detection at t = td + t1 d – declare source
DES: Chemical Plume Tracing • DES formulation provides a systematic design/analysis structure • Graph representation of δ facilitates definition of specifications within the design team and with the customer • Behaviors Q • Each behavior designed to execute a specific trajectory • Behavior/control interface at the speed/heading command level • New behaviors easily added • Design: Q, Σ, δ • Biological emulation: moths, mosquitoes, salmon, … • Understanding of vehicle kinematics, fluid flow, physics • Informed search using HMM for chemical transport • For CPT, the stochastic DES is sufficiently complex to preclude analytic analysis • Analysis and design based on simulation • At-sea surf-zone performance demonstration (3x)
CPT In-water Experimental Results (June 2003) • Mission 003 • OpArea is the dashed line • Trajectory in red • Chemical detections in blue
What is a Behavior/Schema? • A pattern of action as well as a pattern for action (Neisser 1976). • A mental codification of experience that includes a particular organized way of perceiving cognitively and responding to a complex situation or set of stimuli (Merriam-Webster 1984). • A control system that continually monitors … the system it controls to determine the appropriate pattern of action for achieving the motor schema’s goals (Overton 1984). Arkin 1989 • Behavior implementation requires control • traditionally at the speed and yaw command level • Speed & yaw control implementation is part of the hardware • Alternative interfaces/behaviors may be desirable • control is critical • performance • robustness • different behaviors may necessitate different controllers • switching between different controllers for different behaviors must be performed in a stable manner
Behavior Examples: Land Vehicle • Throttle and wheel angle control • Speed (cruise) control • Adaptive cruise control – slows to avoid collisions • Speed and yaw rate control • Speed and yaw angle control • Path following • Trajectory following • Still may use speed and yaw as intermediate control variables • Provides provably stable system • Robustness analysis is possible • Domain of attraction can be determined • Autonomous Parallel Parking of a Nonholonomic Vehicle • ... Avoid obstacle, follow target, change lane, exit, … • …Platoon: merge, exit, …
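To make the “speed and yaw as intermediate control variables” interface concrete, here is a minimal inner-loop sketch mapping behavior-level commands to throttle and wheel angle; the PI/P structure and gains are assumptions chosen for illustration, not the controllers of any vehicle discussed here.

```python
import math

class SpeedYawInnerLoop:
    """Illustrative inner loop: (v_cmd, yaw_cmd) -> (throttle, wheel_angle).

    PI on speed, P on yaw angle; gains and structure are placeholders.
    """
    def __init__(self, kp_v=0.8, ki_v=0.1, kp_yaw=1.5, dt=0.02):
        self.kp_v, self.ki_v, self.kp_yaw, self.dt = kp_v, ki_v, kp_yaw, dt
        self.v_err_int = 0.0

    def update(self, v_cmd, yaw_cmd, v_meas, yaw_meas):
        # speed loop: PI on the speed error
        v_err = v_cmd - v_meas
        self.v_err_int += v_err * self.dt
        throttle = self.kp_v * v_err + self.ki_v * self.v_err_int
        # yaw loop: P on the heading error, wrapped to [-pi, pi)
        yaw_err = (yaw_cmd - yaw_meas + math.pi) % (2 * math.pi) - math.pi
        wheel_angle = self.kp_yaw * yaw_err
        return throttle, wheel_angle
```

A behavior such as path following would then only need to generate (v_cmd, yaw_cmd), leaving the inner loop to be designed and analyzed once.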
Behavior Examples: VSTOL MODES • CTOL – Conventional Takeoff & Landing • VTOL – Vertical Takeoff & Landing • Transition Key Ideas • Stability via approximate feedback linearization • Maximal controlled invariant subset • Least restrictive feedback control • Flight envelope protection
Behavior Examples: Helicopter • Behaviors: Motion primitives • trim points, transitions between trim points • Tactical planning by hybrid automata: • Selection of an optimal sequence of motion primitives based on: • Vehicle state constraints • Cost function • Strategic objectives • Each node of the automaton is an agent (controller) responsible for behavior implementation
Behavior based controller • Library of behaviors: {Bi}, i = 1,…,N • Each behavior Bi has a Lyapunov characterization: αi(||x||) ≤ Vi(x) ≤ βi(||x||), with dVi/dt ≤ −Wi(||x||) along trajectories, where αi, βi, Wi are class-K functions
Hybrid/Switched Systems Issues • No Zeno behavior: Guaranteed via the trajectory generator portion of the planner/behavior • Behavior stability: Guaranteed via nonlinear control design/analysis, given that behavior i starts with x ∈ Ωi (its domain of attraction) • Switching stability: • Requires additional conditions on the switching sequence (e.g., dwell-time or multiple Lyapunov function conditions)
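A minimal sketch of one such switching-stability check, assuming the standard multiple-Lyapunov-function condition (an assumption here; the slide’s exact condition is not reproduced): for each behavior i, the value of Vi at successive activations of behavior i should be non-increasing.

```python
import math

class SwitchingStabilityMonitor:
    """Runtime check of a multiple-Lyapunov-function condition (assumed form):
    V_i evaluated each time behavior i is switched in must not increase
    relative to its value at the previous activation of behavior i.
    """
    def __init__(self, lyapunov_fns):
        self.V = lyapunov_fns           # dict: behavior name -> V_i(x)
        self.last_entry_value = {}      # behavior name -> V_i at last activation

    def on_switch_in(self, behavior, x):
        v_now = self.V[behavior](x)
        v_prev = self.last_entry_value.get(behavior, math.inf)
        self.last_entry_value[behavior] = v_now
        return v_now <= v_prev          # False flags a potential violation

# Hypothetical usage with a scalar state and V(x) = x^2:
monitor = SwitchingStabilityMonitor({"cruise": lambda x: x * x})
print(monitor.on_switch_in("cruise", 2.0))   # True  (first activation)
print(monitor.on_switch_in("cruise", 1.0))   # True  (4.0 -> 1.0, non-increasing)
```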
AUV for Hull Search Behaviors: • velocity & angular rate • velocity & attitude • trajectory following w/ zero attitude • trajectory following w/ nonzero attitude • surface following • hold position and attitude • scan object at offset Sim
Comments: • Simulation is an essential tool • idea evaluation • debugging • Implementation and test • of complete systems • on real vehicles • in the real world is the only real test of efficacy • Rigorous theoretical study: foundation to enable & direct advancement in autonomous vehicle capabilities • Ingenuity: to address the practical complexities beyond our theoretical understanding • Contests: • DARPA: Grand & Urban • AUVSI: UAS, UGV, USV, AUV • NIST: Search & Rescue • SAUC-E
Cognitive Mapping • Egocentric: self-centered frame • Object locations change as the vehicle moves • Uses: sensor information • Allocentric: external reference frame • Object locations are (largely) fixed • Uses: planning, long-term memory Human Example: Home map (allocentric) facilitates planning; vision (egocentric sensor) facilitates maneuvering
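Once the vehicle pose in the allocentric frame is known, moving an egocentric observation into the allocentric frame is a coordinate transformation. The planar range/bearing sketch below is an illustrative assumption, not a construction from the talk.

```python
import math

def ego_to_allo(vehicle_x, vehicle_y, vehicle_yaw, rng, bearing):
    """Map an egocentric range/bearing observation to allocentric coordinates.

    bearing is measured relative to the vehicle heading (planar case assumed).
    """
    ang = vehicle_yaw + bearing
    return vehicle_x + rng * math.cos(ang), vehicle_y + rng * math.sin(ang)

# Example: vehicle at (2, 1) heading 90 deg, object seen 5 m straight ahead
print(ego_to_allo(2.0, 1.0, math.pi / 2, 5.0, 0.0))   # approx. (2.0, 6.0)
```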
Simultaneous Localization and Mapping Setting: Initiate an AV at an unknown location in an unknown environment: • Develop a map M of the unknown environment • Maintain knowledge of the AV position Pv w/i the unknown environment Assuming only egocentric sensing: • Landmark info: di – distance and bi – bearing • Dead-reckoning: odometry or inertial • No anchoring (i.e., sensors such as GPS are not used) • SLAM theoretical solution w/ properties in 2001 • Stochastic & Kalman filter methods • Linear assumptions
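The landmark measurements (di, bi) above enter the Kalman-filter formulation through a range/bearing measurement model. The sketch below assumes a simplified planar state layout [x, y, yaw, m1x, m1y, …] for illustration only.

```python
import numpy as np

def range_bearing_measurement(state, landmark_idx):
    """Predicted (distance d_i, bearing b_i) to one landmark.

    state = [x, y, yaw, m1x, m1y, m2x, m2y, ...] -- a simplified planar
    layout assumed for this sketch.
    """
    x, y, yaw = state[0], state[1], state[2]
    mx = state[3 + 2 * landmark_idx]
    my = state[4 + 2 * landmark_idx]
    dx, dy = mx - x, my - y
    d = np.hypot(dx, dy)
    b = np.arctan2(dy, dx) - yaw
    b = (b + np.pi) % (2 * np.pi) - np.pi      # wrap bearing to [-pi, pi)
    return d, b

# Example: vehicle at origin facing +x, landmark 0 at (3, 4)
print(range_bearing_measurement(np.array([0.0, 0.0, 0.0, 3.0, 4.0]), 0))
# -> (5.0, 0.927...)
```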
Practical SLAM: Challenges • System: noise, nonlinearity, observability issues • Dimensionality: • Number of variables • Position variables: 3*(#landmarks+1) • Covariance matrix: 9*(#landmarks+1)^2 • Topography: grid or triangular tessellation • Topology • Correspondence or Data Association: Ego to Allo issues • Time variation: object motion, aging, changing topology • Exploration: optimization w/ map uncertainty • Sensor fusion: combining heterogeneous information from various sensor modalities
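To make the dimensionality point concrete, the counts above imply quadratic growth of covariance storage in the number of landmarks; a quick computation (sketch only):

```python
def slam_dimensions(num_landmarks):
    """State and covariance sizes per the counts above:
    3 position variables per entity (vehicle + landmarks)."""
    entities = num_landmarks + 1
    n_state = 3 * entities                 # 3*(#landmarks+1)
    n_cov_entries = n_state ** 2           # 9*(#landmarks+1)^2
    return n_state, n_cov_entries

print(slam_dimensions(1000))   # -> (3003, 9018009)
```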
Similar Complex IAV Problems • Cognitive Mapping • Perception • Sensor Fusion/Feature Correspondence • Behavioral Learning • Optimal Control • Approximate dynamic programming • Mission Planning • Heuristics, hierarchies, …
Concluding Comments • Turing Test: • Optimal • Strong super-human: performs better than all humans • Super-human: performs better than most humans • Sub-human: performs worse than most humans • Intelligent AV Capabilities, e.g.: • All involve feedback processes, w/ many challenging & unsolved problems • Control expertise has expanded, and continues to expand, its role, both developing & utilizing new tools, to yield increasingly robust and capable systems • The concept of behaviors, combined w/ advanced control methods, enables robust abstraction for higher level IAV performance
Caption: It locates and destroys mines at the command of an operator who’s nowhere in the vicinity Thank you
Agile AV SW/HW Development Tenets • Simplicity • Start w/ simplest approach • Always have a functioning prototype • Add functionality as needed • Feedback & Communications • From customer • From team • Behavior specification • Unit test specification • Simulation test • From system • Freq. vehicle testing
Intelligent AV: Implementation “Optimism is an occupational hazard of programming, feedback is the treatment.” Kent Beck