Control and Decision Making in Uncertain Multiagent Hierarchical Systems

Presentation Transcript


  1. Control and Decision Making in Uncertain Multiagent Hierarchical Systems June 10th, 2002 H. Jin Kim and Shankar Sastry University of California, Berkeley

  2. Outline • Hierarchical architecture for multiagent operations • Confronting uncertainty • Partial observation Markov games (POMGames) • Incorporating human intervention in control and decision making • Model predictive techniques for dynamic replanning

  3. Partial-observation Probabilistic Pursuit-Evasion Game (PEG) with 4 UGVs and 1 UAV • Fully autonomous operation

  4. Hierarchy in Berkeley Platform • Uncertainty pervades every layer! • Impossible to build autonomous agents that can cope with all contingencies [Architecture diagram: a strategy planner with map builder exchanges desired agent actions, agent positions, and detected targets/obstacles with each vehicle over the communications network; on each vehicle, a tactical planner & regulation layer (tactical planner, trajectory planner, regulation) and vehicle-level sensor fusion process detected obstacles/targets, inertial positions, height over terrain, actuator positions, linear accelerations, and angular velocities from sensors (actuator encoders, vision, ultrasonic altimeter, INS, GPS) and issue control signals to the UAV/UGV dynamics, which interact with targets, terrain, and exogenous disturbances.]

  5. Ground Station Human Interface • A high degree of autonomy does not guarantee superior performance of the overall system [Diagram: the ground station sends commands to, and receives current position and vehicle status from, the vehicles; the evader location is detected by the vision system.]

  6. Lessons Learned and UAV/UGV Objective • To design semi-autonomous teams that deliver missions reliably under uncertainty, and to evaluate their performance • Scalable/replicable system aided by computationally tractable algorithms • Hierarchical architecture design and analysis • High-level decision making in a discrete space • Physical-layer control in a continuous space • Hierarchical decomposition requires tight interaction between layers to achieve cooperative behavior, to deconflict, and to support constraints • Confronting uncertainty arising from partially observable, dynamically changing environments and intelligent adversaries • Proper degree of autonomy, incorporating reliance on human intervention • Observability and directability, not excessive functionality

  7. POMGame: Representing and Managing Uncertainty • Uncertainty is introduced through various channels • Sensing -> unable to determine the current state of the world • Prediction -> unable to infer the future state of the world • Actuation -> unable to make the desired action to properly affect the state of the world • Different types of uncertainty can be addressed by different approaches • Nondeterministic uncertainty: Robust Control • Probabilistic uncertainty: (Partially Observable) Markov Decision Processes • Adversarial uncertainty: Game Theory

  8. Markov Games • Framework for sequential multiagent interaction in a Markov environment
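
The slide's formal definition was an image; for reference, a standard formulation of an n-player Markov (stochastic) game, in conventional notation rather than the slide's, is:

```latex
% Standard n-player Markov game -- conventional notation, assumed here
\[
\big( \mathcal{X},\; \mathcal{A}^1,\dots,\mathcal{A}^n,\; P,\; r^1,\dots,r^n \big),
\qquad
P(x' \mid x, a^1,\dots,a^n), \qquad
r^i : \mathcal{X}\times\mathcal{A}^1\times\cdots\times\mathcal{A}^n \to \mathbb{R},
\]
where $\mathcal{X}$ is the state space, $\mathcal{A}^i$ the action set of agent $i$,
$P$ the Markov transition kernel, and $r^i$ the reward function of agent $i$.
```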

  9. Policy for Markov Games • The policy of agent i at time t is a mapping from the current state to a probability distribution over its action set. • Agent i wants to maximize the expected infinite discounted sum of the rewards it will gain by executing the optimal policy starting from that state, where γ ∈ [0,1) is the discount factor and r_t is the reward received at time t • Performance measure: the expected infinite-horizon discounted return • Every discounted Markov game has at least one stationary optimal policy, but not necessarily a deterministic one. • Special case: Markov decision processes (MDP) • Can be solved by dynamic programming
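
The performance-measure formula was also an image; written out in the conventional form (symbols assumed, not the slide's):

```latex
% Discounted performance measure for agent i under joint policy \pi
\[
V^{\pi}_i(x) \;=\; \mathbb{E}\!\left[\, \sum_{t=0}^{\infty} \gamma^{t}\, r^{i}_t \;\middle|\; x_0 = x,\; \pi \right],
\qquad 0 \le \gamma < 1 .
\]
```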

  10. Partial Observation Markov Games (POMGame)

  11. Policy for POMGames • Agent i wants to receive at least its security level, i.e., the payoff it can guarantee regardless of the other agents' policies • Poorly understood: analysis exists only for very specially structured games, such as a game with complete information on one side • Special case: partially observable Markov decision processes (POMDP)

  12. Acting under Partial Observations • Memory-free policies (mappings from observation to action, or to a probability distribution over the action set) are not satisfactory. • In order to behave truly effectively we need to use the memory of previous actions and observations to disambiguate the current state. • The state estimate, or belief state: the posterior probability distribution over states = the likelihood that the world is actually in state x at time t, given the agent's past experience (i.e., action and observation histories). • A priori human input on the initial state of the world can initialize the belief state.

  13. Updating Belief State • The belief state can be updated recursively using the estimated world model and Bayes' rule: the transition model contributes new information from prediction, and the latest observation contributes new information on the state of the world.
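
A minimal sketch of such a recursive Bayes update over a discrete state set, assuming a known transition model P(x'|x,u) and observation model P(y|x') (function names and array layout are illustrative, not the authors'):

```python
import numpy as np

def update_belief(belief, action, observation, trans_model, obs_model):
    """Recursive Bayes update of a discrete belief state.

    belief:      (n_states,) prior P(x_t = x | history)
    trans_model: (n_actions, n_states, n_states), trans_model[u, x, x'] = P(x' | x, u)
    obs_model:   (n_obs, n_states), obs_model[y, x'] = P(y | x')
    """
    # Prediction step: propagate the belief through the transition model.
    predicted = trans_model[action].T @ belief          # P(x_{t+1} = x' | history, u_t)
    # Correction step: weight by the likelihood of the new observation.
    unnormalized = obs_model[observation] * predicted   # P(y_{t+1} | x') * prediction
    return unnormalized / unnormalized.sum()            # normalize to a distribution
```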

  14. BEAR Pursuit-Evasion Scenario Evade!

  15. Optimal Pursuit Policy • Performance measure : capture time • Optimal policy m minimizes the cost

  16. Optimal Pursuit Policy • Cost-to-go for policy m, when the pursuers start with Y_t = Y and a conditional distribution p for the state x(t) • Cost of policy m
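
The cost expressions were images on the slide; a plausible reconstruction, with T* the capture time and Y_t the measurement history (notation assumed, not taken from the slide):

```latex
% Assumed reconstruction: cost-to-go and cost of a pursuit policy m
\[
V_{m}(Y, p) \;=\; \mathbb{E}_{m}\!\left[\, T^{*} - t \;\middle|\; Y_t = Y,\; x(t) \sim p \,\right],
\qquad
J(m) \;=\; \mathbb{E}_{m}\!\left[\, T^{*} \,\right].
\]
```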

  17. Persistent pursuit policies • Optimization using dynamic programming is computationally intensive. • Persistent pursuit policy g

  18. Persistent pursuit policies • Persistent pursuit policy g with a period T

  19. Pursuit Policies • Greedy Policy • The pursuer moves to the cell with the highest probability of containing an evader at the next instant • The strategic planner assigns more importance to local or immediate considerations • u(v): the list of cells reachable from the current pursuer position v in a single time step.
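
A minimal sketch of such a greedy rule on a grid, assuming an evader-probability map and a one-step reachability set u(v) as on the slide (grid layout and helper names are illustrative):

```python
import numpy as np

def reachable_cells(v, grid_shape):
    """u(v): cells reachable from pursuer position v in one time step (stay put or move to a 4-neighbor)."""
    r, c = v
    moves = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]
    return [(r + dr, c + dc) for dr, dc in moves
            if 0 <= r + dr < grid_shape[0] and 0 <= c + dc < grid_shape[1]]

def greedy_pursuit_step(v, evader_map):
    """Move to the reachable cell with the highest predicted evader probability."""
    candidates = reachable_cells(v, evader_map.shape)
    return max(candidates, key=lambda cell: evader_map[cell])
```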

  20. Persistent Pursuit Policies for Unconstrained Motion • Theorem 1, for unconstrained motion • The greedy policy is persistent -> the probability of the capture time being finite is equal to one -> the expected value of the capture time is finite

  21. Persistent Pursuit Policies for constrained motion Assumptions • For any • Theorem 2, for constrained motion • There is an admissible pursuit policy that is persistent on the average with period

  22. Experimental Results: Pursuit Evasion Games with 4 UGVs (Spring '01)

  23. Experimental Results: Pursuit Evasion Games with 4 UGVs and 1 UAV (Spring '01)

  24. Pursuit-Evasion Game Experiment • PEG with four UGVs • Global-Max pursuit policy • Simulated camera view (radius 7.5 m with a 50-degree conic view) • Pursuer: 0.3 m/s, Evader: 0.5 m/s

  25. Pursuit-Evasion Game Experiment • PEG with four UGVs • Global-Max pursuit policy • Simulated camera view (radius 7.5 m with a 50-degree conic view) • Pursuer: 0.3 m/s, Evader: 0.5 m/s

  26. Experimental Results: Evaluation of Policies for Different Visibility • Capture time of the greedy and global-max policies for different regions of visibility of the pursuers • 3 pursuers with trapezoidal or omni-directional view • Randomly moving evader • The global-max policy performs better than greedy, since the greedy policy selects movements based only on local considerations. • Both policies perform better with the trapezoidal view, since the camera rotates fast enough to compensate for the narrow field of view.

  27. Experimental Results: Evader's Speed vs. Intelligence • Capture time for different speeds and levels of intelligence of the evader • 3 pursuers with trapezoidal view & global-max policy • Max speed of pursuers: 0.3 m/s • A more intelligent evader increases the capture time • It is harder to capture an intelligent evader at a higher speed • The capture time of a fast random evader is shorter than that of a slower random evader when the speed of the evader is only slightly higher than that of the pursuers.

  28. Game-theoretic Policy Search Paradigm • Solving very small games with partial information, or games with full information, is sometimes computationally tractable • Many interesting games, including pursuit-evasion, are large games with partial information, and finding optimal solutions is well outside the capability of current algorithms • An approximate solution is not necessarily bad; there might be simple policies with satisfactory performance -> Choose a good policy from a restricted class of policies! • We can find approximately optimal solutions from restricted classes, using sparse sampling and a provably convergent policy search algorithm

  29. Constructing a Policy Class • Given a mission with specific goals, we • decompose the problem in terms of the functions that need to be achieved for success and the means that are available • analyze how a human team would solve the problem • determine a list of important factors that complicate task performance, such as safety or physical constraints • Examples: maximize aerial coverage, stay within communications range, penalize actions that lead an agent into a danger zone, maximize the explored region, minimize fuel usage, …

  30. Policy Representation • Quantize the above features and define a feature vector consisting of the estimates of the above quantities for each action, given the agents' history • Estimate the 'goodness' of each action by combining the feature vector with a weighting vector that is to be learned • Choose the action that maximizes this score • Or choose a randomized action according to a distribution over the scores, with a parameter controlling the degree of exploration
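
A minimal sketch of one such representation, assuming a linear score w·f(a, h) and a softmax over actions whose temperature sets the degree of exploration (the specific functional form is the conventional one, not necessarily the slide's):

```python
import numpy as np

def action_scores(weights, features):
    """Score Q(a) = w . f(a, h) for each action from its quantized feature vector."""
    return features @ weights            # features: (n_actions, n_features)

def select_action(weights, features, temperature=None, rng=None):
    """Greedy choice, or a randomized (softmax) choice whose spread is the degree of exploration."""
    rng = np.random.default_rng() if rng is None else rng
    scores = action_scores(weights, features)
    if temperature is None:
        return int(np.argmax(scores))    # deterministic: maximize the score
    logits = (scores - scores.max()) / temperature   # numerically stable softmax
    probs = np.exp(logits) / np.exp(logits).sum()
    return int(rng.choice(len(scores), p=probs))
```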

  31. Policy Learning • Policy parameters are learned using standard techniques, such as a gradient-descent algorithm, to maximize the long-term reward • Given a POMDP, and assuming that we have a deterministic simulative model, we can approximate the value of a specific policy by building a set of m trajectory trees of a given depth • m is independent of the size of the state space and of the complexity of the transition distribution [Ng & Jordan, 2000] -> computational tractability
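
A minimal sketch of evaluating a candidate weight vector with a simulative model and sampled rollouts (a simplification of the trajectory-tree idea; `simulate_step`, `initial_state_sampler`, and `feature_fn` are hypothetical stand-ins for the mission simulator):

```python
import numpy as np

def estimate_policy_value(weights, simulate_step, initial_state_sampler, feature_fn,
                          horizon=50, n_rollouts=200, gamma=0.95,
                          rng=np.random.default_rng(0)):
    """Monte Carlo estimate of the discounted return of the feature-based policy."""
    total = 0.0
    for _ in range(n_rollouts):
        state = initial_state_sampler(rng)
        discount, ret = 1.0, 0.0
        for _ in range(horizon):
            features = feature_fn(state)                    # (n_actions, n_features)
            action = int(np.argmax(features @ weights))     # greedy w.r.t. the learned score
            state, reward, done = simulate_step(state, action, rng)
            ret += discount * reward
            discount *= gamma
            if done:
                break
        total += ret
    return total / n_rollouts   # averaged return; fed to gradient or line search over weights
```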

  32. Example: Policy Features • Maximize collective aerial coverage -> maximize the distance between agents, computed from the location at which each pursuer would land by taking a given action from its current position • Try to visit an unexplored region with a high possibility of detecting an evader, using the position reached by the action that maximizes the evader-map value along the frontier

  33. Example: Policy Feature (Continued) • Prioritize actions that are more compatible with the dynamics of agents • Policy representation

  34. Benchmarking Experiments • Performance of two pursuit policies compared in terms of capture time • Experiment 1: two pursuers against an evader who moves greedily with respect to the pursuers' location • Experiment 2: when the position of the evader at each step is detected by the sensor network with only 10% accuracy, two optimized pursuers took 24.1 steps, while the one-step greedy pursuers took over 146 steps on average, to capture the evader in a 30-by-30 grid.

  35. Incorporating Human Intervention • Given the POMDP formalism, informational inputs affect only the initialization or updating of the belief state, and do not affect the procedure of computing (approximately) optimal actions. • When a part of the system is commanded to take specific actions, it may overrule internally chosen actions and simultaneously communicate its modified status to the rest of the system, which in turn adapts to coordinate its own actions as well. • A human command in the form of mission objectives can be expressed as a change to the reward function, which causes the system to modify or dynamically replan its actions to achieve it. The importance of a goal is specified by changing the magnitude of the rewards.
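
A minimal illustration of that last point, assuming the long-term reward is a weighted sum of mission terms and a human command simply rescales the weights (the term names are hypothetical):

```python
# Hypothetical mission reward expressed as a weighted sum of terms;
# a human command re-prioritizes goals by changing the reward magnitudes.
reward_weights = {"capture_evader": 10.0, "coverage": 2.0, "fuel": -0.5}

def apply_human_command(weights, command):
    """command: mapping from goal name to its new importance (reward magnitude)."""
    updated = dict(weights)
    updated.update(command)
    return updated

def mission_reward(weights, terms):
    """terms: per-step values of each mission quantity; this reward drives replanning."""
    return sum(weights[name] * terms.get(name, 0.0) for name in weights)

# e.g. an operator raises the priority of exploration mid-mission:
reward_weights = apply_human_command(reward_weights, {"coverage": 8.0})
```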

  36. Coordination under Multiple Sources of Commands • When different humans or layers specify multiple, possibly conflicting goals or actions, how can the system prioritize or resolve them? • Different entities are a priori assigned different degrees of authority • If there are enough resources to resolve an important conflict, we may give operators the option of explicitly coordinating their goals • Coordination demand surges when the situation deviates from textbook cases: can the overall system adapt in real time? • Intermediate, cooperative modes of interaction (vs. the traditional human interrupt in fully manual form) are desirable • Transparent, event-based displays that highlight changes (vs. current data-oriented displays) • Anticipatory reasoning (not just information on history) should be supported.

  37. Deconfliction between Layers • Each UAV is given a waypoint by the high-level planner • The shortest trajectories to the waypoints may lead to collision • How can the trajectories for the UAVs be dynamically replanned subject to input saturation and state constraints?

  38. (Nonlinear) Model Predictive Control • Find the control sequence over a finite horizon that minimizes a cost on the predicted states and inputs • Common choice: a quadratic penalty on tracking error and control effort
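
The formulas were images; a conventional form of the receding-horizon cost and the common quadratic stage cost (notation assumed) is:

```latex
% Conventional receding-horizon cost; Q, R positive (semi)definite weights.
\[
\min_{u_t,\dots,u_{t+N-1}} \; \sum_{k=t}^{t+N-1} L\big(x_k, u_k\big) \;+\; V\big(x_{t+N}\big),
\qquad
L(x,u) \;=\; \big\|x - x^{\mathrm{ref}}\big\|_Q^2 \;+\; \|u\|_R^2 .
\]
```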

  39. Planning of Feasible Trajectories • State saturation • Collision avoidance • Magnitude of each cost element represents the priority of tasks/functionality, or the authority of layers

  40. Hierarchy in Berkeley Platform [Architecture diagram repeated from slide 4: strategy planner and map builder, communications network, tactical planner & regulation (trajectory planner, regulation), vehicle-level sensor fusion, sensors (vision, ultrasonic altimeter, INS, GPS, actuator encoders), UAV/UGV dynamics, targets, terrain, exogenous disturbance.]

  41. Cooperative Path Planning & Control • Example: three UAVs (H0, H1, H2) are given straight-line trajectories that would lead to collision. NMPPC dynamically replans and tracks the safe trajectories of H1 and H2 under input/state constraints. • Coordination based on priority • Constraints supported: |lin. vel.| < 16.7 ft/s, |ang| < pi/6 rad, |control inputs| < 1 [Figure: trajectories followed by the 3 UAVs]

  42. Summary • Decomposition of complex multiagent operation problems requires tight interaction between subsystems and human intervention • Partial observation Markov games provide a mathematical representation of a hierarchical multiagent system operating under adversarial and environmental uncertainty • The policy class framework provides a setup for including human experience • Policy search methods and sparse sampling produce computationally tractable algorithms to generate approximate solutions to partially observable Markov games • Human input can and should be incorporated, either a priori or on the fly, into various factors such as reward functions, feature vector elements, transition rules, and action priorities • Model predictive (receding horizon) techniques can be used for dynamic replanning to deconflict and coordinate between vehicles, layers, or subtasks

  43. Unifying Trajectory Generation and Tracking Control • Nonlinear Model Predictive Planning & Control (NMPPC) combines trajectory planning and control into a single problem, using ideas from • potential-field-based navigation (real-time path planning) • nonlinear model predictive control (optimal control of nonlinear multi-input, multi-output systems with input/state constraints) • We incorporate a tracking-performance term, a potential function, and state constraints into the cost function to be minimized, and use gradient descent for on-line optimization • Removes feasibility issues by considering the UAV dynamics during trajectory planning • Robust to parameter uncertainties • Optimization can be done in real time
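
A minimal sketch of that idea, assuming discrete-time dynamics f(x, u), a quadratic tracking term, a repulsive potential for collision avoidance, and soft penalties for input limits; all names, weights, and the finite-difference gradient are illustrative, not the authors' implementation:

```python
import numpy as np

def nmppc_cost(u_seq, x0, f, x_ref, obstacles, u_max,
               w_track=1.0, w_pot=5.0, w_sat=50.0):
    """Finite-horizon cost: tracking error + collision-avoidance potential + input-saturation penalty."""
    x, cost = np.array(x0, dtype=float), 0.0
    for k, u in enumerate(u_seq):
        x = f(x, u)                                        # predicted next state
        cost += w_track * np.sum((x[:2] - x_ref[k]) ** 2)  # x[:2]: planar position (assumed state layout)
        for obs in obstacles:                              # repulsive potential near obstacles / other UAVs
            cost += w_pot / (np.sum((x[:2] - obs) ** 2) + 1e-3)
        cost += w_sat * np.sum(np.maximum(np.abs(u) - u_max, 0.0) ** 2)  # soft input constraint
    return cost

def replan(u_seq, x0, f, x_ref, obstacles, u_max, iters=100, step=1e-2, eps=1e-4):
    """On-line gradient descent on the control sequence (numerical gradient for simplicity)."""
    u_seq = np.array(u_seq, dtype=float)
    for _ in range(iters):
        base = nmppc_cost(u_seq, x0, f, x_ref, obstacles, u_max)
        grad = np.zeros_like(u_seq)
        for idx in np.ndindex(u_seq.shape):                # finite-difference gradient
            perturbed = u_seq.copy()
            perturbed[idx] += eps
            grad[idx] = (nmppc_cost(perturbed, x0, f, x_ref, obstacles, u_max) - base) / eps
        u_seq -= step * grad
    return u_seq                                           # apply u_seq[0], then re-solve at the next step
```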

  44. Modeling and Control of UAVs • A single, computationally tractable model cannot capture nonlinear UAV dynamics throughout the large flight envelope. • Real control systems are partially observed (noise, hidden variables). • It is impossible to have data for all parts of the high-dimensional state space. -> The model and control algorithm must be robust to unmodeled dynamics and noise, and must handle MIMO nonlinearity. • Observation: linear analysis and deterministic robust control techniques fail to do so.

  45. Modeling RUAV Dynamics [Block diagram with labels: aerodynamic analysis; longitudinal flapping; lateral flapping; main rotor collective pitch; tail rotor collective pitch; body velocities; angular rates; servo inputs; throttle; coordinate transformation; augmented servodynamics; tractable nonlinear model with position, spatial velocities, angles, and angular rates.]

  46. Benchmarking Trajectory: PD Controller • Nonlinear, coupled dynamics are intrinsic characteristics of pirouette and nose-in circle trajectories. • Example: the PD controller fails to achieve nose-in-circle-type trajectories.

  47. Reinforcement Learning Policy Search Control Design • Aerodynamics/kinematics generates a model to identify. • Locally weighted Bayesian regression is used for nonlinear stochastic identification: we get the posterior distribution of the parameters, and can easily simulate the posterior predictive distribution to check the fit and robustness. • A controller class is defined from the identification process and physical insight, and we apply a policy search algorithm. • We obtain approximately optimal controller parameters by reinforcement learning, i.e., training using the flight data and the reward function. • Considering the controller performance over a confidence interval of the identification process, we measure the safety and robustness of the control system.
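
A minimal sketch of that robustness check, assuming posterior samples of the identified model parameters and a flight simulator with a reward already exist (`sample_posterior_params`, `simulate_flight`, and `reward` are hypothetical stand-ins):

```python
import numpy as np

def evaluate_controller(controller_params, sample_posterior_params, simulate_flight,
                        reward, n_models=100, rng=np.random.default_rng(0)):
    """Score a candidate controller across the identified model's posterior uncertainty."""
    returns = []
    for _ in range(n_models):
        model_params = sample_posterior_params(rng)          # draw a plausible RUAV model
        trajectory = simulate_flight(model_params, controller_params)
        returns.append(reward(trajectory))                   # e.g. tracking error / safety penalty
    returns = np.asarray(returns)
    # Report mean performance and a low percentile as a robustness/safety margin.
    return returns.mean(), np.percentile(returns, 5)
```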

  48. Performance of RL Controller • Ascent & 360° ×2 pirouette • Manual vs. Autonomous Hover

  49. Demo of RL controller doing acrobatic maneuvers (Spring '02)

  50. Set of Maneuvers • Any variation of the following maneuvers in the x-y direction • Any combination of the following maneuvers [Figure labels: maneuver 1, maneuver 2, maneuver 3 (pirouette); nose-in during circling; heading kept the same]
