Subsumption Architecture
Robotics Course presentation, Amirali Salehi Abari
Rodney Brooks, 1986, MIT
What is Subsumption Architecture? • It is an architecture for controlling mobile robots, and it is a behaviour-based model. Layers of the control system are built to let the robot operate at increasing levels of competence.
Architecture Before SA • The traditional architecture used for mobile robot control systems: decomposition into functional modules arranged in a pipeline from sensors to effectors. (Figure: sensors feeding a chain of functional modules that drive the effectors.)
Sense-plan-act (SPA) • Dominant approach to building mobile robots: the sense-plan-act cycle (a minimal sketch of the loop follows). • Sense: sensors determine the values of the state variables. • Plan: model the world and find a plan that satisfies the goal. • Act: execute the plan (its first action), then go back to the first step.
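A minimal sketch of the SPA loop described above; sense(), plan(), and execute() are hypothetical placeholders rather than any real robot API.

```python
# Hypothetical sketch of a sense-plan-act (SPA) control loop.

def sense():
    """Read the sensors and return an estimate of the world state."""
    ...

def plan(state, goal):
    """Build a world model and search for a sequence of actions reaching the goal."""
    ...

def execute(action):
    """Send a single action to the effectors."""
    ...

def spa_loop(goal):
    while True:
        state = sense()             # Sense: determine the state variables
        steps = plan(state, goal)   # Plan: model the world, find a plan
        if steps:
            execute(steps[0])       # Act: execute the first action, then repeat
```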
Problems of SPA • Planning is hard. • World modelling is difficult. • An instance of every piece must be built before the robot can run at all. • Changing any single piece is difficult. • It is not robust. • It is difficult to distribute.
Brooks' Requirements • Multiple goals: goals may conflict and differ in importance (priority) • Multiple sensors • Robustness: keep operating when a sensor fails or the environment changes • Additivity
A Subsumption Architecture • (Figure: layers of control stacked between sensors and effectors.) • The levels/layers are added from the bottom up, and remain. Higher-level layers subsume lower-level layers when they want to take control. A level can potentially communicate with any other level.
Subsumption Architecture • (Figure: layered behaviours wired between sensors and effectors.) • Each behavior may be simple or complex. A simplified layering sketch follows.
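The sketch below illustrates the layering idea in a deliberately simplified, synchronous form; it is not Brooks' asynchronous AFSM network, and the Avoid and Wander layers and the arbitrate() helper are illustrative names only.

```python
# Simplified layering sketch: the highest layer that wants control suppresses
# the output of the layers below it; lower layers keep running regardless.

class Avoid:
    """Level 0: basic competence. Turn away from nearby obstacles, else halt."""
    def active(self, sensors):
        return True                               # always willing to act
    def command(self, sensors):
        if sensors["range"] < 0.3:
            return {"turn": 1.0, "speed": 0.2}    # steer away from the obstacle
        return {"turn": 0.0, "speed": 0.0}        # default: stand still

class Wander:
    """Level 1: higher competence. Drift around, but defer near obstacles."""
    def active(self, sensors):
        return sensors["range"] >= 0.3
    def command(self, sensors):
        return {"turn": 0.1, "speed": 0.5}

def arbitrate(layers, sensors):
    """Layers are listed bottom-up; the highest active layer takes control."""
    for layer in reversed(layers):
        if layer.active(sensors):
            return layer.command(sensors)

# With a clear path, Wander subsumes Avoid; near an obstacle, Avoid's reflex wins.
print(arbitrate([Avoid(), Wander()], {"range": 2.0}))   # Wander's command
print(arbitrate([Avoid(), Wander()], {"range": 0.2}))   # Avoid's command
```

If the higher layer stopped running, the lower layer would still produce safe behaviour, which is the robustness property described in the following slides.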
Features of a Subsumption Architecture • Layers of control allow the robot to operate at increasing levels of competence. The control system can be constructed one layer at a time, beginning with the lowest level. • Each layer is composed of asynchronous modules that communicate over low-bandwidth channels. • The lowest layers handle the most basic tasks. • Lower layers represent less abstract behaviours, e.g. obstacle avoidance in physically embodied agents; higher layers represent more abstract ideas, e.g. moving to the other side of the room. • Higher levels can subsume the roles of lower levels by inhibiting their inputs or suppressing their outputs. • If higher levels fail, lower levels continue to function. This provides robustness.
Features of a Subsumption Architecture • Different layers can work on separate goals concurrently. As you go higher in the levels, computation becomes slower. Only one layer selects the outputs at a time. • Each module is a separate piece of code, called a behavior-producing module (BPM), which may run on its own processor. The BPMs produce the observable behaviors of the robot. In Brooks' original conception, a BPM was implemented as a finite-state machine (FSM).
A Finite-State Machine (FSM) • (Figure: three states with a marked start state; arrows carry the conditions for each state transition and the outputs produced during the transition.)
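A minimal sketch of the kind of FSM the figure depicts; the state names, conditions, and outputs below are made up for illustration.

```python
# Minimal FSM sketch: transitions are guarded by input conditions and
# emit an output while moving to the next state.

TRANSITIONS = {
    # (current state, condition): (next state, output during transition)
    ("state1", "obstacle"): ("state2", "halt"),
    ("state1", "clear"):    ("state1", "forward"),
    ("state2", "clear"):    ("state3", "turn"),
    ("state3", "done"):     ("state1", "forward"),
}

def step(state, condition):
    """Advance one step; an unmatched condition leaves the state unchanged."""
    return TRANSITIONS.get((state, condition), (state, None))

state = "state1"                                  # "Start here"
for event in ["clear", "obstacle", "clear", "done"]:
    state, output = step(state, event)
    print(event, "->", state, output)
```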
AFSM Features • Each behavior is represented as an augmented finite state machine (AFSM). • Stimulus (input) or response (output) can be inhibited or suppressed by other active behaviors. • An AFSM can be in one state at a time, can receive one or more inputs, and can send one or more outputs. • AFSMs are connected by communication wires, which pass input and output messages between them; only the last message is kept. • AFSMs run asynchronously. A wiring sketch follows this list.
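The sketch below mimics two of the wiring rules just listed, namely one-slot wires that keep only the last message, and suppression of a lower layer's output by a higher one. It assumes nothing about Brooks' original implementation; the Wire and Suppressor classes are invented for illustration.

```python
# Illustrative wiring sketch: one-slot message wires plus a suppressor node.

import time

class Wire:
    """A one-slot channel: a new message overwrites the previous one."""
    def __init__(self):
        self.last = None
        self.stamp = 0.0
    def send(self, msg):
        self.last = msg
        self.stamp = time.time()
    def read(self):
        return self.last

class Suppressor:
    """For `timeout` seconds after the higher wire carries a message, it
    replaces the lower wire's output (suppression); otherwise the lower
    layer's messages pass through unchanged."""
    def __init__(self, lower, higher, timeout=1.0):
        self.lower, self.higher, self.timeout = lower, higher, timeout
    def read(self):
        if self.higher.last is not None and time.time() - self.higher.stamp < self.timeout:
            return self.higher.read()
        return self.lower.read()

low, high = Wire(), Wire()
motor_input = Suppressor(low, high, timeout=0.5)
low.send("forward")
print(motor_input.read())    # forward (lower layer in control)
high.send("turn")            # higher layer suppresses the lower output
print(motor_input.read())    # turn
```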
Example Modules in the Level 0 Control System • (Figure: modules in the level 0 control system, including a sonar module, a collide module that can send HALT to the motor, and a module that selects a default MOVE command.)
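A hypothetical sketch of the collide-and-halt path suggested by the figure; the threshold and function names are assumptions, not taken from Brooks' paper.

```python
# Sketch of the level-0 idea: a collide module watches the sonar and
# halts the motor when something is too close; otherwise the default move runs.

COLLISION_DISTANCE = 0.25    # metres; illustrative threshold

def collide(sonar_readings):
    """Emit a HALT message when the nearest sonar return is too close."""
    return "HALT" if min(sonar_readings) < COLLISION_DISTANCE else None

def motor(command, default_move="forward"):
    """Drive the effectors: a HALT message overrides the default move."""
    return "stop" if command == "HALT" else default_move

readings = [2.0, 0.9, 0.2]          # one range per sonar, in metres
print(motor(collide(readings)))     # -> stop
```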
Advantages of SA • Can achieve several goals simultaneously • May still work when one of the sensors fails • Allows new capabilities to be added incrementally • No high-level planning is required • No need for a model of the environment • Robust, with graceful degradation • Very simple to implement
Disadvantages of SA • It is difficult to decide to which level a behavior belongs. • Effective agents can be built with a small number of layers (roughly 10 at most), but it is much harder to build agents that contain many layers: the dynamics of the interactions between the different behaviours become too complex to understand. • It is difficult to see how purely reactive agents can be designed to learn from their experience and improve their performance over time.