Behavior-Based Paradigm, Part I February 6, 2007 Class Meeting 7 Herbert, the soda can collector (MIT, Connell and Brooks, 1987)
NOTE: Exam #1 is NEXT THURSDAY (Feb. 15) What to expect for Exam #1: • Will cover all lecture notes through Thursday Feb. 8 • Will cover all extra readings (2 to date, Chapters 1 and 3 of Arkin text) • Will cover Murphy text, chapters 1 – 4
Today’s Objectives • To understand “emergence” in the context of reactive/behavior-based systems • To learn methods for composing and coordinating multiple behaviors • To understand two different reactive architectures: • Subsumption • Motor schemas
Recall Last Time: Behavioral Encodings • Expression of behaviors can be accomplished in several ways: • SR diagrams • Functional notation • FSA diagrams • Behaviors can be represented as triples (S, R, b) • A strength multiplier, or gain g, can be used to turn off a behavior or alter the relative strength of its response • Responses are encoded in two forms: • Discrete encoding • Continuous functional encoding • b(s) = r, where: • b = behavior • s = stimulus • r = response
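As a minimal sketch (all names and the stimulus-to-response mapping are hypothetical, not from the lecture), a continuously encoded behavior b(s) = r with a gain g can be written as:

```python
# Hypothetical continuous functional encoding of one behavior.
def avoid_object(stimulus_strength):
    """Response magnitude grows with stimulus strength
    (direction omitted for brevity)."""
    return stimulus_strength * 2.0

gain = 0.0                               # a gain of 0 turns the behavior off
response = gain * avoid_object(0.5)      # scaled response magnitude
print(response)  # 0.0
```

Setting the gain back to a nonzero value re-enables the behavior and scales its relative strength.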
Recall: Navigational Example (we’ll continue with this today) • Consider a student going from one room to another. What is involved? • Getting to your destination from your current location • Not bumping into anything along the way • Skillfully negotiating your way around other students who may have the same or different intentions • Observing cultural idiosyncrasies (e.g., deferring to someone of higher priority – age, rank, etc.; or passing on the right (in the U.S.), …) • Coping with change and doing whatever else is necessary
Assembling Behaviors • Issue: When we have multiple behaviors, how do we combine them? • Think about it in terms of the navigation example from last week: How does this work? [Diagram: each behavior is driven by its stimulus (Move-to-class by class location, Avoid-object by detected object, Dodge-student by detected student, Stay-right by detected path, Defer-to-elder by detected elder); all behaviors feed into a Coordinator, which outputs the action.]
First, A Note on “Emergent” Behavior • Often invoked in a “mystical” sense • Emergent behavior: the sum is considerably greater than its parts • True: behavior-based outcome is often a surprise to the designer • But: is the surprise due to a shortcoming of the analysis of the constituent behaviors and their coordination, or something else?
Meaning of “Emergence” • Emergence is: “the appearance of novel properties in whole systems” (Moravec 1988) • “Global functionality emerges from the parallel interaction of local behaviors” (Steels 1990) • “Intelligence emerges from the interaction of the components of the system” (Brooks 1991) • “Emergent functionality arises by virtue of interaction between components not themselves designed with the particular function in mind” (McFarland and Bosser 1993) Common thread: emergence is a property of a collection of interacting components (here, behaviors)
Question: If individual behaviors are defined functionally, why should a coordinated collection produce unanticipated results? • The coordination functions we will study are algorithms, and therefore possess no magical properties • Can be straightforward: • E.g., choosing the highest-ranked or most dominant behavior • Can be more complex: • E.g., fusion of multiple active behaviors • Nevertheless: they are deterministic and computable • So, why can’t we predict their behavior exactly?
Answer Lies in Relationship of Robot with Environment • For most situations in which behavior-based paradigm is applied: • The world itself resists analytical modeling • Nondeterminism is rampant • Real world is filled with uncertainty and dynamic properties • Perception process is poorly characterized • Precise sensor models for open worlds do not exist • If a world model could be created that accurately captured all of its properties, then: • Emergence would not exist • Accurate predictions could be made • But, since the world resists such characterization, we cannot predict a priori, with any degree of confidence, in all but the simplest worlds, how the world will present itself. • Probabilistic models can provide guidance, but not certainty.
Summarizing Emergent Properties… • Common phenomena • But, nothing mystical about them • Consequence of: • Complexity of the world in which the robot resides • Complexity of perceiving that world • Complexity of interactions of agents and the world
Notation for Combining Behaviors • S denotes the vector of all stimuli si relevant to each behavior bi, detectable at time t • B denotes the vector of all behaviors bi active at time t • G denotes the vector encoding the relative strength, or gain, gi of each active behavior bi • R denotes the vector of all responses ri generated by the set of active behaviors
Notation for Combining Behaviors (cont.) The behavioral coordination function C is defined such that: r = C(G * B(S)) or, alternatively: r = C(G * R) where: S = [s1 s2 … sn], B = [b1 b2 … bn], G = [g1 g2 … gn], R = [r1 r2 … rn] • r = vector encoding the global response that the robot will undertake, represented in the same form as each ri (e.g., [x, y, z, θ, φ, ψ]) • * denotes a scaling operation: each scalar gi multiplies the magnitude of the corresponding component vector ri (in other words, the result is a response in the same direction as ri, with its magnitude scaled by gi)
Defining Coordination Function C • Two main strategies: • Competitive • E.g., Pure arbitration, where only one behavior’s output is selected • Cooperative • Blend outputs of multiple behaviors in some way consistent with agent’s overall goals • E.g., vector addition • Can also have combination of these two
Revisit Classroom Navigation Example • Robot’s current perceptions at time t, where each si = (p, λ) and λ is the stimulus p’s percentage of maximum strength: S = [(class_location, 1.0), (detected_object, 0.2), (detected_student, 0.8), (detected_path, 1.0), (detected_elder, 0.0)]
Revisit Classroom Navigation Example (cont.) R is then computed by applying each behavior b to its stimulus: B(S) = [bmove-to-class(s1), bavoid-object(s2), bdodge-student(s3), bstay-right(s4), bdefer-to-elder(s5)] yielding R = [rmove-to-class, ravoid-object, rdodge-student, rstay-right, rdefer-to-elder] with component vector magnitudes equal to (arbitrarily): Rmagnitude = [1.0, 0, 0.8, 1.0, 0] (where each ri encodes an [x, y, θ] for this particular robot, expressing the desired directional response for each independent behavior)
Revisit Classroom Navigation Example (cont.) • Remember: r = C(G * R) • Before the coordination function C is applied, R' = G * R. With G = [gmove-to-class, gavoid-object, gdodge-student, gstay-right, gdefer-to-elder] = [0.8, 1.2, 1.5, 0.4, 0.8] the scaled component vector magnitudes are R'magnitude = [g1 * r1, g2 * r2, g3 * r3, g4 * r4, g5 * r5] = [0.8, 0, 1.2, 0.4, 0] and the global response is r = C(R')
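The scaling step R' = G * R from this example can be checked in a few lines of Python (behavior order as on the slide; only magnitudes, not directions, are computed here):

```python
# Response magnitudes and gains from the classroom navigation example:
# move-to-class, avoid-object, dodge-student, stay-right, defer-to-elder
R = [1.0, 0.0, 0.8, 1.0, 0.0]
G = [0.8, 1.2, 1.5, 0.4, 0.8]

# Component-wise scaling R' = G * R (rounded to avoid float noise)
R_prime = [round(g * r, 2) for g, r in zip(G, R)]
print(R_prime)  # [0.8, 0.0, 1.2, 0.4, 0.0]
```

The coordination function C would then be applied to R' to produce the single global response.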
Competitive Methods for Defining C • Provide a means of coordinating behavioral response for conflict resolution • Can be viewed as “winner take all” • Arbitration can be: • Fixed prioritization • Action selection • Vote generation
Competitive Method #1: Arbitration via Fixed Prioritization [Diagram: Perception feeds Behaviors 1–4; priority-based coordination passes through the response of the highest-priority active behavior.] Prioritization is fixed prior to run time
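A sketch of fixed-priority arbitration, assuming behaviors are listed highest priority first and an inactive behavior produces no response (behavior names are hypothetical):

```python
# Behaviors ordered by fixed priority, highest first.
# None means the behavior is currently inactive.
behaviors = [
    ("avoid-object", None),          # inactive: no obstacle detected
    ("dodge-student", "veer-left"),
    ("move-to-class", "go-forward"),
]

def arbitrate(behaviors):
    """Winner-take-all: first active behavior in priority order wins."""
    for name, response in behaviors:
        if response is not None:
            return name, response
    return None, None

print(arbitrate(behaviors))  # ('dodge-student', 'veer-left')
```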
Competitive Method #2: Arbitration via Action Selection [Diagram: Perception feeds Behaviors 1–4; action-selection coordination outputs the response of the behavior with the highest activation level: R = R of MAX(act(B1), act(B2), act(B3), act(B4)).] • Behaviors compete through the use of activation levels driven by the agent’s goals • Prioritization varies during the mission
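Action selection reduces to picking the behavior with the highest activation level; a sketch with made-up activation values:

```python
# Activation levels driven by the agent's goals (values hypothetical).
activations = {"wander": 0.2, "acquire": 0.9, "retrieve": 0.4}

# The behavior with the highest activation wins the competition.
winner = max(activations, key=activations.get)
print(winner)  # acquire
```

Unlike fixed prioritization, these activation levels change as the mission progresses, so the winner can change over time.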
Competitive Method #3: Arbitration via Voting [Diagram: Perception feeds Behaviors 1–4, which cast votes over candidate motor responses R1–R5; voting-based coordination selects R = MAX(votes(R1), votes(R2), votes(R3), votes(R4), votes(R5)), the response receiving the most votes.] • Pre-defined set of motor responses • Each behavior allocates votes (in some distribution) to each motor response • The motor response with the most votes is executed • Prioritization varies during the mission
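Voting-based coordination can be sketched as behaviors distributing votes over a predefined set of motor responses (response names and vote values are hypothetical):

```python
# Pre-defined motor responses; each starts with zero votes.
votes = {"forward": 0.0, "soft-left": 0.0, "soft-right": 0.0}

# Each behavior allocates its votes in some distribution.
votes["forward"]    += 0.6   # move-to-class prefers going straight
votes["soft-right"] += 0.4
votes["soft-right"] += 0.7   # avoid-object prefers turning right

# The motor response with the most votes is executed.
chosen = max(votes, key=votes.get)
print(chosen)  # soft-right
```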
Cooperative Methods for Defining C • Behavioral fusion provides the ability to use the output of more than one behavior concurrently • Central issue: finding a representation amenable to fusion • Common method: vector addition using potential fields [Figure: example potential field] • (Lots more on this next time)
Cooperative Method #1: Behavioral Fusion via Vector Summation [Diagram: Perception feeds Behaviors 1–4; behavioral-fusion coordination sums (Σ) their responses into a fused behavioral response R = Σ(Gi * Ri).]
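Vector-summation fusion R = Σ(Gi * Ri) with two illustrative (x, y) response vectors (behavior names, directions, and gains are made up for the sketch):

```python
# Each entry: ((x, y) response vector, gain)
responses = [
    ((1.0, 0.0), 0.8),   # move-to-class: straight ahead, gain 0.8
    ((0.0, 1.0), 1.5),   # dodge-student: sidestep, gain 1.5
]

# Fused response: gain-weighted vector sum over all active behaviors
x = sum(g * rx for (rx, ry), g in responses)
y = sum(g * ry for (rx, ry), g in responses)
print((x, y))  # (0.8, 1.5)
```

The fused vector reflects both goals at once, rather than selecting a single winner as the competitive methods do.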
Summarizing Behavior Coordination • Two main strategies: • Competitive • Fixed prioritization • Action selection • Voting • Etc. • Cooperative • Vector addition • Etc. • Can also have combination of these two
Representative Reactive/Behavior-Based Architectures • Reminder: What is a robotic architecture? • “Robotic architecture is the discipline devoted to the design of highly specific and individual robots from a collection of common software building blocks” • “Robotic architecture is the abstract design of a class of robots: the set of structural components in which perception, reasoning, and action occur; the specific functionality and interface of each component; and the interconnection topology between components” • “Robotic architecture provides a principled way of organizing a control system. However, in addition to providing structure, it imposes constraints on the way the control problem can be solved” • “Robotic architecture describes a set of architectural components and how they interact” • To study today: • Subsumption • Motor Schemas
Comparing and Contrasting Alternative Architectures • Commonalities: • Emphasis on the importance of coupling sensing and action tightly • Avoidance of representational symbolic knowledge • Decomposition into contextually meaningful units (e.g., behaviors or situation-action pairs) • Distinctions: • Granularity of behavioral decomposition • Basis for behavior specification (e.g., ethological, situated activity, experimental) • Response encoding method (e.g., discrete or continuous) • Coordination methods used (e.g., competitive vs. cooperative) • Programming methods, language support available, and extent of software reusability
Robot Architectures and Computability • From a computational perspective, architectures are equivalent in their computational expressiveness • Similar to the choice of programming language (C, C++, Java, LISP, Cobol, Fortran, Pascal, etc.) • Recall the result of Böhm and Jacopini (1966) concerning computability in programming languages: • They proved that if a language provides: • the ability to perform tasks sequentially • conditional branching • iteration then it can compute the entire class of computable functions (i.e., it is Turing equivalent) • Since robotic architectures provide all three, they are equivalent
Example Reactive/Behavior-Based Robots [Photos: Herbert, the soda can collector (MIT, Connell and Brooks, 1987), and Genghis (MIT, Brooks, 1986)]
Evaluating Architectures • How do we evaluate an architecture’s suitability for a particular problem? • Support for parallelism • Hardware targetability • Niche targetability • Support for modularity • Robustness • Timeliness in development • Run time flexibility • Performance effectiveness
Foraging Example • As an example to consider, let’s look at robotic foraging • Foraging: • Robot moves away from home base looking for attractor objects • When an attractor object is detected, move toward it, pick it up, and return it to the home base • Repeat until all attractors in the environment have been returned home • High-level behaviors required to accomplish foraging: • Wander: move through the world in search of an attractor • Acquire: move toward an attractor when detected • Retrieve: return the attractor to the home base once acquired
Finite State Acceptor (FSA) Diagram for Foraging [Diagram: Start --BEGIN--> Wander; Wander --DETECT--> Acquire; Acquire --GRAB--> Retrieve; Retrieve --RELEASE--> Wander; Wander --DONE--> Halt]
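The FSA above can be encoded directly as a transition table keyed on (state, event); this sketch uses the diagram's state and event names:

```python
# Transition table for the foraging FSA: (state, event) -> next state
TRANSITIONS = {
    ("start",    "BEGIN"):   "wander",
    ("wander",   "DETECT"):  "acquire",
    ("acquire",  "GRAB"):    "retrieve",
    ("retrieve", "RELEASE"): "wander",
    ("wander",   "DONE"):    "halt",
}

# Drive the FSA through one collect-and-return cycle, then finish.
state = "start"
for event in ["BEGIN", "DETECT", "GRAB", "RELEASE", "DONE"]:
    state = TRANSITIONS[(state, event)]
print(state)  # halt
```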
Subsumption Architecture • Developed in the mid-1980s by Rodney Brooks, MIT [Diagram: the old sense-plan-act model decomposes control horizontally into Sense, Model, Plan, Act; the new subsumption model decomposes it vertically into layered task-achieving behaviors between sensing and action: Avoid Collisions, Move Around, Discover New Areas, Create Maps, Modify the World]
Tenets of the Subsumption Architecture • Complex behavior need not be the product of a complex control system • Intelligence is in the eye of the observer • The world is its own best model • Simplicity is a virtue • Robots should be cheap • Robustness in the presence of noisy or failing sensors is a design goal • Planning is just a way of avoiding figuring out what to do next • All onboard computation is important • Systems should be built incrementally • No representation. No calibration. No complex computers. No high-bandwidth communication.
Subsumption Robots • Allen • Tom and Jerry • Genghis and Attila • Squirt • Toto • Seymour • Tito • Polly • Cog
Coordination in Subsumption • “Subsumption” comes from the coordination process used between the layered behaviors of the architecture • Complex actions subsume simpler behaviors • A fixed priority hierarchy defines the topology • Lower levels of the architecture have no “awareness” of upper levels • Coordination has two mechanisms: • Inhibition: used to prevent a signal being transmitted along an AFSM wire from reaching the actuators • Suppression: prevents the current signal from being transmitted and replaces that signal with the suppressing message
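The two mechanisms can be sketched as operations on a wire's signal (a simplification of the actual AFSM wiring, with hypothetical signal names):

```python
def inhibit(signal, inhibited):
    """Inhibition: while active, the signal is blocked entirely."""
    return None if inhibited else signal

def suppress(signal, suppressing_signal):
    """Suppression: the suppressing message replaces the current signal."""
    return suppressing_signal if suppressing_signal is not None else signal

print(inhibit("forward", inhibited=True))   # None
print(suppress("forward", "turn-left"))     # turn-left
print(suppress("forward", None))            # forward
```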
Subsumption Based on Augmented Finite State Machines (AFSM) [Diagram: a behavioral module with input wires and output wires; an inhibitor node (I) can block a wire’s signal, a suppressor node (S) can replace it, and a reset line (R) restarts the module]
Characteristics of Subsumption • AFSM encapsulates a particular behavioral transformation function bi • Stimulus or response signals can be suppressed or inhibited by other active behaviors • Each AFSM performs its own action and is responsible for its own world perception • No global memory, bus, or clock • No central world models • No global sensor representations • Each behavior can be (but is not required to be) mapped onto its own processor • Later versions of subsumption: • Use Behavior Language, which provides higher abstraction independent of AFSM, using a single rule set to encode each behavior • High-level language is then compiled to the intermediate AFSM, which can then be further compiled to run on a range of target processors
Example of a 3-Layered Subsumption Implementation [Diagram: Sensors feed three layers, with higher layers suppressing (S) lower ones on the way to the motors and brakes: Avoid-Objects layer (Run Away, Forward), Explore layer (Wander, Go), and Back-out-of-tight-situations layer (Lost, Collide, Reverse, clock)]
Foraging Example • Behaviors Used: • Wandering: move in a random direction for some time • Avoiding: • Turn to the right if the obstacle is on the left, then go • Turn to the left if the obstacle is on the right, then go • After three attempts, back up and turn • If an obstacle is present on both sides, randomly turn and back up • Pickup: turn toward the sensed attractor and go forward; if at the attractor, close the gripper • Homing: turn toward the home base and go forward; if at the home base, stop
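The Avoiding rules above can be sketched as a single decision function (the sensor flags, attempt counter, and action strings are hypothetical):

```python
import random

def avoid(obstacle_left, obstacle_right, attempts):
    """Decision rules for the Avoiding behavior, checked in order."""
    if obstacle_left and obstacle_right:
        # Blocked on both sides: randomly turn and back up
        return random.choice(["turn-left", "turn-right"]) + ", back up"
    if attempts >= 3:
        return "back up and turn"
    if obstacle_left:
        return "turn right, then go"
    if obstacle_right:
        return "turn left, then go"
    return "go"

print(avoid(obstacle_left=True, obstacle_right=False, attempts=1))
# turn right, then go
```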
Organization for Subsumption-Based Foraging Robot [Diagram: layered behaviors, bottom to top: Wandering, Avoiding, Pickup, Homing, with each higher layer suppressing (S) the output of the layer below]
Genghis Subsumption Design • Behavioral layers implemented: • Standup • Simple walk • Force balancing • Leg lifting • Whiskers • Pitch stabilization • Prowling • Steered prowling • Two motors per leg: α (advance), which swings the leg back and forth, and β (balance), which lifts the leg up and down
Genghis AFSM Network [Diagram: unique “central control” modules (IR sensors, prowl, for/bak, pitch, feeler, steer, walk) receive input from the sensors; the leg modules (up leg trigger, leg down, beta pos, beta force, beta balance, alpha collide, alpha advance, alpha balance, alpha pos), connected through suppressor (S), inhibitor (I), and default (D) nodes, are duplicated for each leg and control the actuators]
“Core Subset” of Genghis AFSM Network [Diagram: the core subset of the network, the unique “central control” walk module plus the duplicated per-leg modules (up leg trigger, leg down, beta pos, alpha advance, alpha balance, alpha pos) controlling the actuators] • Enables the robot to walk without any feedback: • Standup • Simple walk
Evaluation of Subsumption • Strengths: • Hardware retargetability: Subsumption can compile down directly onto programmable-array logic circuitry • Support for parallelism: Each behavioral layer can run independently and asynchronously • Niche targetability: Custom behaviors can be created for specific task-environment pairs • Null (not strength/not weakness): • Robustness: Can be successfully engineered into system but is often hard-wired and hard to implement • Timeliness for development: Some support tools exist, but significant learning curve exists • Weaknesses: • Run time flexibility: priority-based coordination mechanism, ad hoc aspect of behavior generation, and hard-wired aspects limit adaptation of system • Support for modularity: behavioral reuse is not widely done in practice
Preview of Next Class • More about Motor Schemas • Potential Field Approaches • Other reactive/behavior-based approaches