
Advanced Mobile Robotics, The City College of the City University of New York: Introduction to AI Robotics


Presentation Transcript


  1. Advanced Mobile Robotics, The City College of the City University of New York. Introduction to AI Robotics. Instructor: Prof. Jizhong Xiao. Presenters: Diana Acevedo, Sefton Bennett, Clara Nieto-Wire, Jorge O. Peche, Marios Timotheou. March 13, 2006

  2. Reactive Paradigm • Biological foundations of the reactive paradigm • The reactive paradigm • Designing a reactive implementation

  3. Biological foundations of the reactive paradigm Chapter 3

  4. Why? • Animals → Open World • AGENT → Self-Contained and Independent: interacts with the world to make changes or to sense what is happening. • Humans-Animals-Robots = AGENT

  5. Marr’s Computational Theory Level 1: What are we trying to represent? Level 2: How do we represent it? Level 3: How is it implemented?

  6. Computational Theory Levels • Existence proof of what can/should be done. Agents share commonality of purpose or functionality. • “What” = inputs, outputs, and transformations; creating flow charts of black boxes. Agents exhibit common processes. • How to implement the processes. Agents may have little or no commonality in their implementations.

  7. Behavior (diagram: SENSORY INPUT → BEHAVIOR → PATTERN OF MOTOR ACTIONS) A mapping of sensory inputs to a pattern of motor actions, which are then used to achieve a task. Pattern → actions and sequences that are always the same.

  8. Types of Behavior • Reflexive : Stimulus-Response. • Reactive : Learn-Consolidate • Conscious : Deliberative Reactive Paradigm → Reflexive Behavior

  9. Reflexive Behavior NO COGNITION • Reflexes : the response lasts only as long as the stimulus (t_response = t_stimulus) • Taxes : Response → a movement toward or away from the stimulus • Fixed-Action Patterns : the response persists longer than the stimulus (t_response > t_stimulus)

  10. …Ways to Acquire Behaviors • Innate • Sequence of innate behaviors • Innate with memory • Learn

  11. Innate Releasing Mechanisms (diagram: SENSORY INPUT → BEHAVIOR → PATTERN OF MOTOR ACTIONS) IRM → activates the behavior • Releaser: a control signal (internal or external). • Behaviors can be implicitly chained together by their releasers.

  12. Concurrent Behaviors • Equilibrium → Behaviors balance each other. • Dominance of one → Winner takes all. • Cancellation → Behaviors cancel each other.

  13. Perception (diagram: cognitive activity directs what to look for; perception of the environment samples the world and finds potential actions; action acts on and modifies the WORLD) • Cognitive activities (feedback and feedforward control): • Planning • Reacting • Two uses of perception: • Release a behavior • Guide a behavior

  14. Gibson’s Ecological Approach • Affordances : Perceivable potentialities of the environment for an action. • Directly Perceivable. • Sensing does not need/require memory or interpretation (optic flow). • Structural models attempt to describe an object in terms of physical components.

  15. Neisser: Two perceptual systems • Direct Perception: Gibsonian or ecological track of the brain. Structures low in the brain that evolved earlier. (Affordances) • Recognition: More recent perceptual track in the brain.

  16. Schema Theory Schema → knowledge of how to act and/or perceive, and the computational process to accomplish the activity. • A schema instantiation is specific to a situation, equivalent to an instance in OOP.

  17. Behavior = Schema (diagram: RELEASERS activate the behavior; inside it, a perceptual schema feeds a motor schema)
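
A minimal Python sketch (not from the slides; names are illustrative) of this decomposition: a behavior couples a perceptual schema, which extracts the percept, with a motor schema, which maps that percept to motor actions, gated by its releaser.

```python
class Behavior:
    """Generic template for a behavior, in the spirit of schema theory."""

    def __init__(self, releaser, perceptual_schema, motor_schema):
        self.releaser = releaser                    # returns True when the behavior should fire
        self.perceptual_schema = perceptual_schema  # sensor data -> percept
        self.motor_schema = motor_schema            # percept -> pattern of motor actions

    def act(self, sensor_data):
        if not self.releaser(sensor_data):
            return None                             # behavior not released, no action
        percept = self.perceptual_schema(sensor_data)
        return self.motor_schema(percept)
```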

  18. The Reactive Paradigm Chapter 4

  19. Objectives • Define what the reactive paradigm is in terms of: • i) the three primitives SENSE, PLAN, and ACT • ii) sensing organization. • List and discuss the characteristics and connotations of a reactive robotic system. • Describe the two dominant methods for combining behaviors in a reactive architecture: • i) subsumption • ii) potential field summation. • Evaluate these two reactive architectures in terms of: • i) support of modularity • ii) niche targetability • iii) ease of portability to other domains • iv) robustness • Be able to program a behavior using a p.field methodology. • Be able to construct a new p.field from primitive p.fields, and sum the p.fields to generate an emergent behavior.

  20. Hierarchical Paradigm is characterized by having a horizontal decomposition. Reactive Paradigm is characterized by having a vertical decomposition. • If anything happens to an advanced behavior, the lower behaviors would still operate. • All reactive systems are composed of behaviors.

  21. Attributes of Reactive Paradigm • Behaviors. • SENSE-ACT organization. • Behavior specific (local) sensing.

  22. Attributes of Reactive Paradigm BEHAVIOR (Ethological def.) a direct mapping of sensory inputs to a pattern of motor actions that are then used to achieve a task. (Mathematically) a transfer function, transforming sensory inputs into actuator commands. Schema: a way of expressing the basic unit of activity. It consists of the knowledge of how to act or perceive and the algorithm it uses to accomplish the activity (like a generic template for doing some activity).

  23. Attributes of Reactive Paradigm SENSE-ACT Organization (S-A) The SENSE and ACT components are tightly coupled into behaviors and all robotic activities emerge as the result of these behaviors operating either in sequence or concurrently. (NO PLAN component). The S-A organization does not specify how the behaviors are coordinated. What happens when multiple behaviors are active simultaneously?

  24. Attributes of Reactive Paradigm Behavior-Specific (local) Sensing • Sensing is local • Sensors can be shared • Sensors can be fused locally by a behavior • Behaviors favor the use of affordances (direct perceptions). The behavior does not rely on any central representation built up from all sensors. The sensing portion of the behavior is nearly instantaneous and action is very rapid.

  25. Connotations of Reactive Behaviors: • A reactive robotic system executes rapidly. Behaviors can be implemented directly in hardware as circuits or with low computational complexity. • Reactive robotic systems have no memory. Reactive behaviors are limited to pure “stimulus-response reflexes”: BEHAVIORS ARE CONTROLLED BY WHAT IS HAPPENING IN THE WORLD, DUPLICATING THE SPIRIT OF INNATE RELEASING MECHANISMS, rather than by the program storing and remembering what the robot did last.

  26. Characteristics of Reactive Behaviors: • Robots are situated agents (an integral part of the world) operating in an ecological niche (the goals of the robot, the world where it operates, and how it perceives the world). • Behaviors serve as the building blocks for robotic actions, and the overall behavior of the robot is emergent. • Only local, behavior-specific sensing is permitted. (Any sensing which does require representation is expressed in ego-centric coordinates.) • These systems inherently follow good software design principles. • Animal models of behavior are often cited as a basis for these systems or a particular behavior.

  27. Advantages of programming by behavior: Behaviors support good software engineering principles through: • decomposition • modularity and • incremental testing.

  28. Representative Architectures • Potential Field Methodologies • Subsumption Architecture Both provide mechanisms for: • Determining what happens when multiple behaviors are active at the same time • Triggering behaviors

  29. Subsumption Architecture • Subsumption refers to how behaviors are combined. • In many applications behaviors are embedded directly in the hardware or on small microprocessors.

  30. Subsumption Philosophy • Modules should be grouped into layers of competence • Modules in a higher level can override or subsume behaviors in the next lower level • Suppression: substitute the input going to a module • Inhibition: turn off the output from a module • No internal state in the sense of a local, persistent representation similar to a world model • Architecture should be taskable: accomplished by a higher level turning lower layers on/off
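
A minimal sketch of the suppression/inhibition wiring described on this slide (the signal names and values are illustrative, not Brooks’ original code): suppression substitutes a higher layer’s signal for the one feeding a lower module, and inhibition switches a module’s output off.

```python
def suppress(higher_signal, lower_signal):
    # Suppression: while the higher layer is emitting, its signal is
    # substituted for the one that would normally reach the module.
    return higher_signal if higher_signal is not None else lower_signal

def inhibit(inhibiting, signal):
    # Inhibition: a higher layer switches off a lower module's output.
    return None if inhibiting else signal

# Example wiring for the two-layer arrangement on the following slides:
# when Level 1 (wander) is silent, Level 0 (avoid) still drives the motors.
avoid_heading = 90.0      # Level 0 wants to turn away from an obstacle
wander_heading = None     # Level 1 has nothing to say right now
print(suppress(wander_heading, avoid_heading))   # -> 90.0
```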

  31. Subsumption Architecture EXAMPLE • Level 0: Avoid Collisions (figures: the Level 0 module network; Level 0 recast as primitive behaviors; a vector representation of the sonar readings; a robot-centric polar plot of 8 sonar range readings; the polar plot unrolled)

  32. Subsumption Architecture EXAMPLE • Level 1: Wander layered over Level 0: Avoid Collisions, using inhibition and suppression (figure: Level 1 recast as primitive behaviors)

  33. Subsumption Architecture EXAMPLE • Level 2: Follow Corridors layered over Level 1: Wander and Level 0: Avoid Collisions

  34. Subsumption Summary • Subsumption groups schema-like modules into layers of competence, or abstract behaviors. • Higher layers may subsume and inhibit behaviors in lower layers, but behaviors in lower layers are never rewritten or replaced. • The design of layers and component behaviors is difficult. It is more of an art than a science. (True for all reactive architectures) • Behaviors are released by the presence of stimulus in the environment. (No PLAN component) • The releaser is almost always the percept for guiding the motor schema. • Perception is ego-centric and distributed. (sonar plot was relative to the robot, available to any behavior which needed it, and the output from perceptual schemas can be shared with other layers)

  35. Potential Fields: Ron Arkin (images from http://www.cc.gatech.edu/aimosaic/faculty/arkin and http://www.cc.gatech.edu/aimosaic/robot-lab/MRLhome.html)

  36. Potential Fields Philosophy • The motor schema (what the robot should do) component of a behavior can be expressed with the potential fields methodology • A potential field can be a “primitive” or constructed from primitives which are summed together • The outputs of behaviors are combined using vector summation • From each behavior, the robot “feels” a vector or force • Magnitude = force, strength of stimulus, or velocity • Direction • We visualize the “force” as a field, where every point in space represents the vector the robot would feel if it were at that point
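
A minimal sketch, assuming each active behavior outputs a 2-D vector (the numbers below are purely illustrative): behaviors are combined by plain vector summation, and the resultant gives the direction and strength the robot “feels”.

```python
import numpy as np

def combine(behavior_vectors):
    """Sum the output vectors of all currently active behaviors."""
    return np.sum(np.asarray(behavior_vectors, dtype=float), axis=0)

# e.g. an attraction vector toward a goal plus a repulsion vector away
# from a nearby obstacle (illustrative numbers, not from the slides)
resultant = combine([(0.8, 0.0), (-0.2, 0.3)])
direction = np.degrees(np.arctan2(resultant[1], resultant[0]))  # where to head
magnitude = np.linalg.norm(resultant)                           # how strongly / how fast
```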

  37. Potential fields are continuous • Every point in space, no matter how small an element, can be associated with a vector • How does an obstacle exert a field on the robot and make it run away? • If the robot is close to the obstacle, it is inside the potential field and feels a force that makes it want to face directly away from the obstacle and move away. • If the robot is not within that range, it just sits there because there is no force on it.

  38. Example: Run Away via Repulsion

  39. 5 Primitive Potential Fields a. Uniform b. Perpendicular c. Attraction d. Repulsion e. Tangential

  40. Common fields in behaviors • Uniform • Move in a particular direction, corridor following • Repulsion • Runaway (obstacle avoidance) • Attraction • Move to goal • Perpendicular • Corridor following • Tangential • Move through door, docking (in combination with other fields) • random • do you think this is a potential field?
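
A hedged sketch of three of the five primitives, written as functions that return the vector a robot would feel at a point p (the gains, radii, and coordinates are assumptions, not values from the slides):

```python
import numpy as np

def uniform_field(p, direction=(1.0, 0.0), magnitude=1.0):
    d = np.asarray(direction, dtype=float)
    return magnitude * d / np.linalg.norm(d)        # same vector at every point

def attraction_field(p, goal, gain=1.0):
    v = np.asarray(goal, dtype=float) - np.asarray(p, dtype=float)
    n = np.linalg.norm(v)
    return gain * v / n if n > 1e-9 else np.zeros(2)    # points toward the goal

def repulsion_field(p, obstacle, radius=2.0, gain=1.0):
    v = np.asarray(p, dtype=float) - np.asarray(obstacle, dtype=float)
    d = np.linalg.norm(v)
    if d > radius or d < 1e-9:
        return np.zeros(2)                          # outside the field: no force felt
    return gain * (radius - d) / radius * v / d     # stronger the closer the obstacle
```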

  41. Magnitude Profiles • The length of the arrows gets smaller closer to the object, and the way the magnitude of the vectors in the field changes is called the magnitude profile. • Magnitude profiles solve the problem of a constant magnitude, allowing the designer to represent reflexivity (a response proportional to the strength of the stimulus). • The main motivation for magnitude profiles is to fine-tune the behavior.

  42. Linear Drop off If the robot is far away from the object, it will turn and move quickly toward it, then slow down to keep from hitting the object. Mathematically, the rate at which the magnitude of the vectors drops off can be plotted as a straight line. Exponential Drop off To have the robot react more strongly the closer it is to the object than a constant magnitude allows: a stronger reaction, but more of a taper. The drop off is proportional to the square of the distance: for every unit of distance away from the object, the force on the robot drops in half.
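
A sketch of the two profiles just described, as scalar functions of the distance d to the object (the gains and the 2 m field radius are assumptions; the exponential version implements the “drops in half per unit of distance” reading of the slide):

```python
def linear_profile(d, radius=2.0, max_force=1.0):
    # Magnitude falls off as a straight line, reaching zero at the field's edge.
    return max_force * max(0.0, (radius - d) / radius)

def exponential_profile(d, max_force=1.0):
    # For every unit of distance away from the object, the force drops in half.
    return max_force * (0.5 ** d)
```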

  43. Potential Fields and Perceptions • Potential fields are ego-centric because the robot’s perception is ego-centric. • This makes programming easy. The visualization of the entire field may appear to indicate that the robot and the objects are in a fixed, absolute coordinate system, but they are not. The robot computes the effect of the potential field, usually as a straight line, at every update, with no memory of where it was previously or where the robot has moved.

  44. Combining Fields for Emergent Behavior (figure: an attraction field around the goal, repulsive fields around the obstacles, and their combination) If the robot were dropped anywhere on this grid, it would want to move to the goal and avoid the obstacle: Behavior 1: MOVE2GOAL (attraction field) Behavior 2: RUNAWAY (repulsive field) The output of each independent behavior is a vector; the 2 vectors are summed to produce the emergent behavior
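
A self-contained usage sketch of this slide’s two behaviors (the positions are assumptions; the 2-meter repulsion radius matches the note on the next slide):

```python
import numpy as np

def unit(v):
    n = np.linalg.norm(v)
    return v / n if n > 1e-9 else np.zeros_like(v)

robot = np.array([0.0, 0.0])
goal = np.array([10.0, 0.0])
obstacle = np.array([5.0, 1.0])

move2goal = unit(goal - robot)                      # Behavior 1: attraction field
d = np.linalg.norm(robot - obstacle)
runaway = unit(robot - obstacle) * max(0.0, (2.0 - d) / 2.0)   # Behavior 2: repulsion within 2 m
emergent = move2goal + runaway                      # vector summation -> emergent behavior
```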

  45. Fields and Their Combination • Note: in this example, the robot can sense the goal from 10 meters away • Note: in this example, the repulsive field only extends for 2 meters; the robot runs away only if the obstacle is within 2 meters

  46. Path Taken (figure: a start location and the resulting path) • The robot only feels the vector for a point when (and if) it reaches that point • If the robot started at this location, it would take the following path • It would only “feel” the vector for its location, then move accordingly, “feel” the next vector, move, etc. • The p-field visualization allows us to see the vectors at all points, but the robot never computes the “field of vectors”, just the local vector

  47. Update Rates, Holonomicity and Local Minima 1. The distances between updates are different (the lengths of the arrows vary). Note that between t3 and t4 the robot actually goes farther without turning and has to turn back to reach the goal; a smoother path would require a faster update rate. 2. The fields assume the robot can turn in any direction, but real robots have to stop to turn, and errors develop due to the contact between the wheels and the surface. 3. Some vectors have only a head and no body; this means the magnitude is 0.0, so if the robot reaches that spot it will stop and not move (a local minimum).
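
A hypothetical simulation loop for the path discussion above: at each update the robot evaluates only the local vector at its current position and steps along it; it never builds the whole field (the step size, positions, and stopping test are assumptions):

```python
import numpy as np

def local_vector(p, goal, obstacle, radius=2.0):
    to_goal = goal - p
    v = to_goal / max(np.linalg.norm(to_goal), 1e-9)          # attraction toward the goal
    away = p - obstacle
    d = np.linalg.norm(away)
    if d < radius:
        v = v + (radius - d) / radius * away / max(d, 1e-9)   # add repulsion inside the field
    return v

p = np.array([0.0, 0.0])
goal, obstacle = np.array([10.0, 0.0]), np.array([5.0, 1.0])
step = 0.25                                   # distance travelled between updates
for _ in range(200):
    if np.linalg.norm(goal - p) < step:       # near the goal the vector magnitude goes to zero
        break
    p = p + step * local_vector(p, goal, obstacle)            # move along the local vector only
```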

  48. Example of using one behavior per sensor

  49. How does the robot see a wall without reasoning or intermediate representations? • The perceptual schema “connects the dots” and returns the relative orientation (diagram: Sonars → PS: Find-wall → MS: Perpendicular orientation; Sonars → MS: Uniform, summing all 8 ranges internally and returning the single largest distance and direction; outputs combined at the summation node S)
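
A hedged sketch of the sonar branch of this diagram (the 45-degree sonar layout and the readings are assumptions): the perceptual side reduces the 8 ranges to a single percept, the largest distance and its direction, which a uniform motor schema could then use.

```python
import numpy as np

def largest_open_direction(sonar_ranges):
    """Percept for the MS: Uniform branch: the single largest distance and its bearing."""
    bearings = np.arange(len(sonar_ranges)) * 45.0   # robot-centric, one sonar per 45 degrees
    i = int(np.argmax(sonar_ranges))
    return sonar_ranges[i], bearings[i]

distance, bearing = largest_open_direction([1.2, 0.9, 3.4, 2.8, 1.0, 0.7, 0.6, 1.1])
print(distance, bearing)   # -> 3.4 90.0 (illustrative readings)
```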

  50. OK, But why isn’t that a representation of a wall? • It’s not really reasoning that it’s a wall, rather it is reacting to the stimulus which happens to be smoothed (common in neighboring neurons)
