Toward Versatile Robotic Assistants for Security and Service Applications
Monica N. Nicolescu
Department of Computer Science and Engineering, University of Nevada, Reno
http://www.cs.unr.edu/~monica
Overview
Goals:
• Integrate robots into human society
• Increase the utility of autonomous robots and their ability to function in dynamic, unpredictable environments
• Facilitate interaction with robots
Motivation:
• Accessibility to a large range of users
Application Domains
Security
• Scenario: security checkpoint
• Task: threat detection
Service
• Scenario: office/home robot assistant
• Task: service multiple user requests
Research Problems
• Robot control:
  • Support for frequent human-robot interactions; include the human in the loop
• Communication:
  • Engage in sustained interactions with people
  • Express/understand intent
• Learning:
  • Program robots using an accessible method
Approach
• Behavior-based control with a particular behavior representation
  • Frequent, sustained interactions with people
  • Understanding intent
• Learning by demonstration
  • Natural robot programming
  • Understanding (malevolent) activity/intent
• Communication through actions
  • Expressing intent
• Long-term: integrate with neuroscience and cognitive science approaches
Robot Control
A control architecture that provides support for frequent human-robot interactions:
• Modularity, robustness, real-time response, and support for learning
• Automatic reusability of existing components
• Ability to encode complex task representations
• Run-time reconfiguration of controllers
• Behavior-based control as the underlying control architecture
Behavior-Based Robot Control
[Diagram: behaviors connect sensor inputs to actuator actions]
• Behaviors: goal-driven, time-extended control processes, running in parallel, connecting sensors and effectors
• Highly effective in unstructured, dynamic environments
• Usually invoked by reactive conditions
• Built-in task-specific information
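The control scheme above can be sketched in a few lines. Everything here is an illustrative assumption, not part of the presented architecture: the `Behavior` class, the fixed-priority arbitration, and the toy sensor fields (`range`, `obstacle_dir`, `target_dir`).

```python
# Minimal sketch of behavior-based control (illustrative only).

class Behavior:
    """A goal-driven, time-extended control process linking sensing to action."""
    def __init__(self, name, condition, action):
        self.name = name
        self.condition = condition  # reactive activation condition on sensor data
        self.action = action        # maps sensor data to an actuator command

    def step(self, sensors):
        # Behaviors run in parallel; each acts only when its condition holds.
        return self.action(sensors) if self.condition(sensors) else None


def control_cycle(behaviors, sensors):
    # Arbitrate among active behaviors; here, simple fixed-priority order.
    for b in behaviors:
        cmd = b.step(sensors)
        if cmd is not None:
            return cmd
    return ("stop",)


# Two toy behaviors: avoid nearby obstacles, otherwise head toward a target.
avoid = Behavior("avoid", lambda s: s["range"] < 0.5,
                 lambda s: ("turn", -s["obstacle_dir"]))
goto = Behavior("goto", lambda s: True,
                lambda s: ("move", s["target_dir"]))

print(control_cycle([avoid, goto],
                    {"range": 0.3, "obstacle_dir": 1.0, "target_dir": 0.0}))
# -> ('turn', -1.0)
```

Real behavior-based systems fuse or arbitrate among several simultaneously active behaviors; fixed priority is only the simplest possible arbitration scheme.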
Hierarchical Abstract Behavior Architecture
• Extended behavior-based architecture
• Representation & execution of complex, sequential, hierarchically structured tasks
• Flexible activation conditions for behavior reuse
• Representation of tasks as (hierarchical) behavior networks
• Sequential & opportunistic execution
• Support for automated generation (task learning)
M. N. Nicolescu, M. J. Matarić, "A hierarchical architecture for behavior-based robots", International Conference on Autonomous Agents and Multiagent Systems, July 15-19, 2002.
The Behavior Architecture
[Diagram: abstract/primitive behavior structure. If its task-specific preconditions are met, an abstract behavior tests the world preconditions and activates ({1/0}) its primitive behaviors Beh1…k, which perform the actions.]
• Abstract behavior: embeds representations of the preconditions & goals
• Primitive behavior: performs actions, achieves goals
• Representation and execution of behavioral sequences
• Flexible activation conditions for behavior reuse
• Representation of tasks as behavior networks
Task Representation: Behavior Networks
[Diagram: a behavior network for an object-transport task — abstract behaviors GoTo(Source), PickUp(Box), GoTo(Dest), Drop(Box), Follow(Wall) over their primitive behaviors, connected by permanent, enabling, and ordering precondition links]
• Links represent task-relevant precondition-postcondition dependencies
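A minimal sketch of how the three link types might be encoded. The class, the dictionary-based world model, and the activation test are assumptions for illustration; in particular, the enabling/permanent distinction is simplified here to a current-state check (a real permanent link must hold throughout execution).

```python
# Illustrative behavior-network node with precondition links.

class NetworkNode:
    def __init__(self, name):
        self.name = name
        self.preconds = []   # list of (predecessor_node, link_type)
        self.done = False    # set once this node's goals have been achieved

    def ready(self, world):
        # A node may activate once all its precondition links are satisfied:
        #  - "ordering":  the predecessor must have completed at some point
        #  - "enabling":  the predecessor's effect must currently hold
        #  - "permanent": the effect must hold continuously (simplified here
        #                 to a current-state check)
        for pred, link in self.preconds:
            if link == "ordering" and not pred.done:
                return False
            if link in ("enabling", "permanent") and not world.get(pred.name, False):
                return False
        return True


# The object-transport task from the slide, wired with the three link types:
goto_src = NetworkNode("GoTo(Source)")
pickup   = NetworkNode("PickUp(Box)")
goto_dst = NetworkNode("GoTo(Dest)")
drop     = NetworkNode("Drop(Box)")

pickup.preconds   = [(goto_src, "enabling")]
goto_dst.preconds = [(pickup, "permanent")]   # must keep holding the box
drop.preconds     = [(goto_dst, "ordering")]
```

Opportunistic execution falls out of this scheme: any node whose links happen to be satisfied by the current world state may activate, regardless of where the robot is in the nominal sequence.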
Layers of Abstraction: Network Abstract Behaviors
• Abstract existing networks into a single component
• Use NABs as parts of other behavior networks
• Allow for a hierarchical representation of increasingly complex tasks
• Upon activation, enable their components
The Robot Testbed
• Pioneer 2DX mobile robot
• Pan-tilt-zoom camera
• Laser range-finder
• Gripper
• 2 rings of 8 sonars
• PC104 stack
• Logitech cordless headset
• IBM ViaVoice speech software
• Implementation in Ayllu
• Behaviors for picking up and dropping objects (PickUp, Drop) and tracking targets (Track)
Experimental Validation
• Sequential & opportunistic execution
• Object-transport & visit-targets subtasks
• Hierarchical representation
Results: Order of Target Visits
Trial 1: Orange, Pink, Light-Green, Yellow
Trial 2: Pink, Yellow, Orange, Light-Green
Trial 3: Light-Green, Yellow, Orange, Pink
Trial 4: Yellow, Light-Green, Orange, Pink
Trial 5: Yellow, Orange, Pink, Light-Green
Human-Robot Interaction – Proposed Work
Goal:
• Include support for frequent human-robot interactions
Issues:
• Handling interruptions
• Switching between different activities (idle, task execution, learning, dialogue)
Approach:
• Incorporate awareness of human presence
• Incorporate a model of activity control (situations and associated responses) into our behavior-based architecture
Communication – Proposed Work
Goal:
• Understanding intent from simple behavior
Approach:
• Match the robot's high-level perceptions with the known goals of its behaviors
Applications:
• Service: achieve better cooperation by understanding human intentions (e.g., giving/taking a tool/object)
Learning
Goal:
• Learn a high-level task representation from a set of underlying capabilities already available to the robot
Approach:
• Learning by experience (teacher following)
• Active participation in the demonstration
• Mapping between observations and the skills that achieve the same observed effects
Learning by Demonstration Framework
[Flowchart: give demonstration → task representation → execute task → OK? If yes, done; if no, generalize (generalized representation) or refine through further practice (refined task representation) and execute again.]
Inspiration:
• Human-like teaching by demonstration
• Multiple means for interaction and learning: concurrent use of demonstration, verbal instruction, attentional cues, gestures, etc.
Solution:
• Instructive demonstrations, generalization and practice
Instruction Stage: Teacher's Perspective
• The teacher is aware of:
  • The robot's skills
  • What observations/features the robot can detect
  • The instructions the robot understands
• Informative cues:
  • "HERE" – marks moments in time relevant to the task
• The teacher may give simple instructions:
  • "TAKE", "DROP" – pick up / drop objects
  • "START", "DONE" – beginning/end of a demonstration
Instruction Stage: Learner's Perspective
• Teacher-following strategy (laser rangefinder & camera)
• Abstract behaviors (perceptual component) continuously monitor their goals:
  • Ability to interpret high-level effects (e.g., approaching a target, being given/taken an object)
  • When a behavior's goals are met, the abstract behavior signals it: the observation-behavior mapping
• Behavior parameter values are computed from data gathered through the robot's own sensors
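The observation-behavior mapping can be sketched as follows: while following the teacher, each abstract behavior's goal predicate is evaluated on the observed world state, and goals that have just become true are recorded as task steps. The function name, goal predicates, and dictionary-based states below are hypothetical.

```python
# Illustrative sketch of mapping observations to the behaviors that would
# achieve the same observed effects during a demonstration.

def record_demonstration(observations, abstract_behaviors):
    """observations: sequence of world-state dicts sensed while following the teacher.
    abstract_behaviors: {behavior_name: goal_test}, goal_test(state) -> bool."""
    steps = []
    prev = {name: False for name in abstract_behaviors}
    for state in observations:
        for name, goal_met in abstract_behaviors.items():
            now = goal_met(state)
            if now and not prev[name]:   # goal just became true -> record a task step
                steps.append(name)
            prev[name] = now
    return steps


behaviors = {
    "PickUp(Box)": lambda s: s["holding_box"],
    "GoTo(Dest)":  lambda s: s["at_dest"],
}
obs = [{"holding_box": False, "at_dest": False},
       {"holding_box": True,  "at_dest": False},
       {"holding_box": True,  "at_dest": True}]

print(record_demonstration(obs, behaviors))
# -> ['PickUp(Box)', 'GoTo(Dest)']
```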
Learning an Object-Transport Task
[Figures: human demonstration, environment, learned topology, robot demonstration]
• All observations are relevant
• No trajectory learning
• Not a reactive policy
Generalization
• It is hard to learn a task from only one trial:
  • Limited sensing capabilities, quality of the teacher's demonstration, particularities of the environment
• Main learning inaccuracies:
  • Learning irrelevant steps (false positives)
  • Omitting relevant steps (false negatives)
• Approach:
  • Demonstrate the same task in different/similar environments
  • Construct a task representation that:
    • Encodes the specifics of each given example
    • Captures the parts common to all demonstrations
M. N. Nicolescu, M. J. Matarić, "Natural Methods for Robot Task Learning: Instructive Demonstrations, Generalization and Practice", Second International Joint Conference on Autonomous Agents and Multi-Agent Systems, July 14-18, 2003.
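Capturing the parts common to all demonstrations can be illustrated with a longest-common-subsequence computation over the demonstrated step sequences. This is only a stand-in for the topological generalization in the actual work, and the two demonstration sequences below are made up.

```python
# Illustrative: find the steps consistent across two demonstrations via
# the classic dynamic-programming longest common subsequence (LCS).

def lcs(a, b):
    m, n = len(a), len(b)
    # dp[i][j] holds an LCS of a[:i] and b[:j]
    dp = [[[] for _ in range(n + 1)] for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            if a[i] == b[j]:
                dp[i + 1][j + 1] = dp[i][j] + [a[i]]
            else:
                dp[i + 1][j + 1] = max(dp[i][j + 1], dp[i + 1][j], key=len)
    return dp[m][n]


demo1 = ["GoTo(Green)", "PickUp(Orange)", "GoTo(Pink)", "Drop", "GoTo(LightGreen)"]
demo2 = ["GoTo(LightGreen)", "PickUp(Orange)", "GoTo(Yellow)", "GoTo(Pink)", "Drop"]

common = lcs(demo1, demo2)
print(common)
# -> ['PickUp(Orange)', 'GoTo(Pink)', 'Drop']
```

A generalized representation would keep the steps outside the common core as well, as alternative branches specific to each demonstration, rather than discarding them.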
Generalization
• Task: go to either the Green or Light-Green target, pick up the Orange box, go between the Yellow and Red targets, go to the Pink target, drop the box there, go to the Light-Orange target, and come back to the Light-Green target
• None of the individual demonstrations corresponds to the desired task
• Each contains incorrect steps and inconsistencies
Generalization Experiments
[Figures: 1st, 2nd, and 3rd human demonstrations and the resulting robot performance]
Refining Task Representations Through Practice
[Diagram: deleting unnecessary steps from, and including newly demonstrated steps in, the learned network]
• Practice allows more accurate refinement of learned tasks:
  • Removing unnecessary task steps ("BAD" feedback)
  • Adding missing task steps ("COME"/"GO" feedback)
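The two refinement operations can be sketched as edits on the learned step sequence. The feedback tuple format and step names are hypothetical, and the actual system edits a behavior network rather than a flat list.

```python
# Illustrative sketch of refinement through practice: the teacher watches the
# robot execute and either flags a step as wrong or supplies a step the robot
# missed; the learned sequence is edited accordingly.

def refine(task_steps, feedback):
    """feedback: list of ("delete", step) or ("insert", step, after_step)."""
    steps = list(task_steps)
    for item in feedback:
        if item[0] == "delete":        # false positive: drop the flagged step
            steps.remove(item[1])
        elif item[0] == "insert":      # false negative: add the missing step
            _, step, after = item
            steps.insert(steps.index(after) + 1, step)
    return steps


learned = ["GoTo(A)", "GoTo(Bad)", "PickUp(Box)", "Drop"]
feedback = [("delete", "GoTo(Bad)"), ("insert", "GoTo(Dest)", "PickUp(Box)")]

print(refine(learned, feedback))
# -> ['GoTo(A)', 'PickUp(Box)', 'GoTo(Dest)', 'Drop']
```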
Practice and Feedback Experiments
[Figures: 3rd demonstration, practice run & feedback, topology refinement, robot performance]
Practice and Feedback Experiments
[Figures: 1st demonstration, practice run & feedback, topology refinement, practice run, robot performance]
Learning from Robot Teachers
[Figures: gate-traversing task — human demonstration, robot execution, learned network]
M. N. Nicolescu, M. J. Matarić, "Experience-based representation construction: learning from human and robot teachers", IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 740-745, Oct. 29 – Nov. 3, 2001.
Learning – Proposed Work
Goal: learn a larger spectrum of tasks
• Repetitive tasks: "repeat-until"
• Conditioned tasks: "if-then"
• Time-relevant information: "do-until"
• Trajectory learning: Turn(Angle), MoveForward(Distance)
Approach:
• Use an increased vocabulary of instructional cues (repeat, until, if, etc.)
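One possible, purely illustrative encoding of the proposed constructs as task nodes; the node format, the interpreter, and the box-moving example are assumptions and not from the talk itself.

```python
# Hypothetical task-node interpreter for "repeat-until" and "if-then" constructs.

def run_task(node, world):
    kind = node[0]
    if kind == "step":                 # primitive step: execute one behavior/skill
        node[1](world)
    elif kind == "seq":                # sequential composition
        for child in node[1]:
            run_task(child, world)
    elif kind == "repeat_until":       # repetitive task: repeat body until cond holds
        _, body, cond = node
        while not cond(world):
            run_task(body, world)
    elif kind == "if_then":            # conditioned task: run body only if cond holds
        _, cond, body = node
        if cond(world):
            run_task(body, world)


def move_box(w):
    w["boxes_left"] -= 1
    w["trace"].append("MoveBox")


world = {"boxes_left": 2, "trace": []}
task = ("repeat_until", ("step", move_box), lambda w: w["boxes_left"] == 0)
run_task(task, world)
print(world["trace"])
# -> ['MoveBox', 'MoveBox']
```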
Support for Communication – Proposed Work
Goal:
• Understanding intentions from complex activity
Approach:
• Use learning from demonstration to teach the robot patterns of activity
• Understand activity by observing/following people and mapping the observations to the learned database of activities
Applications:
• Security: detect suspicious behavior (e.g., passing around a checkpoint area)
Communicating Through Actions – Proposed Work
Goal:
• Natural communication & engagement in interactions with people
Approach:
• Use actions as a vocabulary for communicating intentions
• Understanding exhibited behavior is natural: actions carry intentional meanings
Communication – Preliminary Work
• If in trouble, the robot tries to get help from a human assistant:
  • Performs "dog-like" actions to get a human's attention
  • Performs the actions that failed in front of the helper, to express its intentions
• Examples: picking up an inaccessible object; traversing a blocked gate
Communication – Proposed Work
Goal:
• Understanding intent from interaction
Approach:
• Engaging in interactions with people can expose underlying intentions
Applications:
• Security: an uncooperative person could potentially have malicious intentions
• Service: learn about cooperative/uncooperative users
Summary
• Proposed a framework for the development of autonomous, interactive robotic systems
• Behavior-based control with a particular behavior representation
  • Frequent, sustained interactions with people
  • Understanding intent
• Learning by demonstration
  • Natural robot programming
  • Understanding (malevolent) activity/intent
• Communication through actions
  • Expressing intent, engaging in interactions with people