Autonomy for General Assembly
Reid Simmons, Research Professor, Robotics Institute, Carnegie Mellon University
The Challenge
• Autonomous manipulation of flexible objects for general assembly of vehicles
  • Dexterity
  • Precise perception
  • Speed
  • Reliability
• The Specific Task
  • Insert clip attached to cable into hole with millimeter tolerance
  • Year 2: Moving taskboard
Overall Approach
• Utilize our previous work in robot autonomy
  • Multi-layered software architecture
  • Hierarchical, task-level description of assembly
  • Robust, low-level behaviors
  • Distributed visual servoing
  • Force sensing
  • Exception detection and recovery
Architectural Framework
• Three-Tiered Architecture: Planning / Executive / Behavioral Control
  • Planning: deals with goals and resource interactions
  • Executive: task decomposition, task synchronization, monitoring, exception handling
  • Behavioral Control: deals with sensors and actuators
• Reactive & Deliberative
• Modular
• Control loops at multiple levels of abstraction
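Purely as an illustration of the layering (hypothetical names and task breakdown, not the actual architecture code), the three tiers can be pictured as control loops at increasing levels of abstraction: the behavioral layer closes a fast sensor/actuator loop, the executive sequences and monitors tasks over it, and the planner reasons about goals above that.

```python
import time

class BehavioralControl:
    """Fast loop over sensors and actuators (e.g., tracking, visual servoing)."""
    def step(self, command: str) -> bool:
        # Read sensors, send actuator commands; report whether the command is achieved.
        return True

class Executive:
    """Decomposes a task into behavioral commands; synchronizes and monitors them."""
    def __init__(self, control: BehavioralControl):
        self.control = control
    def run_task(self, task: str) -> bool:
        # Hypothetical decomposition of the clip-insertion task into commands.
        for command in {"insert_clip": ["move_base", "move_arm", "insert"]}.get(task, []):
            while not self.control.step(command):   # inner, higher-rate control loop
                time.sleep(0.01)
        return True

class Planner:
    """Deliberative tier: reasons about goals and resources, handing tasks to the executive."""
    def __init__(self, executive: Executive):
        self.executive = executive
    def achieve(self, goal: str) -> bool:
        return all(self.executive.run_task(t) for t in [goal])   # trivial plan: one task

Planner(Executive(BehavioralControl())).achieve("insert_clip")
```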
Syndicate: Multi-Robot Architecture
(Diagram: three Planning / Executive / Behavioral Control stacks, one per robot; synchronization/coordination runs across corresponding layers, and granularity varies down each stack)
Syndicate Layers: Behavioral
• Made up of “blocks”
  • Each block is a small thread/function/process
  • Represent hardware capabilities or repeatable behaviors
  • “Stateless”: relies on current data; no knowledge of past or future
• Communicate with sensors
• Send commands to robots and get feedback
• Communicate data to other blocks
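As a minimal sketch of this idea (hypothetical block and port names, not the Syndicate code), a behavioral block can be viewed as a stateless step function: it consumes only its current inputs, computes outputs, and forwards them to the blocks connected to it.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Block:
    """Hypothetical behavioral-layer block: stateless, driven by current inputs only."""
    name: str
    step: Callable[[Dict[str, float]], Dict[str, float]]   # current inputs -> outputs
    subscribers: List["Block"] = field(default_factory=list)

    def update(self, inputs: Dict[str, float]) -> None:
        outputs = self.step(inputs)          # uses only current data, no stored history
        for block in self.subscribers:       # communicate data to connected blocks
            block.update(outputs)


# Example wiring: a tracking block forwards a pose error to a servo block.
servo = Block("visual_servo", lambda d: {"cmd_x": 0.5 * d["err_x"]})
tracker = Block("tracking", lambda d: {"err_x": d["goal_x"] - d["x"]}, [servo])
tracker.update({"goal_x": 1.0, "x": 0.8})
```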
Ace Control Behaviors
(Diagram slide)
Distributed Visual Servoing
• Mast Eye tracks fiducials
  • Uses the ARTag software package to detect fiducials
  • Provides 6-DOF transforms between fiducials
• Mobile Manipulator uses the information to plan how to achieve the goal
  • Uses a database describing the positions of fiducials on objects
• Behavioral layer enables dynamic, transparent inter-agent connections
(Diagram: the Mast Eye’s Tracking block takes images via cameras from the world and sends relative positions to the Mobile Manipulator’s VisualServo block, which sends end-effector deltas to ArmControl to manipulate the environment via the arm)
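To make the transform chaining concrete, here is a hedged sketch (hypothetical fiducial names and plain 4x4 NumPy transforms; this is not the ARTag API or the actual servoing code) of combining a camera-to-fiducial detection with the database entry for the fiducial’s pose on its object to get the pose of one object relative to another:

```python
import numpy as np

def invert(T: np.ndarray) -> np.ndarray:
    """Invert a 4x4 homogeneous transform."""
    R, t = T[:3, :3], T[:3, 3]
    Tinv = np.eye(4)
    Tinv[:3, :3] = R.T
    Tinv[:3, 3] = -R.T @ t
    return Tinv

# Hypothetical database: pose of each fiducial expressed in its object's frame.
fiducial_on_object = {
    "clip_tag": np.eye(4),   # assumed: clip fiducial coincides with the clip frame
    "hole_tag": np.eye(4),   # assumed: hole fiducial coincides with the hole frame
}

def object_relative_pose(T_cam_clip_tag: np.ndarray, T_cam_hole_tag: np.ndarray) -> np.ndarray:
    """Pose of the hole relative to the clip, built by chaining transforms.

    T_cam_*: camera-to-fiducial transforms reported by the tracker.
    The result is object-to-object, so the camera's calibration relative to the
    base or arm never enters the computation.
    """
    T_cam_clip = T_cam_clip_tag @ invert(fiducial_on_object["clip_tag"])
    T_cam_hole = T_cam_hole_tag @ invert(fiducial_on_object["hole_tag"])
    return invert(T_cam_clip) @ T_cam_hole
```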
Distributed Visual Servoing
• Fairly precise: millimeter resolution at one meter
• Relatively fast: 3-4 Hz
• Basically unchanged from the Trestle code
• Operates in a relative frame
  • Poses of one object relative to another
  • Controller continually tries to reduce the pose difference
  • Cameras do not need to be precisely calibrated with respect to the base or arm
Distributed Visual Servoing
• Associating Fiducials with Objects
  • Programmer provides a file listing the pose of each fiducial with respect to an object
  • Multiple fiducials can be associated with each object
  • Poses can be measured directly, or the system can be used to provide them
• Reducing Pose Differences
  • A “waypoint” is the pose of one object with respect to another
  • Everything is relative!
  • The visual servo block multiplies the pose difference by a gain
  • The commanded move is updated whenever new information arrives
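A hedged sketch of that update rule (hypothetical gain value, translation-only for brevity; the actual block operates on full 6-DOF pose differences):

```python
import numpy as np

GAIN = 0.3  # assumed proportional gain; the real value is tuned on the robot

def servo_step(current_rel_pose: np.ndarray, waypoint_rel_pose: np.ndarray) -> np.ndarray:
    """Return an incremental end-effector move that reduces the pose difference.

    current_rel_pose / waypoint_rel_pose: (x, y, z) of one object relative to another.
    Called whenever the tracker delivers a new relative pose (3-4 Hz), so the
    commanded move is continually updated as new information arrives.
    """
    pose_difference = waypoint_rel_pose - current_rel_pose
    return GAIN * pose_difference   # proportional reduction of the remaining error

# Example: clip currently 20 mm from the hole along x.
delta = servo_step(np.array([0.02, 0.0, 0.0]), np.zeros(3))
```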
Syndicate Layers: Executive
• Made up of “tasks”
  • Each task is concerned with achieving a single goal
  • Tasks can be arranged temporally
• Tasks can:
  • Spawn subtasks
  • Enable and connect blocks in the behavioral layer to achieve the task
    • Enable: tell a block to start running
    • Connect: tell blocks to send data to other blocks
  • Monitor blocks for failure
  • Provide failure recovery
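As an illustration only (plain Python, not TDL or the Syndicate executive; all names hypothetical), a task can be sketched as an object that enables its blocks, decomposes into subtasks, monitors for failure, and attempts recovery before reporting failure upward:

```python
from typing import Callable, List


class Task:
    """Hypothetical executive-layer task: achieves one goal by enabling blocks,
    monitoring them, and recovering (or failing upward) on error."""

    def __init__(self, name: str, enable: Callable[[], None],
                 monitor: Callable[[], bool], recover: Callable[[], None]):
        self.name, self.enable, self.monitor, self.recover = name, enable, monitor, recover
        self.subtasks: List["Task"] = []

    def spawn(self, subtask: "Task") -> None:
        self.subtasks.append(subtask)

    def run(self) -> bool:
        self.enable()                 # tell the behavioral blocks to start running
        for sub in self.subtasks:     # decompose the goal into subtasks
            if not sub.run():
                return False
        if not self.monitor():        # watch the blocks for failure
            self.recover()            # attempt local exception handling
            return self.monitor()
        return True
```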
Ace Task Decomposition
(Diagram: task tree showing child links and serial constraints between tasks)
Example TDL Code (somewhat simplified)

Goal ClipInsertion ( )          // "Goal" is a keyword that says this is not supposed to be a "leaf" in the task tree
{
  // In the task tree, "loadPlugArmPose" is the task name; ArmMove is the actual function being executed
  loadPlugArmPose: spawn ArmMove (loadPose);

  // Reuses the ArmMove task with different parameters;
  // WITH SERIAL tells the system to execute this task after loadPlugArmPose completes
  stowArmPose: spawn ArmMove (stowPose) WITH SERIAL loadPlugArmPose;

  roughBaseMove: spawn RoughBaseMove (roughBaseWaypoint) WITH SERIAL loadPlugArmPose;

  // Wait until both tasks have completed before starting RoughArmMove
  spawn RoughArmMove (roughArmWaypoint) WITH SERIAL roughBaseMove, SERIAL stowArmPose;
  …
}
Initial Results (December 2007)
• Used Previous Hardware
  • RWI base
  • Metrica 5-DOF arm
  • Metal & plastic gripper
• Successfully Inserted Clip
  • 60% success rate (15 trials)
  • Failures mainly attributable to hardware problems
  • Fairly slow (~1 minute)
  • Scripted base move
Insertion Video
(Embedded video)
Current Status
• Moved to New Hardware
  • Powerbot base
  • WAM arm (Barrett)
  • All-metal gripper
• Still Successfully Inserting Clip
  • Much faster
    • Better hardware
    • “Rough” moves
  • Base motion is planned, not scripted
  • Uses force sensing to detect completion / problems
  • Have not yet characterized success rate
Upcoming Work
• Near Term (1 month)
  • Complete hardware integration: laser, PTU, VizTracker
  • Characterize success rate of the system
• Mid Term (2-6 months)
  • Convert to velocity control of the WAM
  • Use force control for the actual insertion
  • Increase reliability through execution monitoring and exception handling
• Farther Term (2nd year of contract)
  • Insert clip into moving taskboard