Human-in-the-Loop Control of an Assistive Robot Arm


Presentation Transcript


  1. Human-in-the-Loop Control of an Assistive Robot Arm Katherine Tsui and Holly Yanco University of Massachusetts, Lowell

  2. Challenge Using a standard controller, put the ball in the cup. 1. Turn the cup over. 2. Pick up the ball. 3. Put the ball in the cup. Average time of execution: ~3-5 minutes, even by middle school children with video game experience! Although this is a simplified example, similar tasks may occur repeatedly throughout a handicapped person’s daily life.

  3. Motivation • Why? • Unintuitive controllers • Operator sensory overload • For severely handicapped people, activities of daily life are difficult enough to perform. • While an assistive robotic device like ours allows for limited independence, it can be frustrating and tiresome to operate. • Let’s abstract this away…

  4. Hardware • Manus Assistive Robotic Manipulator (ARM) by Exact Dynamics • 6 Degrees of Freedom (DoF) • plus 2 DoF gripper end-effector • Joint encoders • Cameras • Shoulder view • Gripper view

  5. Standard Control Movement in the “out of the box” configuration is done through menus accessed from single-switch, keypad, or joystick input.

  6. Standard Control: Using the Joint Menu Direct Joint Mode + Direct control of individual joints - Unable to move multiple joints simultaneously - Not how humans do it… we don’t think in terms of moving the shoulder up, rotating the wrist, extending the forearm, etc.

  7. Standard Control: Using the Cartesian Menu Direct Cartesian Mode + Gripper moves linearly in 3D + Joints can move together in space and time - Still not how humans do it… we don’t think in terms of moving left, right, up, down, etc.

  8. Alternative Control • Transparent Mode • ARM has Controller Area Network (CAN) communication with PC • ARM transmits status packages at 20ms intervals. • t=20ms: message 0x350 gives ARM status and position • t=40ms: message 0x360 gives gripper position • t=60ms: message 0x37F asks for return package • t=80ms: message 0x350… • Every 60ms, when message 0x37F is sent, movement information can be returned as ARM input.
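
A minimal sketch of how a PC-side program might take part in this cycle, assuming a python-can compatible interface. The message IDs 0x350, 0x360, and 0x37F come from the slide above; the "can0"/"socketcan" settings, the reply payload layout, and the movement_command helper are illustrative placeholders, not the actual Manus protocol.

```python
# Sketch of the Transparent Mode status/command cycle (assumptions noted below).
import can

# Placeholder channel/interface settings; real values depend on the CAN adapter.
bus = can.interface.Bus(channel="can0", bustype="socketcan")

def movement_command(dx, dy, dz):
    """Pack a hypothetical movement command; the byte layout is a placeholder."""
    data = bytes([dx & 0xFF, dy & 0xFF, dz & 0xFF, 0, 0, 0, 0, 0])
    return can.Message(arbitration_id=0x37F, data=data, is_extended_id=False)

while True:
    msg = bus.recv()                     # status packages arrive every 20 ms
    if msg is None:
        continue
    if msg.arbitration_id == 0x350:      # t = 20 ms: ARM status and position
        arm_status = msg.data
    elif msg.arbitration_id == 0x360:    # t = 40 ms: gripper position
        gripper_pos = msg.data
    elif msg.arbitration_id == 0x37F:    # t = 60 ms: ARM asks for a return package
        bus.send(movement_command(1, 0, 0))  # reply with the desired movement
```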

  9. How should the ARM move? Like humans do! Think: I want the cell phone. Actions: See, reach, grasp However, the intended users may not be capable of these actions, so we simplify.

  10. Selection Process Given what the user sees directly ahead of them, and assuming the desired, unobstructed object is within reach… zoom in on the cell phone!

  11. Movement • From the user selection, we know the (x, y) position where the ARM should be. • How do we move there? • By using Phission and joint encoder feedback to determine movement length, speed, and direction: • Phission: We’ve trained on the color we want to track. While the center of the blob is not near the desired (x, y), move toward it. • Feedback: Monitor ARM status and position
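
A sketch of that movement loop, assuming two hypothetical helpers: get_blob_center(), standing in for the Phission color-blob tracker, and send_arm_velocity(), standing in for the Transparent Mode command path. The gain and pixel tolerance are illustrative values only.

```python
# Move the ARM until the tracked blob sits near the user-selected (x, y).
TOLERANCE_PX = 10   # "near enough" to the desired point, in pixels (assumed)
GAIN = 0.05         # proportional gain from pixel error to speed (assumed)

def center_on_target(target_x, target_y):
    while True:
        blob_x, blob_y = get_blob_center()           # hypothetical: blob center from the gripper camera
        err_x, err_y = target_x - blob_x, target_y - blob_y
        if abs(err_x) < TOLERANCE_PX and abs(err_y) < TOLERANCE_PX:
            send_arm_velocity(0, 0)                   # close enough: stop
            return
        # Move toward the target; speed proportional to the remaining error.
        send_arm_velocity(GAIN * err_x, GAIN * err_y)
```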

  12. Drop for Z Depth information is deduced through simulated stereo vision. Two images are taken sequentially as the gripper moves along the y-axis; the baseline B is known. The disparity between the images yields the depth Z, and the ARM moves “close” to the desired object. Z = B·f / (xL − xR)
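
A small worked example of the depth formula. The numbers are illustrative only; the real baseline B, focal length f, and pixel coordinates depend on the camera and on how far the gripper travels between the two images.

```python
# Depth from the simulated-stereo formula Z = B*f / (xL - xR).
def depth_from_disparity(baseline_m, focal_px, x_left, x_right):
    """Return depth Z in metres from the x-coordinates of one feature in two images."""
    disparity = x_left - x_right          # pixels
    return baseline_m * focal_px / disparity

# Example: 5 cm of gripper travel, 600 px focal length, 40 px disparity.
print(depth_from_disparity(0.05, 600, 340, 300))   # -> 0.75 m
```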

  13. Future Work • Distance sensing: laser • Non-rigid stereo vision
