PEG-IN-HOLE USING DYNAMIC MOVEMENT PRIMITIVES Fares J. Abu-Dakka, Bojan Nemec, Aleš Ude Department of Automatics, Biocybernetics, and Robotics 13-Sep-12
Contents • Keywords: Peg-In-Hole, nonlinear dynamic systems, robot learning • PiH background • Force control for PiH • PiH & DMP learning • Experimental results • Conclusion
PiH background • PiH is a classical assembly problem. • PiH requires both position and force control. • Force control can be realized by: • Passive approaches • Active approaches
PiH background • Passive approaches • Needed to ease the insertion process and to reduce contact forces with the surface. • A Remote Center of Compliance (RCC) device is typically used. • The RCC introduces low lateral and rotational stiffness in the grasping mechanism. • Active approaches • Require force-torque sensing. • While passive approaches provide a fixed RCC, active approaches can locate it arbitrarily.
Force control for PiH • The desired force is calculated from the measured force signal. • The end-effector (resolved) velocity is calculated as ẋ_c = S_v ẋ_d + S_f K_f (F − F_d), where S_v is the velocity selection matrix, S_f the force selection matrix, K_f the force gain matrix (a stiffness matrix), F the measured force and F_d the desired force.
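To make the resolved-velocity law above concrete, here is a minimal Python sketch. The function and variable names (resolved_velocity, S_v, S_f, K_f) and the example gain values are illustrative assumptions, not the controller implementation from the paper.

```python
import numpy as np

def resolved_velocity(xdot_d, f_measured, f_desired, S_v, S_f, K_f):
    """Hybrid force/velocity command (illustrative sketch, not the paper's code).

    xdot_d     : desired end-effector velocity (6-vector)
    f_measured : measured wrench from the force-torque sensor (6-vector)
    f_desired  : desired wrench (6-vector)
    S_v, S_f   : velocity / force selection matrices (6x6, complementary)
    K_f        : force gain matrix (stiffness-like, 6x6)
    """
    force_error = f_measured - f_desired            # measured minus desired, as on the slide
    return S_v @ xdot_d + S_f @ (K_f @ force_error)

# Example: force control along z only, velocity control in the other directions.
S_f = np.diag([0.0, 0.0, 1.0, 0.0, 0.0, 0.0])
S_v = np.eye(6) - S_f
K_f = 0.001 * np.eye(6)                             # hypothetical gain value
xdot_cmd = resolved_velocity(np.zeros(6),
                             np.array([0.0, 0.0, 12.0, 0.0, 0.0, 0.0]),
                             np.array([0.0, 0.0, 10.0, 0.0, 0.0, 0.0]),
                             S_v, S_f, K_f)
```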
PiH & DMP Learning: DMPs • In this paper discrete DMPs are used. • They encode control policies for discrete point-to-point movements. • DMPs are based on systems of second-order differential equations (Ijspeert et al., 2002a, 2002b). • Advantages: the ability to deal with perturbations and to include feedback terms. • Feedback terms can be added to change the timing (Schaal et al., 2007) and/or to avoid some areas of the workspace. • The training movement must come to a full stop at the end of the demonstration if the robot is to stay at the attractor point after t > T.
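For reference, the discrete DMP formulation (in the standard form of Ijspeert et al.) can be integrated as in the one-dimensional sketch below. Parameter values and function names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def integrate_dmp(y0, g, weights, centers, widths,
                  tau=1.0, dt=0.002, alpha_z=48.0, beta_z=12.0, alpha_x=2.0):
    """One-DOF discrete DMP in the standard Ijspeert form:

        tau * dz/dt = alpha_z * (beta_z * (g - y) - z) + f(x)
        tau * dy/dt = z
        tau * dx/dt = -alpha_x * x            (phase decays from 1 towards 0)
        f(x) = sum_i w_i * psi_i(x) / sum_i psi_i(x) * x * (g - y0)
    """
    y, z, x = float(y0), 0.0, 1.0
    traj = []
    for _ in range(int(round(tau / dt))):
        psi = np.exp(-widths * (x - centers) ** 2)            # Gaussian basis functions
        f = (psi @ weights) / (psi.sum() + 1e-10) * x * (g - y0)
        z += dt / tau * (alpha_z * (beta_z * (g - y) - z) + f)
        y += dt / tau * z
        x += dt / tau * (-alpha_x * x)
        traj.append(y)
    return np.array(traj)
```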
PiH & DMP Learning: Approach Phase • Trajectory encoding by imitation. • The demonstrated trajectory is measured using a KUKA LWR robot in gravity-compensation mode.
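Encoding a demonstrated trajectory into a DMP amounts to estimating the forcing-term weights, typically with locally weighted regression. The one-DOF sketch below illustrates the idea under assumed parameter choices; it is not the authors' implementation.

```python
import numpy as np

def fit_dmp_weights(y_demo, dt, n_basis=20,
                    alpha_z=48.0, beta_z=12.0, alpha_x=2.0):
    """Estimate forcing-term weights for one DOF from a demonstration
    by locally weighted regression (illustrative sketch)."""
    y_demo = np.asarray(y_demo, dtype=float)
    tau = (len(y_demo) - 1) * dt                     # duration of the demonstration
    y0, g = y_demo[0], y_demo[-1]
    yd = np.gradient(y_demo, dt)
    ydd = np.gradient(yd, dt)
    t = np.arange(len(y_demo)) * dt
    x = np.exp(-alpha_x * t / tau)                   # phase along the demonstration
    # forcing values that would reproduce the demonstrated accelerations
    f_target = tau ** 2 * ydd - alpha_z * (beta_z * (g - y_demo) - tau * yd)
    centers = np.exp(-alpha_x * np.linspace(0.0, 1.0, n_basis))
    widths = 1.0 / np.diff(centers, append=centers[-1] / 2.0) ** 2
    s = x * (g - y0)                                 # common scaling term
    weights = np.empty(n_basis)
    for i in range(n_basis):
        psi = np.exp(-widths[i] * (x - centers[i]) ** 2)
        weights[i] = (s * psi) @ f_target / ((s * psi) @ s + 1e-10)
    return weights, centers, widths
```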
PiH & DMP Learning: Detection of Contact • The goal configuration is used to initialize the DMPs (translation and rotation). • The forces are monitored during DMP execution. • Execution stops if contact is established. • If contact is not established by the end of trajectory execution, a downward motion is generated using hybrid control.
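The contact-detection logic can be sketched as a simple force-monitoring loop around the DMP execution. The threshold value and the dmp_step/read_force/send_velocity callables are hypothetical placeholders for the robot interface, not a real API.

```python
import numpy as np

FORCE_THRESHOLD = 3.0   # N, hypothetical contact-detection threshold

def execute_approach(dmp_step, read_force, send_velocity, n_steps):
    """Run the approach-phase DMP and stop as soon as contact is detected.

    dmp_step, read_force and send_velocity are placeholders for the
    robot/DMP interface.  Returns True if contact was established.
    """
    for _ in range(n_steps):
        wrench = read_force()                              # 6-vector from the F/T sensor
        if np.linalg.norm(wrench[:3]) > FORCE_THRESHOLD:
            return True                                    # contact detected, stop the DMP
        send_velocity(dmp_step())                          # follow the DMP-generated motion
    return False   # no contact: the caller starts the downward hybrid-control motion
```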
PiH & DMP Learning: Contact Cases • (1) The peg is in contact with the surface outside the hole. • (2) The peg is inside the hole with only one contact at the edge of the hole. • (3) The peg is in contact at two points. • (4) The peg is in contact with the hole edge. • (5) The peg is inside the hole with two contacts. • (6) The peg is inside the hole with only one contact. During trajectory execution with the DMP, the actual trajectory is modified according to the force error. Slowing-down feedback automatically ensures that the DMP-commanded motion slows down or stops whenever the peg gets stuck in the hole (or whenever the forces exceed the permitted value); a sketch is given below. When the robot exerts a downward force, each of the cases listed above eventually changes to case (3) or (5).
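The slowing-down feedback mentioned above can be realized by scaling the DMP phase velocity with the magnitude of the force error. The sketch below follows the general idea of temporal coupling in DMPs rather than the exact formulation in the paper; k_slow is an assumed gain.

```python
import numpy as np

def phase_step(x, dt, force_error, tau=1.0, alpha_x=2.0, k_slow=0.5):
    """Advance the DMP phase with slowing-down feedback (illustrative sketch).

    The phase velocity is divided by a factor that grows with the force
    error, so the commanded motion slows down or effectively stops when
    the peg gets stuck.  k_slow is an assumed gain, not a value from the paper.
    """
    slow_factor = 1.0 + k_slow * np.linalg.norm(force_error)
    x_dot = -alpha_x * x / (tau * slow_factor)
    return x + dt * x_dot
```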
PiH & DMP Learning: Search Phase & Algorithms • New goal positions are generated on the surface. • The movement is generated by linear DMPs (without the nonlinear forcing term). • Hybrid control is used (force control in the z direction). • Changes in forces and height are monitored.
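A linear DMP (the second-order system without the forcing term) simply converges to whatever goal it is given, which is what the search phase exploits when switching goals on the surface. The sketch below shows such a goal-tracking step together with a circular goal pattern chosen purely for illustration; the actual search pattern is an assumption, not taken from the slides.

```python
import numpy as np

def linear_dmp_step(y, z, g, dt, tau=1.0, alpha_z=48.0, beta_z=12.0):
    """One integration step of a linear DMP (no forcing term): a damped
    spring system that converges to the currently set goal g."""
    z += dt / tau * alpha_z * (beta_z * (g - y) - z)
    y += dt / tau * z
    return y, z

def search_goals(center_xy, step=0.002, n_rings=3):
    """Hypothetical generator of new goal positions on the surface around
    the estimated hole position (circular pattern used only for illustration)."""
    center_xy = np.asarray(center_xy, dtype=float)
    for r in range(1, n_rings + 1):
        for ang in np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False):
            yield center_xy + r * step * np.array([np.cos(ang), np.sin(ang)])
```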
PiH & DMP Learning: Insertion Phase • The insertion is performed under force control. • It is possible to learn the resulting motion as a DMP by incremental learning.
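Incremental learning of the resulting motion can be realized with recursive (locally weighted) least squares on the DMP forcing term. The class below is an illustrative sketch of such an update; the names, forgetting factor and initial covariance are assumptions rather than values from the paper.

```python
import numpy as np

class IncrementalDMPLearner:
    """Recursive (locally weighted) least-squares update of DMP forcing-term
    weights from samples collected during execution (illustrative sketch)."""

    def __init__(self, centers, widths, lam=0.999, p0=1e3):
        self.centers = np.asarray(centers, dtype=float)
        self.widths = np.asarray(widths, dtype=float)
        self.w = np.zeros(len(self.centers))       # forcing-term weights
        self.P = np.full(len(self.centers), p0)    # per-basis covariance term
        self.lam = lam                             # forgetting factor

    def update(self, x, s, f_target):
        """x: phase value, s: x * (g - y0), f_target: observed forcing value."""
        psi = np.exp(-self.widths * (x - self.centers) ** 2)
        for i in np.nonzero(psi > 1e-6)[0]:
            P = self.P[i]
            err = f_target - self.w[i] * s
            P = (P - P ** 2 * s ** 2 / (self.lam / psi[i] + P * s ** 2)) / self.lam
            self.w[i] += psi[i] * P * s * err
            self.P[i] = P
```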
Experimental results • The PiH motion template was obtained from multiple human demonstrations using kinaesthetic guidance. • The Cranfield benchmark is used; it constitutes a standardized assembly task in robotics. • A circular peg was used for testing in this experiment.
Experimental results • [Figure: plots of the change in z (m), the force error in z (N), and the force in z (N) over time.]
Conclusion • In this paper, the PiH problem has been addressed using DMPs. • DMPs with force feedback, in conjunction with hybrid trajectory and force control, have been tested as a candidate solution for the PiH task. • Experiments have been carried out using the ROBWORK simulation and a real LWR robot.