Developing a Gesture Recognition Interface
Senior Project – Computer Science – 2013
By Jonathan Lebron
Advisor – Prof. Nick Webb

Abstract
The field of robotics presents a unique opportunity to design new technologies that can perform the tasks of humans. This is especially important in cases where a task is too difficult or dangerous, such as war. This project bridges the human-robot communication gap by presenting a robust gesture recognition interface that translates recognized gestures into actions for a robot to perform. The system uses the Xbox Kinect and the Robot Operating System (ROS) to recognize a set of seven military gestures. It accurately recognizes complex sequences of these gestures, performs a preset action for each gesture, and can add new gestures or replace old gestures on the fly.

Introduction
There are various scenarios in warfare where speech is not possible or is not an optimal solution. When presented with these situations, soldiers have a defined set of gestures they can use to communicate. The question this project looks to answer is: how do you give a robot a sequence of natural gestures that it must interpret and act upon?

System Pipeline
• The Xbox Kinect tracks the user's skeleton and generates X, Y, Z coordinates for each joint, for example:
    left_shoulder: 0.35, 0.12, 2.19
    left_elbow: -0.18, -0.10, 2.02
    left_hand: -0.13, -0.34, 1.924
• The system calculates angles from these coordinates and classifies the gesture based on the angles (a sketch of one way to compute such angles appears at the end of this document).
• Once a gesture is recognized ("Gesture Recognized!"), the robot performs the preset action ("Now moving!").

ROS Nodes
The Robot Operating System (ROS) is the main tool used for programming the gesture recognition interface. The poster shows a graph of all of the nodes:
• openni_tracker: Provided by OpenNI as part of the openni_kinect package. It tracks a user with the Xbox Kinect and generates skeleton data.
• kinect_listener: Generates pose data based on the angles of the user's left hand and arm. This node uses code provided by Cornell's Personal Robotics research lab.
• gesture_recognizer: Sends requests to the classifier and delegates actions to the robot (a node sketch appears at the end of this document).
• classifier: Classifies gestures based on pose data sent from the gesture_recognizer node.
• add_gesture: Subscribes to pose data and creates a new model file for the classifier to use.

Classification
• Gestures (seven): Abreast, Enemy, Freeze, Stop, Listen, Rifle, Cover
• Angle features: handTheta, handPhi, elbowTheta, elbowPhi
[Figure: Kinect skeleton tracking, image from http://www.ros.org/news/resources/2010/NITE-1.png]

Future Work
Improvements to the system include refining the classifier to better interpret gestures; as of now, the classifier uses the output of a C4.5 decision tree (a rough stand-in sketch appears at the end of this document). Improving the tracker so that no initial calibration is necessary would also help the system flow more naturally.
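Illustrative Sketches (not project code)

The poster does not show how the pose angles are computed from the skeleton coordinates, so the following is a minimal sketch under one assumption: that handTheta/handPhi and elbowTheta/elbowPhi are the spherical (polar and azimuthal) angles of the hand and elbow relative to their parent joints. The function name and joint pairing are hypothetical, not taken from the project code.

    import math

    def spherical_angles(parent, child):
        # Polar (theta) and azimuthal (phi) angles of the child joint
        # relative to its parent joint, from Kinect (x, y, z) coordinates.
        dx, dy, dz = (child[i] - parent[i] for i in range(3))
        r = math.sqrt(dx * dx + dy * dy + dz * dz)
        theta = math.acos(dz / r) if r > 0 else 0.0
        phi = math.atan2(dy, dx)
        return theta, phi

    # Coordinates from the pipeline example above.
    left_shoulder = (0.35, 0.12, 2.19)
    left_elbow = (-0.18, -0.10, 2.02)
    left_hand = (-0.13, -0.34, 1.924)

    elbowTheta, elbowPhi = spherical_angles(left_shoulder, left_elbow)
    handTheta, handPhi = spherical_angles(left_elbow, left_hand)
    print(handTheta, handPhi, elbowTheta, elbowPhi)

Under this reading, the four angles form the feature vector that the classifier consumes.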
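The poster describes gesture_recognizer as sending requests to the classifier and delegating actions to the robot, but no node source is shown. The sketch below gives one plausible shape for such a node in rospy; to stay self-contained it uses plain String topics ("gesture" in, "robot_action" out) rather than the project's actual request interface, and the gesture-to-action mapping is invented for illustration.

    #!/usr/bin/env python
    # Sketch of a gesture_recognizer-style ROS node; topic names and the
    # gesture-to-action mapping are assumptions, not the project's real interfaces.
    import rospy
    from std_msgs.msg import String

    GESTURE_ACTIONS = {          # hypothetical preset actions for the seven gestures
        "Abreast": "spread_out",
        "Enemy": "alert",
        "Freeze": "halt",
        "Stop": "halt",
        "Listen": "pause",
        "Rifle": "alert",
        "Cover": "take_cover",
    }

    def on_gesture(msg, action_pub):
        # msg.data is the label produced by the classifier node in this sketch.
        action = GESTURE_ACTIONS.get(msg.data)
        if action is not None:
            rospy.loginfo("Gesture recognized: %s -> %s", msg.data, action)
            action_pub.publish(String(data=action))

    if __name__ == "__main__":
        rospy.init_node("gesture_recognizer")
        action_pub = rospy.Publisher("robot_action", String, queue_size=10)
        rospy.Subscriber("gesture", String, on_gesture, callback_args=action_pub)
        rospy.spin()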
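The classifier is described as using the output of a C4.5 decision tree over the four angle features; the poster does not say which implementation was used (Weka's J48 is the common C4.5 tool). As a rough stand-in only, the sketch below trains a CART-style tree with scikit-learn, a related but different decision-tree algorithm; the training rows are placeholder values, not project data.

    # Stand-in for the project's C4.5 classifier: a CART decision tree from
    # scikit-learn over the four pose-angle features. All numbers are placeholders.
    from sklearn.tree import DecisionTreeClassifier

    # Feature order assumed: [handTheta, handPhi, elbowTheta, elbowPhi]
    X_train = [
        [1.2, 0.1, 1.0, 0.2],
        [2.8, 1.5, 2.4, 1.3],
        [0.4, -0.8, 0.9, -0.5],
    ]
    y_train = ["Stop", "Freeze", "Cover"]   # labels drawn from the seven gestures

    clf = DecisionTreeClassifier()
    clf.fit(X_train, y_train)

    new_pose = [1.1, 0.2, 1.0, 0.3]         # angles from kinect_listener (placeholder)
    print(clf.predict([new_pose])[0])

An add_gesture-style node could retrain or extend such a model and write it out as a new model file, which would match the on-the-fly gesture addition described on the poster.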