
Multi-scenario Gesture Recognition Using Kinect


Presentation Transcript


1. Multi-scenario Gesture Recognition Using Kinect • Computer Engineering & Computer Science, University of Louisville, Louisville, KY 40214, USA, yi.li@louisville.edu • Professor: Yih-Ran Sheu • Student: Sin-Jhu YE • Student Id: MA020206

  2. Outline • Abstract • Introduction • Literature Review • Methods • Results • Conclusion • References

3. Abstract • Hand gesture recognition (HGR) is an important research topic because some situations require silent communication with sign languages. • A novel method for contactless HGR using the Microsoft Kinect for Xbox is described, and a real-time HGR system is implemented. • Because the Kinect depth sensor is an infrared camera, lighting conditions, signers' skin colors and clothing, and the background have little impact on the performance of the system.

4. Introduction • Microsoft Kinect provides an inexpensive and easy way to support real-time user interaction. • The Kinect skeleton data includes no hand-specific information for gesture recognition, although it does include information about the joints between the hands and arms. • The system detects hand gestures made by the user, compares them with the signs in the chosen set, and displays the matched meaning and the corresponding picture on the screen; a rough sketch of this loop follows.
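
As a rough illustration of the loop above (not the paper's implementation), the matching step can be viewed as a lookup of a recognized gesture label in the currently selected sign set; `sign_sets`, `recognize_gesture`, and the sample entries are hypothetical placeholders.

```python
# Minimal sketch of the detect-compare-display loop; the sign sets, the
# recognizer stub, and all entries below are hypothetical placeholders.

sign_sets = {
    "Popular Gesture": {"thumb-up": "Good!", "open-hand": "Stop"},
    "Numbers": {"one": "1", "two": "2", "three": "3"},
}

def recognize_gesture(depth_frame):
    """Stub: a real recognizer would derive a label from Kinect depth data."""
    return "thumb-up"

def handle_frame(depth_frame, scenario="Popular Gesture"):
    label = recognize_gesture(depth_frame)
    meaning = sign_sets[scenario].get(label)
    if meaning is not None:
        print(f"Matched '{label}': {meaning}")  # display meaning and picture
    return meaning

handle_frame(depth_frame=None)  # prints: Matched 'thumb-up': Good!
```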

5. Literature Review (1/2) • A recent approach to gesture recognition is to incorporate information about object distances, or depths, captured by depth cameras. • One such system could recognize six gestures: open hand, fist, pointing up, L-shape, pointing at the camera, and thumb-up.

6. Literature Review (2/2) • That system was not able to find fingertips; thus it was limited to recognizing only motion gestures such as wave, move up/down, left/right, and forward/backward. • A method to track fingertips and the centers of palms using Kinect was presented. • Another approach was proposed that uses the depth data provided by Kinect to detect fingertips.

7. Methods (1/10) • The system of the present work consists of three main components: • a. Hand Detection • b. Finger Identification (Three-point Alignment Algorithm, Distance-based Finger Identification, Angle-based Finger Identification Algorithm) • c. Gesture Recognition

8. Methods (2/10) • Hand Detection

9. Methods (3/10) • From top to bottom and left to right, scan all pixels on the screen until a pixel belonging to a hand is found; this pixel is set as the starting point. • After the hand contours are detected, the center of each palm is calculated as the center of the largest circle inscribed in the hand contour. A rough sketch of both steps appears below.
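
A minimal sketch of these two steps, assuming the hand has already been segmented into a binary mask (the paper works on Kinect depth data directly; the mask simplification and the use of SciPy's distance transform are assumptions). The palm center falls out of the distance transform: the mask pixel farthest from the background is the center of the largest inscribed circle.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def find_starting_point(hand_mask):
    """Scan top-to-bottom, left-to-right for the first hand pixel."""
    ys, xs = np.nonzero(hand_mask)
    if ys.size == 0:
        return None
    i = np.lexsort((xs, ys))[0]  # smallest row first, then smallest column
    return int(ys[i]), int(xs[i])

def palm_center(hand_mask):
    """Palm center = mask pixel farthest from the background, i.e. the
    center of the largest circle inscribed in the hand contour."""
    dist = distance_transform_edt(hand_mask)
    center = np.unravel_index(np.argmax(dist), dist.shape)
    return center, float(dist[center])  # (row, col) and inscribed radius

# Toy binary mask standing in for a segmented hand region.
mask = np.zeros((9, 9), dtype=bool)
mask[2:7, 2:7] = True
print(find_starting_point(mask))  # (2, 2)
print(palm_center(mask))          # ((4, 4), 3.0)
```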

10. Methods (4/10) • Finger Identification • Three-point Alignment Algorithm • Distance-based Finger Identification • Angle-based Finger Identification Algorithm

11. Methods (5/10) • Finger Identification • Three-point Alignment Algorithm: the red dots are real fingertips, and the green dot is not a fingertip; the yellow dots are used to check for three-point alignment. A minimal collinearity check is sketched below.
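
A minimal sketch of the alignment test, assuming each fingertip candidate is checked against two sampled contour points on either side (the exact choice of the yellow check points follows the paper's figure; the cross-product test and the tolerance here are assumptions):

```python
def collinear(p, q, r, tol=1.0):
    """Three-point alignment: the cross product of (q - p) and (r - p)
    is near zero when p, q, r lie (almost) on one line."""
    cross = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return abs(cross) <= tol

def is_fingertip(candidate, neighbor_a, neighbor_b):
    """A real fingertip (red dot) forms a sharp peak; a candidate aligned
    with its two contour neighbors (green dot) is rejected."""
    return not collinear(candidate, neighbor_a, neighbor_b)

print(is_fingertip((5, 9), (3, 4), (7, 4)))  # True: a peak, not aligned
print(is_fingertip((5, 4), (3, 4), (7, 4)))  # False: the three points align
```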

12. Methods (6/10) • Finger Identification • Distance-based Finger Identification (a sketch follows this list): 1) The first and easiest step is to identify the thumb and the index finger, since the distance between them is the largest among all neighboring fingers. 2) The little finger is identified as the finger farthest away from the thumb. 3) The middle finger is identified as the one closest to the index finger. 4) The remaining one is the ring finger.
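
A sketch of these four steps, under the assumption that the five fingertips arrive in contour order with the thumb at one end of the sequence; the sample coordinates and helper names are illustrative only.

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def identify_fingers(tips):
    """Distance-based identification for an open hand; `tips` holds the
    five fingertip points in contour order (assumption)."""
    # 1) The thumb-index gap is the largest neighboring gap, so the thumb
    #    sits at whichever end of the sequence adjoins the largest gap.
    gaps = [dist(tips[i], tips[i + 1]) for i in range(4)]
    tips = tips if gaps[0] > gaps[-1] else tips[::-1]
    thumb, index = tips[0], tips[1]
    rest = list(tips[2:])
    # 2) Little finger: farthest remaining fingertip from the thumb.
    little = max(rest, key=lambda t: dist(t, thumb))
    rest.remove(little)
    # 3) Middle finger: closest remaining fingertip to the index finger.
    middle = min(rest, key=lambda t: dist(t, index))
    rest.remove(middle)
    # 4) The last one is the ring finger.
    return {"thumb": thumb, "index": index, "middle": middle,
            "ring": rest[0], "little": little}

# Rough open-hand fingertip layout (x, y), thumb far left.
print(identify_fingers([(0, 5), (6, 10), (9, 11), (12, 10), (15, 8)]))
```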

13. Methods (7/10) • Finger Identification • Angle-based Finger Identification Algorithm (sketched below): 1) The first and easiest step is to identify the thumb, since the distance between the thumb and the index finger is the largest among all neighboring fingers. 2) The angles between the thumb and the other fingers are calculated; then the other fingers are sorted by these angles in ascending order. 3) The finger with the smallest angle to the thumb is the index finger. The others, in ascending order of angle, are the middle finger, the ring finger, and the little finger, respectively.
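
A sketch of the angle-based variant, assuming the palm center is known and each finger is represented by the vector from the palm center to its fingertip (a representation consistent with the direction vectors mentioned in the Conclusion, but still an assumption):

```python
import math

def angle_between(v1, v2):
    """Unsigned angle (radians) between two 2-D vectors."""
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.acos(max(-1.0, min(1.0, dot / norm)))

def identify_by_angle(palm, thumb, other_tips):
    """Sort the non-thumb fingertips by their angle to the thumb vector,
    measured from the palm center; smallest angle is the index finger."""
    thumb_vec = (thumb[0] - palm[0], thumb[1] - palm[1])
    def angle(tip):
        return angle_between(thumb_vec, (tip[0] - palm[0], tip[1] - palm[1]))
    ordered = sorted(other_tips, key=angle)  # ascending angle from thumb
    return {"thumb": thumb,
            **dict(zip(["index", "middle", "ring", "little"], ordered))}

palm = (8, 0)
print(identify_by_angle(palm, thumb=(0, 4),
                        other_tips=[(9, 11), (12, 10), (6, 10), (15, 8)]))
```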

14. Methods (8/10) • If the same object still exists in the next frame, with some transformation compared to the previous frame, it is mapped to its counterpart in the previous frame and all of its properties are carried over. A nearest-neighbor sketch of this tracking step follows.
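
A hedged sketch of the frame-to-frame mapping as nearest-neighbor matching on palm centers; the paper does not spell out the matching rule, so the distance threshold and the dictionary layout here are assumptions.

```python
import math

def carry_over(prev_objects, new_centers, max_shift=40.0):
    """Match each hand detected in the new frame to the nearest hand from
    the previous frame; if one is close enough, its properties (finger
    names, labels, ...) are carried over. Threshold is an assumption."""
    tracked = []
    for center in new_centers:
        best = min(prev_objects,
                   key=lambda o: math.dist(o["center"], center),
                   default=None)
        if best is not None and math.dist(best["center"], center) <= max_shift:
            tracked.append({**best, "center": center})  # same object, moved
        else:
            tracked.append({"center": center, "label": None})  # new object
    return tracked

prev = [{"center": (100, 80), "label": "right-hand"}]
print(carry_over(prev, [(104, 83)]))  # label carried over to the new frame
```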

15. Methods (9/10) • Gesture Recognition

16. Methods (10/10) • Gesture Recognition

17. Results (1/2) • HGR accuracy for the Popular Gesture scenario • The average accuracy of each gesture in the Popular Gesture scenario is calculated and shown in Table I. For the “Start Gesture”, the accuracy is nearly 100%, and the names of the fingers are perfectly identified.

18. Results (2/2) • HGR accuracy for the Numbers scenario • The numbers “Three”, “Six”, “Seven”, “Eight”, and “Nine”, each consisting of three fingers, present some difficulty for the system to distinguish among them.

19. Conclusion • The Graham Scan algorithm is used to determine the convex hulls of the hands, and a contour-tracing algorithm is used to detect the hand contours; a minimal convex-hull sketch follows. • Finger names are determined according to their relative positions, and a direction vector is assigned to each finger. • By incorporating hidden Markov models, the system may allow the recognition of continuous gestures that form words or sentences.
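
For reference [6], a compact Graham scan over 2-D points; this is a textbook version, not the paper's implementation.

```python
import math

def cross(o, a, b):
    """z-component of (a - o) x (b - o); > 0 means a left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def graham_scan(points):
    """Convex hull via Graham scan: sort around the lowest point by polar
    angle, then keep only vertices that make left turns."""
    start = min(points, key=lambda p: (p[1], p[0]))  # lowest, then leftmost
    rest = sorted((p for p in points if p != start),
                  key=lambda p: (math.atan2(p[1] - start[1], p[0] - start[0]),
                                 (p[0] - start[0])**2 + (p[1] - start[1])**2))
    hull = [start]
    for p in rest:
        while len(hull) > 1 and cross(hull[-2], hull[-1], p) <= 0:
            hull.pop()  # drop right turns and collinear points
        hull.append(p)
    return hull

pts = [(0, 0), (4, 0), (4, 4), (0, 4), (2, 2), (1, 3)]
print(graham_scan(pts))  # [(0, 0), (4, 0), (4, 4), (0, 4)]
```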

20. References
• [1] M. Van den Bergh and L. Van Gool, “Combining RGB and ToF cameras for real-time 3D hand gesture interaction,” in Applications of Computer Vision (WACV), 2011 IEEE Workshop on, 2011, pp. 66–72.
• [2] C. Yang, Y. Jang, J. Beh, D. Han, and H. Ko, “Gesture recognition using depth-based hand tracking for contactless controller application,” in Consumer Electronics (ICCE), 2012 IEEE International Conference on, 2012, pp. 297–298.
• [3] J. L. Raheja, A. Chaudhary, and K. Singal, “Tracking of fingertips and centers of palm using KINECT,” 2011 Third International Conference on Computational Intelligence, Modelling and Simulation, pp. 248–252, 2011.
• [4] G.-F. He, S.-K. Kang, W.-C. Song, and S.-T. Jung, “Real-time gesture recognition using 3D depth camera,” in Software Engineering and Service Science (ICSESS), 2011 IEEE 2nd International Conference on, 2011, pp. 187–190.
• [5] S. Stegmueller, “Hand and finger tracking with Kinect depth data,” 2011, http://candescentnui.codeplex.com.
• [6] R. L. Graham, “An efficient algorithm for determining the convex hull of a finite planar set,” Information Processing Letters, vol. 1, no. 4, pp. 132–133, 1972.

  21. Q&A
