Computer Vision: Gesture Recognition from Images Joshua R. New Knowledge Systems Laboratory Jacksonville State University
Outline • Terminology • Current Research and Uses • Kjeldsen’s PhD Thesis • Implementation Overview • Implementation Analysis • Future Directions
Terminology Image Processing - Computer manipulation of images. Some of the many algorithms used in image processing include convolution (on which many others are based), edge detection, and contrast enhancement. Computer Vision - A branch of artificial intelligence and image processing concerned with computer processing of images from the real world. Computer vision typically requires a combination of low level image processing to enhance the image quality (e.g. remove noise, increase contrast) and higher level pattern recognition and image understanding to recognize features present in the image.
Current Research • Capture images from a camera • Process images to extract features • Use those features to train a learning system to recognize the gesture • Use the gesture as a meaningful input into a system • More information located at: • http://www.cybernet.com/~ccohen/
Current Research Example • Starner and Pentland • 2 hands segmented • Hand shape from a bounding ellipse • Eight element feature vector • Recognition using Hidden Markov Models
Current Uses • Sign Stream (released demo for MacOS) • Database tool for analysis of linguistic data captured on video • Developed at Boston University with funding from ASL Linguistic Research Project and NSF • http://www.bu.edu/asllrp/SignStream/
Current Uses • Recursive Models of Human Motion (Smart Desk, MIT) • Models the constraints by which we move • Visually-guided gestural interaction, animation, and face recognition • Stereoscopic vision for 3D modeling • http://vismod.www.media.mit.edu/vismod/demos/smartdesk/
Kjeldsen’s PhD thesis • Application • Gesture recognition as a system interface to augment that of the mouse • Menu selection, window move, and resize • Input: 200x300 image • Calibration of user’s hand
Kjeldsen’s PhD thesis • Image split into HSI channels (I = Intensity, also called Lightness or Value) • Segmentation via the largest connected component • Eroded to remove edge pixels • Gray-scale values sent to the learning system
Kjeldsen’s PhD thesis • Learning System – Backprop network • 1014 input nodes (one for each pixel) • 20 hidden nodes • 1 output node for each classification • 40 images of each pose • Results: • Correct classification on 90-96% of test images
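The layer sizes above can be sketched as a minimal NumPy forward pass. This is an illustrative assumption, not Kjeldsen's actual network: the weights here are random and untrained, and the number of output classes (5) is invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sizes from the thesis: 1014 inputs (one per segmented-hand pixel),
# 20 hidden units, one output per pose class. 5 classes is an
# assumed example value; the thesis does not fix it here.
n_in, n_hidden, n_out = 1014, 20, 5

W1 = rng.normal(scale=0.01, size=(n_in, n_hidden))
W2 = rng.normal(scale=0.01, size=(n_hidden, n_out))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(pixels):
    """One forward pass; pixels is a length-1014 gray-scale vector."""
    hidden = sigmoid(pixels @ W1)   # 1014 -> 20
    return sigmoid(hidden @ W2)     # 20 -> n_out, one score per pose

scores = forward(rng.random(n_in))
```

Training would adjust W1 and W2 by backpropagation over the 40 images per pose; only the forward structure is shown here.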
Implementation Overview • System: • 1.33 GHz AMD Athlon • OpenCV and IPL libraries (from Intel) • Input: • Two 640x480 images, saturation channel • Maximum hand size in the x and y orientations, in pixels • Output: • Rough estimate of movement • Refined estimate of movement • Number of fingers being held up • Rough orientation
Implementation Overview • Chronological order of system: • Saturation channel extraction • Threshold Saturation channel • Calculate Center of Mass (CoM) • Reduce Noise • Remove arm from hand • Calculate refined-CoM • Calculate orientation • Count the number of fingers
Implementation Analysis 1. Saturation channel extraction: a) Digital camera, saved as JPGs b) JPGs converted to 640x480 PPMs c) Saturation channels extracted into PGMs [Figure: original image alongside its Hue, Lightness, and Saturation channels]
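The original pipeline extracts the saturation channel with Intel's OpenCV/IPL libraries; as a stand-in, here is a minimal NumPy sketch using the common HSV-style definition of saturation, S = (max − min) / max per pixel. The function name is invented for illustration.

```python
import numpy as np

def saturation_channel(rgb):
    """Extract an HSV-style saturation channel from an RGB image.

    rgb: uint8 array of shape (H, W, 3).
    Returns a uint8 array of shape (H, W) with values in 0..255.
    """
    rgb = rgb.astype(np.float64)
    cmax = rgb.max(axis=2)
    cmin = rgb.min(axis=2)
    # Saturation is defined as 0 wherever the pixel is black (cmax == 0).
    sat = np.where(cmax > 0, (cmax - cmin) / np.maximum(cmax, 1e-9), 0.0)
    return (sat * 255).astype(np.uint8)
```

Skin tends to be well saturated relative to typical backgrounds, which is why the pipeline works on this channel rather than raw intensity.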
Implementation Analysis 2. Threshold Saturation channel: a) Threshold value – 50 (values range from 0 to 255) b) For each pixel: PixelValue = (PixelValue ≥ 50) ? 128 : 0
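Step 2 is a one-line vectorized operation in NumPy; this sketch mirrors the rule above (the function name is invented):

```python
import numpy as np

def threshold_saturation(sat, t=50):
    """Binarize a saturation channel: pixels >= t become 128, others 0,
    matching the thresholding rule in step 2."""
    return np.where(sat >= t, 128, 0).astype(np.uint8)
```

Using 128 rather than 255 for foreground leaves headroom for the later relabeling passes (192 and 254/255 in steps 4-6).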
Implementation Analysis 3. Calculate Center of Mass (CoM): a) Count number of 128-valued pixels b) Sum x-values and y-values of those pixels c) Divide each sum by the number of pixels • a) 0th moment of an image: M00 = Σx Σy I(x, y) • b) 1st moment for x and y of an image, respectively: M10 = Σx Σy x·I(x, y), M01 = Σx Σy y·I(x, y) • c) Center of Mass (location of centroid): (x̄, ȳ), where x̄ = M10 / M00 and ȳ = M01 / M00
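The moment formulas above reduce to counting and averaging when I(x, y) is a binary mask. A minimal sketch (function name invented for illustration):

```python
import numpy as np

def center_of_mass(binary, fg=128):
    """Centroid of all foreground pixels via image moments.

    For a binary mask, M00 is the foreground pixel count and
    M10, M01 are the sums of x and y over those pixels, so the
    centroid is just the mean foreground coordinate.
    Returns (x_bar, y_bar), or None if there is no foreground.
    """
    ys, xs = np.nonzero(binary == fg)
    m00 = len(xs)                 # 0th moment
    if m00 == 0:
        return None
    return xs.sum() / m00, ys.sum() / m00   # M10/M00, M01/M00
```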
Implementation Analysis 4. Reduce Noise: FloodFill at the computed CoM (128-valued pixels become 192)
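The noise-reduction idea is that only the blob connected to the CoM is relabeled 128 → 192, so disconnected speckles keep their old value and drop out of later steps. A minimal 4-connected flood fill sketch, assuming a queue-based traversal (the original likely used an OpenCV fill routine):

```python
from collections import deque
import numpy as np

def flood_fill(img, seed, old=128, new=192):
    """4-connected flood fill starting at seed = (x, y).

    Relabels the connected component containing the seed from
    `old` to `new`; disconnected noise pixels keep the old value
    and can be ignored downstream.
    """
    h, w = img.shape
    x0, y0 = seed
    if img[y0, x0] != old:
        return img                # seed is not on the hand blob
    q = deque([(x0, y0)])
    img[y0, x0] = new
    while q:
        x, y = q.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < w and 0 <= ny < h and img[ny, nx] == old:
                img[ny, nx] = new
                q.append((nx, ny))
    return img
```

The same routine serves steps 5 and 6 with different old/new values (192 → 254, then thresholding 254 → 255).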
Implementation Analysis • 5. Remove arm from hand • Find top left of bounding box • Apply border for bounding box from calibration measure • FloodFill, 192 to 254
Implementation Analysis • 6. Calculate refined-CoM (rCoM): • Threshold, 254 to 255 • Compute CoM as before
Implementation Analysis 7. Orientation: a) 0th moment of an image: M00 = Σx Σy I(x, y) b) 1st moment for x and y of an image, respectively: M10 = Σx Σy x·I(x, y), M01 = Σx Σy y·I(x, y) c) 2nd moment for x and y of an image, respectively: M20 = Σx Σy x²·I(x, y), M02 = Σx Σy y²·I(x, y), M11 = Σx Σy x·y·I(x, y) d) Orientation of image major axis: θ = ½ · arctan(2·μ11 / (μ20 − μ02)), where μ20 = M20/M00 − x̄², μ02 = M02/M00 − ȳ², μ11 = M11/M00 − x̄·ȳ
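For a binary mask, the central second moments reduce to variances and covariance of the foreground coordinates, and the major-axis formula above becomes a few lines of NumPy. A sketch assuming a non-empty 255-valued foreground (function name invented):

```python
import numpy as np

def orientation(binary, fg=255):
    """Major-axis angle (radians) from central second moments.

    theta = 0.5 * arctan2(2*mu11, mu20 - mu02); arctan2 is used
    instead of arctan to keep the correct quadrant when
    mu20 - mu02 is negative.
    """
    ys, xs = np.nonzero(binary == fg)
    xb, yb = xs.mean(), ys.mean()          # centroid (x_bar, y_bar)
    mu20 = ((xs - xb) ** 2).sum()          # central 2nd moments
    mu02 = ((ys - yb) ** 2).sum()
    mu11 = ((xs - xb) * (ys - yb)).sum()
    return 0.5 * np.arctan2(2 * mu11, mu20 - mu02)
```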
Implementation Analysis 8. Count the number of fingers (via FingerCountGivenX) Function inputs: a) Pointer to Image Data b) rCoM c) Radius = .17*HandSizeX + .17*HandSizeY d) Starting Location (x or y, call appropriate function) e) Ending Location (x or y, call appropriate function) f) White Pixel Counter g) Black Pixel Counter h) Finger Counter
Implementation Analysis • 8. Count the number of fingers: • 2 similar functions – start/end location in x or y • After all previous steps, the finger-finding function sweeps out an arc, counting the number of white and black pixels as it progresses • A finger in the current system is defined as any run of 10+ white pixels separated by runs of 3+ black pixels (salt-and-pepper tolerance), minus 1 for the hand itself
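The arc-sweep rule above can be sketched as run counting along a sampled circle. This is a simplified stand-in for the FingerCountGivenX/Y pair, not the thesis code: it sweeps a full circle around the refined CoM, and the function name and sampling density are assumptions.

```python
import numpy as np

def count_fingers(binary, center, radius, fg=255,
                  min_white=10, min_black=3):
    """Count white runs (candidate fingers) along a circular sweep.

    A run of >= min_white foreground samples counts as one finger;
    black gaps shorter than min_black samples are treated as
    salt-and-pepper noise and do not break a run. One run is
    subtracted for the hand/wrist itself, per step 8.
    """
    h, w = binary.shape
    cx, cy = center
    # Sample roughly one point per pixel of circumference.
    angles = np.linspace(0, 2 * np.pi, int(2 * np.pi * radius) + 1)
    samples = []
    for t in angles:
        x = int(round(cx + radius * np.cos(t)))
        y = int(round(cy + radius * np.sin(t)))
        if 0 <= x < w and 0 <= y < h:
            samples.append(binary[y, x] == fg)
    fingers, white_run, black_run = 0, 0, 0
    for white in samples:
        if white:
            white_run += 1
            black_run = 0
            if white_run == min_white:   # count each run exactly once
                fingers += 1
        else:
            black_run += 1
            if black_run >= min_black:   # a real gap ends the run
                white_run = 0
    return max(fingers - 1, 0)           # subtract 1 for the hand itself
```

With Radius = 0.17·HandSizeX + 0.17·HandSizeY from the calibration step, the circle passes through the fingers but stays outside the palm.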
Implementation Analysis 8. Count the number of fingers:
Implementation Analysis • 8. Count the number of fingers: • Illustration of noise tolerance
Implementation Analysis System Input: System Output:
Implementation Analysis • System Runtime: • Real time requires 30 fps • Current time – 16.5 ms per frame (without reading or writing) • Current processing capability on a 1.33 GHz Athlon – 60 fps
Future Directions • Optimization • Orientation for Hand Registration • New Finger Counting Approach • Learning System For additional information, please visit http://ksl.jsu.edu.