
Human Motion Analysis

Human Motion Analysis. Hyoung-Gon Lee, MAI Lab. Seminar, 2004.4.23. Based on “Recent developments in human motion analysis” by Liang Wang, Weiming Hu, Tieniu Tan, National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100080, China.


Presentation Transcript


  1. Human Motion Analysis Hyoung-Gon Lee MAI Lab. Seminar 2004.4.23

  2. “Recent developments in human motion analysis” Liang Wang, Weiming Hu, Tieniu Tan, National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100080, China (Received 27 November 2001; accepted 13 May 2002) Pattern Recognition 36 (2003) 585-601

  3. Introduction • One of the most active research topics in computer vision. • Wide spectrum of promising applications: virtual reality, smart surveillance, perceptual interfaces, content-based image storage and retrieval, video conferencing, athletic performance analysis, etc. • Several large research projects: • VSAM** by DARPA* • W4: the real-time visual surveillance system • “Television control by hand gestures” by IBM & Microsoft • Featured in a number of leading international journals of computer vision. *DARPA: Defense Advanced Research Projects Agency **VSAM: Video Surveillance and Monitoring

  4. Application I – Visual surveillance • Security-sensitive areas such as banks, department stores, and parking lots. • Real-time analysis is needed, compared to traditional video archive systems. • Access control: face & gait recognition techniques • Other applications • measuring traffic flow • monitoring pedestrian congestion in public spaces • compiling consumer demographics in shopping malls

  5. Application II – Advanced user interface • Speech understanding has already been widely used in early HCI, but it is subject to environmental noise and distance. • Vision serves as a complement to speech recognition: gestures, body poses, facial expressions, etc. • Future machines must be able to independently sense the surrounding environment. • Other applications • sign-language translation • gesture-driven controls • signaling in high-noise environments

  6. Application III – Motion-based diagnosis and identification • Segment the various parts of the human body and recover the underlying 3-D body structure. • Interpreting video sequences automatically using content-based indexing will save tremendous human effort in sorting and retrieving images or video in huge databases. • Traditional gait analysis • provides medical diagnosis and treatment support • is also used for personal identification • Other applications • personalized training systems • medical diagnostics of orthopedic patients • choreography of dance and ballet

  7. A general framework for human motion analysis • Human Detection (low-level vision): Motion Segmentation and Object Classification. Human detection aims at segmenting regions corresponding to people from the rest of an image. • Human Tracking (intermediate-level vision): establishing coherent relations of image features between frames with respect to position, velocity, shape, texture, etc. • Behavior Understanding (high-level vision): Action Recognition and Semantic Description. The goal is to analyze and recognize human motion patterns, and to produce high-level descriptions of actions and interactions.
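
To make the three levels concrete, here is a minimal Python sketch of how the stages could be chained. Every function name, the stubbed logic, and the threshold are illustrative placeholders; the paper only prescribes the stages themselves.

```python
import numpy as np

def motion_segmentation(frame, background):
    """Low-level vision: binary mask of pixels that moved (simple difference)."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return (diff > 25).astype(np.uint8)

def object_classification(mask):
    """Low-level vision: keep only blobs that look like people (stub)."""
    return mask

def human_tracking(prev_state, mask):
    """Intermediate-level vision: associate detections across frames (stub)."""
    return mask if prev_state is None else prev_state

def behavior_understanding(history):
    """High-level vision: map a track history to an action label (stub)."""
    return "unknown action"

def analyze(frames, background):
    state, history = None, []
    for frame in frames:
        mask = object_classification(motion_segmentation(frame, background))
        state = human_tracking(state, mask)
        history.append(state)
    return behavior_understanding(history)
```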

  8. Detection I – Motion segmentation • Motion segmentation aims at detecting regions corresponding to moving objects such as vehicles and people in natural scenes. • However, changes from weather, illumination, shadow, and repetitive motion from clutter make motion segmentation difficult to process quickly and reliably. • At present, most segmentation methods use either temporal or spatial information of the images. Several conventional approaches to motion segmentation are outlined in the following: • background subtraction, statistical methods, temporal differencing, optical flow

  9. Detection I – Motion segmentation 1. Background subtraction - assumes a relatively static background - detects moving regions by differencing the current image and a reference background image in a pixel-by-pixel fashion. - It is extremely sensitive to changes of dynamic scenes due to lighting and extraneous events. 2. Statistical methods - use the characteristics of individual pixels or groups of pixels to construct more advanced background models, and the statistics of the background can be updated dynamically during processing. - This approach is becoming increasingly popular due to its robustness to noise, shadow, changes of lighting conditions, etc.
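
As a rough illustration (not the paper's own code), pixel-by-pixel background subtraction and a simple statistical variant that maintains the background as a running average might look like this in Python/NumPy; the threshold and learning rate are arbitrary assumptions:

```python
import numpy as np

def background_subtraction(frame, background, threshold=30):
    """Flag pixels whose gray value differs enough from the reference background."""
    diff = np.abs(frame.astype(np.float32) - background.astype(np.float32))
    return diff > threshold                      # boolean foreground mask

def update_background(background, frame, alpha=0.05):
    """Statistical flavor: blend the new frame into the background model so
    slow lighting changes are absorbed instead of being flagged as motion."""
    return (1.0 - alpha) * background + alpha * frame.astype(np.float32)
```

In practice the background would be initialized from a few empty frames and updated only outside detected foreground regions.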

  10. Detection I – Motion segmentation 3. Temporal differencing - makes use of pixel-wise differences between two or three consecutive frames in an image sequence to extract moving regions. - It is very adaptive to dynamic environments, but generally does a poor job of extracting all of the relevant feature pixels. 4. Optical flow - uses characteristics of flow vectors of moving objects over time to detect change regions in an image sequence. - Most flow computation methods are computationally complex and very sensitive to noise.
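
Both ideas can be sketched with OpenCV (frame names and thresholds here are assumptions, not from the paper): two-frame differencing, and dense Farneback optical flow whose per-pixel magnitude can then be thresholded to find change regions.

```python
import cv2
import numpy as np

def temporal_difference(prev_gray, curr_gray, threshold=25):
    """Two-frame differencing: cheap and adaptive, but leaves holes inside objects."""
    diff = cv2.absdiff(curr_gray, prev_gray)
    _, mask = cv2.threshold(diff, threshold, 255, cv2.THRESH_BINARY)
    return mask

def flow_magnitude(prev_gray, curr_gray):
    """Dense optical flow (Farneback); large magnitudes indicate moving regions."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    return np.linalg.norm(flow, axis=2)
```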

  11. Detection II – Object classification • The purpose of moving object classification is to precisely extract the region corresponding to people from all moving blobs obtained by the motion segmentation methods discussed above. 1. Shape-based classification - using different descriptions of shape information - silhouette-based shape representation 2. Motion-based classification - using the periodic property of non-rigid motion - similarity-based technique
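
One common shape cue (shown only as an illustrative example, not the paper's specific method) is blob dispersedness, perimeter squared over area, which tends to be higher for articulated human silhouettes than for compact objects such as vehicles. A minimal OpenCV sketch with arbitrary thresholds:

```python
import cv2

def classify_blobs(mask, min_area=200, dispersedness_threshold=60.0):
    """Label each moving blob as 'human' or 'other' from simple shape cues."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    labels = []
    for c in contours:
        area = cv2.contourArea(c)
        if area < min_area:                      # drop tiny noise blobs
            continue
        dispersedness = cv2.arcLength(c, True) ** 2 / area
        labels.append("human" if dispersedness > dispersedness_threshold else "other")
    return labels
```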

  12. Tracking - To prepare data for pose estimation and action recognition 1. Model-based tracking: - Stick figure • a combination of line segments linked by joints • recognition of the whole figure - 2-D contour • analogous to 2-D ribbons or blobs - Volumetric models • more detail using 3-D models
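
As a toy illustration of the stick-figure representation (joint names and coordinates are invented for this sketch), a model-based tracker fits a small set of joints plus the line segments that connect them to each frame:

```python
# Hypothetical stick-figure model: joints and the segments (bones) linking them.
STICK_SEGMENTS = [
    ("head", "neck"), ("neck", "torso"),
    ("neck", "l_hand"), ("neck", "r_hand"),
    ("torso", "l_foot"), ("torso", "r_foot"),
]

# One pose is just a mapping from joint name to 2-D image coordinates.
pose = {"head": (50, 10), "neck": (50, 30), "torso": (50, 70),
        "l_hand": (20, 60), "r_hand": (80, 60),
        "l_foot": (35, 120), "r_foot": (65, 120)}

def segment_endpoints(pose):
    """Return the (start, end) pixel pairs to draw or fit against an image."""
    return [(pose[a], pose[b]) for a, b in STICK_SEGMENTS]
```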

  13. Tracking 2. Region-based tracking - The idea here is to identify a connected region associated with each moving object in an image, and then track it over time using a cross-correlation measure. - Difficulties: long shadows (connecting up blobs that should have been associated with separate people), congested situations (people partially occlude one another instead of being spatially isolated).
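
A minimal sketch of the cross-correlation step, assuming an appearance template cut from an earlier frame; it uses OpenCV's normalized cross-correlation rather than any particular method from the surveyed papers:

```python
import cv2

def track_region(frame_gray, template):
    """Find the position in the new frame that best matches the person's
    appearance template, via normalized cross-correlation."""
    scores = cv2.matchTemplate(frame_gray, template, cv2.TM_CCOEFF_NORMED)
    _, best_score, _, best_loc = cv2.minMaxLoc(scores)
    return best_loc, best_score   # top-left corner of the best match and its score
```

A low best_score can be used to flag occlusion or a lost track.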

  14. Tracking 3. Active-contour-based tracking - The idea is to have a representation of the bounding contour of the object and keep dynamically updating it over time. - Reduces computational complexity compared to region-based tracking, but requires a good initial fit. 4. Feature-based tracking - uses sub-features such as distinguishable points or lines on the object to realize the tracking task. - Benefit: even in the presence of partial occlusion, some of the sub-features of the tracked objects remain visible.
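
For feature-based tracking, one standard sparse approach (given here only as an assumed example) tracks corner features with pyramidal Lucas-Kanade optical flow, so whichever sub-features remain visible keep the track alive under partial occlusion:

```python
import cv2

def track_features(prev_gray, curr_gray, prev_pts=None):
    """Track sparse corner features between consecutive frames."""
    if prev_pts is None:
        prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100,
                                           qualityLevel=0.01, minDistance=7)
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                   prev_pts, None)
    return next_pts[status.ravel() == 1]   # keep only successfully tracked points
```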

  15. Behavior understanding • Action recognition involved in behavior understanding may be thought of as a time-varying data matching problem. 1. General Techniques - Dynamic Time Warping - a template-based dynamic programming matching technique - Hidden Markov Models (HMMs) - using a finite set of output probability distributions - Neural networks
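
A minimal dynamic time warping sketch for matching two time-varying feature sequences of different lengths; the Euclidean per-frame distance is an assumption, and any frame descriptor (joint angles, silhouette features, etc.) could be plugged in:

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Alignment cost between two sequences, tolerant to speed variations."""
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(np.asarray(seq_a[i - 1]) - np.asarray(seq_b[j - 1]))
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]
```

A query sequence would then be labeled with the action whose stored reference sequence gives the smallest dtw_distance.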

  16. Behavior understanding 2. Action Recognition • Template matching • converts an image sequence into a static shape pattern • compares it to pre-stored action prototypes during recognition • low computational complexity and simple implementation, but more susceptible to noise and to variations in the duration of movements. • State-space approaches • define each static posture as a ‘state’ • use certain probabilities to generate connections between states
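
One simple way to convert an image sequence into a static shape pattern is a motion-energy image (the union of the per-frame foreground masks); recognition then reduces to comparing it against stored prototypes. The mean-squared-difference score below is an illustrative choice, not the paper's prescription:

```python
import numpy as np

def motion_energy_image(masks):
    """Collapse a sequence of binary foreground masks into one static pattern."""
    return np.clip(np.sum(masks, axis=0), 0, 1).astype(np.float32)

def recognize(mei, prototypes):
    """Return the name of the stored prototype closest to the observed pattern."""
    scores = {name: float(np.mean((mei - proto) ** 2))
              for name, proto in prototypes.items()}
    return min(scores, key=scores.get)
```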

  17. Behavior understanding 3. Semantic Description • applying concepts of natural languages to vision systems • reasonably choosing a group of motion words or short expressions to report the behaviors of the moving objects in natural scenes • text-based description • description of human motion is more complex

  18. Discussion • Further Research • Fast and Accurate Motion Segmentation • Occlusion Handling • 3-D Modeling and Tracking • Use of multiple cameras • Action Understanding • Performance Evaluation
