
Human Action Recognition Week 2



Presentation Transcript


  1. Taylor Rassmann Human Action Recognition Week 2

  2. Action Recognition from One Example • Action is often distinguished from activity in the sense that action is an individual atomic unit of activity. In particular, human action refers to physical body motion.

  3. Action Recognition from One Example • Novel feature representation derived from space-time local steering regression kernels (3D LSKs) • This feature representation can capture the underlying data even in instances of high distortion and data uncertainty

  4. Action Recognition from One Example • This is achieved by measuring the likeness of a voxel to its surroundings, based on computing distances between points • These distances are measured along shortest paths (geodesics) on a manifold defined by the embedding of the video data in 4D • For better classification performance, space-time saliency detection is applied to longer videos to crop them to a shorter action clip
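The geodesic idea above can be sketched as a shortest-path search over voxels, where each step's length is the Euclidean distance in a 4D embedding (t, y, x, intensity). This is a minimal illustrative sketch, not the slides' actual implementation: the function name, the 6-neighborhood, and the `alpha` intensity weight are all assumptions.

```python
import heapq
import numpy as np

def geodesic_distance(video, src, dst, alpha=1.0):
    """Shortest-path (geodesic) distance between two voxels on the manifold
    given by embedding the clip in 4D as (t, y, x, alpha * intensity).
    Hypothetical sketch; `alpha` weights intensity against space-time steps."""
    T, H, W = video.shape
    dist = np.full(video.shape, np.inf)
    dist[src] = 0.0
    heap = [(0.0, src)]
    while heap:
        d, (t, y, x) = heapq.heappop(heap)
        if (t, y, x) == dst:
            return d
        if d > dist[t, y, x]:
            continue  # stale queue entry
        for dt, dy, dx in [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)]:
            nt, ny, nx = t + dt, y + dy, x + dx
            if 0 <= nt < T and 0 <= ny < H and 0 <= nx < W:
                # Euclidean step length in the 4D embedding:
                # unit space-time move plus the weighted intensity change
                step = np.sqrt(1.0 + (alpha * (video[nt, ny, nx]
                                               - video[t, y, x])) ** 2)
                if d + step < dist[nt, ny, nx]:
                    dist[nt, ny, nx] = d + step
                    heapq.heappush(heap, (d + step, (nt, ny, nx)))
    return dist[dst]
```

On a constant-intensity clip the geodesic reduces to the Manhattan path length, since every step has length 1.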

  5. Action Recognition from One Example • The key idea behind 3D LSKs is to robustly obtain local space-time geometric structures by analyzing the photometric (voxel value) differences based on estimated space-time gradients, and use this structure information to determine the shape and size of a canonical kernel (descriptor).
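A minimal sketch of the 3D LSK idea described above, assuming the standard steering-kernel form in which a regularized covariance of space-time gradients shapes a Gaussian weight over a local window; the function name, window radius, smoothing parameter `h`, and regularizer are hypothetical choices, not the author's code.

```python
import numpy as np

def lsk_3d(video, center, radius=2, h=1.0, reg=1e-3):
    """Sketch of a 3D local steering kernel (descriptor) at one voxel.

    video  : 3D array (t, y, x) of voxel intensities
    center : (t, y, x) voxel at which the kernel is computed
    """
    # Space-time gradients of the whole clip (photometric differences)
    gt, gy, gx = np.gradient(video.astype(float))

    t0, y0, x0 = center
    sl = (slice(t0 - radius, t0 + radius + 1),
          slice(y0 - radius, y0 + radius + 1),
          slice(x0 - radius, x0 + radius + 1))

    # Stack the local gradients into an N x 3 matrix
    G = np.stack([g[sl].ravel() for g in (gt, gy, gx)], axis=1)

    # Regularized gradient covariance encodes the local space-time structure;
    # it determines the shape and size of the kernel
    C = G.T @ G / len(G) + reg * np.eye(3)

    # Kernel weight for each offset in the window
    side = np.arange(-radius, radius + 1)
    offsets = np.stack(np.meshgrid(side, side, side, indexing="ij"),
                       axis=-1).reshape(-1, 3)
    quad = np.einsum("ni,ij,nj->n", offsets, C, offsets)
    w = np.sqrt(np.linalg.det(C)) * np.exp(-quad / (2 * h ** 2))

    # Normalize so the descriptor sums to one
    n = 2 * radius + 1
    return (w / w.sum()).reshape(n, n, n)
```

Because the covariance steers the Gaussian, the kernel elongates along directions of low gradient variation, which is what makes the descriptor robust to photometric distortion.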

  6. Approach Taken • Use of pairwise distances between salient regions • Saliency extraction completed for a few actions from the KTH dataset
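The pairwise-distance step above can be sketched as a standard distance matrix over region descriptors. The slides do not name the metric, so Euclidean distance is assumed here for illustration.

```python
import numpy as np

def pairwise_distances(features):
    """Pairwise Euclidean distances between salient-region descriptors.

    features : N x D array, one descriptor vector per salient region.
    Returns an N x N symmetric distance matrix with zeros on the diagonal.
    """
    # ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b  (vectorized over all pairs)
    sq = np.sum(features ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * features @ features.T
    # Clamp tiny negative values caused by floating-point round-off
    return np.sqrt(np.maximum(d2, 0.0))
```

The vectorized expansion avoids an explicit double loop, which matters once every salient region in a clip contributes a descriptor.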

  7. Current Work • Code for saliency statistics in progress • This will implement the distance metrics for pairwise distances between features • Possible use of thresholding on some of the salient regions to prevent merging • This will help acquire different parts of features • Note: the threshold must be chosen carefully, neither too high nor too low, so that important data is not eliminated
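The thresholding caveat above can be hedged in code by deriving the threshold from the data rather than fixing it by hand; a quantile threshold is one simple way to avoid values that are too high (dropping important regions) or too low (merging neighboring regions). The function name and the default quantile are illustrative assumptions.

```python
import numpy as np

def threshold_saliency(saliency, quantile=0.9):
    """Binarize a saliency map at a data-driven threshold (sketch).

    saliency : array of per-voxel (or per-pixel) saliency scores.
    quantile : keep roughly the top (1 - quantile) fraction of scores,
               so the cutoff adapts to each clip instead of being fixed.
    """
    thresh = np.quantile(saliency, quantile)
    return saliency >= thresh
```

A per-clip quantile keeps the kept fraction roughly constant across videos with different overall contrast, which is one way to address the "not too high, not too low" concern.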
