Motion Segmentation at Any Speed
Shrinivas Pundlik and Stan Birchfield
Department of Electrical and Computer Engineering
Clemson University, Clemson, SC, USA
The problem of motion segmentation
• Carve an image according to motion vectors
• Gestalt theory:
• Focus on well-organized patterns rather than disparate parts
• "Grouping" by common fate - the key idea behind visual perception
• But motion is inherently differential!
Previous approaches
• Extraction of motion layers: Wang and Adelson 1994; Ayer and Sawhney 1995; Xiao and Shah 2005
• Eigenvector based: Shi and Malik 1998
• Multi-body factorization: Ke and Kanade 2001; Vidal and Sastry 2003
• Rank constraint: Rothganger, Lazebnik, Schmid and Ponce 2004
• Object level grouping: Sivic, Schaffalitzky and Zisserman 2004
• Functional: Cremers and Soatto 2005
Traditional approach
[Figure: two frames stacked into a spatiotemporal volume along the time axis]
• Two unanswered questions:
• What are the limitations of processing a block of frames?
• How to integrate information over time?
Batch processing
[Figure: displacement x versus time t for fast, medium, and slow features, with a motion threshold and a fixed time window]
• Time window needed to exceed the threshold is dependent upon speed
Incremental processing
[Figure: displacement x versus time t for fast, medium, slow, and crawling features, each eventually crossing the same motion threshold]
• Independent of speed; dependent only upon the amount of information
Algorithm overview • Detect and track Kanade-Lucas-Tomasi feature points • Accumulate groups using region growing (neighbors from Delaunay triangulation) • Retain consistent groups • Maintain groups over time
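The first two steps can be approximated with off-the-shelf tools. The sketch below is not the authors' implementation: it uses OpenCV's pyramidal Lucas-Kanade tracker as a stand-in for the KLT tracker and SciPy's Delaunay triangulation for the neighbor graph, with illustrative parameter values.

```python
import cv2
import numpy as np
from scipy.spatial import Delaunay

def track_and_triangulate(prev_gray, curr_gray, max_corners=500):
    """Detect corners in the previous frame, track them into the current frame,
    and build a Delaunay triangulation over the surviving features."""
    pts = cv2.goodFeaturesToTrack(prev_gray, max_corners, 0.01, 7)              # corner detection
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)  # sparse LK tracking
    ok = status.ravel() == 1
    p0 = pts[ok].reshape(-1, 2)
    p1 = nxt[ok].reshape(-1, 2)
    flows = p1 - p0                    # per-feature motion vectors between the two frames
    tri = Delaunay(p0)                 # neighbors = features sharing a triangle edge
    return p0, flows, tri.simplices
```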
Region growing
Between two frames,
• Repeat
• Randomly select seed feature
• Fit motion model to neighbors
• Repeat until group does not change:
• Discard all features except the one near the centroid
• Grow group by recursively including neighboring features with similar motion
• Update the motion model
• Until all features have been considered
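A minimal Python sketch of this loop, assuming feature positions, per-feature motion vectors, and Delaunay adjacency lists are already available. It uses a simple translational (mean-flow) motion model and a hypothetical similarity tolerance tol; the motion model and thresholds in the actual method may differ.

```python
import numpy as np

def grow_group(points, flows, nbrs, seed, tol=1.0, max_iters=10):
    """points: (N, 2) feature positions; flows: (N, 2) motion vectors;
    nbrs: per-feature adjacency lists (e.g., from the Delaunay triangulation)."""
    group = {seed} | set(nbrs[seed])
    model = flows[list(group)].mean(axis=0)          # fit motion model to seed + neighbors
    for _ in range(max_iters):                       # repeat until the group stops changing
        centroid = points[list(group)].mean(axis=0)
        start = min(group, key=lambda i: np.linalg.norm(points[i] - centroid))  # keep feature near centroid
        new_group, frontier = {start}, [start]
        while frontier:                              # grow by recursively adding similar-motion neighbors
            i = frontier.pop()
            for j in nbrs[i]:
                if j not in new_group and np.linalg.norm(flows[j] - model) < tol:
                    new_group.add(j)
                    frontier.append(j)
        model = flows[list(new_group)].mean(axis=0)  # update the motion model
        if new_group == group:
            break
        group = new_group
    return group
```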
Region growing for a single group Choice of seed heavily influences resulting group
Finding consistent groups
• Consistency check: features that are always grouped together, no matter the seed point
[Figure: co-grouping matrices over features a, b, c, d for two seed points are summed; entries equal to the number of seeds mark features that are always grouped together]
• In practice, we use 7 seed points
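A sketch of the consistency check, assuming each seed point has already produced a full grouping of the features. It accumulates one co-grouping indicator matrix per seed (as in the summed matrices above) and keeps feature sets whose entries equal the number of seeds; the group-extraction step is deliberately simplified.

```python
import numpy as np

def consistent_groups(groupings, n_features):
    """groupings: one grouping per seed point, each a list of sets of feature indices."""
    n_seeds = len(groupings)
    together = np.zeros((n_features, n_features), dtype=int)
    for groups in groupings:
        for g in groups:
            idx = np.array(sorted(g))
            together[np.ix_(idx, idx)] += 1           # co-grouping indicator for this seed
    consistent, assigned = [], set()
    for i in range(n_features):
        if i in assigned or together[i, i] < n_seeds: # feature i was not grouped under every seed
            continue
        members = set(np.flatnonzero(together[i] == n_seeds).tolist())
        if len(members) > 1:                          # features always grouped together with i
            consistent.append(members)
            assigned |= members
    return consistent
```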
Single consistent group
[Figure: groups grown from seed points 1, 2, and 3, and the resulting single consistent group]
Multiple consistent groups
[Figure: groups grown from seed points 1, 2, and 3; only 3 groups in the initial results, 4 groups in the final result]
Maintaining groups over time
• Find new groups (when new objects enter scene)
• Split existing groups (when configuration changes)
• Add new features to existing groups (when new information available)
Finding new groups
[Figure: ungrouped features lying between group 1 and group 2 are regrouped by finding consistent groups, yielding group 3]
Splitting existing groups
[Figure: a group of features tracked from frame k to frame k + n, with lost features and newly added features marked]
• If lost features > x% of original features, try to regroup (find consistent groups again)
• Either all features are regrouped or multiple groups are found
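A small sketch of the split test, assuming a per-feature liveness array and a regroup callable that re-runs the consistent-grouping step on the surviving features; the x% threshold is kept as a parameter, and the default value here is arbitrary.

```python
def maybe_split(group, original_size, alive, regroup, split_fraction=0.5):
    """group: set of feature indices; alive[i] is False once feature i has been lost."""
    survivors = {i for i in group if alive[i]}
    lost = original_size - len(survivors)
    if lost > split_fraction * original_size:   # too many features lost since the group was formed
        return regroup(survivors)               # either one regrouped set or multiple new groups
    return [survivors]
```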
Adding new features
[Figure: new (ungrouped) features 1, 2, and 3 adjacent to group 1 (with motion model 1) and group 2 (with motion model 2)]
• Features 1 and 3 are each a neighbor to only one group: compare the feature motion with that group's motion model; add if similar
• Feature 2 is a neighbor to multiple groups: compare the feature motion with all neighboring group motion models; add if similar to one and dissimilar from the rest
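A sketch of this assignment rule, assuming translational group motion models and a hypothetical tolerance tol; the comparison is against whatever motion model each group carries.

```python
import numpy as np

def assign_new_feature(flow, neighbor_group_ids, models, tol=1.0):
    """flow: motion vector of the ungrouped feature; neighbor_group_ids: groups that
    its Delaunay neighbors belong to; models: per-group motion models."""
    similar = [g for g in set(neighbor_group_ids)
               if np.linalg.norm(flow - models[g]) < tol]
    if len(similar) == 1:      # similar to exactly one neighboring group, dissimilar from the rest
        return similar[0]
    return None                # no clear match: leave the feature ungrouped for now
```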
Experimental results
[Figure: segmentation results on the statue sequence; frame numbers shown include 64, 185, 279, 395, 468, 497, and 520]
Experimental results Number of groups is determined automatically and dynamically
Experimental results
• mobile-calendar sequence: frames 14, 70, 100
• car-map sequence: frames 11, 20, 35
• free-throw sequence: frames 10, 15, 20
Videos Videos available at http://www.ces.clemson.edu/~stb/research/motion_segmentation
Insensitivity to speed
• normal: frames 64, 185, 395, 480
• ½ (frames dropped): frames 32, 93, 197, 240
• double frames: frames 128, 370, 790, 960
Insensitivity to parameter
• normal threshold: frames 4, 8, 12, 64
• ½ threshold: frames 4, 8, 12, 64
• threshold × 2: frames 4, 8, 12, 64
Future application: Mobile robot obstacle avoidance
• Speed of algorithm: 20 ms per image frame (plus feature tracking, which is real time)
• Can apply algorithm to real-time problems
Conclusion • Motion is inherently differential • Motion segmentation should take this into account • Proposed algorithm • segments based upon available evidence, independently of object speed • incrementally processes video • contains one primary parameter, namely the amount of evidence needed to split a group • works in real time • automatically computes the number of objects dynamically • Future work: dense segmentation