
Counting Crowded Moving Objects



Presentation Transcript


  1. Counting Crowded Moving Objects Vincent Rabaud and Serge Belongie Department of Computer Science and Engineering University of California, San Diego {vrabaud,sjb}@cs.ucsd.edu Presentation by: Yaron Koral IDC, Herzliya, Israel

  2. AGENDA • Motivation • Challenges • Algorithm • Experimental Results

  3. AGENDA • Motivation • Challenges • Algorithm • Experimental Results

  4. Motivation • Counting crowds of people • Counting herds of animals • Counting migrating cells • Anything goes as long as the crowd is homogeneous!

  5. AGENDA • Motivation • Challenges • Algorithm • Experimental Results

  6. Challenges • The problem of occlusion • Inter-object occlusion • Self-occlusion • Large number of independent motions • Dozens of erratically moving objects • Requires more than two successive frames. Setting: a surveillance camera views the crowd from a distant viewpoint, but zoomed in, so that the effects of perspective are minimized.

  7. AGENDA • Motivation • Challenges • Algorithm • Experimental Results

  8. Algorithm Highlights • Feature Tracking with KLT • Increased Efficiency • Feature Re-Spawning • Trajectory Conditioning • Trajectory Clustering

  9. Algorithm Highlights • Feature Tracking with KLT • Increased Efficiency • Feature Re-Spawning • Trajectory Conditioning • Trajectory Clustering

  10. Harris Corner Detector – What are Good Features? (C. Harris and M. Stephens, “A Combined Corner and Edge Detector”, 1988) • We should easily recognize a corner by looking through a small window • Shifting the window in any direction should give a large change in intensity

  11. Harris Detector: Basic Idea • “Flat” region: no change in any direction • “Edge”: no change along the edge direction • “Corner”: significant change in all directions

  12. Harris Detector: Mathematics Change of intensity for a shift in the [u,v] direction: E(u,v) = Σ(x,y) w(x,y) [I(x+u, y+v) − I(x,y)]², where w(x,y) is the window function (either 1 in the window and 0 outside, or a Gaussian), I(x+u, y+v) is the shifted intensity and I(x,y) the intensity.

  13. Harris Detector: Mathematics For small [u,v] we use the first-order expansion I(x+u, y+v) ≈ I(x,y) + u·Ix + v·Iy, so we have: E(u,v) ≈ Σ(x,y) w(x,y) [u·Ix + v·Iy]²

  14. Harris Detector: Mathematics For small shifts [u,v] we have a bilinear approximation: E(u,v) ≈ [u v] M [u v]ᵀ, where M is a 2×2 matrix computed from image derivatives: M = Σ(x,y) w(x,y) [Ix² Ix·Iy; Ix·Iy Iy²]

  15. Harris Detector: Mathematics Denote by eᵢ the i-th eigenvector of M, whose eigenvalue is λᵢ: M·eᵢ = λᵢ·eᵢ. Conclusion: the eigenvalues λ1, λ2 measure the strength of the intensity change along the two principal directions.

  16. Harris Detector: Mathematics Intensity change in a shifting window: eigenvalue analysis. λ1, λ2 are the eigenvalues of M. The ellipse E(u,v) = const has its axes along the eigenvectors of M: the short axis, of length (λmax)^(−1/2), points in the direction of fastest change; the long axis, of length (λmin)^(−1/2), in the direction of slowest change.

  17. Harris Detector: Mathematics Classification of image points using the eigenvalues of M: • “Corner”: λ1 and λ2 are large, λ1 ~ λ2; E increases in all directions • “Edge”: λ1 >> λ2 (or λ2 >> λ1) • “Flat” region: λ1 and λ2 are small; E is almost constant in all directions
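The classification above can be illustrated in code. A minimal NumPy sketch (the helper name and the synthetic test image are ours, not from the deck), using Harris's original corner measure R = det(M) − k·trace(M)², which is positive at corners, negative on edges and near zero in flat regions:

```python
import numpy as np

def harris_response(img, win=1, k=0.04):
    """Harris corner measure R = det(M) - k*trace(M)^2, where M sums the
    derivative products Ix^2, Iy^2, Ix*Iy over a (2*win+1)^2 window
    (uniform weights w(x, y) = 1 inside the window)."""
    img = img.astype(float)
    Iy, Ix = np.gradient(img)                    # image derivatives
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy
    h, w = img.shape
    R = np.zeros((h, w))
    for y in range(win, h - win):
        for x in range(win, w - win):
            box = (slice(y - win, y + win + 1), slice(x - win, x + win + 1))
            a, b, c = Ixx[box].sum(), Iyy[box].sum(), Ixy[box].sum()
            R[y, x] = (a * b - c * c) - k * (a + b) ** 2
    return R

# Synthetic test image: a bright square, so (5, 5) is a corner,
# (10, 5) lies on an edge and (10, 10) is in a flat region.
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
R = harris_response(img)
```

As the classification predicts, R comes out positive at the square's corner, negative on its edge and essentially zero in the flat interior.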

  18. Sum of Squared Differences – Tracking Features • SSD is optimal in the maximum-likelihood (ML) sense when: • the constant-brightness assumption holds • the noise is i.i.d. additive Gaussian

  19. Exhaustive Search • Loop over the whole parameter space • Not realistic in most cases • Computationally expensive • E.g. searching for a 100×100 patch in a 1000×1000 image using only translation takes ~10^10 operations! • Explodes with the number of parameters • Precision limited to the step size
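The brute-force baseline is easy to write down, which makes its cost concrete. A small sketch (helper names and the toy data are ours): slide a template over every integer translation and score each placement with SSD.

```python
import numpy as np

def ssd(a, b):
    """Sum of squared differences between two equal-sized patches."""
    d = a.astype(float) - b.astype(float)
    return float((d * d).sum())

def exhaustive_match(template, image):
    """Try every integer translation (u, v); keep the SSD minimizer."""
    th, tw = template.shape
    best_score, best_uv = float("inf"), None
    for v in range(image.shape[0] - th + 1):
        for u in range(image.shape[1] - tw + 1):
            s = ssd(template, image[v:v + th, u:u + tw])
            if s < best_score:
                best_score, best_uv = s, (u, v)
    return best_uv, best_score

rng = np.random.default_rng(0)
image = rng.random((30, 30))
template = image[12:20, 7:15].copy()       # patch cut out at (u, v) = (7, 12)
(u, v), score = exhaustive_match(template, image)
```

Even this toy run evaluates 23×23 placements of an 8×8 patch; the ~10^10 figure on the slide is the same loop scaled up to a 100×100 patch in a 1000×1000 image.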

  20. The Problem Find the (u,v) that minimizes the SSD over region A, assuming (u,v) is constant over all of A

  21. Iterative Solution • Lucas & Kanade (1981) • Use a Taylor expansion of I (the optical flow equation) • Solve the resulting linear system for the displacement and iterate until convergence
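One step of that iteration can be made concrete. A sketch of a single Lucas–Kanade step for pure translation (the synthetic Gaussian patches are our own; a full tracker would warp J by the current estimate and repeat):

```python
import numpy as np

def lk_translation_step(I, J):
    """One Lucas-Kanade step: linearize J(x + d) ~ J(x) + grad(J) . d and
    solve the 2x2 normal equations Z d = e for the translation d."""
    Jy, Jx = np.gradient(J)
    Z = np.array([[(Jx * Jx).sum(), (Jx * Jy).sum()],
                  [(Jx * Jy).sum(), (Jy * Jy).sum()]])
    diff = I - J
    e = np.array([(diff * Jx).sum(), (diff * Jy).sum()])
    return np.linalg.solve(Z, e)               # [dx, dy]

# A smooth synthetic patch and a copy shifted by (0.3, 0.2) pixels
yy, xx = np.mgrid[0:21, 0:21].astype(float)
blob = lambda cx, cy: np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / 18.0)
I = blob(10.0, 10.0)
J = blob(10.3, 10.2)
dx, dy = lk_translation_step(I, J)             # close to (0.3, 0.2)
```

Because the patch is smooth and the shift sub-pixel, a single step already recovers the displacement to within a few hundredths of a pixel.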

  22. Feature Tracking with KLT (We’re back to crowd counting…) • KLT is a feature tracking algorithm • Driving principle: • Determine the motion parameters of a local window W from image I to the consecutive image J • The center of the window defines the tracked feature

  23. Feature Tracking with KLT • Given a window W, the affine motion parameters A and d are chosen to minimize the dissimilarity between the window in I and its warped image in J • Affine motion → monitoring

  24. Feature Tracking with KLT • It is assumed that only the translation d matters between two consecutive frames, so a variation of SSD is used • Pure translation → tracking • A window is accepted as a candidate feature if, at its center, both eigenvalues of the gradient matrix exceed a predefined threshold t: min(λ1, λ2) > t
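The acceptance rule min(λ1, λ2) > t can be sketched directly (the helper name, window size and threshold are our illustrative choices; the gradient matrix is the same 2×2 matrix as on the Harris slides):

```python
import numpy as np

def good_feature(img, y, x, win=2, t=0.1):
    """Accept the window centred at (y, x) as a feature if the smaller
    eigenvalue of its gradient matrix exceeds the threshold t."""
    Iy, Ix = np.gradient(img.astype(float))
    box = (slice(y - win, y + win + 1), slice(x - win, x + win + 1))
    gx, gy = Ix[box], Iy[box]
    Z = np.array([[(gx * gx).sum(), (gx * gy).sum()],
                  [(gx * gy).sum(), (gy * gy).sum()]])
    return float(np.linalg.eigvalsh(Z).min()) > t

img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0       # bright square: corners, edges, flat interior
```

Only the corner of the square passes the test: on an edge one eigenvalue is ~0, and in a flat region both are.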

  25. Algorithm Highlights • Feature Tracking with KLT • Increased Efficiency • Feature Re-Spawning • Trajectory Conditioning • Trajectory Clustering

  26. Increased Efficiency #1 • Associate only one window with each feature • Use a uniform weight function equal to 1/|W| (the inverse of the window area) • Determine feature quality by comparing the candidate windows • Computation of the different Z matrices is accelerated with the “integral image” technique [1]. [1] Viola &amp; Jones 2004
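The integral-image trick of Viola &amp; Jones reduces any rectangular window sum to four table lookups, which is what speeds up the Z matrices. A minimal sketch (our own helper names):

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero border: ii[y, x] = img[:y, :x].sum()."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def window_sum(ii, y0, x0, y1, x1):
    """Sum of img[y0:y1, x0:x1] in O(1), independent of the window size."""
    return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]

rng = np.random.default_rng(1)
img = rng.random((50, 50))
ii = integral_image(img)
```

Applied to each derivative-product image (Ix², Iy², Ix·Iy), this gives every window's Z entries in constant time per window.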

  27. Increased Efficiency #2 • Run on sample training frames first • Determine the parameters that lead to the optimal window sizes • Reduces the search to less than 5% of the possible parameter set • Works because all objects are from the same class

  28. Algorithm Highlights • Feature Tracking with KLT • Increased Efficiency • Feature Re-Spawning • Trajectory Conditioning • Trajectory Clustering

  29. Feature Re-Spawning • Over time, KLT loses track because of: • Inter-object occlusion • Self-occlusion • Exit from the picture • Appearance change due to perspective and articulation • Plain KLT recreates features all the time, which is computationally intensive, and weak features keep being renewed

  30. Feature Re-Spawning • Re-spawn features only at specific locations in space and time • Propagate them forward and backward in time • Find the biggest “holes” in coverage • Re-spawn features in the frame given by the weighted average of the hole times

  31. Algorithm Highlights • Feature Tracking with KLT • Increased Efficiency • Feature Re-Spawning • Trajectory Conditioning • Trajectory Clustering

  32. Trajectory Conditioning • The KLT tracker gives a set of trajectories with poor homogeneity: • They don’t begin and end at the same times • Occlusions can result in trajectory fragmentation • A feature can lose its strength, resulting in less precise tracks • Solution: condition the data, spatially and temporally

  33. Trajectory Conditioning • Each trajectory is influenced by its spatial neighbors • Apply a box around each raw trajectory • Follow all neighboring trajectories from the time the trajectory started
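The deck does not spell the conditioning out; one plausible minimal sketch (the function name, the circular neighborhood and the averaging rule are all our assumptions, standing in for the paper's box) smooths each trajectory by averaging, frame by frame, the displacements of the trajectories near it:

```python
import numpy as np

def condition(trajs, radius=2.0):
    """Replace each per-frame displacement by the mean displacement of all
    trajectories within `radius` at that frame (the trajectory itself
    included). trajs: array of shape (n_traj, n_frames, 2)."""
    n, T, _ = trajs.shape
    disp = np.diff(trajs, axis=1)                  # raw displacements
    out = trajs.copy()
    for t in range(T - 1):
        pos = trajs[:, t]
        for i in range(n):
            near = np.linalg.norm(pos - pos[i], axis=1) <= radius
            out[i, t + 1] = out[i, t] + disp[near, t].mean(axis=0)
    return out

# Two parallel tracks, one with a jittered sample at frame 3
A = np.stack([np.arange(6.0), np.zeros(6)], axis=1)
B = A + np.array([0.0, 0.5])
B[3, 0] += 0.4
out = condition(np.stack([A, B]))
```

On this toy pair, the jittered track's velocity error is halved by borrowing its neighbor's smooth motion, which is the intended effect of conditioning.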

  34. Algorithm Highlights • Feature Tracking with KLT • Increased Efficiency • Feature Re-Spawning • Trajectory Conditioning • Trajectory Clustering

  35. Trajectory Clustering • Determine the number of objects at time t by clustering trajectories • Since at time t objects may be close together, focus attention on a time interval (half-width of 200 frames) • Build a connectivity graph: • At each time step, the present features form the nodes of a connectivity graph G • Edges indicate possible membership in a common object

  36. Trajectory Clustering • Connectivity graph • Bounding box: as small as possible, yet able to contain every possible instance of the object • If two features do not stay within such a box, they do not belong to the same object • The 3 parameters of this box (including an articulation factor) are learned from training data
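The edge test behind the graph can be sketched as a box constraint on the relative offset of two feature tracks (the helper name and the two-parameter box are our simplification; the paper's box has three learned parameters, including the articulation factor):

```python
import numpy as np

def connected(ti, tj, box_w, box_h):
    """Features i and j may belong to a common object only if their offset
    stays inside a (box_w x box_h) box over their whole overlap.
    ti, tj: trajectories of shape (T, 2)."""
    rel = np.abs(ti - tj)
    return bool((rel[:, 0] <= box_w).all() and (rel[:, 1] <= box_h).all())

t = np.arange(5.0)
ti = np.stack([t, np.zeros(5)], axis=1)            # moves right along y = 0
tj = np.stack([t + 1.0, np.zeros(5)], axis=1)      # same motion, 1 px ahead
tk = np.stack([t, t], axis=1)                      # drifts away diagonally
```

The two co-moving tracks stay within the box and get an edge; the diverging one does not, so it can never be clustered with them.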

  37. Trajectory Clustering • Rigid-part merging • Features that share similar movement during their whole life span belong to a rigid part of an object, and consequently to a common object • RANSAC is applied to sets of trajectories that are: • within the time window • connected in graph G
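A deterministic miniature of the consensus idea (our own simplification: each trajectory's net displacement is tried as the rigid-motion hypothesis instead of random sampling, and the motion model is a pure translation; a real RANSAC would sample and use a richer model):

```python
import numpy as np

def rigid_group(trajs, thresh=0.5):
    """Find the largest set of trajectories sharing one net translation.
    trajs: (n, T, 2). Returns a boolean inlier mask."""
    disp = trajs[:, -1] - trajs[:, 0]              # net displacement each
    best = np.zeros(len(trajs), dtype=bool)
    for model in disp:                             # one-sample hypotheses
        inliers = np.linalg.norm(disp - model, axis=1) < thresh
        if inliers.sum() > best.sum():
            best = inliers
    return best

# 5 features on an object moving right, 2 on an object moving up
rng = np.random.default_rng(2)
starts = rng.random((7, 2)) * 10
steps = np.array([[1.0, 0.0]] * 5 + [[0.0, 1.0]] * 2)
trajs = starts[:, None, :] + np.arange(4.0)[None, :, None] * steps[:, None, :]
mask = rigid_group(trajs)
```

The consensus set recovers exactly the five features that move together, i.e. one rigid part.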

  38. Trajectory Clustering • Agglomerative clustering • At each iteration, the two closest sets are considered • If all their features are linked to each other in the connectivity graph, they are merged together • Otherwise, the next closest sets are considered • Proceed until all possible pairs have been analyzed
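The merging loop can be sketched in a few lines (our own simplified version, with single-linkage distances between sets; `dist` and `G` stand in for the trajectory distance and the connectivity graph of the previous slides):

```python
def count_objects(dist, G):
    """Merge the closest fully-connected pair of clusters until no pair
    qualifies; the surviving clusters are the counted objects."""
    clusters = [{i} for i in range(len(dist))]
    merged = True
    while merged:
        merged = False
        pairs = sorted(                            # closest pairs first
            (min(dist[i][j] for i in a for j in b), ai, bi)
            for ai, a in enumerate(clusters)
            for bi, b in enumerate(clusters) if bi > ai
        )
        for _, ai, bi in pairs:
            if all(G[i][j] for i in clusters[ai] for j in clusters[bi]):
                clusters[ai] |= clusters[bi]
                del clusters[bi]
                merged = True                      # re-rank after each merge
                break
    return len(clusters)

# Two tight pairs of trajectories; G only links features within each pair
dist = [[0, 1, 9, 9], [1, 0, 9, 9], [9, 9, 0, 1], [9, 9, 1, 0]]
G = [[True, True, False, False], [True, True, False, False],
     [False, False, True, True], [False, False, True, True]]
```

With this data the two tight pairs each collapse into one cluster, and the graph blocks any further merge, so the count is 2; with a fully connected graph everything would merge into a single object.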

  39. AGENDA • Motivation • Challenges • Algorithm • Experimental Results

  40. Experimental results • Datasets • USC: elevated view of a crowd of zero to twelve persons • LIBRARY: elevated view of a crowd of twenty to fifty persons • CELLS: red-blood-cell dataset with fifty to one hundred cells

  41. Experimental results

  42. Experimental results (plots comparing estimated counts with ground truth)

  43. Experimental results

  44. Conclusion • A new way of segmenting the motion generated by multiple objects in a crowd • Enhancements to the KLT tracker • Conditioning and clustering techniques

  45. Thank You!
