Synchronization and Calibration of Camera Networks from Silhouettes
Sudipta N. Sinha, Marc Pollefeys
University of North Carolina at Chapel Hill, USA
Goal
• To recover the calibration and synchronization of a camera network from only live or archived video sequences.
Motivation
• Easy deployment and calibration of cameras
• No offline calibration (patterns, LEDs, etc.)
• No physical access to the environment
• Possibility of using unsynchronized video streams (camcorders, web-cams, etc.)
• Applications in wide-area surveillance camera networks (3D tracking, etc.)
• Digitizing 3D events
Why use Silhouettes?
Visual hull (shape-from-silhouette) systems:
• Many silhouettes from dynamic objects
• Background segmentation
Why not feature-based?
• Feature matching is hard for wide baselines
• Little overlap of backgrounds
• Few features on the foreground
Prior Work: Calibration from Silhouettes
Epipolar geometry from silhouettes:
• Porrill and Pollard, '91
• Astrom, Cipolla and Giblin, '96
Structure-and-motion from silhouettes:
• Vijayakumar, Kriegman and Ponce, '96 (orthographic)
• Furukawa and Ponce, '04 (orthographic)
• Wong and Cipolla, '01 (circular motion, at least to start)
• Yezzi and Soatto, '03 (needs initialization)
Sequence-to-sequence alignment:
• Caspi and Irani, '02 (feature-based)
Our Approach
• Compute epipolar geometry from silhouettes in synchronized sequences (CVPR'04).
• Here, we extend this to unsynchronized sequences.
• Synchronization and calibration of the camera network.
Multiple View Geometry of Silhouettes
[Figure: corresponding frontier points x1, x2 and x'1, x'2 with their epipolar tangents]
• Frontier points and epipolar tangents
• Always at least 2 extreme frontier points per silhouette
• Only 2-view correspondence in general
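For reference, the constraint a frontier point pair provides is the standard two-view relation, written here in the slide's notation with F the fundamental matrix:

```latex
% A frontier point pair (x, x') satisfies the epipolar constraint, and the
% corresponding epipolar lines are tangent to the silhouettes in both views:
x'^{\top} F\, x = 0, \qquad
l' = F\,x \ \text{ tangent to the silhouette in view 2 at } x', \qquad
l  = F^{\top} x' \ \text{ tangent to the silhouette in view 1 at } x .
```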
Camera Network Calibration from Silhouettes
• 7 or more corresponding frontier points are needed to compute the epipolar geometry
• Hard to find on a single silhouette, and possibly occluded
• However, video sequences contain many silhouettes
Camera Network Calibration from Silhouettes
• If we know the epipoles, draw 3 outer epipolar tangents (need at least two silhouettes in each view)
• Compute the epipolar line homography (lines transfer via H⁻ᵀ)
• Epipolar geometry: F = [e]× H
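A minimal numpy sketch of that composition, assuming the epipole e in the second view and a 3×3 homography H compatible with the epipolar line mapping have already been estimated (function and variable names are illustrative):

```python
import numpy as np

def skew(e):
    """Cross-product matrix [e]x of a homogeneous 3-vector e."""
    return np.array([[0.0,  -e[2],  e[1]],
                     [e[2],  0.0,  -e[0]],
                     [-e[1], e[0],  0.0]])

def fundamental_from_epipole(e2, H):
    """Compose F = [e2]x H from the epipole in view 2 and the homography H."""
    F = skew(e2) @ H
    return F / np.linalg.norm(F)   # fix the arbitrary overall scale
```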
RANSAC-based Algorithm
Repeat {
• Generate a hypothesis for the epipolar geometry
• Verify the model
}
Refine the best hypothesis.
• Note: RANSAC is used to explore the 4D space of epipoles, in addition to dealing with noisy silhouettes.
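A hedged sketch of this hypothesize-and-verify loop in Python; generate_hypothesis, count_inliers, and refine_nonlinear are placeholders for the steps detailed on the following slides:

```python
def ransac_epipolar_geometry(env1, env2, n_iters=1_000_000, thresh_px=1.5):
    """Skeleton of the RANSAC search over epipolar geometries.  The helper
    functions stand in for the hypothesis-generation, verification, and
    refinement steps described on the next slides."""
    best_F, best_inliers = None, -1
    for _ in range(n_iters):
        F = generate_hypothesis(env1, env2)            # from random tangents
        if F is None:
            continue
        inliers = count_inliers(F, env1, env2, thresh_px,
                                abort_below=best_inliers)  # allows early abort
        if inliers > best_inliers:
            best_F, best_inliers = F, inliers
    return refine_nonlinear(best_F, env1, env2)        # refine best hypothesis
```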
Compact Representation for Silhouettes: Tangent Envelopes
• Store the convex hull of the silhouette.
• Tangency points for a discrete set of angles.
• Approx. 500 bytes/frame, so a whole video sequence easily fits in memory.
• Tangency computations are efficient.
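A sketch of one way to build such a tangent envelope (scipy's ConvexHull for the hull, tangency points found by extremal projection onto a discrete set of outward normal directions); this illustrates the idea rather than the authors' exact data layout:

```python
import numpy as np
from scipy.spatial import ConvexHull

class TangentEnvelope:
    """Compact per-frame silhouette representation: convex hull vertices plus,
    for each sampled direction, the hull point where a supporting (tangent)
    line with that outward normal touches the silhouette."""

    def __init__(self, silhouette_xy, n_angles=90):
        # silhouette_xy: (N, 2) array of silhouette boundary pixel coordinates
        hull = ConvexHull(silhouette_xy)
        self.hull = silhouette_xy[hull.vertices]          # hull vertices only
        angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
        normals = np.stack([np.cos(angles), np.sin(angles)], axis=1)
        # The tangency point for outward normal n maximizes the projection x.n
        proj = self.hull @ normals.T                      # (n_hull, n_angles)
        self.tangent_pts = self.hull[np.argmax(proj, axis=0)]
```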
RANSAC-based Algorithm: Generate Hypothesis for Epipolar Geometry
• Pick 2 corresponding frames; pick random tangents for each of the silhouettes.
• Compute the epipoles.
• Pick 1 more tangent from additional frames.
• Compute the epipolar line homography.
• Generate the fundamental matrix.
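One possible realization of these steps, heavily hedged: the exact sampling scheme in the paper may differ, and tangent_line and line_pencil_homography are hypothetical helpers (skew is as in the earlier sketch, and rng is a numpy random Generator):

```python
import numpy as np

def hypothesize_F(env1, env2, frames, rng):
    """One way to realize the slide's steps (not necessarily the authors'
    exact sampling).  env1[f] / env2[f] are tangent envelopes of the
    silhouettes in frame f of each view; tangent_line() returns a random
    tangent line as a homogeneous 3-vector."""
    fa, fb, fc = rng.choice(frames, size=3, replace=False)
    # Two random tangent lines per view; their intersection is the
    # hypothesized epipole for that view.
    l1a, l1b = tangent_line(env1[fa], rng), tangent_line(env1[fb], rng)
    l2a, l2b = tangent_line(env2[fa], rng), tangent_line(env2[fb], rng)
    e1, e2 = np.cross(l1a, l1b), np.cross(l2a, l2b)
    # A third tangent correspondence from another frame; three pairs of
    # corresponding epipolar lines fix the epipolar line homography.
    l1c, l2c = tangent_line(env1[fc], rng), tangent_line(env2[fc], rng)
    H = line_pencil_homography(e1, e2, [l1a, l1b, l1c], [l2a, l2b, l2c])
    return skew(e2) @ H            # F = [e]x H, as on the earlier slide
```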
RANSAC-based Algorithm: Verify the Model
For all tangents:
• Compute the symmetric epipolar transfer error
• Update the inlier count
• (Abort early if the hypothesis doesn't look promising)
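A small Python sketch of the per-tangent residual, assuming the candidate frontier points are given as homogeneous pixel coordinates with last component 1:

```python
import numpy as np

def symmetric_epipolar_error(F, x1, x2):
    """Symmetric epipolar transfer error for a candidate correspondence
    x1 <-> x2 (homogeneous 3-vectors scaled so x[2] == 1): distance of x2
    to the epipolar line F @ x1 plus distance of x1 to F.T @ x2."""
    l2 = F @ x1                                  # epipolar line in image 2
    l1 = F.T @ x2                                # epipolar line in image 1
    d2 = abs(l2 @ x2) / np.hypot(l2[0], l2[1])
    d1 = abs(l1 @ x1) / np.hypot(l1[0], l1[1])
    return d1 + d2
```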
What if videos are unsynchronized?
For fixed-fps video, the same constraints are valid up to an extra unknown temporal offset.
• Add a random temporal offset to each RANSAC hypothesis.
• Use a multi-resolution approach:
• Keyframes with slow motion give rough synchronization
• Ones with fast motion provide fine synchronization
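A tiny sketch of how the offset could enter each hypothesis (frame pairing only; the search range and multi-resolution schedule are as described above, and the helper names are illustrative):

```python
import random

def sample_offset(max_offset=500):
    """Draw a candidate temporal offset (in frames) for one RANSAC hypothesis."""
    return random.randint(-max_offset, max_offset)

def paired_frame(frame1, offset, n_frames2):
    """Frame in sequence 2 paired with frame1 under the candidate offset,
    or None if it falls outside the second sequence."""
    frame2 = frame1 + offset
    return frame2 if 0 <= frame2 < n_frames2 else None
```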
Synchronization Experiment
[Plot: number of promising candidates and RANSAC iterations (in millions) versus sequence offset (# frames)]
• Total temporal offset search range [-500, +500] frames (i.e. ±15 secs.)
• Unique peaks for the correct offsets
• Possibility for sub-frame synchronization
Camera Network Synchronization
[Figure: directed graph of cameras with pairwise offsets as branch values (e.g. +3, +8, +6, -5, 0, +2 frames), compared against ground truth]
• Consider a directed graph with the pairwise offsets as branch values
• For consistency, loops should add up to zero
• MLE of the per-camera offsets by minimization over the graph; offsets measured in frames (1 frame = 1/30 s)
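A hedged sketch of the offset estimation over the camera graph: under Gaussian noise, the ML estimate of the per-camera offsets is the least-squares solution of the pairwise constraints t_j - t_i = d_ij, with one camera fixed as the time reference (variable names are illustrative):

```python
import numpy as np

def estimate_camera_offsets(edges, n_cams):
    """Least-squares / ML estimate of per-camera offsets t_i (in frames)
    from pairwise measurements t_j - t_i = d_ij, fixing t_0 = 0.
    `edges` is a list of (i, j, d_ij) for the directed graph's branches."""
    A = np.zeros((len(edges) + 1, n_cams))
    b = np.zeros(len(edges) + 1)
    for row, (i, j, d) in enumerate(edges):
        A[row, i], A[row, j], b[row] = -1.0, 1.0, d
    A[-1, 0] = 1.0                 # gauge constraint: camera 0 is the reference
    t, *_ = np.linalg.lstsq(A, b, rcond=None)
    return t
```

If all loop sums are exactly zero the system is consistent and the residual vanishes; otherwise the least-squares solution distributes the inconsistencies over the edges.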
From Epipolar Geometry to Full Calibration
• Solve for camera triplets (Levi and Werman, CVPR'03; Sinha et al., CVPR'04)
• Assemble the complete camera network.
Metric Cameras and Visual-Hull Reconstruction from 4 Views
• Final calibration quality comparable to an explicit calibration procedure.
Taking Sub-frame Synchronization into Account (Sinha and Pollefeys, 3DPVT'04, to appear)
• Temporal interpolation of silhouettes.
• Reprojection error reduced from 10.5% to 3.4% of the pixels in the silhouette.
Conclusion and Future Work
• Camera network calibration and synchronization from dynamic silhouettes alone.
• Great for visual-hull systems.
• Applications in surveillance systems.
• Extend to active PTZ camera networks and asynchronous video streams.
Acknowledgments
• NSF CAREER, DARPA.
• Peter Sand (MIT) for the visual hull dataset.