Advanced Decision Architectures Collaborative Technology Alliance Rama Chellappa University of Maryland In Collaboration with CSID and HRED
Participants • UMD • Rama Chellappa • Dr. Amit Agrawal (MERL) • Dr. Naresh Cuntoor (Kitware, Inc.) • Mr. M. Arunkumar • Mr. Dikpal Reddy • ARL • Dr. Phil David • Dr. Jeff DeHart • Mr. Larry Tokarcik • HRED • Dr. Grayson CuQlock-Knopp • Level of effort • 1.5 students, faculty time
3D Modeling and visualization • 3D modeling of buildings • Automatic fusion of geometry and video information • Collaboration with Drs. Phil David, Jeff DeHart and Larry Tokarcik • Briefed at the 2006 CTA meeting in MD • Terrain analysis using hyper-stereo • Terrain drop detection • Collaboration with Dr. Grayson CuQlock-Knopp (Civ, ARL/HRED) and Dr. John Merritt (The Merritt Group) • 3D modeling of moving humans and vehicles • Multi-view tracking and activity recognition • Fusion of tracks using planar motion constraints • Done under a Task Order (with Dr. Phil David) • Factorization approach for 3D modeling of vehicles • Rank constraints in 3D modeling under planar motion constraints • Compressive sensing for surveillance • Detection of moving objects (covered in the OSU talk)
Multi-camera tracking Challenges • Data from varied sources • Inter-camera registration • Multiple targets Benefits of multi-camera fusion • Ability to handle occlusion • More accurate tracking
Planar scene assumption • Planar scene • The image-plane to world-plane transformation is one-to-one • An image-plane location can be converted to a world-plane estimate • Incorporating parallax • Vanishing points
Ground plane assumption • Invertible transformation • Ability to visualize from various viewpoints
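To make the ground-plane mapping concrete, here is a minimal sketch of projecting image-plane detections to world-plane coordinates, assuming the 3x3 image-to-world homography has already been estimated (e.g. from landmarks or vanishing points); the matrix and pixel values below are illustrative, not values from the actual system.

```python
import numpy as np

def image_to_ground(H, pixels):
    """Map image-plane pixel locations to world ground-plane coordinates.

    H      : 3x3 image-to-world homography (assumed pre-calibrated).
    pixels : (N, 2) array of (u, v) image coordinates.
    Returns an (N, 2) array of (X, Y) world-plane coordinates.
    """
    pts = np.hstack([pixels, np.ones((len(pixels), 1))])  # homogeneous coordinates
    world = (H @ pts.T).T
    return world[:, :2] / world[:, 2:3]                   # dehomogenize

# Illustrative example: a made-up homography and two detections.
H = np.array([[0.05, 0.0, -10.0],
              [0.0, 0.05, -5.0],
              [0.0, 0.0, 1.0]])
detections = np.array([[320.0, 240.0], [400.0, 260.0]])
print(image_to_ground(H, detections))
```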
Basic outline of the tracker • Background subtraction • Projection • Data association • Tracking
Key properties of the algorithm • Fusion mechanism • Camera-to-world error dependence is explicitly modeled • Fusion adaptively weights inputs from the cameras, optimally in the minimum-variance sense • Particle filtering with data association to handle multi-modal distributions • Ability to estimate other biometrics, such as height • Scalability • Computational cost • Linear in the number of targets in the scene • Linear in the number of cameras • However, the association algorithm used is suboptimal
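As an illustration of the minimum-variance fusion idea above, the sketch below combines per-camera ground-plane estimates using inverse-variance weights; the measurements and variances are made-up values, and the scalar-variance model is a simplification of the camera-world error model.

```python
import numpy as np

def fuse_min_variance(estimates, variances):
    """Fuse per-camera ground-plane estimates with inverse-variance weights.

    estimates : (C, 2) array, one (X, Y) estimate per camera.
    variances : (C,) array of scalar error variances (illustrative error model).
    Returns the fused (X, Y) estimate and its variance.
    """
    inv_var = 1.0 / np.asarray(variances)
    w = inv_var / inv_var.sum()                 # normalized inverse-variance weights
    fused = (w[:, None] * estimates).sum(axis=0)
    fused_var = 1.0 / inv_var.sum()             # variance of the fused estimate
    return fused, fused_var

# Two cameras: the camera with the smaller variance dominates the fusion.
est = np.array([[10.2, 4.9], [10.8, 5.3]])
var = np.array([0.25, 1.0])
print(fuse_min_variance(est, var))
```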
FlexiView • Led to a DARPA seedling program with UMD as the lead and SET Corporation as a sub. • Led by Prof. Amitabh Varshney, our visualization guru. • SET helped with accurate 3D modeling of A.V. Williams building (my home) on campus. • UMD integrated multi-object tracking, activity recognition and rendering algorithms.
Terrain drop detection: motivation • Obstacle detection for on-road navigation • Terrain-drop detection for autonomous cross-country vehicle navigation • Driver assistance and warning systems under poor visibility conditions using special imaging devices
Detection of terrain drop-offs • Terrain drop-offs can be called negative obstacles • Negative obstacles are harder to detect than positive obstacles • Small size in the image • Severe occlusion by the leading edge of the obstacle • (Figure: negative obstacle vs. positive obstacle)
Existing methods of negative obstacle detection • Largely ad hoc methods aimed specifically at detecting discontinuities on planar ground in the heading direction, mainly in the context of on-road navigation • Inspect each vertical scanline for jumps in elevation after allowing for the slope of the ground surface (sketched below) • Often use other information, such as color
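A rough sketch of the scanline-inspection idea, under illustrative assumptions (a straight-line fit for the ground slope and a fixed jump threshold); it is not the specific method used in any of the systems referred to above.

```python
import numpy as np

def scanline_drops(elevation, jump_thresh=0.5):
    """Flag elevation jumps along each vertical scanline of an elevation map.

    elevation   : (rows, cols) array of per-pixel elevation (NaN where unknown).
    jump_thresh : jump magnitude (after slope removal) treated as a drop-off.
    Returns a boolean map of candidate drop-off pixels.
    """
    rows, cols = elevation.shape
    drops = np.zeros_like(elevation, dtype=bool)
    for c in range(cols):
        col = elevation[:, c]
        valid = ~np.isnan(col)
        if valid.sum() < 3:
            continue
        r = np.arange(rows)[valid]
        # Allow for the overall slope of the ground surface with a line fit.
        slope, intercept = np.polyfit(r, col[valid], 1)
        residual = col[valid] - (slope * r + intercept)
        # Flag jumps between consecutive valid samples along the scanline.
        jumps = np.abs(np.diff(residual)) > jump_thresh
        drops[r[1:][jumps], c] = True
    return drops
```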
Challenges in negative obstacle detection • Limited magnitude and spatial resolution of depth maps • Image noise and other errors in stereo matching • Depth maps created using scanline-based stereo matching usually have discontinuities between scanlines, which interfere with obstacle detection • (Figure: image, disparity map, disparity gradient magnitude)
Optimal discontinuity detection • Assuming a step-edge model for the discontinuity • The optimal linear detector in the presence of noise is Canny's edge detector • Canny's edge detector can be approximated by the derivative of a Gaussian • J. Canny, "A Computational Approach to Edge Detection," IEEE PAMI, 1986 • (Figure: Canny's edge detector filter) Humans vs. machines • In experiments on a set of 20 terrain drop-off scenes, the algorithm detected drop-offs on average 10 m sooner at 3 MPH • The reference was human observers wearing stereo displays with a 1X baseline • The algorithm used 3X hyper-stereo
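The derivative-of-Gaussian approximation can be sketched as below, applied to a disparity map; the smoothing scale and gradient threshold are illustrative, not the settings tuned in the experiments.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_edges(disparity, sigma=2.0, threshold=3.0):
    """Approximate Canny's optimal step-edge detector with derivatives of Gaussian.

    disparity : 2D disparity map (float array).
    sigma     : Gaussian smoothing scale.
    threshold : gradient-magnitude cutoff for declaring a discontinuity.
    Returns a boolean map of disparity discontinuities.
    """
    # order=1 along one axis gives the derivative-of-Gaussian response.
    gx = gaussian_filter(disparity, sigma, order=(0, 1))
    gy = gaussian_filter(disparity, sigma, order=(1, 0))
    grad_mag = np.hypot(gx, gy)
    return grad_mag > threshold
```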
Ongoing work: Nonlinear methods • Improved terrain detection using anisotropic diffusion methods • Diffusion-based methods continuously evolve the surface, and the evolving surface does not remain close to the original surface; a stopping time must be chosen • Instead, we minimize an energy functional defined in terms of the disparity, the direction of the gradient, a scale factor, and an averaging filter • The minimum is obtained from the first-order necessary condition, the Euler-Lagrange equation, solved with Neumann boundary conditions
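For reference, a minimal Perona-Malik-style diffusion sketch on a disparity map illustrates the stopping-time issue noted above; the conductance function, step size, and iteration count are illustrative assumptions, and this is the diffusion baseline rather than the energy-minimization formulation being developed.

```python
import numpy as np

def anisotropic_diffusion(disparity, n_iter=20, kappa=2.0, step=0.2):
    """Perona-Malik style diffusion: smooth the disparity while preserving edges.

    n_iter : acts as the stopping time; the evolving surface drifts from the
             original, so it must be chosen carefully.
    kappa  : edge scale for the conductance; step: diffusion step size.
    """
    u = disparity.astype(float).copy()
    for _ in range(n_iter):
        # Forward differences in the four compass directions.
        # Note: np.roll wraps at the border; a replicate (Neumann-like)
        # boundary would be more faithful to the formulation above.
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # Edge-stopping conductances: small across strong disparity edges.
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
        u += step * (cn * dn + cs * ds + ce * de + cw * dw)
    return u
```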
Detected disparity discontinuities • Standard Canny's edge detector, thresholds 3, 10, 14, 16, 18 • Detector using anisotropic diffusion, thresholds 3, 10, 14, 16, 18
3D Modeling of vehicles • Motivation: automatically reconstructing vehicle models from surveillance video • Method: factorization for structure from planar motion System • Background subtraction • Use intensity and gradient-direction information • Tracking feature points • Use the KLT tracker • One-time calibration • Use calibration from vanishing points • Use the factorization approach to detect outliers and reconstruct the 3D model Experimental results • Use the derived rank constraints • Find the motion and shape matrices • Detect outliers and refine inliers
Motivation • Structure from planar motion in surveillance videos • A very common setting: stationary perspective camera, objects moving on the ground plane • Sample video
Background subtraction • Intensity-based segmentation • Set the threshold according to the statistical variation of the background intensity • Post-processing: group small regions and apply morphological operations
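A minimal sketch of the intensity-based segmentation, assuming a per-pixel background mean/standard-deviation model and a k-sigma threshold; the morphological cleanup and minimum-region size are illustrative choices, not the system's actual parameters.

```python
import numpy as np
from scipy import ndimage

def subtract_background(frame, bg_mean, bg_std, k=3.0, min_region=50):
    """Segment moving pixels by thresholding against background statistics.

    frame, bg_mean, bg_std : 2D grayscale arrays (per-pixel background model).
    k          : threshold in units of the background standard deviation.
    min_region : discard connected components smaller than this (in pixels).
    """
    # Pixels deviating more than k standard deviations are foreground.
    fg = np.abs(frame - bg_mean) > k * np.maximum(bg_std, 1e-3)
    # Morphological post-processing: close small gaps, then drop tiny regions.
    fg = ndimage.binary_closing(fg, structure=np.ones((3, 3)))
    labels, n = ndimage.label(fg)
    sizes = ndimage.sum(fg, labels, range(1, n + 1))
    keep = np.isin(labels, 1 + np.flatnonzero(sizes >= min_region))
    return keep
```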
Forming the measurement matrix using tracked feature points • Use the KLT tracker • Replace the appropriate number of features when tracking is lost • Feed those feature points to the 3D modeling algorithm • Suppose N points are tracked over M frames • Form the 2M x N measurement matrix by stacking the image coordinates of the tracked points • Exploit the rank-3 constraint • Resolve the factorization ambiguity • Get the 3D shape matrix (see the sketch below)
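A hedged sketch of the factorization step: stack the tracked coordinates into a 2M x N measurement matrix and enforce the rank-3 constraint via SVD. Resolving the factorization ambiguity and the outlier detection/refinement steps are omitted, and the exact matrix layout is an assumption based on standard factorization methods, not necessarily the paper's.

```python
import numpy as np

def factorize(tracks):
    """Rank-3 factorization of a measurement matrix built from feature tracks.

    tracks : (M, N, 2) array of (u, v) coordinates of N points over M frames.
    Returns (motion, shape) such that the centered measurement matrix is
    approximately motion @ shape.
    """
    M, N, _ = tracks.shape
    # Stack the u rows then the v rows to form the 2M x N measurement matrix.
    W = np.vstack([tracks[:, :, 0], tracks[:, :, 1]])
    W = W - W.mean(axis=1, keepdims=True)        # center each row
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    # Enforce the rank-3 constraint implied by planar motion.
    motion = U[:, :3] * np.sqrt(s[:3])
    shape = np.sqrt(s[:3])[:, None] * Vt[:3, :]
    return motion, shape
```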
Summary • Developed many approaches for 3D modeling of sites, humans and vehicles • On demand rendering of activities possible • Useful in after action reports • Useful in mission planning and simulation • Thanks for the memories! • For nine years of uninterrupted support • For letting us do what we like to do • For giving us Cathi, Sue and Patricia • Mike and Laurel too!
Publications • A. Agrawal and R. Chellappa, "3D Model Refinement using Surface-Parallax," IEEE ICASSP, 2004. • A. Agrawal and R. Chellappa, "Robust Ego-Motion Estimation and 3D Model Refinement Using Depth Based Parallax Model," IEEE ICIP, 2004. • A. Agrawal, R. Meth and R. Chellappa, "Hierarchical DEM Refinement using Surface Parallax," 24th Army Science Conference, Orlando, FL, 2004. • A. Agrawal and R. Chellappa, "Robust Ego-Motion Estimation and 3D Model Refinement in Scenes with Varying Illumination," IEEE MOTION 2005 (oral). • A. Agrawal and R. Chellappa, "Moving Object Segmentation and Dynamic Scene Reconstruction Using Two Frames," IEEE ICASSP 2005 (Best Student Paper Award). • A. Agrawal and R. Chellappa, "Fusing Depth and Video using Rao-Blackwellized Particle Filter," First International Conference on Pattern Recognition and Machine Intelligence (PReMI), Kolkata, Dec. 2005 (oral). • A. Agrawal, R. Chellappa and R. Raskar, "An Algebraic Approach to Surface Reconstruction from Gradient Fields," Proc. Intl. Conf. on Computer Vision, Beijing, China, Oct. 2005. • A. Agrawal, R. Raskar and R. Chellappa, "Edge Suppression by Gradient Field Transformation using Cross-Projection Tensors," Proc. IEEE Conf. on Computer Vision and Pattern Recognition, New York, NY, June 2006. • A. Agrawal, R. Raskar and R. Chellappa, "What is the Range of Surface Reconstructions from a Gradient Field?," European Conf. on Computer Vision, Graz, Austria, May 2006 (oral presentation, 4.5% acceptance). • A. Agrawal and R. Chellappa, "Robust Egomotion Estimation and 3D Model Refinement Using Surface Parallax," IEEE Trans. on Image Processing, vol. 15, pp. 1215-1225, May 2006. • J. Li and R. Chellappa, "Structure from Planar Motion," IEEE Trans. on Image Processing, vol. 15, pp. 3466-3477, Nov. 2006. • A. Mohananchettiar, V. Cevher, V. G. CuQlock-Knopp, R. Chellappa and J. Merritt, "Terrain Drop Detection using Hyperstereo," Proceedings of the SPIE, April 2007 (journal version in preparation). • A. C. Sankaranarayanan, A. Srivastava and R. Chellappa, "Algorithmic and Architectural Optimizations for Computationally Efficient Particle Filtering," IEEE Trans. on Image Processing, vol. 17, pp. 737-748, May 2008. • A. C. Sankaranarayanan, A. Veeraraghavan and R. Chellappa, "Distributed Detection, Tracking and Recognition using a Network of Video Cameras," invited paper, Proceedings of the IEEE, vol. 96, pp. 1606-1624, Oct. 2008. • V. Cevher, A. C. Sankaranarayanan, M. F. Duarte, D. Reddy, R. G. Baraniuk and R. Chellappa, "Compressive Sensing for Background Subtraction," Proc. European Conf. on Computer Vision, Marseille, France, Oct. 2008. • A. C. Sankaranarayanan, R. Patro, P. Turaga, A. Varshney and R. Chellappa, "Modeling and Visualization of Human Activities for Multi-Camera Networks," EURASIP Journal on Applied Signal Processing (to appear).