Next Generation 4D Distributed Modeling and Visualization: Automated 3D Model Construction for Urban Environments Christian Frueh, John Flynn, Avideh Zakhor University of California, Berkeley June 13, 2002
Presentation Overview • Introduction • Ground based modeling • Mesh processing • Airborne modeling • Aerial photos • Airborne laser scans • 3D Model Fusion • Rendering • Conclusion and Future Work
Introduction Goal: Generate a 3D model of a city for virtual walk/drive/fly-thrus and simulations • Fast • Automated • Photorealistic Needed for Fly-Thru: • 3D model of terrain and building tops & sides • coarse resolution Needed for Walk/Drive-Thru: • 3D model of street scenery & building façades • highly detailed
Introduction Airborne Modeling (laser scans/images from plane) → 3D model of terrain and building tops Ground Based Modeling (laser scans & images from acquisition vehicle) → 3D model of building façades Fusion → Complete 3D City Model
Airborne Modeling Goal: Acquisition of terrain shape and top-view building geometry Available Data: • Aerial photos • Airborne laser scans Texture: from aerial photos Geometry: 2 approaches: I) stereo matching of photos II) airborne laser scans
Airborne Modeling Approach I: Stereo Matching (last year) Stereo photo pairs from city/urban areas, ~60% overlap Semi-automatic process: Manual: • Segmentation Automated: • Camera parameter computation • Matching • Distortion reduction • Model generation
Stereo Matching Stereo pair from downtown Berkeley and the estimated disparity after removing perspective distortions
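A minimal sketch of the disparity-estimation step, using OpenCV block matching on a rectified stereo pair. This is not the authors' semi-automatic pipeline; it only illustrates how a dense disparity map can be obtained once the pair is rectified. The file names are placeholders.

```python
# Sketch only: disparity from a rectified aerial stereo pair via block matching.
import cv2

left = cv2.imread("aerial_left.png", cv2.IMREAD_GRAYSCALE)    # placeholder files
right = cv2.imread("aerial_right.png", cv2.IMREAD_GRAYSCALE)

# numDisparities must be a multiple of 16; blockSize is the matching window size.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right)  # fixed-point disparity (scaled by 16)

out = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
cv2.imwrite("disparity.png", out)
```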
Stereo Matching Results Downtown Oakland
Airborne Modeling Approach II: Airborne Laser Scans Scanning the city from a plane • Resolution: 1 scan point/m² • Berkeley: 40 million scan points → point cloud
Airborne Laser Scans • Re-sampling point cloud • Sorting into grid • Filling holes Map-like height field usable for: • Monte Carlo Localization • Mesh Generation
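A minimal sketch (not the authors' code) of re-sampling the airborne point cloud into a regular grid height field and filling empty cells from the nearest occupied cell. Here `points` is assumed to be an (N, 3) array of (x, y, z) coordinates in meters, and the 1 m cell size matches the scan resolution quoted above.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def points_to_height_field(points, cell=1.0):
    xy_min = points[:, :2].min(axis=0)
    ij = np.floor((points[:, :2] - xy_min) / cell).astype(int)
    height = np.full(ij.max(axis=0) + 1, np.nan)
    # Keep the highest return per cell so roof points win over ground points.
    for (i, j), z in zip(ij, points[:, 2]):
        if np.isnan(height[i, j]) or z > height[i, j]:
            height[i, j] = z
    # Fill holes with the value of the nearest non-empty cell.
    idx = distance_transform_edt(np.isnan(height),
                                 return_distances=False, return_indices=True)
    return height[tuple(idx)]
```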
Textured Mesh Generation 1. Connecting grid vertices to a mesh 2. Applying Q-slim simplification 3. Texture mapping: • Semi-automatic • Manual selection of a few correspondence points: 10 min for entire Berkeley • Automated camera pose estimation • Automated computation of texture for mesh
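A minimal sketch of step 1: triangulating the regular height-field grid into a mesh with two triangles per cell. Q-slim simplification and texture mapping are separate steps and are not shown here.

```python
import numpy as np

def grid_to_mesh(height, cell=1.0):
    rows, cols = height.shape
    xs, ys = np.meshgrid(np.arange(cols) * cell, np.arange(rows) * cell)
    vertices = np.column_stack([xs.ravel(), ys.ravel(), height.ravel()])
    faces = []
    for r in range(rows - 1):
        for c in range(cols - 1):
            i = r * cols + c                            # top-left vertex of the cell
            faces.append((i, i + 1, i + cols))          # upper triangle
            faces.append((i + 1, i + cols + 1, i + cols))  # lower triangle
    return vertices, np.array(faces)
```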
Airborne Model East Berkeley campus with campanile
Airborne Model Downtown Berkeley http://www-video.eecs.berkeley.edu/~frueh/3d/airborne/
Ground Based Modeling Goal: Acquisition of highly detailed 3D building façade models • Scanning setup • vertical 2D laser scanner for geometry capture • horizontal scanner for pose estimation • Acquisition vehicle • Truck with rack: • 2 fast 2D laser scanners • digital camera
Scan Matching & Initial Path Computation Horizontal laser scans: • Continuously captured during vehicle motion • Overlapping Relative position estimation by scan-to-scan matching: translation (u, v) and rotation θ Adding the relative steps (uᵢ, vᵢ, θᵢ) yields the path as a sequence of 3 DOF poses (xᵢ, yᵢ, θᵢ), i.e. (x, y, yaw)
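A minimal sketch of the path-accumulation step: chaining the relative scan-matching results (du, dv, dθ), each expressed in the previous scan's local frame, into a global 3 DOF path. The scan-to-scan matcher itself is not shown.

```python
import math

def integrate_path(steps, x=0.0, y=0.0, yaw=0.0):
    """steps: iterable of (du, dv, dtheta) relative motions from scan matching."""
    path = [(x, y, yaw)]
    for du, dv, dtheta in steps:
        # Rotate the local translation into the global frame, then accumulate.
        x += du * math.cos(yaw) - dv * math.sin(yaw)
        y += du * math.sin(yaw) + dv * math.cos(yaw)
        yaw += dtheta
        path.append((x, y, yaw))
    return path
```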
6 DOF Pose Estimation From Images • Scan matching cannot estimate vertical motion • Small bumps and rolls • Slopes in hill areas • Full 6 DOF pose of the vehicle is important; affects: • Future processing of the 3D and intensity data • Texture mapping of the resulting 3D models • Extend initial 3 DOF pose by deriving missing 3 DOF (z, pitch, roll) from images
6 DOF Pose Estimation From Images Central idea: photo-consistency • Each 3D scan point can be projected into images using initial 3 DOF pose • If pose estimate is correct, point should appear the same in all images • Use discrepancies in projected position of 3D points within multiple images to solve for the full pose
6 DOF Pose Estimation – Algorithm • 3 DOF pose from laser scan matching as initial estimate • Project scan points into both images • If not consistent, use image correlation to find the correct projection • RANSAC used for robustness
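A minimal sketch of the photo-consistency idea: project a 3D scan point into two images under candidate camera poses and compare small patches around the projections with normalized cross-correlation. The full algorithm additionally searches for the best reprojection and applies RANSAC; the intrinsics K and poses (R, t) are assumed inputs, and the patches are assumed to lie well inside the images.

```python
import numpy as np

def project(K, R, t, X):
    """Project world point X (3,) into pixel coordinates with a pinhole camera."""
    x = K @ (R @ X + t)
    return x[:2] / x[2]

def patch_ncc(img_a, ua, img_b, ub, half=7):
    """Normalized cross-correlation of two (2*half+1)^2 patches around ua, ub."""
    ax, ay = int(round(ua[0])), int(round(ua[1]))
    bx, by = int(round(ub[0])), int(round(ub[1]))
    pa = img_a[ay - half:ay + half + 1, ax - half:ax + half + 1].astype(float)
    pb = img_b[by - half:by + half + 1, bx - half:bx + half + 1].astype(float)
    pa, pb = pa - pa.mean(), pb - pb.mean()
    return float((pa * pb).sum() / (np.linalg.norm(pa) * np.linalg.norm(pb) + 1e-9))
```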
6 DOF Pose Estimation – Results with 3 DOF pose with 6 DOF pose
Monte Carlo Localization (1) Previously: Global 3 DOF pose correction using aerial photography a) path before MCL correction b) path after MCL correction After correction, points fit to edges of aerial image
Monte Carlo Localization (2) Extend MCL to work with airborne laser data and 6 DOF pose Now: No perspective shifts of building tops, no shadow lines • Fewer particles necessary, increased computation speed • Significantly higher accuracy near high buildings and tree areas Use terrain shape to estimate z coordinate of truck • Correct additional DOF for vehicle pose (z, pitch, roll) • Modeling not restricted to flat areas
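A minimal sketch of one Monte Carlo Localization step under the assumptions of this slide: particles carry an (x, y, yaw) hypothesis, are propagated with the noisy relative motion from scan matching, weighted by how well they agree with the airborne data, and resampled. `score_particle` is a hypothetical placeholder for the authors' actual scoring against the height field / edge map.

```python
import numpy as np

def mcl_step(particles, motion, score_particle, noise_std=(0.1, 0.1, 0.01)):
    """particles: (N, 3) array of (x, y, yaw); motion: (du, dv, dtheta)."""
    n = len(particles)
    du, dv, dtheta = motion
    noise = np.random.randn(n, 3) * noise_std
    cos, sin = np.cos(particles[:, 2]), np.sin(particles[:, 2])
    # Propagate each particle with the relative step expressed in its own frame.
    particles[:, 0] += du * cos - dv * sin + noise[:, 0]
    particles[:, 1] += du * sin + dv * cos + noise[:, 1]
    particles[:, 2] += dtheta + noise[:, 2]
    # Weight by agreement with the airborne map, then resample.
    weights = np.array([score_particle(p) for p in particles], dtype=float)
    weights /= weights.sum()
    idx = np.random.choice(n, size=n, p=weights)
    return particles[idx].copy()
```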
Monte Carlo Localization (3) Track global 3D position of vehicle to correct relative 6 DOF motion estimates Resulting corrected path overlaid with airborne laser height field
Path Segmentation 24 min drive, 6,769 meters vertical scans: 107,082 scan points: ~15 million Too large to process as one block! Segment path into quasi-linear pieces: • Cut path at curves and empty areas • Remove redundant segments
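A minimal sketch of the curve-cutting rule: a new segment starts once the heading has drifted more than a threshold from the heading at the segment start. Cutting at empty areas and removing redundant segments are not shown; `path` is a list of (x, y, yaw) poses and the 30° threshold is an assumption.

```python
import math

def segment_path(path, max_turn_deg=30.0):
    segments, current = [], [path[0]]
    for pose in path[1:]:
        turn = abs(math.degrees(pose[2] - current[0][2]))
        if turn > max_turn_deg:      # curve: close this segment, start a new one
            segments.append(current)
            current = []
        current.append(pose)
    segments.append(current)
    return segments
```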
Path Segmentation Resulting path segments overlaid with edges of airborne laser height map
Simple Mesh Generation Point cloud → triangulate → mesh • Problem: • Partially captured foreground objects • Erroneous scan points due to glass reflection Side views look “noisy” → Remove foreground: extract façades
Façade Extraction and Processing (1) 1. Transform path segment into a depth image: depth value sn,υ for each scan point Pn,υ 2. Histogram analysis over vertical scans: the main depth (histogram peak) corresponds to the façade; a local minimum in front of it gives the split depth between foreground and background
Façade Extraction and Processing (2) 3. Separate depth image into 2 layers: foreground = trees, cars, etc.; background = building façades
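A minimal sketch of the split idea from this and the previous slide: per vertical scan (depth-image column), take the dominant histogram depth as the façade and label clearly closer points as foreground. The authors' actual split-depth detection via a local histogram minimum is more elaborate; the margin and bin size here are assumptions, and the depth image is assumed to be a float array with NaN for missing returns.

```python
import numpy as np

def split_layers(depth_image, bin_size=0.5, margin=2.0):
    background = np.full_like(depth_image, np.nan)
    foreground = np.full_like(depth_image, np.nan)
    for col in range(depth_image.shape[1]):
        column = depth_image[:, col]
        valid = column[~np.isnan(column)]
        if valid.size == 0:
            continue
        bins = np.arange(0.0, valid.max() + 2 * bin_size, bin_size)
        hist, edges = np.histogram(valid, bins=bins)
        main_depth = edges[np.argmax(hist)]          # dominant depth = façade
        is_bg = column >= main_depth - margin
        background[:, col] = np.where(is_bg, column, np.nan)
        foreground[:, col] = np.where(~is_bg, column, np.nan)
    return foreground, background
```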
Façade Extraction and Processing (3) 4. Process background layer: • Detect and remove invalid scan points • Fill areas occluded by foreground objects by extending geometry from boundaries • Horizontal, vertical, planar interpolation, RANSAC • Apply segmentation • Remove isolated segments • Fill remaining holes in large segments • Final result: “clean” background layer
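A minimal sketch of one of the fill strategies listed above: horizontal linear interpolation across occluded (NaN) runs in each row of the background depth layer. Vertical and planar interpolation and RANSAC plane fitting are analogous and not shown.

```python
import numpy as np

def fill_rows(background):
    """Fill NaN runs in each row of a float depth layer by linear interpolation."""
    filled = background.copy()
    for row in filled:                       # each row is a view into `filled`
        nan = np.isnan(row)
        if nan.all() or not nan.any():
            continue
        x = np.arange(row.size)
        row[nan] = np.interp(x[nan], x[~nan], row[~nan])
    return filled
```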
Façade Extraction – Examples (1) with processing without processing
Façade Extraction – Examples (2) without processing with processing
Façade Extraction – Examples (3) without processing with processing
Mesh Generation Downtown Berkeley
Automatic Texture Mapping (1) Camera is calibrated and synchronized with the laser scanners, so the transformation matrix between camera image and laser scan vertices can be computed 1. Project geometry into images 2. Mark occluding foreground objects in the images 3. For each background triangle: search for pictures in which the triangle is not occluded, and texture it with the corresponding picture area
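A minimal sketch of step 3: project a mesh triangle into each candidate camera and keep the first image in which none of its vertices fall on a marked foreground mask. The 3×4 projection matrices and per-image occlusion masks are assumed inputs; the authors' selection criteria (e.g. viewing angle, resolution) are not reproduced.

```python
import numpy as np

def project_point(P, X):
    """Project a 3D point X with a 3x4 camera matrix P into pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def pick_texture_image(triangle, cameras, occlusion_masks):
    """triangle: (3, 3) vertex array; cameras: list of 3x4 matrices."""
    for P, mask in zip(cameras, occlusion_masks):
        uv = np.array([project_point(P, v) for v in triangle])
        cols, rows = uv[:, 0].astype(int), uv[:, 1].astype(int)
        in_image = ((cols >= 0) & (cols < mask.shape[1]) &
                    (rows >= 0) & (rows < mask.shape[0]))
        if in_image.all() and not mask[rows, cols].any():
            return P, uv                 # unoccluded: texture from this image
    return None, None                    # occluded everywhere: a "texture hole"
```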
Automatic Texture Mapping (2) Efficient representation: texture atlas • Copy texture of all triangles into a “mosaic” image • Typical texture reduction: factor 8–12
Automatic Texture Mapping (3) Large foreground objects: some of the filled-in triangles are not visible in any image! → “texture holes” in the atlas Texture synthesis (preliminary), sketched below: • Mark holes corresponding to non-textured triangles in the atlas • Search the image for areas matching the hole boundaries • Fill the hole by copying the missing pixels from these areas
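A minimal sketch of the hole-filling idea, simplified to a rectangular hole: scan the atlas for the window whose 1-pixel border best matches the hole's border, then copy that window's interior into the hole. The authors' synthesis handles arbitrary hole shapes; the coarse search stride and rectangle are simplifications.

```python
import numpy as np

def fill_rect_hole(atlas, top, left, h, w):
    border = np.ones((h, w), bool)
    border[1:-1, 1:-1] = False                       # 1-pixel ring of the window
    target_ring = atlas[top:top + h, left:left + w][border].astype(float)
    best, best_err = None, np.inf
    H, W = atlas.shape[:2]
    for r in range(0, H - h, 4):                     # coarse stride to keep it cheap
        for c in range(0, W - w, 4):
            if abs(r - top) < h and abs(c - left) < w:
                continue                             # skip windows overlapping the hole
            cand = atlas[r:r + h, c:c + w]
            err = np.sum((cand[border].astype(float) - target_ring) ** 2)
            if err < best_err:
                best, best_err = cand, err
    if best is not None:                             # paste the best-matching interior
        atlas[top + 1:top + h - 1, left + 1:left + w - 1] = best[1:-1, 1:-1]
    return atlas
```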
Automatic Texture Mapping (4) Texture holes marked Texture holes filled
Ground Based Modeling - Results Façade models of downtown Berkeley
Ground Based Modeling - Results Façade models of downtown Berkeley
Model Fusion Goal: Fuse the ground based and airborne models into one single model Façade model + Airborne model → Model Fusion: • Registration of models • Combining the registered meshes
Registration of Models Models are already registered with each other via Monte Carlo Localization! Remaining question: which model to use where?
Preparing Ground Based Models • Intersect path segments with each other • Remove degenerate, redundant triangles in overlapping areas original mesh vs. redundant triangles removed
Preparing Airborne Model Ground based model has 5-10 times higher resolution • Remove facades in airborne model where ground based geometry is available • Add ground based façades • Fill remaining gaps with a “blend mesh” to hide model transitions
Preparing Airborne Model Initial airborne model
Preparing Airborne Model Remove facades where ground based geometry is available
Combining Models Add ground based façade models