GIS and Image Processing for Environmental Analysis with Outdoor Mobile Robots Presenter: Paul Kelly Co-author: Gordon Dodds School of Electrical & Electronic Engineering, Queen’s University Belfast, Northern Ireland
Background • Ground-level images give high resolution and multiple views • Perspective transformation is necessary to use the images for change detection in 2-D • This requires geographical knowledge of ground elevation, building outlines, etc. • In many areas a Geographical Information System (GIS) can be used to augment the images taken by a mobile system School of Electrical and Electronic Engineering, Queen’s University Belfast
Why use GIS? • Easy access to surveyed geographical data • Use existing spatial analysis and processing functionality • Already contains advanced visualisation capabilities that can be adapted to combine observed images with GIS data • Output of visual surveying will become an input to the GIS
Methodology Outline 1. Camera calibration – Image correction 2. 3-D Database and view reconstruction 3. For each image frame • Camera location approximation (DGPS) • Accurate camera localisation using GIS data • Ground-level image / GIS processing 4. *Change detection and logging 5. *Path and mission planning for change mapping (* to be covered in later publications)
Camera Use & Calibration • Low-cost consumer Digital Video (DV) camera • Images corrected for DV pixel aspect ratio and radial lens distortion (based on straight-line fitting) • Focal length measured experimentally • Also calibrated for colour and luminance for change detection
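As a concrete illustration, the radial correction step can be sketched with a one-parameter polynomial model; the coefficient `k1` and the distortion centre `(cx, cy)` are illustrative assumptions, not the calibrated figures from this work:

```python
import numpy as np

def undistort_points(pts, k1, cx, cy):
    """Correct radial lens distortion with a one-parameter polynomial model.

    pts      -- (N, 2) array of distorted pixel coordinates
    k1       -- radial distortion coefficient (assumed known from calibration)
    (cx, cy) -- assumed centre of distortion (e.g. the image centre)
    """
    p = np.asarray(pts, dtype=float) - (cx, cy)   # centre the coordinates
    r2 = (p ** 2).sum(axis=1, keepdims=True)      # squared radial distance
    return p * (1.0 + k1 * r2) + (cx, cy)         # scale radially, re-centre
```

In practice `k1` would be estimated by fitting image lines that are known to be straight in the world, as the straight-line-fitting bullet above describes.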
Perspective Transformation of a Single GIS “Image” • Camera calibration data / interior orientation parameters transferred to GIS 3-D visualisation module • This enables • Photogrammetric calculations • Generation of “camera-eye views” in GIS • Pixel-by-pixel mapping to real-world co-ordinates • GRASS GIS modified to facilitate this
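A minimal sketch of the photogrammetric calculation behind a “camera-eye view”, assuming a pinhole model whose interior orientation (focal length in pixels, principal point) comes from the calibration step; all parameter names are illustrative:

```python
import numpy as np

def camera_eye_view(world_pt, cam_pos, R, f, cx, cy):
    """Project a world point (easting, northing, elevation) to pixel
    coordinates with a pinhole model.

    R        -- world-to-camera rotation matrix (3x3)
    f        -- focal length in pixels
    (cx, cy) -- principal point
    Returns (u, v) or None if the point lies behind the camera.
    """
    p = R @ (np.asarray(world_pt, float) - np.asarray(cam_pos, float))
    if p[2] <= 0.0:
        return None  # behind the camera, not visible
    return (f * p[0] / p[2] + cx, f * p[1] / p[2] + cy)
```

Applied to every cell of the GIS data, this yields the per-pixel correspondence between the rendered view and real-world co-ordinates described above.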
GIS 3-D View GIS “camera-eye view” of vector boundary data and GPS spot heights
GIS 3-D View • Easting: 352552 m, Northing: 336353 m, Elevation: 16.93 m • 3-D reverse look-up of point co-ordinates • Do this for every pixel, combining with image (colour) data
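The reverse look-up can be sketched as marching along a pixel's viewing ray until it drops below the digital elevation model; the grid layout, cell size, and step values here are assumptions for illustration:

```python
import numpy as np

def pixel_to_ground(cam_pos, ray_dir, dem, cell, step=0.5, max_dist=500.0):
    """Intersect a viewing ray with a gridded DEM by simple ray marching.

    cam_pos -- camera position (easting, northing, elevation)
    ray_dir -- viewing direction of the pixel in world axes
    dem     -- 2-D array of elevations, indexed [north, east]
    cell    -- DEM cell size in metres
    Returns the approximate ground point, or None if nothing is hit.
    """
    pos = np.asarray(cam_pos, float)
    d = np.asarray(ray_dir, float)
    d = d / np.linalg.norm(d)                      # unit direction
    for t in np.arange(0.0, max_dist, step):
        p = pos + t * d
        i, j = int(p[1] // cell), int(p[0] // cell)
        if 0 <= i < dem.shape[0] and 0 <= j < dem.shape[1]:
            if p[2] <= dem[i, j]:                  # ray has reached the ground
                return p
    return None
```

Doing this for every pixel, and attaching the image colour to each recovered ground point, gives the combined image/GIS product described above.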
Multiple Images — Results
Accurate Camera Localisation (processing flow) Inputs: image data; GIS vector data; camera calibration data; GIS perspective transform model; GIS digital elevation model; low-resolution GPS data (initial position estimate) 1. Select vector features for visibility in image 2. Project vector features into image frame 3. Determine bounding box of ROI 4. Segmentation and edge detection for these features 5. Perform Modified Hough Transform on this ROI 6. Update camera position from MHT results 7. Iterate to final calculated position
Camera Location Approximation • Low-cost 2-metre-resolution GPS • Yaw/pitch/roll inertial sensor • Sensor fusion yields an initial estimate of position (easting, northing, elevation) and orientation
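A minimal sketch of forming that initial pose estimate, assuming a simple yaw-pitch-roll (Z-Y-X) Euler convention; the angle order and axis conventions are assumptions, not taken from the paper:

```python
import numpy as np

def initial_pose(gps_enu, yaw, pitch, roll):
    """Fuse a GPS fix with inertial yaw/pitch/roll into an initial pose.

    gps_enu -- (easting, northing, elevation) from the low-cost GPS
    angles  -- radians; composed as Rz(yaw) @ Ry(pitch) @ Rx(roll)
    Returns (position, rotation matrix) as the starting estimate.
    """
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
    return np.asarray(gps_enu, float), Rz @ Ry @ Rx
```

This estimate is only a seed; the GIS-based localisation below refines it.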
Accurate Camera Localisation • Match as many features as possible between GIS vector data (e.g. buildings, land features) and the raster-based camera image • Use vector attributes from the GIS to improve image processing • Optimisation approach based on the Modified Hough Transform • Largest errors are in roll, pitch and yaw; image-based information will significantly reduce these
GIS Data • Initial approximation of observer position • House (example GIS feature) • Measured low-resolution GPS points
GIS-aided Landmark Extraction 1. Distortion-corrected image acquired with vehicle-mounted DV camera 2. Projected house outline from GIS 3-D view module 3. Arbitrary search ROI
GIS-aided Landmark Extraction 4. House found within ROI (using image processing)
Automatic Camera Localisation • Update the approximation of camera location until object positions coincide (normally 3 non-coplanar objects) • Simultaneously use many vector features from the GIS data that may also be identified through image processing (hedges, walls, etc.)
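The iterative update can be illustrated with a toy coordinate-descent loop that nudges the estimated position to reduce the reprojection error of matched GIS features; this is a stand-in for the Modified Hough Transform update, not the actual algorithm, and all names are illustrative:

```python
import numpy as np

def refine_position(cam_pos, world_pts, img_pts, project, iters=50, step=0.1):
    """Greedy refinement of camera position against matched features.

    world_pts -- GIS feature points (e.g. building corners) in world axes
    img_pts   -- (N, 2) pixel locations found by image processing
    project   -- callable mapping (world_pt, cam_pos) -> pixel coords
    Accepts a trial move along an axis only if it lowers the total
    squared reprojection error; iterates until no move helps.
    """
    pos = np.asarray(cam_pos, float).copy()
    img_pts = np.asarray(img_pts, float)

    def err(p):
        proj = np.array([project(w, p) for w in world_pts])
        return ((proj - img_pts) ** 2).sum()

    for _ in range(iters):
        for ax in range(3):            # easting, northing, elevation
            for delta in (step, -step):
                trial = pos.copy()
                trial[ax] += delta
                if err(trial) < err(pos):
                    pos = trial
    return pos
```

With three or more non-coplanar matched objects, as the slide requires, the position estimate converges toward the configuration where projected and observed features coincide.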
Requirements for Extension to Real-time Usage • Remote access to a server running GIS and image processing • Efficient GIS / mobile robot interfaces • Use GIS attributes to select landmark features that are likely to have the lowest image-processing load • Pre-planning of expected routing “images”
Summary • Calibrated camera images can be enhanced with GIS data • GIS electronic map data reduces image-processing time and improves landmark extraction • Automatic perspective transformation of multiple images and view reconstruction enables 3-D changes to be found • May be used in real time with some efficiency improvements • GIS use greatly improves efficiency in vision-based navigation and environmental analysis