Modelling Spatial Video as part of a GIS Video-Analysis Framework
Paul Lewis

Introduction

It is now common for video, real-time and collected, mobile and static, to be georeferenced and stored in large archives for users of expert systems to access and interact with. In the ground-based terrestrial context, georeferenced video has also become more commonly collected and accessed in recent years. Collecting, storing and distributing these data are now more easily achieved through the development and affordability of dedicated systems such as Mobile Mapping Systems (MMS); the StratAG XP1 experimental platform is shown in figure 1. Such MMS are being used for infrastructural monitoring and mapping because they can now generate high-accuracy geospatial data.

Figure 1: StratAG Mobile Mapping System (MMS) XP1 experimental platform.

Spatial video post-survey processing (figure 4):
- Spatial video survey data loaded onto a PC.
- Spatial data: decode the GPS NMEA messages.
- Video data: converted to an easily indexed format; decode the video frame index IDs and store the video.
- Automatic database population: build the Spatial Video Viewpoints model (figure 2).
- Spatial video player/viewer, with GIS query and analysis of the Viewpoints database.

Figure 4: Spatial video post-survey processing flowchart.

Spatial Extrapolation Model

The automated process performed by the software shown in figure 3 is highlighted here in figure 4.

Viewpoint Implementation

Using standard photogrammetric methods from the camera calibration literature [1][2], applied here in the general case, we implement the steps listed below to generate a Viewpoint's extents.

Camera Model (Table 1)

The Circle of Confusion (COC) is the maximum permissible blur circle for an image and directly affects the Depth-of-Field (DOF) calculations. The Angle-of-View (AOV) represents the camera lens properties as an arc angle. The DOF hyperfocal distance is a measurable distance in front of the camera lens from which point the DOF extends to infinity.

Table 1: Implemented camera model operations.
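The camera-model quantities above (hyperfocal distance, AOV) follow from standard lens formulas. A minimal sketch is given below; the focal length, f-number, circle of confusion and sensor width are illustrative values only, not the poster's calibrated camera parameters:

```python
import math

def hyperfocal_distance(f_mm, aperture_n, coc_mm):
    """DOF hyperfocal distance: focusing at this distance makes the
    depth of field extend from roughly half this distance to infinity."""
    return f_mm ** 2 / (aperture_n * coc_mm) + f_mm

def angle_of_view(sensor_dim_mm, f_mm):
    """Angle-of-View (AOV) across one sensor dimension, in degrees."""
    return math.degrees(2 * math.atan(sensor_dim_mm / (2 * f_mm)))

# Illustrative lens parameters (assumed, not from the poster):
f = 8.0      # focal length, mm
n = 4.0      # f-number (aperture)
coc = 0.011  # circle of confusion, mm

h_mm = hyperfocal_distance(f, n, coc)
print(f"Hyperfocal distance: {h_mm / 1000:.2f} m")         # ~1.46 m
print(f"Horizontal AOV: {angle_of_view(4.8, f):.1f} deg")  # ~33.4 deg
```

With these example values the focus range used to bound a Viewpoint would start near half the hyperfocal distance and extend to infinity, which is why the model optimises the polygon in terms of perceptible depth.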
1. Adjust the GPS coordinates to be coincident with the principal point of the camera image plane.
2. Calculate an adjusted hyperfocal sharpness distance to the eight Viewpoint plane intersection points (figure 3).
3. Calculate an adjusted azimuth.
4. Calculate adjusted altitudes of the Viewpoint plane intersection points.
5. Use these results to solve the 3D geodetic forward algorithm as defined by Vincenty [3].

However, significant problems exist in accessing large volumes of archived, georeferenced MMS data in a GIS context, particularly video, which is the subject of this poster. These problems include the semantics behind modelling its spatial content and the development of a computationally efficient query model that can isolate video sequences of geographical interest. This poster outlines a georeferenced-video framework in which a GIS-viewshed-oriented approach to modelling and querying terrestrial mobile imagery is detailed. Technically, this viewshed model implements a subtle and more flexible optimisation of the Open Geospatial Consortium's (OGC) Geo-Video (GVS) ViewCone data type; however, it does define some fundamentally different properties. This in turn has enabled a significantly different spatial context to be defined for the geographic space that terrestrial mobile video captures, by optimising it in terms of areal coverage and perceptible depth.

Results and Applications

Figure 5 shows one result from the large system-calibration testing, while figure 6 highlights one GIS analysis operation on a large spatial video data warehouse. In the latter case a buildings-footprint data set is used to generate higher-accuracy Viewpoints by intersecting both GIS-constrained spatial data sets.

Figure 6: Using buildings-footprint spatial data to generate higher-accuracy Viewpoints.

Spatial Video Data Model

A view frustum model, calculated from the camera parameters, forms the basis of the geographical-extent representation of each spatial video frame (or static sequence).
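The geodetic forward (direct) problem used when projecting Viewpoint corner points takes the capture point, an adjusted azimuth and a distance, and returns destination coordinates. The poster uses Vincenty's iterative ellipsoidal solution [3]; the sketch below shows the same forward computation on a sphere, which is simpler but less accurate, and the example coordinates are hypothetical:

```python
import math

MEAN_EARTH_RADIUS_M = 6371008.8  # mean Earth radius, metres

def forward_geodesic(lat_deg, lon_deg, azimuth_deg, distance_m):
    """Forward (direct) geodesic on a sphere: given a start point,
    an azimuth and a distance, return the destination point.
    Vincenty [3] solves the same problem on the ellipsoid."""
    lat1, lon1 = math.radians(lat_deg), math.radians(lon_deg)
    az = math.radians(azimuth_deg)
    delta = distance_m / MEAN_EARTH_RADIUS_M  # angular distance

    lat2 = math.asin(math.sin(lat1) * math.cos(delta)
                     + math.cos(lat1) * math.sin(delta) * math.cos(az))
    lon2 = lon1 + math.atan2(
        math.sin(az) * math.sin(delta) * math.cos(lat1),
        math.cos(delta) - math.sin(lat1) * math.sin(lat2))
    return math.degrees(lat2), math.degrees(lon2)

# Hypothetical example: project a Viewpoint corner 25 m from the
# capture point along an adjusted azimuth of 45 degrees.
corner = forward_geodesic(53.3811, -6.5918, 45.0, 25.0)
```

Over the tens of metres covered by a single Viewpoint the spherical and ellipsoidal solutions differ by millimetres, but the ellipsoidal form is the appropriate choice for survey-grade MMS data.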
The OGC GVS implementation uses this view-frustum principle in a restricted, two-dimensional ViewCone data structure. Our Viewpoint approach (figure 2) has led to a distinct change in the overall GVS data structure: from a single polygonal representation of space to a capture point plus a disconnected but associated image-space polygon representation set. The camera image-capture location is represented by a point, while the polygon defines the geographical space, as the optimised focus range, for the image space of each frame. Figure 3 shows a software tool for semi-supervised fitting of spatial video to a Viewpoint data view in a GIS. In this tool both the GIS space and the camera/image properties can be adjusted to achieve an accurate geographical-space representation.

Figure 2: 3D spatial video Viewpoint.
Figure 3: Software module for semi-supervised spatial video Viewpoint fitting.
Figure 5: Plan view of Viewpoint fitting in a test scenario, showing accuracy results.

References
1. Wheeler, R.E. Notes on View Camera Geometry. 2003, 55. www.bobwheeler.com/photo/ViewCam.pdf.
2. Wolf, P.R. and DeWitt, B.A. Elements of Photogrammetry (with Applications in GIS). McGraw-Hill Higher Education, 2000.
3. Vincenty, T. Direct and Inverse Solutions of Geodesics on the Ellipsoid with Application of Nested Equations. Survey Review 23, 176 (1975), 88-93.

Research presented in this poster was funded by a Strategic Research Cluster grant (07/SRC/I1168) from Science Foundation Ireland under the National Development Plan. The authors gratefully acknowledge this support.