Subsurface mapping, deterministics and volumes
  1. Subsurface mapping, deterministics and volumes Tomislav Malvić May 2016, BEST professional seminar

  2. DETERMINISTICS
  • Deterministics has been closely connected with an exact representation of nature, in this case with relatively shallow subsurface rock lithotypes and saturations;
  • It is a way to interpret the subsurface despite measurement errors, a limited number of data, numerous heterogeneities in the geological system, etc.;
  • Deterministics is one of the possible approaches (the others are stochastics and probabilistics), the most widely applied, and it always gives a single solution, which is favourable for further interpretation.

  3. PART I ABOUT INTERPOLATION METHODS

  4. Interpolation methods
  • Estimation, i.e. interpolation, can be performed in 1, 2 or 3 dimensions.
  • Estimation can be based on known values (hard data) of the primary variable (autocorrelation) or on one or more secondary variables in the same space. The necessary condition is that the secondary variable(s) correlate strongly with the primary one.
  • Five frequently used interpolation methods are compared here:
  • Hand-made maps,
  • Inverse Distance Weighting,
  • Nearest Neighbourhood,
  • Moving Average and
  • Kriging.

  5. Hand-made interpolation (1) Hand-made interpolations (maps) are no longer extensively used in geology, geography etc. However, making relatively simple geological maps (up to 20 data) by hand is still often faster and sometimes more precise (using expert knowledge, especially about local geology) than applying computer-based mathematical algorithms. There is one important outcome of such an approach: expert knowledge can, in special cases, be greatly superior to the mathematical algorithms applied in mapping software.

  6. Hand-made interpolation (2)
  • Hand-made interpolation allows variation, simplification and improvisation in several important mapping tasks:
  1. How many hard data will be used in the estimation of unknown values;
  2. How to interpolate highly clustered data;
  3. How to draw isolines at the map margins.
  • Mathematical algorithms solve such problems by unification, i.e. standardisation of the procedure applied to similar problems. Such an approach can be tuned by:
  4. Selection of the most appropriate algorithm;
  5. Fitting of input and algorithm variables based on experiment or experience.

  7. PART II INVERSE DISTANCE WEIGHTING

  8. Inverse Distance Weighting (1)
  • New values are estimated using a relatively simple mathematical equation;
  • The influence of each hard data point is inversely proportional to its distance from the estimated point (location). The number of hard data (z1...zn) is limited to the radius of a circle/ellipsoid centred on the unknown point.
  • The result strongly depends on the distance exponent (p). That value is mostly set to 2 ("power equation"), which is confirmed as the most useful default value for many mapping purposes, but not exclusively.
  z_IU = (z1/d1^p + z2/d2^p + ... + zn/dn^p) / (1/d1^p + 1/d2^p + ... + 1/dn^p)
  where: z_IU - estimated value; d1...dn - distances of locations '1...n' from the estimated point; p - distance exponent; z1...zn - real values at locations '1...n'.
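A minimal Python sketch of this estimate, under stated assumptions: the well coordinates and porosity values are purely illustrative, the distance exponent defaults to p = 2 and an optional search radius limits the hard data taken into the calculation.

```python
import numpy as np

def idw_estimate(xy_known, z_known, xy_target, p=2.0, radius=None):
    """Inverse Distance Weighting: z = sum(z_i/d_i^p) / sum(1/d_i^p)."""
    d = np.linalg.norm(xy_known - xy_target, axis=1)
    if radius is not None:              # keep only hard data inside the circle of influence
        mask = d <= radius
        d, z = d[mask], z_known[mask]
    else:
        z = z_known
    if d.size == 0:
        return np.nan                   # no hard data inside the search radius
    if np.any(d == 0):                  # target coincides with a hard data point
        return float(z[d == 0][0])
    w = 1.0 / d**p
    return float(np.sum(w * z) / np.sum(w))

# illustrative porosity values (fractions) at well locations (x, y in metres)
wells = np.array([[100., 200.], [400., 250.], [250., 600.], [700., 500.]])
poro  = np.array([0.18, 0.21, 0.16, 0.19])
print(idw_estimate(wells, poro, np.array([350., 400.]), p=2.0, radius=500.0))
```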

  9. Inverse Distance Weighting (2) Example 1: Porosity maps obtained with Inverse Distance Weighting

  10. PART III NEAREST NEIGHBORHOOD METHOD

  11. Nearest neighborhood (1)
  • The nearest neighborhood method assigns to each polygon the value of the data point at its centre.
  • This is not strictly interpolation but zonal estimation. Although not as precise, this rough approach can be the only proper one when a small number of data is available as input. Any outcome based on 5 or fewer hard data is not a map but an estimation, where only an approximation of the data scaling can be given. It is represented by zones (zonal estimation).
  • If some places are too far from the (sparse) data, they can even be excluded from the zonal map (blind zones), as areas with too high uncertainties.
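A small sketch of such zonal estimation, assuming illustrative wells, porosities, a coarse grid and a 600 m blind-zone cut-off: each grid node simply takes the value of its closest hard data point, and nodes farther than the cut-off are left blank.

```python
import numpy as np

def nearest_neighbour_zones(xy_known, z_known, grid_xy, blind_distance=None):
    """Assign each grid node the value of its closest hard data point (zonal estimation)."""
    # distance matrix: rows = grid nodes, columns = hard data points
    d = np.linalg.norm(grid_xy[:, None, :] - xy_known[None, :, :], axis=2)
    zones = z_known[np.argmin(d, axis=1)].astype(float)
    if blind_distance is not None:
        zones[d.min(axis=1) > blind_distance] = np.nan   # blind zones: too far from any data
    return zones

# illustrative porosity values at 3 wells, estimated on a coarse 4 x 4 grid
wells = np.array([[0., 0.], [800., 100.], [300., 900.]])
poro = np.array([0.15, 0.22, 0.18])
gx, gy = np.meshgrid(np.linspace(0, 1000, 4), np.linspace(0, 1000, 4))
grid = np.column_stack([gx.ravel(), gy.ravel()])
print(nearest_neighbour_zones(wells, poro, grid, blind_distance=600.0).reshape(4, 4))
```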

  12. Nearest neighborhood (2) Example 2: Nearest neighborhood estimation. Coloured zones show zonal values, and the data are points in the centres of the polygons. Example 3: Use of a regular point grid for zonal estimation (ideal case) Example 4: Real nearest neighborhood porosity map

  13. PART IV MOVING AVERAGE

  14. Moving Average (1)
  • The moving average method calculates point values from all hard data within the search radius (ellipsoid or circle of influence).
  • Hard data are arithmetically averaged, taking a sequentially smaller and smaller set of measurements ('n-1') into the calculation.
  • The minimum total number of data for a reliable calculation is also specified.
  • This procedure is repeated at every estimation point.

  15. Moving Average (2)
  p = (z1 + z2 + ... + zn) / n
  where: 'n' - number of measurements (z1...zn), 'p' - their average.
  That equation can be varied in many forms based on:
  - the radius of the circle or ellipsoid (of influence) whose values are included;
  - variations of the minimal number of data for the calculation of the averages.
  It is one of the most widely used simple mathematical algorithms in mapping. As with all linear methods, the estimated values lie between the minimal and maximal hard data values.
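A minimal sketch of the moving average estimate described above; the search radius, the minimal number of data and the well coordinates and porosities are illustrative assumptions only.

```python
import numpy as np

def moving_average(xy_known, z_known, xy_target, radius=500.0, min_data=3):
    """Arithmetic mean of all hard data inside the circle of influence around the target point."""
    d = np.linalg.norm(xy_known - xy_target, axis=1)
    inside = z_known[d <= radius]
    if inside.size < min_data:      # not enough data for a reliable average
        return np.nan
    return float(inside.mean())     # p = (z1 + ... + zn) / n

wells = np.array([[100., 200.], [400., 250.], [250., 600.], [700., 500.]])
poro  = np.array([0.18, 0.21, 0.16, 0.19])
print(moving_average(wells, poro, np.array([350., 400.]), radius=400.0, min_data=2))
```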

  16. Moving Average (3) Example 5: Interpolation of water level by Moving Average (www.itc.nl/ilwis/applications/applications10.asp)

  17. PART V KRIGING

  18. Kriging (1) Kriging is a geostatistical method that includes the calculation of spatial dependence (variogram analysis). This is its main advantage compared with other mathematical interpolation algorithms. In the variogram calculation, the connection strengths (weights) between the hard data and the estimation point are determined exclusively from their distances, not from their values. Variogram analysis also takes anisotropy (directions), clusters and sometimes local variance very well into consideration.

  19. Kriging (2) A simple linear equation describes the idea of kriging; in practice it is extended into matrix equations.
  z_K = λ1·z1 + λ2·z2 + ... + λn·zn
  where: z_K - value estimated from the 'n' surrounding values; λi - weight coefficient at location 'i'; zi - real value at location 'i'.
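A compact ordinary kriging sketch illustrating the linear estimate above: the weights λi are obtained by solving the kriging matrix system built from variogram values. The isotropic spherical variogram model (range 1100 m as in Example 6, sill 1, no nugget) and the well data are illustrative assumptions, not values from the slides.

```python
import numpy as np

def spherical_variogram(h, sill=1.0, rng=1100.0):
    """Isotropic spherical variogram model gamma(h), without nugget."""
    h = np.asarray(h, dtype=float)
    g = sill * (1.5 * h / rng - 0.5 * (h / rng) ** 3)
    return np.where(h < rng, g, sill)

def ordinary_kriging(xy_known, z_known, xy_target, variogram):
    """Solve the ordinary kriging system and return z_K = sum(lambda_i * z_i)."""
    n = len(z_known)
    d_kk = np.linalg.norm(xy_known[:, None, :] - xy_known[None, :, :], axis=2)
    d_k0 = np.linalg.norm(xy_known - xy_target, axis=1)
    # kriging matrix extended by the Lagrange multiplier row/column (unbiasedness condition)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = variogram(d_kk)
    A[n, n] = 0.0
    b = np.append(variogram(d_k0), 1.0)
    lam = np.linalg.solve(A, b)[:n]          # weight coefficients lambda_i
    return float(lam @ z_known)

wells = np.array([[100., 200.], [400., 250.], [250., 600.], [700., 500.]])
poro  = np.array([0.18, 0.21, 0.16, 0.19])
print(ordinary_kriging(wells, poro, np.array([350., 400.]), spherical_variogram))
```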

  20. Kriging (3) Example 6: Kriging porosity map (using an isotropic variogram model with a range of 1100 m)

  21. Kriging (4) Example 7: Variogram model and the resulting Kriging porosity map in dolomites

  22. Kriging (5) Example 8: Variogram model and the resulting Kriging porosity map in breccia

  23. PART VI CIRCLE OR ELLIPSOID OF INFLUENCE

  24. Circle/ellipsoid of influence Any new value interpolated from hard data can be calculated from the total or a partial input dataset. This depends on the spatial dependence of the data in the analysed area and is expressed as the range of influence. The exception is Nearest Neighbourhood, where a single point defines the polygon. Usually, only data that are connected, i.e. whose values can be deduced in some way from each other, are located within such a range of influence. In simpler applications the circle is used as an isotropic shape. For more complex data relations it can be replaced with an ellipsoid, in which anisotropy can be interpreted.
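A short sketch of selecting hard data inside an anisotropic search ellipse; the axis lengths and azimuth are illustrative assumptions, and the returned subset would then feed any of the interpolation methods described above.

```python
import numpy as np

def inside_search_ellipse(xy_known, xy_target, long_axis=800.0, short_axis=400.0, azimuth_deg=45.0):
    """Boolean mask of hard data lying inside an anisotropic ellipse of influence."""
    theta = np.deg2rad(azimuth_deg)
    rot = np.array([[np.cos(theta),  np.sin(theta)],
                    [-np.sin(theta), np.cos(theta)]])
    local = (xy_known - xy_target) @ rot.T        # rotate offsets into the anisotropy axes
    return (local[:, 0] / long_axis) ** 2 + (local[:, 1] / short_axis) ** 2 <= 1.0

wells = np.array([[100., 200.], [400., 250.], [250., 600.], [700., 500.]])
print(inside_search_ellipse(wells, np.array([350., 400.])))
```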

  25. PART VII NUMERICAL CALCULATION OF ERROR

  26. Numerical calculation of error Numerical calculation of error is an additional tool for evaluating a map's quality. It is based on removing one hard data point and estimating a new value at the same point (location) from the rest of the dataset. This is repeated for all hard data points (sequentially removing another hard data point and putting back the previously removed one). The result is called the Mean Square Error (abbr. MSE).
  MSE_method = (1/n) · Σ (measured value_i - estimated value_i)^2
  where: MSE_method - Mean Square Error of the selected algorithm; measured value_i - measured value (hard data) of the variable at location 'i'; estimated value_i - estimated value of the variable at location 'i'.
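A small leave-one-out sketch of this error calculation: each hard data point is removed in turn, re-estimated from the remaining data (here with a minimal IDW estimator as the illustrative method), and the squared differences are averaged into the MSE. The wells and porosities are again only examples.

```python
import numpy as np

def idw(xy_known, z_known, xy_target, p=2.0):
    """Minimal IDW estimator used only to illustrate the cross-validation loop."""
    d = np.linalg.norm(xy_known - xy_target, axis=1)
    w = 1.0 / d ** p
    return float(np.sum(w * z_known) / np.sum(w))

def cross_validation_mse(xy_known, z_known, estimator):
    """Leave-one-out MSE: remove each hard data point, re-estimate it, average the squared errors."""
    errors = []
    for i in range(len(z_known)):
        keep = np.arange(len(z_known)) != i
        z_est = estimator(xy_known[keep], z_known[keep], xy_known[i])
        errors.append((z_known[i] - z_est) ** 2)
    return float(np.mean(errors))

wells = np.array([[100., 200.], [400., 250.], [250., 600.], [700., 500.]])
poro  = np.array([0.18, 0.21, 0.16, 0.19])
print(cross_validation_mse(wells, poro, idw))   # lower MSE = better method for this dataset
```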

  27. PART VIII CONCLUSION

  28. Conclusion
  • Deterministic interpolation methods are still the dominant mathematical algorithms for mapping (compared with stochastic, probabilistic, Monte Carlo estimation or hand-made mapping).
  • The main advantage of such an approach is a single solution accepted as the true representation of the volume or plane.
  • The most widely applied of the easy and fast interpolations are Inverse Distance Weighting and Moving Average.
  • The most widely applied method for obtaining mostly the best linear estimation in mapping is Kriging.
  • For small datasets and irregularly distributed hard data, hand-made maps can still be the best approach, especially if they are interpolated by an experienced expert.
