FY2001 Oil and Gas Recovery Technology Review Meeting: Diagnostic and Imaging
High Speed 3D Hybrid Elastic Seismic Modeling
Lawrence Berkeley National Laboratory, GX Technology, Burlington Resources
Contacts: Valeri Korneev 510-486-7214, VAKorneev@lbl.gov; Mike Hoversten 510-486-5085, GMHoversten@lbl.gov
Why do we need 3D elastic modeling?
• Heterogeneous 3D media, complex topography, 3-component data, strong converted S-waves
• Survey design
• Hypothesis testing
• AVO evaluation
• Wave-field interpretation
• Synthetic data sets for testing depth migration and full-waveform inversion
• "Engine" for inversion
• Cheaper and faster than real data acquisition
SEG/EAEG (Society of Exploration Geophysicists / European Association of Exploration Geophysicists) 3D modeling project for seismic imaging testing: 8,000 Gflop-hours, acoustic only. Started in 1994 and still not completed. A larger model and an elastic code are needed.
Deep-water Gulf of Mexico regional seismic line: sub-salt structures cannot be seen using acoustic inversion. An elastic propagator is needed to image the details.
Where 3D modeling stands today
• Industry primarily uses acoustic codes on uniform grids
• Requires "smart" users who are experts in the method
• Massively parallel computing is expensive and "slow"
• Model building is a problem
• Modeling results are difficult to interpret
Requirements
• Full elastic modeling
• Attenuation
• Anisotropy
• Topography
• Effective exploitation of computational resources
• High-fidelity numerical modeling
• Hybrid methodology: ray tracing coupled with finite differences
• Local resolution algorithms
• Massively parallel supercomputers and clusters
What is our goal?
To build a 3D elastic modeling software tool capable of computing
• realistic (10 km x 10 km x 4 km) models
• at seismic exploration frequencies (up to 100 Hz)
• on local networks
• in reasonable time (overnight)
• by any geophysical software user (with no special method-oriented training).
Issues of 3D modeling performance
• Accuracy. Improves with higher-order differencing and finer gridding.
• Model size. No fewer than 5 grid points per shortest wavelength. The acoustic code requires 3.5*N values of storage, where N = Nx*Ny*Nz is the number of cells sampling the model parameters; the elastic code needs 6 times more.
• CPU time. The acoustic code requires 5*K operations per grid cell, where K = 6*m and m is the order of the differential operator; the elastic code requires 5 times more operations per grid cell.
• Stability. Requires a CFL-limited integration time step (roughly dt <= 0.5*dx/v_max for an explicit scheme).
• Optimization. Avoid oversampling and overly small time-integration steps. Use parallel computing. Avoid computing in undisturbed cells.
• Numerical artifacts. Contrast contacts, stair-step sampling noise of dipping interfaces, boundary reflections, liquid-elastic interfaces.
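The sampling and memory figures above can be turned into a back-of-the-envelope estimate for the target model (10 km x 10 km x 4 km at up to 100 Hz, 5 points per shortest wavelength). The minimum shear velocity and 4-byte floats are assumed values for illustration, not figures from the slides:

```python
# Rough resource estimate for the target model size quoted in the deck.
# v_min (slowest S-wave velocity) is an ASSUMED value for illustration.

v_min = 1000.0          # m/s, assumed slowest velocity in the model
f_max = 100.0           # Hz, target frequency from the slides
points_per_wl = 5       # sampling criterion from the slides

dx = v_min / (f_max * points_per_wl)        # required grid spacing, m
nx = int(10_000 / dx)                       # 10 km
ny = int(10_000 / dx)                       # 10 km
nz = int(4_000 / dx)                        # 4 km
n_cells = nx * ny * nz

# Memory: the slides quote 3.5*N stored values for the acoustic code
# and about 6x that for the elastic code (4-byte floats assumed here).
acoustic_gb = 3.5 * n_cells * 4 / 1e9
elastic_gb = 6 * acoustic_gb

print(f"grid: {nx} x {ny} x {nz} = {n_cells:.2e} cells")
print(f"acoustic ~{acoustic_gb:.0f} GB, elastic ~{elastic_gb:.0f} GB")
```

Even under these optimistic assumptions the elastic memory footprint runs to terabytes, which is why the deck emphasizes nonuniform grids, active-region computation, and parallelism.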
3D hybrid elastic seismic modeling
• Parallel variable-order 3D finite-difference code based on overlapping subdomain decomposition.
• Grid spacing depends on model parameters to provide a locally optimal computational regime.
• Wave propagation in the water will be computed by an acoustic code.
• Contrast dipping interfaces will be handled with a Local Boundary Conditions approach.
• Computation is performed only in subdomains with a non-zero wave field.
• Option of conditionally restarting the computation at any given lapse time.
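The "compute only where the field is non-zero" idea can be sketched with a toy 2D scalar wave update; the function name, threshold, and block layout below are illustrative, not the project's code. Because an explicit scheme moves information at most one cell per step, it is safe to skip a block until it or a neighboring block carries energy:

```python
import numpy as np

def step(u, u_prev, c=1.0, dt=0.1, dx=1.0, blocks=4, eps=1e-12):
    """One explicit leapfrog step of a toy 2D scalar wave equation,
    updating only subdomain blocks that are active or border an
    active block.  Illustrative sketch only."""
    n = u.shape[0]
    b = n // blocks
    # mark blocks that already carry a non-zero field
    active = np.zeros((blocks, blocks), bool)
    for i in range(blocks):
        for j in range(blocks):
            sl = np.s_[i*b:(i+1)*b, j*b:(j+1)*b]
            active[i, j] = (np.abs(u[sl]).max() > eps
                            or np.abs(u_prev[sl]).max() > eps)
    # energy travels at most one cell per step: also touch neighbors
    grow = active.copy()
    grow[1:, :] |= active[:-1, :]; grow[:-1, :] |= active[1:, :]
    grow[:, 1:] |= active[:, :-1]; grow[:, :-1] |= active[:, 1:]

    # 5-point Laplacian (computed globally here for brevity; a real
    # code would restrict all stencil work to the active blocks too)
    lap = np.zeros_like(u)
    lap[1:-1, 1:-1] = (u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:]
                       + u[1:-1, :-2] - 4*u[1:-1, 1:-1]) / dx**2

    u_next = np.zeros_like(u)   # inactive blocks stay identically zero
    for i in range(blocks):
        for j in range(blocks):
            if grow[i, j]:
                sl = np.s_[i*b:(i+1)*b, j*b:(j+1)*b]
                u_next[sl] = 2*u[sl] - u_prev[sl] + (c*dt)**2 * lap[sl]
    return u_next
```

Early in a simulation most blocks are undisturbed, so the active set is small and the per-step cost scales with the wavefront volume rather than the model volume.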
FY2000 Results
• Stair-step gridding problem resolved
• Nonuniform grid algorithm tested
• New stable topography algorithm tested
• Parallel interface library applied to 2D
• 4th-order-in-time scheme tested
• GXT ray tracer installed
Parallel Software Infrastructure: BoxLib Foundation Library
• Domain-specific library: supports PDE solvers on structured, hierarchical adaptive meshes (AMR)
• Support for heterogeneous workloads
• BoxLib-based programs run on serial, distributed-memory and shared-memory supercomputers, and on SMP clusters
• Parallel implementation based on the MPI standard, ensuring portability
• Dynamic load balance achieved using a dynamic-programming approach to distribute subgrids
• Hybrid C++/Fortran programming model: C++ for flow control, memory management and I/O; Fortran for numerical kernels
Discretization Methodology
• Improved finite-difference schemes
  • Fourth order in space and time
  • Reduce computational time and memory requirements by more than an order of magnitude for realistic geologic models
  • Improved parallel performance by reducing the communication-to-computation ratio
• Absorbing boundary conditions
  • Use non-local pseudo-differential operators to represent one-way wave propagation
  • Expand with Padé approximations to obtain a local representation
  • Add graded local damping to minimize evanescent reflections
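The spatial building block of such schemes is the standard five-point, fourth-order second-derivative stencil. This is a minimal sketch of that stencil, not the project's code (which also carries the fourth-order-in-time modified-equation correction):

```python
import numpy as np

def d2_fourth_order(f, h):
    """Fourth-order accurate second derivative on a uniform 1D grid:
    f'' ~ (-f[i-2] + 16 f[i-1] - 30 f[i] + 16 f[i+1] - f[i+2]) / (12 h^2).
    Interior points only; boundary points are left at zero in this sketch."""
    d2 = np.zeros_like(f)
    d2[2:-2] = (-f[:-4] + 16*f[1:-3] - 30*f[2:-2]
                + 16*f[3:-1] - f[4:]) / (12 * h * h)
    return d2

# quick check against an analytic second derivative: (sin x)'' = -sin x
x = np.linspace(0, 2 * np.pi, 201)
h = x[1] - x[0]
err = np.max(np.abs(d2_fourth_order(np.sin(x), h)[2:-2] + np.sin(x)[2:-2]))
print(f"max interior error: {err:.2e}")
```

The leading truncation error scales as h^4, which is what lets a fourth-order scheme use a coarser grid than a second-order one for the same accuracy, cutting both memory and the communication-to-computation ratio.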
Grating-effect reduction by spatial filtering (figure: panels compare no correction vs. linear-gradient correction)
Nonuniform grid FD computations
(figures: no-velocity-contrast test; two half-spaces with 100% velocity contrast)

Savings factors:
Resource    2D (tested)   3D (projected)
Memory          1.6            1.8
CPU time        3.2            3.6
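The nonuniform-grid savings come from letting the grid spacing follow the local velocity instead of sampling the whole model for the slowest medium. A 1D arithmetic sketch, with assumed two-layer velocities chosen for illustration only:

```python
# Why a velocity-adapted grid saves memory: spacing is set per layer
# from the local minimum velocity instead of the global minimum.
# The layer velocities/thicknesses below are ASSUMED example values.

f_max, ppw = 100.0, 5            # max frequency (Hz), points per wavelength
layers = [(1500.0, 1000.0),      # (v_min m/s, thickness m): slow layer
          (4500.0, 3000.0)]      # fast layer

# uniform grid: whole column sampled for the slowest velocity
dx_uniform = min(v for v, _ in layers) / (f_max * ppw)
n_uniform = sum(t / dx_uniform for _, t in layers)

# nonuniform grid: each layer sampled for its own velocity
n_adaptive = sum(t / (v / (f_max * ppw)) for v, t in layers)

print(f"vertical points: uniform {n_uniform:.0f}, adaptive {n_adaptive:.0f}")
print(f"1D savings factor: {n_uniform / n_adaptive:.2f}")
```

The savings compound across dimensions and in the time step (a coarser grid also allows a larger stable dt), which is why the CPU-time factors in the table exceed the memory factors.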
LBNL has a fast and accurate wave-propagation algorithm for moderately heterogeneous elastic media; we are going to apply it.
Hybrid ray-tracing and FD approach speeds up computations by up to 10 times
• Fast ray-tracing code through slow, simple media
• FD code through complex media
LBNL PC cluster
• Vendor: SGI
• 8 dual Pentium III CPUs, 800 MHz
• 512 MB SDRAM per node
• Myrinet LAN (fast network cards)
• Linux based
• Price: $60K
• Completion by the end of 2000
Year 2001 efforts (requested budget: $250K)
• Parallel 3D elastic code
• Nonuniform grid for 3D
• 4th-order-in-time scheme for 3D
• Hybrid (ray tracing + FD) algorithm
• Topography for 3D
BoxLib Parallelism
• Hybrid C++/Fortran programming environment.
• Library supports parallel PDE solvers on rectangular meshes.
• MPI portability: distributed- and shared-memory supercomputers, clusters of engineering workstations.
Discretization
• Fourth-order accuracy in space and time, based on a modified-equation approach.
• 2D: the fourth-order scheme gives 2 times the performance of conventional second-order schemes.
• 3D: the fourth-order scheme is 4 times as efficient.
Parallel Performance (figure: wall-clock run time vs. number of CPUs). The 4th-order scheme performs better as the number of CPUs increases.