A defense of my claims that the ADAPS non-linear system can materially improve deep resolution – David Paige
I start with a quote from a recent spirited exchange: "I understand your arguments about detuning, but you suggest that you can get back to the original spike train using your method. It is fine to calculate the spikes as you do, but they are still limited by the seismic data frequency content as to the uncertainty in their position and size. The mathematics behind this statement is exactly the same principle as the Heisenberg Uncertainty principle, which deals with the uncertainty between position and momentum, or time and energy, both of which are Fourier transform pairs. My argument is that you can calculate and integrate the spikes, but what comes out is not better than if you had created a suitable broad band wavelet and created an integrated trace as in the Sparse Spike method."

I feel certain my correspondent speaks for many seismic researchers. I envy his sublime faith in current seismic theory. In contrast, I am always questioning what I have done. I admit to being surprised at how well my logic works under tough data circumstances, and I continue to learn just watching it do its thing. In fact, I admit I believe in black boxes. When faced with good results from one, I want to know what I missed, and what that logic knows that I don't. It should be obvious that I am not up to arguing at his mathematical level, but there are some things I think I know that he might not. I have had others tell me that ADAPS could not honestly do what it does because of frequency limitations. When I point to the before-and-after well matches, their certainty is not shaken. However, those results fortify me to the point where I will argue with some basic tenets.

The evolution of the down wave.

We hit the earth with some sort of impact, which results in a movement. Since the earth is elastic, it rebounds past its original state, creating an oscillation. As this three-dimensional shape expands, resistance tempers the sharpness of the wave front, and further oscillation lengthens the disturbance. As it passes through the typical layered subsurface, reflections are created at each velocity interface, each seeming to be independent. Because the down wave has become leggy, the algebraic addition of all these closely spaced events creates a jumbled and complex pattern. However, detuning this mess is another subject. Our down wave continues.

Time series mathematics has allowed us to describe these shapes in terms of frequency. This is accomplished by modeling processes called transforms. The time series is convolved with a series of frequencies to measure correlation coefficients at each step. The result is a power spectrum. The great leap in mathematical logic was to assume these coefficients actually represent pure frequency content that can be used in filter design. When the layman thinks of a 100 cps component from the resulting spectrum, the tendency is to visualize a discrete wavelet. This could be true, of course, but the fact is that this particular frequency, along with its partners, might only have been needed either to model the dampening of the averaged waveform, or to model a complex shape that is made up entirely of lower dominant energy. Obviously, if the spectrum peaked at 100 cps, this would suggest real lobes at that frequency. In any case, some doubt should exist.
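To make that "descriptive" point concrete, here is a minimal Python sketch of my own (not part of ADAPS; the sample interval, decay rate, and frequency drift are assumed values): a damped oscillation whose dominant period lengthens with time still produces Fourier coefficients well above its dominant band, simply because those coefficients are needed to model the sharp onset and the damping of the shape.

```python
import numpy as np

# Illustration only: an assumed damped, lengthening oscillation standing in
# for the down wave, and the amplitude spectrum a transform reports for it.
dt = 0.002                                    # sample interval in seconds (assumed)
t = np.arange(0, 0.256, dt)                   # 256 ms window

f_inst = 60.0 - 120.0 * t                     # dominant frequency drifting from ~60 to ~30 Hz
phase = 2.0 * np.pi * np.cumsum(f_inst) * dt  # integrate frequency to get phase
wavelet = np.exp(-12.0 * t) * np.sin(phase)   # decaying oscillation

spectrum = np.abs(np.fft.rfft(wavelet))       # the "descriptive" coefficients
freqs = np.fft.rfftfreq(len(wavelet), d=dt)

for f, a in zip(freqs, spectrum):
    if f <= 120.0:
        print(f"{f:6.1f} Hz   amplitude {a:7.3f}")

# Energy appears well above the dominant band even though no discrete
# high-frequency lobes exist in the shape; those coefficients only help
# model its sharp leading edge and its damping.
```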
The supposed beauty of the modeling is that it allows doing filter computation in the frequency domain, which is relatively simple. Knowing the data spectrum, one can design some desired spectrum and mathematically create a transforming filter to remove any unwanted energy (a small sketch of this arithmetic appears after this section). In the simplest case of a strong ring, the modeling will probably be effective, showing a sharp spectral peak at the ringing frequency. As soon as the down wave starts to dampen, other modeling frequencies have to be brought into play. When it (the down wave) begins to get more complex, with lobes transitioning from higher to lower dominant frequencies, the modeling task becomes much tougher, with the vital shape information being spread thinly over a wide frequency range and thus susceptible to noise.

ADAPS depends on advanced pattern recognition in spike positioning. The shape of the estimated down wave is our major concern. When the frequency content changes with depth, the shape is obviously affected. An enormous amount of statistical effort is spent getting the best possible down-wave guess in the target zone. Spikes on each trace are computed independently, so continuity of results becomes a logical proof. The way I learned what made the transform work was by tearing apart the early code (I was around way back then). Of course this analysis is old. In my own simple-minded way, I consider these computed frequencies to be "descriptive" (in the modeling sense). Sometimes they might point out offending events and sometimes they don't. High-frequency loss at depth is partly a function of the dampening of the leading edge of the down wave. As long as we can simulate the down-wave shape accurately in the target zone, error in positioning should not be hypersensitive to frequency. This basic disagreement led to the spirited part of the discussion, and we parted with an agreement to disagree.

Before discussing filtering, let me return to my correspondent's argument quoted above. Much of the resolution power of ADAPS comes from its ability to integrate its spikes. To do this, the spikes must be unique to the reflecting interface. I respectfully submit that integrating the entire wavelet would not get the job done, and that the ability to optimize true spikes is essential (also sketched below). I believe ADAPS may stand alone in this capability. Finally, my doubts on the true value of the transformed spectrum contribute to my dislike of frequency-sensitive filtering. In ADAPS we predict and gently lift off the noise.
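For readers who want the frequency-domain filter design mentioned above spelled out, here is a minimal sketch of spectral shaping. It is not the ADAPS filter, only the textbook operation the argument is reacting against; the desired band and the stabilization constant are assumed values of my own.

```python
import numpy as np

# Illustration of frequency-domain spectral shaping (not the ADAPS method).
# The desired band and the stabilization constant eps are assumed values.

def shape_spectrum(trace, dt, f_low, f_high, eps=1e-3):
    """Reshape a trace toward a flat amplitude band between f_low and f_high (Hz)."""
    n = len(trace)
    freqs = np.fft.rfftfreq(n, d=dt)
    spec = np.fft.rfft(trace)

    desired = ((freqs >= f_low) & (freqs <= f_high)).astype(float)  # target spectrum
    actual = np.abs(spec)                                           # data spectrum

    # Transforming filter: desired amplitude over actual amplitude.  The eps
    # term keeps near-zero coefficients from blowing up, which is exactly the
    # noise sensitivity that appears when the vital shape information is
    # spread thinly over a wide frequency range.
    gain = desired / (actual + eps * actual.max())
    return np.fft.irfft(spec * gain, n=n)

# Toy usage: random reflectivity convolved with a decaying 40 Hz wavelet.
dt = 0.002
t = np.arange(0, 0.1, dt)
wavelet = np.exp(-12.0 * t) * np.sin(2.0 * np.pi * 40.0 * t)
trace = np.convolve(np.random.default_rng(0).normal(size=500), wavelet, mode="same")
shaped = shape_spectrum(trace, dt, f_low=10.0, f_high=80.0)
```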
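And to illustrate why the spikes must be unique to the reflecting interface before integration, a small sketch with assumed reflection coefficients: cumulatively summing a true spike train yields a blocky relative log-impedance profile, while integrating the band-limited trace smears the layer boundaries. This illustrates only the integration step, not how ADAPS positions its spikes.

```python
import numpy as np

# Illustration of integrating a spike train (assumed reflection coefficients;
# this is not the ADAPS spike optimizer).  A reflection coefficient relates to
# impedance roughly by r ~ 0.5 * d(ln Z), so a running sum of true spikes
# recovers a blocky relative log-impedance profile.

dt = 0.002
n = 500
reflectivity = np.zeros(n)
reflectivity[[100, 130, 135, 300]] = [0.08, -0.05, 0.06, 0.04]  # assumed interfaces

rel_log_impedance = 2.0 * np.cumsum(reflectivity)   # blocky, layer-like steps

# For contrast, integrate the band-limited trace instead of the spikes:
t = np.arange(0, 0.1, dt)
wavelet = np.exp(-12.0 * t) * np.sin(2.0 * np.pi * 40.0 * t)
trace = np.convolve(reflectivity, wavelet, mode="same")
smeared = 2.0 * np.cumsum(trace)                    # side lobes blur the layer boundaries

print("step positions recovered from spikes:", np.nonzero(np.diff(rel_log_impedance))[0] + 1)
```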