Spectral Differencing with a Twist
Richard Baltensperger, Université Fribourg
Manfred Trummer, Pacific Institute for the Mathematical Sciences & Simon Fraser University
Johann Radon Institute for Computational and Applied Mathematics, Linz, Austria, 15 March 2004
trummer@sfu.ca · www.math.sfu.ca/~mrt
Pacific Institute for the Mathematical Sciences www.pims.math.ca
Banff International Research Station • PIMS – MSRI collaboration (with MITACS) • 5-day workshops • 2-day workshops • Focused Research • Research In Teams www.pims.math.ca/birs
Round-off Errors • 1991: A Patriot missile battery in Dhahran, Saudi Arabia, fails to intercept an incoming Iraqi missile, resulting in 28 fatalities. The failure was traced back to the accumulation of round-off error in the computer arithmetic.
Introduction Spectral Differentiation: • Approximate the function f(x) by a global interpolant p(x) • Differentiate the global interpolant exactly, e.g., f′(x_k) ≈ p′(x_k)
Features of spectral methods + Exponential (spectral) accuracy, converges faster than any power of 1/N + Fairly coarse discretizations give good results - Full instead of sparse matrices - Tighter stability restrictions - Less robust, difficulties with irregular domains
Types of Spectral Methods • Spectral-Galerkin: work in transformed space, that is, with the coefficients of the expansion. Example: u_t = u_xx
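As a minimal sketch (not from the slides), the heat-equation example can be advanced entirely in coefficient space: with a Fourier basis on a periodic interval, each coefficient decouples and evolves as û_k(t) = e^(−k²t) û_k(0). The grid size and initial condition below are illustrative choices.

```python
import numpy as np

# Spectral-Galerkin sketch for u_t = u_xx on [0, 2*pi) with a Fourier basis:
# in transformed space each coefficient decouples,
# u_hat_k(t) = exp(-k^2 t) * u_hat_k(0).
N = 32
x = 2 * np.pi * np.arange(N) / N
u0 = np.sin(x)                           # illustrative initial condition
k = np.fft.fftfreq(N, d=1.0 / N)         # integer wavenumbers 0, 1, ..., -1
u_hat = np.fft.fft(u0)                   # transform to coefficient space
t = 0.5
u = np.real(np.fft.ifft(np.exp(-k**2 * t) * u_hat))  # evolve, transform back
# exact solution for this initial condition: exp(-t) * sin(x)
err = np.max(np.abs(u - np.exp(-t) * np.sin(x)))
```

Because the solve happens on the coefficients, no differentiation matrix is needed at all for this constant-coefficient problem.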
Types of Spectral Methods • Spectral Collocation (Pseudospectral): work in physical space. Choose collocation points xk, k=0,..,N, and approximate the function of interest by its values at those collocation points. Computations rely on interpolation. • Issues: Aliasing, ease of computation, nonlinearities
Interpolation • Periodic function, equidistant points: Fourier • Polynomial interpolation: Chebyshev points, Legendre points, Hermite, Laguerre • Interpolation by rational functions
Differentiation Matrix D Discrete data set f_k, k = 0, …, N • Interpolate at the collocation points x_k: p(x_k) = f_k • Differentiate p(x) • Evaluate: g_k = p′(x_k) All operations are linear: g = Df
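The interpolate-differentiate-evaluate pipeline above can be sketched directly: since every step is linear in the data, column k of D is the derivative, at the nodes, of the cardinal interpolant that is 1 at x_k and 0 elsewhere. This uses NumPy's Chebyshev fitting as a stand-in for the interpolation step (an assumption; the slides do not prescribe an implementation).

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

N = 10
x = np.cos(np.pi * np.arange(N + 1) / N)   # Chebyshev collocation points
D = np.zeros((N + 1, N + 1))
for k in range(N + 1):
    e = np.zeros(N + 1)
    e[k] = 1.0                              # data for the k-th cardinal function
    p = Chebyshev.fit(x, e, deg=N)          # interpolate through e
    D[:, k] = p.deriv()(x)                  # differentiate, evaluate at nodes

f = np.exp(x)
g = D @ f                                   # approximates f'(x_k)
err = np.max(np.abs(g - np.exp(x)))         # f' = exp, so compare directly
```

This column-by-column construction is for illustration only; the slides' point is that the whole map collapses to one matrix-vector product g = Df.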
Software • Funaro: FORTRAN code, various polynomial spectral methods • Don-Solomonoff, Don-Costa: PSEUDOPAK FORTRAN code, more engineering oriented, includes filters, etc. • Weideman-Reddy: Based on Differentiation Matrices, written in MATLAB (fast Matlab programming)
Polynomial Interpolation Lagrange form: p(x) = Σ_{k=0..N} f_k ℓ_k(x), where ℓ_k(x) = Π_{j≠k} (x − x_j)/(x_k − x_j). “Although admired for its mathematical beauty and elegance, it is not useful in practice” • “Expensive” • “Difficult to update”
Barycentric formula, version 2: p(x) = [Σ_{k=0..N} (w_k/(x − x_k)) f_k] / [Σ_{k=0..N} w_k/(x − x_k)] (the denominator is the interpolant of the constant function 1) • Set-up: O(N²) • Evaluation: O(N) • Update (add a point): O(N) • New f_k values: no extra work!
Barycentric formula: weights w_k • Equidistant points: w_k = (−1)^k (N choose k) • Chebyshev points (1st kind): w_k = (−1)^k sin((2k+1)π/(2N+2)) • Chebyshev points (2nd kind): w_k = (−1)^k δ_k, with δ_0 = δ_N = 1/2 and δ_k = 1 otherwise
Barycentric formula: weights w_k • The weights can all be multiplied by the same constant • The formula interpolates the data for any choice of nonzero weights: rational interpolation! • The relative size of the weights indicates ill-conditioning
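A minimal sketch of the version-2 barycentric formula with second-kind Chebyshev weights (the helper name `bary_eval` and the test function are my choices, not from the slides):

```python
import numpy as np

def bary_eval(x, f, w, t):
    """Evaluate the second (true) barycentric formula
    p(t) = [sum_k w_k/(t - x_k) * f_k] / [sum_k w_k/(t - x_k)]."""
    diff = t[:, None] - x[None, :]
    hit = np.isclose(diff, 0.0)      # t coincides (to rounding) with a node
    diff[hit] = 1.0                  # dummy value, fixed up below
    q = w / diff
    p = (q @ f) / q.sum(axis=1)
    rows, cols = np.nonzero(hit)
    p[rows] = f[cols]                # at a node, return the nodal value
    return p

N = 12
x = np.cos(np.pi * np.arange(N + 1) / N)   # Chebyshev points of the 2nd kind
w = (-1.0) ** np.arange(N + 1)             # weights, up to a common factor
w[0] *= 0.5
w[-1] *= 0.5                               # halved at the two endpoints
f = np.sin(3.0 * x)
t = np.linspace(-1.0, 1.0, 201)
err = np.max(np.abs(bary_eval(x, f, w, t) - np.sin(3.0 * t)))
```

Note that rescaling `w` by any common constant leaves `p` unchanged, exactly as the slide states, because the factor cancels between numerator and denominator.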
Computation of the Differentiation Matrix Entirely based upon interpolation.
Barycentric Formula (Schneider/Werner): D_jk = (w_k/w_j)/(x_j − x_k) for j ≠ k, and D_jj = −Σ_{k≠j} D_jk
Chebyshev Differentiation x_k = cos(kπ/N) Differentiation matrix: D_jk = (c_j/c_k) (−1)^{j+k}/(x_j − x_k) for j ≠ k, with c_0 = c_N = 2 and c_j = 1 otherwise
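A hedged sketch of assembling this matrix from the classical (“original”) formulas, including the analytic diagonal entries, with no negative sum trick applied yet; the function name is mine:

```python
import numpy as np

def cheb_D(N):
    """Chebyshev differentiation matrix on x_k = cos(k*pi/N), assembled
    from the classical formulas (analytic diagonal, no negative sum trick)."""
    k = np.arange(N + 1)
    x = np.cos(np.pi * k / N)
    c = np.where((k == 0) | (k == N), 2.0, 1.0) * (-1.0) ** k
    dx = x[:, None] - x[None, :] + np.eye(N + 1)    # dummy 1s on the diagonal
    D = np.outer(c, 1.0 / c) / dx                   # off-diagonal entries
    d = np.empty(N + 1)
    d[1:N] = -x[1:N] / (2.0 * (1.0 - x[1:N] ** 2))  # interior diagonal
    d[0] = (2.0 * N ** 2 + 1.0) / 6.0               # corner entries
    d[N] = -d[0]
    np.fill_diagonal(D, d)
    return x, D

x, D = cheb_D(8)
f = x ** 3
err = np.max(np.abs(D @ f - 3.0 * x ** 2))  # exact for polynomials of degree <= N
```

At this modest N the round-off problems the next slides describe are invisible; they emerge as N grows and entries of size O(N²) meet differences x_j − x_k of size O(1/N²).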
The Chebyshev Matrix has Numerical Behavioural Problems • Trefethen-Trummer, 1987 • Rothman, 1991 • Breuer-Everson, 1992 • Don-Solomonoff, 1995/1997 • Bayliss-Class-Matkowsky, 1995 • Tang-Trummer, 1996 • Baltensperger-Berrut, 1996
[Figure: the Chebyshev matrix, its absolute errors, and its relative errors]
Round-off error analysis: the computed node x_k = cos(kπ/N) has a relative error of O(ε), i.e., an absolute error of up to ε. The difference x_j − x_k therefore also carries an absolute error of O(ε); for neighbouring nodes |x_j − x_k| = O(1/N²), so the relative error grows to O(N²ε), and the entries D_jk ∝ 1/(x_j − x_k) inherit it.
Roundoff Error Analysis • With a “good” computation we expect an error in D_01 of O(N²) • Most elements of D are O(1) • Some are O(N), and a few are O(N²) • We must be careful to see whether absolute or relative errors enter the computation
Remedies • Preconditioning: add ax + b to f to create a function that vanishes at the boundary • Compute D in higher precision • Use trigonometric identities • Use symmetry: the flipping trick • NST: the negative sum trick
More ways to compute Df If we only want Df but not the matrix D (e.g., time stepping, iterative methods), we can compute Df for any f via • FFT based approach • Schneider-Werner formula
Chebyshev Differentiation Matrix x_k = cos(kπ/N) “Original formulas”: D_jk = (c_j/c_k)(−1)^{j+k}/(x_j − x_k), D_kk = −x_k/(2(1 − x_k²)). Cancellation!
Flipping Trick sin(π − x) is not as accurate as sin(x). Use the symmetry x_{N−k} = −x_k, which gives D_{N−j,N−k} = −D_{j,k}, and “flip” the upper half of D into the lower half, or the upper-left triangle into the lower-right triangle.
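The flip itself is a purely structural operation; here is a minimal sketch (helper name and the triangle convention relative to the anti-diagonal are my choices):

```python
import numpy as np

def flip_trick(D):
    """Overwrite the entries strictly below the anti-diagonal using the
    symmetry D[N-j, N-k] = -D[j, k], so only the (more accurately
    computed) upper part of the matrix needs to be formed directly."""
    N = D.shape[0] - 1
    out = D.copy()
    flipped = -D[::-1, ::-1]          # entry (j, k) holds -D[N-j, N-k]
    j, k = np.indices(D.shape)
    below = j + k > N                 # strictly below the anti-diagonal
    out[below] = flipped[below]
    return out

# structural check on a small test matrix
A = np.arange(16, dtype=float).reshape(4, 4)
B = flip_trick(A)
```

For an exactly computed differentiation matrix the flip changes nothing, since the symmetry already holds; in floating point it replaces the poorly computed entries with sign-flipped copies of the well-computed ones.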
NST: Negative Sum Trick Spectral differentiation is exact for constant functions: Σ_k D_jk = 0, so set D_jj = −Σ_{k≠j} D_jk. Arrange the order of summation to sum the smaller elements first; this requires sorting.
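In matrix form the trick is a one-liner; this sketch skips the sorted summation the slide recommends (a more careful implementation would sum the off-diagonal row entries smallest-magnitude first):

```python
import numpy as np

def negative_sum_trick(D):
    """Reset the diagonal so every row of D sums to zero, enforcing
    exactness for constant functions: D @ ones == 0."""
    D = D.copy()
    np.fill_diagonal(D, 0.0)
    np.fill_diagonal(D, -D.sum(axis=1))   # row sums of off-diagonal entries
    return D

# illustrative 3x3 example: the (deliberately wrong) diagonal is replaced
D = negative_sum_trick(np.array([[9.0, 1.0, 2.0],
                                 [3.0, 9.0, 4.0],
                                 [5.0, 6.0, 9.0]]))
row_sums = D @ np.ones(3)
```

Whatever the diagonal held before, afterwards D annihilates constants exactly (up to the rounding in the row sums themselves).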
Observations • The original formula for D is very inaccurate • Trig/flip + NST (Weideman-Reddy) provides a good improvement • The FFT is not as good as “expected”, in particular when N is not a power of 2 • The NST applied to the original D gives the best matrix • There are even more accurate ways to compute Df
Machine Dependency • Results can vary substantially from machine to machine, and may depend on software. • Intel/PC: FFT performs better • SGI • SUN • DEC Alpha
Understanding the Negative Sum Trick [Figure: error contributions from the error in f and from the error in D]
Understanding the NST The inaccurate matrix elements are multiplied by very small numbers, leading to O(N²) errors: optimal accuracy.
Understanding the Negative Sum Trick The NST is an (inaccurate) implementation of the Schneider-Werner formula: Schneider-Werner: p′(x_j) = Σ_{k≠j} D_jk (f_k − f_j) Negative sum trick: (Df)_j = Σ_{k≠j} D_jk f_k − (Σ_{k≠j} D_jk) f_j
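A minimal sketch of the Schneider-Werner formula applied directly to data, using the barycentric weights from the earlier slides (function name is mine):

```python
import numpy as np

def schneider_werner(x, w, f):
    """Derivative of the barycentric interpolant at the nodes via
    p'(x_j) = sum_{k != j} (w_k / w_j) * (f_k - f_j) / (x_j - x_k)."""
    n = len(x)
    g = np.zeros(n)
    for j in range(n):
        for k in range(n):
            if k != j:
                g[j] += (w[k] / w[j]) * (f[k] - f[j]) / (x[j] - x[k])
    return g

N = 8
x = np.cos(np.pi * np.arange(N + 1) / N)   # Chebyshev points, 2nd kind
w = (-1.0) ** np.arange(N + 1)
w[0] *= 0.5
w[-1] *= 0.5
f = x ** 2
err = np.max(np.abs(schneider_werner(x, w, f) - 2.0 * x))  # exact for x^2
```

Because each term contains the difference f_k − f_j explicitly, the formula differentiates constants to exactly zero, term by term, which is precisely the property the NST imposes on the matrix.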
Understanding the Negative Sum Trick • Why do we obtain superior results when applying the NST to the original (inaccurate) formula? • Because of the accuracy of finite difference quotients:
Finite Difference Quotients • For monomials, a cancellation of the cancellation errors takes place, e.g., for f(x) = x²: (f_j − f_k)/(x_j − x_k) = x_j + x_k • Typically f_j − f_k is less accurate than x_j − x_k, so computing x_j − x_k more accurately does not help!
Fast Schneider-Werner • Computing Df costs 2N² operations; Schneider-Werner costs 3N² • Can implement Df with a “fast SW” method: apply SW only in corner blocks of size √N × √N • Cost: 2N² + O(N)
Polynomial Differentiation • For example, Legendre, Hermite, Laguerre • Fewer tricks available, but Negative Sum Trick still provides improvements • Ordering the summation may become even more important
Higher-Order Derivatives • Best not to compute D⁽²⁾ = D², etc. • Formulas by Welfert (implemented in Weideman-Reddy) • The negative sum trick again shows improvements • Higher-order differentiation matrices are badly conditioned, so gaining a little more accuracy matters even more than for the first derivative
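One way the NST carries over, sketched under my own assumptions (Welfert's recursion, which the slides recommend, is not shown here): even if D² is formed by squaring, its diagonal can be replaced by the negative sum trick to restore exactness for constants.

```python
import numpy as np

def cheb_D(N):
    """Chebyshev differentiation matrix with the negative-sum-trick diagonal."""
    k = np.arange(N + 1)
    x = np.cos(np.pi * k / N)
    c = np.where((k == 0) | (k == N), 2.0, 1.0) * (-1.0) ** k
    dx = x[:, None] - x[None, :] + np.eye(N + 1)
    D = np.outer(c, 1.0 / c) / dx
    np.fill_diagonal(D, 0.0)
    np.fill_diagonal(D, -D.sum(axis=1))
    return x, D

x, D = cheb_D(16)
D2 = D @ D                               # second-derivative matrix by squaring
np.fill_diagonal(D2, 0.0)                # reapply the negative sum trick:
np.fill_diagonal(D2, -D2.sum(axis=1))    # rows sum to zero again
f = x ** 4
err = np.max(np.abs(D2 @ f - 12.0 * x ** 2))
```

In exact arithmetic the reset diagonal equals the one D² already had (since D²·1 = 0); in floating point it removes the accumulated error on the largest, worst-conditioned entries.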
Using D to Solve Problems • In many applications the first and last row/column of D are removed because of boundary conditions • f → Df appears to be the most sensitive to how D is computed (the forward problem, i.e., differentiation itself) • Improvements have also been observed in solving BVPs