
Interpolation Methods


Presentation Transcript


  1. Interpolation Methods Robert A. Dalrymple Johns Hopkins University

  2. Why Interpolation? • For discrete models of continuous systems, we need the ability to interpolate values in between discrete points. • Half of the SPH technique involves interpolation of values known at particles (or nodes).

  3. Interpolation • To find the value of a function between known values. Consider the two pairs of values (x,y): (0.0, 1.0), (1.0, 2.0) What is y at x = 0.5? That is, what’s (0.5, y)?

  4. Linear Interpolation • Given two points, (x1,y1) and (x2,y2), fit a straight line between them: y(x) = a x + b, with a = (y2 - y1)/(x2 - x1) and b = (y1 x2 - y2 x1)/(x2 - x1). Use this equation to find y for any x1 < x < x2.
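A minimal MATLAB sketch of this (not in the original slides), answering the slide-3 question:

% linear interpolation between (x1,y1) and (x2,y2) -- illustrative sketch
x1 = 0.0; y1 = 1.0;
x2 = 1.0; y2 = 2.0;
a = (y2 - y1)/(x2 - x1);          % slope
b = (y1*x2 - y2*x1)/(x2 - x1);    % intercept
x = 0.5;
y = a*x + b                       % gives 1.5 for the slide-3 example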

  5. Polynomial Interpolants Given N (= 4) data points, find the interpolating function that goes through the points: y(x) = a1 + a2 x + a3 x^2 + a4 x^3. If there were N + 1 data points, the function would be the Nth-order polynomial y(x) = a1 + a2 x + … + a_(N+1) x^N, with N + 1 unknown coefficients a_i.

  6. Polynomial Interpolant Force the interpolant through the four points to get four equations: y_j = a1 + a2 x_j + a3 x_j^2 + a4 x_j^3 for j = 1, …, 4. Rewriting in matrix form, y = p a. The solution is found by inverting p.
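Written out, the four conditions form the linear system below, with p the matrix of monomials constructed in the slide-8 MATLAB code:

\[
\begin{pmatrix} y_1\\ y_2\\ y_3\\ y_4 \end{pmatrix}
=
\begin{pmatrix}
1 & x_1 & x_1^2 & x_1^3\\
1 & x_2 & x_2^2 & x_2^3\\
1 & x_3 & x_3^2 & x_3^3\\
1 & x_4 & x_4^2 & x_4^3
\end{pmatrix}
\begin{pmatrix} a_1\\ a_2\\ a_3\\ a_4 \end{pmatrix},
\qquad \mathbf{a} = p^{-1}\mathbf{y}.
\]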

  7. Example Data are: (0, 2), (1, 0.3975), (2, -0.1126), (3, -0.0986). Fitting a cubic polynomial through the four points gives, approximately, y(x) = 2 - 2.338 x + 0.830 x^2 - 0.0947 x^3.
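A quick MATLAB check of this example (a sketch along the lines of the slide-8 code):

% cubic interpolant through the four slide-7 points (illustrative check)
xd = [0 1 2 3];
yd = [2 0.3975 -0.1126 -0.0986];
p  = [ones(1,4); xd; xd.^2; xd.^3]';   % 4 x 4 matrix of monomials
a  = p\yd'                             % a ~ [2; -2.338; 0.830; -0.0947]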

  8. Matlab code for polynomial fitting
% the data to be interpolated (in 1D)
x=[-0.2 .44 1.0 1.34 1.98 2.50 2.95 3.62 4.13 4.64 4.94];
y=[2.75 1.80 -1.52 -2.83 -1.62 1.49 2.98 0.81 -2.14 -2.93 -1.81];
plot(x,y,'bo')
n=size(x,2)
% CUBIC FIT
p=[ones(1,n); x; x.*x; x.*(x.*x)]';   % n x 4 matrix of monomials
a=p\y'      % least-squares solution; for a square p this equals inv(p)*y'
yp=p*a
hold on; plot(x,yp,'k*')
Note: for a linear or quadratic fit, redefine p.

  9. Polynomial Fit to Example Exact: red; polynomial fit: blue.

  10. Beware of Extrapolation Exact: red. An Nth-order polynomial has N roots!

  11. Least Squares Interpolant For N points, we fit a polynomial of order m < N - 1. The least-squares fitting polynomial has the same form as the exact fit, y = p a, but now p is an N x (m + 1) matrix. Since there are fewer unknown coefficients than data points, the interpolant cannot go through every point. Define the error at each data point as the amount of "miss", e_i = y_i - y(x_i), and form the sum of the squared errors, E = sum_i e_i^2.

  12. Least Squares Interpolant Minimizing the sum with respect to the coefficients a gives the normal equations, p^T p a = p^T y. Solving, a = (p^T p)^(-1) p^T y. This can be rewritten as a = p^+ y, which introduces the pseudo-inverse p^+ = (p^T p)^(-1) p^T. Reminder: for the cubic fit, each row of p is (1, x_i, x_i^2, x_i^3).

  13. Question Show that the equation above leads to the following expression for the best fit straight line:
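For reference, the standard best-fit (least-squares) straight line y = a x + b through N points, which follows from the normal equations above, is:

\[
a = \frac{N\sum_i x_i y_i - \sum_i x_i \sum_i y_i}{N\sum_i x_i^2 - \left(\sum_i x_i\right)^2},
\qquad
b = \bar{y} - a\,\bar{x}.
\]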

  14. Matlab: Least-Squares Fit
% the data to be interpolated (1d)
x=[-0.2 .44 1.0 1.34 1.98 2.50 2.95 3.62 4.13 4.64 4.94];
y=[2.75 1.80 -1.52 -2.83 -1.62 1.49 2.98 0.81 -2.14 -2.93 -1.81];
plot(x,y,'bo')
n=size(x,2)
% CUBIC FIT
p=[ones(1,n); x; x.*x; x.*(x.*x)]';   % n x 4 matrix of monomials
pinverse=inv(p'*p)*p'                 % pseudo-inverse of p
a=pinverse*y'
yp=p*a
hold on; plot(x,yp,'k*')

  15. Cubic Least Squares Example Data irregularly spaced x: -0.2 .44 1.0 1.34 1.98 2.50 2.95 3.62 4.13 4.64 4.94 y: 2.75 1.80 -1.52 -2.83 -1.62 1.49 2.98 0.81 -2.14 -2.93 -1.81

  16. Least Squares Interpolant Cubic least-squares fit: * is the fitting polynomial; o is the given data; the line is the exact function.

  17. Piecewise Interpolation Piecewise polynomials fit all of the points: • Linear: continuity in y (the left and right values match at each junction; fit pairs of points) • Quadratic: adds continuity in slope • Cubic splines: add continuity in the second derivative • RBF: all of the above, but smoother. A brief MATLAB illustration follows.
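A brief sketch (not from the slides) using MATLAB's built-in interp1, which provides piecewise-linear and cubic-spline interpolation of the same irregular data:

% piecewise interpolation of the slide-8 data using built-in interp1
x=[-0.2 .44 1.0 1.34 1.98 2.50 2.95 3.62 4.13 4.64 4.94];
y=[2.75 1.80 -1.52 -2.83 -1.62 1.49 2.98 0.81 -2.14 -2.93 -1.81];
xq   = linspace(min(x), max(x), 200);     % query points
ylin = interp1(x, y, xq, 'linear');       % piecewise linear
yspl = interp1(x, y, xq, 'spline');       % cubic spline
plot(x, y, 'bo', xq, ylin, 'k--', xq, yspl, 'r-')
legend('data','piecewise linear','cubic spline')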

  18. Radial Basis Functions Developed to interpolate 2-D data: think bathymetry. Given depths h_j at scattered locations (x_j, y_j), interpolate to a rectangular grid.

  19. Radial Basis Functions 2-D data: for each position x_j there is an associated value f_j. A radial basis function g_j(r) is located at each point, where r = |x - x_j| is the distance from x_j. The radial basis function interpolant is f(x) = sum_j a_j g_j(|x - x_j|).

  20. RBF To find the unknown coefficients a_j, force the interpolant to go through the data points: f_i = sum_j a_j g_j(r_ij), where r_ij = |x_i - x_j|. This gives N equations for the N unknown coefficients.

  21. RBF Morse et al., 2001

  22. Multiquadric RBF MQ: g(r) = sqrt(r^2 + c^2); RMQ (reciprocal multiquadric): g(r) = 1/sqrt(r^2 + c^2), where c is a shape parameter. Hardy, 1971; Kansa, 1990

  23. RBF Example 11 (x,y) pairs: (0.2, 3.00), (0.38, 2.10), (1.07, -1.86), (1.29, -2.71), (1.84, -2.29), (2.31, 0.39), (3.12, 2.91), (3.46, 1.73), (4.12, -2.11), (4.32, -2.79), (4.84, -2.25) SAME AS BEFORE
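A minimal MATLAB sketch of multiquadric RBF interpolation applied to these data (the shape parameter c = 1 is an illustrative choice, not a value from the slides):

% multiquadric RBF interpolation on scattered 1-D data (illustrative sketch)
x = [0.2 0.38 1.07 1.29 1.84 2.31 3.12 3.46 4.12 4.32 4.84];
f = [3.00 2.10 -1.86 -2.71 -2.29 0.39 2.91 1.73 -2.11 -2.79 -2.25];
c = 1.0;                              % shape parameter (assumed value)
r = abs(x' - x);                      % N x N matrix of distances r_ij
G = sqrt(r.^2 + c^2);                 % multiquadric g(r_ij)
a = G\f';                             % solve the N x N collocation system
xq = linspace(0, 5, 200);             % evaluation points
Gq = sqrt((xq' - x).^2 + c^2);        % basis evaluated at the query points
fq = Gq*a;                            % the RBF interpolant
plot(x, f, 'bo', xq, fq, 'k-')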

  24. RBF Errors Log10 [sqrt (mean squared errors)] versus c: Multiquadric

  25. RBF Errors Log10 [ sqrt (mean squared errors)] versus c: Reciprocal Multiquadric

  26. Consistency Consistency is the ability of an interpolating polynomial to reproduce a polynomial of a given order. The simplest consistency is constant consistency: reproducing unity, 1 = sum_j a_j g_j(r_j), where, again, r_j = |x - x_j|. If g_j(0) = 1, then a constraint on the coefficients results. Note: not all RBFs have g_j(0) = 1.

  27. RBFs and PDEs Solve a boundary value problem: a governing equation in the domain plus conditions on its boundary. The RBF interpolant for the unknown solution is u(x) = sum_j a_j g_j(|x - x_j|); N is the number of arbitrarily spaced points, and the a_j are unknown coefficients to be found.

  28. RBFs and PDEs Introduce the interpolant into the governing equation (at the interior points) and into the boundary conditions (at the boundary points). These are N equations for the N unknown constants a_j.
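A minimal MATLAB sketch of this collocation idea for an assumed 1-D test problem, u'' = -pi^2 sin(pi x) on (0,1) with u(0) = u(1) = 0 (exact solution sin(pi x)), using the multiquadric basis; the test problem and the value of c are illustrative choices, not from the slides:

% RBF (multiquadric) collocation for u'' = f on (0,1), u(0)=u(1)=0 -- sketch
N  = 21;
xj = linspace(0, 1, N);                       % collocation points, incl. boundary
c  = 0.2;                                     % shape parameter (assumed)
f  = @(x) -pi^2*sin(pi*x);                    % right-hand side of the test problem
g   = @(x,xc) sqrt((x - xc).^2 + c^2);        % multiquadric basis
gxx = @(x,xc) c^2./((x - xc).^2 + c^2).^1.5;  % its second derivative
A = zeros(N); b = zeros(N,1);
for i = 1:N
  if i == 1 || i == N                         % boundary points: u = 0
    A(i,:) = g(xj(i), xj);   b(i) = 0;
  else                                        % interior points: u'' = f
    A(i,:) = gxx(xj(i), xj); b(i) = f(xj(i));
  end
end
a  = A\b;                                     % N equations, N coefficients
xe = linspace(0, 1, 101);
ue = sqrt((xe' - xj).^2 + c^2)*a;             % evaluate the interpolant
plot(xe, ue, 'k-', xe, sin(pi*xe), 'r--')     % compare with the exact solution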

  29. RBFs and PDEs (3) A problem with many RBFs is that the N x N matrix that has to be inverted is fully populated. RBFs with small 'footprints' avoid this (Wendland, 2005), e.g. in 1D: phi(q) = (1 - q)^3 (3q + 1) for 0 <= q <= 1, zero otherwise; in 3D: phi(q) = (1 - q)^4 (4q + 1) for 0 <= q <= 1, zero otherwise. His notation: phi_{d,k}, for dimension d and smoothness C^{2k}. Advantages: the matrix is sparse, but still N x N.

  30. Wendland 1-D RBF with Compact Support (h = 1; maximum value 1)
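A short MATLAB sketch of a 1-D Wendland-type compactly supported function (assuming the (1 - q)^3 (3q + 1) form quoted above), whose maximum value is 1 at q = 0:

% Wendland-type 1-D compactly supported RBF: phi(q) = (1-q)^3 (3q+1), q <= 1
h   = 1.0;                               % support radius
q   = linspace(0, 1.5, 301);             % q = r/h
phi = (max(1 - q, 0).^3).*(3*q + 1);     % identically zero for q > 1
plot(q*h, phi)                           % maximum value 1 at q = 0
xlabel('r'), ylabel('\phi(r/h)')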

  31. Moving Least Squares Interpolant The interpolant is y(x) = sum_j a_j(x) p_j(x), where the p_j are monomials: in 1D, (1, x, x^2, x^3); in 2D, functions of x and y, e.g. (1, x, y, x^2, xy, y^2, …). Note that the coefficients a_j are functions of x.

  32. Moving Least Squares Interpolant Define a weighted mean-squared error, E(x) = sum_i W(x - x_i) [y(x_i) - y_i]^2, where W(x - x_i) is a weighting function that decays with increasing |x - x_i|. This is the same as the previous least-squares approach, except for the weighting W(x - x_i).

  33. Weighting Function q = |x - x_i| / h

  34. Moving Least Squares Interpolant Minimizing the weighted squared errors with respect to the coefficients gives A(x) a(x) = B(x) y, with A(x) = p^T W(x) p and B(x) = p^T W(x), where W(x) is the diagonal matrix of weights W(x - x_i).

  35. Moving Least Squares Interpolant Solving, a(x) = A(x)^(-1) B(x) y. The final locally valid interpolant is y(x) = sum_j a_j(x) p_j(x).

  36. Moving Least Squares (1)
% generate the data to be interpolated (1d)
x=[-0.2 .44 1.0 1.34 1.98 2.50 2.95 3.62 4.13 4.64 4.94];
y=[2.75 1.80 -1.52 -2.83 -1.62 1.49 2.98 0.81 -2.14 -2.93 -1.81];
plot(x,y,'bo')
n=size(x,2)
% QUADRATIC FIT
p=[ones(1,n); x; x.*x]';     % n x 3 matrix of monomials
xfit=0.30;
sse=0.0;                     % accumulates squared error against the exact curve
for it=1:18,                 % fitting at 18 points
  xfit=xfit+0.25;
  d=abs(xfit-x);
  for ic=1:n
    q=d(1,ic)/.51;           % note: 0.3 works for the linear fit; 0.51 for quadratic
    if q <= 1.
      Wd(1,ic)=0.66*(1-1.5*q*q+0.75*q^3);   % cubic-spline weighting function
    elseif q <= 2.
      Wd(1,ic)=0.66*0.25*(2-q)^3;
    else
      Wd(1,ic)=0.0;
    end
  end

  37. MLS (2)
  Warray=diag(Wd);            % diagonal weight matrix W(x)
  A=p'*(Warray*p)             % A(x) = p' W p
  B=p'*Warray                 % B(x) = p' W
  acoef=(inv(A)*B)*y'         % coefficients a(x)
  % QUADRATIC FIT
  yfit=acoef'*[1 xfit xfit*xfit]'
  hold on; plot(xfit, yfit,'k*')
  sse=sse+(3.*cos(2.*pi*xfit/3.0)-yfit)^2;   % exact curve is 3 cos(2 pi x/3)
end

  38. MLS Fit to (Same) Irregular Data, h = 0.51. Given data: circles; MLS: *; exact: line.

  39. Varying h Values Panels for h = 1.0, 0.3, 1.5, and 0.5.

  40. Conclusions There are a variety of interpolation techniques for irregularly spaced data: • Polynomial Fits • Best Fit Polynomials • Piecewise Polynomials • Radial Basis Functions • Moving Least Squares
