
MA2213 Lecture 3





Presentation Transcript


  1. MA2213 Lecture 3 Approximation

  2. Piecewise Linear Interpolation p. 147 can use Nodal Basis Functions: for nodes x_1 < x_2 < ... < x_n, the nodal ('hat') basis function φ_j is the piecewise linear function with φ_j(x_j) = 1 and φ_j(x_k) = 0 for k ≠ j, so the piecewise linear interpolant of f is P(x) = Σ_j f(x_j) φ_j(x); a sketch in code follows.
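A minimal MATLAB sketch (not from the original slides) of evaluating the piecewise linear interpolant as a sum of nodal basis functions; the function name plinterp and the sample data below are illustrative only.

function p = plinterp(xn, yn, x)
% Piecewise linear interpolant written as a sum of nodal (hat) basis functions:
%   p(x) = sum_j yn(j) * phi_j(x),  where phi_j(xn(k)) = 1 if j == k and 0 otherwise.
% xn must be strictly increasing; x may be a vector of points in [xn(1), xn(end)].
n = numel(xn);
p = zeros(size(x));
for j = 1:n
    phi = zeros(size(x));
    if j > 1                                    % rising edge of phi_j on [xn(j-1), xn(j)]
        m = x >= xn(j-1) & x <= xn(j);
        phi(m) = (x(m) - xn(j-1)) / (xn(j) - xn(j-1));
    end
    if j < n                                    % falling edge of phi_j on [xn(j), xn(j+1))
        m = x >= xn(j) & x < xn(j+1);
        phi(m) = (xn(j+1) - x(m)) / (xn(j+1) - xn(j));
    end
    p = p + yn(j) * phi;                        % add this node's contribution yn(j)*phi_j
end

For example, plinterp([-1 0 1], [2 0 3], linspace(-1, 1, 9)) evaluates the broken-line graph through the three data points.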

  3. Introduction Problem : Find / evaluate a function P such that (i) P belongs to a specified set S of functions, and (ii) P best approximates a function f among the functions in the set S. Here 'approximates' means matches, fits, resembles. If S were the set of ALL functions, the choice P = f would solve the problem – "you can't get any closer to somewhere than by being there". If S is not the set of all functions, then S must be defined carefully. Furthermore, we must define the approximation criterion used to compare two approximations.

  4. Set of Functions Furthermore, S is usually finite dimensional. In practice S is closed under sums and under multiplication by numbers – this means that it is a vector space, and then S admits a basis. Example: bases for S = { polynomials of degree < n }: the Monomial Basis 1, x, x^2, ..., x^{n-1}; for distinct nodes x_1, ..., x_n, Lagrange's Basis L_k(x) = Π_{j ≠ k} (x - x_j) / (x_k - x_j); for possibly repeated nodes, Newton's Basis 1, (x - x_1), (x - x_1)(x - x_2), ..., (x - x_1)···(x - x_{n-1}) (used with divided differences).
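A small MATLAB sketch (not from the slides) that evaluates the Lagrange basis at arbitrary points; the name lagbasis is illustrative.

function L = lagbasis(nodes, x)
% Evaluate the Lagrange basis polynomials for the given distinct nodes at the points x.
% L(i,k) = L_k(x(i)); in particular L_k(nodes(j)) = 1 if j == k and 0 otherwise.
n = numel(nodes);
L = ones(numel(x), n);
for k = 1:n
    for j = [1:k-1, k+1:n]                      % product over all j different from k
        L(:,k) = L(:,k) .* (x(:) - nodes(j)) / (nodes(k) - nodes(j));
    end
end

For example, lagbasis([0 1 2], [0 1 2]) returns the 3 x 3 identity matrix, which is exactly the nodal property of the Lagrange basis.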

  5. Approximation Criteria In many engineering and scientific problems, data is acquired from measurements. Least Squares p. 178-187: Gauss (http://en.wikipedia.org/wiki/Carl_Friedrich_Gauss) observed that measurement errors usually have "Gaussian statistics", and he invented an optimal method to deal with such errors. Minimax or Best Approximation p. 159-165: arises in optimal design and game theory, as well as in the mathematics of uniform convergence.

  6. Least Squares Criteria Least Squares (over a finite set of points x_1, ..., x_m): minimize Σ_{i=1}^{m} ( f(x_i) - P(x_i) )^2. Least Squares (over an interval [a, b]): minimize ∫_a^b ( f(x) - P(x) )^2 dx.

  7. Least Squares Over a Finite Set p. 319-333 If P(x) = c_1 φ_1(x) + ... + c_n φ_n(x), then we minimize the sum of squares error E(c_1, ..., c_n) = Σ_{i=1}^{m} ( y_i - P(x_i) )^2 over the data points (x_1, y_1), ..., (x_m, y_m) by choosing the coefficients c_1, ..., c_n to satisfy ∂E/∂c_k = 0 for k = 1, ..., n.

  8. Least Squares Over a Finite Set Remark: these are n equations in n variables. Since ∂E/∂c_k = -2 Σ_{i=1}^{m} φ_k(x_i) ( y_i - Σ_{j=1}^{n} c_j φ_j(x_i) ), the coefficients satisfy the equations Σ_{j=1}^{n} [ Σ_{i=1}^{m} φ_k(x_i) φ_j(x_i) ] c_j = Σ_{i=1}^{m} φ_k(x_i) y_i, k = 1, ..., n.

  9. Least Squares Equations Construct matrices B, c and y with entries B_{ik} = φ_k(x_i), c = (c_1, ..., c_n)^T and y = (y_1, ..., y_m)^T. The least squares equations are B^T B c = B^T y. The interpolation equations B c = y hold if and only if the minimum sum of squares error equals zero. Question When do these equations have solutions ?

  10. Least Squares Examples For the constant basis φ_1(x) = 1 and data (x_1, y_1), ..., (x_m, y_m), the least squares equations reduce to m c_1 = Σ_{i=1}^{m} y_i, so c_1 = (y_1 + ... + y_m) / m and the constant function P(x) = c_1, the average of the data values, is the least squares approximation or data fit for the data points by a constant function.
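A quick MATLAB check (not from the slides; the data values are illustrative) that the constant least squares fit is the data average:

y = [2; 3; 7; 4];                 % sample data values (illustrative)
B = ones(numel(y), 1);            % design matrix for the constant basis phi_1(x) = 1
c = (B'*B) \ (B'*y);              % the single normal equation: m*c = sum(y)
abs(c - mean(y))                  % difference should be (essentially) zero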

  11. Least Squares Examples For the basis φ_1(x) = 1, φ_2(x) = x, the least squares equations are m c_1 + ( Σ_i x_i ) c_2 = Σ_i y_i and ( Σ_i x_i ) c_1 + ( Σ_i x_i^2 ) c_2 = Σ_i x_i y_i. The solution (c_1, c_2) gives the least squares data fit P(x) = c_1 + c_2 x by a polynomial of degree ≤ 1.

  12. Least Squares MATLAB CODE
function c = lspf(n,m,x,y)
% function c = lspf(n,m,x,y)
%
% Wayne Lawton 28 August 2007
% Least Squares Polynomial Fit
% Inputs : n = deg poly + 1, m = # data points,
%          data arrays x and y of size m x 1
% Output : array c of poly coefficients, ascending powers:
%          p(t) = c(1) + c(2)*t + ... + c(n)*t^(n-1)
%
B = zeros(m,n);                 % design matrix B(i,j) = x(i)^(j-1)
for i = 1:m
  for j = 1:n
    B(i,j) = x(i)^(j-1);
  end
end
c = (B'*B)\(B'*y);              % solve the normal equations B'*B*c = B'*y

  13. Least Squares MATLAB CODE
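The original slide showed a MATLAB session; a hedged usage sketch of lspf (the data values here are illustrative, not the lecture's) could look like this:

x = (0:4)';  y = [1.0; 1.8; 3.9; 8.1; 16.2];   % sample data (illustrative)
m = numel(x);
c0 = lspf(1, m, x, y);                         % constant fit
c1 = lspf(2, m, x, y);                         % degree-1 ('linear') fit
ssq0 = sum((y - ones(m,1)*c0).^2)              % sum of squares error, constant fit
ssq1 = sum((y - [ones(m,1), x]*c1).^2)         % sum of squares error, linear fit

Because the constants are a subset of the degree-1 polynomials, ssq1 can never exceed ssq0, which is the point of the question on the next slide.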

  14. Least Squares MATLAB CODE The sum of squares error for the constant least squares fit is larger than the sum of squares error for the 'linear' (degree 1) least squares fit. Question : Why did the ssq error decrease ?

  15. Least Squares Algebraic Formula The sum of squares error E(c) = (y - Bc)^T (y - Bc) can be computed, by substituting the value c that solves the least squares equations B^T B c = B^T y, to obtain E_min = y^T y - c^T B^T y.
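A short MATLAB check (not from the slides; illustrative data) that the algebraic formula agrees with the directly computed sum of squares error:

x = (0:4)';  y = [1; 2; 2; 5; 4];         % sample data (illustrative)
B = [ones(5,1), x];                       % design matrix for a degree-1 fit
c = (B'*B) \ (B'*y);                      % least squares coefficients
[ sum((y - B*c).^2),  y'*y - c'*(B'*y) ]  % the two numbers should agree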

  16. Least Squares MATLAB CODE The sum of squares error for the quadratic least squares fit is only slightly smaller than for the degree-1 fit. Question : Why did the error decrease so little ?

  17. Marge, Where are the Least Squares ?

  18. Least Squares Over an Interval If P(x) = c_1 φ_1(x) + ... + c_n φ_n(x), then we minimize E(c_1, ..., c_n) = ∫_a^b ( f(x) - P(x) )^2 dx by choosing the coefficients c_1, ..., c_n to satisfy ∂E/∂c_k = 0 for k = 1, ..., n.

  19. Least Squares Over an Interval Since ∂E/∂c_k = -2 ∫_a^b φ_k(x) ( f(x) - Σ_{j=1}^{n} c_j φ_j(x) ) dx, these are also n equations in n variables, and the coefficients satisfy the equations Σ_{j=1}^{n} [ ∫_a^b φ_k(x) φ_j(x) dx ] c_j = ∫_a^b φ_k(x) f(x) dx, k = 1, ..., n. The 'interpolation equation' on the interval is P(x) = f(x) for all x in [a, b].

  20. Least Squares Over an Interval p. 181 For S = polynomials of degree < n over the interval [0, 1], with the monomial basis φ_j(x) = x^{j-1}, the equations are Σ_{j=1}^{n} c_j / (j + k - 1) = ∫_0^1 x^{k-1} f(x) dx, k = 1, ..., n, since ∫_0^1 x^{j+k-2} dx = 1/(j + k - 1). The matrix of coefficients H_{kj} = 1/(j + k - 1) for these equations is called the Hilbert matrix. It is a well-known example of an ill conditioned matrix and is discussed in Example 6.5.5 on pages 300-301
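A small MATLAB illustration (not from the slides) of how badly conditioned these equations become, using the built-in hilb and cond functions:

for n = 4:4:12
    fprintf('n = %2d,  cond(hilb(n)) = %.3e\n', n, cond(hilb(n)));
end
% The condition number grows roughly exponentially in n, so rounding errors are
% amplified enormously when the monomial basis is used on [0,1].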

  21. Legendre Polynomials p. 181-183 defined by Rodrigues' formula P_n(x) = ( 1 / (2^n n!) ) d^n/dx^n [ (x^2 - 1)^n ], satisfy the orthogonality relations ∫_{-1}^{1} P_m(x) P_n(x) dx = 0 for m ≠ n and ∫_{-1}^{1} P_n(x)^2 dx = 2/(2n + 1). They are examples of Orthogonal Polynomials and provide a useful method to solve the least squares approximation by polynomials p. 183-185
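A MATLAB sketch (not from the slides) that builds P_0, ..., P_4 from the standard three-term recurrence (n+1) P_{n+1}(x) = (2n+1) x P_n(x) - n P_{n-1}(x) and checks the orthogonality relations numerically:

P = {1, [1 0]};                                % coefficient vectors: P_0 = 1, P_1 = x
for n = 1:3                                    % build P_2, P_3, P_4 by the recurrence
    P{n+2} = ((2*n+1)*conv([1 0], P{n+1}) - [zeros(1,2), n*P{n}]) / (n+1);
end
integral(@(x) polyval(P{2},x).*polyval(P{3},x), -1, 1)   % P_1 against P_2: should be ~0
integral(@(x) polyval(P{3},x).^2, -1, 1)                 % P_2 against itself: should be 2/5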

  22. Vandermonde Matrix The n x n Vandermonde matrix V with entries V_{ij} = x_i^{j-1} satisfies det V = Π_{1 ≤ i < j ≤ n} ( x_j - x_i ): first expand det V by its last row to obtain a polynomial of degree n - 1 in x_n whose roots are x_1, ..., x_{n-1}, and then use induction on n to obtain the product formula.
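A quick MATLAB check (not from the slides) of the product formula on illustrative nodes; MATLAB's built-in vander orders the columns by descending powers, so fliplr is used to match V_{ij} = x_i^{j-1}:

x = [1 2 3 5 7];                               % distinct nodes (illustrative)
n = numel(x);
V = fliplr(vander(x));                         % V(i,j) = x(i)^(j-1)
p = 1;
for i = 1:n
    for j = i+1:n
        p = p * (x(j) - x(i));                 % product over i < j of (x_j - x_i)
    end
end
[det(V), p]                                    % the two values should agree (up to rounding)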

  23. Gram Matrix Theorem 1. If f_1, ..., f_n are linearly independent, then the Gram matrix G defined by G_{jk} = ∫_a^b f_j(x) f_k(x) dx is nonsingular. Proof If G c = 0 for some vector c = (c_1, ..., c_n)^T, then 0 = c^T G c = ∫_a^b ( Σ_{j=1}^{n} c_j f_j(x) )^2 dx, so Σ_j c_j f_j = 0, and linear independence gives c = 0; hence G is nonsingular. Derive this equation: c^T G c = ∫_a^b ( Σ_{j=1}^{n} c_j f_j(x) )^2 dx.

  24. Semipositive Definite and Positive Definite Definitions A real n x n matrix G is positive semidefinite if v^T G v ≥ 0 for every vector v, and positive definite if v^T G v > 0 for every nonzero vector v. Theorem 2. If G is both positive semidefinite and symmetric and c satisfies c^T G c = 0, then G c = 0. Proof Assume that c^T G c = 0. For every vector d and real number t construct the function q(t) = (c + t d)^T G (c + t d) and observe that q(t) ≥ 0 since G is positive semidefinite. Since c^T G c = 0, q(t) = 2 t d^T G c + t^2 d^T G d, and q(t) ≥ 0 for every t forces d^T G c = 0; since this holds for every d, it follows that G c = 0.
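The algebra behind "observe that" can be spelled out as follows (a worked version of the omitted step, written in LaTeX notation; this expansion is an editorial reconstruction, not taken verbatim from the slide):

\[
q(t) = (c+td)^{T} G\,(c+td)
     = c^{T}Gc + t\,c^{T}Gd + t\,d^{T}Gc + t^{2}\,d^{T}Gd
     = 2t\,d^{T}Gc + t^{2}\,d^{T}Gd \;\ge\; 0,
\]
since \(c^{T}Gc = 0\) and, by symmetry, \(c^{T}Gd = d^{T}Gc\). If \(d^{T}Gc \neq 0\), then for \(t\) small and of sign opposite to \(d^{T}Gc\) the linear term dominates and \(q(t) < 0\), a contradiction; hence \(d^{T}Gc = 0\) for every \(d\), i.e. \(Gc = 0\).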

  25. Gram Matrix Corollary (of Thm 2). If the Gram matrix for a set of functions f_1, ..., f_n is nonsingular, then this set of functions is linearly independent. Proof. Assume that the Gram matrix G is nonsingular and that c_1 f_1 + ... + c_n f_n = 0. It suffices to show that c = (c_1, ..., c_n)^T = 0. We observe that c^T G c = ∫_a^b ( Σ_{j=1}^{n} c_j f_j(x) )^2 dx = 0. Furthermore, since G is symmetric and positive semidefinite, it satisfies the hypothesis of Theorem 2, so it follows that G c = 0; because G is nonsingular, c = 0 and the proof is complete.

  26. Questions for Thought and Discussion Question 1. Is the matrix shown on the slide positive semidefinite, positive definite, symmetric ? Is it true that the identity shown on the slide holds ? Question 2. Give an example of a positive semidefinite matrix that is not positive definite. Question 3. Find a positive semidefinite nonsingular matrix G and a nonzero vector v such that v^T G v = 0. Question 4. Give a detailed derivation for all the assertions in the proofs of Theorems 1 and 2.

  27. Joseph Louis Lagrange http://en.wikipedia.org/wiki/Image:Langrange_portrait.jpg Jan 1736 – April 1813 Giuseppe Lodovico Lagrangia was an Italian-French mathematician and astronomer who made important contributions to all fields of analysis and number theory and to classical and celestial mechanics, and was arguably the greatest mathematician of the 18th century. Before the age of 20 he was professor of geometry at the royal artillery school at Turin.

  28. Isaac Newton (January 1643 – March 1727) was an English physicist, mathematician, astronomer, natural philosopher, and alchemist, regarded by many as the greatest figure in the history of science. http://en.wikipedia.org/wiki/Isaac_Newton His treatise Philosophiae Naturalis Principia Mathematica, published in 1687, described universal gravitation and the three laws of motion, laying the groundwork for classical mechanics.

  29. Carl Friedrich Gauss (April 1777 – February 1855) was a German mathematician and scientist of profound genius who contributed significantly to many fields, including number theory, analysis, differential geometry, geodesy, magnetism, astronomy and optics. http://en.wikipedia.org/wiki/Carl_Friedrich_Gauss At 23 he heard about the problem of locating Ceres; after three months of intense work, he predicted a position for Ceres in Dec 1801. He gave an influential treatment of the method of least squares, designed to minimize the impact of measurement error under the assumption of normally distributed errors. When asked how he had been able to predict the trajectory of Ceres with such accuracy he replied, "I used logarithms." The questioner then wanted to know how he had been able to look up so many numbers from the tables so quickly. "Look them up?" Gauss responded. "Who needs to look them up? I just calculate them in my head!"

  30. Homework Due Tutorial 2 Question 1. Do Problem 8 (a) on page 133. Question 2. Use Newton's divided difference method to find a polynomial of the degree specified on the slide that satisfies the given interpolation conditions. Question 3. Do Problem 1 on page 330. Please show all details in your computation. Question 4. Compute the Gram Matrix for the three piecewise linear nodal basis functions in the first vufoil if the three nodes are -1, 0, 1 and the interval of integration is [-1,1]

  31. Homework Example Question 4. Compute the Gram Matrix for the three piecewise linear nodal basis functions in the first vufoil if the three nodes are -1, 0, 1 and the interval of integration is [-1,1] Solution The Gram matrix will be a 3 x 3 matrix with entries G_{jk} = ∫_{-1}^{1} φ_j(x) φ_k(x) dx; here is one of its entries: G_{12} = ∫_{-1}^{0} ( -x )( 1 + x ) dx = 1/6, since φ_1(x) = -x and φ_2(x) = 1 + x on [-1, 0] and their product vanishes elsewhere.
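A MATLAB sketch (an editorial addition, not part of the original solution) that checks such entries numerically; the hat functions are written with max so that they vanish outside their supports:

phi = {@(x) max(1 - abs(x + 1), 0), ...        % nodal basis function at node -1
       @(x) max(1 - abs(x),     0), ...        % nodal basis function at node  0
       @(x) max(1 - abs(x - 1), 0)};           % nodal basis function at node  1
G = zeros(3);
for j = 1:3
    for k = 1:3
        G(j,k) = integral(@(x) phi{j}(x).*phi{k}(x), -1, 1);   % entry G_{jk}
    end
end
G            % numerical Gram matrix; compare, e.g., G(1,2) with the hand-computed 1/6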

  32. Questions for Thought and Discussion Answer 1. The matrix in Question 1 is symmetric; however, it is not positive semidefinite and therefore not positive definite. Answer 3. The matrix given on the slide is positive semidefinite and nonsingular, and for the displayed nonzero vector v clearly v^T G v = 0.
