Roots of Equations

Presentation Transcript


  1. Roots of Equations Open Methods (Part 1) Fixed Point Iteration & Newton-Raphson Methods

  2. The following root finding methods will be introduced: A. Bracketing Methods A.1. Bisection Method A.2. Regula Falsi B. Open Methods B.1. Fixed Point Iteration B.2. Newton-Raphson Method B.3. Secant Method

  3. B. Open Methods To find a root of f(x) = 0, we construct an iteration formula xi+1 = g(xi) and apply it repeatedly until xi converges to a root. However, the iterates may diverge! (Figures: the bisection method; an open method that diverges; an open method that converges)

  4. What you should know about Open Methods How do we construct the formula g(x)? How can we ensure convergence? What makes a method converge quickly or diverge? How fast does a method converge?

  5. B.1. Fixed Point Iteration • Also known as one-point iteration or successive substitution • To find a root of f(x) = 0, we reformulate f(x) = 0 so that x stands alone on one side of the equation, i.e. x = g(x). • If we can solve g(x) = x, we solve f(x) = 0. • x is known as the fixed point of g(x). • We solve g(x) = x by computing xi+1 = g(xi) until xi+1 converges to x.

  6. Fixed Point Iteration – Example Reason: If xi converges to x, i.e. xi+1 → xi = x, then x = g(x), and therefore f(x) = 0.

  7. Example Find the root of f(x) = e^-x – x = 0. (Answer: α = 0.56714329)

  8. Two Curve Graphical Method The point x where the two curves f1(x) = x and f2(x) = g(x) intersect is the solution to f(x) = 0. Demo

  9. Fixed Point Iteration For example, f(x) = x^2 – 2x – 3 = 0 (ans: x = 3 or -1) • There are infinitely many ways to construct g(x) from f(x). Case a: x = √(2x + 3) Case b: x = 3 / (x – 2) Case c: x = (x^2 – 3) / 2 So which one is better?

  10. • Case a: x0 = 4, x1 = 3.31662, x2 = 3.10375, x3 = 3.03439, x4 = 3.01144, x5 = 3.00381 (converges!) • Case b: x0 = 4, x1 = 1.5, x2 = -6, x3 = -0.375, x4 = -1.263158, x5 = -0.919355, x6 = -1.02762, x7 = -0.990876, x8 = -1.00305 (converges, but more slowly) • Case c: x0 = 4, x1 = 6.5, x2 = 19.625, x3 = 191.070 (diverges!)

  11. How to choose g(x)? • Can we know which g(x) would converge to a solution before we do the computation?

  12. Convergence of Fixed Point Iteration By definition, the root α satisfies α = g(α). The fixed point iteration computes xi+1 = g(xi). Subtracting, the error obeys Ei+1 = α – xi+1 = g(α) – g(xi).

  13. Convergence of Fixed Point Iteration According to the derivative mean-value theorem, if g(x) and g'(x) are continuous over an interval xi ≤ x ≤ α, there exists a value x = c within the interval such that g(α) – g(xi) = g'(c)(α – xi), i.e. Ei+1 = g'(c) Ei. • Therefore, if |g'(c)| < 1, the error decreases with each iteration. If |g'(c)| > 1, the error increases. • If the derivative is positive, the iterative solution will be monotonic. • If the derivative is negative, the errors will oscillate.

  14. (a) |g'(x)| < 1, g'(x) is +ve • converges, monotonic (b) |g'(x)| < 1, g'(x) is -ve • converges, oscillates (c) |g'(x)| > 1, g'(x) is +ve • diverges, monotonic (d) |g'(x)| > 1, g'(x) is -ve • diverges, oscillates Demo

  15. Fixed Point Iteration Impl. (as C function)

  // x0: Initial guess of the root
  // es: Acceptable relative percentage error
  // iter_max: Maximum number of iterations allowed
  double FixedPt(double x0, double es, int iter_max)
  {
      double xr = x0;     // Estimated root
      double xr_old;      // Keep xr from previous iteration
      double ea = 100.0;  // Relative percentage error of current estimate
      int iter = 0;       // Keep track of # of iterations
      do {
          xr_old = xr;
          xr = g(xr_old);  // g(x) has to be supplied
          if (xr != 0)
              ea = fabs((xr - xr_old) / xr) * 100;
          iter++;
      } while (ea > es && iter < iter_max);
      return xr;
  }

  16. The following root finding methods will be introduced: A. Bracketing Methods A.1. Bisection Method A.2. Regula Falsi B. Open Methods B.1. Fixed Point Iteration B.2. Newton-Raphson Method B.3. Secant Method

  17. B.2. Newton-Raphson Method Use the slope of f(x) to predict the location of the root: xi+1 = xi – f(xi) / f'(xi). xi+1 is the point where the tangent to f at xi intersects the x-axis.

  18. Newton-Raphson Method What would happen when f'(α) = 0? For example, f(x) = (x – 1)^2 = 0

  19. Error Analysis of Newton-Raphson Method By definition, the error at step i is Ei = α – xi. The Newton-Raphson method computes xi+1 = xi – f(xi) / f'(xi).

  20. Error Analysis of Newton-Raphson Method Suppose α is the true value (i.e., f(α) = 0). Using Taylor's series, f(α) = f(xi) + f'(xi)(α – xi) + (f''(c)/2)(α – xi)^2 = 0, where c is between xi and α. Substituting xi+1 = xi – f(xi)/f'(xi) gives Ei+1 = –(f''(c) / (2 f'(xi))) Ei^2. When xi and α are very close to each other, Ei+1 ≈ –(f''(α) / (2 f'(α))) Ei^2. The iterative process is said to be of second order.

  21. The Order of Iterative Process (Definition) Using an iterative process we get xk+1 from xk and other info. We have x0, x1, x2, …, xk+1 as the estimates for the root α. Let δk = α – xk. Then we may observe |δk+1| ≈ C |δk|^p for some constant C ≠ 0. The process in such a case is said to be of p-th order. • It is called superlinear if p > 1. • It is called quadratic if p = 2. • It is called linear if p = 1. • It is called sublinear if p < 1.

  22. Error of the Newton-Raphson Method Each error is approximately proportional to the square of the previous error. This means that the number of correct decimal places roughly doubles with each approximation. Example: Find the root of f(x) = e^-x – x = 0 (Ans: α = 0.56714329)

  23. Error Analysis

  24. Newton-Raphson vs. Fixed Point Iteration Find the root of f(x) = e^-x – x = 0. (Answer: α = 0.56714329) Fixed Point Iteration with Newton-Raphson

  25. Pitfalls of the Newton-Raphson Method • Sometimes slow

  26. Pitfalls of the Newton-Raphson Method Figure (a) An inflection point (f''(x) = 0) in the vicinity of a root causes divergence. Figure (b) A local maximum or minimum causes oscillations.

  27. Pitfalls of the Newton-Raphson Method Figure (c) It may jump from one location close to one root to a location that is several roots away. Figure (d) A zero slope causes division by zero.

  28. Overcoming the Pitfalls? • There are no general convergence criteria for the Newton-Raphson method. • Convergence depends on the nature of the function and the accuracy of the initial guess. • A guess that is close to the true root is always a better choice. • Good knowledge of the function or graphical analysis can help you make good guesses. • Good software should recognize slow convergence or divergence. • At the end of the computation, the final root estimate should always be substituted into the original function to verify the solution.

  29. Other Facts • The Newton-Raphson method converges quadratically (when it converges), except when the root is a multiple root. • When the initial guess is close to the root, the Newton-Raphson method usually converges. • To improve the chance of convergence, we could use a bracketing method to locate the initial value for the Newton-Raphson method.

  30. Summary • Differences between bracketing methods and open methods for locating roots • Guarantee of convergence? • Performance? • Convergence criterion for the fixed-point iteration method • Rate of convergence • Linear, quadratic, superlinear, sublinear • Understand what conditions make the Newton-Raphson method converge quickly or diverge
