
CSCI 125 & 161 / ENGR 144 Lecture 13

This lecture discusses the limitations of floating point numbers, the need for approximations in numerical algorithms, and the implementation of algorithms like square root using successive approximation and series expansion methods.


Presentation Transcript


  1. CSCI 125 & 161 / ENGR 144 Lecture 13 Martin van Bommel

  2. Floating Point Data • Floating point numbers are not exact • Value 0.1 is very close to 1/10, but not precisely equal to it • 0.1 (base 10) = 0.000110011001100110011... (base 2), a repeating binary fraction • Cannot rely on the accuracy of floating point values
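
  A minimal C++ sketch (not from the original slides) illustrating the point: adding 0.1 ten times does not yield exactly 1.0.

  #include <iostream>
  #include <iomanip>
  using namespace std;

  int main()
  {
    double sum = 0.0;
    for (int i = 0; i < 10; i++)
      sum += 0.1;                                       // add 0.1 ten times

    cout << setprecision(17);
    cout << "sum = " << sum << endl;                    // typically prints 0.99999999999999989, not 1
    cout << "(sum == 1.0) is " << (sum == 1.0) << endl; // prints 0 (false)
    return 0;
  }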

  3. for With Floating Point Data • Following for loop is syntactically correct for (x = 1.0; x <= 2.0; x += 0.1) ... • On some machines, x will never take on value 2.0, but will become 2.00000000000001, which fails the test

  4. for With Floating Point Values • Better to use the for loop for (i = 10; i <= 20; i++) ... • Then use the calculation x = i / 10.0; to give the floating point values for x
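
  A short sketch (not on the slide) putting the two fragments above together:

  #include <iostream>
  using namespace std;

  int main()
  {
    // Loop on an exact integer counter; derive the floating point value inside.
    for (int i = 10; i <= 20; i++) {
      double x = i / 10.0;        // 1.0, 1.1, ..., 2.0 -- the loop test never drifts
      cout << x << endl;
    }
    return 0;
  }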

  5. Equality With Floating Points • Testing floating point data for equality leads to problems with precision • Want to test if the values are equal to within some epsilon • In practice, test whether the absolute value of the difference between the numbers, divided by the smaller of their absolute values, is less than epsilon

  6. Approximately Equal

  #include <cmath>              // for fabs
  #define Epsilon 0.000001

  bool ApproximatelyEqual(double x, double y)
  {
    double diff = fabs(x - y);        // absolute difference
    y = fabs(y);
    x = fabs(x);
    if (y < x) x = y;                 // x now holds the smaller absolute value
    return ((diff / x) < Epsilon);    // relative difference below Epsilon?
  }
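
  A brief usage sketch (not from the slides), assuming the ApproximatelyEqual function above is in scope:

  #include <iostream>
  using namespace std;

  int main()
  {
    double a = 0.1 + 0.2;                          // stored as roughly 0.30000000000000004
    cout << (a == 0.3) << endl;                    // 0 : exact comparison fails
    cout << ApproximatelyEqual(a, 0.3) << endl;    // 1 : relative test passes
    return 0;
  }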

  7. Numerical Algorithms • Techniques used by computers to implement mathematical functions like sqrt • sqrt function exists in math library • Most people just use it • How is it implemented? • Successive approximation? • Series expansion?

  8. Successive Approximation • Steps: 1. Make a guess at the answer 2. Use guess to generate a better answer 3. If getting closer to actual answer, repeat until guess is close enough • How to generate next guess? • need a converging sequence of guesses

  9. Newton’s Method • e.g. • Suppose want square root of 16 • Use 8 as first guess - too large, 8 x 8 = 64 • Derive next guess by dividing value by guess • 16 / 8 = 2, answer must be between 2 and 8 • Use (2 + 8) / 2 = 5 as next guess • 16 / 5 = 3.2, (3.2 + 5) / 2 = 4.1 as next guess • Next guesses: 4.001219512 and 4.00000018584
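
  A small sketch (not on the slide) that traces these iterations; the value and first guess follow the example above.

  #include <iostream>
  #include <iomanip>
  using namespace std;

  int main()
  {
    double x = 16.0;
    double g = 8.0;                        // first guess from the slide
    cout << setprecision(12);
    for (int step = 0; step < 5; step++) {
      g = (g + x / g) / 2;                 // average the guess with x / guess
      cout << "guess = " << g << endl;     // 5, 4.1, 4.00121951..., 4.00000018..., 4
    }
    return 0;
  }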

  10. Problem with Successive Approx • Process will never guarantee an exact answer • Can continue until close enough (?) • Use ApproximatelyEqual function • Continue until square of guess approx. equal to original number

  11. Newton’s Algorithm

  double Sqrt(double x)
  {
    double g = x;                            // initial guess
    if (x == 0) return 0;
    if (x < 0) {
      cout << "Sqrt on negative number\n";
      return 0;
    }
    while (!ApproximatelyEqual(x, g * g))    // until g*g is close enough to x
      g = (g + x / g) / 2;                   // Newton update: average g and x/g
    return g;
  }

  12. Series Expansion • Value of a function approximated by adding terms in a mathematical series • If the addition of each new term brings the total closer to the desired value, the series converges and can be used to approximate the result • Taylor Series Approximation: sqrt(x) = 1 + (1/2)(x-1) - (1/8)(x-1)^2 + (1/16)(x-1)^3 - ... for 0 < x < 2

  13. Implement Series Expansion • For a series whose terms alternate in sign and step through the odd numbers (e.g. 1 - 1/3 + 1/5 - 1/7 + ...), think of each inside term as sign / odd • Initially: sign = odd = 1; • Then, for the next term: sign = -sign; odd = odd + 2;
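
  A short sketch (not from the slides) of this pattern, assuming the intended series is the alternating sum of reciprocals of odd numbers, 1 - 1/3 + 1/5 - ..., which is what the sign/odd updates suggest:

  #include <iostream>
  using namespace std;

  int main()
  {
    double sum = 0.0;
    int sign = 1;                        // sign of the current term
    int odd = 1;                         // odd denominator of the current term
    for (int i = 0; i < 100000; i++) {
      sum += (double) sign / odd;        // add the current inside term, sign/odd
      sign = -sign;                      // next term flips sign
      odd = odd + 2;                     // next term uses the next odd number
    }
    cout << "sum = " << sum << endl;     // approaches pi/4 = 0.7853...
    return 0;
  }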

  14. Implementing Taylor Series • Think of each term as coeff * xpower / factorial, where after i iterations coeff = (1/2)(1/2 - 1)...(1/2 - i + 1), xpower = (x - 1)^i, and factorial = i! • Then, for the next term: • xpower *= (x-1); • factorial *= (i+1); • coeff *= (0.5-i);

  15. Taylor Algorithm

  double TaylorSqrt(double x)
  {
    double sum, factorial, coeff, term, xpower;
    int i;
    factorial = coeff = xpower = term = 1.0;
    sum = 0.0;
    for (i = 0; sum != sum + term; i++) {    // stop once the term no longer changes the sum
      sum += term;
      coeff *= (0.5 - i);                    // (1/2)(1/2 - 1)...(1/2 - i)
      xpower *= (x - 1);                     // (x - 1)^(i+1)
      factorial *= (i + 1);                  // (i + 1)!
      term = coeff * xpower / factorial;     // next term of the series
    }
    return sum;
  }

  16. Fixing Limitation • Taylor Series Approx works for 0 < x < 2 • How can we use it for larger x? • Recall sqrt(4x) = 2 sqrt(x) • Then divide x by 4 until it is within range, calculate the sqrt, and multiply by the factors of 2 to recover the result • e.g. sqrt(24) = sqrt(4*4*1.5) = 2*2*sqrt(1.5)

  17. Taylor Fix

  double TSqrt(double x)
  {
    int mult = 1;
    if (x == 0) return 0;
    if (x < 0) {
      cout << "TSqrt of negative value " << x << endl;
      return 0;
    }
    while (x >= 2) {        // reduce x into the range 0 < x < 2
      x /= 4;               // sqrt(4x) = 2 sqrt(x) ...
      mult *= 2;            // ... so keep a factor of 2 for each division by 4
    }
    return mult * TaylorSqrt(x);
  }
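
  A short test sketch (not from the slides), assuming TaylorSqrt and TSqrt above are in scope, comparing the results against the math library's sqrt:

  #include <iostream>
  #include <iomanip>
  #include <cmath>
  using namespace std;

  int main()
  {
    double values[] = { 0.5, 1.5, 16.0, 24.0, 1000.0 };
    cout << setprecision(12);
    for (int i = 0; i < 5; i++) {
      double v = values[i];
      cout << "TSqrt(" << v << ") = " << TSqrt(v)
           << "    sqrt(" << v << ") = " << sqrt(v) << endl;
    }
    return 0;
  }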
