Computational Methods in Physics Lecture - ODEs, Adaptive Step Size, Projects, Talks

Presentation Transcript


  1. Computational Methods in Physics PHYS 3437 Dr Rob Thacker Dept of Astronomy & Physics (MM-301C) thacker@ap.smu.ca

  2. Today’s Lecture • Notes on projects & talks • Issues with adaptive step size selection • Second order ODEs • nth order ODEs • Algorithm outline for handling nth order ODEs • Advance warning: no class on March 31st

  3. Project time-line • By now you should have approximately ½ of the project coding & development done • There are only 4 weeks until the first talk • I would advise you to have at least some outline of how your write-up will look • Project reports are due on the last day of term

  4. Talk Format • Allowing for changeover between presenters, connecting/disconnecting etc., we can allow up to 60 minutes per lecture period • 3 people per lecture means 20 minutes per presenter • 15 minute presentation • 5 minutes for questions • Remember your presentation will be marked by your peers

  5. Issues with the adaptive step size algorithm • Step (8): If err < 0.1ε → h is too small; increase h by a factor of 1.5 for the next step. If err > ε → h is too big; halve h and repeat the iteration. If 0.1ε ≤ err ≤ ε → h is OK. • Clearly, if there are problems with convergence, this step will keep dividing h forever. You need to set a limit here, either by not allowing h to get smaller than some preset floor or by counting the number of times you halve h (see the sketch below) • Because of rounding error as you increment x, it is very unlikely that you will hit xmax exactly with the final step • Therefore you should choose h = min(h, xmax − x), where x is the current position
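
A sketch of that safeguard in the Fortran-flavoured pseudocode of these slides; the counter nhalve, the floor hmin, and the cap of 50 halvings are illustrative assumptions, not values fixed by the lecture:

```fortran
! Fragment: safeguard logic for step (8), to sit inside the adaptive loop.
nhalve = 0
do
   if (err <= errmax) exit                 ! step accepted
   if (h <= hmin .or. nhalve >= 50) exit   ! refuse to shrink h forever
   h = h/2.0
   nhalve = nhalve + 1
   ! ... redo the two R-K estimates with the new h and recompute err ...
end do
if (err < 0.1*errmax) h = 1.5*h            ! error comfortably small: grow h
h = min(h, xmax - x)                       ! last step must land exactly on xmax
```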

  6. Issues with the adaptive step size algorithm: more “gotchas” • Step (7): Estimate error • You should add a very small (“tiny”) number to the absolute value of the denominator to avoid a divide by zero (when the two estimates are both very close to zero you might, in incredibly rare situations, get a denominator of zero) • Since you store values of y and x in arrays, it is possible that you may run out of storage (because you need too many points). You should check the number of positions stored against the maximum allowed in your declaration of the x & y arrays (see the fragment below). This will avoid the possibility of a segmentation fault.
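
The two fixes might look like this for the single-function case; the value tiny = 1.0e-30 and the point counter i are illustrative assumptions:

```fortran
! Fragment: guarded relative error and a storage check before saving a point.
err = abs(ystar - ytilde)/(abs(ystar) + 1.0e-30)    ! tiny avoids 0/0
if (i + 1 > imax) stop 'x and y arrays are full - increase imax'
```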

  7. Second order ODEs • Consider the general second order ODE y′′ = g(x, y, y′) • We now require that two initial values be provided, namely the value y0 = y(x0) and the derivative y′(x0) • These are called the Cauchy conditions • If we let z = y′, then z′ = y′′ and we have the coupled first order system (1): y′ = z, z′ = g(x, y, z)

  8. Second order ODEs cont. • We have thus turned a second order ODE into a first order ODE for a vector • Can apply the R-K solver to the system, but you now have two components to integrate • At each step must update all x, y and z values
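
As a concrete worked case (a standard example, assumed here rather than taken from the slides), the simple harmonic oscillator reduces exactly this way:

```latex
y'' = -\omega^2 y,\quad y(x_0)=y_0,\; y'(x_0)=v_0
\qquad\xrightarrow{\;z\,=\,y'\;}\qquad
\begin{cases} y' = z \\ z' = -\omega^2 y \end{cases}
```

Each R-K step then advances the pair (y, z) together.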

  9. Diagrammatically • [Figure: two side-by-side plots of y(x) and z(x) over one step from x0 to x1, advancing (y0, z0) to (y1, z1); remember y′ = z (so y′0 = z0) and z′ = g(x, y, y′) (so z′0 = g0)]

  10. nth order ODEs • Systems with higher than second order derivatives are actually quite rare in physics • Nonetheless we can adapt the idea used for 2nd order systems to nth order • Suppose we have a system specified by y^(n) = g(x, y, y′, …, y^(n−1)) • Such an equation requires n initial values, namely y and its first n−1 derivatives at x0; suppose again we have the Cauchy conditions

  11. Definitions for the nth order system • Define z1 = y, z2 = y′, z3 = y′′, …, zn = y^(n−1), i.e. zk = y^(k−1) for k = 1, …, n

  12. Definitions for the nth order system • Differentiating each definition gives z′1 = z2, z′2 = z3, …, z′(n−1) = zn, while the original ODE supplies z′n = g(x, z1, z2, …, zn) • So we again have a system of coupled first order equations that can be solved via R-K.
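
For instance (an illustrative example, not from the slides), the third order equation y′′′ = x + y·y′ becomes, with z1 = y, z2 = y′, z3 = y′′:

```latex
z_1' = z_2, \qquad z_2' = z_3, \qquad z_3' = x + z_1 z_2
```

The Cauchy conditions supply z1(x0) = y0, z2(x0) = y′0 and z3(x0) = y′′0.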

  13. Algorithm for solving such a system • Useful parameters to set: imax = number of points at which y(x) is to be evaluated (1000 is reasonable); nmax = highest order of ODE to be solved (say 9); errmax = highest tolerated error (say 1.0×10⁻⁹) • Declare x(imax), y(imax,nmax), y0(nmax), yl(nmax), ym(nmax), yr(nmax), ytilde(nmax), ystar(nmax) • Note that the definitions used here are not quite consistent with the variable definitions used in the discussion of the single function case.

  14. User inputs • Need to get from the user (not necessarily at run time) • The g(x,y,y’,…) function • Domain of x, i.e. what are xmin and xmax • What is the order of the ODE to be solved, stored in variable nord • What are the initial values for y and the derivatives (the Cauchy conditions), these are stored in y0(nmax)

  15. Correspondence of arrays to variables • The arrays y(imax,nmax) and y0(nmax) correspond to y and its derivatives as follows: • y(1:imax,1) ≡ y values, with y(1,1) = y0 = y0(1) • y(1:imax,2) ≡ y′ values, with y(1,2) = y′0 = y0(2) • y(1:imax,3) ≡ y′′ values, with y(1,3) = y′′0 = y0(3) • y(1:imax,nord) ≡ y^(nord−1) values, with y(1,nord) = y^(nord−1)(x0) = y0(nord)

  16. Choose initial step size & initialize yl values • Apply the same criterion as standard Runge-Kutta: dx = 0.1×errmax×(xmax−xmin) and dxmin = 10⁻³×dx • We can use dxmin to ensure that the adaptive step size is never less than 1/1000th of the initial guess • x(1) = xl = xmin sets the initial position for the solver • Initialize yl (the left y-values for the first interval): do n=1,nord: yl(n) = y(1,n) = y0(n) (both yl and the first stored row are set from the Cauchy data)

  17. Start adaptive loop • Set i=1; this will count the number of x positions evaluated • dxh = dx/2 (half width of zone) • xr = xl + dx (right-hand boundary) • xm = xl + dxh (midpoint of zone) • Perform the R-K calculations on this zone for all yn: • Calculate all y values on the right boundary using a single R-K step (stored in ytilde(nmax)) • Calculate all y values on the right boundary using two half R-K steps (stored in ystar(nmax)), for example: call rk(xl,yl,ytilde,nord,dx); call rk(xl,yl,ym,nord,dxh); call rk(xm,ym,ystar,nord,dxh)

  18. Now evaluate a Richardson-extrapolation-style value yr(n) • err = 0 • do n=1,nord: combine ystar(n) and ytilde(n) into the improved value yr(n) and accumulate the worst relative difference into err (see the fragment below) • If err < 0.1×errmax then increase dx: dx = dx×1.5 • If err > errmax, dx is too big: dx = max(dxh, dxmin) and repeat the evaluation of this zone • If 0.1×errmax ≤ err ≤ errmax, dx is OK & we need to store all results • The error is now taken over all the functions being integrated
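
A sketch of that loop body, assuming the Richardson combination and relative-error measure from the single-function case carry over component-wise (the factor of 15 is the fourth-order value; tiny is the guard from slide 6):

```fortran
! Fragment: component-wise Richardson value and worst-case relative error.
err = 0.0
do n = 1, nord
   yr(n) = ystar(n) + (ystar(n) - ytilde(n))/15.0     ! improved estimate
   err   = max(err, abs(ystar(n) - ytilde(n))/(abs(ystar(n)) + tiny))
end do
```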

  19. Update values and prepare for next step • Increment i: i = i+1 (check that it doesn’t exceed imax) • x(i) = xr • xl = xr (note: should check that xl hasn’t gone past xmax & that the h value is chosen appropriately) • do n=1,nord: y(i,n) = yr(n); yl(n) = yr(n) • Then return to the top of the loop and do the next step

  20. Runge-Kutta routine • Subroutine rk(xl,yl,yr,nord,dx) • yl and yr will be arrays of size nord • Set useful variables: dxh = dx/2, xr = xl + dx, xm = xl + dxh • Recall, given y′ = f(x,y), the point (xl,yl) and step dx, the fourth order step is: f0 = f(xl,yl); f1 = f(xm, yl + dxh·f0); f2 = f(xm, yl + dxh·f1); f3 = f(xr, yl + dx·f2); yr = yl + dx(f0 + 2f1 + 2f2 + f3)/6

  21. Steps in algorithm • Complications: we have a vector of functions and a vector of yl values, so each R-K sub-step is a loop (the names yt and f0-f3 follow the sketch after slide 22) • call derivs(xl,yl,f0,nord) (sets all f0 function values) • do n=1,nord: yt(n) = yl(n) + dxh×f0(n) • call derivs(xm,yt,f1,nord) (sets f1 values) • do n=1,nord: yt(n) = yl(n) + dxh×f1(n) • call derivs(xm,yt,f2,nord) (sets f2 values) • do n=1,nord: yt(n) = yl(n) + dx×f2(n) • call derivs(xr,yt,f3,nord) (sets f3 values)

  22. Calculate R-K formula & derivs subroutine • do n=1,nord: yr(n) = yl(n) + dx×(f0(n) + 2×f1(n) + 2×f2(n) + f3(n))/6 • That’s the end of the R-K routine • subroutine derivs(x,y,yp,nord) • do n=1,nord-1: yp(n)=y(n+1) • Lastly set yp(nord)=g(x,y,y′,…,y^(nord−1))
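
Putting slides 20-22 together, here is a self-contained sketch of the vector R-K step, tested on y′′ = −y so that y(1) should come out near sin 1 ≈ 0.841. The scratch array yt, the slope arrays f0-f3 and the fixed-step driver are illustrative assumptions; the adaptive layer of slides 16-19 is omitted, and the fourth order formulas are the standard ones rather than a verbatim copy of the slide's images:

```fortran
program nth_order_rk
  implicit none
  integer, parameter :: nord = 2
  real :: x, dx, yl(nord), yr(nord)
  integer :: i
  x  = 0.0
  dx = 0.01
  yl = (/ 0.0, 1.0 /)                   ! Cauchy conditions: y(0)=0, y'(0)=1
  do i = 1, 100                         ! 100 fixed steps: integrate to x = 1
     call rk(x, yl, yr, nord, dx)
     x  = x + dx
     yl = yr                            ! right edge becomes next left edge
  end do
  print *, 'y(1) =', yl(1), '  exact sin(1) =', sin(1.0)
contains
  subroutine rk(xl, yl, yr, nord, dx)   ! one fourth-order step for the vector
    integer, intent(in) :: nord
    real, intent(in)  :: xl, yl(nord), dx
    real, intent(out) :: yr(nord)
    real :: dxh, xm, xr
    real :: f0(nord), f1(nord), f2(nord), f3(nord), yt(nord)
    dxh = dx/2.0; xm = xl + dxh; xr = xl + dx
    call derivs(xl, yl, f0, nord)       ! slope at left edge
    yt = yl + dxh*f0
    call derivs(xm, yt, f1, nord)       ! first midpoint slope
    yt = yl + dxh*f1
    call derivs(xm, yt, f2, nord)       ! second midpoint slope
    yt = yl + dx*f2
    call derivs(xr, yt, f3, nord)       ! slope at right edge
    yr = yl + dx*(f0 + 2.0*f1 + 2.0*f2 + f3)/6.0
  end subroutine rk
  subroutine derivs(x, y, yp, nord)     ! chain rule for the reduced system
    integer, intent(in) :: nord
    real, intent(in)  :: x, y(nord)
    real, intent(out) :: yp(nord)
    integer :: n
    do n = 1, nord-1
       yp(n) = y(n+1)                   ! z1' = z2, z2' = z3, ...
    end do
    yp(nord) = -y(1)                    ! g(x,y,y') = -y for the oscillator
  end subroutine derivs
end program nth_order_rk
```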

  23. Summary • There are a few issues with the adaptive step size algorithm you need to be concerned about (avoiding divide by zero etc.) • Second order systems can be turned into coupled first order systems • nth order systems can be turned into a system of n coupled first order equations • The adaptive R-K algorithm for vectors is in principle similar to that for a single function • However, you must loop over the vector components

  24. NON-EXAMINABLE BUT USEFUL TO KNOW Implicit Methods: Backward Euler method • Recall the Forward Euler method: yn+1 = yn + h f(tn, yn) • Rather than predicting forward from f(tn, yn), we can evaluate the derivative at the end of the step: yn+1 = yn + h f(tn+1, yn+1) • Rewriting in terms of n+1 and n and setting r = yn+1, we need to use a root finding method to solve r − yn − h f(tn+1, r) = 0
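
A minimal runnable sketch of one way to implement this, assuming the scalar stiff example from slide 30 (f(t,y) = −15y) and Newton's method as the root finder; all names and the tolerance are illustrative:

```fortran
! Sketch: backward Euler for y' = f(t,y), hard-wired to f(t,y) = -15y.
! Each step solves F(r) = r - y_n - h*f(t_{n+1}, r) = 0 for r = y_{n+1}.
program backward_euler
  implicit none
  real :: t, y, h, r, fval, dfdr
  integer :: n, k
  y = 1.0                        ! initial condition y(0) = 1
  t = 0.0
  h = 0.125                      ! forward Euler oscillates at this h
  do n = 1, 8
     r = y                       ! initial guess: carry the old value forward
     do k = 1, 20                ! Newton iterations on F(r)
        fval = r - y + 15.0*h*r  ! F(r) with f = -15r
        dfdr = 1.0 + 15.0*h      ! F'(r)
        r = r - fval/dfdr
        if (abs(fval) < 1.0e-10) exit
     end do
     y = r
     t = t + h
     print *, t, y, exp(-15.0*t) ! numerical vs exact solution
  end do
end program backward_euler
```

Because f is linear here Newton converges in a single update; for a nonlinear f the inner loop does real work, which is the "increased expense" noted on the next slide.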

  25. Notes on implicit methods • Implicit methods tend to be more stable, and for the same step size more accurate • The trade-off is the increased expense of using an iterative procedure to find the roots • This can become very expensive in coupled systems • Richardson Extrapolation can also be applied to implicit ODE solvers, resulting in higher order schemes with good convergence properties • Use exactly the same procedure as outlined in the previous lecture: compare expansions at h and h/2 • Crank-Nicolson is another popular implicit method • It relies upon the derivative at both the start and end points of the interval • Second order accurate solution

  26. Multistep methods • Thus far we’ve considered self-starting methods that use only values from xn and xn+1 • Alternatively, accuracy can be improved by using a linear combination of additional points • Utilize yn−s+1, yn−s+2, …, yn to construct approximations to derivatives of order up to s at tn • Example for s=2

  27. Comparison of single and multistep methods

  28. Second order method • We can utilize this relationship to describe a multistep second order method (see the formula below) • Generalized to higher orders, these methods are known as Adams-Bashforth (predictor) methods • s=1 recovers the Euler method • Implicit methodologies are possible as well: Adams-Moulton (predictor-corrector) methods • Since these methods rely on several previous steps, the first few values of y must be calculated by another method, e.g. R-K • The starting method needs to be as accurate as the multistep method
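
The two-step (s = 2) case is the standard second order Adams-Bashforth formula; the slide's own expression is assumed to be this textbook form:

```latex
y_{n+1} = y_n + h\left(\tfrac{3}{2}\,f(t_n, y_n) - \tfrac{1}{2}\,f(t_{n-1}, y_{n-1})\right)
```

Setting s = 1 drops the f(t_{n−1}, y_{n−1}) term and recovers the Euler method, as noted above.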

  29. “Stiff” problems • Definition of stiffness: “Loosely speaking, the initial value problem is referred to as being stiff if the absolute stability requirement dictates a much smaller time step than is needed to satisfy approximation requirements alone.” • Formally: an IVP is stiff in some interval [0,b] if the step size needed to maintain stability of the forward Euler method is much smaller than the step size required to represent the solution accurately • Stability requirements are overriding accuracy requirements • Why does this happen? • Trying to integrate smooth solutions that are surrounded by strongly divergent or oscillatory solutions • Small deviations away from the true solution lead to the forward terms being very inaccurate

  30. Example • Consider: y′(t) = −15y(t), t ≥ 0, y(0) = 1 • Exact solution: y(t) = e^(−15t), so y(t) → 0 as t → ∞ • If we examine the forward Euler method, strong oscillatory behaviour forces us to take very small steps even though the solution looks quite smooth
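
Working the stability bound out explicitly for this example (standard analysis, consistent with slide 34):

```latex
y_{n+1} = (1 - 15h)\,y_n
\quad\Longrightarrow\quad
|1 - 15h| \le 1 \;\Longleftrightarrow\; h \le \tfrac{2}{15} \approx 0.133
```

For h just below this bound the factor 1 − 15h is close to −1, so successive iterates flip sign; those are the strong oscillations referred to above.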

  31. Implicit methods in stiff problems • Because implicit methods can use longer timesteps, they are strongly favoured in integrations of stiff systems • Consider a two-stage Adams-Moulton integrator:
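
Assuming the integrator meant here is the trapezoidal rule (the two-point Adams-Moulton method), applying it to y′ = −15y gives an update that is stable for every positive step size:

```latex
y_{n+1} = y_n + \frac{h}{2}\,(f_{n+1} + f_n)
\quad\Longrightarrow\quad
y_{n+1} = \frac{1 - 15h/2}{1 + 15h/2}\,y_n,
\qquad \left|\frac{1 - 15h/2}{1 + 15h/2}\right| < 1 \ \text{for all } h > 0
```

For h = 0.125 the amplification factor is 0.0625/1.9375 ≈ 0.032, consistent with the rapid convergence shown on the next slide.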

  32. Adams-Moulton solution for h = 0.125 • [Figure: Adams-Moulton solution for h = 0.125 compared with the exact decay e^(−15t)] • Much better behaviour and convergence

  33. Choosing stiff solvers • This isn’t as easy as you might think • The performance of different algorithms can be quite dependent upon the specific problem • Researchers often write papers comparing the performance of different solvers on a given problem and then advise on which one to use • This is a sensible way to do things: I recommend you do the same if you have a stiff problem • Try solvers from library packages like ODEPACK or the Numerical Recipes routines

  34. Stability of the Forward Euler method • Stability is more important than the truncation error • Consider y′ = λy for some complex λ • Provided Re λ < 0 the solution is bounded • Substituting into the Euler method: yn+1 = yn + Δt·λyn = (1 + λΔt)yn • For yn to remain bounded we must have |1 + λΔt| ≤ 1 • Thus a poorly chosen Δt that breaks the above inequality will lead to yn increasing without limit

  35. Behaviour of small errors • We considered the previous y′ = λy equation because it describes the behaviour of small changes • Suppose we have a solution ys = y + ε, where ε is a small error • Substitute into y′ = f(t,y) and use a Taylor expansion: y′ + ε′ = f(t, y + ε) = f(t,y) + ε ∂f/∂y + O(ε²) • To leading order in ε this gives ε′ = (∂f/∂y)ε, i.e. small errors grow or decay like the model problem with λ = ∂f/∂y

  36. Next lecture • Monte Carlo methods
