International Journal of Trend in Scientific Research and Development (IJTSRD)
Volume 3, Issue 5, August 2019 | Available Online: www.ijtsrd.com | e-ISSN: 2456-6470

Errors in the Discretized Solution of a Differential Equation

Wai Mar Lwin(1), Khaing Khaing Wai(2)
(1) Faculty of Computing, (2) Department of Information Technology Support and Maintenance
(1,2) University of Computer Studies, Mandalay, Myanmar

How to cite this paper: Wai Mar Lwin | Khaing Khaing Wai, "Errors in the Discretized Solution of a Differential Equation", published in International Journal of Trend in Scientific Research and Development (IJTSRD), ISSN: 2456-6470, Volume-3, Issue-5, August 2019, pp. 2266-2272, https://doi.org/10.31142/ijtsrd27937

Copyright © 2019 by the author(s) and International Journal of Trend in Scientific Research and Development Journal. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (CC BY 4.0) (http://creativecommons.org/licenses/by/4.0).

ABSTRACT
We study the error in finite difference approximations to the derivatives of a function. We construct the discretized problem for the heat equation, and the local truncation and global errors are discussed. The solution of the discretized problem is constructed, and the analytical and discretized solutions are compared. The two solution graphs are drawn using MATLAB software.

KEYWORDS: Differential Equations, MATLAB, Heat Equation

1. INTRODUCTION
1.1. Measuring Errors
In order to discuss the accuracy of a numerical solution, it is necessary to choose a manner of measuring that error. It may seem obvious what is meant by the error, but there are often many different ways to measure it, and these can sometimes give quite different impressions of the accuracy of an approximate solution.

1.1.1. Errors in a Scalar Value
First we consider a problem in which the answer is a single value $z \in \mathbb{R}$. Consider, for example, the scalar ODE
$$u'(t) = f(u(t)), \qquad u(0) = u_0, \qquad (1.1)$$
and suppose we are trying to compute the solution at some particular time $T$, so $z = u(T)$. Denote the computed solution by $\hat z$. Then the error in this computed solution is $E = \hat z - z$.

1.1.2. Absolute Error
A natural measure of this error is the absolute value of $E$,
$$|E| = |\hat z - z|.$$
This is called the absolute error in the approximation.

1.1.3. Relative Error
The quantity
$$\frac{|\hat z - z|}{|z|}$$
is called the relative error.

1.1.4. "Big-oh" and "little-oh" notation
In discussing the rate of convergence of a numerical method we use the notation $O(\Delta)$, the so-called "big-oh" notation. If $f(\Delta)$ and $g(\Delta)$ are two functions of $\Delta$, then we say that $f(\Delta) = O(g(\Delta))$ as $\Delta \to 0$ if there is some constant $C$ such that
$$\left|\frac{f(\Delta)}{g(\Delta)}\right| \le C \quad \text{for all } \Delta \text{ sufficiently small}, \qquad (1.2)$$
or, equivalently, if we can bound $|f(\Delta)| \le C\,|g(\Delta)|$ for all $\Delta$ sufficiently small.

It is also sometimes convenient to use the "little-oh" notation: $f(\Delta) = o(g(\Delta))$ as $\Delta \to 0$ if
$$\frac{f(\Delta)}{g(\Delta)} \to 0 \quad \text{as } \Delta \to 0. \qquad (1.3)$$
This is slightly stronger than the previous statement and means that $f(\Delta)$ decays to zero faster than $g(\Delta)$. If $f(\Delta) = o(g(\Delta))$, then $f(\Delta) = O(g(\Delta))$, though the converse may not be true. Saying that $f(\Delta) = o(1)$ as $\Delta \to 0$ simply means that $f(\Delta) \to 0$ as $\Delta \to 0$.

Examples 1.1.1
$2\Delta^3 = O(\Delta^2)$ as $\Delta \to 0$, since $2\Delta^3/\Delta^2 = 2\Delta \le 1$ for all $\Delta \le \tfrac12$.
$2\Delta^3 = o(\Delta^2)$ as $\Delta \to 0$, since $2\Delta^3/\Delta^2 = 2\Delta \to 0$ as $\Delta \to 0$.
$\sin(\Delta) = O(\Delta)$ as $\Delta \to 0$, since $\sin(\Delta) = \Delta - \tfrac{\Delta^3}{3!} + \tfrac{\Delta^5}{5!} - \cdots$, so $|\sin(\Delta)/\Delta| \le 1$ for all $\Delta$.
$\sin(\Delta) = o(1)$ as $\Delta \to 0$, since $\sin(\Delta) \to 0$ as $\Delta \to 0$.

1.1.5. Taylor Expansion
Each of the function values of $u$ can be expanded in a Taylor series about the point $x$, e.g.,
$$u(x+\Delta) = u(x) + \Delta u'(x) + \frac{\Delta^2}{2}u''(x) + \frac{\Delta^3}{6}u'''(x) + O(\Delta^4), \qquad (1.4)$$
$$u(x-\Delta) = u(x) - \Delta u'(x) + \frac{\Delta^2}{2}u''(x) - \frac{\Delta^3}{6}u'''(x) + O(\Delta^4). \qquad (1.5)$$
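As a small illustration of the quantities in Sections 1.1.1-1.1.3, the following MATLAB sketch computes the absolute and relative error in a computed value of $z = u(T)$. The particular choices $f(u) = -u$, $u(0) = 1$ and $T = 1$ (so $z = e^{-1}$), and the use of the forward Euler method to produce the computed value $\hat z$, are assumptions made only for this illustration; they are not prescribed by the text above.

% Minimal sketch of absolute and relative error (Sections 1.1.1-1.1.3).
% Assumed example: u'(t) = -u(t), u(0) = 1, so z = u(T) = exp(-T) at T = 1.
% Forward Euler is used here only to produce a computed value zhat.
T = 1;
n = 50;                    % number of time steps (illustrative choice)
k = T/n;                   % step size
zhat = 1;                  % u(0)
for i = 1:n
    zhat = zhat + k*(-zhat);        % forward Euler step for f(u) = -u
end
z = exp(-T);                        % true value z = u(T)
E = zhat - z;                       % error
abs_err = abs(E);                   % absolute error |zhat - z|
rel_err = abs(E)/abs(z);            % relative error |zhat - z|/|z|
fprintf('absolute error = %.3e, relative error = %.3e\n', abs_err, rel_err);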
1.2. Finite Difference Approximations
Our goal is to approximate solutions to differential equations, i.e., to find a function (or some discrete approximation to this function) which satisfies a given relationship between various of its derivatives on some given region of space and/or time, along with some boundary conditions along the edges of this domain. A finite difference method proceeds by replacing the derivatives in the differential equation by finite difference approximations. This gives a large algebraic system of equations to be solved in place of the differential equation, something that is easily done on a computer.

We first consider the more basic question of how we can approximate the derivatives of a known function by finite difference formulas based only on values of the function itself at discrete points. Besides providing a basis for the later development of finite difference methods for solving differential equations, this allows us to investigate several key concepts, such as the order of accuracy of an approximation, in the simplest possible setting. Let $u(x)$ represent a function of one variable that will always be assumed to be smooth and bounded over an interval containing a particular point of interest $x$.

1.2.1. First Derivatives
Suppose we want to approximate $u'(x)$ by a finite difference approximation based only on values of $u$ at a finite number of points near $x$. One choice would be to use
$$u'(x) = \frac{u(x+\Delta) - u(x)}{\Delta} + O(\Delta) \qquad (1.6)$$
for some small value of $\Delta$. This is known as the forward difference approximation. Another one-sided approximation, based on the points $x$ and $x-\Delta$, is
$$u'(x) = \frac{u(x) - u(x-\Delta)}{\Delta} + O(\Delta), \qquad (1.7)$$
known as the backward difference approximation. Another possibility is the centered difference approximation
$$u'(x) = \frac{u(x+\Delta) - u(x-\Delta)}{2\Delta} + O(\Delta^2). \qquad (1.8)$$

1.2.2. Second Order Derivatives
The standard second order centered approximation is given by
$$u''(x) = \frac{u(x-\Delta) - 2u(x) + u(x+\Delta)}{\Delta^2} - \frac{\Delta^2}{12}u''''(x) + O(\Delta^4) = \frac{u(x-\Delta) - 2u(x) + u(x+\Delta)}{\Delta^2} + O(\Delta^2). \qquad (1.9)$$

1.2.3. Higher Order Derivatives
Finite difference approximations to higher order derivatives can also be obtained. Two approximations to $u'''(x)$ are
$$u'''(x) = \frac{u(x+2\Delta) - 3u(x+\Delta) + 3u(x) - u(x-\Delta)}{\Delta^3} + O(\Delta), \qquad (1.10)$$
$$u'''(x) = \frac{u(x+2\Delta) - 2u(x+\Delta) + 2u(x-\Delta) - u(x-2\Delta)}{2\Delta^3} + O(\Delta^2). \qquad (1.11)$$
The first equation (1.10) is un-centered and first order accurate; the second equation (1.11) is centered and second order accurate.
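The orders of accuracy stated in (1.6)-(1.9) are easy to observe numerically. The following MATLAB sketch does this for the assumed test function $u(x) = \sin(x)$ at the point $x = 1$; the test function and the sequence of spacings are illustrative choices only.

% Sketch: observed accuracy of the difference formulas (1.6)-(1.9)
% for the assumed test function u(x) = sin(x) at x = 1.
u   = @(x) sin(x);
x0  = 1;
du  = cos(x0);          % exact u'(x0)
d2u = -sin(x0);         % exact u''(x0)
for d = [0.1 0.05 0.025 0.0125]               % d plays the role of Delta
    Dp = (u(x0+d) - u(x0))/d;                 % forward difference (1.6)
    Dm = (u(x0) - u(x0-d))/d;                 % backward difference (1.7)
    D0 = (u(x0+d) - u(x0-d))/(2*d);           % centered difference (1.8)
    D2 = (u(x0-d) - 2*u(x0) + u(x0+d))/d^2;   % second derivative (1.9)
    fprintf('%8.4f  %9.2e  %9.2e  %9.2e  %9.2e\n', d, ...
        abs(Dp-du), abs(Dm-du), abs(D0-du), abs(D2-d2u));
end
% Halving d should roughly halve the first two errors, O(Delta), and
% roughly quarter the last two, O(Delta^2).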
2. COMPARISON OF ANALYTICAL AND DISCRETIZED SOLUTIONS OF THE HEAT EQUATION
2.1. Solutions for the Heat Equation
2.1.1. Finite Difference Method
We will derive a finite difference approximation of the following initial-boundary value problem:
$$u_t = u_{xx} \quad \text{for } x \in (0,1),\ t > 0,$$
$$u(0,t) = u(1,t) = 0 \quad \text{for } t \ge 0, \qquad (2.1)$$
$$u(x,0) = f(x) \quad \text{for } x \in (0,1).$$
Let $n \ge 1$ be a given integer, and define the grid spacing in the $x$-direction by $\Delta x = 1/(n+1)$. The grid points in the $x$-direction are given by $x_j = j\Delta x$ for $j = 0,1,\dots,n+1$. Similarly, we define $t_m = mh$ for integers $m \ge 0$, where $h$ denotes the time step. We let $v_j^m$ denote an approximation of $u(x_j,t_m)$. We have the following approximations:
$$u_t(x,t) = \frac{u(x,t+h) - u(x,t)}{h} + O(h), \qquad (2.2)$$
$$u_{xx}(x,t) = \frac{u(x-\Delta x,t) - 2u(x,t) + u(x+\Delta x,t)}{\Delta x^2} + O(\Delta x^2). \qquad (2.3)$$
These approximations motivate the following scheme:
$$\frac{v_j^{m+1} - v_j^m}{h} = \frac{v_{j-1}^m - 2v_j^m + v_{j+1}^m}{\Delta x^2} \quad \text{for } j = 1,\dots,n,\ m \ge 0. \qquad (2.4)$$
By using the boundary conditions of (2.1), we have
$$v_0^m = v_{n+1}^m = 0 \quad \text{for all } m \ge 0,$$
and the scheme is initialized by
$$v_j^0 = f(x_j) \quad \text{for } j = 1,\dots,n.$$
Let $r = h/\Delta x^2$. Then the scheme can be rewritten in the more convenient form
$$v_j^{m+1} = r\,v_{j-1}^m + (1-2r)\,v_j^m + r\,v_{j+1}^m, \quad j = 1,\dots,n,\ m \ge 0. \qquad (2.5)$$
When the scheme is written in this form, we observe that the values on the time level $t_{m+1}$ are computed using only the values on the previous time level $t_m$; the scheme is explicit, so no tridiagonal system of linear equations has to be solved.
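A direct MATLAB implementation of the explicit scheme (2.5) might look as follows. The initial condition $f(x) = \sin(2\pi x)$, the grid sizes, and the choice $h = 0.4\,\Delta x^2$ (which keeps $r \le 1/2$, the usual stability restriction for this explicit scheme) are assumptions made for this sketch only.

% Sketch of the explicit scheme (2.5), assuming f(x) = sin(2*pi*x) as an
% illustrative initial condition.
n  = 50;                 % number of interior grid points
dx = 1/(n+1);            % grid spacing, Delta x
h  = 0.4*dx^2;           % time step chosen so that r <= 1/2 (assumed stability restriction)
r  = h/dx^2;
x  = (1:n)'*dx;          % interior grid points x_j
v  = sin(2*pi*x);        % v_j^0 = f(x_j)
M  = 200;                % number of time steps (illustrative)
for m = 1:M
    vnew = zeros(n,1);
    for j = 1:n
        vl = 0; vr = 0;                          % boundary values v_0^m = v_{n+1}^m = 0
        if j > 1, vl = v(j-1); end
        if j < n, vr = v(j+1); end
        vnew(j) = r*vl + (1-2*r)*v(j) + r*vr;    % scheme (2.5)
    end
    v = vnew;
end
% v now approximates u(x_j, t_M) with t_M = M*h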
2.1.2. Approximate Solution
The first step in analyzing the discretized problem is to derive a family of particular solutions of the following problem:
$$\frac{v_j^{m+1} - v_j^m}{h} = \frac{v_{j-1}^m - 2v_j^m + v_{j+1}^m}{\Delta x^2} \quad \text{for } j = 1,\dots,n,\ m \ge 0, \qquad (2.6)$$
with the boundary conditions
$$v_0^m = v_{n+1}^m = 0 \quad \text{for all } m \ge 0. \qquad (2.7)$$
The initial data will be taken into account later. We seek particular solutions of the form
$$v_j^m = T_m X_j \quad \text{for } j = 1,\dots,n,\ m \ge 0. \qquad (2.8)$$
Here $X$ is a vector of $n$ components, independent of $m$, while $\{T_m\}_{m\ge0}$ is a sequence of real numbers. By inserting (2.8) into (2.6), we get
$$\frac{T_{m+1} - T_m}{h}\,X_j = T_m\,\frac{X_{j-1} - 2X_j + X_{j+1}}{\Delta x^2}.$$
Since we are looking only for nonzero solutions, we assume $T_m X_j \ne 0$, and thus we obtain
$$\frac{T_{m+1} - T_m}{h\,T_m} = \frac{X_{j-1} - 2X_j + X_{j+1}}{\Delta x^2\,X_j}.$$
The left-hand side only depends on $m$ and the right-hand side only depends on $j$. Consequently, both expressions must be equal to a common constant, say $-\mu$, and we get the following two difference equations:
$$-\frac{X_{j-1} - 2X_j + X_{j+1}}{\Delta x^2} = \mu X_j \quad \text{for } j = 1,\dots,n, \qquad (2.9)$$
$$T_{m+1} = (1-\mu h)\,T_m \quad \text{for } m \ge 0. \qquad (2.10)$$
We also derive from the boundary condition (2.7) that
$$X_0 = X_{n+1} = 0. \qquad (2.11)$$
We first consider the equation (2.10). We define $T_0 = 1$ and consider the difference equation
$$T_{m+1} = (1-\mu h)\,T_m, \quad T_0 = 1, \quad m \ge 0. \qquad (2.12)$$
Some iterations of (2.12),
$$T_1 = (1-\mu h), \quad T_2 = (1-\mu h)T_1 = (1-\mu h)^2, \quad T_3 = (1-\mu h)T_2 = (1-\mu h)^3, \dots,$$
clearly indicate that the solution is
$$T_m = (1-\mu h)^m \quad \text{for } m \ge 0. \qquad (2.13)$$
This fact is easily verified by induction on $m$.

Next we turn our attention to the problem (2.9) with boundary condition (2.11). In fact this is equivalent to the eigenvalue problem $AX = \mu X$ for the matrix $A = \Delta x^{-2}\,\mathrm{tridiag}(-1,2,-1) \in \mathbb{R}^{n\times n}$. The $n$ eigenvalues are given by
$$\mu_k = \frac{4}{\Delta x^2}\sin^2\!\Big(\frac{k\pi\Delta x}{2}\Big), \quad k = 1,\dots,n, \qquad (2.14)$$
and the corresponding eigenvectors $X^k \in \mathbb{R}^n$, $k = 1,\dots,n$, have components given by
$$X_j^k = \sin(k\pi x_j), \quad j = 1,\dots,n. \qquad (2.15)$$
It can easily be verified that these are eigenpairs: using the identity $\sin\!\big(k\pi(x_j \pm \Delta x)\big) = \sin(k\pi x_j)\cos(k\pi\Delta x) \pm \cos(k\pi x_j)\sin(k\pi\Delta x)$, we get
$$-\frac{X_{j-1}^k - 2X_j^k + X_{j+1}^k}{\Delta x^2} = \frac{2}{\Delta x^2}\big(1 - \cos(k\pi\Delta x)\big)\sin(k\pi x_j) = \frac{4}{\Delta x^2}\sin^2\!\Big(\frac{k\pi\Delta x}{2}\Big)\sin(k\pi x_j) = \mu_k X_j^k.$$
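The eigenpairs (2.14)-(2.15) can also be checked numerically. The sketch below builds the matrix $A = \Delta x^{-2}\,\mathrm{tridiag}(-1,2,-1)$ and verifies that $AX^k = \mu_k X^k$ to machine precision for a few sample modes; the grid size $n = 20$ and the chosen values of $k$ are illustrative assumptions.

% Sketch: numerical check of the eigenpairs (2.14)-(2.15).
n  = 20;
dx = 1/(n+1);
x  = (1:n)'*dx;
A  = (1/dx^2)*(2*eye(n) - diag(ones(n-1,1),1) - diag(ones(n-1,1),-1));
for k = [1 3 7]                             % a few sample modes (assumed)
    Xk  = sin(k*pi*x);                      % eigenvector components (2.15)
    muk = (4/dx^2)*sin(k*pi*dx/2)^2;        % eigenvalue (2.14)
    fprintf('k = %d: residual = %.2e\n', k, norm(A*Xk - muk*Xk));
end
% Each residual should be at the level of rounding error.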
Hence, for each $k = 1,\dots,n$, we obtain particular solutions of the form
$$v_j^{m,k} = (1-\mu_k h)^m \sin(k\pi x_j), \quad j = 1,\dots,n,\ m \ge 0.$$
We have derived a family of particular solutions of (2.6)-(2.7), and we observe that any linear combination of particular solutions is also a solution of (2.6) and (2.7). Finally, we take the initial data into account: we determine coefficients $\{\gamma_k\}$ such that
$$v_j^0 = f(x_j) = \sum_{k=1}^{n}\gamma_k\sin(k\pi x_j) \quad \text{for } j = 1,\dots,n. \qquad (2.16)$$
Since the eigenvectors $X^1,\dots,X^n$ are linearly independent, these coefficients exist and are uniquely determined; an explicit formula is given in (2.28) below.

2.1.3. Exact Solution
To find a solution of the initial-boundary value problem (2.1), we assume that it has the separated form
$$u(x,t) = X(x)T(t). \qquad (2.17)$$
Using the boundary conditions, we get $X(0) = X(1) = 0$. If we insert (2.17) into the equation (2.1), we have
$$X(x)T'(t) = X''(x)T(t), \quad \text{i.e.} \quad \frac{T'(t)}{T(t)} = \frac{X''(x)}{X(x)}. \qquad (2.18)$$
Now we observe that the left-hand side is a function of $t$ only, while the right-hand side depends only on $x$. Hence both have to be equal to the same constant, say $-\lambda$ with $\lambda \in \mathbb{R}$. This yields the eigenvalue problem
$$-X''(x) = \lambda X(x), \quad X(0) = X(1) = 0. \qquad (2.19)$$
Nontrivial solutions exist only for special values of $\lambda$, the eigenvalues; the corresponding solutions $X$ are the so-called eigenfunctions. In this case the eigenvalues are
$$\lambda_k = (k\pi)^2, \quad k = 1,2,\dots, \qquad (2.20)$$
with the eigenfunctions
$$X_k(x) = \sin(k\pi x), \quad k = 1,2,\dots. \qquad (2.21)$$
Further, the solution of $T'(t) = -\lambda_k T(t)$ is
$$T_k(t) = e^{-\lambda_k t}\,T_k(0), \quad k = 1,2,\dots. \qquad (2.22)$$
Finally, we use the superposition principle to obtain a solution of the initial-boundary value problem (2.1). The unknown coefficients are determined from the initial condition, which requires
$$f(x) = \sum_{k=1}^{\infty} c_k\sin(k\pi x). \qquad (2.23)$$
The solution of the continuous problem is then given by
$$u(x,t) = \sum_{k=1}^{\infty} c_k\,e^{-(k\pi)^2 t}\sin(k\pi x), \qquad (2.24)$$
where the Fourier coefficients are
$$c_k = 2\int_0^1 f(x)\sin(k\pi x)\,dx. \qquad (2.25)$$
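As a concrete illustration of (2.23)-(2.25), take the initial condition $f(x) = \sin(2\pi x)$; this is an assumed choice, the same one used in the MATLAB example at the end of the paper. Comparing with (2.23) gives $c_2 = 1$ and $c_k = 0$ for $k \ne 2$, so the series (2.24) reduces to a single term:
$$u(x,t) = e^{-(2\pi)^2 t}\sin(2\pi x) = e^{-4\pi^2 t}\sin(2\pi x).$$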
2.2. Comparison of Analytical and Discretized Solutions
2.2.1. We want to compare the analytical solution (2.24) with the discretized solution given by
$$v_j^m = \sum_{k=1}^{n}\gamma_k\,(1-\mu_k h)^m\sin(k\pi x_j), \qquad (2.26)$$
where
$$\mu_k = \frac{4}{\Delta x^2}\sin^2\!\Big(\frac{k\pi\Delta x}{2}\Big) \qquad (2.27)$$
and
$$\gamma_k = 2\Delta x\sum_{j=1}^{n} f(x_j)\sin(k\pi x_j), \quad k = 1,\dots,n. \qquad (2.28)$$
In order to compare the analytical and discretized solutions at a grid point $(x_j,t_m)$, we define
$$u_j^m = u(x_j,t_m) = \sum_{k=1}^{\infty} c_k\,e^{-(k\pi)^2 t_m}\sin(k\pi x_j). \qquad (2.29)$$
Our aim is to show that $u_j^m \approx v_j^m$ under appropriate conditions on the mesh parameters $\Delta x$ and $h$. To avoid technicalities, we consider a fixed grid point $(x_j,t_m)$ with $t_m = t > 0$ independent of the mesh parameters. Furthermore, we assume that the initial function $f$ is smooth and satisfies the boundary conditions, i.e. $f(0) = f(1) = 0$. Finally, we assume that the mesh parameters $\Delta x$ and $h$ are sufficiently small.

Here we first want to show that the tail of the series (2.29) is negligible. Since $f$ is smooth, it is bounded, and hence the Fourier coefficients are bounded, $|c_k| \le C$ for all $k$. Obviously $|\sin(k\pi x_j)| \le 1$, and for $k \ge n+1$ we have $(k\pi)^2 \ge \big((n+1)\pi\big)^2 + (k-n-1)\pi^2$, so
$$\Big|\sum_{k=n+1}^{\infty} c_k\,e^{-(k\pi)^2 t}\sin(k\pi x_j)\Big| \le C\,e^{-((n+1)\pi)^2 t}\big(1 + e^{-\pi^2 t} + e^{-2\pi^2 t} + \cdots\big) = \frac{C\,e^{-((n+1)\pi)^2 t}}{1 - e^{-\pi^2 t}} \approx 0 \qquad (2.30)$$
for large values of $n$. Since we have verified (2.30), it follows that
$$u_j^m \approx \bar u_j^m := \sum_{k=1}^{n} c_k\,e^{-(k\pi)^2 t_m}\sin(k\pi x_j). \qquad (2.31)$$
Now we want to compare the finite sums (2.26) and (2.31). Motivated by the derivation of the two solutions, we try to compare them term by term. Thus we keep $k$ fixed, and we want to compare
$$c_k\,e^{-(k\pi)^2 t_m}\sin(k\pi x_j) \quad \text{and} \quad \gamma_k\,(1-\mu_k h)^m\sin(k\pi x_j).$$
Since the sine factor is identical, it remains to compare the Fourier coefficients $c_k$ and $\gamma_k$, and the time-dependent terms $e^{-(k\pi)^2 t_m}$ and $(1-\mu_k h)^m$.

2.2.2. Comparison of the Fourier Coefficients
We start by considering the Fourier coefficients, and note that $\gamma_k$ is a good approximation of $c_k$: since $f(0) = f(1) = 0$, the sum
$$\gamma_k = 2\Delta x\sum_{j=1}^{n} f(x_j)\sin(k\pi x_j)$$
is exactly the trapezoidal-rule approximation of the integral
$$c_k = 2\int_0^1 f(x)\sin(k\pi x)\,dx.$$
In fact, we have
$$c_k - \gamma_k = O(\Delta x^2) \qquad (2.32)$$
for $f$ sufficiently smooth.
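The estimate (2.32) is easy to test numerically. The sketch below compares $\gamma_k$ from (2.28) with $c_k$ from (2.25) for the assumed sample initial condition $f(x) = x(1-x)$ (any smooth $f$ with $f(0) = f(1) = 0$ would do); here $c_k$ is evaluated with MATLAB's integral function rather than by hand.

% Sketch: compare the discrete coefficients gamma_k of (2.28) with the
% Fourier coefficients c_k of (2.25) for the assumed choice f(x) = x.*(1-x).
f = @(x) x.*(1-x);
k = 3;                                               % a sample mode (illustrative)
for n = [10 20 40]
    dx = 1/(n+1);
    x  = (1:n)'*dx;
    gamma_k = 2*dx*sum(f(x).*sin(k*pi*x));           % (2.28)
    c_k = 2*integral(@(s) f(s).*sin(k*pi*s), 0, 1);  % (2.25), numerical quadrature
    fprintf('n = %3d   c_k - gamma_k = %.3e\n', n, c_k - gamma_k);
end
% The differences should decrease at least as fast as O(dx^2), consistent with (2.32).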
2.2.3. Comparison of the Time-Dependent Terms $e^{-(k\pi)^2 t_m}$ and $(1-\mu_k h)^m$
We will compare the term $(1-\mu_k h)^m$ with the term $e^{-(k\pi)^2 t_m}$. We simplify the problem a bit by choosing a fixed time, say $t_m = 1$, so that $m = 1/h$, and we compare the terms
$$(1-\mu_k h)^{1/h} \quad \text{and} \quad e^{-(k\pi)^2}. \qquad (2.33)$$
Since both terms are very small for large values of $k$, it is sufficient to compare them for moderate values of $k$. In order to compare them for small values of $\Delta x$, we start by recalling that
$$\sin(y) = y - \frac{y^3}{3!} + \frac{y^5}{5!} - \cdots = y + O(y^3).$$
Thus we get
$$\mu_k = \frac{4}{\Delta x^2}\sin^2\!\Big(\frac{k\pi\Delta x}{2}\Big) = \frac{4}{\Delta x^2}\Big(\frac{k\pi\Delta x}{2} + O(\Delta x^3)\Big)^2 = (k\pi)^2\big(1 + O(\Delta x^2)\big).$$
By using this fact, we derive
$$(1-\mu_k h)^{1/h} = \Big(1 - h(k\pi)^2\big(1+O(\Delta x^2)\big)\Big)^{1/h} = e^{\frac{1}{h}\ln\big(1 - h(k\pi)^2(1+O(\Delta x^2))\big)} = e^{-(k\pi)^2(1+O(\Delta x^2)) + O(h)} \approx e^{-(k\pi)^2} \qquad (2.34)$$
for $\Delta x$ and $h$ sufficiently small. This shows that the time-dependent term $e^{-(k\pi)^2 t_m}$ is also well approximated by its discrete counterpart $(1-\mu_k h)^m$.

2.3. Consistency
Lemma 2.3.1. The finite difference scheme (2.4) is consistent of order (2,1).
Proof: The local truncation error of the finite difference scheme (2.4) is given by
$$\tau_j^m = \frac{u(x_j,t_m+h) - u(x_j,t_m)}{h} - \frac{u(x_{j-1},t_m) - 2u(x_j,t_m) + u(x_{j+1},t_m)}{\Delta x^2},$$
where $u$ is the exact solution of (2.1). Expanding in Taylor series about $(x_j,t_m)$, we obtain
$$\frac{u(x_j,t_m+h) - u(x_j,t_m)}{h} = u_t(x_j,t_m) + \frac{h}{2}u_{tt}(x_j,t_m) + \frac{h^2}{6}u_{ttt}(x_j,t_m) + \cdots$$
and
$$\frac{u(x_{j-1},t_m) - 2u(x_j,t_m) + u(x_{j+1},t_m)}{\Delta x^2} = u_{xx}(x_j,t_m) + \frac{\Delta x^2}{12}u_{xxxx}(x_j,t_m) + \cdots.$$
Hence
$$\tau_j^m = \big(u_t - u_{xx}\big)(x_j,t_m) + \frac{h}{2}u_{tt}(x_j,t_m) - \frac{\Delta x^2}{12}u_{xxxx}(x_j,t_m) + \cdots = \frac{h}{2}u_{tt}(x_j,t_m) - \frac{\Delta x^2}{12}u_{xxxx}(x_j,t_m) + \cdots,$$
since $u_t = u_{xx}$. Therefore $\tau_j^m = O(\Delta x^2 + h)$, i.e., the finite difference scheme (2.4) is consistent of order (2,1): second order in space and first order in time.

Example 2.3.1
Solve the initial-boundary value problem
$$u_t = u_{xx} \quad \text{for } x \in (0,1),\ t > 0,$$
$$u(0,t) = u(1,t) = 0 \quad \text{for } t \ge 0,$$
$$u(x,0) = f(x) \quad \text{for } x \in (0,1),$$
with $f(x) = \sin(2\pi x)$. The exact solution is given by
$$u(x,t) = e^{-4\pi^2 t}\sin(2\pi x).$$
The approximate solution and the exact solution are illustrated in Figure 2.1.
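Lemma 2.3.1 can also be checked numerically using the exact solution of Example 2.3.1. The following sketch evaluates the truncation error $\tau$ at one sample point, coupling the time step to the grid spacing by $h = \Delta x^2$ so that both contributions shrink together; the sample point and this coupling are assumptions made only for the illustration.

% Sketch: numerical check of the truncation error estimate in Lemma 2.3.1,
% using the exact solution u(x,t) = exp(-4*pi^2*t)*sin(2*pi*x) of Example 2.3.1.
u  = @(x,t) exp(-4*pi^2*t).*sin(2*pi*x);
x0 = 0.3; t0 = 0.05;                  % a sample interior point (assumed)
for dx = [0.02 0.01 0.005]
    h   = dx^2;                       % couple the step sizes (assumed)
    tau = (u(x0,t0+h) - u(x0,t0))/h ...
        - (u(x0-dx,t0) - 2*u(x0,t0) + u(x0+dx,t0))/dx^2;
    fprintf('dx = %6.3f   |tau| = %.3e\n', dx, abs(tau));
end
% Halving dx (with h = dx^2) should reduce |tau| by roughly a factor of 4,
% consistent with tau = O(dx^2 + h).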
Figure 2.1: Comparison of Exact and Approximate Solutions

MATLAB code for Figure 2.1:

% Discretized solution (2.26) after one time step, compared with the
% exact solution of Example 2.3.1, u(x,t) = exp(-4*pi^2*t)*sin(2*pi*x).
% delta x = h, delta t = p, f(x) = sin(2*pi*x)
N = 100;               % number of interior grid points
h = 1/(N+1);           % grid spacing, delta x
p = 1/(N+1);           % time step, delta t
m = 1;                 % number of time steps taken
x = (1:N)'*h;          % interior grid points x_j = j*h
g = zeros(N,1);        % discrete Fourier coefficients gamma_k, see (2.28)
for k = 1:N
    s = 0;
    for j = 1:N
        s = s + 2*h*sin(2*pi*x(j))*sin(k*pi*x(j));
    end
    g(k) = s;
end
% v_j^m = sum_k gamma_k*(1 - p*mu_k)^m*sin(k*pi*x_j), with mu_k as in (2.27)
v = zeros(N,1);
for k = 1:N
    mu = (4/h^2)*(sin(k*pi*h/2))^2;
    v  = v + g(k)*(1 - p*mu)^m*sin(k*pi*x);
end
% Approximate solution
plot(x, v, '--');
hold on
% Exact solution at t = m*p
plot(x, exp(-4*pi^2*m*p)*sin(2*pi*x));
legend('approximate solution', 'exact solution');

3. CONCLUSION
The aim of this paper is to describe the errors between the analytical solution and the discretized solution of a differential equation. The discretized problem for the heat equation was constructed, its local truncation and global errors were discussed, and the analytical and discretized solutions were compared and plotted using MATLAB.

4. REFERENCES
[1] R. J. LeVeque, Finite Difference Methods for Differential Equations, University of Washington, 2006.
[2] J. W. Thomas, Numerical Partial Differential Equations: Finite Difference Methods, Springer-Verlag, New York, 1995.