Talk on RM: Application of Numerical Computation in General Research Work
By Dr Yeak Su Hoe
Bilik Seminar 2, Level 4, T05
12 PM, 28 March 2016 (Monday)
Boundary conditions - general remarks

ODE/PDE + boundary condition(s) ⇒ complete solution. (An ODE/PDE alone, without b.c., gives only a general solution, which is not always useful.)

Example 1: The ODE du/dx = 2 has the general solution u(x) = 2x + C, where C is an arbitrary constant. This describes a family of lines in the x-u plane with slope 2. A boundary condition u(0) = 1 adds the constraint that the line must pass through the point (x, u) = (0, 1) ⇒ C = 1 ⇒ unique solution u(x) = 2x + 1.

For more complicated problems, picking the right curve/surface that fits the b.c. is often more difficult than determining the general solution. The boundary condition is important: an ODE or PDE combined with the wrong type of boundary condition(s) may have no solution at all.
Boundary conditions - general remarks

Examples of "healthy" and "unhealthy" boundary conditions: first order ODE

For the ODE in Example 1, du/dx = 2:

(i) Imposing two b.c.'s at two different x, e.g., u(0) = 1 and u(1) = 2, leads to a contradiction ⇒ no solution. (Special cases such as u(0) = 1, u(1) = 3 would lead to a solution, but then the second b.c. is redundant.)

(ii) Imposing a single b.c. on u' (the first derivative) instead of u, e.g., u'(0) = 3, also leads to a contradiction ⇒ no solution, since u' = 2 everywhere.

A healthy b.c. for this ODE must be of the form u(a) = A, i.e., an "initial condition" for u given at a single point x.
Boundary conditions - general remarks

Examples of "healthy" and "unhealthy" boundary conditions: second order ODE

Example 2: The ODE d²u/dx² = 2 has the general solution u(x) = x² + Cx + D, where C and D are arbitrary constants. Consider the following types of b.c.'s:

(i) u(a) = A, u'(a) = B. For example, u(0) = 1, u'(0) = 1 ⇒ D = 1, C = 1 ⇒ unique solution u(x) = x² + x + 1.

(ii) u(a) = A, u(b) = B, a ≠ b. For example, u(0) = 1, u(1) = 0 ⇒ D = 1, C = -2 ⇒ unique solution u(x) = x² - 2x + 1.

(iii) u(a) = A, u'(b) = B, a ≠ b. For example, u(0) = 1, u'(1) = 2 ⇒ D = 1, C = 0 ⇒ unique solution u(x) = x² + 1.

(iv) u'(a) = A, u'(b) = B. For example, u'(0) = 1 and u'(1) = 2 require C = 1 and C = 0 simultaneously, a contradiction; the solution does not exist. (Special cases such as u'(0) = 1, u'(1) = 3 would avoid the contradiction, yet they do not lead to a useful solution since D remains undetermined.)

Types (i)-(iii) are the healthy b.c.'s for this ODE. (The conclusion is specific to this ODE; for a different ODE the situation may be different.)
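The claims in (i)-(iv) can be checked mechanically: since u(x) = x² + Cx + D, every b.c. is a linear equation in C and D, and a unique solution exists exactly when the resulting 2×2 system is nonsingular. A minimal Python sketch (illustrative, not from the talk; the helper names `bc_u`, `bc_du`, `solve` are my own):

```python
def bc_u(a, A):
    """b.c. u(a) = A  ->  a*C + 1*D = A - a**2"""
    return (a, 1.0, A - a**2)

def bc_du(b, B):
    """b.c. u'(b) = B ->  1*C + 0*D = B - 2*b"""
    return (1.0, 0.0, B - 2*b)

def solve(bc1, bc2):
    """Solve the 2x2 system for (C, D); return None if it is singular."""
    (a1, b1, r1), (a2, b2, r2) = bc1, bc2
    det = a1*b2 - a2*b1
    if abs(det) < 1e-12:
        return None                      # no unique C, D (type (iv))
    C = (r1*b2 - r2*b1) / det
    D = (a1*r2 - a2*r1) / det
    return C, D

print(solve(bc_u(0, 1), bc_du(0, 1)))    # type (i):  (C, D) = (1.0, 1.0)
print(solve(bc_u(0, 1), bc_u(1, 0)))     # type (ii): (C, D) = (-2.0, 1.0)
print(solve(bc_du(0, 1), bc_du(1, 2)))   # type (iv): None (singular)
```

The singular case reproduces the contradiction in (iv): both derivative conditions constrain only C, so D can never be determined.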
Boundary conditions - general remarks

Examples of "healthy" and "unhealthy" boundary conditions: second order ODE

The figure on the next page shows the solutions from the examples in (i)-(iii). Note that all three curves satisfy the same ODE but different b.c.'s.
Numerical Validation Methods

Types of validation

Validation using other numerical solutions. This technique compares the results to be validated with results obtained by other, previously validated numerical methods. In other words, one technique must have been validated before it can serve as a reference to validate a second method. Another way to apply this technique is to solve the same problem with more than one numerical method.

Validation using analytical solutions. This comparison can be used when the researcher knows the analytical theory behind the problem and makes a direct comparison of the simulation results with the analytical solution. The main limitation of this technique is that it applies only to very simple cases, because finding the analytical solution of a real problem is almost impossible. Suggestion: using an exact solution, create the model again with the appropriate b.c.'s. E.g., for the Laplace equation u_xx + u_yy = 0, create u_xx + u_yy = 0 or u_xx + u_yy = f(x, y).
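The suggestion above can be sketched in Python (illustrative code, not from the talk; the grid size and iteration count are arbitrary choices): take the harmonic function u(x, y) = x² - y², impose its boundary values as the b.c., solve the discrete Laplace equation u_xx + u_yy = 0 by Jacobi iteration, and compare with the exact solution. For this quadratic the five-point stencil is exact, so the error should fall to round-off level.

```python
# Validate a Laplace solver against the exact solution u(x, y) = x^2 - y^2,
# which is harmonic (u_xx + u_yy = 0); its boundary values supply the b.c.
n = 11                                   # grid points per side on [0, 1] x [0, 1]
h = 1.0 / (n - 1)
u_ex = lambda x, y: x*x - y*y

u = [[u_ex(i*h, j*h) for j in range(n)] for i in range(n)]
for i in range(1, n - 1):                # zero the interior, keep exact boundary
    for j in range(1, n - 1):
        u[i][j] = 0.0

for _ in range(2000):                    # Jacobi iteration for the 5-point stencil
    new = [row[:] for row in u]
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            new[i][j] = 0.25 * (u[i-1][j] + u[i+1][j] + u[i][j-1] + u[i][j+1])
    u = new

err = max(abs(u[i][j] - u_ex(i*h, j*h))
          for i in range(1, n - 1) for j in range(1, n - 1))
print(err)                               # near machine precision
```

A disagreement here would point to a bug in the stencil or in the b.c. handling rather than to discretization error, which is what makes the exact-solution check useful.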
Numerical Validation Methods

Types of validation

Validation using experimental results. This technique is the most popular of all, mainly because a measurement shows the consistency of the model with reality. However, one cannot forget that performing a measurement introduces a measuring instrument, which directly or indirectly affects the system being measured.

Validation using intermediate results. This technique compares intermediate results of the numerical model with experimentally or theoretically known values, even though these results are not the final objective of the comparison. The major drawback of this method is finding an intermediate result that is really closely related to the final result under study. This technique is frequently used to monitor some parameters of a numerical simulation, but it is rarely used alone or as the main validation method. E.g., in electromagnetic simulations: imagine that it is required to compare the far-field simulations and measurements produced by a source inside an airplane. Far-field measurements on a big structure can be very expensive and complicated. However, it is possible to measure some near-field values at specific points near the aircraft and to compare them with simulation results calculated at the same points.
Numerical Validation Methods

Types of validation

Validation using convergence. This type of validation is based on comparing the convergence of the numerical model with the pattern or the reference results. The comparison is done knowing that the solution found is not the best, but assuming that the model results converge.

E.g., the linear boundary value problem y'' + (1/x)y' - (1/x²)y = 3, y(1) = 2, y(2) = 3, solved for x = 1(0.2)2 using the finite difference method. Analytical solution: y(x) = x(x - 1) + 2/x. Let h = 0.2, x₀ = a = 1, x₁ = 1.2, x₂ = 1.4, x₃ = 1.6, x₄ = 1.8 and x₅ = b = 2. Find yᵢ ≈ y(xᵢ), i = 1, 2, 3, 4. At each xᵢ we get an error for h = h₀. How about repeating with h = h₀/2 and comparing the errors at the same x? If the errors shrink at the expected rate, the model is converging.
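The convergence check above can be sketched in Python (assuming a standard second-order central-difference discretization; not code from the talk, and the helper names are my own). Solving with h₀ = 0.2 and h₀/2 = 0.1 and comparing the maximum errors against the analytical solution should give an error ratio near 4, confirming O(h²) convergence:

```python
def thomas(sub, diag, sup, d):
    """Solve a tridiagonal system (Thomas algorithm); modifies diag and d."""
    n = len(d)
    for i in range(1, n):
        w = sub[i] / diag[i - 1]
        diag[i] -= w * sup[i - 1]
        d[i] -= w * d[i - 1]
    x = [0.0] * n
    x[-1] = d[-1] / diag[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (d[i] - sup[i] * x[i + 1]) / diag[i]
    return x

def solve_bvp(h):
    """Central differences for y'' + y'/x - y/x^2 = 3, y(1)=2, y(2)=3.
    Returns the maximum error against the exact y(x) = x(x-1) + 2/x."""
    n = round(1.0 / h) - 1                     # number of interior nodes
    xs = [1.0 + (i + 1) * h for i in range(n)]
    sub  = [1/h**2 - 1/(2*h*x) for x in xs]    # coefficient of y[i-1]
    diag = [-2/h**2 - 1/x**2 for x in xs]      # coefficient of y[i]
    sup  = [1/h**2 + 1/(2*h*x) for x in xs]    # coefficient of y[i+1]
    d = [3.0] * n
    d[0]  -= sub[0] * 2.0                      # b.c. y(1) = 2
    d[-1] -= sup[-1] * 3.0                     # b.c. y(2) = 3
    y = thomas(sub, diag, sup, d)
    exact = [x*x - x + 2.0/x for x in xs]
    return max(abs(a - b) for a, b in zip(y, exact))

e1, e2 = solve_bvp(0.2), solve_bvp(0.1)
print(e1, e2, e1 / e2)   # ratio near 4 indicates second-order convergence
```

In practice this is exactly the h₀ versus h₀/2 comparison the slide suggests: even without an exact solution, comparing successive refinements against each other reveals whether the results are converging.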
Numerical Validation Methods

Validation methods - statistical data

Numerous studies show that a direct point-by-point comparison is not feasible when large amounts of data are compared. Therefore, this method is not recommended for validating results, much less for assigning an absolute value of accuracy. The approach makes sense only for simple models; in numerical simulations the results are often very complex. Today there are several methods of validation. Among the most used is:

Correlation. This method is widely used for its ease of implementation and interpretation, and it is intended for quantitative variables between which there is a linear relationship. The correlation between two variables is perfect when the output value is closest to 1 or -1, and gets worse as it approaches 0. The sign indicates the direction of the association: a value of +1 indicates a perfect positive linear relationship, in which the two variables behave identically: when one increases, the other increases too. If instead the value is -1, there is a perfect negative relationship: one variable decreases as the other increases. The most popular measure is the "Pearson correlation coefficient", usually used to quantify the strength of the relationship between two variables when the relationship between them is linear.
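A minimal Pearson correlation coefficient in Python (illustrative, not from the talk):

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

print(pearson([1, 2, 3, 4], [2, 4, 6, 8]))   # ~ 1.0: perfect positive relation
print(pearson([1, 2, 3, 4], [8, 6, 4, 2]))   # ~ -1.0: perfect negative relation
print(pearson([1, 2, 3, 4], [1, 3, 2, 4]))   # ~ 0.8: weaker positive relation
```

The first two calls illustrate the "+1" and "-1" cases described above; the third shows an intermediate value for data that are only roughly linear.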
Self-Validated Numerical Methods

Approximate computations

A simple and common approach for estimating the error in a floating point computation is to repeat it using more precision and compare the results. Consider the evaluation of

f = 333.75 y⁶ + x²(11x²y² - y⁶ - 121y⁴ - 2) + 5.5 y⁸ + x/(2y), for x = 77617 and y = 33096.

Rump reports that computing f in Fortran on an IBM S/370 mainframe yields:
f = 1.172603 ... using single precision
f = 1.1726039400531 ... using double precision
f = 1.172603940053178 ... using extended precision.

Since these three values agree in the first seven places, common practice would accept the computation as correct. However, the true value is f = -0.8273960599...; not even the sign is right in the computed results!
Self-Validated Numerical Methods

Approximate computations

f = 333.75 y⁶ + x²(11x²y² - y⁶ - 121y⁴ - 2) + 5.5 y⁸ + x/(2y), for x = 77617 and y = 33096.

Similar results can be obtained with Maple:

x := 77617; y := 33096;
# evaluate in floating point
f := 333.75*y^6 + x^2*(11*x^2*y^2 - y^6 - 121*y^4 - 2) + 5.5*y^8 + x/(2*y);
                        f := 1.172603940
# evaluate in exact rational arithmetic
f := 33375/100*y^6 + x^2*(11*x^2*y^2 - y^6 - 121*y^4 - 2) + 55/10*y^8 + x/(2*y);
# show the decimal equivalent
evalf(f, 10);
                        -0.8273960599
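The same check can be reproduced in Python, whose standard fractions module provides exact rational arithmetic (an illustrative sketch, not part of the original slides):

```python
from fractions import Fraction

# naive double-precision evaluation: the huge terms cancel catastrophically
xf, yf = 77617.0, 33096.0
f_float = (333.75*yf**6 + xf**2*(11*xf**2*yf**2 - yf**6 - 121*yf**4 - 2)
           + 5.5*yf**8 + xf/(2*yf))     # wildly wrong

# exact rational evaluation (333.75 = 33375/100, 5.5 = 55/10)
x, y = Fraction(77617), Fraction(33096)
f_exact = (Fraction(33375, 100)*y**6
           + x**2*(11*x**2*y**2 - y**6 - 121*y**4 - 2)
           + Fraction(55, 10)*y**8
           + x/(2*y))
print(float(f_exact))                   # -0.8273960599...
```

As in the Maple run, only the exact arithmetic recovers the correct negative value; the floating point result is not even close.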
Self-Validated Numerical Methods

Approximate computations

In response to this problem, several models for self-validated computation (SVC), also called automatic result verification, have arisen, in which the computer itself keeps track of the accuracy of computed quantities as part of the process of computing them.

Floating point number system

A t-digit floating point number in base β has the form x = ±m × βᵉ, where m is a t-digit fraction, called the mantissa, and e is the exponent. If the first digit of the mantissa is different from zero, the number is called normalized. The number of digits in the mantissa is called the precision. If a floating point number is represented with twice the usual precision, it is called double precision. Most computers conform to the IEEE floating point standard (ANSI/IEEE standard 754-1985). For single precision, the IEEE standard recommends about 24 binary digits, and for double precision about 53 binary digits.
Floating point number system

The IEEE standard for single precision provides about 7 decimal digits of accuracy, since 2⁻²³ ≈ 1.2 × 10⁻⁷; double precision provides about 16 decimal digits of accuracy, since 2⁻⁵² ≈ 2.2 × 10⁻¹⁶. Note: computation in double precision requires more computer time and storage.

Each computer has an allowable range of exponents (L, U), which is machine dependent. During a computation, any produced number whose exponent is too large (too small), that is, outside the permissible range, is said to overflow (underflow). Overflow is a serious problem. On most computers, when an underflow occurs, the computed value is set to zero. The IEEE standard also specifies the results of operations with infinities and NaNs (NaN = "Not-a-Number"). Ambiguous situations, such as ∞ - ∞ or 0/0, result in NaN, and all binary operations with one or two NaNs result in a NaN.

E.g., let β = 10, t = 3, (L, U) = (-3, 3). For a = 0.111 × 10³ and b = 0.120 × 10³, the product c = a ⊗ b = 0.133 × 10⁵ results in an overflow: the exponent 5 is too large.
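These rules can be observed directly in any IEEE-754 environment; a quick Python sketch (illustrative, not from the talk):

```python
import math

inf = float("inf")
print(inf - inf)                       # nan: an ambiguous operation yields NaN
print(math.isnan(float("nan") + 1.0))  # True: NaN propagates through operations
print(1e308 * 10.0)                    # inf: overflow saturates to infinity
print(5e-324 / 2.0)                    # 0.0: underflow is flushed to zero
print(float("nan") == float("nan"))    # False: NaN compares unequal to everything
```

The last line is a common pitfall: because NaN is never equal to anything, checks for it must use a function such as math.isnan rather than `==`.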
Conditioning of the problem

Perturbation analysis of the linear system Ax = b, with A → A, b → b + δb, x → x + δx.

Theorem (Right Perturbation Theorem). If δb and δx are the perturbations of b and x in Ax = b, A is nonsingular, and b ≠ 0, then

    ||δx|| / ||x|| ≤ Cond(A) · ||δb|| / ||b||.

If the condition number is not too large, a small perturbation in b has very little effect on the solution. If the condition number is large, a small perturbation in b can change the solution drastically.

Definition: The number ||A⁻¹|| ||A|| is called the condition number of A and is denoted Cond(A).
Conditioning of the problem

Perturbation analysis of the linear system

An ill-conditioned linear system: change b to b* by a small relative perturbation, and the solution changes completely, i.e., the relative error in the solution is large.
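Since the matrix and vectors from this slide are not reproduced above, the effect can be illustrated with a classic 2×2 ill-conditioned system (an assumed example, not the one from the talk):

```python
def solve2(a11, a12, a21, a22, b1, b2):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = a11*a22 - a12*a21
    return ((b1*a22 - b2*a12) / det, (a11*b2 - a21*b1) / det)

# A = [[1, 1], [1, 1.0001]]; Cond(A) is about 4e4 in the infinity norm
x  = solve2(1.0, 1.0, 1.0, 1.0001, 2.0, 2.0001)   # solution near (1, 1)
xs = solve2(1.0, 1.0, 1.0, 1.0001, 2.0, 2.0002)   # solution near (0, 2)
print(x, xs)   # a ~5e-5 relative change in b flips the solution completely
```

The relative perturbation of b is of order 10⁻⁵, yet the relative error in the solution is of order 1, consistent with the bound ||δx||/||x|| ≤ Cond(A)·||δb||/||b||.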
Conditioning of the problem

Perturbation analysis of the linear system Ax = b, with A → A + δA, b → b, x → x + δx.

Theorem (Left Perturbation Theorem). Assume A is nonsingular and b ≠ 0, let δA and δx be the perturbations of A and x in Ax = b, and assume that ||δA|| < 1 / ||A⁻¹||. Then

    ||δx|| / ||x|| ≤ (Cond(A) ||δA|| / ||A||) / (1 - Cond(A) ||δA|| / ||A||).

E.g., change a₂₃ = 2.002 to 2.0021, keeping b fixed, and solve the system (A + δA)x* = b. The relative error in the solution is quite large, consistent with Cond(A) = O(10⁵).
Conditioning of the problem

Perturbation analysis of the linear system Ax = b, with A → A + δA, b → b + δb, x → x + δx.

Theorem (General Perturbation Theorem). Assume A is nonsingular, b ≠ 0, and ||δA|| < 1 / ||A⁻¹||. Then

    ||δx|| / ||x|| ≤ (Cond(A) / (1 - Cond(A) ||δA|| / ||A||)) · (||δA|| / ||A|| + ||δb|| / ||b||).
Numerical methods are applicable to all problems!

Ordinary differential equations (ODEs): finite difference and Simpson methods for the fundamental solution - Dirac delta function.

E.g., the Laplace problem u''(x) = δ(x), whose fundamental solution is u(x) = ½|x|. Take the grid x₀, x₁, x₂, x₃, x₄ on [-0.2, 0.2]. B.C.: u(x₀) = u₀ = ½|-0.2| = 0.1 and u₄ = 0.1. Exact: u(x) = ½|x|. Write the difference equation at each interior node, e.g., at i = 1 and at i = 3. If uᵢ involves a ghost node, use the fundamental solution at x₀ to calculate the ghost node value, or (as a last choice) use uᵢ = H(x₀), the Heaviside step, to calculate it.
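A sketch of the finite difference computation in Python (not code from the talk; it assumes the delta function is lumped as 1/h at the node x₂ = 0, which is one common discretization). The discrete solution reproduces the exact fundamental solution ½|x| at the nodes:

```python
def thomas(sub, diag, sup, d):
    """Solve a tridiagonal system (Thomas algorithm); modifies diag and d."""
    n = len(d)
    for i in range(1, n):
        w = sub[i] / diag[i - 1]
        diag[i] -= w * sup[i - 1]
        d[i] -= w * d[i - 1]
    x = [0.0] * n
    x[-1] = d[-1] / diag[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (d[i] - sup[i] * x[i + 1]) / diag[i]
    return x

h = 0.1
u0 = u4 = 0.1                       # b.c. from the exact u(x) = 0.5*|x| at x = -0.2, 0.2
delta = [0.0, 1.0 / h, 0.0]         # delta function lumped at the node x2 = 0
# difference equations: u[i-1] - 2*u[i] + u[i+1] = h^2 * delta_i, i = 1, 2, 3,
# with the known boundary values moved to the right-hand side
d = [h*h*delta[0] - u0, h*h*delta[1], h*h*delta[2] - u4]
u = thomas([0.0, 1.0, 1.0], [-2.0, -2.0, -2.0], [1.0, 1.0, 0.0], d)
print(u)   # approximately [0.05, 0.0, 0.05], matching 0.5*|x| at the nodes
```

Because the exact solution is piecewise linear, the second difference is exact away from the kink, and the lumped 1/h source captures the jump in u' at x = 0, so the nodal values come out exact up to round-off.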