Numerical Methods. Part: Cholesky and $LDL^T$ Decomposition. http://numericalmethods.eng.usf.edu
For more details on this topic • Go to http://numericalmethods.eng.usf.edu • Click on Keyword
You are free • to Share – to copy, distribute, display and perform the work • to Remix – to make derivative works
Under the following conditions • Attribution — You must attribute the work in the manner specified by the author or licensor (but not in any way that suggests that they endorse you or your use of the work). • Noncommercial — You may not use this work for commercial purposes. • Share Alike — If you alter, transform, or build upon this work, you may distribute the resulting work only under the same or similar license to this one.
Chapter 04.09: Cholesky and $LDL^T$ Decomposition. Lecture #1. Major: All Engineering Majors. Authors: Duc Nguyen. Numerical Methods for STEM undergraduates. 10/22/2014.
Introduction

Consider a system of linear equations:
$$[A]\{x\} = \{b\} \qquad (1)$$
where $[A]$ = known coefficient matrix, with dimension $n \times n$; $\{b\}$ = known right-hand-side (RHS) $n \times 1$ vector; $\{x\}$ = unknown $n \times 1$ vector.
Symmetrical Positive Definite (SPD) SLE

A matrix $[A]$ can be considered SPD if it is symmetric and either of the following conditions is satisfied:
(a) each and every determinant of the leading principal sub-matrices of $[A]$ is positive, or
(b) $\{y\}^T [A] \{y\} > 0$ for any given nonzero vector $\{y\}$.
As a quick example, let us run a test to see whether the given matrix $[A]$ is SPD.
Based on criterion (a): the given matrix is symmetrical, because $a_{ij} = a_{ji}$ for all $i$ and $j$. Furthermore, the determinants of its leading principal sub-matrices are all positive.
Hence $[A]$ is SPD.
Based on criterion (b): for any given nonzero vector $\{y\}$, one computes the quadratic form $\{y\}^T [A] \{y\}$ and finds it to be positive;
hence matrix $[A]$ is SPD.
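The two SPD criteria above can be sketched in Python. This is a minimal illustration, assuming NumPy; the sample matrix and the random spot-check of criterion (b) are my additions, not the slides' worked example:

```python
import numpy as np

# Illustrative symmetric matrix (not the matrix from the slides).
A = np.array([[ 4.0, -2.0,  0.0],
              [-2.0,  5.0, -1.0],
              [ 0.0, -1.0,  3.0]])

# Criterion (a): every leading principal minor must be positive.
minors = [np.linalg.det(A[:k, :k]) for k in range(1, A.shape[0] + 1)]
print(all(m > 0 for m in minors))       # True -> passes criterion (a)

# Criterion (b): y^T A y > 0 for nonzero y (spot-checked on random vectors).
rng = np.random.default_rng(0)
ys = rng.standard_normal((100, 3))
print(all(y @ A @ y > 0 for y in ys))   # True -> passes the spot checks
```

Note that criterion (a) is a complete test (Sylvester's criterion), while sampling random vectors for criterion (b) is only a spot check, not a proof.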
Step 1: Matrix Factorization phase

The SPD matrix $[A]$ can be factored as
$$[A] = [U]^T [U] \qquad (2)$$
where $[U]$ is an upper-triangular matrix. For a $3 \times 3$ system,
$$\begin{bmatrix} a_{11} & a_{12} & a_{13}\\ a_{12} & a_{22} & a_{23}\\ a_{13} & a_{23} & a_{33} \end{bmatrix} = \begin{bmatrix} u_{11} & 0 & 0\\ u_{12} & u_{22} & 0\\ u_{13} & u_{23} & u_{33} \end{bmatrix} \begin{bmatrix} u_{11} & u_{12} & u_{13}\\ 0 & u_{22} & u_{23}\\ 0 & 0 & u_{33} \end{bmatrix} \qquad (3)$$
Multiplying the two matrices on the right-hand side (RHS) of Equation (3), one gets the following 6 equations:
$$u_{11} = \sqrt{a_{11}}, \quad u_{12} = \frac{a_{12}}{u_{11}}, \quad u_{13} = \frac{a_{13}}{u_{11}} \qquad (4)$$
$$u_{22} = \sqrt{a_{22} - u_{12}^2}, \quad u_{23} = \frac{a_{23} - u_{12}u_{13}}{u_{22}}, \quad u_{33} = \sqrt{a_{33} - u_{13}^2 - u_{23}^2} \qquad (5)$$
In general, for an $n \times n$ matrix:
$$u_{ii} = \sqrt{a_{ii} - \sum_{k=1}^{i-1} u_{ki}^2} \qquad (6)$$
$$u_{ij} = \frac{a_{ij} - \sum_{k=1}^{i-1} u_{ki}\,u_{kj}}{u_{ii}} \qquad (7)$$
Step 1.1: Compute the numerator of Equation (7), $\text{sum} = a_{ij} - \sum_{k=1}^{i-1} u_{ki}\,u_{kj}$.
Step 1.2: If $u_{ij}$ is an off-diagonal term (say $j > i$), then $u_{ij} = \text{sum}/u_{ii}$ (see Equation (7)). Else, if $u_{ij}$ is a diagonal term (that is, $j = i$), then $u_{ii} = \sqrt{\text{sum}}$ (see Equation (6)).
As a quick example, one computes a typical off-diagonal term from Equation (7) (see Equation (8)). Thus, for computing $u_{ij}$, one only needs to use the (already factorized) data in columns $i$ and $j$ of $[U]$, respectively.
Figure 1: Cholesky factorization for the term $u_{ij}$.
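The factorization rule of Steps 1.1 and 1.2 can be sketched as a short dense implementation of Equations (6) and (7). This is a minimal sketch assuming NumPy; the function name and sample matrix are my additions:

```python
import numpy as np

def cholesky_upper(A):
    """Factor an SPD matrix as A = U^T @ U, per Equations (6) and (7)."""
    n = A.shape[0]
    U = np.zeros((n, n))
    for i in range(n):
        # Diagonal term, Equation (6)
        U[i, i] = np.sqrt(A[i, i] - np.sum(U[:i, i] ** 2))
        for j in range(i + 1, n):
            # Off-diagonal term, Equation (7): only columns i and j of U needed
            U[i, j] = (A[i, j] - U[:i, i] @ U[:i, j]) / U[i, i]
    return U

A = np.array([[ 4.0, -2.0,  0.0],
              [-2.0,  5.0, -1.0],
              [ 0.0, -1.0,  3.0]])   # sample SPD matrix (not the slides' data)
U = cholesky_upper(A)
print(np.allclose(U.T @ U, A))       # True: the factorization reproduces A
```

The inner loop reads only columns $i$ and $j$ of the already-computed rows of $U$, which is exactly the data-access pattern Figure 1 illustrates.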
Step 2: Forward Solution phase

Substituting Equation (2) into Equation (1), one gets:
$$[U]^T [U] \{x\} = \{b\} \qquad (9)$$
Let us define:
$$[U]\{x\} = \{y\} \qquad (10)$$
Then, Equation (9) becomes:
$$[U]^T \{y\} = \{b\} \qquad (11)$$
or, written out for a $3 \times 3$ system,
$$\begin{bmatrix} u_{11} & 0 & 0\\ u_{12} & u_{22} & 0\\ u_{13} & u_{23} & u_{33} \end{bmatrix} \begin{Bmatrix} y_1\\ y_2\\ y_3 \end{Bmatrix} = \begin{Bmatrix} b_1\\ b_2\\ b_3 \end{Bmatrix} \qquad (12)$$
From the 1st row of Equation (12), one gets
$$y_1 = \frac{b_1}{u_{11}} \qquad (13)$$
From the 2nd row of Equation (12), one gets
$$y_2 = \frac{b_2 - u_{12}\,y_1}{u_{22}} \qquad (14)$$
Similarly,
$$y_3 = \frac{b_3 - u_{13}\,y_1 - u_{23}\,y_2}{u_{33}} \qquad (15)$$
In general, from the $i^{th}$ row of Equation (12), one has
$$y_i = \frac{b_i - \sum_{j=1}^{i-1} u_{ji}\,y_j}{u_{ii}} \qquad (16)$$
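Equation (16) translates directly into a forward-substitution loop. A minimal sketch assuming NumPy; the function name and the $2 \times 2$ sample data are my additions:

```python
import numpy as np

def forward_solve(U, b):
    """Solve [U]^T {y} = {b} by forward substitution, per Equation (16)."""
    n = len(b)
    y = np.zeros(n)
    for i in range(n):
        # y_i = (b_i - sum_{j<i} u_ji * y_j) / u_ii
        y[i] = (b[i] - U[:i, i] @ y[:i]) / U[i, i]
    return y

U = np.array([[2.0, 1.0],
              [0.0, 3.0]])          # sample upper-triangular factor
b = np.array([4.0, 5.0])
y = forward_solve(U, b)
print(np.allclose(U.T @ y, b))      # True
```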
Step 3: Backward Solution phase

As a quick example, one has (see Equation (10)):
$$\begin{bmatrix} u_{11} & u_{12} & u_{13}\\ 0 & u_{22} & u_{23}\\ 0 & 0 & u_{33} \end{bmatrix} \begin{Bmatrix} x_1\\ x_2\\ x_3 \end{Bmatrix} = \begin{Bmatrix} y_1\\ y_2\\ y_3 \end{Bmatrix} \qquad (17)$$
From the last (or $3^{rd}$) row of Equation (17), one has $u_{33}x_3 = y_3$; hence
$$x_3 = \frac{y_3}{u_{33}} \qquad (18)$$
Similarly:
$$x_2 = \frac{y_2 - u_{23}\,x_3}{u_{22}} \qquad (19)$$
and
$$x_1 = \frac{y_1 - u_{12}\,x_2 - u_{13}\,x_3}{u_{11}} \qquad (20)$$
In general, one has:
$$x_i = \frac{y_i - \sum_{j=i+1}^{n} u_{ij}\,x_j}{u_{ii}} \qquad (21)$$
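Equation (21) likewise maps to a back-substitution loop running from the last row upward. A minimal sketch assuming NumPy; the function name and sample data are my additions:

```python
import numpy as np

def backward_solve(U, y):
    """Solve [U]{x} = {y} by back substitution, per Equation (21)."""
    n = len(y)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        # x_i = (y_i - sum_{j>i} u_ij * x_j) / u_ii
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

U = np.array([[2.0, 1.0],
              [0.0, 3.0]])          # sample upper-triangular factor
y = np.array([5.0, 3.0])
x = backward_solve(U, y)
print(np.allclose(U @ x, y))        # True
```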
Alternatively, the SPD matrix $[A]$ can be factored as
$$[A] = [L][D][L]^T \qquad (22)$$
where $[L]$ is a lower-triangular matrix with unit diagonal and $[D]$ is a diagonal matrix. For example,
$$\begin{bmatrix} a_{11} & a_{12} & a_{13}\\ a_{12} & a_{22} & a_{23}\\ a_{13} & a_{23} & a_{33} \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0\\ l_{21} & 1 & 0\\ l_{31} & l_{32} & 1 \end{bmatrix} \begin{bmatrix} d_{11} & 0 & 0\\ 0 & d_{22} & 0\\ 0 & 0 & d_{33} \end{bmatrix} \begin{bmatrix} 1 & l_{21} & l_{31}\\ 0 & 1 & l_{32}\\ 0 & 0 & 1 \end{bmatrix} \qquad (23)$$
Multiplying the three matrices on the RHS of Equation (23), one obtains the following formulas for the "diagonal" $[D]$ and "lower-triangular" $[L]$ matrices:
$$d_{jj} = a_{jj} - \sum_{k=1}^{j-1} l_{jk}^2\,d_{kk} \qquad (24)$$
$$l_{ij} = \frac{a_{ij} - \sum_{k=1}^{j-1} l_{ik}\,l_{jk}\,d_{kk}}{d_{jj}}, \quad \text{for } i > j \qquad (25)$$
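Equations (24) and (25) can be sketched as a column-by-column $LDL^T$ factorization. A minimal sketch assuming NumPy; the function name and sample matrix are my additions:

```python
import numpy as np

def ldlt(A):
    """Factor a symmetric matrix as A = L @ diag(d) @ L.T,
    per Equations (24) and (25)."""
    n = A.shape[0]
    L = np.eye(n)
    d = np.zeros(n)
    for j in range(n):
        # Equation (24): diagonal entry d_jj
        d[j] = A[j, j] - np.sum(L[j, :j] ** 2 * d[:j])
        for i in range(j + 1, n):
            # Equation (25): lower-triangular entry l_ij (i > j)
            L[i, j] = (A[i, j] - np.sum(L[i, :j] * L[j, :j] * d[:j])) / d[j]
    return L, d

A = np.array([[ 4.0, -2.0,  0.0],
              [-2.0,  5.0, -1.0],
              [ 0.0, -1.0,  3.0]])   # sample SPD matrix (not the slides' data)
L, d = ldlt(A)
print(np.allclose(L @ np.diag(d) @ L.T, A))   # True
```

Unlike Cholesky, this variant requires no square roots, which is one practical reason the $LDL^T$ form is often preferred.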
Step 1: Factorization phase
$$[A] = [L][D][L]^T \qquad (22, \text{repeated})$$
Step 2: Forward solution and diagonal scaling phase
Substituting Equation (22) into Equation (1), one gets:
$$[L][D][L]^T\{x\} = \{b\} \qquad (26)$$
Let us define:
$$[L]^T\{x\} = \{y\} \qquad (27)$$
$$\begin{bmatrix} 1 & l_{21} & l_{31}\\ 0 & 1 & l_{32}\\ 0 & 0 & 1 \end{bmatrix} \begin{Bmatrix} x_1\\ x_2\\ x_3 \end{Bmatrix} = \begin{Bmatrix} y_1\\ y_2\\ y_3 \end{Bmatrix} \qquad (28)$$
Also, define:
$$[D]\{y\} = \{z\} \qquad (29)$$
$$\begin{bmatrix} d_{11} & 0 & 0\\ 0 & d_{22} & 0\\ 0 & 0 & d_{33} \end{bmatrix} \begin{Bmatrix} y_1\\ y_2\\ y_3 \end{Bmatrix} = \begin{Bmatrix} z_1\\ z_2\\ z_3 \end{Bmatrix} \qquad (30)$$
Then Equation (26) becomes:
$$[L]\{z\} = \{b\} \qquad (31)$$
$$\begin{bmatrix} 1 & 0 & 0\\ l_{21} & 1 & 0\\ l_{31} & l_{32} & 1 \end{bmatrix} \begin{Bmatrix} z_1\\ z_2\\ z_3 \end{Bmatrix} = \begin{Bmatrix} b_1\\ b_2\\ b_3 \end{Bmatrix} \qquad (32)$$
Step 3: Backward solution phase

Solve Equation (27), $[L]^T\{x\} = \{y\}$, for the unknown vector $\{x\}$ by back substitution.
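The three phases above (forward solution, diagonal scaling, backward solution) can be sketched together. A minimal sketch assuming NumPy; the function name and the small sample factors are my additions:

```python
import numpy as np

def ldlt_solve(L, d, b):
    """Solve [L][D][L]^T {x} = {b} in three phases:
    forward (Eq. 31), diagonal scaling (Eq. 29), backward (Eq. 27)."""
    n = len(b)
    z = np.zeros(n)
    for i in range(n):                       # forward: [L]{z} = {b}
        z[i] = b[i] - L[i, :i] @ z[:i]
    y = z / d                                # scaling: [D]{y} = {z}
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):           # backward: [L]^T {x} = {y}
        x[i] = y[i] - L[i + 1:, i] @ x[i + 1:]
    return x

L = np.array([[1.0, 0.0],
              [0.5, 1.0]])                   # sample unit lower-triangular factor
d = np.array([4.0, 2.0])                     # sample diagonal of [D]
A = L @ np.diag(d) @ L.T                     # the matrix these factors represent
b = np.array([8.0, 7.0])
x = ldlt_solve(L, d, b)
print(np.allclose(A @ x, b))                 # True
```

Note that no division appears in the forward and backward loops, because the diagonal of $[L]$ is 1; all divisions are concentrated in the scaling phase.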
Numerical Example 1 (Cholesky algorithm)

Solve the following SLE system for the unknown vector $\{x\}$, where the matrix $[A]$ and the vector $\{b\}$ are given.
Solution: The factorized, upper-triangular matrix $[U]$ can be computed either by referring to Equations (6)-(7), or by looking at Figure 1, as follows:
Thus, the factorized matrix $[U]$ is obtained.
The forward solution phase, shown in Equation (11), becomes:
The backward solution phase, shown in Equation (10), becomes:
Hence the solution vector $\{x\}$ is obtained.
Numerical Example 2 ($LDL^T$ algorithm)

Using the same data given in Numerical Example 1, find the unknown vector $\{x\}$ by the $LDL^T$ algorithm.

Solution: The factorized matrices $[D]$ and $[L]$ can be computed from Equation (24) and Equation (25), respectively.
Hence $[L]$ and $[D]$ are obtained.
The forward solution phase, shown in Equation (31), becomes (Equation (32), repeated):
Hence $\{z\}$ is obtained.
The diagonal scaling phase, shown in Equation (29), becomes:
Hence $\{y\}$ is obtained.
The backward solution phase can be found by referring to Equation (27) (Equation (28), repeated):
Hence the unknown vector $\{x\}$ is obtained, matching the result of Numerical Example 1.
Re-ordering Algorithms For Minimizing Fill-in Terms [1,2]

During the factorization phase (of the Cholesky or $LDL^T$ algorithms), many "zero" terms in the original/given matrix $[A]$ will become "non-zero" terms in the factored matrix $[U]$ (or $[L]$). These new non-zero terms are often called "fill-in" terms. It is therefore highly desirable to minimize the number of fill-in terms, so that both the computational time/effort and the computer memory requirements can be substantially reduced.
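The effect of re-ordering on fill-in can be demonstrated on a small "arrow" matrix. This is an illustrative sketch assuming NumPy; the matrix, the fill-in counter, and the reverse ordering are my additions, not an algorithm from the slides:

```python
import numpy as np

def count_fill_in(A):
    """Count zero entries of A that become nonzero in the Cholesky factor."""
    U = np.linalg.cholesky(A).T          # upper factor, A = U^T U
    n = A.shape[0]
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if A[i, j] == 0 and abs(U[i, j]) > 1e-12)

n = 6
A = np.eye(n) * n                        # "arrow" matrix: dense first row/column
A[0, :] = 1.0
A[:, 0] = 1.0
A[0, 0] = n

perm = np.arange(n)[::-1]                # reverse the equation ordering
B = A[np.ix_(perm, perm)]                # same system, re-ordered

print(count_fill_in(A))                  # 10: eliminating the dense row first fills in
print(count_fill_in(B))                  # 0: re-ordering eliminates all fill-in
```

Factoring the arrow "head-first" couples every remaining equation and fills the entire factor, while simply reversing the ordering preserves all the original zeros; production codes pick such orderings automatically (e.g., reverse Cuthill-McKee or minimum-degree algorithms).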
For example, the following matrix $[A]$ and vector $\{b\}$ are given: