Vector Norms and the Related Matrix Norms
Properties of a Vector Norm $\|x\|$:
• $\|x\| > 0$ for $x \neq 0$, and $\|0\| = 0$
• $\|c\,x\| = |c|\,\|x\|$ for every scalar $c$
• $\|x + y\| \le \|x\| + \|y\|$ (triangle inequality)
• Euclidean Vector Norm: $\|x\| = (x, x)^{1/2} = \bigl(\sum_{i=1}^{n} |x_i|^2\bigr)^{1/2}$
• Riemannian metric: $\|x\|_M = (x^H M x)^{1/2}$, where $M$ is a Hermitian positive-definite matrix (the Euclidean norm is the special case $M = I$)
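As a quick added illustration (not from the original slides), the Python sketch below checks the three norm properties numerically for the Euclidean norm and for an $M$-weighted norm $\|x\|_M = (x^H M x)^{1/2}$; the matrix $M$ and the test vectors are arbitrary choices.

```python
# Sketch: numerically spot-check the vector-norm axioms for the Euclidean
# norm and a Riemannian (M-weighted) norm, with M Hermitian positive definite.
import numpy as np

rng = np.random.default_rng(0)

def euclidean_norm(x):
    return np.sqrt(np.vdot(x, x).real)

def riemannian_norm(x, M):
    # ||x||_M = sqrt(x^H M x); valid when M is Hermitian positive definite
    return np.sqrt(np.vdot(x, M @ x).real)

n = 4
G = rng.standard_normal((n, n))
M = G @ G.T + n * np.eye(n)        # positive definite by construction

x, y = rng.standard_normal(n), rng.standard_normal(n)
c = -2.5

for norm in (euclidean_norm, lambda v: riemannian_norm(v, M)):
    assert norm(x) > 0                                   # positivity
    assert np.isclose(norm(c * x), abs(c) * norm(x))     # homogeneity
    assert norm(x + y) <= norm(x) + norm(y) + 1e-12      # triangle inequality
```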
If $\|\cdot\|$ is a regular vector norm on the $n$-dimensional vector space, and if $A$ is an $n \times n$ matrix, we define the related matrix norm as
$$\|A\| = \max_{x \neq 0} \frac{\|Ax\|}{\|x\|}.$$
• Properties of the related matrix norm:
$$\|Ax\| \le \|A\|\,\|x\|, \qquad \|cA\| = |c|\,\|A\|, \qquad \|A + B\| \le \|A\| + \|B\|, \qquad \|AB\| \le \|A\|\,\|B\|,$$
and any two related matrix norms are equivalent,
$$c_1 \|A\|' \le \|A\| \le c_2 \|A\|',$$
for some positive constants $c_1, c_2$ which are independent of $A$.
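The added sketch below assumes the Euclidean vector norm, so the related matrix norm is the spectral norm; it estimates $\|A\| = \max_{x \neq 0} \|Ax\|/\|x\|$ by random sampling, compares the result with NumPy's exact value, and spot-checks two of the listed properties.

```python
# Sketch: sample the ratio ||Ax|| / ||x|| to approximate the induced 2-norm,
# then verify it against numpy's spectral norm and two norm properties.
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

samples = rng.standard_normal((10000, n))
ratios = np.linalg.norm(samples @ A.T, axis=1) / np.linalg.norm(samples, axis=1)
print("sampled  ||A|| ~", ratios.max())          # lower bound from sampling
print("spectral ||A|| =", np.linalg.norm(A, 2))  # exact induced 2-norm

norm = lambda M: np.linalg.norm(M, 2)
assert norm(A @ B) <= norm(A) * norm(B) + 1e-12   # ||AB||  <= ||A|| ||B||
assert norm(A + B) <= norm(A) + norm(B) + 1e-12   # ||A+B|| <= ||A|| + ||B||
```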
The Condition Number of a Matrix
If $A$ is a nonsingular square matrix, we define the condition number
$$\kappa(A) = \|A\|\,\|A^{-1}\|.$$
• Interpretation: Let the unit sphere $\|x\| = 1$ be mapped by the transformation $y = Ax$ into some surface $S$. The condition number is the ratio of the largest to the smallest distance from the origin to points on $S$. Thus
$$\kappa(A) \ge \frac{|\lambda_1|}{|\lambda_n|},$$
where $\lambda_1, \dots, \lambda_n$ are the eigenvalues of $A$ arranged so that $|\lambda_1| \ge |\lambda_2| \ge \dots \ge |\lambda_n|$. This follows from setting $x$ equal to eigenvectors belonging to $\lambda_1$ and $\lambda_n$, respectively.
By the previous definition, $\|A\| = \max_{x \neq 0} \|Ax\|/\|x\|$ is the largest distance from the origin to $S$. But what is the minimum of $\|Ax\|/\|x\|$? Setting $y = Ax$, we have
$$\|A^{-1}\| = \max_{y \neq 0} \frac{\|A^{-1}y\|}{\|y\|} = \max_{x \neq 0} \frac{\|x\|}{\|Ax\|} = \Bigl(\min_{x \neq 0} \frac{\|Ax\|}{\|x\|}\Bigr)^{-1}.$$
Therefore
$$\min_{x \neq 0} \frac{\|Ax\|}{\|x\|} = \frac{1}{\|A^{-1}\|}.$$
So,
$$\kappa(A) = \|A\|\,\|A^{-1}\| = \frac{\max_{\|x\| = 1} \|Ax\|}{\min_{\|x\| = 1} \|Ax\|},$$
the ratio of the largest to the smallest distance from the origin to $S$.
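A small numerical check of this characterization (an added sketch; with the Euclidean norm, the largest and smallest distances from the origin to $S$ are the largest and smallest singular values of $A$):

```python
# Sketch: kappa(A) = ||A|| ||A^-1|| equals the ratio of the largest to the
# smallest stretch of the unit sphere under x -> Ax (Euclidean norm).
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))

sigma = np.linalg.svd(A, compute_uv=False)    # extreme distances from origin to S
kappa = np.linalg.norm(A, 2) * np.linalg.norm(np.linalg.inv(A), 2)

print("max/min stretch:", sigma.max() / sigma.min())
print("||A|| ||A^-1|| :", kappa)
print("numpy cond(A,2):", np.linalg.cond(A, 2))   # all three agree
```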
Application of Condition Numbers
Suppose that we are solving $Ax = b$, where the data $A$ and $b$ are not known exactly. What is the effect of errors $\delta A$ and $\delta b$ on the solution? Let
$$(A + \delta A)(x + \delta x) = b + \delta b.$$
Assume that $A$ and $A + \delta A$ are nonsingular, and that $\|A^{-1}\|\,\|\delta A\| < 1$. Define the error ratios:
$$\varepsilon_A = \frac{\|\delta A\|}{\|A\|}, \qquad \varepsilon_b = \frac{\|\delta b\|}{\|b\|}, \qquad \varepsilon_x = \frac{\|\delta x\|}{\|x\|}.$$
We try to estimate $\varepsilon_x$ as a function of $\varepsilon_A$ and $\varepsilon_b$. But
$$(A + \delta A)(x + \delta x) = b + \delta b,$$
whereas $Ax = b$. Therefore
$$A\,\delta x + \delta A\,(x + \delta x) = \delta b, \qquad \text{so} \qquad \delta x = A^{-1}\bigl(\delta b - \delta A\,(x + \delta x)\bigr).$$
Taking norms and dividing by $\|x\|$ (using $\|b\| \le \|A\|\,\|x\|$, i.e. $1/\|x\| \le \|A\|/\|b\|$) yield
$$\varepsilon_x \le \|A^{-1}\|\Bigl(\frac{\|A\|\,\|\delta b\|}{\|b\|} + \|\delta A\| + \|\delta A\|\,\varepsilon_x\Bigr) = \kappa(A)\,\bigl(\varepsilon_b + \varepsilon_A + \varepsilon_A\,\varepsilon_x\bigr).$$
Hence
$$\bigl(1 - \kappa(A)\,\varepsilon_A\bigr)\,\varepsilon_x \le \kappa(A)\,\bigl(\varepsilon_A + \varepsilon_b\bigr).$$
Assuming $\kappa(A)\,\varepsilon_A < 1$, we find
$$\varepsilon_x \le \frac{\kappa(A)\,(\varepsilon_A + \varepsilon_b)}{1 - \kappa(A)\,\varepsilon_A},$$
or, when only $b$ is perturbed ($\delta A = 0$), simply $\varepsilon_x \le \kappa(A)\,\varepsilon_b$.
If $\kappa(A)\,\varepsilon_A \le 1/2$, then $\varepsilon_x \le 2\,\kappa(A)\,(\varepsilon_A + \varepsilon_b)$.
If $\kappa(A)$ is large (the system is ill-conditioned), then even small relative errors in the data $A$ and $b$ may produce a large relative error in the solution $x$.
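The following added sketch illustrates the bound on a random system; the matrix, right-hand side, and perturbation sizes are arbitrary choices, not data from the slides.

```python
# Sketch: solve Ax = b, perturb A and b, and compare the actual relative
# error eps_x with the bound kappa*(eps_A + eps_b) / (1 - kappa*eps_A).
import numpy as np

rng = np.random.default_rng(3)
n = 6
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)
x = np.linalg.solve(A, b)

dA = 1e-6 * rng.standard_normal((n, n))
db = 1e-6 * rng.standard_normal(n)
x_pert = np.linalg.solve(A + dA, b + db)

kappa = np.linalg.cond(A, 2)
eps_A = np.linalg.norm(dA, 2) / np.linalg.norm(A, 2)
eps_b = np.linalg.norm(db) / np.linalg.norm(b)
eps_x = np.linalg.norm(x_pert - x) / np.linalg.norm(x)

bound = kappa * (eps_A + eps_b) / (1 - kappa * eps_A)   # valid while kappa*eps_A < 1
print(f"eps_x = {eps_x:.2e}  <=  bound = {bound:.2e}")
assert eps_x <= bound
```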
Perturbations of the Spectrum
Let $A$ be an $n \times n$ matrix with eigenvalues $\lambda_1, \dots, \lambda_n$ and with corresponding eigenvectors $u_1, \dots, u_n$. A small change $\delta A$ in the matrix produces changes $\delta\lambda_j$ in the eigenvalues and changes $\delta u_j$ in the eigenvectors. If $\lambda_1, \dots, \lambda_n$ are distinct, then $u_1, \dots, u_n$ are linearly independent and are unique, except for nonzero scalar multiples. We have $A u_j = \lambda_j u_j$ and
$$(A + \delta A)(u_j + \delta u_j) = (\lambda_j + \delta\lambda_j)(u_j + \delta u_j). \qquad (1)$$
In this equation we consider $\delta\lambda_j$ and $\delta u_j$ as the unknowns determined by the given perturbation $\delta A$, and we require that
$$\delta\lambda_j \to 0, \quad \delta u_j \to 0 \quad \text{as} \quad \delta A \to 0. \qquad (2)$$
If $\delta A = 0$, then $\delta\lambda_j = 0$; but the perturbation equation (1) is satisfied by any $\delta u_j$ which is a multiple of $u_j$. To ensure $\delta u_j = 0$ when $\delta A = 0$, we shall normalize the perturbed eigenvector by the assumption that,
in the expansion of $u_j + \delta u_j$ in the basis $u_1, \dots, u_n$, the coefficient of $u_j$ remains equal to 1 when $A$ is replaced by $A + \delta A$. In other words, we shall require the expansions
$$u_j + \delta u_j = u_j + \sum_{k \neq j} c_{jk}\, u_k, \qquad \text{i.e.} \qquad \delta u_j = \sum_{k \neq j} c_{jk}\, u_k. \qquad (3)$$
The unknowns are now $\delta\lambda_j$ and the coefficients $c_{jk}$ for $k \neq j$. If the components of the matrix $\delta A$ are very small, eqn. (1) becomes, to the first order,
$$A u_j + A\,\delta u_j + \delta A\, u_j = \lambda_j u_j + \lambda_j\,\delta u_j + \delta\lambda_j\, u_j,$$
where the neglected terms, $\delta A\,\delta u_j$ and $\delta\lambda_j\,\delta u_j$, are of second order. Since $A u_j = \lambda_j u_j$,
$$A\,\delta u_j + \delta A\, u_j = \lambda_j\,\delta u_j + \delta\lambda_j\, u_j. \qquad (4)$$
To compute the unknowns we will use the "principle of biorthogonality": Let $u_1, \dots, u_n$ be eigenvectors corresponding to the eigenvalues $\lambda_1, \dots, \lambda_n$ of an $n \times n$ matrix $A$; assume the $\lambda_i$ are distinct. Let $v_1, \dots, v_n$ be eigenvectors corresponding to the eigenvalues $\bar\lambda_1, \dots, \bar\lambda_n$ of $A^H$ (the Hermitian transpose of $A$). Then
$$(u_i, v_j) = 0 \quad \text{for } i \neq j, \qquad \text{and} \qquad (u_j, v_j) \neq 0.$$
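A brief numerical illustration of the biorthogonality principle (added here; the random test matrix has distinct eigenvalues with probability one, as the principle assumes):

```python
# Sketch: eigenvectors u_j of A and eigenvectors v_i of A^H belonging to
# different eigenvalues are orthogonal, while (u_j, v_j) != 0.
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((4, 4))

lam, U = np.linalg.eig(A)             # columns of U: eigenvectors u_j of A
lamH, V = np.linalg.eig(A.conj().T)   # columns of V: eigenvectors v_i of A^H
order = [int(np.argmin(np.abs(lamH - l.conj()))) for l in lam]
V = V[:, order]                       # pair v_i with u_i by eigenvalue

G = V.conj().T @ U                    # G[i, j] = (u_j, v_i) = v_i^H u_j
off_diag = G - np.diag(np.diag(G))
print("max |(u_j, v_i)|, i != j:", np.abs(off_diag).max())   # ~ 0
print("min |(u_j, v_j)|        :", np.abs(np.diag(G)).min()) # nonzero
```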
To solve eqn. (4) for $\delta\lambda_j$, we will use the eigenvectors $v_i$ of $A^H$. By normalization (3), the perturbation $\delta u_j$ is a combination of the $u_k$ for $k \neq j$. Therefore, $(\delta u_j, v_j) = 0$. Now taking the inner product of (4) with $v_j$ yields
$$(A\,\delta u_j, v_j) + (\delta A\, u_j, v_j) = \lambda_j (\delta u_j, v_j) + \delta\lambda_j (u_j, v_j).$$
But
$$(A\,\delta u_j, v_j) = (\delta u_j, A^H v_j) = (\delta u_j, \bar\lambda_j v_j) = \lambda_j (\delta u_j, v_j).$$
Therefore, since $(u_j, v_j) \neq 0$,
$$\delta\lambda_j = \frac{(\delta A\, u_j, v_j)}{(u_j, v_j)}. \qquad (5)$$
To find $c_{jk}$ take the inner product of eqn. (4) with $v_k$, for $k \neq j$:
$$(A\,\delta u_j, v_k) + (\delta A\, u_j, v_k) = \lambda_j (\delta u_j, v_k) + \delta\lambda_j (u_j, v_k).$$
Since $(u_j, v_k) = 0$ and $(A\,\delta u_j, v_k) = \lambda_k (\delta u_j, v_k)$, this gives $(\lambda_j - \lambda_k)(\delta u_j, v_k) = (\delta A\, u_j, v_k)$. But the normalization (3) gives $(\delta u_j, v_k) = c_{jk}(u_k, v_k)$, so
$$c_{jk} = \frac{(\delta A\, u_j, v_k)}{(\lambda_j - \lambda_k)(u_k, v_k)}, \qquad k \neq j. \qquad (6)$$
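The sketch below (an added check on a random matrix with a perturbation of size $\varepsilon$) compares the first-order prediction (5) for $\delta\lambda_j$ with the exact eigenvalue shift; the discrepancy should be of order $\varepsilon^2$.

```python
# Sketch: verify formula (5), delta_lambda_j = (dA u_j, v_j) / (u_j, v_j),
# against the exact eigenvalues of the perturbed matrix A + dA.
import numpy as np

rng = np.random.default_rng(5)
n, eps = 4, 1e-6
A = rng.standard_normal((n, n))
dA = eps * rng.standard_normal((n, n))

lam, U = np.linalg.eig(A)
lamH, V = np.linalg.eig(A.conj().T)
order = [int(np.argmin(np.abs(lamH - l.conj()))) for l in lam]
V = V[:, order]                                   # v_j paired with u_j

# Formula (5): predicted first-order change of each eigenvalue
dlam_pred = np.array([np.vdot(V[:, j], dA @ U[:, j]) / np.vdot(V[:, j], U[:, j])
                      for j in range(n)])

lam_new = np.linalg.eig(A + dA)[0]
pair = [int(np.argmin(np.abs(lam_new - l))) for l in lam]   # match to originals
dlam_true = lam_new[pair] - lam

print("max error of formula (5):", np.abs(dlam_true - dlam_pred).max())  # O(eps^2)
```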
Example: $A = \Lambda = \mathrm{diag}(\lambda_1, \dots, \lambda_n)$, where the $\lambda_j$ are distinct. Let $\delta A = \varepsilon B$, where $B = (b_{kj})$ is a fixed matrix and $\varepsilon$ is a small parameter. In this case, we take the unit vectors $u_i = v_i = e_i$ for the eigenvectors of $A$ and $A^H$. Eqn. (5) gives
$$\delta\lambda_j = \frac{(\varepsilon B e_j, e_j)}{(e_j, e_j)} = \varepsilon\, b_{jj}.$$
Eqn. (6) gives
$$c_{jk} = \frac{(\varepsilon B e_j, e_k)}{(\lambda_j - \lambda_k)(e_k, e_k)} = \frac{\varepsilon\, b_{kj}}{\lambda_j - \lambda_k}, \qquad k \neq j.$$
Now eqn. (3) gives $\delta u_j = \sum_{k \neq j} c_{jk}\, e_k$, which is the vector whose $j$th component is 0 and whose $k$th component ($k \neq j$) is $\varepsilon\, b_{kj}/(\lambda_j - \lambda_k)$.
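The same example can be checked numerically; in the added sketch below the entries of $\Lambda$ and $B$ are illustrative values, not taken from the slides.

```python
# Sketch: diagonal A = diag(lam) perturbed by dA = eps*B; formulas (5) and
# (6) reduce to eps*b_jj and eps*b_kj / (lam_j - lam_k).
import numpy as np

lam = np.array([1.0, 2.0, 4.0])          # distinct eigenvalues of A = diag(lam)
A = np.diag(lam)
B = np.array([[ 0.3, -1.0,  0.5],
              [ 2.0,  0.1, -0.7],
              [ 0.4,  1.2,  0.9]])
eps = 1e-5
dA = eps * B

# Formula (5): delta_lambda_j = eps * b_jj
dlam_pred = eps * np.diag(B)
dlam_true = np.sort(np.linalg.eigvals(A + dA).real) - lam
print("exact eigenvalue shifts:", dlam_true)
print("first-order formula (5):", dlam_pred)

# Formulas (6) and (3): k-th component of delta_u_j is eps*b_kj/(lam_j - lam_k)
j = 0
du_pred = np.array([0.0 if k == j else eps * B[k, j] / (lam[j] - lam[k])
                    for k in range(len(lam))])
print("predicted delta_u_0    :", du_pred)
```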