Quadratic Forms, Characteristic Roots and Characteristic Vectors. Mohammed Nasser, Professor, Dept. of Statistics, RU, Bangladesh. Email: mnasser.ru@gmail.com
"The use of matrix theory is now widespread … are essential in … modern treatment of univariate and multivariate statistical methods." (C. R. Rao)
Contents
• Linear Map and Matrices
• Meaning of Px
• Quadratic Forms and Their Applications in MM
• Classification of Quadratic Forms
• Quadratic Forms and Inner Product
• Definitions of Characteristic Roots and Characteristic Vectors
• Geometric Interpretations
• Properties of Grammian Matrices
• Spectral Decomposition and Applications
• Matrix Inequalities and Maximization
• Computations
Some Vector Concepts
• Length of a vector (right-angle triangle, Pythagoras' theorem): in two dimensions ||x|| = (x1² + x2²)^(1/2); in three dimensions ||x|| = (x1² + x2² + x3²)^(1/2).
• Dot product = scalar. The inner product of a vector with itself is the squared vector length: xTx = x1² + x2² + x3² = ||x||².
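These two concepts can be sketched in a few lines of Python (a minimal illustration, not part of the original slides; the example vector is arbitrary):

```python
import math

# Euclidean length (norm) of a vector: ||x|| = sqrt(x1^2 + ... + xn^2)
def norm(x):
    return math.sqrt(sum(xi * xi for xi in x))

# Inner (dot) product: x . y = x1*y1 + ... + xn*yn
def dot(x, y):
    return sum(xi * yi for xi, yi in zip(x, y))

x = [3.0, 4.0]
length = norm(x)       # 5.0 by Pythagoras
self_dot = dot(x, x)   # 25.0 = ||x||^2
```

The second line of the slide, xTx = ||x||², is exactly the relation `self_dot == length ** 2`.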
Some Vector Concepts
• Angle θ between two vectors x and y: cos θ = xTy / (||x|| ||y||).
• Orthogonal vectors: θ = π/2, i.e. xTy = 0.
Linear Map and Matrices
Linear mappings are almost omnipresent.
• If the domain and co-domain are both finite-dimensional vector spaces, each linear mapping can be uniquely represented by a matrix with respect to a specific pair of bases.
• We intend to study the properties of a linear mapping through the properties of its matrix.
Linear Map and Matrices
This isomorphism (between linear maps and matrices) is basis dependent.
Linear Map and Matrices
Let A be similar to B, i.e. B = P⁻¹AP. Similarity defines an equivalence relation on the vector space of square matrices of order n, i.e. it partitions that space into equivalence classes. Each equivalence class represents a unique linear operator.
• How can we choose i) the simplest matrix in each equivalence class, and ii) the one of special interest?
Linear Map and Matrices
Two matrices representing the same linear transformation with respect to different bases must be similar. A major concern of ours is to make the best choice of basis, so that the linear operator we are working with has a representing matrix that is as simple as possible. A diagonal matrix is very useful: for example, if D = P⁻¹AP is diagonal, then Dⁿ = P⁻¹AⁿP, so powers of A reduce to powers of the diagonal entries.
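The usefulness of a diagonal representing matrix can be checked numerically; the sketch below (with an arbitrary symmetric example matrix, not one from the slides) verifies that Aⁿ = P Dⁿ P⁻¹, the same identity as Dⁿ = P⁻¹AⁿP:

```python
import numpy as np

# A symmetric matrix is always diagonalizable by an orthogonal P,
# so powers of A reduce to powers of its eigenvalues.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])            # arbitrary symmetric example

eigvals, P = np.linalg.eigh(A)        # columns of P are orthonormal eigenvectors
n = 5
A_n_direct = np.linalg.matrix_power(A, n)
A_n_via_D = P @ np.diag(eigvals ** n) @ P.T   # P orthogonal, so P^{-1} = P^T
```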
Linear Map and Matrices
Each equivalence class represents a unique linear operator. Can we characterize the class in a simpler way? Yes, we can, under extra conditions. The concept of characteristic roots plays an important role in this regard.
Meaning of Pn×n xn×1
• Case 1: Pn×n is singular but not idempotent.
Meaning: the whole space Rⁿ is mapped onto the column space of Pn×n, a proper subspace of Rⁿ. A vector of the subspace may be mapped to another vector of the subspace. For example, Px = (x1 + x2)
Meaning of Pn×n xn×1
• Case 2: Pn×n is singular and idempotent (asymmetric).
Meaning: the whole space Rⁿ is mapped onto the column space of Pn×n, a proper subspace of Rⁿ. A vector of the subspace is mapped to the same vector of the subspace. This is an oblique projection; that is, the subspace is not ┴ to its complement. For example, Px = x1, and Px is not orthogonal to x − Px.
Meaning of Pn×n xn×1
• Case 3: Pn×n is singular and idempotent (symmetric).
Meaning: the whole space Rⁿ is mapped onto the column space of Pn×n, a proper subspace of Rⁿ. A vector of the subspace is mapped to the same vector of the subspace. This is an orthogonal projection; that is, the subspace is ┴ to its complement. For example, Px = (x1 + x2), and Px is orthogonal to x − Px.
Meaning of Pn×n xn×1
• Case 4: Pn×n is non-singular and non-orthogonal.
Meaning: the whole space Rⁿ is mapped onto the column space of Pn×n, which is all of Rⁿ. The mapping is one-to-one and onto. We now have the columns of Pn×n as a new (oblique) basis in place of the standard basis. Angles between vectors and lengths of vectors are not preserved.
Meaning of Pn×n xn×1
• Case 5: Pn×n is non-singular and orthogonal.
Meaning: the whole space Rⁿ is mapped onto the column space of Pn×n, which is all of Rⁿ. The mapping is one-to-one and onto. We now have the columns of Pn×n as a new (orthonormal) basis in place of the standard basis. Angles between vectors and lengths of vectors are preserved: we have only a rotation of axes. From a symmetric matrix we can always obtain such a P from its n independent eigenvectors, and a symmetric idempotent P from r (< n) of its independent eigenvectors.
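The projection and rotation cases above can be illustrated numerically. A minimal sketch (the projection direction and test vector are hypothetical choices, not from the slides): Case 3 with P = aaᵀ/(aᵀa), a symmetric idempotent projector onto span{a}, and Case 5 with a plane rotation:

```python
import numpy as np

# Case 3: symmetric idempotent P projects orthogonally onto span{a}.
a = np.array([[1.0], [1.0]])
P = a @ a.T / (a.T @ a)          # P = a a^T / (a^T a): symmetric and P^2 = P

x = np.array([3.0, 1.0])
Px = P @ x                       # projection of x onto the line span{a}
residual = x - Px                # orthogonal to Px for a symmetric projector

# Case 5: an orthogonal (rotation) matrix preserves vector lengths.
theta = np.pi / 4
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
```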
Quadratic Form
Definition: The quadratic form in n variables x1, x2, …, xn is the general homogeneous function of second degree in the variables, Q(x) = Σi Σj aij xi xj. In terms of matrix notation, the quadratic form is given by Q(X) = XTAX.
Examples of Some Quadratic Forms
• A standard form is a quadratic form containing only squared terms, Σ ai xi². What are its uses?
• A can be chosen in many ways for a particular quadratic form. To make it unique, it is customary to write A as a symmetric matrix.
In Fact, Infinitely Many A's
• For example 1 (with cross-product term 6x1x2) we have to take a12 and a21 such that a12 + a21 = 6.
• We can do this in infinitely many ways.
• A symmetric A makes the choice unique: a12 = a21 = 3.
Its Importance in Statistics
• Variance is a fundamental concept in statistics. The sample variance is nothing but a quadratic form with an idempotent matrix of rank n − 1.
• Quadratic forms play a central role in multivariate statistical analysis: for example, principal component analysis, factor analysis, discriminant analysis, etc.
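The variance claim can be checked directly: with the centering matrix C = I − (1/n)J (J the all-ones matrix), C is symmetric idempotent of rank n − 1 and (n − 1)s² = xᵀCx. A small sketch with made-up data:

```python
import numpy as np

# Centering matrix: C = I - (1/n) * J, symmetric and idempotent, rank n-1.
x = np.array([2.0, 4.0, 6.0, 8.0])   # arbitrary sample
n = len(x)
C = np.eye(n) - np.ones((n, n)) / n

quad = x @ C @ x                     # the quadratic form x^T C x
s2 = quad / (n - 1)                  # sample variance
```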
Its Importance in Statistics: Multivariate and Bivariate Gaussian
• The multivariate Gaussian density contains the quadratic form (x − μ)TΣ⁻¹(x − μ) in its exponent; the bivariate Gaussian is the special case p = 2.
Quadratic Form as Inner Product
• XTAX = (ATX)TX = XT(AX) = XTY, where Y = AX.
• Let A = CTC. Then XTAX = XTCTCX = (CX)T(CX) = YTY, and XTAY = XTCTCY = (CX)T(CY) = WTZ. What is the geometric meaning? ||Y|| = (YTY)^(1/2) is the length of Y, and XTY is the dot product of X and Y.
• Different nonsingular C's represent different inner products.
• Different inner products give different geometries.
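The identity XᵀAY = (CX)ᵀ(CY) for A = CᵀC can be verified with a Cholesky factor; a sketch assuming one particular positive definite A (the matrix and vectors are illustrative choices):

```python
import numpy as np

# A positive definite A = C^T C defines an inner product
# <x, y>_A = x^T A y = (Cx)^T (Cy): the usual dot product after mapping by C.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])            # positive definite example
C = np.linalg.cholesky(A).T           # lower factor L with A = L L^T; C = L^T

x = np.array([1.0, 0.0])
y = np.array([0.0, 1.0])

ip_direct = x @ A @ y                 # x^T A y
ip_mapped = (C @ x) @ (C @ y)         # (Cx)^T (Cy)
```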
Euclidean Distance and Mathematical Distance
• The usual human concept of distance is Euclidean distance, in which each coordinate contributes equally.
• Mathematicians, generalizing its three properties, define a distance on any set: 1) d(P, Q) = d(Q, P); 2) d(P, Q) = 0 if and only if P = Q; 3) d(P, Q) ≤ d(P, R) + d(R, Q) for all R.
Statistical Distance
• Weight coordinates subject to a great deal of variability less heavily than those that are not highly variable.
• Who is nearer to the data set, if it were a point?
Ellipse of Constant Statistical Distance for Uncorrelated Data
(Figure: an ellipse centred at the origin of the x1, x2 plane, with axes along the coordinate axes.)
Mahalanobis Distance
• Population version: d²(x, μ) = (x − μ)TΣ⁻¹(x − μ).
• Sample version: d²(x, x̄) = (x − x̄)TS⁻¹(x − x̄), with x̄ the sample mean and S the sample covariance matrix.
• We can robustify it by using robust estimators of the location and scatter functionals.
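A minimal sketch of the sample version, using simulated data (the function name and data are ours, not from the slides):

```python
import numpy as np

# Sample Mahalanobis distance: d^2(x, xbar) = (x - xbar)^T S^{-1} (x - xbar)
def mahalanobis_sq(x, data):
    xbar = data.mean(axis=0)
    S = np.cov(data, rowvar=False)        # sample covariance matrix
    diff = x - xbar
    return diff @ np.linalg.solve(S, diff)  # solve avoids forming S^{-1}

rng = np.random.default_rng(0)
data = rng.normal(size=(100, 2))          # simulated bivariate sample
d2 = mahalanobis_sq(np.array([0.0, 0.0]), data)
```

Using `solve` rather than explicitly inverting S is the standard numerically stable choice.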
Classification of Quadratic Forms: Definitions
• 1. Positive Definite: a quadratic form Y = XTAX is said to be positive definite iff XTAX > 0 for all x ≠ 0. Then the matrix A is said to be a positive definite matrix.
• 2. Positive Semi-definite: a quadratic form Y = XTAX is said to be positive semi-definite iff XTAX ≥ 0 for all x, and there exists x ≠ 0 such that XTAX = 0. Then the matrix A is said to be a positive semi-definite matrix.
• 3. Negative Definite: a quadratic form Y = XTAX is said to be negative definite iff XTAX < 0 for all x ≠ 0. Then the matrix A is said to be a negative definite matrix.
Classification of Quadratic Forms: Definitions
• 4. Negative Semi-definite: a quadratic form Y = XTAX is said to be negative semi-definite iff XTAX ≤ 0 for all x, and there exists x ≠ 0 such that XTAX = 0. Then the matrix A is said to be a negative semi-definite matrix.
• 5. Indefinite: quadratic forms and their associated symmetric matrices need not be definite or semi-definite in any of the above senses. In this case the quadratic form is said to be indefinite; that is, it can be negative, zero or positive depending on the values of x.
Two Theorems on Quadratic Forms
Theorem 1: A quadratic form can always be expressed with respect to a given coordinate system as Y = XTAX, where A is a unique symmetric matrix.
Theorem 2: Two symmetric matrices A and B represent the same quadratic form if and only if B = PTAP, where P is a non-singular matrix.
Classification of Quadratic Forms: Importance of the Standard Form
• From the standard form we can easily classify a quadratic form.
• In standard form, XTAX = a1y1² + a2y2² + … + anyn², which is:
positive definite if ai > 0 for all i;
positive semi-definite if ai > 0 for some i and ai = 0 for the others;
negative definite if ai < 0 for all i;
negative semi-definite if ai < 0 for some i and ai = 0 for the others;
indefinite if some ai are positive and some are negative.
Classification of Quadratic Forms: Importance of the Standard Form
That is why, using a suitable nonsingular transformation (why nonsingular?), we try to transform a general XTAX into a standard form. If we can find a nonsingular matrix P such that the form becomes standard, we can easily classify it. We can do this i) by a congruent transformation, or ii) using eigenvalues and eigenvectors. Method ii) is mostly used in MM.
Classification of Quadratic Forms: Importance of Determinant, Eigenvalues and Diagonal Elements
• 1. Positive Definite: (a) A quadratic form is positive definite iff the nested (leading) principal minors of A are all positive: Δ1 > 0, Δ2 > 0, …, Δn > 0. Evidently a matrix A is positive definite only if det(A) > 0. (b) A quadratic form Y = XTAX is positive definite iff all the eigenvalues of A are positive.
Classification of Quadratic Forms: Importance of Determinant, Eigenvalues and Diagonal Elements
• 2. Positive Semi-definite: (a) A quadratic form is positive semi-definite iff all the principal minors of A are non-negative. (b) A quadratic form Y = XTAX is positive semi-definite iff at least one eigenvalue of A is zero while the remaining eigenvalues are positive.
Continued
• 3. Negative Definite: (a) A quadratic form is negative definite iff the nested principal minors of A alternate in sign starting negative: Δ1 < 0, Δ2 > 0, Δ3 < 0, …, i.e. (−1)^k Δk > 0 for k = 1, …, n. Evidently a matrix A is negative definite only if (−1)^n det(A) > 0; that is, det(A) is negative or positive according as the order n of A is odd or even. (b) A quadratic form Y = XTAX is negative definite iff all the eigenvalues of A are negative.
Continued
• 4. Negative Semi-definite: (a) A quadratic form is negative semi-definite iff the principal minors of A of odd order are ≤ 0 and those of even order are ≥ 0. Evidently a matrix A is negative semi-definite only if det(A) ≤ 0 (det(A) ≥ 0) when n is odd (even). (b) A quadratic form is negative semi-definite iff at least one eigenvalue of A is zero while the remaining eigenvalues are negative.
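Criterion (b) of each case suggests a simple eigenvalue-based classifier for symmetric matrices; a sketch (the tolerance is an implementation choice to absorb floating-point error):

```python
import numpy as np

# Classify a symmetric matrix (and its quadratic form) by eigenvalue signs.
def classify(A, tol=1e-10):
    w = np.linalg.eigvalsh(A)          # real eigenvalues of a symmetric matrix
    if np.all(w > tol):
        return "positive definite"
    if np.all(w >= -tol):
        return "positive semi-definite"
    if np.all(w < -tol):
        return "negative definite"
    if np.all(w <= tol):
        return "negative semi-definite"
    return "indefinite"
```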
Theorem on Quadratic Form (Congruent Transformation)
• If Y = XTAX is a real quadratic form in n variables x1, x2, …, xn of rank r, i.e. ρ(A) = r, then there exists a non-singular matrix P of order n such that x = Pz converts Y into the canonical form Y = λ1z1² + λ2z2² + … + λrzr², where λ1, λ2, …, λr are all different from zero. That implies the rank r of A equals the number of nonzero coefficients in the canonical form.
Gram (Grammian) Matrix
Definition: if A is an m×n matrix, the matrix S = ATA is called the Gram matrix of A; it is a symmetric n×n matrix.
Properties
a. Every positive definite or positive semi-definite matrix can be represented as a Gram matrix.
b. The Gram matrix ATA is positive definite or positive semi-definite according as the rank of A is equal to or less than the number of columns of A.
c. If ATA = 0 then A = 0.
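Property b can be checked numerically; a sketch with two hypothetical matrices, one of full column rank and one rank-deficient:

```python
import numpy as np

# Full column rank: A^T A is positive definite.
A_full = np.array([[1.0, 0.0],
                   [0.0, 1.0],
                   [1.0, 1.0]])       # rank 2 = number of columns
S_full = A_full.T @ A_full

# Rank-deficient: A^T A is only positive semi-definite.
A_def = np.array([[1.0, 2.0],
                  [2.0, 4.0]])        # rank 1 < 2 columns
S_def = A_def.T @ A_def
```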
What are eigenvalues?
• Given a matrix A, x is an eigenvector and λ is the corresponding eigenvalue if Ax = λx.
• A must be square, and the determinant of A − λI must equal zero: Ax − λx = 0 ⟹ (A − λI)x = 0.
• The trivial solution is x = 0; a non-trivial solution exists only when det(A − λI) = 0.
• Are eigenvectors unique? No: if x is an eigenvector, then cx is also an eigenvector with the same eigenvalue λ, since A(cx) = c(Ax) = c(λx) = λ(cx).
Calculating the Eigenvectors/values
• Expand det(A − λI) = 0. For a 2 × 2 matrix this is a simple quadratic equation with two solutions (possibly complex).
• This "characteristic equation" is solved for λ; substituting each λ back into (A − λI)x = 0 then gives the corresponding eigenvector x.
Eigenvalue example
• Consider the symmetric matrix A = [1 2; 2 4] (consistent with the results below): det(A − λI) = (1 − λ)(4 − λ) − 4 = λ² − 5λ = 0, so λ = 0 or λ = 5.
• For λ = 0, one possible eigenvector is x = (2, −1).
• For λ = 5, one possible eigenvector is x = (1, 2).
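The example can be verified numerically by checking Ax = λx for each stated pair:

```python
import numpy as np

# Matrix consistent with the stated eigenvalues 0 and 5
# and eigenvectors proportional to (2, -1) and (1, 2).
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

v0 = np.array([2.0, -1.0])   # claimed eigenvector for lambda = 0
v5 = np.array([1.0, 2.0])    # claimed eigenvector for lambda = 5

Av0 = A @ v0                 # should equal 0 * v0
Av5 = A @ v5                 # should equal 5 * v5
```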
Geometric Interpretation of Eigen Roots and Vectors
We know from the definition of eigen roots and vectors that Ax = λx (**), where A is an m×m matrix, x is a vector in Rᵐ and λ is a scalar.
• The right side of (**) shows the vector multiplied by a scalar, so x and λx lie on the same line.
• The left side of (**) shows the effect of multiplying the vector x by the matrix A (a matrix operator); in general a matrix operator may change both the direction and the magnitude of the vector.
Geometric Interpretation of Eigen Roots and Vectors
• Hence our goal is to find vectors that change in magnitude but remain on the same line after matrix multiplication.
• Now the question arises: do these eigenvectors, along with their respective changes in magnitude, characterize the matrix? The answer is given by the DECOMPOSITION THEOREMS.
Geometric Interpretation of Eigen Roots and Vectors
(Figure: vectors x1 and x2 and their images Ax1 and Ax2 under the operator A, shown against X, Y, Z axes.)