Chapter 4 (C) General Vector Spaces (II) 4.6 ~ 4.9 Gareth Williams J & B, 滄海書局
Part C • 4.6 Linear Dependence and Independence • 4.7 Bases and Dimension • 4.8 Rank of a Matrix • 4.9 Orthonormal Vectors and Projections in Rn • Homework 4
4.6 Linear Dependence and Independence The vector (4, -1, 0) is a linear combination of (2, 1, 3) and (0, 1, 2), and (2, 1, 3) is in turn a linear combination of the other two; equivalently, a nontrivial combination of the three vectors equals the zero vector: (4, -1, 0) = 2(2, 1, 3) - 3(0, 1, 2), (2, 1, 3) = 0.5(4, -1, 0) + 1.5(0, 1, 2), (4, -1, 0) - 2(2, 1, 3) + 3(0, 1, 2) = (0, 0, 0)
Definition • The set of vectors {v1, …, vm } in a vector space V is said to be linearly dependent if there exist scalars c1, …, cm, not all zero, such that c1v1 + … + cmvm = 0 • The set of vectors { v1, …, vm } is linearly independent if c1v1 + … + cmvm = 0 can only be satisfied when c1 = 0, …, cm = 0.
Example 1 Show that the set {(1, 2, 3), (-2, 1, 1), (8, 6, 10)} is linear dependent in R3. Solution
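A worked sketch of the omitted solution (the textbook's own steps may be organized differently): set a combination of the three vectors equal to the zero vector and solve the resulting homogeneous system.
\[
c_1(1,2,3) + c_2(-2,1,1) + c_3(8,6,10) = (0,0,0)
\;\Longrightarrow\;
\begin{cases}
c_1 - 2c_2 + 8c_3 = 0\\
2c_1 + c_2 + 6c_3 = 0\\
3c_1 + c_2 + 10c_3 = 0
\end{cases}
\]
Solving gives the nontrivial solution $c_1 = -4$, $c_2 = 2$, $c_3 = 1$; equivalently $(8, 6, 10) = 4(1, 2, 3) - 2(-2, 1, 1)$. Hence the set is linearly dependent.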
Example 2 Show that the set {(3, -2, 2), (3, -1, 4), (1, 0, 5)} is linear independent in R3. Solution
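A sketch of the omitted computation, using the same approach:
\[
c_1(3,-2,2) + c_2(3,-1,4) + c_3(1,0,5) = (0,0,0)
\;\Longrightarrow\;
\begin{cases}
3c_1 + 3c_2 + c_3 = 0\\
-2c_1 - c_2 = 0\\
2c_1 + 4c_2 + 5c_3 = 0
\end{cases}
\]
The second equation gives $c_2 = -2c_1$, the first then gives $c_3 = 3c_1$, and substituting into the third gives $9c_1 = 0$. Thus $c_1 = c_2 = c_3 = 0$ is the only solution, so the set is linearly independent.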
Example 3 Consider the functions f(x) = x² + 1, g(x) = 3x – 1, h(x) = –4x + 1 in the vector space P2 of polynomials of degree ≤ 2. Show that the set of functions { f, g, h } is linearly independent. Solution
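A sketch of the argument, assuming (as is standard for function spaces) that the identity must hold for every x:
\[
c_1(x^2+1) + c_2(3x-1) + c_3(-4x+1) = 0 \text{ for all } x
\;\Longrightarrow\;
c_1x^2 + (3c_2 - 4c_3)x + (c_1 - c_2 + c_3) = 0,
\]
so $c_1 = 0$, $3c_2 - 4c_3 = 0$, and $c_1 - c_2 + c_3 = 0$. The last two equations then force $c_2 = c_3$ and $-c_2 = 0$, hence $c_1 = c_2 = c_3 = 0$, and the set is linearly independent.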
Theorem 4.8 A set consisting of two or more vectors in a vector space is linearly dependent if and only if it is possible to express one of the vectors as a linear combination of the other vectors. Proof
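A sketch of the standard two-direction argument (the textbook's proof may be worded differently). If $c_1v_1 + \cdots + c_mv_m = 0$ with some $c_i \neq 0$, then
\[
v_i = -\tfrac{c_1}{c_i}v_1 - \cdots - \tfrac{c_{i-1}}{c_i}v_{i-1} - \tfrac{c_{i+1}}{c_i}v_{i+1} - \cdots - \tfrac{c_m}{c_i}v_m,
\]
so $v_i$ is a linear combination of the other vectors. Conversely, if $v_i = d_1v_1 + \cdots + d_{i-1}v_{i-1} + d_{i+1}v_{i+1} + \cdots + d_mv_m$, then
\[
d_1v_1 + \cdots + (-1)v_i + \cdots + d_mv_m = 0
\]
is a relation with a nonzero coefficient, so the set is linearly dependent.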
Linear Dependence of {v1, v2} {v1, v2} linearly dependent: the vectors lie on a line. {v1, v2} linearly independent: the vectors do not lie on a line. Figure 4.21 Linear dependence and independence of {v1, v2} in R3.
Linear Dependence of {v1, v2, v3} {v1, v2, v3} linearly dependent: the vectors lie in a plane. {v1, v2, v3} linearly independent: the vectors do not lie in a plane. Figure 4.22 Linear dependence and independence of {v1, v2, v3} in R3.
Theorem 4.9 Let V be a vector space. Any set of vectors in V that contains the zero vector is linearly dependent. Proof
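A one-line sketch of the omitted proof: if $v_k = 0$ is the zero vector in the set $\{v_1, \ldots, v_m\}$, then
\[
0\,v_1 + \cdots + 1\,v_k + \cdots + 0\,v_m = 0
\]
is a relation whose coefficients are not all zero, so the set is linearly dependent.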
Theorem 4.10 Let the set {v1, …, vm} be linearly dependent in a vector space V. Any set of vectors in V that contains these vectors will also be linearly dependent. Proof
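A sketch of the omitted proof: since $\{v_1, \ldots, v_m\}$ is linearly dependent, there are scalars $c_1, \ldots, c_m$, not all zero, with $c_1v_1 + \cdots + c_mv_m = 0$. For any larger set $\{v_1, \ldots, v_m, u_1, \ldots, u_k\}$ containing these vectors,
\[
c_1v_1 + \cdots + c_mv_m + 0\,u_1 + \cdots + 0\,u_k = 0
\]
is still a relation whose coefficients are not all zero, so the larger set is also linearly dependent.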
Example 4 Let the set {v1, v2} be linearly independent. Prove that {v1 + v2, v1 – v2} is also linearly independent. Solution
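A sketch of the omitted solution: suppose $a(v_1+v_2) + b(v_1-v_2) = 0$. Rearranging gives
\[
(a+b)v_1 + (a-b)v_2 = 0.
\]
Since $\{v_1, v_2\}$ is linearly independent, $a + b = 0$ and $a - b = 0$, so $a = b = 0$. Hence $\{v_1 + v_2,\; v_1 - v_2\}$ is linearly independent.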
4.7 Bases and Dimension Definition A finite set of vectors {v1, …, vm} is called a basis for a vector space V if the set spans V and is linearly independent.
Standard Basis Definition The set of n vectors {(1, 0, …, 0), (0, 1, …, 0), …, (0, …, 1)} is a basis for Rn. This basis is called the standard basis for Rn. Proof
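A sketch of the omitted verification, writing $e_1 = (1, 0, \ldots, 0), \ldots, e_n = (0, \ldots, 0, 1)$: any vector of $R^n$ satisfies
\[
(x_1, \ldots, x_n) = x_1e_1 + x_2e_2 + \cdots + x_ne_n,
\]
so the set spans $R^n$; and $c_1e_1 + \cdots + c_ne_n = (c_1, \ldots, c_n) = 0$ forces every $c_i = 0$, so the set is linearly independent. It is therefore a basis.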
Example 1 Show that the set {(1, 0, -1), (1, 1, 1), (1, 2, 4)} is a basis for R3. Solution
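A worked sketch of the omitted solution (one of several possible approaches). Linear independence: $c_1(1,0,-1) + c_2(1,1,1) + c_3(1,2,4) = (0,0,0)$ gives
\[
\begin{cases}
c_1 + c_2 + c_3 = 0\\
c_2 + 2c_3 = 0\\
-c_1 + c_2 + 4c_3 = 0
\end{cases}
\]
whose only solution is $c_1 = c_2 = c_3 = 0$. Spanning: for an arbitrary $(x, y, z)$ the same coefficient matrix appears on the left, and since it is invertible (its determinant is 1), the system $c_1(1,0,-1) + c_2(1,1,1) + c_3(1,2,4) = (x, y, z)$ always has a solution. The set is therefore a basis for $R^3$.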
Example 2 Show that { f, g, h }, where f(x) = x² + 1, g(x) = 3x – 1, and h(x) = –4x + 1, is a basis for P2. Solution
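A sketch of the omitted solution, reusing the independence result from Example 3 of Section 4.6: linear independence of $\{f, g, h\}$ was shown there. For spanning, given an arbitrary polynomial $ax^2 + bx + c$ we need scalars $c_1, c_2, c_3$ with
\[
c_1(x^2+1) + c_2(3x-1) + c_3(-4x+1) = ax^2 + bx + c.
\]
Comparing coefficients gives $c_1 = a$, $3c_2 - 4c_3 = b$, $c_1 - c_2 + c_3 = c$, which has the solution $c_1 = a$, $c_2 = 4a - b - 4c$, $c_3 = 3a - b - 3c$. Thus $\{f, g, h\}$ spans $P_2$ and is a basis.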
Theorem 4.11 Let {v1, …, vn} be a basis for a vector space V. If {w1, …, wm} is a set of more than n vectors in V, then this set is linearly dependent. Proof We examine the identity c1w1 + … + cmwm = 0 (1) We will show that values of c1, …, cm, not all zero, exist satisfying this identity, thus proving that the vectors are linearly dependent. Since the set {v1, …, vn} is a basis for V, each of the vectors w1, …, wm can be expressed as a linear combination of v1, …, vn. Let w1 = a11v1 + … + a1nvn, …, wm = am1v1 + … + amnvn.
Theorem 4.11 Substituting for w1, …, wm into Equation (1) we get c1(a11v1 + … + a1nvn) + … + cm(am1v1 + … + amnvn) = 0 Rearranging, we get (c1a11 + … + cmam1)v1 + … + (c1a1n + … + cmamn)vn = 0 Since v1, …, vn are linearly independent, this identity can be satisfied only if the coefficients are all zero. Thus c1a11 + … + cmam1 = 0, …, c1a1n + … + cmamn = 0 Thus finding c's that satisfy Equation (1) reduces to finding solutions to this system of n homogeneous equations in the m variables c1, …, cm. Since m > n, the number of variables is greater than the number of equations. We know that such a system of homogeneous equations has many solutions; there are therefore nonzero values of the c's that satisfy Equation (1). Thus the set {w1, …, wm} is linearly dependent.
Theorem 4.12 Any two bases for a vector space V consist of the same number of vectors. Proof Let {v1, …, vn} and {w1, …, wm} be two bases for V. If we interpret {v1, …, vn} as a basis for V and {w1, …, wm} as a set of linearly independent vectors in V, then the previous theorem tells us that m ≤ n. Conversely, if we interpret {w1, …, wm} as a basis for V and {v1, …, vn} as a set of linearly independent vectors in V, then n ≤ m. Thus n = m, proving that both bases consist of the same number of vectors.
Definition If a vector space V has a basis consisting of n vectors, then the dimension of V is said to be n. We write dim(V) for the dimension of V. • A vector space that has a basis of finitely many vectors is finite dimensional. • Otherwise it is infinite dimensional.
Example 3 Consider the set {(1, 2, 3), (-2, 4, 1)} of vectors in R3. These vectors generate a subspace V of R3 consisting of all vectors of the form c1(1, 2, 3) + c2(-2, 4, 1). The vectors (1, 2, 3) and (-2, 4, 1) span this subspace. Furthermore, since the second vector is not a scalar multiple of the first vector, the vectors are linearly independent. Therefore {(1, 2, 3), (-2, 4, 1)} is a basis for V. Thus dim(V) = 2. We know that V is, in fact, a plane through the origin.
Theorem 4.13 • The origin is a subspace of R3. The dimension of this subspace is defined to be zero. • The one-dimensional subspaces of R3 are lines through the origin. • The two-dimensional subspaces of R3 are planes through the origin.
Theorem 4.13 Figure 4.23 One- and two-dimensional subspaces of R3
Theorem 4.13 Proof (a) Let V be the set {(0, 0, 0)}, consisting of a single element, the zero vector of R3. Let c be an arbitrary scalar. Since (0, 0, 0) + (0, 0, 0) = (0, 0, 0) and c(0, 0, 0) = (0, 0, 0), V is closed under addition and scalar multiplication. It is thus a subspace of R3. The dimension of this subspace is defined to be zero. (b) Let {v} be a basis for a one-dimensional subspace V of R3. Every vector in V is thus of the form cv, for some scalar c. We know that these vectors form a line through the origin. (c) Let {v1, v2} be a basis for a two-dimensional subspace V of R3. Every vector in V is of the form c1v1 + c2v2. V is thus a plane through the origin.
Theorem 4.14 Let {v1, …, vn} be a basis for a vector space V. Then each vector in V can be expressed uniquely as a linear combination of these vectors. Proof
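A sketch of the omitted proof: since $\{v_1, \ldots, v_n\}$ spans V, every vector $v$ in V can be written $v = c_1v_1 + \cdots + c_nv_n$. If also $v = d_1v_1 + \cdots + d_nv_n$, subtracting gives
\[
(c_1 - d_1)v_1 + \cdots + (c_n - d_n)v_n = 0,
\]
and linear independence forces $c_i = d_i$ for every $i$. The representation is therefore unique.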
Theorem 4.15 • Let V be a vector space of dimension n. • If S = {v1, …, vn} is a set of n linearly independent vectors in V, then S is a basis for V. • If S = {v1, …, vn} is a set of n vectors that spans V, then S is a basis for V.
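A brief sketch of why both parts hold (not stated on the slide). (a) If $S$ did not span V, some vector $v$ of V would lie outside span($S$); then in
\[
c_1v_1 + \cdots + c_nv_n + c\,v = 0
\]
we must have $c = 0$ (else $v$ would lie in the span), and hence all $c_i = 0$, so $S \cup \{v\}$ would be $n+1$ linearly independent vectors in an $n$-dimensional space, contradicting Theorem 4.11. (b) If $S$ spanned V but were linearly dependent, then by Theorem 4.8 one of its vectors would be a linear combination of the others and could be removed while the remaining vectors still spanned V; repeating if necessary would yield a basis with fewer than $n$ vectors, contradicting dim(V) = n.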
Example 4 Prove that the set {(1, 3, -1), (2, 1, 0), (4, 2, 1)} is a basis for R3. Solution
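A sketch of the omitted solution, using Theorem 4.15(a): since dim($R^3$) = 3, it is enough to show that the three vectors are linearly independent. The relation $c_1(1,3,-1) + c_2(2,1,0) + c_3(4,2,1) = (0,0,0)$ gives
\[
\begin{cases}
c_1 + 2c_2 + 4c_3 = 0\\
3c_1 + c_2 + 2c_3 = 0\\
-c_1 + c_3 = 0
\end{cases}
\]
The third equation gives $c_3 = c_1$, the first then gives $c_2 = -\tfrac{5}{2}c_1$, and the second reduces to $\tfrac{5}{2}c_1 = 0$. Hence $c_1 = c_2 = c_3 = 0$, the vectors are linearly independent, and by Theorem 4.15(a) they form a basis for $R^3$.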
Theorem 4.16 Let V be a vector space of dimension n. Let {v1, …, vm} be a set of m linearly independent vectors in V, where m<n. Then there exist vectors vm+1,…,vn such that {v1, …, vm, vm+1, …,vn } is a basis for V. Proof
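A sketch of the omitted proof: since $m < n$, the set $\{v_1, \ldots, v_m\}$ cannot span V (otherwise it would be a basis with fewer than $n$ vectors, contradicting Theorem 4.12). Choose $v_{m+1}$ in V but not in span$\{v_1, \ldots, v_m\}$. If
\[
c_1v_1 + \cdots + c_mv_m + c_{m+1}v_{m+1} = 0,
\]
then $c_{m+1} = 0$ (else $v_{m+1}$ would lie in the span), and independence of the original set forces the remaining coefficients to be zero, so $\{v_1, \ldots, v_{m+1}\}$ is linearly independent. Repeating this construction until there are $n$ vectors gives, by Theorem 4.15(a), a basis of V.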
Example 5 State (with a brief explanation) whether the following statements are true or false. Solution (a) The vectors (1, 2), (-1, 3), (5, 2) are linearly independent in R2. False: The dimension of R2 is two. Thus any three vectors are linearly dependent. (b) The vectors (1, 0, 0), (0, 2, 0), (1, 2, 0) span R3. False: The three vectors are linearly dependent. Thus they cannot span a three-dimensional space.
Example 5 Solution (c) {(1, 0, 2), (0, 1, -3)} is a basis for the subspace of R3 consisting of vectors of the form (a, b, 2a–3b). True: The vectors span the subspace since (a, b, 2a–3b) = a(1, 0, 2) + b(0, 1, -3) The vectors are also linearly independent since they are not collinear. (d) Any set of two vectors can be used to generate a two-dimensional subspace of R3. False: The two vectors must be linearly independent.
Space of Matrices Consider the vector space M22 of 2x2 matrices. The following matrices span M22.
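A minimal sketch of the spanning argument, assuming the matrices in question are the four standard unit matrices:
\[
\begin{pmatrix} a & b \\ c & d \end{pmatrix}
= a\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}
+ b\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}
+ c\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}
+ d\begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix},
\]
so every 2×2 matrix is a linear combination of these four matrices. The combination above is the zero matrix only when $a = b = c = d = 0$, so they are also linearly independent; they therefore form a basis, and dim($M_{22}$) = 4.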
Space of Polynomials Consider the vector space P2 of polynomials of degree ≤ 2. The functions x², x, 1 span P2.
Space of Cn Consider the complex vector space C2. The vectors (1, 0) and (0, 1) span C2 since an arbitrary vector (a+bi, c+di) can be written: (a+bi, c+di) = (a+bi)(1, 0) + (c+di)(0, 1) (1, 0) and (0, 1) are also linearly independent in C2. Thus {(1, 0), (0, 1)} is a basis for C2, and the dimension of C2 is 2.
4.8 Rank of a Matrix Definition Let A be an m × n matrix. The rows of A may be viewed as row vectors r1, …, rm, and the columns as column vectors c1, …, cn. Each row vector will have n components, and each column vector will have m components. The row vectors will span a subspace of Rn called the row space of A, and the column vectors will span a subspace of Rm called the column space of A.
Example 1 Consider the matrix The row vectors of A are These vectors span a subspace of R4 called the row space of A. The column vectors of A are These vectors span a subspace of R3 called the column space of A.
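As an illustrative stand-in for the matrix referred to on the slide (a hypothetical 3×4 matrix, not necessarily the textbook's), take
\[
A = \begin{pmatrix} 1 & 2 & 0 & 1 \\ 0 & 1 & 3 & 2 \\ 2 & 5 & 3 & 4 \end{pmatrix}.
\]
Its row vectors $(1,2,0,1)$, $(0,1,3,2)$, $(2,5,3,4)$ span a subspace of $R^4$, the row space of A, and its column vectors $(1,0,2)$, $(2,1,5)$, $(0,3,3)$, $(1,2,4)$ span a subspace of $R^3$, the column space of A.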
Theorem 4.17 The row space and the column space of a matrix A have the same dimension. Proof Let u1, …, um be the row vectors of A. The ith vector is ui = (ai1, ai2, …, ain) Let the dimension of the row space be s. Let the vectors v1, …, vs form a basis for the row space. Let the jth vector of this set be vj = (bj1, bj2, …, bjn)
Theorem 4.17 Each of the row vectors of A is a linear combination of v1, …, vs. Let ui = ci1v1 + ci2v2 + … + cisvs, for i = 1, …, m. Equating the jth components of the vectors on the left and right, we get aij = ci1b1j + ci2b2j + … + cisbsj This may be written (a1j, a2j, …, amj) = b1j(c11, c21, …, cm1) + b2j(c12, c22, …, cm2) + … + bsj(c1s, c2s, …, cms)
Theorem 4.17 This implies that each column vector of A lies in the space spanned by the single set of s vectors (c11, c21, …, cm1), …, (c1s, c2s, …, cms). Since s is the dimension of the row space of A, we get dim(column space of A) ≤ dim(row space of A) By similar reasoning, applied to the columns of A, we can show that dim(row space of A) ≤ dim(column space of A) Combining these two results we see that dim(row space of A) = dim(column space of A), proving the theorem.
Definition The dimension of the row space and the column space of a matrix A is called the rank of A. The rank of A is denoted rank(A).
Example 2 Determine the rank of the matrix Solution
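The matrix itself is not reproducible from this slide; as a hypothetical illustration of how such a rank computation goes, take
\[
B = \begin{pmatrix} 1 & 2 & 3 \\ 2 & 4 & 6 \\ 1 & 0 & 1 \end{pmatrix}.
\]
The second row is twice the first, so the row space is spanned by $(1,2,3)$ and $(1,0,1)$; these are not multiples of one another, hence linearly independent, and the row space has dimension 2. Thus rank$(B) = 2$. Consistent with Theorem 4.17, the column space also has dimension 2, since the third column $(3,6,1)$ is the sum of the first two.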
Theorem 4.18 The nonzero row vectors of a matrix A that is in reduced echelon form are a basis for the row space of A. The rank of A is the number of nonzero row vectors. Proof
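A sketch of the omitted proof: the nonzero rows of a reduced echelon form span the row space, since the zero rows contribute nothing. They are also linearly independent, because each nonzero row has a leading 1 in a column in which every other row has a 0; so in any linear combination of the rows equal to the zero vector,
\[
c_1r_1 + c_2r_2 + \cdots + c_kr_k = 0,
\]
looking at each leading-1 column in turn shows that every coefficient $c_i$ must be zero. The nonzero rows therefore form a basis for the row space, and the rank equals their number.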
Example 3 Find the rank of the matrix
Theorem 4.19 Let A and B be row equivalent matrices. Then A and B have the same row space, and rank(A) = rank(B).
Theorem 4.20 Let E be a reduced echelon form of a matrix A. The nonzero row vectors of E form a basis for the row space of A. The rank of A is the number of nonzero row vectors in E.
Example 4 Find a basis for the row space of the following matrix A, and determine its rank. Solution
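The matrix A is not reproducible from this slide; a hypothetical worked example of the same procedure, for a matrix of our own choosing, runs as follows. Performing R2 → R2 − 2R1 and R3 → R3 − R1, then R3 → R3 − R2 and R1 → R1 − 2R2, gives the reduced echelon form:
\[
\begin{pmatrix} 1 & 2 & 3 \\ 2 & 5 & 4 \\ 1 & 3 & 1 \end{pmatrix}
\;\longrightarrow\;
\begin{pmatrix} 1 & 0 & 7 \\ 0 & 1 & -2 \\ 0 & 0 & 0 \end{pmatrix}.
\]
By Theorem 4.20, $\{(1, 0, 7), (0, 1, -2)\}$ is a basis for the row space of this matrix, and its rank is 2.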