Properties of Solution Sets

In this section we use the RREF to develop additional properties of the solution set of a linear system.
• The case of no solutions, an inconsistent system, requires nothing further.
• The case of a unique solution will provide information about the rows and columns of the original coefficient matrix.
• However, our primary focus will be on the nature of the solution set of linear systems that have at least one free variable. For such systems we develop a geometric view of the solution set.

For these results keep in mind that row operations produce equivalent linear systems; that is, row operations do not change the solution set. Hence the information obtained from the RREF (or REF) applies to the original linear system and, of course, to its coefficient matrix.
Solution Set Structure. The linear system Ax = b with the given augmented matrix has a solution set consisting of all vectors in R5 of the form shown. Geometrically, the solution set of this system is a translation of a span of vectors. (When a specific vector v is added to or subtracted from each vector in a set S, we say the set S is translated by the vector v.) We will develop a corresponding algebraic way to view such a solution set.
The vectors associated with the free variables in the solution set have a particular property that is important with regard to the set of all solutions of the associated homogeneous linear system Ax = 0. We first investigate this property and then introduce concepts which provide a characterization, or a way of considering, the set of all solutions of Ax = 0. We will show that the set of all solutions to Ax = 0 has a structure (a pattern of behavior) that is easy to understand and yields insight into the nature of the solution set of homogeneous linear systems. Ultimately this will provide a foundation for describing algebraically the solution set of the nonhomogeneous linear system Ax = b.
Example: The linear system Ax = b has the augmented matrix shown, with the given RREF. Determine the general solution, using back substitution, and write it as a translation of a set of vectors.
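The slide's matrices are images and are not reproduced here, so the computation above can only be sketched on a made-up system. The following SymPy snippet (with a hypothetical augmented matrix chosen to have one free variable) carries out the same steps: reduce the augmented matrix to RREF, then read off the general solution.

```python
import sympy as sp

# Hypothetical augmented matrix [A | b] (NOT the slide's matrix),
# chosen so the system is consistent with one free variable.
aug = sp.Matrix([
    [1, 0,  2, 1],
    [0, 1, -1, 3],
    [1, 1,  1, 4],
])
A, b = aug[:, :-1], aug[:, -1]

R, pivots = aug.rref()           # row operations preserve the solution set
print(R)

# Back substitution from the RREF: x3 is free, x1 and x2 are basic.
x1, x2, x3 = sp.symbols('x1 x2 x3')
general = sp.linsolve((A, b), [x1, x2, x3])
print(general)                   # a particular vector plus multiples of one vector
```

Setting the free variable to 0 in the printed solution gives a particular solution; the coefficient of the free variable gives the vector whose span is translated.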
It is important that you master this material since it forms a core of ideas for further developments in linear algebra.

Definition: A set of vectors is called linearly independent provided the only way that a linear combination of the vectors produces the zero vector is when all the coefficients are zero. If there is some set of coefficients, not all zero, for which a linear combination of the vectors is the zero vector, then the set of vectors is called linearly dependent. That is, {v1, v2, ..., vk} is linearly dependent if we can find constants c1, c2, ..., ck, not all zero, such that c1v1 + c2v2 + ... + ckvk = 0 (the zero vector).

Example: Determine which of the following sets are linearly independent. STRATEGY: Form a linear combination of these vectors with arbitrary coefficients and set it equal to the zero vector. If the only way this can be true is when the coefficients are all zero, then the set is linearly independent.
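The strategy just stated can be carried out symbolically. A minimal sketch, using a hypothetical pair of vectors in R3 (the slide's sets are not reproduced here):

```python
import sympy as sp

# Hypothetical vectors; the slide's example sets are images and unavailable.
c1, c2 = sp.symbols('c1 c2')
v1 = sp.Matrix([1, 2, 0])
v2 = sp.Matrix([0, 1, 1])

combo = c1 * v1 + c2 * v2        # linear combination with arbitrary coefficients
# Set each entry equal to zero and solve for the coefficients.
sols = sp.solve(list(combo), [c1, c2], dict=True)
print(sols)                      # only the trivial solution => independent
```

If the only solution is c1 = c2 = 0, the set is linearly independent; any other solution exhibits the dependence relation explicitly.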
Example: Show that the given set is linearly independent.

Let's go back to the earlier linear system. We showed that the set of all solutions is given as follows: the general solution is the translation of the span of a linearly independent set.
When we use the RREF or REF of an augmented matrix to obtain the general solution as a translation of a linear combination of vectors with arbitrary coefficients, those vectors are linearly independent. This result is awkward to prove in the general case, so we do not pursue a formal proof here. We summarize it as follows:

The solution set of a linear system with infinitely many solutions is the translation of the span of a set S of linearly independent vectors. The number of linearly independent vectors in S is the same as the number of free variables in the solution set of the linear system.

If the linear system is homogeneous and has infinitely many solutions, then the solution set is just the span of a set S of linearly independent vectors, since the augmented column is all zeros and remains unchanged by the row operations that are used.
Example: Determine the solution set of the homogeneous linear system Ax = 0 when the RREF of [A | 0] is as shown. Note that the augmented matrix is in RREF, so we can immediately use back substitution to find the solution set.

Extending the Example: Suppose we had the same coefficient matrix A, but a nonhomogeneous linear system whose RREF is as shown. Find the solution set as the translation of a span of vectors.
We have the following general principle. If the linear system Ax = b has infinitely many solutions, then the solution set is the translation of the span of a set S of linearly independent vectors. We express this as x = xp + span{S}, where xp is a particular solution of the nonhomogeneous system Ax = b and the vectors in span{S} form the solution set of the associated homogeneous linear system Ax = 0. If we let xh represent an arbitrary vector in span{S}, then the set of solutions is given by x = xp + xh. We use the term particular solution for xp since its entries are completely determined; that is, the entries do not depend on any of the arbitrary constants. It is important to keep in mind that xh represents one vector in the span of a set of vectors. Note that A(xp + xh) = Axp + Axh = b + 0 = b.
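The identity A(xp + xh) = b can be checked symbolically. A sketch on a made-up system (the matrix, xp, and the parameter t below are illustrative, not from the slides):

```python
import sympy as sp

# Made-up system with infinitely many solutions.
A = sp.Matrix([[1, 2, 1],
               [2, 4, 0]])
b = sp.Matrix([3, 2])

xp = sp.Matrix([1, 0, 2])        # a particular solution: entries are fixed numbers
assert A * xp == b

t = sp.symbols('t')
xh = t * A.nullspace()[0]        # an arbitrary vector xh in span{S} = ns(A)
x = xp + xh
assert sp.expand(A * x) == b     # A(xp + xh) = Axp + Axh = b + 0 = b
print(x.T)
```

The assertion holds for symbolic t, i.e., for every choice of the arbitrary constant at once.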
The solution set of a homogeneous linear system Ax = 0 arises so frequently that we refer to it as the solution space or as the null space of matrix A. In addition we use the notation ns(A) to denote this special set of vectors. The term null space refers to the set of vectors x such that multiplication by A produces the null or zero vector; that is, Ax = 0. Since every homogeneous linear system is consistent, ns(A) consists either of a single vector or infinitely many vectors. If Ax = 0 has a unique solution, then ns(A) = {0}; that is, the only solution is the zero vector. As we have shown, if Ax = 0 has infinitely many solutions, then ns(A) = span{S}, where S is the set of vectors obtained from the information contained in an equivalent linear system in upper triangular form, REF, or RREF. In addition, it was argued that these vectors are linearly independent by virtue of choosing the arbitrary constants for the free variables.
Example: Find the null space of matrix A. So the null space of A, ns(A), is
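Since the slide's matrix is not reproduced, here is the same computation on a sample matrix. SymPy's `nullspace` method returns exactly the kind of basis described above: one vector per free variable, obtained from the RREF.

```python
import sympy as sp

# Sample matrix (not the matrix from the slide).
A = sp.Matrix([[1, 2, 0,  1],
               [0, 0, 1, -1]])

basis = A.nullspace()            # a linearly independent spanning set for ns(A)
for v in basis:
    assert A * v == sp.zeros(2, 1)   # each basis vector solves Ax = 0
print(len(basis), 'basis vectors for ns(A)')
```

Here the free variables are x2 and x4, so ns(A) is the span of two linearly independent vectors.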
Additional Structural Properties

Next we develop ideas that will assist us in determining the nature of the set ns(A) and other sets of vectors. The span of a set T, span{T}, is the set which consists of all possible linear combinations of the members of T. We say span{T} is generated by the members of T. A set is closed provided every linear combination of its members also belongs to the set. It follows that span{T} is a closed set, since a linear combination of the members of span{T} is just another linear combination of the members of T. Thus ns(A) is closed. (Verify.)
Definition: A subset of Rn (or Cn) that is closed is called a subspace of Rn (Cn). Observe that if A is an m × n real matrix then any solution of Ax = 0 is a vector in Rn. It then follows that ns(A) is a subspace of Rn. For the linear system Ax = 0, this means that any linear combination of solutions is another solution. But the nature of ns(A) is even simpler because of the way we have obtained the set S which generates ns(A); recall that ns(A) = span{S}. To develop the necessary ideas we next investigate the nature of linearly independent and linearly dependent sets of spanning vectors.

CASE: We begin with a set T of two distinct vectors in Rn: T = {v1, v2}. We can show that T = {v1, v2} is a linearly dependent set if and only if one vector is a scalar multiple of the other. This is equivalent to the statement that T = {v1, v2} is a linearly independent set if and only if neither vector is a scalar multiple of the other.
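The "Verify" exercise above, that any linear combination of solutions of Ax = 0 is another solution, can be sketched with symbolic coefficients (sample matrix; a and b are arbitrary symbols):

```python
import sympy as sp

# Sample matrix whose null space is two-dimensional.
A = sp.Matrix([[1, -1, 2],
               [2, -2, 4]])

u, w = A.nullspace()             # two independent solutions of Ax = 0
a, b = sp.symbols('a b')
combo = a * u + b * w            # an arbitrary linear combination of solutions
assert sp.expand(A * combo) == sp.zeros(2, 1)   # still a solution: ns(A) is closed
print('closure verified for symbolic a, b')
```

Because a and b are symbols, the single assertion covers every linear combination, which is exactly the closure property defining a subspace.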
CASE: One of the vectors in T = {v1, v2} is the zero vector. We can show that any set containing the zero vector is linearly dependent.

CASE: Let T = {v1, v2}. What can we say about the generators of span{T} if set T is linearly dependent? We can show that span{T} has exactly one vector that generates it.

General implication: If T is any linearly dependent set of vectors, then span{T} can be generated by fewer vectors from T. It also follows that if T is a linearly independent set of vectors, then span{T} cannot be generated by fewer vectors from T.
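The two-vector criterion can be illustrated concretely (sample vectors, not from the slides): a nontrivial null space of the matrix [v1 v2] signals dependence, and in that case one vector is a scalar multiple of the other.

```python
import sympy as sp

v1 = sp.Matrix([1, 2])
v2 = sp.Matrix([3, 6])                     # v2 = 3*v1, a scalar multiple
M = sp.Matrix.hstack(v1, v2)
assert len(M.nullspace()) == 1             # nontrivial c's exist => dependent

w = sp.Matrix([3, 5])                      # not a multiple of v1
assert sp.Matrix.hstack(v1, w).nullspace() == []   # only trivial c's => independent
print('two-vector criterion checked')
```

In the dependent case span{v1, v2} = span{v1}: a single vector generates the same subspace.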
Situation for sets with more than 2 vectors: For a set T with more than two vectors, T = {v1, v2, v3, ..., vr}, essentially the same properties hold. We need only revise statements as follows. T is linearly dependent if and only if (at least) one of the vectors in T is a linear combination of the other vectors. If T is a linearly dependent set, then the subspace span{T} can be generated by using a subset of T. That is, the information given by the vectors of T is redundant since we can use fewer vectors and generate the same subspace. If T is a linearly independent set, then any subset of fewer vectors cannot generate the whole subspace span{T}. That is, a linearly independent spanning set contains the minimal amount of information to generate a subspace. To indicate this we introduce the following terminology.
Definition: A linearly independent spanning set of a subspace is called a basis.

Example: Let T = {v1, v2} be a linearly independent set in R2. Then T is a basis for R2. That is, any pair of linearly independent vectors in R2 is a basis for R2.

Example: For a homogeneous linear system Ax = 0 with infinitely many solutions, our construction of the solution set S from either the RREF or REF produces a basis for the subspace ns(A).

Other subspaces associated with a matrix. Definition: For a matrix A, the span of the rows of A is called the row space of A, denoted row(A), and the span of the columns of A is called the column space of A, denoted col(A). If A is m × n, then the members of row(A) are vectors in Rn and the members of col(A) are vectors in Rm. Since the span of any set is closed, row(A) is a subspace of Rn and col(A) is a subspace of Rm. Our goal now is to determine a basis for each of these subspaces associated with matrix A.
How to find a basis for the row space of matrix A. Row operations applied to A manipulate its rows and (except for interchanges) produce linear combinations of its rows. If matrix B is row equivalent to A, then row(B) is the same as row(A), since a row of B is a linear combination of rows of A. It follows that row(A) = row(rref(A)). Suppose that rref(A) has k nonzero rows. These k rows have leading 1's in different columns. When we form a linear combination of these rows with coefficients c1, c2, ..., ck, each of the c's will appear by itself in an entry of the resulting vector. Equating the resulting vector to the zero (row) vector must then give c1 = c2 = ... = ck = 0. Hence the nonzero rows of rref(A) are linearly independent. Since row(rref(A)) = row(A), it follows that the nonzero rows of rref(A) form a basis for row(A).

How to find a basis for the column space of matrix A. To determine a basis for col(A) we first recall that the transpose converts columns to rows. Hence we have that col(A) = row(AT), and since row(AT) = row(rref(AT)), it follows that the nonzero columns of (rref(AT))T form a basis for col(A).
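Both recipes above can be sketched in a few lines (sample matrix; the number of nonzero rows of an RREF equals the number of pivot columns, so we slice by pivot count):

```python
import sympy as sp

# Sample matrix with one redundant row.
A = sp.Matrix([[1, 2, 3],
               [2, 4, 6],
               [1, 0, 1]])

# Basis for row(A): the nonzero rows of rref(A).
R, pivots = A.rref()
row_basis = [R.row(i) for i in range(len(pivots))]

# Basis for col(A): the nonzero columns of (rref(A^T))^T,
# i.e., the nonzero rows of rref(A^T), transposed back to columns.
RT, pivotsT = A.T.rref()
col_basis = [RT.row(i).T for i in range(len(pivotsT))]

print(row_basis)
print(col_basis)
```

Here the second row of A is twice the first, so both bases contain two vectors.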
Example: Determine a basis for row(A), col(A), and ns(A) where
Determining if a set of vectors is linearly independent or linearly dependent. We can use the techniques for computing bases for row(A) and col(A) just developed to determine if a set T = {v1, v2, ..., vk} in Rn is linearly independent or linearly dependent. Form a linear combination of the vectors in set T and set it equal to the zero vector. We want to find the values of the coefficients cj, j = 1, 2, ..., k, for which this holds. This is equivalent to finding a solution of the homogeneous linear system Ac = 0, where we write the vectors vj as the columns of A. We determine whether there is a nontrivial solution or only the trivial solution by computing the RREF of this linear system.
Example: Determine whether the set T is linearly independent or linearly dependent. Construct the coefficient matrix, then the augmented matrix, and find its RREF. Since the homogeneous system has only the trivial solution, set T is linearly independent.
Example: Determine whether the set T is linearly independent or linearly dependent. Find the RREF of the augmented matrix of the homogeneous linear system Ac = 0. We see there are nontrivial solutions, so set T is linearly dependent.
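The whole test reduces to a pivot count: with the vectors as columns of A, the system Ac = 0 has only the trivial solution exactly when every column of rref(A) contains a leading 1. A sketch on a sample set (the slide's vectors are not reproduced):

```python
import sympy as sp

# Sample set in R^3; here the third vector is the sum of the first two.
T = [sp.Matrix([1, 0, 1]),
     sp.Matrix([1, 1, 0]),
     sp.Matrix([2, 1, 1])]

A = sp.Matrix.hstack(*T)         # vectors v_j as the columns of A
R, pivots = A.rref()

# A leading 1 in every column means Ac = 0 has only the trivial solution.
independent = (len(pivots) == A.cols)
print('independent' if independent else 'dependent')
```

For this sample set the RREF has a non-pivot column, so nontrivial solutions exist and the set is linearly dependent.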
We next introduce two concepts which provide a count of the number of linearly independent rows of a matrix, and a count of the largest number of linearly independent vectors that can be present in a subset of a subspace.

Definition: The rank of a matrix A, denoted rank(A), is the number of leading 1's in rref(A). Since the nonzero rows of rref(A) form a basis for the row space of A, the rank of A is a count of the number of linearly independent rows in matrix A.

Definition: The dimension of a subspace V of Rn, denoted dim(V), is the number of vectors in a basis for V. So rank(A) is the dimension of the row space of A.

FACT: dim(row(A)) = dim(col(A)); that is, the dimension of the row space of A is the same as the dimension of the column space of A. Thus rank(A) gives the dimension of both the subspace row(A) and the subspace col(A). We state this without proof.
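These counts can be checked numerically on a sample matrix. The snippet also uses the standard relation dim(ns(A)) = n − rank(A) for an m × n matrix (one basis vector per free variable), which is the quantity the next example asks for:

```python
import sympy as sp

# Sample matrix: the third row is the sum of the first two.
A = sp.Matrix([[1, 2, 1, 0],
               [2, 4, 0, 2],
               [3, 6, 1, 2]])

r = A.rank()                     # number of leading 1's in rref(A)
assert r == A.T.rank()           # dim(row(A)) == dim(col(A)), the FACT above
assert len(A.nullspace()) == A.cols - r   # one null-space basis vector per free variable
print('rank =', r, ' dim(ns(A)) =', A.cols - r)
```

Transposing a matrix swaps rows and columns, so `A.T.rank()` counts independent columns of A; the assertion confirms the FACT for this matrix.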
Example: Compute the rank, dim(row(A)), dim(col(A)), and dim(ns(A)) for each of the following matrices.