Chapter 6.7 Determinants
In this chapter all matrices are square; for example: 1x1 (what is a 1x1 matrix, in fact?), 2x2, 3x3 • Our goal is to introduce a new concept, the determinant, which is defined only for square matrices, but for square matrices of any size
The determinant is, first of all, just a number; and, since we want to have a natural definition for it, we say that for 1x1 matrices the determinant is EXACTLY the matrix's single entry:
Notation: we use either the keyword “det” in front of the matrix OR we replace the brackets (or parentheses, if applicable) with vertical bars; examples are given below
Careful: for 1x1 matrices DO NOT confuse the vertical-bar notation with the absolute-value notation (the context will tell you whether the number is meant as a matrix, in which case it's the determinant, or as a number, in which case it's the absolute value)
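To make the notation concrete, here is a small illustrative instance (the example matrices on the original slide are not reproduced in the text, so these particular numbers are my own):
\[
\det\begin{bmatrix} 7 \end{bmatrix} = \begin{vmatrix} 7 \end{vmatrix} = 7,
\qquad
\det\begin{bmatrix} -4 \end{bmatrix} = -4,
\quad\text{whereas the absolute value } |-4| = 4.
\]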
What about bigger matrices? The idea is to define a bigger matrix's determinant in terms of smaller (sub-)matrices' determinants • Take a 2x2 matrix; a smaller matrix is a 1x1 matrix, whose determinant we already know; hence we can define the 2x2 matrix's determinant
Choose a row (usually the first one) • take the first entry in the row, and remove from the matrix the column corresponding to it and the chosen row; what’s left is a matrix of order 2-1=1
take the determinant of this matrix (we know how to compute it!); we call this the minor, and it's usually denoted by a capital M with the indices of the entry we started from (in our case M11)
The last thing we do for the 11 index is to build the cofactor, which is the minor multiplied by (-1) raised to the sum of the indices, in our case 1+1=2; the usual notation for the cofactor is c with the original indices, so here c11 = (-1)^(1+1)·M11 = M11
Take now the second entry and build its cofactor: remove the second column and the first row; what's left is again a 1x1 matrix (the entry in position 21) • compute the minor, M12
Get the cofactor: c12 = (-1)^(1+2)·M12 = -M12 • we have exhausted the row, and now we construct the sum of entry times cofactor over the whole row, written out below:
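The displayed formulas from the slide are not in the text; written out for a general 2x2 matrix, the steps above give:
\[
A=\begin{bmatrix} a_{11} & a_{12}\\ a_{21} & a_{22}\end{bmatrix},
\qquad
M_{11}=a_{22},\; c_{11}=(+1)\,a_{22},
\qquad
M_{12}=a_{21},\; c_{12}=(-1)\,a_{21},
\]
\[
\det A = a_{11}\,c_{11}+a_{12}\,c_{12} = a_{11}a_{22}-a_{12}a_{21}.
\]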
For the 2x2 matrix we get, in fact, an easy-to-remember formula: the product of the first diagonal (NW-SE) minus the product of the second diagonal (NE-SW) • Example:
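The example matrix on the slide is not reproduced in the text; any 2x2 matrix will do as an illustration, for instance:
\[
\begin{vmatrix} 3 & 1\\ 4 & 2 \end{vmatrix} = 3\cdot 2 - 1\cdot 4 = 2.
\]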
Things get a bit more complicated as we go to 3x3 matrices; we use the VERY SAME TECHNIQUE, though • take a general 3x3 matrix
Choose a row - again, usually we take the first row, so let's go with that one • take the first entry and compute its cofactor: remove the first column and the first row, and what's left is a 2x2 matrix
We compute this smaller (2x2) matrix's determinant (because now we know how!) and so we get the minor of index 11, namely M11 • finally, the cofactor: c11 = (+1)·M11
Similarly, for the second entry we remove the first row and the second column and compute the 2x2 minor M12; so, the cofactor is c12 = (-1)·M12 • and, for the last entry of this row, voilà the cofactor: c13 = (+1)·M13
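Putting the three cofactors together (the slide's displayed result is not in the text; this is the standard expansion along the first row):
\[
\det\begin{bmatrix} a_{11}&a_{12}&a_{13}\\ a_{21}&a_{22}&a_{23}\\ a_{31}&a_{32}&a_{33}\end{bmatrix}
= a_{11}\begin{vmatrix} a_{22}&a_{23}\\ a_{32}&a_{33}\end{vmatrix}
- a_{12}\begin{vmatrix} a_{21}&a_{23}\\ a_{31}&a_{33}\end{vmatrix}
+ a_{13}\begin{vmatrix} a_{21}&a_{22}\\ a_{31}&a_{32}\end{vmatrix}.
\]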
There actually is a way of remembering even this formula, reminiscent of the 2x2 matrix's determinant; again, we have a first diagonal (NW-SE), but also 2 “first half-diagonals”; the product of entries on each gets added; we have a second diagonal (NE-SW) and 2 “second half-diagonals”, and the product of entries on each, respectively, gets subtracted (for an alternative description, see the bottom of page 280)
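Written out, this diagonal scheme (the rule of Sarrus) gives the same number, just grouped differently:
\[
\det A = a_{11}a_{22}a_{33}+a_{12}a_{23}a_{31}+a_{13}a_{21}a_{32}
-\,a_{13}a_{22}a_{31}-a_{11}a_{23}a_{32}-a_{12}a_{21}a_{33}.
\]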
One interesting issue: as mentioned, you could choose any row you want (and, in fact, if you “look sideways”, you could do a very similar thing with columns!); but the process is fairly involved, right? So how can we be so sure we get the same number every time? Well, rest assured - it really works, regardless of the row (or column, in fact) you choose
Why would you choose a different row? For example, because that row has a lot of 0 (zero) entries; since you have to multiply those entries by their respective cofactors, and 0 times anything is 0, we don't need to compute those cofactors at all!
Example: • if you choose the first row, you need to compute all three cofactors; but if you choose the second row, you only need to compute the second entry's cofactor!
the determinant is hence as worked out below (I'm mentioning the other 2 cofactors, but, as you see, they get multiplied by 0, so we don't care what values they have):
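The matrix from the slide is not reproduced in the text; here is an illustrative one of my own, with second row (0, 4, 0) so that it also matches the “4” referred to in remark 1 below:
\[
\det\begin{bmatrix} 1 & 2 & 3\\ 0 & 4 & 0\\ 5 & 6 & 7\end{bmatrix}
= 0\cdot c_{21} + 4\cdot c_{22} + 0\cdot c_{23}
= 4\,(+1)\begin{vmatrix} 1 & 3\\ 5 & 7\end{vmatrix}
= 4\,(7-15) = -32.
\]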
As an exercise, try to prove that, by choosing the second row in a 2x2 matrix you get the same number (you can work with an actual matrix, or try the general form, with a’s)
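(A sketch of that verification for the general 2x2 matrix, expanding along the second row, where the minors are M21 = a12 and M22 = a11:)
\[
a_{21}\,c_{21}+a_{22}\,c_{22}
= a_{21}\,(-1)^{2+1}a_{12} + a_{22}\,(-1)^{2+2}a_{11}
= a_{11}a_{22}-a_{12}a_{21} = \det A.
\]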
Complicated as it was for 3x3 matrices, you can see how much more complicated it could get (4x4, 5x5 and so on); still, the method works just the same: the moment you know how to compute a 3x3 matrix's determinant, you can compute a 4x4 one; the moment you know how to compute a 4x4 matrix's determinant, you can compute a 5x5 one
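This recursive structure is easy to see in code; here is a minimal sketch (not part of the slides) of cofactor expansion along the first row, written in Python:

```python
def det(A):
    """Determinant of a square matrix A (a list of lists), computed by
    cofactor expansion along the first row -- the recursive idea above.
    Illustrative only; it recomputes many sub-determinants."""
    n = len(A)
    if n == 1:
        return A[0][0]                         # a 1x1 determinant is the entry itself
    total = 0
    for j in range(n):
        # minor: delete the first row and column j
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        cofactor = (-1) ** j * det(minor)      # sign (-1)^(1 + (j+1)) = (-1)^j
        total += A[0][j] * cofactor
    return total

print(det([[3, 1], [4, 2]]))                   # 2, matching the 2x2 example above
print(det([[1, 2, 3], [0, 4, 0], [5, 6, 7]]))  # -32, matching the example above
```

Each call spawns n smaller calls, which is exactly why this gets slow for 4x4, 5x5 matrices and beyond.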
But … computing a determinant is not always a hideously long and intricate task, because the way we compute this number leaves a few backdoors open • For instance: • 1. If each of the entries in a certain row (or column) of the matrix is 0, then its determinant is 0 (remember the example above? What if the 4 was also a 0?)
2. If two rows (or columns) are identical, then the determinant is 0 • 3. If the matrix is upper/lower triangular (in particular, if it is diagonal) then the determinant is equal to the product of the main (first) diagonal entries
4. If one modifies the matrix by adding a multiple of one row to another row (same for columns), the determinant doesn’t change - here you see the connection to elementary matrix operations!
5. If one interchanges two rows (or columns) the determinant changes sign • 6. If one multiplies the entries of a row (or column) by a number, the determinant gets multiplied by that number
7. If one multiplies the matrix by a scalar (which, if you remember, means multiplying all elements by that number - all rows, that is; see 6.) then the determinant gets multiplied by that number as many times as the matrix has rows (its size: for a 2x2 matrix, twice; for a 3x3 matrix, three times; and so on)
8. If one multiplies two matrices, the determinant of the product is the product of the determinants • 9. The determinant of the transpose of a matrix equals the determinant of the original matrix
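In symbols, properties 7-9 read (for n x n matrices A, B and a scalar k):
\[
\det(kA)=k^{\,n}\det(A),\qquad
\det(AB)=\det(A)\,\det(B),\qquad
\det(A^{T})=\det(A).
\]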
One last thing: as you can see, it's very convenient to have an upper triangular matrix (or a lower triangular one, or a diagonal one); on the other hand, when row-reducing a matrix (a square matrix, that is), that's exactly what we get
So … why not use this? As you can see, one elementary operation doesn't change the determinant at all (adding a multiple of one row to another - probably the most important one; see 4.); a second one only changes its sign (interchanging two rows; see 5.); and the last one multiplies the determinant by a controllable quantity (multiplying a certain row by that quantity; see 6.)
Here's an example, worked out below: • think of the first step as “factoring the 3 out of the second row”
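The slide's matrix is not in the text; here is an illustrative 2x2 instance of my own, chosen so that a 3 comes out of the second row and a -7 shows up afterwards, matching the remarks around it:
\[
\begin{vmatrix} 1 & 2\\ 3 & -15 \end{vmatrix}
= 3\begin{vmatrix} 1 & 2\\ 1 & -5 \end{vmatrix}
\quad\text{(factor 3 out of } R_2\text{)}
= 3\begin{vmatrix} 1 & 2\\ 0 & -7 \end{vmatrix}
\quad (R_2\to R_2-R_1)
= 3\cdot 1\cdot(-7) = -21.
\]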
(we now have an upper triangular matrix, so we can stop here - or, if you can afford to waste time (NOT DURING THE EXAM!), you can continue reducing the matrix by “factoring out” the -7 from the second row, etc.) • This method is called triangulation (for obvious reasons!)
It's especially useful for higher-order matrices (4x4, 5x5, etc.) since we don't have to compute many complicated cofactors, but rather use simple elementary row operations; both methods take time, though (in general, expect to spend a lot of time computing the determinant of a large matrix with few zeros …)