Matrix Algebra. Methods for Dummies, FIL, November 17 2004. Mikkel Wallentin, mikkel@pet.auh.dk
Sources
• www.sosmath.com
• www.mathworld.wolfram.com
• www.wikipedia.org
• Maria Fernandez’ slides (thanks!) from previous MFD course: http://www.fil.ion.ucl.ac.uk/spm/doc/mfd-2004.html
• Slides from SPM courses: http://www.fil.ion.ucl.ac.uk/spm/course/
Design matrix
The general linear model in matrix form: Y = Xβ + ε, where Y is the data vector, X the design matrix, β the parameter vector (the betas, here β1 to β9) and ε the error vector.
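To make the notation concrete, here is a minimal NumPy sketch of the pieces; the box-car regressor and all values are illustrative, not the design from the slide's figure:

```python
import numpy as np

n_scans = 20
boxcar = np.tile([1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0], 2)  # toy on/off regressor
X = np.column_stack([boxcar, np.ones(n_scans)])    # design matrix (20 x 2)
beta = np.array([2.0, 10.0])                       # parameter vector (the betas)
e = 0.5 * np.random.default_rng(0).standard_normal(n_scans)  # error vector

Y = X @ beta + e                                   # data vector: Y = X beta + e
print(Y.shape)                                     # (20,)
```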
Scalars, vectors and matrices
• Scalar: a variable described by a single number, e.g. image intensity (pixel value)
• Vector: a variable described by magnitude and direction
• Matrix: a rectangular array of scalars, e.g. square (3 x 3) or rectangular (3 x 2); dij denotes the element in the ith row, jth column
Matrices
• A matrix is defined by its number of rows and columns (e.g. an (m x n) matrix has m rows and n columns).
• A square matrix of order n is an (n x n) matrix.
Matrix addition
• Addition is defined for matrices of the same size and is performed element by element.
• Commutative: A + B = B + A
• Associative: (A + B) + C = A + (B + C)
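A small NumPy sketch of these rules (the example matrices are mine, not from the slide):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])
C = np.array([[0, 1], [1, 0]])

print(A + B)                                       # element-wise sum of same-size matrices
print(np.array_equal(A + B, B + A))                # True: commutative
print(np.array_equal((A + B) + C, A + (B + C)))    # True: associative
```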
Matrix multiplication
• Multiplication of a matrix by a constant scales every element.
• Rule: to perform the multiplication AB, where A is an (m x n) matrix and B a (k x l) matrix, we must have n = k. The result is an (m x l) matrix.
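A NumPy sketch of both operations, with illustrative matrices:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])       # (2 x 3)
B = np.array([[1, 0],
              [0, 1],
              [1, 1]])          # (3 x 2)

print(3 * A)       # multiplication by a constant scales every element
print(A @ B)       # (2 x 3)(3 x 2) -> (2 x 2): inner dimensions match (n = k = 3)
# A @ A would raise a ValueError: (2 x 3)(2 x 3) has mismatched inner dimensions
```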
Each parameter (the betas) assigns a weight to a single column in the design matrix: Y = Xβ + ε, with data vector Y, design matrix X, parameter vector β (here β1 to β9) and error vector ε.
Transposition
Transposition turns each column into a row and each row into a column: the (i, j) entry of A becomes the (j, i) entry of AT.
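In NumPy this is the .T attribute (example matrix is mine):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])       # (2 x 3)

print(A.T)                      # (3 x 2): columns become rows, rows become columns
print(A.shape, A.T.shape)       # (2, 3) (3, 2)
```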
Two vectors: inner and outer products
• Inner product = scalar: (1 x n)(n x 1) -> (1 x 1)
• Outer product = matrix: (n x 1)(1 x n) -> (n x n)
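A sketch of the two products for a pair of illustrative column vectors:

```python
import numpy as np

u = np.array([[1], [2], [3]])   # (3 x 1) column vector
v = np.array([[4], [5], [6]])   # (3 x 1) column vector

print(u.T @ v)   # inner product: (1 x 3)(3 x 1) -> (1 x 1), here [[32]]
print(u @ v.T)   # outer product: (3 x 1)(1 x 3) -> (3 x 3) matrix
```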
A contrast estimate is obtained by multiplying the parameter estimates by a transposed contrast vector cT, here cT = [1 0 0 0 0 0 0 0 0], in the model Y = Xβ + ε with data vector Y, design matrix X, parameter vector β and error vector ε.
T test - one dimensional contrasts - SPM{t}
• A contrast is a linear combination of parameters: cT β.
• Question: box-car amplitude > 0? That is, β1 > 0?
• With cT = [1 0 0 0 0 0 0 0], compute 1 x β1 + 0 x β2 + 0 x β3 + 0 x β4 + 0 x β5 + ... and divide by the estimated standard deviation:
T = contrast of estimated parameters / variance estimate = cT β / sqrt(s2 cT (XTX)+ c)
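A minimal NumPy sketch of this statistic using the formula on the slide; the function name t_contrast and the use of the pseudoinverse for the parameter estimates are my choices, not SPM code:

```python
import numpy as np

def t_contrast(Y, X, c):
    """t statistic for the contrast c'beta in Y = X beta + e,
    following the slide: t = c'beta_hat / sqrt(s2 * c'(X'X)+ c)."""
    beta_hat = np.linalg.pinv(X) @ Y                 # parameter estimates
    resid = Y - X @ beta_hat
    df = Y.shape[0] - np.linalg.matrix_rank(X)       # degrees of freedom
    s2 = resid @ resid / df                          # variance estimate
    var_c = s2 * c @ np.linalg.pinv(X.T @ X) @ c     # variance of the contrast
    return (c @ beta_hat) / np.sqrt(var_c)
```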
Identity matrices
• Is there a matrix that plays the same role as the number 1 in ordinary multiplication? Consider the n x n identity matrix In, which has ones on the diagonal and zeros everywhere else.
• For any n x n matrix A, we have A In = In A = A.
• For any n x m matrix A, we have In A = A and A Im = A.
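A quick NumPy check of these identities (example matrix is mine):

```python
import numpy as np

I3 = np.eye(3)                   # 3 x 3 identity matrix
A = np.array([[1, 2],
              [3, 4],
              [5, 6]])           # (3 x 2)

print(np.array_equal(I3 @ A, A))          # True: In A = A
print(np.array_equal(A @ np.eye(2), A))   # True: A Im = A
```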
F-test (SPM{F}): a reduced model or... multi-dimensional contrasts?
• An F-test tests multiple linear hypotheses at once. Example: does the DCT set model anything?
• H0: the true model is the reduced model X0 alone, i.e. the extra regressors X1 (β3-9) add nothing. This model (X0 plus X1)? Or this one (X0 only)?
• Test H0: cT β = 0, i.e. H0: β3-9 = (0 0 0 0 ...), where the contrast matrix cT has one row per tested parameter, each row containing a single 1 that picks out one of the X1 columns and zeros elsewhere.
• The resulting statistic map is the SPM{F}.
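A sketch of the idea, comparing the residual sums of squares of the full and reduced models; this illustrates the extra-sum-of-squares F statistic and is not SPM's actual implementation:

```python
import numpy as np

def f_test(Y, X_full, X_reduced):
    """Extra-sum-of-squares F statistic comparing a full model against a
    nested reduced model; an illustration, not SPM's implementation."""
    def rss_and_rank(X):
        beta = np.linalg.pinv(X) @ Y
        resid = Y - X @ beta
        return float(resid @ resid), np.linalg.matrix_rank(X)

    rss_r, p_r = rss_and_rank(X_reduced)
    rss_f, p_f = rss_and_rank(X_full)
    n = Y.shape[0]
    # variance explained by the extra regressors, per extra parameter,
    # relative to the residual variance of the full model
    return ((rss_r - rss_f) / (p_f - p_r)) / (rss_f / (n - p_f))
```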
Inverse matrices
• Definition: a matrix A is called nonsingular or invertible if there exists a matrix B such that AB = BA = In.
• Notation: a common notation for the inverse of a matrix A is A-1, so A A-1 = A-1 A = In.
• The inverse matrix is unique when it exists. So if A is invertible, then A-1 is also invertible and (A-1)-1 = A.
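A NumPy illustration with a small invertible matrix of my choosing:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])

A_inv = np.linalg.inv(A)
print(A_inv)                                   # [[ 3. -1.] [-5.  2.]]
print(np.allclose(A @ A_inv, np.eye(2)))       # True: A A-1 = I
print(np.allclose(np.linalg.inv(A_inv), A))    # True: (A-1)-1 = A
```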
Determinants
• Determinants are mathematical objects that are very useful in the analysis and solution of systems of linear equations (i.e. GLMs).
• The determinant is a function that associates a scalar det(A) to every square matrix A.
• The fundamental geometric meaning of the determinant is that it acts as the scale factor for volume when A is regarded as a linear transformation.
• A matrix A has an inverse matrix A-1 if and only if det(A) ≠ 0.
• Determinants can only be found for square matrices.
• For a 2 x 2 matrix A with entries a, b (first row) and c, d (second row), det(A) = ad - bc. Let's have a closer look at that, and at how it enters the inverse, on the next slide.
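A NumPy check of the 2 x 2 formula against np.linalg.det, plus a singular example (both matrices are mine):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
a, b, c, d = A.ravel()

print(a * d - b * c)          # -2.0: the 2 x 2 formula ad - bc
print(np.linalg.det(A))       # -2.0 (up to floating-point error)

S = np.array([[1.0, 2.0],
              [2.0, 4.0]])    # second row is twice the first
print(np.linalg.det(S))       # ~0: singular, so S has no inverse
```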
Matrix Inverse - Calculations
• For a 2 x 2 matrix A with entries a, b (first row) and c, d (second row), the inverse is A-1 = 1/(ad - bc) times the matrix with rows (d, -b) and (-c, a), i.e. 1/det(A) times the adjugate. Note: this requires det(A) ≠ 0.
• A general matrix can be inverted using methods such as Gauss-Jordan elimination, Gaussian elimination or LU decomposition.
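A compact Gauss-Jordan sketch, assuming the input is square and invertible; in practice you would simply call np.linalg.inv:

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Invert a square matrix by Gauss-Jordan elimination on [A | I].
    A bare-bones sketch (assumes A is invertible); use np.linalg.inv in practice."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])    # augmented matrix [A | I]
    for i in range(n):
        pivot = np.argmax(np.abs(M[i:, i])) + i    # partial pivoting for stability
        M[[i, pivot]] = M[[pivot, i]]              # swap pivot row into place
        M[i] /= M[i, i]                            # scale pivot row so pivot = 1
        for j in range(n):
            if j != i:
                M[j] -= M[j, i] * M[i]             # eliminate column i in other rows
    return M[:, n:]                                # right half is now A^-1

A = np.array([[2.0, 1.0], [5.0, 3.0]])
print(np.allclose(gauss_jordan_inverse(A), np.linalg.inv(A)))   # True
```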
System of linear equations
Imagine a drink made of egg, milk and orange juice. Some of the properties of these ingredients (calories and protein per unit of each) are described in a table. If we now want to make a drink with 540 calories and 25 g of protein, the problem of finding the right amounts of the ingredients can be formulated as a system of linear equations, or equivalently in matrix form as AX = B.
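A NumPy sketch of solving such a system; the nutritional values below are made-up placeholders (the slide's actual table values are not in the text), and a third equation fixing the total amount is added so the system is square:

```python
import numpy as np

# Hypothetical values per 100 g; columns = egg, milk, orange juice
A = np.array([[150.0, 60.0, 45.0],    # calories per 100 g
              [ 12.0,  3.3,  0.7],    # protein (g) per 100 g
              [  1.0,  1.0,  1.0]])   # total amount (in units of 100 g)
b = np.array([540.0, 25.0, 5.0])      # target: 540 kcal, 25 g protein, 500 g drink

x = np.linalg.solve(A, b)             # amounts of egg, milk, OJ (x 100 g)
print(x)
```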
A similar problem
The GLM poses the same kind of problem: Y = Xβ + ε, with data vector Y, design matrix X, parameter vector β (the betas, here β1 to β9) and error vector ε. Here the unknowns are the betas, and we want the values that best explain the data.
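Because X is generally not square (many more scans than regressors) and the data contain noise, the system cannot be solved exactly; instead the parameters are estimated by least squares. A sketch with simulated data (all names and values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n_scans, n_regressors = 100, 9
X = np.column_stack([rng.standard_normal((n_scans, n_regressors - 1)),
                     np.ones(n_scans)])           # last column: constant term
beta_true = np.arange(1.0, 10.0)                  # simulated "true" betas 1..9
Y = X @ beta_true + rng.standard_normal(n_scans)  # Y = X beta + e

beta_hat = np.linalg.pinv(X) @ Y                  # least-squares estimate
print(np.round(beta_hat, 2))                      # close to 1..9
```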
Cramer's rule
• Consider the linear system (in matrix form) AX = B, where A is the coefficient matrix, B the nonhomogeneous term, and X the unknown column matrix.
• Theorem: the linear system AX = B has a unique solution if and only if A is invertible. In this case, the solution is given by the so-called Cramer's formulas: xi = det(Ai) / det(A), where the xi are the unknowns of the system (the entries of X), and the matrix Ai is obtained from A by replacing the ith column by the column B (whose entries are the bi).
• Thank you Bent Kramer!
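A direct implementation of Cramer's formulas (the helper cramer_solve is mine); for anything beyond small systems, np.linalg.solve is the practical choice:

```python
import numpy as np

def cramer_solve(A, B):
    """Solve AX = B via Cramer's rule: x_i = det(A_i) / det(A),
    where A_i is A with its ith column replaced by B."""
    det_A = np.linalg.det(A)
    if np.isclose(det_A, 0.0):
        raise ValueError("A is singular: no unique solution")
    X = np.empty(len(B))
    for i in range(len(B)):
        A_i = A.copy()
        A_i[:, i] = B                  # replace ith column by B
        X[i] = np.linalg.det(A_i) / det_A
    return X

A = np.array([[2.0, 1.0], [5.0, 3.0]])
B = np.array([4.0, 7.0])
print(cramer_solve(A, B))                                       # [ 5. -6.]
print(np.allclose(cramer_solve(A, B), np.linalg.solve(A, B)))   # True
```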