Short recapitulation of matrix basics

A vector can be interpreted as a file of data. A matrix is a collection of vectors and can be interpreted as a data base. The red matrix in the slide figure contains three column vectors. Handling biological data is most easily done with a matrix approach: an Excel worksheet is a matrix.
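A minimal numpy sketch of this idea (the numbers are invented for illustration): three column vectors, each a "file" of four measurements, stacked into one matrix that acts as the data base.

    import numpy as np

    # Three column vectors: three variables measured on four samples
    v1 = np.array([1.0, 2.0, 3.0, 4.0])
    v2 = np.array([2.0, 1.0, 0.0, 1.0])
    v3 = np.array([5.0, 3.0, 2.0, 2.0])

    # Stack them as columns: the matrix is the data base
    A = np.column_stack([v1, v2, v3])
    print(A.shape)    # (4, 3): 4 rows (cases), 3 columns (variables)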
The first subscript denotes rows, the second columns; m and n define the dimension of a matrix: A has m rows and n columns. A row vector is a matrix with a single row; a column vector is a matrix with a single column. A symmetric matrix is a matrix where A_m,n = A_n,m. A diagonal matrix is square and symmetric, with nonzero entries only on the main diagonal. A matrix with one row and one column is a scalar (ordinary number). The unit matrix I is the diagonal matrix with ones on the diagonal.
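The following numpy sketch (matrix entries chosen arbitrarily) illustrates these definitions:

    import numpy as np

    A = np.array([[1, 2, 3],
                  [4, 5, 6]])       # m = 2 rows, n = 3 columns
    print(A.shape)                  # (2, 3)

    S = np.array([[2, 7],
                  [7, 5]])
    print(np.allclose(S, S.T))      # symmetric: S[m, n] == S[n, m] -> True

    D = np.diag([3.0, 1.0, 4.0])    # diagonal matrix: square and symmetric
    I = np.eye(3)                   # unit (identity) matrix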
For a non-singular square matrix A the inverse A^-1 is defined by A·A^-1 = A^-1·A = I. Singular matrices are those where some rows or columns can be expressed as a linear combination of others; such rows or columns contain no additional information, they are redundant (for example r2 = 2·r1 and r3 = 2·r1 + r2). A matrix is singular if its determinant is zero. For products of non-singular matrices, (A·B)^-1 = B^-1·A^-1 ≠ A^-1·B^-1. The inverse of a 2x2 matrix A = [[a, b], [c, d]] is A^-1 = 1/det(A) · [[d, -b], [-c, a]], where det A, the determinant of A, is a·d - b·c. A linear combination of vectors v1 … vn is k1·v1 + k2·v2 + … + kn·vn; the vectors are linearly dependent, and a matrix built from them is singular, if this combination can equal the zero vector with at least one of the parameters k not zero.
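A short numerical check of these rules (the example matrices are made up):

    import numpy as np

    A = np.array([[4.0, 7.0],
                  [2.0, 6.0]])
    print(np.linalg.det(A))                    # a*d - b*c = 24 - 14 = 10: non-singular
    Ainv = np.linalg.inv(A)
    print(np.allclose(A @ Ainv, np.eye(2)))    # A * A^-1 = I -> True

    # Redundant rows: r2 = 2*r1, r3 = 2*r1 + r2 -> singular, determinant 0
    B = np.array([[1.0, 2.0, 3.0],
                  [2.0, 4.0, 6.0],
                  [4.0, 8.0, 12.0]])
    print(np.linalg.det(B))                    # 0.0 -> no inverse exists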
Basic matrix operations: the scalar product k·A multiplies every element of A by k. Addition and subtraction are element-wise and require matrices of equal dimension. The inner or dot product of two vectors x and y of equal length is x·y = Σ x_i·y_i. The basic rule of matrix multiplication: the element c_i,j of C = A·B is the dot product of row i of A with column j of B; the number of columns of A must equal the number of rows of B.
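These operations in numpy (values invented for illustration):

    import numpy as np

    x = np.array([1.0, 2.0, 3.0])
    y = np.array([4.0, 5.0, 6.0])
    print(np.dot(x, y))       # inner (dot) product: 1*4 + 2*5 + 3*6 = 32

    A = np.array([[1, 2],
                  [3, 4]])
    B = np.array([[5, 6],
                  [7, 8]])
    print(A + B)              # element-wise addition, equal dimensions required
    print(3 * A)              # scalar product: every element multiplied by 3
    print(A @ B)              # c[i, j] = row i of A (dot) column j of B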
The general solution of a linear system A·X = B is X = A^-1·B, since A^-1·A = I (the identity matrix) and I·X = X. This is only possible if A is not singular; if A is singular the system has no unique solution. Systems with a unique solution are those where the number of independent equations equals the number of unknowns, i.e. A is square and not singular.
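A sketch of solving such a system with numpy (the system itself is a made-up example with two independent equations and two unknowns):

    import numpy as np

    # x + 2y = 5,  3x + 4y = 11
    A = np.array([[1.0, 2.0],
                  [3.0, 4.0]])
    b = np.array([5.0, 11.0])

    x = np.linalg.solve(A, b)        # numerically preferable to inv(A) @ b
    print(x)                         # [1. 2.]
    print(np.allclose(A @ x, b))     # True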
A worked ecological example of such a system (slide figure; the numbers are not reproduced here): solving the system for the species Aspilota sp2 and Aspilota sp5 yields a low reproductive rate r for both species; they are therefore prone to fast extinction.
Orthogonal vectors

The dot product of two orthogonal vectors is zero. If orthogonal vectors have unit length they are called orthonormal. A system of n orthogonal vectors spans an n-dimensional hypervolume (a Cartesian system). In ecological modelling orthogonal vectors are of particular importance: they define linearly independent variables. In the slide figure a point on the unit circle has coordinates X = cos(a) and Y = sin(a), at distance d = 1 from the origin. Multiplying an orthogonal matrix V with its transpose gives the identity matrix; the transpose of an orthogonal system is identical to its inverse.
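A sketch with a rotation matrix, one standard example of an orthogonal matrix (the angle is arbitrary):

    import numpy as np

    a = np.radians(30.0)
    V = np.array([[np.cos(a), -np.sin(a)],
                  [np.sin(a),  np.cos(a)]])    # orthonormal columns

    print(np.dot(V[:, 0], V[:, 1]))            # orthogonal columns: dot product 0
    print(np.allclose(V.T @ V, np.eye(2)))     # V^T * V = I -> True
    print(np.allclose(V.T, np.linalg.inv(V)))  # transpose equals inverse -> True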
Eigenvalues and eigenvectors

Multiplication of a vector with a square matrix defines a new vector that points in a different direction: the matrix defines a transformation in space. How is vector A transformed into vector B? The vectors whose direction does not change during the transformation are the eigenvectors. In general we define X·U = λ·U, where U is an eigenvector and λ the associated eigenvalue of the square matrix X. Image transformation: the matrix X contains all the information necessary to transform the image.
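A minimal numpy check of the defining equation (the matrix is an arbitrary symmetric example):

    import numpy as np

    X = np.array([[2.0, 1.0],
                  [1.0, 2.0]])
    lam, U = np.linalg.eig(X)     # eigenvalues lam, eigenvectors as columns of U

    u = U[:, 0]
    print(np.allclose(X @ u, lam[0] * u))   # X * U = lambda * U: direction unchanged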
Some properties of eigenvectors

If Λ is the diagonal matrix of eigenvalues and U the matrix of eigenvectors, then A·U = U·Λ, hence A = U·Λ·U^-1. The eigenvectors of symmetric matrices are orthogonal. Eigenvectors do not change after a matrix is multiplied by a scalar k; the eigenvalues are also multiplied by k. The product of all eigenvalues equals the determinant of a matrix; the determinant is zero if at least one of the eigenvalues is zero, and in this case the matrix is singular. If A is triangular or diagonal, the eigenvalues of A are the diagonal entries of A.
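These properties can be verified numerically; a sketch with an arbitrary symmetric matrix:

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])
    lam, U = np.linalg.eig(A)

    L = np.diag(lam)
    print(np.allclose(A, U @ L @ np.linalg.inv(U)))      # A = U * Lambda * U^-1
    print(np.allclose(U.T @ U, np.eye(2)))               # symmetric A: orthogonal eigenvectors
    print(np.isclose(np.prod(lam), np.linalg.det(A)))    # product of eigenvalues = det(A)

    lam_k, _ = np.linalg.eig(5 * A)                      # scaling A by k = 5
    print(np.allclose(np.sort(lam_k), np.sort(5 * lam))) # eigenvalues scaled by k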
Worked example (slide figure; the numbers are not reproduced here): a matrix M, its eigenvalues, and the matrix U whose columns are the eigenvectors of M. The largest eigenvalue is associated with the left (dominant) eigenvector. The eigenvectors are orthonormal, as for any symmetric matrix: U^T·U = I.
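Since the slide's numbers are not reproduced here, the sketch below uses a stand-in symmetric matrix to show how the dominant eigenvector is picked out:

    import numpy as np

    M = np.array([[2.0, 1.0],
                  [1.0, 3.0]])          # stand-in for the slide's matrix M
    lam, U = np.linalg.eig(M)

    dominant = U[:, np.argmax(lam)]     # eigenvector of the largest eigenvalue
    print(lam.max(), dominant)
    print(np.allclose(U.T @ U, np.eye(2)))   # U^T * U = I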
A geometrical interpretation of eigenvalues

(Slide figure: a cloud of data points centred on (Xmean, Ymean), with the two eigenvectors of the correlation matrix drawn as axes of lengths λ1 and λ2.) The eigenvectors of the correlation matrix define the major axes of the data; the eigenvalues define the lengths of these axes.
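A sketch of this interpretation with simulated correlated data (the data are random and only for illustration):

    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.normal(size=500)
    y = 0.8 * x + 0.6 * rng.normal(size=500)   # two correlated variables

    R = np.corrcoef(x, y)                      # 2 x 2 correlation matrix
    lam, U = np.linalg.eig(R)
    print(U)      # columns: major axes of the data cloud, centred on the means
    print(lam)    # lengths of these axes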
The eigenvector ellipse

For a 2x2 correlation matrix R = [[1, r], [r, 1]] the eigenvalues are λ = 1 + r and λ = 1 - r: the eigenvalues of a correlation (similarity) matrix are linearly linked to the coefficient of correlation. (Slide figure: the eigenvector ellipse around the point (Xmean, Ymean).)
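A quick numerical check of the 1 ± r relation (r chosen arbitrarily):

    import numpy as np

    r = 0.7
    R = np.array([[1.0, r],
                  [r, 1.0]])
    print(np.sort(np.linalg.eigvals(R)))   # [1 - r, 1 + r] = [0.3, 1.7]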
Eigenvectors and information content

A matrix is a data base that contains a certain amount of information. The left and right sides of an equation contain the same amount of information, so the eigenvectors take over the information content of the data base (the matrix), and the eigenvalues define how much information each eigenvector contains. The eigenvalue is a measure of correlation; the squared eigenvalue is therefore a measure of the variance explained by the associated eigenvector. The eigenvector of the largest eigenvalue is called the dominant eigenvector and contains the largest part of the information of the associated data base.
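One common convention reports the share of information per eigenvector as that eigenvalue divided by the sum of all eigenvalues; the sketch below uses this convention (the slide's squared-eigenvalue measure would be computed the same way from lam**2):

    import numpy as np

    R = np.array([[1.0, 0.7],
                  [0.7, 1.0]])
    lam = np.sort(np.linalg.eigvals(R))[::-1]   # dominant eigenvalue first

    print(lam / lam.sum())                      # share per eigenvector: [0.85, 0.15]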