Eigenvalues and eigenvectors

Equilibrium
[Figure: births, deaths and the resulting population increase plotted against time t]
Population increase = births − deaths. With N: population size, b: birth rate, d: death rate, one time step gives the net reproduction rate R = 1 + bΔt − dΔt. If the population is age structured and contains k age classes we get one such balance equation per class, where the numbers of surviving individuals from class i to class j are given by class-specific survival rates.
Leslie matrix

Assume you have a population of organisms that is age structured. Let f_x denote the fecundity (rate of reproduction) at age class x, let s_x denote the fraction of individuals that survive to the next age class x+1 (survival rate), and let n_x denote the number of individuals in age class x. We can express these assumptions in a matrix model called the Leslie model. We have w−1 age classes, where w is the maximum age of an individual; L is a square matrix. The numbers per age class at time t+1 are the dot product of the Leslie matrix with the abundance vector N at time t: N(t+1) = L · N(t).
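A minimal NumPy sketch of this projection step; the four age classes, fecundities and survival rates below are made-up values for illustration, not the figures from the slides:

```python
import numpy as np

# Hypothetical values for a population with four age classes.
f = np.array([0.0, 1.5, 2.0, 0.5])   # fecundity f_x of age class x
s = np.array([0.5, 0.8, 0.6])        # survival rate s_x from class x to x+1

# Leslie matrix: fecundities in the first row, survival rates on the subdiagonal.
L = np.zeros((4, 4))
L[0, :] = f
L[np.arange(1, 4), np.arange(3)] = s

N_t = np.array([100.0, 50.0, 20.0, 5.0])   # abundance vector at time t
N_next = L @ N_t                           # N(t+1) = L · N(t)
print(N_next)
```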
The sum of all fecundities (each weighted by the abundance of its age class) gives the number of newborns. n_0·s_0 gives the number of individuals in the first age class, and n_{w−1} = s_{w−2}·n_{w−2} gives the number of individuals in the last class.
The Leslie model is a linear approach: it assumes stable fecundity and mortality rates. The effect of the initial age composition disappears over time, and the age composition approaches an equilibrium, although the whole population might go extinct. Population growth or decline is often exponential.
An example

Important properties:
Eventually all age classes grow or shrink at the same rate.
Initial growth depends on the age structure.
Early reproduction contributes more to population growth than late reproduction.
In the long run this population dies out: its reproduction rates are too low to counterbalance the high mortality rates.
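To see the first property numerically one can simply iterate the projection. The sketch below reuses the hypothetical matrix from above (which happens to grow rather than die out); after enough steps every age class changes by the same factor per time step:

```python
import numpy as np

L = np.array([[0.0, 1.5, 2.0, 0.5],
              [0.5, 0.0, 0.0, 0.0],
              [0.0, 0.8, 0.0, 0.0],
              [0.0, 0.0, 0.6, 0.0]])

N = np.array([100.0, 50.0, 20.0, 5.0])
for t in range(50):
    N_new = L @ N
    if t >= 48:                 # print per-class growth factors of the last two steps
        print(N_new / N)        # all classes approach the same common growth rate
    N = N_new
```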
Leslie matrix

Does the Leslie approach predict a stationary point where population abundances do not change any more? We are looking for a vector that does not change direction when multiplied with the Leslie matrix: L·U = λ·U, hence (L − λ·I)·U = 0. This vector is called the eigenvector U of the matrix. Eigenvectors are only defined for square matrices. I: identity matrix.
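A sketch of how this stationary direction can be found numerically for the hypothetical matrix used above: np.linalg.eig returns all eigenvalues and eigenvectors, and the eigenvector belonging to the dominant eigenvalue gives the stable age composition.

```python
import numpy as np

L = np.array([[0.0, 1.5, 2.0, 0.5],
              [0.5, 0.0, 0.0, 0.0],
              [0.0, 0.8, 0.0, 0.0],
              [0.0, 0.0, 0.6, 0.0]])

eigenvalues, eigenvectors = np.linalg.eig(L)
k = np.argmax(np.abs(eigenvalues))       # index of the dominant eigenvalue
u = eigenvectors[:, k].real
u = u / u.sum()                          # rescale to proportions: stable age distribution
print(eigenvalues[k].real)               # asymptotic growth rate λ
print(u)                                 # stable age composition
```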
The insulin–glycogen system

At high blood glucose levels insulin stimulates glycogen synthesis and inhibits glycogen breakdown. The change in glycogen concentration can be modelled by the sum of a constant production term and a concentration-dependent breakdown term. At equilibrium the vector {−f, g} is the eigenvector of the dispersion matrix and gives the stationary point; the value −1 is called the eigenvalue of this system.
How to transform vector A into vector B?

[Figure: vectors A and B in the x–y plane before and after the transformation]
Multiplication of a vector with a square matrix defines a new vector that points in a different direction; the matrix defines a transformation in space. The vectors that do not change direction during the transformation are the eigenvectors. In general we define X·U = λ·U, where U is the eigenvector and λ the eigenvalue of the square matrix X.
Image transformation: X contains all the information necessary to transform the image.
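A small numerical check of this definition (the 2×2 matrix is arbitrary, chosen only for illustration): multiplying an eigenvector by the matrix merely rescales it, while a generic vector changes direction.

```python
import numpy as np

X = np.array([[2.0, 1.0],
              [1.0, 2.0]])             # an arbitrary square matrix

lams, U = np.linalg.eig(X)
u = U[:, 0]                            # an eigenvector of X
print(X @ u)                           # same direction as u ...
print(lams[0] * u)                     # ... only rescaled by the eigenvalue

v = np.array([1.0, 0.0])               # a generic vector
print(X @ v)                           # points in a different direction than v
```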
The basic equation

A·U = λ·U. The matrices A and Λ have the same properties: we have diagonalized the matrix A and reduced the information contained in A to a characteristic value λ, the eigenvalue.
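A sketch of this diagonalization with NumPy, assuming A has linearly independent eigenvectors: with U the matrix of eigenvectors and Λ the diagonal matrix of eigenvalues, U⁻¹·A·U gives Λ and U·Λ·U⁻¹ recovers A.

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])            # an arbitrary example matrix

lams, U = np.linalg.eig(A)
Lam = np.diag(lams)                   # Λ: diagonal matrix of eigenvalues

print(np.linalg.inv(U) @ A @ U)       # ≈ Λ
print(U @ Lam @ np.linalg.inv(U))     # ≈ A
```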
An n×n matrix has n eigenvalues and n eigenvectors. Symmetric matrices and their transposes have identical eigenvectors and eigenvalues. Eigenvectors of symmetric matrices are orthogonal.
How to calculate eigenvectors and eigenvalues?

The equation (A − λ·I)·u = 0 holds either in the trivial case u = 0 or if the determinant det(A − λ·I) = 0.
The general solutions of 2×2 matrices

For a 2×2 matrix (for example a dispersion matrix or a distance matrix) the characteristic polynomial is λ² − tr(A)·λ + det(A) = 0, so λ₁,₂ = [tr(A) ± √(tr(A)² − 4·det(A))] / 2.
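A quick check of this quadratic solution against the numerical routine, using an arbitrary symmetric 2×2 matrix as a stand-in for a dispersion matrix:

```python
import numpy as np

A = np.array([[3.0, 2.0],
              [2.0, 6.0]])                      # example 2x2 dispersion (covariance) matrix

tr, det = np.trace(A), np.linalg.det(A)
disc = np.sqrt(tr**2 - 4 * det)
lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2   # λ = [tr ± sqrt(tr² − 4·det)] / 2

print(lam1, lam2)
print(np.linalg.eigvals(A))                     # same values from the numerical routine
```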
This system is indeterminate: the eigenvector is only determined up to a constant factor and is obtained by matrix reduction.
Higher order matrices

Characteristic polynomial: eigenvalues and eigenvectors can only be computed analytically up to the fourth degree of the characteristic polynomial; there is no general algebraic solution beyond degree four. Higher order matrices need numerical solutions.
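A sketch of the numerical route for a larger matrix: np.poly returns the coefficients of the characteristic polynomial and np.roots its roots, which agree with np.linalg.eigvals (the 5×5 matrix is random, only to show that no analytic solution is attempted):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5))            # 5x5 matrix: degree-5 characteristic polynomial

coeffs = np.poly(A)                    # coefficients of det(A − λI), highest power first
print(np.sort(np.roots(coeffs)))       # eigenvalues as roots of the polynomial
print(np.sort(np.linalg.eigvals(A)))   # eigenvalues from the standard numerical routine
```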
The power method to find the largest eigenvalue

The power method is an iterative process that starts from an initial guess of the eigenvector to approximate the eigenvalue. Let the first component u₁₁ of u₁ be 1. Multiply the guess by the matrix and rescale the result so that its first component becomes 1 again; the rescaling factor gives the next guess for λ. Repeat this procedure until the difference λ_{n+1} − λ_n is less than a predefined number ε. Having the eigenvalues, the eigenvectors come immediately from solving the linear system using matrix reduction.
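A minimal sketch of this recipe: multiply by the matrix, rescale so that the first component is 1, take the rescaling factor as the next guess for λ, and stop when successive guesses differ by less than ε. Matrix and tolerance are arbitrary choices for illustration.

```python
import numpy as np

def power_method(A, eps=1e-10, max_iter=1000):
    u = np.ones(A.shape[0])          # initial guess with first component 1
    lam_old = 0.0
    for _ in range(max_iter):
        v = A @ u
        lam = v[0]                   # rescaling factor: next guess for the eigenvalue
        u = v / v[0]                 # rescale so the first component is 1 again
        if abs(lam - lam_old) < eps:
            break
        lam_old = lam
    return lam, u

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
print(power_method(A))               # largest eigenvalue and the corresponding eigenvector
print(np.max(np.linalg.eigvals(A)))  # check against the numerical routine
```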
Some properties of eigenvectors

If Λ is the diagonal matrix of eigenvalues: A = U·Λ·U⁻¹.
The eigenvectors of symmetric matrices are orthogonal.
Eigenvectors do not change when a matrix is multiplied by a scalar k; the eigenvalues are also multiplied by k.
The product of all eigenvalues equals the determinant of a matrix. The determinant is zero if at least one of the eigenvalues is zero; in this case the matrix is singular.
If A is triangular or diagonal, the eigenvalues of A are the diagonal entries of A.
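These properties are easy to confirm numerically; a small sketch with an arbitrary symmetric matrix S and an arbitrary triangular matrix T:

```python
import numpy as np

S = np.array([[2.0, 1.0],
              [1.0, 3.0]])                   # symmetric matrix
lams, U = np.linalg.eig(S)

print(U[:, 0] @ U[:, 1])                     # ≈ 0: eigenvectors are orthogonal
print(np.linalg.eigvals(3 * S), 3 * lams)    # scaling S by k multiplies the eigenvalues by k
print(np.prod(lams), np.linalg.det(S))       # product of eigenvalues equals the determinant

T = np.array([[1.0, 5.0],
              [0.0, 4.0]])                   # triangular matrix
print(np.linalg.eigvals(T))                  # eigenvalues are the diagonal entries 1 and 4
```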
Page Rank

In large webs (1 − d)/N is very small, so the ranking becomes a standard eigenvector problem: the requested ranking is simply contained in the largest eigenvector of P.
[Figure: example web of four pages A, B, C and D linking to each other]
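A hedged sketch of this eigenvector formulation for a tiny web of four pages; the link structure, the damping factor d = 0.85 and the page names are assumptions for illustration only:

```python
import numpy as np

# Hypothetical link structure among four pages (who links to whom is assumed here).
links = {'A': ['B', 'C'], 'B': ['C'], 'C': ['A'], 'D': ['C']}
pages = ['A', 'B', 'C', 'D']
N, d = len(pages), 0.85                       # damping factor d

# Column-stochastic link matrix M: M[i, j] = 1/outdegree(j) if page j links to page i.
M = np.zeros((N, N))
for j, p in enumerate(pages):
    for q in links[p]:
        M[pages.index(q), j] = 1.0 / len(links[p])

P = d * M + (1 - d) / N                       # Google matrix
lams, U = np.linalg.eig(P)
r = U[:, np.argmax(lams.real)].real
r = r / r.sum()                               # ranking: normalized largest eigenvector of P
print(dict(zip(pages, np.round(r, 3))))
```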
Principal axes

[Figure: data scatter with principal axes u1 and u2, before and after rotation into the new system]
Principal axes define the longer and the shorter radius of an oval around the scatter of data points. The quotient of the longer to the shorter principal axis measures how closely the data points are associated (similar to the coefficient of correlation). The principal axes span a new Cartesian system; principal axes are orthogonal. In the new system the data points lie close to the new x-axis and the variance within the new system is much smaller.
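A sketch of the principal axes of a simulated two-dimensional scatter (the data are made up): the axes are the eigenvectors of the covariance matrix, and the ratio of the eigenvalues indicates how strongly the points cluster around the first axis.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = 0.8 * x + rng.normal(scale=0.3, size=200)   # correlated scatter
data = np.column_stack([x, y])

C = np.cov(data, rowvar=False)                  # 2x2 covariance (dispersion) matrix
lams, U = np.linalg.eig(C)
order = np.argsort(lams)[::-1]
lams, U = lams[order], U[:, order]

print(U[:, 0])                  # u1: direction of the longer principal axis
print(U[:, 1])                  # u2: orthogonal shorter axis
print(lams[0] / lams[1])        # eigenvalue ratio: strength of the association
```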
Major axis regression

The largest major axis defines a regression line through the data points {x_i, y_i}. The major axis is identical with the largest eigenvector of the associated covariance matrix; the lengths of the axes are given by the eigenvalues. The eigenvalues therefore measure the association between x and y. The first principal axis is given by the largest eigenvalue. Major axis regression minimizes the Euclidean distances of the data points to the regression line.
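Following the same idea, the MAR slope can be read off the largest eigenvector of the covariance matrix. The helper below is a hypothetical sketch, not a library function, and the simulated data are for illustration only:

```python
import numpy as np

def major_axis_slope(x, y):
    """Slope of the major axis: largest eigenvector of the covariance matrix."""
    C = np.cov(x, y)
    lams, U = np.linalg.eig(C)
    u1 = U[:, np.argmax(lams)]        # eigenvector of the largest eigenvalue
    return u1[1] / u1[0]              # slope = ratio of its y to x component

rng = np.random.default_rng(2)
x = rng.normal(size=200)
y = 1.0 * x + rng.normal(scale=0.5, size=200)
print(major_axis_slope(x, y))
```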
The relationship between ordinary least squares (OLS) and major axis (MAR) regression
Ordinary least squares regression (OLS) vs. major axis regression (MAR): errors in the x and y variables cause OLS regression to predict lower slopes, while major axis regression is closer to the correct slope. The MAR slope is always steeper than the OLS slope. If both variables have error terms, MAR should be preferred.
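A small simulation of this effect (assumed setup: true slope 1, measurement error added to both variables): the OLS slope is attenuated, while the MAR slope stays closer to the true value.

```python
import numpy as np

rng = np.random.default_rng(3)
true_x = rng.normal(size=1000)
x = true_x + rng.normal(scale=0.5, size=1000)        # x measured with error
y = 1.0 * true_x + rng.normal(scale=0.5, size=1000)  # true slope = 1, y also noisy

C = np.cov(x, y)
ols = C[0, 1] / C[0, 0]              # OLS slope: cov(x, y) / var(x)

lams, U = np.linalg.eig(C)
u1 = U[:, np.argmax(lams)]
mar = u1[1] / u1[0]                  # MAR slope from the largest eigenvector

print(ols, mar)                      # typically ols < mar, with mar closer to 1
```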
MAR is not stable after rescaling of only one of the variables (for instance rescaling one axis from days to days/360). MAR should not be used for comparing slopes if the variables have different dimensions and were measured in different units, because the slope then depends on the way of measurement. If both variables are rescaled in the same manner (by the same factor) this problem does not appear. OLS regression retains the correct scaling factor; MAR does not.
Scaling a matrix: a simple way to take the power of a square matrix. With A = U·Λ·U⁻¹ we get Aⁿ = U·Λⁿ·U⁻¹, so only the diagonal elements of Λ have to be raised to the power n.
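A sketch of this shortcut: diagonalize once, raise only the diagonal elements to the n-th power and transform back; the result matches repeated matrix multiplication (the matrix and exponent are arbitrary examples).

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

lams, U = np.linalg.eig(A)
n = 5
A_pow = U @ np.diag(lams**n) @ np.linalg.inv(U)   # A^n = U · Λ^n · U⁻¹

print(A_pow)
print(np.linalg.matrix_power(A, n))               # same result by direct multiplication
```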
The variance and covariance of data according to the principal axes

[Figure: a data point (x; y) in the original x–y system and the principal axes u1, u2]
The vector of data points in the new system comes from the transformation according to the principal axes. Eigenvectors are normalized to a length of one, and the eigenvectors are orthogonal. The variance of variable k in the new system is equal to the eigenvalue of the k-th principal axis. The covariance of variables j and k in the new system is zero: the new variables are independent.
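A check of these statements on simulated data (the scatter is made up, as before): projecting the centred data onto the eigenvectors gives new variables whose covariance matrix is diagonal, with the eigenvalues as variances.

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(size=500)
y = 0.7 * x + rng.normal(scale=0.4, size=500)
data = np.column_stack([x, y])

C = np.cov(data, rowvar=False)
lams, U = np.linalg.eig(C)                 # columns of U: orthonormal eigenvectors

Z = (data - data.mean(axis=0)) @ U         # coordinates in the principal-axis system
print(np.cov(Z, rowvar=False).round(3))    # ≈ diag(eigenvalues): covariances are zero
print(lams.round(3))
```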