Environmental Data Analysis with MatLab, Lecture 8: Solving Generalized Least Squares Problems
SYLLABUS
Lecture 01 Using MatLab
Lecture 02 Looking At Data
Lecture 03 Probability and Measurement Error
Lecture 04 Multivariate Distributions
Lecture 05 Linear Models
Lecture 06 The Principle of Least Squares
Lecture 07 Prior Information
Lecture 08 Solving Generalized Least Squares Problems
Lecture 09 Fourier Series
Lecture 10 Complex Fourier Series
Lecture 11 Lessons Learned from the Fourier Transform
Lecture 12 Power Spectra
Lecture 13 Filter Theory
Lecture 14 Applications of Filters
Lecture 15 Factor Analysis
Lecture 16 Orthogonal functions
Lecture 17 Covariance and Autocorrelation
Lecture 18 Cross-correlation
Lecture 19 Smoothing, Correlation and Spectra
Lecture 20 Coherence; Tapering and Spectral Analysis
Lecture 21 Interpolation
Lecture 22 Hypothesis testing
Lecture 23 Hypothesis Testing continued; F-Tests
Lecture 24 Confidence Limits of Spectra, Bootstraps
Purpose of the lecture: use prior information to solve exemplary problems.
Failure-proof least squares: add information to the problem that guarantees that matrices like [G^T G] are never singular. Such information is called prior information.
Examples of prior information: the density of soil will be around 1500 kg/m3, give or take 500 or so; chemical components sum to 100%; pollutant transport is subject to the diffusion equation; water in rivers always flows downhill.
Linear prior information: Hm = h, with covariance Ch.
Simplest example: model parameters near known values. Hm = h with H = I, h = [10, 20]^T, and Ch = diag(25, 25); that is, m1 = 10 ± 5, m2 = 20 ± 5, with m1 and m2 uncorrelated.
The Normal p.d.f. defines an "error in prior information", Eh = (Hm − h)^T Ch^-1 (Hm − h), in which the individual errors are weighted by their certainty.
Now suppose that we observe some data: d = d^obs, with covariance Cd.
Represent the observations with a Normal p.d.f.: p(d) ∝ exp[ −½ (d^obs − Gm)^T Cd^-1 (d^obs − Gm) ], where d^obs are the observations and Gm is the mean of the data predicted by the model.
This Normal p.d.f. defines an "error in data", Ed = (d^obs − Gm)^T Cd^-1 (d^obs − Gm): the prediction error weighted by its certainty.
Generalized Principle of Least Squares: the best m^est is the one that minimizes the total error E = Ed + Eh with respect to m, as justified by Bayes' Theorem in the last lecture.
Generalized least squares solution: the pattern is the same as ordinary least squares, m^est = [G^T Cd^-1 G + H^T Ch^-1 H]^-1 [G^T Cd^-1 d^obs + H^T Ch^-1 h], but with more complicated matrices.
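This formula can be typed almost verbatim in MatLab. A minimal sketch, using the "model parameters near known values" example from above together with a made-up observation of the sum m1 + m2 (the observation, its variance, and all numbers are assumptions for illustration only):

% hypothetical example: priors m1 = 10 +/- 5, m2 = 20 +/- 5 (uncorrelated),
% plus one observation of their sum, d = 33 +/- 1
G  = [1, 1];           % data kernel: the observation is m1 + m2
d  = 33;               % observed datum
Cd = 1^2;              % variance of the observation
H  = eye(2);           % prior information: m is near h
h  = [10; 20];
Ch = 5^2 * eye(2);     % prior covariance (uncorrelated, +/- 5)

% generalized least squares solution
mest = (G'*(Cd\G) + H'*(Ch\H)) \ (G'*(Cd\d) + H'*(Ch\h));

The estimate ends up close to the prior values but shifted so that its sum is close to the observed 33.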
(New material) How to use the Generalized Least Squares equations.
Generalized least squares is equivalent to solving Fm = f by ordinary least squares.
Top part: the data equation weighted by its certainty, σd^-1 { Gm = d }, where σd^-1 expresses the certainty of the measurement.
Bottom part: the prior information equation weighted by its certainty, σh^-1 { Hm = h }, where σh^-1 expresses the certainty of the prior information.
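In MatLab, stacking the two weighted equations is just a matter of concatenating the blocks. A minimal sketch, assuming G, d, H, h and scalar weights sigmad and sigmah have already been defined:

% build F and f by stacking the weighted data equation on top of the
% weighted prior information equation, then solve Fm = f by least squares
F = [ G/sigmad ; H/sigmah ];    % top: sigmad^-1 G, bottom: sigmah^-1 H
f = [ d/sigmad ; h/sigmah ];    % top: sigmad^-1 d, bottom: sigmah^-1 h
mest = (F'*F) \ (F'*f);         % equivalently, mest = F\f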
Example: no prior information, but the data equation weighted by its certainty. This is called "weighted least squares".
(Figure: straight-line fit with no prior information but the data equation weighted by its certainty; the fit passes close to the data with low variance and is less constrained by the data with high variance.)
Another example: prior information that the model parameters are small, m ≈ 0, so H = I and h = 0. Assume uncorrelated errors with uniform variances: Cd = σd^2 I and Ch = σm^2 I.
Solving Fm = f by ordinary least squares, m^est = [F^T F]^-1 F^T f, works out to m^est = [G^T G + ε^2 I]^-1 G^T d with ε = σd/σm.
Called "damped least squares": m^est = [G^T G + ε^2 I]^-1 G^T d with ε = σd/σm.
• ε = 0: minimize the prediction error.
• ε → ∞: minimize the size of the model parameters.
• 0 < ε < ∞: minimize a combination of the two.
Advantages: really easy to code, mest = (G'*G+(e^2)*eye(M))\(G'*d); and it always works, since [G^T G + ε^2 I] is never singular for ε > 0.
Disadvantages: ε often needs to be determined empirically, and prior information that the model parameters are small is not always sensible.
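A small self-contained sketch of damped least squares on synthetic data; the data kernel, true model, noise level, and choice of ε below are all made up for illustration:

N = 100;  M = 20;
G = randn(N, M);                      % an arbitrary, made-up data kernel
mtrue = zeros(M, 1);  mtrue(5) = 1;   % true model: mostly small parameters
sigmad = 0.1;
dobs = G*mtrue + sigmad*randn(N, 1);  % synthetic noisy data

e = 0.1;                              % epsilon = sigmad/sigmam, chosen empirically
mest = (G'*G + (e^2)*eye(M)) \ (G'*dobs);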
The model parameters represent the values of a function m(x) at equally spaced increments along the x-axis.
The function is approximated by its values at a sequence of x's, with spacing Δx: m(x) → m = [m1, m2, m3, …, mM]^T.
A rough function has a large second derivative; a smooth function is one that is not rough, so a smooth function has a small second derivative.
(Figure: m(x) versus x near xi.) The i-th row of H is (Δx)^-2 [0, 0, 0, …, 0, 1, -2, 1, 0, …, 0, 0, 0], the finite-difference approximation to the 2nd derivative at xi, with the -2 in column i.
What to do about m1 and mM? There are not enough points for a 2nd derivative there. Two possibilities: no prior information for m1 and mM, or prior information about flatness (the first derivative).
(Figure: m(x) versus x near x1.) The first row of H is (Δx)^-1 [-1, 1, 0, …, 0], the finite-difference approximation to the 1st derivative at x1.
Example problem: fill in the missing model parameters so that the resulting curve is smooth. (Figure: observed data d plotted against x, with gaps.)
The model parameters, m: an ordered list of all the model parameters, m = [m1, m2, …, mM]^T.
Data equation, Gm = d: the data kernel G "associates" a measured model parameter with an unknown model parameter. The data are just model parameters that have been observed, so each row of G contains a single 1, in the column of the observed parameter.
The prior information equation, Hm = h: "smooth interior" / "flat ends", with h = 0.
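A sketch of how H and h might be built for the "smooth interior / flat ends" case, using a sparse matrix (the values of M and Dx are assumptions):

M  = 101;                    % number of model parameters (assumed)
Dx = 1.0;                    % sample spacing (assumed)
H  = spalloc(M, M, 3*M);     % sparse, at most 3 non-zeros per row

H(1, 1:2) = [-1, 1] / Dx;                % first row: flatness (1st derivative) at x1
for i = 2:(M-1)
    H(i, i-1:i+1) = [1, -2, 1] / Dx^2;   % interior rows: smoothness (2nd derivative) at xi
end
H(M, M-1:M) = [-1, 1] / Dx;              % last row: flatness (1st derivative) at xM

h = zeros(M, 1);             % prior information: all these derivatives are near zero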
Put them together into the Generalized Least Squares equation Fm = f, with F = [ σd^-1 G ; σm^-1 H ] and f = [ σd^-1 d ; 0 ]. Choose σd/σm << 1 so that the data take precedence over the prior information.
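Continuing the sketch above, a hypothetical gap-filling setup (the observation indices and values are made up; H, h and M come from the previous block):

iobs = [1; 10; 25; 60; 101];          % indices of the observed model parameters (made up)
dobs = [1.0; 2.0; 1.5; 0.5; 1.0];     % their observed values (made up)
N = length(iobs);

G = spalloc(N, M, N);                 % each row has a single 1 ...
for k = 1:N
    G(k, iobs(k)) = 1;                % ... in the column of the observed parameter
end

sigmad = 1;  sigmam = 100;            % sigmad/sigmam << 1: data take precedence
F = [ G/sigmad ; H/sigmam ];
f = [ dobs/sigmad ; h/sigmam ];
mest = (F'*F) \ (F'*f);               % generalized least squares solution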
(Figure: graph of the solution, m versus x. The solution passes close to the data and is smooth.)
Two MatLab issues.
Issue 1: matrices like G and F can be quite big, but contain mostly zeros. Solution 1: use "sparse matrices", which don't store the zeros.
Issue 2: matrices like G^T G and F^T F are not as sparse as G and F. Solution 2: solve the equation by a method, such as "biconjugate gradients", that doesn't require the calculation of G^T G or F^T F.
Using "sparse matrices", which don't store the zeros: N=200000; M=100000; F=spalloc(N,M,3*M); creates a 200000 × 100000 matrix ("sparse allocate") that can hold up to 300000 non-zero elements. Note that an ordinary matrix of this size would have 20,000,000,000 elements.
Once allocated, sparse matrices are used just like ordinary matrices; they just consume less memory.
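For example (the element values below are made up), elements of a sparse matrix allocated with spalloc are set and read with ordinary subscripts:

N = 200000;  M = 100000;
F = spalloc(N, M, 3*M);    % as allocated above
F(1,1)   = 1.5;            % set a single element
F(2,1:3) = [1, -2, 1];     % set a small block
full(F(1:3,1:3))           % inspect a small corner as an ordinary (full) matrix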
Issue 2: use a biconjugate gradient solver to avoid calculating G^T G and F^T F. Suppose that we want to solve F^T F m = F^T f. The standard way would be mest = (F'*F)\(F'*f); but that requires that we compute F'*F.
A "biconjugate gradient" solver requires only that we be able to multiply a vector v by F^T F, where the solver supplies the vector v. So we have to calculate y = F^T F v; the trick is to calculate t = F v first, and then calculate y = F' * t. This is done in a MatLab function, afun().
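A minimal sketch of the bicg()/afun() pattern. The tolerance and iteration limit are arbitrary choices, F and f are assumed to have been built as above, and this version of afun() receives F through an anonymous handle rather than a global variable (the exact signature in the lecture's own code may differ). The transpose flag that bicg passes is ignored, because F'F is symmetric:

% in the main script:
tol = 1e-10;  maxit = 3*size(F,2);
mest = bicg(@(v,transp_flag) afun(v,F), F'*f, tol, maxit);

% one possible afun(), in its own file or as a local function:
function y = afun(v, F)
    t = F*v;      % multiply by F first ...
    y = F'*t;     % ... then by F', so F'*F is never formed explicitly
end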