
Presentation Transcript


  1. Lecture 8 Advanced Topics in Least Squares - Part Two -

  2. Concerning the Homework: the dirty little secret of data analysis

  3. You often spend more time futzing with reading files that are in inscrutable formats than on the intellectually interesting side of data analysis.

  4. Sample MatLab Code - a routine to read a text file
     cs = importdata('test.txt','=');  % choose non-occurring delimiter to force complete line into one cell;
                                       % returns "cellstr" data type: array of strings
     Ns = length(cs);
     mag = zeros(Ns,1);
     Nm = 0;
     for i = [1:Ns]
       s = char(cs(i));                % convert "cellstr" element to string
       smag = s(48:50);
       stype = s(51:52);
       if( stype == 'Mb' )
         Nm = Nm+1;
         mag(Nm,1) = str2num(smag);    % convert string to number
       end
     end
     mag = mag(1:Nm);
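     A hedged aside, not on the original slide: the comparison stype == 'Mb' is an element-by-element character comparison and only works here because both strings have length 2; strcmp(stype,'Mb') is the more robust idiom, and str2double(smag) is a lighter-weight alternative to str2num(smag) when converting a single number.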

  5. What can go wrong in least-squares: in m = [G^T G]^{-1} G^T d, the matrix G^T G is singular, so its inverse [G^T G]^{-1} does not exist

  6. EXAMPLE - a straight line fit: d_i = m_1 + m_2 x_i, with
     d = [d_1, d_2, d_3, …, d_N]^T,  m = [m_1, m_2]^T,  G = [1 x_1; 1 x_2; 1 x_3; …; 1 x_N]
     G^T G = [ N, Σ_i x_i ; Σ_i x_i, Σ_i x_i^2 ]
     det(G^T G) = N Σ_i x_i^2 - [Σ_i x_i]^2
     [G^T G]^{-1} is singular when the determinant is zero

  7. det(G^T G) = N Σ_i x_i^2 - [Σ_i x_i]^2 = 0 when:
     N = 1, only one measurement (x, d):  N Σ_i x_i^2 - [Σ_i x_i]^2 = x^2 - x^2 = 0;
     you can't fit a straight line to only one point
     N > 1, but all data measured at the same x:  N Σ_i x_i^2 - [Σ_i x_i]^2 = N^2 x^2 - N^2 x^2 = 0;
     measuring the same point over and over doesn't help
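     As a quick numerical check (a sketch not on the original slides; the toy values of N and x are assumed), the MATLAB fragment below builds G for a straight-line fit with every measurement at the same x and confirms that det(G^T G) vanishes:
     % illustrative check: G'*G is singular when all data share the same x
     N = 10;
     x = 2.5 * ones(N,1);          % every measurement at the same x
     G = [ones(N,1), x];           % straight-line fit, d = m1 + m2*x
     fprintf('det(G''*G) = %g\n', det(G'*G));   % prints 0 (up to rounding)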

  8. another example - sums and differences: N_s sums, s_i = m_1 + m_2, and N_d differences, d_i = m_1 - m_2, of two unknowns m_1 and m_2
     d = [s_1, s_2, …, s_Ns, d_1, …, d_Nd]^T,  G = [1 1; 1 1; …; 1 1; 1 -1; …; 1 -1]
     G^T G = [ N_s+N_d, N_s-N_d ; N_s-N_d, N_s+N_d ]
     det(G^T G) = [N_s+N_d]^2 - [N_s-N_d]^2 = [N_s^2+N_d^2+2N_sN_d] - [N_s^2+N_d^2-2N_sN_d] = 4 N_s N_d
     zero when N_s = 0 or N_d = 0, that is, only measurements of one kind
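     A similar sketch (not from the slides; the values of N_s and N_d are arbitrary choices) confirms the 4 N_s N_d result:
     % illustrative check: det(G'*G) = 4*Ns*Nd for the sums-and-differences design
     Ns = 5; Nd = 3;
     G = [ones(Ns,1),  ones(Ns,1);      % rows for the Ns sums,        si = m1 + m2
          ones(Nd,1), -ones(Nd,1)];     % rows for the Nd differences, di = m1 - m2
     det(G'*G)                          % returns 60 = 4*5*3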

  9. This sort of 'missing measurement' might be difficult to recognize in a complicated problem, but it happens all the time …

  10. Example - Tomography

  11. in this method, you try to plaster the subject with X-ray beams made at every possible position and direction, but you can easily wind up missing some small region with no data coverage

  12. What to do? Introduce prior information: assumptions about the behavior of the unknowns that 'fill in' the data gaps

  13. Examples of Prior Information. The unknowns:
      are close to some already-known value (the density of the mantle is close to 3000 kg/m3)
      vary smoothly with time or with geographical position (ocean currents have length scales of 10's of km)
      obey some physical law embodied in a PDE (water is incompressible and thus its velocity satisfies div(v) = 0)

  14. Are you only fooling yourself ? It depends … are your assumptions good ones?

  15. Application of the Maximum Likelihood Method to this problem: so, let's have a foray into the world of probability

  16. Overall Strategy
      1. Represent the observed data as a probability distribution
      2. Represent prior information as a probability distribution
      3. Represent the relationship between data and model parameters as a probability distribution
      4. Combine the three distributions in a way that embodies combining the information that they contain
      5. Apply maximum likelihood to the combined distribution

  17. How to combine distributions in a way that embodies combining the information that they contain … Short answer: multiply them. But let's step through a better-reasoned analysis of why we should do that … (figure: two distributions p_1(x) and p_2(x) and their combination p_T(x))

  18. how to quantify the information in a distribution p(x)? Information compared to what? Compared to a distribution p_N(x) that represents the state of complete ignorance. Example: p_N(x) = a uniform distribution. The information content should be a scalar quantity, Q

  19. Q = ∫ ln[ p(x)/p_N(x) ] p(x) dx
      Q is the expected value of ln[ p(x)/p_N(x) ]
      Properties:
      Q = 0 when p(x) = p_N(x)
      Q ≥ 0 always (since the limit as p → 0 of p ln(p) is 0)
      Q is invariant under a change of variables x → y
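      As a hedged numerical illustration (not on the slides; the choice of a standard Normal p(x) and a uniform p_N(x) on [-10, 10] is mine), Q can be approximated on a grid:
      % approximate Q = int ln[p/pN] p dx on a grid
      L = 10;  dx = 0.001;  x = (-L:dx:L)';
      pN = ones(size(x)) / (2*L);              % uniform null distribution on [-L, L]
      p  = exp(-0.5*x.^2) / sqrt(2*pi);        % standard Normal distribution
      Q  = sum( log(p./pN) .* p ) * dx         % about 1.58; Q -> 0 as p approaches pN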

  20. Combining distributions p_A(x) and p_B(x). Desired properties of the combination:
      p_A(x) combined with p_B(x) is the same as p_B(x) combined with p_A(x)
      p_A(x) combined with [ p_B(x) combined with p_C(x) ] is the same as [ p_A(x) combined with p_B(x) ] combined with p_C(x)
      Q of [ p_A(x) combined with p_N(x) ] = Q_A

  21. p_AB(x) = p_A(x) p_B(x) / p_N(x). When p_N(x) is the uniform distribution … combining is just multiplying. But note that for points on the surface of a sphere, the null distribution p(θ,φ), where θ is latitude and φ is longitude, would not be uniform, but rather proportional to sin(θ).
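      A small sketch of this rule (the two example Gaussians are made up, not from the lecture): with a uniform p_N(x), combining two Normal distributions by multiplication gives another Normal distribution that is narrower, i.e. more informative, than either one:
      % combine pA and pB by multiplication (the uniform pN cancels out)
      dx  = 0.01;  x = (-5:dx:5)';
      pA  = exp(-0.5*((x-0.0)/1.0).^2);        % ~ Normal(0, 1^2),   unnormalized
      pB  = exp(-0.5*((x-1.0)/0.5).^2);        % ~ Normal(1, 0.5^2), unnormalized
      pAB = pA .* pB;
      pAB = pAB / (sum(pAB)*dx);               % renormalize on the grid
      % pAB is again Normal, with a smaller variance than either pA or pB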

  22. Overall Strategy 1. Represent the observed data as a Normal probability distribution
      p_A(d) ∝ exp{ -½ (d - d_obs)^T C_d^{-1} (d - d_obs) }
      In the absence of any other information, the best estimate of the mean of the data is the observed data itself. (I don't feel like typing the normalization.) C_d is the prior covariance of the data.

  23. Overall Strategy 2. Represent prior information as a Normal probability distribution
      p_A(m) ∝ exp{ -½ (m - m_A)^T C_m^{-1} (m - m_A) }
      m_A is the prior estimate of the model, your best guess as to what it would be in the absence of any observations. The prior covariance of the model, C_m, quantifies how good you think your prior estimate is …

  24. example: one observation d_obs = 0.8 ± 0.4, and one model parameter with m_A = 1.0 ± 1.25

  25. (figure: p_A(d), a Normal distribution centered on d_obs = 0.8, and p_A(m), a Normal distribution centered on m_A = 1, each plotted on the interval 0 to 2)

  26. Overall Strategy 3. Represent the relationship between data and model parameters as a probability distribution
      p_T(d,m) ∝ exp{ -½ (d - Gm)^T C_G^{-1} (d - Gm) }
      a linear theory, Gm = d, relating the data, d, to the model parameters, m. The prior covariance of the theory, C_G, quantifies how good you think your linear theory is.

  27. example theory: d = m, but only accurate to ± 0.2

  28. (figure: p_T(d,m) for the theory d = m, plotted over the (m, d) plane from 0 to 2 on each axis, with d_obs = 0.8 and m_A = 1 marked)

  29. Overall Strategy 4. Combine the three distributions in a way that embodies combining the information that they contain
      p(m,d) = p_A(d) p_A(m) p_T(m,d) ∝ exp{ -½ [ (d - d_obs)^T C_d^{-1} (d - d_obs) + (m - m_A)^T C_m^{-1} (m - m_A) + (d - Gm)^T C_G^{-1} (d - Gm) ] }
      a bit of a mess, but it can be simplified …

  30. (figure: the combined distribution p(d,m) = p_A(d) p_A(m) p_T(d,m), plotted over the (m, d) plane from 0 to 2 on each axis)
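      Here is a sketch of that worked example in MATLAB (the grid spacing and variable names are my own): it builds p(d,m) from the three Normal factors with d_obs = 0.8 ± 0.4, m_A = 1.0 ± 1.25 and the theory d = m accurate to ± 0.2, then locates the maximum of the joint distribution and the maximum of the marginal p(m):
      % evaluate p(d,m) = pA(d) pA(m) pT(d,m) on a grid and find the two maxima
      dv = (0:0.01:2)';  mv = (0:0.01:2)';
      [m, d] = meshgrid(mv, dv);               % d varies down rows, m across columns
      dobs = 0.8;  sd = 0.4;  mA = 1.0;  sm = 1.25;  sg = 0.2;
      p = exp(-0.5*((d-dobs).^2/sd^2 + (m-mA).^2/sm^2 + (d-m).^2/sg^2));
      [~, k]  = max(p(:));                     % maximum of the joint p(d,m)
      dpre    = d(k);   mest_joint = m(k);
      pm      = sum(p, 1);                     % marginal p(m), i.e. the integral over d
      [~, km] = max(pm);  mest_marg = mv(km);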

  31. Overall Strategy 5. Apply maximum likelihood to the combined distribution, p(d,m) = p_A(d) p_A(m) p_T(m,d)
      There are two distinct ways we could do this:
      Find the (d,m) combination that maximizes the joint probability distribution, p(d,m)
      Find the m that maximizes the individual probability distribution, p(m) = ∫ p(d,m) dd
      These do not necessarily give the same value for m

  32. (figure: the maximum of p(d,m) = p_A(d) p_A(m) p_T(d,m); the maximum likelihood point gives the predicted data d_pre and the estimate m_est)

  33. (figure: the maximum of p(m) = ∫ p(d,m) dd; the maximum likelihood point of the curve p(m) versus m gives m_est)

  34. special case of an exact theory
      Dirac delta function, with property ∫ f(x) δ(x-y) dx = f(y)
      in the limit C_G → 0:  exp{ -½ (d - Gm)^T C_G^{-1} (d - Gm) } → δ(d - Gm)
      and p(m) = ∫ p(d,m) dd = p_A(m) ∫ p_A(d) δ(d - Gm) dd = p_A(m) p_A(d = Gm)
      so for normal distributions
      p(m) ∝ exp{ -½ [ (Gm - d_obs)^T C_d^{-1} (Gm - d_obs) + (m - m_A)^T C_m^{-1} (m - m_A) ] }

  35. special case of an exact theory: maximizing p(m) is equivalent to minimizing
      (Gm - d_obs)^T C_d^{-1} (Gm - d_obs)  +  (m - m_A)^T C_m^{-1} (m - m_A)
      the first term is the weighted "prediction error", the second is the weighted "distance of the model from its prior value"

  36. solution, calculated via the usual messy minimization process:
      m_est = m_A + M [ d_obs - G m_A ]  where  M = [G^T C_d^{-1} G + C_m^{-1}]^{-1} G^T C_d^{-1}
      Don't memorize, but be prepared to use it (e.g. in the homework).
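      A minimal sketch of how this formula might be used in MATLAB (the toy G, d_obs, m_A and covariances are invented for illustration; they are not the homework values):
      % least squares with prior information: mest = mA + M*(dobs - G*mA)
      G    = [1 1; 1 -1; 1 2];                 % toy data kernel
      dobs = [2.1; 0.1; 2.9];                  % toy observations
      Cd   = 0.1^2 * eye(3);                   % prior covariance of the data
      mA   = [1; 1];                           % prior estimate of the model
      Cm   = 0.5^2 * eye(2);                   % prior covariance of the model
      M    = (G'*(Cd\G) + inv(Cm)) \ (G'/Cd);  % M = [G'Cd^-1 G + Cm^-1]^-1 G'Cd^-1
      mest = mA + M*(dobs - G*mA)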

  37. interesting interpretation:  m_est - m_A = M [ d_obs - G m_A ]
      the left side is the estimated model minus its prior; the bracketed term is the observed data minus the prediction of the prior model; the matrix M is the linear connection between the two

  38. special case of no prior information, C_m → ∞:
      M = [G^T C_d^{-1} G + C_m^{-1}]^{-1} G^T C_d^{-1}  →  [G^T C_d^{-1} G]^{-1} G^T C_d^{-1}
      m_est = m_A + [G^T C_d^{-1} G]^{-1} G^T C_d^{-1} [ d_obs - G m_A ]
            = m_A + [G^T C_d^{-1} G]^{-1} G^T C_d^{-1} d_obs - [G^T C_d^{-1} G]^{-1} G^T C_d^{-1} G m_A
            = m_A + [G^T C_d^{-1} G]^{-1} G^T C_d^{-1} d_obs - m_A
            = [G^T C_d^{-1} G]^{-1} G^T C_d^{-1} d_obs
      recovers weighted least squares

  39. special case of infinitely accurate prior information, C_m → 0:
      M = [G^T C_d^{-1} G + C_m^{-1}]^{-1} G^T C_d^{-1}  →  0
      m_est = m_A + 0 = m_A
      recovers the prior value of m

  40. special uncorrelated case, C_m = σ_m^2 I and C_d = σ_d^2 I:
      M = [G^T C_d^{-1} G + C_m^{-1}]^{-1} G^T C_d^{-1} = [σ_d^{-2} G^T G + σ_m^{-2} I]^{-1} G^T σ_d^{-2} = [ G^T G + (σ_d/σ_m)^2 I ]^{-1} G^T
      this formula is sometimes called "damped least squares", with "damping factor" ε = σ_d/σ_m

  41. Damped Least Squares makes the process of avoiding the singular matrices associated with insufficient data trivially easy: you just add ε^2 I to G^T G before computing the inverse

  42. GTG GTG + e2Ithis process regularizes the matrix, so its inverse always existsits interpretation is :in the absence of relevant data, assume the model parameter has its prior value

  43. Are you only fooling yourself ? It depends … is the assumption - that you know the prior value - a good one?

  44. Smoothness Constraints: e.g. the model is smooth when its second derivative is small, d^2m_i/dx^2 ∝ m_{i-1} - 2 m_i + m_{i+1} (assuming the data are organized according to one spatial variable)

  45. the matrix D approximates the second derivative:
      D = [ 1 -2  1  0  0  0  …
            0  1 -2  1  0  0  …
            …
            0  0  0  …  1 -2  1 ]
      d^2m/dx^2 ∝ D m
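      One way to build D in MATLAB (a sketch; the model length M and the interior-points-only shape are my choices):
      % second-difference matrix: row i of D*m approximates d2m/dx2 at point i+1
      M = 8;
      D = zeros(M-2, M);
      for i = 1:M-2
        D(i, i:i+2) = [1 -2 1];
      end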

  46. Choosing a smooth solution is thus equivalent to minimizing (Dm)^T (Dm) = m^T (D^T D) m
      comparing this to the (m - m_A)^T C_m^{-1} (m - m_A) minimization implied by the general solution
      m_est = m_A + M [ d_obs - G m_A ]  where  M = [G^T C_d^{-1} G + C_m^{-1}]^{-1} G^T C_d^{-1}
      indicates that to implement smoothness we should make the choices m_A = 0 and C_m^{-1} = D^T D
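      Finally, a hedged sketch of the smoothness recipe on a toy problem (the observation indices, data values and variance below are invented): with m_A = 0 and C_m^{-1} = D^T D, the estimate reduces to m_est = [G^T C_d^{-1} G + D^T D]^{-1} G^T C_d^{-1} d_obs:
      % smoothness-regularized least squares on a toy interpolation problem
      M = 20;                                          % number of model parameters
      D = zeros(M-2, M);                               % second-difference matrix
      for i = 1:M-2,  D(i, i:i+2) = [1 -2 1];  end
      idx  = [2 7 13 19];                              % only 4 of the 20 points are observed
      G    = zeros(numel(idx), M);
      G(sub2ind(size(G), 1:numel(idx), idx)) = 1;
      dobs = [0.2; 0.9; 0.8; 0.1];                     % toy observations
      sd2  = 0.05^2;                                   % data variance, Cd = sd2*I
      mest = (G'*G/sd2 + D'*D) \ (G'*dobs/sd2)         % smooth curve through the data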
