
Lecture 6 Resolution and Generalized Inverses

This lecture introduces the generalized inverse, the data and model resolution matrices, and the unit covariance matrix for solving inverse problems. It quantifies resolution by its spread and covariance by its size, and shows how minimizing these quantities serves as the guiding principle for constructing a generalized inverse, treating the over-determined, under-determined, and general (damped) cases in turn.



Presentation Transcript


  1. Lecture 6 Resolution and Generalized Inverses

  2. Syllabus
  Lecture 01 Describing Inverse Problems
  Lecture 02 Probability and Measurement Error, Part 1
  Lecture 03 Probability and Measurement Error, Part 2
  Lecture 04 The L2 Norm and Simple Least Squares
  Lecture 05 A Priori Information and Weighted Least Squares
  Lecture 06 Resolution and Generalized Inverses
  Lecture 07 Backus-Gilbert Inverse and the Trade-off of Resolution and Variance
  Lecture 08 The Principle of Maximum Likelihood
  Lecture 09 Inexact Theories
  Lecture 10 Nonuniqueness and Localized Averages
  Lecture 11 Vector Spaces and Singular Value Decomposition
  Lecture 12 Equality and Inequality Constraints
  Lecture 13 L1, L∞ Norm Problems and Linear Programming
  Lecture 14 Nonlinear Problems: Grid and Monte Carlo Searches
  Lecture 15 Nonlinear Problems: Newton’s Method
  Lecture 16 Nonlinear Problems: Simulated Annealing and Bootstrap Confidence Intervals
  Lecture 17 Factor Analysis
  Lecture 18 Varimax Factors, Empirical Orthogonal Functions
  Lecture 19 Backus-Gilbert Theory for Continuous Problems; Radon’s Problem
  Lecture 20 Linear Operators and Their Adjoints
  Lecture 21 Fréchet Derivatives
  Lecture 22 Exemplary Inverse Problems, incl. Filter Design
  Lecture 23 Exemplary Inverse Problems, incl. Earthquake Location
  Lecture 24 Exemplary Inverse Problems, incl. Vibrational Problems

  3. Purpose of the Lecture: introduce the idea of a Generalized Inverse, the Data and Model Resolution Matrices, and the Unit Covariance Matrix; quantify the spread of resolution and the size of the covariance; use the minimization of the spread of resolution and/or the size of covariance as the guiding principle for solving inverse problems

  4. Part 1: The Generalized Inverse, the Data and Model Resolution Matrices, and the Unit Covariance Matrix

  5. all of the solutions of the form $m^{est} = M d + v$

  6. let’s focus on this matrix: $m^{est} = M d + v$

  7. rename it the “generalized inverse” and use the symbol $G^{-g}$: $m^{est} = G^{-g} d + v$

  8. (let’s ignore the vector $v$ for a moment) The generalized inverse $G^{-g}$ operates on the data to give an estimate of the model parameters: if $d^{pre} = G m^{est}$ then $m^{est} = G^{-g} d^{obs}$

  9. Generalized Inverse $G^{-g}$: if $d^{pre} = G m^{est}$ then $m^{est} = G^{-g} d^{obs}$. It sort of looks like a matrix inverse, except that it is $M \times N$, not square, and $G G^{-g} \neq I$ and $G^{-g} G \neq I$

  10. so actuallythe generalized inverse is not a matrix inverse at all

  11. plug one equation into the other: $d^{pre} = G m^{est}$ and $m^{est} = G^{-g} d^{obs}$ give $d^{pre} = N d^{obs}$ with $N = G G^{-g}$, the “data resolution matrix”
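
As an illustration, here is a minimal numpy sketch (the toy $G$ and the variable names are my own, not from the slides) that builds the least-squares generalized inverse and its data resolution matrix:

```python
import numpy as np

# toy over-determined problem: N = 4 observations, M = 2 model parameters
G = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0],
              [1.0, 4.0]])

# least-squares generalized inverse G^-g = [G^T G]^-1 G^T
Gg = np.linalg.solve(G.T @ G, G.T)

# data resolution matrix N = G G^-g (4x4; not the identity in general)
Ndata = G @ Gg
print(np.round(Ndata, 3))
```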

  12. Data Resolution Matrix, $N$: $d^{pre} = N d^{obs}$. How much does $d_i^{obs}$ contribute to its own prediction?

  13. if $N = I$ then $d^{pre} = d^{obs}$, i.e. $d_i^{pre} = d_i^{obs}$: $d_i^{obs}$ completely controls its own prediction

  14. [Figure: the matrix equation $d^{pre} = N d^{obs}$ shown pictorially.] The closer $N$ is to $I$, the more $d_i^{obs}$ controls its own prediction

  15. straight line problem

  16. [Figure: $d^{pre} = N d^{obs}$ for the straight line problem, with $N$ plotted as an image over indices $(i, j)$.] Only the data at the ends control their own prediction
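
A quick numerical check of this claim (my own setup) looks at the diagonal of $N$ for a straight-line fit with equally spaced $x$:

```python
import numpy as np

# straight-line problem: d_i = m1 + m2 * x_i at equally spaced x
x = np.linspace(0.0, 10.0, 11)
G = np.column_stack([np.ones_like(x), x])

Gg = np.linalg.solve(G.T @ G, G.T)   # least-squares generalized inverse
Ndata = G @ Gg                       # data resolution matrix

# the diagonal peaks at the first and last observations
print(np.round(np.diag(Ndata), 3))
```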

  17. plug one equation into the other: $d^{obs} = G m^{true}$ and $m^{est} = G^{-g} d^{obs}$ give $m^{est} = R m^{true}$ with $R = G^{-g} G$, the “model resolution matrix”

  18. Model Resolution Matrix, $R$: $m^{est} = R m^{true}$. How much does $m_i^{true}$ contribute to its own estimated value?

  19. if $R = I$ then $m^{est} = m^{true}$, i.e. $m_i^{est} = m_i^{true}$: $m_i^{est}$ reflects $m_i^{true}$ only

  20. else if $R \neq I$: $m_i^{est} = \cdots + R_{i,i-1} m_{i-1}^{true} + R_{i,i} m_i^{true} + R_{i,i+1} m_{i+1}^{true} + \cdots$, so $m_i^{est}$ is a weighted average of all the elements of $m^{true}$

  21. [Figure: the matrix equation $m^{est} = R m^{true}$ shown pictorially.] The closer $R$ is to $I$, the more $m_i^{est}$ reflects only $m_i^{true}$

  22. Discrete version of the Laplace transform: $d_i = \int_0^\infty e^{-c_i z}\, m(z)\, dz$. Large $c$: $d$ is a “shallow” average of $m(z)$; small $c$: $d$ is a “deep” average of $m(z)$

  23. [Figure: the kernel $e^{-c_{hi} z}$ multiplies $m(z)$ and is integrated over $z$ to give the shallow average $d_{hi}$; the kernel $e^{-c_{lo} z}$ multiplies $m(z)$ and is integrated over $z$ to give the deep average $d_{lo}$.]

  24. [Figure: $m^{est} = R m^{true}$ for the Laplace transform problem, with $R$ plotted as an image over indices $(i, j)$.] The shallowest model parameters are “best resolved”
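
A hedged sketch of this example (the decay constants, depth grid, and names are my own choices): build an exponential-kernel $G$, form the minimum-length generalized inverse, and inspect the diagonal of $R$:

```python
import numpy as np

# discretized Laplace-transform-like kernel: d_i = sum_j exp(-c_i z_j) m_j dz
z = np.linspace(0.0, 8.0, 30)             # model depths (M = 30)
c = np.array([0.25, 0.5, 1.0, 2.0, 4.0])  # decay constants (N = 5 data)
dz = z[1] - z[0]
G = np.exp(-np.outer(c, z)) * dz          # N x M, under-determined

# minimum-length generalized inverse G^-g = G^T [G G^T]^-1
Gg = np.linalg.solve(G @ G.T, G).T

R = Gg @ G                                # model resolution matrix
# diag(R) is largest at shallow depths: shallow parameters best resolved;
# each row of R gives the averaging weights applied to m_true
print(np.round(np.diag(R)[:10], 3))
```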

  25. Covariance associated with the Generalized Inverse: assuming uncorrelated data with uniform variance $\sigma^2$, $[\mathrm{cov}\, m^{est}] = \sigma^2 G^{-g} G^{-gT}$. The “unit covariance matrix” is $[\mathrm{cov}_u\, m] = G^{-g} G^{-gT}$: divide by $\sigma^2$ to remove the effect of the overall magnitude of the measurement error

  26. unit covariance for the straight line problem: $[\mathrm{cov}_u\, m] = [G^T G]^{-1}$, whose off-diagonal elements are proportional to $-\sum_i x_i$. The model parameters are uncorrelated when this term is zero, which happens when the data are centered about the origin
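
A small check (my own construction) with the least-squares generalized inverse for the straight-line problem; centering the $x$ values zeros the off-diagonal of the unit covariance:

```python
import numpy as np

def unit_covariance(x):
    # unit covariance [cov_u m] = G^-g (G^-g)^T for a straight-line fit
    G = np.column_stack([np.ones_like(x), x])
    Gg = np.linalg.solve(G.T @ G, G.T)
    return Gg @ Gg.T

x = np.linspace(0.0, 10.0, 11)
print(np.round(unit_covariance(x), 4))             # off-diagonal nonzero
print(np.round(unit_covariance(x - x.mean()), 4))  # centered: ~ diagonal
```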

  27. Part 2: The spread of resolution and the size of the covariance

  28. a resolution matrix has small spread if only its main diagonal has large elements, that is, if it is close to the identity matrix

  29. “Dirichlet” spread functions: $\mathrm{spread}(R) = \sum_i \sum_j [R_{ij} - \delta_{ij}]^2$ and $\mathrm{spread}(N) = \sum_i \sum_j [N_{ij} - \delta_{ij}]^2$
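
The definition translates directly into code; a minimal sketch (the function name is my own):

```python
import numpy as np

def dirichlet_spread(R):
    """Dirichlet spread of a resolution matrix: sum_ij (R_ij - delta_ij)^2."""
    return np.sum((R - np.eye(R.shape[0])) ** 2)

# the spread is zero exactly when R is the identity
print(dirichlet_spread(np.eye(3)))   # 0.0
```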

  30. a unit covariance matrix has small size if its diagonal elements are small: error in the data then corresponds to only small error in the model parameters (ignoring correlations)

  31. Part 3: minimization of the spread of resolution and/or the size of covariance as the guiding principle for creating a generalized inverse

  32. over-determined case: note that for simple least squares, $G^{-g} = [G^T G]^{-1} G^T$, so the model resolution is $R = G^{-g} G = [G^T G]^{-1} G^T G = I$: always the identity matrix
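
A one-line numerical confirmation (random toy $G$, my own setup) that the least-squares model resolution is the identity:

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.normal(size=(8, 3))            # over-determined: N = 8 > M = 3

Gg = np.linalg.solve(G.T @ G, G.T)     # simple least squares G^-g
print(np.allclose(Gg @ G, np.eye(3)))  # True: R = I always
```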

  33. this suggests that we try to minimize the spread of the data resolution matrix, $N$: find the $G^{-g}$ that minimizes $\mathrm{spread}(N)$

  34. spread of the $k$-th row of $N$: $\mathrm{spread}_k = \sum_i [N_{ki} - \delta_{ki}]^2 = \sum_i N_{ki}^2 - 2 \sum_i N_{ki}\, \delta_{ki} + \sum_i \delta_{ki}^2$; now compute $\partial\, \mathrm{spread}(N) / \partial G^{-g}_{qp}$ and set it to zero

  35. first term: $\partial \big( \sum_{ki} N_{ki}^2 \big) / \partial G^{-g}_{qp} = 2\, [G^T G\, G^{-g}]_{qp}$

  36. second term: $\partial \big( -2 \sum_{ki} N_{ki}\, \delta_{ki} \big) / \partial G^{-g}_{qp} = -2\, [G^T]_{qp}$; the third term does not depend on $G^{-g}$, so its derivative is zero

  37. putting it all together: $[G^T G]\, G^{-g} = G^T$, so $G^{-g} = [G^T G]^{-1} G^T$, which is just simple least squares

  38. the simple least squares solution minimizes the spread of data resolution and has zero spread of the model resolution

  39. under-determined case: note that for the minimum length solution, $G^{-g} = G^T [G G^T]^{-1}$, so the data resolution is $N = G G^{-g} = G G^T [G G^T]^{-1} = I$: always the identity matrix
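
The mirror-image check for the minimum-length solution (again a random toy $G$, my own setup):

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.normal(size=(3, 8))            # under-determined: M = 8 > N = 3

Gg = np.linalg.solve(G @ G.T, G).T     # minimum-length G^-g
print(np.allclose(G @ Gg, np.eye(3)))  # True: N = I always
```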

  40. this suggests that we try to minimize the spread of the model resolution matrix, $R$: find the $G^{-g}$ that minimizes $\mathrm{spread}(R)$

  41. the minimization leads to $G^{-g} [G G^T] = G^T$, so $G^{-g} = G^T [G G^T]^{-1}$, which is just the minimum length solution

  42. the minimum length solution minimizes the spread of model resolution and has zero spread of the data resolution

  43. general case: minimize the weighted sum $\alpha_1\, \mathrm{spread}(N) + \alpha_2\, \mathrm{spread}(R) + \alpha_3\, \mathrm{size}([\mathrm{cov}_u\, m])$, which leads to $\alpha_1 [G^T G]\, G^{-g} + G^{-g} [\alpha_2\, G G^T + \alpha_3 I] = (\alpha_1 + \alpha_2)\, G^T$

  44. this is a Sylvester equation, so there is an explicit solution in terms of matrices
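
A sketch of that explicit solution, assuming the general case takes the Sylvester form written on the previous slide (my reconstruction, chosen to be consistent with the two special cases that follow); `scipy.linalg.solve_sylvester` solves $AX + XB = Q$:

```python
import numpy as np
from scipy.linalg import solve_sylvester

# assumed general-case equation (see slide 43):
#   a1 [G^T G] G^-g + G^-g (a2 [G G^T] + a3 I) = (a1 + a2) G^T
rng = np.random.default_rng(1)
G = rng.normal(size=(6, 4))
a1, a2, a3 = 1.0, 1.0, 0.01

A = a1 * (G.T @ G)                     # M x M
B = a2 * (G @ G.T) + a3 * np.eye(6)    # N x N
Q = (a1 + a2) * G.T                    # M x N
Gg = solve_sylvester(A, B, Q)          # solves A X + X B = Q for X = G^-g
print(Gg.shape)                        # (4, 6)
```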

  45. special case #1 ($\alpha_1 = 1$, $\alpha_2 = 0$, $\alpha_3 = \varepsilon^2$): $[G^T G + \varepsilon^2 I]\, G^{-g} = G^T$, so $G^{-g} = [G^T G + \varepsilon^2 I]^{-1} G^T$: damped least squares

  46. special case #2 ($\alpha_1 = 0$, $\alpha_2 = 1$, $\alpha_3 = \varepsilon^2$): $G^{-g} [G G^T + \varepsilon^2 I] = G^T$, so $G^{-g} = G^T [G G^T + \varepsilon^2 I]^{-1}$: damped minimum length
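
A quick check (toy $G$, my own setup) that the two damped forms are the same matrix, by the identity $[G^T G + \varepsilon^2 I]^{-1} G^T = G^T [G G^T + \varepsilon^2 I]^{-1}$:

```python
import numpy as np

rng = np.random.default_rng(2)
G = rng.normal(size=(6, 4))
eps2 = 0.1

# damped least squares:  [G^T G + eps^2 I]^-1 G^T
Gg_dls = np.linalg.solve(G.T @ G + eps2 * np.eye(4), G.T)
# damped minimum length: G^T [G G^T + eps^2 I]^-1
Gg_dml = np.linalg.solve(G @ G.T + eps2 * np.eye(6), G).T

print(np.allclose(Gg_dls, Gg_dml))     # True: the two forms coincide
```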

  47. so no new solutions have arisen … just a reinterpretation of previously-derived solutions

  48. reinterpretation: instead of solving for estimates of the model parameters, we are solving for estimates of weighted averages of the model parameters, where the weights are given by the model resolution matrix

  49. a criticism of Dirichlet spread() functions, when $m$ represents $m(x)$, is that they don’t capture the sense of “being localized” very well
