
Lecture 10 Nonuniqueness and Localized Averages



  1. Lecture 10: Nonuniqueness and Localized Averages

  2. Syllabus
Lecture 01 Describing Inverse Problems
Lecture 02 Probability and Measurement Error, Part 1
Lecture 03 Probability and Measurement Error, Part 2
Lecture 04 The L2 Norm and Simple Least Squares
Lecture 05 A Priori Information and Weighted Least Squares
Lecture 06 Resolution and Generalized Inverses
Lecture 07 Backus-Gilbert Inverse and the Trade Off of Resolution and Variance
Lecture 08 The Principle of Maximum Likelihood
Lecture 09 Inexact Theories
Lecture 10 Nonuniqueness and Localized Averages
Lecture 11 Vector Spaces and Singular Value Decomposition
Lecture 12 Equality and Inequality Constraints
Lecture 13 L1, L∞ Norm Problems and Linear Programming
Lecture 14 Nonlinear Problems: Grid and Monte Carlo Searches
Lecture 15 Nonlinear Problems: Newton's Method
Lecture 16 Nonlinear Problems: Simulated Annealing and Bootstrap Confidence Intervals
Lecture 17 Factor Analysis
Lecture 18 Varimax Factors, Empirical Orthogonal Functions
Lecture 19 Backus-Gilbert Theory for Continuous Problems; Radon's Problem
Lecture 20 Linear Operators and Their Adjoints
Lecture 21 Fréchet Derivatives
Lecture 22 Exemplary Inverse Problems, incl. Filter Design
Lecture 23 Exemplary Inverse Problems, incl. Earthquake Location
Lecture 24 Exemplary Inverse Problems, incl. Vibrational Problems

  3. Purpose of the Lecture
• Show that null vectors are the source of nonuniqueness
• Show why some localized averages of model parameters are unique while others aren't
• Show how nonunique averages can be bounded using prior information on the bounds of the underlying model parameters
• Introduce the Linear Programming Problem

  4. Part 1: null vectors as the source of nonuniqueness in linear inverse problems

  5. suppose two different solutions m(1) ≠ m(2) exactly satisfy the same data: G m(1) = dobs and G m(2) = dobs; since there are two, the solution is nonunique

  6. then the difference between the solutions satisfies G [m(1) – m(2)] = G m(1) – G m(2) = dobs – dobs = 0

  7. the quantity mnull = m(1) – m(2) is called a null vector; it satisfies G mnull = 0

  8. an inverse problem can have more than one null vector: mnull(1), mnull(2), mnull(3), ...; any linear combination of null vectors is a null vector: α mnull(1) + β mnull(2) + γ mnull(3) is a null vector for any α, β, γ

  9. suppose that a particular choice of model parameters mpar satisfies G mpar = dobs with error E

  10. then mgen = mpar + Σi αi mnull(i) has the same error E for any choice of the αi

  11. since e = dobs – G mgen = dobs – G mpar – Σi αi G mnull(i) = dobs – G mpar, because each G mnull(i) = 0

  12. since αi is arbitrary, the solution is nonunique

  13. hence an inverse problem is nonunique if it has null vectors

  14. example: consider the inverse problem G m = dobs with one datum d1 and the single-row data kernel G = [¼, ¼, ¼, ¼], so that the datum is the mean of the four model parameters; a solution with zero error is mpar = [d1, d1, d1, d1]T

  15. the null vectors are easy to work out: mnull(1) = [1, –1, 0, 0]T, mnull(2) = [0, 1, –1, 0]T, mnull(3) = [0, 0, 1, –1]T; note that G times any of these vectors is zero

  16. the general solution to the inverse problem is mgen = mpar + α1 mnull(1) + α2 mnull(2) + α3 mnull(3), for arbitrary α1, α2, α3
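The slides' equations are images that did not survive extraction; a minimal numerical sketch of the general solution, assuming the reconstructed mean-taking kernel G = [¼, ¼, ¼, ¼] and the three differencing null vectors:

```python
import numpy as np

# assumed single-row data kernel: the datum is the mean of the four
# model parameters (reconstructed from the slides' example)
G = np.array([[0.25, 0.25, 0.25, 0.25]])
d1 = 2.0
d_obs = np.array([d1])

# particular solution with zero error: mpar = [d1, d1, d1, d1]^T
m_par = np.full(4, d1)
assert np.allclose(G @ m_par, d_obs)

# three independent null vectors, each satisfying G m_null = 0
null_vectors = [np.array([1.0, -1.0, 0.0, 0.0]),
                np.array([0.0, 1.0, -1.0, 0.0]),
                np.array([0.0, 0.0, 1.0, -1.0])]
for m_null in null_vectors:
    assert np.allclose(G @ m_null, 0.0)

# any m_gen = m_par + sum_i alpha_i m_null(i) fits the data equally well
rng = np.random.default_rng(0)
alpha = rng.normal(size=3)
m_gen = m_par + sum(a * v for a, v in zip(alpha, null_vectors))
assert np.allclose(G @ m_gen, d_obs)
```

The choice of the three differencing vectors is not unique; any three independent vectors whose components sum to zero would span the same null space.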

  17. Part 2: Why some localized averages are unique while others aren't

  18. let's denote a weighted average of the model parameters as <m> = aT m, where a is the vector of weights

  19. let's denote a weighted average of the model parameters as <m> = aT m, where a is the vector of weights; a may or may not be "localized"

  20. examples: a = [0.25, 0.25, 0.25, 0.25]T is not localized; a = [0.90, 0.07, 0.02, 0.01]T is localized near m1

  21. now compute the average of the general solution: <m> = aT mgen = aT mpar + Σi αi aT mnull(i)

  22. now compute the average of the general solution: <m> = aT mgen = aT mpar + Σi αi aT mnull(i); if the term aT mnull(i) is zero for all i, then <m> does not depend on the αi, so the average is unique

  23. an average <m> = aT m is unique if the average aT mnull(i) of every null vector is zero
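This uniqueness test is easy to apply numerically; a small sketch, assuming the three null vectors of the four-parameter example and the two weight vectors from slide 20:

```python
import numpy as np

# assumed null vectors of the earlier four-parameter example
null_vectors = np.array([[1.0, -1.0, 0.0, 0.0],
                         [0.0, 1.0, -1.0, 0.0],
                         [0.0, 0.0, 1.0, -1.0]])

a_uniform = np.array([0.25, 0.25, 0.25, 0.25])   # not localized
a_local   = np.array([0.90, 0.07, 0.02, 0.01])   # localized near m1

# <m> = a^T m is unique iff a^T m_null(i) = 0 for every null vector
assert np.allclose(null_vectors @ a_uniform, 0.0)      # unique average
assert not np.allclose(null_vectors @ a_local, 0.0)    # nonunique average
```

Here the unlocalized uniform average is unique, while the nicely localized one is not, which is exactly the trade-off the next slides discuss.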

  24. if we just pick an average out of the hat because we like it (it's nicely localized), chances are that it will not zero all the null vectors, so the average will not be unique

  25. relationship to the model resolution matrix R

  26. relationship to the model resolution matrix R: if aT is a linear combination of the rows of the data kernel G, say aT = cT G, then aT mnull = cT (G mnull) = 0, so the average is unique

  27. if we just pick an average out of the hat because we like it (it's nicely localized), it's not likely that it can be built out of the rows of G, so it will not be unique
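The rows-of-G criterion can be checked directly; a sketch, again assuming the single-row kernel of the earlier example, where any aT = cT G automatically zeroes every null vector:

```python
import numpy as np

# assumed single-row data kernel and null vectors from the example
G = np.array([[0.25, 0.25, 0.25, 0.25]])
null_vectors = np.array([[1.0, -1.0, 0.0, 0.0],
                         [0.0, 1.0, -1.0, 0.0],
                         [0.0, 0.0, 1.0, -1.0]])

# if a^T = c^T G, then a^T m_null = c^T (G m_null) = 0 for every
# null vector, whatever the combination coefficients c
for c in (np.array([1.0]), np.array([-3.5])):
    a = c @ G
    assert np.allclose(null_vectors @ a, 0.0)
```

With only one row in G, the only unique averages are multiples of the uniform average, so no localized average can be unique in this example.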

  28. suppose we pick an average that is not unique; is it of any use?

  29. Part 3: bounding localized averages even though they are nonunique

  30. we will now show that if we can put weak bounds on m, they may translate into stronger bounds on <m>

  31. example: with the weights a = [1/3, 1/3, 1/3, 0]T, substituting the general solution gives <m> = (m1 + m2 + m3)/3 = d1 + (1/3)α3

  32. example: with the weights a = [1/3, 1/3, 1/3, 0]T, substituting the general solution gives <m> = d1 + (1/3)α3, which is nonunique since α3 is arbitrary

  33. but suppose mi is bounded: 0 ≤ mi ≤ 2d1; since m4 = d1 – α3 must obey these bounds, the smallest α3 = –d1 and the largest α3 = +d1

  34. smallest α3 = –d1, largest α3 = +d1, so (2/3)d1 ≤ <m> ≤ (4/3)d1

  35. smallest α3 = –d1, largest α3 = +d1, so (2/3)d1 ≤ <m> ≤ (4/3)d1; the bounds on <m> are tighter than the bounds on mi
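The tightened bounds can be verified by scanning α3 over its admissible range; a sketch assuming the reconstructed weights a = [1/3, 1/3, 1/3, 0]T and the third null vector of the example:

```python
import numpy as np

d1 = 1.0
m_par = np.full(4, d1)
m_null3 = np.array([0.0, 0.0, 1.0, -1.0])   # the only null vector that moves <m>
a = np.array([1 / 3, 1 / 3, 1 / 3, 0.0])

# 0 <= m4 = d1 - alpha3 <= 2*d1 restricts alpha3 to [-d1, +d1]
alphas = np.linspace(-d1, d1, 201)
avgs = np.array([a @ (m_par + al * m_null3) for al in alphas])

# the average stays within the tighter bounds [(2/3) d1, (4/3) d1]
assert np.isclose(avgs.min(), 2 / 3 * d1)
assert np.isclose(avgs.max(), 4 / 3 * d1)
```

So although each mi is only known to within a range of width 2d1, the average is pinned down to a range of width (2/3)d1.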

  36. the question is how to do this in more complicated cases

  37. Part 4: The Linear Programming Problem

  38. the Linear Programming problem: find the x that minimizes z = fT x subject to A x ≤ b, Aeq x = beq, and lb ≤ x ≤ ub

  39. the Linear Programming problem: flipping the sign of f switches minimization to maximization; flipping the signs of A and b switches ≤ to ≥

  40. in business: x is the quantity of each product and f the unit profit, so minimizing –z maximizes profit; x ≥ 0 means no negative production; A x ≤ b encodes the physical limitations of the factory, government regulations, etc.; one cares about both the profit z and the product quantities x

  41. in our case: first minimize, then maximize, z = aT m = <m>; the inequality constraints A, b are not needed; the bounds on m become lb and ub; G m = d is the equality constraint; we care only about <m>, not m

  42. In MATLAB
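The MATLAB code on this slide did not survive extraction; an equivalent sketch in Python using scipy.optimize.linprog, with the four-parameter example of Part 3 assumed as the setup:

```python
import numpy as np
from scipy.optimize import linprog

# assumed setup from the earlier example: one datum fixes the mean of
# four model parameters, each bounded to [0, 2*d1]
d1 = 1.0
G = np.array([[0.25, 0.25, 0.25, 0.25]])
d = np.array([d1])
bounds = [(0.0, 2.0 * d1)] * 4
a = np.array([1 / 3, 1 / 3, 1 / 3, 0.0])   # localized average of the first three

# first minimize <m> = a^T m subject to G m = d and the bounds ...
lo = linprog(c=a, A_eq=G, b_eq=d, bounds=bounds, method="highs")
# ... then maximize it, by minimizing -a^T m instead
hi = linprog(c=-a, A_eq=G, b_eq=d, bounds=bounds, method="highs")

assert np.isclose(lo.fun, 2 / 3 * d1)    # smallest admissible <m>
assert np.isclose(-hi.fun, 4 / 3 * d1)   # largest admissible <m>
```

This recovers the (2/3)d1 and (4/3)d1 bounds of slide 34 without any hand analysis, which is the point: the same two linprog calls work for arbitrarily complicated G, a, and bounds.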

  43. Example 1: simple data kernel; one datum: the sum of the mi is zero; bounds: |mi| ≤ 1; average: unweighted average of K model parameters

  44. [Figure: bounds on the absolute value of the weighted average as a function of K]

  45. [Figure: bounds on the absolute value of the weighted average as a function of K] if you know that the sum of 20 things is zero, and if you know that the things are bounded by ±1, then you know that the average of 19 of the things is bounded by about ±0.05

  46. [Figure: bounds on the absolute value of the weighted average as a function of K] for K > 10, <m> has tighter bounds than mi
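Example 1's bound can be reproduced with the same linear-programming recipe; a sketch assuming M = 20 parameters whose sum is the single, zero-valued datum:

```python
import numpy as np
from scipy.optimize import linprog

M, K = 20, 19
G = np.ones((1, M))                 # single datum: the sum of all m_i
d = np.array([0.0])                 # ... and that sum is zero
bounds = [(-1.0, 1.0)] * M          # |m_i| <= 1
a = np.zeros(M)
a[:K] = 1.0 / K                     # unweighted average of the first K

# maximize <m> = a^T m by minimizing -a^T m subject to G m = d
hi = linprog(c=-a, A_eq=G, b_eq=d, bounds=bounds, method="highs")

# the zero-sum constraint forces the sum of the first K parameters to
# equal minus the sum of the remaining M - K, so the average of the
# first K is bounded by (M - K) / K; for K = 19 that is 1/19 ~ 0.05
assert np.isclose(-hi.fun, (M - K) / K)
```

Varying K in this script traces out the figure's curve: the bound (M – K)/K drops below the ±1 bound on an individual mi exactly when K > M/2 = 10.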

  47. Example 2: more complicated data kernel; dk is a weighted average of the first 5k/2 m's; bounds: 0 ≤ mi ≤ 1; average: localized average of 5 neighboring model parameters

  48. [Figure: (A) the true model mi(zi) versus depth zi; (B) the data kernel G, with row index i, column index j, and width w, and the data dobs ≈ G mtrue]

  49. [Figure: (A) the true model mi(zi) versus depth zi; (B) the data kernel G, with row index i, column index j, and width w, and the data dobs ≈ G mtrue] complicated G, but reminiscent of the Laplace transform kernel

  50. [Figure: (A) the true model mi(zi) versus depth zi; (B) the data kernel G, with row index i, column index j, and width w, and the data dobs ≈ G mtrue] the true mi increased with depth zi
