
Exploring Equality and Inequality Constraints in Inverse Problems

This lecture covers equality and inequality constraints in inverse problems. It reviews the natural solution and the singular value decomposition (SVD), applies the SVD to other types of prior information and to equality constraints, introduces inequality constraints and the notion of feasibility, develops solution methods, and works through exemplary problems.


Presentation Transcript


  1. Lecture 12 Equality and Inequality Constraints

  2. Syllabus
  Lecture 01 Describing Inverse Problems
  Lecture 02 Probability and Measurement Error, Part 1
  Lecture 03 Probability and Measurement Error, Part 2
  Lecture 04 The L2 Norm and Simple Least Squares
  Lecture 05 A Priori Information and Weighted Least Squares
  Lecture 06 Resolution and Generalized Inverses
  Lecture 07 Backus-Gilbert Inverse and the Trade-off of Resolution and Variance
  Lecture 08 The Principle of Maximum Likelihood
  Lecture 09 Inexact Theories
  Lecture 10 Nonuniqueness and Localized Averages
  Lecture 11 Vector Spaces and Singular Value Decomposition
  Lecture 12 Equality and Inequality Constraints
  Lecture 13 L1, L∞ Norm Problems and Linear Programming
  Lecture 14 Nonlinear Problems: Grid and Monte Carlo Searches
  Lecture 15 Nonlinear Problems: Newton's Method
  Lecture 16 Nonlinear Problems: Simulated Annealing and Bootstrap Confidence Intervals
  Lecture 17 Factor Analysis
  Lecture 18 Varimax Factors, Empirical Orthogonal Functions
  Lecture 19 Backus-Gilbert Theory for Continuous Problems; Radon's Problem
  Lecture 20 Linear Operators and Their Adjoints
  Lecture 21 Fréchet Derivatives
  Lecture 22 Exemplary Inverse Problems, incl. Filter Design
  Lecture 23 Exemplary Inverse Problems, incl. Earthquake Location
  Lecture 24 Exemplary Inverse Problems, incl. Vibrational Problems

  3. Purpose of the Lecture • Review the Natural Solution and SVD • Apply SVD to other types of prior information and to equality constraints • Introduce Inequality Constraints and the Notion of Feasibility • Develop Solution Methods • Solve Exemplary Problems

  4. Part 1: Review the Natural Solution and SVD

  5. subspaces: the model parameters split into mp, which can affect the data, and m0, which cannot affect the data; the data split into dp, which can be fit by the model, and d0, which cannot be fit by any model

  6. natural solution: determine mp by solving dp - Gmp = 0; set m0 = 0

  7. natural solution: determine mp by solving dp - Gmp = 0; set m0 = 0 • the error is reduced to its minimum, E = e0T e0

  8. natural solution: determine mp by solving dp - Gmp = 0; set m0 = 0 • the solution length is reduced to its minimum, L = mpT mp

  9. Singular Value Decomposition (SVD)

  10. singular value decomposition: G = U Λ VT, with UTU = I and VTV = I

  11. suppose only p of the λ's are non-zero

  12. suppose only p of the λ's are non-zero: then only the first p columns of U and the first p columns of V are needed, and G = Up Λp VpT

  13. UpT Up = I and VpT Vp = I, since the vectors are mutually perpendicular and of unit length • Up UpT ≠ I and Vp VpT ≠ I, since the vectors do not span the entire space
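The two properties on this slide are easy to check numerically. A minimal numpy sketch; the rank-2 matrix G and the truncation tolerance are illustrative choices, not values from the lecture:

```python
import numpy as np

G = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],   # a multiple of row 1, so rank(G) = 2
              [1.0, 0.0, 1.0]])

U, lam, VT = np.linalg.svd(G)
p = int(np.sum(lam > 1e-10 * lam[0]))   # number of non-zero singular values
Up, Vp = U[:, :p], VT[:p, :].T          # keep only the first p columns

print(p)                                           # rank of G
print(np.allclose(Up.T @ Up, np.eye(p)))           # True: columns are orthonormal
print(np.allclose(Up @ Up.T, np.eye(G.shape[0])))  # False: they don't span the space
```

The same check applies to Vp: the truncated factors are semi-orthogonal, not orthogonal.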

  14. The Natural Solution: mest = Vp Λp-1 UpT d

  15. The Natural Solution: mest = G-g d, with the natural generalized inverse G-g = Vp Λp-1 UpT
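The natural solution can be sketched in a few lines of numpy; the matrix G, data d, and rank tolerance below are illustrative assumptions, not values from the lecture. For a consistent underdetermined problem it coincides with the minimum-length (pseudoinverse) solution:

```python
import numpy as np

def natural_solution(G, d, tol=1e-10):
    """Apply the natural generalized inverse: m = Vp Λp^-1 Up^T d."""
    U, lam, VT = np.linalg.svd(G, full_matrices=False)
    p = int(np.sum(lam > tol * lam[0]))          # effective rank
    Up, lamp, Vp = U[:, :p], lam[:p], VT[:p, :].T
    return Vp @ ((Up.T @ d) / lamp)

G = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])   # underdetermined: 2 data, 3 model parameters
d = np.array([2.0, 3.0])

m = natural_solution(G, d)
print(G @ m)   # reproduces d: the dp part of the data is fit exactly
```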

  16. resolution and covariance: R = G-gG = Vp VpT, N = GG-g = Up UpT, and [cov m] = σd2 Vp Λp-2 VpT

  17. Part 2: Application of SVD to other types of prior information and to equality constraints

  18. general solution to the linear inverse problem

  19. general minimum-error solution (from two lectures ago)

  20. general minimum-error solution: the natural solution plus an amount α of null vectors, m = Vp Λp-1 UpT d + V0 α

  21. you can adjust α to match whatever a priori information you want • for example, m = <m> • by minimizing L = ||m - <m>||2 with respect to α

  22. you can adjust α to match whatever a priori information you want • for example, m = <m>, by minimizing L = ||m - <m>||2 with respect to α • get α = V0T<m>, so m = Vp Λp-1 UpT d + V0 V0T <m>
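A numpy sketch of this choice of α, using illustrative G, d, and prior <m> (called m_prior here); the added null-space term V0 V0T <m> pulls the solution toward the prior without changing the predicted data:

```python
import numpy as np

G = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
d = np.array([2.0, 3.0])
m_prior = np.array([1.0, 1.0, 1.0])   # a priori model <m>

U, lam, VT = np.linalg.svd(G, full_matrices=True)
p = int(np.sum(lam > 1e-10 * lam[0]))
Up, lamp = U[:, :p], lam[:p]
Vp, V0 = VT[:p, :].T, VT[p:, :].T     # range and null space of model space

# m = Vp Λp^-1 Up^T d + V0 V0^T <m>, with α = V0^T <m>
m = Vp @ ((Up.T @ d) / lamp) + V0 @ (V0.T @ m_prior)
print(G @ m)   # still fits the data: null vectors don't affect Gm
```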

  23. equality constraints: minimize E subject to the constraint Hm = h

  24. Step 1: find the part of the solution constrained by Hm = h, using the SVD of H (not G): H = Up Λp VpT, so m = Vp Λp-1 UpT h + V0 α

  25. Step 2: convert Gm = d into an equation for α: G Vp Λp-1 UpT h + G V0 α = d, and rearrange to [G V0] α = [d - G Vp Λp-1 UpT h], i.e. G'α = d'

  26. Step 3: solve G'α = d' for α using least squares

  27. Step 4: reconstruct m from α: m = Vp Λp-1 UpT h + V0 α
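Steps 1-4 can be sketched in numpy as follows; the matrices G, H and the vectors d, h are illustrative assumptions, not values from the lecture:

```python
import numpy as np

G = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 0.0]])
d = np.array([1.0, 2.0, 3.0])
H = np.array([[1.0, 1.0, 1.0]])   # one constraint: m1 + m2 + m3 = 1
h = np.array([1.0])

# Step 1: SVD of H (not G); the part of m constrained by Hm = h
U, lam, VT = np.linalg.svd(H, full_matrices=True)
p = int(np.sum(lam > 1e-10 * lam[0]))
Up, lamp = U[:, :p], lam[:p]
Vp, V0 = VT[:p, :].T, VT[p:, :].T
m_c = Vp @ ((Up.T @ h) / lamp)

# Steps 2-3: form G' = G V0, d' = d - G m_c, and solve G'α = d' by least squares
Gp = G @ V0
dp = d - G @ m_c
alpha, *_ = np.linalg.lstsq(Gp, dp, rcond=None)

# Step 4: reconstruct m from α
m = m_c + V0 @ alpha
print(H @ m)   # the constraint is satisfied exactly
```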

  28. Part 3: Inequality Constraints and the Notion of Feasibility

  29. Not all inequality constraints provide new information • x > 3 • x > 2

  30. Not all inequality constraints provide new information • x > 3 • x > 2, which follows from the first constraint

  31. Some inequality constraints are incompatible • x > 3 • x < 2

  32. Some inequality constraints are incompatible • x > 3 • x < 2 nothing can be both bigger than 3 and smaller than 2

  33. every row of the inequality constraint Hm ≥ h divides the space of m into two parts: one where a solution is feasible and one where it is infeasible; the boundary is a planar surface

  34. when all the constraints are considered together, they either create a feasible volume or they don't • if they do, the solution must lie in that volume • if they don't, no solution exists

  35. [Figure: two examples, (A) and (B), of constraint sets in the (m1, m2) plane and the feasible region they define.]

  36. now consider the problem of minimizing the error E subject to the inequality constraints Hm ≥ h

  37. if the global minimum is inside the feasible region, then the inequality constraints have no effect on the solution

  38. but if the global minimum is outside the feasible region, then the solution lies on the surface of the feasible volume

  39. but if the global minimum is outside the feasible region, then the solution lies on the surface of the feasible volume, at the point on the surface where E is smallest

  40. [Figure: the constraint Hm ≥ h divides the (m1, m2) plane into feasible and infeasible regions; the unconstrained minimum Emin lies in the infeasible region, so the estimate mest sits on the bounding surface.]

  41. furthermore, the feasible-pointing normal to the surface must be parallel to ∇E; otherwise you could slide the point along the surface to reduce the error E

  42. [Figure: the same construction as slide 40, with the feasible-pointing normal at mest parallel to ∇E.]

  43. Kuhn-Tucker theorem

  44. it's possible to find a vector y with yi ≥ 0 such that ∇E = HT y

  45. it's possible to find a vector y with y ≥ 0 such that ∇E = HT y • the rows of H are feasible-pointing normals to the surface

  46. it's possible to find a vector y with y ≥ 0 such that ∇E = HT y • ∇E is the gradient of the error

  47. it's possible to find a vector y with y ≥ 0 such that ∇E = HT y • the gradient of the error is a non-negative combination of the feasible-pointing normals

  48. it's possible to find a vector y with y ≥ 0 such that ∇E = HT y • y specifies the combination

  49. it's possible to find a vector y with y ≥ 0 such that ∇E = HT y • for the linear case with Gm = d: ∇E = 2GT(Gm - d), so the condition becomes 2GT(Gm - d) = HT y

  50. it's possible to find a vector y with y ≥ 0 such that ∇E = HT y • some of the coefficients yi are positive; these correspond to the constraints that are active (satisfied in the equality sense) at the solution
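A sketch of the constrained minimization and the Kuhn-Tucker picture, using a generic solver (scipy's SLSQP) rather than a specialized method; the one-constraint problem below is an illustrative assumption, not an example from the lecture:

```python
import numpy as np
from scipy.optimize import minimize

G = np.eye(2)
d = np.array([-1.0, 0.5])      # unconstrained minimum at m = (-1, 0.5)
H = np.array([[1.0, 0.0]])     # one inequality constraint: m1 >= 0
h = np.array([0.0])

E = lambda m: np.sum((G @ m - d) ** 2)

# minimize E subject to Hm >= h
res = minimize(E, x0=np.zeros(2), method="SLSQP",
               constraints=[{"type": "ineq", "fun": lambda m: H @ m - h}])
m = res.x
print(np.round(m, 6))          # ≈ (0, 0.5): pushed onto the surface m1 = 0

# Kuhn-Tucker check: ∇E = H^T y with y >= 0 for the single active constraint
grad_E = 2 * G.T @ (G @ m - d)
y = grad_E[0] / H[0, 0]
print(y >= 0)                  # True: the multiplier is non-negative
```

Here the unconstrained minimum is infeasible, so the solution lands on the surface m1 = 0, and the gradient of E there is a non-negative multiple of the feasible-pointing normal (1, 0).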
