This article explores the foundations of computational mathematics (FOCM), focusing on data compression and adaptive PDE solvers. It discusses how to compare compression and decoding algorithms and decide which is best, beginning with the choice of a metric to measure error and a model for the objects to be compressed. It then covers compact sets in Lp spaces, the Kolmogorov entropy of smoothness balls, and encoders that achieve these entropy bounds, concluding with examples of practical encoders and their impact in various fields.
QUESTION What is ‘foundations of computational mathematics’?
FOCM DATA COMPRESSION ADAPTIVE PDE SOLVERS
Whose Algorithm is Best? • Test examples? • Heuristics? • Fight it out? • Clearly define the problem (FOCM)
MUST DECIDE • METRIC TO MEASURE ERROR • MODEL FOR OBJECTS TO BE COMPRESSED
IMAGE PROCESSING
• Real-world view: model = "real images" (stochastic); metric = human visual system
• Mathematical view: model = deterministic smoothness classes K; metric = Lp norms
Kolmogorov Entropy • Given ε > 0, N_ε(K) is the smallest number of balls of radius ε that cover K • H_ε(K) := log₂(N_ε(K)) • H_ε(K) is the number of bits in the best encoding of K with distortion ε
ENTROPY NUMBERS d_n(K) := inf { ε : H_ε(K) ≤ n } • This is the best distortion for K with a bit budget of n • Typically d_n(K) ≍ n^(-s)
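The definitions above can be made concrete on the simplest possible set, K = [0, 1] with the absolute-value metric. The function names below are mine, a minimal illustrative sketch rather than anything from the talk:

```python
import math

def covering_number(eps: float) -> int:
    """N_eps(K): fewest intervals of radius eps needed to cover K = [0, 1]."""
    return math.ceil(1.0 / (2.0 * eps))

def entropy(eps: float) -> float:
    """H_eps(K) := log2 N_eps(K), bits to encode K with distortion eps."""
    return math.log2(covering_number(eps))

def entropy_number(n: int) -> float:
    """d_n(K) := inf { eps : H_eps(K) <= n }.  For K = [0,1] this is 2^-(n+1):
    n bits name 2^n ball centers, so the best achievable radius is 2^-(n+1)."""
    return 2.0 ** -(n + 1)

# With a budget of n bits the distortion halves each time n grows by one.
for n in (1, 2, 4, 8):
    eps = entropy_number(n)
    assert entropy(eps) <= n    # the n-bit budget really suffices
    print(n, eps)
```

Here s = 1 in the typical rate d_n(K) ≍ n^(-s) is replaced by exponential decay because K is one-dimensional rather than a function class.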
SUMMARY • Find right metric • Find right classes • Determine Kolmogorov entropy • Build encoders that give these entropy bounds
COMPACT SETS IN Lp FOR d = 2
[Diagram: smoothness s plotted against 1/q in the (1/q, s) plane; the Sobolev embedding line 1/q = s/2 + 1/p passes through the point (1/p, 0). Spaces Lq with smoothness above this line are compactly embedded in Lp.]
COMPACT SETS IN L2 FOR d = 2
[Diagram: the same (1/q, s) plane with p = 2; the embedding line 1/q = s/2 + 1/2 passes through (1/2, 0), and BV sits at the point (1, 1).]
ENTROPY OF K The entropy numbers of Besov balls B^s_q(L_q) in L_p behave like n^(-s/d). Is there a practical encoder achieving this simultaneously for all Besov balls? ANSWER: YES. The Cohen-Dahmen-Daubechies-DeVore wavelet tree based encoder.
COHEN-DAUBECHIES-DAHMEN-DEVORE
f = Σ_j Σ_{I∈Δ_j} c_I ψ_I, with Δ_j := T_j \ T_{j-1}
• Partition the growth of the tree into subtrees Δ_j
• Decompose the image into the bitstream
[T0|B0|S0|T1|U1|B1|S1|T2|U2|B2|S2|. . . ]
(lead tree & bits; level-1 tree, update & new bits, signs; level-2 tree, update & new bits, signs; …)
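The mechanics of spending a bit budget on the biggest wavelet coefficients first can be sketched in a toy transform coder. This is a stand-in of my own, not the actual CDDD tree coder: a 1-D Haar transform followed by n-term thresholding, showing the distortion fall as the coefficient budget grows:

```python
# Toy progressive transform coder (illustrative stand-in for tree coding):
# Haar-transform a signal, keep only the n largest coefficients, reconstruct.

def haar_forward(x):
    """Full 1-D Haar decomposition (len(x) must be a power of two)."""
    x, out = list(x), []
    while len(x) > 1:
        avg = [(a + b) / 2 for a, b in zip(x[0::2], x[1::2])]
        dif = [(a - b) / 2 for a, b in zip(x[0::2], x[1::2])]
        out = dif + out          # coarse-to-fine ordering
        x = avg
    return x + out               # [global average] + details

def haar_inverse(c):
    coarse, rest = c[:1], c[1:]
    while rest:
        k = len(coarse)
        dif, rest = rest[:k], rest[k:]
        coarse = [v for a, d in zip(coarse, dif) for v in (a + d, a - d)]
    return coarse

def n_term(x, n):
    """Keep the n largest-magnitude Haar coefficients, zero the rest."""
    c = haar_forward(x)
    keep = set(sorted(range(len(c)), key=lambda i: -abs(c[i]))[:n])
    return haar_inverse([v if i in keep else 0.0 for i, v in enumerate(c)])

# Piecewise-smooth test signal: a quadratic ramp with a jump.
N = 64
sig = [(i / N) ** 2 + (1.0 if i >= 40 else 0.0) for i in range(N)]
errs = []
for n in (4, 8, 16, 32):
    approx = n_term(sig, n)
    errs.append(sum((a - b) ** 2 for a, b in zip(sig, approx)) ** 0.5)
print(errs)   # L2 distortion shrinks as the coefficient budget grows
```

Because the Haar basis is orthogonal, keeping a larger coefficient set can only reduce the L2 error, which is the monotone rate-distortion behavior the encoder exploits.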
WHAT DOES THIS BUY YOU? • Explains the performance of the best encoders: Shapiro, Said-Pearlman • Classifies images according to their compressibility (DeVore-Lucier) • Handles metrics other than L2 • Tells where to improve performance: better metrics, better classes (e.g. not rearrangement invariant)
DTED DATA [Figure: surface rendering of the Grand Canyon]
POSTINGS [Figure: grid of z-value postings]
FIDELITY • The L2 metric is not appropriate • L∞ is better
OFFSET • If the surface is offset by a lateral error of ε, the L∞ error may be huge • But the Hausdorff error is not large
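The offset point can be checked numerically. The cliff profile below is a made-up stand-in for DTED data: shifting it sideways by a small delta produces the full cliff height in L∞, while the Hausdorff distance between the two graphs stays near delta:

```python
# L_inf vs Hausdorff error for a laterally offset cliff profile (toy data).
delta = 0.01
xs = [i / 200 for i in range(201)]

def cliff(x):
    return 0.0 if x < 0.5 else 100.0   # vertical wall at x = 0.5

f = [cliff(x) for x in xs]
g = [cliff(x - delta) for x in xs]      # same terrain, laterally offset

linf = max(abs(a - b) for a, b in zip(f, g))

def hausdorff(P, Q):
    """Symmetric Hausdorff distance between two finite point sets."""
    d = lambda p, q: ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    return max(max(min(d(p, q) for q in Q) for p in P),
               max(min(d(p, q) for p in P) for q in Q))

dH = hausdorff(list(zip(xs, f)), list(zip(xs, g)))
print(linf, dH)   # L_inf sees the full cliff height; Hausdorff sees ~delta
```

The vertical metric charges the entire 100-unit cliff to a 0.01 registration error; the set-based metric does not.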
CAN WE FIND d_n(K)? • K = bounded functions: d_N(K) ≍ n^(-1) for N = n^(d+1) • K = continuous functions: d_N(K) ≍ n^(-1) for N = n^d log n • K = bounded variation, d = 1: d_n(K) ≍ n^(-1) • K = characteristic functions of convex sets: d_n(K) ≍ n^(-1)
Example: functions in BV, d = 1. Assume f is monotone; in each column k of the grid, encode the first (j_k) and last (j′_k) square that the graph meets. Then Σ_k |j_k − j′_k| ≤ M n, so all of the j_k can be encoded with C M n bits.
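The counting argument above can be rasterized directly. The sketch below is my own illustration of it (the specific f and the unary bit-accounting are assumptions, not the talk's exact scheme): on an n-by-n grid the column spans of a monotone graph telescope, so the total bit cost is linear in n:

```python
# Sketch of the BV/monotone encoding argument on an n-by-n grid.
import math

n = 256
f = lambda x: x * x          # monotone on [0, 1], range M = 1

# Column k covers x in [k/n, (k+1)/n); record the first and last row
# (grid square) the graph passes through in that column.
cols = []
for k in range(n):
    lo = int(f(k / n) * n)               # row at the column's left edge
    hi = min(int(f((k + 1) / n) * n), n - 1)   # row at the right edge
    cols.append((lo, hi))

# Monotonicity stacks the column intervals, so the spans telescope:
total_span = sum(hi - lo for lo, hi in cols)
assert total_span <= 2 * n               # <= (M + 1) * n with M = 1

# Unary-code each span plus one fixed start row: about C*M*n bits total.
bits = math.ceil(math.log2(n)) + sum((hi - lo) + 1 for lo, hi in cols)
print(total_span, bits)                  # both linear in n
```

A naive encoder would spend log n bits per column (n log n total); the telescoping bound is what brings the cost down to C M n.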
ANTICIPATED IMPACT: DTED • Clearly define the problem • Expose new metrics to the data compression community • Result in better and more efficient encoders
NUMERICAL PDEs • u is the solution to the PDE; u_h or u_n is a numerical approximation • u_h is typically piecewise polynomial (FEM); u_n is a linear combination of n wavelets • Different from image processing because u is unknown
MAIN INGREDIENTS • Metric to measure error • Number of degrees of freedom / computations • Linear (SFEM) or nonlinear (adaptive) method of approximation using piecewise polynomials or wavelets • Inversion of an operator Right question: Compare error with best error that could be obtained using full knowledge of u
EXAMPLE OF ELLIPTIC EQUATION POISSON PROBLEM
CLASSICAL ELLIPTIC THEOREM The variational formulation gives the energy norm H^t. THEOREM: If u ∈ H^(t+s), then the SFEM gives ||u − u_h||_{H^t} ≤ C h^s |u|_{H^(t+s)}. H^(t+s) can be replaced by B^(s+t)_∞(L_2): approximation order h^s is equivalent to u ∈ B^(s+t)_∞(L_2).
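The rate in the theorem can be observed in the simplest setting, t = 1, s = 1: piecewise-linear elements for a 1-D Poisson problem with a smooth exact solution. The assembly and quadrature details below are my own minimal sketch, not a production FEM:

```python
# 1-D check of the classical rate: P1 FEM for -u'' = f on (0,1),
# u(0) = u(1) = 0, exact solution u(x) = sin(pi x).
# The energy-norm (H^1 seminorm) error should scale like h^1.
import math

def solve_fem(n):
    """P1 FEM on a uniform mesh with n cells; returns nodal values."""
    h = 1.0 / n
    f = lambda x: math.pi ** 2 * math.sin(math.pi * x)
    a = [-1.0 / h] * (n - 2)               # off-diagonal of stiffness matrix
    d = [2.0 / h] * (n - 1)                # main diagonal
    b = [h * f(i * h) for i in range(1, n)]   # trapezoid-rule load
    for i in range(1, n - 1):              # Thomas algorithm: eliminate
        w = a[i - 1] / d[i - 1]
        d[i] -= w * a[i - 1]
        b[i] -= w * b[i - 1]
    u = [0.0] * (n - 1)                    # back-substitute
    u[-1] = b[-1] / d[-1]
    for i in range(n - 3, -1, -1):
        u[i] = (b[i] - a[i] * u[i + 1]) / d[i]
    return [0.0] + u + [0.0]

def energy_error(n):
    """H^1 seminorm of u - u_h via 2-point Gauss quadrature per cell."""
    U, h, err2 = solve_fem(n), 1.0 / n, 0.0
    for i in range(n):
        s = (U[i + 1] - U[i]) / h          # u_h' is constant on each cell
        for g in (-1 / math.sqrt(3), 1 / math.sqrt(3)):
            x = (i + 0.5 + 0.5 * g) * h
            err2 += (h / 2) * (math.pi * math.cos(math.pi * x) - s) ** 2
    return math.sqrt(err2)

e1, e2 = energy_error(32), energy_error(64)
print(e1, e2, e1 / e2)   # halving h halves the energy error: order h^1
```

Halving h should roughly halve the error, i.e. the observed ratio sits near 2, matching s = 1 for P1 elements and a solution in H^2.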
HYPERBOLIC Conservation law: u_t + div_x(f(u)) = 0, u(x,0) = u_0(x). THEOREM: If u_0 ∈ BV, then ||u(·,t) − u_h(·,t)||_{L1} ≤ C h^(1/2) |u_0|_BV. u_0 ∈ BV implies u(·,t) ∈ BV; this is equivalent to approximation of order h in L1.
ADAPTIVE METHODS Wavelet Methods (WAM) : approximates u by a linear combination of n wavelets AFEM: approximates u by piecewise polynomial on partition generated by adaptive subdivision
FORM OF NONLINEAR APPROXIMATION
Good theorem: for a range of s > 0, if u can be approximated with accuracy O(n^(-s)) using full knowledge of u, then the numerical algorithm produces the same accuracy using only information about u gained during the computation. Here n is the number of degrees of freedom.
Best theorem: in addition, the number of computations is bounded by Cn.
AFEMs • Initial partition P_0 and Galerkin solution u_0 • General iterative step P_j → P_{j+1} and u_j → u_{j+1}: i. Examine the residual (a posteriori error estimators) to determine the cells to be subdivided (marked cells) ii. Subdivide the marked cells; this creates hanging nodes iii. Remove hanging nodes by further subdivision (completion), resulting in P_{j+1}
FIRST FUNDAMENTAL THEOREMS Dörfler, Morin-Nochetto-Siebert: introduce a strategy for marking cells (a posteriori estimators plus bulk chasing) and a rule for subdivision (newest vertex bisection). THEOREM (D, MNS): For the Poisson problem the algorithm converges.
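The marking strategy can be caricatured in one dimension. The sketch below is my own toy version of bulk chasing (the piecewise-constant setting, the singular f, and the parameter theta are assumptions): cells carrying a fixed share of the total estimated error are marked and bisected, and the resulting mesh beats a uniform mesh of the same size near the singularity:

```python
# Toy 1-D adaptive loop with Doerfler ("bulk chasing") marking:
# approximate the singular function f(x) = x**0.4 by cell averages,
# bisecting the smallest set of cells holding a theta-share of the error.
f = lambda x: x ** 0.4

def cell_err(a, b):
    """Local indicator: oscillation of f on [a, b] (f is increasing)."""
    return f(b) - f(a)

cells = [(i / 8, (i + 1) / 8) for i in range(8)]   # initial partition P_0
theta = 0.6                                        # bulk parameter
for _ in range(15):
    errs = [cell_err(a, b) for a, b in cells]
    total = sum(errs)
    order = sorted(range(len(cells)), key=lambda i: -errs[i])
    marked, acc = set(), 0.0
    for i in order:              # smallest set carrying a theta-share
        marked.add(i)
        acc += errs[i]
        if acc >= theta * total:
            break
    new = []
    for i, (a, b) in enumerate(cells):
        if i in marked:          # bisect marked cells
            m = (a + b) / 2
            new += [(a, m), (m, b)]
        else:
            new.append((a, b))
    cells = new

adaptive_err = max(cell_err(a, b) for a, b in cells)
n = len(cells)                   # uniform mesh of the same size, to compare
uniform_err = max(cell_err(i / n, (i + 1) / n) for i in range(n))
print(n, adaptive_err, uniform_err)
```

No hanging-node completion is needed in 1-D, so only steps i and ii of the AFEM loop appear here.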
BINEV-DAHMEN-DEVORE New AFEM Algorithm: 1. Add coarsening step 2. Fundamental analysis of completion 3. Utilize principles of nonlinear approximation
BINEV-DAHMEN-DEVORE THEOREM (BDD): For the Poisson problem and a certain range of s > 0: if u can be approximated with order O(n^(-s)) in the energy norm using full knowledge of u, then the BDD adaptive algorithm does the same. Moreover, the number of computations is of order O(n).
ADAPTIVE WAVELET METHODS General elliptic problem Au = f. Problem in wavelet coordinates: A u = f, with A : ℓ2 → ℓ2 and ||Av|| ~ ||v||
FORM OF WAVELET METHODS • Choose a set Λ of wavelet indices • Find the Galerkin solution u_Λ from span{ψ_λ : λ ∈ Λ} • Check the residual and update Λ
COHEN-DAHMEN-DEVORE: FIRST VIEW For a finite index set Λ, A_Λ u_Λ = f_Λ, with u_Λ the Galerkin solution. Generate sets Λ_j, j = 0, 1, 2, … Form of the algorithm: 1. Bulk chase on the residual for several iterations: Λ_j → Λ̃_j 2. Coarsen: Λ̃_j → Λ_{j+1} 3. Stop when the residual error is small enough
ADAPTIVE WAVELETS: COHEN-DAHMEN-DEVORE • THEOREM (CDD): For SPD problems: if u can be approximated with accuracy O(n^(-s)) using full knowledge of u (best n-term approximation), then the CDD algorithm does the same. Moreover, the number of computations is O(n).
CDD: SECOND VIEW u^(n+1) = u^n − α(A u^n − f) • This infinite-dimensional iterative process converges • Find fast and efficient methods to compute A u^n and f when u^n is finitely supported • Compression of the matrix-vector multiplication A u^n
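The second view can be sketched on a finite stand-in. Everything specific below is an assumption of mine: the 8x8 tridiagonal matrix is an arbitrary SPD substitute for the infinite wavelet-coordinate operator, and the damping parameter and threshold are made up. The point is only the shape of the iteration: a damped Richardson step followed by a crude coarsening of small coefficients:

```python
# Damped Richardson iteration u <- u - alpha (A u - f) on an SPD system,
# with tiny coefficients thresholded away after each step ("coarsening").
N = 8
A = [[2.0 if i == j else (-1.0 if abs(i - j) == 1 else 0.0)
      for j in range(N)] for i in range(N)]
f = [1.0] * N

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(N)) for i in range(N)]

alpha = 0.5                    # damping with 0 < alpha * lambda_max(A) < 2
u = [0.0] * N
for _ in range(400):
    r = [av - fv for av, fv in zip(matvec(A, u), f)]    # residual A u - f
    u = [ui - alpha * ri for ui, ri in zip(u, r)]       # Richardson step
    u = [ui if abs(ui) > 1e-13 else 0.0 for ui in u]    # crude coarsening

res = max(abs(av - fv) for av, fv in zip(matvec(A, u), f))
print(res)                     # residual after 400 damped steps
```

In the actual method the matvec is never formed exactly; A u^n is compressed, which is what keeps the cost proportional to the support of u^n.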
SECOND VIEW GENERALIZES THEOREM (CDD): For a wide range of linear and nonlinear (semi-)elliptic problems: if u can be approximated with accuracy O(n^(-s)) using full knowledge of u (best n-term approximation), then the CDD algorithm does the same. Moreover, the number of computations is O(n).
WHAT WE LEARNED • Proper coarsening controls size of problem • Remain with infinite dimensional problem as long as possible • Adaptivity is a natural stabilizer, e.g. LBB conditions for saddle point problems are not necessary
WHAT focm CAN DO FOR YOU • Clearly frame the computational problem • Give benchmark of optimal performance • Discretization/Analysis/Solution interplay • Identify computational issues not apparent in computational heuristics • Guide the development of optimal algorithms