Optimal solution error covariances in nonlinear problems of variational data assimilation
Victor Shutyaev, Institute of Numerical Mathematics, Russian Academy of Sciences, Moscow
Igor Gejadze, Department of Civil Engineering, University of Strathclyde, Glasgow, UK
F.-X. Le Dimet, LJK, University of Grenoble, France
Problem statement
Model of the evolution process: the state evolves under a nonlinear differential operator from an unknown initial condition (the control); its true value is the true state.
Objective function (for initial-value control): the misfit to the background, weighted by the inverse of the background error covariance matrix, plus the misfit of the observation operator applied to the state against the observations, weighted by the inverse of the observation error covariance matrix.
Control problem: find the initial condition that minimizes the objective function.
Optimal solution (analysis) error: the difference between the optimal solution (the analysis) and the true state.
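For orientation, a standard form of the model and cost function consistent with the quantities listed above (the symbols V_b, V_o, C, y, u_b and u_t are notational assumptions here, not taken from the slide):

```latex
\frac{\partial \varphi}{\partial t}=F(\varphi),\quad t\in(0,T),\qquad \varphi\big|_{t=0}=u,
```

```latex
J(u)=\tfrac12\big(V_b^{-1}(u-u_b),\,u-u_b\big)
    +\tfrac12\big(V_o^{-1}(C\varphi-y),\,C\varphi-y\big),
\qquad \bar u=\arg\min_u J(u),\qquad \delta u=\bar u-u_t .
```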
Optimal solution error via errors in input data
In the nonlinear case the optimal solution error and the input data errors (the background error and the observation error) are related via a nonlinear operator equation [1].
What does it mean for these errors to be of random nature? For example, for each sensor the observation error is a random time series; the background error can be seen as an error in expert guesses.
Variational DA vs. Tikhonov regularization: the estimates obtained by variational DA and by Tikhonov's method have different statistical properties; in particular, Tikhonov's estimates are not consistent (they are biased).
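Schematically, in the notation assumed above, the two functionals can be contrasted as follows; the key difference is that in variational DA the weights are the inverse error covariance matrices, whereas in Tikhonov's method the weight is a free regularization parameter α:

```latex
J_{\mathrm{DA}}(u)=\tfrac12\big(V_b^{-1}(u-u_b),\,u-u_b\big)+\tfrac12\big(V_o^{-1}(C\varphi-y),\,C\varphi-y\big),
\qquad
J_{\alpha}(u)=\tfrac{\alpha}{2}\,\|u-u_b\|^{2}+\tfrac12\,\|C\varphi-y\|^{2}.
```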
Statistical properties of the optimal solution error
If the input data errors are random, then the optimal solution error is also a random error. Moreover, we assume that it follows a multivariate normal distribution and can be quantified by its expectation and its covariance matrix.
What are the reasons to believe this? Some classics from nonlinear regression: the estimate is consistent and asymptotically normal if the errors are i.i.d. with zero mean and the model has certain regularity properties, Jennrich (1969); this result follows from the strong law of large numbers. It was extended to the multivariate case and to certain classes of dependent observations, Amemiya (1984). In reality the number of observations is always finite, hence the concept of 'close-to-linear' statistical behaviour, Ratkowsky (1983).
Are these results valid for the complete error equation? They require certain conditions on the input errors, and the optimal solution error must be normally distributed. A difficulty is that the full equation might have many solutions; however, if among them we choose the one that corresponds to the global minimum of the cost functional, then we should also achieve consistency and asymptotic normality.
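Schematically, Jennrich's result for nonlinear least squares with n i.i.d. zero-mean errors of variance σ² can be written as follows (Q denotes the limiting normalized Gram matrix of the model derivatives; the notation is assumed here):

```latex
\hat\theta_n \xrightarrow{\;a.s.\;} \theta_0,
\qquad
\sqrt{n}\,\big(\hat\theta_n-\theta_0\big)\;\xrightarrow{\;d\;}\;\mathcal{N}\big(0,\;\sigma^{2}Q^{-1}\big).
```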
Covariance and the inverse Hessian
Linear case with normally distributed input data: the optimal solution error covariance equals the inverse Hessian of the cost functional (H is also known as the Fisher information matrix, the Gram matrix, ...).
Nonlinear case with normally distributed input data: with certain approximations one again obtains the inverse Hessian, now evaluated at the optimal solution, as the covariance estimate.
The sufficient condition for these approximations to be valid is called the tangent linear hypothesis: even though the dynamics is nonlinear, the evolution of errors is well described by the tangent linear model. Like most sufficient conditions, the tangent linear hypothesis is overly restrictive. In practice, the formula is valid as long as the linearization error does not accumulate in a probabilistic sense.
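In the notation assumed above (with R' denoting the tangent linear model propagator around the Hessian origin and R'* its adjoint), the Hessian referred to here has the standard form, and the covariance estimate reads:

```latex
H\,v \;=\; V_b^{-1}v \;+\; R'^{\,*}C^{*}V_o^{-1}C\,R'\,v,
\qquad
V_{\delta u}\;\approx\;H^{-1}(\bar u).
```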
On errors
Two types of error are present in this formula. The error due to the approximations can be called the 'linearization error'. However, the true state is usually not known (apart from the identical-twin experiment setup), and one must use its best available estimate as the Hessian origin; hence another error, the 'origin error'. This error cannot be eliminated, but its possible magnitude can be estimated.
Fully nonlinear ensemble method (Monte Carlo), sketched in code below:
1. Consider a chosen state as the exact solution to the problem.
2. Start the ensemble loop:
2.1 Generate perturbations of the input data using Monte Carlo.
2.2 Compute the perturbed input data.
2.3 Solve the original nonlinear DA problem with the perturbed data and find the optimal solution.
2.4 Compute the optimal solution error for this ensemble member.
3. End the ensemble loop.
4. Compute the sample covariance.
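A minimal Python sketch of the ensemble loop above. The names `solve_da`, `sample_background_error` and `sample_observation_error` are hypothetical stand-ins for the nonlinear DA solver and the input error generators; any implementation playing these roles fits.

```python
import numpy as np

def ensemble_covariance(u_true, solve_da, sample_background_error,
                        sample_observation_error, n_members, rng):
    """Fully nonlinear (Monte Carlo) estimate of the analysis-error covariance."""
    errors = []
    for _ in range(n_members):
        # 2.1-2.2: generate perturbed input data
        xi_b = sample_background_error(rng)   # background error realisation
        xi_o = sample_observation_error(rng)  # observation error realisation
        # 2.3: solve the nonlinear DA problem with the perturbed data
        u_opt = solve_da(background=u_true + xi_b, obs_error=xi_o)
        # 2.4: optimal solution (analysis) error for this member
        errors.append(u_opt - u_true)
    e = np.asarray(errors)
    e = e - e.mean(axis=0)
    # 4: unbiased sample covariance over the ensemble
    return e.T @ e / (n_members - 1)
```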
Iterative methods for the inverse Hessian computation
1. Inverse Hessian by the Lanczos and Arnoldi methods. These methods compute a set of leading Ritz values/vectors which approximate the eigenpairs of the preconditioned Hessian, using only the Hessian-vector product.
2. Inverse Hessian by the BFGS method. BFGS builds up the inverse Hessian in the course of solving an auxiliary control problem.
Iterative methods allow us to compute a limited-memory approximation of the inverse Hessian (at a limited computational cost) without the need to form the Hessian matrix itself. These methods require efficient preconditioning (B). A sketch of the Lanczos-based variant follows.
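A sketch of the Lanczos-type variant, assuming a Hessian-vector product routine `hvp` (in practice one tangent linear plus one adjoint model run) applied to a preconditioned Hessian whose trailing eigenvalues cluster near one; the function name and the toy diagonal check are illustrative only.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

def inverse_hessian_lowrank(hvp, n, k):
    """Limited-memory approximation of H^{-1} from k leading Ritz pairs of H."""
    H = LinearOperator((n, n), matvec=hvp, dtype=float)
    lam, V = eigsh(H, k=k, which='LM')   # leading eigenvalues/vectors (Lanczos)
    def apply_inverse(x):
        # H^{-1} x  ~  x + V (diag(1/lam) - I) V^T x
        y = V.T @ x
        return x + V @ ((1.0 / lam - 1.0) * y)
    return apply_inverse

# toy check on a diagonal "preconditioned Hessian" with spectrum in [1, 10]
n, k = 50, 10
d = np.linspace(1.0, 10.0, n)
apply_inv = inverse_hessian_lowrank(lambda x: d * np.ravel(x), n, k)
```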
Example 1: initialization problem
Model: nonlinear convection-diffusion with a nonlinear diffusion coefficient.
Figures: field evolution; variance and covariance estimates compared with the ensemble variance and ensemble covariance.
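The model equation itself is not recoverable from the slide; a generic nonlinear convection-diffusion form of the kind described (with a field-dependent diffusion coefficient k(φ) and an advection speed w, both assumed notation) would be:

```latex
\frac{\partial\varphi}{\partial t}+w\,\frac{\partial\varphi}{\partial x}
=\frac{\partial}{\partial x}\!\left(k(\varphi)\,\frac{\partial\varphi}{\partial x}\right),
\qquad x\in(0,1),\; t\in(0,T).
```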
When the main result is not valid
In the general nonlinear case one may not expect the inverse Hessian to always be a satisfactory approximation of the optimal solution error covariance.
Model: 1D Burgers equation with a strongly nonlinear dissipation term.
Figures: field evolution for case A and case B; inverse-Hessian and ensemble variances for the initialization problem, with different sensor locations in case A and case B (inverse Hessian: solid line; ensemble estimate: marked line; background variance: dashed line).
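Again only generically (the actual dissipation coefficient is not recoverable from the slide), a 1D Burgers equation with a state-dependent, strongly nonlinear viscosity ν(·) reads:

```latex
\frac{\partial\varphi}{\partial t}+\frac12\,\frac{\partial(\varphi^{2})}{\partial x}
=\frac{\partial}{\partial x}\!\left(\nu\!\left(\varphi,\tfrac{\partial\varphi}{\partial x}\right)\frac{\partial\varphi}{\partial x}\right).
```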
Effective Inverse Hessian (EIH) method: main idea
Start from the exact nonlinear operator equation for the optimal solution error and the exact optimal solution error covariance (by definition). After a series of assumptions the equation reduces to a form that still allows for nonlinear dynamics, but relies on asymptotic normality and 'close-to-linear' statistical behaviour (Ratkowsky, 1983).
I. Computing the expectation by Monte Carlo, over a set of optimal solutions (the l-th optimal solution serving as a likely Hessian origin).
II. Computing the expectation by definition, as an integral over the assumed normal distribution of the optimal solution error (v is a dummy integration variable).
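In symbols (assuming unbiasedness and the notation introduced above), the idea is to replace the single-origin inverse Hessian by its expectation over likely origins, evaluated either by Monte Carlo over optimal solutions u_l or as an integral against the assumed normal density:

```latex
V_{\delta u}\;=\;E\big[\delta u\,\delta u^{T}\big]\;\approx\;E\big[H^{-1}(\bar u+\delta u)\big]
\;\approx\;\frac1L\sum_{l=1}^{L}H^{-1}(u_l).
```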
EIH method: implementation
Preconditioning: 1-level preconditioning; 2-level preconditioning.
Iterative process: the expectation integral is a matrix which can be represented in a compact form.
Monte Carlo (MC) is used for the integration; instead of plain MC one can use quasi-MC or a multipole method for faster convergence (smaller L). A minimal sketch of the iterative process is given below.
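One plausible reading of the iterative process, written as a fixed-point loop in which the covariance estimate and the sample of likely origins are updated in turn. The routine `inverse_hessian(u)` is a hypothetical stand-in that returns the (dense, for this sketch) inverse Hessian with u as the Hessian origin; in practice it would be the limited-memory approximation from the previous slide.

```python
import numpy as np

def effective_inverse_hessian(u_bar, inverse_hessian, n_samples, n_iters, rng):
    """Fixed-point iteration  V_{k+1} = E[ H^{-1}(u_bar + xi) ],  xi ~ N(0, V_k)."""
    n = u_bar.size
    V = inverse_hessian(u_bar)            # start from the single-origin estimate
    for _ in range(n_iters):
        # draw likely origins from the current covariance estimate (Monte Carlo;
        # quasi-MC sampling could be substituted here for faster convergence)
        xi = rng.multivariate_normal(np.zeros(n), V, size=n_samples)
        # average the inverse Hessians over the sampled origins
        V = np.mean([inverse_hessian(u_bar + x) for x in xi], axis=0)
    return V
```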
EIH method: example 1
Relative error in the variance estimate by the 'effective' inverse Hessian (asymptotic) and by the inverse Hessian.
Envelope for the relative error by the 'effective' inverse Hessian, L = 25 (black) and L = 100 (red); based on a set of optimal solutions.
Envelope for the relative error in the sample variance estimate, L = 25 (black) and L = 100 (white); does not require optimal solutions.
Reference covariance: sample covariance with a large ensemble, L = 2500.
Can be improved using 'localization', but this requires optimal solutions.
EIH method: example 2
Relative error in the variance estimate by the 'effective' inverse Hessian (asymptotic) and by the inverse Hessian.
Envelope for the relative error by the 'effective' inverse Hessian, L = 25 (black) and L = 100 (red); based on a set of optimal solutions.
Envelope for the relative error in the sample variance estimate, L = 25 (black) and L = 100 (white); does not require optimal solutions.
Reference covariance: sample covariance with large L, after the 'sampling error compensation' procedure.
Can be improved using 'localization', but this requires optimal solutions.
EIH method: examples 1-2, correlation matrices
Figures (for examples 1 and 2): reference correlation matrix; error in the correlation matrix by the EIH method; error in the correlation matrix by the IH method.
On a danger of the origin error
Each candidate is a likely optimal solution given the data. For each such solution the likelihood region is defined by its covariance, approximated by the inverse Hessian evaluated at that solution, and these approximations may differ significantly. Depending on which optimal solution is actually obtained (and taken as the origin), the covariance estimates may fail to approximate the true covariance at all. Thus, solutions of such nonlinear systems cannot be verified in principle. The difference in mutual probabilities can be considered as an indicator of verifiability.
Conclusions
- In the linear case the optimal solution error covariance is equal to the inverse Hessian; in a reasonably nonlinear case it can still be well approximated by it.
- In the nonlinear case, one must distinguish the linearization error (originating from the linearization of operators around the Hessian origin) and the origin error (originating from the difference between the best known state and the true state).
- For an exact origin: the inverse Hessian is expected to approximate the optimal solution error covariance well if the tangent linear hypothesis (TLH) is valid. In practice, this approximation can be sufficiently accurate even when the TLH breaks down. If the nonlinear DA problem is at least asymptotically normal or (better) exhibits 'close-to-linear' statistical behaviour, then the optimal solution error covariance can be approximated by the 'effective inverse Hessian'.
- For an approximate origin: the likely magnitude of the origin error can be revealed by a set of variance vectors generated around an optimal solution; based on this information, the verifiability of the optimal solution can be analysed, and the upper bound of the set can be chosen to achieve reliable (robust) state estimation.
- In an extremely nonlinear case the posterior covariance does not represent the PDF (though it may do so locally).
References
1. Gejadze, I., Copeland, G.J.M., Le Dimet, F.-X., Shutyaev, V.P. Computation of the optimal solution error covariance in variational data assimilation problems with nonlinear dynamics. J. Comp. Physics (2011, in press).
2. Gejadze, I., Le Dimet, F.-X., Shutyaev, V.P. On optimal solution error covariances in variational data assimilation problems. J. Comp. Physics (2010), v. 229, pp. 2159-2178.
3. Gejadze, I., Le Dimet, F.-X., Shutyaev, V.P. On analysis error covariances in variational data assimilation. SIAM J. Sci. Comput. (2008), v. 30, no. 4, pp. 1847-1874.