Constrained Minimax Estimation Dmitry Rudoy
Agenda • Linear Minimax MSE estimator with weighted norm constraint • Linear Robust Minimax MSE estimator • Linear Minimax regret estimator with weighted norm constraint • Linearly Biased Estimation
Problem Setup • Model: y = Hx + w • Where • H is a known n x m matrix with rank m • w is a zero-mean random vector with covariance C_w • x is an unknown deterministic parameter vector that satisfies the weighted norm constraint x*Tx ≤ L², with T a known positive definite weighting matrix
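To make the later slides concrete, here is a minimal Python sketch of this setup (all dimensions, values, and variable names are illustrative additions, not from the original slides):

import numpy as np

rng = np.random.default_rng(0)
n, m, L = 20, 5, 3.0
H = rng.standard_normal((n, m))                  # known n x m model matrix, rank m
Cw = 0.5 * np.eye(n)                             # noise covariance C_w
T = np.eye(m)                                    # weighting matrix in x' T x <= L^2
x = rng.standard_normal(m)
x *= 0.8 * L / np.sqrt(x @ T @ x)                # scale x to satisfy the constraint
w = rng.multivariate_normal(np.zeros(n), Cw)     # zero-mean noise
y = H @ x + w                                    # observed data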
Motivation • The common approach is to use an unbiased estimator (LS). There are also biased alternatives, such as Tikhonov regularization, the shrunken estimator, etc. • Allowing bias may improve the performance, but then we generally cannot minimize the MSE (which depends on x). • The constraint on x will be used to minimize the largest possible MSE and obtain the minimax MSE estimator.
The Constraint Set • When L isn't given, one can use a technique called Blind Minimax Estimation. • We take the norm of the LS estimate of x as L. • The result can be extended to any • In other words, we can always use the minimax technique; if the constraints aren't given, we estimate them.
Linear Minimax MSE Estimator • We use a linear estimator of x: x̂ = Gy, where G is some m x n matrix. • Its MSE is the variance plus the squared bias: E||x̂ - x||² = Tr(G C_w G*) + ||(I - GH)x||² • Generally it cannot be minimized because of the dependence on x. • We therefore minimize the worst-case MSE: minimize over G the maximum of the MSE over all x with x*Tx ≤ L²
SDP Formulation The problem above can be formulated as a semidefinite programming (SDP) problem: minimize a scalar subject to linear matrix inequality constraints in G and auxiliary variables.
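The slides' explicit LMI constraints are not reproduced here; as a rough substitute, the following Python/CVXPY sketch solves an equivalent convex form of the same worst-case problem, using the fact that the worst-case bias over x*Tx ≤ L² equals L² times the squared largest singular value of (I - GH)S, where S S* = T^{-1}. The function name and interface are illustrative, not the paper's.

import numpy as np
import cvxpy as cp

def minimax_mse_matrix(H, Cw, T, L):
    # Returns the linear minimax MSE matrix G; the estimate is then x_hat = G @ y.
    n, m = H.shape
    Cw_half = np.linalg.cholesky(Cw)                       # Cw = Cw_half Cw_half'
    S = np.linalg.inv(np.linalg.cholesky(T)).T             # S S' = T^{-1}
    G = cp.Variable((m, n))
    t = cp.Variable(nonneg=True)
    variance = cp.sum_squares(G @ Cw_half)                 # Tr(G Cw G')
    worst_bias = cp.sigma_max((np.eye(m) - G @ H) @ S)     # worst-case bias is (L * this)^2
    prob = cp.Problem(cp.Minimize(variance + L**2 * cp.square(t)),
                      [worst_bias <= t])
    prob.solve()
    return G.value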
Closed Form Solution • If T and H*C_w^{-1}H are jointly diagonalizable, i.e. they share a common eigenvector basis, there is a closed form solution for the SDP problem. • If T = I, i.e. x has a bounded Euclidean norm, there is an even simpler closed form for the estimator.
Case of T=I • If T = I the estimator is simply a scaled version of LS with an optimal choice of the shrinkage factor. • It has also been proposed by Mayer and Willke. • It has a very simple intuition: x̂ = L² / (L² + ε₀) · x̂_LS, where ε₀ is the variance of the LS estimator.
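A minimal NumPy sketch of this T = I shrinkage rule, assuming x̂_LS is the weighted LS estimate and ε₀ is the trace of its covariance (function and variable names are illustrative):

import numpy as np

def shrunken_minimax_estimate(y, H, Cw, L):
    Cw_inv = np.linalg.inv(Cw)
    Q = H.T @ Cw_inv @ H                           # H' Cw^{-1} H
    x_ls = np.linalg.solve(Q, H.T @ Cw_inv @ y)    # (weighted) LS estimate
    eps0 = np.trace(np.linalg.inv(Q))              # total variance of the LS estimate
    return (L**2 / (L**2 + eps0)) * x_ls           # shrink LS toward zero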
Discussion • The constrained linear minimax MSE estimator can be formulated as an SDP problem, which can be solved very efficiently. • It significantly improves on the LS estimator. • When L approaches infinity the estimator reduces to LS. • Won't minimaxity lead to a "too conservative" solution? • Do we have to be linear?
Agenda • Linear Minimax MSE estimator with weighted norm constraint • Linear Robust Minimax MSE estimator • Linear Minimax regret estimator with weighted norm constraint • Linearly Biased Estimation
Motivation • In many applications we can't be sure that the model matrix H is known exactly. • In this case the previous estimator may perform poorly. • One can try to develop an estimator that takes those "perturbations" in H into account.
New "Unknown" Model • The original model is changed slightly: y = (H + δH)x + w • Where • H is a known n x m matrix with rank m • δH is an unknown perturbation matrix satisfying ||δH|| ≤ ρ, where ||·|| denotes the spectral norm of the matrix, i.e. the largest singular value. • w and x are as in the previous model.
Linear Robust Estimator • Similarly, we want to minimize the maximum MSE of the linear estimator. • But now the model "perturbation" constraint is added to the worst case.
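Spelled out in the notation of the earlier slides (a restatement of the two bullets above, not an additional result), the robust problem is:

\min_{G}\; \max_{x^{*}Tx \le L^{2},\ \|\delta H\| \le \rho}\;
\operatorname{Tr}\!\left(G C_w G^{*}\right) + \left\|\left(I - G(H+\delta H)\right)x\right\|^{2}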
SDP Formulation Again, the problem is equivalent to an SDP: minimizing a scalar subject to linear matrix inequality constraints in G and auxiliary variables.
Jointly Diagonalizable Matrices • If H*C_w^{-1}H and T are jointly diagonalizable, the problem reduces to a simple convex optimization problem in two unknowns. • In particular, when H, T, and C_w are all circulant, the matrices above are jointly diagonalizable (by the DFT). • This is approximately the case when H and T represent convolution with some filter and w is stationary.
Example • The resulting image of LS isn’t shown since its MSE is too big (9.07).
Discussion • The robust minimax MSE estimator can be developed and formulated as an SDP problem. • It coincides with the non-robust version when the model is known exactly. • It improves on the non-robust minimax estimator even further (specifically where the latter performs poorly).
Agenda • Linear Minimax MSE estimator with weighted norm constraint • Linear Robust Minimax MSE estimator • Linear Minimax regret estimator with weighted norm constraint • Linearly Biased Estimation
Motivation • Try to improve the minimax linear estimator by "removing the pessimism". • Instead of minimizing the maximum possible MSE, another criterion can be minimized. • The choice is to minimize the regret, which measures how close the estimator is to the best one possible.
Regret • The regret is the difference between the MSE of the linear estimator that doesn't know the parameter x and the MSE of the linear estimator that knows x. • In the latter case G may be a function of x. • Since we're restricted to linear estimators, the latter MSE isn't zero. • In our case the best x-dependent G, and hence the regret, is found by differentiating the MSE of the linear estimator with respect to G.
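Carrying out that differentiation under the model of the earlier slides (a sketch, not taken verbatim from the original) gives the optimal x-dependent matrix, the corresponding MSE, and the regret of a fixed G:

G(x) = x\,x^{*}H^{*}\left(Hx\,x^{*}H^{*} + C_w\right)^{-1},
\qquad
\mathrm{MSE}^{o}(x) = \frac{\|x\|^{2}}{1 + x^{*}H^{*}C_w^{-1}Hx},
\qquad
R(G,x) = \operatorname{Tr}\!\left(G C_w G^{*}\right) + \|(I-GH)x\|^{2} - \mathrm{MSE}^{o}(x)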
Minimax regret estimator (1) The minimax regret estimator is the solution of the following problem: minimize over G the worst-case regret R(G, x) over all x satisfying x*Tx ≤ L².
Minimax regret estimator (2) It can be shown that if H*C_w^{-1}H and T are jointly diagonalizable, the minimax regret estimator has a closed form up to a set of scalar coefficients, which are the solution of some convex optimization problem.
Minimax regret estimator (3) The resulting convex optimization problem can be simplified for certain choices of T.
Special Cases • In one special case we can solve the optimization problem above and get a closed-form solution. • If the Euclidean norm is bounded, i.e. T = I, we have to solve m simple convex optimization problems.
Discussion • Simulations show that both the minimax MSE and the minimax regret estimators outperform LS. • In many cases the regret estimator performs better than the minimax MSE estimator. • Both estimators are formulated as optimization problems. • It may be interesting to develop a robust version of the regret estimator.
Agenda • Linear Minimax MSE estimator with weighted norm constraint • Linear Robust Minimax MSE estimator • Linear Minimax regret estimator with weighted norm constraint • Linearly Biased Estimation
Scalar Problem • We want to estimate a scalar deterministic parameter θ based on its measurements. • Assume we have an MVU (minimum variance unbiased) estimator θ̂ with variance σ²(θ). • We want to reduce the MSE by allowing a linear bias, θ̂_b = (1 + m)θ̂ (so the bias is mθ), and minimizing the MSE: MSE(θ̂_b) = (1 + m)²σ²(θ) + m²θ²
General Solution • It's clear that m = 0 recovers the MVU estimator, so the optimally biased MSE can't be worse. • The optimal m is given by m_opt = -σ²(θ) / (θ² + σ²(θ)), which depends on the unknown parameter.
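For completeness, a one-line derivation of this m (under the MSE expression reconstructed on the previous slide), obtained by setting the derivative to zero:

\frac{d}{dm}\left[(1+m)^{2}\sigma^{2}(\theta) + m^{2}\theta^{2}\right]
= 2(1+m)\sigma^{2}(\theta) + 2m\theta^{2} = 0
\;\Rightarrow\;
m_{\mathrm{opt}} = -\frac{\sigma^{2}(\theta)}{\theta^{2}+\sigma^{2}(\theta)},
\qquad
\hat{\theta}_{b} = \frac{\theta^{2}}{\theta^{2}+\sigma^{2}(\theta)}\,\hat{\theta}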
Constant MUSNR • The MUSNR (maximum unbiased signal-to-noise ratio) is defined as: η(θ) = θ² / σ²(θ) • If it's independent of θ, the MMSE estimator with linear bias can be implemented as: θ̂_b = η / (1 + η) · θ̂
Example: Exponential PDF • We have N IID observations of a random variable with exponential PDF: p(x; θ) = (1/θ) e^{-x/θ}, x ≥ 0 • The MVU is the sample mean, θ̂ = (1/N) Σ x[n], with variance θ²/N, so the MUSNR is η = N • And the MMSE estimator with linear bias is: θ̂_b = (1/(N+1)) Σ x[n]
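A quick Monte Carlo check of this example in Python (purely illustrative; the numbers are not from the original slides):

import numpy as np

rng = np.random.default_rng(0)
theta, N, trials = 2.0, 10, 100_000
x = rng.exponential(scale=theta, size=(trials, N))
mvu = x.mean(axis=1)                    # MVU: sample mean, variance theta^2/N
biased = (N / (N + 1)) * mvu            # linearly biased MMSE estimate

print("MSE of MVU   :", np.mean((mvu - theta) ** 2))      # ~ theta^2/N = 0.40
print("MSE of biased:", np.mean((biased - theta) ** 2))   # smaller, ~ theta^2/(N+1)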
MUSNR depends on θ • If the MUSNR depends on θ, the MMSE estimator cannot be implemented. • Thus a minimax strategy is employed: maximize the smallest difference between the MSE of the MVU estimator and the MSE of the biased one. • This gives us: • Domination over the MVU, since we perform better in the worst case. • Admissibility among linearly biased estimators, since we perform as well as possible in the worst case.
Constant Minimum Variance • If the variance of the MVU is constant (denoted by V) we can simplify the problem. • The resulting estimator and its MSE are given in closed form, • and the MSE is smaller than V for every θ in the assumed range.
Example: Gaussian Location • Suppose we have N IID observations of a Gaussian random variable: x[n] ~ N(θ, σ²) • In this case the MVU is the sample mean and its variance is σ²/N • The resulting biased estimator has a smaller MSE over the given range of θ.
The Vector Case • All of the above results can be extended to the vector case.
Discussion • Allowing bias can significantly improve the MSE of an estimator. • The restriction is that only linear bias is analysed. • This approach is based on solving optimization problems.
References • Y. C. Eldar, A. Ben-Tal, and A. Nemirovski, "Robust Mean-Squared Error Estimation in the Presence of Model Uncertainties," IEEE Trans. Signal Process., vol. 53, no. 1, pp. 168–181, Jan. 2005. • Y. C. Eldar, A. Ben-Tal, and A. Nemirovski, "Linear Minimax Regret Estimation of Deterministic Parameters with Bounded Data Uncertainties," IEEE Trans. Signal Process., vol. 52, no. 8, pp. 2177–2188, Aug. 2004. • S. Kay and Y. C. Eldar, "Rethinking Biased Estimation." • Z. Ben-Haim and Y. C. Eldar, "Blind Minimax Estimation," IEEE Trans. Inf. Theory, vol. 53, no. 9, pp. 3145–3157, Sep. 2007.