Multihypothesis Sequential Probability Ratio Tests
Part I: Asymptotic Optimality
Part II: Accurate Asymptotic Expansions for Expected Sample Size
Published by V. P. Dragalin, A. G. Tartakovsky, and V. V. Veeravalli
Presented by: Yi (Max) Huang. Advisor: Prof. Zhu Han
Wireless Network, Signal Processing & Security Lab, University of Houston, USA
2009-11-12
Outline • Introduction • Objectives • System Model • MSPR Tests • Test δa – Bayesian Optimality • Test δb – LLR test • Asymptotic Optimality of MSPR Tests • The case of i.i.d. Observations • Higher-order Approximation for accurate results • Conclusions
Intro • MSPRT – multiple hypotheses + SPRT • Many applications need M ≥ 2 hypotheses • Multiple-resolution radar systems • CDMA systems • The quasi-Bayesian MSPRT and its expected sample size in the asymptotic i.i.d. case were established by V. Veeravalli. Need: • an MSPRT that makes an accurate decision in the shortest observation time
Objectives • Continue investigating the asymptotic behavior of two MSPRTs • Test δa – quasi-Bayesian optimality • Test δb – log-likelihood-ratio (LLR) test • Goals: • asymptotic optimality for any positive moment of the stopping-time distribution • minimize the average observation time and generalize the results to most environments
System Model • Sequential testing of M hypotheses H_0, H_1, …, H_{M-1} • δ = (d, T) is a sequential hypothesis test: T is the stopping time and d is the terminal decision function; d = i means H_i is accepted • π = (π_0, …, π_{M-1}) is the prior distribution on the hypotheses • Loss function W(i, j): the loss of accepting H_j when H_i is true • In the case of zero-one loss: W(i, j) = 1 for i ≠ j and W(i, i) = 0 • α_ij = Pr_i(d = j) is the conditional error probability of accepting H_j when H_i is true
• R_i is the risk function associated with accepting H_i • With zero-one loss, R_i(δ) = Σ_{j≠i} π_j α_ji = Pr[accepting H_i incorrectly] • The class of tests: Δ(R) = {δ : R_i(δ) ≤ R_i for all i} • where the R_i are predefined values
• Z_ij(n) = log[ p_i(X^n) / p_j(X^n) ] is the log-likelihood ratio between H_i and H_j up to time n • a_j > 0 is a positive threshold, defined from the target risks (roughly a_j ≈ |ln R_j|) • E_i[T] is the expected stopping time under H_i • (average stopping time)
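As a minimal sketch of the quantities above (assuming, for illustration, three Gaussian hypotheses H_i: X ~ N(μ_i, 1), which are not specified in the slides), the accumulated LLRs Z_ij(n) can be computed vectorized:

```python
import numpy as np

# Hypothetical example: three Gaussian hypotheses H_i: X ~ N(mu_i, 1).
# Z[i, j, n-1] holds Z_ij(n) = log p_i(X^n) / p_j(X^n).
rng = np.random.default_rng(0)
mus = np.array([-1.0, 0.0, 1.0])            # means under H_0, H_1, H_2
true_i = 2
x = rng.normal(mus[true_i], 1.0, size=50)   # observations generated under H_2

# Log-density of N(mu, 1) at each sample, one row per hypothesis;
# additive constants cancel in the ratios, so they are dropped.
logp = -0.5 * (x[None, :] - mus[:, None]) ** 2
cum = np.cumsum(logp, axis=1)               # cumulative log-likelihoods
Z = cum[:, None, :] - cum[None, :, :]       # Z_ij(n) = cum[i, n-1] - cum[j, n-1]
print(Z[2, 0, -1], Z[2, 1, -1])             # LLRs of H_2 vs H_0 and H_1 at n = 50
```

Under the true hypothesis H_2 the statistics Z_2j(n) drift upward at rate D_2j per sample, which is what the tests below exploit.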
Recap… • Introduction • Objectives • System Model • MSPR Tests • Test δa– Bayesian Optimality • Test δb– LLR test • Asymptotic Optimality of MSPR Tests • The case of i.i.d. Observations • Higher-order Approximation for accurate results • Conclusions
Test δa • δa = (da, Ta) is constructed in a Bayesian framework • Stop at the first time n at which some posterior probability exceeds its threshold: if Pr(H_i | X^n) > A_i for some i, then stop and accept H_i; otherwise, continue sampling
• In the special case of zero-one loss, the thresholds coincide: A_i = A • Remark: stop as soon as the largest posterior probability exceeds a single threshold A.
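A minimal sketch of this zero-one-loss rule, assuming Gaussian observations with unit variance and a hypothetical helper name `mspr_test_a` (neither is fixed by the slides):

```python
import numpy as np

def mspr_test_a(x, mus, prior, A=0.99):
    """Quasi-Bayesian MSPRT sketch: stop as soon as the largest posterior
    probability exceeds A.  Returns (decision, stopping_time), or
    (None, len(x)) if the threshold is never crossed."""
    logpost = np.log(prior)                        # unnormalized log posterior
    for n, xn in enumerate(x, start=1):
        logpost = logpost - 0.5 * (xn - mus) ** 2  # add log-likelihood of x_n
        post = np.exp(logpost - logpost.max())
        post /= post.sum()                         # Pr(H_i | X^n), normalized
        if post.max() > A:
            return int(post.argmax()), n
    return None, len(x)

rng = np.random.default_rng(1)
mus = np.array([-1.0, 0.0, 1.0])
x = rng.normal(mus[0], 1.0, size=500)              # data generated under H_0
d, T = mspr_test_a(x, mus, prior=np.ones(3) / 3)
```

With A = 0.99 the test typically stops after a handful of samples here, since the posterior of the true hypothesis grows exponentially.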
Test δb • δb = (db, Tb) corresponds to an LLR test • ν_i is the accepting time for H_i: the first time n at which Z_ij(n) ≥ a_j for all j ≠ i • Tb = min_i ν_i, and db accepts the hypothesis whose ν_i is attained first
• In the special case of zero-one loss, the thresholds coincide: a_j = a • If Z_ij(n) ≥ a for all j ≠ i, then stop and accept H_i
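The matrix-of-LLRs rule can be sketched in a few lines, again under an assumed Gaussian model and with the hypothetical name `mspr_test_b`:

```python
import numpy as np

def mspr_test_b(x, mus, a):
    """LLR-based MSPRT sketch: accept H_i at the first time n at which
    Z_ij(n) >= a[j] for every j != i."""
    M = len(mus)
    cum = np.zeros(M)                        # cumulative log-likelihoods
    for n, xn in enumerate(x, start=1):
        cum += -0.5 * (xn - mus) ** 2        # N(mu, 1) log-density, constants dropped
        for i in range(M):
            # Z_ij(n) = cum[i] - cum[j]; it must clear a[j] for all j != i
            if all(cum[i] - cum[j] >= a[j] for j in range(M) if j != i):
                return i, n
    return None, len(x)

rng = np.random.default_rng(2)
mus = np.array([-1.0, 0.0, 1.0])
x = rng.normal(mus[2], 1.0, size=500)        # data generated under H_2
d, T = mspr_test_b(x, mus, a=np.full(3, np.log(99.0)))
```

Note the structural difference from δa: δb compares every pairwise LLR against a per-hypothesis threshold rather than normalizing into posteriors.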
Recap… • Introduction • Objectives • System Model • MSPR Tests • Test δa – Bayesian Optimality • Test δb – LLR test • Asymptotic Optimality of Ei[T] in MSPRT • The case of i.i.d. Observations • Higher-order Approximation for accurate results • Conclusions
Asymptotic Optimality • D_ij = E_i[Z_ij(1)] is the Kullback-Leibler (KL) information distance between H_i and H_j • D_i = min_{j≠i} D_ij is the minimal distance between H_i and the other hypotheses • Constraints: • D_ij must be positive and finite • R_i → 0 • Asymptotic lower bound for any positive moment m of the stopping time: inf_{δ ∈ Δ(R)} E_i[T^m] ≥ (|ln R_i| / D_i)^m (1 + o(1)) as R_i → 0
Now, the minimal expected stopping time is attained asymptotically: E_i[(Ta)^m] ~ E_i[(Tb)^m] ~ (|ln R_i| / D_i)^m as the risks go to zero, so both tests are asymptotically optimal.
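As a numeric illustration of the first-order approximation, under the assumed three-hypothesis Gaussian example (H_i: X ~ N(μ_i, 1), where D_ij = (μ_i − μ_j)²/2), the KL distances and the resulting first-order expected sample size for a given risk level can be computed directly:

```python
import math

# Assumed example: H_i : X ~ N(mu_i, 1), so D_ij = (mu_i - mu_j)^2 / 2
# and D_i = min_{j != i} D_ij.  With risk R_i = 1e-3, the first-order
# expected sample size is |ln R_i| / D_i.
mus = [-1.0, 0.0, 1.0]
R = 1e-3
for i, mi in enumerate(mus):
    D_i = min((mi - mj) ** 2 / 2 for j, mj in enumerate(mus) if j != i)
    print(f"H_{i}: D_i = {D_i:.2f}, first-order E_i[T] ~ {abs(math.log(R)) / D_i:.1f}")
```

Here every hypothesis has a neighbor at distance 0.5 nats per sample, so each needs roughly |ln 10⁻³| / 0.5 ≈ 14 observations to first order.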
Higher-order Approximation • To obtain accurate results, a higher-order approximation of the expected stopping time is derived • Using nonlinear renewal theory, Ta and Tb are written as a random walk crossing a constant boundary plus a "slowly changing" nonlinear term • Because that term is slowly changing, the limiting overshoot (χ_i) of the random walk over the fixed threshold is unchanged • The threshold is redefined accordingly • r = M − 1, and "\i" denotes exclusion of i from the set • h_{r,i} is the expected value of the maximum of r zero-mean normal random variables
The expected stopping time in the higher-order approximation for Test δb and Test δa:
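The correction term h_{r,i} can be estimated by Monte Carlo. A sketch for the simplest instance, r i.i.d. standard normals (the general case in the paper involves a covariance structure that depends on the hypotheses, which this deliberately ignores): for r = 2 the exact value is 1/√π ≈ 0.5642.

```python
import numpy as np

# Monte Carlo estimate of E[max of r zero-mean unit-variance normals],
# here for r = 2 i.i.d. standard normals, where the exact answer is
# E[max(X, Y)] = E|X - Y| / 2 = 1/sqrt(pi).
rng = np.random.default_rng(3)
r = 2
samples = rng.standard_normal((1_000_000, r))
h_r = samples.max(axis=1).mean()
print(f"h_{r} ~ {h_r:.4f}   (exact 1/sqrt(pi) = {1 / np.pi ** 0.5:.4f})")
```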
Conclusion • The proposed MSPRTs are asymptotically optimal under fairly general conditions: discrete or continuous time, general stochastic models, etc. • They asymptotically minimize any positive moment of the stopping time (average observation time), both to first order as the risks go to zero and, via nonlinear renewal theory, in higher-order approximations accurate up to a vanishing term.
References • V. P. Dragalin, A. G. Tartakovsky, and V. V. Veeravalli, "Multihypothesis Sequential Probability Ratio Tests – Part I: Asymptotic Optimality," IEEE Transactions on Information Theory, vol. 45, no. 7, Nov. 1999 • V. P. Dragalin, A. G. Tartakovsky, and V. V. Veeravalli, "Multihypothesis Sequential Probability Ratio Tests – Part II: Accurate Asymptotic Expansions for the Expected Sample Size," IEEE Transactions on Information Theory, vol. 46, no. 4, Jul. 2000
D_i • The condition that D_ij be positive and finite can be relaxed and rewritten as a convergence condition: n⁻¹ Z_ij(n) → D_ij (r-quickly) as n → ∞
Asymptotics of MSPR Tests • Symmetric case: the thresholds are chosen so that every risk R_i goes to zero at the same rate, so the tests are asymptotically optimal simultaneously under all hypotheses