Hypothesis Testing
Presenting: Lihu Berman
Agenda • Basic concepts • Neyman-Pearson lemma • UMP • Invariance • CFAR
Basic concepts
X is a random vector with distribution $p_\theta(x)$. $\theta$ is a parameter, belonging to the parameter space $\Theta$. $\{\Theta_0, \Theta_1, \dots\}$ is a disjoint covering of the parameter space. $H_i$ denotes the hypothesis that $\theta \in \Theta_i$.
Binary test of hypotheses: $H_0: \theta \in \Theta_0$ vs. $H_1: \theta \in \Theta_1$.
M-ary test: $H_0$ vs. $H_1$ vs. $\dots$ vs. $H_{M-1}$.
Basic concepts (cont.)
If $\Theta_i$ consists of a single point, then $H_i$ is said to be a simple hypothesis. Otherwise, it is said to be a composite hypothesis.
Simple vs. simple: $H_0: \theta = \theta_0$ vs. $H_1: \theta = \theta_1$.
Simple vs. composite hypotheses. Examples: RADAR – is there a target or not? Physical model – is the coin we are tossing fair or not?
A two-sided test: the alternative lies on both sides of $\theta_0$: $H_0: \theta = \theta_0$ vs. $H_1: \theta \neq \theta_0$.
A one-sided test (for scalar $\theta$): $H_0: \theta \leq \theta_0$ vs. $H_1: \theta > \theta_0$.
Basic concepts (cont.)
Introduce the test function $\phi(x)$ for the binary test of hypotheses: $\phi(x) = 1$ if $x$ falls in the rejection region $R$, and $\phi(x) = 0$ if $x$ falls in the acceptance region $A$.
If the measurement is in the acceptance region $A$ – $H_0$ is accepted. If it is in the rejection region $R$ – $H_0$ is rejected, and $H_1$ is accepted. $\{A, R\}$ is a disjoint covering of the measurement space.
Basic concepts (cont.)
Probability of false alarm (a.k.a. size):
simple: $\alpha = E_{\theta_0}[\phi(X)]$
composite: $\alpha = \sup_{\theta \in \Theta_0} E_{\theta}[\phi(X)]$, i.e. the worst case.
Detection probability (a.k.a. power):
simple: $\beta = E_{\theta_1}[\phi(X)]$
composite: $\beta(\theta) = E_{\theta}[\phi(X)]$ for $\theta \in \Theta_1$.
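A minimal numerical sketch (not from the original slides), assuming the classic shift-in-mean Gaussian problem $N(0,1)$ vs. $N(\mu_1,1)$ with a simple threshold test; the values of $\mu_1$ and the threshold are illustrative:

```python
from scipy.stats import norm

# Simple-vs-simple example: H0: X ~ N(0,1)  vs.  H1: X ~ N(mu1,1),
# with the threshold test  phi(x) = 1{x > eta}.
mu1, eta = 2.0, 1.5

# Size (probability of false alarm): P(X > eta | H0)
P_FA = norm.sf(eta, loc=0.0, scale=1.0)

# Power (probability of detection): P(X > eta | H1)
P_D = norm.sf(eta, loc=mu1, scale=1.0)

print(f"size alpha = {P_FA:.4f}, power beta = {P_D:.4f}")
```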
Basic concepts (cont.)
The best test of size $\alpha$ has the largest power $\beta$ among all tests of that size.
Receiver Operating Characteristic (ROC): the curve of $P_D$ versus $P_{FA}$ as the threshold is swept; the diagonal $P_D = P_{FA}$ is the chance line.
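Continuing the same illustrative Gaussian setup, sweeping the threshold traces the ROC; this is only a sketch of the idea:

```python
import numpy as np
from scipy.stats import norm

# Sweep the threshold to trace the ROC of the Gaussian shift-in-mean problem.
mu1 = 2.0
thresholds = np.linspace(-4, 6, 201)
P_FA = norm.sf(thresholds)            # size at each threshold (H0: N(0,1))
P_D = norm.sf(thresholds, loc=mu1)    # power at each threshold (H1: N(mu1,1))

# The ROC of this test lies above the chance line P_D = P_FA.
assert np.all(P_D >= P_FA - 1e-12)
```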
The Neyman-Pearson lemma
Let $0 \leq \alpha \leq 1$, and let $p_i(x)$ denote the density function of $X$ under $H_i$. Then the test
$\phi(x) = 1$ if $p_1(x) > k\,p_0(x)$, and $\phi(x) = 0$ if $p_1(x) < k\,p_0(x)$,
is the most powerful test of size $\alpha$ for testing which of the two simple hypotheses is in force, for some $k \geq 0$.
The Neyman-Pearson lemma (cont.)
Proof: Let $\phi'(x)$ denote any test satisfying $E_{\theta_0}[\phi'(X)] \leq E_{\theta_0}[\phi(X)] = \alpha$. Obviously $[\phi(x) - \phi'(x)]\,[p_1(x) - k\,p_0(x)] \geq 0$ for every $x$, so integrating over $x$ gives $E_{\theta_1}[\phi(X)] - E_{\theta_1}[\phi'(X)] \geq k\,\big(E_{\theta_0}[\phi(X)] - E_{\theta_0}[\phi'(X)]\big) \geq 0$, i.e. $\phi$ is at least as powerful as $\phi'$.
The Neyman-Pearson lemma (cont.)
Note 1: If the distribution of the data is continuous, ties $p_1(x) = k\,p_0(x)$ have probability zero, and the most powerful test is simply $\phi(x) = 1$ iff $p_1(x) > k\,p_0(x)$.
Note 2: Introduce the likelihood ratio $L(x) = p_1(x)/p_0(x)$. Then the most powerful test can be written as: $\phi(x) = 1$ if $L(x) > k$, and $\phi(x) = 0$ if $L(x) < k$.
The Neyman-Pearson lemma (cont.)
Note 3: Choosing the threshold $k$. Denote by $p_L(l\,;H_0)$ the probability density of the likelihood ratio under $H_0$; then $k$ is chosen so that $P_{FA} = \int_k^{\infty} p_L(l\,;H_0)\,dl = \alpha$.
Note 4: If the distribution of $L(X)$ under $H_0$ is not continuous (i.e. $\Pr\{L(X) = k \mid H_0\} > 0$), then the previous equation might not have a solution! In that case, use the randomized test: $\phi(x) = 1$ if $L(x) > k$, $\phi(x) = 0$ if $L(x) < k$, and if $L(x) = k$, toss a coin with bias $\gamma$ and choose $H_1$ if heads up, with $\gamma$ set so that the overall size equals $\alpha$.
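A small sketch of Note 4's randomized test, assuming a Binomial test statistic (this concrete distribution is an assumption for illustration, not taken from the slides); since the likelihood ratio is monotone in the count, the test rejects for large counts and randomizes at the boundary:

```python
from scipy.stats import binom

# X ~ Binomial(n, p),  H0: p = 0.5  vs.  H1: p = 0.8.
n, p0, alpha = 10, 0.5, 0.05

# Smallest k with P(X > k | H0) <= alpha; randomize at X = k.
k = next(k for k in range(n + 1) if binom.sf(k, n, p0) <= alpha)
gamma = (alpha - binom.sf(k, n, p0)) / binom.pmf(k, n, p0)   # coin bias at X = k

# Overall size:  P(X > k | H0) + gamma * P(X = k | H0)  equals alpha exactly.
size = binom.sf(k, n, p0) + gamma * binom.pmf(k, n, p0)
print(k, gamma, size)   # e.g. k = 8, gamma ~ 0.89, size = 0.05
```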
Binary comm. in AWGN
Source → Mapper → AWGN channel. ‘1’ = Enemy spotted. ‘0’ = All is clear. Prior probabilities unknown!
Binary comm. in AWGN
The natural logarithm is monotone, enabling the use of the log-likelihood ratio in place of the likelihood ratio.
Binary comm. (cont.)
Assume equal energies, $\|s_0\|^2 = \|s_1\|^2$, and define the correlator statistic $t(x) = (s_1 - s_0)^T x$. The log-likelihood ratio test then reduces to comparing $t(x)$ with a threshold.
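A hedged sketch of the resulting correlator receiver; the signal shapes, noise level and zero threshold below are illustrative assumptions, not the slides' exact example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Equal-energy binary signaling in AWGN with a correlator detector.
N, sigma = 16, 1.0
s1 = np.cos(2 * np.pi * np.arange(N) / N)   # '1' = Enemy spotted
s0 = -s1                                    # '0' = All is clear (equal energy)

def detect(x, threshold=0.0):
    """The log-likelihood-ratio test reduces to t(x) = (s1 - s0)^T x vs. a threshold."""
    t = (s1 - s0) @ x
    return int(t > threshold)               # 1 -> decide H1, 0 -> decide H0

# Transmit '1' over the AWGN channel and detect it.
x = s1 + sigma * rng.standard_normal(N)
print(detect(x))
```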
UMP Tests
The Neyman-Pearson lemma holds for simple hypotheses. Uniformly Most Powerful (UMP) tests generalize it to composite hypotheses.
A test $\phi$ is UMP of size $\alpha$ if, for any other test $\phi'$ of size at most $\alpha$, we have $\beta_{\phi}(\theta) \geq \beta_{\phi'}(\theta)$ for every $\theta \in \Theta_1$.
UMP Tests (cont.)
Karlin-Rubin theorem (for UMP one-sided tests): Consider scalar R.V.s whose PDFs are parameterized by a scalar $\theta$. If the likelihood ratio $p_{\theta''}(x)/p_{\theta'}(x)$ is monotone non-decreasing in $x$ for any pair $\theta'' > \theta'$, then the test
$\phi(x) = 1$ if $x > x_0$, and $\phi(x) = 0$ otherwise,
is the UMP test of size $\alpha = \Pr\{X > x_0\,;\theta_0\}$ for testing $H_0: \theta \leq \theta_0$ vs. $H_1: \theta > \theta_0$.
UMP Tests (cont.)
Proof: begin with fixed values $\theta_1 > \theta_0$. By the Neyman-Pearson lemma, the most powerful test of size $\alpha$ for testing $\theta = \theta_0$ vs. $\theta = \theta_1$ is the likelihood-ratio test. As the likelihood ratio is monotone in $x$, we may replace it with the threshold test $\phi(x) = 1$ iff $x > x_0$.
UMP Tests (cont.)
The test is independent of $\theta_1$, so the argument holds for every $\theta_1 > \theta_0$, making $\phi$ the most powerful test of size $\alpha$ for testing the composite alternative $\theta > \theta_0$ vs. the simple hypothesis $\theta = \theta_0$.
Consider now the power function $\beta(\theta) = E_{\theta}[\phi(X)]$. At $\theta = \theta_0$, $\beta(\theta_0) = \alpha$. For any $\theta_1 > \theta_0$, $\beta(\theta_1) \geq \alpha$, because $\phi$ is more powerful than the trivial test $\phi'(x) \equiv \alpha$. A similar argument holds for any pair $\theta' < \theta''$.
UMP Tests (cont.)
Thus, we conclude that $\beta(\theta)$ is non-decreasing. Consequently, $\phi$ is also a test whose size satisfies $\sup_{\theta \leq \theta_0}\beta(\theta) = \beta(\theta_0) = \alpha$. Finally, no test with size $\leq \alpha$ can have larger power at any $\theta_1 > \theta_0$, as it would contradict the Neyman-Pearson lemma for $\theta_0$ vs. $\theta_1$.
A note on sufficiency
The statistic $T(x)$ is sufficient for $\theta$ if and only if the conditional distribution of $X$ given $T(X)$ does not depend on $\theta$: no other statistic which can be calculated from the same sample provides any additional information as to the value of the parameter.
Fisher-Neyman factorization theorem: the statistic $T(x)$ is sufficient for $\theta$ if and only if $p_{\theta}(x) = g_{\theta}(T(x))\,h(x)$.
One can write the likelihood ratio in terms of the sufficient statistic.
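As one concrete instance (assumed here for illustration, not spelled out on the slide), the i.i.d. Gaussian family with unknown mean and known variance factors with the sample mean as the sufficient statistic:

```latex
% Fisher-Neyman factorization for x_1,...,x_N i.i.d. N(theta, sigma^2), sigma known:
\begin{align*}
p_\theta(x) &= (2\pi\sigma^2)^{-N/2}
  \exp\Big\{-\frac{1}{2\sigma^2}\sum_{n=1}^{N}(x_n-\theta)^2\Big\} \\
 &= \underbrace{\exp\Big\{\frac{N\theta\,\bar{x}}{\sigma^2}
      -\frac{N\theta^2}{2\sigma^2}\Big\}}_{g_\theta(T(x)),\;\;T(x)=\bar{x}}
    \cdot
    \underbrace{(2\pi\sigma^2)^{-N/2}
      \exp\Big\{-\frac{1}{2\sigma^2}\sum_{n=1}^{N}x_n^2\Big\}}_{h(x)}
\end{align*}
```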
UMP Tests (cont.)
Theorem: the one-parameter exponential family of distributions with density $p_{\theta}(x) = a(\theta)\,b(x)\,\exp\{c(\theta)\,t(x)\}$ has a monotone likelihood ratio in the sufficient statistic $t(x)$, provided that $c(\theta)$ is non-decreasing.
Proof: for $\theta'' > \theta'$, $p_{\theta''}(x)/p_{\theta'}(x) = \frac{a(\theta'')}{a(\theta')}\exp\{[c(\theta'') - c(\theta')]\,t(x)\}$, which is non-decreasing in $t(x)$.
UMP one-sided tests exist for a host of problems!
UMP Tests (cont.)
Example: for a one-parameter exponential family with non-decreasing $c(\theta)$, the likelihood ratio is monotone in the sufficient statistic $t(x)$.
Therefore, the test $\phi(x) = 1$ iff $t(x) > t_0$ is the Uniformly Most Powerful test of size $\alpha$ for testing $H_0: \theta \leq \theta_0$ vs. $H_1: \theta > \theta_0$.
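A short sketch of such a UMP one-sided test, assuming the Gaussian-mean instance of the exponential family (the particular family and parameter values are assumptions for illustration):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# H0: theta <= 0  vs.  H1: theta > 0, from N i.i.d. samples of N(theta, sigma^2),
# sigma known. The sample mean is sufficient and the likelihood ratio is monotone
# in it, so the UMP test thresholds xbar at the size-alpha point under theta = 0.
N, sigma, alpha = 25, 1.0, 0.05
threshold = norm.ppf(1 - alpha) * sigma / np.sqrt(N)

x = rng.normal(loc=0.3, scale=sigma, size=N)   # data drawn under some theta > 0
decide_H1 = x.mean() > threshold
print(threshold, x.mean(), decide_H1)
```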
Invariance
Revisit the binary communication example, but with a slight change: the channel now adds an unknown bias (a nuisance parameter) to the measurement. So what?! Let us continue with the log-likelihood as before… Oops: the test statistic now depends on the unknown nuisance parameter.
Invariance (cont.)
Intuitively: search for a statistic that is invariant to the nuisance parameter. Project the measurement on the subspace orthogonal to the disturbance! Optimal signals?
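A minimal numerical sketch of this projection idea, assuming the nuisance is a DC bias along the all-ones vector (consistent with the "unknown bias" example that follows, but the concrete signals and values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# Measurement model (assumed): x = s + b*1 + n, with unknown DC bias b.
# Projecting x onto the subspace orthogonal to the all-ones vector removes b.
N = 16
ones = np.ones(N)
P_perp = np.eye(N) - np.outer(ones, ones) / N   # projector orthogonal to the bias

s1 = np.cos(2 * np.pi * np.arange(N) / N)       # zero-mean signal: unchanged by P_perp
s0 = -s1

x = s1 + 5.0 * ones + rng.standard_normal(N)    # large unknown bias b = 5
z = P_perp @ x                                  # invariant statistic: the bias is gone

t = (s1 - s0) @ z                               # correlate after the projection
print(t > 0)                                    # decision no longer hinges on b
```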
Invariance (formal discussion)
Let $G$ denote a group of transformations $g$ acting on the measurement space. $X$ has probability distribution $p_{\theta}(x),\ \theta \in \Theta$. The problem is invariant to $G$ if, for every $g \in G$, the transformed measurement $g(X)$ has distribution $p_{\theta'}(x)$ for some $\theta' \in \Theta$, with the induced transformation preserving $\Theta_0$ and $\Theta_1$.
Invariance (cont.)
Revisit the previous example (AWGN channel with unknown bias): the measurement is distributed as $x = s_i + b\,\mathbf{1} + n$, with $n \sim N(0, \sigma^2 I)$ and the bias $b$ unknown.
Invariance (cont.)
A maximal invariant statistic $M(x)$ organizes the measurements $x$ into equivalence classes, where $M(x) = M(x')$ if and only if $x' = g(x)$ for some $g \in G$: $M$ is constant on each orbit of $G$ and takes distinct values on distinct orbits.
Invariance (cont.)
Let us show that the projection $z = P^{\perp}x$ onto the subspace orthogonal to the bias direction is indeed a maximal invariant statistic.
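A compact sketch of that argument, assuming the bias-translation group and the projector $P^{\perp}$ from the example above:

```latex
% Sketch: z = P^{\perp}x is a maximal invariant for G = \{\, x \mapsto x + c\mathbf{1} \,\}
\begin{align*}
\text{Invariance:} \quad & P^{\perp}(x + c\mathbf{1}) = P^{\perp}x,
   && \text{since } P^{\perp}\mathbf{1} = 0, \\
\text{Maximality:} \quad & P^{\perp}x = P^{\perp}x'
   \;\Rightarrow\; x - x' \in \operatorname{span}\{\mathbf{1}\}
   \;\Rightarrow\; x' = x + c\mathbf{1} \ \text{for some } c,
\end{align*}
% i.e. z takes equal values exactly on the orbits of G.
```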
Invariance (another example)
Consider the group of transformations that rescale the measurement: $g_c(x) = c\,x$ with $c > 0$. The hypothesis testing problem is invariant to $G$.
Invariance (another example)
What statistic is invariant to the scale of $S$? The angle between the measurement and the signal subspace (or, equivalently, the angle to the subspace orthogonal to it). In fact, $Z$ is a maximal invariant statistic for a broader group $G'$ that also includes rotation within the subspace. $G'$ is specifically appropriate for channels that introduce rotation as well as gain.
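A small sketch of the angle statistic and its invariance to an unknown gain; the subspace dimension, basis and data below are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# H is an N x p basis for the signal subspace <H>; the statistic is the (squared)
# cosine of the angle between x and <H>, unchanged by any positive gain on x.
N, p = 32, 2
H = rng.standard_normal((N, p))
P_H = H @ np.linalg.solve(H.T @ H, H.T)        # orthogonal projector onto <H>

def cos2_angle(x):
    """cos^2 of the angle between x and the signal subspace."""
    return (x @ P_H @ x) / (x @ x)

x = H @ np.array([1.0, -0.5]) + 0.3 * rng.standard_normal(N)
print(np.isclose(cos2_angle(x), cos2_angle(7.3 * x)))   # invariant to gain -> True
```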
Invariance (UMPI & summary) • Invariance may be used to compress measurements into statistics of low dimensionality that satisfy invariance conditions. • It is often possible to find a UMP test within the class of invariant tests (a UMPI test). • Steps when applying invariance: 1. Find a meaningful group of transformations for which the hypothesis testing problem is invariant. 2. Find a maximal invariant statistic M, and construct a likelihood ratio test. 3. If M has a monotone likelihood ratio, then the test is UMPI for testing one-sided hypotheses of the form $H_0: \theta \leq \theta_0$ vs. $H_1: \theta > \theta_0$. • Note: Sufficiency principles may facilitate this process.
CFAR (introductory example)
Project the measurement on the signal space: a UMP test! The false-alarm rate is constant. Thus: CFAR (Constant False-Alarm Rate).
CFAR (cont.)
The statistic's distribution under $H_0$ now depends on the unknown noise level, so no fixed threshold controls the false-alarm rate: the test is useless, and certainly not CFAR. Redraw the problem and utilize invariance!!
CFAR (cont.)
As before, project the measurement onto the signal space. Change slightly: also form the residual in the orthogonal complement, which gives an estimate of the noise level that is independent of the signal-space statistic.
CFAR (cont.)
The distribution of the resulting t-statistic is completely characterized under $H_0$ even though the noise level is unknown!!! Thus, we can set a threshold in the test in order to obtain the desired $P_{FA}$. CFAR!
Furthermore, as the likelihood ratio for the non-central t distribution is monotone, this test is UMPI for testing a one-sided shift in the $N(\mu, \sigma^2)$ distribution when $\sigma^2$ is unknown!
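A hedged Monte-Carlo sketch of the CFAR property, assuming a scalar primary sample and K auxiliary noise-only samples used to estimate the noise scale (this concrete model is an assumption, not necessarily the slides' exact setup):

```python
import numpy as np
from scipy.stats import t

rng = np.random.default_rng(4)

# Primary sample y = A + n with unknown noise scale sigma, plus K auxiliary
# noise-only samples. Under H0 (A = 0) the ratio y / sigma_hat is Student-t with
# K degrees of freedom whatever sigma is, so one threshold fixes the false-alarm rate.
K, alpha = 20, 1e-2
threshold = t.ppf(1 - alpha, df=K)          # independent of the unknown sigma: CFAR

# Monte-Carlo check under H0 for an arbitrary true sigma.
sigma, trials = 3.7, 100_000
y = sigma * rng.standard_normal(trials)                                   # primary data
sigma_hat = np.sqrt(np.mean((sigma * rng.standard_normal((trials, K))) ** 2, axis=1))
print(np.mean(y / sigma_hat > threshold))   # empirical false-alarm rate, close to 0.01
```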
CFAR (cont.) The actual probability of detection depends on the actual value of the SNR
Summary • Basic concepts • Neyman-Pearson lemma • UMP • Invariance • CFAR