Lecture 14: Decision Rules, Nearest Neighbor Decoding (Sections 4.2, 4.3)
Theory of Information
What Is a Decision Rule?
[Communications channel model: encoded message → noisy channel → received word x → decision rule → decoded message]
An (n,M)-code is a block code of length n and size M (i.e. every codeword has length n, and there are M codewords altogether).
Definition. Let C be an (n,M)-code over a code alphabet A, and assume C does not contain the symbol ?. A decision rule for C is a function f : Aⁿ → C ∪ {?}.
Intuition: f(x) = c means assuming that the received word x was meant to be c, i.e. that c was sent; in other words, decoding, or interpreting, x as c. If no such codeword c ∈ C can be identified, the symbol ? is used to declare a decoding error, i.e. f(x) = ?.
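As a concrete illustration (a minimal sketch, not from the text), here is such a function for the repetition code C = {000, 111} over A = {0, 1}, with Python's None standing in for the error symbol ?:

```python
# Minimal sketch (illustrative): a decision rule f : A^n -> C ∪ {?}
# for A = {0, 1} and C = {"000", "111"}; None plays the role of "?".
def majority_rule(x: str):
    """Decode a length-3 binary word by majority vote."""
    ones = x.count("1")
    zeros = len(x) - ones
    if ones > zeros:
        return "111"
    if zeros > ones:
        return "000"
    return None  # a tie: declare a decoding error ("?")

# e.g. majority_rule("100") == "000", majority_rule("110") == "111"
```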
Two Sorts of Decision Rules
Goal: maximize the probability of correct decoding.
Code alphabet: {0, 1}. Code: {0000, 1111}.
Channel: P(0→0) = 90%, P(0→1) = 10%, P(1→0) = 11%, P(1→1) = 89%.
If 0111 is received, how would you interpret it?
If 0011 is received, how would you interpret it?
But if you knew that 1111 is sent 99% of the time, how would you interpret 0011?
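For a sense of scale (computed from the transition probabilities above, assuming the channel corrupts symbols independently):

```latex
P(0111 \mid 0000) = 0.90 \cdot 0.10^3 = 0.0009, \qquad
P(0111 \mid 1111) = 0.11 \cdot 0.89^3 \approx 0.0776,
```

so 0111 looks far more like a corrupted 1111 than a corrupted 0000.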
Which Rule Is Better?
Ideal observer decision rule vs. maximum likelihood decision rule.
Advantages: the ideal observer maximizes the probability of decoding correctly, but it needs to know how likely each codeword is to be sent; maximum likelihood needs only the channel's transition probabilities, so it can be applied when the input distribution is unknown.
Ideal Observer Decision Rule
[Diagram: a received word x connected to codewords c1, c2, c3 with backward probabilities 25%, 40%, 35%; in general, x connects to c1, …, cM with probabilities P(c1 sent | x received), …, P(cM sent | x received).]
The ideal observer decision rule decodes a received word x as a codeword c with maximal probability P(c sent | x received).
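For reference (a standard identity the slide relies on), the backward probability can be computed from the forward probabilities and the input distribution by Bayes' theorem:

```latex
P(c \text{ sent} \mid x \text{ received})
  \;=\; \frac{P(x \text{ received} \mid c \text{ sent})\,P(c \text{ sent})}
             {\sum_{c' \in C} P(x \text{ received} \mid c' \text{ sent})\,P(c' \text{ sent})}
```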
Maximum Likelihood Decision Rule
[Diagram: the same picture, now with forward probabilities 50%, 80%, 70%, i.e. P(x received | c1 sent), …, P(x received | cM sent).]
The maximum likelihood decision rule decodes a received word x as the codeword c = f(x) with maximal probability P(x received | c sent).
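A minimal sketch of the rule in code (illustrative; the `channel` dictionary of per-symbol transition probabilities is an assumption of the sketch, as is decoding ties to None):

```python
# Illustrative sketch: maximum likelihood decoding over a finite codebook.
# `channel` maps (sent_symbol, received_symbol) to a transition probability.
def likelihood(received: str, sent: str, channel: dict) -> float:
    """P(received | sent), assuming per-symbol independent noise."""
    prob = 1.0
    for s, r in zip(sent, received):
        prob *= channel[(s, r)]
    return prob

def ml_decode(x: str, code, channel: dict):
    """Return the codeword c maximizing P(x | c), or None ("?") on a tie."""
    scores = {c: likelihood(x, c, channel) for c in code}
    best = max(scores.values())
    winners = [c for c, s in scores.items() if s == best]
    return winners[0] if len(winners) == 1 else None
```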
One Rule as a Special Case of the Other
[Diagram: the forward probabilities 50%, 80%, 70% from the previous slide.]
Assume all codewords are equally likely to be sent. What would the backward probabilities be then?
Theorem 4.2.2. For the uniform input distribution, ideal observer decoding coincides with maximum likelihood decoding.
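The one-line reason (filling in the step the slide asks about): with the uniform input distribution P(c) = 1/M, Bayes' theorem gives

```latex
P(c \mid x) \;=\; \frac{P(x \mid c)\,(1/M)}{P(x)} \;\propto\; P(x \mid c),
```

so the codeword maximizing the backward probability is exactly the codeword maximizing the forward probability.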
Example 4.2.1
Suppose codewords of C = {000, 111} are sent over a binary symmetric channel with crossover probability p = 0.01. If the string 100 is received, how should it be decoded by the maximum likelihood decision rule?
P(100 received | 000 sent) = ___
P(100 received | 111 sent) = ___
Would the same necessarily be the case under ideal observer decoding?
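For reference, the two likelihoods work out to

```latex
P(100 \mid 000) = p(1-p)^2 = 0.01 \cdot 0.99^2 \approx 0.0098, \qquad
P(100 \mid 111) = (1-p)p^2 = 0.99 \cdot 0.01^2 \approx 0.000099,
```

so the maximum likelihood rule decodes 100 as 000. Under ideal observer decoding the answer also depends on how often each codeword is sent, so it would not necessarily be the same.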
Sometimes the Two Rules Yield Different Results
Channel: P(0→0) = 95%, P(0→1) = 5%, P(1→0) = 6%, P(1→1) = 94%.
Assume: C = {00, 11}; 11 is sent 70% of the time; 01 is received.
How would 01 be decoded by the maximum likelihood rule? As ___.
How would 01 be decoded by the ideal observer rule? As ___.
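For reference, reading the transition probabilities as above (the row sums force P(0→1) = 5% and P(1→0) = 6%), the likelihoods are

```latex
P(01 \mid 00) = 0.95 \cdot 0.05 = 0.0475, \qquad
P(01 \mid 11) = 0.06 \cdot 0.94 = 0.0564.
```

The maximum likelihood rule compares these two numbers directly, while the ideal observer first weights them by the priors P(00 sent) and P(11 sent); it is this extra weighting that can pull the two decodings apart.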
Handling Ties
In the case of ties, it is reasonable for a decision rule to declare an error.
Suppose codewords of C = {0000, 1111} are sent over a binary symmetric channel with crossover probability p = ¼. How should the following strings be decoded by the maximum likelihood decision rule?
0000
1011
0011
P(0011 received | 0000 sent) = ___
P(0011 received | 1111 sent) = ___
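For reference, the last two likelihoods come out equal:

```latex
P(0011 \mid 0000) = \left(\tfrac{3}{4}\right)^2\left(\tfrac{1}{4}\right)^2
  = \tfrac{9}{256} = P(0011 \mid 1111),
```

so 0011 is a tie and decodes to ?, while 0000 and 1011 decode to 0000 and 1111 respectively.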
Hamming Distance and Nearest Neighbor Decoding
Definition. Let x and y be two strings of the same length over the same alphabet. The Hamming distance between x and y, denoted d(x, y), is defined to be the number of places in which x and y differ.
E.g.: d(000, 100) = 1, d(111, 100) = 2.
The decision rule assigning to a received word the closest codeword (in Hamming distance) is called the nearest neighbor decision rule.
Theorem 4.3.2. For a binary symmetric channel (with crossover probability p < ½), the maximum likelihood decision rule is equivalent to the nearest neighbor decision rule.
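A minimal sketch of nearest neighbor decoding (illustrative; as before, ties decode to None, playing the role of ?):

```python
# Illustrative sketch: nearest neighbor decoding over a finite codebook.
def hamming(x: str, y: str) -> int:
    """Number of positions in which x and y differ."""
    return sum(a != b for a, b in zip(x, y))

def nearest_neighbor(x: str, code=("0000", "1111")):
    """Return the unique closest codeword, or None ("?") on a tie."""
    dists = {c: hamming(x, c) for c in code}
    best = min(dists.values())
    winners = [c for c, d in dists.items() if d == best]
    return winners[0] if len(winners) == 1 else None

# nearest_neighbor("1011") == "1111"; nearest_neighbor("0011") is None (a tie).
```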
Exercise 7 of Section 4.3
Construct a binary channel for which maximum likelihood decoding is not the same as nearest neighbor decoding.
Channel: P(0→0) = 10%, P(0→1) = 90%, P(1→0) = 50%, P(1→1) = 50%.
Let C = {001, 011} and assume 000 is received.
P(000 received | 001 sent) = ___
P(000 received | 011 sent) = ___
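For reference, with the transition probabilities read as above:

```latex
P(000 \mid 001) = 0.1 \cdot 0.1 \cdot 0.5 = 0.005, \qquad
P(000 \mid 011) = 0.1 \cdot 0.5 \cdot 0.5 = 0.025,
```

so maximum likelihood decodes 000 as 011, even though 001 is the nearer codeword (d(000, 001) = 1 while d(000, 011) = 2).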
Homework
Exercises 2, 3, 4, 5 of Section 4.2. Exercises 1, 2, 3 of Section 4.3.