2 by 2 MIMO • Now consider that we have two transmitting antennas and two receiving antennas. • A simple scheme called "V-BLAST": send independent data symbols over the transmitting antennas as well as over time.
MIMO • The MIMO receiver gets two samples per time slot, one per receiving antenna: y1 = h11 x1 + h12 x2 + w1 and y2 = h21 x1 + h22 x2 + w2, where hij is the channel coefficient from Tx ant j to Rx ant i. • How to decode the data?
MIMO receiver • The simplest receiver just does a matrix inversion: writing the received samples as y = Hx + w, it computes H^(-1) y and slices each entry to the nearest symbol (sketched below). • This is NOT the optimal decoder! The maximum likelihood decoder is better.
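Here is a minimal numpy sketch of the matrix-inversion receiver for the 2 by 2 case; the channel coefficients, symbols, and noise level are made up for illustration, and a real system would also need channel estimation.

```python
import numpy as np

# Hypothetical 2x2 channel: H[i, j] is the coefficient from Tx antenna j to Rx antenna i.
H = np.array([[0.9 + 0.1j, 0.3 - 0.2j],
              [0.2 + 0.4j, 1.1 + 0.0j]])

x = np.array([1 + 1j, -1 + 1j])                             # two independent symbols (V-BLAST)
w = 0.01 * (np.random.randn(2) + 1j * np.random.randn(2))   # small receiver noise
y = H @ x + w                                               # two received samples in one time slot

# Matrix-inversion receiver: invert H, then slice each entry to the nearest symbol.
x_hat = np.linalg.inv(H) @ y
x_sliced = np.sign(x_hat.real) + 1j * np.sign(x_hat.imag)
print(x_sliced)                                             # recovers (1+1j, -1+1j) at this noise level
```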
Beyond 2 by 2 • (Section 7.1.1 of Tse's book.) Let's say we have nt transmitting antennas and nr receiving antennas. The received signal is y = Hx + w, where y is an nr by 1 vector, H is an nr by nt matrix, and x is an nt by 1 vector. • Again, hij is the channel coefficient from Tx ant j to Rx ant i.
Beyond 2 by 2 • Every matrix has a Singular Value Decomposition (SVD): H = U Λ V*, where U is an nr by nr unitary matrix, V is an nt by nt unitary matrix, and Λ is an nr by nt matrix whose diagonal elements are nonnegative and whose off-diagonal elements are zero. • Unitary matrix: a complex matrix whose product with its conjugate transpose is the identity matrix.
Beyond 2 by 2 • So, at the sender side, we send x = V x̃ (the data symbols x̃ rotated by V). • At the receiver side, we process the received vector: ỹ = U* y. • And we will get ỹ = Λ x̃ + w̃, where w̃ = U* w. • And the Λ matrix is diagonal, meaning that you will have min{nr, nt} independent channels.
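A small numpy sketch of this SVD-based scheme (noise omitted for clarity; the 2 by 2 random channel is just an example):

```python
import numpy as np

nr, nt = 2, 2
H = (np.random.randn(nr, nt) + 1j * np.random.randn(nr, nt)) / np.sqrt(2)

# SVD: H = U @ diag(s) @ Vh, with U (nr x nr) and Vh (nt x nt) unitary.
U, s, Vh = np.linalg.svd(H)

x_tilde = np.array([1 + 0j, -1 + 0j])       # one symbol per parallel channel
x = Vh.conj().T @ x_tilde                   # sender: x = V x_tilde
y = H @ x                                   # channel (noise omitted)
y_tilde = U.conj().T @ y                    # receiver: y_tilde = U* y

print(np.allclose(y_tilde, s * x_tilde))    # True: each stream sees only its own singular value
```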
Error Control Code • Widely used in many areas, like communications, DVDs, data storage… • In communications, because of noise, you can never be sure that a received bit is right. • In the physical layer, what you do is: given k data bits, add n-k redundant bits to make an n-bit codeword. You send the codeword to the receiver. If some bits in the codeword are wrong, the receiver should be able to do some calculation and find out • that there is something wrong, • or which bits are wrong (for binary codes, this is enough), • or which symbols should be corrected to which values (for non-binary codes). • (This is called a block code.)
Error Control Codes • You want a code to • use as few redundant bits as possible, • and detect or correct as many error bits as possible.
Error Control Code • The repetition code is the simplest, but it requires a lot of redundant bits, and its error correction power is questionable for the number of extra bits used. • A checksum does not require a lot of redundant bits, but it can only tell you that "something is wrong" and cannot tell you what is wrong.
(7,4) Hamming Code • The best example for introductory purposes, and it is also used in many applications. • (7,4) Hamming code: given 4 information bits (i0,i1,i2,i3), code them into 7 bits C=(c0,c1,c2,c3,c4,c5,c6). The first four bits are just copies of the information bits, e.g., c0=i0. Then produce three parity checking bits c4, c5, and c6 as (additions are in the binary field) • c4=i0+i1+i2 • c5=i1+i2+i3 • c6=i0+i1+i3 • For example, (1,0,1,1) is coded to (1,0,1,1,0,0,0). • It is capable of correcting any single-bit error.
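A sketch of this encoder in Python, written directly from the parity equations above (the function name is my own choice):

```python
def hamming74_encode(i0, i1, i2, i3):
    """Encode 4 information bits into a (7,4) Hamming codeword (c0, ..., c6)."""
    c4 = (i0 + i1 + i2) % 2    # parity bits: additions in the binary field
    c5 = (i1 + i2 + i3) % 2
    c6 = (i0 + i1 + i3) % 2
    return (i0, i1, i2, i3, c4, c5, c6)

print(hamming74_encode(1, 0, 1, 1))    # (1, 0, 1, 1, 0, 0, 0), matching the example
```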
Generator matrix • Matrix representation: C = iG, where i = (i0,i1,i2,i3). • For the parity equations above, G is the 4 by 7 matrix whose rows are 1000101, 0100111, 0010110, 0001011 (a 4 by 4 identity block followed by the parity columns). • G is called the generator matrix.
Parity check matrix • Stacking the parity block of G on top of a 3 by 3 identity gives the 7 by 3 parity check matrix H, whose rows are 101, 111, 110, 011, 100, 010, 001. • It can be verified that CH = (0,0,0) for every codeword C.
Error Correction • What you receive is R = C + E. You multiply R by H: S = RH = (C+E)H = CH + EH = EH. S is called the syndrome. If there is only one `1' in E, S will be one of the rows of H. Because each row is unique, you know which bit of E is `1'. • The decoding scheme is: • compute the syndrome; • if S = (0,0,0), do nothing; if S != (0,0,0), flip the bit of R whose row of H matches S (see the sketch below).
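A sketch of syndrome decoding in Python, using the H written out above; this corrects any single flipped bit.

```python
import numpy as np

# 7x3 parity check matrix: rows 0-3 record how i0..i3 enter (c4, c5, c6); rows 4-6 are identity.
H = np.array([[1, 0, 1],
              [1, 1, 1],
              [1, 1, 0],
              [0, 1, 1],
              [1, 0, 0],
              [0, 1, 0],
              [0, 0, 1]])

def hamming74_decode(R):
    """Syndrome decoding: compute S = RH, and if S is nonzero, flip the matching bit."""
    R = np.array(R)
    S = R @ H % 2
    if S.any():
        err = next(i for i in range(7) if (H[i] == S).all())   # the unique matching row
        R[err] ^= 1
    return tuple(R)

print(hamming74_decode([1, 0, 1, 1, 0, 1, 0]))   # bit c5 was flipped -> (1, 0, 1, 1, 0, 0, 0)
```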
How G is chosen • How is G chosen so that it can correct one error? • Any combination of the row vectors of G has weight at least 3 (at least three `1's), so every nonzero codeword has weight at least 3. • The sum of any two codewords is still a codeword, so the distance between any two codewords (the number of bits in which they differ) is also at least 3. • So if one bit is wrong, the result won't be confused with another codeword.
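The minimum weight claim can be checked by brute force over all 16 codewords of the (7,4) code:

```python
from itertools import product

def encode(i0, i1, i2, i3):
    return (i0, i1, i2, i3,
            (i0 + i1 + i2) % 2, (i1 + i2 + i3) % 2, (i0 + i1 + i3) % 2)

# Weight of every nonzero codeword; the minimum is 3, so the minimum distance is 3.
weights = [sum(encode(*bits)) for bits in product((0, 1), repeat=4) if any(bits)]
print(min(weights))    # 3
```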
The existence of H • We didn't compare a received vector with all codewords; we used H. • The existence of H is no coincidence (it takes some basic linear algebra). Let Ω be the space of all 7-bit vectors. The codeword space is the subspace of Ω spanned by the row vectors of G. There must be a subspace orthogonal to the codeword space, spanned by 3 vectors, and these are the column vectors of H.
Linear Block Code • The Hamming code is a linear block code. A linear block code means that the codeword is generated by multiplying the message vector with the generator matrix. • The minimum weight should be as large as possible. If the minimum weight is 2t+1, the code is capable of detecting 2t error bits and correcting t error bits.
Cyclic Codes • The Hamming code is useful, but there exist codes that offer the same (if not greater) error control capability while being much simpler to implement. • A cyclic code is a linear code in which any cyclic shift of a codeword is still a codeword. • This makes encoding/decoding much simpler: no need for matrix multiplication.
Cyclic code • Polynomial representation of cyclic codes; here, assume all coefficients are either 0 or 1. • That is, if your code is (1010011) (c6 first, c0 last), you write it as C(x) = x^6 + x^4 + x + 1. • Addition and subtraction of polynomials: done by doing binary addition or subtraction on each coefficient individually, with no carry and no borrow. • Division and multiplication of polynomials: try dividing x^6 + x^4 + x + 1 by x^3 + x + 1.
Cyclic Code • An (n,k) cyclic code can be generated by a polynomial g(x) which has degree n-k and is a factor of x^n - 1. Call it the generator polynomial. • Given message bits (mk-1, …, m1, m0), form m(x) = mk-1 x^(k-1) + … + m1 x + m0; the code is generated simply as C(x) = m(x) g(x). • In other words, C(x) can be considered as the product of m(x) and g(x).
Example • A (7,4) cyclic code with g(x) = x^3 + x + 1. • If m(x) = x^3 + 1, then C(x) = x^6 + x^4 + x + 1.
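This product can be checked with a few lines of Python. Here binary polynomials are represented as integer bit masks (bit i is the coefficient of x^i), which is a common trick rather than anything from the slides:

```python
def gf2_mul(a, b):
    """Multiply two binary polynomials given as bit masks (bit i = coefficient of x^i)."""
    result = 0
    while b:
        if b & 1:
            result ^= a        # polynomial addition is XOR: no carries
        a <<= 1
        b >>= 1
    return result

g = 0b1011                     # g(x) = x^3 + x + 1
m = 0b1001                     # m(x) = x^3 + 1
print(bin(gf2_mul(m, g)))      # 0b1010011, i.e. C(x) = x^6 + x^4 + x + 1
```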
Cyclic Code • One way of thinking about it is to write out the generator matrix: its rows are the coefficient vectors of x^3 g(x), x^2 g(x), x g(x), and g(x), i.e., 1011000 and its shifts. • So, clearly, it is a linear code. Each row of the generator matrix is just a shifted version of the first row, unlike the Hamming code. • Why is it a cyclic code?
Example • The cyclic shift of C(x) = x^6 + x^4 + x + 1 is C1(x) = x^5 + x^2 + x + 1. • It is still a code polynomial, because it is the code polynomial for m(x) = x^2 + 1.
Cyclic Code • Given a code polynomial C(x) = cn-1 x^(n-1) + … + c1 x + c0, • we have x C(x) = cn-1 (x^n - 1) + C1(x), where C1(x) = cn-2 x^(n-1) + … + c0 x + cn-1. • C1(x) is the cyclic shift of C(x), and it (1) has degree no more than n-1 and (2) is divisible by g(x) (why? g(x) divides both x C(x) and x^n - 1), hence it is a code polynomial.
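As a concrete check of this argument, the shifted polynomial from the earlier example divides by g(x) with zero remainder (again using the bit-mask representation; gf2_divmod is my own helper):

```python
def gf2_divmod(dividend, divisor):
    """Long division of binary polynomials given as bit masks; returns (quotient, remainder)."""
    dlen = divisor.bit_length()
    quotient = 0
    while dividend.bit_length() >= dlen:
        shift = dividend.bit_length() - dlen
        quotient |= 1 << shift
        dividend ^= divisor << shift     # subtraction in the binary field is XOR
    return quotient, dividend

g  = 0b1011                              # g(x) = x^3 + x + 1
c1 = 0b100111                            # C1(x) = x^5 + x^2 + x + 1, the cyclic shift
print(gf2_divmod(c1, g))                 # (5, 0): quotient 0b101 = x^2 + 1, remainder 0
```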
Cyclic Code • So, to generate a cyclic code, find a polynomial that (1) has degree n-k and (2) is a factor of x^n - 1.
Generating Systematic Cyclic Code • A systematic code means that the first k bits are the data bits and the remaining n-k bits are parity checking bits. • To generate it, you let C(x) = x^(n-k) m(x) + r(x), where r(x) is the remainder of dividing x^(n-k) m(x) by g(x). • The claim is that C(x) must be divisible by g(x), hence it is a code polynomial. The idea is the same as: 33 mod 7 = 5, hence 33 - 5 = 28 can be divided by 7.
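A sketch of systematic encoding for the (7,4) code with g(x) = x^3 + x + 1, again with polynomials as bit masks (the helper names are my own):

```python
def gf2_mod(dividend, divisor):
    """Remainder of binary polynomial division (bit-mask representation)."""
    dlen = divisor.bit_length()
    while dividend.bit_length() >= dlen:
        dividend ^= divisor << (dividend.bit_length() - dlen)
    return dividend

def systematic_encode(m, g, n_minus_k):
    """C(x) = x^(n-k) m(x) + r(x), where r(x) is the remainder of x^(n-k) m(x) mod g(x)."""
    shifted = m << n_minus_k
    return shifted ^ gf2_mod(shifted, g)   # adding the remainder makes C(x) divisible by g(x)

g = 0b1011                      # g(x) = x^3 + x + 1
m = 0b1001                      # message bits 1001, i.e. m(x) = x^3 + 1
c = systematic_encode(m, g, 3)
print(format(c, '07b'))         # '1001110': the 4 data bits followed by 3 parity bits
print(gf2_mod(c, g))            # 0: the codeword is divisible by g(x)
```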
Division Circuit • Division of polynomials can be done efficiently in hardware by the division circuit (just know that such a thing exists; no need to understand it).
Remaining Questions for Those Really Interested • Decoding: divide the received polynomial by g(x). If there is no error, you should get a remainder of 0 (why?). For detection to work, the error polynomial must not be divisible by g(x). • How do you choose a good g(x) to make the minimum distance larger? It turns out that to learn this you have to study more: this leads to the BCH codes.
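A quick check of this decoding rule, reusing the same bit-mask representation and the systematic codeword from the previous sketch: an intact codeword leaves remainder 0, while a single flipped bit leaves a nonzero remainder.

```python
def gf2_mod(dividend, divisor):
    """Remainder of binary polynomial division (bit-mask representation)."""
    dlen = divisor.bit_length()
    while dividend.bit_length() >= dlen:
        dividend ^= divisor << (dividend.bit_length() - dlen)
    return dividend

g = 0b1011                         # g(x) = x^3 + x + 1
c = 0b1001110                      # systematic codeword from the previous sketch
print(gf2_mod(c, g))               # 0: no error detected
print(gf2_mod(c ^ (1 << 4), g))    # 6 (nonzero): a single bit error is detected
```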
Cyclic code used in IEEE 802 • g(x) = x^32 + x^26 + x^23 + x^22 + x^16 + x^12 + x^11 + x^10 + x^8 + x^7 + x^5 + x^4 + x^2 + x + 1 • This code detects • all single and double bit errors, • all errors with an odd number of bits, • all burst errors of length 32 or less.
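This degree-32 polynomial is the generator behind the common CRC-32, which Python exposes through zlib.crc32 (computed in bit-reversed form), so its error detection is easy to demonstrate; the payload bytes below are arbitrary:

```python
import zlib

data = b"an arbitrary 802 frame payload"
crc = zlib.crc32(data)

corrupted = bytearray(data)
corrupted[3] ^= 0x01                          # flip a single bit
print(crc == zlib.crc32(bytes(corrupted)))    # False: the CRC catches the error
print(crc == zlib.crc32(data))                # True: the intact data passes the check
```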
Other codes • Reed-Solomon (RS) code: a block code, used in CDs, DVDs, and HDTV transmission. • LDPC code: also a block code; rediscovered some 40 years after it was first proposed; proposed for use in 802.11n; achieves performance close to the Shannon bound. • Trellis code: not a block code; more closely coupled with modulation. • Turbo code: achieves performance close to the Shannon bound.