Parity Augmentation An Alternative Approach to LDPC Decoding
Background • LDPC codes, developed by Robert Gallager, are among the best codes for error-correction coding • However, the decoding process can be long and involved, and can take more iterations than are feasible for data that needs to be transmitted quickly
Alternative Algorithms • Sinkhorn's algorithm is a process in which a matrix is made doubly stochastic (all rows and columns sum to 1) by repeatedly normalizing its rows and columns • Sinkhorn worked much better than belief propagation when solving Sudokus, so the question arose as to whether Sinkhorn would also surpass belief propagation for LDPC decoding
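The Sinkhorn normalization described above can be sketched in a few lines; this is a minimal illustration (the matrix `A` and the iteration count are made up for the example, not taken from the talk):

```python
import numpy as np

def sinkhorn(M, iters=100):
    """Alternately normalize rows and columns until the matrix
    is (approximately) doubly stochastic: every row and every
    column sums to 1."""
    M = M.astype(float).copy()
    for _ in range(iters):
        M /= M.sum(axis=1, keepdims=True)  # rows sum to 1
        M /= M.sum(axis=0, keepdims=True)  # columns sum to 1
    return M

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
S = sinkhorn(A)
```

Each pass of column normalization disturbs the row sums and vice versa, but for a strictly positive matrix the alternation converges to a doubly stochastic limit.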
The key to solving Sudoku with Sinkhorn was "math by wish fulfillment": the matrices were assumed to be doubly stochastic and were altered to satisfy those constraints • Similarly, Parity Augmentation does "math by wish fulfillment": it assumes each parity check is satisfied and alters the bit probabilities to push them toward that constraint
Basis behind Parity Augmentation • When decoding using a parity-check matrix H, the message is considered decoded if all the parities check, that is, if the decoded word lies in the nullspace of H. • Therefore, by maximizing the probability that each row's parity checks, even parity can eventually be reached
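The stopping condition above (the decoded word lies in the nullspace of H) is just a syndrome check over GF(2). A minimal sketch, using a toy parity-check matrix invented for the example:

```python
import numpy as np

# Hypothetical toy parity-check matrix (not from the talk).
H = np.array([[1, 1, 0, 1, 0],
              [0, 1, 1, 0, 1],
              [1, 0, 1, 1, 1]])

def parities_check(H, x):
    """All parities check iff H x = 0 over GF(2),
    i.e. x lies in the nullspace of H."""
    return not np.any(H @ x % 2)

codeword = np.array([1, 1, 0, 0, 1])   # H @ codeword = 0 (mod 2)
corrupted = np.array([1, 0, 0, 0, 1])  # one bit flipped
```

Flipping a single bit of a codeword makes at least one parity fail, which is what the decoder iterates to repair.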
Equations galore • By Gallager's lemma, if bit $i$ equals 1 with probability $p_i$, the probability that $n$ bits sum to 0 in GF(2) is $P_E = \frac{1}{2}\left(1 + \prod_{i=1}^{n}(1 - 2p_i)\right)$ • Similarly, the probability that the $n$ bits sum to 1 is $P_O = \frac{1}{2}\left(1 - \prod_{i=1}^{n}(1 - 2p_i)\right)$
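Gallager's lemma translates directly into code. A small sketch of the two probabilities for a vector of per-bit probabilities `p`:

```python
import numpy as np

def prob_even(p):
    """P(bits sum to 0 in GF(2)) = (1 + prod(1 - 2 p_i)) / 2."""
    return 0.5 * (1.0 + np.prod(1.0 - 2.0 * np.asarray(p)))

def prob_odd(p):
    """P(bits sum to 1 in GF(2)) = (1 - prod(1 - 2 p_i)) / 2."""
    return 0.5 * (1.0 - np.prod(1.0 - 2.0 * np.asarray(p)))

p = [0.3, 0.6, 0.8]
```

Sanity checks: two bits that are each surely 0 have even parity with probability 1, and the two probabilities always sum to 1.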
The Update Step • For a single bit probability $p_i$, alter it by multiplying it by the probability that the other bits sum to 1 and normalizing by $P_E$. Notice that if the extrinsic probability > $P_E$, then the bit is more likely to be 1 than 0, since the bit and the sum of the other bits must sum to 0. • Finally, find the new probability of even parity
The success of this algorithm depends on the probability of even parity increasing at each successive step. It does. • Proof
How does it compare? • As a rule, Parity Augmentation is faster, less complex, and takes fewer iterations to achieve success than the Gallager decoder. • However, Parity Augmentation tends to perform slightly worse, perhaps because the increase in even parity may be minuscule.
Decoding Performance for a rate-½ code • Notice how Parity Augmentation reaches a limit where more iterations don't improve its performance, while the Gallager decoder continues to improve with each successive iteration
Therefore, what? • The algorithm generally stopped working when there were too many probabilities, or when they hovered around 0.5, so if that kind of situation could be avoided or remedied, the performance would improve • The lemma that took For.Ev.Er to figure out may prove useful in other settings where even parity is the goal.
Optimization • Another possible way to decode is through gradient descent: the individual values are changed by small amounts, so as to prevent too-large jumps and instead steer the probabilities in the right direction • Or, in mathematical terms, $p_i \leftarrow p_i + \mu \,\partial P_E / \partial p_i$, where by Gallager's lemma $\partial P_E / \partial p_i = -\prod_{j \neq i}(1 - 2p_j)$ and $\mu$ is a small step size
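One way this gradient step could look in code; a sketch under my own assumptions (the step size `mu` and the clipping to [0, 1] are illustrative choices, not specified in the talk):

```python
import numpy as np

def prob_even(p):
    return 0.5 * (1.0 + np.prod(1.0 - 2.0 * np.asarray(p)))

def grad_step(p, mu=0.05):
    """One small gradient step increasing P_E for a single check.
    By Gallager's lemma, dP_E/dp_i = -prod_{j != i}(1 - 2 p_j)."""
    p = np.asarray(p, dtype=float)
    grad = np.array([-np.prod(np.delete(1.0 - 2.0 * p, i))
                     for i in range(len(p))])
    # Keep the values valid probabilities after the nudge.
    return np.clip(p + mu * grad, 0.0, 1.0)

q0 = np.array([0.3, 0.6, 0.8])
q1 = grad_step(q0)
```

Because each coordinate moves by only `mu` times its partial derivative, the probabilities drift toward even parity rather than jumping, which matches the motivation above.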
Results of Gradient Optimization • Has a slightly worse probability of error than Parity Augmentation or the Gallager decoder, but takes comparable time to Parity Augmentation (in other words, faster than the Gallager decoder) • Takes more iterations than either • So . . . it needs some tweaking.
Conclusion • LDPC decoding is an exploding area of study. Figuring out fast, efficient, and effective ways to decode is quite the way to spend your summer