A Low-Complexity Improved Successive Cancellation Decoder for Polar Codes

Orion Afisiadis, Alexios Balatsoukas-Stimming, and Andreas Burg
Telecommunications Circuits Laboratory, École Polytechnique Fédérale de Lausanne, Switzerland

arXiv:1412.5501v1 [cs.IT] 17 Dec 2014

Abstract—Under successive cancellation (SC) decoding, polar codes are inferior to other codes of similar blocklength in terms of frame error rate. While more sophisticated decoding algorithms such as list or stack decoding partially mitigate this performance loss, they suffer from an increase in complexity. In this paper, we describe a new flavor of the SC decoder, called the SC flip decoder. Our algorithm preserves the low memory requirements of the basic SC decoder and adjusts the required decoding effort to the signal quality. In the waterfall region, its average computational complexity is almost as low as that of the SC decoder.

I. INTRODUCTION

Polar codes [1] are particularly attractive from a theoretical point of view because they are the first codes that are both highly structured and provably optimal for a wide range of applications (in the sense of optimality that pertains to each application). Moreover, they can be decoded using an elegant, albeit suboptimal, successive cancellation (SC) algorithm, which has computational complexity O(N log N) [1], where N = 2^n, n ∈ Z, is the blocklength of the code, and memory complexity O(N) [2]. Even though the SC decoder is suboptimal, it is sufficient to prove that polar codes are capacity achieving in the limit of infinite blocklength.

Unfortunately, the error correcting performance of SC decoding at finite blocklengths is not as good as that of other modern codes, such as LDPC codes. To improve the finite blocklength performance, more sophisticated algorithms, such as SC list decoding [3] and SC stack decoding [4], were introduced recently. These algorithms use SC as the underlying decoder, but improve its performance by exploring multiple paths on a decision tree simultaneously, with each path resulting in one candidate codeword. The computational and memory complexities of SC list decoding are O(LN log N) and O(LN), respectively, where L is the list size parameter, whereas the computational and memory complexities of SC stack decoding are O(DN log N) and O(DN), respectively, where D is the stack depth parameter.

Since an exhaustive search through all paths is prohibitively complex, choosing a suitable strategy for pruning unlikely paths is an important ingredient for low-complexity tree search algorithms. To this end, in [4], some path pruning-based methods were proposed in order to reduce the computational complexity of both SC stack and SC list decoding. An alternative approach to reduce the computational complexity of SC list decoding was taken in [5], [6], where decoding starts with list size 1, and the list size is increased only when decoding fails (failures are detected using a CRC), up to the maximum list size L. Moreover, in [7] SC list decoding is employed only for the least reliable bits of the polar code, thus also reducing the computational complexity. However, in [7] L distinct paths are still followed in parallel.

Unfortunately, when implementing any decoder in hardware, one always has to provision for the worst case in terms of hardware resources. For the reduced-complexity SC list decoders in [4]–[7] and the reduced-complexity SC stack decoder in [4], this means that O(LN) and O(DN) memory needs to be instantiated, respectively. Moreover, the reduced-complexity SC list and SC stack algorithms also have a significantly higher computational complexity than that of the original SC algorithm.

Contribution: In this paper, we describe a new SC-based decoding algorithm, called SC flip, which retains the O(N) memory complexity of the original SC algorithm and has an average computational complexity that is practically O(N log N) at high SNR, while still providing a significant gain in terms of error correcting performance.
II. POLAR CODES AND SUCCESSIVE CANCELLATION DECODING

A. Construction of Polar Codes

Let W denote a binary input memoryless channel with input u ∈ {0, 1}, output y ∈ Y, and transition probabilities W(y|u). A polar code is constructed by recursively applying a 2 × 2 channel combining transformation on 2^n independent copies of W, followed by a channel splitting step [1]. This results in a set of N = 2^n synthetic channels, denoted by W_n^{(i)}(y_1^N, u_1^{i−1}|u_i), i = 1, ..., N. Let Z_i ≜ Z(W_n^{(i)}(Y_1^N, U_1^{i−1}|U_i)), i = 1, ..., N, where Z(W) is the Bhattacharyya parameter of W, which can be calculated using various methods (cf. [1], [8], [9]). The construction of a polar code of rate R ≜ k/N, 0 < k < N, is completed by choosing the k best synthetic channels (i.e., the synthetic channels with the lowest Z_i) as non-frozen channels, which carry information bits, while freezing the inputs of the remaining channels to some values u_i that are known both to the transmitter and to the receiver. The set of frozen channel indices is denoted by A^c and the set of non-frozen channel indices is denoted by A. The encoder generates a vector u_1^N by setting u_{A^c} equal to the known frozen values, while choosing u_A freely. A codeword is obtained as x_1^N = u_1^N G_N, where G_N is the generator matrix [1].
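As an illustration of the construction and encoding steps above, the sketch below handles the special case where W is a binary erasure channel (BEC), for which the Bhattacharyya parameters of the synthetic channels follow the exact recursion Z⁻ = 2Z − Z², Z⁺ = Z² [1]. The bit-reversal permutation that appears in some definitions of G_N is omitted for simplicity, and the function names are ours:

```python
def bhattacharyya_bec(n, eps):
    """Z_i of the N = 2^n synthetic channels of a BEC with erasure
    probability eps, via the exact recursion Z- = 2Z - Z^2, Z+ = Z^2."""
    z = [eps]
    for _ in range(n):
        # each channel splits into a degraded ("-") and an upgraded ("+") one
        z = [w for zi in z for w in (2 * zi - zi * zi, zi * zi)]
    return z


def choose_frozen_set(z, k):
    """Freeze the N - k channels with the largest Bhattacharyya parameter
    (0-based indices); the remaining k channels carry information bits."""
    order = sorted(range(len(z)), key=lambda i: z[i], reverse=True)
    return set(order[: len(z) - k])


def polar_encode(u):
    """x_1^N = u_1^N G_N over GF(2), computed with the usual butterfly
    structure (bit-reversal permutation omitted)."""
    x = list(u)
    step = 1
    while step < len(x):
        for i in range(0, len(x), 2 * step):
            for j in range(i, i + step):
                x[j] ^= x[j + step]
        step *= 2
    return x
```

For example, for n = 3 and eps = 0.5, choosing rate R = 1/2 freezes the indices {0, 1, 2, 4} (0-based). Since the Kronecker power of the 2 × 2 kernel is its own inverse over GF(2), applying polar_encode twice returns the original vector, which is a convenient sanity check.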
B. Successive Cancellation Decoding

The SC decoding algorithm [1] starts by computing an estimate of u_1, denoted by û_1, based only on the received values y_1^N. Subsequently, u_2 is estimated using (y_1^N, û_1), etc. Since the u_i, i ∈ A^c, are known to the receiver, the real task of SC decoding is to estimate u_i, i ∈ A. Let the log-likelihood ratio (LLR) for W_n^{(i)}(y_1^N, û_1^{i−1}|u_i) be defined as

$$L_n^{(i)}(y_1^N, \hat{u}_1^{i-1} | u_i) \triangleq \log \frac{W_n^{(i)}(y_1^N, \hat{u}_1^{i-1} | u_i = 0)}{W_n^{(i)}(y_1^N, \hat{u}_1^{i-1} | u_i = 1)}. \quad (1)$$

Decisions are taken according to

$$\hat{u}_i = \begin{cases} 0, & L_n^{(i)}(y_1^N, \hat{u}_1^{i-1} | u_i) \geq 0 \text{ and } i \in \mathcal{A}, \\ 1, & L_n^{(i)}(y_1^N, \hat{u}_1^{i-1} | u_i) < 0 \text{ and } i \in \mathcal{A}, \\ u_i, & i \in \mathcal{A}^c. \end{cases} \quad (2)$$

The decision LLRs L_n^{(i)}(y_1^N, û_1^{i−1}|u_i) can be calculated efficiently through a computation graph which contains two types of nodes, namely f nodes and g nodes. An example of this graph for N = 8 is given in Fig. 1. Both types of nodes have two input LLRs, denoted by L_1 and L_2, and one output LLR, denoted by L. The g nodes have an additional input called the partial sum, denoted by u. The partial sums form the decision feedback part of the SC decoder. The min-sum update rules [2] for the two types of nodes are

$$f(L_1, L_2) = \operatorname{sign}(L_1)\operatorname{sign}(L_2) \min(|L_1|, |L_2|), \quad (3)$$

$$g(L_1, L_2, u) = (-1)^u L_1 + L_2. \quad (4)$$

The partial sums at stage (s − 1) can be calculated from the partial sums at stage s, s ∈ {1, ..., n}, as

$$u_{s-1}^{(2i-1-[(i-1) \bmod 2^{s-1}])} = u_s^{(2i-1)} \oplus u_s^{(2i)}, \quad (5)$$

$$u_{s-1}^{(2^{s-1}+2i-1-[(i-1) \bmod 2^{s-1}])} = u_s^{(2i)}, \quad (6)$$

where

$$u_n^{(i)} \triangleq \hat{u}_i, \quad \forall i \in \{1, ..., N\}. \quad (7)$$

Fig. 1: The computation graph of the SC decoder for N = 8. The f nodes are green and the g nodes are blue, and in the parentheses are the partial sums that are used by each g node.

Fig. 2: Histogram showing the relative frequency of the number of errors caused by the channel for a polar code with N = 1024 and R = 0.5 for three different SNR values.
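The min-sum update rules (3) and (4) translate directly into code. A minimal sketch (our naming), where sign(0) is taken as +1, consistent with the decision rule (2):

```python
import math

def f_node(l1, l2):
    """f node, Eq. (3): sign(L1) sign(L2) min(|L1|, |L2|)."""
    return math.copysign(1.0, l1) * math.copysign(1.0, l2) * min(abs(l1), abs(l2))

def g_node(l1, l2, u):
    """g node, Eq. (4): (-1)^u L1 + L2, with partial sum u in {0, 1}."""
    return (1 - 2 * u) * l1 + l2
```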
The computation graph contains N(log N + 1) nodes and each node only needs to be activated once. Thus, the computational complexity of SC decoding is O(N log N). A straightforward implementation of the computation graph in Fig. 1 requires O(N log N) memory positions. However, by cleverly re-using memory locations, it is possible to reduce the memory complexity to O(N) [2].

For example, assume that, for the polar code in Fig. 1, the frozen set is A^c = {1, 2, 5, 6} and the information set is A = {3, 4, 7, 8}. Moreover, assume that the all-zero codeword was transmitted and that û_3 was erroneously decoded as û_3 = 1 due to channel noise. Now suppose that the two LLRs that are
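Putting the pieces together, SC decoding over the computation graph of Fig. 1 can be sketched recursively: each level applies the f rule (3) to decode the first half of its input, then the g rule (4) with the decision feedback of the first half, and merges partial sums as in (5)–(7). This is an illustrative sketch in natural bit order (i.e., without the bit-reversal permutation), with frozen bits assumed to be zero; the function names are ours:

```python
import math

def sc_decode(channel_llrs, frozen):
    """Successive cancellation decoding sketch. channel_llrs has length
    N = 2^n; frozen is a set of 0-based frozen bit indices. Returns the
    estimated vector u_hat, with frozen positions forced to 0."""

    def f(a, b):  # Eq. (3), with sign(0) taken as +1
        return math.copysign(1.0, a) * math.copysign(1.0, b) * min(abs(a), abs(b))

    def g(a, b, u):  # Eq. (4)
        return (1 - 2 * u) * a + b

    def decode(llrs, offset):
        """Returns (u_hat of this subtree, its partial sums)."""
        if len(llrs) == 1:
            u = 0 if (offset in frozen or llrs[0] >= 0) else 1  # Eq. (2)
            return [u], [u]
        half = len(llrs) // 2
        # f stage: decode the first half before any feedback is available
        u_l, x_l = decode([f(llrs[i], llrs[i + half]) for i in range(half)],
                          offset)
        # g stage: decode the second half using the first half's partial sums
        u_r, x_r = decode([g(llrs[i], llrs[i + half], x_l[i]) for i in range(half)],
                          offset + half)
        # partial-sum update, cf. Eqs. (5) and (6)
        return u_l + u_r, [a ^ b for a, b in zip(x_l, x_r)] + x_r

    u_hat, _ = decode(list(channel_llrs), 0)
    return u_hat
```

With noiseless LLRs (correct signs everywhere) this sketch recovers u_1^N exactly, and each f and g node is activated once, matching the O(N log N) computational complexity stated above. Note that the example frozen set A^c = {1, 2, 5, 6} above corresponds to {0, 1, 4, 5} in the 0-based indexing used here.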
