arXiv:1412.5501v1 [cs.IT] 17 Dec 2014

A Low-Complexity Improved Successive Cancellation Decoder for Polar Codes

Orion Afisiadis, Alexios Balatsoukas-Stimming, and Andreas Burg
Telecommunications Circuits Laboratory, École Polytechnique Fédérale de Lausanne, Switzerland

Abstract—Under successive cancellation (SC) decoding, polar codes are inferior to other codes of similar blocklength in terms of frame error rate. While more sophisticated decoding algorithms, such as list- or stack-decoding, partially mitigate this performance loss, they suffer from an increase in complexity. In this paper, we describe a new flavor of the SC decoder, called the SC flip decoder. Our algorithm preserves the low memory requirements of the basic SC decoder and adjusts the required decoding effort to the signal quality. In the waterfall region, its average computational complexity is almost as low as that of the SC decoder.

I. INTRODUCTION

Polar codes [1] are particularly attractive from a theoretical point of view because they are highly structured and provably optimal for a wide range of applications, in the sense that their optimality pertains both to the code construction and to the decoding algorithm. Moreover, they can be decoded using an elegant, albeit suboptimal, successive cancellation (SC) algorithm, which has computational complexity O(N log N) [1], where N = 2^n, n ∈ Z, is the blocklength of the code, and memory complexity O(N) [2]. Even though the SC decoder is suboptimal, it is sufficient to prove that polar codes are capacity achieving in the limit of infinite blocklength.

Unfortunately, the error correcting performance of SC decoding at finite blocklengths is not as good as that of other modern codes, such as LDPC codes. To improve the finite blocklength performance, more sophisticated decoding algorithms, such as SC list decoding [3] and SC stack decoding [4], were introduced recently. These algorithms use SC as the underlying decoder, but improve its performance by exploring multiple paths of the decision tree simultaneously, with one candidate codeword per path. The computational and memory complexities of SC list decoding are O(LN log N) and O(LN), respectively, where L is the list size parameter, whereas the computational and memory complexities of SC stack decoding are O(DN log N) and O(DN), respectively, where D is the stack depth parameter.

Since an exhaustive search through all paths is prohibitively complex, choosing a suitable strategy for pruning unlikely paths is an important ingredient for low-complexity tree-search algorithms. To this end, some path pruning methods were proposed in [4] in order to reduce the computational complexity of both SC stack and SC list decoding. An alternative to the pruning-based approach was taken in [5], [6], where decoding starts with a small list size and the list size is increased only when decoding fails (failures are detected using a CRC), up to a maximum list size. Moreover, in [7] SC list decoding is employed only for the least reliable bits of the polar code, thus also reducing the computational complexity.

Unfortunately, when implementing any decoder in hardware, one always has to provision for the worst case in terms of hardware resources. For the reduced-complexity SC list and SC stack decoders of [4]–[7], this means that hardware for L distinct decoding paths, which are still followed in parallel, and for a stack of depth D needs to be instantiated, respectively. Moreover, the reduced-complexity SC list and SC stack algorithms also have a significantly higher computational complexity than that of the original SC algorithm.

Contribution: In this paper, we describe a new SC-based decoding algorithm, called SC flip, which retains the O(N) memory complexity of the original SC algorithm and has an average computational complexity that is practically O(N log N) at high SNR, while still providing a significant gain in terms of error correcting performance.

II. POLAR CODES AND SUCCESSIVE CANCELLATION DECODING

A. Construction of Polar Codes

Let W denote a binary input memoryless channel with input u ∈ {0, 1}, output y ∈ Y, and transition probabilities W(y|u). A polar code is constructed by recursively applying a channel combining transformation on N = 2^n independent copies of W, followed by a channel splitting step [1]. This results in a set of N synthetic channels, denoted by W_n^{(i)}(y_1^N, u_1^{i-1}|u_i), i = 1, ..., N. Let Z(W_n^{(i)}) denote the Bhattacharyya parameter of W_n^{(i)}, which can be calculated using various methods (cf. [1], [8], [9]). The construction of a polar code of rate R = k/N, 0 < k < N, is completed by choosing the k best synthetic channels (i.e., the synthetic channels with the lowest Z(W_n^{(i)})) as non-frozen channels, which carry information bits, while freezing the input of the remaining channels to some values u_i that are known both to the transmitter and to the receiver. The set of frozen channel indices is denoted by A^c and the set of non-frozen channel indices is denoted by A. The encoder generates a vector u_1^N by setting u_{A^c} equal to the known frozen values, while choosing u_A freely. A codeword is obtained as x_1^N = u_1^N G_N, where G_N is the generator matrix [1].
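To make this construction step concrete, the following minimal Python sketch (an illustration, not the construction used for the results in this paper) ranks the synthetic channels using the Bhattacharyya recursion for an erasure channel from [1] as the reliability estimate. The design erasure probability, the function name, and the natural (non-bit-reversed) 0-based index ordering are illustrative choices, so the resulting sets need not coincide with the example sets used later for Fig. 1.

def construct_polar_code(N, k, design_erasure_prob=0.5):
    # Bhattacharyya parameters of the N synthetic channels for a BEC,
    # obtained by recursively applying Z -> 2Z - Z^2 (worse channel)
    # and Z -> Z^2 (better channel), cf. [1]. Indices are 0-based.
    z = [design_erasure_prob]
    while len(z) < N:
        z = [2 * zi - zi * zi for zi in z] + [zi * zi for zi in z]
    # The k channels with the lowest Z(W_n^(i)) carry information bits (set A);
    # the remaining N - k channels are frozen (set A^c).
    order = sorted(range(N), key=lambda i: z[i])
    info_set = sorted(order[:k])
    frozen_set = sorted(order[k:])
    return info_set, frozen_set

# Example: a (very small) N = 8, k = 4 code.
A, A_c = construct_polar_code(8, 4)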
B. Successive Cancellation Decoding

The SC decoding algorithm [1] starts by computing an estimate of u_1, denoted by û_1, based only on the received values y_1^N. Subsequently, u_2 is estimated using (y_1^N, û_1), etc. Since u_i, i ∈ A^c, are known to the receiver, the real task of SC decoding is to estimate u_i, i ∈ A.
Let the log-likelihood ratio (LLR) for W_n^{(i)}(y_1^N, û_1^{i-1}|u_i) be defined as

L_n^{(i)}(y_1^N, û_1^{i-1}|u_i) ≜ log [ W_n^{(i)}(y_1^N, û_1^{i-1}|u_i = 0) / W_n^{(i)}(y_1^N, û_1^{i-1}|u_i = 1) ].   (1)

Decisions are taken according to

û_i = 0,   if L_n^{(i)}(y_1^N, û_1^{i-1}|u_i) ≥ 0 and i ∈ A,
û_i = 1,   if L_n^{(i)}(y_1^N, û_1^{i-1}|u_i) < 0 and i ∈ A,
û_i = u_i, if i ∈ A^c.   (2)

Fig. 1: The computation graph of the SC decoder for N = 8. The f nodes are green and the g nodes are blue, and the partial sums that are used by each g node are given in parentheses.

The decision LLRs L_n^{(i)}(y_1^N, û_1^{i-1}|u_i) can be calculated efficiently through a computation graph which contains two types of nodes, namely f nodes and g nodes. An example of this graph for N = 8 is given in Fig. 1. Both types of nodes have two input LLRs, denoted by L_1 and L_2, and one output LLR, denoted by L. The g nodes have an additional input, called the partial sum and denoted by u. The partial sums form the decision feedback part of the SC decoder. The min-sum update rules [2] for the two types of nodes are

f(L_1, L_2) = sign(L_1) sign(L_2) min(|L_1|, |L_2|),   (3)

g(L_1, L_2, u) = (-1)^u L_1 + L_2.   (4)
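For illustration, the following minimal Python sketch (not the implementation evaluated in this paper) implements the update rules (3)–(4) together with a straightforward recursive SC pass over the computation graph. It assumes the natural (non-bit-reversed) bit ordering, frozen bits set to zero, and none of the memory re-use tricks of [2].

def f(l1, l2):
    # f-node update, eq. (3): min-sum approximation
    s = (1 if l1 >= 0 else -1) * (1 if l2 >= 0 else -1)
    return s * min(abs(l1), abs(l2))

def g(l1, l2, u):
    # g-node update, eq. (4): u is the partial sum (decision feedback)
    return (-1) ** u * l1 + l2

def sc_decode(channel_llrs, frozen_mask):
    # One SC decoding pass. channel_llrs has length N = 2^n;
    # frozen_mask[i] is True if u_i is frozen (assumed frozen to 0).
    def recurse(llrs, frozen):
        n = len(llrs)
        if n == 1:
            # decision rule (2) at the leaf
            u = 0 if (frozen[0] or llrs[0] >= 0) else 1
            return [u], [u]                 # (decided bits, partial sums)
        half = n // 2
        upper = [f(llrs[i], llrs[half + i]) for i in range(half)]
        u_a, x_a = recurse(upper, frozen[:half])
        lower = [g(llrs[i], llrs[half + i], x_a[i]) for i in range(half)]
        u_b, x_b = recurse(lower, frozen[half:])
        return u_a + u_b, [x_a[i] ^ x_b[i] for i in range(half)] + x_b
    u_hat, _ = recurse(list(channel_llrs), list(frozen_mask))
    return u_hat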

The partial sums at stage (s - 1) can be calculated from the partial sums at stage s, s ∈ {1, ..., n}, as

u_{s-1}^{(2i-1-[(i-1) mod 2^{s-1}])} = u_s^{(2i-1)} ⊕ u_s^{(2i)},   (5)

u_{s-1}^{(2^{s-1}+2i-1-[(i-1) mod 2^{s-1}])} = u_s^{(2i)},   (6)

where

u_n^{(i)} ≜ û_i,  ∀i ∈ {1, ..., N}.   (7)

The computation graph contains N(log N + 1) nodes and each node only needs to be activated once. Thus, the computational complexity of SC decoding is O(N log N). A straightforward implementation of the computation graph in Fig. 1 requires O(N log N) memory positions. However, by cleverly re-using memory locations, it is possible to reduce the memory complexity to O(N) [2].

III. ERROR PROPAGATION IN SC DECODING

In SC decoding, erroneous bit decisions can be caused by channel noise or by error propagation due to previous erroneous bit decisions. The first erroneous decision is always caused by the channel noise, since there are no previous errors, so error propagation does not affect the frame error rate of polar codes, but only the bit error rate.

A. Effect of Error Propagation

The erroneous decisions due to error propagation are caused by erroneous decision feedback, which in turn leads to erroneous partial sums. Erroneous partial sums can corrupt the output LLR values at all stages, including, most importantly, the decision LLRs at level n.

For example, assume that, for the polar code in Fig. 1, the frozen set is A^c = {1, 2, 5, 6} and the information set is A = {3, 4, 7, 8}. Moreover, assume that the all-zero codeword was transmitted and that û_3 was erroneously decoded as û_3 = 1 due to channel noise. Now suppose that the two LLRs that are used to calculate the next decision LLR (i.e., L_3^{(4)}), namely L_2^{(2)} and L_2^{(6)}, are both positive and L_2^{(2)} > L_2^{(6)}. By applying the g node update rule with u = u_3^{(3)} = û_3 = 1, the resulting decision LLR L_3^{(4)} = L_2^{(6)} - L_2^{(2)} has a negative value, which leads to a second erroneous decision, while with the correct partial sum u = 0 the decision would have been correct.
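As a small numeric sanity check of this example (the LLR magnitudes below are hypothetical; only their signs and relative ordering matter for the argument):

# Hypothetical stage-2 LLRs matching the example: both positive, L2^(2) > L2^(6).
L2_2, L2_6 = 3.0, 1.0

def g(l1, l2, u):
    return (-1) ** u * l1 + l2          # eq. (4)

# With the erroneous partial sum u = û3 = 1 (error propagation):
print(g(L2_2, L2_6, 1))                 # -2.0 < 0  ->  û4 decided as 1 (second error)
# With the correct partial sum u = 0:
print(g(L2_2, L2_6, 0))                 #  4.0 >= 0 ->  û4 decided as 0 (correct)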


B. Significance of Error Propagation

The foregoing analysis of the effects of error propagation raises the following question: given that we had an erroneously decoded codeword with many erroneous bits, how many of these bits were actually wrong because of channel noise rather than due to previous erroneous decisions? In order to answer this question, we employ an oracle-assisted SC decoder. Each time an error occurs at the decision level, the oracle corrects it instantaneously without allowing it to affect any future bit decisions. Moreover, the oracle-assisted SC decoder counts the number of times it had to correct an erroneous decision.

In Fig. 2 we plot a histogram of the number of errors caused by channel noise (given that there was at least one error) for three different Eb/N0 values for a polar code with N = 1024 and R = 0.5 over an AWGN channel. We observe that most frequently the channel introduces only one error and that this behavior becomes even more prominent for increasing Eb/N0 values. In Fig. 3 we plot a histogram of the number of errors caused by channel noise for polar codes with three different blocklengths and R = 0.5 over an AWGN channel at Eb/N0 = 2 dB. We observe that the relative frequency of the single error event increases with increasing blocklength. This happens because, as N gets larger, the synthetic channels W_n^{(i)}(y_1^N, u_1^{i-1}|u_i) become more polarized, meaning that all information channels in A become better.

Fig. 2: Histogram showing the relative frequency of the number of errors caused by the channel for a polar code with N = 1024 and R = 0.5 for three different SNR values.

Fig. 3: Histogram showing the relative frequency of the number of errors actually caused by the channel for Eb/N0 = 2.00 dB and three different codelengths, N = 1024, 2048, 4096.

C. Oracle-Assisted SC Decoder

From the discussion in the previous section, it is clear that, by identifying the position of the first erroneous bit decision and inverting that decision, the performance of the SC decoder could be improved significantly. In order to examine the potential benefits of correcting a single error, we employ a second oracle-assisted SC decoder, which is only allowed to intervene once in the decoding process in order to correct the first erroneous bit decision.

In Fig. 4 we compare the performance of the SC decoder with that of the oracle-assisted SC decoder for polar codes of three different blocklengths and R = 0.5 over an AWGN channel. We observe that correcting a single erroneous bit decision significantly improves the performance of the SC decoder.

Fig. 4: Performance of the oracle-assisted SC decoder compared to the SC decoder for N = 1024, 2048, 4096 and R = 0.5.

IV. SC FLIP DECODING

The goal of SC flip decoding is to identify the first error that occurs during SC decoding without the aid of an oracle.

A. SC Flip Decoding Algorithm

Assume that we are given a polar code of rate R̃ = k/N with a set of information bits Ã. We use an r-bit CRC that tells us, with high probability, whether the codeword estimate û_1^N given by the SC decoder is a valid codeword or not. In order to incorporate the CRC, the rate of the polar code is increased to R = R̃ + r/N = (k + r)/N, so that the effective information rate remains unaltered. Equivalently, the set of information bits à is extended with the r most reliable channel indices in Ã^c, denoted by Ã_{r-max}. Thus, A = à ∪ Ã_{r-max}.
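As a concrete (hypothetical) instance of this rate adjustment, consider N = 1024 and k = 512 information bits protected by an r = 16 bit CRC: the polar code is then constructed with rate R = (512 + 16)/1024 ≈ 0.516, while the effective information rate remains R̃ = 512/1024 = 0.5.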
The SC flip decoder starts by performing standard SC decoding in order to produce a first estimated codeword û_1^N. If û_1^N passes the CRC, then decoding is completed. If the CRC fails, the SC flip algorithm is given T additional attempts to identify the first error that occurred in the codeword. To this end, let U denote the set of the T least reliable decisions, i.e., the set containing the indices i ∈ A corresponding to the T smallest |L_n^{(i)}(y_1^N, û_1^{i-1}|u_i)| values. After the set U has been constructed, SC decoding is restarted for a total of no more than T additional attempts. In each attempt, a single û_k, k ∈ U, is flipped with respect to the initial decision of the SC algorithm. The algorithm terminates when a valid codeword has been found or when all T additional attempts have failed. Note that, for T = 0, SC flip decoding is equivalent to SC decoding.

The SC flip algorithm is formalized in the SCFLIP(y_1^N, A, T) function in Fig. 5. The SC(y_1^N, A, k) function performs SC decoding based on the channel output y_1^N and the set of non-frozen bits A, with a slight twist: when k > 0, the codeword bit u_k is decoded by flipping the value obtained from the decoding rule (2).

Note that SC flip decoding is similar to chase decoding for polar codes [10]. The main differences are that SC flip decoding only considers error patterns containing a single error and that these error patterns are not generated offline using the a priori reliabilities of the synthetic channels W_n^{(i)}(y_1^N, u_1^{i-1}|u_i), but online using the decision LLRs L_n^{(i)}, which reflect the actual reliabilities of the bit decisions for each transmitted codeword and channel noise realization.
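A minimal sketch (illustrative only) of how the "slight twist" in SC(y_1^N, A, k) can be realized inside the SC pass: the decision of rule (2) is inverted at the single selected index k.

def decide(i, llr_i, info_set, flip_index=None):
    # Frozen bits keep their known value (assumed 0 here).
    if i not in info_set:
        return 0
    bit = 0 if llr_i >= 0 else 1        # decision rule (2)
    if i == flip_index:
        bit ^= 1                        # invert this single decision
    return bit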

1:  function SCFLIP(y_1^N, A, T)
2:    (û_1^N, L(y_1^N, û_1^{i-1}|u_i)) ← SC(y_1^N, A, 0);
3:    if T > 0 and CRC(û_1^N) = failure then
4:      U ← the set of indices i ∈ A with the T smallest |L(y_1^N, û_1^{i-1}|u_i)|;
5:      for j ← 1 to T do
6:        k ← U(j);
7:        û_1^N ← SC(y_1^N, A, k);
8:        if CRC(û_1^N) = success then
9:          break;
10:       end if
11:     end for
12:   end if
13:   return û_1^N;

Fig. 5: SC flip decoding with maximum trials T.

Fig. 6: Frame error rate of SC decoding, SC flip decoding with T = 4, 16, 32, and the oracle-assisted SC decoder for a polar code of length N = 1024 and R = 0.5.
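For concreteness, a Python sketch of the SCFLIP procedure of Fig. 5 (an illustration, not the evaluated implementation); sc_decode(llrs, frozen_mask, flip_index=None), returning the codeword estimate together with the decision LLRs, and crc_check(bits) are assumed helper routines standing in for the SC(·) and CRC(·) calls of the pseudocode.

def sc_flip_decode(channel_llrs, frozen_mask, info_set, T):
    # Initial SC pass (line 2 of Fig. 5).
    u_hat, dec_llrs = sc_decode(channel_llrs, frozen_mask)
    if T > 0 and not crc_check(u_hat):                      # line 3
        # The T least reliable non-frozen decisions (line 4).
        candidates = sorted(info_set, key=lambda i: abs(dec_llrs[i]))[:T]
        for k in candidates:                                # lines 5-11
            u_hat, _ = sc_decode(channel_llrs, frozen_mask, flip_index=k)
            if crc_check(u_hat):                            # lines 8-9
                break
    return u_hat                                            # line 13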

B. Complexity of SC Flip Decoding

In this section, we derive the worst-case and average-case computational complexities of the SC flip algorithm, as well as its memory complexity.

Proposition 1. The worst-case computational complexity of the SCFLIP algorithm defined in Fig. 5 is O(TN log N).

Proof: SC decoding in line 2 has complexity O(N log N) and the computation of the CRC in line 3 has complexity O(N). Moreover, the sorting step in line 4 can be implemented with complexity O(N log N) (e.g., using merge sort). Finally, the operations in the loop (lines 5–11) have complexity O(N log N) and the loop runs T times in the worst case. Thus, the overall worst-case complexity is O(TN log N).

Proposition 1 shows that, in the worst case, the complexity of our algorithm increases linearly with the parameter T, meaning that its complexity scaling is no better than that of SC list decoding. However, if we consider the average complexity, then the situation is much more favorable, as the following result shows.

Proposition 2. Let Pe(R, SNR) denote the frame error rate of a polar code of rate R at the given SNR point. Then, the average-case computational complexity of the SCFLIP algorithm defined in Fig. 5 is O(N log N (1 + T · Pe(R, SNR))), where R = (k + r)/N.

Proof: It suffices to observe that the loop in lines 5–11 runs only if SC decoding fails and the CRC detects the failure, which happens with probability at most Pe(R, SNR).

As the SNR increases, the FER drops asymptotically to zero. Thus, for high SNR the average computational complexity of SC flip decoding converges to the computational complexity of SC decoding. In other words, SC flip exhibits an energy-proportional behavior where more energy is spent when the problem is difficult (i.e., at low SNR) and less energy is spent when the problem is easy (i.e., at high SNR).

Proposition 3. The SCFLIP algorithm defined in Fig. 5 requires O(N) memory positions.

Proof: SC decoding in line 2 requires O(N) memory positions. The storage of the CRC calculated in lines 3 and 8 requires exactly C memory positions, where C ≤ N. The sorting step in line 4 can be implemented with O(N) memory positions (e.g., using merge sort), while storing the T smallest values requires exactly T memory positions, where T ≤ N. Moreover, the SC decoding performed in line 7 can re-use the memory positions of the SC decoding in line 2, so no additional memory is required. Thus, the overall memory scaling behavior is O(N).
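To make the scaling of Proposition 2 concrete, a small numeric illustration (the frame error rates below are hypothetical placeholders, not simulation results):

# Average complexity of SC flip, normalized to that of SC decoding:
# approximately 1 + T * Pe(R, SNR), by Proposition 2.
T = 32
for pe in (1.0, 1e-1, 1e-3, 1e-5):      # hypothetical FER values (low -> high SNR)
    print(pe, 1 + T * pe)
# pe = 1.0  -> 33.0     (low SNR: close to the worst case of T + 1 SC passes)
# pe = 1e-5 -> 1.00032  (high SNR: essentially the complexity of plain SC)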


C. Error Correcting Performance of SC Flip Decoding

In Fig. 6 we compare the performance of the SC flip decoder with T = 4, 16, 32, and a 16-bit CRC with the SC decoder and the oracle-assisted SC decoder described in Section III-C. Note that the oracle-assisted decoder characterizes a performance bound for the SC flip decoder. We observe that SC flip decoding with T = 4 already leads to a gain of one order of magnitude in terms of FER at Eb/N0 = 3.5 dB. With T = 32, we can reap all the benefits of the oracle-assisted SC decoder, since the T = 32 curve is shifted to the right with respect to the oracle-assisted curve by an amount that corresponds exactly to the rate loss incurred by the 16-bit CRC.

In Fig. 7 we depict the same curves for a codelength of N = 4096, while keeping the ratio T/N constant. We observe that it seems to become more difficult to reach the bound performance of the oracle-assisted SC decoder. As N increases, the channels get more polarized, which would suggest the opposite behavior. However, at the same time, the absolute number of possible positions for the first error increases as well. Our results suggest that the aforementioned negative effect negates the positive effect of channel polarization.

Fig. 7: Frame error rate of SC decoding, SC flip decoding with T = 16, 64, 128, and the oracle-assisted SC decoder for a polar code of length N = 4096 and R = 0.5.

In Fig. 8, we compare the performance of standard SC decoding, SC flip decoding, and SC list decoding. We observe that the performance of the SC flip decoder with T = 32 is almost identical to that of the SC list decoder with L = 2, but with half the computational complexity at high Eb/N0 values and half the memory complexity at all Eb/N0 values. For higher list sizes, such as L = 4, SC list decoding outperforms SC flip decoding, at the cost of significantly higher complexity, since the performance of SC flip decoding is limited by the fact that it can only correct a single error.

Fig. 8: Frame error rate of SC decoding, SC flip decoding with T = 32, and SC list decoding with L = 2, 4 for a polar code of length N = 1024 and R = 0.5.

D. Average Computational Complexity of SC Flip Decoding

In Fig. 9, we compare the average computational complexity of standard SC decoding, SC list decoding, and SC flip decoding.
We observe that, as predicted by Proposition 2, at low SNR the average computational complexity of SC flip decoding is (T + 1) times larger than that of SC decoding, but at higher SNR the computational complexity is practically identical to that of SC decoding. Moreover, the energy-proportional behavior of SC flip decoding is evident since, contrary to SC list decoding, the computational complexity decreases rapidly with decreasing difficulty of the decoding problem (i.e., increasing SNR). We also emphasize that SC flip decoding is not a viable option for the low SNR region, but this is not a region of interest for practical systems because the FER is very high.

Fig. 9: Average complexity of SC flip decoding normalized with respect to the complexity of SC decoding for a polar code of length N = 1024 and R = 0.5.

V. CONCLUSION

In this paper we have introduced successive cancellation flip decoding for polar codes. This algorithm improves the frame error rate performance by opportunistically retrying alternative decisions for bits that turned out to be unreliable in a failing initial decoding iteration. By exploring alternative passes in the decoding tree one after another until a correct codeword is found, the average complexity and memory requirements are kept low, while approaching the performance of more complex tree-search based decoders.

REFERENCES

[1] E. Arıkan, "Channel polarization: A method for constructing capacity-achieving codes for symmetric binary-input memoryless channels," IEEE Trans. Inf. Theory, vol. 55, no. 7, pp. 3051–3073, Jul. 2009.
[2] C. Leroux, I. Tal, A. Vardy, and W. J. Gross, "Hardware architectures for successive cancellation decoding of polar codes," in Proc. IEEE Int. Conf. Acoustics, Speech and Sig. Proc., pp. 1665–1668, May 2011.
[3] I. Tal and A. Vardy, "List decoding of polar codes," in Proc. IEEE Int. Symp. Inf. Theory, pp. 1–5, Jul. 2011.
[4] K. Chen, K. Niu, and J. Lin, "Improved successive cancellation decoding of polar codes," IEEE Trans. Commun., vol. 61, no. 8, pp. 3100–3107, Aug. 2013.
[5] B. Li, H. Shen, and D. Tse, "An adaptive successive cancellation list decoder for polar codes with cyclic redundancy check," IEEE Comm. Letters, vol. 16, no. 12, pp. 2044–2047, Dec. 2012.
[6] G. Sarkis, P. Giard, A. Vardy, C. Thibeault, and W. J. Gross, "Increasing the speed of polar list decoders," arXiv:1407.2921, Jun. 2014.
[7] C. Cao, Z. Fei, J. Yuan, and J. Kuang, "Low complexity list successive cancellation decoding of polar codes," arXiv:1309.3173, Sep. 2013.
[8] R. Pedarsani, S. Hassani, I. Tal, and E. Telatar, "On the construction of polar codes," in Proc. IEEE Int. Symp. Inf. Theory, pp. 11–15, Aug. 2011.
[9] I. Tal and A. Vardy, "How to construct polar codes," IEEE Trans. Inf. Theory, vol. 59, no. 10, pp. 6562–6582, Jul. 2013.
[10] G. Sarkis and W. J. Gross, "Polar codes for data storage applications," in Proc. Int. Conf. on Comp., Netw. and Comm. (ICNC), pp. 840–844, Jan. 2013.