
Hints and Solutions to Problems

Chapter 2

2.5.1. $\sum_{0 \le k < N/2} \binom{N}{k} q^k p^{N-k} < (pq)^{N/2} \sum_{0 \le k < N/2} \binom{N}{k} = 2^{N-1}(pq)^{N/2} < (0.07)^N$.

2.5.2. There are 64 possible error patterns. We know that 8 of these lead to 3 correct information symbols after decoding. To analyse the remainder one should realize that there are only 4 essentially different 3-tuples $(s_1, s_2, s_3)$. Consider one possibility, e.g. $(s_1, s_2, s_3) = (1, 1, 0)$. This can be caused by the error patterns (101011), (011101), (110000), (010011), (100101), (000110), (111110), and of course by (001000), which is the most likely one. Our decision is to assume that $e_3 = 1$. So here we obtain two correct information symbols with probability $p^2 q^4 + 2p^4 q^2$ and we have one correct information symbol with probability $2p^3 q^3 + p^5 q$. By analysing the other cases in a similar way one finds as symbol error probability $\frac{1}{3}(22p^2 q^4 + 36p^3 q^3 + 24p^4 q^2 + 12p^5 q + 2p^6) = \frac{1}{3}(22p^2 - 52p^3 + 48p^4 - 16p^5)$. In our example this is 0.000007, compared to 0.001 without coding.

2.5.3. Take as codewords all possible eight tuples of the form $(a_1, a_2, a_3, a_2 + a_3, a_1 + a_3, a_1 + a_2, a_1 + a_2 + a_3)$. This code is obtained by taking the eight words of the code of the previous problem and adding an extra symbol which is the sum of the first six symbols. This has the effect that any two distinct codewords differ in an even number of places, i.e. $d(x, y) \ge 4$ for any two distinct codewords $x$, $y$. The analysis of the error patterns is similar to the one which was treated in Section 2.2. For $(e_1, e_2, \ldots, e_7)$ we find
$$e_2 + e_3 + e_4 = s_1, \quad e_1 + e_3 + e_5 = s_2, \quad e_1 + e_2 + e_6 = s_3, \quad e_1 + e_2 + e_3 + e_7 = s_4.$$
There are 16 possible outcomes $(s_1, s_2, s_3, s_4)$. Eight of these can be explained by an error pattern with no errors or one error. Of the remainder there are seven, each of which can be explained by three different error patterns with two errors, e.g. $(s_1, s_2, s_3, s_4) = (1, 1, 0, 0)$ corresponds to $(e_1, e_2, \ldots, e_7)$ being (0010001), (1100000) or (0001100).
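The syndrome analysis of 2.5.3 can be verified by enumerating all $2^7$ error patterns. The sketch below is not part of the original solution; it assumes the third parity check is $s_3 = e_1 + e_2 + e_6$, which is consistent with the two-error patterns listed for the syndrome $(1, 1, 0, 0)$.

```python
from itertools import product

# Parity checks of the [7,3] code of solution 2.5.3. The third check,
# s3 = e1 + e2 + e6, is inferred from the listed two-error patterns.
def syndrome(e):
    s1 = (e[1] + e[2] + e[3]) % 2
    s2 = (e[0] + e[2] + e[4]) % 2
    s3 = (e[0] + e[1] + e[5]) % 2
    s4 = (e[0] + e[1] + e[2] + e[6]) % 2
    return (s1, s2, s3, s4)

# Group all 2^7 error patterns by syndrome and record, for each syndrome,
# the weights of the error patterns that explain it.
by_syndrome = {}
for e in product((0, 1), repeat=7):
    by_syndrome.setdefault(syndrome(e), []).append(sum(e))

min_wt = {s: min(ws) for s, ws in by_syndrome.items()}

# Eight syndromes are explained by zero or one errors ...
print(sum(1 for w in min_wt.values() if w <= 1))          # -> 8
# ... seven need two errors, each with exactly three explanations ...
two = [s for s, w in min_wt.items() if w == 2]
print(len(two), {by_syndrome[s].count(2) for s in two})   # -> 7 {3}
# ... and (1, 1, 1, 0) needs three errors.
print(min_wt[(1, 1, 1, 0)])                               # -> 3
```

The last count is exactly the statement that follows in the text: the syndrome $(1, 1, 1, 0)$ is the one outcome whose most likely explanation requires three errors.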
The most likely explanation of $(1, 1, 1, 0)$ is the occurrence of three errors. Hence the probability of correct decoding is $q^7 + 7q^6 p + 7q^5 p^2 + q^4 p^3$. This is about $1 - 14p^2$, i.e. the code is not much better than the previous one even though it has smaller rate.

2.5.4. For the code using repetition of symbols the probability of correct reception of a repeated symbol is $1 - p^2$. Therefore the code of length 6 with codewords $(a_1, a_2, a_3, a_1, a_2, a_3)$ has probability $(1 - p^2)^3 = 0.97$ of correct reception. The code of Problem 2.5.2 has the property that any two codewords differ in at least three places and therefore two erasures can do no harm. In fact an analysis of all possible erasure patterns with three erasures shows that 16 of these do no harm either. This leads to a probability $(1 - p)^3(1 + 3p + 6p^2 + 6p^3) = 0.996$ of correct reception. This is a remarkable improvement considering the fact that the two codes are very similar.

2.5.5. Treat the erasures as zeros. The inner products are changed by at most $2e_1 + e_2$.

2.5.6. Replace a 1 in $C$ by $-1$, a 0 by $+1$. The two conditions (i) and (ii) imply that the images of codewords are orthogonal vectors in $\mathbb{R}^{16}$. Hence $|C| \le 16$. To construct such a code, we need a Hadamard matrix of order 16 with six $-1$s in every row. There is a well-known construction. It yields a binary code that is most easily described by writing codewords as 4 by 4 matrices. Fix a row and a column; put 1s in this row and in this column, except where they meet. In this way we find 16 words of weight 6 that indeed pairwise have distance 8.

2.5.7. For any $x$, there are at most $n/2$ codewords that differ from $x$ in two places. If there exists a codeword that differs from $x$ in exactly one place, then there are at most $(n-2)/2$ other codewords that differ from $x$ in two places (because $n$ is even). For any codeword $c$, there are exactly $n$ words that differ from $c$ in one place and $\binom{n}{2}$ words that differ in two places.
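The erasure count claimed in 2.5.4 can be checked by enumeration. The sketch below is not part of the original solution, and the codeword form $(a_1, a_2, a_3, a_2 + a_3, a_1 + a_3, a_1 + a_2)$ for the code of Problem 2.5.2 is an assumption inferred from the parity checks in 2.5.3.

```python
from itertools import combinations, product

# Assumed codewords of the [6,3] code of Problem 2.5.2.
code = [(a1, a2, a3, (a2 + a3) % 2, (a1 + a3) % 2, (a1 + a2) % 2)
        for a1, a2, a3 in product((0, 1), repeat=3)]

# Three erased positions are harmless iff the codewords remain pairwise
# distinct on the surviving positions, so the erasures can be filled in
# uniquely.
harmless = 0
for erased in combinations(range(6), 3):
    kept = [i for i in range(6) if i not in erased]
    seen = {tuple(c[i] for i in kept) for c in code}
    if len(seen) == len(code):
        harmless += 1
print(harmless)   # -> 16

# Probability of correct reception for erasure probability p = 0.1.
p, q = 0.1, 0.9
prob = q**6 + 6 * q**5 * p + 15 * q**4 * p**2 + harmless * q**3 * p**3
print(round(prob, 3))   # -> 0.996
```

The four harmful triples are exactly the supports of the weight-3 codewords, which is why 20 − 4 = 16 patterns of three erasures do no harm.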
Counting pairs $(x, c)$, where $x$ is a word at distance two from the codeword $c$, in two ways, we find
$$|C| \binom{n}{2} \le |C| \cdot n \cdot \frac{n-2}{2} + \bigl(2^n - |C|(n+1)\bigr) \cdot \frac{n}{2},$$
from which the result follows. The code of Section 2.1 is an example where equality holds.

Chapter 3

3.8.1. By (3.1.6) we have $\sum_{i=0}^{3} \binom{n}{i} = 2^l$ for some integer $l$. This equation reduces to $(n+1)(n^2 - n + 6) = 3 \cdot 2^{l+1}$, i.e. $(n+1)\{(n+1)^2 - 3(n+1) + 8\} = 3 \cdot 2^{l+1}$. If $n+1$ is divisible by 16 then the second factor on the left is divisible by 8 but not by 16, i.e. it is 8 or 24, which yields a contradiction. Therefore $n+1$ is a divisor of 24. Since $n \ge 7$ we see that $n = 7$, 11, or 23, but $n = 11$ does not satisfy the equation. For $n = 7$ the code $\{\mathbf{0}, \mathbf{1}\}$ is an example. For $n = 23$ see Section 4.2.

3.8.2. Let $c \in C$, $w(c) \le n - k$. Then there is a set of $k$ positions where $c$ has a coordinate equal to 0. Since $C$ is systematic on these $k$ positions we have $c = \mathbf{0}$. Hence $d > n - k$. Given $k$ positions, there are codewords which have $k - 1$ zeros on these positions, i.e. $d \le n - k + 1$. In accordance with the definition of separable given in Section 3.1, an $[n, k, n-k+1]$ code is called a maximum distance separable code (MDS code).

3.8.3. Since $C \subset C^{\perp}$, every $c \in C$ has the property $(c, c) = 0$, i.e. $w(c)$ is even and hence $(c, \mathbf{1}) = 0$. However, $(\mathbf{1}, \mathbf{1}) = 1$ since the word length is odd. Therefore $C^{\perp} \setminus C$ is obtained by adding $\mathbf{1}$ to all the words of $C$.

3.8.4. $|B_1(x)| = 1 + 6 = 7$. Since $7|C| = 63 < 2^6$ one might think that such a code $C$ exists. However, if such a $C$ exists then by the pigeonhole principle some 3-tuple of words of $C$ would have the same symbols in the last two positions. Omitting these symbols yields a binary code $C'$ with three words of length 4 and minimum distance 3. W.l.o.g. one of these words is $\mathbf{0}$, and then the other two would have weight $\ge 3$ and hence distance $\le 2$, a contradiction.

3.8.5. By elementary linear algebra, for every $i$ it is possible to find a basis for $C$ such that $k - 1$ basis vectors have a 0 in position $i$ and the remaining one has a 1.
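The count in 3.8.5 can be illustrated with the binary [7, 4] Hamming code ($q = 2$, $k = 4$). This sketch is not part of the original solution, and the generator matrix below is one standard choice, not taken from the text:

```python
from itertools import product

# One standard generator matrix of the binary [7,4] Hamming code.
G = [
    [1, 0, 0, 0, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 1, 1, 0],
    [0, 0, 0, 1, 1, 1, 1],
]

# Generate all 2^4 = 16 codewords as msg . G over F_2.
codewords = []
for msg in product((0, 1), repeat=4):
    codewords.append([sum(m * g for m, g in zip(msg, col)) % 2
                      for col in zip(*G)])

# In every position, exactly q^(k-1) = 2^3 = 8 of the 16 codewords are 0.
zeros_per_position = [sum(1 - c[i] for c in codewords) for i in range(7)]
print(zeros_per_position)   # -> [8, 8, 8, 8, 8, 8, 8]
```

The count 8 in every coordinate is the instance $q^{k-1} = 2^3$ of the general statement.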
Hence exactly $q^{k-1}$ codewords have a 0 in position $i$.

3.8.6. The even weight subcode of $C$ is determined by adding the row $\mathbf{1}$ to the parity check matrix of $C$. This decreases the dimension of the code by 1.

3.8.7. From the generator matrix we find for $c \in C$
$$c_1 + c_2 + c_5 = c_3 + c_4 + c_6 = c_1 + c_2 + c_3 + c_4 + c_7 = 0.$$
Hence the syndromes $(s_1, s_2, s_3) = (e_1 + e_2 + e_5,\ e_3 + e_4 + e_6,\ e_1 + e_2 + e_3 + e_4 + e_7)$ for the three received words are resp. $(0, 0, 0)$, $(0, 0, 1)$, $(1, 0, 1)$. Hence (a) is a codeword; by maximum likelihood decoding (b) has an error in position 7; (c) has an error in position 1 or an error in position 2, so here we have a choice.

3.8.8. (i) If $p \equiv 1 \pmod 4$ then there is an $\alpha \in \mathbb{F}_p$ such that $\alpha^2 = -1$. Then $G = (I_4\ \alpha I_4)$ is the generator matrix of the required code. (ii) If $p \equiv 3 \pmod 4$ we use the fact that not all the elements of $\mathbb{F}_p$ are squares and hence there is an $\alpha$ which is a square, say $\alpha = \beta^2$, such that $\alpha + 1$ is not a square, i.e. $\alpha + 1 = -\gamma^2$. Hence $\beta^2 + \gamma^2 = -1$. Then
$$G = \begin{pmatrix}
1 & 0 & 0 & 0 & \beta & \gamma & 0 & 0 \\
0 & 1 & 0 & 0 & -\gamma & \beta & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 & \beta & \gamma \\
0 & 0 & 0 & 1 & 0 & 0 & -\gamma & \beta
\end{pmatrix}$$
does the job. (iii) If $p = 2$, see (3.3.3).

3.8.9. $R_k = \dfrac{n-k}{n} = \dfrac{2^k - 1 - k}{2^k - 1} \to 1$, as $k \to \infty$.

3.8.10. (i) Let $(\bar{A}_0, \bar{A}_1, \ldots, \bar{A}_n, \bar{A}_{n+1})$ be the weight distribution of the extended code $\bar{C}$. Then $\bar{A}_{2k-1} = 0$ and $\bar{A}_{2k} = A_{2k-1} + A_{2k}$. Since $\sum_k A_{2k} z^{2k} = \frac12\{A(z) + A(-z)\}$ and $\sum_k A_{2k-1} z^{2k-1} = \frac12\{A(z) - A(-z)\}$, we find $\bar{A}(z) = \frac12\{(1+z)A(z) + (1-z)A(-z)\}$.
(ii) From (i) and (3.5.2) we find the weight enumerator of the extended Hamming code of length $n + 1 = 2^k$ to be
$$\frac{1}{2(n+1)}\{(1+z)^{n+1} + (1-z)^{n+1}\} + \frac{n}{n+1}(1 - z^2)^{(n+1)/2}.$$
Now apply Theorem 3.5.3. The weight enumerator of the dual code is $1 + 2nz^{(n+1)/2} + z^{n+1}$, i.e. all the words of this code, except $\mathbf{0}$ and $\mathbf{1}$, have weight $2^{k-1}$.

3.8.11.
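As a numerical check of 3.8.10(ii), not part of the original text, one can build the extended Hamming code of length 8 ($k = 3$, $n = 7$) and compare its weight distribution with the formula above; since this particular code is self-dual, the dual's enumerator $1 + 14z^4 + z^8$ coincides with its own. The generator matrix used is one standard choice:

```python
from itertools import product
from math import comb

# One standard generator matrix of the [7,4] Hamming code; an overall
# parity check bit extends it to the [8,4] extended Hamming code.
G = [
    [1, 0, 0, 0, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 1, 1, 0],
    [0, 0, 0, 1, 1, 1, 1],
]

dist = [0] * 9
for msg in product((0, 1), repeat=4):
    c = [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]
    c.append(sum(c) % 2)            # overall parity check bit
    dist[sum(c)] += 1
print(dist)   # -> [1, 0, 0, 0, 14, 0, 0, 0, 1]

# Coefficients of (1/(2(n+1))){(1+z)^{n+1} + (1-z)^{n+1}}
#                + (n/(n+1))(1 - z^2)^{(n+1)/2} for n = 7.
n = 7
formula = [comb(n + 1, w) * (1 + (-1) ** w) / (2 * (n + 1))
           + (n / (n + 1)) * ((-1) ** (w // 2) * comb((n + 1) // 2, w // 2)
                              if w % 2 == 0 else 0)
           for w in range(n + 2)]
print(formula)
```

Both computations give the distribution $A_0 = 1$, $A_4 = 14$, $A_8 = 1$, and $2n = 14$ words of weight $(n+1)/2 = 2^{k-1} = 4$, as stated.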