arXiv:1301.5108v1 [cs.IT] 22 Jan 2013

Balanced Sparsest Generator Matrices for MDS Codes

Son Hoang Dau, Wentu Song, Zheng Dong, Chau Yuen
Singapore University of Technology and Design, Singapore
Emails: {sonhoang_dau, wentu_song, zheng_dong, yuenchau}@sutd.edu.sg

Abstract—We study the existence and provide a construction of a sparsest and balanced generator matrix of Maximum Distance Separable (MDS) codes. A generator matrix is sparsest if it contains the least number of nonzero entries among the generator matrices of the same code. A generator matrix is balanced if every column contains approximately the same number of nonzero entries; more specifically, we require that the numbers of nonzero entries in the columns differ from each other by at most one. We show that given n and k, for q sufficiently large, there always exists an [n, k]_q MDS code that has a generator matrix G satisfying the following two conditions:
(C1) Sparsest: each row of G has Hamming weight n − k + 1;
(C2) Balanced: Hamming weights of the columns of G differ from each other by at most one.

I. INTRODUCTION

Apart from being of theoretical interest, our study on sparsest generator matrices of MDS codes was motivated by its application in error correction for sensor networks. Suppose that S_1, ..., S_n are n sensors, which collectively measure some information x = (x_1, ..., x_k) ∈ F_q^k (F_q is a finite field of q elements), such as temperature, pressure, light intensity, etc. These sensors transmit the collected information to a base station, which is a data collector. Furthermore, each sensor performs some encoding on the information it has, before transmitting the information back to the base station, in the following way. Let G be a k × n generator matrix of an [n, k]_q MDS code, which has minimum distance d = n − k + 1. Each sensor S_i transmits the scalar product of x and column i of G to the base station. It is well known in classical coding theory that this coding scheme allows the base station to retrieve x when at most ⌊(d − 1)/2⌋ sensors transmit wrong information. Moreover, the base station can also identify the malfunctioned sensors.

For each sensor S_i, only those conditions x_j corresponding to nonzero entries of column i of G are involved into the encoding. So it is sufficient for S_i to measure only such conditions. Thus, if G is sparse then on average, each sensor only needs to measure a few among the k conditions in order to achieve the desired error correction capability. On top of that, if the columns of G have approximately the same number of nonzero entries, then approximately the same number of conditions are required to be measured by each sensor. This guarantees an approximately even distribution of workload among the sensors, which is an important criterion for sensor networks, where energy saving is a critical issue.

In fact, any error-correcting code can be used in the aforementioned encoding scheme for sensor networks. We choose to study MDS codes first because their structure, especially their weight distribution, is well studied (see, for instance, [1, Ch. 11]). Moreover, they have optimal error-correcting capability, given the length and the dimension. We prove that over a sufficiently large field, there always exists an MDS code that has a balanced and sparsest generator matrix, which is an ideal encoding scheme for the above sensor networks.

Necessary notations and definitions are provided in Section II. We state and prove our main result in Section III.

II. PRELIMINARIES

We denote by [n] the set {1, 2, ..., n} and by F_q the finite field with q elements. Apart from Hamming weight, we also use other standard notions of coding theory, such as minimum distance, linear codes, MDS codes, and generator matrices (for instance, see [1]). We also define the support and the weight of a row or a column of a matrix over some finite field by regarding them as vectors over that field.

The (Hamming) weight of a vector u = (u_1, ..., u_n) ∈ F_q^n is the number of its nonzero coordinates, and the support of u is supp(u) = {i ∈ [n] : u_i ≠ 0}. For a k × n matrix G = (g_{i,j}) over F_q, we define supp(G) to be the k × n binary matrix M = (m_{i,j}) with m_{i,j} = 1 if g_{i,j} ≠ 0 and m_{i,j} = 0 otherwise; supp(G) is called the support matrix of G. We also define f(G) = ∏_P det(P), where the product is taken over all submatrices P of order k of G.

A sparsest generator matrix of an [n, k]_q MDS code would have precisely n − k + 1 nonzero entries in every row. Moreover, if it is balanced, then each column contains either ⌊k(n − k + 1)/n⌋ or ⌈k(n − k + 1)/n⌉ nonzero entries.

For a k × n binary matrix M = (m_{i,j}), we denote by var(M) the matrix obtained from M by replacing every nonzero entry m_{i,j} by an indeterminate ξ_{i,j}. We also denote by gr(M) the bipartite graph defined as follows. The vertex set is V = L ∪ R, where L = {u_1, ..., u_k} and R = {v_1, ..., v_n}. The edge set is E = {(u_i, v_j) : m_{i,j} ≠ 0}.
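The encoding scheme described in the introduction can be made concrete with a minimal sketch. The matrix, field, and measurement vector below are our own toy illustration (n = 4, k = 2, q = 5), not taken from the paper: sensor S_i transmits the scalar product of x with column i of G, and because every k columns of an MDS generator matrix form an invertible submatrix, the base station can retrieve x from the symbols of any k correctly functioning sensors.

```python
# Toy sketch of the sensor-network encoding scheme (illustrative values).
from itertools import combinations

p = 5                                    # field size: F_5 = Z/5Z
G = [[1, 1, 2, 0],                       # a sparsest balanced generator
     [0, 1, 1, 1]]                       # matrix of a [4, 2]_5 MDS code
k, n = 2, 4
x = [3, 4]                               # the measured information vector

# Each sensor i transmits y_i = <x, column i of G> over F_p.
y = [sum(x[r] * G[r][j] for r in range(k)) % p for j in range(n)]

def recover(c0, c1):
    """Recover x from the symbols of sensors c0 and c1 by inverting
    the corresponding 2 x 2 submatrix of G over F_p."""
    a, b = G[0][c0], G[0][c1]
    c, d = G[1][c0], G[1][c1]
    det = (a * d - b * c) % p            # nonzero because the code is MDS
    inv = pow(det, p - 2, p)             # inverse in F_p via Fermat
    s0, s1 = y[c0], y[c1]
    return [((d * s0 - c * s1) * inv) % p,
            ((-b * s0 + a * s1) * inv) % p]

# The symbols of any two sensors suffice to retrieve x.
assert all(recover(c0, c1) == x for c0, c1 in combinations(range(n), 2))
```

Decoding from k error-free symbols is shown here only to illustrate the retrieval property; correcting up to ⌊(d − 1)/2⌋ erroneous symbols requires a full MDS decoder.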

We also use R_i, i ∈ [k], and C_j, j ∈ [n], to denote the supports of row i and column j, respectively, of a k × n binary matrix M.

Note that R_i ⊆ [n] and C_j ⊆ [k].

III. MAIN RESULT

Lemma 1. Let M = (m_{i,j}) be a k × n binary matrix. Suppose that each row of M has weight n − k + 1. Then M is the support matrix of a generator matrix of some [n, k]_q MDS code over a sufficiently large field F_q (q > (n−1 choose k−1)) if and only if f(var(M)) ≢ 0.

Proof: Suppose M = supp(G), where G = (g_{i,j}) is a generator matrix of some [n, k]_q MDS code. Due to a well-known property of MDS codes (see [1, p. 319]), every submatrix of order k of G has nonzero determinant. Therefore, f(G) ≠ 0. Note that f(var(M)) can be regarded as a multivariable polynomial in F_q[..., ξ_{i,j}, ...]. Moreover, since M = supp(G), we deduce that f(G) can be obtained from f(var(M)) by substituting ξ_{i,j} by g_{i,j} for all i, j where g_{i,j} ≠ 0. As f(G) ≠ 0, we conclude that f(var(M)) ≢ 0.

Now suppose that f(var(M)) ≢ 0. Note that each column of var(M) belongs to precisely (n−1 choose k−1) submatrices of order k of var(M). Hence the exponent of each ξ_{i,j} in f(var(M)) is at most (n−1 choose k−1). Since f(var(M)) ≢ 0, by [2, Lemma 4], if q > (n−1 choose k−1) then there exist g_{i,j} ∈ F_q (for i, j where m_{i,j} = 1) so that f(var(M))(..., g_{i,j}, ...) ≠ 0. Let G = (g_{i,j}) (for i, j where m_{i,j} = 0 we set g_{i,j} = 0). Since f(G) = f(var(M))(..., g_{i,j}, ...) ≠ 0, again by [1, p. 319], we deduce that G is a generator matrix of an [n, k]_q MDS code. Therefore, each row of G has weight at least n − k + 1, due to the Singleton Bound (see [1, p. 33]). Since each row of M also has weight n − k + 1, we deduce that g_{i,j} ≠ 0 whenever m_{i,j} = 1. Therefore, M = supp(G).

Lemma 2. Let M = (m_{i,j}) be a k × n binary matrix. Then f(var(M)) ≢ 0 if and only if every bipartite subgraph induced by the k left-vertices and some k right-vertices in gr(M) has a perfect matching.

Proof: Let G = gr(M). Each submatrix P of order k of var(M) corresponds to a bipartite subgraph H_P induced by the k left-vertices and some k right-vertices in G. In the literature, P is usually referred to as the Edmonds matrix of H_P. It is well known (see [3, p. 167]) that H_P has a perfect matching if and only if the determinant of its Edmonds matrix is not identically zero. Hence the proof follows.

Lemma 3. Let M = (m_{i,j}) be a k × n binary matrix. Then every bipartite subgraph induced by the k left-vertices and some k right-vertices in gr(M) has a perfect matching if and only if

    |∪_{j∈J} C_j| ≥ |J|, for every subset J ⊆ [n], |J| ≤ k.     (1)

Proof: Let G = gr(M). Each submatrix P of order k of var(M) corresponds to a bipartite subgraph H_P induced by the k left-vertices and some k right-vertices in G. The lemma follows by applying Hall's marriage theorem to each of such subgraphs of gr(M).

Lemma 4. Let M = (m_{i,j}) be a k × n binary matrix. The condition (1) is equivalent to

    |∪_{i∈I} R_i| ≥ n − k + |I|, for every subset ∅ ≠ I ⊆ [k].     (2)

Proof: Suppose that (1) holds and that there exists a nonempty set I ⊆ [k] satisfying

    |∪_{i∈I} R_i| ≤ n − k + |I| − 1.     (3)

We aim to obtain a contradiction. The condition (3) is equivalent to

    |∩_{i∈I} R̄_i| ≥ k − |I| + 1,     (4)

where R̄_i = [n] \ R_i. Hence there exists a set J of k − |I| + 1 columns of M that satisfies |∩_{j∈J} C̄_j| ≥ |I|, where C̄_j = [k] \ C_j. Equivalently, we have

    |∪_{j∈J} C_j| ≤ k − |I| < k − |I| + 1 = |J|.     (5)

We obtain a contradiction between (1) and (5). The "only if" direction can be proved in a similar manner.

Lemma 5. Let M = (m_{i,j}) be a k × n binary matrix. Suppose that each row of M has weight n − k + 1. Then M is the support matrix of a generator matrix of some [n, k]_q MDS code over a sufficiently large field F_q (q > (n−1 choose k−1)) if and only if (2) holds.

Proof: The proof follows from Lemmas 1–4.

We present below our main result.

Theorem 6 (Main Theorem). Suppose 1 ≤ k ≤ n and q > (n−1 choose k−1). Then there always exists an [n, k]_q MDS code that has a generator matrix G satisfying the following two conditions.
(C1) Sparsest: each row of G has weight n − k + 1.
(C2) Balanced: column weights of G differ from each other by at most one.

By Lemma 5, to prove Theorem 6, we need to show that there always exists a k × n binary matrix M satisfying the following properties:
(P1) each row of M has weight n − k + 1,
(P2) column weights of M differ from each other by at most one,
(P3) |∪_{i∈I} R_i| ≥ n − k + |I|, for every subset ∅ ≠ I ⊆ [k], where R_i denotes the support of row i of M.

We prove the existence of such a binary matrix by designing an algorithm (Algorithm 1) that starts from an initial binary matrix which satisfies (P1) and (P3). In each iteration, the matrix at hand is slightly modified so that it still satisfies (P1) and (P3) and its column weights become more balanced. When the algorithm terminates, it produces a matrix that satisfies (P1), (P2), and (P3).
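A small worked instance of the two conditions in Theorem 6 can help fix ideas. The matrix below is constructed by hand for illustration and does not appear in the paper: for n = 4, k = 2, q = 5 > (n−1 choose k−1) = 3, it generates a [4, 2]_5 MDS code while satisfying (C1) and (C2).

```python
# Verify (C1), (C2), and the MDS property for a hand-made [4, 2]_5 example.
from itertools import combinations

p = 5
G = [[1, 1, 2, 0],
     [0, 1, 1, 1]]
k, n = 2, 4

# (C1) Sparsest: each row has Hamming weight n - k + 1 = 3.
assert all(sum(1 for v in row if v != 0) == n - k + 1 for row in G)

# (C2) Balanced: the column weights (here 1, 2, 2, 1) differ by at most one.
col_w = [sum(1 for row in G if row[j] != 0) for j in range(n)]
assert max(col_w) - min(col_w) <= 1

# MDS: every submatrix of order k of G has nonzero determinant over F_5.
assert all((G[0][a] * G[1][b] - G[0][b] * G[1][a]) % p != 0
           for a, b in combinations(range(n), 2))
```

Checking all (n choose k) determinants is feasible only for toy sizes; the point of Lemma 5 is precisely to replace this check by the combinatorial condition (P3) on the support matrix.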

Observe that it is fairly easy to construct a binary matrix that satisfies (P1) and (P2), using the Gale-Ryser Theorem (see Krause [4]). However, (P1) and (P2) do not automatically guarantee (P3). Indeed, the matrix P given below satisfies both (P1) and (P2). However, (P3) is violated if we choose I = {1, 2, 3}.

        1 0 0 0 1 1 1 0
        1 0 0 0 1 0 1 1
  P =   1 0 0 0 0 1 1 1
        0 1 1 1 1 0 0 0
        0 1 1 1 0 1 0 0
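The gap between (P1)–(P2) and (P3) can be confirmed by brute force. The sketch below (helper names are ours, not the paper's) checks the matrix P from the text: it satisfies (P1) and (P2), yet the rows of I = {1, 2, 3} (indices 0–2 in 0-based Python indexing) jointly cover too few columns for (P3).

```python
# Brute-force check of (P1), (P2), (P3) on the counterexample matrix P.
from itertools import combinations

P = [[1, 0, 0, 0, 1, 1, 1, 0],
     [1, 0, 0, 0, 1, 0, 1, 1],
     [1, 0, 0, 0, 0, 1, 1, 1],
     [0, 1, 1, 1, 1, 0, 0, 0],
     [0, 1, 1, 1, 0, 1, 0, 0]]
k, n = 5, 8

def p1(M):                 # every row has weight n - k + 1 = 4
    return all(sum(row) == n - k + 1 for row in M)

def p2(M):                 # column weights differ by at most one
    w = [sum(row[j] for row in M) for j in range(n)]
    return max(w) - min(w) <= 1

def union_of_rows(M, I):   # the set union of R_i over i in I
    return {j for i in I for j in range(n) if M[i][j]}

def p3(M):                 # |union of R_i| >= n - k + |I| for all nonempty I
    return all(len(union_of_rows(M, I)) >= n - k + len(I)
               for s in range(1, k + 1)
               for I in combinations(range(k), s))

assert p1(P) and p2(P) and not p3(P)
# The witness: the first three rows of P jointly cover only 5 columns,
# while (P3) demands at least n - k + 3 = 6 of them.
assert len(union_of_rows(P, [0, 1, 2])) == 5
```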
Let M̃ be any k × n binary matrix that satisfies both (P1) and (P3). For instance, we can shift the vector (1 1 ··· 1 0 0 ··· 0), which starts with n − k + 1 ones, k times cyclically to produce the k rows of such a matrix, as below.

        1 1 1 ··· 1 0 0 0 ··· 0
        0 1 1 ··· 1 1 0 0 ··· 0
  M̃ =   0 0 1 ··· 1 1 1 0 ··· 0
        . . . ··· . . . . ··· .
        0 0 0 ··· 1 1 1 1 ··· 1

Algorithm 1 takes M̃ as an input parameter.

Algorithm 1
Input: n, k, M̃;
Initialization: M := M̃;
1: repeat
2:   Let max and min be the maximum and minimum weights of columns of M;
3:   if max − min ≤ 1 then
4:     Return M;
5:   end if
6:   Find two columns j_max and j_min that have weights max and min, respectively;
7:   Find a row i_s satisfying m_{i_s,j_max} = 1 and m_{i_s,j_min} = 0 and, moreover, if we set m_{i_s,j_max} := 0 and m_{i_s,j_min} := 1 then M still satisfies (P1) and (P3);
8:   Swapping: set m_{i_s,j_max} := 0 and m_{i_s,j_min} := 1;
9: until max − min ≤ 1;

Due to space constraints, we have prepared a separate note at [5] with an example to demonstrate the algorithm.
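Algorithm 1 can be sketched in Python as follows. This is an illustrative implementation, not the paper's: (P3) is verified here by brute force over all row subsets, which is exponential in k, whereas the paper points to a polynomial-time network-flow test. Each iteration moves a one from a heaviest column j_max to a lightest column j_min along a row for which the swap preserves (P3).

```python
# Illustrative implementation of Algorithm 1 with a brute-force (P3) check.
from itertools import combinations

def satisfies_p3(M, n, k):
    R = [{j for j in range(n) if M[i][j]} for i in range(k)]
    return all(len(set().union(*(R[i] for i in I))) >= n - k + len(I)
               for s in range(1, k + 1)
               for I in combinations(range(k), s))

def initial_matrix(n, k):
    """The k cyclic shifts of (1,...,1,0,...,0) with n - k + 1 leading ones."""
    base = [1] * (n - k + 1) + [0] * (k - 1)
    return [base[n - i:] + base[:n - i] for i in range(k)]

def balance(n, k):
    M = initial_matrix(n, k)              # satisfies (P1) and (P3)
    while True:
        w = [sum(row[j] for row in M) for j in range(n)]
        if max(w) - min(w) <= 1:
            return M                      # now (P1), (P2), (P3) all hold
        jmax, jmin = w.index(max(w)), w.index(min(w))
        for i in range(k):                # Step 7: find a legitimate row
            if M[i][jmax] == 1 and M[i][jmin] == 0:
                M[i][jmax], M[i][jmin] = 0, 1        # Step 8: swap
                if satisfies_p3(M, n, k):
                    break                 # swap kept; columns more balanced
                M[i][jmax], M[i][jmin] = 1, 0        # undo, try next row
        else:
            raise AssertionError("impossible by Lemma 8")

M = balance(8, 5)
w = [sum(row[j] for row in M) for j in range(8)]
assert all(sum(row) == 4 for row in M)    # (P1): row weight n - k + 1
assert max(w) - min(w) <= 1               # (P2)
assert satisfies_p3(M, 8, 5)              # (P3)
```

Lemma 8 below is exactly the statement that the inner search for a legitimate row never fails, so the `else` branch is unreachable.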
Lemma 7. Suppose that in every iteration, Algorithm 1 can always find a legitimate row as described in Step 7. Then the algorithm terminates after finitely many iterations and returns a matrix satisfying (P1), (P2), and (P3).

Proof: At a certain iteration, let ∆ = max − min. After swapping the two entries m_{i_s,j_max} and m_{i_s,j_min}, the weight of column j_max is decreased by one whereas the weight of column j_min is increased by one. Therefore, after at most ⌊n/2⌋ iterations, ∆ is decreased by at least one. Hence, the algorithm must terminate after finitely many iterations. The output matrix obviously satisfies (P1), (P2), and (P3).

Lemma 8. In every iteration of Algorithm 1, a row i_s as described in Step 7 of the algorithm can always be found.

Since column j_max has a larger weight than column j_min, there always exists at least one row i_s where m_{i_s,j_max} = 1 and m_{i_s,j_min} = 0. Obviously, swapping m_{i_s,j_max} and m_{i_s,j_min} does not make M violate (P1). The stricter criterion is that M must still satisfy (P3) after the swap. We need a few more auxiliary results before we can prove Lemma 8.

Suppose that at a certain iteration, we choose two columns j_max and j_min that have maximum and minimum weights, respectively. Without loss of generality, we assume that the first t rows are all the rows of M satisfying the property that each of them has a one at column j_max and a zero at column j_min. In other words, assume that

    {i ∈ [k] : m_{i,j_max} = 1 and m_{i,j_min} = 0} = [t].

Since max − min ≥ 2, we have t ≥ 2.

Suppose, for contradiction, that none of these t rows satisfies the condition in Step 7 of Algorithm 1. Let M^(i), i ∈ [t], be the matrix obtained from M after swapping the two entries m_{i,j_max} and m_{i,j_min}. Then M^(i), i ∈ [t], does not satisfy (P3). Since M satisfies (P3) and the only difference between M^(i) and M is the row i, the set of rows of M^(i) that violates the condition (P3) must contain row i. Therefore, for each i ∈ [t], there exists a set I_i ⊂ [k], i ∉ I_i, such that {i} ∪ I_i is a set of rows that violates (P3) in M^(i). For our purpose, for each i ∈ [t], we choose I_i to be of minimum size among those sets that satisfy the aforementioned requirement. Since for each i ∈ [t], |R^(i)_i| = |R_i| = n − k + 1, we deduce that I_i ≠ ∅.

Let R^(i)_r denote the support of row r of M^(i), i ∈ [t], r ∈ [k]. Note that R_r denotes the support of row r of M, r ∈ [k]. For simplicity, we use R^(i)_I to denote the union ∪_{r∈I} R^(i)_r for any subset I ⊆ [k]. Since {i} ∪ I_i is a set of rows of M^(i) that violates (P3), for every i ∈ [t] we have

    |R^(i)_{{i}∪I_i}| ≤ n − k + |{i} ∪ I_i| − 1 = n − k + |I_i|.     (6)

Lemma 9. For all i, i′ ∈ [t], the following statements hold:
a) R^(i)_{i′} = R_{i′} if i′ ≠ i, and R^(i)_i = (R_i \ {j_max}) ∪ {j_min},
b) R^(i)_{I_i} = R_{I_i},
c) j_max ∉ R_{I_i},
d) j_min ∈ R_{I_i},
e) i ∉ I_{i′},
f) |R^(i)_{{i}∪I_i}| = n − k + |I_i|.

Proof: Proof of a). Note that all the rows of M^(i) except for the row i are the same as those of M. Therefore, R^(i)_{i′} = R_{i′} if i′ ≠ i. As row i of M^(i) is obtained from row i of M by swapping m_{i,j_max} = 1 and m_{i,j_min} = 0, we deduce that R^(i)_i = (R_i \ {j_max}) ∪ {j_min}.

Proof of b). By the definition of I_i, i ∉ I_i. Therefore, using Part a), we conclude that R^(i)_{I_i} = R_{I_i}.

Proof of c). Suppose, for contradiction, that j_max ∈ R_{I_i}. Due to Parts a) and b), we have

    R^(i)_{{i}∪I_i} = R^(i)_i ∪ R^(i)_{I_i}
                    = ((R_i \ {j_max}) ∪ {j_min}) ∪ R_{I_i}
                    = ((R_i \ {j_max}) ∪ R_{I_i}) ∪ {j_min}
                    = (R_i ∪ R_{I_i}) ∪ {j_min} ⊇ R_{{i}∪I_i}.

As M satisfies (P3), we have

    |R^(i)_{{i}∪I_i}| ≥ |R_{{i}∪I_i}| ≥ n − k + |{i} ∪ I_i| = n − k + |I_i| + 1.

This inequality contradicts (6).

Proof of d). Suppose, for contradiction, that j_min ∉ R_{I_i}. Then by Parts a) and b) we have

    R^(i)_{{i}∪I_i} = ((R_i \ {j_max}) ∪ {j_min}) ∪ R_{I_i} ⊇ {j_min} ∪ R_{I_i}.

Therefore, using the fact that M satisfies (P3), we deduce that

    |R^(i)_{{i}∪I_i}| ≥ |{j_min} ∪ R_{I_i}| = 1 + |R_{I_i}| ≥ 1 + n − k + |I_i|.

This inequality contradicts (6).

Proof of e). Note that j_max ∈ R_i. However, by Part c), j_max ∉ R_{I_{i′}}. Hence, i ∉ I_{i′}.

Proof of f). Using Part a) we have

    |R^(i)_{{i}∪I_i}| = |R^(i)_i ∪ R^(i)_{I_i}| = |R^(i)_i ∪ R_{I_i}| ≥ |R_{I_i}| ≥ n − k + |I_i|,     (7)

where the last inequality comes from the fact that M satisfies (P3). Combining (6) and (7), the proof of f) follows.

Lemma 10. For all i, i′ ∈ [t], i ≠ i′, it holds that I_i ∩ I_{i′} = ∅.

Proof: Without loss of generality, we prove that I_1 ∩ I_2 = ∅. Suppose, for contradiction, that there exists ℓ ∈ I_1 ∩ I_2, and let K = (I_1 ∩ I_2) \ {ℓ}. We first present three claims, which are used later in this proof.

Claim 1: For i = 1, 2 we have

    |R^(i)_{{i}∪I_i\{ℓ}}| = n − k + |I_i|,     (8)

and

    R_ℓ = R^(i)_ℓ ⊆ R^(i)_{{i}∪I_i\{ℓ}}.     (9)

Proof of Claim 1: Indeed, because of the minimality of I_i, the set {i} ∪ (I_i \ {ℓ}) does not violate (P3) in M^(i), and hence |R^(i)_{{i}∪I_i\{ℓ}}| ≥ n − k + |I_i \ {ℓ}| + 1 = n − k + |I_i|. On the other hand, by (6), |R^(i)_{{i}∪I_i\{ℓ}}| ≤ |R^(i)_{{i}∪I_i}| ≤ n − k + |I_i|, which proves (8). Moreover, R^(i)_{{i}∪I_i} = R^(i)_{{i}∪I_i\{ℓ}} ∪ R^(i)_ℓ, and by (6) and (8) the two sides of this inclusion chain have the same cardinality; hence R^(i)_ℓ ⊆ R^(i)_{{i}∪I_i\{ℓ}}. Since ℓ ≠ i, Lemma 9 a) gives R^(i)_ℓ = R_ℓ, which proves (9).

Claim 2: For i = 1, 2 we have

    |(R_{{i}∪I_i\{ℓ}} \ R_{{ℓ}∪K}) \ {j_max}| ≤ |R^(i)_{{i}∪I_i\{ℓ}}| − |R_{{ℓ}∪K}|.     (10)

Proof of Claim 2: We have

    |(R_{{i}∪I_i\{ℓ}} \ R_{{ℓ}∪K}) \ {j_max}| = |(R_{{i}∪I_i\{ℓ}} \ {j_max}) \ R_{{ℓ}∪K}|
                                             ≤ |R^(i)_{{i}∪I_i\{ℓ}} \ R_{{ℓ}∪K}|
                                             = |R^(i)_{{i}∪I_i\{ℓ}}| − |R_{{ℓ}∪K}|,

where the second relation uses R_{{i}∪I_i\{ℓ}} \ {j_max} ⊆ R^(i)_{{i}∪I_i\{ℓ}}, and the last equality can be explained by the fact that R_{{ℓ}∪K} ⊆ R^(i)_{{i}∪I_i\{ℓ}}. Indeed, we have R_ℓ = R^(i)_ℓ ⊆ R^(i)_{{i}∪I_i\{ℓ}} due to (9). Moreover, K ⊆ I_i \ {ℓ}, so R_K = R^(i)_K ⊆ R^(i)_{{i}∪I_i\{ℓ}}. Hence the aforementioned inclusion holds. We complete the proof of Claim 2.

Claim 3: If I_1 ∩ I_2 = {ℓ} then for i = 1, 2, we have

    |(R_{{i}∪I_i\{ℓ}} \ R_ℓ) \ {j_max}| ≤ |I_i| − 1.     (11)

Proof of Claim 3: Applying (10) with K = ∅, we obtain

    |(R_{{i}∪I_i\{ℓ}} \ R_ℓ) \ {j_max}| ≤ |R^(i)_{{i}∪I_i\{ℓ}}| − |R_ℓ|
                                       = (n − k + |I_i|) − (n − k + 1)
                                       = |I_i| − 1.

We complete the proof of Claim 3.

The remainder of the proof of Lemma 10 is divided into two cases. Our goal is to obtain contradictions in both cases.

Case 1: I_1 ∩ I_2 = {ℓ}, that is, K = ∅.

We have

    R_{{1,2}∪I_1∪I_2} = R_ℓ ∪ (R_{{1}∪I_1\{ℓ}} \ R_ℓ) ∪ (R_{{2}∪I_2\{ℓ}} \ R_ℓ)
                      = R_ℓ ∪ {j_max} ∪ ((R_{{1}∪I_1\{ℓ}} \ R_ℓ) \ {j_max}) ∪ ((R_{{2}∪I_2\{ℓ}} \ R_ℓ) \ {j_max}).

Hence, by Claim 3,

    |R_{{1,2}∪I_1∪I_2}| ≤ |R_ℓ| + 1 + (|I_1| − 1) + (|I_2| − 1) = n − k + |I_1| + |I_2|.

On the other hand, |{1, 2} ∪ I_1 ∪ I_2| = |I_1| + |I_2| + 1 (note that 1, 2 ∉ I_1 ∪ I_2 by Lemma 9 e)), so (P3) requires |R_{{1,2}∪I_1∪I_2}| ≥ n − k + |I_1| + |I_2| + 1. We obtain a contradiction, which completes the analysis of Case 1.

Case 2: I_1 ∩ I_2 ⊋ {ℓ}, that is, K = (I_1 ∩ I_2) \ {ℓ} ≠ ∅.

By (P3) we may write |R_K| = n − k + |K| + ε, where ε ≥ 0. Let δ = |R_ℓ \ R_K| ≥ 0. Then

    |R_{{ℓ}∪K}| = |R_K| + |R_ℓ \ R_K| = n − k + |K| + ε + δ.     (17)

We have

    R_{{1,2}∪I_1∪I_2} = R_{{ℓ}∪K} ∪ (R_{{1}∪I_1\{ℓ}} \ R_{{ℓ}∪K}) ∪ (R_{{2}∪I_2\{ℓ}} \ R_{{ℓ}∪K})
                      = R_{{ℓ}∪K} ∪ {j_max}
                        ∪ ((R_{{1}∪I_1\{ℓ}} \ R_{{ℓ}∪K}) \ {j_max})
                        ∪ ((R_{{2}∪I_2\{ℓ}} \ R_{{ℓ}∪K}) \ {j_max}).

Therefore,

    |R_{{1,2}∪I_1∪I_2}| ≤ |R_{{ℓ}∪K} ∪ {j_max}|
                          + |(R_{{1}∪I_1\{ℓ}} \ R_{{ℓ}∪K}) \ {j_max}|
                          + |(R_{{2}∪I_2\{ℓ}} \ R_{{ℓ}∪K}) \ {j_max}|
                        ≤ |R_{{ℓ}∪K}| + 1
                          + |R^(1)_{{1}∪I_1\{ℓ}}| − |R_{{ℓ}∪K}|
                          + |R^(2)_{{2}∪I_2\{ℓ}}| − |R_{{ℓ}∪K}|        [by (10)]
                        = |R^(1)_{{1}∪I_1\{ℓ}}| + |R^(2)_{{2}∪I_2\{ℓ}}| − |R_{{ℓ}∪K}| + 1
                        = (n − k + |I_1|) + (n − k + |I_2|) − (n − k + |K| + ε + δ) + 1     [by (8) and (17)]
                        ≤ n − k + |I_1| + |I_2| − |K| + 1.     (18)

Moreover, as I_1 ∩ I_2 = {ℓ} ∪ K, we have

    |{1, 2} ∪ I_1 ∪ I_2| = 2 + |I_1| + |I_2| − |{ℓ} ∪ K| = |I_1| + |I_2| − |K| + 1.     (19)

As M satisfies (P3), from (18) and (19), we conclude that

    |R_{{1,2}∪I_1∪I_2}| = n − k + |I_1| + |I_2| − |K| + 1.

Therefore, all of the inequalities in (18) must be equalities. In particular, the last equality forces ε = 0 and δ = 0. But then, applying (P3) to the nonempty set {ℓ} ∪ K and using (17), we obtain n − k + |K| = |R_{{ℓ}∪K}| ≥ n − k + |{ℓ} ∪ K| = n − k + |K| + 1, a contradiction. This completes the analysis of Case 2.

In either case, we always derive a contradiction. Therefore, our assumption that there exists some ℓ ∈ I_1 ∩ I_2 is wrong. Hence I_1 ∩ I_2 = ∅. It follows immediately that I_i ∩ I_{i′} = ∅ for every i, i′ ∈ [t], i ≠ i′.

We are now in a position to prove Lemma 8, which in turn implies Theorem 6.

Proof of Lemma 8: Recall that we assume that

    {i ∈ [k] : m_{i,j_max} = 1 and m_{i,j_min} = 0} = [t].     (20)

Moreover, we suppose, for contradiction, that none of these t rows satisfies the second condition in Step 7 of Algorithm 1. As shown by Lemma 9 c), d), e), and Lemma 10, we can associate to each i ∈ [t] a subset I_i ⊂ [k] satisfying the following:
(S1) i ∉ I_{i′}, for all i, i′ ∈ [t],
(S2) j_max ∉ R_{I_i}, for all i ∈ [t],
(S3) j_min ∈ R_{I_i}, for all i ∈ [t],
(S4) I_i ∩ I_{i′} = ∅, for all i, i′ ∈ [t], i ≠ i′.

Due to (S2) and (S3), for each i ∈ [t], there exists a row r(i) ∈ I_i that has a zero at column j_max and a one at column j_min. By (S1) and (S4), r(i) ≠ i′ for all i, i′ ∈ [t], and r(i) ≠ r(i′) whenever i ≠ i′.

[Figure: columns j_max and j_min restricted to the rows 1, ..., t, each of which has a one at j_max and a zero at j_min, and to the rows r(1), ..., r(t), each of which has a zero at j_max and a one at j_min.]

Along the rows in the set [t] ∪ {r(i) : i ∈ [t]}, the weights of the two columns j_max and j_min are the same (equal to t). The other rows of M, because of (20), must contribute at least as much to the weight of column j_min as to the weight of column j_max. Therefore, in total, the weight of column j_max is not larger than the weight of column j_min of M. This conclusion contradicts the fact that max ≥ min + 2.

We now discuss the complexity of Algorithm 1. In the initial matrix M̃, the difference between the maximum and the minimum column weights is at most k − 1. Therefore, according to the proof of Lemma 7, the repeat loop finishes after at most (k − 1)⌊n/2⌋ iterations. It is obvious that all steps in each iteration can be done in polynomial time in n and k, except for Step 7. It is not straightforward that the verification of (P3) for a given k × n matrix can be done in polynomial time. However, it can be shown that, by considering a special one-source k-sink network (of linear size in k and n) associated with each matrix, (P3) is equivalent to the condition that in this network, the minimum capacity of a cut between the source and any sink is at least n. On any network, this condition can be verified in polynomial time using the famous network flow algorithm (see, for instance, [6]). Therefore, Algorithm 1 runs in polynomial time in k and n. We omit the proof due to lack of space. Interested readers can find the proof online at [5].

IV. ACKNOWLEDGMENT

The first author thanks Yeow Meng Chee for informing him of the Gale-Ryser Theorem.

REFERENCES

[1] F. J. MacWilliams and N. J. A. Sloane, The Theory of Error-Correcting Codes. Amsterdam: North-Holland, 1977.
[2] T. Ho, M. Médard, R. Koetter, D. R. Karger, M. Effros, J. Shi, and B. Leong, "A random linear network coding approach to multicast," IEEE Trans. Inform. Theory, vol. 52, no. 10, pp. 4413–4430, 2006.
[3] R. Motwani and P. Raghavan, Randomized Algorithms. Cambridge University Press, 1995.
[4] M. Krause, "A simple proof of the Gale-Ryser Theorem," The American Mathematical Monthly, vol. 103, no. 4, pp. 335–337, 1996.
[5] http://www.sutd.edu.sg/cmsresource/MDS-ISIT2013.pdf
[6] R. K. Ahuja, T. L. Magnanti, and J. B. Orlin, Network Flows. Englewood Cliffs, NJ: Prentice-Hall, 1993.
