arXiv:1711.01975v1 [math.CO] 6 Nov 2017

$(n,k,k-1)$-STEINER SYSTEMS IN RANDOM HYPERGRAPHS

MICHAEL SIMKIN

Dedicated to Noam Yonat Rosen. "Only one is my dove, my perfect one" (Song of Songs 6:9)

Abstract. Let $H \sim \mathcal{H}_k(n;p)$ be a random $k$-uniform hypergraph on vertex set $[n]$. We show that for some $\varepsilon > 0$, if $n \in N_k$ and $p \geq n^{-\varepsilon}$, then asymptotically almost surely $H$ contains an $(n,k,k-1)$-Steiner system. Our main tool is Keevash's method of Randomized Algebraic Constructions.

1. Introduction

One of Erdős and Rényi's first discoveries in the theory of random graphs [5], [16] was the threshold for the appearance of a perfect matching in a random graph: $\frac{\log n}{n}$, the same as the threshold for the disappearance of isolated vertices. Consider a random $k$-uniform hypergraph on $[n]$, where each $k$-set is included in the hypergraph with probability $p$, independently. Schmidt and Shamir [15] asked what the threshold is for the existence of a perfect matching in this model. Their problem attracted a lot of attention and was eventually settled in a seminal paper of Johansson, Kahn, and Vu [7], who showed that the threshold for the appearance of a perfect matching is $\frac{\log n}{n^{k-1}}$.

However, a perfect matching in a $k$-uniform hypergraph is not the only natural analogue of a (graphical) perfect matching. It is, in our mind, just as natural to ask about the emergence of an $(n,k,k-1)$-Steiner system, namely, an $n$-vertex $k$-uniform hypergraph in which every $(k-1)$-size subset of vertices is contained in precisely one edge. Note that for $k = 2$ this is just a perfect matching. Unlike perfect matchings, the mere existence of $(n,k,k-1)$-Steiner systems for infinitely many $n$ satisfying necessary divisibility conditions (see Section 1.1 below for the precise conditions) was a very long-standing open question until the recent breakthrough work of Peter Keevash [8] (see Theorem 2.11, below). He proved that for all but finitely many such $n$ an $(n,k,k-1)$-Steiner system exists. An alternative proof was subsequently given by Glock, Kühn, Lo, and Osthus [6]. This makes it meaningful and interesting to consider the threshold for the appearance of Steiner systems in random hypergraphs.

Denote by $\mathcal{H}_k(n;p)$ the distribution on $k$-uniform hypergraphs on vertex set $[n]$ where each $k$-set is included with probability $p$, independently. If $\mathcal{P} = \{P_n\}_{n \in \mathbb{N}}$ is a sequence of nontrivial, monotone increasing properties, where $P_n \subseteq \mathcal{H}_k(n)$, then there exists some unique threshold function $p_c = p_c(n) \in [0,1]$ (for the many equivalent definitions of the threshold function see [3]). Let $N_k \subseteq \mathbb{N}$ be the (infinite) set of integers $n$ for which an $(n,k,k-1)$-Steiner system exists.

Question 1.1. What is the threshold function for the property that $H \sim \mathcal{H}_k(n;p)$ contains an $(n,k,k-1)$-Steiner system?

An obvious necessary condition for a hypergraph to contain an $(n,k,k-1)$-Steiner system is that every $(k-1)$-set must be contained in at least one edge. This provides a lower bound of $\frac{\log n}{n}$ on the threshold function. We don't know of any way to improve this lower bound, and it might well be the correct threshold (see Section 5 for the precise conjectures). Our main result is an upper bound on the threshold function, which to our knowledge is new:

Theorem 1.2. For every $k \geq 2$ there exists some $\varepsilon > 0$ s.t. if $n \in N_k$ and $p \geq n^{-\varepsilon}$ then the probability that $H \sim \mathcal{H}_k(n;p)$ contains an $(n,k,k-1)$-Steiner system is $1 - n^{-\omega(1)}$.

Remark 1.3. It's natural to ask how $\varepsilon$ depends on $k$ in our proof, and how close $n^{-\varepsilon}$ is to the threshold function. We believe that even the best $\varepsilon$ obtainable by our methods is far from the truth; therefore we've chosen a simpler analysis over optimal results. As to the value of $\varepsilon$ we do obtain, it will be easier to discuss after proving Theorem 1.2. We do so in Section 5.

Remark 1.4. The proof of Theorem 1.2 is by a probabilistic algorithm that, given $H \sim \mathcal{H}_k(n;p)$, $p = n^{-\varepsilon}$ for sufficiently small $\varepsilon$, finds an $(n,k,k-1)$-Steiner system contained in $H$ with high probability. The proof of Keevash's main theorems in [8] and [9] is similar. In fact, two steps of our algorithm, constructing the "Template" and "Nibble" (in Sections 2.1 and 2.2, respectively), are "lifted" straight from Keevash's proof of the main theorem in [8]. Another step (described in Section 2.3) is a direct application of Keevash's theorem ([9] Theorem 6.2, quoted below as Theorem 2.11) guaranteeing the existence of a $K_k$-decomposition of pseudo-random $(k-1)$-uniform hypergraphs (to be defined below).
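The model $\mathcal{H}_k(n;p)$ that the algorithm receives as input is simple to sample directly; a minimal illustrative sketch in Python (the function name is ours, not the paper's notation):

```python
import itertools
import random

def sample_hypergraph(n, k, p, seed=None):
    """Sample H ~ H_k(n; p): each k-subset of [n] is an edge independently w.p. p."""
    rng = random.Random(seed)
    return {e for e in itertools.combinations(range(1, n + 1), k) if rng.random() < p}

# In the regime of Theorem 1.2 one would take p = n ** (-eps) for a small eps > 0.
H = sample_hypergraph(12, 3, 12 ** -0.1, seed=0)
```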
In light of the reliance on Keevash's techniques one might wonder if Theorem 1.2 isn't a more direct consequence of Keevash's work. In fact, it seems that a weaker version of Theorem 1.2, with $n^{-\varepsilon}$ replaced by a function tending to 0 at an unknown rate, can be derived straightforwardly from [8] Theorem 6.1. Alternatively, it seems likely that Theorem 1.2 can be proved by following the proof of [9] Theorem 6.2 almost to the letter, with the additional constraint that only $k$-sets from some $H \sim \mathcal{H}_k(n; n^{-\varepsilon})$ (with $\varepsilon$ sufficiently small) may be used in the proof (though we haven't verified this).

Despite the points raised in the previous paragraph we feel it is better to give a self-contained proof, modulo known results. Since we use Keevash's theorem as a "black box", the reader need not verify any specific details of his proof, which is quite complex. Furthermore, while our proof indeed uses techniques from Keevash's proof it is significantly less complicated, in that it relies only on elementary linear algebra and measure concentration while avoiding the need to induct on $k$ and the topic of integral designs, which play prominent roles in [8] and [9].

The remainder of this paper is organized as follows: The rest of Section 1 sets up notation and terminology, and quotes several measure concentration results. In Section 2 we prove Theorem 1.2 for $k \leq 4$. In Section 3 we explain how to modify the proof for the slightly more complicated case where $k \geq 5$. In Section 4 we discuss generalizations to Latin squares, one-factorizations, and high-dimensional permutations. In Section 5 we present some conjectures and related problems.
1.1. Notation and Terminology. Let $\mathcal{H}_k(n)$ be the set of $k$-uniform hypergraphs on vertex set $[n]$. Let $\mathcal{H}_k(n;p)$ be the distribution on $\mathcal{H}_k(n)$ where each edge is included with probability $p$, independently. We identify $H \in \mathcal{H}_k(n)$ with its edge set, viewing $H$ as a subset of $\binom{[n]}{k}$. We write $d(H) = |H| / \binom{n}{k}$. We write $K_k^n$ for the complete $k$-uniform hypergraph on $[n]$, i.e. $K_k^n = \binom{[n]}{k}$. Borrowing from the language of polytopes, if $n$ and $k$ are clear (for example, if we have fixed some $H \in \mathcal{H}_k(n)$), we refer to an element of $\binom{[n]}{k-1}$ as a facet.

Although the edges of a hypergraph are sets, we sometimes need to fix an ordering of the elements of an edge $x = \{x_1, x_2, \ldots, x_k\} \in K_k^n$. In such circumstances we write $x = x_1 x_2 \ldots x_k$. This is to be understood as an arbitrary ordering that, once chosen, remains fixed.

For $H \in \mathcal{H}_{k-1}(n)$, $K_k(H) := \left\{ e \in \binom{[n]}{k} : \binom{e}{k-1} \subseteq H \right\} \in \mathcal{H}_k(n)$. For example, if $G$ is a graph, $K_3(G)$ is the set of induced triangles. For $H \in \mathcal{H}_k(n)$,

$K_{k-1}(H) = \bigcup_{e \in H} \binom{e}{k-1} \in \mathcal{H}_{k-1}(n)$

is the set of facets contained in the edges of $H$, and is referred to as the set of $H$'s facets.

A $K_k$-decomposition of $G \in \mathcal{H}_{k-1}(n)$ is a hypergraph $H \subseteq K_k(G)$ s.t. every facet in $G$ is contained in precisely one edge in $H$. There are necessary divisibility conditions $G$ must satisfy for it to have a $K_k$-decomposition. Specifically, for $S \subseteq [n]$, we write $G(S) = \{ f \subseteq [n] \setminus S : S \cup f \in G \}$. If $S \in \binom{[n]}{i}$, $0 \leq i \leq k-1$, then, on the one hand, there are $|G(S)|$ edges containing $S$, and on the other hand each $k$-set containing $S$ contains $k - i$ edges containing $S$. Therefore, if a $K_k$-decomposition of $G$ exists, we must have $k - i \mid |G(S)|$. We say that $G$ is $k$-divisible if for all $0 \leq i \leq k - 1$ and all $S \in \binom{[n]}{i}$, $k - i$ divides $|G(S)|$.

A $K_k$-decomposition of $K_{k-1}^n$ is precisely an $(n,k,k-1)$-Steiner system. In this case, the conditions for $k$-divisibility translate to the arithmetic constraints

$\forall\, 0 \leq i \leq k-1, \quad (k - i) \,\Big|\, \binom{n-i}{k-1-i}$

If $n$ satisfies these conditions we say it is $k$-divisible.
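The divisibility constraints on $n$ are easy to test directly; a short sketch (the helper name is ours):

```python
from math import comb

def is_k_divisible(n, k):
    """n is k-divisible iff (k - i) divides C(n - i, k - 1 - i) for every 0 <= i <= k - 1."""
    return all(comb(n - i, k - 1 - i) % (k - i) == 0 for i in range(k))

# For k = 3 this recovers the classical Steiner-triple-system condition n = 1, 3 (mod 6).
```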
A consequence of Keevash's main theorem in [8] is that $N_k$, the set of numbers $n$ for which $(n,k,k-1)$-Steiner systems exist, is all but finitely many of the $k$-divisible numbers. If $n$ and $k$ are clear from context, we refer to an $(n,k,k-1)$-Steiner system as simply a Steiner system or a design.

$S, T \in K_k^n$ are facet-disjoint if $K_{k-1}(S) \cap K_{k-1}(T) = \emptyset$. We say that $S \subseteq K_k^n$ is facet-disjoint if for all $x, y \in S$, $x \neq y \Rightarrow |x \cap y| < k - 1$. We sometimes call a facet-disjoint set a partial design. A facet $f$ is covered by a partial design $S$ if there exists some $e \in S$ containing $f$.

If $G \in \mathcal{H}_k(n)$, $\mathbb{F}$ is a field, and $\pi : [n] \to \mathbb{F}$, we denote the set

$\left\{ (\pi x_1, \pi x_2, \ldots, \pi x_k)^T : x_1 x_2 \ldots x_k \in G \right\} \subseteq \mathbb{F}^k$

by $\pi(G)$.

A sequence of events $\{E_n\}_{n \in \mathbb{N}}$ occurs with very high probability (w.v.h.p.) if $\mathbb{P}[E_n] = 1 - n^{-\omega(1)}$. If we have polynomially many (sequences of) events, each occurring w.v.h.p., then their intersection also occurs w.v.h.p. For the most part we will abuse terminology and refer to a single event as occurring w.v.h.p., where the sequential structure and the dependence on $n$ may be inferred.

For $a, b \in \mathbb{R}$ we write $a \pm b$ to indicate a quantity in the interval $[a - |b|, a + |b|]$.

1.2. Measure Concentration. Throughout the proof we'll calculate the expectations of various random variables and then argue that w.v.h.p. they are, in fact, close to their mean. To this end we collect several concentration inequalities.

Lemma 1.5 ([11] Theorem 2.11). Let $f : \{0,1\}^N \to \mathbb{R}$ be a Boolean function and $C \in \mathbb{R}$ s.t. for all $x, y \in \{0,1\}^N$ that differ in exactly one coordinate, $|f(x) - f(y)| \leq C$ (i.e. $f$ is $C$-Lipschitz). Let $X = (X_1, X_2, \ldots, X_N)$ be a sequence of independent Bernoulli variables with mean $q \in [0,1]$. Then for all $t > 0$:

$\mathbb{P}\left[ |f(X) - \mathbb{E}f(X)| > t \right] \leq \exp\left( - \frac{t^2}{4 C^2 N q + 2 C t} \right)$

Lemma 1.6 ([8] Theorem 2.10). Let $f : \{0,1\}^N \to \mathbb{R}$ be a Boolean function and $b_1, b_2, \ldots, b_N > 0$ where if $x, y \in \{0,1\}^N$ differ only on the $i$th coordinate, $|f(x) - f(y)| \leq b_i$.
Let $X_1, X_2, \ldots, X_N$ be independent Bernoulli random variables. Then for every $t > 0$:

$\mathbb{P}\left[ |f(X) - \mathbb{E}f(X)| > t \right] \leq 2 \exp\left( - \frac{t^2}{2 \sum_{i=1}^N b_i^2} \right)$

For two sets $X, Y$ we write $\mathrm{Sym}(X)$ for the group of bijections from $X$ to itself and $\mathcal{I}(X, Y)$ for the set of injections from $X$ to $Y$. The next lemma is an immediate consequence of [8] Lemma 2.13.

Lemma 1.7. Let $X, Y$ be finite sets and let $C > 0$. Suppose $f : \mathcal{I}(X,Y) \to \mathbb{R}$ has the property that for every $\pi$ and every transposition $\tau \in \mathrm{Sym}(Y)$, $|f(\pi) - f(\tau \circ \pi)| \leq C$. Let $\pi$ be a uniformly random element of $\mathcal{I}(X,Y)$. Then for every $t > 0$:

$\mathbb{P}\left[ |f(\pi) - \mathbb{E}f(\pi)| > t \right] \leq 2 \exp\left( - \frac{t^2}{2 |Y| C^2} \right)$

Lemma 1.8 ([8] Lemma 2.7). Let $\mathcal{F}_0 \subseteq \mathcal{F}_1 \subseteq \ldots \subseteq \mathcal{F}_n$ be a filtration of a finite probability space. Let $Y_1, Y_2, \ldots, Y_n$ be a sequence of random variables and $C \in \mathbb{R}$ s.t. for every $i$, $Y_i$ is $\mathcal{F}_i$-measurable, $|Y_i| \leq C$, and $\mathbb{E}[\, |Y_i| \mid \mathcal{F}_{i-1}\,] \leq \mu_i$. Suppose $Y = \sum_{i=1}^n Y_i$ and $\mu > \sum_{i=1}^n \mu_i$. Then for all $c > 0$:

$\mathbb{P}\left[ |Y| > (1 + c)\mu \right] \leq 2 \exp\left( - \frac{\mu c^2}{2(1 + 2c) C} \right)$

2. Proof of Theorem 1.2 for $k \leq 4$

The $k = 2$ case follows from the classical result of Erdős and Rényi so we may assume $k \in \{3, 4\}$.

Throughout the section we'll assume w.l.o.g. that $p = n^{-\varepsilon}$. Wherever necessary we assume $n \in N_k$ is arbitrarily large and $\varepsilon$ is arbitrarily small (but independent of $n$). Asymptotic terms refer to fixed $k$ and $n$ tending to infinity.

Let $H \sim \mathcal{H}_k(n;p)$. Our goal is to find a $K_k$-decomposition of $K_{k-1}^n$ contained in $H$. We do so by means of a probabilistic algorithm proceeding roughly as follows, where each step can be completed w.v.h.p.:

• Template (Section 2.1): Set aside a partial design $T \subseteq H$ that has desirable switching properties. Later in the proof we'll construct a design that may use a small number of edges not in $H$. These edges will be replaced by "algebraic absorbers" induced by the template.

• Nibble (Section 2.2): Using a random greedy algorithm construct a partial design $N \subseteq H$ that is facet-disjoint from $T$ and covers almost all facets.
• Applying Keevash's Theorem (Section 2.3): Let $L \in \mathcal{H}_{k-1}(n)$ be the collection of facets uncovered by $T \cup N$. Apply Keevash's existence theorem from [9] to obtain a $K_k$-decomposition $S \subseteq K_k(L)$ of $L$. Typically, $S$ uses edges that aren't in $H$.

• Absorbing $S$ (Section 2.4): Using algebraic absorbers replace the edges in $S$ (as well as some edges from $T$) with edges from $H$, yielding a $K_k$-decomposition of $K_{k-1}^n$ that is contained in $H$.
2.1. Template. Let $2n \leq 2^m \leq 4n$. Let $\gamma = \frac{n}{2^m - 1} = \Theta(1)$. Let $\mathbb{F}$ be the field with $2^m$ elements (so that $|\mathbb{F}| = \Theta(n)$). We remark that since $\mathbb{F}$ has characteristic 2, if $u, v$ are elements in a vector space over $\mathbb{F}$ then $-v = v$ and $u + v = 0 \iff u = v$. Furthermore, if $V$ is a vector space over $\mathbb{F}$ of dimension $d = O(1)$ then $|V| = \Theta(n^d)$. If $T : V \to U$ is a linear map, then $|\ker T| \cdot |\mathrm{Im}\, T| = \Theta(n^d)$. Let $\mathbb{F}^* = \mathbb{F} \setminus \{0\}$.

Let $\pi : [n] \hookrightarrow \mathbb{F}^*$ be a uniformly random injection. For the remainder of the proof we identify $x \in [n]$ with $\pi(x)$, and view $[n]$ as a subset of $\mathbb{F}^*$. We define the template $T$:

$T = \{ x_1 x_2 \ldots x_k \in H : x_1 + x_2 + \ldots + x_k = 0 \}$

The edges in $T$ are called algebraic. A facet $f \in K_{k-1}^n$ is algebraic if for some (unique) $x \in [n]$, $f \cup \{x\}$ is algebraic.

The remainder of Section 2.1 establishes properties of $T$ and $H$ that hold w.v.h.p. Section 2.1.1 introduces cross-polytopes, small hypergraphs that form the building blocks of the absorbers. Section 2.1.2 introduces the absorbers themselves, small subhypergraphs of $H$ whose vertices satisfy certain linear constraints. Section 2.1.3 defines families of linear operators related to the absorbers and establishes their salient properties. These operators provide a convenient framework within which we may study the interactions between absorbers. Finally, Section 2.1.4 establishes the properties of $T$ and $H$ necessary to continue with the proof.
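Since $\mathbb{F} = GF(2^m)$ has characteristic 2, addition in $\mathbb{F}$ is bitwise XOR of the $m$-bit representations, which is all the definition of $T$ uses. A sketch of extracting the template (the integer representation and function name are our assumptions, not the paper's notation):

```python
import itertools
import random

def build_template(H, n, m, seed=None):
    """Given H (a set of edges over [n]), pick a uniformly random injection
    pi: [n] -> F* (nonzero m-bit integers) and return the algebraic edges
    T = { e in H : the field sum of the pi-images of e is 0 }, where the
    characteristic-2 field sum is bitwise XOR."""
    rng = random.Random(seed)
    images = rng.sample(range(1, 2 ** m), n)   # random injection into F*
    pi = {v: images[v - 1] for v in range(1, n + 1)}
    T = set()
    for e in H:
        acc = 0
        for v in e:
            acc ^= pi[v]
        if acc == 0:
            T.add(e)
    return T, pi
```

In the proof $2n \leq 2^m \leq 4n$, so the injection into $\mathbb{F}^*$ exists and a constant fraction $\gamma$ of $\mathbb{F}^*$ is used.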
2.1.1. Cross-Polytopes. Let $x_1, x_2, \ldots, x_k, a_1, a_2, \ldots, a_k \in [n]$ be distinct vertices. The cross-polytope spanned by $x_1, x_2, \ldots, x_k, a_1, a_2, \ldots, a_k$ and denoted $C_{x_1, x_2, \ldots, x_k, a_1, a_2, \ldots, a_k}$ is the $k$-uniform hypergraph on vertex set $\{x_1, \ldots, x_k, a_1, \ldots, a_k\}$ with the edges:

$e_I := \{ x_i : i \in [k] \setminus I \} \cup \{ a_i : i \in I \}, \quad I \subseteq [k]$

In other words, the edges consist of the sets containing exactly one of $x_i$ and $a_i$ for each $i$. We refer to the edges $\{e_I\}_{I \subseteq [k]}$ as either even or odd, depending on the parity of $|I|$. Observe that both

$C^{even}_{x_1, \ldots, x_k, a_1, \ldots, a_k} = \{ e_I : I \subseteq [k],\ |I| \text{ even} \}$

and

$C^{odd}_{x_1, \ldots, x_k, a_1, \ldots, a_k} = \{ e_I : I \subseteq [k],\ |I| \text{ odd} \}$

are $K_k$-decompositions of $K_{k-1}(C_{x_1, \ldots, x_k, a_1, \ldots, a_k})$.

If $x = x_1 x_2 \ldots x_k \in K_k^n$ is non-algebraic with the property that every facet in $x$ is algebraic, then there exists $a = a_1, a_2, \ldots, a_k \in K_k^n$ s.t. for every $i \in [k]$, $x_1 x_2 \ldots x_{i-1} a_i x_{i+1} \ldots x_k \in T$. Observe that $a_i = \sum_{j=1}^k x_j + x_i$ and that the $a_i$s are distinct. Indeed, if $i \neq j$:

$a_i + a_j = 2 \sum_{\ell=1}^k x_\ell + x_i + x_j = x_i + x_j \neq 0$

We call $C_{x,a}$ the associated cross-polytope of $x$, and denote it by $C_x$. We refer to the vertices $a_1, a_2, \ldots, a_k$ as the algebraic vertices of $C_x$. Observe that $C_x^{odd} \subseteq T$; we call this the algebraic decomposition of $C_x$. We call $C_x^{even}$ the non-algebraic decomposition of $C_x$.

Denote by $\mathcal{C} \subseteq K_k^n$ the set of $x \in K_k^n$ s.t. $x$ is non-algebraic but every facet in $x$ is (so that the associated cross-polytope $C_x$ exists), and $C_x \setminus \{x\} \subseteq H$. Note that every edge in the algebraic decomposition $C_x^{odd}$ sums to 0 and lies in $H$, so it is necessarily in $T$; hence $C_x^{odd} \subseteq T$.

2.1.2. Absorbers. In Section 2.1.4 we'll show that w.v.h.p. (over the choice of $H$ and $\pi$) almost every $s \in K_k^n$ can be embedded in many absorbers. These have the following structure:

Let $x = x_1 x_2 \ldots x_k \in K_k^n$. An edge $a = a_1, a_2, \ldots, a_k \in K_k^n$ spans an absorber for $x$ if:

• $x \cap a = \emptyset$.
• $C^{odd}_{x,a} \subseteq H$.
• $C^{even}_{x,a} \setminus \{x\} \subseteq \mathcal{C}$.
In particular, if $\emptyset \neq I \subseteq [k]$ has even cardinality, then $e_I \in C^{even}_{x,a} \setminus \{x\} \subseteq \mathcal{C}$ has an associated cross-polytope $C_{e_I}$. Furthermore, all edges in $C_{e_I}$ are contained in $H$, with the possible exception of $e_I$ itself.

• $x_1, x_2, \ldots, x_k$, $a_1, a_2, \ldots, a_k$, and the vertices of all the associated cross-polytopes $C_e$, where $x \neq e \in C^{even}_{x,a}$, are all distinct. This ensures that all the associated cross-polytopes are facet-disjoint.

The absorber spanned by $x, a$ is the $k$-uniform hypergraph
$A = C^{odd}_{x,a} \cup \bigcup_{e_I \in C^{even}_{x,a} \setminus \{x\}} \left( C_{e_I} \setminus \{e_I\} \right)$

The image one should keep in mind is that an absorber for $x$ is a cross-polytope containing $x$ in which on every even edge (with the exception of $x$) lies an associated cross-polytope. Denote by $\mathcal{A}_x$ the set of absorbers for $x$.

If $A \in \mathcal{A}_x$ is spanned by $a \in K_k^n$, the algebraic decomposition of $A$ is the hypergraph:

$A^{alg} = \bigcup_{e \in C^{even}_{x,a} \setminus \{x\}} C_e^{odd} \subseteq T$

The non-algebraic decomposition of $A$ is the hypergraph:
$A^{non\text{-}alg} = C^{odd}_{x,a} \cup \bigcup_{e \in C^{even}_{x,a} \setminus \{x\}} \left( C_e^{even} \setminus \{e\} \right) \subseteq H$

Observe that $A^{alg}$ is a $K_k$-decomposition of $K_{k-1}(A) \setminus K_{k-1}(x)$, and that $A^{non\text{-}alg}$ is a $K_k$-decomposition of $K_{k-1}(A)$. Finally, set:

$M = M(k) := |A| = |A^{alg}| + |A^{non\text{-}alg}| = 2^{2k-1} - 2^k + 1$
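The fact that the even and odd halves of a cross-polytope each decompose its facets can be verified mechanically for small $k$; an illustrative sketch (function names are ours):

```python
import itertools

def cross_polytope(xs, ays):
    """All 2^k edges e_I = {x_i : i not in I} | {a_i : i in I}, split by |I| parity."""
    k = len(xs)
    even, odd = [], []
    for r in range(k + 1):
        for I in itertools.combinations(range(k), r):
            e = frozenset(ays[i] if i in I else xs[i] for i in range(k))
            (even if r % 2 == 0 else odd).append(e)
    return even, odd

def is_decomposition(edges, all_edges):
    """Check that every facet of the hypergraph lies in exactly one edge of `edges`."""
    facets = {f for e in all_edges for f in itertools.combinations(sorted(e), len(e) - 1)}
    cover = [f for e in edges for f in itertools.combinations(sorted(e), len(e) - 1)]
    return set(cover) == facets and len(cover) == len(set(cover))
```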
2.1.3. Vertex, Facet, and Edge Operators. An important property of absorbers is that their vertices, facets, and edges can be seen as linear functions of the spanning vertices. This allows us to estimate the number of absorbers for a given edge, and to control the ways in which absorbers for different edges intersect.

In what follows, we sometimes abuse terminology and refer to a vector as what properly should be a set. For example, we may refer to an element of $\mathbb{F}^{k-1}$ as a facet. This should be understood as the set of coordinates of the vector. In the other direction, we sometimes refer to a set by what properly is a vector. For example, we may say that an edge $x \in K_k^n$ is an element of $\mathbb{F}^k$. This should be understood to mean that at the first mention, we fix an ordering of the elements of $x$ so that it is indeed a vector. All subsequent statements should hold regardless of the particular ordering.

Let $T_1, T_2, \ldots, T_k : \mathbb{F}^k \to \mathbb{F}$ and $P_1, P_2, \ldots, P_k : \mathbb{F}^k \to \mathbb{F}^{k-1}$ be the canonical projections, i.e., $T_i (x_1, x_2, \ldots, x_k)^T = x_i$ and $P_i (x_1, x_2, \ldots, x_k)^T = (x_1, \ldots, x_{i-1}, x_{i+1}, \ldots, x_k)^T$. Observe that if $x \in K_k^n$ the vertices of $x$ are precisely $T_1 x, T_2 x, \ldots, T_k x$ and the facets contained in $x$ are precisely $P_1 x, P_2 x, \ldots, P_k x$.

Define the linear operator $C : \mathbb{F}^k \to \mathbb{F}^{2k}$ by:

$C = \begin{pmatrix} I_k \\ J_k + I_k \end{pmatrix}$

where $I_k \in M_k(\mathbb{F})$ is the identity matrix and $J_k \in M_k(\mathbb{F})$ is the all-1s matrix. If $x \in \mathcal{C}$, $Cx$ is the vertex set of the associated cross-polytope $C_x$. For every $I \subseteq [k]$, define the linear operator $E_I : \mathbb{F}^{2k} \to \mathbb{F}^k$ by:

$E_I = \begin{pmatrix} X_{[k] \setminus I} & X_I \end{pmatrix}$

where $X_I \in M_k(\mathbb{F})$ is 1 on the positions $(i,i)$, $i \in I$, and 0 elsewhere. Notice that if $x, a \in \mathbb{F}^k$ span a cross-polytope, its edges are precisely $\left\{ E_I \binom{x}{a} : I \subseteq [k] \right\}$.

Combining the observations above, for $x \in \mathcal{C}$, $\{ P_i E_I C x : i \in [k],\ I \subseteq [k] \}$ is the collection of facets in $C_x$. In fact, since every facet is contained in both an edge $e_I$ with $|I|$ even and an edge $e_I$ with $|I|$ odd, we may restrict ourselves to $I \subseteq [k]$ of a specific parity.
Therefore, if we define:

$\mathcal{F}_1 = \{ P_i E_\emptyset : i \in [k] \}$
$\mathcal{F}_2 = \{ P_i E_J C E_I : i \in [k],\ J \subseteq [k],\ |J| = 0 \bmod 2,\ \emptyset \neq I \subseteq [k],\ |I| = 0 \bmod 2 \}$
$\mathcal{F} = \mathcal{F}_1 \cup \mathcal{F}_2$

then the collection of facets in the absorber for $x$ spanned by $a \in \mathbb{F}^k$ is:

$\left\{ F \binom{x}{a} : F \in \mathcal{F} \right\}$

We call $\mathcal{F}$ the family of facet operators. The facets given by $\mathcal{F}_1$ are those contained in $x$, with the remaining facets given by $\mathcal{F}_2$.

We define the vertex operators similarly. Let $\mathcal{V}_1 = \{ T_i E_\emptyset,\ T_i E_{[k]} : i \in [k] \}$, $\mathcal{V}_2 = \{ T_i E_{[k]} C E_I : i \in [k],\ \emptyset \neq I \subseteq [k],\ |I| = 0 \bmod 2 \}$, and $\mathcal{V} = \mathcal{V}_1 \cup \mathcal{V}_2$. If $a \in \mathbb{F}^k$ spans an absorber for $x$, its vertex set is:

$\left\{ V \binom{x}{a} : V \in \mathcal{V} \right\}$

Finally, we define the edge operators. Let:

$\mathcal{E}_1 = \{ E_I : I \subseteq [k],\ |I| = 1 \bmod 2 \}$
$\mathcal{E}_2 = \{ E_J C E_I : \emptyset \neq J \subseteq [k],\ \emptyset \neq I \subseteq [k],\ |I| = 0 \bmod 2 \}$
$\mathcal{E} = \mathcal{E}_1 \cup \mathcal{E}_2$

If $x, a \in \mathbb{F}^k$ span an absorber, its edge set is:

$\left\{ E \binom{x}{a} : E \in \mathcal{E} \right\}$

Notation. For a linear map $T$ defined on $\mathbb{F}^{2k}$ we write $T_1, T_2$ for the unique linear maps defined on $\mathbb{F}^k$ s.t. for all $x = (x_1, x_2, \ldots, x_{2k})^T \in \mathbb{F}^{2k}$, $Tx = T_1 (x_1, \ldots, x_k)^T + T_2 (x_{k+1}, \ldots, x_{2k})^T$.

Before discussing these operators' properties we pause for motivation. The point of the linear operators is to allow us to answer questions such as: for $x, x' \in K_k^n$, how many pairs of absorbers $A \in \mathcal{A}_x$, $A' \in \mathcal{A}_{x'}$ are there that intersect each other on a facet? Well, this quantity is certainly bounded above by the number of pairs $a, a' \in \mathbb{F}^k$ for which there exist $F, F' \in \mathcal{F}$ s.t.:

(1) $\quad F \binom{x}{a} + F' \binom{x'}{a'} = 0$

Changing our perspective, we may ask: for a given pair $F, F' \in \mathcal{F}$, how many pairs $a, a' \in \mathbb{F}^k$ exist s.t. Equation 1 holds? Well, $\left\{ F \binom{x}{a} + F' \binom{x'}{a'} \right\}_{a, a' \in \mathbb{F}^k}$ is an affine subspace of $\mathbb{F}^{k-1}$, and we are asking about the size of the inverse image of $\{0\}$ under given affine maps. Linear algebra is the perfect framework for this type of calculation.

We now establish useful properties of $\mathcal{V}$, $\mathcal{E}$, and $\mathcal{F}$.

Proposition 2.1. Let $x \in K_k^n$. There are $O(n^{k-1})$ vectors $a \in \mathbb{F}^k$ s.t. either $0 \in \left\{ V \binom{x}{a} : V \in \mathcal{V} \right\}$ or the values $\left\{ V \binom{x}{a} \right\}_{V \in \mathcal{V}}$ are not distinct.
As a consequence there are $\Omega(n^k)$ edges $a \in K_k^n$ s.t. the values $\left\{ V \binom{x}{a} \right\}_{V \in \mathcal{V}}$ are all distinct and non-zero.

Remark 2.2. The proof of Proposition 2.1 uses the fact that $x \in K_k^n$ doesn't contain any subset of $k - 2$ vertices that sum to 0. This holds only because $k \leq 4$. This is the only place in the proof where this is used. If $x$ doesn't contain any $(k-2)$-set that sums to 0 then the conclusion holds even when $k \geq 5$.

Proof. We first observe that if $T : \mathbb{F}^{2k} \to \mathbb{F}$ is a linear functional and $x \in \mathbb{F}^k$ is fixed, then $\left\{ a \in \mathbb{F}^k : T \binom{x}{a} = 0 \right\}$ is an affine subspace of $\mathbb{F}^k$. If it isn't the entire space then its dimension is at most $k - 1$ and its size is $O(n^{k-1})$. To show that the latter is the case it's enough to exhibit some $a \in \mathbb{F}^k$ s.t. $T \binom{x}{a} \neq 0$. Alternatively we may show:

(2) $\quad T_1 x \neq 0 \ \vee\ T_2 \neq 0$

Let $x \in K_k^n$. In order to prove the proposition it's enough to show that Equation 2 holds for each of the linear maps $V \in \mathcal{V}$ and $V + V'$ where $V, V' \in \mathcal{V}$ are distinct.

First, if $V \in \mathcal{V}_1$, then by definition of $\mathcal{V}_1$ we have either $V_1 = T_i$ or $V_2 = T_i$ for some $i \in [k]$, both of which imply Equation 2.

If $V \in \mathcal{V}_2$, then for some $i \in [k]$ and $\emptyset \neq I \subseteq [k]$, $|I| = 0 \bmod 2$, $V = T_i E_{[k]} C E_I$. Let $j \in I \setminus \{i\}$. Then $V_2 e_j = T_i E_{[k]} C X_I e_j = 1 \neq 0$, which implies Equation 2.

It remains to show that Equation 2 holds for $V + V'$, where $V, V' \in \mathcal{V}$ are distinct. We consider several cases:

• $V, V' \in \mathcal{V}_1$. In this case $(V + V') \binom{x}{a}$ is the difference between two elements of $x_1, \ldots, x_k, a_1, \ldots, a_k$, which are distinct for all but $O(n^{k-1})$ choices of $a$.

• $V = T_i E_\emptyset \in \mathcal{V}_1$ for some $i$ and $V' = T_j E_{[k]} C E_I \in \mathcal{V}_2$, for some $j$ and even-sized non-empty $I \subseteq [k]$. Then $V_2 + V_2' = T_j E_{[k]} C X_I$. Let $\ell \in I \setminus \{j\}$. Then $(V_2 + V_2') e_\ell = 1$, implying that $V_2 + V_2' \neq 0$, so Equation 2 holds.

• $V = T_i E_{[k]} \in \mathcal{V}_1$ for some $i$ and $V' = T_j E_{[k]} C E_I \in \mathcal{V}_2$, for some $j$ and even-sized non-empty $I \subseteq [k]$.
If $I \setminus \{i, j\}$ is non-empty then by an argument similar to the previous case $V_2 + V_2' \neq 0$. Otherwise $I = \{i, j\}$, so $(V_1 + V_1') x = T_j E_{[k]} C X_{[k] \setminus \{i,j\}} x = \sum_{\ell \in [k] \setminus \{i,j\}} x_\ell$. This is the sum of $k - 2$ non-zero distinct elements of $\mathbb{F}^*$, and so is non-zero. Therefore Equation 2 holds.

• $V, V' \in \mathcal{V}_2$, and $V = T_i E_{[k]} C E_I$, $V' = T_j E_{[k]} C E_I$ for appropriate $i, j$ and $I$. In this case $(V + V') \binom{x}{a} = (T_j + T_i) E_I \binom{x}{a}$, which is the sum of two of $x_1, \ldots, x_k, a_1, \ldots, a_k$. For all but $O(n^{k-1})$ choices for $a$ these are distinct, implying that Equation 2 holds.

• $V, V' \in \mathcal{V}_2$, $V = T_i E_{[k]} C E_I$, and $V' = T_j E_{[k]} C E_J$ for $I \neq J$. Note that $|I \Delta J| \geq 2$. If there is some $\ell \in I \setminus (J \cup \{i\})$ then $(V_2 + V_2') e_\ell = 1$ and we're done. The same holds if $J \setminus (I \cup \{j\}) \neq \emptyset$. Otherwise we have $I \Delta J = \{i, j\}$, and in particular $i \neq j$. But then $(V_1 + V_1') x = \sum_{\ell \in [k] \setminus I} x_\ell + \sum_{\ell \in [k] \setminus J} x_\ell = x_i + x_j \neq 0$, so Equation 2 holds. $\square$

The next two propositions, whose proofs are omitted, follow from similar examinations of the definitions of $\mathcal{F}$ and $\mathcal{E}$.

Proposition 2.3. For every $F \in \mathcal{F}$ the following hold:
(a) If $F \in \mathcal{F}_2$ there exist distinct $i, j \in [k]$ s.t. $e_i, e_j \in \ker F_1$.
(b) If $F \in \mathcal{F}_2$ then $\mathrm{rk}\, F_2 \geq 1$.
(c) For every $f \in \mathbb{F}^{k-1}$, $\left\{ a \in \mathbb{F}^k : F_1^{-1}(f + F_2 a) \neq \emptyset \right\} \subseteq \mathbb{F}^k$ is an affine subspace of dimension $\mathrm{rk}\, F_1 + 1$.

Proposition 2.4. For every $E \in \mathcal{E}$, $\dim \ker E_2 \leq k - 1$.

2.1.4. Properties of the Template. The next proposition establishes that every edge in $K_k^n$ can be embedded in many absorbers. Recall that $\mathcal{A}_x$ is the set of absorbers for $x \in K_k^n$ and that $M$ is the number of edges in an absorber.

Proposition 2.5. W.v.h.p. (over the choice of $H$ and $\pi$) for every $x \in K_k^n$, $|\mathcal{A}_x| = \Omega(n^{k - M\varepsilon})$.

Remark 2.6. The proof relies only on the conclusion of Proposition 2.1 holding for $x$. Therefore, if $x$ contains no $(k-2)$-set whose sum is 0, w.v.h.p. there are $\Omega(n^{k - M\varepsilon})$ absorbers for $x$ even if $k \geq 5$.

Proof. It's enough to show that the bound holds w.v.h.p. for an arbitrary $x \in K_k^n$.
As $H$ and $\pi$ are independent, we may apply the following two-step analysis: Condition on the values $\pi$ takes on $x$. Let $\mathcal{A}'_x$ be the set of $a \in (\mathbb{F}^*)^k$ satisfying the conditions in Proposition 2.1 (so $|\mathcal{A}'_x| = \Omega(n^k)$). For each $a \in \mathcal{A}'_x$, $\left| \left\{ V \binom{x}{a} : V \in \mathcal{V} \right\} \right| = |\mathcal{V}| = O(1)$. Therefore the probability (over $\pi([n] \setminus x)$) of all vertices in the absorber spanned by $x, a$ being in $\pi([n])$ is at least $(1 - o(1)) \gamma^{|\mathcal{V}|} = \Omega(1)$. Let $\mathcal{A}''_x$ be the set of $a \in \mathcal{A}'_x$ s.t. $\left\{ V \binom{x}{a} : V \in \mathcal{V} \right\} \subseteq \pi([n])$. Then:

$\mathbb{E}_{\pi([n] \setminus x)} |\mathcal{A}''_x| = |\mathcal{A}'_x| (1 - o(1)) \gamma^{k(2^{k-1}+1)} = \Omega(n^k)$

Composing a transposition of $\mathbb{F}^* \setminus \pi(x)$ with $\pi$ affects $|\mathcal{A}''_x|$ by at most $O(n^{k-1})$. Thus, applying Lemma 1.7, w.v.h.p. $|\mathcal{A}''_x| = \Omega(n^k)$.

For the second step, to obtain $\mathcal{A}_x$, we choose $H \sim \mathcal{H}_k(n; p)$. Note that for $a \in \mathcal{A}''_x$, since the vertices $\left\{ V \binom{x}{a} \right\}_{V \in \mathcal{V}}$ are distinct, so are the facets $\left\{ F \binom{x}{a} \right\}_{F \in \mathcal{F}}$. Therefore $a$ is also in $\mathcal{A}_x$ iff all of the $M(k)$ edges of the would-be absorber spanned by $x, a$ are in $H$. Thus:

$\mathbb{E}_H [|\mathcal{A}_x|] = p^{M(k)} |\mathcal{A}''_x| = \Omega(n^{k - M(k)\varepsilon})$

$|\mathcal{A}_x|$ is a Boolean function of the independent Bernoulli random variables $\{\chi_e\}_{e \in K_k^n}$ where $\chi_e = 1$ iff $e \in H$. We'd like to apply Lemma 1.6 to conclude that w.v.h.p. $|\mathcal{A}_x|$ is close to its expectation. For $e \in K_k^n$, set:

$b_e = \sum_{E \in \mathcal{E} :\, e \in E_1 x + \mathrm{Im}\, E_2} |\ker E_2|$

Adding or removing $e$ from $H$ alters $|\mathcal{A}_x|$ by at most $b_e$. We have:

$B := \sum_{e \in K_k^n} b_e^2 = \sum_{e \in K_k^n} \Bigg( \sum_{E \in \mathcal{E} :\, e \in E_1 x + \mathrm{Im}\, E_2} |\ker E_2| \Bigg)^2 = O\Bigg( \sum_{e \in K_k^n} \sum_{E \in \mathcal{E} :\, e \in E_1 x + \mathrm{Im}\, E_2} |\ker E_2|^2 \Bigg) = O\Bigg( \sum_{E \in \mathcal{E}} \sum_{e \in E_1 x + \mathrm{Im}\, E_2} |\ker E_2|^2 \Bigg) = O\Bigg( \sum_{E \in \mathcal{E}} |\mathrm{Im}\, E_2| \, |\ker E_2|^2 \Bigg) = O\Bigg( n^k \sum_{E \in \mathcal{E}} |\ker E_2| \Bigg)$

By Proposition 2.4, $\dim \ker E_2 \leq k - 1 \Rightarrow |\ker E_2| = O(n^{k-1})$. Thus:

$B = O(n^{2k-1})$

Applying Lemma 1.6 with (for example) $t = \frac{1}{2} \mathbb{E} |\mathcal{A}_x| = \Omega(n^{k - M\varepsilon})$ we conclude that w.v.h.p. $|\mathcal{A}_x| = \Omega(n^{k - M\varepsilon})$. $\square$

We proceed conditioning on the conclusions of Proposition 2.5 holding.

2.2. Nibble.
The goal of this stage is to extend $T$ to a partial design $T \cup N \subseteq H$ covering almost all facets, such that the hypergraph $L := K_{k-1}^n \setminus K_{k-1}(T \cup N)$ of uncovered facets is pseudo-random (in a sense to be made precise) and not too sparse (specifically, it will have density at least $n^{-a}$, where $a = a(k) > 0$ is a small constant to be specified later).

There are several avenues we can follow. We sketch two approaches, both involving probabilistic proofs.

The first is to apply the random greedy packing algorithm to $H$: Set $N = \emptyset$. Let $L \in \mathcal{H}_{k-1}(n)$ be the set of facets uncovered by $T \cup N$. As long as $d(L) \geq n^{-a}$ and $H$ contains edges that are facet-disjoint from $T \cup N$, choose one uniformly at random and add it to $N$, then update $L$ accordingly.

The second approach is to use the Rödl Nibble: As before, set $N = \emptyset$ and let $L$ be the set of facets uncovered by $T \cup N$. Let $G \subseteq H$ be the set of edges that are facet-disjoint from $T \cup N$. Let $\eta = \eta(k) > 0$ be a sufficiently small constant. As long as $d(L) \geq n^{-a}$ and $G \neq \emptyset$, choose a random subset $S \subseteq G$ where each edge is selected with independent probability $\frac{\eta}{n\, d(G)}$. Of the edges in $S$, keep only those that are facet-disjoint from all others, and add them to $N$. Update $L$ and $G$ accordingly.

We now define the pseudo-randomness conditions we'll need.

Definition 2.7. Let $S \subseteq \mathbb{F}^{k-1}$ and let $C > 0$. $S$ is $C$-affine-bounded if for every affine space $A \subseteq \mathbb{F}^{k-1}$ of dimension at least 1, $|A \cap S| \leq C |A| \frac{|S|}{|\mathbb{F}|^{k-1}}$. If $G \in \mathcal{H}_{k-1}(n)$ and $\tau : [n] \to \mathbb{F}$, we say that $(G, \tau)$ is $C$-affine-bounded if $\tau(G)$ is $C$-affine-bounded. If $\tau$ and $C$ are implicit, we simply say that $G$ is affine-bounded.

Definition 2.8. Let $G \in \mathcal{H}_{k-1}(n)$, $c > 0$, and $h \in \mathbb{N}$. $G$ is $(c, h)$-typical if for any collection of $\ell \leq h$ distinct $S_1, \ldots, S_\ell \in \binom{[n]}{k-2}$, $\left| \cap_{i \in [\ell]} G(S_i) \right| = (1 \pm c)\, d(G)^\ell\, n$.
Roughly speaking, a hypergraph is affine-bounded if its intersection with every non-trivial affine space (under an appropriate map of the vertices) is bounded above by what we'd expect in a random hypergraph of the same density. Similarly, typicality means that all the degrees, codegrees, etc., are what we'd expect in a random hypergraph.

The next lemma establishes that affine-boundedness carries over, in a certain sense, to $K_k$-decompositions.
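Typicality (Definition 2.8) can be checked naively on small instances; an illustrative sketch (function names are ours; the search is exponential in $h$, so this is only for toy sizes):

```python
import itertools
from math import comb

def neighborhoods(G, S):
    """G(S) = { f subset of [n]\\S : S u f in G } for a (k-1)-uniform G of frozensets."""
    S = frozenset(S)
    return {e - S for e in G if S <= e}

def is_typical(G, n, k, c, h):
    """Check (c, h)-typicality: for every l <= h distinct (k-2)-sets S_1, ..., S_l,
    |G(S_1) & ... & G(S_l)| = (1 +- c) d(G)^l n, where d(G) = |G| / C(n, k-1)."""
    d = len(G) / comb(n, k - 1)
    Ss_all = list(itertools.combinations(range(1, n + 1), k - 2))
    for l in range(1, h + 1):
        for Ss in itertools.combinations(Ss_all, l):
            common = neighborhoods(G, Ss[0])
            for S in Ss[1:]:
                common &= neighborhoods(G, S)
            target = d ** l * n
            if abs(len(common) - target) > c * target:
                return False
    return True
```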
Lemma 2.9. There exists some $D = D(k) > 0$ s.t. the following holds: Let $G \in \mathcal{H}_{k-1}(n)$, $\tau : [n] \hookrightarrow \mathbb{F}^*$ be injective, and $C > 0$. Let $S \subseteq K_k(G)$ be a $K_k$-decomposition of $G$. If $(G, \tau)$ is $C$-affine-bounded then for any $f \in \mathbb{F}^{k-1}$ and $F \in \mathcal{F}_2$, $\left| \tau(S) \cap F_1^{-1}(f) \right| \leq D C \frac{|S|}{n^k} \left| F_1^{-1}(f) \right|$.
Proof. If $f \notin \mathrm{Im}\, F_1$ then $F_1^{-1}(f) = \emptyset$ and the conclusion holds trivially. Otherwise $\left| F_1^{-1}(f) \right| = |\ker F_1|$. Furthermore, $F_1^{-1}(f) = x_0 + \ker F_1$ for some $x_0 \in \mathbb{F}^k$.

By Proposition 2.3 item (a), there exist distinct $i, j \in [k]$ s.t. $e_i, e_j \in \ker F_1$. Observe that $e_i \in \ker P_i$. Therefore $\dim P_i (\ker F_1) \leq \dim \ker F_1 - 1$. Additionally, $P_i e_j \neq 0$, so $\dim P_i (\ker F_1) \geq 1$. Hence:

$1 \leq \dim P_i \left( F_1^{-1}(f) \right) = \dim P_i (\ker F_1) \leq \dim \ker F_1 - 1$

Because $S$ is a $K_k$-decomposition of $G$, $P_i$ induces an injection from $\tau(S)$ into $\tau(G)$. Therefore:

$\left| \tau(S) \cap F_1^{-1}(f) \right| = \left| P_i \left( \tau(S) \cap F_1^{-1}(f) \right) \right| \leq \left| \tau(G) \cap P_i F_1^{-1}(f) \right|$

Applying affine-boundedness and the observations above:
$\left| \tau(G) \cap P_i F_1^{-1}(f) \right| = O\left( C \left| P_i F_1^{-1}(f) \right| \frac{|G|}{n^{k-1}} \right) = O\left( C n^{\dim \ker F_1 - 1} \frac{|G|}{n^{k-1}} \right)$

Finally, since $|S| = \Theta(|G|)$, we have:

$\left| \tau(G) \cap P_i F_1^{-1}(f) \right| = O\left( C \frac{|S|}{n^k} \left| F_1^{-1}(f) \right| \right)$

We may take $D$ to be the implicit constant in the final term. $\square$
Proposition 2.10. Fix $h, \ell \in \mathbb{N}$, and $a, c_0 > 0$. There exist $C, \delta > 0$ s.t. w.v.h.p. (over the choice of $H$ and $\pi$) there exists some $N \subseteq H$ facet-disjoint from $T$ s.t. $L := K_{k-1}^n \setminus K_{k-1}(N \cup T)$ satisfies:

(a) $L$ is $C$-affine-bounded.
(b) $L$ is $(c, h)$-typical, with $c < c_0 d(L)^\ell$.
(c) $n^{-\delta} \geq d(L) \geq n^{-a}$.
(d) $L$ is $k$-divisible.

Both the random greedy packing algorithm and the Nibble yield $N$ and $L$ with the desired properties w.v.h.p. (over the internal randomness of the algorithms as well as $H$ and $\pi$). Both algorithms have been extensively analyzed (for example, see [14] or [1], Chapter 4.7, for the Nibble, and [17] Section 7.2 for the greedy packing algorithm) and it is straightforward to adapt these analyses to prove Proposition 2.10. For completeness' sake we analyze the Nibble in Appendix A.

We proceed conditioning on the existence of $N$, $L$, and $\delta$ as in Proposition 2.10.
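For concreteness, the random greedy packing variant from Section 2.2 (ignoring the template and the precise stopping density) can be sketched as follows (names are ours):

```python
import itertools
import random
from math import comb

def greedy_pack(H, n, k, target_density, seed=None):
    """Random greedy packing: repeatedly pick a uniformly random edge of H that is
    facet-disjoint from all edges chosen so far, stopping once the density of
    uncovered facets drops below target_density or no admissible edge remains."""
    rng = random.Random(seed)
    covered = set()                      # facets covered by N
    N = set()
    total = comb(n, k - 1)
    while 1 - len(covered) / total >= target_density:
        admissible = [e for e in H
                      if all(f not in covered
                             for f in itertools.combinations(sorted(e), k - 1))]
        if not admissible:
            break
        e = rng.choice(admissible)
        N.add(e)
        covered.update(itertools.combinations(sorted(e), k - 1))
    return N
```

The analysis in the paper requires much more (facet-disjointness from $T$ and the pseudo-randomness of the leftover $L$); this sketch only illustrates the basic greedy step.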
2.3. Applying Keevash's Theorem. We recall Keevash's existence theorem ([9], Theorem 6.2) as it applies to $(n, k, k-1)$-Steiner systems:
Theorem 2.11. For every $k \geq 2$ there exist $c_0, a \in (0, 1)$ and $h, \ell, n_0 \in \mathbb{N}$ s.t. if $n \geq n_0$ and $G \in H_{k-1}(n)$ is $k$-divisible, $(c, h)$-typical, $d(G) > n^{-a}$, and $c < c_0 d(G)^{\ell}$, then $K_k(G)$ has a $K_k$-decomposition.
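To illustrate the divisibility hypothesis, consider $k = 3$, where $G \in H_2(n)$ is a graph and a $K_3$-decomposition is a partition of $E(G)$ into triangles. In this case $3$-divisibility amounts to the classical necessary conditions:

```latex
2 \mid \deg_G(v) \ \text{ for every vertex } v,
\qquad\text{and}\qquad
3 \mid |E(G)|,
```

since the triangles through a fixed vertex $v$ partition the edges at $v$ into pairs, and every triangle uses exactly three edges.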
By Proposition 2.10, $L$ satisfies the conditions in Theorem 2.11, and hence it has a $K_k$-decomposition $S \subseteq K_k(L)$.
2.4. Absorbing $S$. Note that $T \cup N \cup S$ is a $K_k$-decomposition of $K^n_{k-1}$. Typically, however, $S \not\subseteq H$. To remedy this, we'll replace the edges in $S$ with appropriate absorbers, as follows:
Algorithm 2.12.
• Order the edges in $S$ arbitrarily: $s_1, s_2, \ldots, s_{|S|}$.
• For every $1 \leq i \leq |S|$, choose $A_i$ uniformly at random from the absorbers in $\mathcal{A}_{s_i}$ that are facet-disjoint from $A_1, A_2, \ldots, A_{i-1}$ as well as $S \setminus \{s_i\}$. If there are no such absorbers, abort.
If the algorithm doesn't abort then
$$\Bigl(\left(N \cup T\right) \setminus \bigcup_{i=1}^{|S|} A_i^{\mathrm{alg}}\Bigr) \cup \bigcup_{i=1}^{|S|} A_i^{\mathrm{non\text{-}alg}} \subseteq H$$
is a $K_k$-decomposition of $K^n_{k-1}$. It remains to prove that w.v.h.p. Algorithm 2.12 doesn't abort. To get some feel for why this is true, consider the last step of the algorithm, in which we choose an absorber for $s_{|S|}$. At this point $|S| - 1 = O\left(n^{k-1-\delta}\right)$ absorbers have been selected, each covering $O(1)$ facets. We adopt the heuristic that the set of covered facets is a random subset of $K^n_{k-1}$ of size $O\left(n^{k-1-\delta}\right)$. This will be justified by affine-boundedness of $L$ together with Lemma 2.9. Now, consider how many absorbers for $s_{|S|}$ a facet $f$ excludes if it is covered by a different absorber. If, for example, $f \in F_1 s_{|S|} + \operatorname{Im} F_2$ for some $F \in \mathcal{F}_2$, then it may exclude as many as $|\ker F_2|$ absorbers. Summing over all elements of $F_1 s_{|S|} + \operatorname{Im} F_2$, the expected number of absorbers excluded in this way is $O\left(\frac{n^{k-1-\delta}}{n^{k-1}}\left|\operatorname{Im} F_2\right|\left|\ker F_2\right|\right) = O\left(n^{k-\delta}\right)$ (here $\left|\operatorname{Im} F_2\right|\left|\ker F_2\right| = \left|\mathbb{F}^k\right| = \Theta\left(n^k\right)$ by rank–nullity). The last term is independent of $F$, so we conclude that the expected number of excluded absorbers is $O\left(n^{k-\delta}\right)$. Provided $\varepsilon$ is small enough, this is much less than $\left|\mathcal{A}_{s_{|S|}}\right|$.
We'll need an estimate for the number $O_f$ of absorbers for the elements of $S$ that cover a given facet $f \in K^n_{k-1} \setminus L$.
Lemma 2.13. Let $f \in K^n_{k-1} \setminus L$ and let $O_f = \sum_{s \in S} \left|\left\{A \in \mathcal{A}_s : f \in K_{k-1}(A)\right\}\right|$. Then $O_f = O\left(n^{k-\delta}\right)$.
Proof. By definition:
$$O_f \leq \sum_{s \in S} \sum_{F \in \mathcal{F}} \left|\left\{a \in \mathbb{F}^k : F_1 s + F_2 a = f\right\}\right|$$
If $F \in \mathcal{F}_1$ then, by the definition of $\mathcal{F}_1$, for all $s \in S$, $F_1 s + F_2 a = F_1 s \in \pi(L)$. But by assumption $f \notin L$, hence $F_1 s + F_2 a \neq f$ and this contributes nothing to the sum. Changing the order of summation we're left with:
$$O_f \leq \sum_{F \in \mathcal{F}_2} \sum_{s \in S} \left|\left\{a \in \mathbb{F}^k : F_1 s + F_2 a = f\right\}\right|$$
Rearranging:
$$O_f \leq \sum_{F \in \mathcal{F}_2} \sum_{a \in \mathbb{F}^k} \left|\left\{s \in S : F_1 s + F_2 a = f\right\}\right|$$
Now, for any $F \in \mathcal{F}_2$ and $a \in \mathbb{F}^k$:
$$\left\{s \in S : F_1 s + F_2 a = f\right\} \subseteq \pi(S) \cap F_1^{-1}(f + F_2 a)$$
Since $\pi(L)$ is affine-bounded and $S$ is a $K_k$-decomposition of $L$ we have, by Lemma 2.9:
$$\left|\pi(S) \cap F_1^{-1}(f + F_2 a)\right| = O\left(\frac{|L|}{n^k}\left|F_1^{-1}(f + F_2 a)\right|\right)$$
Thus:
$$O_f = O\left(\frac{|L|}{n^k} \sum_{F \in \mathcal{F}_2}\ \sum_{a \in \mathbb{F}^k :\, F_1^{-1}(f + F_2 a) \neq \emptyset} \left|F_1^{-1}(f + F_2 a)\right|\right)$$
By Proposition 2.3 item (c), $\left\{a \in \mathbb{F}^k : F_1^{-1}(f + F_2 a) \neq \emptyset\right\}$ is an affine space of dimension $\operatorname{rk} F_1 + 1$. For such $a$, $\left|F_1^{-1}(f + F_2 a)\right| = |\ker F_1| = O\left(n^{k - \operatorname{rk} F_1}\right)$. Therefore:
$$O_f = O\left(\frac{|L|}{n^k} \sum_{F \in \mathcal{F}_2} n^{\operatorname{rk} F_1 + 1}\, n^{k - \operatorname{rk} F_1}\right) = O(n|L|) = O\left(n^{k-\delta}\right). \qquad \square$$
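The arithmetic in the final display is worth unpacking (a sketch; we assume, as the uniform bound over $F$ suggests, that the family $\mathcal{F}_2$ has size bounded independently of $n$):

```latex
\frac{|L|}{n^{k}} \sum_{F \in \mathcal{F}_2} n^{\operatorname{rk} F_1 + 1}\, n^{k - \operatorname{rk} F_1}
  \;=\; \frac{|L|}{n^{k}}\,|\mathcal{F}_2|\, n^{k+1}
  \;=\; O\!\left(n\,|L|\right)
  \;=\; O\!\left(n^{k-\delta}\right),
```

where the last equality uses $|L| \leq n^{-\delta}\binom{n}{k-1} \leq n^{k-1-\delta}$, by Proposition 2.10 item (c).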
Proposition 2.14. W.v.h.p. (over the internal randomness of the algorithm) Algorithm 2.12 doesn't abort.
Proof. Write $\mathcal{A}_i := \mathcal{A}_{s_i}$. Recall that by Proposition 2.5, $|\mathcal{A}_i| = \Omega\left(n^{k - M\varepsilon}\right)$. Given $A_1, A_2, \ldots, A_{i-1}$ we'll call the absorbers in $\mathcal{A}_i$ that are facet-disjoint from $A_1, A_2, \ldots, A_{i-1}$ and $S \setminus \{s_i\}$ permissible and the remaining absorbers excluded. We'll show that w.v.h.p. at every step of the algorithm there are at least $\frac{1}{2}|\mathcal{A}_i| = \Omega\left(n^{k - M\varepsilon}\right)$ permissible absorbers. In particular, w.v.h.p. the algorithm doesn't abort.
We first show that for every $1 \leq i \leq |S|$ all but $o(|\mathcal{A}_i|)$ absorbers in $\mathcal{A}_i$ are facet-disjoint from $S \setminus \{s_i\}$. Let $B_i$ be the number of absorbers for $s_i$ that intersect $S \setminus \{s_i\}$ on a facet, i.e.:
$$B_i = \left|\left\{A \in \mathcal{A}_i : K_{k-1}(A) \cap K_{k-1}(S \setminus \{s_i\}) \neq \emptyset\right\}\right|$$
We have:
$$B_i \leq \sum_{F \in \mathcal{F}} \left|\left\{a \in \mathbb{F}^k : F\binom{s_i}{a} \in K_{k-1}(S \setminus \{s_i\})\right\}\right| \leq \sum_{F \in \mathcal{F}} |\ker F_2| \left|(F_1 s_i + \operatorname{Im} F_2) \cap \pi(K_{k-1}(S \setminus \{s_i\}))\right|$$
Now, for every $F \in \mathcal{F}_1$, $F_1 s_i + \operatorname{Im} F_2 = \{F_1 s_i\} \subseteq \pi(K_{k-1}(s_i))$ and, since $S$ is a collection of facet-disjoint edges, this is disjoint from $K_{k-1}(S \setminus \{s_i\})$. So we're left with:
$$B_i \leq \sum_{F \in \mathcal{F}_2} |\ker F_2| \left|(F_1 s_i + \operatorname{Im} F_2) \cap \pi(K_{k-1}(S \setminus \{s_i\}))\right| \leq \sum_{F \in \mathcal{F}_2} |\ker F_2| \left|(F_1 s_i + \operatorname{Im} F_2) \cap \pi(L)\right|$$
By Proposition 2.3 item (b), $F_1 s_i + \operatorname{Im} F_2$ is an affine space of dimension at least 1. Since $\pi(L)$ is affine-bounded, we have:
$$B_i = O\left(\sum_{F \in \mathcal{F}_2} |\ker F_2|\,|\operatorname{Im} F_2|\,\frac{|L|}{n^{k-1}}\right) = O\left(n^{k-\delta}\right) = o(|\mathcal{A}_i|)$$
The last equality holds provided $\varepsilon < \frac{\delta}{M}$.
Next we demonstrate that w.v.h.p. the choices of $A_1, A_2, \ldots, A_{i-1}$ leave many permissible absorbers for $s_i$. We'll define the stopping time $\tau$ as the smallest $i$ s.t. there are fewer than $\frac{|\mathcal{A}_i|}{2}$ permissible absorbers in $\mathcal{A}_i$, and $\tau = \infty$ if there is no such $i$. It's enough to show that for each $i$:
(3) $\Pr[\tau = i \mid \tau \geq i] = n^{-\omega(1)}$
Fix $1 \leq i \leq |S|$ and assume, inductively, that Equation 3 holds for $j < i$. We'd like to bound (w.v.h.p.) the number of absorbers in $\mathcal{A}_i$ excluded by the choices of $A_1, A_2, \ldots, A_{i-1}$. For each $F \in \mathcal{F}$, let $E_F$ be the number of elements $a \in \mathbb{F}^k$ s.t. $f := F_1(s_i) + F_2(a) \notin \pi(L)$ and $f$ has been covered by one of $A_1, \ldots, A_{i-1}$. The number of excluded absorbers is bounded above by $B_i + \sum_{F \in \mathcal{F}} E_F$. We'll show that w.v.h.p. $E_F = o\left(\left|\mathcal{A}_{s_i}\right|\right)$ for every $F \in \mathcal{F}$.
First, if $F \in \mathcal{F}_1$, then for every $a \in \mathbb{F}^k$, $F\binom{s_i}{a} = F_1 s_i \in \pi(L)$, so by definition $E_F = 0$. Otherwise $F \in \mathcal{F}_2$. We'll bound $E_F$ by an application of Lemma 1.8. The filtration is the successive choices of $A_1, \ldots, A_{i-1}$, and we bear in mind that we've conditioned on $\tau \geq i$.
For $f \in \mathbb{F}^{k-1} \setminus \pi(L)$ and $j < i$, let $X_f^j$ be the indicator of the event that $f$ is covered by $A_j$. Then:
$$E_F \leq |\ker F_2| \sum_{f \in F_1 s_i + \operatorname{Im} F_2} \sum_{j < i} X_f^j$$
In the notation of Lemma 1.8:
$$\mu_j := \max_{A_1, A_2, \ldots, A_{j-1}} \mathbb{E}\left[E_F \mid A_1, A_2, \ldots, A_{j-1}, \tau \geq i\right] \leq \max_{A_1, A_2, \ldots, A_{j-1}} \frac{1}{\Pr[\tau \geq i]}\,\mathbb{E}\left[E_F \mid A_1, \ldots, A_{j-1}, \tau > j\right]$$
where the maxima are taken over choices of $A_1, \ldots, A_{j-1}$ consistent with the conditioning; by the inductive hypothesis this is a non-empty set. Also by the inductive hypothesis, $\Pr[\tau \geq i] = 1 - o(1)$. Furthermore, $\tau > j$ implies that there are at least $\frac{1}{2}|\mathcal{A}_j| = \Omega\left(n^{k - M\varepsilon}\right)$ permissible absorbers for $s_j$. Therefore:
$$\mu_j = O\left(\frac{|\ker F_2|}{n^{k - M\varepsilon}} \sum_{f \in F_1 s_i + \operatorname{Im} F_2} \left|\left\{A \in \mathcal{A}_j : f \in K_{k-1}(A)\right\}\right|\right)$$
Hence:
$$\mu := \sum_{j} \mu_j = O\left(\frac{|\ker F_2|}{n^{k - M\varepsilon}} \sum_{f \in F_1 s_i + \operatorname{Im} F_2} \sum_{j} \left|\left\{A \in \mathcal{A}_j : f \in K_{k-1}(A)\right\}\right|\right)$$