arXiv:1704.02767v1 [cs.DS] 10 Apr 2017

Deterministic Distributed Edge-Coloring via Hypergraph Maximal Matching

Manuela Fischer, ETH Zurich, manuela.fi[email protected]
Mohsen Ghaffari, ETH Zurich, ghaff[email protected]
Fabian Kuhn, University of Freiburg, [email protected]

Abstract

We present a deterministic distributed algorithm that computes a (2∆ − 1)-edge-coloring, or even list-edge-coloring, in any n-node graph with maximum degree ∆, in O(log^7 ∆ · log n) rounds. This answers one of the long-standing open questions of distributed graph algorithms from the late 1980s, which asked for a polylogarithmic-time deterministic algorithm. See, e.g., Open Problem 4 in the Distributed Graph Coloring book of Barenboim and Elkin. The previous best round complexities were 2^{O(√(log n))} by Panconesi and Srinivasan [STOC'92] and (Õ(√∆) + O(log* n)) by Fraigniaud, Heinrich, and Kosowski [FOCS'16]. A corollary of our deterministic list-edge-coloring also improves the randomized complexity of (2∆ − 1)-edge-coloring to poly(log log n) rounds.

The key technical ingredient is a deterministic distributed algorithm for hypergraph maximal matching, which we believe will be of interest beyond this result. In any hypergraph of rank r — where each hyperedge has at most r vertices — with n nodes and maximum degree ∆, this algorithm computes a maximal matching in O(r^5 log^{6+log r} ∆ · log n) rounds.

This hypergraph matching algorithm and its extensions also lead to a number of other results. In particular, we obtain a polylogarithmic-time deterministic distributed maximal independent set (MIS) algorithm for graphs with bounded neighborhood independence, hence answering Open Problem 5 of Barenboim and Elkin's book, a (log ∆/ε)^{O(log 1/ε)}-round deterministic distributed algorithm for a (1 + ε)-approximation of maximum matching, and a quasi-polylogarithmic-time deterministic distributed algorithm for orienting λ-arboricity graphs with out-degree at most ⌈(1 + ε)λ⌉, for any constant ε > 0, hence partially answering Open Problem 10 of Barenboim and Elkin's book.

1 Introduction and Related Work

Distributed graph algorithms have been studied extensively over the past 30 years, since the seminal work of Linial [Lin87]. Despite this, determining whether there are efficient deterministic distributed algorithms for the most classic problems of the area remains a long-standing open question.

Distributed graph algorithms are typically studied in a standard synchronous message passing model known as the LOCAL model [Lin87, Pel00]: the network is abstracted as an undirected graph G = (V, E), n = |V|, with maximum degree ∆, and where each node has a Θ(log n)-bit unique identifier. Initially, nodes only know their neighbors in G. At the end, each node should know its own part of the solution, e.g., the colors of its edges in edge-coloring.
Communication happens in synchronous rounds, where in each round each node sends a message to each of its neighbors.1 The main complexity measure is the number of rounds needed for solving a given graph problem. The four classic local distributed graph problems are maximal independent set (MIS), (∆ + 1)- vertex-coloring, (2∆ 1)-edge-coloring, and maximal matching [PR01, BE13]. All of these problems − have trivial greedy sequential algorithms, as well as simple O(log n)-round randomized distributed algorithms [Lub86,ABI86], and even some faster ones [BEPS12,EPS15,Gha16,HSS16]. But the deter- ministic distributed complexity of these problems remains widely open, despite extensive interest (see, e.g., the first five open problems of [BE13]). Particularly, with regards to MIS — which is the hardest of the four problems, as the other three can be reduced to MIS locally [Lin87] — Linial [Lin92] asked

“can it [MIS] always be found [deterministically] in polylogarithmic time?”. This remains the most well-known open question of the area. The best known round complexity is 2O(√log n), due to Panconesi and Srinivasan [PS92]. Panconesi and Rizzi pointed out in the year 2000 [PR01] that “while maximal matchings can be computed in polylogarithmic time, in n, in the distributed model [HKP98], it is a decade old open problem whether the same running time is achievable for the remaining 3 structures.” The status remains the same as of today, after almost two more decades. In particular, for edge-coloring which is our main target, Barenboim and Elkin stated the following problem in their recent Distributed Graph Coloring book [BE13]:

Open Problem 11.4 [BE13] Devise or rule out a deterministic (2∆ − 1)-edge-coloring algorithm that runs in polylogarithmic time.

1.1 Our Contributions

Improved Deterministic Edge-Coloring Algorithm: One of our main end results is a positive resolution of the above question.

Theorem 1.1. There is a deterministic distributed algorithm that computes a (2∆ − 1)-edge-coloring in O(log^7 ∆ · log n) rounds, in any n-node graph with maximum degree ∆. Moreover, the same algorithm solves list-edge-coloring, where each edge e ∈ E must get a color from an arbitrary given list L_e of colors with |L_e| = d_e + 1, where d_e denotes the number of edges incident to e.

For list-edge-coloring, the previously best known round complexity was 2^{O(√(log n))}, by a classic network decomposition of Panconesi and Srinivasan [PS92], which itself improved on a 2^{O(√(log n · log log n))}-round algorithm of Awerbuch et al. [ALGP89]. For low-degree graphs, the best known is an (Õ(√∆) + O(log* n))-round algorithm of Fraigniaud, Heinrich, and Kosowski [FHK16], which is more general and applies also to (∆ + 1)-list-vertex-coloring. There are some known poly log n-round deterministic edge-coloring algorithms, but all have two major shortcomings: (A) they require more colors, and (B) they are quite restricted and do not work for list-coloring. These algorithms are as follows:

1In the LOCAL model, messages might be of arbitrary size. A variant of the model where the messages have to be of bounded size is known as the CONGEST model [Pel00]. Our edge-coloring algorithms and our MIS and vertex-coloring algorithms for graphs of bounded neighborhood independence in fact work with small O(log n)-bit size messages. Though, for the sake of the readability, we avoid explicitly discussing the details of this aspect.

(I) a ((2 + o(1))∆)-edge-coloring by Ghaffari and Su [GS17]; (II) a ∆ · 2^{O(log ∆ / log log ∆)}-edge-coloring by Barenboim and Elkin [BE11] — see also [GS17, Appendix B] for a short proof; and (III) an O(∆ log n)-edge-coloring by Czygrinow et al. [CHK01]. See also [BE11, Chapter 8].

We also note that a recent work of Barenboim, Elkin, and Maimon [BEM16] presents an efficient deterministic algorithm for computing a (∆ + o(∆))-edge-coloring in graphs with arboricity a ≤ ∆^{1−δ}, for some constant δ > 0. In Corollary 5.1, we sketch how a simple combination of the list-edge-coloring algorithm of Theorem 1.1 with H-partitionings [BE13, Chapter 5.1] significantly extends their result.

Improved Randomized Edge-Coloring Algorithm: The deterministic list-edge-coloring algorithm of Theorem 1.1, in combination with some randomized edge-coloring algorithms of [EPS15, BEPS12], also improves the complexity of randomized algorithms for (2∆ − 1)-edge-coloring, making it the first among the four classic problems whose randomized complexity falls down to poly(log log n).

Corollary 1.2. There is a randomized distributed algorithm that computes a (2∆ − 1)-edge-coloring in O(log^8 log n) rounds, with high probability, in any n-node graph with maximum degree ∆.

The previous (worst-case) complexity for randomized (2∆ − 1)-edge-coloring was 2^{O(√(log log n))} rounds^2, due to Elkin, Pettie, and Su [EPS15]. By improving this, Corollary 1.2 widens the provable gap between the complexity of (2∆ − 1)-edge-coloring, which is now in poly(log log n) rounds, and the complexity of maximal matching, which needs Ω(√(log n / log log n)) rounds [KMW16].

Unified Formulation as Hypergraph Maximal Matching: Our first step towards proving Theorem 1.1 is a simple unification of all the aforementioned four classic problems: MIS, (∆ + 1)-vertex-coloring, (2∆ − 1)-edge-coloring, and maximal matching. We can cast each of these problems as a maximal matching problem on hypergraphs^3 of some rank r, which depends on the problem, and increases as we move from maximal matching to maximal independent set. In other words, the hypergraph maximal matching problem can be used to obtain a smooth interpolation between maximal matching in graphs and maximal independent set in graphs. Recall that the rank of a hypergraph is the maximum number of vertices in any of its hyperedges. Moreover, a matching in a hypergraph is a set of hyperedges, no two of which share an endpoint.

We present a reduction from (2∆ − 1)-edge-coloring to maximal matching in rank-3 hypergraphs, as we sketch next in Lemma 1.3. A similar reduction can be used for list-edge-coloring, as formalized in Lemma 2.13. We note that these reductions are inspired by the well-known reduction of Luby from (∆ + 1)-vertex-coloring to MIS [Lub86, Lin87].

Lemma 1.3. Given a deterministic distributed algorithm A that computes a maximal matching in N-vertex hypergraphs of rank 3 and maximum degree d in T(N, d) rounds, there is a deterministic distributed algorithm B that computes a (2∆ − 1)-edge-coloring of any n-node graph G = (V, E) with maximum degree ∆ in at most T(3n∆, 2∆ − 1) rounds.

Proof Sketch. To edge-color G = (V, E), we generate a hypergraph H: Take 2∆ − 1 copies of G. For each edge e ∈ E, let e_1 to e_{2∆−1} be its copies. For each e ∈ E, add one extra vertex w_e to H and then change all copy edges e_1 to e_{2∆−1} into 3-hyperedges by adding w_e to them. Algorithm B runs the maximal matching algorithm A on H, and then, for each e ∈ E, if the copy e_i of e is in the computed maximal matching, B colors e with color i. One can verify that each G-edge e must have exactly one copy e_i in the maximal matching, and thus we get a (2∆ − 1)-edge-coloring.
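To make the construction concrete, the following is a minimal Python sketch of this reduction, written as a centralized simulation (the actual algorithm runs distributively in the LOCAL model). The function names and the dictionary-based hypergraph representation are ours and introduced only for illustration; they are not part of the paper's formal statement.

```python
# Sketch of the reduction in Lemma 1.3 (centralized simulation, for illustration only).
# `edges` is a list of pairs (u, v) over hashable vertex ids; `Delta` is the maximum degree of G.
def build_coloring_hypergraph(edges, Delta):
    """Return rank-3 hyperedges: copy i of edge e = (u, v) becomes {(u, i), (v, i), ('w', e)}."""
    hyperedges = {}
    for e in edges:
        u, v = e
        for i in range(1, 2 * Delta):             # colors 1, ..., 2*Delta - 1
            hyperedges[(e, i)] = frozenset({(u, i), (v, i), ('w', e)})
    return hyperedges

def color_from_matching(edges, matching):
    """Given a maximal matching of H as a set of keys (e, i), color each G-edge e by the color i of its matched copy."""
    color = {}
    for (e, i) in matching:
        color[e] = i                               # each e has exactly one matched copy, by maximality
    assert set(color) == set(edges)                # every edge of G received a color
    return color
```

Note that two adjacent copies e_i and e'_i share the vertex (v, i), so they cannot both be matched, which is what makes the resulting coloring proper.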

2It is worth noting that the randomized algorithm of [EPS15], as well as its predecessors [PS97, DGP98], can also obtain better colorings, even as good as ((1 + ε)∆)-edge-coloring. Though this becomes slow in low-degree graphs. 3In the LOCAL model, when communicating on a hypergraph, per round each node v can send a message on each of its hyperedges, which then gets delivered to all the other endpoints of that hyperedge. The variant of the model with bounded-size messages can be specialized in a few different ways, see e.g. [KNPR14]. For our purposes, hypergraphs are mainly used for formulating the requirements of the problem, and the real communication happens on the base graph.

2 Besides edge-coloring that gets reduced to hypergraph maximal matching for rank r = 3, and graph maximal matching which trivially is the special case of r = 2, we can also formulate MIS and (∆ + 1)- vertex-coloring as maximal matching in hypergraphs. For instance, to translate MIS on a graph G to maximal matching on a hypergraph H, view each G-edge as one H-vertex and each G-node v as one H-edge on the H-vertices corresponding to the G-edges incident to v. However, unfortunately, in this naive formulation, the rank becomes ∆. As such, we do not obtain any improvement over the known algorithms for these problems, in the general case. It remains an intriguing open question whether any alternative formulation, perhaps in combination with other ideas, can help. However, as we shall discuss soon, using some more involved ideas, we obtain improvements for some special cases, which lead to answers for a few other open problems. Our Hypergraph Maximal Matching Algorithm: Our main technical contribution is an efficient deterministic algorithm for maximal matching in low-rank hypergraphs. In combination with the reduction of Lemma 1.3, this leads to our edge-coloring algorithm stated in Theorem 1.1. Theorem 1.4. There is a deterministic distributed algorithm that computes a maximal matching in O(r5 log6+log r ∆ log n) rounds4, in any n-node hypergraph with maximum degree ∆ and rank r, i.e., · where each hyperedge contains at most r vertices. This result has a number of other implications, as we overview in Section 1.2. Besides those, it also supplies an alternative poly log n-round deterministic algorithm for maximal matching in graphs, where r = 2. We remark that poly log n deterministic algorithms for graph maximal matching have been known for about two decades, due to the breakthroughs of Ha´n´ckowiak, Karonski, and Panconesi [HKP98, HKP99]. Moreover, a faster algorithm was recently presented in [FG17]. However, the methods of [HKP98, HKP99, FG17], or their natural extensions, inherently rely on rank r = 2 in a seemingly crucial manner, and they do not extend to hypergraphs of rank 3 or higher. The method we develop for Theorem 1.4 is quite different and significantly more flexible. We overview this method in Section 1.3, and contrast it with the previously known techniques. The difference and the generality of our method becomes more discernible when considering another closely related open problem which did not seem solvable using the methods of [HKP98, HKP99] and which can now be solved using a natural, though non-trivial, extension of Theorem 1.4, as we discuss next. Extension to MIS in Graphs with Bounded Neighborhood Independence: Consider the problem of computing an MIS in graphs with neighborhood independence bounded by an integer r, i.e., where the number of mutually non-adjacent neighbors of each node is at most r. Notice that maximal matching in graphs is the same as MIS in the corresponding line graph, which is a graph of neighborhood independence r = 2. It is not clear how to extend the methods of [HKP98, HKP99] to MIS in such graphs, even for r = 2. As an open question alluding to this point, and as “a good stepping stone towards the MIS problem in general graphs”, Barenboim and Elkin asked in their book [BE13]:

Open Problem 11.5 [BE13] Devise or rule out a deterministic polylogarithmic algorithm for the MIS problem in graphs with neighborhood independence bounded by 2. Our method for Theorem 1.4 generalizes to MIS in graphs with bounded neighborhood independence, as we state formally next, hence positively answering this open question. Theorem 1.5. There is a deterministic distributed algorithm that computes a maximal independent set in O(r5 log6+log r ∆ log n) rounds, in any n-node graph with maximum degree ∆ and neighborhood · independence bounded by r. Moreover, since Luby’s reduction of (∆ + 1)-vertex-coloring to MIS [Lub86, Lin87] increases the neighborhood independence by at most 1, we also get efficient algorithms for (∆ + 1)-vertex-coloring in graphs with bounded neighborhood independence. 4Throughout this paper, all logarithms are to base 2.

3 Corollary 1.6. There is a deterministic distributed algorithm that computes a (∆+1)-vertex-coloring in O(r5 log6+log(r+1) ∆ log n) rounds, in any n-node graph with maximum degree ∆ and neighborhood · independence bounded by r. Moreover, the same algorithm solves list-vertex-coloring, where each node v V must get a color from an arbitrary given list L of colors with L deg(v) + 1. ∈ v | v|≥ 1.2 Other Implications Our hypergraph maximal matching algorithm enables us to obtain answers and improvements for some other problems. A family of improvements comes for graph problems in which the main technical challenge is to find a maximal set of “disjoint” augmenting paths of short length ℓ. These problems can be phrased as maximal matching in hypergraphs with rank r = Θ(ℓ), essentially by viewing each augmenting path as one hyperedge on its elements (depending on the required disjointness). We next mention the results that we obtain based on this connection. Maximum Matching Approximation: By integrating our hypergraph maximal matching into the framework of Hopcroft and Karp [HK73], we can compute a (1 + ε)-approximation of maximum matching in graphs in (log ∆/ε)O(log 1/ε) rounds. For that, we mainly need to find maximal sets of vertex-disjoint augmenting paths of length at most ℓ = O(1/ε). This is faster than the previously best  known deterministic algorithm for (1 + ε)-approximation, which required logO(1/ε) n rounds [CH03]. We remark that an O(log n/ε3)-round randomized (1 + ε)-approximation algorithm was presented by Lotker et al. [LPSP08], mainly by computing this maximal set of vertex-disjoint augmenting paths using Luby’s randomized MIS algorithm [Lub86]. Low-Out-Degree Orientation and (Pseudo-)Forest Decomposition: By integrating our hy- pergraph maximal matching into the low-out-degree orientation framework of Ghaffari and Su [GS17], we can compute orientations with out-degree at most (1 + ε)λ , for any 0 <ε< 1, in graphs with arboricity λ. For that, we mainly need to find maximal⌈ sets of⌉ disjoint augmenting paths of length ℓ = O(log n/ε). This low-out-degree orientation directly implies a decomposition into (1 + ε)λ edge- disjoint pseudo-forests. For constant ε and even ε = Ω(1/ poly log n), the round complexity⌈ ⌉ of the 2 resulting algorithm is quasi-polylogarithmic— that is, 2O(log log n). Although this is not a polyloga- rithmic complexity, it gets close and it is almost exponentially faster than the previously best known 2O(√log n) deterministic algorithm [GS17, PS92]. This improvement can be viewed as partial solution for Open Problem 11.10 of Barenboim and Elkin [BE13], which asks for an efficient deterministic distributed algorithm for decomposing the graph into less than 2λ forests.

1.3 Our Method for Hypergraph Maximal Matching, in a Nutshell The main ingredient in our results is our hypergraph maximal matching algorithm. Here, we present a brief overview of this algorithm. Before the overview, we note that the key technical novelty in our hypergraph maximal matching algorithm is developing an efficient deterministic distributed rounding method, which transforms frac- tional hypergraph matchings to integral hypergraph matchings. This becomes more instructive when viewed in the context of the recent results of Ghaffari, Kuhn, and Maus [GKM17], which show that deterministically rounding fractional solutions of certain linear programs to integral solutions while approximately preserving some linear constraints is the “the only obstacle” for efficient deterministic distributed algorithms. In other words, if we find an efficient deterministic method for approximately rounding certain linear programs, we would get efficient algorithms for essentially all the classic local graph problems, including MIS. See [GKM17] for the precise statement. Our rounding for hypergraph matchings can be seen as a drastic generalization of the rounding methods we presented recently for matching in normal graphs [FG17]. The methods of [FG17] do not extend to hypergraphs (even for rank r = 3), for reasons that we will discuss soon. The new rounding method we present is more general and significantly more flexible. As such, we are hopeful that this

4 deterministic rounding method will prove useful for a wider range of problems, and may potentially serve as a stepping stone towards a poly log n-time deterministic MIS algorithm. We next present a high-level overview of our hypergraph maximal matching algorithm. Matching Approximation: The core of our maximal matching algorithm is an algorithm that computes a matching whose size is within an O(r3)-factor of the maximum matching. Once given such an approximation algorithm, we can easily find a maximal matching within O(r3 log n) iterations, by repeated applications of this approximation algorithm, each time adding the found matching to the output matching, and then removing the found matching and its incident edges from the hypergraph. Fractional Matchings: In approximating the maximum matching, the main challenge is finding an integral matching with such an approximation guarantee. Finding a fractional matching — where each edge e has a value x [0, 1] such that for each vertex v we have x 1 — with such e ∈ e E(v) e ≤ an approximation is trivial, and can be done in O(log ∆) rounds: initially, set∈ x = 1/∆ for all edges P e e. Then, for log ∆ iterations, each time double the values xe of all the edges e for which all vertices v e have 1/2. One can see that this produces a (2r)-approximation. The challenge thus ∈ v E(v) ≤ is in rounding ∈fractional matchings to integral matchings, without losing much in the size. P Known Rounding Methods for Graphs, and Their Shortcomings: In graphs with rank r = 2, this rounding can be done essentially with no loss. Indeed, this is the core part of the recent maximal matching algorithm of Fischer and Ghaffari [FG17], which finds a maximal matching in O(log2 ∆ log n) rounds. The method of [FG17] rounds any fractional matching in graphs in O(log ∆) iterations, in each iteration moving a 2-factor closer to integrality while decreasing the matching size only negligibly, by a (1 ε )-factor. Hence, even after all the rounding iterations, the overall loss is a negligible − Θ(log ∆) ε-factor, for a desirably small ε > 0. Although the algorithms of [HKP98, HKP99] are not explicitly phrased in this rounding framework, one can see that the principle behind them is the same. The reader familiar with [HKP98, HKP99] might recall that the key component is, roughly speaking, to decompose edges of any regular graph into two groups, say red and blue, so that almost all nodes see a fair split of their edges into the two colors. This is a special case of rounding for regular graphs. This whole methodology of rounding without more than a o(1)-factor loss in the size seems to be quite limited, and it certainly gets stuck at rank r = 2. For the interested reader, we briefly sketch the obstacle: all of those matching rounding methods [FG17, HKP98, HKP99] decompose the edges of the graph into bipartite low-diameter degree-2 graphs (i.e., short even-length cycles) — aside from a smaller portion of some not-so-nice parts, which are handled separately — and then 2-color edges of each short cycle so that each node has half of its edges in each color. Then, in rounding, one color is raised by a 2-factor while the other is dropped to zero. Unfortunately, this type of locally- balanced splittings of edges does not seem within reach for hypergraphs, as of now. Indeed, if we could solve that, we would get far more consequential results: Ghaffari et al. 
[GKM17] recently proved that this splitting problem for hypergraphs is ‘complete’, meaning that if one can do such a splitting in polylogarithmic time for all hypergraphs, we get polylogarithmic deterministic algorithms for all the classic local problems, including MIS. Challenges in Rounding for Hypergraphs: When trying to deterministically round fractional matchings in hypergraphs, we face essentially two challenges: (1) It is not clear how to efficiently perform any slight rounding— e.g., rounding all fractional values so that the minimum moves from at least 1/d to at least 2/d without violating the constraints — without a considerable loss in the matching size. (2) An even more crucial issue comes from the need to do many levels of rounding. Even once we have an efficient solution for a single iteration of rounding, which moves say a constant factor closer to integrality, a Θ(r)-factor reduction of the matching size seems inevitable. However, if we do this repeatedly, and our matching size drops by a Θ(r)-factor in each rounding iteration, the matching size would become too small. Notice that we need about O(log ∆) levels of 2-factor roundings. If we decrease by an Ω(r)-factor per iteration, we would be left with a matching of size

5 a factor 1/rΘ(log ∆) = 1/ poly(∆) of the maximum matching, which is essentially useless. We next mention a bird’s eye view of our rounding method. Our Rounding Method for Matchings in Hypergraphs: We devise a rounding procedure for hypergraph matchings which rounds the fractional matching by an L-factor— i.e., raising fractional values by an L-factor—while reducing the matching size only by a Θ(r)-factor. On a high level, this rounding is by recursion on L. The base level of the recursion is an algorithm that rounds the fractional matching by a constant factor, for L = O(1), with only a Θ(r)-factor decrease of the matching size. This part is somewhat simpler and is performed efficiently using defective coloring results of [Kuh09]. This is a solution for the first challenge above. To overcome the second challenge, our method interleaves some iterations of rounding with refilling the fractional matching. In particular, suppose that we would like to do an L-factor rounding of a given fractional matching ~x, thus producing an output fractional matching ~y with fractional values raised by an L-factor compared to ~x. We do this in Θ(r) iterations, using a number of √2L-factor rounding procedures. Concretely, per iteration, we first ‘remove’ the current output fractional matching ~y from the input fractional matching ~x, in a sense to be made precise, and then we apply two successive (√2L)-factor rounding operations on the left-over fractional matching. This creates a fractional matching which is rounded by a factor of 2L, but may be a (1/Θ(r2))-factor smaller than ~x. We add (a half of) this to the current fractional matching ~y, in a sense to be made precise. The removal and also the addition are done carefully, so as to ensure that the size of the output fractional matching grows by about a (1/Θ(r2))-factor of the size of ~x while the fractionality is by an L-factor better than the one of ~x. After Θ(r) such iterations, we get that the output fractional matching is a Θ(r)-approximation of the input. Extension to MIS in Graphs with Bounded Neighborhood Independence: When moving from matchings in hypergraphs to independent sets in graphs of neighborhood independence at most r, it is not directly clear how to define a fractional solution of an MIS in such graphs. Note that the integrality gap of the natural LP relaxation might be linear in ∆. However, any MIS is within an r-factor of a maximum independent set, and this can in fact be generalized to maximal fractional solutions of the following kind. We start by setting the fractional values of all nodes to 0 and then, we iteratively increment the value of some nodes. As long as right after incrementing the value of a node v the total value in the 1-neighborhood of v does not exceed 1, the total value of the resulting fractional solution is guaranteed to be within an r-factor of a maximum independent set. We call such a fractional solution a greedy packing and show that our rounding scheme for hypergraph matching can be adapted to greedy packings of graphs of bounded neighborhood independence. Integral greedy packings are exactly independent sets. Thus, integral greedy packings of the line graph of a hypergraph H correspond to matchings of H. However, we note that a fractional greedy packing of the line graph of H is not the same as a fractional matching of H. We believe that this stresses the robustness of our approach. 
For example, when running the MIS algorithm for graphs of bounded neighborhood independence on the line graph of a bounded rank hypergraph H, we get a slightly different but equally efficient algorithm for computing a maximal matching of H.

2 Maximal Matching and Edge-Coloring in Hypergraphs In this section, we present our hypergraph maximal matching algorithm, thus proving Theorem 1.4. Then, at the end Section 2.4, we use this hypergraph maximal matching algorithm to prove our edge- coloring results, including Theorem 1.1 and Corollary 1.2. For our hypergraph maximal matching algorithm, the key part is a matching approximation pro- cedure that finds a matching whose size is at least a (1/(32r3))-factor of the maximum matching.

Lemma 2.1. There is a deterministic distributed algorithm that computes a (32r^3)-approximate matching in O(r^2 log^{6+log r} ∆) rounds, given an O(r^2 ∆^2)-edge-coloring.

Once we have this approximation algorithm, we can find a maximal matching by iteratively applying this matching approximation procedure to the remainder hypergraph, for O(r^3 log n) iterations, each time removing the found matching and all its incident hyperedges. This is formalized in the proof of Theorem 1.4 in Section 2.3.

Over the next two subsections, we discuss the matching approximation procedure of Lemma 2.1. We note that finding a fractional matching with size close to the maximum matching is straightforward, as we soon overview in Section 2.1. The challenge is in finding an integral matching with the same guarantee. In other words, the core technical component of our method is an algorithm for rounding fractional hypergraph matchings to integral matchings, without losing much in the size. In particular, we present our deterministic rounding technique for hypergraph matchings in Section 2.2.

2.1 Fractional Matching Approximation

In the following, we present a simple O(log ∆)-round algorithm that computes a (2r)-approximate fractional matching.

Some Notions and Terminology for Fractional Matchings: Given a hypergraph H = (V, E), a fractional matching of H is an assignment of values ~x ∈ [0, 1]^{|E|} to edges such that for each vertex v ∈ V, we have ∑_{e ∈ E(v)} x_e ≤ 1. Here, E(v) := {e ∈ E : v ∈ e} is the set of edges incident to v. We say a vertex v is half-tight in the given fractional matching ~x if ∑_{e ∈ E(v)} x_e ≥ 1/2. Moreover, we say ~x is a (1/d)-fractional matching if each edge e ∈ E has x_e ≥ 1/d or x_e = 0.

Greedy Fractional Matching Algorithm: Initially, we set x_e = 1/∆ for all edges e. This obviously is a valid fractional matching. Then, for log ∆ iterations, in each iteration, we freeze all the edges that have at least one half-tight vertex and then raise the value of all unfrozen edges by a 2-factor. This way, we always keep a valid fractional matching, since only the values of edges incident to non-half-tight vertices are increased. Moreover, within O(log ∆) iterations all edges will be frozen. We next show that this property already implies an approximation ratio of 2r.
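Before the analysis, and for concreteness only, here is a minimal centralized simulation of this greedy scheme; the distributed version performs the same synchronized doublings in the LOCAL model. The function name and hypergraph representation are ours, used purely for illustration.

```python
import math

def greedy_fractional_matching(hyperedges, Delta):
    """Centralized sketch of the greedy scheme: hyperedges maps an edge id to its vertices.
    Returns values x_e >= 1/Delta such that every edge eventually gets frozen because
    one of its vertices became half-tight."""
    x = {e: 1.0 / Delta for e in hyperedges}
    frozen = set()
    for _ in range(int(math.log2(Delta)) + 1):
        load = {}                                  # load[v] = sum of x_e over edges containing v
        for e, verts in hyperedges.items():
            for v in verts:
                load[v] = load.get(v, 0.0) + x[e]
        for e, verts in hyperedges.items():        # freeze edges with a half-tight endpoint
            if any(load.get(v, 0.0) >= 0.5 for v in verts):
                frozen.add(e)
        for e in hyperedges:
            if e not in frozen:
                x[e] *= 2.0                        # raise unfrozen values by a 2-factor
    return x
```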

Lemma 2.2. The greedy algorithm described above computes a (2r)-approximate fractional matching. Moreover, any (fractional) matching ~x with the property that each edge has at least one half-tight endpoint is a (2r)-approximation.

Proof. We show that ~x must have size at least a (1/(2r))-factor of a maximum matching M ∗ employing an argument based on counting in two ways. To that end, we give 1 dollar to each edge e M and ∈ ∗ ask it to redistribute this money among edges in such a way that no edge e′ receives more than 2rxe′ dollars. This can be achieved as follows. Each edge e M asks a half-tight vertex, say v e, to ∈ ∗ ∈ distribute e’s dollar on e’s behalf. Vertex v does so by splitting this money among its incident edges e E(v) proportionally to the edge values x ′ . In this way, every edge e E(v) receives no more ′ ∈ e ′ ∈ than 2xe′ dollars from v. This is because v is half-tight and because it cannot have more than one incident edge in M ∗, hence does not receive more than 1 dollar. Since an edge can receive money only from its vertices, every edge e′ receives at most 2rxe′ dollars in total.

2.2 Rounding Fractional Matchings in Hypergraphs Our method for rounding fractional matchings is recursive, and parametrized mainly by a parameter L which captures the extent of the performed rounding. In simple words, given a fractional matching ~x [0, 1] E , the method round(~x, L) rounds ~x by an L-factor. That is, if in the input fractional ∈ | | matching ~x the smallest (non-zero) value is 1/d, then in the output fractional matching the smallest (non-zero) value is at least L/d. On the other hand, the guarantee is that the output fractional matching has size at least a (1/(4r))-factor of the input fractional matching. The functionality of this rounding method is abstracted by the following definition.

Definition 2.3 (L-factor rounding). Given a (1/d)-fractional matching ~x ∈ [0, 1]^{|E|} — i.e., where for each e ∈ E, we have x_e ≥ 1/d or x_e = 0 — the method round(~x, L) computes an (L/d)-fractional matching ~y ∈ [0, 1]^{|E|} such that ∑_{e ∈ E} y_e ≥ (1/(4r)) ∑_{e ∈ E} x_e.

Remark 2.4. The method requires some condition on the values of L and d. Since L/d refers to the fractionality, the statement is meaningful only when L ≤ d. Due to some small technicalities, we will perform the recursive parts of rounding only for values of L that satisfy a slightly stronger condition of L log^2 L ≤ d. For the remaining cases, we resort to our basic rounding.

We explain our rounding method in two main parts. The first part, explained in Section 2.2.1, is a procedure that we use as the base case, to round the matching by a constant factor L = O(1) in O(r^2 + log ∆) rounds. The second part, discussed in Section 2.2.2, is the recursive step which explains how our L-factor rounding works by making a few calls to √(2L)-factor rounding procedures, and a few smaller steps. Finally, in Section 2.3, we combine these rounding procedures with the previously seen algorithm of Section 2.1 for fractional matchings to obtain our matching approximation procedure of Lemma 2.1.

2.2.1 Basic Rounding

In this subsection, we explain our base case rounding procedure for small rounding parameters, i.e., L = O(1). Throughout, we will assume that the base hypergraph already has an O(r^2 ∆^2)-edge-coloring, which can be computed easily using Linial's algorithm [Lin87], in O(log* n) rounds.

Lemma 2.5 (Basic Rounding). There is an O(L^2 r^2 + log ∆)-round deterministic distributed algorithm that turns a (1/d)-fractional matching ~x into an (L/d)-fractional matching ~y with ∑_{e ∈ E} y_e ≥ (1/(2r)) ∑_{e ∈ E} x_e, for any L ≤ d.

Algorithm Outline and Intuitive Discussions

Let Ex be the set of all edges e for which xe > 0, and let Hx = (V, Ex) be the subgraph of H with this edge set. Notice that Hx has degree at most d, because ~x is a (1/d)-fractional matching. Our goal is to compute a fractional matching ~y, supported on the edge set E , such that for each edge e E , at x ∈ x ′ ′ least one of its endpoints v e is half-tight in ~y, meaning that e Ex(v) ye 1/2. One can easily see ∈ ∈ ≥ 1 that such a fractional matching is a (2r)-approximation of ~x, i.e., e E ye 2r e E xe. Thus, the P ∈ ≥ ∈ goal is to find a fractional matching ~y such that for each edge e Ex, at least one of its endpoints is ∈ P P half-tight in ~y. Furthermore, we want ~y to be (L/d)-fractional, meaning that all the non-zero ye-values must be greater than or equal to L/d. If we had no concern for the time complexity, we could go through the color classes of edges one by one, each time setting ye = 1 for all edges of that color, and then removing edges of Ex that have half-tight vertices. This would ensure that, at the end, all edges in Ex have at least one half-tight endpoint. However, this would require time proportional to the number of colors. Even if we were given an ideal edge-coloring for free, that would be Ω(d) rounds, which is too slow for us. To speed up the process, we use a relaxed notion of edge-coloring, namely defective edge-coloring, which allows us to have much less colors, while each color class has a bounded number of edges incident to each vertex, say k. Now, we cannot raise the ye-values of all the edges of the same color at the ′ ′ same time to ye = 1, because that would be too fast and could violate the condition e Ex(v) ye 1. ∈ ≤ However, we can raise each of these edge values to say y = 1 and still be sure that the summation e 2k P ′ ′ e Ex(v) ye for each node does not increase faster than an additive 1/2. That is because there are only∈k edges incident to each node, per color class. If we freeze and remove all edges that now have P one half-tight vertex, these fractional value raises would never violate the condition ′ y ′ 1, e Ex(v) e ≤ thus always lead to a valid fractional matching. ∈ P

8 The Basic Rounding Algorithm To materialize the above intuitive approach, we first compute a defective edge-coloring with O r2∆2 colors and defect k = d/(2L). Then, we go through the colors, one by one, applying the above  fractional-value increases. This ensures that all the non-zero fractional values y are at least 1 L/d. e 2k ≥ At the very end, we perform O(log d/L) doubling steps to ensure that each edge has at least one half- tight endpoint. We next explain the steps of this algorithm, and then provide the related analysis.

Part I, Defective Edge-Coloring Algorithm: We compute a defective edge-coloring of Hx with O(L2r2) colors and defect — that is, maximum degree induced by edges of the same color — at most d/(2L), as follows. Let F = (VF , EF ) be the line graph of Hx, that is, the graph which has a vertex ve VF for every edge e Ex and an edge ve, ve′ EF if e and e′ are incident, thus e e′ = . Note that∈ F has maximum degree∈ at most r d{, since }∈H ’s maximum degree is bounded by∩d.6 With∅ the · x defective coloring algorithm of Kuhn [Kuh09], we can compute a (d/(2L) 1)-defective vertex-coloring r d 2 2 2 5 2− 2 of F with O ( (d/(2L· ) 1) ) = O L r colors . Exploiting the given O r ∆ -edge-coloring of H, and − 2 2 thus Hx, which is an O r∆ -vertex-coloring  of the line graph F , we can make this algorithm run in O(log∗(r∆)) rounds. The vertex-coloring of the line graph with defect d/(2L) 1 is an edge-coloring  − of Hx where every edge has at most d/(2L) 1 incident edges of the same color, resulting in at most d/(2L) many edges of the same color incident− to each vertex.
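As an illustration of this step, the sketch below builds the conflict (line) graph F of H_x on which the defective coloring is computed. The defective coloring routine itself (the algorithm of [Kuh09]) is treated as an assumed black box, and all names here are ours.

```python
def line_graph(hyperedges):
    """Return the adjacency of the line graph F of H_x: one F-vertex per hyperedge,
    with e and f adjacent whenever they share at least one vertex of H_x."""
    by_vertex = {}
    for e, verts in hyperedges.items():
        for v in verts:
            by_vertex.setdefault(v, []).append(e)
    adj = {e: set() for e in hyperedges}
    for v, incident in by_vertex.items():          # each H_x-vertex induces a clique in F
        for e in incident:
            adj[e].update(f for f in incident if f != e)
    return adj

# A (d/(2L) - 1)-defective coloring of F with O(L^2 r^2) colors would then be computed
# distributively, e.g., via the algorithm of [Kuh09]; we do not reproduce that routine here.
```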

Part II, Fractional Matching Computation via Defective Coloring: We process the colors of the d/(2L)-defective coloring one by one, in O(L^2 r^2) iterations. In the i-th iteration, for each non-frozen edge e with color i, we raise y_e from y_e = 0 to y_e = L/d. Then, for each node v that is already half-tight, meaning that ∑_{e ∈ E_x(v)} y_e ≥ 1/2, we freeze all the edges incident to v. This means the fractional value of these edges will not be raised in the future. Notice that since we raise values only incident to nodes that are not already half-tight, and as for each such node the summation goes up by at most (d/(2L)) · (L/d) = 1/2, the vector ~y always remains a fractional matching, meaning that we always have ∑_{e ∈ E_x(v)} y_e ≤ 1 for each node v.

At the very end, once we are done with processing all colors, some edges in E_x may remain without any half-tight endpoint, though any such edge e would itself have y_e = L/d. We perform log(d/L) iterations of doubling, where in each iteration, we double the fractional values y_e of all edges that do not have a half-tight endpoint. At the end, we are ensured that each edge has at least one half-tight endpoint, and moreover, each non-zero fractional value y_e is at least L/d.
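The following centralized sketch mirrors Part II under the assumption that a defective edge-coloring `defective_color[e]` with defect at most d/(2L) is already given; the function signature and all names are ours and are only meant to make the description above concrete.

```python
import math

def basic_round(hyperedges, x, d, L, defective_color, num_colors):
    """Turn a (1/d)-fractional matching x into an (L/d)-fractional matching y in which
    every edge of E_x ends up with at least one half-tight endpoint (centralized sketch)."""
    E_x = [e for e in hyperedges if x[e] > 0]
    y = {e: 0.0 for e in E_x}
    load = {v: 0.0 for e in E_x for v in hyperedges[e]}
    frozen = set()

    def half_tight(v):
        return load[v] >= 0.5

    for c in range(num_colors):                      # process the color classes one by one
        for e in E_x:
            if e not in frozen and defective_color[e] == c:
                y[e] = L / d                         # raise y_e from 0 to L/d
                for v in hyperedges[e]:
                    load[v] += L / d
        for e in E_x:                                # freeze edges incident to half-tight nodes
            if any(half_tight(v) for v in hyperedges[e]):
                frozen.add(e)

    for _ in range(max(0, int(math.log2(d / L)))):   # final doubling steps
        to_double = [e for e in E_x
                     if y[e] > 0 and not any(half_tight(v) for v in hyperedges[e])]
        for e in to_double:
            for v in hyperedges[e]:
                load[v] += y[e]
            y[e] *= 2.0
    return y
```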

Lemma 2.6. The above algorithm computes an (L/d)-fractional matching ~y such that e E ye 1 2 2 2 2 ∈ ≥ 2r e E xe, in O(L r + log(d/L) + log∗(r∆)) = O(L r + log ∆) rounds. ∈ P Proof.P The round complexity of the algorithm comes from the O(log∗(r∆)) rounds spent for computing the defective edge-coloring, O(L2r2) rounds for processing the colors of the defective coloring one by one, and then O(log(d/L)) rounds for the final doubling steps. It is clear by construction that the computed vector ~y is a fractional matching, because we always

have e Ex(v) ye 1, and that it is (L/d)-fractional, because the smallest non-zero ye value that we ∈ ≤ use is L/d. What remains to be proved is that y 1 x . For that, we use the property P e E e ≥ 2r e E e that the fractional matching ~y that we compute∈ is such that for∈ each e E , at least one of the ∈ x vertices v e must be half-tight, meaning that P y 1P/2. We use this property to argue that ∈ e E(v) e ≥ the fractional matching ~y has size at least a (1/(2r))-factor∈ of ~x. This is done via a blaming argument P along the same lines as the proof of Lemma 2.2. We let every edge e E put x dollars on edges ∈ x e e E as follows. Each edge e passes its x dollars to one of its half-tight vertices v e. Then, the ′ ∈ y e ∈ 5If we happen to have d/(2L) ≤ 1, then (d/(2L)−1)-defective coloring becomes a degenerate case of the definition, as (d/(2L)−1) ≤ 0, and then by convention, this simply means proper coloring. In that case the algorithm of Kuhn [Kuh09] provides a proper coloring with O(d2r2)= O(L2r2) colors.

half-tight vertex v distributes these x_e dollars among all its incident edges e′ ∈ E_x(v) proportionally to the values y_{e′}. As ~x is a fractional matching, in this way, v cannot receive more than 1 dollar in total from its incident edges in E_x. Therefore, and since v is half-tight, no edge e′ incident to v receives more than 2y_{e′} dollars from v ∈ e′. In total, an edge e′ ∈ E_y can receive at most 2r y_{e′} dollars from edges in E_x, at most 2y_{e′} from each of its endpoints. Therefore, ∑_{e ∈ E} x_e ≤ 2r ∑_{e ∈ E} y_e.

2.2.2 Recursive Rounding

We explain a recursive method round(~x, L) that, given a (1/d)-fractional matching ~x, computes an (L/d)-fractional matching ~y such that ∑_{e ∈ E} y_e ≥ (1/(4r)) ∑_{e ∈ E} x_e. This procedure will be applied when L is greater than some fixed constant. The procedure works mainly by a number of recursive calls to √(2L)-factor rounding procedures, and a few additional steps.

Lemma 2.7 (Recursive Rounding). There is an O((r^2 + log ∆) log^{5+log r} L)-round deterministic distributed algorithm that turns a (1/d)-fractional matching ~x into an (L/d)-fractional matching ~y with ∑_{e ∈ E} y_e ≥ (1/(4r)) ∑_{e ∈ E} x_e, for any L such that L log^2 L ≤ d.

The Recursive Rounding Algorithm: The method round(~x, L) consists of 16r iterations. Initially, we set y_e = 0 for all edges. Then, in these 16r iterations, we gradually grow ~y while keeping it (L/d)-fractional. The process in each iteration is as follows:

• We first generate a fractional matching ~z by initially setting it equal to ~x, and then removing from it each edge e that is incident to at least one half-tight vertex of ~y. In other words, for each vertex v such that ∑_{e ∈ E(v)} y_e ≥ 1/2, we set z_e = 0 for all e with v ∈ e; for all other edges, we set z_e = x_e.

• We then perform round(~z, √(2L)), producing some intermediate (√(2L)/d)-fractional matching ~z′, and then we perform round(~z′, √(2L)). This creates a (2L/d)-fractional matching ~z″ whose size is at least a factor (1/(4r)) · (1/(4r)) = 1/(16r^2) of the size of ~z.

• We divide the values of this fractional matching ~z″ by a 2-factor, creating an (L/d)-fractional matching, and we add the result to ~y. Thus, we effectively update ~y ← ~y + ~z″/2.

Remark 2.8. Recall the promise from Remark 2.4 that we will apply the rounding method only for values such that L log^2 L ≤ d. The main reason for this stronger condition, compared to the more natural condition of L ≤ d, is the 2-factor that we have in the recursive rounding call. For instance, the matching ~z″ is a (2L/d)-fractional matching and thus, for this to be meaningful, we need 2L ≤ d. However, with the stronger condition that L log^2 L ≤ d, we can say that the promise is satisfied throughout the recursive calls. For instance, in the second call to round(~z′, √(2L)), the new condition would be √(2L) · (log √(2L))^2 ≤ d/√(2L), which is readily satisfied given that L log^2 L ≤ d and L ≥ 8.

Analysis of the Recursive Rounding: We next provide the related analysis. In particular, Lemma 2.9 proves that the generated fractional matching ~y is valid, Lemma 2.10 proves that it is a good approximation of ~x, and Lemma 2.11 analyzes the running time of this recursive procedure.

Lemma 2.9. The fractional matching ~y is valid, meaning that e E(v) ye 1 for all vertices v. ∈ ≤ Proof. We show by induction on i that the fractional matchingP~y in iteration i does not violate the constraints y 1 for all v. At the beginning, the condition is trivially satisfied. If v is e E(v) e ≤ half-tight at the∈ beginning of an iteration, then z = 0 and hence z = 0 for all e E(v), thus no P e e′′ ∈ value is added to e E(v) ye in this iteration. If v is not half-tight at the beginning of an iteration, we add at most half of∈ a fractional matching to edges incident to v, thus at most a value 1/2 to the P 1 summation e E(v) ye. More formally, we have e E(v) ze′′ 1, thus e E(v) ze′′/2 2 , which results ∈ ∈ ≤ ∈ ≤ in a new value of at most e E(v)(ye + ze′′/2) 1. P ∈ ≤P P P 10 1 Lemma 2.10. At the end of 16r iterations, we have e E ye 4r e E xe. ∈ ≥ ∈ 1 Proof. Consider one iteration and suppose that e PE ye 4r ePE xe. We first show that then 1 ∈ ≤ ∈ e E ze 2 e E xe by a blaming argument along the same lines as the proof of Lemma 2.2. For ∈ ≥ ∈ P P that, we let every edge e E which is incident to a half-tight vertex in ~y put xe dollars on edges P P ∈ in a manner that each edge e′ receives at most 2rye′ dollars. This can be done by sending those xe dollars of e to (one of) its ~y-half-tight vertex v, and then letting v distribute these xe dollars among its incident edges e E(v) proportionally to the values y ′ . Since ~x is a matching, each vertex v in ′ ∈ e total receives at most 1 dollar from its incident edges. Then, since v is ~y-half-tight, it can distribute this dollar among its incident edges such that no edge e′ receives more than 2ye′ dollars from one of its endpoints v. Now, an edge e E can possibly receive 2y ′ dollars from each of its (half-tight) ′ ∈ e endpoint vertices, thus in total at most 2rye′ dollars. Therefore, indeed 2r 1 x 2r y x = x . e ≤ e ≤ 4r e 2 e e E : v e: ′∈ y ′ 1/2 e E e E e E ∈ ∃ ∈ PXe E(v) e ≥ X∈ X∈ X∈ It follows that if y 1 x , then z 1 x . Thus, in each such iteration, the e E e ≤ 4r e E e e E e ≥ 2 e E e matching ~y grows by∈ at least ∈ ∈ ∈ P P P P 1 1 1 1 1 z′′/2 z x . e ≥ 2 · 16r2 e ≥ 2 · 16r2 · 2 e e E e E e E X∈ X∈ X∈ 2 1 Therefore, after at most 16r iterations, we have e E ye 4r e E xe. ∈ ≥ ∈ Lemma 2.11. The algorithm round(~x, L) has roundP complexityPO((r2 + log ∆) log5+log r L).

Proof. The complexity R(L) of the rounding algorithm round(~x, L) follows the recursive inequality R(L) 16r(R(√2L)+ R(√2L)+ O(1)). Furthermore, we have the base case solution of R(L) = ≤ O(L2r2 + log ∆) for L = O(1). The claim can now be proved by an induction on L, as formalized in Lemma A.1. Here, instead of the formal calculations, we mention an intuitive explanation: the complexity gets multiplied by roughly 32r as we move from L to √2L. There are, roughly, log log L such moves, and hence the complexity gets multiplied by (32r)log log L < log5+log r L until we reach the base case of L = O(1), where the base complexity is O(r2 + log ∆) by Lemma 2.5.
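To make the recursive structure concrete, here is a compact centralized sketch of round(~x, L); `basic_round` stands for the base-case procedure of Lemma 2.5, with a simplified signature of our own choosing, and the constants follow the description above.

```python
import math

def round_matching(hyperedges, x, d, L, r, basic_round):
    """Sketch of the recursive L-factor rounding: returns an (L/d)-fractional matching y
    whose total size is at least a (1/(4r))-factor of that of x."""
    if L <= 8:                                        # base case: constant-factor rounding (Lemma 2.5)
        return basic_round(hyperedges, x, d, L)

    y = {e: 0.0 for e in x}
    for _ in range(16 * r):
        # Remove from x every edge incident to a vertex that is already half-tight in y.
        load = {v: 0.0 for e in x for v in hyperedges[e]}
        for e, val in y.items():
            for v in hyperedges[e]:
                load[v] += val
        z = {e: (x[e] if all(load[v] < 0.5 for v in hyperedges[e]) else 0.0) for e in x}

        # Two successive sqrt(2L)-factor roundings of the left-over fractional matching.
        L_half = math.sqrt(2 * L)
        z1 = round_matching(hyperedges, z, d, L_half, r, basic_round)
        z2 = round_matching(hyperedges, z1, d / L_half, L_half, r, basic_round)

        for e, val in z2.items():                     # add half of the result to y
            y[e] += val / 2.0
    return y
```

The second recursive call uses d/√(2L) as its fractionality parameter, reflecting that z1 is already a (√(2L)/d)-fractional matching.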

Remark 2.12. We remark that we have not tried to optimize the constant that appears in the exponent of the round complexity O((r2 + log ∆) log5+log r L). This constant mainly comes from the constant in the number of iterations in our recursive rounding, which is currently set to 16r, for simplicity. Optimizing this constant may be of interest specially for small values of r. In particular, for the case of r = 3, which leads to our edge-coloring result via Lemma 1.3, the current constants lead to a complexity O(log6+log2 3 ∆) = O(log7.6 ∆) for Θ(1)-approximation of maximum matching, thus an O(log7.6 ∆ log n) complexity for maximal rank-3 matching and also edge-coloring. One can easily improve this to O(log6.75 ∆ log n), and perhaps even further, by adjusting the constants throughout.

2.3 Wrap-up We now use our rounding procedure to find the approximate maximum matching of Lemma 2.1.

Proof of Lemma 2.1. First, we compute a (1/∆)-fractional (2r)-approximate matching ~x in O(log ∆) rounds, by the greedy algorithm described in Section 2.1. Then, we apply the recursive rounding from Lemma 2.7 for L = ∆/log^2 ∆. This produces a (1/log^2 ∆)-fractional matching ~x′ whose size is a (1/(8r^2))-factor of the maximum fractional matching of the hypergraph, in O(log^{5+log r} ∆ · (r^2 + log ∆)) rounds. To finish up the rounding, we apply the basic rounding of Lemma 2.5 for L = log^2 ∆, which

runs in O(r^2 log^4 ∆ + log ∆) rounds and produces a 1-fractional — i.e., integral — matching ~x″ whose size is at least a (1/(4r))-factor of the size of ~x′. Hence, the final produced integral matching is a (32r^3)-approximation of the maximum matching.

We compute a maximal matching via iterative applications of this matching approximation.
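Before the formal proof, here is a centralized sketch of this outer loop; the approximation procedure of Lemma 2.1 is treated as a black box passed in as a parameter, and all names are ours.

```python
def maximal_matching(hyperedges, approximate_matching):
    """Sketch of the outer loop in the proof of Theorem 1.4: repeatedly add an
    O(r^3)-approximate matching and remove it together with all incident hyperedges.
    `approximate_matching` stands for the procedure of Lemma 2.1 (assumed, not implemented here)."""
    remaining = dict(hyperedges)                     # edge id -> iterable of vertices
    matching = set()
    while remaining:                                 # O(r^3 log n) iterations suffice
        M = approximate_matching(remaining)
        if not M:                                    # safety net for the sketch
            break
        matching |= M
        matched_vertices = {v for e in M for v in remaining[e]}
        remaining = {e: verts for e, verts in remaining.items()
                     if not (set(verts) & matched_vertices)}
    return matching
```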

Proof of Theorem 1.4. First, we pre-compute an O(r^2 ∆^2)-edge-coloring of H in O(log* n) rounds by Linial's algorithm [Lin87]. Then, iteratively, we apply the maximum matching approximation procedure of Lemma 2.1 to the remaining hypergraph. We add the found matching M to the matching that we will output at the end, and remove M along with its incident edges from the hypergraph.

In each iteration, the size of the maximum matching of the remaining hypergraph goes down to at most a (1 − 1/(32r^3))-factor of the previous size. This is because otherwise we could combine the matching computed so far with the maximum matching in the remainder hypergraph to obtain a matching larger than the maximum matching in H. After O(r^3 log n) repetitions, the remaining maximum matching size is 0, which means the remaining hypergraph is empty. Hence, we have found a maximal matching in O(log* n + r^3 log n · r^2 log^{6+log r} ∆) = O(r^5 log^{6+log r} ∆ · log n) rounds.

2.4 Edge-Coloring

Lemma 2.13. Given a deterministic distributed algorithm A that computes a maximal matching in N-vertex hypergraphs of rank 3 and maximum degree d in T(N, d) rounds, there is a deterministic distributed algorithm B that solves list-edge-coloring of any n-node graph G = (V, E) with maximum degree ∆ in at most T(2n∆^2, 2∆ − 1) rounds. In the list-edge-coloring problem, each edge e must choose its color from an arbitrary given list L_e of colors with |L_e| = d_e + 1, where d_e denotes the number of edges adjacent to e.

Proof Sketch. To edge-color G = (V, E), we generate a hypergraph H: For each edge e ∈ E with color list L_e, we take |L_e| copies of e = {v, u} as follows: For each color i ∈ L_e, we take one copy e_i of e which is put incident to copies v_i and u_i of v and u, as well as to one extra vertex w_e that is shared by all copies of e, turning each copy into a 3-hyperedge. Thus, if two adjacent edges e = {v, u} and e′ = {v, u′} have a common color i ∈ L_e ∩ L_{e′}, then their i-th copies e_i and e′_i will both be present and incident to v_i. Notice that for each vertex v, at most 2∆^2 copies of it will be used, because for each of the edges e incident to v, at most |L_e| < 2∆ additional copies of v are added.

Algorithm B runs the maximal matching algorithm A on H, and then, for each edge e ∈ E, if the copy e_i of e is in the computed maximal matching, B colors e with color i. One can verify that each G-edge e has exactly one copy e_i in the maximal matching, and thus we get a list-edge-coloring.
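A minimal centralized sketch of this construction (the function name, the list representation, and the shared vertex label 'w' are ours, for illustration only, following the proof sketch above):

```python
def build_list_coloring_hypergraph(edges, lists):
    """edges: list of pairs (u, v); lists[e]: the set of allowed colors L_e for edge e.
    Returns rank-3 hyperedges, one per (edge, color) pair."""
    hyperedges = {}
    for e in edges:
        u, v = e
        for i in lists[e]:
            hyperedges[(e, i)] = frozenset({(u, i), (v, i), ('w', e)})
    return hyperedges
```

Running a rank-3 maximal matching algorithm on this hypergraph and reading off, for each edge e, the color i of its matched copy (e, i) then yields the list-edge-coloring, exactly as described in the proof sketch.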

Theorem 1.1. There is a deterministic distributed algorithm that computes a (2∆ 1)-edge-coloring − in O(log7 ∆ log n) rounds, in any n-node graph with maximum degree ∆. Moreover, the same algorithm · solves list-edge-coloring, where each edge e E must get a color from a given list Le of colors with L = d + 1, where d denotes the number∈ of edges adjacent to e. | e| e e Proof. Follows directly from Lemma 2.13 and Theorem 1.4 (with the slightly improved exponent con- stant, noted in Remark 2.12).

Remark 2.14. We note that Lemma 2.13, and hence also Theorem 1.1, can be easily extended from graphs to hypergraphs. In particular, list-edge-coloring of hypergraphs of rank r can be reduced to maximal matching in hypergraphs of rank r + 1. Thus, we can obtain a deterministic list-edge-coloring algorithm for hypergraphs of rank r with round complexity O(r5 log6+log(r+1) ∆ log n) rounds.

As stated before, the deterministic list-edge-coloring algorithm of Theorem 1.1, in combination with known randomized algorithms of Elkin, Pettie, and Su [EPS15] and Johansson [Joh99], leads to a poly(log log n) randomized algorithm for (2∆ − 1)-edge-coloring.

Corollary 1.2. There is a randomized distributed algorithm that computes a (2∆ − 1)-edge-coloring in O(log^8 log n) rounds, with high probability.

Proof of Corollary 1.2. For ∆ = Ω(log^2 n), we run the O(log* ∆ + log n / ∆^{1−o(1)})-round algorithm by Elkin, Pettie, and Su [EPS15] for ((1 + ε)∆)-edge-coloring, in O(log* ∆) rounds. For ∆ = o(log^2 n), we first apply the simple randomized coloring algorithm of Johansson [Joh99] for O(log ∆) = O(log log n) rounds. In particular, in each iteration, every remaining edge e independently picks a color q_e from its remaining palette uniformly at random. If there is no incident edge that picked the same color q_e, then the edge e is colored with this color q_e and removed from the graph. Moreover, the color q_e gets deleted from the palettes of every incident edge. As proved in, e.g., [BEPS12], after O(log ∆) rounds, this procedure leaves us with a graph where each connected component of remaining edges has size at most N = poly log n. On these components, we then run the list-edge-coloring algorithm of Theorem 1.1 to complete the partial coloring. This takes at most O(log^8 N) = O(log^8 log n) rounds. Hence, including the O(log log n) initial rounds, the overall complexity is O(log^8 log n) rounds.
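For illustration, one synchronous step of the random color trial described above can be simulated as follows; the data layout and function name are ours, and the conflict test over incident edges abstracts one LOCAL communication round.

```python
import random

def random_color_trial(palette, incident):
    """One iteration of the simple randomized list-edge-coloring step.
    palette[e] is the current set of allowed colors of edge e; incident[e] lists the
    edges sharing an endpoint with e. Colors the lucky edges and shrinks the palettes."""
    proposal = {e: random.choice(list(palette[e])) for e in palette if palette[e]}
    fixed = {}
    for e, q in proposal.items():
        if all(proposal.get(f) != q for f in incident[e]):
            fixed[e] = q                              # no incident edge picked the same color
    for e, q in fixed.items():
        del palette[e]                                # e is colored and removed from the graph
        for f in incident[e]:
            if f in palette:
                palette[f].discard(q)                 # q is no longer available to neighbors
    return fixed
```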

3 MIS in Graphs of Bounded Neighborhood Independence

We will now generalize the hypergraph maximal matching algorithm of Section 2 to computing max- imal independent sets and (∆ + 1)-vertex colorings of graphs of bounded neighborhood independence, thus proving Theorem 1.5 and Corollary 1.6. Formally, the neighborhood independence of a graph is defined as follows.

Definition 3.1 (Bounded Neighborhood Independence). For an integer r 1, we say that a graph ≥ G = (V, E) has neighborhood independence at most r if for every node v V , the graph G[N(v)] induced by the set N(v) of neighbors of v has independence number at most r∈.

We note that the line graph of a hypergraph H of rank r has neighborhood independence at most r. This is because for each hyperedge e of H, all incident hyperedges f share at least one node with e and because all the hyperedges sharing a node of H form a clique in the line graph of H. Hence, the neighborhood of each edge can be covered by at most r cliques in the line graph. Despite the fact that graphs of neighborhood independence r are significantly more general than line graphs of rank-r hypergraphs, we show that our techniques for computing a maximal matching in rank-r hypergraphs can be generalized to computing an MIS in graphs of neighborhood independence r, and this, in fact, even with exactly the same asymptotic dependency on r and ∆. While in the case of hypergraph matchings, the natural LP relaxation leads to fractional solutions that are within a small factor of a maximum matching, it is not as straightforward to model fractional versions of independent sets in graphs of bounded neighborhood independence in a meaningful way. Note that for example even for r = O(1), the integrality gap of the natural LP relaxation of the maximum independent set problem might be as large as Ω(∆). In the following, we study special fractional solutions ~x that assign a non-negative value x 0 to each node v V of a graph G = (V, E) v ≥ ∈ and allow to approximate maximum and maximal independent sets in G if G if a graph of bounded neighborhood independence. For convenience, we first introduce some notation. Recall that given a graph G = (V, E) and a node v V , we use N(v) to denote the set of neighbors of v. Further, we ∈ define N +(v) := v N(v) { }∪

13 to denote the set of nodes in the 1-neighborhood of v. Moreover, for node vector ~x assigning values x to every node v V , for each node v V , we define v ∈ ∈

Σ_{~x}(v) := ∑_{u ∈ N^+(v)} x_u

to be the local sum of the values x_u in the 1-neighborhood of v. As a fractional relaxation of the independent set of a graph G, we define a greedy packing as follows.

Definition 3.2 (Greedy Packing). For a graph G = (V, E), a node vector ~x assigning a non-negative value x_v ≥ 0 to each node v ∈ V is called a greedy packing if there exists a global order ≺ on the nodes V such that

∀v ∈ V: x_v + ∑_{u ∈ N(v): u ≺ v} x_u ≤ 1.

Hence, in a greedy packing, the values x_v can be assigned to the nodes in some order such that for all nodes v ∈ V, when node v gets assigned value x_v, the sum of the values in v's 1-neighborhood is bounded by 1.

Analogously to the matching algorithm in Section 2, the key part is a recursive algorithm that finds an independent set that is an approximation of a maximum independent set in graphs of bounded neighborhood independence. We formally prove the following result.

Lemma 3.3. In graphs of neighborhood independence at most r, there is a deterministic distributed algorithm that computes a (32r^3)-approximate independent set in O(r^2 log^{6+log r} ∆) rounds, given an O(∆^2)-vertex-coloring.

We first show that in graphs of bounded neighborhood independence, for any such greedy packing ~x, the local sum Σ_{~x}(v) is bounded for all nodes.

Lemma 3.4. Let G = (V, E) be a graph with neighborhood independence at most r and assume that we are given a greedy packing ~x. Then, for all v ∈ V, we have Σ_{~x}(v) ≤ r.

Proof. Consider an arbitrary node v ∈ V and let G_v be the subgraph of G induced by the nodes in N^+(v). Let ≺ be the global order on V which is defined by Definition 3.2 because ~x is a greedy packing. Assume that the nodes in N^+(v) are named u_0, ..., u_{d(v)} such that for all 0 ≤ i < d(v), u_i ≻ u_{i+1}. We construct an MIS S of G_v by processing the nodes in N^+(v) in the order u_0, u_1, ..., u_{d(v)}, always adding the current node u_i to S if no neighbor of u_i has already been added to S. In this way, every node u_i ∈ N^+(v) \ S has an MIS neighbor u_j ∈ N^+(v) for which j < i and thus u_i ≺ u_j. We charge the value x_{u_i} of every node u_i ∈ N^+(v) \ S to some MIS neighbor u_j for which j < i. In addition, the value x_{u_j} of each MIS node u_j ∈ S is charged to the node itself. For each MIS node u_j ∈ S, let X_{u_j} be the total value charged to u_j. We can upper bound X_{u_j} as follows:

X_{u_j} ≤ Σ_{u_i ∈ N^+(u_j) ∩ N^+(v): i ≥ j} x_{u_i} ≤ x_{u_j} + Σ_{w ∈ N(u_j): w ≺ u_j} x_w ≤ 1.

The last inequality follows because ~x is a greedy packing w.r.t. the global order ≺. The claim of the lemma now follows because G has neighborhood independence bounded by r, and thus the MIS S can contain at most |S| ≤ r nodes, and hence Σ_~x(v) = Σ_{u_j ∈ S} X_{u_j} ≤ r.

We will show how to recursively compute a large greedy packing. Before doing this, we first prove some useful simple properties of greedy packings.

Lemma 3.5. Let G = (V, E) be a graph, and assume that we are given a global order ≺ on V and a greedy packing ~x w.r.t. the order ≺. Then, the following statements hold:

(1) Let v ∈ V be a node for which Σ_~x(v) ≤ 1 and let ~y be a node vector such that y_u = x_u for all u ≠ v and such that y_v ≤ x_v + 1 − Σ_~x(v). Then ~y is also a greedy packing.

(2) Let U ⊆ V be a subset of the nodes and let ~y be a node vector such that y_v = x_v for all nodes v ∈ V \ U and such that y_u ≥ x_u for all u ∈ U. If Σ_~y(u) ≤ 1 for all nodes u ∈ U, then ~y is also a greedy packing.

(3) Let U ⊆ V be a set of nodes u for which Σ_~x(u) ≤ 1/2 and consider a node vector ~y such that y_v = x_v for all v ∉ U and such that y_u = 2x_u for all u ∈ U. Then ~y is also a greedy packing.

Proof. We first prove claim (1). Consider a global order ≺′ that is obtained from ≺ by moving node v to the very end of the order (without changing the relative order of any of the other nodes). We claim that ~y is a greedy packing w.r.t. the global order ≺′. For all nodes u ≠ v, the condition of Definition 3.2 follows because ~x is a greedy packing w.r.t. the global order ≺. For node v, the condition follows because Σ_~y(v) = Σ_~x(v) + y_v − x_v ≤ 1.

Claim (2) follows from claim (1) by sequentially processing the nodes in U. We start with vector ~x and when processing node u, we replace the current value x_u of node u by y_u. Because y_v ≥ x_v for all nodes, for each of the intermediate vectors ~z, we have Σ_~z(u) ≤ 1 for all u ∈ U. The conditions for claim (1) are therefore satisfied for each node u ∈ U.

Finally, to prove claim (3), observe that because we have Σ_~x(u) ≤ 1/2 for all nodes u ∈ U, the local sum for the nodes in U is still bounded by 1 even if we doubled the values of all nodes v ∈ V. Claim (3) therefore follows as a special case of claim (2).
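To make Definition 3.2 concrete, the following Python sketch checks the greedy-packing condition for a given candidate order and illustrates the value increase of claim (1) of Lemma 3.5. It is purely illustrative: the graph representation and all function names are our own choices and are not part of the algorithms of this paper.

def local_sum(G, x, v):
    # Sigma_x(v): total value in the closed neighborhood N^+(v) = {v} u N(v).
    return x[v] + sum(x[u] for u in G[v])

def is_greedy_packing(G, x, order):
    # Definition 3.2 for one candidate order: every node v must satisfy
    # x_v + sum of x_u over neighbors u that come before v is at most 1.
    # G: adjacency dict {v: set(neighbors)}; x: dict of non-negative values;
    # order: list of all nodes, earlier position = earlier in the order.
    pos = {v: i for i, v in enumerate(order)}
    return all(
        x[v] + sum(x[u] for u in G[v] if pos[u] < pos[v]) <= 1
        for v in G
    )

# Claim (1) of Lemma 3.5, illustrated: if local_sum(G, x, v) <= 1, node v may
# raise its value by 1 - local_sum(G, x, v); the result is a greedy packing
# w.r.t. the order obtained by moving v to the very end:
#   y = dict(x); y[v] += 1 - local_sum(G, x, v)
#   assert is_greedy_packing(G, y, [u for u in order if u != v] + [v])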

In order to recursively compute a large greedy packing, we will need to be able to add a new greedy packing to an existing one. The next lemma shows in which way this can be done. In the following, given a real-valued non-negative node vector ~x and a parameter c > 0, we call a node v ∈ V c-saturated if Σ_~x(v) ≥ c.

Lemma 3.6. Let G = (V, E) be a graph and let ~x be a greedy packing of G. Further, let F ⊆ V be the nodes of G that are not 1/2-saturated w.r.t. ~x and let ~y be a greedy packing of G for which y_v > 0 only for v ∈ F. Then, the fractional assignment ~z := ~x + ~y/2 is a greedy packing of G.

Proof. Assume that ~x is a greedy packing of G w.r.t. the global order ≺_x on V and that ~y is a greedy packing of G w.r.t. the global order ≺_y on F. Note that for all nodes v ∈ F, we have Σ_~x(v) < 1/2. Therefore, for the greedy packing ~x, the condition of Definition 3.2 is satisfied for the nodes in F for every choice of the global order ≺_x. W.l.o.g., we can therefore assume that ≺_x first orders all nodes in V \ F and then orders the nodes in F in an arbitrary way. We can thus define a global order ≺ on the nodes V as a combination of ≺_x and ≺_y in the obvious way: the order ≺ first orders the nodes in V \ F in the same order as ≺_x and it then orders the nodes in F in the same order as ≺_y. We show that ~z = ~x + ~y/2 is a greedy packing of G w.r.t. the global order ≺. For each node v ∈ V \ F, the condition of Definition 3.2 is satisfied because ~x is a greedy packing. For a node v ∈ F, we have

z_v + Σ_{u ∈ N(v): u ≺ v} z_u = y_v/2 + Σ_{u ∈ N(v) ∩ F: u ≺_y v} y_u/2 + x_v + Σ_{u ∈ N(v): u ≺_x v} x_u < 1/2 + 1/2 = 1,

and therefore the claim of the lemma follows.

As in the case of computing matchings in hypergraphs, our goal is to start with a large fractional greedy packing and to gradually round the fractional solution to an integral one of approximately the same size. For a given parameter δ > 0, we call a non-negative real-valued node vector ~x δ-fractional if for every node v ∈ V, either x_v = 0 or x_v ≥ δ. Given a parameter L > 1, we show how to recursively turn a δ-fractional greedy packing into an (Lδ)-fractional greedy packing of a similar size. The proof

follows the same basic structure as the rounding for fractional hypergraph matchings in Sections 2.2.1 and 2.2.2. The following lemma provides a way to upper bound the size of a greedy packing in terms of another greedy packing. We will use it to compare the size of a computed (Lδ)-fractional greedy packing to the existing δ-fractional greedy packing.

Lemma 3.7. Let ~x and ~y be two greedy packings of a graph G = (V, E) with neighborhood independence at most r. Further, let U ⊆ V be the set of nodes of V for which Σ_~y(v) ≥ 1/2. We have

Σ_{v ∈ V} y_v ≥ (1/(2r)) · Σ_{v ∈ U} x_v.

Proof. To prove the lemma, we use a blaming argument. Let V_y be the set of nodes for which y_v > 0. We distribute all the x_v-values of nodes in U among the nodes in V_y. That is, for each node v ∈ V_y, we define a variable α_v such that Σ_{v ∈ V_y} α_v = Σ_{v ∈ U} x_v. More concretely, we define the values α_v for each node v ∈ V_y as follows:

α_v := Σ_{u ∈ N^+(v) ∩ U} x_u · y_v / Σ_~y(u) ≤ 2y_v · Σ_{u ∈ N^+(v) ∩ U} x_u ≤ 2y_v · Σ_~x(v).   (1)

Hence, every node u ∈ U distributes its value x_u among the neighboring nodes v ∈ V_y proportionally to the values y_v. Because ~x is a greedy packing of G, Lemma 3.4 implies that Σ_~x(v) ≤ r for all v ∈ V. Together with Equation (1), we thus get α_v ≤ 2r·y_v for all v ∈ V_y and the claim of the lemma follows.

3.1 Basic Rounding of Greedy Packings

Lemma 3.8 (Basic Rounding of Greedy Packings). Let G = (V, E) be a graph with neighborhood independence at most r ≥ 1. Further, assume that a parameter L > 1, an integer d ≥ L, and a (1/d)-fractional greedy packing ~x of G are given. If, in addition, an O(∆²)-vertex-coloring of G is given, there is an O((rL)² + log* ∆ + log d)-round deterministic distributed algorithm that computes an (L/d)-fractional greedy packing ~y for which y_v > 0 only if x_v > 0 and such that ~y is of size

Σ_{v ∈ V} y_v ≥ (1/(2r)) · Σ_{v ∈ V} x_v.

Proof. Let V_x be the set of nodes v ∈ V for which x_v > 0 and let G_x = G[V_x] be the subgraph of G induced by V_x. Note that because ~x is a greedy packing, Lemma 3.4 implies that Σ_~x(v) ≤ r for every node v ∈ V, and since ~x is (1/d)-fractional, this implies that G_x has maximum degree at most r·d. We compute the (L/d)-fractional greedy packing ~y in two steps. In a first step, we compute an arbitrary (L/d)-fractional greedy packing ~z of G by assigning value z_v = L/d to a subset of the nodes v ∈ V_x. In the second step, we obtain ~y from ~z by iteratively doubling the value of each node that is not (1/2)-saturated, at most O(log d) times.

For the first step, we apply the deterministic defective coloring algorithm of Kuhn [Kuh09]. For a C-vertex-colored graph G of maximum degree ∆ and a parameter p ≥ 1, the algorithm allows us to compute a p-defective O((∆/p)²)-coloring of G in time O(log* C). That is, the algorithm assigns one of O((∆/p)²) colors to each node of G such that the subgraph induced by each of the colors has maximum degree at most p. We apply the defective coloring algorithm of [Kuh09] to the graph G_x with parameter p = d/(2L). Because we are given an O(∆²)-coloring of G (and thus also of G_x), the time for computing this defective coloring is O(log* ∆), and because the maximum degree of G_x is at most d·r, the number of colors of the defective coloring is at most O((rL)²).

We now compute an initial (L/d)-fractional greedy packing ~z as follows. For all nodes v ∈ V \ V_x, we set z_v = 0. For the nodes in V_x, we iterate through the O((rL)²) colors of the defective coloring of G_x and process all nodes of the same color in parallel. At the beginning, we set z_v = 0 for all v ∈ V_x. When processing the nodes of color c, for each node v ∈ V_x of color c, we set z_v = L/d if and only if Σ_~z(v) ≤ 1/2. Because each node of color c has at most d/(2L) neighbors of color c, this implies that even after this step, Σ_~z(v) ≤ 1 for all nodes of color c. Claim (2) of Lemma 3.5 therefore implies that throughout this process, the vector ~z remains a valid greedy packing. Because at the end all non-zero values of ~z are equal to L/d, clearly, ~z is (L/d)-fractional. Note also that for all nodes v ∈ V_x for which z_v = 0, we have Σ_~z(v) ≥ 1/2.

To obtain the greedy packing ~y from ~z, we first set ~y = ~z and then proceed in synchronous rounds. Let V_y ⊆ V_x be the set of nodes for which z_v > 0. In each round, each node v ∈ V_y for which Σ_~y(v) ≤ 1/2 doubles its value y_v. The process stops when Σ_~y(v) ≥ 1/2 for all nodes v ∈ V_y. Because y_v ≥ 1/2 implies that Σ_~y(v) ≥ 1/2, this happens after at most log(d/L) ≤ log d rounds. Claim (3) of Lemma 3.5 implies that the vector ~y remains a valid greedy packing throughout this process. We therefore obtain an (L/d)-fractional greedy packing ~y where for each node v ∈ V_x, we have Σ_~y(v) ≥ 1/2. Lemma 3.7 then shows that Σ_{v ∈ V} y_v ≥ (1/(2r)) · Σ_{v ∈ V} x_v, which concludes the proof.

3.2 Recursive Rounding of Greedy Packings

We next explain a recursive method round(~x, L) that, given a δ-fractional greedy packing ~x of a graph G = (V, E) of neighborhood independence ≤ r, computes an (Lδ)-fractional greedy packing ~y of G of size Σ_{v ∈ V} y_v ≥ (1/(4r)) · Σ_{v ∈ V} x_v and such that y_v > 0 only if x_v > 0. The method round(~x, L) runs in 16r phases. At the beginning, ~y = ~0, and in each phase, some values of ~y are increased. As soon as Σ_~y(v) ≥ 1/2 for some node v ∈ V, the value y_v is not increased any further. In each phase, the method therefore first defines a δ-fractional greedy packing ~z which is identical to ~x on all nodes v for which Σ_~y(v) < 1/2 and which is 0 on all other nodes. On this vector ~z, the method is called recursively with parameter √(2L), resulting in a (√(2L)·δ)-fractional greedy packing ~z′. Afterwards, the method is again called recursively with parameter √(2L) on the vector ~z′, resulting in a (2Lδ)-fractional greedy packing ~z′′. Finally, the vector ~y is updated by adding ~z′′/2 to it. The algorithm is summarized in Algorithm 1.

Input : A graph G = (V, E) with a δ-fractional greedy packing ~x, a parameter L < 1/(2δ)
Output: An (Lδ)-fractional packing ~y of G
if L ≤ 4 then
    run the basic rounding algorithm of Lemma 3.8
else
    ~y := ~0;
    for phase 1, ..., 16r do
        V_z := { v ∈ V : Σ_~y(v) < 1/2 };
        define ~z s.t. z_v = x_v for v ∈ V_z and z_v = 0 otherwise;
        ~z′ := round(~z, √(2L));
        ~z′′ := round(~z′, √(2L));
        ~y := ~y + ~z′′/2
    end
end
Algorithm 1: Recursive Rounding Algorithm round(~x, L)

The following lemma shows that the algorithm round(~x, L) computes an (Lδ)-fractional greedy packing of size within a factor 4r of the size of ~x.

Lemma 3.9. Assume that we are given a graph G = (V, E) with neighborhood independence at most r, parameters 0 < δ < 1 and L < 1/(2δ), and a δ-fractional greedy packing ~x of G. The algorithm round(~x, L) computes an (Lδ)-fractional greedy packing ~y for which y_v > 0 only if x_v > 0 and such that

Σ_{v ∈ V} y_v ≥ (1/(4r)) · Σ_{v ∈ V} x_v.   (2)

Proof. Note that for L ≤ 4, the algorithm directly applies the basic rounding algorithm of Lemma 3.8 and the claims of the lemma thus directly hold by applying Lemma 3.8. Let us therefore assume that L > 4 and let us (inductively) also assume that the recursive calls to round(~z, √(2L)) and round(~z′, √(2L)) satisfy the claims of the lemma.

We first show that ~y is an (Lδ)-fractional greedy packing of G and that y_v > 0 only if x_v > 0. Note that z_v > 0 only if x_v > 0, and we have z′_v > 0 only if z_v > 0 and z′′_v > 0 only if z′_v > 0, because that is guaranteed by the recursive calls to round(~z, √(2L)) and round(~z′, √(2L)). Because y_v is only increased for nodes v ∈ V for which z′′_v > 0, we therefore have y_v > 0 only if x_v > 0 throughout the algorithm. To see that ~y is (Lδ)-fractional, note that because ~x is δ-fractional, the recursive calls to round(~z, √(2L)) and round(~z′, √(2L)) guarantee that ~z′ is (√(2L)·δ)-fractional and ~z′′ is (2Lδ)-fractional. We update ~y by adding ~z′′/2 and thus an (Lδ)-fractional vector to it. Thus, ~y is (Lδ)-fractional at all times during the execution of Algorithm 1. We prove that ~y at all times is a greedy packing by induction on the number of phases. Clearly, at the beginning when ~y = ~0, ~y is a greedy packing. Also, whenever ~y is updated, we add ~z′′/2 to it. Note that because ~z′′ is the result of the call to round(~z′, √(2L)), ~z′′ is a greedy packing. Further, z′′_v > 0 only where z_v > 0 and thus only for nodes v where Σ_~y(v) < 1/2 at the beginning of the respective phase. It therefore follows directly from Lemma 3.6 that ~y + ~z′′/2 is a greedy packing of G.

It thus remains to show Equation (2). As long as Equation (2) does not hold, at the beginning of each of the 16r phases, we have Σ_{v ∈ V} x_v > 4r · Σ_{v ∈ V} y_v. As in Algorithm 1, let V_z be the set of nodes for which Σ_~y(v) < 1/2 and let V̄_z := V \ V_z be the set of nodes for which Σ_~y(v) ≥ 1/2. From Lemma 3.7, we get that Σ_{v ∈ V̄_z} x_v ≤ 2r · Σ_{v ∈ V} y_v and we thus have Σ_{v ∈ V_z} z_v = Σ_{v ∈ V_z} x_v > (1/2) · Σ_{v ∈ V} x_v. From the guarantees of the recursive calls to round(~z, √(2L)) and round(~z′, √(2L)), the size of ~z′′/2 that we add to ~y is thus

(1/2) · Σ_{v ∈ V} z′′_v ≥ (1/(8r)) · Σ_{v ∈ V} z′_v ≥ (1/(32r²)) · Σ_{v ∈ V} z_v ≥ (1/(64r²)) · Σ_{v ∈ V} x_v.

After 16r phases, we therefore have Σ_{v ∈ V} y_v ≥ (16r/(64r²)) · Σ_{v ∈ V} x_v = (1/(4r)) · Σ_{v ∈ V} x_v.

Lemma 3.10. The algorithm round(~x, L) has round complexity O((r² + log ∆) · log^{5+log r} L).

Proof. The proof is identical to the proof of the analogous Lemma 2.11 in the hypergraph matching analysis. The complexity R(L) of the rounding algorithm round(~x, L) follows the recursive inequality R(L) ≤ 16r·(R(√(2L)) + R(√(2L)) + O(1)). Furthermore, we have the base case solution of R(L) = O(L²r² + log ∆) for L = O(1). The claim can now be proved by an induction on L, as formalized in Lemma A.1.
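To make the recursion analyzed above concrete, the following Python sketch mirrors Algorithm 1 as a sequential simulation; it is illustrative only and not the distributed algorithm itself. The function basic_rounding is an assumed stand-in for the procedure of Lemma 3.8, and local_sum is the helper from the sketch after Lemma 3.5.

import math

def recursive_round(G, x, L, basic_rounding, r):
    # Sequential sketch of Algorithm 1, round(x, L).
    # G: adjacency dict {v: set(neighbors)}; x: dict of node values forming a
    # delta-fractional greedy packing; r: neighborhood independence bound;
    # basic_rounding(G, x, L): stand-in for the basic rounding of Lemma 3.8.
    if L <= 4:
        return basic_rounding(G, x, L)
    y = {v: 0.0 for v in G}
    for _ in range(16 * r):
        # Nodes whose 1-neighborhood is not yet 1/2-saturated by y.
        V_z = {v for v in G if local_sum(G, y, v) < 0.5}
        z = {v: (x[v] if v in V_z else 0.0) for v in G}
        z1 = recursive_round(G, z, math.sqrt(2 * L), basic_rounding, r)
        z2 = recursive_round(G, z1, math.sqrt(2 * L), basic_rounding, r)
        for v in G:
            y[v] += z2[v] / 2.0
    return y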

3.3 Wrap-up

We now use our rounding procedure to find the approximate independent set of Lemma 3.3.

Proof of Lemma 3.3. Let S* be some maximum independent set of the given graph G = (V, E) with neighborhood independence ≤ r. We first compute a (1/∆)-fractional greedy packing ~x of size at least (1/(2r))·|S*|. To compute ~x, we initially set x_v = 1/∆ for all nodes v ∈ V. As this guarantees that Σ_~x(v) ≤ 1 for all v ∈ V, this initial vector ~x clearly is a greedy packing. Now, we proceed in

log ∆ synchronous rounds, where in each round, all nodes v ∈ V for which Σ_~x(v) < 1/2 double their value x_v. Claim (3) of Lemma 3.5 implies that the vector ~x remains a greedy packing throughout this process. Further, after at most log ∆ doubling steps, we certainly have Σ_~x(v) ≥ 1/2 for all nodes v ∈ V. Lemma 3.7 therefore implies that Σ_{v ∈ V} x_v ≥ (1/(2r)) · Σ_{v ∈ V} y_v for every greedy packing ~y of G. The claim that Σ_{v ∈ V} x_v ≥ |S*|/(2r) now follows because for any independent set S of G, setting y_v = 1 for v ∈ S and y_v = 0 otherwise results in a greedy packing ~y.

Given the greedy packing ~x, we now apply the recursive rounding from Lemma 3.9 for L = ∆/log² ∆. This produces a (1/log² ∆)-fractional greedy packing ~x′, in O(log^{5+log r} ∆ · (r² + log ∆)) rounds, whose size is a (1/(8r²))-factor of |S*|. To finish up the rounding, we apply the basic rounding of Lemma 3.8 for L = log² ∆, which runs in O(r² log² ∆) rounds and produces a 1-fractional (i.e., integral) greedy packing ~x′′ of size at least (1/(4r)) times the size of ~x′. Hence, the final integral greedy packing is a (32r³)-approximation of the maximum independent set S*.

We compute a maximal independent set via iterative applications of this independent set approximation.
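As a schematic summary of the proof above, the following Python sketch strings the three steps together sequentially. It reuses local_sum and recursive_round from the earlier sketches; basic_rounding again stands in for the procedure of Lemma 3.8, and all names and parameter choices are our own illustration of the proof, not a definitive implementation.

import math

def approx_independent_set(G, Delta, r, basic_rounding):
    # Schematic (sequential) version of the proof of Lemma 3.3.
    # G: adjacency dict; Delta: max degree; r: neighborhood independence;
    # basic_rounding(G, x, L): stand-in for the basic rounding of Lemma 3.8.
    # Step 1: initial (1/Delta)-fractional greedy packing via doubling.
    x = {v: 1.0 / Delta for v in G}
    for _ in range(int(math.log2(Delta)) + 1):
        low = {v for v in G if local_sum(G, x, v) < 0.5}
        for v in low:
            x[v] *= 2
    # Step 2: recursive rounding (Lemma 3.9) up to values >= 1/log^2(Delta).
    L1 = Delta / (math.log2(Delta) ** 2)
    x1 = recursive_round(G, x, L1, basic_rounding, r)
    # Step 3: basic rounding (Lemma 3.8) to an integral packing.
    x2 = basic_rounding(G, x1, math.log2(Delta) ** 2)
    # The support of a 1-fractional greedy packing is an independent set.
    return {v for v in G if x2[v] > 0}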

Proof of Theorem 1.5. First, we pre-compute an O(∆²)-vertex-coloring of G in O(log* n) rounds by Linial's algorithm [Lin87]. Then, iteratively, we apply the maximum independent set approximation procedure of Lemma 3.3 to the remaining graph. We add the found independent set S to the independent set that we will output at the end, and remove S along with its neighbors from the graph. In each iteration, the size of the maximum independent set of the remaining graph decreases to at most a (1 − 1/(32r³))-fraction of its previous value. This is because otherwise we could combine the independent set computed so far with the maximum independent set in the remaining graph to obtain an independent set larger than the maximum independent set in G. After at most O(r³ log n) repetitions, the maximum independent set of the remaining graph has size 0, which means the remaining graph is empty. Hence, we have found a maximal independent set in O(log* n + r³ log n · r² log^{6+log r} ∆) = O(r⁵ log^{6+log r} ∆ · log n) rounds.

4 Other Implications: Matching Approximation, and Orientations

4.1 Approximating Maximum Matching in Graphs

Theorem 4.1. There is a deterministic distributed algorithm that computes a (1 + ε)-approximation of maximum matching in O(poly(1/ε) · ((1/ε) · log ∆)^{7+log(1/ε)}) rounds, for any ε ∈ (0, 1].

Proof. We first discuss an algorithm with complexity O(poly(1/ε) · ((1/ε) · log ∆)^{6+log(1/ε)} · log n), and then explain how a small change improves the complexity to O(poly(1/ε) · ((1/ε) · log ∆)^{7+log(1/ε)}).

We follow a well-known approach of Hopcroft and Karp [HK73] of increasing the size of the matching using short augmenting paths. Given a matching M, an augmenting path P with respect to M is a path that starts with an unmatched vertex, then alternates between non-matching and matching edges, and ends in an unmatched vertex. Augmenting the matching M with this path P means replacing the matching edges in P ∩ M with the edges P \ M. Notice that the result is a matching, with one more edge. The approximation algorithm variant of Hopcroft and Karp [HK73] works as follows: For each ℓ = 1 to 2(1/ε) − 1, we find a maximal set of vertex-disjoint augmenting paths of length ℓ, and we augment them all. Hopcroft and Karp [HK73] show that this produces a (1 + ε)-approximation of maximum matching. See also [LPSP15], where they use the same method to obtain an O(log n/ε³)-round randomized distributed algorithm for (1 + ε)-approximation of maximum matching, using the help of the O(log n)-round randomized MIS algorithm of Luby [Lub86].

What remains to be discussed is how we compute a maximal set of vertex-disjoint augmenting paths of a given length ℓ ≤ 2(1/ε) − 1. This can be easily formulated as a hypergraph maximal

matching for a hypergraph of rank at most 1/ε + 1: create a hypergraph H by including one vertex for each unmatched node and also one vertex for each matching edge. Then, each augmenting path is simply a hyperedge made of its elements, i.e., its unmatched vertices and its matching edges. This hypergraph has rank at most 1/ε + 1, maximum degree at most ∆^{2(1/ε)}, and the number of its vertices is no more than n. Moreover, a single round of communication on this hypergraph can be simulated in O(1/ε) rounds of the base graph, simply because each hyperedge spans a path of length at most O(1/ε). Hence, we can directly apply Theorem 1.4 to compute a maximal matching of it, i.e., a maximal set of vertex-disjoint augmenting paths (a schematic construction of this hypergraph is sketched at the end of Section 4). This runs in O((1/ε⁶) · ((2/ε) · log ∆)^{5+log(1/ε+1)} · log n) rounds. This is the complexity of the algorithm for each value of ℓ ∈ [1, 2/ε − 1]. Thus, the overall complexity is at most O((1/ε⁷) · ((2/ε) · log ∆)^{5+log(1/ε+1)} · log n).

What remains to be discussed is removing the log n factor from the complexity. We next provide a sketch. In the above algorithm, we compute a maximal set of disjoint augmenting paths, and this precise maximality necessitates the log n factor (in our approach). However, we do not need such a precise maximality. It suffices if the set of disjoint augmenting paths is almost maximal, in particular in the sense that the fraction of the remaining augmenting paths is less than poly(ε∆^{−1/ε}), say. Then, even if we permanently remove all nodes that have such a remaining augmenting path, we lose only a negligible fraction of the matching, which at the end only changes our approximation ratio to 1 + 2ε. To compute such an almost maximal set of disjoint augmenting paths, instead of the O(r³ log n) iterations in the proof of Theorem 1.4, it suffices to have O(r³ log(poly(∆^{1/ε}/ε))) iterations. This brings down the overall complexity to O(poly(1/ε) · ((1/ε) · log ∆)^{7+log(1/ε)}).

4.2 Orientations with Small Out-Degree

Theorem 4.2. There is a deterministic distributed algorithm that computes an orientation with maximum out-degree at most ⌈λ(1 + ε)⌉ in 2^{O(log²(log n/ε))} rounds, for any ε > 0, in any graph with arboricity at most λ.

Proof. We follow the approach of Ghaffari and Su [GS17], which iteratively improves the orientation, i.e., reduces its maximum out-degree, using suitably defined augmenting paths. They developed this approach and used it along with Luby's randomized MIS algorithm [Lub86] to obtain a polylogarithmic-round randomized algorithm for finding an orientation with out-degree at most ⌈λ(1 + ε)⌉. We show how to turn that algorithm into a quasi-polylogarithmic-round deterministic algorithm, mainly by replacing their MIS module with our hypergraph maximal matching algorithm.

Let D = ⌈λ(1 + ε)⌉. Given an arbitrary orientation, we call a path P an augmenting path for this orientation if P is a directed path that starts in a node with out-degree at least D + 1 and ends in a node with out-degree at most D − 1. Augmenting this path means reversing the direction of all of its edges. Notice that this would improve the orientation, as it would decrease the out-degree of one of the nodes whose out-degree is above the budget D, without creating a new such node. Let G₀ be the graph with our initial arbitrary orientation. Define G₀′ to be a directed graph

obtained by adding a source node s and a sink node t to G₀. Then, we add outdeg_{G₀}(u) − D edges from s to every node u with out-degree at least D + 1, and D − outdeg_{G₀}(u) edges from every node u with out-degree at most D − 1 to t. We will improve the orientation gradually, in ℓ = O(log n/ε) iterations. In the i-th iteration, we find a maximal set of edge-disjoint augmenting paths of length 3 + i from s to t in G′_i, and then we reverse all these augmenting paths. The resulting graph is called G′_{i+1}. Ghaffari and Su [GS17, Lemma D.6] showed that in this manner, each time the length of the augmenting path increases by at least an additive 1. Moreover, they showed that at the end of the process, no augmenting path of length at most ℓ = O(log n/ε) remains. They used this to prove that there must be no node of out-degree D + 1 left at the end of the process, as any such node would imply the existence of an augmenting path of length at most ℓ = O(log n/ε) [GS17, Lemma D.9].

The only algorithmic piece that remains to be explained is how we compute a maximal set of edge-disjoint augmenting paths of length at most 3 + i < ℓ, in a given orientation. Ghaffari and

Su [GS17, Theorem D.4] solved this part using Luby's randomized MIS algorithm [Lub86]. We instead use our hypergraph maximal matching algorithm. In particular, we view each edge as one vertex of our hypergraph, and each augmenting path of length at most 3 + i < ℓ as one hyperedge of our hypergraph. Then, we invoke Theorem 1.4, which provides us with a maximal set of edge-disjoint augmenting paths. The round complexity of the process is at most poly(ℓ) · log^{log((log n)/ε)+O(1)} ∆ · log n, where one factor of ℓ is because simulating each hyperedge needs ℓ rounds, and a further ℓ-factor comes from the fact that the degree of the hypergraph may be as large as ∆^ℓ, which means the related logarithm is at most ℓ · log ∆. This is the complexity for each iteration. Since the algorithm has ℓ iterations, each time working on an incremented augmenting-path length, the overall complexity is at most poly(ℓ) · log^{log((log n)/ε)+O(1)} ∆ · log n. This is no more than 2^{O(log²(log n/ε))} rounds, which is quasi-polylogarithmic in n for most ε-values of interest, e.g., ε = Ω(1/poly log n).
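For the matching approximation of Section 4.1, the reduction to hypergraph maximal matching can be made concrete as follows: each augmenting path becomes a hyperedge over one hypergraph vertex per unmatched endpoint and one per matching edge on the path, so that disjoint hyperedges correspond exactly to disjoint augmenting paths. The Python sketch below is a centralized illustration of this construction only; enumerating the candidate paths and computing the hypergraph maximal matching (Theorem 1.4) are treated as given, and the function name is ours.

def build_augmenting_hypergraph(matching, augmenting_paths):
    # matching: set of frozenset({u, v}) edges of the current matching M.
    # augmenting_paths: iterable of paths, each a list of graph vertices that
    #   alternates non-matching/matching edges and has unmatched endpoints.
    # Returns (vertices, hyperedges): H has one vertex per unmatched endpoint
    # and one per matching edge appearing on some path; each path is one
    # hyperedge (hypergraph vertices touched by no path are omitted here).
    hyperedges = []
    vertices = set()
    for path in augmenting_paths:
        he = set()
        he.add(('free', path[0]))          # unmatched endpoint
        he.add(('free', path[-1]))         # unmatched endpoint
        for u, v in zip(path, path[1:]):
            e = frozenset({u, v})
            if e in matching:
                he.add(('medge', e))       # matching edge on the path
        hyperedges.append(frozenset(he))
        vertices |= he
    return vertices, hyperedges

A maximal matching of this hypergraph is then exactly a maximal set of disjoint augmenting paths, which is what the algorithms of Section 4 augment in each step.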

5 Open Problems

We believe that our techniques and results open the road for further progress on deterministic distributed graph algorithms, with clear consequences also on randomized algorithms, as exemplified by Corollary 1.2. As Barenboim and Elkin suggested when discussing their Open Problem 5, perhaps these will serve as a "good stepping stone" towards obtaining an efficient deterministic algorithm for MIS, thus resolving Linial's long-standing question [Lin87]. As concrete steps on this path, we point out two smaller problems, which appear to be the immediate next steps.

Hypergraph Maximal Matching with Better Rank Dependency: For our hypergraph maximal matching algorithm, we have been more focused on the case of smaller ranks r, and the current complexity has a factor of log^{log r} ∆ in it. Can we improve this to poly(r log n), for instance? Notice that the case of r = poly log n captures a range of problems of interest, see e.g. Section 4.2, and this improvement would give a poly log n-time algorithm for these cases, including a resolution of Open Problem 10 of Barenboim and Elkin's book [BE13].

Better than (2∆ − 1)-Edge-Coloring: We obtained a polylogarithmic-time algorithm for (2∆ − 1)-edge-coloring, as formalized in Theorem 1.1. This value of 2∆ − 1 is a natural threshold, because this is what greedy sequential arguments obtain, which made it the classic target of (deterministic) distributed algorithms. However, as Vizing's theorem [Viz64] shows, every graph has a (∆ + 1)-edge-coloring. How close can we get to this, while remaining with polylogarithmic-time LOCAL algorithms? We are confident that by combining Theorem 1.1 with ideas of Panconesi and Srinivasan [PS95] for ∆-vertex-coloring, we can obtain a polylogarithmic-time algorithm for (2∆ − 2)-edge-coloring. But how about (2∆ − 3)-edge-coloring, or even (3∆/2)-edge-coloring?

Interestingly, we can already make some progress on this question for graphs with small arboricity. This result is achieved by combining our list-edge-coloring algorithm of Theorem 1.1 with an H-partitioning method of Barenboim and Elkin [BE13, Chapter 5.1]. This significantly generalizes the (∆ + o(∆))-edge-coloring results of [BEM16], which worked for a ≤ ∆^{1−δ} for some constant δ > 0.

Corollary 5.1. There is a deterministic distributed algorithm that computes an edge-coloring with ∆ + (2 + ε)a − 1 colors in O((1/ε) · log⁷ ∆ · log² n) rounds, on any n-node graph G = (V, E) with maximum degree ∆ and arboricity a.
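The proof of Corollary 5.1 below relies on the H-partitioning of Barenboim and Elkin. As a concrete illustration of the peeling it uses, the following Python sketch computes such a partition sequentially; in the distributed O(log n/ε)-round version of the proof, all low-degree nodes of the remaining graph are peeled in parallel in each round. The function name and interface are our own.

def h_partition(G, a, eps):
    # Peel V into layers H_1, ..., H_l such that every node of H_i has degree
    # at most (2 + eps) * a in the graph induced by H_i u ... u H_l.
    # G: adjacency dict {v: set(neighbors)}; a: arboricity bound; eps: slack.
    threshold = (2 + eps) * a
    remaining = {v: set(nbrs) for v, nbrs in G.items()}
    layers = []
    while remaining:
        # All nodes of low degree in the remaining graph form the next layer;
        # since the remaining graph has arboricity at most a, this set is
        # non-empty, so the loop terminates.
        layer = {v for v, nbrs in remaining.items() if len(nbrs) <= threshold}
        if not layer:
            raise ValueError("arboricity bound violated by the input graph")
        for v in layer:
            for u in remaining[v]:
                if u not in layer:
                    remaining[u].discard(v)
        for v in layer:
            del remaining[v]
        layers.append(layer)
    return layers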

Notice that any graph has arboricity a ≤ ∆/2. The above corollary shows that we start seeing savings in the number of colors as soon as the arboricity goes slightly below this upper bound, e.g., for a < ∆(1 − ε)/2, we already get colorings with less than 2∆ − 2 colors.

Proof of Corollary 5.1. First, we compute an H-partitioning [BE13, Chapter 5.1] in O(log n/ε) rounds. This decomposes V into disjoint vertex sets H₁, H₂, ..., H_ℓ, for ℓ = O(log n/ε), with the property that

each node in H_i has degree at most (2 + ε)a in the graph G[∪_{j=i}^ℓ H_j]. To compute this decomposition, one just needs to iteratively peel vertices of degree at most (2 + ε)a from the remaining graph.

Having this partitioning, we compute a (∆ + (2 + ε)a − 1)-edge-coloring by gradually moving backwards in this partition, from H_ℓ towards H₁. Each step is as follows. Suppose we already have a coloring of the edges of G[∪_{j=i+1}^ℓ H_j]. We now introduce the vertices of H_i and also their edges whose other endpoint is in ∪_{j=i}^ℓ H_j. Each such edge e has at most (2 + ε)a − 1 other incident edges on the side of its H_i-endpoint and at most ∆ − 1 other incident edges on the other endpoint. If we take away the colors of {1, 2, ..., ∆ + (2 + ε)a − 1} that are already used by neighboring edges e′ whose both endpoints are in ∪_{j=i+1}^ℓ H_j, the edge e would still have at least d_e + 1 remaining colors in its palette, where d_e is the number of edges in G[∪_{j=i}^ℓ H_j] incident on e that remain uncolored. Hence, we can color all these edges by applying the list-edge-coloring algorithm of Theorem 1.1, in O(log⁷ ∆ · log n) rounds. This is the round complexity needed for coloring the new edges after introducing each layer H_i. Hence, the overall complexity until we go through all the ℓ layers and finish the edge-coloring of G = G[∪_{j=1}^ℓ H_j] is ℓ · O(log⁷ ∆ · log n) = O((1/ε) · log⁷ ∆ · log² n).

Acknowledgment

We are grateful to Moab Arar and Shiri Chechik for sharing with us their manuscript about distributed matching approximation in graphs [AC17]. We note that we arrived at a prior (and slower) version of Lemma 2.5 (for rank-3 hypergraphs) inspired by a concept they use, called the kernel of a graph, which itself is borrowed from [BHN16].

References

[ABI86] Noga Alon, László Babai, and Alon Itai. A fast and simple randomized parallel algorithm for the maximal independent set problem. Journal of Algorithms, 7(4):567–583, 1986.

[AC17] Moab Arar and Shiri Chechik. A distributed deterministic (2 + ε)-approximation for maximum matching in O(∆^{o(1)}) rounds. Manuscript, 2017.

[ALGP89] Baruch Awerbuch, Michael Luby, Andrew V. Goldberg, and Serge A. Plotkin. Network decomposition and locality in distributed computation. In Proc. of the Symp. on Found. of Comp. Sci. (FOCS), pages 364–369. IEEE, 1989.

[BE11] Leonid Barenboim and Michael Elkin. Distributed deterministic edge coloring using bounded neighborhood independence. In the Proc. of the Int'l Symp. on Princ. of Dist. Comp. (PODC), pages 129–138, 2011.

[BE13] Leonid Barenboim and Michael Elkin. Distributed graph coloring: Fundamentals and recent developments. Synthesis Lectures on Distributed Computing Theory, 4(1):1–171, 2013.

[BEM16] Leonid Barenboim, Michael Elkin, and Tzalik Maimon. Deterministic distributed (∆+ o(∆))-edge-coloring, and vertex-coloring of graphs with bounded diversity. arXiv preprint arXiv:1610.06759, 2016.

[BEPS12] Leonid Barenboim, Michael Elkin, Seth Pettie, and Johannes Schneider. The locality of distributed symmetry breaking. In Foundations of Computer Science (FOCS) 2012, pages 321–330. IEEE, 2012.

[BHN16] Sayan Bhattacharya, Monika Henzinger, and Danupon Nanongkai. New deterministic approximation algorithms for fully dynamic matching. In Proceedings of the 48th Annual ACM SIGACT Symposium on Theory of Computing, pages 398–411. ACM, 2016.

[CH03] Andrzej Czygrinow and Michał Hańćkowiak. Distributed algorithm for better approximation of the maximum matching. In International Computing and Combinatorics Conference, pages 242–251. Springer, 2003.

[CHK01] Andrzej Czygrinow, Michał Hańćkowiak, and Michał Karoński. Distributed O(∆ log n)-edge-coloring algorithm. In European Symposium on Algorithms, pages 345–355. Springer, 2001.

[DGP98] Devdatt Dubhashi, David A Grable, and Alessandro Panconesi. Near-optimal, distributed edge colouring via the nibble method. Theoretical Computer Science, 203(2):225–251, 1998.

[EPS15] Michael Elkin, Seth Pettie, and Hsin-Hao Su. (2∆ − 1)-edge-coloring is much easier than maximal matching in the distributed setting. In Proceedings of the Twenty-Sixth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 355–370. Society for Industrial and Applied Mathematics, 2015.

[FG17] Manuela Fischer and Mohsen Ghaffari. Deterministic distributed matching: Simpler, faster, better. arXiv preprint arXiv:1703.00900, 2017.

[FHK16] Pierre Fraigniaud, Marc Heinrich, and Adrian Kosowski. Local conflict coloring. In Foundations of Computer Science (FOCS), 2016 IEEE 57th Annual Symposium on, pages 625–634. IEEE, 2016.

[Gha16] Mohsen Ghaffari. An improved distributed algorithm for maximal independent set. In Proc. of ACM-SIAM Symp. on Disc. Alg. (SODA), 2016.

[GKM17] Mohsen Ghaffari, Fabian Kuhn, and Yannic Maus. On the complexity of local distributed graph problems. In Proc. of the Symp. on Theory of Comp. (STOC), to appear; arXiv:1611.02663, 2017.

[GS17] Mohsen Ghaffari and Hsin-Hao Su. Distributed degree splitting, edge coloring, and orientations. In Proc. of ACM-SIAM Symp. on Disc. Alg. (SODA), 2017.

[HK73] John E. Hopcroft and Richard M. Karp. An n^{5/2} algorithm for maximum matchings in bipartite graphs. SIAM Journal on Computing, 2(4):225–231, 1973.

[HKP98] Michał Hańćkowiak, Michał Karoński, and Alessandro Panconesi. On the distributed complexity of computing maximal matchings. In Proc. of ACM-SIAM Symp. on Disc. Alg. (SODA), pages 219–225, 1998.

[HKP99] Michał Hańćkowiak, Michał Karoński, and Alessandro Panconesi. A faster distributed algorithm for computing maximal matchings deterministically. In the Proc. of the Int'l Symp. on Princ. of Dist. Comp. (PODC), pages 219–228, 1999.

[HSS16] David G Harris, Johannes Schneider, and Hsin-Hao Su. Distributed (∆+ 1)-coloring in sublogarithmic rounds. In Proceedings of the 48th Annual ACM SIGACT Symposium on Theory of Computing, pages 465–478, 2016.

[Joh99] Öjvind Johansson. Simple distributed (∆ + 1)-coloring of graphs. Information Processing Letters, 70(5):229–232, 1999.

[KMW16] Fabian Kuhn, Thomas Moscibroda, and Roger Wattenhofer. Local computation: Lower and upper bounds. J. ACM, 63(2):17:1–17:44, March 2016.

[KNPR14] Shay Kutten, Danupon Nanongkai, Gopal Pandurangan, and Peter Robinson. Distributed symmetry breaking in hypergraphs. In International Symposium on Distributed Computing, pages 469–483. Springer, 2014.

[Kuh09] Fabian Kuhn. Weak graph colorings: distributed algorithms and applications. In Proceedings of the Twenty-First Annual Symposium on Parallelism in Algorithms and Architectures, pages 138–144. ACM, 2009.

[Lin87] Nathan Linial. Distributive graph algorithms: Global solutions from local data. In Proc. of the Symp. on Found. of Comp. Sci. (FOCS), pages 331–335. IEEE, 1987.

[Lin92] Nathan Linial. Locality in distributed graph algorithms. SIAM Journal on Computing, 21(1):193–201, 1992.

[LPSP08] Zvi Lotker, Boaz Patt-Shamir, and Seth Pettie. Improved distributed approximate matching. In the Proc. of the Int'l Symp. on Princ. of Dist. Comp. (PODC), pages 129–136, 2008.

[LPSP15] Zvi Lotker, Boaz Patt-Shamir, and Seth Pettie. Improved distributed approximate matching. Journal of the ACM (JACM), 62(5), 2015.

[Lub86] Michael Luby. A simple parallel algorithm for the maximal independent set problem. SIAM Journal on Computing, 15(4):1036–1053, 1986.

[Pel00] David Peleg. Distributed Computing: A Locality-sensitive Approach. Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 2000.

[PR01] Alessandro Panconesi and Romeo Rizzi. Some simple distributed algorithms for sparse networks. Distributed computing, 14(2):97–100, 2001.

[PS92] Alessandro Panconesi and Aravind Srinivasan. Improved distributed algorithms for coloring and network decomposition problems. In Proc. of the Symp. on Theory of Comp. (STOC), pages 581–592. ACM, 1992.

[PS95] Alessandro Panconesi and Aravind Srinivasan. The local nature of ∆-coloring and its algorithmic applications. Combinatorica, 15(2):255–280, 1995.

[PS97] Alessandro Panconesi and Aravind Srinivasan. Randomized distributed edge coloring via an extension of the Chernoff–Hoeffding bounds. SIAM Journal on Computing, 26(2):350–368, 1997.

[Viz64] Vadim G Vizing. On an estimate of the chromatic class of a p-graph. Diskret. Analiz, 3(7):25–30, 1964.

Appendix

A Solution of the Recurrence Relation of Sections 2 and 3

Lemma A.1. Let r ≥ 2 and ∆ ≥ 2 be two parameters and let α ≥ 1 and c > 0 be two given constants. Further, let R(L) be a function that is defined for L ≥ 1 by the following recurrence relation:

R(L) := cr² + c log ∆,            if L ≤ 4,
R(L) := αr · R(√(2L)) + cr,       otherwise.        (3)

Then we have R(L) = O( r² + (log L)^{log_2 α + log_2 r} · (r² + log ∆) ).

Proof. For all x ≥ 1, we define a non-negative integer t_x as

t_x := min{ t ∈ ℕ₀ : (x/2)^{2^{−t}} ≤ 2 }.

We prove that for all L ≥ 1, we have

R(L) ≤ (αr)^{t_L} · (cr² + c log ∆) + cr · Σ_{i=0}^{t_L − 1} (αr)^i < 2(αr)^{t_L} · (cr² + c log ∆),   (4)

where the last inequality uses αr ≥ 2. For x ≥ 1, we have t_x ≤ max{0, log_2 log_2 x} and thus the claim of the lemma directly follows from Equation (4).

To prove Equation (4), first note that for L ≤ 4, we have t_L ≥ 0 and R(L) = cr² + c log ∆ by Equation (3); because αr ≥ 1, Equation (4) therefore holds. For L > 4, we prove Equation (4) by induction. More formally, for each L > 4, we show that there is a finite sequence L = L_k > L_{k−1} > ··· > L_0 such that L_0 ≤ 4 and such that for each i ∈ {1, ..., k}, Equation (3) implies that if Equation (4) holds for L_{i−1}, it also holds for L_i.

Let us therefore assume that L_k = L > 4. For i ≥ 1, we define L_{i−1} := √(2L_i). First note that for x > 4, we have √(2x) ≤ x/√2, and thus we reach a value of at most 4 in a bounded number of steps. For every i ≥ 1 such that L_i > 4, we have

t_{L_{i−1}} = min{ t ∈ ℕ₀ : (√(L_i/2))^{2^{−t}} ≤ 2 } = min{ t ∈ ℕ₀ : (L_i/2)^{2^{−(t+1)}} ≤ 2 } = t_{L_i} − 1.

From Equation (3), for L_i > 4, we therefore have

R(L_i) ≤ αr · R(L_{i−1}) + cr
       ≤ αr · ( (αr)^{t_{L_i} − 1} · (cr² + c log ∆) + cr · Σ_{j=0}^{t_{L_i} − 2} (αr)^j ) + cr
       = (αr)^{t_{L_i}} · (cr² + c log ∆) + cr · Σ_{j=0}^{t_{L_i} − 1} (αr)^j.

This proves Equation (4) and thus concludes the proof.
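As an illustrative numerical sanity check of Lemma A.1 (not part of the proof), the following short Python snippet evaluates the recurrence (3) directly and compares it with the right-hand side 2(αr)^{t_L}·(cr² + c log ∆) of Equation (4); the parameter values are arbitrary examples chosen by us.

import math

def check_recurrence(alpha=2.0, r=3, c=1.0, Delta=64, L=1e6):
    base = c * r * r + c * math.log2(Delta)

    def R(x):
        # The recurrence (3): base case for x <= 4, one recursive call otherwise.
        if x <= 4:
            return base
        return alpha * r * R(math.sqrt(2 * x)) + c * r

    def t(x):
        # t_x: smallest non-negative integer with (x/2)^(2^-t) <= 2.
        k = 0
        while (x / 2) ** (2 ** -k) > 2:
            k += 1
        return k

    bound = 2 * (alpha * r) ** t(L) * base
    return R(L), bound

print(check_recurrence())  # the first value should not exceed the second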
