Low-Complexity Decompositions of Combinatorial Objects

Total pages: 16

File type: PDF, size: 1020 KB

IMPA Master's Thesis

Low-Complexity Decompositions of Combinatorial Objects

Author: Davi Castro-Silva
Supervisor: Roberto Imbuzeiro Oliveira

A thesis submitted in fulfillment of the requirements for the degree of Master in Mathematics at the Instituto de Matemática Pura e Aplicada.

April 15, 2018

Contents

1 Introduction
  1.1 High-level overview of the framework
  1.2 Remarks on notation and terminology
  1.3 Examples and structure of the thesis
2 Abstract decomposition theorems
  2.1 Probabilistic setting
  2.2 Structure and pseudorandomness
  2.3 Weak decompositions
    2.3.1 Weak Regularity Lemma
  2.4 Strong decompositions
    2.4.1 Szemerédi Regularity Lemma
3 Counting subgraphs and the Graph Removal Lemma
  3.1 Counting subgraphs globally
  3.2 Counting subgraphs locally and the Removal Lemma
  3.3 Application to property testing
4 Extensions of graph regularity
  4.1 Regular approximation
  4.2 Relative regularity
5 Hypergraph regularity
  5.1 Intuition and definitions
  5.2 Regularity at a single level
  5.3 Regularizing all levels simultaneously
6 Dealing with sparsity: transference principles
  6.1 Subsets of pseudorandom sets
  6.2 Upper-regular functions
  6.3 Green-Tao-Ziegler Dense Model Theorem
7 Transference results for L1 structure
  7.1 Relationships between cut norm and L1 norm
  7.2 Inheritance of structure lemmas
  7.3 A "coarse" structural correspondence
  7.4 A "fine" structural correspondence
    7.4.1 Proof of Theorem 7.2
8 Extensions and open problems
Bibliography

Chapter 1
Introduction

Many times in Mathematics and Computer Science we are dealing with a large and general class of objects of a certain kind, and we wish to obtain non-trivial results which are valid for all objects belonging to this class. This may be a very hard task if the possible spectrum of behavior for the members of this class is very broad, since it is unlikely that any single argument will hold uniformly along this whole spectrum.

Such results may be easy (or easier) to obtain when the class we are dealing with is highly structured, in the sense that one can encode its elements in such a way that the description of each object has a relatively small size; then it may be possible to use this structure to prove results valid uniformly over all objects in this class, or to do a case-by-case analysis to obtain such results. At the other end of the spectrum there are the random objects, which have a very high complexity in the sense that any description of a randomly chosen object must specify the random choices made at each point separately, and thus be very large if the object in consideration is large.
However, for such objects there are various "concentration inequalities" which may be used to obtain results valid with high probability over the set of random choices made. Therefore, if we can decompose every object belonging to the general class we are interested in into a "highly structured" component (which has low complexity) and a "pseudorandom" component (which mimics the behavior of random objects in certain key statistics), then we may analyze each of these components separately by different means, and so be able to obtain results which are valid for all such objects.

An illustrative example of a "structure-pseudorandomness" decomposition of this kind is Szemerédi's celebrated Regularity Lemma [31]. This important result roughly asserts that the vertices of any graph G may be partitioned into a bounded number of equal-sized parts, in such a way that for almost all pairs of partition classes the bipartite graph between them is random-like. Both the upper bound we get for the order of this partition and the quality of the pseudorandom behavior of the edges between these pairs depend only on an accuracy parameter ε which we are at liberty to choose.

In this example, the object to be decomposed is the edge set E of a given arbitrary graph G = (V, E), which belongs to the "general class" of all graphs. The structured component then represents the pairs (V_i, V_j) of partition classes together with the density of edges between them, and it has low complexity because the order of the partition is uniformly bounded for all graphs. The pseudorandom component represents the actual edges between these pairs, and has a random-like property known as ε-regularity which we will define in the next chapter. This result has many applications in Combinatorics and Computer Science (see, e.g., [20, 21] for a survey), and it has inspired numerous other decomposition results in a similar spirit, both inside and outside Graph Theory.

In this work we aim to survey many decomposition theorems of this form present in the literature. We provide a unified framework for proving them and present some new results along these same lines.

1.1 High-level overview of the framework

In our setting, the combinatorial objects to be decomposed will be represented as functions defined over a discrete space X. This identification does not entail much loss of generality, since given a combinatorial object O (such as a graph, hypergraph or additive group), we may usually identify some underlying discrete space X for this kind of object and then represent O as a function f_O defined on X.

We endow X with a probability measure P, so that the objects considered may be viewed as random variables, and define a family C of "low-complexity" subsets of X. The specifics of both the probability measure P and the structured family C will depend on the application at hand, and it is from them that we will define our notions of complexity and pseudorandomness.

The sets belonging to C are seen as the basic structured sets, which have complexity 1, and any subset of X which may be obtained by boolean operations from at most k of these basic structured sets A_1, …, A_k ∈ C is said to have complexity at most k according to C. We then say that two functions g, h : X → ℝ are ε-indistinguishable according to C if, for all sets A ∈ C, we have |E[(g − h) 1_A]| ≤ ε.
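To make the definition concrete, here is a minimal computational sketch, assuming the uniform measure on a finite set X and using a family of intervals as a stand-in for the basic structured sets in C; the functions, the set family and the value of ε are illustrative choices, not taken from the thesis. It estimates E[(g − h) 1_A] over each basic set and reports whether some set witnesses a discrepancy larger than ε; comparing a function against the constant function 1 is precisely the ε-pseudorandomness test described in the next paragraph.

import itertools
import random

def indistinguishable(g, h, X, C, eps):
    """Check eps-indistinguishability of g, h : X -> R with respect to the family C,
    using the uniform probability measure on the finite set X, so that
    E[(g - h) 1_A] = (1/|X|) * sum over x in A of (g(x) - h(x))."""
    for A in C:
        bias = sum(g(x) - h(x) for x in A) / len(X)
        if abs(bias) > eps:
            return False, A  # this basic structured set distinguishes g from h
    return True, None

# Toy example: X = {0, ..., n-1}, with intervals playing the role of the basic sets in C.
n = 1000
X = range(n)
C = [range(a, b) for a, b in itertools.combinations(range(0, n + 1, 100), 2)]

rng = random.Random(0)
coin = {x: float(2 * rng.randint(0, 1)) for x in X}  # i.i.d. values 0 or 2, mean about 1

g = lambda x: coin[x]                      # "random-like": mean 1, no bias on intervals
h = lambda x: 2.0 if x < n // 2 else 0.0   # "structured": all of its mass on the first half

print(indistinguishable(g, lambda x: 1.0, X, C, eps=0.15))  # (True, None) with overwhelming probability
print(indistinguishable(h, lambda x: 1.0, X, C, eps=0.15))  # (False, range(0, 200)): an interval detects the bias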
Intuitively, this notion of indistinguishability means that we are not able to effectively distinguish between h and g by taking their empirical averages over random elements chosen from one of the basic sets in C. A function f : X → ℝ is then said to be ε-pseudorandom if it is ε-indistinguishable from the constant function 1 on X. Thus pseudorandom functions are in some sense uniformly distributed over structured sets, mimicking random functions of mean 1 defined on X.

These concepts are closely related to the notions of pseudorandomness and indistinguishability in the area of Computational Complexity Theory (in the non-uniform setting). In that setting, we have a collection F of "efficiently computable" boolean functions f : X → {0, 1} (which are thought of as adversaries), and two distributions A and B on X are said to be ε-indistinguishable by F if

|P(f(A) = 1) − P(f(B) = 1)| ≤ ε   for all f ∈ F.

A distribution R is then said to be ε-pseudorandom for F if it is ε-indistinguishable from the uniform distribution U_X on X. Intuitively, this means that no adversary from the class F is able to distinguish R from U_X with non-negligible advantage.

This is completely equivalent to our definitions if we identify each function f in F with its support f^{-1}(1) ⊆ X, and identify the distributions A, B with the functions g(x) := P(A = x)·|X| and h(x) := P(B = x)·|X|. Then

|P(f(A) = 1) − P(f(B) = 1)| = |E[(g − h) 1_{f^{-1}(1)}]|,

where the expectation on the right-hand side is taken with respect to the uniform distribution on X; indeed, E[g · 1_{f^{-1}(1)}] = (1/|X|) Σ_{x ∈ f^{-1}(1)} P(A = x)·|X| = P(f(A) = 1), and similarly for h and B.

In our abstract decomposition theorems given in Chapter 2, it will be convenient to deal with σ-algebras on X rather than with subsets of X; since a σ-algebra on a finite space X is a finite collection of subsets of X, the intuition will be essentially the same. However, this change will make it simpler to apply tools such as the Cauchy-Schwarz inequality and Pythagoras' theorem, which will both be very important in our energy-increment arguments. Moreover, we will also require pseudorandom functions to have no correlation, in average value, with the structured sets, and thus to be ε-indistinguishable from the zero function on X. Since the expectation operator is linear, this "translation" in our definition makes no important difference.

The framework as described here will be taken up again in Chapter 6, when we talk about transference principles and the Dense Model Theorem.

1.2 Remarks on notation and terminology

We will be mainly interested in very large objects, and we use the usual asymptotic notation O, Ω and Θ, with subscripts indicating the parameters on which the implied constant is allowed to depend. For instance, O_{α,β}(X) denotes a quantity bounded in absolute value by C_{α,β}|X| for some quantity C_{α,β} depending only on α and β.

We write E_{a ∈ A, b ∈ B} to denote the expectation when a is chosen uniformly from the set A and b is chosen uniformly from the set B, both choices being independent. For real numbers a and b, we write x = a ± b to mean a − b ≤ x ≤ a + b. Given an integer n ≥ 1, we write [n] for the set {1, …, n}. If A is a set and k is an integer, we write (A choose k) for the collection of all k-element subsets of A.
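Returning to the framework of Section 1.1: the abstract decomposition theorems of Chapter 2 obtain the structured component as a conditional expectation onto a σ-algebra generated by a few basic structured sets, and the energy-increment arguments rely on Pythagoras' theorem for this projection. The sketch below only illustrates that mechanism under the uniform measure, with a made-up function and made-up sets (it is not the thesis's actual construction); it computes the two components and numerically checks the energy identity E[f²] = E[f_str²] + E[f_psr²].

def decompose(f, X, sets):
    """Split f : X -> R as f = f_str + f_psr with respect to the sigma-algebra
    generated by the given structured sets (uniform measure on the finite set X).
    f_str is the conditional expectation of f on that sigma-algebra: f is averaged
    over each atom of the common refinement of the sets, so its complexity is
    controlled by len(sets).  The remainder f_psr averages to zero on every atom,
    hence on every boolean combination of the generating sets."""
    signature = {x: tuple(x in A for A in sets) for x in X}  # atom containing x

    totals, counts = {}, {}
    for x in X:
        s = signature[x]
        totals[s] = totals.get(s, 0.0) + f(x)
        counts[s] = counts.get(s, 0) + 1
    atom_mean = {s: totals[s] / counts[s] for s in totals}

    f_str = {x: atom_mean[signature[x]] for x in X}
    f_psr = {x: f(x) - f_str[x] for x in X}
    return f_str, f_psr

# Toy example on X = {0, ..., 11} with two basic structured sets (illustrative choices).
X = range(12)
sets = [set(range(6)), {0, 2, 4, 6, 8, 10}]
f = lambda x: float(x % 3 == 0)

f_str, f_psr = decompose(f, X, sets)
energy = lambda g: sum(v * v for v in g.values()) / len(X)   # E[g^2] under the uniform measure
total = energy({x: f(x) for x in X})
print(abs(total - (energy(f_str) + energy(f_psr))) < 1e-12)  # True: Pythagoras' identity holds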