Probabilistic Methods in Combinatorics
Total Pages: 16
File Type: pdf, Size: 1020 KB
Recommended publications
Automorphisms of the Double Cover of a Circulant Graph of Valency at Most 7
AUTOMORPHISMS OF THE DOUBLE COVER OF A CIRCULANT GRAPH OF VALENCY AT MOST 7. ADEMIR HUJDUROVIĆ, ĐORĐE MITROVIĆ, AND DAVE WITTE MORRIS. Abstract. A graph X is said to be unstable if the direct product X × K2 (also called the canonical double cover of X) has automorphisms that do not come from automorphisms of its factors X and K2. It is nontrivially unstable if it is unstable, connected, and non-bipartite, and no two distinct vertices of X have exactly the same neighbors. We find all of the nontrivially unstable circulant graphs of valency at most 7. (They come in several infinite families.) We also show that the instability of each of these graphs is explained by theorems of Steve Wilson. This is best possible, because there is a nontrivially unstable circulant graph of valency 8 that does not satisfy the hypotheses of any of Wilson's four instability theorems for circulant graphs. 1. Introduction. Let X be a circulant graph. (All graphs in this paper are finite, simple, and undirected.) Definition 1.1 ([16]). The canonical bipartite double cover of X is the bipartite graph BX with V(BX) = V(X) × {0, 1}, where (v, 0) is adjacent to (w, 1) in BX ⟺ v is adjacent to w in X. Letting S2 be the symmetric group on the 2-element set {0, 1}, it is clear that the direct product Aut X × S2 is a subgroup of Aut BX. We are interested in cases where this subgroup is proper: Definition 1.2 ([12, p. ...
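To make the double-cover construction concrete, here is a minimal Python sketch (my own illustration, not code from the paper) that builds the canonical bipartite double cover BX of a graph X given as a dict of adjacency sets; the vertex encoding and helper name are assumptions.

```python
def double_cover(adj):
    """Return the canonical double cover BX of X.

    Vertices of BX are the pairs (v, 0) and (v, 1); (v, 0) is adjacent to (w, 1)
    in BX exactly when v is adjacent to w in X.
    """
    cover = {(v, b): set() for v in adj for b in (0, 1)}
    for v, neighbours in adj.items():
        for w in neighbours:
            cover[(v, 0)].add((w, 1))
            cover[(w, 1)].add((v, 0))
    return cover

# Example: the 5-cycle C5 (a circulant graph); since C5 is connected and
# non-bipartite, its double cover is the 10-cycle, again 2-regular.
c5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
bc5 = double_cover(c5)
assert all(len(nbrs) == 2 for nbrs in bc5.values())
```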
Graph Operations and Upper Bounds on Graph Homomorphism Counts
Graph Operations and Upper Bounds on Graph Homomorphism Counts. Luke Sernau. March 9, 2017. Abstract. We construct a family of counterexamples to a conjecture of Galvin [5], which stated that for any n-vertex, d-regular graph G and any graph H (possibly with loops), hom(G, H) ≤ max{ hom(K_{d,d}, H)^{n/2d}, hom(K_{d+1}, H)^{n/(d+1)} }, where hom(G, H) is the number of homomorphisms from G to H. By exploiting properties of the graph tensor product and graph exponentiation, we also find new infinite families of H for which the bound stated above on hom(G, H) holds for all n-vertex, d-regular G. In particular we show that if H_WR is the complete looped path on three vertices, also known as the Widom-Rowlinson graph, then hom(G, H_WR) ≤ hom(K_{d+1}, H_WR)^{n/(d+1)} for all n-vertex, d-regular G. This verifies a conjecture of Galvin. (arXiv:1510.01833v3 [math.CO], 8 Mar 2017.) 1 Introduction. Graph homomorphisms are an important concept in many areas of graph theory. A graph homomorphism is simply an adjacency-preserving map between a graph G and a graph H. That is, for given graphs G and H, a function φ : V(G) → V(H) is said to be a homomorphism from G to H if for every edge uv ∈ E(G), we have φ(u)φ(v) ∈ E(H). (Here, as throughout, all graphs are simple, meaning without multi-edges, but they are permitted to have loops.) ...
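Since the abstract is entirely about the quantity hom(G, H), a brute-force Python sketch (my own illustration, feasible only for very small graphs) may help fix the definition; the function name and adjacency-dict encoding are assumptions, not the author's.

```python
from itertools import product

def hom_count(G, H):
    """Count maps phi: V(G) -> V(H) with phi(u)phi(v) in E(H) for every uv in E(G).

    G and H map each vertex to the set of its neighbours; a loop in H is a
    vertex listed among its own neighbours.
    """
    vg, vh = list(G), list(H)
    index = {v: i for i, v in enumerate(vg)}
    edges = [(u, v) for u in G for v in G[u] if index[u] <= index[v]]
    count = 0
    for choice in product(vh, repeat=len(vg)):
        phi = dict(zip(vg, choice))
        if all(phi[v] in H[phi[u]] for u, v in edges):
            count += 1
    return count

k3 = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
assert hom_count(k3, k3) == 6   # the six proper 3-colourings of a triangle
```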
A Note on Indicator-Functions
Proceedings of the American Mathematical Society, Volume 39, Number 1, June 1973. A NOTE ON INDICATOR-FUNCTIONS. J. MYHILL. Abstract. A system has the existence-property for abstracts (existence-property for numbers, disjunction-property) if whenever ⊢(∃x)A(x), then ⊢A(t) for some abstract t (⊢A(n) for some numeral n); and the disjunction-property if whenever ⊢A∨B, then ⊢A or ⊢B (here (∃x)A(x), A, B are closed). We show that the existence-property for numbers and the disjunction-property are never provable in the system itself; more strongly, the (classically) recursive functions that encode these properties are not provably recursive functions of the system. It is however possible for a system (e.g., ZF + V = L) to prove the existence-property for abstracts for itself. In [1], I presented an intuitionistic form Z of Zermelo-Fraenkel set theory (without choice and with weakened regularity) and proved for it the disjunction-property (if ⊢A∨B (closed), then ⊢A or ⊢B), the existence-property (if ⊢(∃x)A(x) (closed), then ⊢A(t) for a (closed) comprehension term t), and the existence-property for numerals (if ⊢(∃x ∈ ω)A(x) (closed), then ⊢A(n) for a numeral n). In the appendix to [1], I enquired whether these results could be proved constructively; in particular, whether we could find, primitive recursively from the proof of A∨B, whether it was A or B that was provable, and likewise in the other two cases. Discussion of this question is facilitated by introducing the notion of indicator-functions in the sense of the following Definition. Let T be a consistent theory which contains Heyting arithmetic (possibly by relativization of quantifiers). ...
Soft Set Theory First Results
Computers and Mathematics with Applications 37 (1999) 19-31. Soft Set Theory First Results. D. MOLODTSOV, Computing Center of the Russian Academy of Sciences, 40 Vavilova Street, Moscow 117967, Russia. Abstract. The soft set theory offers a general mathematical tool for dealing with uncertain, fuzzy, not clearly defined objects. The main purpose of this paper is to introduce the basic notions of the theory of soft sets, to present the first results of the theory, and to discuss some problems of the future. (c) 1999 Elsevier Science Ltd. All rights reserved. Keywords: Soft set, Soft function, Soft game, Soft equilibrium, Soft integral, Internal regularization. 1. INTRODUCTION. To solve complicated problems in economics, engineering, and environment, we cannot successfully use classical methods because of various uncertainties typical for those problems. There are three theories: theory of probability, theory of fuzzy sets, and the interval mathematics, which we can consider as mathematical tools for dealing with uncertainties. But all these theories have their own difficulties. Theory of probabilities can deal only with stochastically stable phenomena. Without going into mathematical details, we can say, e.g., that for a stochastically stable phenomenon there should exist a limit of the sample mean μ_n in a long series of trials. The sample mean μ_n is defined by μ_n = (1/n) Σ_{i=1}^{n} x_i, where x_i is equal to 1 if the phenomenon occurs in the trial, and x_i is equal to 0 if the phenomenon does not occur. To test the existence of the limit, we must perform a large number of trials. ...
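As a rough illustration of the basic notion (my own sketch, not Molodtsov's notation), a soft set over a universe U can be modelled as a map from parameters to subsets of U. The house example and the intersection operation below are illustrative assumptions drawn from the later soft-set literature, not results of this paper.

```python
# A soft set (F, A) over a universe U: each parameter in A is mapped to the
# subset of U it approximately describes.
universe = frozenset({"h1", "h2", "h3", "h4"})

soft_set = {                       # hypothetical "attractive houses" soft set
    "cheap":  frozenset({"h1", "h3"}),
    "modern": frozenset({"h2", "h3", "h4"}),
}

def restricted_intersection(f, g):
    """Soft set on the shared parameters, with intersected approximations
    (an operation from the later soft-set literature, shown only as an example)."""
    return {e: f[e] & g[e] for e in f.keys() & g.keys()}

other = {"cheap": frozenset({"h1", "h2"}), "wooden": frozenset({"h4"})}
print(restricted_intersection(soft_set, other))   # {'cheap': frozenset({'h1'})}
```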
Information Theory
Information Theory Gurkirat,Harsha,Parth,Puneet,Rahul,Tushant November 8, 2015 1 Introduction[1] Information theory is a branch of applied mathematics, electrical engineering, and computer science involving the quantification of information . Firstly, it provides a general way to deal with any information , be it language, music, binary codes or pictures by introducing a measure of information , en- tropy as the average number of bits(binary for our convenience) needed to store one symbol in a message. This abstraction of information through entropy allows us to deal with all kinds of information mathematically and apply the same principles without worrying about it's type or form! It is remarkable to note that the concept of entropy was introduced by Ludwig Boltzmann in 1896 as a measure of disorder in a thermodynamic system , but serendipitously found it's way to become a basic concept in Information theory introduced by Shannon in 1948 . Information theory was developed as a way to find fundamental limits on the transfer , storage and compression of data and has found it's way into many applications including Data Compression and Channel Coding. 2 Basic definitions 2.1 Intuition behind "surprise" or "information"[2] Suppose someone tells you that the sun will come up tomorrow.This probably isn't surprising. You already knew the sun will come up tomorrow, so they didn't't really give you much information. Now suppose someone tells you that aliens have just landed on earth and have picked you to be the spokesperson for the human race. This is probably pretty surprising, and they've given you a lot of information.The rarer something is, the more you've learned if you discover that it happened. -
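Since entropy is introduced here as the average number of bits needed to store one symbol, a tiny Python sketch (my own illustration, not from these notes) makes the measure concrete.

```python
import math

def entropy(probs):
    """Shannon entropy H(p) = -sum p_i * log2(p_i), ignoring zero-probability symbols."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin carries 1 bit per flip; a heavily biased coin carries far less,
# matching the intuition that rare outcomes are the informative ones.
print(entropy([0.5, 0.5]))    # 1.0
print(entropy([0.99, 0.01]))  # about 0.081
```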
What Is Fuzzy Probability Theory?
Foundations of Physics, Vol. 30, No. 10, 2000. What Is Fuzzy Probability Theory? S. Gudder. Received March 4, 1998; revised July 6, 2000. The article begins with a discussion of sets and fuzzy sets. It is observed that identifying a set with its indicator function makes it clear that a fuzzy set is a direct and natural generalization of a set. Making this identification also provides simplified proofs of various relationships between sets. Connectives for fuzzy sets that generalize those for sets are defined. The fundamentals of ordinary probability theory are reviewed and these ideas are used to motivate fuzzy probability theory. Observables (fuzzy random variables) and their distributions are defined. Some applications of fuzzy probability theory to quantum mechanics and computer science are briefly considered. 1. INTRODUCTION. What do we mean by fuzzy probability theory? Isn't probability theory already fuzzy? That is, probability theory does not give precise answers but only probabilities. The imprecision in probability theory comes from our incomplete knowledge of the system, but the random variables (measurements) still have precise values. For example, when we flip a coin we have only a partial knowledge about the physical structure of the coin and the initial conditions of the flip. If our knowledge about the coin were complete, we could predict exactly whether the coin lands heads or tails. However, we still assume that after the coin lands, we can tell precisely whether it is heads or tails. In fuzzy probability theory, we also have an imprecision in our measurements, and random variables must be replaced by fuzzy random variables and events by fuzzy events. ...
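The identification of a set with its indicator function can be sketched directly in code (my own illustration; the min/max connectives shown are Zadeh's standard choices and only one way to generalize the set operations, not necessarily the ones defined in this article).

```python
def indicator(subset):
    """An ordinary set viewed as a {0, 1}-valued membership function."""
    return lambda x: 1.0 if x in subset else 0.0

# A fuzzy set simply relaxes the membership grade to any value in [0, 1].
def fuzzy_union(f, g):        return lambda x: max(f(x), g(x))
def fuzzy_intersection(f, g): return lambda x: min(f(x), g(x))
def fuzzy_complement(f):      return lambda x: 1.0 - f(x)

tall = lambda height_cm: min(1.0, max(0.0, (height_cm - 160) / 40))  # fuzzy event "tall"
crisp_adult_age = indicator(range(18, 120))                          # crisp set as a fuzzy set
print(tall(180), fuzzy_complement(tall)(180), crisp_adult_age(25))   # 0.5 0.5 1.0
```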
Probabilistic Methods in Combinatorics
Lecture notes (MIT 18.226, Fall 2020): Probabilistic Methods in Combinatorics. Yufei Zhao, Massachusetts Institute of Technology. [email protected], http://yufeizhao.com/pm/

Contents
1 Introduction (p. 7)
  1.1 Lower bounds to Ramsey numbers (p. 7)
    1.1.1 Erdős' original proof (p. 8)
    1.1.2 Alteration method (p. 9)
    1.1.3 Lovász local lemma (p. 10)
  1.2 Set systems (p. 11)
    1.2.1 Sperner's theorem (p. 11)
    1.2.2 Bollobás two families theorem (p. 12)
    1.2.3 Erdős–Ko–Rado theorem on intersecting families (p. 13)
  1.3 2-colorable hypergraphs (p. 13)
  1.4 List chromatic number of K_{n,n} (p. 15)
2 Linearity of expectations (p. 17)
  2.1 Hamiltonian paths in tournaments (p. 17)
  2.2 Sum-free set (p. 18)
  2.3 Turán's theorem and independent sets (p. 18)
  2.4 Crossing number inequality (p. 20)
    2.4.1 Application to incidence geometry (p. 22)
  2.5 Dense packing of spheres in high dimensions (p. 23)
  2.6 Unbalancing lights (p. 26)
3 Alterations (p. 28)
  3.1 Ramsey numbers (p. 28)
  3.2 Dominating set in graphs (p. 28)
  3.3 Heilbronn triangle problem (p. 29)
  3.4 Markov's inequality (p. 31)
  3.5 High girth and high chromatic number (p. 31)
  3.6 Greedy random coloring (p. 32)
4 Second moment method (p. 34)
  4.1 Threshold functions for small subgraphs in random graphs (p. 36)
  4.2 Existence of thresholds (p. 42)
  4.3 Clique number of a random graph (p. 47)
  4.4 Hardy–Ramanujan theorem on the number of prime divisors (p. 49)
  4.5 Distinct sums (p. 52)
  4.6 Weierstrass approximation theorem (p. 54)
5 Chernoff bound (p. 56)
  5.1 Discrepancy ...
Measurable Functions and Random Variables
2. Measurable functions and random variables. 2.1. Measurable functions. Let (E, ℰ) and (G, 𝒢) be measurable spaces. A function f : E → G is measurable if f⁻¹(A) ∈ ℰ whenever A ∈ 𝒢. Here f⁻¹(A) denotes the inverse image of A by f: f⁻¹(A) = {x ∈ E : f(x) ∈ A}. Usually G = ℝ or G = [−∞, ∞], in which case 𝒢 is always taken to be the Borel σ-algebra. If E is a topological space and ℰ = ℬ(E), then a measurable function on E is called a Borel function. For any function f : E → G, the inverse image preserves set operations: f⁻¹(⋃ᵢ Aᵢ) = ⋃ᵢ f⁻¹(Aᵢ) and f⁻¹(G \ A) = E \ f⁻¹(A). Therefore, the set {f⁻¹(A) : A ∈ 𝒢} is a σ-algebra on E and {A ⊆ G : f⁻¹(A) ∈ ℰ} is a σ-algebra on G. In particular, if 𝒢 = σ(𝒜) and f⁻¹(A) ∈ ℰ whenever A ∈ 𝒜, then {A : f⁻¹(A) ∈ ℰ} is a σ-algebra containing 𝒜 and hence 𝒢, so f is measurable. In the case G = ℝ, the Borel σ-algebra is generated by intervals of the form (−∞, y], y ∈ ℝ, so, to show that f : E → ℝ is Borel measurable, it suffices to show that {x ∈ E : f(x) ≤ y} ∈ ℰ for all y. If E is any topological space and f : E → ℝ is continuous, then f⁻¹(U) is open in E and hence measurable, whenever U is open in ℝ; the open sets U generate ℬ, so any continuous function is measurable. ...
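For finite spaces the definition can be checked mechanically; the following Python sketch (my own toy illustration, not from the notes) tests measurability by verifying that every preimage of a set in the target σ-algebra lies in the source σ-algebra.

```python
def preimage(f, domain, A):
    """The inverse image f^{-1}(A) as a frozenset."""
    return frozenset(x for x in domain if f(x) in A)

def is_measurable(f, domain, sigma_E, sigma_G):
    """sigma_E and sigma_G are collections of frozensets (finite sigma-algebras)."""
    return all(preimage(f, domain, A) in sigma_E for A in sigma_G)

# Example: E = {1, 2, 3, 4} with the sigma-algebra generated by the atoms {1, 2} and {3, 4}.
E = {1, 2, 3, 4}
sigma_E = {frozenset(), frozenset({1, 2}), frozenset({3, 4}), frozenset(E)}
sigma_G = {frozenset(), frozenset({0}), frozenset({1}), frozenset({0, 1})}
f = lambda x: 0 if x <= 2 else 1        # constant on each atom, hence measurable
assert is_measurable(f, E, sigma_E, sigma_G)
```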
Expectation and Functions of Random Variables
POL 571: Expectation and Functions of Random Variables. Kosuke Imai, Department of Politics, Princeton University. March 10, 2006. 1 Expectation and Independence. To gain further insights about the behavior of random variables, we first consider their expectation, which is also called mean value or expected value. The definition of expectation follows our intuition. Definition 1. Let X be a random variable and g be any function. 1. If X is discrete, then the expectation of g(X) is defined as E[g(X)] = Σ_{x∈𝒳} g(x) f(x), where f is the probability mass function of X and 𝒳 is the support of X. 2. If X is continuous, then the expectation of g(X) is defined as E[g(X)] = ∫_{−∞}^{∞} g(x) f(x) dx, where f is the probability density function of X. If E(X) = −∞ or E(X) = ∞ (i.e., E(|X|) = ∞), then we say the expectation E(X) does not exist. One sometimes writes E_X to emphasize that the expectation is taken with respect to a particular random variable X. For a continuous random variable, the expectation is sometimes written as E[g(X)] = ∫_{−∞}^{∞} g(x) dF(x), where F(x) is the distribution function of X. The expectation operator inherits its properties from those of summation and integral. In particular, the following theorem shows that expectation preserves inequalities and is a linear operator. Theorem 1 (Expectation). Let X and Y be random variables with finite expectations. 1. If g(x) ≥ h(x) for all x ∈ ℝ, then E[g(X)] ≥ E[h(X)]. ...
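A direct Python sketch of the discrete case (my own illustration; the pmf-as-dict encoding is an assumption) computes E[g(X)] exactly as in Definition 1.

```python
def expectation(pmf, g=lambda x: x):
    """E[g(X)] for a discrete X whose pmf maps each support point x to f(x)."""
    return sum(g(x) * p for x, p in pmf.items())

die = {k: 1 / 6 for k in range(1, 7)}                 # fair six-sided die
mean = expectation(die)                               # 3.5
second_moment = expectation(die, lambda x: x ** 2)    # 91/6, about 15.17
variance = second_moment - mean ** 2                  # about 2.917
print(mean, second_moment, variance)
```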
Lombardi Drawings of Graphs
Lombardi Drawings of Graphs. Christian A. Duncan (1), David Eppstein (2), Michael T. Goodrich (2), Stephen G. Kobourov (3), and Martin Nöllenburg (2). (1) Department of Computer Science, Louisiana Tech Univ., Ruston, Louisiana, USA; (2) Department of Computer Science, University of California, Irvine, California, USA; (3) Department of Computer Science, University of Arizona, Tucson, Arizona, USA. Abstract. We introduce the notion of Lombardi graph drawings, named after the American abstract artist Mark Lombardi. In these drawings, edges are represented as circular arcs rather than as line segments or polylines, and the vertices have perfect angular resolution: the edges are equally spaced around each vertex. We describe algorithms for finding Lombardi drawings of regular graphs, graphs of bounded degeneracy, and certain families of planar graphs. 1 Introduction. The American artist Mark Lombardi [24] was famous for his drawings of social networks representing conspiracy theories. Lombardi used curved arcs to represent edges, leading to a strong aesthetic quality and high readability. Inspired by this work, we introduce the notion of a Lombardi drawing of a graph, in which edges are drawn as circular arcs with perfect angular resolution: consecutive edges are evenly spaced around each vertex. While not all vertices have perfect angular resolution in Lombardi's work, the even spacing of edges around vertices is clearly one of his aesthetic criteria; see Fig. 1. Traditional graph drawing methods rarely guarantee perfect angular resolution, but poor edge distribution can nevertheless lead to unreadable drawings. Additionally, while some tools provide options to draw edges as curves, most rely on straight-line edges, and it is known that maintaining good angular resolution can result in exponential drawing area for straight-line drawings of planar graphs [17, 25]. ...
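Perfect angular resolution has a simple quantitative core: the d edge directions at a degree-d vertex are spaced 2π/d apart. The Python sketch below (my own illustration, not the authors' drawing algorithm) just computes those evenly spaced tangent directions for one vertex.

```python
import math

def perfect_angles(degree, offset=0.0):
    """Evenly spaced edge directions (radians) around a vertex of the given degree."""
    return [offset + 2 * math.pi * k / degree for k in range(degree)]

# For a vertex of degree 5, consecutive edge directions differ by exactly 72 degrees.
angles = perfect_angles(5)
gaps = [math.degrees(angles[i + 1] - angles[i]) for i in range(4)]
assert all(abs(g - 72.0) < 1e-9 for g in gaps)
```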
More on Discrete Random Variables and Their Expectations
MASSACHUSETTS INSTITUTE OF TECHNOLOGY, 6.436J/15.085J, Fall 2018, Lecture 6: MORE ON DISCRETE RANDOM VARIABLES AND THEIR EXPECTATIONS. Contents: 1. Comments on expected values; 2. Expected values of some common random variables; 3. Covariance and correlation; 4. Indicator variables and the inclusion-exclusion formula; 5. Conditional expectations. 1 COMMENTS ON EXPECTED VALUES. (a) Recall that E[X] is well defined unless both sums Σ_{x:x<0} x p_X(x) and Σ_{x:x>0} x p_X(x) are infinite. Furthermore, E[X] is well-defined and finite if and only if both sums are finite. This is the same as requiring that E[|X|] = Σ_x |x| p_X(x) < ∞. Random variables that satisfy this condition are called integrable. (b) Note that for any random variable X, E[X²] is always well-defined (whether finite or infinite), because all the terms in the sum Σ_x x² p_X(x) are nonnegative. If we have E[X²] < ∞, we say that X is square integrable. (c) Using the inequality |x| ≤ 1 + x², we have E[|X|] ≤ 1 + E[X²], which shows that a square integrable random variable is always integrable. Similarly, for every positive integer r, if E[|X|^r] is finite, then E[|X|^l] is also finite for every l < r (fill in the details). Exercise 1. Recall that the r-th central moment of a random variable X is E[(X − E[X])^r]. Show that if the r-th central moment of an almost surely non-negative random variable X is finite, then its l-th central moment is also finite for every l < r. (d) Because of the formula var(X) = E[X²] − (E[X])², we see that: (i) if X is square integrable, the variance is finite; (ii) if X is integrable, but not square integrable, the variance is infinite; (iii) if X is not integrable, the variance is undefined. ...
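Case (ii) can be illustrated numerically with a Python sketch (my own example, not from the lecture): a pmf decaying like 1/k³ is integrable but not square integrable, so its variance is infinite.

```python
# Partial sums for a pmf p(k) proportional to 1/k^3 on k = 1, 2, 3, ...:
# E[X] converges (like sum 1/k^2) while E[X^2] diverges (like the harmonic series).
Z = sum(1 / k ** 3 for k in range(1, 10 ** 6))   # normalizing constant, about 1.202

def partial_moment(r, n_terms):
    """Partial sum of E[X^r] over the first n_terms support points."""
    return sum(k ** r * (1 / k ** 3) / Z for k in range(1, n_terms + 1))

print(partial_moment(1, 10 ** 3), partial_moment(1, 10 ** 6))   # both near 1.37
print(partial_moment(2, 10 ** 3), partial_moment(2, 10 ** 6))   # roughly 6.2 vs 12.0
```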
The Operations Invariant Properties on Graphs
International Conference on Education Technology and Information System (ICETIS 2013). The Operations Invariant Properties on Graphs. Yanzhong Hu, Gang Cheng, School of Computer Science, Hubei University of Technology, Wuhan, 430068, China. [email protected], [email protected]. Keywords: Invariant property; Hamilton cycle; Cartesian product; Tensor product. Abstract. To determine whether or not a given graph has a Hamilton cycle (or is a planar graph), we define operation-invariant properties on graphs and discuss the forms these invariant properties take under the Cartesian product and tensor product graph operations. The main conclusions are: the Hamiltonicity of a graph is invariant under the Cartesian product, and the non-planarity of a graph is invariant under the tensor product. Applying these principles, we verify that a Hamilton cycle exists in the hypercube and that the Desargues graph is non-planar. INTRODUCTION. The planarity, Eulerian property, Hamiltonicity, bipartiteness, spectrum, and so on, are often difficult to determine for a given graph. The traditional research method adopts the idea of direct proof. In recent years, an indirect proof method has been put forward: (Fang Xie & Yanzhong Hu, 2010) proves the non-planarity of the Petersen graph by using the non-planarity of K3,3, i.e., it deduces properties of the graph G from properties of the graph H, where H is a subgraph of G. However, when it comes to deducing other properties, the result can be contrary to what we expect. For example, Fig. 1(a) is a Hamiltonian graph which is a subgraph of Fig. ...
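To make the Cartesian-product operation concrete, here is a short Python sketch (my own illustration, not the authors' code) that builds the Cartesian product of two graphs given as adjacency dicts; iterating the product of K2 with itself yields the hypercubes mentioned in the abstract.

```python
def cartesian_product(G, H):
    """Cartesian product: (u1, v1) ~ (u2, v2) iff u1 = u2 and v1 ~ v2 in H,
    or v1 = v2 and u1 ~ u2 in G."""
    prod = {(u, v): set() for u in G for v in H}
    for u in G:
        for v in H:
            prod[(u, v)] |= {(u, w) for w in H[v]}   # edges inside the copy of H at u
            prod[(u, v)] |= {(w, v) for w in G[u]}   # edges inside the copy of G at v
    return prod

k2 = {0: {1}, 1: {0}}
q2 = cartesian_product(k2, k2)    # the 4-cycle
q3 = cartesian_product(q2, k2)    # the 3-dimensional hypercube, which is Hamiltonian
assert all(len(nbrs) == 3 for nbrs in q3.values())
```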