Probabilistic Methods in Combinatorics


Joshua Erde
Department of Mathematics, Universität Hamburg

Contents

1 Preliminaries 5
  1.1 Probability Theory 5
  1.2 Useful Estimates 8
2 The Probabilistic Method 10
  2.1 Ramsey Numbers 10
  2.2 Set Systems 11
3 The First Moment Method 15
  3.1 Hamiltonian Paths in a Tournament 16
  3.2 Turán's Theorem 17
  3.3 Crossing Number of Graphs 18
4 Alterations 20
  4.1 Ramsey Numbers Again 20
  4.2 Graphs of High Girth and High Chromatic Number 21
5 Dependent Random Choice 24
  5.1 Turán Numbers of Bipartite Graphs 25
  5.2 The Ramsey Number of the Cube 26
  5.3 Improvements 27
6 The Second Moment Method 29
  6.1 Variance and Chebyshev's Inequality 29
  6.2 Threshold Functions 30
  6.3 Balanced Subgraphs 33
7 The Hamiltonicity Threshold 35
  7.1 The Connectivity Threshold 35
  7.2 Pósa's Rotation-Extension Technique 39
  7.3 The Hamiltonicity Threshold 42
8 Strong Concentration 44
  8.1 Motivation 44
  8.2 The Chernoff Bound 44
  8.3 Combinatorial Discrepancy 48
  8.4 A Lower Bound for the Binomial Distribution 49
9 The Lovász Local Lemma 52
  9.1 The Local Lemma 52
  9.2 Ramsey Bounds for the Last Time 55
  9.3 Directed Cycles 56
  9.4 The Linear Arboricity of Graphs 57
10 Martingales and Strong Concentration 62
  10.1 The Azuma–Hoeffding Inequality 62
  10.2 The Chromatic Number of a Dense Random Graph 66
  10.3 The Chromatic Number of Sparse Random Graphs 70
11 Talagrand's Inequality 73
  11.1 Longest Increasing Subsequence 75
  11.2 Chromatic Number of Graph Powers 76
  11.3 Exceptional Outcomes 80
12 Entropy Methods 86
  12.1 Basic Results 86
  12.2 Brégman's Theorem 88
  12.3 Shearer's Lemma and the Box Theorem 91
  12.4 Independent Sets in a Regular Bipartite Graph 95
  12.5 Bipartite Double Cover 96
13 Derandomization and Combinatorial Games 98
  13.1 Maximum Cuts in Graphs 98
  13.2 Ramsey Graphs 99
  13.3 Positional Games 100
  13.4 Weak Games 105
  13.5 The Neighbourhood Conjecture 107
14 The Algorithmic Local Lemma 111

Preface

These notes were used to lecture a course at UHH for masters-level students in the winter semester of 2015, and again in the summer semester of 2018. A large part of the course material was heavily based on the excellent set of lecture notes "The Probabilistic Method" by Matoušek and Vondrák, which themselves follow the book of the same name by Alon and Spencer. The latter cannot be recommended highly enough as an introductory text to the subject and covers most of the material in these lectures (and much more) in depth.

There are a few reasons for writing yet another set of notes on the same subject. Firstly there is simply the matter of length. German semesters are slightly longer than some other university terms, and so the lecture course was to run for 26 lectures of 90 minutes each, which would easily have exhausted the material in the previous notes with plenty of time to spare. This extra time has allowed us to include more examples of applications of the method, such as the beautiful result on the threshold for Hamiltonicity in Section 7, as well as chapters on wider topics not usually covered in an introductory course.

Secondly, the course was designed with graph theorists in mind, with examples chosen as much as possible with an emphasis on graph-theoretical problems. This was not always possible, and indeed in some cases not always preferable, as there are many beautiful and instructive examples arising naturally from set systems or hypergraphs, but hopefully the result was that the students were able to easily understand the statements of the results and the motivations for them.

The material from Section 5 on the method of dependent random choice comes from the survey paper "Dependent Random Choice" by Fox and Sudakov, which is a good introductory text if you wish to know more about the technique.
Similarly, the material from Section 15 on pseudorandomness follows in part the survey paper "Pseudo-random graphs" by Krivelevich and Sudakov. Section 11.3 comes from the paper "A stronger bound for the strong chromatic index" by Bruhn and Joos, and Section 14 follows in part a lecture of Alistair Sinclair.

1 Preliminaries

1.1 Probability Theory

This section is intended as a short introduction to the very basics of probability theory, covering only the basic facts about finite probability spaces that we will need to use in this course.

Definition. A probability space is a triple (Ω, Σ, P), where Ω is a set, Σ ⊆ 2^Ω is a σ-algebra (a non-empty collection of subsets of Ω which is closed under taking complements and countable unions/intersections), and P is a measure on Σ with P(Ω) = 1. That means:

• P is non-negative;
• P(∅) = 0;
• for every countable collection of disjoint sets {A_i} in Σ,

    P(⋃_{i=1}^∞ A_i) = Σ_{i=1}^∞ P(A_i).

The elements of Σ are called events and the elements of Ω are called elementary events. For an event A, P(A) is called the probability of A.

During this course we will mostly consider finite probability spaces, those where Ω is finite and Σ = 2^Ω. In this case the probability measure P is determined by the values it takes on elementary events. That is, given any function p : Ω → [0, 1] that satisfies Σ_{ω∈Ω} p(ω) = 1, the function on Σ given by P(A) = Σ_{ω∈A} p(ω) is a probability measure.

In a finite probability space, the most basic example of a probability measure is the uniform distribution on Ω, where

    P(A) = |A| / |Ω|   for all A ⊆ Ω.

An important example of a probability space, which we will return to throughout the course, is that of a random graph.

Definition.
The probability space of random graphs G(n, p) is a finite probability space whose elementary events are all graphs on a fixed set of n vertices, and in which the probability of a graph G with m edges is

    p(G) = p^m (1 − p)^(C(n,2) − m),

where C(n,2) = n(n−1)/2 is the number of potential edges. This corresponds to the usual notion of picking a random graph by including every potential edge independently with probability p. We will often denote an arbitrary elementary event from this space by G(n, p). We note that p can, and often will, be a function of n. Given a property of graphs P, we say that G(n, p) satisfies P almost surely, or with high probability, if

    lim_{n→∞} P(G(n, p) satisfies P) = 1.

Another natural, and often utilized, probability space related to graphs is G(n, M), where the elementary events are all graphs on n vertices with M edges, with the uniform probability measure. Except in certain cases, G(n, M) is much less nice to work with than G(n, p). We won't work with this space in the course, and mention only that, quite often, in applications you can avoid working with G(n, M) altogether by proving the result you want in G(n, p) for an appropriate p and using some standard theorems that translate results between the two spaces.

One elementary fact that we will use often is the following, often referred to as the union bound:

Lemma 1.1 (Union bound). For any A_1, A_2, ..., A_n ∈ Σ,

    P(⋃_{i=1}^n A_i) ≤ Σ_{i=1}^n P(A_i).

Proof. For each 1 ≤ i ≤ n let us define

    B_i = A_i \ (A_1 ∪ A_2 ∪ ... ∪ A_{i−1}).

Then B_i ⊆ A_i, and so P(B_i) ≤ P(A_i), and also ⋃ B_i = ⋃ A_i. Therefore, since the events B_1, B_2, ..., B_n are disjoint, by the additivity of P,

    P(⋃_{i=1}^n A_i) = P(⋃_{i=1}^n B_i) = Σ_{i=1}^n P(B_i) ≤ Σ_{i=1}^n P(A_i).

Definition. Two events A, B ∈ Σ are independent if

    P(A ∩ B) = P(A) P(B).

More generally, a set of events {A_1, A_2, ..., A_n} is mutually independent if, for any subset of indices I ⊆ [n],

    P(⋂_{i∈I} A_i) = Π_{i∈I} P(A_i).

It is important to note that the notion of mutual independence is stronger than simply having pairwise independence of all the pairs A_i, A_j.
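As a concrete illustration of the model G(n, p) defined above, here is a short Python sketch (our own, not part of the original notes; the function name is hypothetical) that samples a random graph by including each of the C(n,2) potential edges independently with probability p:

```python
import itertools
import random

def sample_gnp(n, p, seed=None):
    """Sample an edge list from G(n, p): each of the C(n, 2) potential
    edges is included independently with probability p."""
    rng = random.Random(seed)
    return [e for e in itertools.combinations(range(n), 2) if rng.random() < p]

# The expected number of edges is p * C(n, 2); for n = 100 and p = 1/2
# this is 2475, and samples concentrate around that value.
edges = sample_gnp(100, 0.5, seed=0)
print(len(edges))
```

Note that each edge set with m edges arises with probability exactly p^m (1 − p)^(C(n,2) − m), matching the definition above.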
Intuitively, the independence of two events A and B should mean that knowledge about whether or not A occurs does not influence the likelihood of B occurring. This intuition is made formal with the idea of conditional probability.

Definition. Given two events A, B ∈ Σ such that P(B) ≠ 0, we define the conditional probability of A, given that B occurs, as

    P(A | B) = P(A ∩ B) / P(B).

Note that, as expected, A and B are independent if and only if P(A | B) = P(A).

Definition. A real random variable on a probability space (Ω, Σ, P) is a function X : Ω → R that is P-measurable. (That is, for any a ∈ R, {ω ∈ Ω : X(ω) ≤ a} ∈ Σ.)

For the most part all the random variables we consider will be real-valued, so for convenience throughout the course we will suppress the prefix "real". However, in some contexts later in the course we will consider random variables which are functions from Ω to more general spaces. In the case of a finite probability space, any function X : Ω → R defines a random variable. Given a measurable set A ⊆ R, the probability that the value of X lies in A is P({ω ∈ Ω : X(ω) ∈ A}), which we will write as P(X ∈ A).
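To make the finite-space definitions concrete, here is a small sketch (our own example, not from the notes) that computes conditional probabilities exactly on the uniform space of two fair dice, and checks the relation between independence and conditioning:

```python
from fractions import Fraction
from itertools import product

# Finite probability space: two fair dice, uniform measure on 36 outcomes.
omega = list(product(range(1, 7), repeat=2))

def prob(event):
    # P(A) = |A| / |Omega| under the uniform measure.
    return Fraction(sum(1 for w in omega if event(w)), len(omega))

def cond_prob(a, b):
    # P(A | B) = P(A and B) / P(B), defined when P(B) != 0.
    return prob(lambda w: a(w) and b(w)) / prob(b)

first_is_six = lambda w: w[0] == 6
second_is_six = lambda w: w[1] == 6
sum_at_least_ten = lambda w: w[0] + w[1] >= 10

# The two dice are independent: P(A | B) = P(A) = 1/6.
print(cond_prob(first_is_six, second_is_six))    # 1/6
# Dependent events: conditioning on a large sum raises the probability.
print(cond_prob(first_is_six, sum_at_least_ten))  # 1/2
```

Exact rational arithmetic via `Fraction` makes the identity P(A | B) = P(A) for independent A, B visible without floating-point noise.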