3 A few more good inequalities, martingale variety

3.1 From independence to martingales
3.2 Hoeffding's inequality for martingales
3.3 Bennett's inequality for martingales
3.4 *Concentration of random polynomials
3.5 *Proof of the Kim-Vu inequality
3.6 Problems
3.7 Notes

Printed: 8 October 2015 (version 8Oct2015). © David Pollard. Mini-empirical.

Section 3.1 introduces the method for bounding tail probabilities using moment generating functions.

Section 3.2 discusses the Hoeffding inequality, both for sums of independent bounded random variables and for martingales with bounded increments.

Section 3.3 discusses the Bennett inequality, both for sums of independent random variables that are bounded above by a constant and their martingale analogs.

*Section 3.4 presents an extended application of a martingale version of the Bennett inequality to derive a version of the Kim-Vu inequality for polynomials in independent, bounded random variables.

3.1 From independence to martingales

Throughout this chapter $\{(S_i, \mathcal{F}_i) : i = 1, \dots, n\}$ is a martingale on some probability space $(\Omega, \mathcal{F}, P)$. That is, we have sub-sigma-fields $\mathcal{F}_0 \subseteq \mathcal{F}_1 \subseteq \cdots \subseteq \mathcal{F}_n \subseteq \mathcal{F}$ and integrable, $\mathcal{F}_i$-measurable random variables $S_i$ for which $P_{\mathcal{F}_{i-1}} S_i = S_{i-1}$ almost surely. Equivalently, the martingale differences $\xi_i := S_i - S_{i-1}$ are integrable, $\mathcal{F}_i$-measurable, and $P_{\mathcal{F}_{i-1}} \xi_i = 0$ almost surely, for $i = 1, \dots, n$. In that case $S_i = S_0 + \xi_1 + \cdots + \xi_i = S_{i-1} + \xi_i$. For simplicity I assume $\mathcal{F}_0$ is trivial, so that $S_0 = \mu = P S_n$.
Also, I usually omit the "almost sure" qualifiers, unless there is some good reason to remind you that the conditional expectations are unique only up to almost sure equivalence. And to avoid a plethora of subscripts, I abbreviate the conditional expectation operator $P_{\mathcal{F}_{i-1}}$ to $P_{i-1}$.

The inequalities from Sections 2.5 and 2.6 have martingale analogs, which have proved particularly useful for the development of concentration inequalities. The analog of the Hoeffding inequality is often attributed to Azuma (1967), even though Hoeffding (1963, pages 17-18) had already noted that

"The inequalities of this section can be strengthened in the following way. Let $S_m = X_1 + \cdots + X_m$ for $m = 1, 2, \dots, n$. Furthermore, the inequalities of Theorems 1 and 2 remain true if the assumption that $X_1, X_2, \dots, X_n$ are independent is replaced by the weaker assumption that the sequence $S'_m = S_m - ES_m$, $m = 1, 2, \dots, n$, is a martingale, that is, .... Indeed, Doob's inequality (2.17) is true under this assumption. On the other hand, (2.18) implies that the conditional mean of $X_m$ for $S'_{m-1}$ fixed is equal to its unconditional mean. A slight modification of the proofs of Theorems 1 and 2 yields the stated result."

In principle, the martingale results can be proved by conditional analogs of the moment generating function technique described in Section 2.2, with just a few precautions to avoid problems with negligible sets. In that Section I started with a random variable $X$ that lives on some probability space $(\Omega, \mathcal{F}, P)$. By means of arguments involving differentiation inside expectations I derived a Taylor expansion for the cumulant generating function,

<1>   $L(\theta) = \log P e^{\theta X} = \theta P X + \tfrac{1}{2} \theta^2 \operatorname{var}_{\theta^*}(X),$

where $\theta^*$ lies between $0$ and $\theta$ and the variance is calculated under the probability measure $P_{\theta^*}$ that has density $\exp(\theta^* X - L(\theta^*))$ with respect to $P$.
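The key fact behind expansion <1> is that $L'(\theta)$ equals the mean of $X$ under the tilted measure $P_\theta$. A minimal numerical sketch, for a Bernoulli variable (the parameter values and variable names below are illustrative, not from the text):

```python
import math

# Check that L'(theta) equals the mean of X under the tilted measure
# dP_theta/dP = exp(theta*x - L(theta)), for X ~ Bernoulli(p).
p = 0.3
support = [(0.0, 1.0 - p), (1.0, p)]  # (value, probability)

def L(theta):
    # Cumulant generating function L(theta) = log E exp(theta X).
    return math.log(sum(q * math.exp(theta * x) for x, q in support))

def tilted_mean(theta):
    # Mean of X under P_theta, i.e. E[X exp(theta X)] / E[exp(theta X)].
    M = math.exp(L(theta))
    return sum(x * q * math.exp(theta * x) / M for x, q in support)

theta, h = 0.7, 1e-6
numeric_derivative = (L(theta + h) - L(theta - h)) / (2 * h)
print(abs(numeric_derivative - tilted_mean(theta)) < 1e-8)  # True
```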
The obvious analog for expectations conditional on some sub-sigma-field $\mathcal{G}$ of $\mathcal{F}$ is

<2>   $\log P_{\mathcal{G}} e^{\theta X} = \theta P_{\mathcal{G}} X + \tfrac{1}{2} \theta^2 \operatorname{var}_{\mathcal{G},\theta^*}(X)$ almost surely.

Here the conditional expectation $P_{\mathcal{G}} e^{\theta X}$ is defined in the usual Kolmogorov way, as the almost surely unique $\mathcal{G}$-measurable random variable $M_\omega(\theta)$ for which

$P\bigl(g(\omega) e^{\theta X}\bigr) = P\bigl(g(\omega) M_\omega(\theta)\bigr)$

at least for all nonnegative, $\mathcal{G}$-measurable random variables $g$. The random variable $M_\omega(\theta)$ could be changed on some $\mathcal{G}$-measurable negligible set $N_\theta$ without affecting the defining equality. Unfortunately, such negligible ambiguities can become a major obstacle if we want the map $\theta \mapsto M_\omega(\theta)$ to be differentiable, or even continuous.

Regular conditional distributions eliminate the obstacle, by consolidating all the bookkeeping with negligible sets into one step. Recall that the distribution of a real-valued random variable $X$ is a probability measure $Q$ on $\mathcal{B}(\mathbb{R})$ for which $P f(X) = Q f$, at least for all bounded, $\mathcal{B}(\mathbb{R})$-measurable real functions $f$. A regular conditional distribution for $X$ given a sub-sigma-field $\mathcal{G}$ is a Markov kernel, a set $\{P_\omega : \omega \in \Omega\}$ of probability measures on $\mathcal{B}(\mathbb{R})$ for which $P_\omega f$ is a version of the conditional expectation $P_{\mathcal{G}} f(X)$ whenever it is well defined. That is, $\omega \mapsto P_\omega f$ is $\mathcal{G}$-measurable and

$P\bigl(g(\omega) f(X(\omega))\bigr) = P\bigl(g(\omega) P_\omega f\bigr)$

for a suitably large collection of $\mathcal{B}(\mathbb{R})$-measurable functions $f$ and $\mathcal{G}$-measurable functions $g$. Such regular conditional distributions always exist (Breiman, 1968, Section 4.3).

Remark. As noted by Breiman (1968, page 80), regular conditional distributions also justify the familiar idea that for the purpose of calculating $P_{\mathcal{G}}$ conditional expectations we may treat $\mathcal{G}$-measurable functions as constants. For example, if $f$ is a bounded $\mathcal{B}(\mathbb{R}^2)\backslash\mathcal{B}(\mathbb{R})$-measurable real function on $\mathbb{R}^2$ and $G$ is a $\mathcal{G}$-measurable random variable then $P_{\mathcal{G}} f(X, G) = P_\omega^x f(x, G(\omega))$ almost surely.
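The "treat $\mathcal{G}$-measurable variables as constants" rule from the Remark can be seen concretely on a finite space, where the regular conditional distribution is just a table of probabilities per atom of $\mathcal{G}$. A toy sketch (the space, the variables $X$ and $G$, and the function $f$ are all illustrative choices):

```python
import itertools
from fractions import Fraction

# Uniform probability on {0,1}^2; G generates the sub-sigma-field sigma(G).
half = Fraction(1, 2)
omegas = list(itertools.product([0, 1], repeat=2))
prob = {w: Fraction(1, 4) for w in omegas}

X = lambda w: w[0] + w[1]
G = lambda w: w[0]

def P_w(w):
    # Regular conditional distribution for X given sigma(G); it depends on
    # w only through G(w): mass 1/2 at G(w) and 1/2 at G(w)+1.
    return {G(w): half, G(w) + 1: half}

f = lambda x, g: x * x + 3 * g  # any bounded function of (x, g)

for g0 in (0, 1):
    atom = [w for w in omegas if G(w) == g0]
    total = sum(prob[w] for w in atom)
    # Direct conditional expectation of f(X, G) on the atom {G = g0}.
    direct = sum(prob[w] * f(X(w), G(w)) for w in atom) / total
    # Kernel computation: integrate x -> f(x, g0) against P_w, holding
    # the G-measurable argument fixed at its value on the atom.
    kernel = sum(p * f(x, g0) for x, p in P_w(atom[0]).items())
    assert direct == kernel
print("ok")
```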
You might find it instructive to prove this assertion using a generating class argument, starting from the fact that it is valid when $f$ factorizes: $f(x, y) = f_1(x) f_2(y)$.

If we choose $P_\omega e^{\theta x}$ as the version of the conditional moment generating function $M_\omega(\theta)$ then the arguments from Section 2.2 work as before: on the interior of the interval $J_\omega = \{\theta \in \mathbb{R} : M_\omega(\theta) < \infty\}$ we have $L_\omega(\theta) := \log M_\omega(\theta)$ with

$L'_\omega(\theta) = P_\omega\bigl(x e^{\theta x}\bigr)/M_\omega(\theta) = P_{\omega,\theta}\, x$

$L''_\omega(\theta) = \operatorname{var}_{\omega,\theta}(x) = P_{\omega,\theta}(x - \mu_{\omega,\theta})^2 \quad\text{where } \mu_{\omega,\theta} := P_{\omega,\theta}\, x.$

As before, the probability measure $P_{\omega,\theta}$ is defined on $\mathcal{B}(\mathbb{R})$ by its density, $dP_{\omega,\theta}/dP_\omega = e^{x\theta}/M_\omega(\theta)$. If $0$ and $\theta$ are interior points of $J_\omega$ then equality <2> takes the cleaner form

$L_\omega(\theta) = \theta P_\omega x + \tfrac{1}{2} \theta^2 \operatorname{var}_{\omega,\theta^*}(x) \quad\text{for all } \omega,$

with $\theta^*$, which might depend on $\omega$, lying between $0$ and $\theta$.

Now suppose there exist functions $a_\omega$ and $b_\omega$ for which $P_\omega[a_\omega, b_\omega] = 1$. As in Section 2.5 we have

$P_\omega e^{\theta(x - P_\omega x)} \le \exp\bigl(\tfrac{1}{8} \theta^2 (b_\omega - a_\omega)^2\bigr)$

so that

<3>   $P_{\mathcal{G}} e^{\theta(X - P_{\mathcal{G}} X)} \le e^{\theta^2 R_\omega^2/8}$ a.s., if $R_\omega \ge b_\omega - a_\omega$.

Remark. Strangely enough, the first inequality does not require any sort of measurability assumption about $a_\omega$ and $b_\omega$, because $\omega$ is being held fixed. However, we usually need the range $R_\omega$ to be $\mathcal{G}$-measurable when integrating out <3>.

Similarly, if $P_\omega(-\infty, b_\omega] = 1$ and $v_\omega \ge P_\omega x^2$ then

<4>   $P_{\mathcal{G}} e^{\theta(X - P_{\mathcal{G}} X)} \le \exp\bigl(\tfrac{1}{2} \theta^2 v_\omega \Delta(\theta b_\omega)\bigr)$ for $\theta \ge 0$, a.s.,

where, as before, $\Delta$ denotes the convex increasing function on $\mathbb{R}$ for which $\tfrac{1}{2} x^2 \Delta(x) = \Phi_1(x) := e^x - 1 - x$.

3.2 Hoeffding's inequality for martingales

Consider the martingale $\{(S_i, \mathcal{F}_i) : i = 1, \dots, n\}$ described at the start of Section 3.1. Suppose the conditional distribution of the increment $\xi_i$ given $\mathcal{F}_{i-1}$ concentrates on an interval whose length is bounded above by an $\mathcal{F}_{i-1}$-measurable function $R_i(\omega)$.
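Both moment generating function bounds can be checked numerically in the unconditional case. The sketch below uses a centered Bernoulli variable (all parameter values are illustrative): the Hoeffding bound $\exp(\tfrac18\theta^2(b-a)^2)$ for a variable on $[a,b]$, and the Bennett bound $\exp(\tfrac12\theta^2 v\,\Delta(\theta b))$ for a variable bounded above by $b$ with second moment at most $v$.

```python
import math

# Centered Bernoulli: X = 1-p with prob p, X = -p with prob 1-p, so EX = 0,
# X lives in [a, b] = [-p, 1-p], and E X^2 = p(1-p).
p = 0.3
values = [(-p, 1.0 - p), (1.0 - p, p)]   # (value, probability)
a, b = -p, 1.0 - p
v = sum(x * x * q for x, q in values)    # second moment

def mgf(theta):
    return sum(q * math.exp(theta * x) for x, q in values)

def Delta(x):
    # Convex increasing function with (1/2) x^2 Delta(x) = e^x - 1 - x.
    return 2.0 * (math.exp(x) - 1.0 - x) / x**2 if x != 0 else 1.0

thetas = [0.1 * k for k in range(1, 41)]
hoeffding_ok = all(mgf(t) <= math.exp(t * t * (b - a) ** 2 / 8) + 1e-12
                   for t in thetas)
bennett_ok = all(mgf(t) <= math.exp(0.5 * t * t * v * Delta(t * b)) + 1e-12
                 for t in thetas)
print(hoeffding_ok, bennett_ok)  # True True
```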
Then inequality <3> gives

<5>   $P_{i-1} e^{\theta \xi_i} \le \exp\bigl(\tfrac{1}{8} \theta^2 R_i^2\bigr).$

If the $R_i$'s are actually constants then

$P e^{\theta S_i} = P\bigl(e^{\theta S_{i-1}}\, P_{i-1} e^{\theta \xi_i}\bigr) \le \exp\bigl(\tfrac{1}{8} \theta^2 R_i^2\bigr) P e^{\theta S_{i-1}},$

which, by repeated substitution, gives

$P e^{\theta S_n} \le P e^{\theta S_0} \prod_{1 \le i \le n} \exp\bigl(\tfrac{1}{8} \theta^2 R_i^2\bigr) = e^{\theta P S_n + \theta^2 W/8} \quad\text{where } W = \sum_{1 \le i \le n} R_i^2.$

For each nonnegative $x$,

<6>   $P\{S_n \ge x + P S_n\} \le \inf_{\theta \ge 0} \exp\bigl(-x\theta + W\theta^2/8\bigr) = \exp\bigl(-2x^2/W\bigr).$

We have the same bound as for a sum of independent increments, as Hoeffding promised.

If the $R_i$'s are truly random, a slightly more subtle argument leads to the inequality

<7>   $P\{S_n \ge x + P S_n\} \le \exp\bigl(-2x^2/W\bigr) + P\bigl\{\textstyle\sum_{i \le n} R_i^2 > W\bigr\}$

for each constant $W$. Compare with McDiarmid (1998, Theorem 3.7).

To prove <7> define $V_i := \sum_{k \le i} R_k^2$, which is $\mathcal{F}_{i-1}$-measurable. For each $i$,

$P \exp\bigl(\theta S_i - \theta^2 V_i/8\bigr) = P\bigl(\exp(\theta S_{i-1} - \theta^2 V_i/8)\, P_{i-1} e^{\theta \xi_i}\bigr) \le P \exp\bigl(\theta S_{i-1} - \theta^2 V_{i-1}/8\bigr)$

by <5>. Repeated substitution now gives

$P \exp\bigl(\theta S_n - \theta^2 V_n/8\bigr) \le P \exp(\theta S_0) = e^{\theta P S_n}.$

The strange $V_n$ contribution is not a problem on the set where $V_n \le W$:

$P e^{\theta S_n} \{V_n \le W\} \le P e^{\theta S_n + \theta^2 (W - V_n)/8} \le e^{\theta P S_n + \theta^2 W/8},$

so that

$P\{S_n \ge x + P S_n\} \le P\{V_n > W\} + \inf_{\theta > 0} e^{-\theta(x + P S_n)}\, P e^{\theta S_n} \{V_n \le W\}.$

And so on, until inequality <7> emerges.
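A quick Monte Carlo illustration of the constant-$R_i$ bound <6>: take independent signs scaled by $R_i/2$, so each increment lies in an interval of length $R_i$ and $PS_n = 0$, then compare the empirical tail with $\exp(-2x^2/W)$. The ranges, threshold, and trial count below are illustrative choices.

```python
import math
import random

# Martingale with bounded increments: S_n = sum of independent signs times
# R_i/2, so increment i concentrates on an interval of length R_i.
random.seed(0)
R = [1.0, 2.0, 0.5, 1.5, 1.0]        # increment ranges R_i (constants)
W = sum(r * r for r in R)            # W = sum_i R_i^2
x = 2.0

trials = 200_000
hits = 0
for _ in range(trials):
    S = sum(random.choice((-1.0, 1.0)) * r / 2 for r in R)
    if S >= x:
        hits += 1

empirical = hits / trials
bound = math.exp(-2 * x * x / W)     # Hoeffding bound exp(-2 x^2 / W)
print(empirical <= bound)  # True
```

The exact tail probability here is 4/32 = 0.125 (by enumerating sign patterns), comfortably below the bound exp(-16/8.5) ≈ 0.39, which shows the bound holding with room to spare.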