Introduction to Natural Computation, Lecture 18: Random Boolean Networks


Introduction to Natural Computation, Lecture 18: Random Boolean Networks
Alberto Moraglio

Random Boolean Networks

- An extension of cellular automata.
- Introduced by Stuart Kauffman (1969) as a model for genetic regulatory networks, with the aim of uncovering fundamental principles of living systems.
- Hypothesis: living organisms can be constructed from random elements.
- Simplification: Boolean states for all nodes in the network.

Definition (Random Boolean Network, RBN)

A random Boolean network consists of
- N nodes, each with a state 0 or 1,
- K edges from each node to K different randomly selected nodes,
- a lookup table for each node that determines its next state from the states of its K neighbours.

The last few lectures dealt with a fixed graph and a random process; this time we have a random graph and a deterministic process. Note: randomness is used only to create the network; afterwards the network remains fixed!

Relation to Cellular Automata

[Figure: relation between RBNs and cellular automata; see http://www.metafysica.nl/boolean.html]

Lookup Tables

Recall the rules from cellular automata. In an RBN, each node has its own random lookup table, for example (K = 3):

    inputs at time t    state at time t + 1
    000                 1
    001                 0
    010                 0
    011                 1
    100                 1
    101                 0
    110                 1
    111                 1

Example of a Simple RBN

[Figure: a simple RBN and its dynamics, from Gershenson C., Introduction to Random Boolean Networks, arXiv nonlinear sciences e-prints, 2004]

States

The state space has size 2^N, so after at most 2^N + 1 steps a state must repeat.

Attractors

As the system is deterministic, it eventually gets stuck in an attractor:
- one state: point attractor / steady state
- two or more states: cycle attractor / state cycle
The set of states that flows towards an attractor is called its attractor basin.
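The synchronous dynamics above are straightforward to simulate. The following Python sketch is not from the original slides; it is a minimal illustration in which the names make_rbn, step and find_attractor are my own. It wires up a random network with N nodes and K inputs per node, iterates it synchronously, and reports the length of the attractor it falls into once a state repeats.

    import random

    def make_rbn(n, k, seed=None):
        """Random wiring: each node gets K distinct input nodes and a
        random lookup table over the 2^K possible input patterns."""
        rng = random.Random(seed)
        inputs = [rng.sample(range(n), k) for _ in range(n)]
        tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
        return inputs, tables

    def step(state, inputs, tables):
        """One synchronous update: every node reads its K inputs at time t
        and looks up its state for time t + 1."""
        new_state = []
        for i in range(len(state)):
            idx = 0
            for j in inputs[i]:                # K input bits form a table index
                idx = (idx << 1) | state[j]
            new_state.append(tables[i][idx])
        return tuple(new_state)

    def find_attractor(state, inputs, tables):
        """Iterate until a state repeats; the repeated stretch is the attractor."""
        seen = {}                              # state -> time of first visit
        t = 0
        while state not in seen:
            seen[state] = t
            state = step(state, inputs, tables)
            t += 1
        return t - seen[state]                 # cycle length of the attractor

    if __name__ == "__main__":
        n, k = 13, 3
        inputs, tables = make_rbn(n, k, seed=0)
        init = tuple(random.Random(1).randint(0, 1) for _ in range(n))
        print("attractor length:", find_attractor(init, inputs, tables))

Because the state space is finite (2^N states) and the update is deterministic, the loop in find_attractor is guaranteed to terminate.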
Number of RBNs

Q: How many possible functions exist for each node?
A: All possible 0-1 choices for the 2^K input patterns: 2^(2^K).

Q: How many possible wirings to K neighbours exist for each node?
A: There are N!/(N − K)! ordered combinations.

Q: How large is the number of possible networks for given N and K?
A: Pretty big:

    ( 2^(2^K) · N!/(N − K)! )^N

For example:
- N = 3, K = 3: 3623878656
- N = 4, K = 3: 1424967069597696
- N = 8, K = 4: 21593035501811706443110335226483118854343255930579818905600000000

(A small computational check of these counts is sketched at the end of these notes.)

Order vs. Chaos

Random Boolean networks can be in three phases: ordered, chaotic, and critical.
[Figure: phase diagram, from Gershenson C., Introduction to Random Boolean Networks, arXiv nonlinear sciences e-prints, 2004]

Ordered regime
- Occurs when K ≤ 2.
- The system is insensitive to initial conditions and to "mutations": flipping the state of a node, changing a connection, or changing a node's lookup table.
- "Damage" typically does not spread far.

Chaotic regime
- Occurs when K ≥ 3.
- Small changes can have a huge impact (the "butterfly effect"); similar states tend to diverge.

View of an Attractor Basin

[Figure: a single attractor basin for N = 13, K = 3; see http://www.metafysica.nl/boolean.html]

All Attractor Basins

[Figure: all attractor basins for N = 13, K = 3; see http://www.metafysica.nl/boolean.html, from Andrew Wuensche's DDLab gallery]

Other Update Schemes

Attractor cycles occur because the system is deterministic and all nodes are updated synchronously. A common criticism of synchronicity: genes do not march in step!

Variants of RBNs:
- Asynchronous RBNs: at each step, randomly choose one node to update.
- Deterministic asynchronous RBNs: give each node i a random period P_i and offset Q_i that remain fixed afterwards, and update node i at time t if t mod P_i = Q_i (a sketch of this rule is given at the end of these notes).

Other update schemes can change the behaviour significantly.

Further Reading

Gershenson C. Introduction to Random Boolean Networks. arXiv nonlinear sciences e-prints, 2004.
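As a quick check of the counting argument in the Number of RBNs section, the following Python sketch (an illustration, not part of the original slides; the function name num_rbns is my own) evaluates ( 2^(2^K) · N!/(N − K)! )^N with exact integer arithmetic and reproduces the figures quoted above.

    from math import factorial

    def num_rbns(n, k):
        """Count of distinct RBNs with N nodes and K inputs per node:
        each node independently chooses one of 2^(2^K) Boolean functions
        and one of N!/(N-K)! ordered wirings."""
        functions_per_node = 2 ** (2 ** k)
        wirings_per_node = factorial(n) // factorial(n - k)
        return (functions_per_node * wirings_per_node) ** n

    print(num_rbns(3, 3))   # 3623878656
    print(num_rbns(4, 3))   # 1424967069597696
    print(num_rbns(8, 4))   # the 65-digit number quoted above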
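The deterministic asynchronous variant mentioned under Other Update Schemes can be sketched in the same style. This is again an illustration rather than the lecture's own code, and it assumes the semi-synchronous reading of the rule in which every node i with t mod P_i = Q_i is updated from the time-t states; variants that update qualifying nodes one at a time behave differently.

    import random

    def darbn_step(state, inputs, tables, periods, offsets, t):
        """One deterministic asynchronous step at time t: node i is
        recomputed only if t mod P_i == Q_i (assumption: all qualifying
        nodes read the old, time-t states)."""
        new_state = list(state)
        for i in range(len(state)):
            if t % periods[i] == offsets[i]:
                idx = 0
                for j in inputs[i]:
                    idx = (idx << 1) | state[j]    # read time-t states
                new_state[i] = tables[i][idx]
        return tuple(new_state)

    if __name__ == "__main__":
        rng = random.Random(0)
        n, k = 5, 2
        inputs = [rng.sample(range(n), k) for _ in range(n)]
        tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
        periods = [rng.randint(1, 4) for _ in range(n)]        # P_i, fixed once
        offsets = [rng.randint(0, p - 1) for p in periods]     # Q_i < P_i, fixed once
        state = tuple(rng.randint(0, 1) for _ in range(n))
        for t in range(10):
            state = darbn_step(state, inputs, tables, periods, offsets, t)
            print(t, state)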