Modelling Gene Expression Data Using Dynamic Bayesian Networks

Total pages: 16

File type: PDF, size: 1020 KB

Modelling Gene Expression Data using Dynamic Bayesian Networks

Kevin Murphy and Saira Mian
Computer Science Division, University of California / Life Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720
Tel (510) 642 2128, Fax (510) 642 5775
[email protected], [email protected]

Abstract

Recently, there has been much interest in reverse engineering genetic networks from time series data. In this paper, we show that most of the proposed discrete time models, including the boolean network model [Kau93, SS96], the linear model of D'haeseleer et al. [DWFS99], and the nonlinear model of Weaver et al. [WWS99], are all special cases of a general class of models called Dynamic Bayesian Networks (DBNs). The advantages of DBNs include the ability to model stochasticity, to incorporate prior knowledge, and to handle hidden variables and missing data in a principled way. This paper provides a review of techniques for learning DBNs. Keywords: genetic networks, boolean networks, Bayesian networks, neural networks, reverse engineering, machine learning.

1 Introduction

Recently, it has become possible to experimentally measure the expression levels of many genes simultaneously, as they change over time and react to external stimuli (see e.g., [WFM+98, DLB97]). In the future, the amount of such experimental data is expected to increase dramatically. This increases the need for automated ways of discovering patterns in such data. Ultimately, we would like to automatically discover the structure of the underlying causal network that is assumed to generate the observed data. In this paper, we consider learning stochastic, discrete time models with discrete or continuous state, and hidden variables. This generalizes the linear model of D'haeseleer et al. [DWFS99], the nonlinear model of Weaver et al. [WWS99], and the popular boolean network model [Kau93, SS96], all of which are deterministic and fully observable.

The fact that our models are stochastic is very important, since it is well known that gene expression is an inherently stochastic phenomenon [MA97]. In addition, even if the underlying system were deterministic, it might appear stochastic due to our inability to perfectly measure all the variables. Hence it is crucial that our learning algorithms be capable of handling noisy data. For example, suppose the underlying system really is a boolean network, but that we have noisy observations of some of the variables. Then the data set might contain inconsistencies, i.e., there might not be any boolean network which can model it. Rather than giving up, we should look for the most probable model given the data; this of course requires that our model have a well-defined probabilistic semantics.

The ability of our models to handle hidden variables is also important. Typically, what is measured (usually mRNA levels) is only one of the factors that we care about; other ones include cDNA levels, protein levels, etc. Often we can model the relationship between these factors, even if we cannot measure their values. This prior knowledge can be used to constrain the set of possible models we learn.

The models we use are called Bayesian (belief) Networks (BNs) [Pea88], which have become the method of choice for representing stochastic models in the UAI (Uncertainty in Artificial Intelligence) community. In Section 2, we explain what BNs are, and show how they generalize the boolean network model [Kau93, SS96], Hidden Markov Models [DEKM98], and other models widely used in the computational biology community. In Sections 3 to 7, we review various techniques for learning BNs from data, and show how REVEAL [LFS98] is a special case of such an algorithm. In Section 8, we consider BNs with continuous (as opposed to discrete) state, and discuss their relationship to the linear model of D'haeseleer et al. [DWFS99], the nonlinear model of Weaver et al. [WWS99], and techniques from the neural network literature [Bis95].
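To make the point about noisy observations concrete, here is a minimal simulation sketch (not from the paper; the network wiring, lookup tables, and helper names are invented for illustration). It builds a small deterministic boolean network, flips each measurement with 10% probability, and counts observed states that are followed by two different successors — transitions that no deterministic boolean network could reproduce exactly, which is why a model with probabilistic semantics is needed.

```python
import numpy as np

rng = np.random.default_rng(0)
n_genes, n_steps, noise = 4, 200, 0.1

# A toy deterministic boolean network: each gene reads two randomly chosen
# parents through a random lookup table (Kauffman-style wiring).
parents = [rng.choice(n_genes, size=2, replace=False) for _ in range(n_genes)]
tables = [rng.integers(0, 2, size=4) for _ in range(n_genes)]

def step(state):
    return np.array([tables[i][2 * state[p[0]] + state[p[1]]]
                     for i, p in enumerate(parents)])

# Simulate, then corrupt each measurement with independent flip noise.
state = rng.integers(0, 2, size=n_genes)
observed = []
for _ in range(n_steps):
    state = step(state)
    flips = rng.random(n_genes) < noise
    observed.append(np.where(flips, 1 - state, state))

# Count "inconsistencies": identical observed states followed by different
# observed successors, which no deterministic boolean network can reproduce.
successor_seen = {}
inconsistent = 0
for s, t in zip(observed[:-1], observed[1:]):
    key, val = tuple(s), tuple(t)
    if key in successor_seen and successor_seen[key] != val:
        inconsistent += 1
    successor_seen[key] = val
print("inconsistent transitions observed:", inconsistent)
```

With the noise level set to 0, the count drops to zero, since the underlying dynamics are deterministic.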
2 Bayesian Networks

BNs are a special case of a more general class called graphical models, in which nodes represent random variables and the lack of arcs represents conditional independence assumptions. Undirected graphical models, also called Markov Random Fields (MRFs; see e.g., [WMS94] for an application in biology), have a simple definition of independence: two (sets of) nodes A and B are conditionally independent given all the other nodes if they are separated in the graph. By contrast, directed graphical models (i.e., BNs) have a more complicated notion of independence, which takes into account the directionality of the arcs (see Figure 1). Graphical models with both directed and undirected arcs are called chain graphs.

Figure 1: The Bayes-Ball algorithm. Two (sets of) nodes A and B are conditionally independent (d-separated [Pea88]) given all the others if and only if there is no way for a ball to get from A to B in the graph. Hidden nodes are nodes whose values are not known, and are depicted as unshaded; observed nodes are shaded. The dotted arcs indicate direction of flow of the ball. The ball cannot pass through hidden nodes with convergent arrows (top left), nor through observed nodes with any outgoing arrows. See [Sha98] for details.

In a BN, one can intuitively regard an arc from A to B as indicating the fact that A "causes" B. (For a more formal treatment of causality in the context of BNs, see [HS95].) Since evidence can be assigned to any subset of the nodes (i.e., any subset of nodes can be observed), BNs can be used for both causal reasoning (from known causes to unknown effects) and diagnostic reasoning (from known effects to unknown causes), or any combination of the two. The inference algorithms which are needed to do this are briefly discussed in Section 5.1. Note that, if all the nodes are observed, there is no need to do inference, although we might still want to do learning.

In addition to causal and diagnostic reasoning, BNs support the powerful notion of "explaining away": if a node is observed, then its parents become dependent, since they are rival causes for explaining the child's value (see the bottom left case in Figure 1). In contrast, in an undirected graphical model, the parents would be independent, since the child separates (but does not d-separate) them.

Some other important advantages of directed graphical models over undirected ones include the fact that BNs can encode deterministic relationships, and that it is easier to learn BNs (see Section 3) since they are separable models (in the sense of [Fri98]). Hence we shall focus exclusively on BNs in this paper. For a careful study of the relationship between directed and undirected graphical models, see [Pea88, Whi90, Lau96]. (It is interesting to note that much of the theory underlying graphical models involves concepts such as chordal (triangulated) graphs [Gol80], which also arise in other areas of computational biology, such as evolutionary tree construction (perfect phylogenies) and physical mapping (interval graphs).)

2.1 Relationship to HMMs

Figure 2: (a) A Markov Chain represented as a Dynamic Bayesian Net (DBN), with nodes X1, X2, X3. (b) A Hidden Markov Model (HMM) represented as a DBN, with hidden nodes X1, X2, X3 and observed nodes Y1, Y2, Y3. Shaded nodes are observed, non-shaded nodes are hidden.

For our first example of a BN, consider Figure 2(a). We call this a Dynamic Bayesian Net (DBN) because it represents how the random variable X evolves over time (three time slices are shown). From the graph, we see that X_{t+1} is independent of X_{t-1} given X_t (since X_t blocks the only path for the Bayes ball between X_{t-1} and X_{t+1}). This, of course, is the (first-order) Markov property, which states that the future is independent of the past given the present.
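As an illustration of the Markov property just described (not from the paper; the two-state transition matrix is invented), the sketch below samples a long chain and checks empirically that conditioning on X_{t-1} in addition to X_t does not change the distribution of X_{t+1}.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two-state Markov chain X_1, X_2, ..., as in Figure 2(a).
T = np.array([[0.9, 0.1],
              [0.3, 0.7]])   # T[i, j] = Pr(X_t = j | X_{t-1} = i)

def sample_chain(n):
    x = [rng.integers(0, 2)]
    for _ in range(n - 1):
        x.append(rng.choice(2, p=T[x[-1]]))
    return np.array(x)

x = sample_chain(200_000)
prev, cur, nxt = x[:-2], x[1:-1], x[2:]

# Empirically, Pr(X_{t+1} = 1 | X_t = i) should barely change when we also
# condition on X_{t-1}: the present blocks the Bayes ball from the past.
for i in (0, 1):
    p_given_cur = nxt[cur == i].mean()
    for j in (0, 1):
        p_given_both = nxt[(cur == i) & (prev == j)].mean()
        print(f"Pr(X_t+1=1 | X_t={i}) = {p_given_cur:.3f}  "
              f"also given X_t-1={j}: {p_given_both:.3f}")
```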
Now consider Figure 2(b). X_t is as before, but is now hidden. What we observe at each time step is Y_t, which is another random variable whose distribution depends on (and only on) X_t. Hence this graph captures all and only the conditional independence assumptions that are made in a Hidden Markov Model (HMM) [Rab89].

In addition to the graph structure, a BN requires that we specify the Conditional Probability Distribution (CPD) of each node given its parents. In an HMM, we assume that the hidden state variables X_t are discrete, and have a distribution given by Pr(X_t = j | X_{t-1} = i) = T_t(i, j). (T_t is the transition matrix for time slice t.) If the observed variables Y_t are discrete, we can specify their distribution by Pr(Y_t = j | X_t = i) = O_t(i, j). (O_t is the observation matrix for time slice t.) However, in an HMM, it is also possible for the observed variables to be Gaussian, in which case we must specify the mean and covariance for each value of the hidden state variable, and each value of t: see Section 8.

As a (hopefully!) familiar example of HMMs, let us consider the way that they are used for aligning protein sequences [DEKM98]. In this case, the hidden state variable can take on three possible values, X_t ∈ {D, I, M}, which represent delete, insert and match respectively. In protein alignment, the t subscript does not refer to time, but rather to position along a static sequence. This is an important difference from gene expression, where t really does represent time (see Section 3).

The observable variable can take on 21 possible values, which represent the 20 possible amino acids plus the gap alignment character "-". The probability distribution over these 21 values depends on the current position t and the current state of the system, X_t. Thus the distribution Pr(Y_t | X_t = M) is the profile for position t, Pr(Y_t | X_t = I) is the ("time"-invariant) "background" distribution, and Pr(Y_t = "-" | X_t) = 1.0 if X_t = D and 0.0 if X_t ≠ D.
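The following sketch shows what specifying such CPDs looks like in practice for a tiny profile-HMM-style model: a transition matrix T and an observation matrix O over the states {D, I, M}, with the delete state emitting the gap character with probability 1.0 as described above. The numerical values and the reduced three-letter alphabet are invented for illustration, and the profile is kept position-independent for brevity.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hidden states D, I, M (delete, insert, match); a tiny 3-symbol "alphabet"
# plus the gap character, instead of the 20 amino acids.
states = ["D", "I", "M"]
symbols = ["A", "C", "G", "-"]

# T[i, j] = Pr(X_t = j | X_{t-1} = i); values are made up for illustration.
T = np.array([[0.10, 0.20, 0.70],
              [0.10, 0.30, 0.60],
              [0.05, 0.15, 0.80]])

# O[i, k] = Pr(Y_t = k | X_t = i). The delete state emits the gap with
# probability 1.0, insert uses a flat "background" distribution, and match
# uses a (here position-independent) profile.
O = np.array([[0.0, 0.0, 0.0, 1.0],      # D -> always "-"
              [1/3, 1/3, 1/3, 0.0],      # I -> background
              [0.7, 0.2, 0.1, 0.0]])     # M -> profile

def sample_hmm(n, x0=2):
    xs, ys = [x0], []
    for t in range(n):
        ys.append(rng.choice(len(symbols), p=O[xs[-1]]))
        if t < n - 1:
            xs.append(rng.choice(len(states), p=T[xs[-1]]))
    return xs, ys

xs, ys = sample_hmm(15)
print("hidden: ", " ".join(states[i] for i in xs))
print("emitted:", " ".join(symbols[k] for k in ys))
```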
Recommended publications
  • Markov Model Checking of Probabilistic Boolean Networks Representations of Genes
Markov Model Checking of Probabilistic Boolean Networks Representations of Genes. Marie Lluberes, Jaime Seguel and Jaime Ramírez-Vick, Electrical and Computer Engineering Department and General Engineering Department, University of Puerto Rico, Mayagüez, Puerto Rico. Abstract: Our goal is to develop an algorithm for the automated study of the dynamics of Probabilistic Boolean Network (PBN) representations of genes. Model checking is an automated method for the verification of properties on systems. Continuous Stochastic Logic (CSL), an extension of Computation Tree Logic (CTL), is a model-checking tool that can be used to specify measures for Continuous-time Markov Chains (CTMC). Thus, as PBNs can be analyzed in the context of Markov theory, the use of CSL as a method for model checking PBNs could be a powerful tool for the simulation of gene network dynamics. Particularly, we are interested in the subject of intervention. This refers to the ability to steer the network toward a desired behavior or, on the contrary, to prevent or to stop an undesirable behavior. This "guiding" of the network dynamics is referred to as intervention. The power to intervene with the network dynamics has a significant impact in diagnostics and drug design. Biological phenomena manifest in the continuous-time domain. But, in describing such phenomena, we usually employ a binary language, for instance, expressed or not expressed; on or off; up or down regulated. Studies conducted restricting gene expression to only two levels (0 or 1) suggested that the information retained when binarized is meaningful to the extent that it remains in a continuous domain [2].
  • Online Spectral Clustering on Network Streams (Yi Jia)
Online Spectral Clustering on Network Streams. By Yi Jia. Submitted to the graduate degree program in Electrical Engineering and Computer Science and the Graduate Faculty of the University of Kansas in partial fulfillment of the requirements for the degree of Doctor of Philosophy. Committee: Jun Huan (Chairperson), Swapan Chakrabarti, Jerzy Grzymala-Busse, Bo Luo, Alfred Tat-Kei Ho. Abstract: Graph is an extremely useful representation of a wide variety of practical systems in data analysis. Recently, with the fast accumulation of stream data from various types of networks, significant research interest has arisen in spectral clustering for network streams (or evolving networks). Compared with the general spectral clustering problem, the data analysis of this new type of problem may have additional requirements, such as short processing time, scalability in distributed computing environments, and temporal variation tracking. However, designing a spectral clustering method that satisfies these requirements presents non-trivial effort. There are three major challenges for the new algorithm design. The first challenge is online clustering computation. Most of the existing spectral methods on evolving networks are off-line methods, using standard eigensystem solvers such as the Lanczos method; they need to recompute solutions from scratch at each time point. The second challenge is the parallelization of algorithms. Parallelizing such algorithms is non-trivial since standard eigensolvers are iterative algorithms and the number of iterations cannot be predetermined.
  • Inference of a Probabilistic Boolean Network from a Single Observed Temporal Sequence
Hindawi Publishing Corporation, EURASIP Journal on Bioinformatics and Systems Biology, Volume 2007, Article ID 32454, 15 pages, doi:10.1155/2007/32454. Research Article: Inference of a Probabilistic Boolean Network from a Single Observed Temporal Sequence. Stephen Marshall, Le Yu, Yufei Xiao, and Edward R. Dougherty. Department of Electronic and Electrical Engineering, Faculty of Engineering, University of Strathclyde, Glasgow, G1 1XW, UK; Department of Electrical and Computer Engineering, Texas A&M University, College Station, TX 77843-3128, USA; Computational Biology Division, Translational Genomics Research Institute, Phoenix, AZ 85004, USA; Department of Pathology, University of Texas M. D. Anderson Cancer Center, Houston, TX 77030, USA. Received 10 July 2006; Revised 29 January 2007; Accepted 26 February 2007. Recommended by Tatsuya Akutsu. The inference of gene regulatory networks is a key issue for genomic signal processing. This paper addresses the inference of probabilistic Boolean networks (PBNs) from observed temporal sequences of network states. Since a PBN is composed of a finite number of Boolean networks, a basic observation is that the characteristics of a single Boolean network without perturbation may be determined by its pairwise transitions. Because the network function is fixed and there are no perturbations, a given state will always be followed by a unique state at the succeeding time point. Thus, a transition counting matrix compiled over a data sequence will be sparse and contain only one entry per line. If the network also has perturbations, with small perturbation probability, then the transition counting matrix would have some insignificant nonzero entries replacing some (or all) of the zeros.
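As a rough illustration of the transition counting matrix described in this abstract (not the authors' code; the toy successor map and sequence length are invented), the sketch below compiles the matrix from a simulated state sequence and shows that, without perturbations, every visited state has exactly one observed successor, while a small perturbation probability adds a few extra nonzero entries.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3  # number of genes, so 2**n network states

# A toy deterministic Boolean network given directly as a successor map
# (state index -> next state index); invented for illustration only.
successor = rng.integers(0, 2**n, size=2**n)

def transition_counts(seq):
    counts = np.zeros((2**n, 2**n), dtype=int)
    for a, b in zip(seq[:-1], seq[1:]):
        counts[a, b] += 1
    return counts

# Without perturbation each visited state always maps to the same
# successor, so every nonzero row of the counting matrix has one entry.
state, seq = 0, [0]
for _ in range(100):
    state = successor[state]
    seq.append(state)
print("successors seen per state:", (transition_counts(seq) > 0).sum(axis=1))

# With a small perturbation probability, occasional random jumps add a few
# extra (insignificant) nonzero entries, as described in the abstract.
state, noisy_seq, p = 0, [0], 0.05
for _ in range(100):
    state = successor[state] if rng.random() > p else rng.integers(0, 2**n)
    noisy_seq.append(state)
print("with perturbations:       ", (transition_counts(noisy_seq) > 0).sum(axis=1))
```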
  • Boolean Network Control with Ideals
Portland State University, PDXScholar, Mathematics and Statistics Faculty Publications and Presentations, Fariborz Maseeh Department of Mathematics and Statistics, 1-2021. Boolean Network Control with Ideals. Ian H. Dinwoodie, Portland State University, [email protected]. Follow this and additional works at: https://pdxscholar.library.pdx.edu/mth_fac (Part of the Physical Sciences and Mathematics Commons). Citation Details: Dinwoodie, Ian H., "Boolean Network Control with Ideals" (2021). Mathematics and Statistics Faculty Publications and Presentations. 302. https://pdxscholar.library.pdx.edu/mth_fac/302. This Working Paper is brought to you for free and open access; it has been accepted for inclusion in Mathematics and Statistics Faculty Publications and Presentations by an authorized administrator of PDXScholar. Abstract: A method is given for finding controls to transition an initial state x0 to a target set in deterministic or stochastic Boolean network control models. The algorithms use multivariate polynomial algebra. Examples illustrate the application. 1 Introduction. A Boolean network is a dynamical system on d nodes (or coordinates) with binary node values, and with transition map F : {0,1}^d → {0,1}^d. The coordinate maps F = (f1,...,fd) may use binary parameters u ∈ {0,1}^c that can be adjusted or controlled at each step, so the image can be written F(x, u) and F takes {0,1}^{d+c} → {0,1}^d, usually written in logical notation.
  • Random Boolean Networks As a Toy Model for the Brain
UNIVERSITY OF GENEVA, SCIENCE FACULTY / VRIJE UNIVERSITEIT OF AMSTERDAM, PHYSICS SECTION. Random Boolean Networks as a toy model for the brain. MASTER THESIS presented at the science faculty of the University of Geneva for obtaining the Master in Theoretical Physics by Chloé Béguin. Supervisor (VU): Pr. Greg J Stephens; Co-Supervisor (UNIGE): Pr. Jérôme Kasparian. July 2017. Contents: Introduction (1); 1 Biology, physics and the brain (4); 1.1 Biological description of the brain (4); 1.2 Criticality in the brain (8); 1.2.1 Physics reminder (10); 1.2.2 Experimental evidences (15); 2 Models of neural networks (20); 2.1 Classes of models (21); 2.1.1 Spiking models (21); 2.1.2 Rate-based models (23); 2.1.3 Attractor networks (24); 2.1.4 Links between the classes of models (25); 2.2 Random Boolean Networks (28); 2.2.1 General definition (28); 2.2.2 Kauffman network (30); 2.2.3 Hopfield network (31); 2.2.4 Towards a RBN for the brain (32); 2.2.5 The model (33); 3 Characterisation of RBNs (34); 3.1 Attractors (34); 3.2 Damage spreading (36); 3.3 Canonical specific heat (37); 4 Results (40); 4.1 One population with Gaussian weights (40); 4.2 Dale's principle and balance of inhibition - excitation (46); 4.3 Lognormal distribution of the weights (51); 4.4 Discussion (55); 5 Conclusion (58); Bibliography (60); Acknowledgements (66); A Python Code (67); A.1 Dynamics (67); A.2 Attractor search (69); A.3 Hamming Distance (73); A.4 Canonical specific heat
  • A Random Boolean Network Model and Deterministic Chaos
A Random Boolean Network Model and Deterministic Chaos. Mihaela T. Matache and Jack Heidel, Department of Mathematics, University of Nebraska at Omaha, Omaha, NE 68182-0243, USA. [email protected]. Abstract: This paper considers a simple Boolean network with N nodes, each node's state at time t being determined by a certain number of parent nodes, which may vary from one node to another. This is an extension of a model studied by Andrecut and Ali ([5]), who consider the same number of parents for all nodes. We make use of the same Boolean rule as the authors of [5], provide a generalization of the formula for the probability of finding a node in state 1 at time t, and use simulation methods to generate consecutive states of the network for both the real system and the model. The results match well. We study the dynamics of the model through sensitivity of the orbits to initial values, bifurcation diagrams, and fixed point analysis. We show that the route to chaos is due to a cascade of period-doubling bifurcations which turn into reversed (period-halving) bifurcations for certain combinations of parameter values. 1. Introduction. In recent years various studies ([5], [16], [19], [20], [21], [22], [30], [31]) have shown that a large class of biological networks and cellular automata can be modelled as Boolean networks. These models, besides being easy to understand, are relatively easy to handle. The interest in Boolean networks and their applications in biology and automata networks actually started much earlier ([8], [13], [14], [15], [25], [29], [32]), with publications such as the one of Kauffman ([25]), whose work on the self-organization and adaptation in complex systems has inspired many other research endeavors.
  • Reward-Driven Training of Random Boolean Network Reservoirs for Model-Free Environments
Portland State University, PDXScholar, Dissertations and Theses, Winter 3-27-2013. Reward-driven Training of Random Boolean Network Reservoirs for Model-Free Environments. Padmashri Gargesa, Portland State University. Follow this and additional works at: https://pdxscholar.library.pdx.edu/open_access_etds (Part of the Dynamics and Dynamical Systems Commons, and the Electrical and Computer Engineering Commons). Recommended Citation: Gargesa, Padmashri, "Reward-driven Training of Random Boolean Network Reservoirs for Model-Free Environments" (2013). Dissertations and Theses. Paper 669. https://doi.org/10.15760/etd.669. A thesis submitted in partial fulfillment of the requirements for the degree of Master of Science in Electrical and Computer Engineering. Thesis Committee: Christof Teuscher (Chair), Richard P. Tymerski, Marek A. Perkowski. Portland State University, 2013. Abstract: Reservoir Computing (RC) is an emerging machine learning paradigm where a fixed kernel, built from a randomly connected "reservoir" with sufficiently rich dynamics, is capable of expanding the problem space in a non-linear fashion to a higher dimensional feature space. These features can then be interpreted by a linear readout layer that is trained by a gradient descent method. In comparison to traditional neural networks, only the output layer needs to be trained, which leads to a significant computational advantage.
  • Solving Satisfiability Problem by Quantum Annealing
UNIVERSITY OF CALIFORNIA, Los Angeles. Towards Quantum Computing: Solving Satisfiability Problem by Quantum Annealing. A dissertation submitted in partial satisfaction of the requirements for the degree Doctor of Philosophy in Electrical Engineering by Juexiao Su, 2018. © Copyright by Juexiao Su 2018. ABSTRACT OF THE DISSERTATION: Towards Quantum Computing: Solving Satisfiability Problem by Quantum Annealing, by Juexiao Su, Doctor of Philosophy in Electrical Engineering, University of California, Los Angeles, 2018, Professor Lei He, Chair. To date, conventional computers have never been able to efficiently handle certain tasks, where the number of computation steps is likely to blow up as the problem size increases. As an emerging technology and new computing paradigm, quantum computing has great potential to tackle those hard tasks efficiently. Among all the existing quantum computation models, quantum annealing has drawn significant attention in recent years due to the realization of the commercialized quantum annealer, sparking research interest in developing applications to solve problems that are intractable for classical computers. However, designing and implementing algorithms that manage to harness the enormous computation power from a quantum annealer remains a challenging task. Generally, it requires mapping of the given optimization problem into a quadratic unconstrained binary optimization (QUBO) problem and embedding the subsequent QUBO onto the physical architecture of the quantum annealer. Additionally, practical quantum annealers are susceptible to errors, leading to a low probability of the correct solution. In this study, we focus on solving the Boolean satisfiability (SAT) problem using a quantum annealer while addressing practical limitations. We have proposed a mapping technique that maps the SAT problem to QUBO, and we have further devised a tool flow that embeds the QUBO onto the architecture of a quantum annealing device.
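As a hedged illustration of the kind of mapping this abstract refers to (the dissertation's actual construction handles general SAT and the embedding step, which this sketch does not attempt), the code below builds a QUBO for an invented toy instance with two-literal clauses, using the standard penalty (1 - v(a))(1 - v(b)) that is positive exactly when a clause is violated, and brute-force checks that the QUBO energy counts violated clauses.

```python
import itertools
import numpy as np

# Clauses with two literals each; a positive integer i means variable x_i,
# a negative integer -i means its negation. Invented example instance.
clauses = [(1, 2), (-1, 3), (-2, -3)]
n_vars = 3

# Each clause (a OR b) is violated exactly when both literals are false,
# so its penalty is (1 - v(a)) * (1 - v(b)) with v(x_i) = x_i and
# v(not x_i) = 1 - x_i. Expanding gives a quadratic form in x: a QUBO.
Q = np.zeros((n_vars, n_vars))
offset = 0.0

def lin(lit):
    """Return (constant, coefficient vector) of 1 - v(lit)."""
    c = np.zeros(n_vars)
    i = abs(lit) - 1
    if lit > 0:        # 1 - x_i
        c[i] = -1.0
        return 1.0, c
    c[i] = 1.0         # 1 - (1 - x_i) = x_i
    return 0.0, c

for a, b in clauses:
    ca, va = lin(a)
    cb, vb = lin(b)
    offset += ca * cb
    Q += np.outer(va, vb)            # quadratic terms
    Q += np.diag(ca * vb + cb * va)  # linear terms on the diagonal (x_i**2 == x_i)

def energy(x):
    return float(x @ Q @ x + offset)

# Brute-force check: the energy equals the number of violated clauses.
for bits in itertools.product((0, 1), repeat=n_vars):
    x = np.array(bits)
    violated = sum(1 for a, b in clauses
                   if (a > 0) != bool(x[abs(a) - 1]) and
                      (b > 0) != bool(x[abs(b) - 1]))
    assert abs(energy(x) - violated) < 1e-9
print("minimum energy:", min(energy(np.array(bits))
                             for bits in itertools.product((0, 1), repeat=n_vars)))
```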
  • A Multi-Agent System for Environmental Monitoring Using Boolean Networks and Reinforcement Learning
Journal of Cyber Security, DOI:10.32604/jcs.2020.010086. Article: A Multi-Agent System for Environmental Monitoring Using Boolean Networks and Reinforcement Learning. Hanzhong Zheng (Department of Computer Science, The University of Pittsburgh, Pittsburgh, PA 15213, USA) and Dejie Shi (School of Computer and Information Engineering, Hunan University of Technology and Business, Changsha, 410205, China). Corresponding Author: Dejie Shi, [email protected]. Received: 10 February 2020; Accepted: 28 June 2020. Abstract: Distributed wireless sensor networks have been shown to be effective for environmental monitoring tasks, in which multiple sensors are deployed over a wide range of environments to collect information or monitor a particular event. Wireless sensor networks, consisting of a large number of interacting sensors, have been successful in a variety of applications where they are able to share information using different transmission protocols through the communication network. However, the irregular and dynamic environment requires traditional wireless sensor networks to have frequent communications to exchange the most recent information, which can easily generate high communication cost through the collaborative data collection and data transmission. High frequency communication also has a high probability of failure because of long distance data transmission. In this paper, we developed a novel approach to a multi-sensor environment monitoring network using the idea of distributed systems. Its communication network can overcome the difficulties of high communication cost and Single Point of Failure (SPOF) through the decentralized approach, which performs in-network computation. Our approach makes use of Boolean networks, which allows for a non-complex method of corroboration and retains meaningful information regarding the dynamics of the communication network.
  • List of Glossary Terms
List of Glossary Terms. A: A correlated equilibrium 3064; A mechanism 1837; A nearest neighbor 790; A social choice function 1837; A stochastic game 3064; A strategy 3064; Abelian group 2780; Absolute temperature 940; Absorbing state 1080; Abstract game 675; Abstract game of network formation with respect to irreflexive dominance 2044; Abstract game of network formation with respect to path dominance 2044; Accuracy 161, 827; Accuracy (rate) 862; Achievable mate 3235; Action profile 3023; Action set 3023; Action type 2656; Activation function 813; Activator 622; Active membranes 1851; Actors 2029; Adaptation 39; Adaptive system 1619; Additive cellular automata 1; Additively separable preferences 3235; Adiabatic switching 1998; Adjacency matrix 1746, 3114; Adjacent 2864; Age of a cluster or fuzzy rule 1054; Agent 58, 76, 105, 1767, 2578, 3004; Agent architecture 105; Agent based models 2940; Agent (or software agent) 2999; Agent-based computational models 2898; Agent-based model 58; Agent-based modeling 1767; Agent-based modeling (ABM) 39; Agent-based simulation 18, 88; Aggregation 862; Aggregation operators 122; Aging 2564, 2611; Algebra 2925; Algebraic models 2898; Algorithm 2496; Algorithmic complexity of object x 132; Algorithmic self-assembly of DNA tiles 1894; Allowable set of partners 3234; Almost equicontinuous CA 914, 3212; Alphabet of a cellular automaton 1; α-Level set and support 1240; Alternating independent-2-paths 2953; Alternating k-stars 2953; Alternating k-triangles 2953; Amorphous computer 147; Analog 3187; Analog circuit 3260; Ancilla qubits 2478; Anisotropic elements 754; Annealed law 2564
  • Introduction to Natural Computation Lecture 18 Random Boolean Networks
Introduction to Natural Computation, Lecture 18: Random Boolean Networks. Alberto Moraglio. Random Boolean Networks are an extension of Cellular Automata, introduced by Stuart Kauffman (1969) as a model for genetic regulatory networks, aiming to uncover fundamental principles of living systems. Hypothesis: living organisms can be constructed from random elements. Simplification: Boolean states for all nodes in the network. Definition (Random Boolean Network, RBN): a random Boolean network consists of N nodes, each with a state 0 or 1; K edges from each node to K different randomly selected nodes; and a lookup table for each node that determines the next state, according to the states of its K neighbours. Last few lectures: fixed graph, random process. This time: random graph, deterministic process. Note: randomness is used to create the network; afterwards the network remains fixed! Relation to Cellular Automata: http://www.metafysica.nl/boolean.html. Lookup Tables: recall rules from Cellular Automata; in an RBN each node has its own random lookup table, e.g. (inputs at time t -> state at time t+1): 000 -> 1, 001 -> 0, 010 -> 0, 011 -> 1, 100 -> 1, 101 -> 0, 110 -> 1, 111 -> 1. Example of a simple RBN: Gershenson C., Introduction to Random Boolean Networks, ArXiv Nonlinear Sciences e-prints, 2004. States: the state space has size 2^N, so after at most 2^N + 1 steps a state will be repeated. Attractors: as the system is deterministic, it will get stuck in an attractor; one state -> point attractor / steady state; two or more states -> cycle attractor / state cycle. The set of states that flow towards an attractor is called the attractor basin.
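A minimal sketch of the RBN definition in this excerpt (parameters and seed are invented): randomness is used only to wire the N nodes and fill their 2^K-entry lookup tables, after which the dynamics are deterministic, so iterating from any start state must revisit a state within 2^N + 1 steps and thereby reveal a point or cycle attractor.

```python
import random

random.seed(42)
N, K = 5, 2  # N nodes, each wired to K randomly chosen (fixed) neighbours

# Randomness is only used to build the network; afterwards it stays fixed
# and the dynamics are deterministic.
inputs = [random.sample(range(N), K) for _ in range(N)]
tables = [[random.randint(0, 1) for _ in range(2 ** K)] for _ in range(N)]

def step(state):
    return tuple(
        tables[i][sum(state[j] << b for b, j in enumerate(inputs[i]))]
        for i in range(N)
    )

# The state space has size 2**N, so after at most 2**N + 1 steps a state
# must repeat and the trajectory has fallen into an attractor.
state = tuple(random.randint(0, 1) for _ in range(N))
seen = {state: 0}
t = 0
while True:
    state = step(state)
    t += 1
    if state in seen:
        break
    seen[state] = t

cycle_len = t - seen[state]
kind = "point attractor" if cycle_len == 1 else f"cycle attractor of length {cycle_len}"
print(f"entered a {kind} after {seen[state]} transient steps")
```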
  • UCLA Samueli Engineering Announcement 2019-20
ANNOUNCEMENT 2019-20, Henry Samueli School of Engineering and Applied Science, University of California, Los Angeles, October 1, 2019. Contents: A Message from the Dean (3); Henry Samueli School of Engineering and Applied Science (4); Officers of Administration (4); The Campus (4); The School (4); Endowed Chairs (4); The Engineering Profession (5); Calendars (8); General Information (9); Facilities and Services (9); Library Facilities (9); Services (9); Continuing Education (9); Career Services (10); Health Services (10); Services for Students with Disabilities (10); International Student Services (10); Fees and Financial Support (11); Fees and Expenses (11); Living Accommodations (11); Financial Aid (11); Special Programs, Activities, and Awards (13); Center for Excellence in Engineering and Diversity (CEED) (13); Student Organizations (15); Women in Engineering (15); Exceptional Student Admissions Program (16); Official Publications (16); Grades (16); Nondiscrimination (16); Harassment (17); Undergraduate Programs (19); Admission (19); Requirements for B.S. Degrees (22); Policies and Regulations (24); Honors (25); Graduate Programs (26); Master of Science Degrees (26); Admission (27); Departments and Programs of the School (28); Bioengineering (28); Chemical and Biomolecular Engineering (40); Civil and Environmental Engineering (50); Computer Science (62); Electrical and Computer Engineering (83); Materials Science and Engineering (101); Mechanical and Aerospace Engineering (109); Master of Science in Engineering Online Programs (125); Schoolwide Programs and Courses (127); Externally Funded Research