Probability, Algorithmic Complexity, and Subjective Randomness


Thomas L. Griffiths ([email protected])
Department of Psychology
Stanford University

Joshua B. Tenenbaum ([email protected])
Brain and Cognitive Sciences Department
Massachusetts Institute of Technology

Abstract

We present a statistical account of human randomness judgments that uses the idea of algorithmic complexity. We show that an existing measure of the randomness of a sequence corresponds to the assumption that non-random sequences are generated by a particular probabilistic finite state automaton, and use this as the basis for an account that evaluates randomness in terms of the length of programs for machines at different levels of the Chomsky hierarchy. This approach results in a model that predicts human judgments better than the responses of other participants in the same experiment.

The development of information theory prompted cognitive scientists to formally examine how humans encode experience, with a variety of schemes being used to predict subjective complexity (Leeuwenberg, 1969), memorization difficulty (Restle, 1970), and sequence completion (Simon & Kotovsky, 1963). This proliferation of similar, seemingly arbitrary theories was curtailed by Simon's (1972) observation that the inevitable high correlation between measures of information content renders them essentially equivalent. The development of algorithmic information theory (see Li & Vitanyi, 1997, for a detailed account) has revived some of these ideas, with code lengths playing a central role in recent accounts of human concept learning (Feldman, 2000), subjective randomness (Falk & Konold, 1997), and the role of simplicity in cognition (Chater, 1999). Algorithmic information theory avoids the arbitrariness of earlier approaches by using a single universal code: the complexity of an object (called the Kolmogorov complexity after Kolmogorov, 1965) is the length of the shortest computer program that can reproduce it.

Chater and Vitanyi (2003) argue that a preference for simplicity can be seen throughout cognition, from perception to language learning. Their argument is based upon the important constraints that simplicity provides for solving problems of induction, which are central to cognition. Kolmogorov complexity gives a formal means of addressing "asymptotic" questions about induction, such as why anything is learnable at all, but the constraints it imposes are too weak to support the rapid inferences that characterize human cognition. In order to explain how human beings learn so much from so little, we need to consider accounts that can express the strong prior knowledge that contributes to our inferences. The structures that people find simple form a strict (and flexible) subset of those easily expressed in a computer program. For example, the sequence of heads and tails TTHTTTHHTH appears quite complex to us, even though, as the parity of the first 10 digits of π, it is easily generated by a computer. Identifying the kinds of regularities that contribute to our sense of simplicity will be an important part of any cognitive theory, and is in fact necessary since Kolmogorov complexity is not computable (Kolmogorov, 1965).

There is a crucial middle ground between Kolmogorov complexity and the arbitrary encoding schemes to which Simon (1972) objected. We will explore this middle ground using an approach that combines rational statistical inference with algorithmic information theory. This approach gives an intuitive transparency to measures of complexity by expressing them in terms of probabilities, and uses computability to establish meaningful differences between them. We will test this approach on judgments of the randomness of binary sequences, since randomness is one of the key applications of Kolmogorov complexity: Kolmogorov (1965) suggested that random sequences are irreducibly complex, a notion that has inspired several psychological theories (e.g., Falk & Konold, 1997). We will analyze subjective randomness as an inference about the source of a sequence X, comparing its probability of being generated by a random source, P(X | random), with its probability of generation by a more regular process, P(X | regular). Since probabilities map directly to code lengths, P(X | regular) uniquely identifies a measure of complexity. This formulation allows us to identify the properties of an existing complexity measure (Falk & Konold, 1997) and extend it to capture more of the statistical structure detected by people. While Kolmogorov complexity is expressed in terms of programs for a universal Turing machine, many of the regularities people detect are computable by simpler devices. We will use Chomsky's (1956) hierarchy of formal languages to organize our analysis, testing a set of nested models that can be interpreted in terms of the length of programs for automata at different levels of the hierarchy.

Complexity and randomness

The idea of using a code based upon the length of computer programs was independently proposed by Solomonoff (1964), Kolmogorov (1965), and Chaitin (1969), although it has come to be associated with Kolmogorov. A sequence X has Kolmogorov complexity K(X) equal to the length of the shortest program p for a (prefix) universal Turing machine U that produces X and then halts,

    K(X) = \min_{p : U(p) = X} \ell(p),    (1)

where \ell(p) is the length of p in bits. Kolmogorov complexity can be used to define algorithmic probability, with the probability of X being

    R(X) = 2^{-K(X)} = \max_{p : U(p) = X} 2^{-\ell(p)}.    (2)

There is no requirement that R(X) sum to one over all sequences; many probability distributions that correspond to codes are unnormalized, assigning the missing probability to an undefined sequence.
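To make the structure of Equations 1 and 2 concrete, here is a minimal sketch that enumerates bit-string programs for a toy machine and takes the shortest (for K) and the implied probability (for R). The two-opcode machine and the names toy_machine and toy_K are our own illustrative inventions, not anything from the paper; genuine Kolmogorov complexity is defined over a universal Turing machine and, as noted above, is not computable, so no such exhaustive enumeration is possible in general.

```python
from itertools import product

def toy_machine(program: str):
    """Interpret a bit string as a program for a toy, non-universal machine."""
    if program.startswith("0"):
        return program[1:]                 # "0" + x : output x verbatim
    if len(program) >= 3:
        symbol = program[1]                # "1" + b + binary(n) : output b, n times
        return symbol * int(program[2:], 2)
    return None                            # no output on other inputs

def toy_K(x: str, max_len: int = 12) -> int:
    """Equation (1) on the toy machine: length of the shortest program for x."""
    return min(len(p)
               for n in range(1, max_len + 1)
               for p in ("".join(bits) for bits in product("01", repeat=n))
               if toy_machine(p) == x)

for x in ["11111", "10110"]:
    K = toy_K(x)
    print(x, "K(X) =", K, "R(X) = 2^-K =", 2.0 ** -K)   # Equation (2)
```

Even on this toy machine the regular sequence 11111 has a shorter program (K = 5, via the repeat opcode) than the irregular 10110 (K = 6, via the literal opcode only), so it receives higher algorithmic probability; Equation 2 expresses the same logic over a universal machine.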
Kolmogorov complexity can be used to mathematically define the randomness of sequences, identifying a sequence X as random if \ell(X) - K(X) is small (Kolmogorov, 1965). While not necessarily following the form of this definition, psychologists have preserved its spirit in proposing that the perceived randomness of a sequence increases with its complexity. Falk and Konold (1997) consider a particular measure of complexity they call the "difficulty predictor" (DP), calculated by counting the number of runs (sub-sequences containing only heads or tails) and adding twice the number of alternating sub-sequences. For example, the sequence TTTHHHTHTH is a run of tails, a run of heads, and an alternating sub-sequence, DP = 4. If there are several partitions into runs and alternations, DP is calculated on the partition that results in the lowest score.[1]

Falk and Konold (1997) showed that DP correlates remarkably well with subjective randomness judgments. Figure 1 shows the results of Falk and Konold (1997, Experiment 1), in which 97 participants each rated the apparent randomness of ten binary sequences of length 21, with each sequence containing between 2 and 20 alternations (transitions from heads to tails or vice versa). The mean ratings show the classic preference for overalternating sequences: the sequences perceived as most random are those with 14 alternations, while a truly random process would be most likely to produce sequences with 10 alternations. The mean DP has a similar profile, achieving a maximum at 12 alternations and giving a correlation of r = 0.93.

[Figure 1: Mean randomness ratings from Falk and Konold (1997, Experiment 1), shown with the predictions of DP and the finite state model. The x-axis gives the number of alternations (2-20); the y-axis gives subjective randomness; the curves are the human ratings, the DP model, and the finite state model.]

[1] We modify DP slightly from the definition of Falk and Konold (1997), who seem to require alternating sub-sequences to be of even length. The equivalence results shown below also hold for their original version, but it forces the counter-intuitive interpretation of HTHTH as a run of a single head followed by an alternating sub-sequence, DP = 3; under our formulation it is parsed as a single alternating sequence, DP = 2.
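Because DP is defined by a minimization over partitions into runs and alternations, it can be computed by dynamic programming over prefixes of the sequence. The sketch below (function names are ours) implements the modified DP described above, charging 1 per run and 2 per alternating sub-sequence, and reproduces the worked examples: TTTHHHTHTH gives 4 and HTHTH gives 2.

```python
def is_run(seg: str) -> bool:
    """A run contains only heads or only tails."""
    return len(set(seg)) == 1

def is_alternation(seg: str) -> bool:
    """An alternating sub-sequence switches symbol at every step."""
    return len(seg) >= 2 and all(a != b for a, b in zip(seg, seg[1:]))

def DP(sequence: str) -> int:
    """Difficulty predictor: min over partitions of (#runs + 2 * #alternations)."""
    n = len(sequence)
    best = [0] + [float("inf")] * n   # best[i] = cheapest parse of the first i symbols
    for i in range(1, n + 1):
        for j in range(i):            # try every last segment sequence[j:i]
            seg = sequence[j:i]
            if is_run(seg):
                best[i] = min(best[i], best[j] + 1)
            elif is_alternation(seg):
                best[i] = min(best[i], best[j] + 2)
    return int(best[n])

print(DP("TTTHHHTHTH"))   # 4: TTT | HHH | THTH
print(DP("HTHTH"))        # 2: one alternating sequence (cf. the footnote above)
print(DP("HHHHH"))        # 1: one run
```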
Subjective randomness as a statistical inference

Psychologists have claimed that the way we think about chance is inconsistent with probability theory (e.g., Kahneman & Tversky, 1972). For example, people are willing to say that X1 = HHTHT is more random than X2 = HHHHH, even though the two are equally likely to arise by chance: P(X1 | random) = P(X2 | random) = (1/2)^5. However, many of the apparently irrational aspects of human judgments can be understood by considering the possibility that people are assessing a different kind of probability: instead of P(X | random), we evaluate P(random | X) (Griffiths & Tenenbaum, 2001).

The statistical basis of subjective randomness becomes clear if we view randomness judgments in terms of a signal detection task (cf. Lopes, 1982; Lopes & Oden, 1987). On seeing a stimulus X, we consider two hypotheses: X was produced by a random process, or X was produced by a regular process. Finding regularities is an important part of identifying predictable processes, a fundamental component of induction (Lopes, 1982). The decision about the source of X can be formalized as a Bayesian inference,

    \frac{P(\text{random} \mid X)}{P(\text{regular} \mid X)} = \frac{P(X \mid \text{random})}{P(X \mid \text{regular})} \cdot \frac{P(\text{random})}{P(\text{regular})},    (3)

in which the posterior odds in favor of a random generating process are obtained from the likelihood ratio and the prior odds. The only part of the right hand side of the equation affected by X is the likelihood ratio, which led Griffiths and Tenenbaum (2001) to define the subjective randomness of X as

    \text{random}(X) = \log \frac{P(X \mid \text{random})}{P(X \mid \text{regular})},    (4)

being the evidence that X provides towards the conclusion that it was produced by a random process.
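To see Equations 3 and 4 in action we need a concrete P(X | regular). As a deliberately crude stand-in we use the code-length correspondence noted in the introduction and set P(X | regular) proportional to 2^(-DP(X)), reusing the DP function sketched above. That choice is our illustrative assumption only, not the paper's fitted model, whose regular process is a particular probabilistic finite state automaton; the two need not agree in detail.

```python
def randomness(x: str) -> float:
    """Equation (4) with log base 2 (bits)."""
    log2_p_random = -len(x)                 # log2 of (1/2)^len(x), exact for a fair coin
    log2_p_regular = -DP(x)                 # assumed stand-in, proportional to 2^(-DP(x))
    return log2_p_random - log2_p_regular

def posterior_odds(x: str, prior_odds: float = 1.0) -> float:
    """Equation (3): posterior odds = likelihood ratio * prior odds."""
    return (2.0 ** randomness(x)) * prior_odds

for x in ["HHTHT", "HHHHH"]:
    print(x, "random(X) =", randomness(x), "odds =", posterior_odds(x))
# Both sequences have P(X | random) = (1/2)^5, but HHHHH parses as a single
# run (DP = 1) while HHTHT needs DP = 3, so random(HHTHT) > random(HHHHH):
# HHTHT provides more evidence for the random hypothesis, matching intuition.
```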