Dynamical Systems, Prediction, Inference. Time Series Data


Modeling time series data: understanding the underlying processes.

Time series data
Values of a variable or variables that come with a time stamp. Time can be continuous, $x(t)$, or discrete, $\dots, x_{t-T}, \dots, x_t, x_{t+1}, \dots, x_{t+T}, \dots$ The intervals between measurements do not have to be equal.

Analysis tools
Many! Some examples:
Frequency analysis (Fourier, wavelet, etc.): find a representation of the time-dependent signal $x(t)$ in the frequency domain, using different basis functions (sinusoids, step functions, ...).
Nonlinear analysis: Poincaré map, deterministic chaos.

Models
Linear models. Examples: LPC (linear predictive coding), linear filters.
Nonlinear models: dynamical systems modeling, i.e. a model of the underlying process.
Dynamical system = {state space $S$; dynamic $f: S \to S$; initial condition}.
The dynamic (the equations of motion) can be written as $\dot{s} = f(s)$ in continuous time, as $s(t) = f[s(t-1)]$ in discrete time, or as transition probabilities $p(s' \mid s)$, $s, s' \in S$.

Tasks
Prediction: infer the future from the past.
Understanding: build a model that gives insight into the underlying physical phenomena.
Diagnostics: what possible pasts may have led to the current state?

Applications
Financial and economic forecasting, environmental modeling, medical diagnosis, industrial equipment diagnosis, speech recognition, and many more.

Linear models and LPC
Linear model: $y = \sum_{i \in M} \alpha_i x_i$.
Possible uses:
Interpolation: $y = \hat{x}_k$, $k \notin M$.
Prediction: $x_i = x_t$; $y = \hat{x}_{t+1}$.
Task: find the parameters $\alpha_i$ that minimize the mean squared error (MSE).

Prediction (linear filters)
Model: $\hat{x}_{t+1} = \sum_{i=0}^{T} \alpha_i x_{t-i}$.
MSE: $\frac{1}{2} \left\langle (x_{t+1} - \hat{x}_{t+1})^2 \right\rangle$.
Setting the derivative with respect to each $\alpha_i$ to zero yields a system of linear equations, and hence the solution. Expanding the MSE:
$\frac{1}{2} \left\langle (x_{t+1} - \hat{x}_{t+1})^2 \right\rangle = \frac{1}{2} \sum_i \sum_j \alpha_i \alpha_j \langle x_{t-i} x_{t-j} \rangle - \sum_i \alpha_i \langle x_{t-i} x_{t+1} \rangle + \frac{1}{2} \langle x_{t+1}^2 \rangle$
Derivative: $\sum_j \alpha_j \langle x_{t-i} x_{t-j} \rangle - \langle x_{t-i} x_{t+1} \rangle = 0$.
In matrix notation: $\vec{\alpha} C = \vec{\gamma}$, with $C_{ij} = \langle x_{t-i} x_{t-j} \rangle$ and $\gamma_i = \langle x_{t-i} x_{t+1} \rangle$. Solution: $\vec{\alpha} = \vec{\gamma} C^{-1}$.

Issues
We need to compute an ensemble average; in reality we replace it by a time average. What if the data are non-stationary? Adaptive filters: replace the average by instantaneous values, assume a slowly changing objective function, and change the parameters in the direction of the gradient at each time step. Frequently used! Example: echo cancellation. But most processes are nonlinear.

Dynamical Systems
Distinguish cases according to whether states are discrete or continuous, and whether time is discrete or continuous. Both continuous: model with differential equations. Both discrete: use HMMs. Spatial system? Space can also be discrete or continuous: cellular automata; reaction-diffusion equations.

Stochastic processes
A chain of random variables $X_t$ with realization $\dots, x_{t-T}, \dots, x_t, x_{t+1}, \dots, x_{t+T}, \dots$ (past and future).
Alphabet: $x_t \in \mathcal{A}$.
Block of length $T$: $X_t^T = X_t X_{t+1} \dots X_{t+T-1}$.
Length-$T$ word: $x_t^T = x_t x_{t+1} \dots x_{t+T-1} \in \mathcal{A}^T$.

Uniform process: equal-length sequences occur with the same probability:
$\Pr(X_t^T) = \Pr(X_t X_{t+1} \dots X_{t+T-1}) = \frac{1}{|\mathcal{A}|^T}$
Example: fair coin, $\mathcal{A} = \{H, T\}$, $\Pr(H) = \Pr(T) = 0.5$.

Independent, identically distributed (IID) process:
$\Pr(X_t^T) = \prod_{i=1}^{T} \Pr(X_{t+i-1})$
Example: biased coin with $\Pr(H) = p$, $\Pr(T) = 1 - p = q$; with $n$ the number of heads in the sequence, $\Pr(X_t^T) = p^n q^{T-n}$.

L-block process. Example: a 2-block process with no consecutive zeros, $\mathcal{A} = \{0, 1\}$:
$\Pr(00) = 0$; $\Pr(01) = 0$; $\Pr(10) = 0.5$; $\Pr(11) = 0.5$;
$\Pr(11101011) = \Pr(11)\Pr(10)\Pr(10)\Pr(11) = 0.0625$.

Markov process: $\Pr(\{X_t\}) = \prod_t P(X_{t+1} \mid X_t)$.
Example: no consecutive zeros (the Golden Mean Process), $\mathcal{A} = \{0, 1\}$:
$\Pr(0 \mid 0) = 0$; $\Pr(1 \mid 0) = 1$; $\Pr(0 \mid 1) = \Pr(1 \mid 1) = 0.5$.

n-th order Markov process:
$\Pr(X_t \mid X_{t-1}, \dots, X_{t-n}, \dots) = \Pr(X_t \mid X_{t-1}, \dots, X_{t-n})$
This is more general than an n-block process:
$\Pr(X_1^n X_2^n) = P(X_2^n \mid X_1^n) P(X_1^n)$, which equals $P(X_2^n) P(X_1^n)$ only if the blocks are independent.

Hidden Markov Process
An internal n-th order Markov process over internal states $S$.
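Before moving on to hidden states, the fully observed Markov example above is easy to simulate by sampling its transition probabilities. A minimal sketch of the Golden Mean Process (the sequence length, seed, and starting symbol are illustrative choices, not part of the definition):

```python
import random

# Golden Mean Process: no consecutive zeros.
# Pr(0|0) = 0, Pr(1|0) = 1, Pr(0|1) = Pr(1|1) = 0.5.
def golden_mean_sequence(n, seed=1):
    rng = random.Random(seed)
    seq = [1]  # start from symbol 1 (an allowed configuration)
    for _ in range(n - 1):
        if seq[-1] == 0:
            seq.append(1)                    # after a 0, a 1 is forced
        else:
            seq.append(rng.choice([0, 1]))   # after a 1, fair choice
    return "".join(map(str, seq))

s = golden_mean_sequence(10000)
# The defining constraint: the word "00" never occurs.
print("00" in s)  # → False
```

Sampling the conditional distribution $\Pr(\cdot \mid x_t)$ at each step is exactly what the first-order Markov property licenses: the next symbol depends on nothing but the current one.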
The internal dynamic obeys
$\Pr(S_t \mid S_{t-1}, \dots, S_{t-n}, \dots) = \Pr(S_t \mid S_{t-1}, \dots, S_{t-n})$
When the system is in state $s$, it produces a symbol $x$ with probability $p(x \mid s)$: the observation process.
Observations: $\dots, x_{t-T}, \dots, x_t, x_{t+1}, \dots, x_{t+T}, \dots$
The observed process is described by the block distribution $\Pr(X^T)$.

Hidden Markov Model
Model the underlying dynamics as a hidden Markov process.
[Figure: K hidden states, state 1 ... state K, linked by transition probabilities such as p(s=2|s=1) and p(s=1|s=2), each emitting the observations x(t) with probabilities p(x|s).]

Fitting an HMM to data
Infer the most likely dynamical system from the observable sequence of outputs: find maximum-likelihood parameters (Bayesian estimation). The distributional parameters are the transition probabilities $p(s(t) \mid s(t-1))$, the observation probabilities $p(x \mid s)$, and the initial state distribution $p(s(0))$.

Baum-Welch Algorithm
An EM algorithm: iteratively estimate the parameters and calculate the likelihood of the data, i.e. the expected frequencies given the model.
1. Choose arbitrary parameters.
2. Calculate the likelihood.
3. Substitute the expected frequencies so obtained for the old parameters.
Iterate until there is no improvement.

Learning HMMs
The Baum-Welch algorithm will find a local optimum. How should we interpret the resulting hidden-state model? Are there special HMMs which have a (physical) meaning? How about prediction?

Let's play a little game...
Past: 111111111111111111111111111111111111111111111111 Future? Your model?
Past: 01010101010101010101010101010101 Future? Your model?
Past: 001010001001001010100010101000001 Future? Your model?
Past: 001010001001001010100010101000001 Future: 0 ...
Model: two states $s_1$, $s_2$ with
$p(s=2 \mid s=1, x=1) = 1$; $p(s=1 \mid s=1, x=0) = 1$; $p(s=1 \mid s=2, x=0) = 1$;
$p(x=0 \mid s=1) = 0.5$; $p(x=0 \mid s=2) = 1$.

Causal States (J. Crutchfield, 1989)
Some histories are equivalent with regard to the development of the future!
Equivalence class: two histories $h$ and $h'$ are equivalent if the distribution over the future given either of those histories is the same: $p(\mathrm{future} \mid h) = p(\mathrm{future} \mid h')$ implies $h \sim h'$.
Define causal states $s$: for all $h$ in the same equivalence class, $p(\mathrm{future} \mid s) = p(\mathrm{future} \mid h)$.
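The two-state model from the game can be run forward as a generator, which makes its structure explicit. A minimal sketch (only the probabilities come from the model above; the simulation scaffolding, seed, and length are illustrative):

```python
import random

# Edge-emitting two-state model from the game:
#   p(x=0|s=1) = 0.5,  p(x=0|s=2) = 1
#   transitions: (s=1, x=1) -> s=2; (s=1, x=0) -> s=1; (s=2, x=0) -> s=1
def generate(n, seed=2):
    rng = random.Random(seed)
    state, out = 1, []
    for _ in range(n):
        if state == 1:
            x = rng.choice([0, 1])   # state 1 emits 0 or 1 with prob 0.5 each
            state = 2 if x == 1 else 1
        else:
            x = 0                    # state 2 emits 0 with certainty
            state = 1
        out.append(x)
    return "".join(map(str, out))

seq = generate(10000)
# Every 1 drives the system into state 2, which is forced to emit 0,
# so "11" never appears -- exactly why the predicted next symbol
# after a past ending in 1 is 0.
print("11" in seq)  # → False
```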
Causal States
Model of the underlying dynamical system:
The causal state set $S$: the causal states model the underlying states of the system.
Conditional transition probabilities $p(s', x \mid s)$ [$x$ is the next observation]: these model the dynamics of the system.
Initial conditions.

The $\epsilon$-machine (J. Crutchfield, 1989)
A set containing the mapping from histories to causal states, $s = \epsilon(h)$ — that is, the set of causal states $S$ — together with the transition probabilities $T_{ij}^{(x)} = p(s_j, x \mid s_i)$:
$M = \{S, \{T^{(x)}, x \in \mathcal{A}\}\}$

Properties of the $\epsilon$-machine
Causal shielding: conditional independence of future and past, given the causal states.
Deterministic. Markovian. Optimal predictor. Minimal size. Unique.
"Probabilistic bisimulation" (1989); the term bisimulation was coined by R. Milner in the 1980s in the CS literature.

Conditional independence of future & past
Past and future are independent given the causal state: $p(\mathrm{future}, \mathrm{past} \mid s) = p(\mathrm{future} \mid s)\, p(\mathrm{past} \mid s)$. The causal states shield past and future from each other: a Markov chain. Why is this true? By definition! Because $p(\mathrm{future} \mid s) = p(\mathrm{future} \mid \mathrm{past})$, and therefore $p(\mathrm{future} \mid \mathrm{past}, s) = p(\mathrm{future} \mid s)$, since $p(\mathrm{future} \mid \mathrm{past}, s) = p(\mathrm{future} \mid \mathrm{past})$. [Data processing inequality.]

Deterministic
The $\epsilon$-machine is deterministic in the sense that, given a causal state $s$ at time $t$ and a measurement $x$, there is a unique next causal state: a deterministic mapping $f: (s, x) \to s'$. To prove this, we need to assume that the process is stationary!

Markovian
$p(s_t \mid s_{t-1}, s_{t-2}, \dots) = p(s_t \mid s_{t-1})$

Optimal predictor
Let $r$ be a state from a rival partition $R$. Then $H[\mathrm{future} \mid s] \le H[\mathrm{future} \mid r]$, because $H[\mathrm{future} \mid s] = H[\mathrm{future} \mid \mathrm{past}] \le H[\mathrm{future} \mid r]$. Therefore the entropy rate of the causal states equals the entropy rate of the time series:
$h(s) = \lim_{L \to \infty} \tfrac{1}{L} H[X^L_{\mathrm{future}} \mid s] = \lim_{L \to \infty} \tfrac{1}{L} H[X^L_{\mathrm{future}} \mid \mathrm{past}] = h$
Causal states contain every difference (in the past) that makes a difference (to the future). Causal states are sufficient statistics for predicting the future!
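The equivalence-class construction can be checked empirically for the Golden Mean Process: estimate $p(\text{next symbol} \mid \text{history})$ for every observed short history and count the distinct predictive distributions. A naive sketch of the idea (not an $\epsilon$-machine reconstruction algorithm; the history length, sample size, seed, and rounding are arbitrary illustrative choices):

```python
import random
from collections import defaultdict

# Simulate the Golden Mean Process (no consecutive zeros).
rng = random.Random(3)
seq = [1]
for _ in range(200000):
    seq.append(1 if seq[-1] == 0 else rng.choice([0, 1]))

# Estimate Pr(next = 1 | last 3 symbols) for every observed history.
counts = defaultdict(lambda: [0, 0])
for i in range(3, len(seq)):
    h = tuple(seq[i - 3 : i])
    counts[h][seq[i]] += 1

probs = {h: round(c[1] / sum(c), 1) for h, c in counts.items()}

# Histories ending in 0 force a 1; histories ending in 1 give a fair coin.
# All observed histories collapse onto just two predictive distributions:
# these are the two causal states.
print(sorted(set(probs.values())))  # → [0.5, 1.0]
```

Lengthening the histories adds no new predictive distributions, which is exactly the statement that the causal states, not the raw histories, are the right notion of state.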
Minimal
Causal states are the most compact description out of all partitions of histories with equal predictive power: $H[S] \le H[R]$. This is because rival partitions $R$ have the same predictive power only when they are refinements of $S$; otherwise their prediction is a statistical mixture of the causal-state predictions, and mixing increases the entropy:
$H\!\left[\sum_i c_i p_i\right] \ge \sum_i c_i H[p_i]$
Causal states are minimal sufficient statistics!

Unique
Any rival partition which is as predictive and of the same size is the same as the causal-state partition (up to relabeling of the states): for a rival partition $R$ of the same size as $S$, $H[R] = H[S]$, and "as predictive as" means $H[\mathrm{future} \mid R] = H[\mathrm{future} \mid S]$. Causal states are the unique minimal sufficient statistics!

The $\epsilon$-machine
Optimal predictor: lower prediction error than any rival model.
Minimal size: smallest of the prescient rivals.
Unique: any equally small optimal predictor is equivalent to it.
Model of the process: it reproduces all of the process's statistics and renders the process's future independent of its past.
You can calculate the entropy rate and all other statistics of a process from its $\epsilon$-machine!

Issues
We need an efficient algorithm for finding the causal-state partition and the transition probabilities. What if we are content with some prediction error because we want a more compact representation? Is there a systematic way of trading error for model complexity? This will be addressed in the next lecture...

Readings
LPC: there are many sources; "Numerical Recipes in C" gives an extremely useful and compact presentation.
HMMs: Rabiner, IEEE, 1994, and references therein.
Causal states and the $\epsilon$-machine: Crutchfield and Shalizi, 1999, and references therein.
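The entropy inequality used in the minimality argument, $H[\sum_i c_i p_i] \ge \sum_i c_i H[p_i]$, is just concavity of the Shannon entropy. A small numeric check (the two distributions and the mixture weights are arbitrary illustrations):

```python
import numpy as np

def H(p):
    """Shannon entropy in bits; 0 log 0 is taken as 0."""
    p = np.asarray(p, dtype=float)
    nz = p[p > 0]
    return float(-(nz * np.log2(nz)).sum())

# Two predictive distributions (think: two causal states) and mixture weights.
p1 = np.array([0.5, 0.5])   # fair-coin prediction
p2 = np.array([1.0, 0.0])   # deterministic prediction
c = np.array([0.4, 0.6])    # mixture weights

mixture = c[0] * p1 + c[1] * p2
lhs = H(mixture)                      # entropy of the mixture
rhs = c[0] * H(p1) + c[1] * H(p2)    # mixture of the entropies

print(lhs >= rhs)  # → True
```

Here the mixture is $[0.8, 0.2]$, whose entropy (about 0.72 bits) exceeds the weighted average of the component entropies (0.4 bits), as the inequality requires.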