A Novel Measure Inspired by Lyapunov Exponents for the Characterization of Dynamics in State-Transition Networks

Bulcsú Sándor 1,*, Bence Schneider 1, Zsolt I. Lázár 1 and Mária Ercsey-Ravasz 1,2,*

1 Department of Physics, Babes-Bolyai University, 400084 Cluj-Napoca, Romania; [email protected] (B.S.); [email protected] (Z.I.L.)
2 Network Science Lab, Transylvanian Institute of Neuroscience, 400157 Cluj-Napoca, Romania
* Correspondence: [email protected] (B.S.); [email protected] (M.E.-R.)

Citation: Sándor, B.; Schneider, B.; Lázár, Z.I.; Ercsey-Ravasz, M. A Novel Measure Inspired by Lyapunov Exponents for the Characterization of Dynamics in State-Transition Networks. Entropy 2021, 23, 103. https://doi.org/10.3390/e23010103

Received: 15 December 2020; Accepted: 7 January 2021; Published: 12 January 2021

Abstract: The combination of network science, nonlinear dynamics and time series analysis provides novel insights and analogies between the different approaches to complex systems. By combining the considerations behind the Lyapunov exponent of dynamical systems and the average entropy of transition probabilities for Markov chains, we introduce a network measure for characterizing the dynamics on state-transition networks, with special focus on differentiating between chaotic and cyclic modes. One important property of this Lyapunov measure is its non-monotonous dependence on the cyclicity of the dynamics. Motivated by providing proper use cases for studying the new measure, we also lay out a method for mapping time series to state-transition networks by phase-space coarse graining. For both discrete-time and continuous-time dynamical systems, the Lyapunov measure extracted from the corresponding state-transition networks exhibits behavior similar to that of the Lyapunov exponent. In addition, it demonstrates a strong sensitivity to boundary crises, suggesting applicability in predicting the collapse of chaos.

Keywords: Lyapunov exponents; state-transition networks; time series analysis; dynamical systems

1. Introduction

Complex network theory has had many interdisciplinary applications in different domains of social sciences, epidemiology, economy, neuroscience, biology, etc. [1]. In recent years, different network approaches have also been developed for nonlinear time series analysis; for a detailed review see [2]. Properly mapping a discrete time series to a complex network, so that the tools of network theory can be applied in an efficient manner, is not a trivial question, and in the case of continuous-time dynamical systems it can be even more complicated. There are several approaches to this problem; here we mention three large categories [2]: (1) Proximity networks are created based on the statistical or metric proximity of two time series segments. The most studied variants of proximity networks are recurrence networks [3]. These have found many applications in the characterization of discrete [4] and continuous dynamical systems [5,6], in the classification of medical signals [7], and in the analysis of two-phase flows [8]. A special version is the joint recurrence network, which was developed for the detection of synchronization phenomena in coupled systems [9]. (2) Visibility graphs capture the convexity of subsequent observations [10]. The methods of natural and horizontal visibility graphs belong to this class.
Visibility graphs were used for the analysis of geophysical time series [11], for the characterization of seismic activity [12] and two-phase fluid flows [13], and for the algorithmic detection of autism [14]. (3) State-transition networks (STNs) represent the transition probabilities between discretized states of the dynamics. These can be threshold-based networks [15] or ordinal partition networks [16]. They have also found many applications, in the domains of biological regulatory networks [17], signals of chaotic circuits [18], electrocardiography [19], economic models [20], and climate time series [21]. STNs are in fact an equivalent representation of discrete-time finite-state Markov chains with a time-homogeneous transition matrix [22]. The adjacency matrix of the STN is the transition matrix of the Markov chain, and it is a right stochastic matrix (a real square matrix with each row summing up to one [23]) with many known properties [22]; see Section 2.1 for details. One of the frequently used entropy-type measures for characterizing Markov chains is the Kolmogorov–Sinai entropy [24].

While these approaches have been extremely useful, as shown by the wide range of applications, the focus has so far been on applying graph-theoretical tools to obtain a new understanding of the dynamics, employing several traditional network measures [1,2]. Here we would like to take the inverse step and generalize a prominent measure from nonlinear dynamics to STNs. The Lyapunov exponent is one of the most widely used quantities for analyzing dynamical systems [25]. However, its estimation for time series, when the dynamical equations are not available, is seldom trivial [26]. Here we introduce an analogous measure defined on STNs that can also become an effective tool in time series analysis.

Inspired by the theory of dynamical systems, we look at the trajectory length of a system evolving over discrete time according to the transition probabilities defining its STN. We show that, by associating an appropriate length measure with the transitions, the combined ensemble and time average of the length of a single step yields the Kolmogorov–Sinai entropy. The new quantity, which we name the Lyapunov measure, is defined in analogy to the Lyapunov exponent by estimating the variance of trajectory lengths during a random walk over the network. We find that the Lyapunov measure is able to distinguish between periodic and chaotic time series and, furthermore, to detect crisis-type bifurcations by presenting pronounced peaks in the vicinity of the corresponding parameter values.

After a short description of STNs, we present the new Lyapunov measure and its properties. We test and compare its behavior with the Kolmogorov–Sinai entropy on a theoretical network model with cyclic properties, the discrete-time Hénon map [27], and the continuous-time Lorenz system [28].

2. Results

2.1. State-Transition Networks

Mapping a dynamical system into an STN requires us to assign the different states of the dynamics to certain nodes of the network [15,16].
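To make this assignment step concrete, the following minimal Python sketch illustrates one way of performing the phase-space coarse graining mentioned in the abstract: a scalar time series is binned into equal-width cells, each visited cell becomes a node, and the observed one-step transitions are counted and row-normalized into a right stochastic matrix. The function name build_stn, the number of bins, and the logistic-map example are our own illustrative choices and are not taken from the paper; the authors' actual coarse-graining procedure may differ in its details.

```python
import numpy as np

def build_stn(series, n_bins=20):
    """Coarse grain a scalar time series into n_bins equal-width cells and
    estimate the transition matrix P of the resulting state-transition
    network; P is right stochastic, i.e., every row sums to one."""
    # 1) phase-space coarse graining: assign each sample to a bin (state)
    edges = np.linspace(series.min(), series.max(), n_bins + 1)
    raw = np.clip(np.digitize(series, edges) - 1, 0, n_bins - 1)
    # keep only the visited cells and relabel them 0..N-1
    uniq, states = np.unique(raw, return_inverse=True)
    n_states = uniq.size

    # 2) count the observed transitions between consecutive states
    counts = np.zeros((n_states, n_states))
    for i, j in zip(states[:-1], states[1:]):
        counts[i, j] += 1.0

    # 3) normalize rows; a state with no observed outgoing transition
    #    (at most the final one) gets a self-loop to keep P stochastic
    for i in range(n_states):
        s = counts[i].sum()
        counts[i] = counts[i] / s if s > 0 else np.eye(n_states)[i]
    return counts

# toy usage (our choice of example system): a chaotic logistic-map series
x = np.empty(10000)
x[0] = 0.3
for t in range(1, x.size):
    x[t] = 4.0 * x[t - 1] * (1.0 - x[t - 1])
P = build_stn(x, n_bins=20)
```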
Directed edges of the network correspond to transitions between the discrete (or discretized) states of the system, characterized by the transition probability p_ij from state i to j, where the total probability of leaving a node equals one (we have a right stochastic matrix),

    \sum_{j} p_{ij} = 1 , \qquad p_{ijk} = p_{ij} p_{jk} .    (1)

Trajectories consisting of several consecutive time steps of the dynamics determine a path between distant nodes i and k of the STN; the probability of selecting a particular path, e.g., i → j → k, given that the trajectory starts from node i, is then given by the product of the respective transition probabilities. The transition probabilities p_ij introduced in Equation (1) are conditional by definition, assuming that the starting node of the transition is i. Therefore, the non-conditional probability of visiting edge i → j can be expressed by the Bayes formula,

    q_{ij} = x_i p_{ij} ,    (2)

where x_i is the probability of residing in node i. Analogously to a geometric distance, one may assign lengths l_ij as weights to the respective edges [29], taking low values for high probabilities and vice versa:

    l_{ij} = -\ln p_{ij} , \qquad L_{ijk} = l_{ij} + l_{jk} .    (3)

The total length L_ijk of such an i → j → k path is then given by the sum of the lengths of the links along the path. A time series can hence be encoded by an STN as described above: nodes of the network represent the spatial structure, while the time-like behavior is encoded by the weighted and directed edges.

Mathematically, the weighted adjacency matrix of transition probabilities, (P)_ij = p_ij, determines the time evolution of an ensemble of trajectories on the STN:

    x(t+1) = P^T x(t) , \qquad \| x(t) \| = 1 ,    (4)

where x(t) = (x_1(t), ..., x_i(t), ..., x_N(t))^T, and x_i(t) denotes the probability of finding the system in state (node) i at time t. This process may also be seen as a time-homogeneous discrete-time Markov chain with a finite state space [24]. For STNs the stationary distribution is given by the renormalized eigenvector corresponding to the unit eigenvalue,

    P^T x^* = 1 \cdot x^* , \qquad x^*(t+1) = x^*(t) = x^* , \qquad \| x^* \| = 1 .    (5)

Note that since the evolution operator P^T of the STNs considered here is a stochastic irreducible matrix [23], its largest eigenvalue is always 1 and the existence of the corresponding positive eigenvector is guaranteed by the Perron–Frobenius theorem [30]. For aperiodic transition matrices the long-term distribution is independent of the initial conditions, the system evolving over time to the stationary state x^*. In the case of periodic solutions, however, the long-time averaged distribution is also given by the stationary solution,

    \lim_{t \to \infty} \frac{1}{t} \sum_{k=1}^{t} x(k) = x^* .    (6)

2.2. Lyapunov Measure

In the context of dynamical systems theory, Lyapunov exponents are the best known quantities used to characterize a system's behavior [25,26]. Here, we introduce an analogous quantity for STNs. Given an STN, one may define trajectories similarly to random walks on graphs: for any given initial state i, the next state j is chosen randomly, using the transition probabilities p_ij.
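As a companion to the build_stn sketch above, and under the same caveat that the function name, the number of walks, and the walk length are our own hypothetical choices rather than the authors' settings, the following Python sketch generates such random-walk trajectories on an STN and accumulates the edge lengths l_ij = -ln p_ij along them. The per-step mean of the resulting trajectory lengths corresponds to the ensemble and time average of single-step lengths mentioned in the introduction, while their variance is the raw ingredient of the Lyapunov measure defined in this section; the exact normalization used by the authors is given later in the text.

```python
import numpy as np

rng = np.random.default_rng(42)

def walk_lengths(P, n_walks=2000, n_steps=100):
    """Simulate random walks on the STN given by the right stochastic
    matrix P and return the total length of each walk, where every
    traversed edge contributes l_ij = -ln p_ij."""
    n = P.shape[0]
    lengths = np.zeros(n_walks)
    for w in range(n_walks):
        i = rng.integers(n)               # random initial state
        total = 0.0
        for _ in range(n_steps):
            j = rng.choice(n, p=P[i])     # next state drawn with probability p_ij
            total += -np.log(P[i, j])     # accumulate the edge length l_ij
            i = j
        lengths[w] = total
    return lengths

# toy 3-state right stochastic matrix; in practice P could come from a
# coarse-grained time series, e.g., the build_stn sketch above
P = np.array([[0.0, 0.7, 0.3],
              [0.5, 0.0, 0.5],
              [1.0, 0.0, 0.0]])

n_steps = 100
L = walk_lengths(P, n_steps=n_steps)
entropy_estimate = L.mean() / n_steps   # ensemble and time average of single-step lengths
length_variance = L.var()               # variance of trajectory lengths, basis of the Lyapunov measure
```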