Reconstructing Bifurcation Diagrams with Lyapunov Exponents from Only Time-Series Data Using an Extreme Learning Machine
NOLTA, IEICE
Yoshitaka Itoh 1 a), Yuta Tada 1, and Masaharu Adachi 1
1 Department of Electrical and Electronic Engineering, Tokyo Denki University, 5 Senju-Asahicho, Adachi-ku, Tokyo, Japan
Received March 12, 2016; Revised August 1, 2016; Published January 1, 2017
Abstract: We describe a method for reconstructing bifurcation diagrams with Lyapunov exponents for chaotic systems using only time-series data. Reconstructing a bifurcation diagram is a time-series prediction problem: the reconstruction predicts the oscillatory patterns of the time-series data as parameters change. We therefore expect that the reconstruction of bifurcation diagrams could be applied to real-world systems that have variable environmental factors, such as temperature, pressure, and concentration. With the conventional method, the accuracy of the reconstruction can be evaluated only qualitatively. In this paper, we estimate Lyapunov exponents for reconstructed bifurcation diagrams so that we can evaluate the reconstruction quantitatively. We also present the results of numerical experiments confirming that the features of the reconstructed bifurcation diagrams coincide with those of the original ones.

Key Words: Chaos, reconstruction of bifurcation diagram, time-series prediction, extreme learning machine
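To make the quantitative index concrete: for a one-dimensional map, the largest Lyapunov exponent is the long-run average of log |f′(x)| along an orbit, and it is positive for chaotic motion. The sketch below is our own minimal illustration for the logistic map; the function name and all parameter values are assumptions for illustration, not code from the paper.

```python
import numpy as np

def logistic_lyapunov(a, x0=0.4, n_transient=1000, n_iter=100000):
    """Estimate the largest Lyapunov exponent of the logistic map
    x(t+1) = a*x(t)*(1 - x(t)) as the orbit average of log|df/dx|."""
    x = x0
    for _ in range(n_transient):          # discard the transient
        x = a * x * (1.0 - x)
    total = 0.0
    for _ in range(n_iter):
        x = a * x * (1.0 - x)
        total += np.log(abs(a * (1.0 - 2.0 * x)))  # log|f'(x)|
    return total / n_iter

print(logistic_lyapunov(4.0))   # close to ln 2 ≈ 0.693 for a = 4.0
```

For a = 4.0 the exact value is ln 2; for a = 3.2, where the attractor is a period-2 cycle, the estimate is negative. The sign alone thus already separates chaotic from periodic parameter regions in a bifurcation diagram.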
Nonlinear Theory and Its Applications, IEICE, vol. 8, no. 1, pp. 2–14, © IEICE 2017. DOI: 10.1587/nolta.8.2

1. Introduction

The reconstruction of a bifurcation diagram (BD) is an application of time-series prediction. The time-series prediction problem has been studied by many researchers using various techniques, such as neural networks and learning algorithms. Tokunaga et al. have proposed the reconstruction of BDs from only time-series data [1]. In conventional time-series prediction, no dynamical system parameters are estimated. In contrast, the reconstruction of a BD estimates the number of significant parameters of the unknown system and recognizes oscillatory patterns as the parameters change.

Here we briefly describe the algorithm for reconstructing BDs that was proposed by Tokunaga et al. First, we make a time-series predictor for the target time-series data by using a neural network. Next, we estimate the number of significant parameters of the target dynamical system from the time-series predictor, by principal component analysis [2]. Then, we reconstruct the BDs in the estimated parameter space. This method employs a three-layer neural network as a time-series predictor. The neural network is trained via back-propagation (BP) learning of synaptic weights and biases [3].

Ogawa et al. [4] have reconstructed BDs using an algorithm that differs from the one proposed by Tokunaga et al. Their algorithm uses radial basis function networks with an additive parameter. Bifurcation diagrams are reconstructed by changing the parameter, and Lyapunov exponents are estimated from the radial basis function networks. This method is therefore limited to target systems that have an additive parameter. Bagarinao et al. [5–7] have proposed the reconstruction of BDs using nonlinear autoregressive (NAR) models as time-series predictors. They employ NAR models because neural networks have several shortcomings, such as difficulty in handling time series corrupted by dynamical noise. They claim that the advantage of using NAR models lies in both their robustness to noise and their simple structure [7].

Tokunaga et al. used neural networks with BP learning. Reconstructing a BD by this method requires the time-series predictor to be trained repeatedly; the computational cost is therefore very high. To avoid this problem, we have proposed [8, 9] the reconstruction of BDs using extreme learning machines (ELMs) [10]. Learning the connection weights of an ELM requires only linear regression, so the computational cost is much lower with an ELM than with BP learning. In Ref. [8], we demonstrated the reconstruction of BDs with an ELM for the Rössler equation, which has two parameters. In addition, we quantitatively evaluated the accuracy of the reconstruction of the BD in the case of two parameters.

In this paper, we estimate the Lyapunov exponents of the reconstructed systems, that is, we estimate typical indices for a chaotic system. We can then quantitatively evaluate the reconstruction using these indices. The target chaotic systems are the logistic map [11], the Hénon map [12], and the Rössler equation [13]. The rest of the paper is organized as follows.
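The pipeline reviewed above (one ELM predictor per time-series data set, trained by linear regression, followed by PCA on the trained weight vectors) can be sketched end to end. The code below is our own minimal illustration for the logistic map: every function name, the network size, and the parameter values are assumptions for illustration, not code from the paper. Note that all predictors share one fixed random hidden layer, so that the trained output weight vectors are directly comparable across data sets.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random hidden layer, shared by every predictor so that the trained
# output weight vectors w_n are directly comparable across data sets.
N_HIDDEN = 50
W_IN = rng.normal(size=(1, N_HIDDEN))
BIAS = rng.normal(size=N_HIDDEN)

def logistic_series(p, length=400, n_transient=1000, x0=0.3):
    """Time-series data set generated with parameter p."""
    x = x0
    for _ in range(n_transient):          # discard the transient
        x = p * x * (1.0 - x)
    s = np.empty(length)
    for t in range(length):
        x = p * x * (1.0 - x)
        s[t] = x
    return s

def train_elm(series):
    """One-step-ahead ELM predictor: only the output weights are trained,
    by ordinary least squares (linear regression)."""
    H = np.tanh(series[:-1, None] @ W_IN + BIAS)        # hidden outputs
    w_out, *_ = np.linalg.lstsq(H, series[1:], rcond=None)
    return w_out                                        # trained weights

# One trained weight vector per parameter value, for N = 20 nearby values.
params = np.linspace(3.5, 3.9, 20)
W = np.column_stack([train_elm(logistic_series(p)) for p in params])

# PCA on the deviation vectors: the cumulative contribution ratios indicate
# how many directions in weight space the parameter change excites.
dW = W - W.mean(axis=1, keepdims=True)
eigvals = np.sort(np.linalg.eigvalsh(dW @ dW.T))[::-1]  # descending order
ccr = 100.0 * np.cumsum(eigvals) / eigvals.sum()
n_significant = int(np.searchsorted(ccr, 80.0) + 1)
# Only one underlying parameter was varied, so we expect few dominant
# directions; the exact count depends on the random hidden layer.
print("estimated number of significant parameters:", n_significant)
```

The design point this sketch illustrates is the one made in the text: because the hidden layer is fixed, training each predictor is a single least-squares solve, so repeating it for every data set (and every candidate parameter value) stays cheap, in contrast to repeated BP training.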
In Section 2, we explain the reconstruction of the BDs. In Section 3, we explain the use of ELMs as time-series predictors. In Section 4, we describe the estimation of Lyapunov exponents for reconstructed BDs. In Section 5, we show the results of numerical experiments. Finally, we give conclusions in Section 6.

2. Reconstructing the bifurcation diagrams

In this section, we briefly summarize the reconstruction method proposed by Tokunaga et al. [1]. Let us assume that the nonlinear map that generates the time-series data is
y(t + 1) = G(p_n, y(t)), (n = 1, 2, ..., N), (1)

where N is the number of time-series data sets to be used in the reconstruction of BDs, G(·) is a nonlinear map, p_n ∈ R^A are the parameters (one set for each data set), and y(t) ∈ R^E and y(t + 1) ∈ R^E are the input and output of the nonlinear map, respectively. Here, A and E are the parameter dimensionality and the input and output dimensionality of the nonlinear map, respectively. We assume that each given parameter value is close to the others. The time-series data set generated with parameter p_n is called S_n. Thus, the time-series data sets S_1, S_2, ..., S_N correspond to the parameter sets p_1, p_2, ..., p_N. We use only these time-series data in reconstructing the BDs.

Next, we explain the method used for the reconstruction of BDs. First, we prepare a time-series predictor for each generated time-series data set. The time-series predictor is
y(t + 1) = P(w_n, y(t)), (2)

where P(·) is a nonlinear function, w_n ∈ R^B are the trained connection weights, and y(t) ∈ R^I and y(t + 1) ∈ R^I are the input and output of the predictor, respectively. Here, B and I are the number of trained connection weights and the input and output dimensionality of the nonlinear function, respectively. The trained connection weights w_1, w_2, ..., w_N correspond to the time-series data sets S_1, S_2, ..., S_N.

Next, we estimate the number of significant parameters and a corresponding low-dimensional space by principal component analysis (PCA). Before the PCA, we prepare the deviation vectors
δw_n = w_n − w_0, (n = 1, 2, ..., N), (3)

w_0 = (1/N) Σ_{n=1}^{N} w_n, (4)
where w_0 ∈ R^B is the mean vector of the connection weights. A variance–covariance matrix is constructed from the deviation vectors δw_n ∈ R^B. The eigenvalues and eigenvectors of the variance–covariance matrix are obtained by the PCA. The eigenvalues are then arranged in descending order:
λ_1 ≥ λ_2 ≥ ··· ≥ λ_B, (5)
and the eigenvector corresponding to eigenvalue λ_i is denoted by u_i ∈ R^B. Using the eigenvectors, the deviation matrix δW ∈ R^(B×N) is related to the matrix of principal component coefficients Γ ∈ R^(B×N) by

δW = [u_1, u_2, ..., u_B] Γ. (6)

Next, we estimate the number of significant parameters from the contribution ratios. We assume that the number of significant parameters is Q when the Q-th cumulative contribution ratio is the first to exceed 80%. The contribution ratio CR of the Q-th principal component is
CR = (λ_Q / Σ_{i=1}^{B} λ_i) × 100 [%], (7)

and the Q-th cumulative contribution ratio CCR of λ_1 to λ_Q is