
Reconstructing Bifurcation Diagrams with Lyapunov Exponents from Only Time-Series Data Using an Extreme Learning Machine


NOLTA, IEICE (Nonlinear Theory and Its Applications, IEICE), vol. 8, no. 1, pp. 2–14, © IEICE 2017, DOI: 10.1587/nolta.8.2


Yoshitaka Itoh 1 a), Yuta Tada 1, and Masaharu Adachi 1

1 Department of Electrical and Electronic Engineering, Tokyo Denki University, 5 Senju-Asahicho, Adachi-ku, Tokyo, Japan

a) [email protected]

Received March 12, 2016; Revised August 1, 2016; Published January 1, 2017

Abstract: We describe a method for reconstructing bifurcation diagrams with Lyapunov exponents for chaotic systems using only time-series data. The reconstruction of bifurcation diagrams is a time-series prediction problem: it predicts the oscillatory patterns of the time-series data when parameters change. We therefore expect that the reconstruction of bifurcation diagrams can be applied to real-world systems that have variable environmental factors, such as temperature, pressure, and concentration. In the conventional method, the accuracy of the reconstruction can be evaluated only qualitatively. In this paper, we estimate Lyapunov exponents for reconstructed bifurcation diagrams so that we can quantitatively evaluate the reconstruction. We also present the results of numerical experiments that confirm that the features of the reconstructed bifurcation diagrams coincide with those of the original ones.

Key Words: chaos, reconstruction of bifurcation diagram, time-series prediction, extreme learning machine

1. Introduction

The reconstruction of a bifurcation diagram (BD) is an application of time-series prediction. The time-series prediction problem has been studied by many researchers using various techniques, such as neural networks and learning algorithms. Tokunaga et al. proposed the reconstruction of BDs from only time-series data [1]. In conventional time-series prediction, no parameters are estimated. In contrast, the reconstruction of a BD estimates the number of significant parameters of the unknown system and recognizes oscillatory patterns as the parameters change. Here we briefly describe the algorithm for reconstructing BDs that was proposed by Tokunaga et al. First, we make a time-series predictor for the target time-series data by using a neural network. Next, we estimate the number of significant parameters of the target dynamical system from the time-series predictor, by principal component analysis [2]. Then, we reconstruct the BDs in the estimated parameter space. This method employs a three-layer neural network as a time-series predictor. The neural network is trained via back-propagation (BP) learning of synaptic weights and biases [3]. Ogawa et al. [4] have reconstructed BDs using an algorithm that is different from the one


proposed by Tokunaga. This algorithm uses radial basis function networks with an additive parameter. Bifurcation diagrams are reconstructed by changing the parameter, and Lyapunov exponents are estimated from the radial basis function networks. Therefore, this method is limited to target systems that have an additive parameter. Bagarinao et al. [5–7] have proposed the reconstruction of BDs using nonlinear autoregressive (NAR) models as time-series predictors. They employ NAR models because neural networks have several shortcomings, such as difficulty in handling time series corrupted by dynamical noise. They claim that the advantage of using NAR models lies in both their robustness to noise and their simple structure [7].

Tokunaga et al. used neural networks with BP learning. The reconstruction of a BD by this method requires the time-series predictor to be made repeatedly; therefore, the computational cost is very high. To avoid this problem, we have proposed [8, 9] the reconstruction of BDs using extreme learning machines (ELMs) [10]. The learning of connection weights in an ELM needs only linear regression. Therefore, the computational cost is much lower when using the ELM rather than BP learning. In Ref. [8], we demonstrated the reconstruction of BDs with an ELM for the Rössler equation, which has two parameters. In addition, we quantitatively evaluated the accuracy of the reconstruction of the BD in the case of two parameters. In this paper, we estimate the Lyapunov exponents of reconstructed systems, that is, we estimate typical indices for a chaotic system. We can then quantitatively evaluate the reconstruction using these indices. The target chaotic systems are the logistic map [11], the Hénon map [12] and the Rössler equation [13].

The rest of the paper is organized as follows. In Section 2, we explain the reconstruction of the BDs. In Section 3, we explain the use of ELMs as time-series predictors. In Section 4, we describe the estimation of Lyapunov exponents for reconstructed BDs. In Section 5, we show the results of numerical experiments. Finally, we give conclusions in Section 6.

2. Reconstructing the bifurcation diagrams

In this section, we briefly summarize the reconstruction method proposed by Tokunaga et al. [1]. Let us assume that the nonlinear map that generates the time-series data is

y(t + 1) = G(p_n, y(t)), (n = 1, 2, · · · , N), (1)

where N is the number of time-series data sets to be used in the reconstruction of BDs, G(·) is a nonlinear map, p_n ∈ R^A are parameters (one for each data set), and y(t) ∈ R^E and y(t + 1) ∈ R^E are the input and output of the nonlinear map, respectively. Here, A and E are the parameter dimensionality, and the input and output dimensionality of the nonlinear map, respectively. We assume that each given parameter value is close to the others. The time-series data set that is generated with parameter p_n is called S_n. Thus, time-series data sets S_1, S_2, · · · , S_N correspond to parameter sets p_1, p_2, · · · , p_N. We use only these time-series data in reconstructing the BDs. Next, we explain the method used for the reconstruction of BDs. First, we prepare a time-series predictor for each generated time-series data set. The time-series predictor is

y(t + 1) = P(w_n, y(t)) (2)

where P(·) is a nonlinear function, w_n ∈ R^B are trained connection weights, and y(t) ∈ R^I and y(t + 1) ∈ R^I are the input and output of the predictor. Here, B and I are the number of trained connection weights, and the input and output dimensionality of the nonlinear function, respectively. The trained connection weights w_1, w_2, · · · , w_N correspond to time-series data sets S_1, S_2, · · · , S_N. Next, we estimate the number of significant parameters and a corresponding low-dimensional space by principal component analysis (PCA). Before the PCA, we prepare the deviation vectors

δw_n = w_n − w_0, (n = 1, 2, · · · , N) (3)

w_0 = (1/N) Σ_{n=1}^{N} w_n (4)

where w_0 ∈ R^B is the mean vector of the connection weights. A variance–covariance matrix is constructed from the deviation vectors δw_n ∈ R^B. The eigenvalues and eigenvectors of the variance–covariance matrix are obtained by the PCA. The eigenvalues are then arranged in descending order

λ_1 ≥ λ_2 ≥ · · · ≥ λ_B (5)

and the eigenvector corresponding to eigenvalue λ_i is denoted by u_i ∈ R^B. Using the eigenvectors, the deviation matrix δW ∈ R^{B×N} is related to the matrix of principal component coefficients Γ ∈ R^{B×N} by

δW = [u_1, u_2, · · · , u_B] Γ. (6)

Next, we estimate the number of significant parameters from the contribution ratio. We assume that the number of significant parameters is Q when the Q-th cumulative contribution ratio is the first to exceed 80%. The contribution ratio CR of the Q-th principal component is

CR = ( λ_Q / Σ_{i=1}^{B} λ_i ) × 100 [%] (7)

and the Q-th cumulative contribution ratio CCR of λ_1 to λ_Q is

CCR = ( Σ_{q=1}^{Q} λ_q / Σ_{i=1}^{B} λ_i ) × 100 [%]. (8)

We define a bifurcation path to be a sequence of points (p_1 → p_2 → · · · → p_N) in the parameter space of the target system, and a bifurcation locus to be a sequence of points (γ_1 → γ_2 → · · · → γ_N) in the space of the principal component coefficients. We assume that the space of principal component coefficients that duplicates the parameter space of the target system has been determined if the relations between the points in these two sets show similar trends. A set of new connection weights that can be used in the reconstruction of the BDs is then obtained:

w̃ = [u_1, u_2, · · · , u_Q][γ_1, γ_2, · · · , γ_Q]^T + w_0. (9)

The nonlinear map for the reconstruction of the BDs is then

y(t + 1) = P(w̃, y(t)). (10)
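To make the above procedure concrete, the following is a minimal NumPy sketch of Eqs. (3)–(9), assuming the N trained weight vectors are already stacked as the columns of a matrix; the function and variable names are ours, not the paper's.

```python
import numpy as np

def estimate_parameter_space(W):
    """PCA over trained weight vectors, Eqs. (3)-(8).

    W: shape (B, N); column n is the trained weight vector w_n.
    Returns the mean weight w0, eigenvectors U, eigenvalues lam
    (descending), PC coefficients Gamma, and the estimate Q.
    """
    B, N = W.shape
    w0 = W.mean(axis=1, keepdims=True)          # Eq. (4)
    dW = W - w0                                  # Eq. (3): deviation matrix
    C = dW @ dW.T / N                            # variance-covariance matrix
    lam, U = np.linalg.eigh(C)
    idx = np.argsort(lam)[::-1]                  # Eq. (5): descending order
    lam, U = lam[idx], U[:, idx]
    Gamma = U.T @ dW                             # Eq. (6): dW = U @ Gamma
    ccr = 100.0 * np.cumsum(lam) / lam.sum()     # Eq. (8)
    Q = int(np.searchsorted(ccr, 80.0)) + 1      # first CCR to exceed 80%
    return w0, U, lam, Gamma, Q

def weights_on_locus(w0, U, gamma, Q):
    """Eq. (9): new weight vector from PC coefficients gamma (length Q)."""
    return U[:, :Q] @ gamma + w0.ravel()
```

Sweeping the coefficient vector γ along (and beyond) the bifurcation locus and substituting each resulting w̃ into the predictor of Eq. (10) then traces out the reconstructed BD.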

3. Extreme learning machine

We use an ELM as the time-series predictor. The ELM is a three-layer feedforward neural network whose structure is shown in Fig. 1. Only the connection weights of the output neurons are trained. The connection weights and biases of the hidden neurons are randomly generated at initialization, after which they are kept fixed. Here, the range of the connection weights is [−1, 1], and the biases lie in the dynamic range of the target time series. With randomly generated connection weights and biases, we cannot always obtain the same reconstruction; moreover, the reconstruction of BDs sometimes fails. Therefore, in future work we will analyze the dynamics of the learning mechanism for the reconstruction of BDs and, by analyzing these dynamics, consider methods for selecting better connection weights and biases. The output of the l-th hidden neuron h_l is

h_l(t) = sig( Σ_{k=1}^{K} w^{(h)}_{lk} x_k(t) + b_l ) (11)

where w^{(h)}_{lk} denotes the hidden weight from the k-th input neuron to the l-th hidden neuron, b_l is the bias of the l-th hidden neuron, K is the number of input neurons, and sig(·) is a sigmoid function:

sig(v) = c / (1 + exp(−zv)) − e (12)

Fig. 1. Structure of the ELM.

where c, e and z are parameters that adjust the range of the output of this function to the dynamic range of the target time series. The output of the m-th output neuron is

o_m(t) = Σ_{l=1}^{L} w^{(o)}_{ml} h_l(t) (13)

where w^{(o)}_{ml} denotes the output weight from the l-th hidden neuron to the m-th output neuron and L is the number of hidden neurons. In this paper, the number of output neurons M is set to be equal to the number of input neurons K: that is, M = K. The numbers M and K are set to be equal to the dimensionality of the target dynamical system because we use the Jacobian matrix of the time-series predictor for estimating the Lyapunov exponents. A matrix W^{(o)} whose elements are w^{(o)}_{ml} is obtained by the following linear regression.

W^{(o)T} = H^{−1} D (14)

where T indicates transposition and H^{−1} is the pseudo-inverse matrix of H. The matrices H, D and W^{(o)} are given by

H = \begin{bmatrix} h_1(1) & \cdots & h_L(1) \\ \vdots & \ddots & \vdots \\ h_1(U) & \cdots & h_L(U) \end{bmatrix}, \quad D = \begin{bmatrix} d_1(1) & \cdots & d_M(1) \\ \vdots & \ddots & \vdots \\ d_1(U) & \cdots & d_M(U) \end{bmatrix}, \quad W^{(o)T} = \begin{bmatrix} w^{(o)}_{1,1} & \cdots & w^{(o)}_{M,1} \\ \vdots & \ddots & \vdots \\ w^{(o)}_{1,L} & \cdots & w^{(o)}_{M,L} \end{bmatrix} (15)

where d_m(·) is the desired output of the m-th output neuron and U is the length of the training data for the target system. In this study, we use the trained connection weights

w = [ w^{(o)}_{1,1} · · · w^{(o)}_{1,L} w^{(o)}_{2,1} · · · w^{(o)}_{M,L} ]^T (16)

as w_n in Eq. (2) for the reconstruction of BDs.
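Since the whole training step is the single linear regression of Eq. (14), it can be sketched in a few lines of NumPy. This is a sketch under our reading of Eqs. (11)–(16); details beyond the stated weight and bias ranges (e.g., the uniform distributions) are assumptions.

```python
import numpy as np

def sig(v, c=1.0, e=0.0, z=1.0):
    """Sigmoid of Eq. (12)."""
    return c / (1.0 + np.exp(-z * v)) - e

def train_elm(X, D, L, c=1.0, e=0.0, z=1.0, seed=None):
    """Fit the output weights of an ELM by Eq. (14).

    X: inputs, shape (U, K); D: desired outputs, shape (U, M).
    Returns fixed hidden weights Wh (L, K), biases b (L,), and
    output weights Wo (M, L).
    """
    rng = np.random.default_rng(seed)
    _, K = X.shape
    Wh = rng.uniform(-1.0, 1.0, size=(L, K))     # fixed after random generation
    b = rng.uniform(X.min(), X.max(), size=L)    # biases in the data's dynamic range
    H = sig(X @ Wh.T + b, c, e, z)               # Eq. (11), shape (U, L)
    Wo = (np.linalg.pinv(H) @ D).T               # Eq. (14): W^(o)T = H^+ D
    return Wh, b, Wo

def predict(x, Wh, b, Wo, c=1.0, e=0.0, z=1.0):
    """One-step prediction, Eq. (13)."""
    return Wo @ sig(Wh @ x + b, c, e, z)

# The vector of Eq. (16) for the PCA is Wo.ravel(): the rows of Wo
# concatenate as [w_11 .. w_1L, w_21 .. w_ML].
```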

4. Estimating the Lyapunov exponents using the obtained nonlinear map

We estimate the Lyapunov exponents [14, 15] for the time-series predictor of the reconstructed BDs. The algorithm is as follows. First, we decompose the Jacobian matrix of the time-series predictor P(·) in Eq. (10) by QR decomposition:

J_P( w̃^{(o)}, y(t) ) Q(t) = Q(t + 1) R(t + 1), (t = 0, · · · , T) (17)

where Q(t) is an orthogonal matrix (Q(0) is the identity matrix), R is an upper triangular matrix, and

J_P( w̃^{(o)}, y(t) ) = \begin{bmatrix} \frac{\partial P_1(\tilde{w}^{(o)}, y(t))}{\partial y_1(t)} & \cdots & \frac{\partial P_1(\tilde{w}^{(o)}, y(t))}{\partial y_K(t)} \\ \vdots & \ddots & \vdots \\ \frac{\partial P_M(\tilde{w}^{(o)}, y(t))}{\partial y_1(t)} & \cdots & \frac{\partial P_M(\tilde{w}^{(o)}, y(t))}{\partial y_K(t)} \end{bmatrix}. (18)

Here, an element ∂P_m(w̃^{(o)}, y(t)) / ∂y_κ(t) of the matrix J_P is obtained from Eqs. (11) and (13) for the trained ELM using the method of Ref. [16]:

∂P_m(w̃^{(o)}, y(t)) / ∂y_κ(t) = ∂/∂y_κ(t) [ Σ_{l=1}^{L} w^{(o)}_{ml} h_l(t) ]
= Σ_{l=1}^{L} w^{(o)}_{ml} · ∂/∂y_κ(t) sig( Σ_{k=1}^{K} w^{(h)}_{lk} y_k(t) + b_l )
= Σ_{l=1}^{L} w^{(o)}_{ml} ((h_l(t) + e)/c) (cz − z(h_l(t) + e)) w^{(h)}_{lκ}. (19)

The Lyapunov exponents are obtained from

μ_i = lim_{ϕ→∞} (1/ϕ) Σ_{t=1}^{ϕ} log r_{ii}(t), (i = 1, · · · , M) (20)

where r_{ii}(t) is the i-th diagonal component of R(t) and ϕ is the number of trials.
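A sketch of this estimation, reusing sig, predict and the weight shapes from the previous sketch; the transient length and the number of iterations are our assumptions.

```python
import numpy as np

def jacobian(x, Wh, b, Wo, c=1.0, e=0.0, z=1.0):
    """Eq. (19): Jacobian of the ELM map at input x."""
    h = sig(Wh @ x + b, c, e, z)
    dsig = (z / c) * (h + e) * (c - (h + e))   # derivative of Eq. (12)
    return (Wo * dsig) @ Wh                     # shape (M, K), with M = K

def lyapunov_spectrum(x0, Wh, b, Wo, steps=10000, transient=1000, **kw):
    """Eqs. (17) and (20): QR decomposition along the orbit."""
    x = np.asarray(x0, dtype=float)
    for _ in range(transient):                  # discard the transient
        x = predict(x, Wh, b, Wo, **kw)
    Q = np.eye(x.size)                          # Q(0) is the identity matrix
    acc = np.zeros(x.size)
    for _ in range(steps):
        Q, R = np.linalg.qr(jacobian(x, Wh, b, Wo, **kw) @ Q)   # Eq. (17)
        acc += np.log(np.abs(np.diag(R)))
        x = predict(x, Wh, b, Wo, **kw)
    return acc / steps                          # Eq. (20), finite-step average
```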

5. Numerical experiments

In this section, we show the results of reconstructing BDs with Lyapunov exponents for the logistic and Hénon maps and the Rössler equation of Refs. [11, 12] and [13], respectively.

5.1 Reconstruction of bifurcation diagrams for the logistic map

5.1.1 Experimental condition for the logistic map

The logistic map is given by

ρ(t + 1) = aρ(t)(1 − ρ(t)). (21)

To generate time-series data S_n (n = 1, · · · , N = 9), we set a = a_n, where

a_n = −0.15 cos(2π(n − 1)/8) + 3.7. (22)

The length of each time-series data set is 1000. For the logistic map, the numbers of input, hidden and output neurons of the time-series predictor are one, fifteen and one, respectively. In this case, the time-series predictor is trained with the desired output ρ(t+1) when ρ(t) is inputted. The parameters c, e and z of the sigmoid function are set to 1, 0 and 1, respectively.
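Generating the training sets is a direct transcription of Eqs. (21) and (22). In this sketch, the initial condition and the discarded transient are assumptions, as the paper does not state them.

```python
import numpy as np

def logistic_series(a, length=1000, x0=0.2, transient=500):
    """One data set S_n from Eq. (21); x0 and transient are assumed."""
    x = x0
    for _ in range(transient):
        x = a * x * (1.0 - x)
    s = np.empty(length)
    for t in range(length):
        s[t] = x
        x = a * x * (1.0 - x)
    return s

a_n = -0.15 * np.cos(2.0 * np.pi * np.arange(9) / 8.0) + 3.7   # Eq. (22)
S = [logistic_series(a) for a in a_n]
# Training pairs for the one-dimensional predictor: input rho(t), target rho(t+1),
# i.e. X, D = s[:-1, None], s[1:, None] for each s in S.
```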

5.1.2 Results for the logistic map

We show the contribution ratio CR in Fig. 2(a). As only the first principal component is large, the number of significant parameters is estimated as one. We show the bifurcation path and locus in Figs. 2(b) and (c), respectively. A comparison of these figures shows that the relationships between the points coincide. Therefore, the first principal component has sufficient information, and we can use the space of principal component coefficients instead of the parameter space of the target system. We show the original and reconstructed BDs with Lyapunov exponents in Figs. 3(a) and (b), respectively, where the top and bottom of each figure show the BD and the Lyapunov exponents, respectively. The reconstructed BD corresponds closely to the original one in these figures because

Fig. 2. Results for the logistic map.

Fig. 3. Bifurcation diagrams with Lyapunov exponents for the logistic map.

the reconstructed BD has qualitatively the same bifurcation structure, such as a period-doubling bifurcation and the number of periods in the window. In addition, the Lyapunov exponents correspond to the original ones. Next, we show enlarged portions of the reconstructed BD in order to confirm the relationship between the BD and the Lyapunov exponent. Figure 4(a) shows an enlargement of the reconstructed BD around the periodic-to-chaotic transition, and Fig. 4(b) is an enlargement of the window of the reconstructed BD. These figures show that the Lyapunov exponent is negative in periodic-solution regions and close to zero at the points of a period-doubling bifurcation. Moreover, we see that the window structure is reproduced. Figures 5(a) and (b) show original and reconstructed BDs whose parameter region includes negative values. These figures show that, although we used time-series data that were generated for only positive a, as shown in Eq. (22), the BD is also successfully reconstructed in the negative parameter region. This result is remarkable because it demonstrates that the proposed method not only interpolates but also extrapolates the BDs.

5.1.3 Computational cost

We compared the computational costs of reconstructing BDs using the ELM and using a three-layer neural network with BP learning. The computer used for the numerical experiments runs Mac OS X and has an Intel Core i5 1.3 GHz CPU with 4.00 GB of main memory. The numerical computing environment is MATLAB®. Here, the computational cost is the execution time for the computation of the training set for Eq. (2). The computational costs of the ELM and the three-layer neural network with BP learning are 1.833 s and 1947.271 s, respectively. We see that the ELM is about 1000 times faster than BP learning. Here, the structure of the three-layer neural network is the same as that of the ELM. The number of training rounds for BP learning is 1000, and the number of points for the bifurcation path is 18: that is, N = 18 in Eq. (22). In addition, we use nine trained connection weights for the PCA: that is, n = 10, 11, · · · , 18 in Eq. (4). The other conditions are the same as in the ELM case.

Fig. 4. Enlargements of portions of the reconstructed bifurcation diagram with Lyapunov exponents for the logistic map.

Fig. 5. Reconstruction of a bifurcation diagram whose parameter region includes negative values of the logistic map parameter.

Under these conditions, the computational cost of BP learning is 200-fold that of the ELM. Here, the learning algorithms of the ELM and of BP learning for reconstructing BDs have computational complexities of O(NUL²) and O(NTUL), respectively, where L is the number of hidden neurons, U is the length of the training data for a target system, and T is the number of training rounds for BP learning. These computational complexities and the execution times given above are in reasonable agreement.

5.2 Reconstruction of bifurcation diagram for the Hénon map

5.2.1 Experimental condition for the Hénon map

The Hénon map is given by

χ(t + 1) = ψ(t) + 1 − αχ²(t), (23)
ψ(t + 1) = βχ(t), (24)

where α and β are parameters and χ(t) is used for the reconstruction of BDs. However, in this paper the targets are one-parameter systems only, so β is fixed at 0.3. We generated time-series data S_n for n = 1, · · · , N = 9 with α = α_n, where

α_n = −0.2 cos(2π(n − 1)/8) + 1.2. (25)

The length of each time-series data set is 1000. For the Hénon map, the numbers of input, hidden and output neurons of the time-series predictor are two, ten and two, respectively. In this case, the

Fig. 6. Results for the Hénon map.

Fig. 7. Bifurcation diagrams with Lyapunov spectra for the Hénon map.

time-series predictor is trained with the desired outputs χ(t + 1) and χ(t) when χ(t) and χ(t − 1) are inputted. The parameters c, e and z of the sigmoid function are set to 3, 1.5 and 1, respectively.
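The training pairs just described can be assembled from a single χ series as follows (a sketch; the slicing convention is ours), after which train_elm from the earlier sketch applies with L = 10 hidden neurons and c = 3, e = 1.5, z = 1.

```python
import numpy as np

def henon_training_pairs(chi):
    """Inputs [chi(t), chi(t-1)] -> desired outputs [chi(t+1), chi(t)]."""
    X = np.column_stack([chi[1:-1], chi[:-2]])   # chi(t), chi(t-1)
    D = np.column_stack([chi[2:], chi[1:-1]])    # chi(t+1), chi(t)
    return X, D

# X, D = henon_training_pairs(chi)
# Wh, b, Wo = train_elm(X, D, L=10, c=3.0, e=1.5, z=1.0)
```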

5.2.2 Results for the Hénon map

We show the contribution ratio CR in Fig. 6(a). As only the first principal component is large, the number of significant parameters is estimated as one. The bifurcation path and locus are shown in Figs. 6(b) and (c), respectively. A comparison of these figures shows that the relations between the points coincide. Therefore, the first principal component has sufficient information, and we can use the space of principal component coefficients instead of the parameter space of the target system. We show the original and reconstructed BDs with the Lyapunov spectrum in Figs. 7(a) and (b), respectively. In these figures, the solid and dashed lines show the largest and the second-largest Lyapunov exponents, respectively. It can be seen that the reconstructed BD corresponds closely to the original one. The Lyapunov exponents also correspond to the original ones.

5.3 Reconstruction of bifurcation diagram for the Rössler equation

5.3.1 Experimental condition for the Rössler equation

The Rössler equation is given by

dξ/dτ = −η − ζ, (26)
dη/dτ = ξ + νη, (27)
dζ/dτ = ε − (σ − ξ)ζ, (28)

where ν, ε, and σ are parameters. We generated time-series data S_n for n = 1, · · · , N = 9 with σ = σ_n, where

σ_n = −0.5 cos(2π(n − 1)/8) + 3.7. (29)

The parameters ν and ε are fixed at 0.33 and 0.3, respectively. The length of each time-series data set is 5000. The time-series data were generated by a third-order Runge–Kutta method with a time-step increment of Δτ = 0.01. Then, we used the η-component time series sampled at steps of 5Δτ for training the ELM. For the Rössler equation, the numbers of input, hidden, and output neurons of the time-series predictor are three, 50, and three, respectively. In this case, the ELM is trained with the following data. The desired outputs for o_1(t), o_2(t), and o_3(t) are η(τ + 5Δτ), η(τ − 15 · 5Δτ) and η(τ − 31 · 5Δτ), respectively, when η(τ), η(τ − 16 · 5Δτ) and η(τ − 32 · 5Δτ) are inputted to x_1(t), x_2(t), and x_3(t), respectively. Here, one step for the predictor corresponds to 5Δτ in the target system. The parameters c, e and z of the sigmoid function are set to 16, 8, and 0.1, respectively.
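A sketch of this data generation: the paper specifies a third-order Runge–Kutta method but not which variant, so Kutta's classical third-order scheme is used here, and the initial state and transient handling are assumptions.

```python
import numpy as np

def rossler_deriv(s, sigma, nu=0.33, eps=0.3):
    """Right-hand sides of Eqs. (26)-(28)."""
    xi, eta, zeta = s
    return np.array([-eta - zeta,
                     xi + nu * eta,
                     eps - (sigma - xi) * zeta])

def eta_series(sigma, n_samples=5000, dt=0.01, s0=(1.0, 0.0, 0.0)):
    """Integrate with an RK3 step and sample eta every 5 * dt."""
    s = np.array(s0, dtype=float)
    eta = np.empty(5 * n_samples)
    for t in range(eta.size):                 # transient discard omitted for brevity
        k1 = rossler_deriv(s, sigma)
        k2 = rossler_deriv(s + 0.5 * dt * k1, sigma)
        k3 = rossler_deriv(s - dt * k1 + 2.0 * dt * k2, sigma)
        s = s + dt * (k1 + 4.0 * k2 + k3) / 6.0
        eta[t] = s[1]
    return eta[::5]

def rossler_training_pairs(eta):
    """Inputs eta(t), eta(t-16), eta(t-32) -> targets eta(t+1), eta(t-15),
    eta(t-31), in units of the sampled series (one step = 5 * dt)."""
    X = np.column_stack([eta[32:-1], eta[16:-17], eta[:-33]])
    D = np.column_stack([eta[33:], eta[17:-16], eta[1:-32]])
    return X, D
```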

5.3.2 Desired output for the Rössler equation

For the Rössler equation, we use a difference time series as the desired output to obtain the output weights of the ELM by Eq. (14), except for the first time series S_1. The reason is that, when reconstructing BDs for the Rössler equation, we cannot obtain synaptic weights that are close to one another using the original time series, but we can do so by using the difference time series [8]. The difference time series is generated by subtracting the output of the ELM from the desired output:

D_n − H_n W^{(o)}_{n−1}, (30)

where H_n and D_n are the matrices of Eq. (15) for the time-series data set S_n, and W^{(o)}_{n−1} is the matrix of Eq. (15) for the time-series data set S_{n−1}. We substitute the difference time series into Eq. (14):

H_n ΔW^{(o)}_n = D_n − H_n W^{(o)}_{n−1}
ΔW^{(o)}_n = H_n^{−1} D_n − W^{(o)}_{n−1} (31)

where ΔW^{(o)}_n is the difference output weight. The matrix W^{(o)}_n is therefore obtained by

W^{(o)}_n = W^{(o)}_{n−1} + ΔW^{(o)}_n. (32)

Note that we use each matrix W^{(o)}_n as w_n in Eq. (2) for the reconstruction of the BD. Here, each matrix W^{(o)}_n is converted to a vector by Eq. (16).
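A sketch of this sequential training (Eqs. (30)–(32)), reusing sig and rossler_training_pairs from the earlier sketches. It assumes, as the PCA over the weight vectors seems to require, that the randomly generated hidden weights and biases are shared across all data sets.

```python
import numpy as np

def train_sequence(series_list, Wh, b, c=16.0, e=8.0, z=0.1):
    """Train on S_1 directly, then on difference series, Eqs. (30)-(32)."""
    weight_vectors = []
    W_prev = None                                   # W^(o)T of the previous set
    for eta in series_list:
        X, D = rossler_training_pairs(eta)
        H = sig(X @ Wh.T + b, c, e, z)              # Eq. (11) over the whole set
        if W_prev is None:
            W = np.linalg.pinv(H) @ D               # Eq. (14) for S_1
        else:
            # Residual form of Eqs. (30)-(31); equivalent to
            # H^+ D - W_prev when H has full column rank.
            dW = np.linalg.pinv(H) @ (D - H @ W_prev)
            W = W_prev + dW                         # Eq. (32)
        weight_vectors.append(W.T.ravel())          # Eq. (16): vector w_n for the PCA
        W_prev = W
    return weight_vectors
```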

5.3.3 Results for the Rössler equation

We show the contribution ratio CR in Fig. 8(a). From this figure, the number of significant parameters is estimated as one because the first principal component exceeds 80%. The bifurcation path and locus are shown in Figs. 8(b) and (c), respectively. A comparison of these figures shows that the bifurcation locus loses the shape of the bifurcation path, especially for locus numbers between 3 and 4 and between 6 and 7. The reason for the mismatch is that the second and third principal components in Fig. 8(a) are higher than those of Figs. 2(a) and 6(a). However, we presume that this does not affect the reconstruction of the BDs, because the relations between the points are preserved. Therefore, the first principal component has sufficient information, and we can use the space of principal component coefficients instead of the parameter space of the target system. We show the original and reconstructed BDs with Lyapunov spectra in Figs. 9(a) and (b), respectively. In the middle panel of these figures, the solid and dashed lines show the largest and the second-largest Lyapunov exponents, respectively, and in the lower panel, the lines show the third Lyapunov exponents. It can be seen that the reconstructed BD corresponds closely to the original one. The Lyapunov exponents also correspond to the original ones.

5.4 Comparison of the Lyapunov exponents

We quantitatively compared the Lyapunov exponents for the original and reconstructed BDs. For the

comparison, we used the Lyapunov exponents estimated at the points of the bifurcation path and locus. We show the results of this comparison for the logistic map, the Hénon map and the Rössler equation.

First, we show the Lyapunov exponents at each parameter point of the bifurcation path or locus for the logistic map in Table I. The bifurcation path or locus number in this table corresponds to the path and locus numbers in Figs. 2(b) and (c), respectively. The rows marked with an asterisk in Table I (gray in the original) show the points at which the Lyapunov exponents of the original and reconstructed BDs do not match. The Lyapunov exponents of the original and reconstructed BDs are in fairly good agreement at the points numbered 2, 3, 4, 6, 7 and 8. The Lyapunov exponents do not match at point number 5; moreover, the Lyapunov exponents of the original and reconstructed BDs at this point are positive and negative, respectively. However, these parameters are in the window of Figs. 3(a) and (b), and there is a parameter value of −332.9, a neighboring point of locus point number 5, whose Lyapunov exponent is 0.022±0.006. Regarding the points numbered 1 and 9, the Lyapunov exponents also do not match. However, these Lyapunov exponents are negative, and the time series are period-four cycles in Figs. 3(a) and (b). For periodic solutions, it tends to be relatively difficult to estimate the absolute value of the Lyapunov exponent with high accuracy.

Table II shows the same information for the Hénon map as is shown in Table I for the logistic map. Regarding points 2 to 8, the largest Lyapunov exponents of the original and reconstructed BDs are nearly in agreement. For points 1 and 9, the Lyapunov exponents of the reconstructed BD do not match the originals. However, these Lyapunov exponents are negative, and the time series are period-four cycles in Figs. 7(a) and (b).

Table III shows the same information for the Rössler equation as is shown in Table I for the logistic map. Regarding points 1, 3, 5, 7 and 9, the Lyapunov exponents of the original and reconstructed BDs are in fairly good agreement. For points 2 and 8, these parameters are around periodic-to-chaotic transitions in Figs. 9(a) and (b). There is a parameter value of −1596, a neighboring point of locus points 2 and 8, whose Lyapunov exponents are both 0.0006±0.0008. Note that in this case the range of the parameter is about 20000, as shown on the horizontal axis of Fig. 9(b); therefore, −1640 and −1596 can be said to be "neighboring".

Fig. 8. Results for the Rössler equation.

Fig. 9. Bifurcation diagrams with Lyapunov spectra for the Rössler equation.

Table I. Comparison of Lyapunov exponents for the logistic map. Rows marked * (gray in the original) are points where the original and reconstructed exponents do not match.

Path or locus   Original                           Reconstructed
number          Parameter   Lyapunov exponent      Parameter   Lyapunov exponent
1*              3.550       −0.09972±0.00004       −632.3      −0.00456
2               3.594       0.1753±0.0004          −585.7      0.144±0.001
3               3.700       0.353±0.002            −471.7      0.351±0.001
4               3.806       0.425±0.002            −358.3      0.429±0.003
5*              3.850       0.022±0.006            −322.1      −0.0985±0.0003
6               3.806       0.425±0.002            −358.5      0.429±0.003
7               3.700       0.353±0.002            −471.7      0.351±0.001
8               3.594       0.1753±0.0004          −585.7      0.144±0.001
9*              3.550       −0.09972±0.00004       −632.3      −0.00456

Table II. Comparison of largest Lyapunov exponents for the Hénon map.

Path or locus   Original                           Reconstructed
number          Parameter   Largest Lyapunov exp.  Parameter   Largest Lyapunov exp.
1               1.000       −0.161±0.001           −97.41      −0.6027±0.0002
2               1.058       0.0248±0.0004          −53.21      0.0759±0.0007
3               1.200       0.308±0.002            31.48       0.315±0.003
4               1.341       0.367±0.001            70.10       0.2562±0.0009
5               1.400       0.418±0.002            98.06       0.382±0.003
6               1.341       0.367±0.001            70.10       0.2562±0.0009
7               1.200       0.307±0.002            31.48       0.315±0.003
8               1.058       0.0248±0.0004          −53.21      0.0759±0.0007
9               1.000       −0.161±0.001           −97.41      −0.6027±0.0002

Table III. Comparison of largest Lyapunov exponents for the Rössler equation.

Path or locus   Original                           Reconstructed
number          Parameter   Largest Lyapunov exp.  Parameter   Largest Lyapunov exp.
1               3.200       0.0006±0.0007          −4232       0.00025±0.00008
2               3.346       0.0006±0.0008          −1640       0.0125±0.0004
3               3.700       0.056±0.002            1921        0.0682±0.0005
4               4.053       0.001±0.002            2340        0.0619±0.0006
5               4.200       0.077±0.002            3221        0.0631±0.0008
6               4.053       0.001±0.002            2340        0.0609±0.0008
7               3.700       0.057±0.002            1921        0.0731±0.0009
8               3.346       0.0006±0.0008          −1640       0.0109±0.0005
9               3.200       0.0006±0.0007          −4232       0.00025±0.00008

For points 4 and 6, the parameters are in the window in Figs. 9(a) and (b). There is a parameter value of 2396, a neighboring point of points 4 and 6, whose Lyapunov exponents are both 0.001±0.002. In addition, the time series of points 2, 4, 6 and 8 are periodic, because their Lyapunov exponents converge to near zero as the number of trials tends to infinity in Eq. (20). These results demonstrate that the original BD with Lyapunov exponents in the chaotic region can be quantitatively compared with the reconstructed one. In addition, even if the Lyapunov exponents at the points of the bifurcation path and locus do not match, there are parameter values in the neighborhood of the locus point whose Lyapunov exponents are close to the exponent of the corresponding bifurcation path.

6. Conclusion

We have reconstructed bifurcation diagrams with Lyapunov exponents for the logistic map, the Hénon map and the Rössler equation. By estimating the Lyapunov exponents, we have quantitatively evaluated the reconstructed bifurcation diagrams. The results of numerical experiments show that the bifurcation diagrams and their Lyapunov exponents were successfully reconstructed. In particular, the enlargements of the reconstructed bifurcation diagrams show features of the Lyapunov exponents, such as the period-doubling bifurcation. Moreover, it has been demonstrated that the proposed method for the reconstruction of bifurcation diagrams can not only interpolate but also extrapolate the diagrams. In addition, we compared the computational costs of reconstructing BDs using the ELM and a three-layer neural network with BP learning. We found that the execution time for training the ELM is significantly smaller than that for BP learning.

In future work, we will try to estimate other indices for chaotic systems through the reconstruction of bifurcation diagrams from time-series data, and we will analyze the dynamics of the learning mechanism of this method. We will also apply the proposed method to real-world data (e.g., [17]). However, before the application to real-world data, we have some tasks to undertake. For example, we have to develop a method for determining the bifurcation path, because we need time-series data whose parameters are close to those of the other time series. In addition, it is preferable that the time series show various oscillatory patterns.

Acknowledgments

The authors would like to thank the anonymous reviewers for their fruitful comments and suggestions.

References

[1] R. Tokunaga, S. Kajiwara, and S. Matsumoto, "Reconstructing bifurcation diagrams only from time-waveforms," Physica D, vol. 79, pp. 348–360, 1994.
[2] R.W. Preisendorfer and C.D. Mobley, Principal Component Analysis in Meteorology and Oceanography, Elsevier, 1988.
[3] D.E. Rumelhart, G.E. Hinton, and R.J. Williams, "Learning representations by back-propagating errors," Nature, vol. 323, no. 6088, pp. 533–536, 1986.
[4] S. Ogawa, T. Ikeguchi, T. Matozaki, and K. Aihara, "Nonlinear modeling by radial basis function networks," IEICE Trans. Fundamentals, vol. E79-A, no. 10, 1996.
[5] E. Bagarinao, K. Pakdaman, T. Nomura, and S. Sato, "Reconstructing bifurcation diagrams from noisy time series using nonlinear autoregressive models," Physical Review E, vol. 60, no. 1, 1999.
[6] E. Bagarinao, K. Pakdaman, T. Nomura, and S. Sato, "Time series-based bifurcation diagram reconstruction," Physica D, vol. 130, pp. 211–231, 1999.
[7] E. Bagarinao, K. Pakdaman, T. Nomura, and S. Sato, "Reconstructing bifurcation diagrams of dynamical systems using measured time series," Methods Inf. Med., vol. 39, pp. 146–149, 2000.
[8] Y. Tada and M. Adachi, "Reconstruction of bifurcation diagrams using extreme learning machines," 2013 IEEE International Conference on Systems, Man, and Cybernetics, pp. 1127–1131, 2013.
[9] Y. Itoh, Y. Tada, and M. Adachi, "Reconstruction of bifurcation diagrams with Lyapunov exponents for chaotic systems from only time-series data," 2015 International Symposium on Nonlinear Theory and its Applications, pp. 692–695, 2015.
[10] G.B. Huang, Q.Y. Zhu, and C.K. Siew, "Extreme learning machine: Theory and applications," Neurocomputing, vol. 70, pp. 489–501, 2006.
[11] R.M. May, "Simple mathematical models with very complicated dynamics," Nature, vol. 261, no. 5569, pp. 459–467, 1976.

[12] M. Hénon, "A two-dimensional mapping with a strange attractor," Communications in Mathematical Physics, vol. 50, no. 1, pp. 69–77, 1976.
[13] O.E. Rössler, "Continuous chaos," Ann. N.Y. Acad. Sci., vol. 316, pp. 376–392, 1979.
[14] I. Shimada and T. Nagashima, "A numerical approach to ergodic problem of dissipative dynamical systems," Prog. Theor. Phys., vol. 61, no. 6, pp. 1605–1616, 1979.
[15] M. Sano and Y. Sawada, "Measurement of the Lyapunov spectrum from a chaotic time series," Phys. Rev. Lett., vol. 55, 1985.
[16] M. Adachi and M. Kotani, "Identification of chaotic dynamical systems with back-propagation neural networks," IEICE Trans. Fundamentals, vol. E77-A, no. 1, 1994.
[17] G. Langer and U. Parlitz, "Modeling parameter dependence from time series," Phys. Rev. E, vol. 70, 2004.
