DISCRETE AND CONTINUOUS DYNAMICAL SYSTEMS
Volume 33, Number 2, February 2013, pp. 701–721
doi:10.3934/dcds.2013.33.701

ON TRANSVERSE STABILITY OF RANDOM DYNAMICAL SYSTEMS

Xiangnan He, Wenlian Lu and Tianping Chen
Center for Computational Systems Biology and Laboratory for Nonlinear Sciences
School of Mathematical Sciences, Fudan University, Shanghai 200433, China

(Communicated by Jianhong Wu)

Abstract. In this paper, we study the transverse stability of random dynamical systems (RDS). Suppose an RDS on a Riemannian manifold possesses a non-random invariant submanifold; what conditions guarantee that a random attractor of the RDS restricted to the invariant submanifold is a random attractor with respect to the whole manifold? By the linearization technique, we prove that if all the normal Lyapunov exponents with respect to the tangent space of the submanifold are negative, then the attractor on the submanifold is also a random attractor of the whole manifold. This result extends the idea of the transverse stability analysis of deterministic dynamical systems in [1, 3]. As an explicit example, we discuss complete synchronization in networks of coupled maps with both stochastic topologies and stochastic maps, which extends the well-known master stability function (MSF) approach from deterministic cases to stochastic cases.

1. Introduction. Many natural and social phenomena can be properly described by dynamical systems, such as swarming of fish [11], flocking of birds [6], biological populations [13], and world wide web (WWW) clicking behavior [10]. The analysis of these dynamical systems has attracted increasing interest from scientists in diverse fields in recent years. Among these topics, synchronization has been studied intensively. For example, [18] investigates the synchronization of the Vicsek model; [7] quantifies genuine and random synchronization in multivariate neural series; [14, 15] discuss the synchronization of discrete-time dynamical networks with time-varying couplings. The well-known master stability function (MSF) provides an efficient approach to analyzing (complete) synchronization [19]. In this approach, synchronization is regarded as the transverse stability of a certain invariant submanifold of the dynamical system, named the synchronization manifold; see [1, 3, 4]. In these works, the authors provided rigorous analysis of the transverse stability of deterministic dynamical systems, meaning that an attractor (in a certain sense) of the dynamical system restricted to the invariant submanifold is also an attractor of the dynamical system with respect to the whole manifold. They proved that all the

2010 Mathematics Subject Classification. Primary: 37H15, 37D10; Secondary: 60F10.
Key words and phrases. RDS, transverse stability, Lyapunov exponents, stochastic topologies and maps, complete synchronization.
‡ Corresponding author. Email: [email protected].

normal Lyapunov exponents being negative is sufficient to guarantee the transverse stability. Moreover, the sense of stability depends on the measure according to which the Lyapunov exponents are defined. In many situations, randomness occurs in natural and artificial systems. Random dynamical systems (RDS) were introduced to provide a more realistic mathematical model. [2] provided a comprehensive general introduction to RDS, and there is a large literature discussing random dynamical systems and their applications. More closely related to this paper, [5] compared random attractors with attractors in deterministic dynamics; [16, 17] investigate stochastic differential delay equations with Markovian switching and discuss the exponential stability of the equations; [12] discusses the problem of robust stability for stochastic interval delayed additive neural networks with Markovian switching; [9] proposes a new approach to stochastic stability analysis of linear systems with randomly varying parameters. However, to the best of our knowledge, there are few works discussing the transverse stability of random dynamical systems. The purpose of this paper is to study the transverse stability of random dynamical systems and to analyze the complete synchronization of coupled dynamical networks with stochastic topologies and maps. Combining large deviation theory, we extend the basic idea of [1, 3] to RDS and consider stability in the Lyapunov sense and in the almost sure probability sense. We introduce a new definition of random attractor, the µ-uniform random attractor, which is stronger than the existing definition of random attractor. We also prove that if the normal Lyapunov exponents transverse to the tangent space of the invariant submanifold are all negative, then a µ-uniform random attractor of the RDS restricted to this invariant submanifold is a random attractor in the almost sure sense.
Furthermore, we apply this result to the synchronization analysis of networks of coupled random maps with randomly switching topologies. Similar to the master stability function approach, the transverse Lyapunov exponents (in the almost sure sense) can be used to measure the synchronizability under switching topologies. This paper is organized as follows. Sec. 2 gives, besides the notations, the necessary definitions and propositions used afterwards. Sec. 3 presents the main results on transverse stability of RDS and their proofs. Sec. 4 shows the application of the main results to complete synchronization analysis for networks of coupled random maps with randomly switching topologies. Sec. 5 concludes this paper.

2. Preliminaries. In this section, we provide the necessary notations, definitions, and a brief introduction to the theory and terminology. For the readers' convenience, Table 1 lists the notations used afterward. Let θ^t : Ω → Ω, t ∈ T, be a measure-preserving shift on a probability space (Ω, F, P) such that (t, ω) ↦ θ^t ω is measurable and satisfies θ^0 = id and θ^{t+s} = θ^t ∘ θ^s, where T stands for the time set, which can be the integer set or the real set, one-sided or two-sided, i.e., R, Z, R^+, or Z_{≥0}. Hereby, we can define a random dynamical system (RDS) over this shift as follows. An RDS ϕ associated with a shift θ^t on an m-dimensional compact smooth Riemannian manifold M with Borel σ-algebra B is a measurable map:

ϕ : T × M × Ω → M,  (t, p, ω) ↦ ϕ(t, p, ω),

Table 1. Notations

M — the Riemannian manifold.
N — the non-random invariant submanifold of M w.r.t. the RDS.
R^+ — the set of positive real numbers.
Z_{≥0} — the set of nonnegative integers.
θ^t — a measure-preserving shift.
ϕ(t, p, ω) — an RDS on M with initial condition (p, ω).
f_ω^t(·) — a map equivalent to ϕ(t, ·, ω) with fixed t and ω.
∘ — composition of maps or functions.
[x] — the nearest integer less than x.
‖·‖_M — the norm defined on the Riemannian manifold M.
‖·‖ — the norm of Euclidean space.
d_M(·, ·) — the geodesic distance on the Riemannian manifold M.
2^M — the power set of M, i.e., the set of all subsets of M.
A(ω) — a random set, with its value at fixed ω (a regular set) denoted by A_ω.
β(ω) — a random variable, with its value at fixed ω denoted by β_ω.
O(A_ω, µ) — the µ-neighborhood of A_ω.
O_N(A_ω, µ) — the µ-neighborhood of A_ω embedded in the submanifold N.
O_N^{β_ω}(A_ω, µ) — the ball over O_N(A_ω, µ) with radius β_ω in the complement space of N.
cl O_N^{β_ω}(A_ω, µ) — the closure of O_N^{β_ω}(A_ω, µ).
T_p N — the tangent space of N at p.
TN_t — the tangent space of N at f_ω^t(p).
(TN_t)^⊥ — the normal space of N at f_ω^t(p).
(TN)^⊥ — the normal bundle of N.
Π_V — the orthogonal projection onto the vector subspace V.
d_p f_ω^t — the tangent map of f_ω^t at p.
d_p^⊥ f_ω^t — the normal derivative of f_ω^t at p.
SW(ω) — the unit normal sphere bundle of W(ω).
⊗ — the Kronecker product of matrices.

such that ϕ :(t, p, ω) 7−→ ϕ(t, p, ω) is continuous w.r.t. t ∈ T and p ∈ M for all ω ∈ Ω and satisfies the cocycle property:

ϕ(0, ·, ω) = id(·),  ϕ(t + s, ·, ω) = ϕ(t, ·, θ^s ω) ∘ ϕ(s, ·, ω)  for all t, s ∈ T, ω ∈ Ω. If considering two-sided time, ϕ(t, ·, ω) is assumed to be invertible and to satisfy ϕ(t, ·, ω)^{−1} = ϕ(−t, ·, θ^t ω). For given t and ω, ϕ can be regarded as a map on M, so in the following we equivalently denote this map by f_ω^t, i.e., f_ω^t(·) = ϕ(t, ·, ω). Thus, by the cocycle property, f_ω^{t+s}(·) = ϕ(s, ϕ(t, ·, ω), θ^t ω) = f_{θ^t ω}^s ∘ f_ω^t(·).
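The cocycle property can be checked concretely on a toy one-dimensional random map. The sketch below assumes a contracting one-step map driven by an i.i.d. noise sequence; the map and noise model are illustrative choices, not taken from the paper.

```python
import random

def one_step(x, noise):
    # One-step random map on R: a toy contraction perturbed by a noise sample.
    return 0.5 * x + noise

def f(t, x, omega):
    # f_omega^t(x): iterate the one-step map along the noise sequence omega.
    for k in range(t):
        x = one_step(x, omega[k])
    return x

def shift(omega, t):
    # theta^t omega: drop the first t noise samples.
    return omega[t:]

random.seed(0)
omega = [random.uniform(-1, 1) for _ in range(100)]
x0, t, s = 0.3, 7, 5
lhs = f(t + s, x0, omega)                     # f_omega^{t+s}(x0)
rhs = f(s, f(t, x0, omega), shift(omega, t))  # f_{theta^t omega}^s(f_omega^t(x0))
assert abs(lhs - rhs) < 1e-12                 # the cocycle property holds
```

Both sides apply the same one-step maps in the same order, so the identity holds exactly, not merely approximately.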

A random set A(ω) is defined as a set-valued map A : Ω → 2^M such that for each p ∈ M, d_M(p, A(ω)) is measurable with respect to (Ω, F), where d_M(·, ·) is defined in Table 1. A random compact set is a random set A(ω) that is compact for all ω ∈ Ω.

Definition 2.1. A random compact set A(ω) is said to be (forward) invariant under the RDS ϕ if
f_ω^t(A(ω)) ⊆ A(θ^t ω), ∀ t ∈ T, ω ∈ Ω,
and strictly (forward) invariant if
f_ω^t(A(ω)) = A(θ^t ω), ∀ t ∈ T, ω ∈ Ω.

Throughout this paper, we consider attraction in the forward and almost sure (a.s. for short) sense. Therefore, we apply the 'pointwise' definition of random attractor as in [5] (Def. 3.3, 3.5, 3.6).

Definition 2.2. Suppose A(ω) is a random invariant set. The random basin of attraction of A(ω) is defined as
B(A)(ω) = {p ∈ M : lim_{t→+∞} d_M(f_ω^t(p), A(θ^t ω)) = 0}.
A random invariant set A(ω) is said to attract another random set B(ω) forwardly if
lim_{t→+∞} d_M(f_ω^t(B(ω)), A(θ^t ω)) = 0.

Remark 1. The attraction basin B(A)(ω) defined above is (forward) invariant. In fact, for any s > 0 and any point p ∈ B(A)(ω), we have
lim_{t→+∞} d_M(f_{θ^s ω}^t(f_ω^s(p)), A(θ^t(θ^s ω))) = lim_{t→+∞} d_M(f_ω^{t+s}(p), A(θ^{t+s} ω)) = 0,
which implies f_ω^s(p) ∈ B(A)(θ^s ω).

Definition 2.3. Let ϕ be an RDS and A(ω) a random compact set invariant under ϕ. A(ω) is called a random attractor if B(A)(ω) contains a random neighborhood of A(ω) almost surely. Moreover, A is called uniformly stable if B(A)(ω) contains a (non-random) neighborhood of A(ω).

Besides the 'pointwise' random attractor, there are other definitions of random attractors, such as the point and set attractors proposed by [21].

Definition 2.4. An invariant random compact set A(ω) ⊂ B(ω) is called a B(ω) point attractor if A(ω) attracts random points in probability, i.e.,
lim_{t→+∞} P{ω : d_M(f_ω^t(x(ω)), A(θ^t ω)) > ε} = 0
for every ε > 0 and every random variable x(ω) ⊂ B(ω). A(ω) is called a B(ω) set attractor if, for every ε > 0 and every random compact set C(ω) ⊂ B(ω), it holds that
lim_{t→+∞} P{ω : dist_M(f_ω^t(C(ω)), A(θ^t ω)) > ε} = 0.

Here dist_M(A(ω), B(ω)) = sup_{a∈A(ω)} inf_{b∈B(ω)} d_M(a, b) is the semi-Hausdorff distance.
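Unlike a metric, the semi-Hausdorff distance is asymmetric: dist_M(A, B) = 0 only says that A lies inside (the closure of) B. A minimal sketch for finite subsets of R, with hypothetical example sets:

```python
def semi_hausdorff(A, B):
    # dist(A, B) = sup_{a in A} inf_{b in B} |a - b| for finite subsets of R.
    return max(min(abs(a - b) for b in B) for a in A)

A = [0.0, 1.0]
B = [0.0, 1.0, 5.0]
assert semi_hausdorff(A, B) == 0.0  # A is contained in B
assert semi_hausdorff(B, A) == 4.0  # but B is far from A: asymmetry
```

This asymmetry is exactly why attraction is phrased with dist_M(f_ω^t(C(ω)), A(θ^t ω)): the image of C must approach A, not conversely.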

Remark 2. We compare the above definition of random attractor with the set attractor proposed by [21]. Generally, if the random attractor A(ω) has an attraction basin B(A)(ω), then A(ω) can also be regarded as a B(A)(ω) set attractor. As a direct consequence, A(ω) is also a B(A)(ω) point attractor. For a uniformly stable attractor A(ω), there is a constant υ > 0 such that the υ-neighborhood O(A(ω), υ) of A(ω) is contained in B(A)(ω) for all ω ∈ Ω. To quantify the probability of staying near the random attractor A(ω), we propose a stronger notion, the µ-uniform random attractor for some µ > 0, as follows.

Definition 2.5. For some µ > 0, let
U_n = {ω : d_M(f_ω^n[O(A(ω), µ)], A(θ^n ω)) > µ/2}.  (1)
A uniform random attractor is called µ-uniform if
Σ_{n=1}^∞ P(U_n) < +∞.

Lyapunov exponents provide significant quantitative information about asymptotic expansion and contraction rates of dynamical systems. Here we give a brief introduction to Lyapunov exponents of RDS; for more details, we refer readers to the textbook [2]. Let ϕ be an RDS on an m-dimensional non-random Riemannian manifold M, embedded in an m-dimensional Euclidean space. We suppose that ϕ has an n-dimensional (n < m) non-random submanifold N invariant under the RDS ϕ, and that the map f_ω^t(·) corresponding to ϕ(t, ·, ω) satisfies f_ω^t ∈ C^{1+a}(N, N) for some 0 < a < 1. It follows that for p ∈ N, d_p f_ω^t(T_p N) ⊂ T_{f_ω^t(p)} N, where T_p N denotes the tangent space of N at p and d_p f_ω^t the tangent map of f_ω^t at p. The formula
λ(p, v) = lim_{t→∞} (1/t) log ‖d_p f_ω^t(v)‖_{TM}  (2)
in the almost sure (a.s. for short) sense defines a Lyapunov exponent specified by f_ω^t at the point p in the direction v. By the splitting TM_t = TN_t ⊕ (TN_t)^⊥, where TM_t = T_{f_ω^t(p)} M, we define the normal Lyapunov exponent at p in the direction v as
λ^⊥(p, v) = lim_{t→∞} (1/t) log ‖Π_{(TN_t)^⊥} ∘ d_p f_ω^t ∘ Π_{(TN_0)^⊥}(v)‖_{TM},  a.s.  (3)

Here the symbol Π_V stands for the orthogonal projection onto the vector subspace V, and TM is the tangent bundle TM = ⋃_{p∈M} T_p M. In what follows, we refer to Π_{(TN_t)^⊥} ∘ d_p f_ω^t ∘ Π_{(TN_0)^⊥} : (TN_0)^⊥ → (TN_t)^⊥ as the normal derivative of f_ω^t at p and denote it by d_p^⊥ f_ω^t. Thus equation (3) can be rewritten as
λ_p^⊥(v) = lim_{t→∞} (1/t) log ‖d_p^⊥ f_ω^t(v)‖_{TM},  a.s.  (4)
For convenience, we assume throughout this paper that the normal Lyapunov exponents λ_p^⊥(v) defined in (4) have a lower bound. This condition is automatically satisfied if f is a diffeomorphism or the normal derivative d_p^⊥ f is injective, as stated in Thm. 2.8 of [3]. Therefore, according to the Oseledec multiplicative ergodic theorem for random dynamical systems [2], under several mild conditions, the

RDS ϕ possesses m − n normal Lyapunov exponents almost surely. We denote the largest normal Lyapunov exponent by λ⊥, i.e.,

λ^⊥ = sup_{(p,v)∈TM} lim_{t→∞} (1/t) log ‖d_p^⊥ f_ω^t(v)‖_{TM},  a.s.  (5)
However, the Oseledec multiplicative ergodic theorem only guarantees the existence of the Lyapunov exponent. In the application part (Thm. 4.4, Thm. 4.5) below, we also need the following two propositions, derived from the Large Deviation Lemma [8], to estimate the convergence rate of the limit (5). In general, let H(·) : M → R^{n,n} be a square-matrix-valued nonsingular map and ξ^t a general stochastic process. We define
α = lim_{t→∞} (1/t) log ‖∏_{k=0}^{t−1} H(ξ^k)‖,  (6)
if the limit exists. Similarly, we define the δ-moment Lyapunov exponent
α(δ) = lim_{t→∞} (1/t) log E ‖∏_{k=0}^{t−1} H(ξ^k)‖^δ
as a function of δ. If |H(·)| is bounded, then the right-hand derivative α'(0+) exists. Let Y^t = log ‖∏_{k=0}^{t−1} H(ξ^k)‖. Theorem II.2 of [8] gives:

Proposition 1. For any ᾱ > α'(0+), there exist σ > 0 and T > 0 such that
P{(1/t) log ‖∏_{k=0}^{t−1} H(ξ^k)‖ ≥ ᾱ} ≤ exp(−σt), ∀ t ≥ T;  (7)
moreover, if the limit
α(δ) = lim_{t→∞} (1/t) log E ‖∏_{k=0}^{t−1} H(ξ^k)‖^δ
exists and is finite on some open interval containing 0, and α'(0) exists and is finite, then
α = lim_{t→∞} (1/t) log ‖∏_{k=0}^{t−1} H(ξ^k)‖  (8)
exists and is finite almost surely, with α = α'(0).

In addition, if ξ^t is a homogeneous Markov process defined on a finite state space with an irreducible transition probability matrix, then we have

Proposition 2. [9] Suppose the H(·) are all nonsingular and the homogeneous Markov process ξ^t, defined on a finite state space, is irreducible with a unique ergodic invariant class. Then for any ε > 0 there exists σ > 0 such that
P{(1/t) log ‖∏_{k=0}^{t−1} H(ξ^k)‖ ∉ (α − ε, α + ε)} ≤ exp(−σt)
holds for all t ≥ 0, where α is defined in (8).
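The limit (6) can be estimated numerically for a concrete random matrix product. The sketch below uses two hypothetical contracting, nonsingular 2 × 2 matrices chosen i.i.d. (example data, not from the paper) and renormalizes the running product at each step so entries never under- or overflow; the accumulated log-norms recover log ‖∏ H(ξ^k)‖ exactly.

```python
import math
import random

def matmul(A, B):
    # 2x2 matrix product.
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def frob(A):
    # Frobenius norm (an upper bound on the spectral norm).
    return math.sqrt(sum(A[i][j] ** 2 for i in range(2) for j in range(2)))

# Two nonsingular triangular matrices H(0), H(1) -- hypothetical example data.
H = {0: [[0.5, 0.1], [0.0, 0.5]],
     1: [[0.6, 0.0], [0.2, 0.4]]}

def lyapunov_estimate(t, seed=1):
    # Estimate alpha = (1/t) log || prod_{k<t} H(xi^k) ||, renormalizing the
    # running product each step; the log-norms telescope to the exact value.
    random.seed(seed)
    P = [[1.0, 0.0], [0.0, 1.0]]
    log_norm = 0.0
    for _ in range(t):
        P = matmul(H[random.randint(0, 1)], P)
        n = frob(P)
        log_norm += math.log(n)
        P = [[P[i][j] / n for j in range(2)] for i in range(2)]
    return log_norm / t

alpha = lyapunov_estimate(5000)
assert alpha < 0  # both matrices contract every direction, so alpha < 0
```

Since ‖H(0)‖, ‖H(1)‖ < 1, every factor shrinks the product and the estimate is guaranteed negative; for expanding or mixed factors the same renormalization trick still gives a stable estimate of α.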

3. Transverse Stability Analysis. Let M be an m-dimensional compact Riemannian manifold, embedded in a Euclidean space, and let ϕ be an RDS defined on M. Suppose that there is an n-dimensional non-random compact submanifold N that is invariant with respect to ϕ and that A(ω) is a random attractor of ϕ in N. Denote by f_ω^t the map corresponding to ϕ for each t ∈ T, ω ∈ Ω, i.e., f_ω^t(p) = ϕ(t, p, ω) for all p ∈ M. According to the Oseledec multiplicative ergodic theorem for RDS [2], ϕ possesses m − n normal Lyapunov exponents almost surely, determined by the direction v. In this paper, we aim to extend the transverse stability analysis of [3] to random dynamical systems. First, we present two hypotheses on the map f_ω^t.

H1: f_ω^t : M → M is equi-C^{1+ā} (0 < a < ā < 1) with respect to all ω ∈ Ω. Here ā can be chosen as any number larger than a, and "equi" means that the C^{1+ā} property holds uniformly for all ω ∈ Ω. Therefore, the C^{1+a} (a < ā) derivative has a uniform bound independent of ω. Since M is a compact manifold, there exists a positive constant K independent of ω such that

d_M(f_ω^t(p), f_ω^t(q)) ≤ K^t d_M(p, q)^{1+a},  ∀ p, q ∈ M, ∀ ω ∈ Ω.  (9)
As a direct consequence, f is also Lipschitz continuous, and we still denote the Lipschitz constant by K for convenience.

H2: For any given µ > 0, there exists a constant λ < 0 such that

Σ_{n=1}^∞ P(V_n^λ) < +∞, where
V_n^λ = {ω : sup_{p ∈ O(A(ω), µ), ‖v‖ = 1} (1/n) log ‖d_p^⊥ f_ω^n(v)‖ > λ}.

We will prove

Theorem 3.1. Under hypotheses H1 and H2, if the embedded submanifold N is invariant with respect to ϕ and A(ω) is a µ-uniform attractor of ϕ restricted to N, then A(ω) is also a random attractor of ϕ on the whole manifold M.

Proof. Since A(ω) is a µ-uniform random attractor of ϕ restricted to the submanifold N, there is a positive constant µ such that the compact (non-random) neighborhood O_N(A(ω), µ) of A(ω) is forward invariant and, for all p ∈ O_N(A(ω), µ),

t t dM(fω(p),A(θ ω)) → 0, as t → ∞.

In the following discussion, we denote ON (A(ω), µ) by W (ω) for convenience. Define a normal bundle of W (ω) as follows:

TW(ω)^⊥ = ⋃_{p∈W(ω)} (T_p N)^⊥.

An element of TW(ω)^⊥ is denoted by (p, v), where p ∈ W(ω), v ∈ (T_p N)^⊥. The unit normal sphere bundle is denoted by SW(ω) = {(p, v) : p ∈ W(ω), v ∈ (T_p N)^⊥, ‖v‖ = 1}. Due to the compactness of N, there is a sufficiently small constant ε > 0 such that the tubular-neighborhood map h is a diffeomorphism between Ñ^ε = {(p, v) ∈ TN^⊥ : ‖v‖ ≤ ε} and a compact ε-neighborhood of N in M, denoted by N^ε (Vol. 1, Chap. 9, Thm. 20 in [20]). Therefore we conclude that there is a δ > 0 such that the commuting diagram below holds:

Ñ^ε  --f̃_ω-->  Ñ^δ
 h ↓            ↓ h
N^ε  --f_ω-->  N^δ.   (10)

Noting that for any ω ∈ Ω, W(ω) = O_N(A(ω), µ) is a subset of the submanifold N, we have TW(ω)^⊥ = ⋃_{p∈W(ω)} (T_p N)^⊥ ⊂ ⋃_{p∈N} (T_p N)^⊥ = TN^⊥. Denoting the image of W̃^ε(ω) = {(p, v) ∈ TW(ω)^⊥ : ‖v‖ ≤ ε} under h by W^ε(ω), the commuting diagram also holds:

W̃^ε(ω)  --f̃_ω-->  W̃^δ(ω)
 h ↓               ↓ h
W^ε(ω)  --f_ω-->  W^δ(ω),
where
f̃_{θ^n ω} = h^{−1} ∘ f_{θ^n ω} ∘ h.  (11)
By induction, we define
f̃_ω^n = ∏_{k=0}^{n−1} f̃_{θ^k ω}.  (12)
Since h is a diffeomorphism between the two compact sets Ñ^ε and N^ε, f̃ is also Lipschitz with some constant K̃. For convenience, we still denote max{K, K̃} by K in the following discussion. Let λ be the negative constant for which H2 holds and let γ = exp(λ). Pick a sufficiently large integer κ_0 and a sufficiently small positive constant α such that
K^{2/a} γ^{κ_0} < 1/4,  K^{2/a} α < 1/4,  0 < α < min{ε, µ/2}.  (13)
Proof sketch: We will prove this theorem along the following lines. First, for any time t > 0, we construct an integer time sequence {t_n}_{n=0}^∞ such that t = Σ_{n=0}^l t_n + r, 0 ≤ r < κ_0. Second, we choose a small number β_ω (depending on κ_0, α, and ω) such that, under the RDS ϕ(t, ω) (or the map f_ω^t), any point (p, v^0) starting from O_N^{β_ω}(A_ω, µ/2) stays in W̃^α_{θ^t ω} for all t ∈ [0, r]. By the linearization technique, we then show that (p, v^0) actually stays in W̃^α_{θ^t ω} for t ∈ [0, r + t_0] and that the transverse distance decreases. To be specific, by defining {f̃_ω^t(p, 0) : t ∈ [r, r + t_0]} as a reference trajectory corresponding to the objective trajectory {f̃_ω^t(p, v^0) : t ∈ [r, r + t_0]}, the distance between these two trajectories bounds how far the initial point (p, v^0) can move away from the attractor A(ω) during the time interval [r, r + t_0] under f̃_{θ^r ω}^t. Finally, we prove that the point (p, v^0) stays in W̃^α_{θ^t ω} for all time and that its projected distance to the invariant submanifold N tends to zero as t → +∞. Thus the point (p, v^0) converges to A(θ^t ω), which means A(ω) is a random attractor of M. In fact, at time r + t_0, we make a "switching" by regarding f̃_ω^{r+t_0}(p, v^0) as a new initial point instead of the original (p, v^0) and repeating the linearization process.

Lemma 3.2. For any time t > 0, there exists an integer sequence {t_n}_{n=0}^∞ such that the following two conditions are satisfied:
t = Σ_{n=0}^l t_n + r,  0 ≤ r < κ_0;
t_{n+1} − t_n = 1 or 2,  t_0 − κ_0 = 0 or 1.  (14)

Here κ_0 is determined in (13).

Proof. Define an integer sequence {κ_n}_{n=0}^∞ by κ_{n+1} = κ_n + 1, n ≥ 0. Then for any time t, there exists n_0 > 0 such that Σ_{n=0}^{n_0−1} κ_n < t ≤ Σ_{n=0}^{n_0} κ_n. Let κ^{(1)} = t − κ_{n_0}, and let n_1 > 0 be such that Σ_{n=0}^{n_1−1} κ_n < κ^{(1)} ≤ Σ_{n=0}^{n_1} κ_n;* let κ^{(2)} = κ^{(1)} − κ_{n_1}, and repeat this iteration until 0 < κ^{(l)} = κ^{(l−1)} − κ_{n_{l−1}} < κ_0 + 1 holds. If κ^{(l)} ≥ κ_0, then we let κ_{n_l} = κ_0 and denote r = κ^{(l)} − κ_0; otherwise, we denote r = κ^{(l)} directly. Thus we obtain a subsequence {κ_{n_k}}_{k=0}^{l−1} (or {κ_{n_k}}_{k=0}^l in the second case above) with 0 ≤ r < κ_0. Rearranging this subsequence in ascending order leads to the following statement: for any time t, there exists a positive number l > 0 such that t = Σ_{k=0}^l κ_{n_k} + r, 0 ≤ r < κ_0, with the first term κ_{n_0} = κ_0 or κ_{n_0} = κ_0 + 1. For convenience, we denote the subsequence {κ_{n_k}}_{k=0}^l by {t_n}_{n=0}^l and let τ_l be the sum of the first l terms of the subsequence. Therefore t = τ_l + r and the claim is proved.

Noting that A(ω) is µ-uniform and using hypothesis H2, we have
P{⋃_{n≥0} (V_{t_n}^λ ∪ U_{t_n})} ≤ P{⋃_{n≥t_0} (V_n^λ ∪ U_n)} ≤ Σ_{n≥t_0} (P(V_n^λ) + P(U_n)) → 0,  (15)
as t_0 → +∞. Letting C_m = [⋃_{n≥m} (V_n^λ ∪ U_n)]^c, one can verify that C_1 ⊂ C_2 ⊂ C_3 ⊂ ... ⊂ C_m ⊂ ... and deduce lim_{m→∞} P(C_m) = 1 from (15). Thus for almost every ω ∈ Ω, there exists a unique k > 0 such that ω ∈ C_k \ C_{k−1}. By choosing t_0 = k, the following two inequalities automatically hold for any t > t_0:

sup_{(p,v)∈SW(ω)} (1/t) log ‖d_p^⊥ f_ω^t(v)‖ < λ,  (16)

d_M(f̃_ω^t(O_N(A(ω), µ)), A(θ^t ω)) < µ/2.  (17)
Let (q, v^r) = f̃_ω^r(p, v^0), with (p, v^0) the initial point. Due to the continuity of f̃_ω^r(p, v^0) with respect to r and v^0, there exists a sufficiently small β_ω (or, equivalently, ‖v^0‖) < α such that
K^{κ_0+1} ‖v^r‖^a < α;  f̃_ω^r(p, v^0) ∈ W̃^α_{θ^r ω}, ∀ r ∈ [0, κ_0].  (18)

Here, α and κ0 are assigned in (13).

* Actually κ_{n_0} − κ_{n_1} = 2 if Σ_{n=0}^{n_0−1} κ_n < t ≤ Σ_{n=0}^{n_0−1} κ_n + 1; otherwise κ_{n_0} − κ_{n_1} = 1.

For (q, v^r) ∈ W̃^α_{θ^r ω}, we linearly expand f̃^{t_0}_{θ^r ω} at (q, 0) as follows:
f̃^{t_0}_{θ^r ω}(q, v^r) = f̃^{t_0}_{θ^r ω}(q, 0) + d_q f̃^{t_0}_{θ^r ω}(v^r) + R^{t_0}(v^r),
where ‖R^{t_0}(v^r)‖ ≤ K^{t_0} ‖v^r‖^{1+a} is the remaining higher-order term. Noting that at the point (q, 0), f̃^{t_0}_{θ^r ω} equals f^{t_0}_{θ^r ω}, we have
‖v^{r+τ_0}‖ ≤ ‖Π_{(TN_{r+t_0})^⊥} ∘ [f̃^{t_0}_{θ^r ω}(q, v^r) − f̃^{t_0}_{θ^r ω}(q, 0)]‖
  ≤ ‖Π_{(TN_{r+t_0})^⊥} ∘ d_q f̃^{t_0}_{θ^r ω}(v^r)‖ + ‖R^{t_0}(v^r)‖
  ≤ ‖d_q^⊥ f̃^{t_0}_{θ^r ω}‖ ‖v^r‖ + K^{t_0} ‖v^r‖^{1+a}
  = ‖d_q^⊥ f^{t_0}_{θ^r ω}‖ ‖v^r‖ + K^{t_0} ‖v^r‖^{1+a}
  ≤ [exp(λ)^{t_0} + K^{t_0} ‖v^r‖^a] ‖v^r‖
  = [γ^{t_0} + K^{t_0} ‖v^r‖^a] ‖v^r‖,  (19)
where the antepenultimate inequality holds because of (16). By letting β_ω^0 = ‖v^r‖ and β_ω^{n+1} = ‖v^{r+τ_n}‖, the above inequality can be written as
β_ω^1 ≤ [γ^{t_0} + K^{t_0} (β_ω^0)^a] β_ω^0.
From (13), since t_0, α, and β_ω^0 (i.e., ‖v^r‖) are chosen to satisfy K^{2/a} γ^{t_0} < 1/4, K^{2/a} α < 1/4, α < µ/2, and K^{t_0} ‖v^r‖^a = K^{t_0} (β_ω^0)^a < α, we have

K^{t_1} (β_ω^1)^a ≤ K^{t_0+2} [(γ^{t_0} + K^{t_0} (β_ω^0)^a) β_ω^0]^a
  = K^2 [γ^{t_0} + K^{t_0} (β_ω^0)^a]^a K^{t_0} (β_ω^0)^a
  ≤ K^2 [γ^{t_0} + α]^a K^{t_0} (β_ω^0)^a
  ≤ [K^{2/a} γ^{t_0} + K^{2/a} α]^a K^{t_0} (β_ω^0)^a
  < (1/4 + 1/4)^a K^{t_0} (β_ω^0)^a
  < K^{t_0} (β_ω^0)^a ≤ K^{κ_0+1} (β_ω^0)^a < α < µ/2.  (20)
The first inequality is due to (14) and the second is due to (18). Also, β_ω^1 < (1/2) β_ω^0 can be obtained from K^{t_1} (β_ω^1)^a < (1/4 + 1/4)^a K^{t_0} (β_ω^0)^a.
Since f̃ is also locally Lipschitz, for all (q, v^r) ∈ W̃^α_{θ^r ω}, applying (17) gives the following estimates:
d_M(f̃^{t_0}_{θ^r ω}(q, v^r), f̃^{t_0}_{θ^r ω}(q, 0)) ≤ K^{t_0} β_ω^0 < K^{t_0} (β_ω^0)^a < α < µ/2,
d_M(f̃^{t_0}_{θ^r ω}(q, 0), A_{θ^{r+t_0} ω}) = d_M(f^{t_0}_{θ^r ω}(q, 0), A_{θ^{r+t_0} ω}) < µ/2.  (21)
Therefore, by the triangle inequality, we have

d_M(f̃^{t_0}_{θ^r ω}(q, v^r), A_{θ^{r+t_0} ω}) ≤ d_M(f̃^{t_0}_{θ^r ω}(q, v^r), f̃^{t_0}_{θ^r ω}(q, 0)) + d_M(f̃^{t_0}_{θ^r ω}(q, 0), A_{θ^{r+t_0} ω}) ≤ µ/2 + µ/2 = µ.  (22)
The inequality (22) implies that the projection of f̃^{t_0}_{θ^r ω}(q, v^r) to the invariant submanifold N stays in W_{θ^{r+t_0} ω}. Together with the fact β_ω^1 < β_ω^0, we conclude that f̃_ω^{r+t_0}(p, v^0) ∈ W̃^α_{θ^{r+t_0} ω}, noting that it equals f̃^{t_0}_{θ^r ω}(q, v^r). Consequently, at time r + t_0, we make a "switching" process by regarding f̃_ω^{r+t_0}(p, v^0) as a new initial point instead of the original (p, v^0) and changing the reference trajectory used for calculating the normal Lyapunov exponent in (19) accordingly. By repeating the linearization process above, we have K^{t_2} (β_ω^2)^a < K^{t_1} (β_ω^1)^a < α < µ/2 and β_ω^2 < (1/2) β_ω^1 < β_ω^1. By the same algebra and reasoning as above,

we conclude that the initial point (p, v^0) stays in W̃^α_{θ^t ω} for all time and that the projection distance β_ω^n decreases to at most half of the previous projection distance β_ω^{n−1} at each "switching" step. Thus, the projection distance converges to zero as t goes to infinity. In conclusion, for almost every ω ∈ Ω, there is a random set O_N^{β_ω}(A_ω, µ/2), with β_ω a small positive number (depending on t_0, α, and ω), such that

lim_{t→∞} d_M(ϕ(t, (p, v^0), ω), A_{θ^t ω}) = 0  (23)

for any initial point (p, v^0) starting from O_N^{β_ω}(A_ω, µ/2). This implies that the random set O_N^{β(ω)}(A(ω), µ/2) is contained in the random basin B(A)(ω), and thus A(ω) is a random (local) attractor of M.

If the large deviation property is satisfied for the largest normal Lyapunov exponent, then hypothesis H2 always holds. As a consequence of the previous theorem, we obtain

Theorem 3.3. Under hypothesis H1, suppose A(ω) is a µ-uniform bounded attractor of ϕ restricted to the invariant submanifold N and λ^⊥ is a negative constant. If the normal Lyapunov exponent satisfies the large deviation property, i.e., for any ε > 0 there exists δ > 0 such that

P(| sup_{(p,v)∈SW(ω)} (1/t_n) log ‖d_p^⊥ f^{t_n}_{θ^{t_{n−1}} ω}(v)‖ − λ^⊥ | > ε) ≤ exp(−δ t_n),
then A(ω) is also a random attractor of M.

Proof. In fact, we only need to verify the probability estimate by which H2 holds. Let
B_{t_n} = {ω : sup_{(p,v)∈SW(ω)} (1/t_n) log ‖d_p^⊥ f^{t_n}_{θ^{t_{n−1}} ω}(v)‖ > λ^⊥}.

We have
P(⋃_{n≥0} B_{t_n}) ≤ Σ_{n≥0} exp(−δ t_n) ≤ exp(−δ t_0) Σ_{n≥0} exp(−δ n) = exp(−δ t_0) · 1/(1 − exp(−δ)) → 0, as t_0 → +∞,
where the second inequality uses t_n ≥ t_0 + n from (14).

Noting that A(ω) is µ-uniform, we have
P{⋃_{n≥0} U_{t_n}} ≤ P{⋃_{n≥t_0} U_n} → 0, as t_0 → +∞.
Therefore,
P{⋃_{n≥0} (B_{t_n} ∪ U_{t_n})} ≤ Σ_{n≥0} P(B_{t_n}) + Σ_{n≥0} P(U_{t_n}) → 0, as t_0 → +∞.

This completes the proof.

4. Applications: Complete Synchronization. In this section, we apply our main results to analyze the complete synchronization of coupled dynamical systems with randomness. The discussion in this section is restricted to Euclidean space, so we can provide numerical examples to verify the theoretical results. Since the synchronized space is invariant and compact, it can be regarded as the submanifold N mentioned in Theorem 3.1, and complete synchronization can be transformed into the transverse stability of the system w.r.t. the invariant submanifold N, which is named the "synchronization manifold" in [19].

4.1. Theoretical results. We employ our general transverse stability results to investigate local complete synchronization of the following system:

x_i^t = ϕ_i(t, x_1^0, ..., x_m^0, ω) = f_{i,ω}^t(x_1^0, ..., x_m^0), i = 1, ..., m,  (24)
or in iteration form

x_i^{t+1} = f_{i,θ^t ω}(x_1^t, ..., x_m^t), i = 1, ..., m,  (25)
where x_i^t ∈ R^n is the state variable vector of vertex i at time t, and θ^t is a dynamical system on (Ω, F, P) with state space Ω, Borel σ-algebra F, and invariant probability measure P. We assume the maps f_{i,ω}^t : R^{m×n} → R^n corresponding to ϕ_i(t, ·, ω) at vertex i share a common synchronization space: f_{i,ω}^t(s, ..., s) = f_ω^t(s) holds for some map f_ω^t, all s ∈ R^n, and all ω ∈ Ω. Denoting the invariant set
S = {x ∈ R^{m×n} : x_i = x_j, ∀ i, j = 1, ..., m}  (26)
as the synchronization space of system (24), this coupled system can be regarded as an RDS Φ : T × R^{m×n} × Ω → R^{m×n} (defined by Φ = [ϕ_1^⊤, ..., ϕ_m^⊤]^⊤) with an n-dimensional invariant subspace S, i.e., Φ(t, ·, ω)|_S = ϕ(t, ·, ω). Thus, system (24) on the synchronization space S can be simplified as:

s^t = Φ(t, s^0, ω)|_S = ϕ(t, s^0, ω) = f_ω^t(s^0).  (27)
Similarly, let A(ω) be the random attractor of the uncoupled map f_ω^t(·) on the synchronization space S. However, the synchronization space S does not possess the compactness of the submanifold N in Theorem 3.1, which would result in an ω-dependent ε in (10). To avoid this, we assume the random attractor A(ω) satisfies the uniform boundedness property defined below.

Definition 4.1. A random attractor A(ω) is called uniformly bounded if there exists a non-random bounded subset D of S such that
⋃_{ω∈Ω} A(ω) ⊂ D.

Now the bounded non-random set D can be regarded as the compact submanifold N in Theorem 3.1. Moreover, suppose that the random attractor A(ω) is also µ-uniform. Since m is finite, the random set
S(ω) = A^m(ω) ∩ S
can be regarded as a µ-uniform attractor of the synchronization subspace S, where A^m(·) denotes the m-fold Cartesian product and S denotes the synchronization space.

Definition 4.2. The coupled system (24) is said to be locally synchronized if there exists a random neighborhood E(ω) of S(ω) such that for almost every ω and each initial datum x^0 ∈ E(ω),
lim_{t→∞} ‖x_i^t(x^0, ω) − x_j^t(x^0, ω)‖ = 0, ∀ i, j = 1, ..., m.
As shown in the main results, if the attraction of A(ω) under the RDS Φ is µ-uniform and the convergence in probability of the largest nonzero normal Lyapunov exponent to a negative value is sufficiently fast, then S(ω) is a random attractor of the whole space R^{m×n}. For system (24), we make the following corresponding assumptions:
H3:
1. Let F = [f_1^⊤, ..., f_m^⊤]^⊤. F(·, ·) is equi-C^{1+ā} (0 < a < ā < 1) on R^{m×n} with respect to Ω.
2. f_{i,ω}(s, ..., s) = f_ω(s) holds for some map f_ω : R^n × Ω → R^n, all s ∈ R^n, and all ω ∈ Ω.

H4: For some µ > 0, letting

V_t^λ = {ω : (1/t) log sup_{(p,v)∈SW} ‖D^⊥ F_ω^t(p, v)‖ ≥ λ},
there exists λ < 0 such that
Σ_{t=1}^∞ P(V_t^λ) < +∞.  (28)

Theorem 4.3. Under hypotheses H3 and H4, if the uncoupled system (27) possesses a µ-uniform attractor A(ω) which is uniformly bounded, then the coupled system (24) is locally synchronized.

Proof. Since the synchronization space S is an invariant subspace of the RDS Φ, all the conditions of Theorem 3.1 are satisfied. Consequently, S(ω) is a random attractor of the whole space R^{m×n} under the RDS Φ, which implies the synchronization of system (24). This completes the proof.

We can now use this result to revisit the synchronization of networks of coupled maps with randomness, which was partially investigated in [15]. We consider the following network of coupled maps with random topologies and maps as a special case of system (25):
x_i^{t+1} = Σ_{j=1}^m G_{ij}(ξ^t) ψ(x_j^t, υ^t), i = 1, ..., m,  (29)
where ξ^t and υ^t are stationary stochastic processes on state spaces Ξ and Υ, respectively; G(ξ^t) = [G_{ij}(ξ^t)]_{i,j=1}^m is a stochastic matrix*; and ψ(x, υ) : R^n × Υ → R^n is an equi-C^{1+r} map w.r.t. x ∈ R^n for υ ∈ Υ. Let P = [p_1, P_2] be a non-singular matrix whose first column is p_1 = e_0, where e_0 = [1, 1, ..., 1]^⊤ denotes the synchronization direction, and write
P^{−1} = [p_1^* ; P_2^*],
where p_1^* is the first row of P^{−1} and P_2^* consists of the remaining m − 1 rows.

* A square matrix G = [G_{ij}]_{i,j=1}^m is said to be a stochastic matrix if G_{ij} ≥ 0 for all i, j = 1, ..., m and Σ_{j=1}^m G_{ij} = 1 for all i = 1, ..., m.

Noting that e_0 is the eigenvector of G(·) associated with eigenvalue 1, it follows that
P^{−1} G(·) P = [ 1, * ; 0, P_2^* G(·) P_2 ].
Let G̃(·) be the (skew) projection of G(·) along e_0, i.e., G̃(·) = P_2^* G(·) P_2. The projection δ-moment Lyapunov exponents can be defined as
g_P(δ) = lim_{t→∞} (1/t) log E ‖∏_{k=0}^{t−1} G̃(ξ^k)‖^δ.  (30)
We re-define system (29) as an RDS Φ over (Ω, F, P, θ^t), where the state space Ω is composed of all possible joint sequences ω = {(ξ^t, υ^t)}_{t≥0}; the Borel σ-algebra F is generated by the cylinder sets of the form {(ξ^{t_1}, υ^{t_1}) ∈ B_1, ..., (ξ^{t_r}, υ^{t_r}) ∈ B_r} for some t_1 ≤ t_2 ≤ ... ≤ t_r and B_l ∈ B for all l = 1, ..., r, where B consists of all subsets of Ξ × Υ; θ^t denotes the right-shift map, θω = {(ξ^t, υ^t)}_{t≥1}; and P denotes the invariant probability induced by the stationary joint distribution of (ξ^t, υ^t). Considering system (29) on the synchronization space S defined in (26), we obtain
x_i^{t+1} = x_j^{t+1} = Σ_{k=1}^m G_{jk}(ξ^t) ψ(x_k^t, υ^t) = Σ_{k=1}^m G_{jk}(ξ^t) ψ(x_j^t, υ^t) = ψ(x_j^t, υ^t) = ψ(x_i^t, υ^t), ∀ i, j = 1, ..., m.
Since this equation does not depend on the coupling matrix G, it can be rewritten as the following uncoupled system by letting x_i^t = x_j^t = s^t, ∀ i, j = 1, ..., m:
s^{t+1} = ψ(s^t, υ^t).  (31)
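The block-triangular structure of P^{−1} G(·) P, with G̃ = P_2^* G P_2 as its lower-right block, can be illustrated numerically. In the sketch below, the 3 × 3 row-stochastic matrix G and the particular choice of P are hypothetical examples; the only requirements are that G's rows sum to 1 and that P is non-singular with first column e_0.

```python
def matmul(A, B):
    # Dense matrix product.
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def inverse(A):
    # Gauss-Jordan inverse with partial pivoting.
    n = len(A)
    M = [row[:] + [float(i == j) for j in range(n)] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        M[c] = [x / M[c][c] for x in M[c]]
        for r in range(n):
            if r != c:
                M[r] = [xr - M[r][c] * xc for xr, xc in zip(M[r], M[c])]
    return [row[n:] for row in M]

# A row-stochastic coupling matrix G (rows sum to 1) -- hypothetical example.
G = [[0.2, 0.5, 0.3], [0.4, 0.4, 0.2], [0.1, 0.6, 0.3]]
# P with first column p1 = e0 = (1,1,1)^T; the other columns only need to
# make P non-singular.
P = [[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [1.0, 0.0, 0.0]]

M = matmul(matmul(inverse(P), G), P)
# Since G e0 = e0 and P maps the first basis vector to e0, the first column
# of P^{-1} G P is (1, 0, 0)^T:
assert abs(M[0][0] - 1.0) < 1e-9
assert abs(M[1][0]) < 1e-9 and abs(M[2][0]) < 1e-9
G_tilde = [row[1:] for row in M[1:]]  # the projection  G~ = P2* G P2
```

Any other admissible P yields a similar (conjugate) G̃; the transverse Lyapunov exponents computed from products of G̃(ξ^k) do not depend on this choice.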

Denoting the restriction Φ|_S of the RDS by ϕ, we have
$$s^{t+1} = \varphi(1, s^t, \theta^t\omega) = f_{\theta^t\omega}(s^t). \qquad(32)$$
Similarly to (30), the δ-moment Lyapunov exponent of ϕ on the synchronization space S can be defined as
$$\alpha_{\mu}(\delta) = \lim_{t\to\infty}\frac{1}{t}\log \mathbb{E}\sup_{s^0\in O_S(S(\omega),\mu)}\Big\|\prod_{k=0}^{t-1} Df_{\theta^k\omega}(\varphi(k, s^0, \omega))\Big\|^{\delta}. \qquad(33)$$
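A moment exponent of this type can also be estimated by Monte Carlo. The sketch below (numpy) does this for a scalar switching logistic map, the map used in the numerical examples later in the paper; as a simplifying assumption the sup over initial conditions in (33) is dropped, with one random initial point per trial:

```python
import numpy as np

rng = np.random.default_rng(4)

def alpha_mu(delta, t=500, trials=300):
    """Monte Carlo sketch of the delta-moment exponent (33) for a scalar
    switching logistic map s -> a s(1-s), a drawn i.i.d. from {3.7, 4}.
    The sup over initial conditions is dropped (our simplification)."""
    logs = []
    for _ in range(trials):
        s, lp = rng.random(), 0.0
        for _k in range(t):
            a = 3.7 if rng.random() < 0.5 else 4.0
            lp += np.log(abs(a * (1.0 - 2.0 * s)))   # log |f'(s)|
            s = a * s * (1.0 - s)
        logs.append(delta * lp)
    logs = np.array(logs)
    mx = logs.max()                                  # log-mean-exp, stable
    return float((mx + np.log(np.mean(np.exp(logs - mx)))) / t)

print(alpha_mu(1.0))
```

The estimate is positive, reflecting the chaotic expansion of the logistic map along the synchronization space.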

Moreover, we suppose that the map f_ω of this uncoupled system is locally Lipschitz:

H5: f_ω(·) is locally Lipschitz with a positive constant K independent of ω such that
$$\|f_{\omega}(p) - f_{\omega}(q)\| \le K\|p - q\|, \quad \forall\, p, q \in \mathbb{R}^n,\ \forall\, \omega \in \Omega.$$

Theorem 4.4. Under hypothesis H5, suppose the uncoupled system (32) possesses a µ-uniform attractor S(ω) which is uniformly bounded; then the coupled network (29) is locally synchronized if the exponents α_µ and g_P defined above satisfy α'_µ(0+) + g'_P(0+) < 0. Here α'_µ(0+) and g'_P(0+) stand for the right-hand derivatives of α_µ(·) and g_P(·) at 0.

Proof. The proof of the theorem is quite similar to that of Theorem 3.1 with minor revision. In fact, the map f_ω^t in H1 has been divided into two parts: the stochastic topologies and the stochastic maps of the uncoupled system. Thus, for system (32), there should be an alternative assumption to substitute H2, namely α'_µ(0+) + g'_P(0+) < 0. By regarding the µ-uniform attractor S(ω) on the synchronization space S as the µ-uniform attractor A(ω) on the invariant submanifold N in Theorem 3.1, we can prove that any point (x_1^0, x_2^0, ⋯, x_m^0) starting

from a β_ω-neighborhood of S(ω) will stay in O_S^α(S(ω), µ) for all time and thus converge to S(ω) eventually. Here all the parameters (such as κ_0, α, β_ω, …) are selected in the same way as in Theorem 3.1. To be specific, for any time t, pick a time series {t_n}_{n≥0} as defined in (14), and denote s_r = \frac{1}{m}\sum_{j=1}^{m} x_j^r with (x_1^r, x_2^r, ⋯, x_m^r) = Φ(r, (x_1^0, x_2^0, ⋯, x_m^0), (ξ^0, υ^0)). Note that the map (x_1^r, x_2^r, ⋯, x_m^r) ↦ (s_r, s_r, ⋯, s_r) is actually a projection from R^{m×n} to the synchronization space S, so we can select a small β_ω as in (18) such that the projection point satisfies s_r ∈ O_S(S(θ^r ω), µ) and the transverse height satisfies ‖(x_1^r, x_2^r, ⋯, x_m^r) − (s_r, s_r, ⋯, s_r)‖_{R^{m×n}} ≤ α. Therefore, we linearize system (29) along the reference trajectory ϕ(t, s_r, ω) to get
$$v^{t+1} \le \Big(\tilde G(\xi^t)\otimes D\tilde f_{\theta^t\omega}(\varphi(t, s_r, \omega)) + E^t\Big)v^t. \qquad(34)$$

Since α'_µ(0+) + g'_P(0+) < 0, there exist ḡ > g'_P(0+) and ᾱ > α'_µ(0+) with ḡ + ᾱ < 0. Let τ_n = \sum_{i=0}^{n} t_i; then there is a sufficiently large t_0 such that the following estimate holds:

$$\|v^{r+\tau_0}\| \le \prod_{k=0}^{t_0-1}\Big\|\tilde G(\xi^k)\otimes D\tilde f_{\theta^k\omega}(\varphi(k, s_r, \omega)) + E^k\Big\|\,\|v^r\| \le \bigg(\Big\|\prod_{k=0}^{t_0-1}\tilde G(\xi^k)\Big\|\,\Big\|\prod_{k=0}^{t_0-1} Df_{\theta^k\omega}(\varphi(k, s_r, \omega))\Big\| + K^{t_0}\|v^r\|^{a}\bigg)\|v^r\| \le \Big((\exp(\bar g+\bar\alpha))^{t_0} + K^{t_0}\|v^r\|^{a}\Big)\|v^r\|.$$

At time r + τ_0, we make a "switch" on the reference trajectory ϕ(t, s_r, ω) by defining s_{r+τ_0} = \frac{1}{m}\sum_{i=1}^{m} x_i^{r+\tau_0} as a new initial point instead of s_r, and repeat the above linearizations along the trajectory ϕ(t, s_{r+τ_0}, θ^{r+τ_0}ω). Thus, we have

$$\beta_{\omega}^{n+1} \le \Big[(\exp(\bar g+\bar\alpha))^{t_n} + K^{t_n}(\beta_{\omega}^{n})^{a}\Big]\beta_{\omega}^{n},$$

with β_ω^0 = ‖v^r‖ and β_ω^{n+1} = ‖v^{r+τ_n}‖ for n ≥ 1. From Proposition 1, one can see that under the conditions of this theorem, for ḡ > g'_P(0+), ᾱ > α'_µ(0+) and a sufficiently large {t_n}_{n≥0}, there exists σ > 0 such that
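To see concretely why this recursion drives β_ω^n to zero, here is a toy numeric iteration with hypothetical constants (ρ = exp(ḡ + ᾱ) < 1, K > 1, 0 < a < 1, constant block length t) and β_0 chosen small enough that the nonlinear term K^t (β^n)^a stays below 1, mirroring the choice of β_ω above:

```python
# Toy iteration of beta_{n+1} <= [rho^t + K^t beta_n^a] beta_n.
# All constants are hypothetical: rho = exp(gbar + abar) < 1, K > 1, 0 < a < 1.
rho, K, a, t = 0.5, 1.2, 0.5, 20
beta = 1e-6            # beta_0 small enough that K**t * beta**a < 1
for n in range(10):
    beta = (rho**t + K**t * beta**a) * beta
print(beta)            # beta has collapsed far below beta_0
```

Once the bracket is below 1 it shrinks further at every step (both terms decrease as β decreases), so β_ω^n contracts geometrically to zero, which is the transverse convergence claimed in the proof.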

$$\mathbb{P}\Big\{\Big\|\prod_{k=0}^{t_n-1}\tilde G(\xi^k)\Big\| \ge \exp(\bar g\, t_n)\Big\} < \exp(-\sigma t_n),$$

$$\mathbb{P}\Big\{\sup_{s^r\in O_S(S(\omega),\mu)}\Big\|\prod_{k=0}^{t_n-1} Df_{\theta^k\omega}(\varphi_{\omega}(s^r))\Big\| \ge \exp(\bar\alpha\, t_n)\Big\} < \exp(-\sigma t_n).$$
This implies that the conditions of Theorem 3.3 are satisfied, and the present theorem follows as a direct consequence.

In what follows, we suppose that the process ξ^t driving the random topology G(·) in (29) is Markovian. Let us first review the theory of Markov jump linear systems; more details can be found in [9]. A Markov jump linear system can be formulated as follows:
$$u^{t+1} = H(\xi^t)u^t, \qquad(35)$$

where u^t ∈ R^n, H(·) ∈ R^{n×n}, and {ξ^t}_{t≥0} is a homogeneous Markov chain with a unique invariant distribution π and transition probability matrix T = [t_{ij}]_{i,j=1}^{N}. To avoid complicated discussions, we suppose here that all H(·) are nonsingular and define the maximum Lyapunov exponent (MLE) as
$$h = \lim_{t\to\infty}\frac{1}{t}\log\Big\|\prod_{k=0}^{t-1} H(\xi^k)\Big\|, \quad a.s.,$$
or, similarly, the MLE w.r.t. G(·) in (29) as
$$g = \lim_{t\to\infty}\frac{1}{t}\log\Big\|\prod_{k=0}^{t-1}\tilde G(\xi^k)\Big\|, \quad a.s..$$
For the uncoupled system (32) we denote
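The almost-sure MLE h can be estimated from one long realization of the chain. The sketch below (numpy; the two-state chain, its transition matrix, and the 2×2 jump matrices are hypothetical) renormalizes a vector iterate at every step, which picks up the top exponent almost surely and avoids numerical under/overflow:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical two-state homogeneous Markov chain and jump matrices H(0), H(1).
T = np.array([[0.9, 0.1],
              [0.2, 0.8]])                  # transition probabilities
H = [np.array([[0.8, 0.1], [0.0, 0.7]]),    # H(0), nonsingular, contracting
     np.array([[1.1, 0.0], [0.2, 0.9]])]    # H(1), nonsingular, expanding

def mle(t=20000):
    """Estimate h = lim (1/t) log ||H(xi^{t-1}) ... H(xi^0)|| along one path,
    accumulating log-norms of a renormalized positive vector."""
    state = 0
    v = np.array([1.0, 1.0])
    log_norm = 0.0
    for _ in range(t):
        v = H[state] @ v
        n = np.linalg.norm(v)
        log_norm += np.log(n)
        v /= n
        state = 0 if rng.random() < T[state, 0] else 1
    return log_norm / t

h_est = mle()
print(round(h_est, 3))
```

With these constants the chain spends most of its time in the contracting state, so the estimated h is negative even though H(1) is expanding.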

$$\alpha = \sup_{(p,v)\in SW}\lim_{t\to+\infty}\frac{1}{t}\log\|d_p^{\perp}f_{\omega}^{t}(v)\|,$$
provided that the d_p^⊥ f_ω^t(v) are all nonsingular. Generally, if ξ^t is a homogeneous finite-state Markov process with a unique ergodic invariant class, we can utilize Proposition 2 to calculate the probability of transverse convergence.

H2′: 1. ξ^t is a homogeneous Markov process defined on a finite state space with a unique ergodic invariant class;
2. G(·) and d_p^⊥ f_ω^t(v) are all nonsingular, and the corresponding exponents satisfy g + α < 0.

Theorem 4.5. Under hypotheses H1 and H2′, if A(ω) is a µ-uniform attractor on the synchronization space S which is uniformly bounded, then the coupled network (29) is locally synchronized.

Proof. Since d_p^⊥ f_ω^t(v) is nonsingular, the maximum Lyapunov exponent α exists and satisfies Proposition 2. Consequently, for any ε > 0, there exists σ > 0 such that
$$\mathbb{P}\Big\{\omega: \sup_{(p,v)\in SW}\frac{1}{t}\log\|d_p^{\perp}f_{\omega}^{t}(v)\| \notin (\alpha-\varepsilon, \alpha+\varepsilon)\Big\} \le \exp(-\sigma t).$$
Similarly, the estimate can be derived for g:
$$\mathbb{P}\Big\{\omega: \frac{1}{t}\log\Big\|\prod_{k=0}^{t-1}\tilde G(\xi^k)\Big\| \notin (g-\varepsilon, g+\varepsilon)\Big\} \le \exp(-\sigma t).$$
Noting that g + α < 0 coincides with the condition λ^⊥ < 0 in Theorem 3.3, we finish the proof by directly applying Theorem 3.3.

4.2. Numerical Examples. In this subsection, we give two numerical examples to illustrate the theoretical results, using the following network of coupled maps with time-varying topologies:
$$x_i^{t+1} = (1-\epsilon)f_t(x_i^t) + \epsilon\sum_{j=1}^{m} G_{ij}(\xi^t)f_t(x_j^t), \quad i = 1,\cdots,m. \qquad(36)$$
Here ε is the coupling strength. As a special case of system (29), system (36) shares a common stochastic logistic map f_t(s) = α_t s(1−s) on the synchronization space, where α_t is randomly chosen as 3.7 or 4, each with probability 1/2, independently over time. The coupling adjacency matrices A_{ij}(ξ^t) are induced by an i.i.d. process and a Markov process, respectively, in what follows, and G_{ij}(ξ^t) denotes the normalized adjacency matrix of the graph at time t. In detail, the adjacency matrix A_{ij}(ξ^t) is defined as follows:
$$A_{ij}(\xi^t) = \begin{cases} 1, & i = j \text{ or } i, j \text{ are connected at time } t, \\ 0, & \text{otherwise}; \end{cases}$$

and the normalized adjacency matrix G_{ij}(ξ^t) is defined by
$$G_{ij}(\xi^t) = \begin{cases} \dfrac{1}{\#N_i(t)}, & \text{if } A_{ij}(\xi^t) \ne 0, \\ 0, & \text{otherwise}, \end{cases}$$
where \#N_i(t) denotes the (in-)degree of vertex i at time t. Denote by g the largest nonzero Lyapunov exponent of the coupling matrix G(ξ^t) and by α that of the switching logistic map f_t. By applying Theorem 4.5 to system (36), we conclude that a sufficient condition for synchronization is g + α < 0. In the rest of this subsection, we use numerical illustrations to verify our theoretical result. We define
$$K = \frac{1}{N-1}\sum_{i=1}^{N}|x_i(t) - \bar x(t)|$$
to measure the variation of the state variables at time t, and average it over the last 100 steps after 1000 iterations, K_T = ⟨K⟩, to measure synchronization.

I.i.d. topology switching. We start with N = 100 vertices and assume that ξ^t is independent with respect to time t. At any time t ≥ 0, every vertex is self-connected and each pair of vertices (i, j) is connected with a given probability p. We pick the simulation length T = 1000 and the connection probability p = 0.2. Fig. 1 illustrates the relationship between g + α and K_T for different ε. It can be seen that the region where g + α is negative coincides well with the synchronization region of system (36). In Fig. 2(a), we plot the convergence of the variation K over time t when the coupling strength ε = 0.5 is sufficiently large. Fig. 2(b) shows that in the case ε = 0.4, system (36) cannot synchronize (K ≠ 0) over the whole time interval [0, 1000], since the largest normal Lyapunov exponent is positive.

Random waypoint model. The "random waypoint" (RWP) model is widely used in the performance evaluation of protocols for ad hoc networks [22]. In detail, we generate 100 random agents in a square area of 100 × 100. Each agent moves towards a randomly selected target in the area with a random speed uniformly distributed in [1, 2].
After reaching the unit disc around its target, the agent waits for a random time uniformly distributed in [0, 2] and then moves to the next target. Therefore, each agent is in one of two possible states: moving towards a given target or waiting for the next movement. We suppose every agent has a sensing area with radius R. Thus, the graph is constructed by the following rule: at any time t ≥ 0, there is an edge between a pair of vertices i and j if and only if the distance between them is less than R. Moreover, every step of each agent is stochastically independent of the other agents and of time. It can be seen that the adjacency matrix A_{ij}(t+1) only depends on the present position and velocity of each agent at time t and is independent of any earlier state of the system. Therefore, {A_{ij}(t)}_{t≥0} is a Markov chain, and so is the normalized adjacency matrix G_{ij}(t). We pick the simulation length T = 1000 and the sensing radius R = 70.
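The movement and graph rules just described can be sketched as follows (Python/numpy; implementation details beyond the stated rules, e.g. treating one update as one unit of time, are our assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
N, L, R = 100, 100.0, 70.0            # agents, area side length, sensing radius

pos = rng.random((N, 2)) * L          # current positions
tgt = rng.random((N, 2)) * L          # current targets
spd = rng.uniform(1.0, 2.0, N)        # current speeds, uniform in [1, 2]
wait = np.zeros(N)                    # remaining waiting times

def step():
    """One RWP update: waiting agents count down; moving agents step towards
    their target; on reaching the target's unit disc, draw a waiting time in
    [0, 2] and then a fresh target and speed."""
    for i in range(N):
        if wait[i] > 0:
            wait[i] -= 1.0
            continue
        d = tgt[i] - pos[i]
        dist = np.linalg.norm(d)
        if dist <= 1.0:               # inside the unit disc of the target
            wait[i] = rng.uniform(0.0, 2.0)
            tgt[i] = rng.random(2) * L
            spd[i] = rng.uniform(1.0, 2.0)
        else:
            pos[i] += d / dist * min(spd[i], dist)

def normalized_adjacency():
    """Edge iff inter-agent distance < R; self-loops included; rows sum to 1."""
    D = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=2)
    A = (D < R).astype(float)
    return A / A.sum(axis=1, keepdims=True)

for _ in range(10):
    step()
G = normalized_adjacency()
print(G.shape)
```

Since the next positions depend only on the current positions, targets, speeds, and waiting times, the induced graph sequence is indeed Markovian, as stated above.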


Figure 1. Variation of K_T and g + α with respect to ε for the i.i.d. graph.


Figure 2. Comparison of K for ε = 0.5 and ε = 0.4 for the i.i.d. graph.
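The i.i.d. switching experiment can be reproduced in a few lines. The sketch below (numpy; the random seed and the specific coupling strength ε = 0.8, chosen well inside the synchronization region, are our choices rather than the paper's full ε-sweep) iterates system (36) with a fresh Erdős–Rényi topology at every step and reports the variation K at the final time:

```python
import numpy as np

rng = np.random.default_rng(2)
N, p, eps, T = 100, 0.2, 0.8, 1000    # eps = 0.8: well inside the sync region

def logistic(x, a):
    return a * x * (1.0 - x)

def iid_graph():
    """Normalized adjacency of an Erdos-Renyi graph with self-loops,
    drawn freshly (i.i.d.) at every time step."""
    U = rng.random((N, N)) < p
    A = np.triu(U, 1)
    A = (A | A.T).astype(float) + np.eye(N)   # symmetric, every vertex self-connected
    return A / A.sum(axis=1, keepdims=True)

x = rng.random(N)
for t in range(T):
    a = 3.7 if rng.random() < 0.5 else 4.0    # stochastic logistic parameter
    fx = logistic(x, a)
    x = (1.0 - eps) * fx + eps * (iid_graph() @ fx)   # system (36)

K = np.abs(x - x.mean()).sum() / (N - 1)      # variation at the final time
print(K)
```

With this ε the transverse contraction dominates the chaotic expansion and K collapses to numerical zero, matching the behavior reported in Fig. 2(a).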

K_T and g + α are both plotted in Fig. 3 as functions of the coupling strength ε; system (36) synchronizes (K = 0) once g + α becomes negative. Fig. 4 compares the convergence (Fig. 4(a), ε = 0.5) and divergence (Fig. 4(b), ε = 0.4) of the variation K with respect to time t.


Figure 3. Variation of K_T and g + α with respect to ε for the Markov process.


Figure 4. Comparison of K for ε = 0.5 and ε = 0.4 for the Markov process.

5. Conclusions. In this paper, we discussed the transverse stability of random dynamical systems (RDS). We used the linearization method to characterize the transverse dynamics near the attractor by means of the normal Lyapunov exponents, and we employed the large deviation property to estimate the probability of contraction of the transverse dynamics. For a homogeneous irreducible Markov process with finitely many states, the probability of transverse contraction can be shown to satisfy the large deviation property. Since synchronization can be regarded as the stability of the synchronization manifold, we were able to apply the main results to investigate the synchronization of coupled dynamical networks in Sec. 4. In particular, we gave two numerical examples to verify our theoretical results.

Acknowledgments. This work is jointly supported by the National Key Basic Research and Development Program (No. 2010CB731403), the National Natural Sciences Foundation of China under Grant Nos. 61273211, 60974015, and 61273309, the Foundation for the Author of National Excellent Doctoral Dissertation of PR China (No. 200921), and the Laboratory of Mathematics for Nonlinear Science, Fudan University. W. Lu would like to thank Prof. Jürgen Jost and Dr. Fatihcan M. Atay from MIS MPG for valuable discussions and suggestions during his stay there in 2007.

REFERENCES

[1] J. C. Alexander, I. Kan, J. A. Yorke and Z. You, Riddled basins, Int. J. Bifurcation Chaos, 2 (1992), 795–813.
[2] L. Arnold, "Random Dynamical Systems," Springer-Verlag, Berlin-Heidelberg, 1998.
[3] P. Ashwin, J. Buescu and I. Stewart, From attractor to chaotic saddle: a tale of transverse instability, Nonlinearity, 9 (1996), 703–737.
[4] P. Ashwin, E. Covas and R. Tavakol, Transverse instability for non-normal parameters, Nonlinearity, 12 (1999), 563–577.
[5] P. Ashwin, Minimal attractors and bifurcations of random dynamical systems, Proc. R. Soc. Lond. A, 455 (1999), 2615–2634.
[6] F. Cucker and S. Smale, Emergent behavior in flocks, IEEE Trans. Autom. Control, 52 (2007), 852–862.
[7] D. Cui, X. Liu, Y. Wan and X. Li, Estimation of genuine and random synchronization in multivariate neural series, Neural Networks, 23 (2010), 698–704.
[8] R. S. Ellis, Large deviations for a general class of random vectors, The Annals of Probability, 12 (1984), 1–12.
[9] Y. Fang, "Stability Analysis of Linear Control Systems with Uncertain Parameters," Ph.D. thesis, Case Western Reserve University, 1994.
[10] S. Floyd and V. Jacobson, The synchronization of periodic routing messages, IEEE/ACM Trans. Netw., 2 (1994), 122–136.
[11] V. Gazi and K. Passino, Stability analysis of social foraging swarms, IEEE Trans. Syst., Man, Cybern. B, 34 (2004), 539–556.
[12] H. Huang, D. W. C. Ho and Y. Qu, Robust stability of stochastic delayed additive neural networks with Markovian switching, Neural Networks, 20 (2007), 799–809.
[13] A. Lotka, "Elements of Physical Biology," Williams & Wilkins Company, 1925.
[14] W. Lu, F. M. Atay and J. Jost, Synchronization of discrete-time dynamical networks with time-varying couplings, SIAM Journal on Mathematical Analysis, 39 (2007), 1231–1259.
[15] W. Lu, F. M. Atay and J. Jost, Chaos synchronization in networks of coupled maps with time-varying topologies, Eur. Phys. J. B, 63 (2008), 399–406.
[16] X. Mao, A. Matasov and A. Piunovskiy, Stochastic differential delay equations with Markovian switching, Bernoulli, 6 (2000), 73–90.
[17] X. Mao, Exponential stability of stochastic delay interval systems with Markovian switching, IEEE Trans. Autom. Control, 47 (2002), 1604–1612.
[18] G. Tang and L. Guo, Convergence of a class of multi-agent systems in probabilistic framework, J. Syst. Sci. & Complexity, 20 (2007), 173–197.
[19] L. M. Pecora and T. L. Carroll, Master stability functions for synchronized coupled systems, Phys. Rev. Lett., 80 (1998), 2109–2112.
[20] M. Spivak, "A Comprehensive Introduction to Differential Geometry," Publish or Perish, Houston, TX, 1970.

[21] G. Ochs, "Weak Random Attractors," Technical Report 449, Institut für Dynamische Systeme, Universität Bremen, 1999.
[22] D. B. Johnson and D. A. Maltz, Dynamic source routing in ad hoc wireless networks, Mobile Computing, 353 (1996), 153–181.

Received July 2011; revised April 2012.

E-mail address: [email protected]
E-mail address: [email protected]
E-mail address: [email protected]