RANDOM DYNAMICAL SYSTEMS IN ECONOMICS

MUKUL MAJUMDAR

Abstract. Random dynamical systems are useful in modeling the evolution of economic processes subject to exogenous shocks. One obtains strong results on the existence, uniqueness, stability of the invariant distribution of such systems when an appropriate splitting condition is satisfied. Also of importance has been the study of random iterates of maps from the quadratic family. Applications to economic growth models are reviewed.

Key words. dynamical systems, Markov processes, iterated random maps, invariant distributions, splitting, quadratic family, estimation, economic growth

1. Introduction. Consider a random dynamical system (S, Γ, Q) where S is the state space (for example, a metric space), Γ an appropriate family of maps on S into itself (interpreted as the set of all possible laws of motion) and Q is a probability measure on (some σ-field of) Γ. The evolution of the system can be described as follows: initially, the system is in some state x; an element α1 of Γ is chosen randomly according to the probability measure Q and the system moves to the state X1 = α1(x) in period one. Again, independently of α1, an element α2 of Γ is chosen according to the probability measure Q and the state of the system in period two is obtained as X2 = α2(α1(x)). In general, starting from some x in S, one has

Xn+1(x) = αn+1(Xn(x)),   (1.1)

where the maps (αn) are independent with the common distribution Q. The initial point x can also be chosen (independently of (αn)) as a random variable X0. The sequence Xn of states obtained in this manner is a Markov process and has been of particular interest in dynamic economics. It may be noted that every Markov process (with an arbitrary given transition probability) may be constructed in this manner provided S is a Borel subset of a complete separable metric space, although such a construction is not unique [Bhattacharya and Waymire [1], p. 228]. Hence, random iterates of affine, quadratic or monotone maps provide examples of Markov processes with specific structures that have engaged the attention of probability theorists.

Random dynamical systems have been studied in many contexts in economics, particularly in modeling the long run evolution of economic systems subject to exogenous random shocks. The framework (1.1) can be interpreted as a descriptive model; but one may also start with a discounted (stochastic) dynamic programming problem, and directly arrive at a stationary optimal policy function, which together with the given law of transition describes the optimal evolution of the states in the form (1.1).
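The recursion (1.1) is easy to simulate. Here is a minimal sketch in which the family Γ (two affine maps on [0, 1]) and the distribution Q are assumed purely for illustration and do not come from the text:

```python
import random

# Sketch of the iteration (1.1): X_{n+1} = alpha_{n+1}(X_n), where the
# maps alpha_n are drawn i.i.d. from Q. Gamma and Q below are assumed
# examples (two affine maps on [0, 1]), not taken from the text.
Gamma = [lambda x: 0.5 * x,          # possible laws of motion
         lambda x: 0.5 * x + 0.5]
Q = [0.3, 0.7]                       # Q: probability of choosing each map

def simulate(x0, n, rng):
    """Return one realization of X_n starting from X_0 = x0."""
    x = x0
    for _ in range(n):
        alpha = rng.choices(Gamma, weights=Q)[0]   # alpha_{k+1} ~ Q
        x = alpha(x)                               # X_{k+1} = alpha_{k+1}(X_k)
    return x

print(simulate(0.2, 50, random.Random(0)))   # a draw from the law of X_50
```

Repeating the simulation over many seeds produces samples from the n-step transition probability discussed in Section 2.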

Of particular significance are recent results on the “inverse optimal problem under uncertainty” due to Mitra [10], which assert that a very broad class of random systems (1.1) can be so interpreted.

To begin with, in order to provide the motivation, I present two examples of deterministic dynamical systems arising in economics. The first is a descriptive growth model that leads to a dynamical system with an increasing law of motion. The second shows how laws of motion belonging to the quadratic family can be generated in dynamic optimization theory. In Section 3 we review some results on random dynamical systems that satisfy a splitting condition, first introduced by Dubins and Freedman [7] in their study of Markov processes. This condition has been recast in more general state spaces (see (3.10)). The results deal with:

(i) The existence, uniqueness and global stability of a steady state (an invariant distribution): a general theorem proved in Bhattacharya and Majumdar [3] is first recalled (Theorem 3.1). The proof relies on a contraction mapping argument that yields an estimate of the speed of convergence [see (3.11) and (3.13)]. Corollary 3.1 deals with “split” dynamical systems in which the admissible laws of motion are all monotone.

(ii) Applications of the theoretical results to a few topics:

(a) turnpike theorems in the literature on descriptive and optimal growth under uncertainty: when each admissible law of motion is monotone increasing, and satisfies the appropriate Inada-type ‘end point’ condition, Corollary 3.1 can be applied directly: see Sections 3.2.1 - 3.2.2.

(b) estimation of the invariant distribution: as noted above, an important implication of the splitting condition is an estimate of the speed of convergence.
This estimate is used in Section 3.2.3 to prove a result on √n-consistency of the sample mean as an estimator of the expected long run equilibrium value (i.e., the expected value of the state variable with respect to the invariant distribution). Next, in Section 4 we briefly turn to qualitative properties of random iterates of quadratic maps: a growing literature has focused on this theme, in view of the discussion in Section 1.2 and of the privileged status of the quadratic family in understanding complex or chaotic behavior of dynamical systems.

1.1. The Solow Model: A Dynamical System with an Increasing Law of Motion. Here is a discrete time exposition of Solow’s model [11] of economic growth with full employment. There is only one producible commodity which can be either consumed or used as an input along with labor to produce more of itself. When consumed, it simply disappears from the scene. Net output at the “end” of period t, denoted by Yt (≥ 0), is related to the input of the producible good Kt (called “capital”) and labor Lt employed “at the beginning of” period t according to the following technological rule (“production function”):

Yt = F(Kt, Lt)   (1.2)

where Kt ≥ 0, Lt ≥ 0. The fraction of output saved (at the end of period t) is a constant s, so that total saving St in period t is given by

St = sYt, 0 < s < 1. (1.3)

Equilibrium of saving and investment plans requires

St = It (1.4) where It is the net investment in period t. For simplicity, assume that capital stock does not depreciate over time, so that at the beginning of period t + 1, the capital stock Kt+1 is given by

Kt+1 ≡ Kt + It   (1.5)

Suppose that the total supply of labor in period t, denoted by L̂t, is determined completely exogenously, according to a “natural” law:

L̂t = L̂0(1 + η)^t, L̂0 > 0, η > 0.   (1.6)

Full employment of the labor force requires that

Lt = Lˆt (1.7)

Hence, from (1.2) - (1.7), we have

Kt+1 = Kt + sF (Kt, Lˆt)

Assume that F is homogeneous of degree one. We then have

(Kt+1/L̂t+1) · (L̂t+1/L̂t) = Kt/L̂t + sF(Kt/L̂t, 1)

Writing kt ≡ Kt/L̂t, we get

kt+1(1 + η) = kt + sf(kt)   (1.8)

where

f(k) ≡ F(k, 1).

From (1.8),

kt+1 = [kt/(1 + η)] + [sf(kt)/(1 + η)]

or

kt+1 = α(kt)   (1.9)

where

α(k) ≡ [k/(1 + η)] + s[f(k)/(1 + η)]   (1.10)

Equation (1.9) is the fundamental dynamic equation describing the intertemporal behavior of kt when both the full employment condition and the condition of short run savings-investment equilibrium [see (1.4) and (1.7)] are satisfied. We shall refer to (1.9) as the law of motion of the Solow model in its reduced form. For any k > 0, the trajectory τ(k) from k is given by τ(k) ≡ (α^j(k)), j = 0, 1, 2, ..., where α^0(k) ≡ k, α^1(k) ≡ α(k), α^j(k) ≡ α(α^(j−1)(k)) for j ≥ 2.

Assume that f(0) = 0, f′(k) > 0, f″(k) < 0 for k > 0; and lim(k↓0) f′(k) = ∞, lim(k↑∞) f′(k) = 0. Then, using (1.10), we see that α(0) = 0;

α′(k) = (1 + η)^(−1)[1 + sf′(k)] > 0 for k > 0; α″(k) = (1 + η)^(−1) sf″(k) < 0 for k > 0.   (1.11)

Also, verify the boundary conditions:

lim(k↓0) α′(k) = lim(k↓0) [(1 + η)^(−1) + (1 + η)^(−1) sf′(k)] = ∞,
lim(k↑∞) α′(k) = (1 + η)^(−1) < 1.   (1.12)

The existence, uniqueness and stability of a steady state k∗ > 0 of the dynamical system (1.9) can be proved. Here is a summary of the results:

Proposition 1.1. There is a unique k∗ > 0 such that

k∗ = α(k∗); equivalently,

k∗ = [k∗/(1 + η)] + s[f(k∗)/(1 + η)]   (1.13)
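The fixed-point equation (1.13) can be checked numerically. A sketch under assumed specifications (Cobb-Douglas f(k) = k^(1/2), s = 0.2, η = 0.05; these values are illustrative and do not come from the text):

```python
# Steady state of the Solow law of motion (1.9)-(1.10):
#   alpha(k) = [k/(1+eta)] + s*[f(k)/(1+eta)]
# Illustration only: the Cobb-Douglas f(k) = k**0.5 and the values
# s = 0.2, eta = 0.05 are assumed, not taken from the text.
s, eta = 0.2, 0.05
f = lambda k: k ** 0.5
alpha = lambda k: (k + s * f(k)) / (1 + eta)

# for this f, (1.13) reduces to eta*k = s*sqrt(k), so k* = (s/eta)**2
k_star = (s / eta) ** 2   # = 16.0

# iterating alpha from either side of k* settles at k*
for k0 in (1.0, 100.0):
    k = k0
    for _ in range(2000):
        k = alpha(k)
    print(abs(k - k_star) < 1e-6)   # True
```

The two starting points bracket k∗; both trajectories approach the same steady state, consistent with (1.11)-(1.12).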

If k < k∗, the trajectory τ(k) from k is increasing and converges to k∗. If k > k∗, the trajectory τ(k) from k is decreasing and converges to k∗.

1.2. The Quadratic Family in Dynamic Optimization Problems. We consider a family of economies indexed by a parameter µ, where µ ∈ A = [1, 4]. Each economy in this family has the same production function f : ℜ₊ → ℜ₊ and the same discount factor δ ∈ (0, 1). The economies in this family differ in the specification of their return functions w : ℜ₊² × A → ℜ₊ [depending on the parameter value of µ ∈ A that is picked].

The following assumptions on f are used:

(F.1) f(0) = 0.
(F.2) f is non-decreasing, continuous and concave on ℜ₊.
(F.3) There is K > 0 such that f(x) < x for all x > K, and f(x) > x for all 0 < x < K.

A program from an initial input x ≥ 0 is a sequence (xt) satisfying

x0 = x, 0 ≤ xt ≤ f(xt−1) for t ≥ 1

We interpret xt as the input in period t, and this leads to the output f(xt) in the subsequent period. The consumption sequence (ct), generated by a program (xt) is given by

ct = f(xt−1) − xt (≥ 0) for t ≥ 1

It is standard to verify that for any program (xt) from x ≥ 0, we have xt, ct+1 ≤ K(x) ≡ max(K, x) for t ≥ 0.

Given any µ ∈ A, the following assumptions on w(·, µ) are used:

(W.1) w(c, x, µ) is non-decreasing in c given x, and non-decreasing in x given c.
(W.2) w(c, x, µ) is continuous on ℜ₊².
(W.3) w(c, x, µ) is concave on ℜ₊².

In defining “optimality” of a program, we note that the notion has to be economy specific. Since we can keep track of the economies by simply noting their µ values, we find it convenient to refer to the appropriate notion of optimality as µ-optimality. Given any µ ∈ A, a program (x̂t) from x ≥ 0 is µ-optimal if

Σ(t=0 to ∞) δ^t w(ĉt+1, x̂t, µ) ≥ Σ(t=0 to ∞) δ^t w(ct+1, xt, µ)

for every program (xt) from x. Define a set Y ⊂ ℜ₊² by

Y = {(c, x) ∈ ℜ₊² : c ≤ f(x)}

For much of our discussion of µ-optimal programs, what is crucial is the behavior of w(·, µ) on Y (rather than on ℜ₊²). We now proceed to assume:

(W.4) Given any µ ∈ A, w(c, x, µ) is strictly increasing and strictly concave in c given x, on the set Y.

Standard arguments ensure that given any µ ∈ A, there is a µ-optimal program from every x ≥ 0. Assumptions (F.2), (W.3), and (W.4) ensure that a µ-optimal program is unique. Since there is a unique µ-optimal program from every x ≥ 0, one can define an optimal transition function h : ℜ₊ × A → ℜ₊ by

hµ(x) = x̂1

where (x̂t) is the µ-optimal program from x ≥ 0. It is easily checked that this definition also implies that for all t ≥ 0, we have

x̂t+1 = hµ(x̂t)

Consider now the family of economies, where f, δ and w are numeri- cally specified as follows:

f(x) = (16/3)x − 8x² + (16/3)x⁴ for x ∈ [0, 0.5]; f(x) = 1 for x ≥ 0.5   (1.14)

δ = 0.0025

The function w is specified in a more involved fashion. To ease writing, denote L ≡ 98, a ≡ 425. Also, denote by I the closed interval [0, 1], and define the function θ : I × A → I by

θ(x, µ) = µx(1 − x) for x ∈ I, µ ∈ A

and u : I² × A → ℜ by

u(x, z, µ) = ax − 0.5Lx² + zθ(x, µ) − 0.5z² − δ[az − 0.5Lz² + 0.5θ(z, µ)²]   (1.15)

Define a set D by

D = {(c, x) ∈ ℜ₊ × I : c ≤ f(x)}

and a function w : D × A → ℜ₊ by

w(c, x, µ) = u(x, f(x) − c, µ) for (c, x) ∈ D and µ ∈ A   (1.16)

We now extend the definition of w(·, µ) to the domain Y. For (c, x) ∈ Y with x > 1 [so that f(x) = 1, and c ≤ 1], define

w(c, x, µ) = w(c, 1, µ)   (1.17)

Finally, we extend the definition of w(·, µ) to the domain ℜ₊². For (c, x) ∈ ℜ₊² with c > f(x), define

w(c, x, µ) = w(f(x), x, µ)   (1.18)

It can be checked [see Majumdar and Mitra [9]] that for the above specifications, f satisfies (F.1) - (F.3), and given any µ ∈ A, w(·, µ) satisfies (W.1) - (W.4).

We observe that w(c, x, µ) ≥ w(0, 0, µ) [by (W.1)] = u(0, f(0) − 0, µ) = 0, for all (c, x) ∈ ℜ₊². Thus w(·, µ) maps from ℜ₊² to ℜ₊. Also, for all (c, x) ∈ ℜ₊², w(c, x, µ) ≤ w(c, 1, µ) ≤ w(1, 1, µ). One can verify [see Majumdar and Mitra [9]] the following:

Proposition 1.2. The optimal transition functions for the family of economies (f, w(·, µ), δ) are given by

hµ(x) = µx(1 − x) for all x ∈ I   (1.19)
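Iterating the optimal policy (1.19) already displays the range of long-run behavior that motivates Section 4. A small sketch (the particular µ values and the initial condition are assumed for illustration, not taken from the text):

```python
# Iterating the optimal transition function h_mu(x) = mu*x*(1 - x) of
# (1.19). The long-run behavior depends sharply on mu; the mu values
# and the initial condition below are assumed for illustration.
def orbit(mu, x0=0.3, burn=1000, keep=4):
    """Discard `burn` iterates, then return the next `keep` (rounded)."""
    x = x0
    for _ in range(burn):
        x = mu * x * (1 - x)
    tail = []
    for _ in range(keep):
        x = mu * x * (1 - x)
        tail.append(round(x, 4))
    return tail

print(orbit(2.0))   # [0.5, 0.5, 0.5, 0.5]: the fixed point 1 - 1/mu
print(orbit(3.2))   # a period-two cycle
```

For still larger µ the iterates become chaotic, which is the “privileged status” of the quadratic family mentioned in the Introduction.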

2. Random Dynamical Systems. Let S be a metric space and 𝒮 be the Borel σ-field of S. Endow Γ with a σ-field Σ such that the map (γ, x) → γ(x) on (Γ × S, Σ ⊗ 𝒮) into (S, 𝒮) is measurable. Let Q be a probability measure on (Γ, Σ). On some probability space (Ω, ℱ, P) let (αn), n = 1, 2, ..., be a sequence of random functions from Γ with a common distribution Q. For a given random variable X0 (with values in S), independent of the sequence (αn), define

X1 ≡ α1(X0) ≡ α1X0   (2.1)

Xn+1 = αn+1(Xn) ≡ αn+1αn...α1X0   (2.2)

We write Xn(x) for the case X0 = x; to simplify notation we write Xn = αn...α1X0 for the more general (random) X0. Then Xn is a Markov process with a stationary transition probability p(x, dy) given as follows: for x ∈ S, C ∈ 𝒮,

p(x, C) = Q({γ ∈ Γ : γ(x) ∈ C})   (2.3)

The stationary transition probability p(x, dy) is said to be weakly continuous or to have the Feller property if for any sequence xn converging to x, the sequence of probability measures p(xn, ·) converges weakly to p(x, ·). One can show that if Γ consists of a family of continuous maps, p(x, dy) has the Feller property.

3. Evolution. To study the evolution of the process (2.2), it is convenient to define the map T∗ [on the space M(S) of all finite signed measures on (S, 𝒮)] by

T∗µ(C) = ∫S p(x, C)µ(dx) = ∫Γ µ(γ^(−1)C)Q(dγ), µ ∈ M(S).   (3.1)

Let 𝒫(S) be the set of all probability measures on (S, 𝒮). An element π of 𝒫(S) is invariant for p(x, dy) (or for the Markov process Xn) if it is a fixed point of T∗, i.e.,

π is invariant iff T∗π = π   (3.2)
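The fixed-point property (3.2) can be checked by simulation in a simple example. The system below (Γ = {x/2, (x+1)/2}, each map chosen with probability 1/2, on S = [0, 1]) is assumed purely for illustration; for it, the uniform distribution on [0, 1] is the invariant π:

```python
import random

# Monte Carlo check of invariance (3.2) for an assumed example (not from
# the text): Gamma = {x/2, (x+1)/2}, each chosen with probability 1/2,
# on S = [0, 1]. Here pi = Uniform[0, 1] is invariant: if X ~ pi, then
# the one-step image alpha(X) again has law pi.
random.seed(0)
n = 200_000
xs = [random.random() for _ in range(n)]                      # X ~ pi
ys = [x / 2 if random.random() < 0.5 else (x + 1) / 2 for x in xs]

# Kolmogorov-type distance between the empirical law of alpha(X) and pi
ys.sort()
dk = max(abs((i + 1) / n - y) for i, y in enumerate(ys))
print(dk < 0.01)   # True: T* pi = pi up to sampling error
```

Here the map x/2 sends π to the uniform law on [0, 1/2] and (x+1)/2 to the uniform law on [1/2, 1]; the equal-weight mixture is uniform again, which is exactly what the small distance reflects.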

Now write p(n)(x, dy) for the n-step transition probability, with p(1) ≡ p(x, dy). Then p(n)(x, dy) is the distribution of αn...α1x. Define T∗n as the n-th iterate of T∗:

T∗nµ = T∗(n−1)(T∗µ) (n ≥ 2), T∗1 = T∗, T∗0 = Identity   (3.3)

Then for any C ∈ 𝒮,

(T∗nµ)(C) = ∫S p(n)(x, C)µ(dx),   (3.4)

so that T∗nµ is the distribution of Xn when X0 has distribution µ. To express T∗n in terms of the common distribution Q of the i.i.d. maps, let Γⁿ denote the usual Cartesian product Γ × Γ × ... × Γ (n terms), and let Qⁿ be the product probability Q × Q × ... × Q on (Γⁿ, Σ⊗ⁿ), where Σ⊗ⁿ is the product σ-field on Γⁿ. Thus Qⁿ is the (joint) distribution of (α1, α2, ..., αn). For γ = (γ1, γ2, ..., γn) ∈ Γⁿ let γ̃ denote the composition

γ̃ := γnγn−1...γ1   (3.5)

We suppress the dependence of γ̃ on n for notational simplicity. Then, since T∗nµ is the distribution of Xn = αn...α1X0, one has (T∗nµ)(A) = Prob(X0 ∈ α̃^(−1)A), where α̃ = αnαn−1...α1. Therefore, by the independence of α̃ and X0,

(T∗nµ)(A) = ∫Γⁿ µ(γ̃^(−1)A)Qⁿ(dγ) (A ∈ 𝒮, µ ∈ 𝒫(S)).   (3.6)

Finally, we come to the definition of stability. A Markov process Xn is stable in distribution if there is a unique invariant probability measure π such that Xn(x) converges in distribution to π irrespective of the initial state x, i.e., if p(n)(x, dy) converges weakly to the same probability measure π for all x.

3.1. A General Theorem Under Splitting. Recall that 𝒮 is the Borel σ-field of the state space S. For 𝒜 ⊂ 𝒮, define

d(µ, ν) := sup{|µ(A) − ν(A)| : A ∈ 𝒜} (µ, ν ∈ 𝒫(S)).   (3.7)

Consider the following hypothesis (H1):

(1) (𝒫(S), d) is a complete metric space;   (3.8)

(2) there exists a positive integer N such that for all γ ∈ Γᴺ, one has

d(µγ̃^(−1), νγ̃^(−1)) ≤ d(µ, ν) (µ, ν ∈ 𝒫(S))   (3.9)

(3) there exists δ > 0 such that ∀ A ∈ 𝒜, with N as in (2), one has

P(α̃^(−1)(A) = S or ∅) ≥ δ > 0   (3.10)

Theorem 3.1. Assume the hypothesis (H1). Then there exists a unique invariant probability π for the Markov process Xn := αn...α1X0, where X0 is independent of {αn : n ≥ 1}. Also, one has

d(T∗nµ, π) ≤ (1 − δ)^[n/N] (µ ∈ 𝒫(S))   (3.11)

where T∗nµ is the distribution of Xn when X0 has distribution µ, and [n/N] is the integer part of n/N.
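The geometric bound (3.11) can be observed numerically. In the assumed example below (not from the text), αn(x) = x/2 or (x+1)/2 with probability 1/2 each on S = [0, 1]; splitting holds with N = 1 and δ = 1/2, and the invariant π is uniform on [0, 1]:

```python
import random

# Numerical look at the geometric bound (3.11) for an assumed split
# system (not from the text): alpha(x) = x/2 or (x+1)/2, each with
# probability 1/2, on S = [0, 1]. Splitting holds with N = 1 and
# delta = 1/2, and the invariant pi is Uniform[0, 1].
def dk_to_uniform(n_steps, m=100_000, seed=3):
    """Kolmogorov distance between the empirical law of X_n(0) and pi."""
    rng = random.Random(seed)
    ys = []
    for _ in range(m):
        x = 0.0                      # fixed initial state
        for _ in range(n_steps):
            x = x / 2 if rng.random() < 0.5 else (x + 1) / 2
        ys.append(x)
    ys.sort()
    return max(abs((i + 1) / m - y) for i, y in enumerate(ys))

# the distance falls roughly like (1 - delta)^n = 2^(-n), per (3.11)
for n in (1, 2, 4, 8):
    print(round(dk_to_uniform(n), 3))
```

After n steps the law of Xn(0) is uniform on a grid of 2^n points, so the printed distances halve with each doubling of n until Monte Carlo noise of order m^(−1/2) takes over.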

We now state two corollaries of Theorem 3.1 applied to i.i.d. monotone maps. Corollary 3.1 extends a result of Dubins and Freedman [7, Thm. (5.10)] to more general state spaces in ℜ and relaxes the requirement of continuity of αn. The set of monotone maps may include both nondecreasing and nonincreasing ones.

Let S be a closed subset or an interval of ℜ. Denote by dK(µ, ν) the Kolmogorov distance on 𝒫(S). That is, if Fµ, Fν denote the distribution functions (d.f.) of µ and ν, then

dK(µ, ν) = sup{|Fµ(x) − Fν(x)| : x ∈ S} ≡ sup{|Fµ(x) − Fν(x)| : x ∈ ℜ} (µ, ν ∈ 𝒫(S)).   (3.12)

It should be noted that convergence in the distance dK on 𝒫(S) implies weak convergence in 𝒫(S).

Corollary 3.1. Let S be an interval or a closed subset of ℜ. Suppose αn (n ≥ 1) is a sequence of i.i.d. monotone maps on S satisfying the splitting condition (H):

(H) There exist x0 ∈ S, a positive integer N and a constant δ > 0 such that

Prob(αNαN−1...α1x ≤ x0 ∀ x ∈ S) ≥ δ,
Prob(αNαN−1...α1x ≥ x0 ∀ x ∈ S) ≥ δ.

(a) Then the sequence of distributions T∗nµ of Xn := αn...α1X0 converges to a probability measure π on S exponentially fast in the Kolmogorov distance dK irrespective of X0. Indeed,

dK(T∗nµ, π) ≤ (1 − δ)^[n/N] ∀ µ ∈ 𝒫(S),   (3.13)

where [y] denotes the integer part of y.

(b) π in (a) is the unique invariant probability of the Markov process Xn.

Proofs of Theorem 3.1 and Corollary 3.1 are spelled out in Bhattacharya and Majumdar [3], [4].

3.2. Applications of Splitting.

3.2.1. Stochastic Turnpike Theorems. We now turn to the problem of economic growth under uncertainty. A complete list of references to the literature – influenced by the works of Brock and Mirman – is in Majumdar, Mitra and Nyarko [8]. I indicate how the principal results of this literature can be obtained by using Corollary 3.1. Instead of a single law of motion (1.9), we allow for a class of admissible laws with properties suggested by the deterministic Solow model in its reduced form [see (1.10) and (1.11)].

Consider the case where S = ℜ₊ and Γ = {F1, F2, ..., Fi, ..., FN}, where the distinct laws of motion Fi satisfy:

F.1 Fi is strictly increasing, continuous, and there is some ri > 0 such that Fi(x) > x on (0, ri) and Fi(x) < x for x > ri. Note that Fi(ri) = ri for all i = 1, ..., N.

Next, assume:

F.2 ri ≠ rj for i ≠ j.

In other words, the unique positive fixed points ri of distinct laws of motion are all distinct. We choose the indices i = 1, 2, ..., N so that

r1 < r2 < .... < rN

Let Prob(αn = Fi) = pi > 0 (1 ≤ i ≤ N). Consider the Markov process {Xn(x)} with the state space (0, ∞). If y ≥ r1, then Fi(y) ≥ Fi(r1) > r1 for i = 2, ..., N, and F1(r1) = r1, so that Xn(x) ≥ r1 for all n ≥ 0 if x ≥ r1. Similarly, if y ≤ rN, then Fi(y) ≤ Fi(rN) < rN for i = 1, ..., N − 1 and FN(rN) = rN, so that Xn(x) ≤ rN for all n ≥ 0 if x ≤ rN. Hence, if the initial state x is in [r1, rN], then the process {Xn(x) : n ≥ 0} remains in [r1, rN] forever. We shall presently see that for a long run analysis we can consider [r1, rN] as the effective state space.

We shall first indicate that on the state space [r1, rN] the splitting condition (H) is satisfied. If x ≥ r1, then F1(x) ≤ x, F1^(2)(x) ≤ F1(x), etc. The limit of this decreasing sequence F1^(n)(x) must be a fixed point of F1, and therefore must be r1. Similarly, if x ≤ rN, then FN^(n)(x) increases to rN. In particular,

lim(n→∞) F1^(n)(rN) = r1, lim(n→∞) FN^(n)(r1) = rN.

Thus, there must be a positive integer n0 such that

F1^(n0)(rN) < FN^(n0)(r1).

This means that if z0 ∈ [F1^(n0)(rN), FN^(n0)(r1)], then

Prob(Xn0(x) ≤ z0 ∀ x ∈ [r1, rN]) ≥ Prob(αn = F1 for 1 ≤ n ≤ n0) = p1^(n0) > 0,
Prob(Xn0(x) ≥ z0 ∀ x ∈ [r1, rN]) ≥ Prob(αn = FN for 1 ≤ n ≤ n0) = pN^(n0) > 0.

Hence, considering [r1, rN] as the state space, and using Theorem 3.1, there is a unique invariant probability π with the stability property holding for all initial x ∈ [r1, rN].

Now, define m(x) = min(i=1,...,N) Fi(x), and fix the initial state x ∈ (0, r1). One can verify that (i) m is continuous; (ii) m is strictly increasing; (iii) m(r1) = r1 and m(x) > x for x ∈ (0, r1), and m(x) < x for x > r1. Clearly m^(n)(x) increases with n, and m^(n)(x) ≤ r1. The limit of the sequence

m^(n)(x) must be a fixed point, and is therefore r1. Since Fi(r1) > r1 for i = 2, ..., N, there exists some ε > 0 such that Fi(y) > r1 (2 ≤ i ≤ N) for all y ∈ [r1 − ε, r1]. Clearly there is some nε such that m^(nε)(x) ≥ r1 − ε. If τ1 = inf{n ≥ 1 : Xn(x) > r1}, then it follows that for all k ≥ 1

Prob(τ1 > nε + k) ≤ p1^k.

Since p1^k goes to zero as k → ∞, it follows that τ1 is finite almost surely. Also, Xτ1(x) ≤ rN, since for y ≤ r1, (i) Fi(y) < Fi(rN) for all i and (ii) Fi(rN) < rN for i = 1, 2, ..., N − 1 and FN(rN) = rN. (In a single period it is not possible to go from a state less than r1 to one larger than rN.) By the strong Markov property, and our earlier result, Xτ1+m(x) converges in distribution to π as m → ∞ for all x ∈ (0, r1). Similarly, one can check that as n → ∞, Xn(x) converges in distribution to π for all x > rN.

The assumption that Γ is finite can be dispensed with if one has additional structures in the model. Here is a simple example.

3.2.2. Uncountable Γ: An Example. Let F : ℜ₊ → ℜ₊ satisfy:

F.1 F is strictly increasing and continuous.

We shall keep F fixed. Consider Θ = [θ1, θ2], where 0 < θ1 < θ2, and assume the following concavity and “end point” conditions:

F.2 F(x)/x is strictly decreasing in x > 0, with θ2F(x″)/x″ < 1 for some x″ > 0 and θ1F(x′)/x′ > 1 for some x′ > 0.

Since θF(x)/x is also strictly decreasing in x, F.1 and F.2 imply that for each θ ∈ Θ, there is a unique xθ > 0 such that θF(xθ)/xθ = 1, i.e., θF(xθ) = xθ. Observe that θF(x)/x > 1 for 0 < x < xθ. Now, θ′ > θ″ implies xθ′ > xθ″:

θ′F(xθ″)/xθ″ > θ″F(xθ″)/xθ″ = 1 = θ′F(xθ′)/xθ′, and since θ′F(x)/x is strictly decreasing in x, xθ′ > xθ″.

Write Γ = {f : f = θF, θ ∈ Θ}, and

f1 = θ1F, f2 = θ2F.

Assume that θ is chosen i.i.d. according to a density function g(θ) on Θ which is positive and continuous on Θ. In our notation, f1(xθ1) = xθ1; f2(xθ2) = xθ2. If x ≥ xθ1, then f(x) ≡ θF(x) ≥ f(xθ1) ≥ f1(xθ1) = xθ1. Hence Xn(x) ≥ xθ1 for all n ≥ 0 if x ≥ xθ1. If x ≤ xθ2, then

f(x) ≤ f(xθ2) ≤ f2(xθ2) = xθ2.

Hence, if x ∈ [xθ1, xθ2], then the process Xn(x) remains in [xθ1, xθ2] forever. Now, lim(n→∞) f1^(n)(xθ2) = xθ1 and lim(n→∞) f2^(n)(xθ1) = xθ2. There must

be a positive integer n0 such that f1^(n0)(xθ2) < f2^(n0)(xθ1). Choose some z0 ∈ (f1^(n0)(xθ2), f2^(n0)(xθ1)). There exist intervals [θ1, θ1 + m] and [θ2 − m′, θ2] such that for all θ ∈ [θ1, θ1 + m] and θ̂ ∈ [θ2 − m′, θ2],

(θF)^(n0)(xθ2) < z0 < (θ̂F)^(n0)(xθ1).

Then the splitting condition holds. Now fix x such that 0 < x < xθ1. Then for any θ ∈ Θ,

θF(x) ≥ θ1F(x) > x.

Let m be any given positive integer. Since (θ1F)^(n)(x) → xθ1 as n → ∞, there exists n′ ≡ n′(x) such that (θ1F)^(n)(x) > xθ1 − 1/m for all n > n′. This implies that Xn(x) > xθ1 − 1/m for all n ≥ n′. Therefore, lim inf(n→∞) Xn(x) ≥ xθ1.

We now argue that with probability one, lim inf(n→∞) Xn(x) > xθ1. For this, note that if we choose δ = (θ2 − θ1)/2 and ε > 0 such that xθ1 − ε > 0, then min{θF(y) − θ1F(y) : θ ≥ θ1 + δ, y ≥ xθ1 − ε} ≥ δF(xθ1 − ε) > 0. Write δ′ ≡ δF(xθ1 − ε) > 0. Since with probability one the i.i.d. sequence {θ(n) : n = 1, 2, ...} takes values in [θ1 + δ, θ2] infinitely often, lim inf(n→∞) Xn(x) ≥ xθ1 − 1/m + δ′. Choose m so that 1/m < δ′. Then with probability one the sequence Xn(x) exceeds xθ1. Since xθ2 = f2^(n)(xθ2) ≥ Xn(xθ2) ≥ Xn(x) for all n, it follows that with probability one, Xn(x) reaches the interval [xθ1, xθ2] and remains in it thereafter. Similarly, one

can prove that if x > xθ2, then with probability one, the Markov process

Xn(x) will reach [xθ1, xθ2] in finite time and stay in the interval thereafter.

Remark 3.1. The proof of this result holds for any bounded non-degenerate distribution of θ (if ℒ is the support of the distribution of θ, define θ1 ≡ inf ℒ < θ2 ≡ sup ℒ).

3.2.3. An Estimation Problem. Consider a Markov process Xn with a unique stationary distribution π. Some of the celebrated results on ergodicity and the strong law of large numbers hold for π-almost every initial condition. However, even with [0, 1] as the state space, the invariant distribution π may be hard to compute explicitly when the laws of motion are allowed to be non-linear, and its support may be difficult to determine or may be a set of zero Lebesgue measure [see Bhattacharya and Rao [2]]. Moreover, in many economic models, the initial condition may be historically given, and there may be little justification in assuming that it belongs to the support of π.

Consider then a random dynamical system with state space [c, d] (without loss of generality for what follows choose c > 0). Assume Γ consists of a family of monotone maps from S into S, and the splitting condition (H) holds. The process starts with a given x. There is, by Corollary 3.1, a unique invariant distribution (a stochastic equilibrium) π of the random dynamical system, and (3.13) holds. Suppose we want to estimate the equilibrium mean ∫S yπ(dy) by the sample mean (1/n) Σ(j=0 to n−1) Xj. We say that the estimator (1/n) Σ(j=0 to n−1) Xj is √n-consistent if

(1/n) Σ(j=0 to n−1) Xj = ∫ yπ(dy) + OP(n^(−1/2))   (3.14)

where OP(n^(−1/2)) denotes a random sequence εn such that |εn|/n^(−1/2) is bounded in probability. Thus, if the estimator is √n-consistent, the fluctuation of the empirical (or sample) mean around the equilibrium mean is OP(n^(−1/2)). We shall outline the main steps in the verification of (3.14) in our context. For any bounded (Borel) measurable f on [c, d], define the transition operator T by

Tf(x) = ∫S f(y)p(x, dy)

By using the estimate (3.13), one can show (see Bhattacharya and Majumdar [4], pp. 217-219) that if

f(z) = z − ∫ yπ(dy),

then

Σ(n=m+1 to ∞) sup(x) |T^n f(x)| ≤ (d − c) Σ(n=m+1 to ∞) (1 − δ)^[n/N] → 0 as m → ∞.

Hence, g = −Σ(n=0 to ∞) T^n f [where T^0 is the identity operator I] is well-defined, and g and Tg are bounded functions. Also, (T − I)g = −Σ(n=1 to ∞) T^n f + Σ(n=0 to ∞) T^n f ≡ f. Hence,

Σ(j=0 to n−1) f(Xj) = Σ(j=0 to n−1) (T − I)g(Xj)
= Σ(j=0 to n−1) ((Tg)(Xj) − g(Xj))
= Σ(j=1 to n) [(Tg)(Xj−1) − g(Xj)] + g(Xn) − g(X0)

By the Markov property and the definition of Tg it follows that

E((Tg)(Xj−1) − g(Xj) | ℱj−1) = 0

where ℱr is the σ-field generated by {Xj : 0 ≤ j ≤ r}. Hence, (Tg)(Xj−1) − g(Xj) (j ≥ 1) is a martingale difference sequence, whose terms are uncorrelated, so that

E[Σ(j=1 to n) ((Tg)(Xj−1) − g(Xj))]² = Σ(j=1 to n) E((Tg)(Xj−1) − g(Xj))²   (3.15)

Given the boundedness of g and Tg, the right side is bounded by n·α for some constant α. It follows that

(1/n) E(Σ(j=0 to n−1) f(Xj))² ≤ η′ for all n

where η′ is a constant that does not depend on X0. Thus,

E((1/n) Σ(j=0 to n−1) Xj − ∫ yπ(dy))² ≤ η′/n

which implies

(1/n) Σ(j=0 to n−1) Xj = ∫ yπ(dy) + OP(n^(−1/2))
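The √n-consistency statement (3.14) can be illustrated by simulation on an assumed split system (not from the text): αn(x) = x/2 or (x+1)/2 with probability 1/2 each on [0, 1], whose invariant π is uniform with equilibrium mean 1/2:

```python
import random

# Illustration of sqrt(n)-consistency (3.14) on an assumed split system
# (not from the text): alpha(x) = x/2 or (x+1)/2, each with probability
# 1/2, on [0, 1]; the invariant pi is Uniform[0, 1] with mean 1/2.
def sample_mean_error(n, seed):
    """|sample mean of X_0, ..., X_{n-1}  minus  equilibrium mean 1/2|."""
    rng = random.Random(seed)
    x, s = 0.9, 0.0                  # historically given initial state
    for _ in range(n):
        s += x
        x = x / 2 if rng.random() < 0.5 else (x + 1) / 2
    return abs(s / n - 0.5)

# errors shrink roughly like n**(-1/2)
for n in (100, 10_000, 1_000_000):
    print(sample_mean_error(n, seed=7))
```

Note that the chain starts at x = 0.9, not at a draw from π, in keeping with the remark above that initial conditions may be historically given.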

For other examples of √n-consistent estimation, see Bhattacharya and Majumdar [4].

4. Iterates of Quadratic Maps. On the state space S = (0, 1) consider the Markov process defined recursively by

Xn+1 = αεn+1 Xn (n = 0, 1, 2, ...)   (4.1)

where {εn : n ≥ 1} is a sequence of i.i.d. random variables with values in (0, 4) and, for each value θ ∈ (0, 4), αθ is the quadratic function (on S):

αθx ≡ αθ(x) = θx(1 − x), 0 < x < 1.   (4.2)

As always, the initial random variable X0 is independent of {εn : n ≥ 1}. Our main result from Bhattacharya and Majumdar [5] provides a criterion for Harris recurrence and the existence of a unique invariant probability for the process {Xn : n ≥ 0}. Recall that a sequence µn (n ≥ 1) of probability measures on S is said to be tight if, for every ε > 0, there exists a compact Kε ⊂ S such that µn(Kε) ≥ 1 − ε for all n ≥ 1.

Theorem 4.1. Assume that the distribution of ε1 has a nonzero absolutely continuous component (w.r.t. Lebesgue measure on (0, 4)) whose density is bounded away from zero on some nondegenerate interval in (1, 4). If, in addition, {(1/N) Σ(n=1 to N) p(n)(x, dy) : N ≥ 1} is tight on S = (0, 1) for some x, then (i) {Xn : n ≥ 0} is Harris recurrent and has a unique invariant probability π, and (ii) (1/N) Σ(n=1 to N) p(n)(x, dy) converges to π in total variation distance, for every x, as N → ∞.

Corollary 4.1. If ε1 has a nonzero density component which is bounded away from zero on some nondegenerate interval contained in (1, 4) and if, in addition,

E log ε1 > 0 and E|log(4 − ε1)| < ∞,   (4.3)

then {Xn : n ≥ 0} has a unique invariant probability π on S = (0, 1) and (1/N) Σ(n=1 to N) p(n)(x, dy) → π in total variation distance, for every x ∈ (0, 1).

Remark 4.1. Under the hypothesis of Theorem 4.1, the Markov process is not in general aperiodic. For example, one may take the distribution of εn to be concentrated in an interval such that for every θ in this interval αθ has a stable periodic orbit of period m > 1. One may find an interval of this kind so that the process is irreducible and cyclical of period m. If εn has a density component bounded away from zero on a nondegenerate interval B containing a stable fixed point, i.e., B ∩ (0, 3) ≠ ∅, then the process is aperiodic and p(n)(x, ·) converges in total variation distance to a unique invariant π. Assumptions of this kind have been used by Bhattacharya and Rao [2] and Dai [6].
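A sketch of the random logistic iteration (4.1)-(4.2) in a case covered by Corollary 4.1; the specification εn ~ Uniform[1.5, 2.5] is assumed for illustration (it satisfies (4.3), since E log ε1 ≈ 0.68 > 0 and E|log(4 − ε1)| < ∞):

```python
import random

# Simulation of the random logistic iteration (4.1)-(4.2) in a case
# covered by Corollary 4.1. The specification eps_n ~ Uniform[1.5, 2.5]
# is assumed for illustration; it satisfies condition (4.3).
rng = random.Random(42)
x = 0.123
occupation = []
for _ in range(100_000):
    eps = 1.5 + rng.random()         # eps_{n+1} ~ Uniform[1.5, 2.5]
    x = eps * x * (1 - x)            # X_{n+1} = alpha_{eps_{n+1}} X_n
    occupation.append(x)

# the process remains in S = (0, 1), and its time average settles down,
# consistent with a unique invariant probability on (0, 1)
print(0.0 < min(occupation) and max(occupation) < 1.0)   # True
print(round(sum(occupation) / len(occupation), 3))
```

A histogram of `occupation` would approximate the invariant π whose existence the corollary guarantees; its exact form is not computed here.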

REFERENCES

[1] Bhattacharya, R.N. and E.C. Waymire: Stochastic Processes with Applications, John Wiley, New York (1990).
[2] Bhattacharya, R.N. and B.V. Rao: Random Iterations of Two Quadratic Maps. In Stochastic Processes (eds. S. Cambanis, J.K. Ghosh, R.L. Karandikar and P.K. Sen), Springer Verlag, New York (1993), pp. 13-21.
[3] Bhattacharya, R.N. and M. Majumdar: On a Theorem of Dubins and Freedman, Journal of Theoretical Probability, 12 (1999), pp. 1067-1087.
[4] Bhattacharya, R.N. and M. Majumdar: On a Class of Stable Random Dynamical Systems: Theory and Applications, Journal of Economic Theory, 96 (2001), pp. 208-229.
[5] Bhattacharya, R.N. and M. Majumdar: Stability in Distribution of Randomly Perturbed Quadratic Maps as Markov Processes, CAE Working Paper 02-03, Cornell University (2002) [to appear in Annals of Applied Probability].
[6] Dai, J.J.: A Result Regarding Convergence of Random Logistic Maps, Statistics and Probability Letters, 47 (2000), pp. 11-14.
[7] Dubins, L.E. and D. Freedman: Invariant Probabilities for Certain Markov Processes, Annals of Mathematical Statistics, 37 (1966), pp. 837-858.
[8] Majumdar, M., Mitra, T., and Y. Nyarko: Dynamic Optimization under Uncertainty: Non-convex Feasible Set. In Joan Robinson and Modern Economic Theory (ed. G.R. Feiwel), MacMillan, London (1989), pp. 545-590.
[9] Majumdar, M. and T. Mitra: Robust Ergodic Chaos in Discounted Dynamic Optimization Models, Economic Theory, 4 (1994), pp. 677-688.
[10] Mitra, K.: On Capital Accumulation Paths in a Neoclassical Stochastic Growth Model, Economic Theory, 11 (1998), pp. 457-464.
[11] Solow, R.M.: A Contribution to the Theory of Economic Growth, Quarterly Journal of Economics, 70 (1956), pp. 65-94.