Discrete Dynamical Systems

Trinity College – MATH 210-01

October 19, 2015


One-Dimensional Discrete Dynamical Systems

Suppose we invest P0 = $1000 into a savings account that accrues 1% interest annually. How much money do we have after the nth year? To find the answer, we first consider how much money we have after the first year. It is easy to see that we'll have P1 dollars given by

P1 = P0 + 0.01P0

= (1 + 0.01)P0

= 1.01P0 = (1.01)(1000) = $1010.00.

Once we have the amount for the first year, we can find the amount that we will have after two years. We simply add 1% of P1 to the amount P1. This gives

P2 = P1 + 0.01P1

= (1 + 0.01)P1 = (1.01)(1.01)P0 = (1.01)^2 P0 = (1.01)^2 (1000) = $1020.10.

If we consider the pattern above, we can see that after the nth year, we may expect to have

Pn = (1.01)^n P0.

It is easy to see that as n → ∞, the amount of money we have in the account, call it Pn, will approach infinity. In fact, this amount will grow exponentially. The process described above is an example of a discrete dynamical system because the amount of money (our quantity of interest) changes by a determined amount based on the previous time step, and it is updated every year (a discrete update). We note that a dynamical system is nothing more than a sequence of numbers. In fact, it is a sequence of numbers (which may represent population totals, monetary amounts, concentrations, etc.) that evolves in discrete time steps. In the previous example, the sequence of numbers we are interested in is the amount after every year:

{$1000.00, $1010.00, $1020.10,...} .

Written another way, we see that our sequence of interest is simply an iterative process that involves multiplying by 1.01 after every year:

{P0, (1.01)P0, (1.01)^2 P0, ..., (1.01)^n P0, ...}.
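Iterating the map directly makes this pattern concrete. The following is an illustrative Python sketch (the course's computations were done in Matlab, but any language works):

```python
# Iterate the savings-account map P_{n+1} = 1.01 * P_n starting from P_0 = 1000.
def f(p):
    return 1.01 * p

p = 1000.0
trajectory = [p]
for _ in range(3):
    p = f(p)
    trajectory.append(p)

# First four balances: 1000.00, 1010.00, 1020.10, 1030.30
print([round(v, 2) for v in trajectory])
```

Each pass through the loop is one year of compounding, i.e., one application of f.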

We can generalize any dynamical system in the following way: Take a point x in the real number system (which was P0 in the previous example) and apply a function to it, say f(x) (which was (1.01)P0 in the previous example); then the dynamical system is the sequence of numbers defined as a repeated composition of that function:

x, f(x), f(f(x)), f(f(f(x))), ...

Notation: x, f(x), f^2(x), f^3(x), ..., f^n(x), ....

Each element of this sequence is in R, and we are interested in the limiting value of this sequence. If x0 is the initial point, we want to know if the limit

lim_{n→∞} f^n(x0)

exists or not. This limit may depend on the initial point x0. Essentially, a dynamical system is described by a function and the infinite iterations of composition of that function. The behavior of the system can be described by determining the above limit for all possible values of x0. Consider the interest problem again. In this example, our function is f(x) = 1.01x and our initial point was x0 = 1000. We can find the nth iteration as

n = 0 : f^0(x0) = x0
n = 1 : f^1(x0) = f(x0) = 1.01 x0
n = 2 : f^2(x0) = f(f(x0)) = 1.01(1.01 x0) = (1.01)^2 x0
...
n = n : f^n(x0) = (1.01)^n x0,

which is precisely what we found before. Now, we can describe the behavior by looking at the following limit:

lim_{n→∞} f^n(x0) = lim_{n→∞} (1.01)^n x0 = ±∞,

which depends on the sign of x0. If x0 < 0 (which is impractical for the interest rate problem), the limit is −∞; and if x0 > 0, the limit is ∞. In the case when x0 = 1000, we obtain positive infinity. In most cases, we will want to determine this limiting value for different starting positions x0. Another way to write a dynamical system is to display it as a recursive relationship. For example, some sequences are written recursively, like the Fibonacci Sequence,

{1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89,...}, which can be written as x0 = 1, x1 = 1, xn+1 = xn + xn−1. Here, the “next” term is defined as an expression in the previous two terms. For the interest problem, we can simply write

xn+1 = 1.01xn, x0 = 1000, which describes the same system as before. We note that the function f(x) = 1.01x is preserved in this notation as simply the right hand side of the equation. We will be working with the above notation for dynamical systems. We’ve already come across a dynamical system of this form called Newton’s Method for finding roots of functions.


Newton’s Method is an iterative, recursive process designed to find the root of a specified function, say g(x). We’ve seen this sequence defined as

xn+1 = xn − g(xn)/g'(xn),

which starts at some initial point x0 and converges (hopefully) to a root of the function g(x), i.e., as n → ∞, xn → r, where r is the solution to the equation g(r) = 0. We see here that the dynamical system that is Newton's Method is the system described by the function f(x) = x − g(x)/g'(x), where g(x) is the function whose roots we desire. In this sense, given an initial point x0, we can write the sequence

{x0, f(x0), f^2(x0), ..., f^n(x0), ...},

which we know converges (under appropriate conditions) to the root, say x = r. That is,

lim_{n→∞} f^n(x0) = r,

where f(x) = x − g(x)/g'(x). Here, we may need x0 to be sufficiently close to r to begin with, as we've seen various results occur for different values of x0. The point is, Newton's Method is a specific dynamical system whose behavior can be characterized as converging to a root of a specified function, assuming our initial point is "close enough" to the desired root. This is a very important concept that can be generalized to any dynamical system. The key difference between Newton's Method and the interest rate problem is that for most values of x0 using Newton's Method, the sequence actually converges to some finite value. In the interest problem, the only starting value that doesn't approach ±∞ as we repeatedly apply f(x) = 1.01x is x0 = 0. If x0 = 0, then the system just stays at 0 forever, i.e.,

lim_{n→∞} f^n(x0) = lim_{n→∞} (1.01)^n (0) = 0.

In the next section, we classify the dynamics of systems by considering what happens for any starting value x0.
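Before moving on, here is an illustrative Python sketch of Newton's Method viewed as a dynamical system. The choice g(x) = x^2 − 2, whose positive root is √2, is an example assumption, not from the notes:

```python
import math

# Newton's Method as a dynamical system: iterate f(x) = x - g(x)/g'(x).
# Example choice: g(x) = x^2 - 2, so the iterates should approach sqrt(2).
def g(x):
    return x * x - 2.0

def g_prime(x):
    return 2.0 * x

def newton_map(x):
    return x - g(x) / g_prime(x)

x = 1.0              # initial point x_0
for _ in range(10):  # repeated composition: f^10(x_0)
    x = newton_map(x)

print(x, math.sqrt(2))  # the iterate agrees with sqrt(2) to machine precision
```

Only a handful of iterations are needed; Newton's Method converges very quickly once the iterate is near the root.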

Trajectories and Classifications of Points

So far, we have been investigating certain dynamical systems to determine what happens to an initial point x0 when we repeatedly apply f to it. That is, if we are given a function f(x) and an initial point x0, then we call the sequence

{x0, f(x0), f^2(x0), ..., f^n(x0), ...}

a single trajectory of the dynamical system, and we want to determine what happens to this trajectory by considering the following limit:

lim_{n→∞} f^n(x0).

In some cases (as in the Newton's Method case), this limit converges to a finite number. It turns out that this limit converges to what is called a fixed point. A fixed point is a specific type of trajectory.


Definition If f is a function and f(c) = c, then c is a fixed point.

By definition, a fixed point is an initial value x0 that satisfies f(x0) = x0. In this case, the trajectory (or sequence)

{x0, f(x0), f^2(x0), ..., f^n(x0), ...}

is the same as the sequence

{x0, x0, x0, ..., x0, ...}.

We can see this is true because f(x0) = x0, f^2(x0) = f(f(x0)) = f(x0) = x0, and so on. A fixed point never moves, hence its name. If we look at the function f(x) = x^3, it follows that 0, −1, and 1 are fixed points. We determine this by setting f(x) = x and solving for x. In this case, we have

f(x) = x ⇒ x^3 = x ⇒ x^3 − x = 0 ⇒ x(x^2 − 1) = 0 ⇒ x(x − 1)(x + 1) = 0 ⇒ x = −1, 0, 1.

Graphically, we can say that a fixed point is a point at which the graph of y = f(x) intersects the line y = x. Consider the following figure.

[Figure: Fixed Points of f(x) = x^3. The graph of y = f(x) = x^3 is plotted together with the line y = x; the two intersect at x = −1, 0, 1.]
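The fixed-point calculation for f(x) = x^3 can be double-checked in a few lines of Python (an illustrative sketch; the course's figures were made in Matlab):

```python
# A fixed point of f satisfies f(c) = c. For f(x) = x**3 the candidates
# come from solving x**3 - x = 0, i.e. x = -1, 0, 1.
def f(x):
    return x ** 3

candidates = [-1.0, 0.0, 1.0]
fixed_points = [c for c in candidates if f(c) == c]
print(fixed_points)  # all three candidates are genuine fixed points
```

The exact equality check is safe here because −1, 0, and 1 cube exactly in floating point.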

From the figure it is easy to see that the fixed points of the system occur at x0 = −1, 0, 1. Another type of trajectory is called a periodic point or a cycle. Suppose we start with x0 and apply f to it many times. If the kth iteration comes back to x0, then x0 is a periodic point with period k. A cycle is defined formally below.

Definition The point c is a periodic point (or cycle) if f^k(c) = c for some k > 0. The smallest such k is called the prime period of the trajectory.


In the case of a periodic point x0 with prime period k, the trajectory defined by the sequence

{x0, f(x0), f^2(x0), ..., f^n(x0), ...}

reduces to the following:

{x0, f(x0), ..., f^(k−1)(x0), x0, f(x0), ..., f^(k−1)(x0), ...}.

For example, consider the function f(x) = x^2 − 1. Let x0 = 0; then

x0 = 0

f(x0) = f(0) = −1
f^2(x0) = f(−1) = 0
f^3(x0) = f(0) = −1
f^4(x0) = f(−1) = 0
...

Hence, the point x0 = 0 is periodic with prime period k = 2. We can see this because when we start at x0 = 0, we obtain the sequence {0, −1, 0, −1, 0,...}.
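This cycle is easy to confirm by direct iteration; here is a short, illustrative Python sketch:

```python
# Iterate f(x) = x**2 - 1 starting at x0 = 0 and record the trajectory.
def f(x):
    return x * x - 1

x = 0
orbit = [x]
for _ in range(6):
    x = f(x)
    orbit.append(x)

print(orbit)  # alternates 0, -1, 0, -1, ... so x0 = 0 has prime period 2
```

Equivalently, f(f(0)) = 0, which is the statement that 0 is a fixed point of f^2.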

Every 2nd term is 0; i.e., the trajectory of f(x) = x^2 − 1 starting at x0 = 0 is a cycle with period k = 2. Another way to say this is that x0 = 0 is a fixed point of the function f^2(x). Indeed, the function f^2(x) is given by

f^2(x) = f(f(x)) = f(x^2 − 1) = (x^2 − 1)^2 − 1 = x^4 − 2x^2.

If we set f^2(x) = x, we find that x = 0 is a fixed point. In fact, x = −1 is also a fixed point of f^2(x), which we could already tell from the sequence above (every term that is not zero is −1). Actually, there is another fixed point near x ≈ −0.6180, as the figure below confirms.

[Figure: Fixed Points of f^2(x) = (x^2 − 1)^2 − 1. The graph of y = f^2(x) is plotted with the line y = x, showing intersections near x = −1, −0.6180, and 0.]


Hence, there are three fixed points of f^2(x) in this system: x = −1, −0.6180, and 0. (Note that x ≈ −0.6180 solves x^2 − 1 = x, so it is a fixed point of f itself, while x = −1 and x = 0 lie on a genuine cycle of prime period k = 2.) Typically, finding all the fixed and periodic points of a system is something we would like to accomplish. In general, it is very difficult to find periodic points because it may be very hard to find a solution to f^k(x) = x. For example, say x0 has a prime period of k = 5 and f(x) is a polynomial of degree 2. Then we would be trying to solve f^5(x) = x, where f^5(x) is a polynomial of degree 2^5 = 32. This isn't easy, to say the least. Here, we can discuss other (albeit less important) facts about periodic points. Suppose x0 is a periodic point of prime period k. Then we know that x0 is a fixed point of the function f^k(x) (we just confirmed this above). It follows that x0 is also a fixed point of f^(2k)(x), f^(3k)(x), etc. That is, x0 is also a fixed point of f^(nk)(x) for any n. This is true because if f^k(x) = x, then f^(2k)(x) = f^k(f^k(x)) = f^k(x) = x; and f^(3k)(x) = f^k(f^(2k)(x)) = f^k(x) = x; and so on. Furthermore, suppose x0 is a cycle of prime period k. Then we could list the trajectory as the sequence

{x0, x1, x2, ..., xk−1, x0, x1, ...}.

On this trajectory, it also follows that x1 is a periodic point of prime period k. Actually, this holds for all points on the trajectory. All these facts hold, and are quite interesting! In a typical dynamical system, most trajectories are neither fixed nor periodic. If we're trying to classify what happens to every value on the real number line, most points will not be fixed or periodic. As in the case with the interest problem, we saw that most initial values tended to infinity or negative infinity (except for x0 = 0). Indeed, if we consider the linear function f(x) = 2x, which is similar to the interest problem, then it follows for any x0 that f^n(x0) = 2^n x0. In this case, as long as x0 ≠ 0, we have

lim_{n→∞} f^n(x0) = ±∞.

In general, if lim_{n→∞} |f^n(x0)| = ∞, then we say x0 tends to infinity. But in the same system, it follows that x0 = 0 is a fixed point. That is, if we set f(x) = 2x = x, then x = 0 is a solution. Therefore, for this dynamical system, we can say the set of all x ∈ (−∞, 0) ∪ (0, ∞) tends to ±∞ and x = 0 is a fixed point. We have covered all the values on the real number line. In this specific case, the only point that converges to the fixed point 0 is x = 0, and every other point diverges. In this case, we say that the fixed point x = 0 is unstable because every point not equal to 0 "flies away" from it. Now, consider the linear function f(x) = (1/2)x. It is easy to see that for any starting point x0, we have f^n(x0) = (1/2^n) x0, which implies

lim_{n→∞} (1/2^n) x0 = 0,

for all x0. This is the complete opposite of the previous example. We can see here that no matter what x0 is, the dynamical system will always end up at 0 as n → ∞. In this case, the only fixed point is still x = 0 because if we set f(x) = (1/2)x = x, the only solution is x = 0. However, we see that every point on the real number line is attracted to this fixed point as n → ∞. In this case, we say that all numbers in R tend to the fixed point x = 0.
We also say that the fixed point x = 0 is stable because every other starting point is "drawn to" it. Here, we have loosely defined what stable and unstable mean. Soon, we will define this concept precisely. Just as a point can converge to or diverge from a certain value, a point can also be eventually fixed or eventually periodic. Consider the following definition.


Definition (Loose Definition) A point c is called eventually fixed or eventually periodic if c itself is not fixed or periodic, but some point on the trajectory of c is fixed or periodic. (Formal Definition) The point c is an eventually fixed point of the function f if there exists an N such that f^(n+1)(c) = f^n(c) whenever n ≥ N. The point c is eventually periodic with period k if there exists an N such that f^(n+k)(c) = f^n(c) whenever n ≥ N.

We'll demonstrate the previous definition with a couple of examples. Consider the function f(x) = x^2 − 2. It is easily shown that x = 2 is a fixed point since f(2) = 2^2 − 2 = 4 − 2 = 2. If we start at x0 = 0, the sequence becomes {0, −2, 2, 2, 2, ...}.

Therefore, x0 = 0 is an eventually fixed point of this system. We can even say that 2 is a stable fixed point, or an attractor, since it attracts x0 = 0. Furthermore, consider the function f(x) = x^2 − 1. We saw before that 0 and −1 were both periodic points with prime period k = 2. If we let x0 = √2, our sequence becomes

{√2, 1, 0, −1, 0, −1, 0, ...}.

Therefore, if we start at x0 = √2, the sequence eventually becomes a cycle. Hence, the point √2 is an eventually periodic point and, in fact, x = 0 is a stable cycle since it attracts x0 = √2. This is a brief introduction to the classification of dynamical systems. We have touched on the basic types of trajectories for discrete systems. There are plenty of points that do not fall into any of these categories, but for now, this short introduction will do. In the next section, we consider a graphical analysis of these systems and precisely define what it means to be stable and unstable.
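Numerically, the eventually periodic trajectory from x0 = √2 can be observed as well. The Python sketch below is illustrative; because the 2-cycle {0, −1} is attracting, the floating-point iterates settle onto it almost immediately even though √2 is not represented exactly:

```python
import math

# Iterate f(x) = x**2 - 1 starting from x0 = sqrt(2).
def f(x):
    return x * x - 1

x = math.sqrt(2)
orbit = [x]
for _ in range(8):
    x = f(x)
    orbit.append(x)

# After a few steps the trajectory lands (up to floating-point rounding)
# on the attracting 2-cycle {0, -1}.
print(orbit)
```

The first iterate is 1 up to rounding, and from there the orbit falls onto the cycle, matching the sequence above.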

Graphical Analysis and Stable Sets

In this section, we consider a graphical method for classifying trajectories, which will allow us to better understand the stability of fixed points. We begin by, once again, considering the system f(x) = x^3. We know that this function has fixed points at x0 = 0 and x0 = 1. Now, consider the trajectory starting at x0 = 0.75:

{0.75, 0.421875, 0.075088, 0.000423, 0.000000,...}

We can see that this sequence converges to the fixed point at zero, which means zero is most likely stable. One way to graphically display this sequence is to graph the function y = f(x) with the line y = x on the same plot. We can place a point on the graph of y = x at (x0, x0) to represent the initial iterate, x0 = 0.75. We then want to apply f(x) to x0. To represent this step, we connect the initial point to the point (x0, f(x0)) on the graph of y = f(x). From here, we connect this point back to the line y = x by placing a point at (f(x0), f(x0)). We then simply apply f(x) again by connecting this point to the point (f(x0), f^2(x0)) on the graph of y = f(x). We can see that we are being led to the origin, which indicates that the sequence is converging to zero. Repeating this process is known as cobwebbing, and it typically tells us what will happen to any given starting point x0. This process is demonstrated in the graphs below:


[Figure: six panels titled "Cobwebbing the Function f(x) = x^3", showing the successive cobweb steps through (x0, x0), (x0, f(x0)), (f(x0), f(x0)), (f(x0), f^2(x0)), and so on; the cobweb approaches the fixed point x = 0.]
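The cobwebbing steps described above can be generated programmatically. The following illustrative Python sketch computes the corner points of the cobweb (plotting is omitted; the notes' figures were produced in Matlab):

```python
# Build cobweb corner points for iterating f: from (x, x) move vertically to
# (x, f(x)) on the curve, then horizontally back to the line y = x at (f(x), f(x)).
def cobweb_points(f, x0, n_steps):
    points = [(x0, x0)]
    x = x0
    for _ in range(n_steps):
        y = f(x)
        points.append((x, y))   # vertical segment endpoint on y = f(x)
        points.append((y, y))   # horizontal segment endpoint on y = x
        x = y
    return points

pts = cobweb_points(lambda x: x ** 3, 0.75, 4)
print(pts[:3])  # [(0.75, 0.75), (0.75, 0.421875), (0.421875, 0.421875)]
```

Connecting these points in order reproduces the staircase pattern in the panels above.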

Using our graphical technique for interpreting discrete dynamical systems, we can determine whether a fixed point is stable or not. Consider the function f(x) = x^3 again, but now let's look at the trajectories starting with x0 = 0.95 and x0 = 1.05:

x0 = 0.95 : {0.95, 0.857375, 0.630249, 0.250344, 0.015689, 0.000004,...} → 0

x0 = 1.05 : {1.05, 1.157625, 1.551328, 3.733456, 52.03951, 140928.8,...} → ∞

From this, we can easily tell that the trajectory starting at x0 = 0.95 converges to zero and the trajectory starting at x0 = 1.05 diverges. This is apparent from the following cobwebbing diagram:

[Figure: Cobwebbing the Function f(x) = x^3, showing the cobweb from x0 = 0.95 heading toward 0 and the cobweb from x0 = 1.05 heading off toward infinity.]

We might expect that this system has an attracting, stable fixed point at x0 = 0 and an unstable, repelling fixed point at x0 = 1. The latter fact is apparent because points very close to x0 = 1 (i.e., the points x0 = 0.95 and x0 = 1.05) quickly "run away" from 1 when we repeatedly apply f. The only point that stays the same when put into the system is x0 = 1. How can we determine if a point is attracting or repelling without using the cobwebbing technique? To answer this question, let's consider the case when f(x) = x^(1/3). It is easy to see that x0 = 0 and x0 = 1 are fixed points. We can look at the trajectories for this function starting with x0 = 0.95 and x0 = 1.05:

x0 = 0.95 : {0.95, 0.983048, 0.994317, 0.998102, 0.999367, 0.999788,...} → 1

x0 = 1.05 : {1.05, 1.016396, 1.005436, 1.001809, 1.000603, 1.000201,...} → 1

We can see that both of these trajectories converge to 1 instead of repelling from it. In this case, the fixed point at x0 = 1 is a stable fixed point. The following figure shows this fact for starting values that are greater than zero.

[Figure: Cobwebbing the Function f(x) = x^(1/3), showing cobwebs for positive starting values converging to the fixed point x = 1.]

A natural question arises: Why does f(x) = x^3 have a repelling fixed point at x0 = 1, but f(x) = x^(1/3) have an attracting fixed point at x0 = 1? The answer lies in the local behavior near the fixed point. That is, what is happening at values very close to x0 = 1 on the graph of f(x) = x^3 that is different on f(x) = x^(1/3)? If we zoom in on the point, the graph of both functions becomes linear, and the slope of the graph at the point determines if the point will be stable or not. In this case, the function f(x) = x^3 is much steeper at x0 = 1 than f(x) = x^(1/3). In fact, if the magnitude of the slope is greater than 1, then the point will be repelling, and if it is less than 1, it will attract. We can demonstrate this fact by considering linear functions. Below is the cobwebbing analysis of four linear functions: f(x) = (1/2)x, f(x) = 2x, f(x) = −(1/2)x, and f(x) = −2x. Each function has a fixed point at x0 = 0, but which ones are stable (attracting)?

[Figure: four cobwebbing panels for the linear functions f(x) = 2x, f(x) = (1/2)x, f(x) = −2x, and f(x) = −(1/2)x.]

We can see from the graphs that f(x) = (1/2)x and f(x) = −(1/2)x are both stable. The magnitude of the slope in both of these cases is less than one. That is, |f'(x)| = 1/2 < 1. With the other two functions, the magnitude of the slope is 2. Therefore, we can write |f'(x)| = 2 > 1. This fact gives us a general method for classifying fixed points as stable or unstable. In fact, for a fixed point to be stable, the magnitude of the slope of the function evaluated at the fixed point must be less than one.

Definition Suppose x0 is a fixed point for f(x). Then x0 is stable or attracting if |f'(x0)| < 1. The point x0 is repelling or unstable if |f'(x0)| > 1. Finally, if |f'(x0)| = 1, the fixed point is called neutral or indifferent.

It is easy to see how this definition works for the previous examples. The function f(x) = (1/2)x has a fixed point at x0 = 0. The derivative is f'(x) = 1/2, so that

|f'(x0)| = 1/2 < 1.

Therefore, the point is stable. Now, consider the function f(x) = x^3. This function has a fixed point at x0 = 1, as we saw before. The derivative is f'(x) = 3x^2, which gives

|f'(x0)| = 3(1)^2 = 3 > 1.

Therefore, this fixed point is repelling. But the function f(x) = x^(1/3) has derivative f'(x) = 1/(3x^(2/3)), which gives

|f'(x0)| = 1/(3(1)^(2/3)) = 1/3 < 1.

Therefore, this fixed point is stable. In general, we can see why linear functions are stable or unstable: if the magnitude of the slope is greater than one, the cobwebbing gets farther and farther away from the point, and if it is less than one at the point, the cobweb converges to it. We apply the same idea to nonlinear functions by looking at the local linearity at the point. This means finding the slope of the function and plugging the fixed point into it. This idea is sometimes called linearizing about the fixed point. There are two theorems associated with attracting and repelling points that we'll mention here.
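The classification in the definition above translates directly into code. Here is an illustrative Python sketch that estimates |f'(x0)| with a centered difference; the function name and step size h are choices for this example, not part of the notes:

```python
def classify_fixed_point(f, x0, h=1e-6):
    """Classify the fixed point x0 of f via |f'(x0)|, estimated numerically."""
    slope = (f(x0 + h) - f(x0 - h)) / (2 * h)  # centered-difference derivative
    if abs(slope) < 1:
        return "attracting"
    if abs(slope) > 1:
        return "repelling"
    return "neutral"

print(classify_fixed_point(lambda x: x ** 3, 1.0))        # slope 3 -> repelling
print(classify_fixed_point(lambda x: x ** 3, 0.0))        # slope 0 -> attracting
print(classify_fixed_point(lambda x: x ** (1 / 3), 1.0))  # slope 1/3 -> attracting
```

In practice one would use the exact derivative when it is available, as the notes do; the numerical version is handy when f is complicated.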

Theorem. (Attracting Fixed Point Theorem) Suppose c is an attracting fixed point for f(x). Then there exists an interval I such that c ∈ I and, for any x0 ∈ I, f^n(x0) ∈ I for all n and f^n(x0) → c as n → ∞.


This theorem basically says that every stable fixed point has an interval around it such that every point in that interval is attracted to the fixed point. The set of values associated with the fixed point for which this is the case is called the stable set of that fixed point. For example, for the function f(x) = x^3, it follows that x0 = 0 is a stable fixed point (since |f'(0)| = 3(0)^2 = 0 < 1), and the stable set of x0 = 0 is the set (−1, 1), since every value within this interval actually is attracted to zero. Similar definitions and theorems apply to periodic points. As we've seen before, a point x0 is periodic with period k if it is a fixed point of the function f^k(x). We say this is an attracting or stable periodic point if the magnitude of the derivative of f^k at x0 is less than one. That is, the periodic point x0 is stable if

|(f^k)'(x0)| < 1.

Just as before, if the magnitude of the derivative of f^k(x) is greater than one at x0, then the periodic point is unstable. This may, in general, be hard to show. It turns out that there is an easier way to determine this derivative. Let x0 be a periodic point of prime period k. Then our trajectory has the form

{x0, f(x0), ..., f^(k−1)(x0), x0, f(x0), ..., f^(k−1)(x0), ...},

which we can write as

{x0, x1, x2, ..., xk−1, x0, x1, ...}.

Now, consider the chain rule for the derivative of f^2(x) at x0:

(f^2)'(x0) = (f(f(x0)))' = f'(f(x0)) · f'(x0) = f'(x1) · f'(x0).

Now, consider the derivative of the function f^3(x) at x0:

(f^3)'(x0) = (f(f(f(x0))))'
= f'(f(f(x0))) · f'(f(x0)) · f'(x0)
= f'(x2) · f'(x1) · f'(x0).

We see that the derivative of the function f^k(x) at x0 is simply the product of the derivatives of f at every point within the periodic orbit. This allows us to determine the stability of periodic points more easily.

Theorem. Suppose x0, x1, ..., xk−1 lie on a periodic cycle of period k for f(x), with xi = f^i(x0). Then

(f^k)'(x0) = f'(xk−1) ··· f'(x1) · f'(x0).

To demonstrate a stable periodic orbit, consider the function f(x) = x^2 − 1. We know that there is a periodic point with period k = 2 at x0 = 0. To determine if it is stable, we first find the derivative of f^2(x) = (x^2 − 1)^2 − 1 = x^4 − 2x^2. The derivative at x0 = 0 is

(f^2)'(x0) = 4x0^3 − 4x0 = 4(0)^3 − 4(0) = 0.

Therefore, the point is stable. Consider the following cobwebbing diagram:


[Figure: Cobwebbing the Function f(x) = x^2 − 1, showing the period-2 cycle between x = 0 and x = −1 and a trajectory starting at x0 = 0.75 falling onto the cycle.]

Here, the blue points at (0, 0) and (−1, −1) are the two periodic points of period k = 2. The red line represents the cycle between the two points. If we start at the black point, x0 = 0.75, we note that the cobweb eventually falls onto the red cycle. Hence, the cycle attracts the point 0.75. This means the points x0 = 0 and x0 = −1 are stable periodic points and x0 = 0.75 is in the stable set of this periodic cycle. Also, we can say that x0 = 0.75 is eventually periodic. The methods discussed here for determining if a point is stable or unstable form the basis for analyzing one-dimensional and two-dimensional discrete systems. We will use these facts to determine the stability of more complex systems. For now, we want to apply this analysis to population models and higher-dimensional systems.
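As a closing illustration of this section, the product formula (f^k)'(x0) = f'(xk−1) ··· f'(x0) can be applied in a few lines of Python (an illustrative sketch for the 2-cycle of f(x) = x^2 − 1, where f'(x) = 2x):

```python
# Stability multiplier of a k-cycle: (f^k)'(x0) = f'(x_{k-1}) * ... * f'(x1) * f'(x0).
def f(x):
    return x * x - 1

def f_prime(x):
    return 2 * x

def cycle_multiplier(x0, k):
    x, product = x0, 1.0
    for _ in range(k):
        product *= f_prime(x)  # multiply in the derivative at the current orbit point
        x = f(x)               # move to the next point on the cycle
    return product

m = cycle_multiplier(0, 2)  # f'(0) * f'(-1) = 0 * (-2) = 0
print(abs(m) < 1)           # True: the 2-cycle is attracting
```

Since |(f^2)'(0)| = 0 < 1, the cycle {0, −1} is stable, matching the computation above.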

Discrete Population Models and Parameter Variation

Recall from calculus that a population grows exponentially if its rate of change is proportional to its current population. In the language of discrete systems, we can write this as the following: Let Pt be the population of rabbits (for example) in year t. Assuming there are no restrictions on the growth of this population, we want to track the population by noting the current number of rabbits once per year. Then, using the simple exponential growth model, we could write

Pt+1 − Pt = kPt,

where the left-hand side is the change in population. Note the similarities between this and the following differential equation:

dP/dt = kP.

They both essentially say the same thing. The only difference is that the discrete system tracks the change in rabbits only once per year, whereas the differential equation tracks the instantaneous rate of change at every point during the year. Considering the discrete form of this equation, we can write the system as

Pt+1 = rPt,

where r = 1 + k. We say that this new discrete dynamical system has one parameter, namely r. We want to investigate this system and determine for what values of r the fixed points are stable. In this sense, we can define the function f(P) = rP (which is simply the right-hand side), and find the fixed points. The fixed points of this function are

f(P) = P ⇒ rP = P ⇒ rP − P = 0 ⇒ P(r − 1) = 0 ⇒ P* = 0.

The only fixed point, denoted here as P*, is P* = 0. To determine if this fixed point is stable or not, we have to look at the derivative at P*. The derivative of f(P) with respect to P is f'(P) = r. Hence, we can conclude

|f'(P*)| = |r| < 1, if |r| < 1.

That is, the fixed point P* = 0 is stable in this model only if −1 < r < 1. Otherwise, the fixed point is unstable. We already knew this from the interest problem, which was our first example! In this case, and for our host-parasitoid problem, we are interested in the stability of a fixed point for values of some parameter. This is what we'll call parameter variation. For this small example, we say that the stability region for the system

Pt+1 = rPt

is −1 < r < 1, because this is the only interval in which the system has a stable fixed point. This system, in general, is not very interesting. A much more interesting dynamical system is the logistic map. Recall from calculus that the logistic growth model is a model that exhibits exponential growth but is limited by a carrying capacity, call it M. This translates to the following: Let Pt be the population of rabbits at year t. We wish to model the growth of these rabbits year by year using the discrete logistic model given by

Pt+1 − Pt = kPt(1 − Pt/M),

where the left-hand side is again the change in population.

Here, we note the similarities between this discrete equation and the corresponding continuous differential equation

dP/dt = kP(1 − P/M).

Again, the only differences lie in the definition of rate of change: in the discrete case, the change is only considered yearly, whereas in the continuous case the rate of change is instantaneous. Let's rearrange the discrete equation in the following way:

Pt+1 − Pt = kPt(1 − Pt/M)
⇒ Pt+1 = (1 + k)Pt(1 − Pt/M)
⇒ Pt+1/M = (1 + k)(Pt/M)(1 − Pt/M).

In the last step, we divided each side of the equation by M. Now, if we let r = 1 + k again and let Qt = Pt/M be the fraction of rabbits that can inhabit the area of interest, then we have the following logistic map:

Qt+1 = rQt(1 − Qt).

As before, we define the function f(Q) = rQ(1 − Q) to be our function on the right-hand side, and now we want to know for what values of r this dynamical system is stable. Although this function is rather simple, the stability of this system turns out to be very complex and chaotic. To continue on with the analysis, we can find the fixed points of f:

f(Q) = Q ⇒ rQ(1 − Q) = Q ⇒ rQ − rQ^2 − Q = 0 ⇒ Q(r − 1 − rQ) = 0 ⇒ Q* = 0, (r − 1)/r.

We see that there are exactly two fixed points, Q1* = 0 and Q2* = 1 − 1/r. We can determine the stability of each point with respect to r by finding the derivative. We have

f'(Q) = d/dQ [rQ − rQ^2] = r − 2rQ = r(1 − 2Q).

Plugging in the respective fixed points, we obtain:

Q1* = 0 : |f'(Q1*)| = |r|,
Q2* = 1 − 1/r : |f'(Q2*)| = |r(1 − 2(1 − 1/r))| = |2 − r|.

From this, we can find a stability region for each fixed point. In each case, we find the values of r that make the fixed point stable. Consider the fixed point at Q1* = 0. Since a fixed point, say x*, is stable if and only if |f'(x*)| < 1, we must have

|f'(Q1*)| = |r| < 1 ⇒ −1 < r < 1.

Furthermore, for Q2* = 1 − 1/r to be stable, we must have

|f'(Q2*)| = |2 − r| < 1 ⇒ 1 < r < 3.
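These stability regions can be checked by iterating the map directly. A brief, illustrative Python sketch (the parameter values below are example choices):

```python
# Iterate the logistic map Q_{t+1} = r * Q_t * (1 - Q_t).
def iterate_logistic(r, q0, n):
    q = q0
    for _ in range(n):
        q = r * q * (1 - q)
    return q

# For r = 2.5 (inside 1 < r < 3), trajectories settle at Q* = 1 - 1/r = 0.6.
q_final = iterate_logistic(2.5, 0.1, 200)
print(q_final)  # ~0.6
```

Since |f'(0.6)| = |2 − 2.5| = 0.5 < 1, the error roughly halves each step, so 200 iterations more than suffice.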


Each fixed point has its corresponding stability region. We note that if −1 < r < 1, we can tell that Q1* is stable and the fixed point Q2* is unstable. Also, when r = 1, both fixed points become neutral. Finally, when 1 < r < 3, a switch in stability occurs: the point Q1* becomes unstable while the point Q2* becomes stable. This critical value at r = 1 is known as a bifurcation. At r = 3, Q1* remains unstable and Q2* becomes neutral. A natural question arises: What happens when r > 3? We know that the logistic model typically converges to something between 0 and M (or 0 and 1 in our case), so it shouldn't diverge to infinity. So what does the dynamical system approach? One way to investigate this is to plot a few trajectories with varying values of r in Matlab. The following figure shows several trajectories of the logistic map, all from the same starting value Q0, with r = 0.5, 1.5, 2.8, 3.2, and 3.6:

[Figure: Logistic Map, Varying r. Trajectories Qt versus time step t for r = 0.5, 1.5, 2.8, 3.2, and 3.6.]

From this, we can see that beyond r = 3, two periodic points emerge (r = 3.2). In fact, the period of these new periodic points is k = 2. These two points are fixed points of the function f^2(Q). There is another threshold value for r, around 3.45, at which these two periodic points become unstable and four more periodic points emerge. This continues to happen as r increases, and we see the system descend into chaos, exhibiting many periodic points. Because the period of the attracting cycle doubles when r = 3 and doubles again when r ≈ 3.45, we call this a period-doubling bifurcation. To demonstrate this, we can plot against r the values that Qt attains at the end of a trajectory. That is, in Matlab, we can run the system for N = 1000 time units, then only consider the values in the last 100 time units of this trajectory. We do this for each value of r ranging from 0 to 4. The figure below shows the output, which is the famous bifurcation diagram associated with the logistic map.
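The sampling procedure just described can be sketched in Python as well (the notes use Matlab; this illustrative version computes the retained values without plotting):

```python
# Sample long-run values of the logistic map for a given r, discarding a
# transient: run n_total steps and keep only the last n_keep values.
def long_run_values(r, q0=0.5, n_total=1000, n_keep=100):
    q = q0
    values = []
    for t in range(n_total):
        q = r * q * (1 - q)
        if t >= n_total - n_keep:
            values.append(q)
    return values

# For r = 2.8 the kept values collapse onto the fixed point 1 - 1/2.8;
# for r = 3.2 they alternate between two values (an attracting 2-cycle).
tail_fixed = long_run_values(2.8)
tail_cycle = long_run_values(3.2)
print(len(set(round(v, 6) for v in tail_fixed)))  # 1
print(len(set(round(v, 6) for v in tail_cycle)))  # 2
```

Sweeping r from 0 to 4 and scattering the kept values against r reproduces the bifurcation diagram below.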


[Figure: Logistic Map bifurcation diagram. Qt for large t (population) versus r (growth rate), 0 ≤ r ≤ 4.]

From this figure, we can easily see that the system undergoes bifurcations at r = 3 and near r ≈ 3.45. In fact, for 1 < r < 3 the curved line represents the value Q2* = 1 − 1/r. To find the periodic points beyond r = 3, one would have to solve the equation f²(Q) = Q, which can be difficult. Overall, we can see that the basic logistic map becomes quite complicated quickly. This ends the discussion of discrete dynamical systems and linear stability analysis of fixed points for one-dimensional systems.

Discrete Linear Dynamical Systems in Two Dimensions

We are now ready to apply the analysis of one-dimensional discrete systems to a system of two equations. The stability analysis does not carry over exactly, but there are many similarities when we switch to two dimensions. To demonstrate this process, we'll start with the one-dimensional exponential growth/decay equation:

Pt+1 = rPt.

Note that this equation is considered linear because the right-hand side depends linearly on Pt. This equation has one fixed point at P* = 0, and it is stable if |r| < 1. Consider two equations of similar form (with no parameters):

Pt+1 = 2Pt
Qt+1 = (1/3)Qt.

This is a system of two discrete linear equations that is not very interesting. Here, Pt+1 depends only on the previous value, Pt, so this population will grow exponentially since the growth rate is 2 (which is bigger than 1). In the equation for Qt+1, the population will decay to zero since the growth rate is less than one. This system is simply two one-dimensional linear systems written next to each other, and the analysis from the previous section still works.
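As a quick check, this decoupled system can be iterated in a few lines. A Python sketch (the initial values P0 = Q0 = 1 are my own choice):

```python
# Iterate the decoupled linear system P_{t+1} = 2 P_t, Q_{t+1} = (1/3) Q_t.
def iterate_decoupled(P0, Q0, steps):
    P, Q = P0, Q0
    for _ in range(steps):
        P, Q = 2 * P, Q / 3  # each equation updates independently
    return P, Q

P, Q = iterate_decoupled(1.0, 1.0, 20)
print(P)  # 2^20 = 1048576.0: exponential growth
print(Q)  # (1/3)^20, roughly 2.9e-10: decay toward zero
```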


We are instead interested in two populations that interact with each other. Consider a population that may grow or decay only if another population is present. In this case, we might consider the following two equations:

Pt+1 = 2Qt
Qt+1 = (1/3)Pt.

This system is much more interesting because we can see that the population Pt will grow depending on the population Qt, while Qt decays based on Pt. That is, the populations are interacting. We are interested in what happens to both populations as time approaches infinity, i.e. we want to describe the fixed points of the system together. In this case, it might be easy to see that the decay (rate 1/3) wins out over the growth (rate 2): over two time steps each population is multiplied by 2 · (1/3) = 2/3 < 1, so both populations approach zero together. Intuitively, once Qt gets small enough, Pt will not be able to grow any more. How do we show this fact mathematically? We first find the fixed points of the system. We do this by defining the right-hand sides as functions of both P and Q (in this example, each right-hand side is a function of only one variable, but we reserve this notation for more complicated systems):

f(P,Q) = 2Q
g(P,Q) = (1/3)P.

A fixed point of the system is a pair of values P* and Q* such that

f(P*, Q*) = P*
g(P*, Q*) = Q*.

In other words, we set f(P,Q) = P and g(P,Q) = Q and solve the resulting system of equations for P and Q. We obtain

2Q = P ⇒ P* = 0
(1/3)P = Q ⇒ Q* = 0.

Here, we simply used substitution to solve this system: P = 2Q, so that (1/3)(2Q) = Q, which means (−1/3)Q = 0 and so Q* = 0. This, in turn, gives P* = 2Q* = 0. We see that the fixed point of the system is the vector

[P*; Q*] = [0; 0].

The vector notation treats this pair as a single entity, so that this system has only one fixed point, namely when both P and Q are zero. To emphasize this vector notation, we define the bold-faced letter yt as

yt = [Pt; Qt],

so that the fixed point is defined as

y* = [0; 0] = 0.


Using the vector notation, we can rewrite our original system in terms of matrices and vectors. This will give us a formulation very similar to the one-dimensional case. To do this, we rewrite our system in the following way:

Pt+1 = 0Pt + 2Qt   (1)
Qt+1 = (1/3)Pt + 0Qt.   (2)

Here, the left-hand side of this system is simply yt+1. The right-hand side is more complicated. We introduce R as the following 2 × 2 matrix:

R = [0, 2; 1/3, 0],

which, as you can see, is simply a representation of the coefficients of the rewritten system in Equations (1) and (2). Using this definition of R, we can now write the system as a vector equation:

[Pt+1; Qt+1] = [0, 2; 1/3, 0] [Pt; Qt]  ⇒  yt+1 = R yt.   (3)

If we write the system in this new way, then we must be careful in defining the matrix-vector multiplication on the right-hand side. That is, if we want Equation (3) to represent the system of Equations (1) and (2), then we have to ensure that the following multiplication between matrices and vectors holds:

[Pt+1; Qt+1] = [0, 2; 1/3, 0] [Pt; Qt] = [0Pt + 2Qt; (1/3)Pt + 0Qt] = [2Qt; (1/3)Pt].

This motivates a general definition of matrix-vector multiplication. In general, an m × n matrix is a matrix with m rows and n columns. A column vector of size m × 1 is a matrix with only one column, and a row vector of size 1 × n is a matrix with only one row. We will mostly consider column vectors. Consider a general 2 × 2 matrix and a general 2 × 1 vector given by

A = [a11, a12; a21, a22],    x = [x1; x2].

Then the product Ax is a 2 × 1 column vector given by the following operation:

Ax = [a11, a12; a21, a22] [x1; x2] = [a11 x1 + a12 x2; a21 x1 + a22 x2].

We can also define matrix-matrix multiplication between two 2 × 2 matrices A and B. This yields another 2 × 2 matrix defined as the following:

AB = [a11, a12; a21, a22] [b11, b12; b21, b22] = [a11 b11 + a12 b21, a11 b12 + a12 b22; a21 b11 + a22 b21, a21 b12 + a22 b22].
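These 2 × 2 formulas translate directly into code. A Python sketch (function names and the row-list representation are my own choices):

```python
# Matrices are lists of rows; vectors are flat lists of length 2.
def matvec2(A, x):
    """2x2 matrix times 2x1 vector, entry by entry."""
    return [A[0][0] * x[0] + A[0][1] * x[1],
            A[1][0] * x[0] + A[1][1] * x[1]]

def matmul2(A, B):
    """2x2 matrix times 2x2 matrix, entry by entry."""
    return [[A[0][0] * B[0][0] + A[0][1] * B[1][0], A[0][0] * B[0][1] + A[0][1] * B[1][1]],
            [A[1][0] * B[0][0] + A[1][1] * B[1][0], A[1][0] * B[0][1] + A[1][1] * B[1][1]]]

R = [[0, 2], [1/3, 0]]      # the matrix from the example system
print(matvec2(R, [5, 10]))  # one step of y_{t+1} = R y_t from (P0, Q0) = (5, 10)
print(matmul2(R, R))        # R*R = (2/3) I: every two steps scales by 2/3
```

Note that R·R is (2/3) times the identity matrix, which is exactly the "multiplied by 2/3 every two steps" behavior observed for the coupled system.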

Finally, we can define a general rule for multiplying any two matrices, as long as their inner dimensions are the same. Consider the following definition of matrix multiplication.


Definition (Matrix Multiplication) If A is an m × n matrix and B is an n × p matrix, then the product of A and B forms an m × p matrix C denoted as C = AB,

where the entry cij is the inner product (or linear combination) of the i-th row of A with the j-th column of B. That is, to compute the entry cij of the product, we pair the i-th row of A, (ai1, ai2, ..., ain), with the j-th column of B, (b1j, b2j, ..., bnj).

The element cij of the matrix C = AB is given by

cij = ai1 b1j + ai2 b2j + ··· + ain bnj = Σ_{k=1}^{n} aik bkj.

We are now ready to return to our linear system in two dimensions. We have the equation yt+1 = R yt, which is remarkably similar to the one-dimensional case:

Pt+1 = rPt.

In fact, they have the same fixed point, y* = 0 and P* = 0. The only difference is that r is a real number while R is a 2 × 2 matrix of real numbers. As we saw before, the one-dimensional growth/decay model is stable at zero if |r| < 1. We seek the two-dimensional analog of this stability condition for the matrix R. By analogy, we write the condition as |R| < 1, but in the world of matrices this means something completely different. Without getting into the details, the condition |R| < 1 means that the eigenvalues of the matrix R must lie within the unit circle in the complex plane. A first course in Linear Algebra is needed in order to justify this claim. However, for our research, we never actually compute the eigenvalues of the matrix. Instead of computing the eigenvalues, we will use the following conditions to determine whether the eigenvalues are within the unit circle.

Theorem. (Jury Stability Criterion, part 1) Consider the two-dimensional discrete linear system given by

yt+1 = R yt, with R = [a, b; c, d], where a, b, c, and d are real numbers.

Then the fixed point at y* = 0 is stable if the eigenvalues of R lie inside the unit circle within the complex plane, which is the case if and only if the following three conditions hold:

1 − Tr(R) + Det(R) > 0   (J.1)
1 + Tr(R) + Det(R) > 0   (J.2)
1 − Det(R) > 0,   (J.3)


where Tr(R) = a + d and Det(R) = ad − bc.

This theorem provides us with a shortcut for determining whether the eigenvalues of any 2 × 2 matrix are within the unit circle. The functions denoted by Tr(R) and Det(R) are called the trace and the determinant, respectively. The trace is given by the sum of the diagonal entries, and the determinant is given by the explicit formula above. We can apply this theorem to our linear system,
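The three Jury conditions are easy to encode. A Python sketch (the function name and the row-list matrix representation are my own choices):

```python
# Check conditions (J.1)-(J.3) for a 2x2 matrix R = [[a, b], [c, d]].
def jury_stable(R):
    """Return True if the fixed point y* = 0 of y_{t+1} = R y_t is stable."""
    a, b = R[0]
    c, d = R[1]
    tr = a + d            # Tr(R)
    det = a * d - b * c   # Det(R)
    return (1 - tr + det > 0) and (1 + tr + det > 0) and (1 - det > 0)

print(jury_stable([[0, 2], [1/3, 0]]))  # True: the example system is stable
print(jury_stable([[2, 0], [0, 0]]))    # False: a growth rate of 2 is unstable
```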

yt+1 = R yt with R = [0, 2; 1/3, 0].

Here, we know that the fixed point y* = 0 is stable if |R| < 1, which is guaranteed if conditions (J.1)-(J.3) hold. In this example, it follows that

Tr(R) = 0 + 0 = 0
Det(R) = (0)(0) − (1/3)(2) = −2/3.

Applying the Jury conditions, we obtain

1 − Tr(R) + Det(R) = 1 − 2/3 = 1/3 > 0
1 + Tr(R) + Det(R) = 1 − 2/3 = 1/3 > 0
1 − Det(R) = 1 + 2/3 = 5/3 > 0.

Therefore, the condition |R| < 1 holds and the fixed point at y* = 0 is stable. To confirm this fact, we generate a trajectory starting at P0 = 5 and Q0 = 10 in Matlab. The figure below shows that the fixed point at zero is indeed attracting.

[Figure: 2-D Discrete Linear System. Pt and Qt versus t (time step), 0 ≤ t ≤ 30; both populations decay toward zero from P0 = 5 and Q0 = 10.]
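This trajectory is easy to reproduce. A Python sketch (the notes use Matlab); note that both populations must be updated simultaneously from the previous time step:

```python
# Iterate the coupled linear system P_{t+1} = 2 Q_t, Q_{t+1} = (1/3) P_t.
def trajectory(P0, Q0, steps):
    P, Q = P0, Q0
    out = [(P, Q)]
    for _ in range(steps):
        P, Q = 2 * Q, P / 3  # tuple assignment uses the old (P, Q) on the right
        out.append((P, Q))
    return out

traj = trajectory(5.0, 10.0, 30)
print(traj[-1])  # both components are tiny: every two steps scales by 2/3
```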


Discrete Non-Linear Dynamical Systems in Two Dimensions

Now, we are ready to consider non-linear systems and the stability analysis of such systems. We'll see that it is intimately related to the stability criterion for linear systems, in the sense that we must first linearize about the fixed point. As before, we begin with a one-dimensional example of a non-linear dynamical system, the logistic map:

Qt+1 = rQt(1 − Qt).

Recall that this equation has two fixed points, Q1* = 0 and Q2* = 1 − 1/r. To determine for which values of r each fixed point is stable, we define the right-hand side of the equation as f(Q) = rQ(1 − Q) and find the values of r such that |f′(Q*)| < 1. In this case, we find

|f′(Q1*)| < 1 ⇒ r ∈ (−1, 1)
|f′(Q2*)| < 1 ⇒ r ∈ (1, 3).

We found that beyond r = 3 a period-doubling bifurcation occurs. The logistic map is non-linear because the function f is a non-linear function of Q; in fact, it is a quadratic function. When the function f is non-linear, our general strategy is to linearize about the fixed points to see if the slope there is strictly less than one in absolute value. We do this because near the fixed point, the graph of f is approximately linear. We saw that for a linear map (e.g. Pt+1 = rPt) the fixed point at zero is stable if |r| < 1. Since f′(Q*) is the slope of the function at the fixed point, we are essentially zooming in near this point to determine whether the slope is less than one (i.e. we are determining whether the "linearized form" of the non-linear equation is stable). This is a solid technique in one dimension, but how does it apply to a system of two equations? If we are to apply this technique to a system of two equations, then we have to define a two-dimensional derivative. Let us begin with a non-linear system that is similar to the logistic map in one dimension, with fixed parameter values:

Pt+1 = (1/2)Pt(1 − Qt)
Qt+1 = (2/3)Qt(1 − Pt).

Here, we might expect that if there is a fixed point at zero, then it will be stable because the rate constants are less than one. Let us show this mathematically. We first define the right-hand sides of these equations as functions of two variables:

f(P,Q) = (1/2)P(1 − Q)
g(P,Q) = (2/3)Q(1 − P).

To determine the fixed points of this system, we find the values P* and Q* that satisfy both f(P*, Q*) = P* and g(P*, Q*) = Q*. That is, we must solve the following equations for P* and Q*:

(1/2)P*(1 − Q*) = P*
(2/3)Q*(1 − P*) = Q*.
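As a sanity check on the one-dimensional stability intervals above, note that f′(Q) = r(1 − 2Q), so f′(0) = r and f′(1 − 1/r) = 2 − r. A small Python sketch (the sample r values are my own choices):

```python
# Evaluate the derivative of the logistic map f(Q) = r Q (1 - Q).
def fprime(r, Q):
    return r * (1 - 2 * Q)

for r in [0.5, 2.0, 3.5]:
    stable_at_0 = abs(fprime(r, 0.0)) < 1          # stable iff -1 < r < 1
    stable_at_Q2 = abs(fprime(r, 1 - 1 / r)) < 1   # stable iff 1 < r < 3
    print(r, stable_at_0, stable_at_Q2)
```

The printed booleans agree with the intervals (−1, 1) and (1, 3) derived above.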


We see that P1* = 0 and Q1* = 0 satisfy these equations, so the fixed point at zero still exists in this system. Now, assume both P* and Q* are non-zero. Then we solve the system as follows:

(1/2)P*(1 − Q*) = P* ⇒ (1/2)(1 − Q*) = 1 ⇒ Q* = −1,
(2/3)Q*(1 − P*) = Q* ⇒ (2/3)(1 − P*) = 1 ⇒ P* = −1/2.

In this case, the other fixed point is at P2* = −1/2 and Q2* = −1. Now that we have the two fixed points, we wish to determine whether they are stable. To do this, we have to find the two-dimensional derivative at each fixed point and determine whether it is "less than one" in the appropriate sense. As in the two-dimensional linear case, this means we have to find a matrix whose eigenvalues are within the unit circle. But which matrix do we seek, given that we cannot write a non-linear system as a matrix-vector product? This leads us to the second part of the Jury stability criterion, which is presented below.
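A quick numerical check (my own addition, not from the notes) that both computed fixed points satisfy f(P*, Q*) = P* and g(P*, Q*) = Q*:

```python
# The right-hand sides of the non-linear system, as defined in the text.
def f(P, Q):
    return 0.5 * P * (1 - Q)

def g(P, Q):
    return (2 / 3) * Q * (1 - P)

for P, Q in [(0.0, 0.0), (-0.5, -1.0)]:
    # At a fixed point, applying the map returns the same pair (up to rounding).
    print(abs(f(P, Q) - P) < 1e-12, abs(g(P, Q) - Q) < 1e-12)
```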

Theorem. (Jury Stability Criterion, part 2) Consider the two-dimensional discrete non-linear system given by

Pt+1 = f(Pt,Qt)

Qt+1 = g(Pt,Qt).

Suppose this system has a fixed point at (P*, Q*). Then the Jacobian matrix evaluated at the fixed point is given by

J = [∂f/∂P, ∂f/∂Q; ∂g/∂P, ∂g/∂Q] evaluated at (P*, Q*),

and the fixed point is stable if the eigenvalues of J lie inside the unit circle within the complex plane, which is the case if and only if the following three conditions hold:

1 − Tr(J) + Det(J) > 0   (NJ.1)
1 + Tr(J) + Det(J) > 0   (NJ.2)
1 − Det(J) > 0.   (NJ.3)

The two-dimensional derivative is called the Jacobian matrix, which is simply the 2 × 2 matrix of partial derivatives with respect to each variable, evaluated at the desired fixed point. Once we have this matrix, the Jury conditions apply as if we were considering a linear system (i.e. in the non-linear case, think of the Jacobian matrix J as playing the role of the matrix R in the linear case). To demonstrate this theorem, we will find the Jacobian matrix at both fixed points and check whether each is stable according to the Jury conditions. To determine the Jacobian matrix, we first find the four partial derivatives:

∂f/∂P = (1/2)(1 − Q)    ∂f/∂Q = −(1/2)P
∂g/∂P = −(2/3)Q    ∂g/∂Q = (2/3)(1 − P).
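These partial derivatives can be sanity-checked with central finite differences. A Python sketch at the non-zero fixed point (−1/2, −1); the step size h is my own choice, and f and g are repeated so the snippet is self-contained:

```python
# The right-hand sides of the non-linear system, as defined in the text.
def f(P, Q):
    return 0.5 * P * (1 - Q)

def g(P, Q):
    return (2 / 3) * Q * (1 - P)

h = 1e-6
P, Q = -0.5, -1.0
df_dP = (f(P + h, Q) - f(P - h, Q)) / (2 * h)  # expect (1/2)(1 - Q) = 1
df_dQ = (f(P, Q + h) - f(P, Q - h)) / (2 * h)  # expect -(1/2)P = 1/4
dg_dP = (g(P + h, Q) - g(P - h, Q)) / (2 * h)  # expect -(2/3)Q = 2/3
dg_dQ = (g(P, Q + h) - g(P, Q - h)) / (2 * h)  # expect (2/3)(1 - P) = 1
print(df_dP, df_dQ, dg_dP, dg_dQ)
```

Since f and g are linear in each variable separately, the central differences match the analytic partials up to rounding.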


Now, for each fixed point, we substitute the values of the fixed point in for P and Q. This gives the following Jacobian matrices at each fixed point:

J1 = [1/2, 0; 0, 2/3]    J2 = [1, 1/4; 2/3, 1].

Now, to determine whether each fixed point is stable, we simply apply the Jury conditions to each matrix. It turns out that J1 satisfies every condition, but for the non-zero fixed point we have

1 − Tr(J2) + Det(J2) = 1 − 2 + 5/6 = −1/6 < 0
1 + Tr(J2) + Det(J2) = 1 + 2 + 5/6 = 23/6 > 0
1 − Det(J2) = 1 − 5/6 = 1/6 > 0.

We see that the first condition is not satisfied. Therefore, the non-zero fixed point is unstable. Below, we produce a figure from Matlab that shows the fixed point at zero attracting trajectories for some initial points P0 and Q0.

[Figure: Two-Dimensional Discrete Non-Linear System. Pt and Qt (populations) versus t (time), 0 ≤ t ≤ 10; both populations decay toward zero.]
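Putting the pieces together, the following Python sketch (the notes use Matlab) applies the Jury conditions to the two Jacobians and runs a short trajectory of the non-linear system. The Jury check is repeated here so the snippet is self-contained, and the initial point (0.5, 0.7) is my own choice:

```python
# Jury conditions (NJ.1)-(NJ.3) for a 2x2 matrix M = [[a, b], [c, d]].
def jury_stable(M):
    a, b = M[0]
    c, d = M[1]
    tr, det = a + d, a * d - b * c
    return (1 - tr + det > 0) and (1 + tr + det > 0) and (1 - det > 0)

J1 = [[1/2, 0], [0, 2/3]]   # Jacobian at the fixed point (0, 0)
J2 = [[1, 1/4], [2/3, 1]]   # Jacobian at the fixed point (-1/2, -1)
print(jury_stable(J1))      # True: the origin is stable
print(jury_stable(J2))      # False: the non-zero fixed point is unstable

# A trajectory of the non-linear system from (P0, Q0) = (0.5, 0.7):
P, Q = 0.5, 0.7
for _ in range(50):
    P, Q = 0.5 * P * (1 - Q), (2 / 3) * Q * (1 - P)
print(P, Q)  # both near zero, consistent with the stable fixed point at (0, 0)
```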
