MAT1193 – 3a Discrete Time Dynamical Systems (DTDS)

One of the main goals of this class is to give you the mathematical background to understand mathematical models of biological processes. The vast majority of math models in biology are indeed models of a biological process, and the word process implies that it is something that changes in time. In other words, most models in biology are models of dynamical systems – systems that change in time. To start, we'll study discrete time dynamical systems. What are they? Discrete is defined as "consisting of distinct or unconnected elements – noncontinuous" (merriam-webster.com, 2nd definition). So discrete time systems are systems where time is measured in distinct elements or "steps in time." The easiest example is to contrast an old analogue clock, where the second hand sweeps continuously around the clock face, with a digital clock, where seconds are ticked off one by one. So discrete time dynamical systems are systems that change in time and where time is measured at distinct regular intervals, like every second, every day, etc.

As a prototypical example of such a system, consider an experiment where researchers are watching cancerous cells grow in tissue culture (inside a “dish” with nutrients). Every hour the researchers measure the number of cancer cells. In this case the system is the dish with the cancer cells and time is measured every hour. Obviously, time is an important variable in describing this system so we give it a name: t. (OK, so mathematicians tend not to be very creative in giving things names.) How do we describe how the system changes in time? In this case the relevant state of the system is the number of cancer cells. So we give the state variable a name as well: c. In most biological models, we know some facts about the detailed biological processes that happen in a system, and we want a mathematical model to tell us how those processes will play out over time. In this case suppose we know that in this particular type of cancer cell under these conditions, a given cell has a probability of 1 in 10 of dividing and making two cells in any given hour. Given that information, can we predict how many cells will be present after some longer period of time, say 2 days?

To formalize our problem, we take what we know about the biology and make an updating function. This is a function that tells us how the system changes from one time step to the next. To make this function clear, we need a bit more notation. Let ct be the number of cancer cells at time t. So c12 would be the number of cancer cells 12 hours into the experiment. Now suppose we knew c12. Can we predict how many cells there are one hour later (at t = 12+1 = 13, i.e. c13)? If we had 1000 cancer cells at time t = 12, then 100 of them would divide in one hour, giving us (1 + 0.1)*1000 cells by time t = 13. Making this more general, we can write

ct+1 = 1.1*ct

This equation says, "the number of cancer cells one hour later than time t (= ct+1) is 10% more than (110% = 1.1 times) the number of cancer cells at time t." Writing it as ct+1 = 1.1*ct makes the most sense in terms of the problem, but in order to do some math, we write our updating function "increase by 10%" using our usual function notation: h(x) = 1.1*x. So far we've described the three fundamental components of how any discrete time dynamical system works: the state variable, which gives a description of where the system is at any one time; the time step, which tells us how often we are measuring the state of the system; and the updating function, which tells how the state changes from one time step to the next.
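To make the updating function concrete, here is a minimal sketch in Python (the name h follows the text; everything else is our choice):

```python
# Updating function for the cancer-cell system: "increase by 10%".
# Input: the state (number of cells) at time t; output: the state at time t+1.
def h(x):
    return 1.1 * x

print(h(1000))  # one hour after having 1000 cells: 1100.0
```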

So how many cancer cells will there be 2 days (48 hours) after the start of the experiment? Obviously, this depends on how many cells there are at the start of the experiment. We call this number of cells the initial condition, and write it c0, since t = 0 at the start of the experiment. To be specific, suppose c0 = 100. To figure out c48 (how many cells there are at t = 48), let's start at c0 and repeatedly apply our updating function ct+1 = 1.1*ct:

c1 = 1.1*c0 = 1.1*100
c2 = 1.1*c1 = 1.1*(1.1*100)
c3 = 1.1*c2 = 1.1*(1.1*(1.1*100))
c4 = 1.1*c3 = 1.1*(1.1*(1.1*(1.1*100)))
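These repeated applications are easy to check with a short loop (a sketch; the variable names are ours):

```python
# Iterate the updating function ct+1 = 1.1*ct starting from c0 = 100.
c = 100.0
for t in range(1, 5):
    c = 1.1 * c
    print(f"c{t} = {c:.1f}")  # prints c1 = 110.0 up through c4 = 146.4
```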

Only 4 time steps and we still have a long way to go to get to c48. But if we look closely, it's easy to see the pattern: each time we multiply by 1.1, so by time t = 48 we'll have multiplied our initial number of cells by 1.1 48 times. So we can write down the answer

c48 = 1.1^48 * 100

In fact, we now have a process for taking any time as input and giving as an output the number of cancer cells in our tissue culture at that time. We call such a procedure the solution function for our dynamical system. Note that the domain (input) for this function is time and the output is the number of cancer cells (the state). So the solution function should be written as c(t). A few words on notation are in order here. First, we chose to write the number of cancer cells at time t as ct, but for the solution function we wrote it in the more usual function notation of c(t). What gives? The main reason for the different notations comes from the fact that we have two different, but related, functions associated with our dynamical system. The updating function is a function whose input is a state (representing the state of the system at a given time) and whose output is also a state (the state of the system at the next time step). The solution function tells us how the state evolves over multiple time steps. The input is time and the output is the state of the system at that time. We could write
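In code, the closed-form solution function can be checked against step-by-step iteration of the updating function (a sketch; the names are ours):

```python
def c(t):
    # Solution function from the text: c(t) = 1.1^t * 100
    return 100 * 1.1 ** t

# Step-by-step iteration should give the same answer at t = 48:
state = 100.0
for _ in range(48):
    state = 1.1 * state

print(round(c(48)), round(state))  # both round to 9702 cells
```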

c(t+1) = 1.1*c(t)

but that mixes the two functions. By using the notation ct we can talk about the state of the system at the particular time t, and downplay the fact that t is a variable. It takes a bit of practice, but work hard to understand the different ways that these two functions – the updating function and the solution function – fit into the picture of a discrete time dynamical system.

To review, a discrete time dynamical system has 5 main parts:

1. A state variable, that describes the state of the system. In this class the state of the system will be characterized by a single number. In more complicated examples, it may take several numbers to characterize the current state of the system.

2. A time step, that tells us how often the state of the system is measured.

3. An updating function, that tells how the state of the system at the next time step depends on the state of the system at the current time step. The input to this function will be a value of the state, and the output of this function will also be a value of the state.

4. An initial condition, that gives the state of the system at the start of the experiment.

5. A solution function, that describes what the state of the system will be at any given time t. The input to this function will be the variable time and the output will be a value of the state. The initial condition will be a parameter for this function.

In our example above, the state variable is the number of cancer cells, called c. The time step is one hour. The updating function is h(x) = 1.1*x (usual function notation) or ct+1 = 1.1*ct (notation more closely tied to the application). These three components are necessary to describe the system. To understand what the system does in a particular case, we need to know the initial condition; in our example we took c0 = 100 cells. Finally, to solve the DTDS we want to know the state of the system at any future time; that is, we want to know the solution function c(t).

The iterated function perspective.
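The five parts can be collected in one short script for the cancer-cell example (a sketch; the names are ours):

```python
# 1. State variable: c, the number of cancer cells (a single number).
# 2. Time step: one hour.

def h(x):                  # 3. Updating function: "increase by 10%".
    return 1.1 * x

c0 = 100                   # 4. Initial condition: 100 cells at t = 0.

def c(t):                  # 5. Solution function; c0 enters as a parameter.
    return c0 * 1.1 ** t

print(round(c(48)))        # cells after 2 days
```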

The above introduction is the one that is most natural for linking to the actual biological system. But one of the advantages of mathematical modeling is to translate a real world situation into the language of math and then look at the structure of the problem from a purely mathematical perspective. The core of the discrete time dynamical system is the updating function, which takes as input the state of the system at a given time, and spits out the state of the system at the next time. Then the output of the system is put back in as input for the next time step and the system repeats:

In the Neuhauser book this is called a recursion, and the updating function is sometimes referred to as the recursion function. If we start at a state s0, then s1 = h(s0), s2 = h(s1) = h(h(s0)), s3 = h(s2) = h(h(h(s0))), etc. Since it is awkward to keep writing the repeated application of h, we introduce some new notation: if h is a function then h2 is the function obtained by applying h twice. That is, h2(x) = h(h(x)) = h∘h(x). (Remember that g∘f means the composition of the functions f and g.) Similarly, for any number k, hk is the function obtained by repeatedly applying the function h, k times. Notice that this notation can be confused with raising a variable to a power: x3 = x*x*x. In fact, similar notation is used precisely because the operations are so similar: in one case it represents repeated multiplication, in the other it represents repeated application of a function. Although this dual use of notation is a bit confusing, if you know the type of object the notation is applied to, then you know what the notation means: q3 means q*q*q if q is a variable, and it means q3(x) = q(q(q(x))) if q is a function.
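The "function power" notation hk can be sketched in code with a small helper (our own helper, not something from the text):

```python
def h(x):
    return 1.1 * x

def h_power(k, x):
    # hk(x): apply the function h to x, k times in a row.
    for _ in range(k):
        x = h(x)
    return x

print(h_power(3, 100))  # h(h(h(100))) = 1.1*1.1*1.1*100, about 133.1
```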

Recall that h-1 denotes the inverse of the function h and represents the function that you get when you run h backward. If h is an updating function, running the function backward corresponds to running the dynamical system backward in time (just reverse the arrows in the above figure). It also explains why we use the superscript of -1 for the inverse. If h3 corresponds to applying h 3 times, then h-1 corresponds to 'unapplying' h one time: h-1∘h3(x) = h-1(h(h(h(x)))) = h(h(x)) = h2(x). In this way, the negative superscript for the inverse acts the same way as a negative power in a power function: x-1 * x3 = x-1 * x*x*x = x*x = x2.
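For our updating function h(x) = 1.1*x, the inverse is h-1(x) = x/1.1, and the cancellation h-1∘h3 = h2 can be checked numerically (a sketch; the names are ours):

```python
def h(x):         # updating function: one step forward in time
    return 1.1 * x

def h_inv(x):     # inverse: one step backward in time
    return x / 1.1

x = 100
left = h_inv(h(h(h(x))))   # h^-1 applied after h^3
right = h(h(x))            # h^2
print(left, right)         # the two agree (both about 121), up to rounding
```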