Calculations on Randomness Effects on the Business Cycle of Corporations through Markov Chains

Christian Holm [email protected]

under the direction of Mr. Gaultier Lambert, Department of Mathematics, Royal Institute of Technology

Research Academy for Young Scientists, July 10, 2013

Abstract

Markov chains are used to describe random processes and can be represented by both a matrix and a graph. They are commonly used to describe economic relationships. In this study we create a model in order to describe the competition between two hypothetical corporations. We use observations of this model to discuss how market randomness affects the corporations' financial health. This is done by testing the model for different market parameters and analysing their effect on our model. Furthermore, we analyse how our model can be applied to the real market, so as to understand the market further.

Contents

1 Introduction
2 Notations and Definitions
2.1 Introduction to Probability Theory
2.2 Introduction to Graph and Matrix Representation
2.3 Discrete Time Markov Chain on V
3 General Theory and Examples
3.1 Hidden Markov Models in Financial Systems
4 Application of Markov Chains on two Corporations in Competition
4.1 Model
4.2 Simulations
4.3 Results
4.4 Discussion
Acknowledgements
A Equation 10
B Matlab Code
C Matlab Code, Different β:s

1 Introduction

The study of Markov chains began in the early 1900s, when the Russian mathematician Andrey Markov first formalised the theory of Markov processes [1]. A Markov process is a memoryless process used to describe probability measures evolving over time. Markov was the first to formalise this property mathematically, but not the first to use it. A famous use of the memoryless process that pre-dated Markov was by Karl Marx in his book "Das Kapital", where he used the concept of memoryless processes to describe the economic development of a capitalist society [2]. Markov's development of memoryless processes opened a new field of applications in mathematics that later made possible inventions such as hidden Markov models and Google's search algorithm, a Markov process algorithm called PageRank that ranks internet pages [3, 4]. These are two of the applications that have made Markov processes a popular subject of mathematical research.

Markov processes have frequently been used within the financial sector and in economic research [1]. This paper attempts to apply the Markov property to micro-financial systems. We present a model which describes a system of two corporations in competition. Customers buy products at random from the two corporations according to a Markov chain, and a corporation receives an increased amount of assets, or increased fitness, if it is chosen by the customers. In this paper we formalise a mathematical model of this market in order to describe the effect of market randomness on the business cycles of corporations.

2 Notations and Definitions

2.1 Introduction to Probability Theory

In this section, we will review some elementary facts of probability theory. Throughout these notes, we will take V = {1,...,N}, for some N ∈ N, as our probability space. We denote the probability that the event A occurs by P[A].

Definition 1. Let A, B be two events, then we denote by P[A|B] the probability that the event A happens knowing that B happens. P[A|B] is called the probability of A given B.

Example 1. To give an example of this definition, imagine that you have a standard deck of 52 cards. You pick a card which you know is a face card (jack, queen or king); what is the probability of this card being a king? The answer is of course 4/12 = 1/3. Observe that this probability is strictly greater than the probability of just picking a king, which is P[king] = 1/13.

If P[B] > 0 then we compute the probability of A knowing that event B occurs by

P[A|B] = P[A ∩ B] / P[B]        (1)

A ∩ B is the event that both A and B happen. As such, the probability P[A|B] that A happens given that B happens is equal to the probability of A and B both happening divided by the probability that B has occurred.
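Equation (1) can be checked by brute-force enumeration on the card example above. The following Python sketch (the deck encoding and helper names are our own, chosen for illustration) computes P[king | face card] exactly with rational arithmetic:

```python
from fractions import Fraction

# Enumerate a standard 52-card deck: 13 ranks x 4 suits.
ranks = ["A"] + [str(n) for n in range(2, 11)] + ["J", "Q", "K"]
deck = [(rank, suit) for rank in ranks for suit in "SHDC"]

def prob(event):
    """P[event] under a uniformly random draw from the deck."""
    return Fraction(sum(1 for card in deck if event(card)), len(deck))

is_king = lambda card: card[0] == "K"
is_face = lambda card: card[0] in ("J", "Q", "K")

p_face = prob(is_face)                                       # P[B] = 12/52
p_king_and_face = prob(lambda c: is_king(c) and is_face(c))  # P[A ∩ B] = 4/52
p_king_given_face = p_king_and_face / p_face                 # equation (1)

print(p_king_given_face)  # 1/3
print(prob(is_king))      # 1/13
```

Running it prints 1/3 and 1/13, matching Example 1.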

Definition 2. Two events A and B are independent if P[A ∩ B] = P[A]P[B].

This means that A and B are independent if the probability that they both happen is equal to the product of the probabilities that they happen separately.

Theorem 1. Two events A and B are independent if and only if P[A|B] = P[A].

Proof. By equation (1), P[A|B] = P[A ∩ B] / P[B]. If A and B are independent then, by Definition 2, P[A|B] = P[A] · P[B] / P[B]. Simplifying, we get P[A|B] = P[A]. Conversely, if P[A|B] = P[A], then P[A ∩ B] = P[A|B] · P[B] = P[A] · P[B], so A and B are independent.

Definition 3. A probability measure on V = {1,...,N} is a vector µ ∈ R^N satisfying µ_i ≥ 0 for every i ∈ {1,...,N} and µ_1 + ··· + µ_N = 1.

In other words, if we let all possible states be elements i ∈ {1,...,N} in the space V ,

then µ_i is the probability of observing state i. The probabilities of all possible states must add up to 1; this is only true if all states are disjoint.

Example 2. If we consider the three events of picking a king, a red card and a black card from the same deck as before, then our events are not disjoint: for example, it is possible to pick a card that is both a king and a red card. As such, the sum of the probabilities of all events is not equal to 1; it is 1/13 + 1/2 + 1/2 = 14/13.

2.2 Introduction to Graph and Matrix Representation

Throughout this section we will review some elementary facts about graphs and matrices, facts that the paper is based on and will occasionally refer to. A directed graph G = (V, E) is a collection of vertices V and a collection of directed edges ij ∈ E connecting them. An edge ij is represented by an arrow directed from the vertex i to the vertex j. In this report, we will consider the graph G to be finite and denote the set of vertices V = {1,...,N}.

Example 3. In Figure 1 we see a graph with states 1, 2, 3 and 4. Between these states there are edges, one of which is the edge 1→3, represented by an arrow. This edge is a possible path between the two states 1 and 3.

Definition 4. An N × N matrix P is an array of real numbers P_ij for i, j ∈ {1,...,N}, for example

( 1  0 )
( 1  2 )        (2)

As an example of how to read this 2 × 2 matrix, the value of P_12 = 0 and P_22 = 2.

Figure 1: The directed graph of Example 3.

A matrix P corresponds to a linear map from R^N to R^N. For any vector x ∈ R^N, the i:th component of the vector Px is given by the equation

(Px)_i = Σ_{j=1}^N P_ij x_j        (3)

Definition 5. The number λ ∈ R is a real eigenvalue of the matrix P if there exists a vector x ∈ R^N \ {0} which satisfies the equation

P x = λx

We call x an eigenvector associated to λ.

A matrix does not necessarily have real eigenvalues. Moreover, eigenvectors are not uniquely defined: for instance, if x is an eigenvector associated to λ, then so is cx for any constant c ∈ R \ {0}.

Given a graph G = (V, E) we can put probability weights on its edges. For any two vertices i and j, if there is an edge j→i ∈ E, we let P_ij be the probability of going from j to i; otherwise, we let P_ij = 0. In this way, we construct a matrix P which corresponds to the graph G and its probability weights. Since the sum of the probabilities of ending in any possible state i, given any starting state j, is equal to 1, we must have Σ_{i=1}^N P_ij = 1. In the theory of Markov processes, a matrix

Figure 2: The graph of Figure 1 with probability weights attached.

P satisfying the latter condition is called a stochastic or transition matrix.

Example 4. We reuse Figure 1 and attach weights to its edges, as seen in Figure 2. This directed graph has edges that stretch between states, illustrating the possible changes of state in this system. To the edges there are weights attached; for example, the weight attached to the edge 1→3 is 1/2. The weight may be seen as the probability of travelling through an edge.

P =
( 1/2   0    0    0  )
(  0   1/6  1/2  1/6 )
( 1/2  1/3   0   5/6 )
(  0   1/2  1/2   0  )

This stochastic matrix corresponds to the graph in Example 4. The sum of each of its columns, that is, the sum of the probabilities of entering any state i from a given starting state j, is equal to 1. Like all stochastic matrices, P can also be drawn as a graph; the two ways of illustrating a process are equivalent.
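The column-sum condition is easy to verify mechanically. Below is a small Python check of the matrix above, using exact fractions (the helper name is our own):

```python
from fractions import Fraction as F

# Transition matrix of the weighted graph in Figure 2 (P[i][j] = weight of edge j -> i).
P = [
    [F(1, 2), F(0),    F(0),    F(0)],
    [F(0),    F(1, 6), F(1, 2), F(1, 6)],
    [F(1, 2), F(1, 3), F(0),    F(5, 6)],
    [F(0),    F(1, 2), F(1, 2), F(0)],
]

# A matrix is stochastic exactly when every column sums to 1.
def is_stochastic(P):
    n = len(P)
    return all(sum(P[i][j] for i in range(n)) == 1 for j in range(n))

print(is_stochastic(P))  # True
```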

2.3 Discrete Time Markov Chain on V

In this section we will apply our newly gained knowledge of probability theory, graphs and matrices to discrete time Markov chains.

Definition 6. A Markov chain (x_n)_{n=0}^∞ = (x_0, x_1, x_2, ...) with transition matrix P is a sequence of random states x_n ∈ V such that for all n ≥ 0 we have

P[x_{n+1} = i | x_n = j] = P_ij, ∀ i, j ∈ V        (4)

V is called the state space of the chain. Furthermore, equation (4) tells us that, given that the state of the chain at time n is x_n = j, the probability that at time n + 1 the new state will be x_{n+1} = i is equal to P_ij. We can interpret a Markov chain (x_n)_{n=0}^∞ as a random walk on the graph G = (V, E) corresponding to the matrix P. Here x_0 is the starting position of a walker who travels along the edges of G. The positions x_1, x_2, ... of the walker change randomly at each new time according to the following rule: if the walker is in position j at time n and there is an edge j→i, then he can move to the position i with probability P_ij.

The Markov property is the special property of such random walks: the probability of the system entering future states does not require knowing the past states; it depends only on the current state. It is clear from the definition that this property holds for a Markov chain, since for all times n ≥ 0 and all states i ∈ V we have that

P[x_{n+1} = i | x_n = j] does not depend on the previous states x_0, x_1, ..., x_{n−1}        (5)

Definition 7. D(V) is the set of probability measures on V, that is, the set of vectors µ ∈ R^N satisfying Definition 3.

Theorem 2. Let (x_n)_{n=0}^∞ be a Markov chain with transition matrix P and starting point x_0 distributed according to µ_0 ∈ D(V). Let µ_n ∈ D(V) be the probability distribution of the position x_n at time n ≥ 1. Then for all n ≥ 0 we have

µ_{n+1} = P µ_n        (6)

µ_n = P^n µ_0        (7)

Proof. By definition, µ_{n+1,i} = P[x_{n+1} = i]. Since the events {x_n = j} are disjoint and cover all possible states of the system, we have the decomposition

P[x_{n+1} = i] = Σ_{j=1}^N P[x_{n+1} = i ∩ x_n = j]
             = Σ_{j=1}^N P[x_{n+1} = i | x_n = j] · P[x_n = j],

where we used equation (1). Now, using equation (4) from the definition of a Markov chain and the matrix product rule (3), we get

P[x_{n+1} = i] = Σ_{j=1}^N P_ij µ_{n,j} = (P µ_n)_i,

which proves equation (6). Iterating equation (6) gives µ_n = P µ_{n−1} = ··· = P^n µ_0, thus proving the second equation of the theorem.
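Theorem 2 can be illustrated numerically with the stochastic matrix from Example 4. The Python sketch below (names of our own choosing) applies equation (6) three times to a starting distribution concentrated on state 1:

```python
from fractions import Fraction as F

# Column-stochastic matrix from Example 4 and a starting distribution mu0.
P = [[F(1, 2), F(0),    F(0),    F(0)],
     [F(0),    F(1, 6), F(1, 2), F(1, 6)],
     [F(1, 2), F(1, 3), F(0),    F(5, 6)],
     [F(0),    F(1, 2), F(1, 2), F(0)]]
mu = [F(1), F(0), F(0), F(0)]  # start in state 1 with certainty

def step(P, mu):
    """One application of equation (6): mu_{n+1} = P mu_n, via equation (3)."""
    n = len(mu)
    return [sum(P[i][j] * mu[j] for j in range(n)) for i in range(n)]

for _ in range(3):
    mu = step(P, mu)  # after the loop, mu = P^3 mu0, i.e. equation (7) with n = 3

print(sum(mu))  # 1 -- each mu_n is again a probability measure
```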

3 General Theory and Examples

3.1 Hidden Markov Models in Financial Systems

In order to use Markov chains to describe the market, we need to assume that it is possible to describe the market with Markov chains. Therefore we need the following postulate.

Postulate 1. All financial systems may be described as hidden Markov models [5].

Hidden Markov models may describe many of the connections that exist in the market, as well as the equilibrium of the market. A market equilibrium is when the market settles, so as to give constant values to scarce resources. Equilibrium in a market may change rapidly; the market always strives toward an equilibrium [6, 8].

Figure 3: This directed graph shows the hypothetical correlations between different market states.

A famous Markov chain describing the market is the chain of bull, bear and stagnant markets, see Figure 3. A bull market is a market where many investments are made, a bear market one where few investments are made, and a stagnant market is a market in an economic slump or recession. The Markov chain models all possible changes of state in a market, given any starting state: the edges go from every possible state to every possible state, as all market states may evolve into each other. The chain represents a probability function on the states that is described by a hidden transition matrix P. The chain is stochastic provided that the weights on the edges going out of any state j sum up to 1, that is, Σ_{i=1}^N P_ij = 1, the definition of a stochastic matrix. As this is a hidden process, the weights may not be observed directly; we assume that the sums equal one, and then, according to the definition of hidden Markov models [7] and following from Postulate 1, this is a hidden Markov model. In markets, hidden Markov models tend to strive toward an equilibrium, as stated

above. At equilibrium the market has low volatility: the values of scarce resources change slowly, something that can be seen for example in stock prices. When these systems are put under stress they tend to strive for a new equilibrium and suffer from high volatility [5]. An example of such stress was the collapse of Lehman Brothers in 2008. After such stress the market strives for a new equilibrium in which it will settle in the long run. However, this period of high volatility makes the market subject to waste, as investments are cancelled, etcetera [9].
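As an illustration of such a chain, the Python sketch below simulates a random walk on the bull/bear/stagnant graph of Figure 3. The transition weights are purely hypothetical, since Figure 3 attaches no numbers to its edges:

```python
import random

# Hypothetical transition probabilities for the bull/bear/stagnant chain of
# Figure 3 -- the weights below are illustrative, not taken from any data.
# Stored row-wise for convenience: P[current][next], so each ROW sums to 1
# (the transpose of the column convention used in Section 2.2).
states = ["bull", "bear", "stagnant"]
P = {
    "bull":     {"bull": 0.9,  "bear": 0.075, "stagnant": 0.025},
    "bear":     {"bull": 0.15, "bear": 0.8,   "stagnant": 0.05},
    "stagnant": {"bull": 0.25, "bear": 0.25,  "stagnant": 0.5},
}

def walk(start, n, rng=random.random):
    """Random walk on the chain: sample the next state from the current row."""
    state, path = start, [start]
    for _ in range(n):
        r, acc = rng(), 0.0
        for nxt, weight in P[state].items():
            acc += weight
            if r < acc:
                state = nxt
                break
        path.append(state)
    return path

random.seed(1)
print(walk("bull", 5))
```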

4 Application of Markov Chains on two Corporations in Competition

4.1 Model

During the project we have analysed a new model for the business cycles of two corporations in competition, based on Taylor Sauder's work on applying Markov chains to finance [7]. At time 0 of the model the corporations have the same impact through commercials, the same financial health, etcetera: in all aspects they are alike. The model shows how randomness in the market may either make these corporations equally profitable or make one of them go bankrupt. In this model, the two corporations 1 and 2 are fighting for the same unit of customers. At every new time, each corporation invests 1 unit of cash in its own business. Customers will at every new time buy 1 unit of products at the price A from one of the two corporations. This is a hypothetical model describing micro-financial competition between two companies.

We will denote by p(z), z ∈ R, the probability that customers buy products from corporation 1. Then q(z) = 1 − p(z) is the probability that they buy from corporation 2.

This is simulated by giving the two corporations fitness values 0 ≤ S1, S2 ≤ M. Here M represents a bear market, in which one of the corporations is maximally profitable and no new investments are made, while 0 represents a corporation going bankrupt. These values may be seen as the amount of cash in the businesses. The choices of the customers are made randomly through a discrete time Markov chain. The probability that they choose to buy from corporation 1 at time n + 1 depends on the difference S1 − S2 at time n through the following rules

p(β) = e^(β(S1−S2)) / (1 + e^(β(S1−S2)))        (8)

q(β) = e^(β(S2−S1)) / (1 + e^(β(S2−S1)))        (9)

We calculated these equations for our Markov model. They were chosen to represent our model as they satisfy the following properties. Define p(z) as the weight corresponding to choosing store 1 given the value of z = S1 − S2, and q(z) as the weight of choosing store 2. First, we must have the property p(z) + q(z) = 1, as 1 minus the probability of using one corporation must be equal to the probability of using the other corporation; this property is proven in Appendix A. Moreover, in order for the model to give an equal chance for both corporations to win, they must be equally attractive for customers at n = 0, that is, p(0) = 1/2. Finally, the property 0 ≤ p(z) ≤ 1 must hold, as the probability of using a corporation cannot be less than 0% or higher than 100%.

This model creates a system where customers preferably choose the healthier business in the market, which in principle offers the best products. Furthermore, the financial health of a corporation may improve only if customers buy its products; otherwise it will deteriorate because of non-profitable investments. The constant β corresponds to the randomness in the customers' choice. For instance, β = 0 corresponds to irrational choices: whatever the state of the market, the customers choose a corporation at random with equal probability, no matter the merits of that business. When β increases, they tend to choose the better corporation.

In Figure 4 there are 3 graphs of how the probability of choosing corporation 1 depends on the difference S1 − S2, for different β-values, according to our model (calculations made in Matlab; for the full code see Appendix C).

Figure 4: Graphs of the probabilities for the different β-values 0.1, 0.01 and 0.001.

The steepest graph corresponds to the β-value 0.1, the middle graph to β = 0.01 and the flattest graph to β = 0.001. As the graphs show, the greater the β, the more rational the choice of corporation: in a system with large β, a small difference between the corporations makes the customers much prefer one corporation, greatly increasing the chance of them using it.
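The three required properties of p(z) can be checked directly; the following is a quick numerical sketch in Python (not part of the original Matlab code):

```python
import math

def p(z, beta):
    """Probability of choosing corporation 1 given z = S1 - S2, equation (8)."""
    return math.exp(beta * z) / (1 + math.exp(beta * z))

# The three properties required of the model:
print(p(0, 0.01))                            # 0.5 -- equal corporations, equal chance
print(abs(p(100, 0.1) + p(-100, 0.1) - 1) < 1e-9)  # True -- p(z) + q(z) = 1
print(p(100, 0.1) > p(100, 0.001))           # True -- larger beta, more rational choice
```

Since q(z) = p(−z), evaluating p at ±z checks the complement property numerically.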

The states of our model's chain are given by pairs (S1(n), S2(n)), where n = 0,...,N is the time and N is the final time of the simulations. It is clear that this chain has the Markov property, since given S1(n) and S2(n) we can compute the probability that customers choose to buy from a given corporation at time n + 1. As an example, for a return on investment A = 2 the graph of our model is easy to draw: for any beginning state, with M ∈ N and A = 2, our model gives Figure 5.

Figure 5: Graph of our model when A = 2. The states (0, M), (1, M − 1), ..., (M/2, M/2), ..., (M − 1, 1), (M, 0) form a chain; the forward edges carry the weights p(−M), p(1 − M), ..., p(M) and the backward edges the weights q(−M), q(1 − M), ..., q(M).

P =
( q(−M)   q(1−M)    0       ...      0     )
( p(−M)     0     q(2−M)    ...      .     )
(   0    p(1−M)     0       ...      .     )
(   .      ...     ...       0     q(M−1)  )
(   0      ...    p(M−2)     0      q(M)   )
(   0      ...     ...     p(M−1)   p(M)   )

This is the matrix corresponding to Figure 5. The graph and the matrix show us the possible states in our model, given that A = 2; the unit of customers walks randomly in the chain. As can be seen, the probabilities align along the diagonals of this matrix of dimensions (M + 1) × (M + 1). As the total amount of cash in the system stays the same, a random walker in this system can only move forwards and backwards along the chain. This provides us with an easy graph to calculate, where the p(z)-values align along a lower diagonal and the q(z)-values along an upper diagonal. Note that the p(z)-values depend on β, as the weights depend on β. We have observed that in the cases A = 3, 4, ... the system becomes more complex.
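The transition matrix of the A = 2 chain can also be built programmatically. The Python sketch below is our own reading of Figure 5, in particular of the boundary behaviour at S1 = 0 and S1 = M, with z = S1 − S2 = 2j − M in state j:

```python
import math

def p(z, beta):
    """Equation (8): probability that corporation 1 wins the customer."""
    return math.exp(beta * z) / (1 + math.exp(beta * z))

def transition_matrix(M, beta):
    """Column-stochastic (M+1) x (M+1) matrix of the A = 2 chain.

    State j means S1 = j and S2 = M - j; the winner of each sale moves up by 1.
    The treatment of the boundary states 0 and M is our interpretation of
    Figure 5 (the chain stays put instead of leaving the state space).
    """
    n = M + 1
    P = [[0.0] * n for _ in range(n)]
    for j in range(n):
        z = 2 * j - M                  # S1 - S2 in state j
        up, down = p(z, beta), 1 - p(z, beta)
        if j < M:
            P[j + 1][j] += up          # corporation 1 wins the customer
        else:
            P[j][j] += up              # at S1 = M the chain stays put
        if j > 0:
            P[j - 1][j] += down        # corporation 2 wins the customer
        else:
            P[j][j] += down            # at S1 = 0 the chain stays put
    return P

P = transition_matrix(10, 0.1)
print(all(abs(sum(P[i][j] for i in range(11)) - 1) < 1e-12 for j in range(11)))  # True
```

Every column sums to 1, so the constructed matrix is stochastic in the sense of Section 2.2.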

We will start the simulations from the symmetric state S1(0) = S2(0) = M/2 and we will take M = 1000 for the maximal wealth of a corporation. We refer to Appendix B for the details of the algorithm used to run this Markov chain.
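For readers who prefer Python, the following is a transcription of the simulation algorithm of Appendix B (the original is in Matlab; the variable names and the seed parameter are our own additions):

```python
import math
import random

def simulate(beta, N, M=1000, A=2, seed=None):
    """Run the two-corporation chain: start at S1 = S2 = M/2 and compete
    for one unit of customers per time step, as in Section 4.1."""
    rng = random.Random(seed)
    S1, S2 = [M // 2], [M // 2]
    for _ in range(1, N):
        s1 = S1[-1] - 1 if S1[-1] > 0 else S1[-1]  # each corporation invests 1
        s2 = S2[-1] - 1 if S2[-1] > 0 else S2[-1]
        prob1 = math.exp(beta * (s1 - s2)) / (1 + math.exp(beta * (s1 - s2)))
        if rng.random() < prob1:   # customers choose corporation 1, equation (8)
            s1 += A
        else:
            s2 += A
        S1.append(s1)
        S2.append(s2)
    return S1, S2

S1, S2 = simulate(beta=0.0, N=500, seed=0)
print(S1[-1] + S2[-1])  # 1000 -- with A = 2 the total cash is conserved
```

With A = 2 the total S1 + S2 is conserved as long as neither corporation hits 0, which is guaranteed here since the number of steps N − 1 is smaller than the starting value M/2.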

4.2 Simulations

In this section we will describe simulations of possible business cycles for corporations in competition. For the entire simulation code, see Appendix B.

Depending on whether the β-value is high or low, the system displays different behaviour: either both corporations become successful or just one of them does (for A > 1). From Figures 8 and 9 we conjecture that there should be an interval of β for which either outcome can occur, at random. We call this β-value β∗.

Figure 6: Graph of the β∗-values on a logarithmic scale.

The β∗ for each A-value of this model was found by testing different β-values. It was then noted whether the corporations shared the same fate, whether just one was successful, or whether it varied. If it varied, with a proportion between the two possible outcomes of between 14:1 and 1:14 over 30 simulations, then the β of that simulation was accepted as a β∗-value within the margin of error, as it was impossible to predict whether one or both corporations would survive.

Figure 6 is a graphical representation of log β∗. The graph is on a logarithmic scale so as to clarify the span of all β∗-values in the confidence interval of all β-values displaying the β∗ property, and to make the confidence interval corresponding to A = 2 visible.

4.3 Results

In this section we will present the results of the simulations. Figure 7 is a graph corresponding to the β-value 0. The graph shows that the two corporations constantly change places as the leading and the losing business.

13 1000

900

800

700

600

500

Asset units 400

300

200

100

0 0 0.2 0.4 0.6 0.8 1 1.2 1.4 1.6 1.8 2 Time units x 106

Figure 7: Graph of β = 0, A = 2.

Figure 8: Graph of β = 0.01, A = 3.

14 1200

1000

800

600 Asset units

400

200

0 0 100 200 300 400 500 600 700 800 900 1000 Time units

Figure 9: Graph of β = 0.1,A = 3.

Figure 8 shows how our model behaves for small β when A ≥ 2: both corporations become successful. Figure 9 shows how the model acts for large β when A ≥ 2: one company becomes successful, while the other perishes.

Figure 6 indicates that the β∗-value converges towards a stable interval when A is large. This suggests that neither β∗ nor its confidence interval depends on A, for the measured values A > 2. Furthermore, the results of Figures 8 and 9 indicate that, given A ≥ 2, for large β one corporation becomes successful and the other goes bankrupt, while for small β both corporations survive. The simulated random model produced graphs of how the fitness of the corporations changed over time. We were able to deduce tendencies in this random system depending on the values of the constants A and β chosen for the system. The case A = 1 is the scenario where there was at most space in the market for one corporation, and as such at least one corporation went bankrupt for all β. For A = 2, the amounts of cash in the corporations were exactly symmetrical to each other, S1(n) + S2(n) = 1000, see Figure 7. For A > 2 both businesses could become successful, S1, S2 = 1000. For all values of A ≥ 2, either both corporations survive or one dies, depending on the β-value.

4.4 Discussion

In this section we will discuss the results of the simulations and our model's relevance. Low β-values represent an irrational market, where customers barely base their choice of corporation on merit. High β-values represent a rational market, where customers base their choice of corporation on its merits. Both of these situations appear in a real market.

The β∗-value in the model describes what generally happens to companies suffering high volatility in the market. In the real market this may be caused by an unstable government or market. The β∗-value is very interesting as it simulates chaotic situations in the market, such as those appearing in a financial crisis. This model is an attempt to simulate a general hidden Markov model that to some extent applies to several types of competition, given Postulate 1, as the A- and β-values may be chosen so as to represent many hidden Markov models in the market. If the hidden Markov model of real businesses were to be calculated, it would be possible to predict what would happen in a volatile situation simply by looking at the relevant β∗-value. This would make corporations more confident in their investments.

The β∗-value converges to a constant value as the value of A increases. This means that the market space, which A is an indication of since it decides the net expansion of the market's assets, does not matter much for the chance of survival of the two corporations. According to our model, survival therefore depends not so much on the space of the market as on the rationality of the customers in it. This result is very unrealistic and a definite limitation of our model: it may logically be concluded that if the market space or demand expands, the chances of the corporations in it surviving must also increase, given the premises of our model.

This limitation may have one of two causes: either our model is faulty, or the postulate is false. The latter possibility was something professor Friedrich Hayek intensively studied; he came to the conclusion that the market may not be described by mathematics [9]. However, this is not a resolved debate, and further studies of modelling the economy are needed before one can prove or dismiss the claim that markets can be described by mathematics. Furthermore, our model is very simplistic: it assumes that all aspects of the two corporations except their financial health are alike. This is a very unrealistic assumption, which prevents the model from taking into account other values of a company, such as customer faithfulness, that is, whether the customer is more easily appealed to by one corporation. Moreover, it does not take into account differences in management, location, etcetera. For this model to be realistic, given that Postulate 1 is true, it has to account for more variables and for how randomness may affect them. The model needs a factor that increases the chance of both businesses surviving as the market space increases; of course, an increased market space does not guarantee survival, but it increases the chance. Further research on this model might also include finding its equilibrium values and, with their aid, predicting in what state the market is likely to be in both the short and the long run.

Acknowledgements

I would like to thank my mentor Gaultier Lambert for his invaluable guidance and support, as well as for his work on Markov chains. Furthermore, I would like to thank Mariusz Hynek for always being supportive and for successfully teaching me at the most rapid pace I have ever experienced. I would also like to thank Stockholms Mathematical Center and "Teknikföretagen" for their most honourable financial contributions, without which we would not have been able to fund this project. Finally, I want to thank Rays* and all parties involved for giving me the possibility of researching this area.

References

[1] Nationalencyklopedin: ”Markov-kedja”. Stenholm B, Henriksson S, editors. Bra Böcker, Höganäs; 1993. vol 12 p. 95.

[2] Nationalencyklopedin: ”Kapitalet”. Larsson H A, editor. Bra Böcker, Höganäs; 1993. vol 10 p. 414.

[3] Brin S, Page L. The Anatomy of a Large-Scale Hypertextual Web Search Engine. Stanford University, Stanford. 1998.

[4] Haveliwala T H, Kamvar D S. The second Eigenvalue of the Google Matrix. Stanford University, Stanford. March 2003.

[5] Liechty J. Regime Switching Models and Risk Management Tools. Marketing and Statistics . Penn State, Center for the Study of Global Financial Stability. Notes provided at lecture given 2011.

[6] Smith A. The Wealth of Nations. Book I. London. Everyman’s Library. 1991.

[7] Sauder T. A Review of Hidden Markov Models and Application to Finance. Foster College of Business Administration, Bradley University. 2011.

[8] Kara K. Investment Analysis. Maj Invest. Notes provided at lecture given 2013.

[9] White L H. Interviewed by John Papola. "Fear the Boom" - The Austrian Theory of Boom and Bust. Econstories. 24 Oct 2010.

A Equation 10

Let us verify that p + q = 1:

p + q        (10)

= e^(β(S1−S2)) / (1 + e^(β(S1−S2))) + e^(β(S2−S1)) / (1 + e^(β(S2−S1)))        (11)

= [e^(β(S1−S2))(1 + e^(β(S2−S1))) + e^(β(S2−S1))(1 + e^(β(S1−S2)))] / [(1 + e^(β(S1−S2)))(1 + e^(β(S2−S1)))]        (12)

= (2 + e^(β(S1−S2)) + e^(β(S2−S1))) / (2 + e^(β(S1−S2)) + e^(β(S2−S1)))        (13)

= 1        (14)

B Matlab Code

function [S1,S2] = Sim2(b,N)
% N is the finite time and b is the temperature
S1(1) = 500; % initial values
S2(1) = 500;
M = 10^3;    % maximum wealth
A = 2;       % income per unit of customers
for t = 2:N  % time from 2 to N
    if S1(t-1) > 0
        S1(t) = S1(t-1) - 1; % each corporation invests 1 unit per step
    else
        S1(t) = S1(t-1);
    end
    if S2(t-1) > 0
        S2(t) = S2(t-1) - 1;
    else
        S2(t) = S2(t-1);
    end
    P = log(exp(b*(S1(t)-S2(t)))./(1+exp(b*(S1(t)-S2(t))))); % log of p(S1-S2)
    x = log(rand); % log of a random value between 0 and 1
    if x < P                 % customers choose corporation 1 with probability p;
        S1(t) = S1(t) + A;   % the remainder of this loop was truncated in the
    else                     % source and is reconstructed from Section 4.1
        S2(t) = S2(t) + A;
    end
end
end

C Matlab Code, Different β:s

» x = -1000:0.1:1000;                    % possible values of S1-S2
» P = exp(0.1.*x)./(1+exp(0.1.*x));      % the function for β = 0.1
» plot(x,P)
» hold on
» P = exp(0.01.*x)./(1+exp(0.01.*x));    % the function for β = 0.01
» plot(x,P)
» hold on
» P = exp(0.001.*x)./(1+exp(0.001.*x));  % the function for β = 0.001
» plot(x,P)
