
1 THE RANDOM WALK

A. Chakraborti

TOPICS TO BE COVERED IN THIS CHAPTER:

• What is a Random Walk?
  – The random walk formalism
  – Bio-box on Carl Friedrich Gauss and L. Bachelier
  – The Gaussian distribution
  – Wiener process
  – Langevin equation and Brownian motion
• Do markets follow a random walk? (From Bachelier to Eugene Fama & beyond)
  – "Stylized" facts
  – ARCH/GARCH processes
  – Efficient Market Hypothesis (EMH)
• Power spectral density (PSD)
  – Spectral density: Energy and Power
  – Relation of PSD to auto-correlation
  – Long-time correlations: Hurst exponent and DFA exponent


1.1 What is a Random Walk?

1.1.1 Definition of Random walk

The mathematical formalization of a trajectory that consists of taking successive "random" steps (e.g. decided by the flips of an unbiased coin) is known as a random walk.

A particularly simple random walk is the one on the integers, which starts at time zero with S_0 = 0 and at each step moves by +1 or -1 with equal probability (e.g. decided by the flips of an unbiased coin). To define this walk formally, take independent random variables x_i, each of which is +1 with probability 1/2 and -1 with probability 1/2, and set

S_n = \sum_{i=1}^{n} x_i.

This sequence S_n is called the simple random walk on the integers. The walk can be illustrated (see Table 1.1) as follows. Say you flip an unbiased coin. If it lands on heads H, you move one to the right on the number line, and if it lands on tails T, you move one to the left. So after five flips, you have the possibility of landing on 1, -1, 3, -3, 5 or -5. You can land on 1 by flipping three heads and two tails in any order; there are 10 possible ways of doing so. Similarly, there are 10 ways of landing on -1 (by flipping three tails and two heads), 5 ways of landing on 3 (by flipping four heads and one tail), 5 ways of landing on -3 (by flipping four tails and one head), 1 way of landing on 5 (by flipping five heads), and 1 way of landing on -5 (by flipping five tails). These results are directly related to the properties of Pascal's triangle.

The number of different walks of n steps where each step is +1 or -1 is clearly 2^n. For the simple random walk, each of these walks is equally likely. In order for S_n to be equal to a number k, it is necessary and sufficient that the number of +1 steps in the walk exceeds that of -1 steps by k. Thus, the number of walks which satisfy S_n = k is precisely the number of ways of choosing (n + k)/2 elements from an n-element set (for this to be non-zero, it is necessary that n + k be an even number), which is an entry in Pascal's triangle denoted by ^{n}C_{(n+k)/2}. Therefore, the probability that S_n = k is equal to 2^{-n} \, ^{n}C_{(n+k)/2}.

This relation with Pascal's triangle (see Table 1.2) is easily demonstrated for small values of n. At zero turns, the only possibility is to remain at zero. However, at one turn, you can move either to the left or to the right of zero, meaning there is one chance of landing on -1 and one chance of landing on 1. At two turns, you examine the turns from before. If you had been at 1, you could move to 2 or back to zero; if you had been at -1, you could move to -2 or back to zero. So there is one chance of landing on -2, two chances of landing on zero, and one chance of landing on 2. We shall study more interesting aspects of the random walk later in this chapter.

Table 1.1 Random coin flips. If there is a head H we move right on the number line (add +1), and if there is a tail T we move left on the number line (add -1).

Table 1.2 Pascal’s triangle.
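To make the counting argument above concrete, here is a minimal simulation sketch in Python (our own illustration, not part of the original text; NumPy is assumed to be available). It generates many five-step walks and compares the observed frequencies of the end points with the exact probabilities 2^{-n} \, ^{n}C_{(n+k)/2} read off Pascal's triangle.

    import numpy as np
    from math import comb

    rng = np.random.default_rng(0)
    n_steps, n_walks = 5, 100_000

    # Each step is +1 (heads) or -1 (tails) with probability 1/2.
    steps = rng.choice([-1, 1], size=(n_walks, n_steps))
    endpoints = steps.sum(axis=1)          # S_n for each simulated walk

    for k in range(-n_steps, n_steps + 1, 2):
        # Exact probability 2^{-n} * C(n, (n+k)/2) from Pascal's triangle.
        exact = comb(n_steps, (n_steps + k) // 2) / 2**n_steps
        observed = np.mean(endpoints == k)
        print(f"S_5 = {k:+d}: exact {exact:.4f}, simulated {observed:.4f}")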

The results of random walk analysis are central in physics, chemistry, economics and a number of other fields, as a fundamental model for random (stochastic) processes in time. There are many systems for which, at smaller scales, the interactions with the environment and their influence take the form of random fluctuations, as in the case of "Brownian motion"¹.

1) The motion of the particle is called Brownian motion, in honor of the botanist Robert Brown, who observed it for the first time in his studies of pollen. In 1828 he wrote: "the pollen become dispersed in water in a great number of small particles which were perceived to have an irregular swarming motion". The theory of such motion, however, was derived by A. Einstein in 1905, when he wrote: "In this paper it will be shown that ... bodies of microscopically visible size suspended in a liquid perform movements of such magnitude that they can be easily observed in a microscope on account of the molecular motions of heat ..."


Figure 1.3 Simulated Brownian motion (5000 time steps).

If the motion of a pollen grain in a fluid like water is observed under a microscope, it looks somewhat like what is shown in Fig. 1.3. It is interesting to note that the path traced by the pollen grain as it travels in a liquid (observed by R. Brown and studied first by A. Einstein), and the price of a fluctuating stock (studied first by L. Bachelier), can both be modeled as random walks (theory of stochastic processes). It is noteworthy that the formulation of the random walk model, as well as of a theory of Brownian motion, was first done in the framework of the economic study by L. Bachelier [31, 32], even five years prior to the work of A. Einstein!

There are of course other systems that present unpredictable "chaotic" behaviour, this time due to dynamically generated internal "noise". Noisy processes in general, either truly stochastic or chaotic in nature, represent the rule rather than the exception. In this chapter, we will concentrate only on the former, the theory of random or stochastic processes.

*******************************************************************************
BIO-BOX ON JOHANN CARL FRIEDRICH GAUSS (from Wikipedia)

Johann Carl Friedrich Gauss (30 April 1777 - 23 February 1855) was a German mathematician and scientist who contributed significantly to many fields, including number theory, statistics, analysis, differential geometry, geodesy, geophysics, electrostatics, astronomy and optics. Sometimes known as the Princeps mathematicorum (the Prince of Mathematicians) and the greatest mathematician since antiquity, Gauss had a remarkable influence in many fields of mathematics and science and is ranked as one of history's most influential mathematicians.


Gauss was born on April 30, 1777 in Braunschweig, in the Electorate of Brunswick-Lüneburg, now part of Lower Saxony, Germany, as the second son of poor working-class parents. There are several stories of his early genius. According to one, his gifts became very apparent at the age of three when he corrected, mentally and without fault in his calculations, an error his father had made on paper while calculating finances.

Gauss attended the Collegium Carolinum (now Technische Universität Braunschweig) from 1792 to 1795, and subsequently moved to the University of Göttingen from 1795 to 1798. His breakthrough occurred in 1796 when he was able to show that any regular polygon with a number of sides which is a Fermat prime (and, consequently, those polygons with any number of sides which is the product of distinct Fermat primes and a power of 2) can be constructed by compass and straightedge. This was a major discovery in an important field of mathematics; construction problems had occupied mathematicians since the days of the Ancient Greeks, and the discovery ultimately led Gauss to choose mathematics instead of philology as a career.

In his 1799 doctorate in absentia, Gauss proved the fundamental theorem of algebra, which states that every non-constant single-variable polynomial over the complex numbers has at least one root. Gauss also made important contributions to number theory with his 1801 book Disquisitiones Arithmeticae, which contained a clear presentation of modular arithmetic and the first proof of the law of quadratic reciprocity.

In that same year, the Italian astronomer Giuseppe Piazzi discovered the dwarf planet Ceres, but could only watch it for a few days. Gauss predicted correctly the position at which it could be found again, and it was rediscovered by Franz Xaver von Zach on 31 December 1801 in Gotha, and one day later by Heinrich Olbers in Bremen. In 1807, Gauss was appointed Professor of Astronomy and Director of the astronomical observatory in Göttingen, a post he held for the remainder of his life.

The discovery of Ceres by Piazzi on 1 January 1801 led Gauss to his work on a theory of the motion of planetoids disturbed by large planets, eventually published in 1809. It introduced the Gaussian gravitational constant, and contained an influential treatment of the method of least squares, a procedure used in all sciences to this day to minimize the impact of measurement error. Gauss was able to prove the method in 1809 under the assumption of normally distributed errors.

The Gaussian distribution is one of many things named after Carl Friedrich Gauss, who used it to analyze astronomical data and determined the formula for its probability density function. However, Gauss was not the first to study this distribution or the formula for its density function; that had been done earlier by Abraham de Moivre (in 1733). His result was extended by Laplace in his book Analytical Theory of Probabilities (1812), and is now called the theorem of de Moivre-Laplace. Laplace used the normal distribution in the analysis of errors of experiments. The important method of least squares was introduced by Legendre in 1805. Although Gauss justified the method rigorously only in 1809, he had been using it since 1794.
In 1831 Gauss developed a fruitful collaboration with the physics professor Wilhelm Weber, leading to new knowledge in magnetism (including finding a representation for the unit of magnetism in terms of mass, length and time) and the discovery of Kirchhoff's circuit laws in electricity. He developed a method of measuring the horizontal intensity of the magnetic

field which has been in use well into the second half of the 20th century, and worked out the mathematical theory for separating the inner (core and crust) and outer (magnetospheric) sources of Earth's magnetic field. Gauss died in 1855 at Göttingen.
*******************************************************************************
BIO-BOX ON LOUIS BACHELIER

Louis Jean-Baptiste Alphonse Bachelier (March 11, 1870 - April 28, 1946) was a French mathematician. In his PhD thesis "The Theory of Speculation", published in 1900, he discussed the use of Brownian motion to evaluate stock options. It is historically the first paper to use advanced mathematics in the study of finance. Thus, Bachelier is considered a pioneer in the study of financial mathematics and stochastic processes. It is notable that Bachelier's work on random walks predated Einstein's celebrated study of Brownian motion by five years. His instructor, Henri Poincaré, is recorded to have given some positive feedback. The thesis received a grade of honorable and was accepted for publication in the prestigious Annales Scientifiques de l'Ecole Normale Superieure. After his successful thesis defence, Bachelier further developed the theory of diffusion processes, which was published in prestigious journals. In 1909 he became a "free professor" at the Sorbonne. In 1914, he published a book, "Le Jeu, la Chance, et le Hasard" (Games, Chance, and Risk). With the support of the Council of the University of Paris, Bachelier was given a permanent professorship at the Sorbonne. However, World War I intervened and Bachelier was drafted into the French army. After the war, he found a position in Besançon, replacing a regular professor on leave. When the professor returned in 1922, Bachelier replaced another professor at Dijon. He moved to Rennes in 1925, but was finally awarded a permanent professorship in 1927 at Besançon, where he worked for another 10 years.
*******************************************************************************

1.1.2 The random walk formalism and derivation of the Gaussian distribution

The original statement of the random walk problem was posed by Pearson in 1905: if a drunkard begins at a lamp post and takes N steps of equal length in random directions, how far will the drunkard be from the lamp post? We will consider an idealized example of a random walk for which the steps of the walker are restricted to a line (a one-dimensional random walk). Each step is of equal length a, and at each interval of time the walker either takes a step to the right with probability p or a step to the left with probability q = 1 - p. The direction of each step is independent of the preceding one. Let n be the number of steps to the right, and m the number of steps to the left; the total number of steps is N = n + m. What, for example, is the probability that a random walker in one dimension has taken three steps to the right out of four steps?

Instead of the above example, had we considered the probability distributions of non-interacting magnetic moments or the flips of a coin, we would arrive at identical results (and hence we will use the terms interchangeably). All these examples have two characteristics in common. First, in each trial there are only two outcomes, for example, up or down, heads or tails, and right or left. Second, the result of each trial is independent of all previous trials; for example, the drunkard has no memory of his or her previous steps. This type of process is called a Bernoulli process (after the mathematician Jacob Bernoulli, 1654-1705).

We will cast our discussion of Bernoulli processes in terms of the random walk. The main quantity of interest is the probability P_N(n), which we now calculate for arbitrary N and n. We know that a particular outcome with n right steps and m left steps occurs with probability p^n q^m. We write the probability P_N(n) as

P_N(n) = W_N(n, m)\, p^n q^m,    (1.1)

where m = N - n and W_N(n, m) is the number of distinct configurations of N steps with n right steps and m left steps. From our earlier discussion of random coin flips, we can easily deduce the first several values of W_N(n, m). We can determine the general form of W_N(n, m) by obtaining a recursion relation between W_N and W_{N-1}. A total of n right steps and m left steps out of N total steps can be reached by adding one step to N - 1 steps. The additional step is either (a) a right step, if there are (n - 1) right steps and m left steps, or (b) a left step, if there are n right steps and (m - 1) left steps. Because there are W_{N-1}(n - 1, m) ways of reaching the first case and W_{N-1}(n, m - 1) ways of reaching the second, we obtain the recursion relation

W_N(n, m) = W_{N-1}(n - 1, m) + W_{N-1}(n, m - 1).    (1.2)

If we begin with the known values W_0(0,0) = 1, W_1(1,0) = W_1(0,1) = 1, we can use the recursion relation to construct W_N(n, m) for any desired N. For example,

W_2(2,0) = W_1(1,0) + W_1(2,-1) = 1 + 0 = 1,
W_2(1,1) = W_1(0,1) + W_1(1,0) = 1 + 1 = 2,
W_2(0,2) = W_1(-1,2) + W_1(0,1) = 0 + 1 = 1.

Thus we identify that W_N(n, m) forms Pascal's triangle. It is straightforward to show by induction that the expression

W_N(n, m) = \frac{N!}{n!\, m!} = \frac{N!}{n!\,(N - n)!}    (1.3)

satisfies the relation Eq. 1.2, since by definition 0! = 1. We can combine Eqs. 1.1 and 1.3 to find the desired result

P_N(n) = \frac{N!}{n!\,(N - n)!}\, p^n q^{N-n}.    (1.4)

The form Eq. 1.4 is called the "binomial distribution". Note that for p = q = 1/2, such as in the case of an unbiased coin, P_N(n) reduces to

P_N(n) = \frac{N!}{n!\,(N - n)!}\, 2^{-N}.    (1.5)

The reason that Eq. 1.4 is called the binomial distribution is that its form represents a typical term in the expansion of (p + q)^N. By the binomial theorem we have

(p + q)^N = \sum_{n=0}^{N} \frac{N!}{n!\,(N - n)!}\, p^n q^{N-n}.    (1.6)

We use Eq. 1.4 and write

\sum_{n=0}^{N} P_N(n) = \sum_{n=0}^{N} \frac{N!}{n!\,(N - n)!}\, p^n q^{N-n} = (p + q)^N = 1^N = 1,    (1.7)

where we have used Eq. 1.6 and the fact that p + q = 1.

Frequently we need to evaluate ln N! for N ≫ 1. A simple approximation for ln N!, known as Stirling's approximation, is

ln N! ≈ N ln N - N.    (1.8)

A more accurate approximation is given by

ln N! ≈ N ln N - N + \frac{1}{2} ln(2πN).

We note some properties of the binomial distribution. Suppose first that we have exactly one Bernoulli trial. We have two possible outcomes, 1 and 0, with the first having probability p and the second having probability q = 1 - p. Then the mean is µ = p·1 + q·0 = p and the variance is σ² = (1 - p)² p + (0 - p)² q = pq. Now, for N such independent trials, we have

(i) mean µ = Np

(ii) variance σ² = Npq

We also note that for large N, the binomial distribution has a well-defined maximum at n = pN and can be approximated by a smooth, continuous function, even though only integer values of n are physically possible. We now find the form of this function of n. The first step is to realize that for N ≫ 1, P_N(n) is a rapidly varying function of n near n = pN, and for this reason we do not want to approximate P_N(n) directly. Because the logarithm of P_N(n) is a slowly varying function, we expect the power series expansion of ln P_N(n) to converge. Hence, we expand ln P_N(n) in a Taylor series about the value n = ñ at which ln P_N(n) reaches its maximum. We will write p(n) instead of P_N(n) because we will treat n as a continuous variable, and hence p(n) is a probability density. We find

ln p(n) = ln p(n = ñ) + (n - ñ) \left.\frac{d \ln p(n)}{dn}\right|_{n=ñ} + \frac{1}{2} (n - ñ)^2 \left.\frac{d^2 \ln p(n)}{dn^2}\right|_{n=ñ} + \ldots    (1.9)

Because we have assumed that the expansion Eq. 1.9 is about the maximum n = ñ, the first derivative d ln p(n)/dn|_{n=ñ} must be zero. For the same reason, the second derivative d² ln p(n)/dn²|_{n=ñ} must be negative. We assume that the higher-order terms in Eq. 1.9 can be neglected, and define

ln A ≡ ln p(n = ñ),   and   B ≡ -\left.\frac{d^2 \ln p(n)}{dn^2}\right|_{n=ñ}.

The approximation Eq. 1.9 and the definitions above allow us to write

ln p(n) ≈ ln A - \frac{1}{2} B (n - ñ)^2,    (1.10)

or

p(n) ≈ A \exp\left(-\frac{1}{2} B (n - ñ)^2\right).    (1.11)

We next use Stirling's approximation to evaluate the first two derivatives of ln p(n) and the value of ln p(n) at its maximum, in order to find the parameters A, B, and ñ. We write

ln p(n) = ln N! - ln n! - ln(N - n)! + n ln p + (N - n) ln q.    (1.12)

It is straightforward to obtain

\frac{d \ln p(n)}{dn} = -ln n + ln(N - n) + ln p - ln q.    (1.13)

The most probable value of n is found from the condition d ln p(n)/dn = 0. We find

\frac{N - ñ}{ñ} = \frac{q}{p},    (1.14)

or (N - ñ) p = ñ q. If we use the relation p + q = 1, we obtain

ñ = pN.    (1.15)

Note that ñ = n̄, that is, the value of n for which p(n) is a maximum is also the mean value of n. The second derivative can be found from Eq. 1.13. We have

\frac{d^2 \ln p(n)}{dn^2} = -\frac{1}{n} - \frac{1}{N - n}.    (1.16)

Hence, the coefficient B defined earlier is given by

B = -\left.\frac{d^2 \ln p(n)}{dn^2}\right|_{n=ñ} = \frac{1}{ñ} + \frac{1}{N - ñ} = \frac{1}{Npq}.    (1.17)

From the properties of the binomial distribution we see that

B = \frac{1}{σ^2},

where σ² is the variance. If we use the simple form of Stirling's approximation to find the normalization constant A from the relation ln A = ln p(n = ñ), we would find that ln A = 0. Instead, we have to use the more accurate form of Stirling's approximation. The result is

A = \frac{1}{(2πNpq)^{1/2}} = \frac{1}{\sqrt{2πσ^2}}.

If we substitute our results for ñ, B, and A into Eq. 1.11, we find the distribution

p(n) = \frac{1}{\sqrt{2πσ^2}} \exp\left(-(n - n̄)^2/2σ^2\right),    (1.18)

which is called the "Gaussian distribution". From our derivation we see that Eq. 1.18 is valid for large values of N and for values of n near n̄. Even for relatively small values of N, the Gaussian form is a good approximation for most values of n. The most important feature of the Gaussian distribution is that its relative width, σ_n/n̄, decreases as N^{-1/2}. Of course, the binomial distribution also shares this feature. We deal with it in the next subsection.
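As a quick numerical check of this derivation, the short sketch below (our own choice of N and p, not taken from the text) compares the exact binomial probabilities of Eq. 1.4 with the Gaussian approximation of Eq. 1.18.

    import numpy as np
    from math import comb

    N, p = 100, 0.5
    q = 1.0 - p
    mu, sigma2 = N * p, N * p * q        # mean Np and variance Npq

    n = np.arange(0, N + 1)
    binom = np.array([comb(N, k) * p**k * q**(N - k) for k in n])              # Eq. 1.4
    gauss = np.exp(-(n - mu)**2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)  # Eq. 1.18

    # The two agree closely near the peak n = pN, as the derivation predicts.
    print("max |binomial - Gaussian| =", np.abs(binom - gauss).max())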

1.1.3 The Gaussian or Normal distribution

The Gaussian distribution, also called the Normal distribution, is perhaps the most important family of continuous probability distributions, applicable in many fields including physics and economics. Carl Friedrich Gauss became associated with this set of distributions when he analyzed astronomical data using them and defined the equation of its probability density function. It is often called the "bell curve" because the graph of its probability density resembles a bell².

The importance of the normal distribution as a model of quantitative phenomena in the natural and behavioural sciences is due in part to the "central limit theorem": under certain conditions (such as the variables being independent and identically distributed with finite variance), the sum of a large number of random variables is approximately normally distributed. The practical importance of the central limit theorem is that the normal cumulative distribution function can be used as an approximation to some other well-known cumulative distribution functions. For example, a binomial distribution with parameters N and p is approximately normal for large N and p not too close to 1 or 0; the approximating normal distribution has parameters µ = Np, σ² = Np(1 - p) = Npq. It is noteworthy that a binomial distribution with parameter λ = Np, for large N and p → 0 such that λ = Np is constant, gives another well-known distribution, the "Poisson distribution", with parameters µ = σ² = λ.

There are various ways to characterize a probability distribution. The most customary is perhaps the probability density function (PDF); other equivalent ways of expressing it are the cumulative distribution function, the moments, the cumulants, the characteristic function, etc. The continuous probability density function of the normal distribution is:

P(x) = \frac{1}{\sqrt{2π}\,σ} \exp\left(-(x - µ)^2/2σ^2\right),

where σ > 0 is the standard deviation and the real parameter µ is the mean or expected value. Each member of the Gaussian PDF family (see Fig. 1.4) may be defined by two important parameters, location and scale: the mean µ and the variance (standard deviation squared) σ², respectively. The standard normal distribution is the normal distribution with mean µ = 0 and variance σ² = 1:

P(x) = \frac{1}{\sqrt{2π}} \exp\left(-x^2/2\right).

2) HISTORICAL DIGRESSION: The Normal distribution was first introduced by Abraham de Moivre in an article in 1733, which was later reprinted in the second edition of his The Doctrine of Chances (1738), in the context of studying binomial distributions. The result was extended by Laplace in his book Analytical Theory of Probabilities (1812), where he used the Normal distribution in the analysis of errors of experiments. Carl Friedrich Gauss in 1809 assumed in his analyses a Normal distribution of the errors. The name "bell curve" goes back to E. Jouffret, who first used the term "bell surface" in 1872 for a "bivariate normal" with independent components. It is also known that the name "Normal distribution" was coined independently by Charles S. Peirce, Francis Galton and Wilhelm Lexis around 1875.

Figure 1.4 Gaussian PDFs for (µ, σ²) = (0, 1), (0, 4), (0, 6), (5, 1/3) and (5, 2/3).

As a Gaussian function with the denominator of the exponent equal to 2, the standard normal density function is an eigenfunction of the Fourier transform. The probability density function has the following notable properties, among others:

• symmetry about its mean µ
• the mode and median both equal the mean µ
• the inflection points of the curve occur one standard deviation away from the mean, i.e. at µ - σ and µ + σ.
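The central limit theorem invoked above can be illustrated numerically. The sketch below is our own construction (the choice of uniform variables and sample sizes is arbitrary): it standardizes the sum of many independent, identically distributed random variables and compares its histogram with the standard normal density.

    import numpy as np

    rng = np.random.default_rng(1)
    n_terms, n_samples = 50, 200_000

    # Sum of n_terms iid uniform(-1, 1) variables; the variance of one term is 1/3.
    u = rng.uniform(-1.0, 1.0, size=(n_samples, n_terms))
    s = u.sum(axis=1) / np.sqrt(n_terms / 3.0)      # standardized to unit variance

    # Compare a histogram of the standardized sums with the N(0, 1) density.
    counts, edges = np.histogram(s, bins=60, range=(-4, 4), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    normal_pdf = np.exp(-centers**2 / 2) / np.sqrt(2 * np.pi)
    print("max deviation from the N(0,1) density:", np.abs(counts - normal_pdf).max())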

1.1.4 Wiener process

In mathematics, the Wiener process is a continuous-time stochastic process named in honor of Norbert Wiener. Norbert Wiener (1923) ultimately proved the mathematical existence of Brownian motion and made significant contributions to related mathematical theories, so Brownian motion is often called a Wiener process, although this is strictly speaking a confusion of a model with the phenomenon being modeled.

A Wiener process is the scaling limit (a term applied to the behaviour of a lattice model in the limit of the lattice spacing going to zero) of a random walk in one dimension, which means that if you take a random walk with very small steps you get an approximation to a Wiener process. To be more precise, if the step size is ε, one needs to take a walk of length L/ε² to approximate a Wiener process walk of length L. As the step size tends to 0 (and the number of steps is increased correspondingly), the random walk converges to a Wiener process in an appropriate sense. A Wiener process in multiple dimensions is the scaling limit of a random walk in the same number of dimensions.

A random walk is a discrete "fractal", but a Wiener process trajectory is a true fractal, and there is a connection between the two: take a random walk until it hits a circle of radius r times the step length. The average number of steps it performs is r². This fact is the discrete version of the fact that a Wiener process trajectory is a fractal of "Hausdorff dimension" 2. In two dimensions, the average number of points the same random walk has on the boundary of its trajectory is r^{4/3}. This corresponds to the fact that the boundary of the trajectory of a Wiener process is a fractal of dimension 4/3, a fact predicted by Mandelbrot using simulations.

The Wiener process is one of the best known Levy processes (stochastic processes with stationary independent increments) and occurs frequently in mathematics (the study of continuous-time martingales, diffusion processes), economics (the mathematical theory of finance, in particular the Black-Scholes option pricing model) and physics (the study of Brownian motion, the diffusion of minute particles suspended in a fluid, and other types of diffusion via the Fokker-Planck and Langevin equations; see next section).
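A minimal sketch of this scaling limit (the values of L, ε and the ensemble size are our own illustrative choices): a walk with steps ±ε of length L/ε² should have an end point distributed approximately like a Wiener process at time L, i.e. Gaussian with zero mean and variance L.

    import numpy as np

    rng = np.random.default_rng(2)
    L, eps = 1.0, 0.02                    # target "time" L and step size eps
    n_steps = int(L / eps**2)             # L / eps^2 steps approximate W(L)
    n_paths = 20_000

    # Accumulate the end point of each walk; each step is +eps or -eps.
    w_L = np.zeros(n_paths)
    for _ in range(n_steps):
        w_L += rng.choice([-eps, eps], size=n_paths)

    # For a Wiener process, W(L) ~ N(0, L): zero mean and variance L.
    print("sample mean    :", w_L.mean())   # close to 0
    print("sample variance:", w_L.var())    # close to L = 1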

1.1.5 Langevin Equation and Brownian motion

In this subsection, we shall study the basics of the Langevin equation in the language of colloidal suspensions (Brownian motion). Consider a sufficiently small colloidal particle of mass m suspended in a liquid at absolute temperature T. On its path through the liquid it will continuously collide with the liquid molecules and follow a random path exhibiting Brownian motion. In physics, this can serve as a prototype problem whose solutions provide considerable insight into the mechanisms responsible for the existence of statistical fluctuations in a system in thermal equilibrium and the "dissipation of energy". Moreover, such fluctuations constitute a background noise which imposes limitations on the possible accuracy of delicate physical measurements.

Again for simplicity, we consider the motion restricted to one dimension. We consider a sufficiently small particle of mass m whose position is described by the coordinate x(t) at any time t, and whose corresponding velocity is v(t) = dx/dt. It would be very complex to describe in detail the interaction of the small particle with the motion of the molecules in the surrounding liquid (the other degrees of freedom). However, all such degrees of freedom can be regarded as constituting a heat reservoir at temperature T, and their interaction can be described by some net force F(t). In addition, the particle may also interact with some

external system, such as gravity or electromagnetic fields, through a force denoted by F_ext(t). The velocity v of the particle may be appreciably different from its mean value in equilibrium. Newton's equation of motion then reads

m \frac{dv}{dt} = F(t) + F_{ext}(t).    (1.19)

Very little is known about the nature of the force F(t), except that it is some rapidly fluctuating function of the time t which varies in a highly irregular or random fashion. To make progress, one has to formulate the problem in statistical terms and therefore consider an ensemble of very many similarly prepared systems, each of them consisting of a particle and the surrounding liquid. For each of these, the force F(t) is some random function of time t. We also assume that the correlation time characterizing the force F(t) is small on a macroscopic time scale, and that there is no preferred direction in space (if the particle is imagined to be clamped so as to be stationary). Then the ensemble average F̄(t) vanishes. Since F(t) is a rapidly fluctuating function of time, v must also fluctuate in time. But superimposed upon these fluctuations, the time dependence of v may also exhibit a more slowly varying trend, and thus we assume:

v = v̄ + ṽ,    (1.20)

where ṽ denotes the rapidly fluctuating part of v, whose mean value vanishes. The slowly varying part v̄ is very significant (even though it is small) because it determines the behaviour of the particle over long periods of time. We must take into account the fact that the interaction force F(t) must itself be affected by the motion of the particle, in such a way that F also contains a slowly varying part F̄ tending to restore the particle to equilibrium. Hence, similarly to the above equation for v, we must also write

F = F̄ + F̃,    (1.21)

where F̃ denotes the rapidly fluctuating part of F, whose mean value vanishes. The slowly varying part F̄ must be some function of v̄ which is such that F̄(v̄) = 0 in equilibrium, when v̄ = 0. If v̄ is not too large, F̄(v̄) can be expanded in a power series in v̄, whose first non-vanishing term must be linear in v̄. Thus F̄ must have the general form

F̄ = -γ v̄,    (1.22)

where γ is some positive "frictional" constant and where the negative sign indicates explicitly that the force F̄ acts in such a direction that it tends to reduce v̄ to zero as time increases. Thus, for the slowly varying part we have

m \frac{d\bar{v}}{dt} = F̄ + F_{ext} = -γ v̄ + F_{ext},    (1.23)

and if one includes the rapidly fluctuating parts ṽ and F̃, then we have

m \frac{dv}{dt} = -γ v + F_{ext} + F̃(t),    (1.24)

assuming γv̄ ≈ γv (since the rapidly fluctuating part γṽ can be neglected compared to the predominantly fluctuating part F̃(t)). This is the Langevin equation, and it describes the behaviour of the colloidal particle at all later times if the initial conditions are specified. We note that since the Langevin equation contains the frictional force -γv, it implies the existence of processes whereby the energy associated with the particle is dissipated in due course of time to the other degrees of freedom (the molecules of the liquid surrounding the colloidal particle). Now, describing Brownian motion in the absence of external forces F_ext, we have

m \frac{dv}{dt} = -γ v + F̃(t).    (1.25)

Stokes's law of hydrodynamics gives:

γ = 6πηa,    (1.26)

where η is the viscosity of the liquid and a is the radius of the colloidal particle (assumed to be spherical). Let the system be in thermal equilibrium. Clearly, the mean displacement x̄ of the particle vanishes by symmetry, since there is no preferred direction in space. Our aim is to calculate the mean-square displacement ⟨x²⟩ of the particle in a time interval t. We have v = ẋ and dv/dt = dẋ/dt, so that multiplying the Langevin equation throughout by x, we get

m x \frac{dv}{dt} = m\left[\frac{d}{dt}(x\dot{x}) - \dot{x}^2\right] = -γ x\dot{x} + x F̃(t).    (1.27)

Now we take the ensemble average of the above equation, and use the fact that, irrespective of the value of x or v, ⟨xF̃⟩ = ⟨x⟩⟨F̃⟩ = 0. Using the "equipartition theorem" of classical statistical mechanics, \frac{1}{2} m⟨ẋ²⟩ = \frac{1}{2} k_B T, such that

m \left\langle \frac{d}{dt}(x\dot{x}) \right\rangle = m \frac{d}{dt}\langle x\dot{x}\rangle = k_B T - γ \langle x\dot{x}\rangle.    (1.28)

This is a simple differential equation which can be solved to obtain the quantity ⟨xẋ⟩. One gets

\langle x\dot{x}\rangle = C \exp(-αt) + \frac{k_B T}{γ},    (1.29)

where C is a constant of integration and α = γ/m is the inverse of the characteristic time constant of the system. Assuming that each particle in the ensemble starts out at time t = 0 at the position x = 0, so that x measures the displacement from the initial position, the constant C satisfies 0 = C + k_B T/γ. Hence, we have

\langle x\dot{x}\rangle = \frac{1}{2}\frac{d}{dt}\langle x^2\rangle = \frac{k_B T}{γ}\left(1 - \exp(-αt)\right),    (1.30)

or

\langle x^2\rangle = \frac{2 k_B T}{γ}\left[t - α^{-1}\left(1 - \exp(-αt)\right)\right].    (1.31)

For us the case t ≫ α^{-1}, when exp(-αt) → 0, is relevant, and gives rise to the interesting equation

\langle x^2\rangle = \frac{2 k_B T}{γ}\, t.    (1.32)

The particle then behaves like a diffusing particle executing a random walk, so that ⟨x²⟩ ∝ t. Indeed, the diffusion equation in physics for random walks gives the relation ⟨x²⟩ = 2Dt, where D is the diffusion constant, and comparing these two we get

D = \frac{k_B T}{γ},    (1.33)

which is known as the "Einstein relation". Using Stokes's law, we also have

\langle x^2\rangle = \frac{k_B T}{3πηa}\, t.    (1.34)
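The Langevin equation (1.25) can also be integrated numerically with a simple Euler-Maruyama scheme. The sketch below is our own illustration, in units where m = k_B T = 1 and with an arbitrary friction constant γ; the white noise is discretized so that ⟨F̃(t)F̃(t')⟩ = 2γk_B T δ(t - t'), and the long-time mean-square displacement is compared with 2Dt from the Einstein relation (1.33).

    import numpy as np

    rng = np.random.default_rng(3)

    # Units with m = k_B*T = 1; gamma, dt and the ensemble size are illustrative.
    m, kBT, gamma = 1.0, 1.0, 2.0
    dt, n_steps, n_particles = 0.01, 20_000, 5_000

    v = np.zeros(n_particles)
    x = np.zeros(n_particles)

    # Euler-Maruyama integration of m dv/dt = -gamma*v + F~(t),
    # with <F~(t) F~(t')> = 2*gamma*kBT*delta(t - t').
    for _ in range(n_steps):
        noise = rng.standard_normal(n_particles) * np.sqrt(2.0 * gamma * kBT * dt)
        v += (-gamma * v * dt + noise) / m
        x += v * dt

    t = n_steps * dt
    D = kBT / gamma                        # Einstein relation, Eq. 1.33
    print("simulated <x^2>:", np.mean(x**2))
    print("2 D t          :", 2.0 * D * t)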

1.2 Do markets follow a random walk?

Prices of assets in a financial market produce what is called a "financial time-series". Different kinds of financial time-series have been recorded and studied for decades, all over the world. Nowadays, all transactions on a financial market are recorded, leading to a huge amount of data available either commercially or for free on the Internet. Financial time-series analysis has been of great interest not only to the practitioners (an empirical discipline) but also to the theoreticians, for making inferences and predictions. The inherent uncertainty in the financial time-series and its theory makes it especially interesting to economists, statisticians and physicists [40]. It is a formidable task to make an exhaustive review of this topic, but we try to give a flavour of some of its aspects here.

1.2.1 What if the time-series were similar to a random walk?

The answer is simple: it would not be possible to predict future price movements using the past price movements or trends. Louis Bachelier, who was the first to investigate such questions in 1900 [31], had come to the conclusion that "The mathematical expectation of the speculator is zero", and he had described this condition as a "fair game". Let us discuss this issue in a bit more detail.

In economics, if P(t) is the price of a stock or commodity at time t, then the "log-return" is defined as r_τ(t) = ln P(t + τ) - ln P(t), where τ is the time interval. The definition of the daily log-return is illustrated in Fig. 1.5, using the price time-series for General Electric. We generate a random time-series using random numbers drawn from a Normal distribution with zero mean and unit standard deviation, in order to compare with the real empirical returns, and plot them in Fig. 1.6.

If we divide the time interval τ into N sub-intervals (of width ∆t), the total log-return r_τ(t) is by definition the sum of the log-returns in each sub-interval. If the price changes in each sub-interval are independent (as for the data shown in Fig. 1.6a) and identically distributed with a finite variance, then according to the central limit theorem the cumulative distribution function F(r_τ) would converge to a Gaussian (Normal) distribution for large τ. The Gaussian (Normal) distribution has the properties that (a) when the average is taken, the most probable change is zero, (b) the probability of large fluctuations is very low, since the curve falls rapidly at extreme values, and (c) it is a stable distribution. The distribution of returns was first modelled for "bonds" by Bachelier [31] as a Normal distribution,

P(r) = \frac{1}{\sqrt{2π}\,σ} \exp(-r^2/2σ^2),

where σ² is the variance (second moment) of the distribution.

The classical financial theories had always assumed this Normality, until Mandelbrot [33] and Fama [98] pointed out that the empirical return distributions are fundamentally different: they are "fat-tailed" and more peaked compared to the Normal distribution. Based on daily prices in different markets, Mandelbrot and Fama found that F(r_τ) was a stable Levy distribution

Figure 1.5 Price (USD), log-price and log-return plotted with time for General Electric during the period 1982-2000.

Figure 1.6 Time-series. (a) Random time-series (3000 time steps), (b) return time-series of the S&P 500 stock index (8938 time steps).

whose tail decays with an exponent α ≃ 1.7, a result that suggested that short-term price changes were not well-behaved, since most statistical properties are not defined when the variance does not exist.

For the probability density function P(x) of a stochastic process, the characteristic function G(y) is given by the Fourier transform of the probability density function:

G(y) = \int_{-∞}^{+∞} P(x) \exp(iyx)\, dx,

and by performing the inverse Fourier transform we obtain the probability density function:

P(x) = \frac{1}{2π} \int_{-∞}^{+∞} G(y) \exp(-iyx)\, dy.

Levy and Khintchine [10] determined the entire class of stable distributions, described by the most general form of a characteristic function:

\ln G(y) = iµy - γ|y|^{α} \left[1 - iβ \frac{y}{|y|} \tan\left(\frac{πα}{2}\right)\right] \quad [α ≠ 1],

and

\ln G(y) = iµy - γ|y| \left[1 + iβ \frac{y}{|y|} \frac{2}{π} \ln|y|\right] \quad [α = 1],

where 0 < α ≤ 2, γ is a positive scaling factor, µ is any real number, and β is an asymmetry parameter between -1 and 1. The analytical form of the Levy stable distribution is known only for a few values of α and β. For symmetric stable distributions β = 0, and if the distributions have zero mean (first moment), µ = 0. The characteristic function for the Gaussian distribution, a special case of the Levy stable distribution with α = 2, β = 0 and µ = 0, is thus

G(y) = \exp(-γ|y|^{2}),

where γ ≡ σ²/2 is the positive scale factor. The symmetric Levy stable distribution with zero mean, of index α and scale factor γ, is the inverse Fourier transform:

P_{Levy}(x) = \frac{1}{π} \int_{0}^{∞} \exp(-γ|y|^{α}) \cos(yx)\, dy.

If we assume that γ = 1 and look at the asymptotic approximation valid for large values of |x|:

P_{Levy}(|x|) ∼ \frac{Γ(1 + α) \sin(πα/2)}{π\, |x|^{1+α}} ∼ |x|^{-(1+α)},

we find that it has a power-law behaviour. We also find that the moments ⟨|x|^{q}⟩ diverge for q ≥ α when α < 2. It follows, in particular, that all Levy stable processes with α < 2 have infinite variance, as mentioned earlier.
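The inverse Fourier integral above is easy to evaluate numerically. The sketch below uses our own discretization choices, with γ = 1: it compares the density of a symmetric Levy stable law with α = 1.7 to the Gaussian case α = 2, and the slow power-law decay of the former tail is visible already at moderate |x|.

    import numpy as np

    def levy_pdf(x, alpha, gamma=1.0, y_max=50.0, n_y=200_000):
        """Symmetric Levy stable density via the inverse Fourier (cosine) integral."""
        y = np.linspace(0.0, y_max, n_y)
        dy = y[1] - y[0]
        integrand = np.exp(-gamma * y**alpha) * np.cos(np.outer(x, y))
        return integrand.sum(axis=1) * dy / np.pi

    x = np.array([0.0, 2.0, 5.0, 10.0])
    for alpha in (2.0, 1.7):
        print(f"alpha = {alpha}:", levy_pdf(x, alpha))
    # For alpha = 1.7 the density decays as |x|^-(1+alpha) = |x|^-2.7 at large |x|,
    # far more slowly than the Gaussian tail obtained for alpha = 2.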

It is now known, using more extensive data, that the decay of the distribution is fast enough to provide a finite second moment. With time, several other interesting features of the financial data were unearthed. A point worth mentioning is that physicists have been analysing financial data with the motive of finding common or "universal" regularities in the complex time-series, a motive which is very different from that of the economists, who are traditionally the experts in statistical analysis of financial data. The results of the empirical studies on asset price series by physicists show that the apparently random variations of asset prices share some statistical properties which are interesting, non-trivial and common to various assets, markets and time periods. These are called "stylized empirical facts". This brings us to our next question.

1.2.2 What are the “Stylized” facts?

Stylized facts have usually been formulated using general qualitative properties of asset returns, and hence the distinctive characteristics of individual assets are not taken into account. Below we quote just a few from the paper by Cont [84], which reviews several empirical studies of returns and other relevant issues.

(i) Fat tails: large returns asymptotically follow a power law F(r_τ) ∼ |r|^{-α}, with α > 2 (α = 3.01 ± 0.03 for the positive tail and α = 2.84 ± 0.12 for the negative tail [85]). With α > 2, the second moment (the variance) is well-defined, excluding stable laws with infinite variance. There have been various suggestions for the form of the distribution: Student's-t, hyperbolic, normal inverse Gaussian, exponentially truncated stable, etc., but no general consensus exists on the exact form of the distribution describing the tails (see Fig. 1.7).

(ii) Aggregational Normality: as one increases the time scale over which the returns are calculated, their distribution comes closer to Normality. The shape is different at different time scales. The fact that the shape of the distribution changes with τ makes it clear that the random process underlying prices must have a non-trivial temporal structure³.

(iii) Absence of linear auto-correlations: the auto-correlation of log-returns, ρ(T) ∼ ⟨r_τ(t + T) r_τ(t)⟩, is illustrated in Fig. 1.8. It normally decays rapidly to zero within a few minutes: for τ ≥ 15 minutes, it is practically zero [86]. This supports, in a way, the "efficient market hypothesis". When τ is increased, weekly and monthly returns exhibit some auto-correlation, but the statistical evidence varies from sample to sample.

3) Any non-Gaussian i.i.d. process with finite variance has this property! What is special, however, is the slow convergence.


Figure 1.7 S&P 500 daily log-return distribution and Normal kernel density estimate. For calculating log-returns, we have used the daily closure prices from January 3, 1950 to October 29, 2009, for 15054 days. The mean is -2.76E-4 and the variance is 9.34E-5.


Figure 1.8 Autocorrelation functions of log-returns (left) and absolute log-returns (right). For calculating log-returns we have used the daily closure prices from January 3, 1950 to October 29, 2009, for 15054 days.

(iv) Volatility clustering: price fluctuations are not identically distributed, and the properties of the distribution, such as the absolute return or the variance, change with time; this is called time-dependent or "clustered" volatility (see Fig. 1.9). The volatility measure of absolute returns shows a positive auto-correlation over a long period of time (see Fig. 1.8), decaying roughly as a power law with exponent between 0.1 and 0.3 [86, 89, 90]. Therefore high-volatility events tend to cluster in time: large changes tend to be followed by large changes, and similarly for small changes. Some of these features have been studied very well by the class of economic models called ARCH and GARCH models; a minimal numerical illustration of points (iii) and (iv) is sketched after Fig. 1.9 below.


Figure 1.9 Returns and volatility. For calculating log-returns we have used the daily closure prices from January 3, 1950 to October 29, 2009, for 15054 days. For the volatility calculations we have used a moving time window of 20 days.
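Figures 1.8 and 1.9 were computed from the daily S&P 500 closing prices, which are not reproduced here. The sketch below is our own code, and the geometric random walk it uses is only a stand-in for a real price file: it shows how the two sample autocorrelation functions of points (iii) and (iv) can be computed. For the iid stand-in both decay immediately, whereas for real data the autocorrelation of absolute returns stays positive over long lags.

    import numpy as np

    def sample_autocorr(x, max_lag):
        """Sample autocorrelation of a 1-d series for lags 1..max_lag."""
        x = np.asarray(x, dtype=float) - np.mean(x)
        var = np.dot(x, x) / len(x)
        return np.array([np.dot(x[:-k], x[k:]) / (len(x) * var)
                         for k in range(1, max_lag + 1)])

    # Placeholder price series: a geometric random walk stands in for real closing prices.
    rng = np.random.default_rng(4)
    prices = 100.0 * np.exp(np.cumsum(0.01 * rng.standard_normal(15_000)))

    log_returns = np.diff(np.log(prices))
    acf_returns = sample_autocorr(log_returns, max_lag=200)
    acf_absolute = sample_autocorr(np.abs(log_returns), max_lag=200)

    print("lag-1 autocorrelation of returns         :", acf_returns[0])
    print("lag-1 autocorrelation of absolute returns:", acf_absolute[0])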

1.2.3 Short note on multiplicative stochastic processes ARCH/GARCH

Considerable interest has been shown in the application of ARCH/GARCH models to financial time-series, which exhibit periods of unusually large volatility followed by periods of relative tranquility. The assumption of constant variance, or "homoskedasticity", is inappropriate in such circumstances. A stochastic process with auto-regressive conditional "heteroskedasticity" (ARCH) is actually a stochastic process with "non-constant variances conditional on the past, but constant unconditional variances" [58]. An ARCH(p) process is defined by the equation

σ_t^2 = α_0 + α_1 x_{t-1}^2 + \ldots + α_p x_{t-p}^2,    (1.35)

where α_0, α_1, ..., α_p are positive parameters and x_t is a random variable with zero mean and variance σ_t², characterized by a conditional probability distribution function f_t(x), which may be chosen to be Gaussian. The nature of the memory of the variance σ_t² is controlled by the parameter p.

The generalized ARCH processes, called GARCH(p, q) processes, introduced by Bollerslev [59], are defined by the equation

σ_t^2 = α_0 + α_1 x_{t-1}^2 + \ldots + α_q x_{t-q}^2 + β_1 σ_{t-1}^2 + \ldots + β_p σ_{t-p}^2,    (1.36)

where β_1, ..., β_p are the additional control parameters. The simplest GARCH process is the GARCH(1,1) process, with Gaussian conditional probability distribution function f_t(x), and is given by

σ_t^2 = α_0 + α_1 x_{t-1}^2 + β_1 σ_{t-1}^2.    (1.37)

It was shown in [60] that the (unconditional) variance is given by

σ^2 = \frac{α_0}{1 - α_1 - β_1},    (1.38)

κ = 3 + \frac{6 α_1^2}{1 - 3α_1^2 - 2α_1 β_1 - β_1^2}.    (1.39)

The random variable x_t can be written in terms of σ_t by defining x_t ≡ η_t σ_t, where η_t is a random Gaussian process with zero mean and unit variance. One can also rewrite Eq. 1.37 as a random multiplicative process

σ_t^2 = α_0 + (α_1 η_{t-1}^2 + β_1)\, σ_{t-1}^2.    (1.40)
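A GARCH(1,1) series can be generated directly from Eq. 1.40. The sketch below uses our own parameter values α₀, α₁, β₁ (chosen so that α₁ + β₁ < 1); it checks the simulated unconditional variance against Eq. 1.38 and the kurtosis against Eq. 1.39.

    import numpy as np

    rng = np.random.default_rng(5)
    a0, a1, b1 = 1e-5, 0.10, 0.85          # illustrative GARCH(1,1) parameters
    n = 500_000

    x = np.empty(n)
    sigma2 = a0 / (1.0 - a1 - b1)          # start at the unconditional variance
    for t in range(n):
        eta = rng.standard_normal()
        x[t] = eta * np.sqrt(sigma2)                   # x_t = eta_t * sigma_t
        sigma2 = a0 + (a1 * eta**2 + b1) * sigma2      # Eq. 1.40

    var_theory = a0 / (1.0 - a1 - b1)                                              # Eq. 1.38
    kurt_theory = 3.0 + 6.0 * a1**2 / (1.0 - 3.0 * a1**2 - 2.0 * a1 * b1 - b1**2)  # Eq. 1.39

    print("variance: simulated", x.var(), " theory", var_theory)
    print("kurtosis: simulated", np.mean(x**4) / x.var()**2, " theory", kurt_theory)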

1.2.4 Is the market efficient?

In financial econometrics, one of the most debated issues is whether the market is "efficient" or not; an "efficient" asset market is one in which the information contained in past prices is instantly, fully and continually reflected in the asset's current price. It was Eugene Fama who proposed the efficient market hypothesis (EMH) in his Ph.D. thesis work in the 1960s. He made the argument that in an active market that includes many well-informed and intelligent investors, securities would be fairly priced and would reflect all the available information. In his own words:

"An 'efficient' market is defined as a market where there are large numbers of rational, profit-maximizers actively competing, with each trying to predict future market values of individual securities, and where important current information is almost freely available to all participants. In an efficient market, competition among the many intelligent participants leads to a situation where, at any point in time, actual prices of individual securities already reflect the effects of information based both on events that have already occurred and on events which, as of now, the market expects to take place in the future. In other words, in an efficient market at any point in time the actual price of a security will be a good estimate of its intrinsic value."

– Eugene F. Fama, "Random Walks in Stock Market Prices," Financial Analysts Journal, September/October 1965 (reprinted January-February 1995).

Nevertheless, there continues to be disagreement on the degree of market efficiency. The three widely accepted forms of the efficient market hypothesis are:

• "Weak" form: all past market prices and data are fully reflected in securities prices, and hence technical analysis is of no use.

• "Semistrong" form: all publicly available information is fully reflected in securities prices, and hence fundamental analysis is of no use.

• "Strong" form: all information is fully reflected in securities prices, and hence even insider information is of no use.

The efficient market hypothesis has provided the basis for much of the financial market research. In the early 1970's, a lot of the evidence seemed to be consistent with the efficient market hypothesis: prices followed a random walk and the predictable variations in returns, if any, turned out to be statistically insignificant. While most of the studies in the 1970's concentrated mainly on predicting prices from past prices, studies in the 1980's also looked at the possibility of forecasting based on variables such as dividend yield (e.g. Fama & French [1988]). Several later studies also looked at the reaction of the stock market to the announcement of various events such as takeovers, stock splits, etc. In general, results from event studies typically showed that prices seemed to adjust to new information within a day of the announcement of the particular event, an inference that is consistent with the efficient market hypothesis. Studies beginning in the 1990's started looking at the deficiencies of asset pricing models. The accumulating evidence started suggesting that stock prices could be predicted with a fair degree of reliability. To answer the question of whether predictability of returns represented "rational" variations in expected returns or simply arose as "irrational" speculative deviations from theoretical values, further studies have been conducted in recent years. Researchers have now discovered several other stock market "anomalies" that seem to contradict the efficient market hypothesis. Once an anomaly is discovered, in principle, investors attempting to profit by exploiting the inefficiency should cause the anomaly to disappear. In fact, numerous anomalies that have been discovered via back-testing have subsequently disappeared or proved to be impossible to exploit because of high transaction costs.

In many cases, strong performers in one period frequently turn around and underperform in subsequent periods. Numerous studies have found little or no correlation between strong performers from one period to the next. This lack of consistent out-performance among active managers can be furnished as evidence in support of the efficient market hypothesis:

"Market efficiency is a description of how prices in competitive markets respond to new information. The arrival of new information to a competitive market can be likened to the arrival of a lamb chop to a school of flesh-eating piranha, where investors are - plausibly enough - the piranha. The instant the lamb chop hits the water, there is turmoil as the fish devour the meat. Very soon the meat is gone, leaving only the worthless bone behind, and the

water returns to normal. Similarly, when new information reaches a competitive market there is much turmoil as investors buy and sell securities in response to the news, causing prices to change. Once prices adjust, all that is left of the information is the worthless bone. No amount of gnawing on the bone will yield any more meat, and no further study of old information will yield any more valuable intelligence."

– Robert C. Higgins, Analysis for Financial Management (3rd edition, 1992)

Before ending the discussion, we must mention that the nature of efficient markets is paradoxical, in the sense that if every practitioner truly believed that a market was efficient, then the market would not be efficient, since no one would then analyze the behaviour of asset prices. In effect, efficient markets depend on market participants who believe the market is inefficient and trade assets in order to make the most of the market inefficiency.

1.3 Power spectral density

In statistical signal processing and physics, the concept of a spectral density (power spectral density, PSD, or energy spectral density, ESD) is a positive real function of a frequency variable associated with a stationary stochastic process, or with a deterministic function of time, and has dimensions of power per Hz or energy per Hz. It is often called simply the "spectrum" of the signal. In a sense, the spectral density captures the frequency content of a stochastic process and helps identify periodicities.

1.3.1 The spectral density

The energy spectral density describes how the energy (or variance) of a signal or a time series is distributed with frequency. If f(t) is a finite-energy (square-integrable) signal, the spectral density Φ(ω) of the signal is the square of the magnitude of the continuous Fourier transform of the signal (where energy is taken as the integral of the square of the signal, which is the same as the physical energy if the signal is a voltage applied to a unit-ohm load):

Φ(ω) = \left| \frac{1}{\sqrt{2π}} \int_{-∞}^{∞} f(t) \exp(-iωt)\, dt \right|^{2} = \frac{F(ω)\, F^{*}(ω)}{2π},

where ω is the angular frequency (defined as 2π times the ordinary frequency), F(ω) is the continuous Fourier transform of f(t), and F^{*}(ω) is its complex conjugate.

If the signal is discrete with values f_n, over an infinite number of elements n, we can still define an energy spectral density:

Φ(ω) = \frac{1}{2π} \left| \sum_{n=-∞}^{∞} f_n \exp(-iωn) \right|^{2} = \frac{F(ω)\, F^{*}(ω)}{2π},

where F(ω) is simply the discrete-time Fourier transform of f_n. However, if the number of defined values is finite, the sequence does not actually have an energy spectral density. But the sequence can be treated as periodic, using a discrete Fourier transform to make a discrete spectrum, or it can be extended with zeros so that a spectral density can be computed as in the infinite-sequence case. Also, the continuous and discrete spectral densities are often denoted with the same symbols, even though their dimensions and units differ: the continuous case has a time-squared factor that the discrete case does not have. They can be constructed to have the same dimensions and units by measuring time in units of sample intervals or by scaling the discrete case to the desired time units. The multiplicative factor of 1/(2π) is also not absolute, but depends rather on the particular normalizing constants used in the definition of the various Fourier transforms.

Note that the above definitions of energy spectral density require that the Fourier transforms of the signals exist, i.e., that the signals are square-integrable (or square-summable). An alternative is the power spectral density (PSD), which describes the distribution of the power of a signal or time series with frequency. Here, the power considered may be the actual physical power or, for convenience with abstract signals, may be defined as the squared value of the signal (the actual power if the signal were a voltage applied to a unit-ohm load). This instantaneous power (the mean or expected value of which is the average power) is then given by P = s(t)².

Since a signal with non-zero average power is not square-integrable, the Fourier transforms do not exist in this case. Fortunately, the Wiener-Khinchin theorem provides a simple alternative: the power spectral density is the Fourier transform of the autocorrelation function R(τ) of the signal, if the signal is a stationary random process. This results in the formula:

S(f) = \int_{-∞}^{∞} R(τ) \exp(-2πi f τ)\, dτ.

The power of the signal in a given frequency band can be calculated by integrating over positive and negative frequencies,

P = \int_{F_1}^{F_2} S(f)\, df + \int_{-F_2}^{-F_1} S(f)\, df.

Note that the power spectral density of a signal exists if and only if the signal is a stationary process. If the signal is not stationary, then the autocorrelation function must be a function of two variables, so no power spectral density truly exists. However, similar techniques may be used to estimate a time-varying spectral density. The power spectrum G(f) is defined by

G(f) = \int_{-∞}^{f} S(f')\, df'.

Noteworthy properties:

(i) The spectral density of f(t) and the autocorrelation function of f(t) form a Fourier transform pair.

(ii) One of the results of Fourier analysis is Parseval's theorem, which physically means that the area under the energy spectral density curve is equal to the area under the square of the magnitude of the signal, the total energy:

\int_{-∞}^{∞} |f(t)|^{2}\, dt = \int_{-∞}^{∞} Φ(ω)\, dω.

The above theorem holds true in the discrete case as well. A similar result holds for power: the total power in a power spectral density equals the corresponding mean total signal power, which is the autocorrelation function at zero lag.
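The discrete version of the Wiener-Khinchin relation can be checked numerically. The sketch below is our own construction: it builds a stationary AR(1) test signal, estimates its PSD with a simple periodogram, and verifies that the discrete Fourier transform of the circular sample autocorrelation reproduces the periodogram.

    import numpy as np

    rng = np.random.default_rng(6)
    N = 4096

    # Stationary AR(1) test signal: x_t = 0.8 x_{t-1} + white noise.
    x = np.zeros(N)
    for t in range(1, N):
        x[t] = 0.8 * x[t - 1] + rng.standard_normal()
    x -= x.mean()

    # Periodogram estimate of the power spectral density.
    psd = np.abs(np.fft.fft(x))**2 / N

    # Circular sample autocorrelation; its DFT reproduces the periodogram
    # (the discrete analogue of the Wiener-Khinchin theorem).
    acf_circular = np.array([np.dot(x, np.roll(x, -k)) for k in range(N)]) / N
    psd_from_acf = np.fft.fft(acf_circular).real

    print("max |periodogram - DFT of autocorrelation| =",
          np.abs(psd - psd_from_acf).max())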

1.3.2 Are there any long-time correlations?

The random walk theory of prices assumes that the returns are uncorrelated. But are they truly uncorrelated, or are there long-time correlations in the financial time-series? This question has been studied especially because it may lead to deeper insights about the underlying processes that generate the time-series [41]. We discuss this in detail in the next chapter, but here we introduce two measures to quantify long-time correlations and to study the strength of trends: the R/S analysis to calculate the Hurst exponent, and the detrended fluctuation analysis [42].

1.3.2.1 Hurst Exponent from R/S Analysis
In order to measure the strength of trends or "persistence" in different processes, the rescaled range (R/S) analysis to calculate the Hurst exponent can be used. One studies the rate of change of the rescaled range with the change of the length of time over which measurements are made. We divide the time-series ξ_t of length T into N periods of length τ, such that Nτ = T. For each

period i = 1,2,..., N, containing τ observations, the cumulative deviation is

X(t, τ) = \sum_{u=(i-1)τ+1}^{t} \left(ξ_u - \langle ξ \rangle_τ\right),    (1.41)

where ⟨ξ⟩_τ is the mean within the time period and is given by

\langle ξ \rangle_τ = \frac{1}{τ} \sum_{t=(i-1)τ+1}^{iτ} ξ_t.    (1.42)

The range in the i-th time period is given by R(τ) = max X(t, τ) - min X(t, τ), and the standard deviation is given by

S(\tau) = \left[ \frac{1}{\tau} \sum_{t=(i-1)\tau+1}^{i\tau} \bigl( \xi_t - \langle \xi \rangle_\tau \bigr)^2 \right]^{1/2} . \qquad (1.43)

Then R(\tau)/S(\tau) is asymptotically given by a power law,

R(\tau)/S(\tau) = \kappa \, \tau^{H} , \qquad (1.44)

where κ is a constant and H the Hurst exponent. In general, "persistent" behavior with fractal properties is characterized by a Hurst exponent 0.5 < H ≤ 1, purely random behavior by H = 0.5, and "anti-persistent" behavior by 0 ≤ H < 0.5. Usually Eq. (1.44) is rewritten in terms of logarithms, log(R/S) = H log(τ) + log(κ), and the Hurst exponent is determined from the slope.
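A minimal implementation of this procedure is sketched below (in Python with NumPy; the function names and the choice of window sizes are illustrative, not part of the original analysis). For an uncorrelated series the estimated exponent should come out close to H = 0.5.

import numpy as np

def rescaled_range(xi, tau):
    # Average R/S over all non-overlapping windows of length tau, following Eqs. (1.41)-(1.43).
    n_windows = len(xi) // tau
    rs_values = []
    for i in range(n_windows):
        window = xi[i * tau : (i + 1) * tau]
        deviations = window - window.mean()        # xi_t - <xi>_tau
        X = np.cumsum(deviations)                  # cumulative deviation X(tau), Eq. (1.41)
        R = X.max() - X.min()                      # range in the i-th period
        S = window.std()                           # standard deviation, Eq. (1.43)
        if S > 0:
            rs_values.append(R / S)
    return np.mean(rs_values)

def hurst_exponent(xi, taus):
    # Slope of log(R/S) versus log(tau), Eq. (1.44).
    rs = [rescaled_range(xi, tau) for tau in taus]
    slope, _ = np.polyfit(np.log(taus), np.log(rs), 1)
    return slope

rng = np.random.default_rng(1)
series = rng.standard_normal(10_000)               # uncorrelated "returns"
taus = [2**k for k in range(4, 11)]                # window sizes 16, 32, ..., 1024
print(hurst_exponent(series, taus))                # expected to be close to 0.5

Replacing the synthetic white-noise series by, for example, the daily returns of an index gives an empirical estimate of H for that series.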

1.3.2.2 Detrended Fluctuation Analysis (DFA)
In the DFA method the time series ξ_t of length T is first divided into N non-overlapping periods of length τ, such that Nτ = T. In each period i = 1, 2, ..., N the time series is fitted by a linear function z_t = at + b, called the local trend, and is then detrended by subtracting this local trend, in order to compute the fluctuation function,

F(\tau) = \left[ \frac{1}{\tau} \sum_{t=(i-1)\tau+1}^{i\tau} \bigl( \xi_t - z_t \bigr)^2 \right]^{1/2} . \qquad (1.45)

The function F(τ) is re-computed for different box sizes τ (different scales) to obtain the relationship between F(τ) and τ. A power-law relation between F(τ) and the box size τ, F(τ) ∼ τ^α, indicates the presence of scaling. The scaling or "correlation" exponent α quantifies the correlation properties of the signal: α = 0.5 corresponds to an uncorrelated signal (white noise), α > 0.5 to positive (persistent) long-range correlations, and α < 0.5 to anti-correlations.
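A corresponding sketch for DFA is given below (again Python/NumPy, with illustrative names). As is common in practice, and following Peng et al. [43], the series of increments is first integrated into a "profile" before the box-wise linear detrending, so that uncorrelated increments yield α close to 0.5, consistent with the convention quoted above.

import numpy as np

def dfa_exponent(increments, taus):
    # Detrended fluctuation analysis: fit F(tau) ~ tau**alpha.
    profile = np.cumsum(increments - np.mean(increments))       # integrated series ("profile")
    fluctuations = []
    for tau in taus:
        n_windows = len(profile) // tau
        f_squares = []
        t = np.arange(tau)
        for i in range(n_windows):
            window = profile[i * tau : (i + 1) * tau]
            a, b = np.polyfit(t, window, 1)                      # local linear trend z_t = a*t + b
            f_squares.append(np.mean((window - (a * t + b)) ** 2))   # cf. Eq. (1.45)
        fluctuations.append(np.sqrt(np.mean(f_squares)))
    alpha, _ = np.polyfit(np.log(taus), np.log(fluctuations), 1)
    return alpha

rng = np.random.default_rng(2)
returns = rng.standard_normal(20_000)                            # uncorrelated increments
print(dfa_exponent(returns, [2**k for k in range(4, 11)]))       # expected to be close to 0.5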

Fig. 1.10 Power spectrum analysis (left) and detrended fluctuation analysis (right) of the auto-correlation function of absolute returns, from Ref. [89].

1.3.2.3 DFA and PSD analyses of the auto-correlation function of absolute returns
The analysis of financial correlations was carried out in 1997 by the group of H. E. Stanley [89]. The correlation functions of the New York Stock Exchange indices and the S&P 500, recorded at one-minute intervals between January 1984 and December 1996, were analyzed. The study confirmed that the auto-correlation function of the returns falls off exponentially, but that of the absolute value of the returns does not. Correlations of the absolute values of the index returns could be described by two different power laws, with a crossover time t_× ≈ 600 minutes, corresponding to about 1.5 trading days. Results from the power spectrum analysis and the DFA analysis were found to be consistent. The power spectrum analysis of Fig. 1.10 yielded exponents β_1 = 0.31 and β_2 = 0.90 for f > f_× and f < f_×, respectively. This is consistent with the relations α = (1 + β)/2 and t_× ≈ 1/f_×, where the detrended fluctuation analysis gave exponents α_1 = 0.66 and α_2 = 0.93 for t < t_× and t > t_×, respectively.
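As a quick arithmetic check of the quoted relation α = (1 + β)/2:

\alpha_1 = \frac{1 + \beta_1}{2} = \frac{1 + 0.31}{2} \approx 0.66 , \qquad \alpha_2 = \frac{1 + \beta_2}{2} = \frac{1 + 0.90}{2} = 0.95 ,

in reasonable agreement with the DFA values α_1 = 0.66 and α_2 = 0.93 quoted above.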

References

1 P. A. Samuelson, Economics, Sixteenth Edition, McGraw-Hill Inc., Auckland (1998).
2 J. M. Keynes, The General Theory of Employment, Interest and Money, The Royal Economic Society, Macmillan Press, London (1973).
3 http://www.britannica.com (2007).
4 F. Reif, Fundamentals of Statistical and Thermal Physics, McGraw-Hill, Singapore (1985).
5 R. K. Pathria, Statistical Mechanics, Second Edition, Butterworth-Heinemann, Oxford (1996).
6 L. D. Landau and E. M. Lifshitz, Statistical Physics, Third Edition (Part I), Butterworth-Heinemann, Oxford (1998).
7 E. Majorana, Scientia 36, 58 (1942).
8 L. P. Kadanoff, Simulation 16, 261 (1971).
9 E. W. Montroll and W. W. Badger, Introduction to Quantitative Aspects of Social Phenomena, Gordon and Breach, New York (1974).

10 R. N. Mantegna and H. E. Stanley, An Introduction to Econophysics, Cambridge University Press, Cambridge (2000).
11 J.-P. Bouchaud and M. Potters, Theory of Financial Risk, Cambridge University Press, Cambridge (2000).
12 S. Moss de Oliveira, P. M. C. de Oliveira and D. Stauffer, Evolution, Money, War and Computers, B. G. Teubner, Stuttgart-Leipzig (1999).
13 V. Pareto, Cours d'Economie Politique, Lausanne and Paris (1897).
14 L. Bachelier, Annales Scientifiques de l'Ecole Normale Superieure III-7, 21 (1900).
15 K. Itô and H. P. McKean Jr., Diffusion Processes and Their Sample Paths, Springer, Berlin, Heidelberg (1996).
16 P. A. Samuelson, Industrial Management Rev. 6, 13 (1965).
17 F. Black and M. Scholes, J. Pol. Econ. 81, 637 (1973).
18 R. Merton, Bell J. Econ. Management Sci. 4, 141 (1973).
19 G. J. Stigler, J. Business 37, 117 (1964).
20 B. B. Mandelbrot, Int. Econ. Rev. 1, 79 (1960).
21 B. B. Mandelbrot, J. Business 36, 394 (1963).
22 G. Parisi, Physica A 263, 557 (1999).
23 W. B. Arthur, Science 284, 107 (1999).
24 B. M. Roehner, Patterns of Speculation: A Study in Observational Econophysics, Cambridge University Press, Cambridge (2002).
25 R. N. Mantegna, Physica A 179, 232 (1991).
26 J. D. Farmer, pre-print available at adap-org/9912002 (1999).
27 D. Challet and Y.-C. Zhang, Physica A 246, 407 (1997).
28 Johnson, "Predictability of large future changes in a competitive evolving population", Phys. Rev. Lett. 88, 017902 (2001).
29 E. Samanidou, E. Zschischang, D. Stauffer and T. Lux, in F. Schweitzer (Ed.), Microscopic Models for Economic Dynamics, Lecture Notes in Physics, Springer, Berlin-Heidelberg (2002); pre-print available at cond-mat/0110354 (2001).
30 Various Authors (2005), Focus Issue: 100 years of Brownian motion, CHAOS 15, 026101.
31 L. Bachelier (1900), Theorie de la speculation, Annales Scientifiques de l'Ecole Normale Superieure, Suppl. 3, No. 1017, 21-86; English translation by A. J. Boness (1967), in P. Cootner (Ed.), The Random Character of Stock Market Prices, p. 17, MIT, Cambridge, MA.
32 J.-P. Bouchaud (2005), The subtle nature of financial random walks, CHAOS 15, 026104.
33 B. B. Mandelbrot, J. Business 36, 394 (1963).
34 E. Fama, J. Business 38, 34 (1965).
35 R. Cont, Quant. Fin. 1, 223-236 (2001).
36 P. Gopikrishnan, M. Meyer, L. A. N. Amaral, H. E. Stanley, Eur. Phys. J. B (Rapid Note) 3, 139 (1998).
37 R. Cont, M. Potters and J.-P. Bouchaud, in Dubrulle, Graner, Sornette (Eds.), Scale Invariance and Beyond (Proc. CNRS Workshop on Scale Invariance, Les Houches, 1997), Springer, Berlin (1997).
38 Y. Liu, P. Cizeau, M. Meyer, C.-K. Peng, H. E. Stanley, Physica A 245, 437 (1997).
39 P. Cizeau, Y. Liu, M. Meyer, C.-K. Peng, H. E. Stanley, Physica A 245, 441 (1997).
40 R. S. Tsay, Analysis of Financial Time Series (John Wiley, New York, 2002).
41 See e.g., R. N. Mantegna and H. E. Stanley, An Introduction to Econophysics (Cambridge University Press, New York, 2000).
42 N. Vandewalle and M. Ausloos, Physica A 246, 454 (1997).
43 C.-K. Peng et al., Phys. Rev. E 49, 1685 (1994).
44 Y. Liu et al., Phys. Rev. E 60, 1390 (1999).
45 M. Beben and A. Orlowski, Eur. Phys. J. B 20, 527 (2001).
46 A. Sarkar and P. Barat, preprint available at cond-mat/0504038 (2005); P. Norouzzadeh, preprint available at cond-mat/0502150 (2005); D. Wilcox and T. Gebbie, preprint available at cond-mat/0404416 (2004).
47 M. L. Mehta, Random Matrices (Academic Press, New York, 1991); T. Guhr, A. Muller-Groeling and H. A. Weidenmuller, Phys. Rep. 299, 189 (1999).
48 L. Laloux et al., Phys. Rev. Lett. 83, 1467 (1999); V. Plerou et al., Phys. Rev. Lett. 83, 1471 (1999); M. S. Santhanam and P. K. Patra, Phys. Rev. E 64, 016102 (2001); M. Barthelemy et al., Phys. Rev. E 66, 056110 (2002).

49 L. Bachelier, Theorie de la Speculation (Gauthier-Villars, Paris, 1900).
50 B. Mandelbrot, J. Business 36, 394 (1963).
51 E. Fama, J. Business 38, 34 (1965).
52 J. D. Farmer, Computing in Science and Engineering (IEEE), November-December 1999, 26 (1999).
53 J. Scheinkman and B. LeBaron, J. Business 62, 311 (1989).
54 K. Kaneko (Ed.), Theory and Applications of Coupled Map Lattices (Wiley, New York, 1993).
55 R. Kapral, in Theory and Applications of Coupled Map Lattices (Ref. [54]).
56 K. Kaneko, Physica D 34, 1 (1989).
57 A. Chakraborti and M. S. Santhanam, Int. J. Mod. Phys. C 16, 1733 (2005).
58 R. F. Engle, Econometrica 50, 987 (1982).
59 T. Bollerslev, J. Econometrics 31, 307 (1986).
60 R. T. Baillie and T. Bollerslev, J. Econometrics 52, 91 (1992).
61 J. Toyli, M. Sysi-Aho and K. Kaski, Quant. Fin. 4, 373 (2004).
62 J.-P. Onnela, A. Chakraborti, K. Kaski and J. Kertesz, Phys. Rev. E 68, 056110 (2003).
63 P. Gopikrishnan et al., Phys. Rev. E 64, 035106(R) (2001).
64 E. P. Wigner, Ann. Math. 62, 548 (1955); 65, 203 (1957); 67, 325 (1958).
65 G. J. Rodgers and A. J. Bray, Phys. Rev. B 37, 3557 (1988).
66 M. V. Berry and M. Tabor, Proc. R. Soc. London A 356, 375 (1977).
67 M. S. Santhanam and H. Kantz, Phys. Rev. E 69, 056102 (2004).
68 J. L. Doob, Stochastic Processes (Wiley, New York, 1990).
69 A. M. Sengupta and P. P. Mitra, Phys. Rev. E 60, 3389 (1999).
70 V. Plerou et al., Phys. Rev. E 65, 066126 (2002).
71 L. Laloux, P. Cizeau, J.-P. Bouchaud, M. Potters, Noise Dressing of Financial Correlation Matrices, cond-mat/9810255.
72 V. Plerou, P. Gopikrishnan, B. Rosenow, L. A. N. Amaral, H. E. Stanley, Universal and non-universal properties of cross-correlations in financial time series, cond-mat/9902283; Phys. Rev. Lett. 83, 1471 (1999).
73 V. Plerou, P. Gopikrishnan, B. Rosenow, L. A. N. Amaral, T. Guhr, H. E. Stanley, A Random Matrix Approach to Cross-Correlations in Financial Data, cond-mat/0108023.
74 J. Bhattacharya, Complexity analysis of spontaneous EEG, Acta Neurobiol. Exp. 60, 495 (2000).
75 N. Burioka, G. Cornèlissen, F. Halberg, H. Suyama, T. Sako, and E. Shimizu, Approximate entropy of human respiratory movement during eye-closed waking and different sleep stages, Chest 123, 80 (2003).
76 G. Oh, S. Kim, and C. Eom, Market efficiency in foreign exchange markets, in Proceedings of the International Conference APFA5 - Applications of Physics in Financial Analysis (2006).
77 I. Peterson, Randomness, risk, and financial markets, Science News 166(15), October 9, 2004.
78 S. Pincus and R. E. Kalman, Irregularity, volatility, risk, and financial market time series, PNAS 101, 13709 (2004).
79 S. Pincus and B. H. Singer, Randomness and degrees of irregularity, Proc. Natl. Acad. Sci. USA 93, 2083 (1996).
80 S. M. Pincus, Approximate entropy as a measure of system complexity, Proc. Natl. Acad. Sci. USA 88, 2297 (1991).
81 S. M. Pincus, T. Mulligan, A. Iranmanesh, S. Gheorghiu, M. Godschalk, and J. D. Veldhuis, Older males secrete luteinizing hormone and testosterone more irregularly, and jointly more asynchronously, than younger males, Proc. Natl. Acad. Sci. USA 93, 14100 (1996).
82 I. Rezek and S. J. Roberts, Stochastic complexity measures for physiological signal analysis, IEEE Trans. Biomed. Eng. 44, 1186 (1998).
83 J. S. Richman and J. R. Moorman, Physiological time-series analysis using approximate entropy and sample entropy, Am. J. Physiol. Heart Circ. Physiol. 278, H2039 (2000).
84 R. Cont, Quant. Fin. 1, 223-236 (2001).
85 P. Gopikrishnan, M. Meyer, L. A. N. Amaral, H. E. Stanley, Eur. Phys. J. B (Rapid Note) 3, 139 (1998).

86 R. Cont, M. Potters and J.-P. Bouchaud, in Dubrulle, Graner, Sornette (Eds.), Scale Invariance and Beyond (Proc. CNRS Workshop on Scale Invariance, Les Houches, 1997), Springer, Berlin (1997).
87 R. F. Engle, Autoregressive Conditional Heteroscedasticity with Estimates of the Variance of United Kingdom Inflation, Econometrica 50, 987-1007 (1982).
88 T. Bollerslev, Generalized Autoregressive Conditional Heteroskedasticity, Journal of Econometrics 31, 307-327 (1986).
89 Y. Liu, P. Cizeau, M. Meyer, C.-K. Peng, H. E. Stanley, Physica A 245, 437 (1997).
90 P. Cizeau, Y. Liu, M. Meyer, C.-K. Peng, H. E. Stanley, Physica A 245, 441 (1997).
91 J.-P. Bouchaud and M. Potters, Physica A 299, 60-70 (2001).
92 J.-P. Bouchaud, A. Matacz and M. Potters, Phys. Rev. Lett. 87, 228701 (2001).
93 A. Pagan, J. Empirical Finance 3, 15 (1996).
94 M. A. Simkowitz and W. L. Beedles, J. Amer. Stat. Assoc. 75, 306 (1980).
95 S. J. Kon, J. Finance 39, 147 (1984).
96 A. Peiro, J. Banking & Finance 23, 847-862 (1999).
97 F. Lillo and R. N. Mantegna, Eur. Phys. J. B 15, 603 (2000).
98 E. Fama, J. Business 38, 34 (1965).
99 K. French, J. Financial Economics 8, 55 (1980).
100 M. Gibbons et al., J. Business 54, 579 (1981).
101 D. B. Keim, J. Financial Economics 12, 13 (1983).
102 R. A. Ariel, J. Financial Economics 18, 161 (1987).
103 Z.-F. Huang, Physica A 287, 405 (2000).
104 V. Plerou, P. Gopikrishnan, B. Rosenow, L. A. N. Amaral, H. E. Stanley, Universal and non-universal properties of cross-correlations in financial time series, cond-mat/9902283; Phys. Rev. Lett. 83, 1471 (1999).
105 R. N. Mantegna, Hierarchical Structure in Financial Markets, cond-mat/9802256.
106 L. Laloux, P. Cizeau, J.-P. Bouchaud, M. Potters, Noise Dressing of Financial Correlation Matrices, cond-mat/9810255.
107 B. Podobnik, P. Ch. Ivanov, Y. Lee, A. Chessa, H. E. Stanley, Systems with Correlations in the Variance: Generating Power-Law Tails in Probability Distributions, cond-mat/9910433; to appear in Europhysics Letters (2000).
108 J. D. Noh, A model for correlations in stock markets, Phys. Rev. E 61, 5981 (2000).
109 L. Kullmann, J. Kertesz, R. N. Mantegna, Identification of clusters of companies in stock indices via Potts super-paramagnetic transitions, cond-mat/0002238.
110 P. Cizeau, M. Potters and J.-P. Bouchaud, Correlation structure of extreme stock returns, cond-mat/0006034; to appear in Quantitative Finance.
111 G. Bonanno, F. Lillo, R. N. Mantegna, High-frequency Cross-correlation in a Set of Stocks, cond-mat/0009350; Quantitative Finance 1, 96-104 (January 2001).
112 Z. Burda, J. Jurkiewicz, M. A. Nowak, G. Papp, I. Zahed, Levy Matrices and Financial Covariances, cond-mat/0103108.
113 H.-J. Kim, Y. Lee, I.-M. Kim, B. Kahng, Scale-free Network in Financial Correlations, cond-mat/0107449.
114 V. Plerou, P. Gopikrishnan, B. Rosenow, L. A. N. Amaral, T. Guhr, H. E. Stanley, A Random Matrix Approach to Cross-Correlations in Financial Data, cond-mat/0108023.
115 A. Dragulescu and V. M. Yakovenko, cond-mat/0103544; cond-mat/0211175.
116 Y. Fujiwara et al., cond-mat/0208398.
117 P. Gopikrishnan, V. Plerou, X. Gabaix, H. E. Stanley, Statistical Properties of Share Volume Traded in Financial Markets, cond-mat/0008113; Phys. Rev. E (Rapid Comm.) 62, R4493 (2000).
118 M. Ausloos and K. Ivanova, Mechanistic approach to generalized technical analysis of share prices and stock market indices, cond-mat/0201587.

119 M. Ausloos and K. Ivanova, Generalized Technical Analysis. Effects of transaction volume and risk, cond-mat/0212641.
120 V. Plerou, P. Gopikrishnan, H. E. Stanley, Symmetry Breaking in Stock Demand, cond-mat/0111349.
121 B. Rosenow, Int. J. Mod. Phys. C 13, 419-425 (2002).
122 F. Lillo, J. D. Farmer, R. N. Mantegna, Single Curve Collapse of the Price Impact Function for the New York Stock Exchange, cond-mat/0207428.
123 J.-P. Bouchaud, M. Potters and M. Meyer, Eur. Phys. J. B 13, 595-599 (2000).
124 W. Enders, Applied Economic Time Series, John Wiley.
125 G. Kim and H. M. Markowitz, J. Portfolio Management 16, 45 (1989).
126 F. Black and R. C. Jones, J. Portfolio Management 14, 48 (1987).
127 E. Samanidou, Diploma Thesis, Dept. of Economics, University of Bonn, Bonn (2000).
128 W. Paul and J. Baschnagel, Stochastic Processes from Physics to Finance, Springer, Berlin-Heidelberg (1999).
129 M. Levy, H. Levy and S. Solomon, Economic Lett. 45, 103 (1994).
130 M. Levy, H. Levy and S. Solomon, J. Phys. I (France) 5, 1087 (1995).
131 M. Levy, N. Persky and S. Solomon, Int. J. High Speed Computation 8, 93 (1996).
132 H. Levy, M. Levy and S. Solomon, Microscopic Simulation of Financial Markets, Academic Press, New York (2000).
133 T. Hellthaler, Int. J. Mod. Phys. C 6, 845 (1995).
134 R. Kohl, Int. J. Mod. Phys. C 8, 1309 (1997).
135 E. Zschischang and T. Lux, Physica A 291, 563 (2001).
136 E. Zschischang, Diploma Thesis, Dept. of Economics, University of Bonn, Bonn (2000).
137 M. Levy and S. Solomon, Int. J. Mod. Phys. C 7, 65 (1996); M. Levy and S. Solomon, Int. J. Mod. Phys. C 7, 595 (1996).
138 S. Solomon, in G. Ballot and G. Weisbuch (Eds.), Applications of Simulation to Social Sciences, Hermes Science, Paris (2000); S. Solomon and M. Levy, Int. J. Mod. Phys. C 7, 745 (1996).
139 Z. F. Huang and S. Solomon, Eur. Phys. J. B 20, 601 (2000).
140 Z. F. Huang and S. Solomon, Physica A 294, 503 (2001).
141 O. Biham et al., Phys. Rev. E 64, 026101 (2001).
142 S. Solomon and R. Richmond, Physica A 299, 188 (2001).
143 D. Stauffer, e-print available at http://ciclamino.dibe.unige.it/wehia/papers/stauffer.zip (1999).
144 … and Economics, Indian J. Phys. B 69, 681-698.
145 S. Moss de Oliveira, P. M. C. de Oliveira, D. Stauffer (1999), Evolution, Money, War and Computers, Teubner, Stuttgart, pp. 110-111, 127.
146 A. A. Dragulescu, V. M. Yakovenko (2000), Statistical Mechanics of Money, Eur. Phys. J. B 17, 723-726.
147 A. Chakraborti, B. K. Chakrabarti (2000), Statistical Mechanics of Money: Effects of Saving Propensity, Eur. Phys. J. B 17, 167-170.
148 B. Hayes (2002), Follow the Money, Am. Scientist 90 (Sept-Oct), 400-405.
149 V. Pareto (1897), Cours d'Economie Politique, Lausanne & Paris.
150 A. Chakraborti (2002), Distribution of Money in Model Markets of Economy, arXiv:cond-mat/0205221; to appear in Int. J. Mod. Phys. C 13 (2003).
151 C. Tsallis (2003), An Unifying Concept for Discussing Statistical Physics and Economics, in this Proc. Vol.; H. Reiss, P. K. Rawlings (2003), The Natural Role of Entropy in Equilibrium Economics, in this Proc. Vol.
152 A. A. Dragulescu, V. M. Yakovenko (2001), Evidence for the Exponential Distribution of Income in the USA, Eur. Phys. J. B 20, 585-589; A. A. Dragulescu, V. M. Yakovenko (2002), Statistical Mechanics of Money, Income and Wealth, arXiv:cond-mat/0211175.
153 Y. Fujiwara, H. Aoyama (2003), Growth and Fluctuations of Personal Income I & II, in this Proc. Vol.; arXiv:cond-mat/0208398.
154 A. Chakraborti, S. Pradhan, B. K. Chakrabarti (2001), A self-organising Model Market with single Commodity, Physica A 297, 253-259.

155 A. Chatterjee (2002), unpublished; A. Chatterjee, B. K. Chakrabarti, S. S. Manna (2003), Pareto Law in a Kinetic Model of Market with Random Saving Propensity, arXiv:cond-mat/0301289.
156 F. Mori and T. Odagaki, J. Phys. Soc. Japan 70, 2845 (2001).
157 D. Stauffer and N. Jan, in D. Stauffer (Ed.), Annual Reviews of Computational Physics VIII, Second Edition, World Scientific, Singapore (2000).
158 D. Stauffer and D. Sornette, Physica A 271, 496 (1999).
159 Y.-C. Zhang, Physica A 269, 30 (1999).
160 I. Chang, D. Stauffer and R. B. Pandey, pre-print for Int. J. Theo. Applied Fin. (2001).
161 V. Plerou et al., pre-print available at cond-mat/0106657 (2001).
162 A. M. C. de Souza and C. Tsallis, Physica A 236, 52 (1997).
163 L. Kullmann and J. Kertesz, Int. J. Mod. Phys. C 12 (2001).
164 L. Kullmann and J. Kertesz, pre-print available at cond-mat/0105473 (2001).
165 L. R. Silva and D. Stauffer, Physica A 294, 235 (2001).
166 G. Iori, Int. J. Mod. Phys. C 10, 1149 (1999).
167 I. Chang and D. Stauffer, Physica A 299, 547 (2001).
168 F. Castiglione and D. Stauffer, Physica A 300, 531 (2001).
169 T. Lux and M. Marchesi, Nature 397, 498 (1999).
170 T. Lux and M. Marchesi, Int. J. Theo. Appl. Finance 3, 67 (2000).
171 E. F. Fama, J. Finance 25, 383 (1970).
172 S. H. Chen et al., J. Econ. Behav. Org., in press (2000).
173 D. T. Kaplan, Physica D 73, 38 (1994).
174 W. Brock et al., Econ. Rev. 15, 197 (1996).
175 P. J. F. de Lima, J. Business and Econ. Stat. 16, 227 (1998).
176 E. Zambrano, "The Revelation Principle of Bounded Rationality", Santa Fe working paper 97-06-060, http://www.santafe.edu/sfi/publications/Abstracts/97-06-060abs.html
177 B. Edmonds, Modelling Bounded Rationality In Agent-Based Simulations using the Evolution of Mental Models, http://bruce.edmonds.name/popl/mbremm_1.html
178 J. Barofsky, Franky Takes on Wall Street, www.santafe.edu/sfi/education/reus/reus02/files/barofsky_paper.pdf (2002).
179 P. Bak, M. Paczuski and M. Shubik, "Price variations in a stock market with many agents", Physica A 246, 430 (1997); cond-mat/9609144.
180 L.-H. Tang and G.-S. Tian, "Reaction-diffusion-branching models of stock price fluctuations", Physica A 264, 543 (1999); cond-mat/9811114.
181 F. Leyvraz and S. Redner, Phys. Rev. Lett. 70, 3824 (1991).
182 P. Krapivsky, Phys. Rev. E 51, 4774 (1995).
183 D. Eliezer and I. I. Kogan, "Scaling laws for the market microstructure of the interdealer broker markets", Market Microstructure 2, 3 (1999); cond-mat/9808249.
184 S. Maslov, Simple model of limit order-driven market, Physica A 278, 571 (2000); cond-mat/9912051.
185 S. Maslov and M. Mills, Price fluctuations from the order book perspective - empirical facts and a simple model, Physica A 299, 234 (2001); cond-mat/0102518.
186 F. Slanina, Phys. Rev. E 64, 056136 (2001); cond-mat/0104547.
187 J.-P. Bouchaud, M. Mezard and M. Potters, "Statistical properties of stock order books: Empirical results and models", Quantitative Finance 2, 251 (2002); cond-mat/0203511.
188 M. Potters and J.-P. Bouchaud, "More statistical properties of the limit order book", cond-mat/0210710.
189 I. Zovko and J. D. Farmer, "The power of patience: A behavioral regularity in limit order placement", Quantitative Finance 2, 387 (2002); cond-mat/0206280.
190 E. Smith, J. D. Farmer, L. Gillemot and S. Krishnamurthy, "Statistical theory of the continuous double auction", cond-mat/0210475.
191 M. G. Daniels, J. D. Farmer, L. Gillemot, G. Iori and E. Smith, "Quantitative model of price diffusion and market friction based on trading as a mechanistic random process", Phys. Rev. Lett. 90, 108102 (2003); cond-mat/0112422.
192 F. Lillo, J. D. Farmer and R. Mantegna, "Single curve collapse of the price impact function", Nature 421, 129 (2003); cond-mat/0207438.

193 B. Biais, P. Hillion and C. Spatt, "An empirical analysis of the LOB and the order flow in the Paris Bourse", Journal of Finance 50, 1655 (1995).
194 C. Chiarella and G. Iori, Quantitative Finance 2, 346 (2002).
195 R. D. Willmann, G. M. Schutz and D. Challet, "Exact Hurst exponents and crossover behavior in a limit order market model", Physica A 316, 526 (2002); cond-mat/0206446.
196 M. Kardar, G. Parisi and Y. C. Zhang, Phys. Rev. Lett. 56, 889 (1986).
197 D. Challet and R. Stinchcombe, Exclusion particle models of limit order financial markets, cond-mat/0208025.
198 D. Challet and R. Stinchcombe, "Limit order market analysis and modeling: On an universal cause for overdiffusive prices", cond-mat/0211082.
199 J. A. Feigenbaum, P. G. O. Freund, Discrete Scaling in Stock Markets Before Crashes, cond-mat/9509033.
200 J. A. Feigenbaum, P. G. O. Freund, Discrete Scale Invariance and the "Second Black Monday", cond-mat/9710324.
201 J.-P. Bouchaud, R. Cont, "A Langevin Approach to Stock Market Fluctuations and Crashes", cond-mat/9801279.
202 A. Johansen, O. Ledoit and D. Sornette, Int. J. Theor. Applied Finance 3, No. 1 (January 2000); cond-mat/9810071.
203 A. Johansen and D. Sornette, International Journal of Modern Physics C 10, No. 4, 563-575 (1999); cond-mat/9901268.
204 A. Johansen and D. Sornette, European Physical Journal B 17, 319-328 (2000); cond-mat/0004263.
205 T. Kaizoji, Physica A 287 (3-4), 493-506; cond-mat/0010263.
206 S. Bornholdt, Int. J. Mod. Phys. C 12, No. 5, 667-674 (2001); cond-mat/0105224.
207 I. Giardina, J.-P. Bouchaud, Bubbles, crashes and intermittency in agent based market models, cond-mat/0206222.
208 J. Nauenberg, J. Phys. A 8, 925 (1975); G. Jona-Lasinio, Nuovo Cimento 26B, 99 (1975).
209 D. Sornette, Phys. Rep. 297, 239 (1998).
210 W. I. Newman, D. L. Turcotte, A. M. Gabrielov, Phys. Rev. E 52, 4827 (1995).
211 S. Drozdz, F. Grummer, F. Ruf, J. Speth, Log-periodic self-similarity: an emerging financial law?, cond-mat/0209591.
212 W.-X. Zhou and D. Sornette, Renormalization Group Analysis of the 2000-2002 anti-bubble in the US S&P 500 index: Explanation of the hierarchy of 5 crashes and Prediction, physics/0301023.
213 M. H. Cohen and V. D. Natoli, Risk and utility in portfolio optimization, Physica A, in press (2003).
214 B. K. Chakrabarti (2007), Kolkata Restaurant Problem as a Generalised El Farol Bar Problem, in Econophysics of Markets and Business Networks, pp. 239-246, Eds. A. Chatterjee and B. K. Chakrabarti, New Economic Windows Series, Springer, Milan; http://www.arxiv.org/0705.2098
215 A. S. Chakrabarti, B. K. Chakrabarti, A. Chatterjee, M. Mitra (2009), The Kolkata Paise Restaurant Problem and Resource Utilisation, Physica A 388, 2420-2426.
216 W. Brian Arthur (1994), Inductive Reasoning and Bounded Rationality: El Farol Problem, American Economics Association Papers & Proceedings 84, 406.
217 D. Challet, M. Marsili and Y.-C. Zhang (2005), Minority Games: Interacting Agents in Financial Markets, Oxford University Press, Oxford.
218 M. Kandori (2008), Repeated Games, The New Palgrave Dictionary of Economics, 2nd Edition, Palgrave Macmillan, New York, Volume 7, 98-105.
219 A. Orléan (1995), Bayesian interactions and collective dynamics of opinion: Herd behavior and mimetic contagion, Journal of Economic Behavior and Organization 28, 257-274.
220 A. V. Banerjee (1992), A simple model of herd behavior, Quarterly Journal of Economics 110 (3), 797-817.
221 R. P. Freckleton and W. J. Sutherland (2001), Do Power Laws Imply Self-regulation?, Nature 413, 382.
222 M. Nowak and K. Sigmund (1993), A strategy of win-stay, lose-shift that outperforms tit-for-tat in the Prisoner's Dilemma game, Nature 364, 56-58.

223 D. P. Smethurst and H. C. Williams (2001), Power Laws: Are Hospital Waiting Lists Self-regulating?, Nature 410, 652-653.
224 A. Ghosh, A. S. Chakrabarti, B. K. Chakrabarti (2009), "Econophysics & Economics of Games, Social Choices & Quantitative Techniques", Proc. Econophys-Kolkata IV, Springer, Milan.