Open Journal of Statistics, 2016, 6, 156-171. Published Online February 2016 in SciRes. http://www.scirp.org/journal/ojs http://dx.doi.org/10.4236/ojs.2016.61014

Student’s t Increments

Daniel T. Cassidy Department of Engineering Physics, McMaster University, Hamilton, Canada

Received 18 December 2015; accepted 23 February 2016; published 26 February 2016

Copyright © 2016 by authors and Scientific Research Publishing Inc. This work is licensed under the Creative Commons Attribution International License (CC BY). http://creativecommons.org/licenses/by/4.0/

Abstract

Some moments and limiting properties of independent Student's t increments are studied. Independent Student's t increments are independent draws from not-truncated, truncated, and effectively truncated Student's t-distributions with shape parameters ν ≥ 1 and can be used to create random walks. It is found that sample paths created from truncated and effectively truncated Student's t-distributions are continuous. Sample paths for ν ≥ 3 Student's t-distributions are also continuous. Student's t increments should thus be useful in construction of stochastic processes and as noise driving terms in Langevin equations.

Keywords

Student's t-Distribution, Truncated, Effectively Truncated, Random Walk, Sample Paths, Continuity

1. Introduction: Student's t Increments

The interest of this paper is independent Student's t increments. These increments are independent draws from a Student's t-distribution with support [−∞, +∞], a truncated Student's t-distribution with support [−bβ, +bβ], or an effectively truncated Student's t-distribution with support [−∞, +∞] but which has a multiplicative exp(−t²) envelope that effectively truncates the distribution. Here β is the scale parameter for the Student's t-distribution and b < ∞ is a real constant.

These independent Student's t increments can be used to generate a random walk such as the Markov sequence y_0 = x_0, y_1 = y_0 + x_1, …, y_n = y_{n−1} + x_n, where the x_i, i = 0, 1, …, n, are independent draws from a Student's t-distribution, a truncated Student's t-distribution, or an effectively truncated Student's t-distribution. Attention will be restricted to Student's t-distributions with location parameter (i.e., mean) µ = 0, scale factor β < ∞, and shape parameter ν ≥ 1, which cover the range from the Cauchy distribution, for which ν = 1, to the Gaussian or normal distribution, for which ν = ∞.

To distinguish between time t and a realization of a random variable that is distributed as a Student's t-distribution, a bold face t will be used with the name of the distribution and a regular face t will represent time. The symbols x and ξ will represent random variables, and specific realizations of the random variables will be represented as x and ξ. A stochastic process, which is a family of functions of time, is then ξ(t), whereas ξ(t = t_i) is a random variable for t_i some constant, and ξ(t = t_i) = ξ is a number in that both t and the value ξ of the random variable are specified.

A Student's t-distribution with location parameter µ = 0, shape parameter ν, and scale parameter β is given by [1]-[3]

f_t(\xi; \nu, \beta) = \frac{\Gamma\left((\nu+1)/2\right)}{\Gamma(\nu/2)\,\sqrt{\pi\nu}\,\beta}\left(1 + \frac{\xi^2}{\nu\beta^2}\right)^{-(\nu+1)/2}    (1)

with −∞ ≤ ξ ≤ +∞. f_t(ξ; ν, β) dξ gives the probability that a random draw ξ from the Student's t-distribution lies in the interval ξ < ξ ≤ ξ + dξ.
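As a quick numerical sanity check of Equation (1), the following sketch (assuming NumPy and SciPy are available) evaluates the closed form and compares it with scipy.stats.t using loc = 0 and scale = β; the parameter values are illustrative only.

```python
# Minimal check of Equation (1) against scipy.stats.t (loc = 0, scale = beta).
import numpy as np
from scipy.stats import t as student_t
from scipy.special import gammaln

def f_t(xi, nu, beta):
    """Student's t pdf with location 0, shape nu, and scale beta (Equation (1))."""
    log_norm = gammaln((nu + 1) / 2) - gammaln(nu / 2) - 0.5 * np.log(np.pi * nu) - np.log(beta)
    return np.exp(log_norm - (nu + 1) / 2 * np.log1p(xi**2 / (nu * beta**2)))

xi = np.linspace(-10.0, 10.0, 7)
nu, beta = 3.0, 1.5
print(np.allclose(f_t(xi, nu, beta), student_t.pdf(xi, df=nu, loc=0.0, scale=beta)))  # True
```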

A truncated Student's t-distribution f_T(ξ; ν, β, b) with location parameter µ = 0, shape parameter ν, and scale parameter β is given by

f_T(\xi; \nu, \beta, b) = \frac{1}{Z_\nu(b)}\, f_t(\xi; \nu, \beta) \times \mathrm{rect}\left(\frac{\xi}{2b\beta}\right)    (2)

with

Z_\nu(b) = \int_{-b\beta}^{+b\beta} f_t(\xi; \nu, \beta)\,\mathrm{d}\xi    (3)

= \int_{-\infty}^{+\infty} f_t(\xi; \nu, \beta) \times \mathrm{rect}\left(\frac{\xi}{2b\beta}\right)\mathrm{d}\xi,    (4)

where the rectangle function rect(x/T) = 1 if |x| < T/2 and = 0 for |x| > T/2 has been used to truncate the distribution and limit the support to [−bβ, bβ].

A Student's t-distribution is obtained from a mixture of a normal distribution with a σ that is distributed as inverse chi with support [0, ∞] [4]-[7]. Let a = 1/σ; then a is distributed as chi, χ(a; ν, β), and

\chi(a; \nu, \beta)\,\mathrm{d}a = \frac{(\nu\beta^2)^{(\nu-1)/2}\, a^{\nu-1}\, \mathrm{e}^{-\nu\beta^2 a^2/2}}{\Gamma(\nu/2)\, 2^{\nu/2-1}}\, \beta\sqrt{\nu}\,\mathrm{d}a.    (5)

Using chi as defined above and a normal distribution with zero mean and standard deviation of σ = 1/a, the mixing integral when evaluated from a = 0 to a = +∞ yields a Student's t-distribution

f_t(\xi; \nu, \beta) = \int_{0}^{\infty} \frac{a\, \mathrm{e}^{-a^2\xi^2/2}}{\sqrt{2\pi}}\, \chi(a; \nu, \beta)\,\mathrm{d}a    (6)

with a mean of zero, shape parameter ν, and a scale parameter of β.
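The mixing integral of Equation (6) can be checked directly by quadrature. The sketch below (SciPy assumed; parameter values illustrative) evaluates Equations (5) and (6) numerically and compares the result with the closed-form pdf of Equation (1).

```python
# Numerical evaluation of the chi-normal mixing integral, Equations (5)-(6).
import numpy as np
from scipy.integrate import quad
from scipy.special import gammaln

def chi_pdf(a, nu, beta):
    """Chi density of a = 1/sigma implied by Equation (5)."""
    log_c = (nu / 2) * np.log(nu * beta**2) - gammaln(nu / 2) - (nu / 2 - 1) * np.log(2)
    return np.exp(log_c + (nu - 1) * np.log(a) - nu * beta**2 * a**2 / 2)

def f_t(xi, nu, beta):
    """Closed-form Student's t pdf of Equation (1)."""
    log_norm = gammaln((nu + 1) / 2) - gammaln(nu / 2) - 0.5 * np.log(np.pi * nu) - np.log(beta)
    return np.exp(log_norm - (nu + 1) / 2 * np.log1p(xi**2 / (nu * beta**2)))

nu, beta, xi = 3.0, 1.0, 1.7
mixture, _ = quad(lambda a: a * np.exp(-a**2 * xi**2 / 2) / np.sqrt(2 * np.pi) * chi_pdf(a, nu, beta),
                  0.0, np.inf)
print(mixture, f_t(xi, nu, beta))   # the two values agree
```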

The probability that x > q, P_χ(x > q; ν, β), is needed to normalize properly a truncated chi distribution. A left-truncated chi distribution χ(a; ν, β) is zero for values a ≤ q:

P_\chi(x > q; \nu, \beta) = \int_{q}^{\infty} \chi(x; \nu, \beta)\,\mathrm{d}x.    (7)

An effectively truncated Student's t-distribution f_E(ξ; ν, β, q) is the pdf for a mixture of a left-truncated chi and a normal distribution:

f_E(\xi; \nu, \beta, q) = \int_{q}^{\infty} \frac{a\, \mathrm{e}^{-a^2\xi^2/2}}{\sqrt{2\pi}}\, \frac{\chi(a; \nu, \beta)}{P_\chi(x > q; \nu, \beta)}\,\mathrm{d}a.    (8)

f_E(ξ; ν, β, q = 0) = f_t(ξ; ν, β) is a Student's t-distribution with shape parameter ν and scale parameter β.

This paper is organized as follows. The development in time of the variance for the sum of independent draws from distributions is reviewed in Section 2. It is shown that truncation of a Student's t-distribution keeps the moments finite, and thus the variances add under convolution, even if the distributions are not stable under convolution. Gaussian and Cauchy distributions are stable under self-convolution: a Gaussian convolved with a Gaussian yields a Gaussian. Student's t-distributions other than the ν = 1 and ν = ∞ distributions are not stable under self-convolution. The tails of the self-convolution of Student's t-distributions are "stable"; only the deep tails retain the characteristic |t|^{−(ν+1)} power-law dependence of the original t-distribution [6] [8] [9]. However, the fact that the moments are finite and variances add under convolution allows the time development of the variance to be determined. Examples of smoothing of the characteristic function owing to truncation are given and examples of the moments of distributions are given.

The continuity of sample paths is discussed in Section 3. It is shown that truncated and effectively truncated Student's t-distributions have continuous sample paths. It is also shown that sample paths created by Student's t-distributions with ν ≥ 3 are continuous. Random walks are shown for independent increments drawn from a uniform distribution, from a normal distribution, and from ν = 1 and ν = 3 Student's t-distributions. The sample paths for the different distributions were all simulated from the same sequence of pseudo random numbers. This enables observation of the effects of different shape parameters and truncations on the random walks. Section 4 is a conclusion.

2. Variances Add under Convolution

Let g and h be zero mean probability density functions (pdf's) with variances σ_g² and σ_h², and let f = g ∗ h be the convolution of g and h:

f(x) = \int_{-\infty}^{+\infty} g(u)\,h(x-u)\,\mathrm{d}u = \int_{-\infty}^{+\infty} g(x-u)\,h(u)\,\mathrm{d}u.    (9)

f(x) is also a zero mean pdf and hence the variance of f(x) is

\sigma_f^2 = \int_{-\infty}^{+\infty} u^2 f(u)\,\mathrm{d}u = \frac{1}{(-\mathrm{i}2\pi)^2}\left.\frac{\mathrm{d}^2 F(s)}{\mathrm{d}s^2}\right|_{s=0} = -\frac{1}{4\pi^2}\,F''(0)    (10)

where F(s) = 𝓕{f(x)} is the Fourier transform of f(x) with kernel exp(−i2πsx). From the convolution theorem, F(s) = G(s)H(s), and

F''(0) = G''(0)H(0) + 2G'(0)H'(0) + G(0)H''(0) = G''(0) + H''(0)    (11)

since g and h are zero-mean pdf's: ∫g(u) du = G(0) = 1, ∫h(u) du = H(0) = 1, and −i2π∫u g(u) du = G'(0) = 0, −i2π∫u h(u) du = H'(0) = 0. Thus

\sigma_f^2 = \sigma_g^2 + \sigma_h^2    (12)

and variances add under convolution. The argument holds even if the means for g and h are non-zero. The argument also holds for distributions that are stable or are not stable under convolution, or for combinations of distributions that might not retain shape under the action of convolution.

The Fourier transforms F(s), G(s), and H(s) will exist for pdf's that are continuous or have finite discontinuities [10], p. 9. The derivatives of the transforms might not exist at some values of s owing to higher-order discontinuities, but truncation of the pdf will smooth the transform and remove the discontinuities. For example, consider F(s) = 𝓕{2/(1 + (2πx)²)} = exp(−|s|). This distribution in the x domain is a Cauchy distribution. The derivatives in the transform domain do not exist at s = 0. However, provided that T < ∞, derivatives at s = 0 exist for the convolution

F_T(s) = \frac{\sin(\pi T s)}{\pi s} \ast \exp(-|s|),    (13)

which is the Fourier transform of the truncated distribution,


F_T(s) = \mathcal{F}\left\{\mathrm{rect}\left(\frac{x}{T}\right) \times \frac{2}{1 + (2\pi x)^2}\right\}    (14)

where rect(x/T) = 1 if |x| < T/2 and = 0 for |x| > T/2.

The convolution of Equation (13) does not appear to have an analytic expression except at s = 0. An expression for the convolution, Equation (13), can be written for s ≥ 0 as

F_T(s) = \int_{s}^{\infty} \frac{\sin(T\pi u)\,\mathrm{e}^{s-u}}{\pi u}\,\mathrm{d}u + \int_{-\infty}^{s} \frac{\sin(T\pi u)\,\mathrm{e}^{u-s}}{\pi u}\,\mathrm{d}u,    (15)

from which the derivatives at s = 0 can, with some effort, be calculated. For T < ∞, F'(0) = F'''(0) = 0 and

F_T''(0) = \frac{2\arctan(T\pi)}{\pi} - 2T    (16)

F_T^{(4)}(0) = \frac{2\arctan(T\pi)}{\pi} - 2T + \frac{2\pi^2 T^3}{3}.    (17)

The smoothing power of the convolution of Equation (13) can be observed if the sinc function is replaced by a unit area rectangle function with a similar width as the main lobe of the sinc function. Using this approximation for the sinc function, the convolution of Equation (13) becomes

F_T(s) = \frac{T}{2}\,\mathrm{rect}\left(\frac{Ts}{2}\right) \ast \exp(-|s|)    (18)

= \frac{T}{2}\left[\int_{-1/T}^{s} \exp(u-s)\,\mathrm{d}u + \int_{s}^{1/T} \exp(s-u)\,\mathrm{d}u\right]    (19)

and can be evaluated to give

F_T(s) = \frac{T}{2}\left[2 - \exp\left(-\frac{1+sT}{T}\right) - \exp\left(-\frac{1-sT}{T}\right)\right],\qquad |s| \le \frac{1}{T},    (20)

which is, for T < ∞, a continuous function of s and for which derivatives exist at s = 0. This stands in stark contrast to the Fourier transform for the not-truncated function (i.e., for T = ∞), which is

F_∞(s) = F(s) = exp(−|s|).

Figure 1 shows the effect of convolution on the Fourier transform of a Cauchy distribution. The Cauchy distribution was truncated as indicated in Equation (14) with T = 100. The scale parameter of the Cauchy distribution of Equation (14) is (2π)⁻¹. The truncation thus removes values that have magnitudes greater than 100π times the scale factor. The probability of an observation with magnitude > 50 is 0.002, i.e., Pr{|x| > 50} = 0.002, for the distribution of Equation (14). For a normally distributed random variable with mean µ = 0 and standard deviation σ, Pr{|x| > 3.09σ} = 0.002. Figure 2 shows similar quantities as Figure 1 but with T = 10000. The probability of an observation with magnitude > 5000 is 2 × 10⁻⁵ for the Cauchy distribution of Equation (14). In a "normal" world, Pr{|x| > 4.26σ} = 2 × 10⁻⁵. Truncation smooths the characteristic function and keeps moments finite.

The variance σ_{f_n}² of an n-fold convolution, f_n = f_1 ∗ f_2 ∗ ⋯ ∗ f_n, is Σ_{i=1}^{n} σ_{f_i}². If f_1 = f_2 = ⋯ = f_n = f, then the variance for the n-fold self-convolution is n × σ_f². The pdf for the sum of n independent draws from the same parent distribution that is characterized by a pdf f with variance σ_f² is the n-fold self-convolution of the parent pdf f, and the variance of the sum of the n independent draws is n × σ_f². For a process that is the summation of samples that are periodically drawn from a parent population, the variance of the process would be proportional to time.
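A small Monte Carlo sketch of this point follows (NumPy assumed; parameters illustrative). It uses a truncated ν = 1 (Cauchy) parent, for which the variance exists, and shows that the variance of the partial sums grows in proportion to the number of increments, in agreement with the closed-form variance of Section 2.2.

```python
# Variance of partial sums grows linearly with the number of independent increments.
import numpy as np

rng = np.random.default_rng(1)
b, beta, n_steps, n_paths = 50.0, 1.0, 200, 20000

# Truncated Cauchy increments on (-b*beta, b*beta) by inverse-CDF sampling.
u = rng.uniform(-np.arctan(b), np.arctan(b), size=(n_paths, n_steps))
x = beta * np.tan(u)
paths = np.cumsum(x, axis=1)

var_theory = beta**2 * (b - np.arctan(b)) / np.arctan(b)     # Equation (29) of Section 2.2
for n in (50, 100, 200):
    print(n, paths[:, n - 1].var(), n * var_theory)          # sample variance ~ n * sigma^2
```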

Following Papoulis [11], p. 292, consider a homogeneous and stationary Markov sequence [11], p. 530, y_0 = x_0, y_1 = y_0 + x_1, …, y_n = y_{n−1} + x_n, where the x_i, i = 0, 1, …, n, are independent Student's t increments, i.e., the x_i are independent draws from a Student's t-distribution. The sequence is homogeneous since the pdf's for each x_i are independent of n. The sequence is stationary since it is homogeneous and all x_i have the


Figure 1. From top to bottom at s = 0: F(s) = exp(−|s|), exp(−|s|) ∗ T sinc(πTs), and exp(−|s|) ∗ (T/2) rect(Ts/2), for T = 100.

Figure 2. From top to bottom at s = 0: F(s) = exp(−|s|), exp(−|s|) ∗ T sinc(πTs), and exp(−|s|) ∗ (T/2) rect(Ts/2), for T = 1 × 10⁴.

same pdf. Let T_s be the time between increments. The total time t taken to acquire the sequence y_n is nT_s = t. The mean of y_n is zero and the variance of y_n, σ_{y_n}², is

\sigma_{y_n}^2 = n\sigma^2 = \frac{t\,\sigma^2}{T_s}    (21)

where σ² is the variance of any of the Student's t increments x_i. Allow n → ∞, which requires T_s → 0. The variance will remain finite and non-zero only if σ² → σ_o² T_s as T_s → 0, where σ_o² is a constant. Thus σ² = σ_o² T_s varies linearly with the sampling period T_s, and the variance of the sequence, σ_{y_n}² = σ_o² t, varies linearly with time t.

The linear dependence on time of the variance of the Markov sequence y_n arises not because of Gaussian properties, but because of the assumed independence of samples. The variance of a summation of independent samples is the sum of the variance of each sample, and thus the variance will increase linearly with the number of samples. If the samples are obtained by periodic sampling, then the variance will increase linearly with time.

Papoulis [11], p. 292, writes that the limit as n → ∞, which requires T_s → 0, results in a Wiener-Lévy process, which is a stochastic process that is continuous for almost all outcomes. Papoulis then shows y_∞ is a normally distributed random variable. Papoulis assumed n samples were drawn from a binomial distribution with p = 1/2 and appealed to the DeMoivre-Laplace theorem to obtain a normal distribution in the limit that

160 D. T. Cassidy np(11− p)  [11], p. 66. Not all functions tend to a normal pdf under repeated convolution [10], p. 186. A Cauchy distribution is probably the most noted distribution that does not follow the central limit theorem. Not all Student’s t-distri- butions tend to a normal distribution. According to Bracewell [10], p. 190, only functions with finite area, finite mean, and finite second moments will tend to normal distributions under repeated convolution. For convolution of non-identical functions, Lyapunov’s condition on the ratio of absolute moments to power of the variance must be satisfied.

The dependence on time of the variance for the Markov sequence yn can be obtained in a slightly different manner than the approach of Papoulis [11] and in a manner that does not specify the underlying pdf’s. Following Shreve [12], p. 98, the expectation of the quadratic variation can be calculated

nn 2 n = −=2 =σσ 22 = EE{Q} ∑( yyi i−1 ) ∑∑i=1 E{xi} os T o t (22) ii=11= and the mean-square limit of the variance of the quadratic variation can be used to show convergence. n 22 The variance of Q is Ex2−=σ 2 TE2 x4 −σσ 22 Tx + 2 T 2. The fourth central , {∑∑i=1 ( i os) } { ( i osi( o) s)} 2 42 EE{xxii} = α( { }) , is proportional to the variance squared for a Student’s t-distribution (see below) and there- 22 fore α is a constant. The variance of Q is then (ασ−1)nTos, which tends to zero linearly with Ts as 2 2 Ts → 0 . Thus in a mean-square sense, the expectation of the quadratic variation is σ o t , i.e., E{Qt} = σ o . The variance of the stochastic process increases linearly with time t. For Gaussian increments, α = 3 . For Student’s t increments, α is a simple function of the shape parameter ν , the scale parameter β , and the degree and form of truncation of the underlying pdf. As the moments and continuity of a stochastic process are of interest, these topics are covered in the following sections. In the following, it is assumed on the strength of the arguments in this section and owing to the assumption of independent increments, that the scale factor β varies as t . The scale factor for a normal distribution is σβ= , the standard deviation. For Brownian motion, yy= ∞ is a normally distributed random variable and σσyo= t . For Brownian motion the increments are independent, Gaussian random variables.

2.1. Moments for Student's t-Distributions with Support [−∞, +∞]

The nth central moment for a Student's t-distribution with support [−∞, +∞] is given by

\mu_n(\nu, \beta) = \int_{-\infty}^{\infty} \xi^n f_t(\xi; \nu, \beta)\,\mathrm{d}\xi.    (23)

Closed form expressions for the second, fourth, and sixth central moments are given below, along with the values of ν for which the expressions are valid.

The second central moment µ_2(ν, β), which is the variance σ², is proportional to β² and is valid for ν > 2:

\mu_2(\nu, \beta) = \beta^2 \times \frac{\nu}{\nu - 2},\qquad \nu > 2.    (24)

The fourth central moment µ_4(ν, β) is proportional to β⁴ and is valid for ν > 4:

\mu_4(\nu, \beta) = \beta^4 \times \frac{3\nu^2}{(\nu - 4)(\nu - 2)},\qquad \nu > 4.    (25)

The sixth central moment µ_6(ν, β) is proportional to β⁶ and is valid for ν > 6:

\mu_6(\nu, \beta) = \beta^6 \times \frac{15\nu^3}{(\nu - 6)(\nu - 4)(\nu - 2)},\qquad \nu > 6.    (26)

Not all central moments exist when the region of support for the t-distribution is [−∞, +∞] .
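The closed forms of Equations (24)-(26) can be verified by direct numerical integration of Equation (23); a minimal sketch (SciPy assumed, with an illustrative choice of ν and β for which all three moments exist) follows.

```python
# Closed-form central moments, Equations (24)-(26), versus direct integration of Equation (23).
import numpy as np
from scipy.integrate import quad
from scipy.stats import t as student_t

nu, beta = 10.0, 2.0
closed = {2: beta**2 * nu / (nu - 2),
          4: beta**4 * 3 * nu**2 / ((nu - 4) * (nu - 2)),
          6: beta**6 * 15 * nu**3 / ((nu - 6) * (nu - 4) * (nu - 2))}
for n, mu_n in closed.items():
    numeric, _ = quad(lambda x: x**n * student_t.pdf(x, df=nu, scale=beta), -np.inf, np.inf)
    print(n, mu_n, numeric)   # closed form and quadrature agree
```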


2.2. Moments for Truncated Student's t-Distributions with Support [−bβ, bβ]

Truncation of Student's t-distributions keeps the moments finite and defined [6] [7]. As an example, consider a Student's t-distribution with one degree of freedom, ν = 1 (i.e., a Cauchy or Lorentzian distribution), with support [−bβ, +bβ], where β < ∞ is a scale parameter and b is a number. Provided that b < ∞, central moments for the truncated Cauchy (and for all other truncated Student's t-distributions with ν ≥ 1) exist. The integrals that define the truncated central moments for the ν = 1 Student's t-distribution are

\mu_n(\nu = 1, \beta, b) = \frac{1}{Z_1(b)} \int_{-b\beta}^{b\beta} \frac{1}{\pi\beta}\,\xi^n\left(1 + \frac{\xi^2}{\beta^2}\right)^{-1}\mathrm{d}\xi    (27)

with

Z_1(b) = \int_{-b\beta}^{b\beta} \frac{1}{\pi\beta}\left(1 + \frac{\xi^2}{\beta^2}\right)^{-1}\mathrm{d}\xi = \frac{2\arctan(b)}{\pi}.    (28)

Closed form expressions for the central moments of a truncated ν = 1 distribution are given below. As might be expected, µ_n(ν = 1, β, b) with n = 2, 4, … is proportional to β^n with a constant of proportionality that is a function of b and n.

For truncated ν = 1 Student's t-distributions, the second central moment µ_2(ν = 1, β, b) is

\mu_2(\nu = 1, \beta, b) = \beta^2 \times \frac{b - \arctan(b)}{\arctan(b)},    (29)

the fourth central moment µ_4(ν = 1, β, b) is

\mu_4(\nu = 1, \beta, b) = \beta^4 \times \frac{b^3 + 3\arctan(b) - 3b}{3\arctan(b)},    (30)

and the sixth central moment µ_6(ν = 1, β, b) is

\mu_6(\nu = 1, \beta, b) = \beta^6 \times \frac{3b^5 - 5b^3 - 15\arctan(b) + 15b}{15\arctan(b)}.    (31)

All of these moments are defined with the single restriction that b < ∞ (i.e., that the ν = 1 distribution is truncated). Since the tails of distributions for ν > 1 decrease more rapidly than for a ν = 1 distribution, the central moments can be evaluated for all truncated Student's t-distributions with ν ≥ 1. In this sense, the Cauchy distribution is a worst case.
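A short sketch (SciPy assumed; parameters illustrative) comparing Equations (29)-(31) with direct numerical integration of Equation (27):

```python
# Truncated nu = 1 (Cauchy) central moments: Equations (29)-(31) versus quadrature of Equation (27).
import numpy as np
from scipy.integrate import quad

beta, b = 1.0, 50.0
Z1 = 2 * np.arctan(b) / np.pi                               # Equation (28)

def mu_n(n):
    val, _ = quad(lambda x: x**n / (np.pi * beta * (1 + (x / beta)**2)), -b * beta, b * beta)
    return val / Z1

print(mu_n(2), beta**2 * (b - np.arctan(b)) / np.arctan(b))
print(mu_n(4), beta**4 * (b**3 + 3 * np.arctan(b) - 3 * b) / (3 * np.arctan(b)))
print(mu_n(6), beta**6 * (3 * b**5 - 5 * b**3 + 15 * b - 15 * np.arctan(b)) / (15 * np.arctan(b)))
```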

3. Continuous Sample Paths

For a Markov process, the sample paths are continuous functions of t if, for any ε > 0,

\lim_{\Delta t \to 0} \frac{1}{\Delta t} \int_{|x-z| > \varepsilon} p(x, t+\Delta t\,|\,z, t)\,\mathrm{d}x = 0    (32)

uniformly in z, t, and Δt [13], p. 46. p(x, t + Δt | z, t) is the pdf for the process and t is time.

The condition for continuous sample paths, Equation (32), can be written in different forms. For independent, zero mean (z = 0), symmetric pdf's,

\lim_{\Delta t \to 0} \frac{1}{\Delta t}\left[1 - \int_{-\varepsilon}^{\varepsilon} p(x, t+\Delta t\,|\,z = 0, t)\,\mathrm{d}x\right] = 0    (33)

or equivalently, since Pr{−t_2 < x ≤ −t_1} − Pr{t_1 < x ≤ t_2} = 0 for t_2 > t_1 > 0 for a symmetric pdf,

\lim_{\Delta t \to 0} \frac{1}{\Delta t} \int_{\varepsilon}^{\infty} p(x, t+\Delta t\,|\,z = 0, t)\,\mathrm{d}x = 0.    (34)

Both forms will be used.

A stochastic process that is created as the sums of independent draws from a normal distribution (i.e., Gaussian increments) with variance Δt = 1/S and mean z has continuous sample paths. For simplicity in

notation, assume that z = 0. For this process with pdf given by

p(x, t+\Delta t\,|\,z = 0, t) = \frac{1}{\sqrt{2\pi\Delta t}}\exp\left(-\frac{x^2}{2\Delta t}\right),    (35)

the limit

\lim_{\Delta t \to 0} \frac{1}{\Delta t}\left[1 - \frac{1}{\sqrt{2\pi\Delta t}}\int_{-\varepsilon}^{\varepsilon}\exp\left(-\frac{x^2}{2\Delta t}\right)\mathrm{d}x\right]    (36)

= \lim_{S \to \infty} S\left[1 - \sqrt{\frac{S}{2\pi}}\int_{-\varepsilon}^{\varepsilon}\exp\left(-\frac{S x^2}{2}\right)\mathrm{d}x\right]    (37)

= \lim_{S \to \infty} S\left(1 - \mathrm{erf}\left(\varepsilon\sqrt{S/2}\right)\right) = 0    (38)

equals zero and the sample paths are continuous. An expansion of Equation (38) about S = ∞ shows that the dominant term goes as √S exp(−Sε²/2):

2S 2 13 lim exp(−S 2) 1 −+2 42 =0 (39) S→∞  π SS and thus the limiting value as S →∞ is zero. For samples paths that are created as the sums of independent draws from a Student’s t-distribution with ν = 1 (i.e., ν = 1 Student’s t increments), which is a Cauchy distribution, the pdf ∆t p( x, t+∆ tz | = 0, t) = (40) π ( xt2 +∆ ) does not have continuous sample paths. The limit  1  ∆t lim 1− dx (41) ∆→ ∫− 2 t 0 ∆t π xz− +∆ t (( ) ) 2 =lim SS1 − arctan ( ) (42) S→∞ π does not equal zero. An expansion of SS(1− 2arctan ( ) π) about S = ∞

\frac{2S}{\pi\varepsilon}\left[1 - \frac{1}{3\varepsilon^2 S^2} + \frac{1}{5\varepsilon^4 S^4} - \cdots\right],    (43)

shows that the dominant term is proportional to S and thus the limit is infinity as S → ∞.

Sample paths for both normal distributions and ν = 1 Student's t-distributions (Cauchy) have

\lim_{\Delta t \to 0} p(x, t+\Delta t\,|\,z, t) = \delta(x - z)    (44)

as required for consistency [13], p. 47.

For a process with ν = 2 Student's t-distribution increments, the sample paths are continuous if the limit

\lim_{\Delta t \to 0} \frac{1}{\Delta t}\left[1 - \frac{1}{2\sqrt{2\Delta t}}\int_{-\varepsilon}^{\varepsilon}\left(1 + \frac{x^2}{2\Delta t}\right)^{-3/2}\mathrm{d}x\right]    (45)

= \lim_{S \to \infty} S^2\left[1 - \frac{S}{2\sqrt{2}}\int_{-\varepsilon}^{\varepsilon}\left(1 + \frac{S^2 x^2}{2}\right)^{-3/2}\mathrm{d}x\right]    (46)


2 ( SS+ 22) 1 =−=limS  1 2 (47) S→∞ 2 32ε 2 (24 S + ) is zero. Since the limit is not zero, a process with ν = 2 Student’s t-distribution increments does not have con- tinuous paths. For a process with ν = 3 Student’s t-distribution increments, the sample paths are continuous if the limit

−2 2 12 x lim 1−+∫ 1 dx (48) ∆→t 0 ∆∆tt− ∆ 3 π 3 t ( )

− 2 2  2 S Sx =−+lim Sx1 1 d (49) S→∞ ∫− π 3 3 arctan( 3SS 3) 2 ++ 3 S 3arctan(  3 S 3) =lim S 1 − 2 . (50) S→∞ π 2S + 3 ( ) is zero. An expansion about S = ∞ shows that the dominant term for the condition for continuous sample paths for a ν = 3 Student’s t-distribution, Equation (50), is 1 S as S →∞:

\lim_{S \to \infty}\left[\frac{4\sqrt{3}}{\pi\varepsilon^3 S} - \frac{72\sqrt{3}}{5\pi\varepsilon^5 S^3} + O\left(S^{-5}\right)\right] = 0.    (51)

Processes with Student's t-distribution increments with ν ≥ 3 have continuous paths since the limit as S → ∞ is 0. However, the fourth moments for Student's t-distributions with ν ≤ 4 do not exist. Thus it would not be possible to use the mean-square variance of the quadratic variation to prove convergence of the expectation of the quadratic variation E{Q} to σ_o²t. See Equation (22) and the associated discussion. The moments exist for truncated and effectively truncated Student's t-distributions.¹
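The behaviour of the continuity condition can be illustrated numerically. The sketch below (SciPy assumed; ε and the time steps are illustrative) evaluates the tail mass beyond a fixed ε divided by the time step, with the scale factor taken as β = √Δt; the quantity vanishes for normal and ν = 3 increments but not for ν = 1 or ν = 2 increments, in line with Equations (38), (43), (47), and (51).

```python
# Tail mass beyond epsilon, divided by the time step, for shrinking time steps.
import numpy as np
from scipy.stats import norm, t as student_t

eps = 0.5
for dt in (1e-2, 1e-4, 1e-6):
    beta = np.sqrt(dt)                                   # scale factor beta = sqrt(dt)
    row = [2.0 * norm.sf(eps, scale=beta) / dt]          # normal increments
    row += [2.0 * student_t.sf(eps, df=nu, scale=beta) / dt for nu in (1, 2, 3)]
    print(dt, ["%.3g" % v for v in row])                 # -> 0, -> infinity, -> 1/eps^2, -> 0
```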

3.1. Sample Paths: Truncated Cauchy

Consider a truncated Cauchy with support [−bβ, bβ], or [−b/S, b/S] with S = 1/β. The variance for a truncated Cauchy with support [−bβ, bβ] is given by Equation (29) since the mean is zero and truncation keeps the integral finite. The truncation need not be severe to obtain useful results; the variance diverges linearly with b. The condition for continuity is that the limit

\lim_{S \to \infty}\left[1 - \frac{\pi}{2\arctan(b)}\int_{-\varepsilon}^{\varepsilon} \frac{S}{\pi\left(S^2 x^2 + 1\right)}\,\mathrm{rect}\left(\frac{x S}{2b}\right)\mathrm{d}x\right]    (52)

= \lim_{S \to \infty}\left[1 - \frac{\arctan\left(\min(\varepsilon S, b)\right)}{\arctan(b)}\right]    (53)

equals zero for any ε > 0. The rectangle function, rect(x) = 1 if |x| < 1/2 and = 0 otherwise, has been used to truncate the distribution. If εS ≥ b, then the limit is zero and a process with truncated Cauchy increments should have a continuous path. However, it is not clear that the limit is zero for any ε > 0. The limit is zero when all the area of the pdf is

¹A reader might wonder why truncate a distribution. It is convenient but not physical to assume support is [−∞, +∞]. Consider emission from a Lorentzian lineshape. Energy is conserved and the energy of the quantum of radiation must be positive. Infinite energy is not realistic. Returns for the market are randomly distributed. Returns of < −100% are nonsensical. Wealth is limited, implying an upper limit for returns.


enclosed by ε for any ε > 0. In fact, the limit is zero only for ε equal to "infinity", since the maximum value (i.e., "infinity") allowed is b/S. Support for the truncated Cauchy distribution was taken as [−b/S, b/S]. The support was chosen to scale with the scale factor of the distribution so that the distribution was truncated to include the same fraction of the area of the not-truncated distribution, regardless of the choice of the scale factor 1/S. That is, the truncation was chosen such that the value of Z_1(b), which is defined by Equation (28), is independent of the scale factor.

3.2. Sample Paths: Effectively Truncated ν = 1 Distribution

The pdf for a mixture of a left-truncated chi distribution for ν = 1, a = 1/σ, a > q, and a normal distribution is [6] [7]

f_E(\xi; \nu = 1, \beta, q) = \frac{\beta\,\exp\left(-q^2\left(\xi^2 + \beta^2\right)/2\right)}{\left(\xi^2 + \beta^2\right)\pi\left(1 - \mathrm{erf}\left(\beta q/\sqrt{2}\right)\right)}.    (54)

The tails of the pdf decrease as exp(−q²ξ²/2) for non-zero q, where 1/q is the maximum value of σ that is included in the mixing integral.
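A minimal check of Equation (54) follows (SciPy assumed; the value of q used here is illustrative and larger than the q = 0.025 used for the figures, simply to keep the quadrature quick): the pdf integrates to one and reduces to the Cauchy form of Equation (1) as q → 0.

```python
# Effectively truncated nu = 1 pdf of Equation (54): normalization and q -> 0 limit.
import numpy as np
from scipy.integrate import quad
from scipy.special import erf

def f_E1(xi, beta, q):
    num = beta * np.exp(-q**2 * (xi**2 + beta**2) / 2)
    den = (xi**2 + beta**2) * np.pi * (1 - erf(beta * q / np.sqrt(2)))
    return num / den

beta, q = 1.0, 0.25
area, _ = quad(lambda x: f_E1(x, beta, q), -np.inf, np.inf)
print(area)                                                  # ~ 1
print(f_E1(2.0, beta, 1e-12), 1 / (np.pi * (2.0**2 + 1)))    # ~ Cauchy pdf at xi = 2
```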

The condition for continuous sample paths for f_E(ξ; ν = 1, β, q) can be written in several equivalent forms:

\lim_{\beta^2 \to 0} \frac{1}{\beta^2}\left[1 - \int_{-\varepsilon}^{\varepsilon} f_E(\xi; \nu = 1, \beta, q)\,\mathrm{d}\xi\right] = 0,    (55)

which, owing to symmetry in ξ, is equivalent to

\lim_{\beta^2 \to 0} \frac{1}{\beta^2}\int_{\varepsilon}^{\infty} f_E(\xi; \nu = 1, \beta, q)\,\mathrm{d}\xi = 0.    (56)

With S = 1/β², the equation can be written as

\lim_{S \to \infty} S \int_{\varepsilon}^{\infty} f_E\left(\xi; \nu = 1, \beta = 1/\sqrt{S}, q\right)\mathrm{d}\xi = 0.    (57)

Consider the inequality

f_E\left(\xi; \nu = 1, \beta = 1/\sqrt{S}, q\right) < \frac{\exp\left(-q^2\left(\xi^2 + 1/S\right)/2\right)}{\sqrt{S}\left(\xi^2 + 0\right)\pi\left(1 - \mathrm{erf}\left(q/\sqrt{2S}\right)\right)},    (58)

which is obtained by replacing ξ² + β² in the denominator of Equation (54) by ξ² + 0. An analytic expression for the integral of the upper bound of the inequality can be found. The dominant term in a series expansion for

S \times \int_{\varepsilon}^{\infty} \frac{\exp\left(-q^2\left(\xi^2 + 1/S\right)/2\right)}{\sqrt{S}\left(\xi^2 + 0\right)\pi\left(1 - \mathrm{erf}\left(q/\sqrt{2S}\right)\right)}\,\mathrm{d}\xi    (59)

about S = ∞ with q = q_o√S, where q_o is a positive number, is

\frac{\exp\left(-q_o^2\left(1 + \varepsilon^2 S\right)/2\right)}{\sqrt{S}\,\pi\, q_o^2\,\varepsilon^3\left(1 - \mathrm{erf}\left(q_o/\sqrt{2}\right)\right)}    (60)

and the limit

\lim_{S \to \infty} \frac{\exp\left(-q_o^2\left(1 + \varepsilon^2 S\right)/2\right)}{\sqrt{S}\,\pi\, q_o^2\,\varepsilon^3\left(1 - \mathrm{erf}\left(q_o/\sqrt{2}\right)\right)} = 0    (61)

for ε > 0. The scaling q = q_o√S ensures that the truncation scales appropriately with S and thus keeps constant the area in the tails of the pdf that has been truncated.

Since probability is ≥ 0, i.e., f_E(ξ; ν = 1, β = 1/√S, q) ≥ 0, and the limit as S → ∞ of the integral of the


upper bound times S is zero, then

\lim_{S \to \infty} S \int_{\varepsilon}^{\infty} f_E\left(\xi; \nu = 1, \beta = 1/\sqrt{S}, q = q_o\sqrt{S}\right)\mathrm{d}\xi = 0    (62)

and the sample paths for stochastic processes that are created by summing independent draws from effectively truncated ν = 1 Student's t-distributions (i.e., effectively truncated Cauchy distributions) are continuous. The same reasoning can be applied to all effectively truncated Student's t-distributions with ν ≥ 1, and thus all stochastic processes created by summing independent draws from effectively truncated Student's t-distributions have continuous sample paths.

Figure 3 is composed of random walks wherein the increments for the walks were obtained from a uniform distribution, a normal distribution, and ν = 1 distributions. All walks were manufactured from the same sequence of 2048 draws from a uniform distribution. This allows comparison of the walks and demonstrates the moderating influence of truncation and effective truncation on the walks. The parameter b was arbitrarily chosen to equal 50, and q = 0.025 was chosen to match approximately the quadratic variation for the walks for the truncated and effectively truncated ν = 1 distributions. The walks with truncated and effectively truncated ν = 1 increments are more angular than the walk with normal increments. The ν = 1 walk shows the occasional large jump. The magnitudes of the jumps are significantly smaller in the truncated and effectively truncated ν = 1 walks. Note that the ν = 1 increments were scaled for presentation of the walks in the figure. See Table 1 and

Figure 3. Random walks for draws from a uniform distribution (red), a normal distribution (black), a ν = 1 distribution (blue), a truncated ν = 1 distribution (cyan), and an effectively truncated ν = 1 distribution (magenta) for β = 1, b = 50, and q = 0.025. All random walks were derived from the same uniform distribution. To display the data, the increments drawn from the ν = 1 distribution were divided by 50 and the increments drawn from the truncated and effectively truncated ν = 1 distributions were divided by 3.
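A minimal sketch of how walks of this kind can be generated from a single common sequence of uniform variates follows (SciPy assumed; the construction by inverse CDFs shown here is illustrative and is not necessarily the procedure used to produce the figure):

```python
# Random walks built from one common sequence of uniform variates.
import numpy as np
from scipy.stats import norm, t as student_t

rng = np.random.default_rng(2016)
u = rng.uniform(size=2048)                       # common uniform draws on (0, 1)

beta, b = 1.0, 50.0
walk_uniform = np.cumsum(u - 0.5)
walk_normal = np.cumsum(norm.ppf(u, scale=beta))
walk_cauchy = np.cumsum(student_t.ppf(u, df=1, scale=beta))
# Truncated nu = 1 increments: inverse CDF restricted to (-b*beta, b*beta).
walk_trunc = np.cumsum(beta * np.tan((2 * u - 1) * np.arctan(b)))

print(walk_uniform[-1], walk_normal[-1], walk_cauchy[-1], walk_trunc[-1])
```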

Table 1. Descriptive statistics for 2048 draws from distributions with β = 1, q = 0.025, and b = 50.

Distribution | Mean | Std Dev | Skewness | Kurtosis | Q | Qi
Uniform − 0.5 | −0.008 | 0.292 | 0.058 | 1.80 | 174.7 | 0.085
Normal | −0.025 | 1.023 | 0.008 | 2.96 | 2,142 | 1.046
f(ν = 1) | 0.346 | 63.47 | 30.06 | 1231 | 8,245,552 | 4,026
f_T(ν = 1) | −0.143 | 5.849 | −0.903 | 24.33 | 70,068 | 34.21
f_E(ν = 1) | −0.134 | 5.823 | −0.318 | 41.98 | 69,440 | 33.91
f(ν = 3) | −0.039 | 1.731 | 0.328 | 20.16 | 6,136 | 2.996
f_T(ν = 3) | −0.040 | 1.727 | 0.282 | 19.35 | 6,107 | 2.982
f_E(ν = 3) | −0.039 | 1.727 | 0.288 | 19.45 | 6,109 | 2.983


Table 2 for sample and parent statistics of the distributions used to generate the figures. For a Cauchy distribution with β = 1, Pr{|ξ| > 50} = 0.0127. For a normal distribution, Pr{|ξ| > 2.492σ} = 0.0127.

Figure 4 displays the pdf's on a log₁₀ − log₂ plot. This figure clearly shows that there is little difference between the truncated and effectively truncated ν = 1 pdf for |x| < b, where b is the point of truncation. The tail of the truncated distribution falls off infinitely fast whereas the tails of the effectively truncated distribution fall off at the same rate as a Gaussian pdf. Since the random walk with Gaussian increments has continuous sample paths, one would expect an effectively truncated ν = 1 distribution to have continuous sample paths as the roll-off of the tails is similar. And since the tails of the truncated ν = 1 distribution roll off faster than a Gaussian, one would expect a truncated ν = 1 walk to have continuous sample paths. The tails of the Cauchy distribution (i.e., a not-truncated ν = 1 Student's t-distribution) do not roll off as fast as a Gaussian. A Cauchy random walk does not have continuous sample paths. Large steps in the Cauchy random walk are obvious in Figure 3.

There is little difference in shape between a truncated Student's t-distribution and an effectively truncated Student's t-distribution. From taking limits of the pdf, continuous sample paths were found for the effectively truncated ν = 1 distribution, yet a truncated ν = 1 distribution did not appear to have continuous sample paths (cf. Equation (53) and related discussion). This discrepancy would seem to point to a problem with the condition for continuous sample paths or the interpretation of the condition for continuous sample paths.

Table 2. Parent statistics for distributions with β = 1, q = 0.025, and b = 50.

Distribution | Mean | Std Dev | Skewness | Kurtosis | Q | Qi
Uniform − 0.5 | 0 | 0.289 | 0 | 1.80 | 171 | 0.083
Normal | 0 | 1 | 0 | 3 | 2,048 | 1
f(ν = 1) | 0† | - | 0† | - | - | -
f_T(ν = 1) | 0 | 5.589 | 0 | 27.50 | 63,982 | 31.24
f_E(ν = 1) | 0 | 5.617 | 0 | 52.28 | 64,624 | 31.55
f(ν = 3) | 0 | √3 = 1.732 | 0† | - | 6,144 | 3
f_T(ν = 3) | 0 | 1.693 | 0 | 37.04 | 5,873 | 2.868
f_E(ν = 3) | 0 | 1.702 | 0 | 56.14 | 5,932 | 2.896

Figure 4. Plots of a normal distribution (black), a ν = 1 distribution (red), a truncated ν = 1 distribution (cyan), and an effectively truncated ν = 1 distribution (blue) for β = 1, b = 50, and q = 0.025. Note the scaling on the axes: the ordinate is log₁₀ and the abscissa is log₂.


3.3. Sample Paths: Effectively Truncated ν = 3 Student's t-Distribution

The pdf for a mixture of a left-truncated chi distribution for ν = 3, a = 1/σ, a > q, and a normal distribution is [6] [7]

f_E(\xi; \nu = 3, \beta, q) = \frac{3\sqrt{3}\,\beta^3\left(q^2\xi^2 + 3q^2\beta^2 + 2\right)\exp\left(-q^2\left(\xi^2 + 3\beta^2\right)/2\right)}{\sqrt{\pi}\left(\xi^2 + 3\beta^2\right)^2\left(\sqrt{\pi} + \sqrt{6}\,\beta q\,\exp\left(-3\beta^2 q^2/2\right) - \sqrt{\pi}\,\mathrm{erf}\left(\sqrt{6}\,\beta q/2\right)\right)}.    (63)

The left truncation of the chi distribution imparts a multiplicative Gaussian envelope that effectively truncates the underlying t-distribution.
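A minimal check of Equation (63) follows (SciPy assumed; parameters illustrative): the pdf integrates to one and collapses to the ordinary ν = 3 Student's t pdf as q → 0.

```python
# Effectively truncated nu = 3 pdf of Equation (63): normalization and q -> 0 limit.
import numpy as np
from scipy.integrate import quad
from scipy.stats import t as student_t
from scipy.special import erf

def f_E3(xi, beta, q):
    num = 3 * np.sqrt(3) * beta**3 * (q**2 * xi**2 + 3 * q**2 * beta**2 + 2) \
          * np.exp(-q**2 * (xi**2 + 3 * beta**2) / 2)
    den = np.sqrt(np.pi) * (xi**2 + 3 * beta**2)**2 \
          * (np.sqrt(np.pi) + np.sqrt(6) * beta * q * np.exp(-3 * beta**2 * q**2 / 2)
             - np.sqrt(np.pi) * erf(np.sqrt(6) * beta * q / 2))
    return num / den

beta, q = 1.0, 0.025
area, _ = quad(lambda x: f_E3(x, beta, q), -np.inf, np.inf)
print(area)                                                          # ~ 1
print(f_E3(2.0, beta, 1e-12), student_t.pdf(2.0, df=3, scale=beta))  # ~ equal as q -> 0
```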

Figure 5 displays a normal pdf and ν = 3 pdfs on a log₁₀ − log₂ plot. Note the similarity between the ν = 3 distribution, the truncated ν = 3 distribution, and the effectively truncated ν = 3 distribution for |x| < b, where b = 50 is the point of truncation. The value of q = 0.025 was chosen to yield approximately the same standard deviation for the effectively truncated distribution as was obtained with the truncated distribution. See Table 1 and Table 2 for sample and parent statistics of the distributions. The effectively truncated distribution is just starting to show the same slope in the tail as the normal distribution. This owes to the exp(−x²) characteristic from truncation of the underlying χ distribution in the χ-normal mixture that creates the Student's t-distribution. For a ν = 3 t-distribution with β = 1, Pr{|ξ| > 50} = 1.76 × 10⁻⁵. For a normal distribution, Pr{|ξ| > 4.293σ} = 1.76 × 10⁻⁵.

Figure 6 is similar to Figure 3 except that the increments were drawn from ν = 3 Student's t-distributions and the walks were not scaled. There are five random walks displayed in Figure 6: a walk with uniform increments (red), a walk with normal increments (black), and walks with ν = 3 increments (not-truncated, truncated, and effectively truncated). The three ν = 3 walks almost perfectly overlap, showing that the walks are almost identical, as one might surmise from Figure 5. The normal walk and the ν = 3 sample paths appear to have similar features.

Figure 7 displays random walks with uniform increments, with Gaussian increments, and with ν = 3 truncated increments. All walks were created from the same 2048 random draws from a uniform distribution. The sample paths for the Gaussian increments are displayed twice; once with a scale factor of 1 and once with a scale factor of 1.693. The multiplicative scale factor of 1.693 is the ratio of standard deviations of the increments for the parent distributions of the normal and truncated ν = 3 distributions. The scaled plot is presented to facilitate comparison of the truncated ν = 3 and normal sample paths. It is clear that the ν = 3 and normal sample paths (for the 2048 time steps displayed) are similar.

All walks shown in Figure 3, Figure 6, and Figure 7 were manufactured from the same sequence of variates drawn from a uniform distribution. This allows comparison of the effect of different distributions (uniform,

Figure 5. Plots of a ν = 3 distribution (blue), a truncated ν = 3 distribution (magenta), and an effectively truncated ν = 3 distribution (cyan) for β = 1, b = 50, and q = 0.025. Note the scaling on the axes: the ordinate is log₁₀ and the abscissa is log₂. For comparison, a normal distribution (black) and a ν = 1 (i.e., a Cauchy) distribution (red) are also plotted.


Figure 6. Random walks for draws from a uniform distribution (red), a normal distribution (black), a ν = 3 distribution (blue), a truncated ν = 3 distribution (cyan), and an effectively truncated ν = 3 distribution (magenta) for β = 1, b = 50, and q = 0.025. All random walks were derived from the same uniform distribution. The increments were not scaled for this figure. Note that the three ν = 3 walks almost perfectly overlap.

Figure 7. Random walks for draws from a uniform distribution (red), a normal distribution (black), a truncated ν = 3 distribution (blue), and a scaled normal distribution (black) for β = 1, b = 50, and q = 0.025. The increments for the scaled normal distribution were multiplied by 1.693. All random walks were derived from the same uniform distribution.

Gaussian, Cauchy, ν = 1 and ν = 3 distributions) and truncation (both truncation by a rectangle function and effective truncation) on the sample paths. Table 1 lists descriptive statistics for the draws that were used to create the sample paths shown in Figure 3, Figure 6, and Figure 7. Table 2 lists the values found in Table 1, but calculated for the parent distributions. In

Table 1 and Table 2, Q is the quadratic variation and Qi is the average quadratic variation per step. There is good correspondence between the values obtained for the sample parameters and the parent parameters. In Table 2, the † symbol indicates a value that was obtained by a symmetry argument.

The data in Table 1 and Table 2, and in Figures 3-6, clearly show the effectiveness of truncation and effective truncation.

4. Conclusions

Independent Student's t increments, from which stochastic processes such as random walks are created, are investigated. Attention is restricted to increments from not-truncated, truncated, and effectively truncated Student's t-distributions with shape parameters ν ≥ 1, which covers a broad range of distributions: from Cauchy distributions, for which ν = 1, to normal or Gaussian distributions, for which ν = ∞. A Student's t-distribution can be obtained as a mixture of a chi distribution of the reciprocal of σ and a normal distribution with standard deviation of σ. Effectively truncated t-distributions arise from left-truncation of the chi distributions in the mixing integrals. An effectively truncated Student's t-distribution has a Gaussian envelope that imparts interesting properties to the effectively truncated Student's t-distribution.

Random walks, specifically Markov sequences y_0 = x_0, y_1 = y_0 + x_1, …, y_n = y_{n−1} + x_n, where the x_i, i = 0, 1, …, n, are independent Student's t increments, are considered. The development in time of the scale parameter β of the Student's t-distributions (not-truncated, truncated, and effectively truncated t-distributions) is investigated. It is found for distributions for which the variance exists that β² = αt, where α is a constant and t is time. The variance exists for truncated and effectively truncated Student's t-distributions, and for Student's t-distributions with shape parameter ν > 2. The development in time of the scale parameter for Student's t-distributions is consistent with a normal distribution, for which the variance, which equals β², increases linearly with time. A Gaussian (or normal) distribution is stable under convolution; in general, a Student's t-distribution is not stable under convolution.

The continuity of the sample paths is investigated and it is found that truncated and effectively truncated Student's t-distributions, and Student's t-distributions with ν ≥ 3, have continuous sample paths. This opens the possibility for modelling with a greater number of distributions.

Gardiner [13], p. 79, defines a Wiener process W(t) as

\int_{0}^{t} \xi(s)\,\mathrm{d}s = W(t)    (64)

with the constraints that E{ξ(t)} = 0, that W(t) is a continuous function of time t, and that W(t) is a Markov process. The requirement E{ξ(t)} = 0 is not restrictive as any non-zero mean value of ξ(t) can be considered to be signal. Gardiner explains that one normally assumes that ξ(t) is Gaussian and that E{ξ(t)ξ(t′)} = δ(t − t′). The assumption of Gaussian statistics follows from a desire to have continuous paths for W(t). The white noise property of the Wiener process, i.e., 𝓕{E{ξ(t)ξ(t′)}} = 𝓕{δ(t − t′)} = 1, follows not from the assumption of Gaussian statistics for ξ(t) but from the assumption that W(t) is a Markov process. For a Markov process, W(t) is not determined probabilistically by any past values [13], p. 78, and thus W(t) and W(t′) − W(t) are independent for all t < t′ and E{ξ(t)ξ(t′)} = δ(t − t′).

A random walk process that is constructed from truncated or effectively truncated Student's t increments ξ(t) with ν ≥ 1 is continuous. This process can also be constructed under the Markov assumption that E{ξ(t)ξ(t′)} = δ(t − t′), where for simplicity the increments are assumed to be draws from zero mean distributions such that E{ξ(t)} = E{ξ(t′)} = 0. Student's t-distributions with ν ≥ 3 appear also to be continuous, without the need for truncation. Thus it appears that there exists more than independent Gaussian increments for the construction of random walks with continuous sample paths. Given continuous sample paths and second moments that depend linearly on time for random walks with independent, not-truncated, truncated, and effectively truncated Student's t increments, it seems reasonable to speculate that the diffusion coefficients [13] [14], p. 79, p. 133,

D_n(\xi, t) = \lim_{\Delta t \to 0} \frac{1}{n!\,\Delta t}\left\langle\left(\xi(t+\Delta t) - \xi(t)\right)^n\right\rangle\Big|_{\xi(t)=\xi}    (65)

exist, and thus it should be possible to model noise in Langevin equations with appropriate t-distributions.

Acknowledgements

This work was funded by the Natural Sciences and Engineering Research Council of Canada (NSERC).

References

[1] Student (1908) The Probable Error of a Mean. Biometrika, 6, 1-25. http://dx.doi.org/10.1093/biomet/6.1.1

[2] Zabell, S.L. (2008) On Student's 1908 Article "The Probable Error of a Mean". Journal of the American Statistical Association, 103, 1-7. http://dx.doi.org/10.1198/016214508000000030


[3] Nadarajah, S. (2007) Explicit Expressions for Moments of t Order Statistic. Comptes Rendus Mathematique, 345, 523-526. http://dx.doi.org/10.1016/j.crma.2007.10.027

[4] Praetz, P.D. (1972) The Distribution of Share Price Changes. The Journal of Business, 45, 49-55. http://dx.doi.org/10.1086/295425

[5] Gerig, A., Vicente, J. and Fuentes, M. (2009) Model for Non-Gaussian Intraday Stock Returns. Physical Review E, 80, Article ID: 065102. http://dx.doi.org/10.1103/PhysRevE.80.065102

[6] Cassidy, D.T. (2011) Describing n-Day Returns with Student's t-Distributions. Physica A, 390, 2794-2802. http://dx.doi.org/10.1016/j.physa.2011.03.019

[7] Cassidy, D.T. (2012) Effective Truncation of a Student's t-Distribution by Truncation of the Chi Distribution in a Mixing Integral. Open Journal of Statistics, 2, 519-525. http://dx.doi.org/10.4236/ojs.2012.25067

[8] Bouchaud, J.-P. and Potters, M. (2003) Theory of Financial Risk and Derivative Pricing. 2nd Edition, Cambridge University Press, Cambridge. http://dx.doi.org/10.1017/CBO9780511753893

[9] Nadarajah, S. and Dey, D.K. (2005) Convolutions of the T Distribution. Computers and Mathematics with Applications, 49, 715-721. http://dx.doi.org/10.1016/j.camwa.2004.10.032

[10] Bracewell, R.N. (2000) The Fourier Transform and Its Applications. 3rd Edition, McGraw-Hill, New York.

[11] Papoulis, A. (1965) Probability, Random Variables, and Stochastic Processes. McGraw-Hill, New York.

[12] Shreve, S.E. (2004) Stochastic Calculus for Finance II: Continuous-Time Models. Springer, New York. http://dx.doi.org/10.1007/978-1-4757-4296-1

[13] Gardiner, C. (2009) Stochastic Methods: A Handbook for the Natural and Social Sciences. 4th Edition, Springer-Verlag, Berlin.

[14] Lax, M., Cai, W. and Xu, M. (2006) Random Processes in Physics and Finance. Oxford University Press, New York. http://dx.doi.org/10.1093/acprof:oso/9780198567769.001.0001
