arXiv:2003.09471v2 [math.PR] 29 Jul 2020

Skellam Type Processes of Order K and Beyond

Neha Gupta$^a$, Arun Kumar$^a$, Nikolai Leonenko$^b$

$^a$ Department of Mathematics, Indian Institute of Technology Ropar, Rupnagar, Punjab - 140001, India
$^b$ Cardiff School of Mathematics, Cardiff University, Senghennydd Road, Cardiff, CF24 4AG, UK

Abstract

In this article, we introduce the Skellam process of order $k$ and its running average. We also discuss the time-changed Skellam process of order $k$. In particular, we discuss the space-fractional Skellam process and the tempered space-fractional Skellam process, obtained via time changes in the Poisson processes by an independent stable subordinator and tempered stable subordinator, respectively. We derive the marginal probabilities, L\'evy measures and governing difference-differential equations of the introduced processes. Our results generalize the Skellam process and the running average of the Poisson process in several directions.

Key words: Skellam process, subordination, L\'evy measure, Poisson process of order k, running average.

1 Introduction

The Skellam distribution is obtained by taking the difference between two independent Poisson distributed random variables; it was introduced for the case of different intensities $\lambda_1, \lambda_2$ in [1] and for equal means in [2]. For large values of $\lambda_1+\lambda_2$ the distribution can be approximated by the normal distribution, and if $\lambda_2$ is very close to 0 then the distribution tends to a Poisson distribution with intensity $\lambda_1$. Similarly, if $\lambda_1$ tends to 0, the distribution tends to that of a Poisson random variable with intensity $\lambda_2$ taking non-positive integer values. The Skellam random variable is infinitely divisible, since it is the difference of two infinitely divisible random variables (see Prop. 2.1 in [3]). Therefore, one can define a continuous time L\'evy process for the Skellam distribution, which is called the Skellam process. The Skellam process is the integer valued L\'evy process which can also be obtained by taking the difference of two independent Poisson processes; its marginal probability mass function (PMF) involves the modified Bessel function of the first kind. The Skellam process has various applications in different areas, such as modeling the intensity difference of pixels in cameras (see [4]) and modeling the difference of the number of goals of two competing teams in a football game in [5]. Models based on the difference of two point processes are proposed in [6, 7, 8, 9]. Recently, time-fractional Skellam processes have been studied in [10], which are obtained by time-changing the Skellam process with an inverse stable subordinator. Further, they provided the application of the time-fractional Skellam process in modeling the arrivals of jumps in high frequency trading data. It is shown that the inter-arrival times between the positive and negative jumps follow the Mittag-Leffler distribution rather than the exponential distribution. Similar observations are observed in the case of Danish fire insurance data (see [11]). Buchak and Sakhno in [12] also have proposed the governing equations for time-fractional Skellam processes. Recently, [13] introduced the time-changed Poisson process of order $k$, which is obtained by time-changing the Poisson process of order $k$ (see [14]) by general subordinators. In this paper we introduce the Skellam process of order $k$ and its running average. We also discuss the time-changed Skellam process of order $k$. In particular, we discuss the space-fractional Skellam process and the tempered space-fractional Skellam process via time changes in the Poisson processes by an independent stable subordinator and tempered stable subordinator, respectively. We obtain closed form expressions for the marginal distributions of the considered processes and other important properties. The Skellam process is used to model the difference between the number of goals of two teams in a football match. Similarly, the Skellam process of order $k$ can be used to model the difference between the number of points scored by two competing teams in a basketball match where $k = 3$. The remainder of this paper proceeds as follows: in Section 2, we introduce all the relevant definitions and results. We also derive the L\'evy densities for the space- and tempered space-fractional Poisson processes. In Section 3, we introduce and study the running average of the Poisson process of order $k$. Section 4 is dedicated to the Skellam process of order $k$. Section 5 deals with the running average of the Skellam process of order $k$. In Section 6, we discuss the time-changed Skellam process of order $k$. In Section 7, we determine the marginal PMFs, the governing equations for the marginal PMFs, the L\'evy densities and the generating functions for the space-fractional Skellam process and the tempered space-fractional Skellam process.

2 Preliminaries

In this section, we collect relevant definitions and some results on the Skellam process, subordinators, the space-fractional Poisson process and the tempered space-fractional Poisson process. These results will be used to define the space-fractional Skellam processes and tempered space-fractional Skellam processes.

2.1 Skellam process

In this section, we revisit the Skellam process and also provide a characterization of it. Let $S(t)$ be a Skellam process, such that
\[
S(t) = N_1(t) - N_2(t), \quad t\geq 0,
\]
where $N_1(t)$ and $N_2(t)$ are two independent homogeneous Poisson processes with intensities $\lambda_1>0$ and $\lambda_2>0$, respectively. The Skellam process is defined in [8] and the distribution has been introduced and studied in [1], see also [2]. This process is symmetric only when $\lambda_1=\lambda_2$. The PMF $s_k(t) = \mathbb{P}(S(t)=k)$ of $S(t)$ is given by (see e.g. [1, 10])
\[
s_k(t) = e^{-t(\lambda_1+\lambda_2)}\left(\frac{\lambda_1}{\lambda_2}\right)^{k/2}I_{|k|}(2t\sqrt{\lambda_1\lambda_2}), \quad k\in\mathbb{Z},
\tag{2.1}
\]
where $I_k$ is the modified Bessel function of the first kind (see [15], p. 375),
\[
I_k(z) = \sum_{n=0}^{\infty}\frac{(z/2)^{2n+k}}{n!(n+k)!}.
\tag{2.2}
\]
The PMF $s_k(t)$ satisfies the following differential-difference equation (see [10])
\[
\frac{d}{dt}s_k(t) = \lambda_1\big(s_{k-1}(t)-s_k(t)\big) - \lambda_2\big(s_k(t)-s_{k+1}(t)\big), \quad k\in\mathbb{Z},
\tag{2.3}
\]
with initial conditions $s_0(0)=1$ and $s_k(0)=0$, $k\neq 0$. The Skellam process is a L\'evy process; its L\'evy density $\nu_S$ is the linear combination of two Dirac delta functions, $\nu_S(y) = \lambda_1\delta_{\{1\}}(y)+\lambda_2\delta_{\{-1\}}(y)$, and the corresponding L\'evy exponent is given by
\[
\phi_{S(1)}(\theta) = \int_{-\infty}^{\infty}(1-e^{-\theta y})\,\nu_S(y)\,dy.
\]
The moment generating function (MGF) of the Skellam process is
\[
\mathbb{E}[e^{\theta S(t)}] = e^{-t(\lambda_1+\lambda_2-\lambda_1 e^{\theta}-\lambda_2 e^{-\theta})}, \quad \theta\in\mathbb{R}.
\tag{2.4}
\]
With the help of the MGF, one can easily find the moments of the Skellam process. In the next result, we give a characterization of the Skellam process which, to the best of our knowledge, is not available in the literature.

Theorem 2.1. Suppose an arrival process has independent and stationary increments and also satisfies the following incremental condition; then the process is a Skellam process:
\[
\mathbb{P}(S(t+\delta)=m\,|\,S(t)=n) =
\begin{cases}
\lambda_1\delta + o(\delta), & m>n,\ m=n+1;\\
\lambda_2\delta + o(\delta), & m<n,\ m=n-1;\\
1-(\lambda_1+\lambda_2)\delta + o(\delta), & m=n;\\
o(\delta), & \text{otherwise.}
\end{cases}
\]
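The PMF (2.1) is easy to evaluate numerically. The following Python sketch (our illustration, not from the paper; parameter values are arbitrary) computes (2.1) with scipy.special.iv and compares it against a Monte Carlo estimate obtained as the difference of two independent Poisson variates.

```python
import numpy as np
from scipy.special import iv  # modified Bessel function of the first kind

def skellam_pmf(k, t, lam1, lam2):
    """PMF (2.1) of the Skellam process S(t) = N1(t) - N2(t)."""
    return (np.exp(-t * (lam1 + lam2)) * (lam1 / lam2) ** (k / 2)
            * iv(abs(k), 2 * t * np.sqrt(lam1 * lam2)))

t, lam1, lam2 = 2.0, 1.5, 1.0                  # illustrative values
rng = np.random.default_rng(0)
sample = rng.poisson(lam1 * t, 100_000) - rng.poisson(lam2 * t, 100_000)

for k in (-2, 0, 3):
    print(k, skellam_pmf(k, t, lam1, lam2), np.mean(sample == k))
```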

2.2 Poisson process of order k (PPoK)

In this section, we recall the definition and some important properties of the Poisson process of order $k$ (PPoK). Kostadinova and Minkova (see [14]) introduced and studied the PPoK. Let $x_1,x_2,\ldots,x_k$ be non-negative integers and $\zeta_k = x_1+x_2+\cdots+x_k$, $\Pi_k! = x_1!x_2!\ldots x_k!$, and
\[
\Omega(k,n) = \{X = (x_1,x_2,\ldots,x_k)\ |\ x_1+2x_2+\cdots+kx_k = n\}.
\tag{2.5}
\]
Also, let $\{N^{k}(t)\}_{t\geq 0}$ represent the PPoK with rate parameter $\lambda t$; then the probability mass function (pmf) is given by
\[
p_{n}^{N^{k}}(t) = \mathbb{P}(N^{k}(t)=n) = \sum_{X\in\Omega(k,n)}e^{-k\lambda t}\frac{(\lambda t)^{\zeta_k}}{\Pi_k!}.
\tag{2.6}
\]
The pmf of $N^{k}(t)$ satisfies the following differential-difference equations (see [14])
\[
\frac{d}{dt}p_{n}^{N^{k}}(t) = -k\lambda p_{n}^{N^{k}}(t) + \lambda\sum_{j=1}^{n\wedge k}p_{n-j}^{N^{k}}(t), \quad n=1,2,\ldots,
\]
\[
\frac{d}{dt}p_{0}^{N^{k}}(t) = -k\lambda p_{0}^{N^{k}}(t),
\tag{2.7}
\]
with initial conditions $p_{0}^{N^{k}}(0)=1$ and $p_{n}^{N^{k}}(0)=0$ for $n\geq 1$, where $n\wedge k = \min\{k,n\}$. The characteristic function of the PPoK $N^{k}(t)$ is
\[
\phi_{N^{k}(t)}(u) = \mathbb{E}(e^{iuN^{k}(t)}) = e^{-\lambda t\big(k-\sum_{j=1}^{k}e^{iuj}\big)},
\tag{2.8}
\]
where $i=\sqrt{-1}$. The PPoK is a L\'evy process, so it is infinitely divisible, i.e. $\phi_{N^{k}(t)}(u) = (\phi_{N^{k}(1)}(u))^{t}$. The L\'evy measure for the PPoK is easy to derive and is given by

\[
\nu_{N^{k}}(x) = \lambda\sum_{j=1}^{k}\delta_{j}(x),
\]
where $\delta_{j}$ is the Dirac delta function concentrated at $j$. The transition probabilities of the PPoK $\{N^{k}(t)\}_{t\geq 0}$ are also given by Kostadinova and Minkova [14],
\[
\mathbb{P}(N^{k}(t+\delta)=m\,|\,N^{k}(t)=n) =
\begin{cases}
1-k\lambda\delta, & m=n;\\
\lambda\delta, & m=n+i,\ i=1,2,\ldots,k;\\
0, & \text{otherwise.}
\end{cases}
\tag{2.9}
\]

The probability generating function (pgf) $G^{N^{k}}(s,t)$ is given by (see [14])
\[
G^{N^{k}}(s,t) = e^{-\lambda t\big(k-\sum_{j=1}^{k}s^{j}\big)}.
\tag{2.10}
\]
The mean, variance and covariance function of the PPoK are given by
\[
\mathbb{E}[N^{k}(t)] = \frac{k(k+1)}{2}\lambda t;\qquad
\operatorname{Var}[N^{k}(t)] = \frac{k(k+1)(2k+1)}{6}\lambda t;\qquad
\operatorname{Cov}[N^{k}(t),N^{k}(s)] = \frac{k(k+1)(2k+1)}{6}\lambda(t\wedge s).
\tag{2.11}
\]
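Since (2.8) can be rewritten as $e^{-k\lambda t(1-\frac{1}{k}\sum_{j=1}^{k}e^{iuj})}$, the PPoK is a compound Poisson process with rate $k\lambda$ and jump sizes uniform on $\{1,\ldots,k\}$, which gives a direct way to simulate it. The sketch below (our own illustration, with arbitrary parameters) checks the empirical mean and variance against (2.11).

```python
import numpy as np

def simulate_ppok(t, lam, k, n_samples, rng):
    """Sample N^k(t): a Poisson(k*lam*t) number of jumps, each uniform on {1,...,k}."""
    counts = rng.poisson(k * lam * t, size=n_samples)
    return np.array([rng.integers(1, k + 1, size=c).sum() for c in counts])

rng = np.random.default_rng(1)
t, lam, k = 3.0, 0.7, 4                        # illustrative values
x = simulate_ppok(t, lam, k, 50_000, rng)
print("mean:", x.mean(), "theory:", k * (k + 1) / 2 * lam * t)
print("var :", x.var(),  "theory:", k * (k + 1) * (2 * k + 1) / 6 * lam * t)
```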

2.3 Subordinators

Let $D_f(t)$ be a real-valued L\'evy process with non-decreasing sample paths whose Laplace transform has the form
\[
\mathbb{E}[e^{-sD_f(t)}] = e^{-tf(s)},
\]
where
\[
f(s) = bs + \int_0^{\infty}(1-e^{-xs})\,\nu(dx), \quad s>0,\ b\geq 0,
\]
is the integral representation of a Bernstein function (see [16]). Bernstein functions are $C^{\infty}$, non-negative and such that $(-1)^{m}\frac{d^{m}}{dx^{m}}f(x)\leq 0$ for $m\geq 1$ (see [16]). Here $\nu$ denotes the non-negative L\'evy measure on the positive half-line such that
\[
\int_0^{\infty}(x\wedge 1)\,\nu(dx) < \infty, \qquad \nu([0,\infty)) = \infty,
\]
and $b$ is the drift coefficient. The right-continuous inverse $E_f(t) = \inf\{u\geq 0 : D_f(u)>t\}$ is the inverse and first exit time of $D_f(t)$, which is non-Markovian with non-stationary and non-independent increments. Next, we analyze some special cases of L\'evy subordinators with drift coefficient $b=0$, that is,
\[
f(s) =
\begin{cases}
p\log\big(1+\frac{s}{\alpha}\big), & p>0,\ \alpha>0 \quad \text{(gamma subordinator);}\\
(s+\mu)^{\alpha}-\mu^{\alpha}, & \mu>0,\ 0<\alpha<1 \quad \text{(tempered $\alpha$-stable subordinator);}\\
\delta\big(\sqrt{2s+\gamma^{2}}-\gamma\big), & \gamma>0,\ \delta>0 \quad \text{(inverse Gaussian subordinator);}\\
s^{\alpha}, & 0<\alpha<1 \quad \text{(stable subordinator).}
\end{cases}
\tag{2.12}
\]
It is worth noting that, among the subordinators given in (2.12), all integer order moments of the stable subordinator are infinite.
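As a quick illustration of (2.12), the gamma subordinator with $f(s)=p\log(1+s/\alpha)$ has $\mathrm{Gamma}(pt,\,1/\alpha)$ marginals, so the relation $\mathbb{E}[e^{-sD_f(t)}]=e^{-tf(s)}$ can be verified by simulation. The snippet below is a minimal sketch with arbitrary parameter values, not code from the paper.

```python
import numpy as np

p, alpha, t, s = 2.0, 3.0, 1.5, 0.8            # illustrative values
f = lambda s: p * np.log(1.0 + s / alpha)      # gamma-subordinator Bernstein function

rng = np.random.default_rng(2)
D_t = rng.gamma(shape=p * t, scale=1.0 / alpha, size=200_000)  # D_f(t) ~ Gamma(pt, 1/alpha)

print("Monte Carlo  :", np.mean(np.exp(-s * D_t)))
print("exp(-t f(s)) :", np.exp(-t * f(s)))
```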

2.4 The space-fractional Poisson process

In this section, we discuss the main properties of the space-fractional Poisson process (SFPP). We also provide the L\'evy density for the SFPP, which is not discussed in the literature. The SFPP $N_{\alpha}(t)$ was introduced in [17] as follows:
\[
N_{\alpha}(t) =
\begin{cases}
N(D_{\alpha}(t)), & t\geq 0,\ 0<\alpha<1,\\
N(t), & t\geq 0,\ \alpha=1.
\end{cases}
\tag{2.13}
\]
The probability generating function (PGF) of this process is of the form
\[
G^{\alpha}(u,t) = \mathbb{E}[u^{N_{\alpha}(t)}] = e^{-\lambda^{\alpha}(1-u)^{\alpha}t}, \quad |u|\leq 1,\ \alpha\in(0,1).
\tag{2.14}
\]
The PMF of the SFPP is
\[
P^{\alpha}(k,t) = \mathbb{P}\{N_{\alpha}(t)=k\}
= \frac{(-1)^{k}}{k!}\sum_{r=0}^{\infty}\frac{(-\lambda^{\alpha})^{r}t^{r}}{r!}\frac{\Gamma(r\alpha+1)}{\Gamma(r\alpha-k+1)}
= \frac{(-1)^{k}}{k!}\,{}_1\psi_1\big[(1,\alpha);(1-k,\alpha)\,\big|\,-\lambda^{\alpha}t\big],
\tag{2.15}
\]
where ${}_h\psi_i(z)$ is the Fox-Wright function (see formula (1.11.14) in [18]). It was shown in [17] that the PMF of the SFPP satisfies the following fractional differential-difference equations
\[
\frac{d}{dt}P^{\alpha}(k,t) = -\lambda^{\alpha}(1-B)^{\alpha}P^{\alpha}(k,t), \quad \alpha\in(0,1],\ k=1,2,\ldots,
\tag{2.16}
\]
\[
\frac{d}{dt}P^{\alpha}(0,t) = -\lambda^{\alpha}P^{\alpha}(0,t),
\tag{2.17}
\]
with initial conditions
\[
P^{\alpha}(k,0) = \delta_{k}(0) =
\begin{cases}
0, & k\neq 0,\\
1, & k=0.
\end{cases}
\tag{2.18}
\]
The fractional difference operator
\[
(1-B)^{\alpha} = \sum_{j=0}^{\infty}\binom{\alpha}{j}(-1)^{j}B^{j}
\tag{2.19}
\]
is defined in [19], where $B$ is the backward shift operator. The characteristic function of the SFPP is

\[
\mathbb{E}[e^{i\theta N_{\alpha}(t)}] = e^{-\lambda^{\alpha}(1-e^{i\theta})^{\alpha}t}.
\tag{2.20}
\]
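For numerical work, the series in (2.15) can be truncated directly; using scipy.special.rgamma (the reciprocal gamma function) avoids the poles of $\Gamma(r\alpha-k+1)$ at non-positive integers. The following is an illustrative sketch with arbitrary parameters, not code from the paper.

```python
import numpy as np
from scipy.special import gamma, rgamma   # rgamma = 1/Gamma, finite where Gamma has poles

def sfpp_pmf(k, t, lam, alpha, n_terms=150):
    """P^alpha(k, t) from (2.15), truncating the series over r."""
    r = np.arange(n_terms)
    terms = ((-lam ** alpha) ** r * t ** r / gamma(r + 1)
             * gamma(r * alpha + 1) * rgamma(r * alpha - k + 1))
    return (-1) ** k / gamma(k + 1) * terms.sum()

t, lam, alpha = 1.0, 1.2, 0.7                  # illustrative values
print([sfpp_pmf(k, t, lam, alpha) for k in range(4)])
print("partial sum over k:", sum(sfpp_pmf(k, t, lam, alpha) for k in range(60)))
# the heavy tail of the SFPP makes the sum over k approach 1 only slowly
```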

Proposition 2.1. The L\'evy density $\nu_{N_{\alpha}}(x)$ of the SFPP is given by

\[
\nu_{N_{\alpha}}(x) = \lambda^{\alpha}\sum_{n=1}^{\infty}(-1)^{n+1}\binom{\alpha}{n}\delta_{n}(x).
\tag{2.21}
\]
Proof. We use the L\'evy-Khintchine formula (see [20]),
\[
\int_{\mathbb{R}\setminus\{0\}}(e^{i\theta x}-1)\,\lambda^{\alpha}\sum_{n=1}^{\infty}(-1)^{n+1}\binom{\alpha}{n}\delta_{n}(x)\,dx
= \lambda^{\alpha}\left[\sum_{n=1}^{\infty}(-1)^{n+1}\binom{\alpha}{n}e^{i\theta n} + \sum_{n=0}^{\infty}(-1)^{n}\binom{\alpha}{n} - 1\right]
= \lambda^{\alpha}\sum_{n=0}^{\infty}(-1)^{n+1}\binom{\alpha}{n}e^{i\theta n}
= -\lambda^{\alpha}(1-e^{i\theta})^{\alpha},
\]
which is the characteristic exponent of the SFPP from equation (2.20).
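The identity established in the proof can be checked numerically by truncating the series in (2.21); the generalized binomial coefficients are available as scipy.special.binom. A small sketch (illustrative values, not from the paper):

```python
import numpy as np
from scipy.special import binom   # binom(alpha, n) for real alpha

lam, alpha, theta = 1.3, 0.6, 0.9              # illustrative values
n = np.arange(1, 20_000)
weights = lam ** alpha * (-1.0) ** (n + 1) * binom(alpha, n)   # Levy masses at x = n
lhs = np.sum(weights * (np.exp(1j * theta * n) - 1.0))
rhs = -lam ** alpha * (1.0 - np.exp(1j * theta)) ** alpha
print(lhs, rhs)   # agree up to a small truncation error from the slowly decaying tail
```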

2.5 Tempered space-fractional Poisson process

The tempered space-fractional Poisson process (TSFPP) can be obtained by subordinating the homogeneous Poisson process $N(t)$ with an independent tempered stable subordinator $D_{\alpha,\mu}(t)$ (see [21]):
\[
N_{\alpha,\mu}(t) = N(D_{\alpha,\mu}(t)), \quad \alpha\in(0,1),\ \mu>0.
\tag{2.22}
\]
This process has finite integer order moments due to the tempered $\alpha$-stable subordinator. The PMF of the TSFPP is given by (see [21])

\[
P^{\alpha,\mu}(k,t) = (-1)^{k}e^{t\mu^{\alpha}}\sum_{m=0}^{\infty}\sum_{r=0}^{\infty}\mu^{m}\frac{(-t)^{r}}{r!}\binom{\alpha r}{m}\binom{\alpha r-m}{k}\lambda^{\alpha r-m}
= e^{t\mu^{\alpha}}\frac{(-1)^{k}}{k!}\sum_{m=0}^{\infty}\frac{\mu^{m}\lambda^{-m}}{m!}\,{}_1\psi_1\big[(1,\alpha);(1-k-m,\alpha)\,\big|\,-\lambda^{\alpha}t\big], \quad k=0,1,\ldots.
\tag{2.23}
\]
The governing difference-differential equation is given by
\[
\frac{d}{dt}P^{\alpha,\mu}(k,t) = -\big((\mu+\lambda(1-B))^{\alpha}-\mu^{\alpha}\big)P^{\alpha,\mu}(k,t).
\tag{2.24}
\]
The characteristic function of the TSFPP is
\[
\mathbb{E}[e^{i\theta N_{\alpha,\mu}(t)}] = e^{-t\big((\mu+\lambda(1-e^{i\theta}))^{\alpha}-\mu^{\alpha}\big)}.
\tag{2.25}
\]

Using a standard conditioning argument, the mean and variance of TSFPP are given by

\[
\mathbb{E}(N_{\alpha,\mu}(t)) = \lambda\alpha\mu^{\alpha-1}t, \qquad \operatorname{Var}(N_{\alpha,\mu}(t)) = \lambda\alpha\mu^{\alpha-1}t + \lambda^{2}\alpha(1-\alpha)\mu^{\alpha-2}t.
\tag{2.26}
\]

Proposition 2.2. The L\'evy density $\nu_{N_{\alpha,\mu}}(x)$ of the TSFPP is

\[
\nu_{N_{\alpha,\mu}}(x) = \sum_{n=1}^{\infty}\mu^{\alpha-n}\binom{\alpha}{n}\lambda^{n}\sum_{l=1}^{n}\binom{n}{l}(-1)^{l+1}\delta_{l}(x), \quad \mu>0.
\tag{2.27}
\]
Proof. Using (2.25), the characteristic exponent of the TSFPP is given by $F(\theta) = -\big((\mu+\lambda(1-e^{i\theta}))^{\alpha}-\mu^{\alpha}\big)$. We find the L\'evy density with the help of the L\'evy-Khintchine formula (see [20]),
\[
\int_{\mathbb{R}\setminus\{0\}}(e^{i\theta x}-1)\sum_{n=1}^{\infty}\mu^{\alpha-n}\binom{\alpha}{n}\lambda^{n}\sum_{l=1}^{n}\binom{n}{l}(-1)^{l+1}\delta_{l}(x)\,dx
= \sum_{n=1}^{\infty}\mu^{\alpha-n}\binom{\alpha}{n}\lambda^{n}\left(\sum_{l=1}^{n}\binom{n}{l}(-1)^{l+1}e^{i\theta l}-\sum_{l=1}^{n}\binom{n}{l}(-1)^{l+1}\right)
= -\sum_{n=1}^{\infty}\binom{\alpha}{n}\mu^{\alpha-n}\big(\lambda(1-e^{i\theta})\big)^{n}
= -\big((\mu+\lambda(1-e^{i\theta}))^{\alpha}-\mu^{\alpha}\big),
\]
hence proved.

3 Running average of PPoK

In this section, we first introduce the running average of the PPoK and its main properties. These results will be used later to discuss the running average of the SPoK.

Definition 3.1 (Running average of PPoK). We define the running average process $N_A^{k}(t)$ by taking the time-scaled integral of the path of the PPoK,
\[
N_A^{k}(t) = \frac{1}{t}\int_0^{t}N^{k}(s)\,ds.
\tag{3.28}
\]
We can write the corresponding differential equation, with initial condition $N_A^{k}(0)=0$,
\[
\frac{d}{dt}\big(N_A^{k}(t)\big) = \frac{1}{t}N^{k}(t) - \frac{1}{t^{2}}\int_0^{t}N^{k}(s)\,ds,
\]
which shows that the process has continuous sample paths of bounded total variation. We explore the compound Poisson representation and distributional properties of the running average of the PPoK. The characteristic function of $N_A^{k}(t)$ is obtained by using Lemma 1 of [22].

Lemma 3.1. If Xt is a L´evy process and Yt its Riemann integral defined by

\[
Y_t = \int_0^{t}X_s\,ds,
\]
then the characteristic functions of $X$ and $Y$ satisfy
\[
\phi_{Y(t)}(u) = \mathbb{E}[e^{iuY(t)}] = \exp\left\{t\int_0^{1}\log\phi_{X(1)}(tuz)\,dz\right\}, \quad u\in\mathbb{R}.
\tag{3.29}
\]

Proposition 3.1. The characteristic function of $N_A^{k}(t)$ is given by
\[
\phi_{N_A^{k}(t)}(u) = \exp\left\{-t\lambda\left(k-\sum_{j=1}^{k}\frac{(e^{iuj}-1)}{iuj}\right)\right\}.
\tag{3.30}
\]
Proof. The result follows by applying Lemma 3.1 to (2.8) after scaling by $1/t$.

Proposition 3.2. The running average process has a compound Poisson representation, such that

\[
Y(t) = \sum_{i=1}^{N(t)}X_i,
\tag{3.31}
\]
where $X_i$, $i=1,2,\ldots$ are independent, identically distributed (iid) copies of a random variable $X$, independent of $N(t)$, and $N(t)$ is a Poisson process with intensity $k\lambda$. Then
\[
Y(t) \stackrel{law}{=} N_A^{k}(t).
\]

Further, $X$ has the following pdf
\[
f_X(x) = \sum_{i=1}^{k}p_{V_i}(x)f_{U_i}(x) = \frac{1}{k}\sum_{i=1}^{k}f_{U_i}(x),
\tag{3.32}
\]
where $V_i$ follows a discrete uniform distribution over $(0,k)$ and $U_i$ follows a continuous uniform distribution over $(0,i)$, $i=1,2,\ldots,k$.

Proof. The pdf of $U_i$ is $f_{U_i}(x) = \frac{1}{i}$, $0\leq x\leq i$. Using (3.32), the characteristic function of $X$ is given by
\[
\phi_X(u) = \frac{1}{k}\sum_{j=1}^{k}\frac{(e^{iuj}-1)}{iuj}.
\]
For fixed $t$, the characteristic function of $Y(t)$ is
\[
\phi_{Y(t)}(u) = e^{-k\lambda t(1-\phi_X(u))} = \exp\left\{-t\lambda\left(k-\sum_{j=1}^{k}\frac{(e^{iuj}-1)}{iuj}\right)\right\},
\tag{3.33}
\]
which is equal to the characteristic function of the running average of the PPoK given in (3.30). Hence, by uniqueness of the characteristic function, the result follows.

Using the definition
\[
m_r = \mathbb{E}[X^{r}] = (-i)^{r}\frac{d^{r}\phi_X(u)}{du^{r}}\Big|_{u=0},
\tag{3.34}
\]
the first two moments of the random variable $X$ given in Proposition 3.2 are $m_1 = \frac{(k+1)}{4}$ and $m_2 = \frac{1}{18}[(k+1)(2k+1)]$. Further, using the mean, variance and covariance of a compound Poisson process, we have
\[
\mathbb{E}[N_A^{k}(t)] = \mathbb{E}[N(t)]\mathbb{E}[X] = \frac{k(k+1)}{4}\lambda t;
\]
\[
\operatorname{Var}[N_A^{k}(t)] = \mathbb{E}[N(t)]\mathbb{E}[X^{2}] = \frac{1}{18}[k(k+1)(2k+1)]\lambda t;
\]
\[
\operatorname{Cov}[N_A^{k}(t),N_A^{k}(s)] = \mathbb{E}[N_A^{k}(t)N_A^{k}(s)] - \mathbb{E}[N_A^{k}(t)]\mathbb{E}[N_A^{k}(s)]
= \frac{1}{18}[k(k+1)(2k+1)]\lambda s - \frac{k^{2}(k+1)^{2}}{16}\lambda^{2}s^{2}, \quad s<t.
\]
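These formulas can be checked by simulating PPoK paths on a grid and integrating them numerically. The sketch below (our illustration, arbitrary parameters) approximates (3.28) with the trapezoidal rule, which introduces a small discretization bias.

```python
import numpy as np

def ppok_path(T, n_steps, lam, k, rng):
    """PPoK path on a regular grid; increments are compound Poisson with uniform {1,...,k} jumps."""
    n_jumps = rng.poisson(k * lam * T / n_steps, size=n_steps)
    incr = np.array([rng.integers(1, k + 1, size=m).sum() for m in n_jumps])
    return np.concatenate(([0], np.cumsum(incr)))

T, lam, k, n_steps = 5.0, 0.5, 3, 500          # illustrative values
rng = np.random.default_rng(3)
grid = np.linspace(0.0, T, n_steps + 1)
avg = [np.trapz(ppok_path(T, n_steps, lam, k, rng), grid) / T for _ in range(5000)]
print("mean:", np.mean(avg), "theory:", k * (k + 1) / 4 * lam * T)
print("var :", np.var(avg),  "theory:", k * (k + 1) * (2 * k + 1) / 18 * lam * T)
```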

Definition 3.2 (Long range dependence (LRD)). Let $X(t)$ be a stochastic process whose correlation function, for $t\geq s$ and fixed $s$, satisfies
\[
c_1(s)t^{-d} \leq \operatorname{Cor}(X(t),X(s)) \leq c_2(s)t^{-d}
\]
for large $t$, $d>0$, $c_1(s)>0$ and $c_2(s)>0$. That is,
\[
\lim_{t\to\infty}\frac{\operatorname{Cor}(X(t),X(s))}{t^{-d}} = c(s)
\]
for some $c(s)>0$ and $d>0$. We say that $X(t)$ has the LRD property if $d\in(0,1)$ and the short-range dependence (SRD) property if $d\in(1,2)$ [23].

Proposition 3.3. The running average of the PPoK has the LRD property.

Proof. Let $0\leq s<t$. Using the covariance and the variance obtained above,
\[
\operatorname{Cor}(N_A^{k}(t),N_A^{k}(s)) = \frac{\frac{1}{18}k(k+1)(2k+1)\lambda s - \frac{k^{2}(k+1)^{2}}{16}\lambda^{2}s^{2}}{\frac{1}{18}k(k+1)(2k+1)\lambda\sqrt{ts}}.
\]

Then for $d=1/2$, it follows that
\[
\lim_{t\to\infty}\frac{\operatorname{Cor}(N_A^{k}(t),N_A^{k}(s))}{t^{-d}} = \frac{\big(8(2k+1)-9(k+1)k\lambda s\big)s^{1/2}}{8(2k+1)} = c(s).
\]

4 Skellam process of order K (SPoK)

In this section, we introduce and study Skellam process of order K (SPoK).

Definition 4.1 (SPoK). Let $N_1^{k}(t)$ and $N_2^{k}(t)$ be two independent PPoK with intensities $\lambda_1>0$ and $\lambda_2>0$. The stochastic process
\[
S^{k}(t) = N_1^{k}(t) - N_2^{k}(t)
\]
is called a Skellam process of order $k$ (SPoK).

Proposition 4.1. The marginal distribution $R_m(t) = \mathbb{P}(S^{k}(t)=m)$ of the SPoK $S^{k}(t)$ is given by
\[
R_m(t) = e^{-kt(\lambda_1+\lambda_2)}\left(\frac{\lambda_1}{\lambda_2}\right)^{m/2}I_{|m|}(2tk\sqrt{\lambda_1\lambda_2}), \quad m\in\mathbb{Z}.
\tag{4.35}
\]
Proof. For $m\geq 0$, using the pmf of the PPoK given in (2.6), it follows that
\[
R_m(t) = \sum_{n=0}^{\infty}\mathbb{P}(N_1^{k}(t)=n+m)\,\mathbb{P}(N_2^{k}(t)=n)
= \sum_{n=0}^{\infty}\left(e^{-k\lambda_1 t}\sum_{X\in\Omega(k,n+m)}\frac{(\lambda_1 t)^{\zeta_k}}{\Pi_k!}\right)\left(e^{-k\lambda_2 t}\sum_{X\in\Omega(k,n)}\frac{(\lambda_2 t)^{\zeta_k}}{\Pi_k!}\right).
\]
Setting $x_i = n_i$ and $n = x + \sum_{i=1}^{k}(i-1)n_i$, we have
\[
R_m(t) = e^{-kt(\lambda_1+\lambda_2)}\sum_{x=0}^{\infty}\frac{(\lambda_2 t)^{x}}{x!}\frac{(\lambda_1 t)^{m+x}}{(m+x)!}\left(\sum_{n_1+n_2+\cdots+n_k=m+x}\binom{m+x}{n_1,n_2,\ldots,n_k}\right)\left(\sum_{n_1+n_2+\cdots+n_k=x}\binom{x}{n_1,n_2,\ldots,n_k}\right)
= e^{-kt(\lambda_1+\lambda_2)}\sum_{x=0}^{\infty}\frac{(\lambda_2 t)^{x}}{x!}\frac{(\lambda_1 t)^{m+x}}{(m+x)!}k^{m+x}k^{x},
\]
using the multinomial theorem and the modified Bessel function given in (2.2). Similarly, it follows for $m<0$.

In the next proposition, we prove the normalizing condition for the SPoK.

Proposition 4.2. The pmf of $S^{k}(t)$ satisfies the following normalizing condition
\[
\sum_{m=-\infty}^{\infty}R_m(t) = 1.
\]
Proof. Using the property of the modified Bessel function of the first kind
\[
\sum_{y=-\infty}^{\infty}\left(\frac{\theta_1}{\theta_2}\right)^{y/2}I_{|y|}(2a\sqrt{\theta_1\theta_2}) = e^{a(\theta_1+\theta_2)},
\]
and putting this result in (4.35), we obtain
\[
\sum_{m=-\infty}^{\infty}R_m(t) = e^{-kt(\lambda_1+\lambda_2)}e^{kt(\lambda_1+\lambda_2)} = 1.
\]
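Proposition 4.2 is easy to confirm numerically by truncating the sum over $m$ in (4.35). A minimal sketch (arbitrary parameter values, not from the paper):

```python
import numpy as np
from scipy.special import iv

def R(m, t, k, lam1, lam2):
    """R_m(t) from (4.35)."""
    return (np.exp(-k * t * (lam1 + lam2)) * (lam1 / lam2) ** (m / 2)
            * iv(abs(m), 2 * t * k * np.sqrt(lam1 * lam2)))

t, k, lam1, lam2 = 1.0, 3, 1.2, 0.8            # illustrative values
print(sum(R(m, t, k, lam1, lam2) for m in range(-200, 201)))   # close to 1
```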

Proposition 4.3. The L´evy measure for SPoK is

\[
\nu_{S^{k}}(x) = \lambda_1\sum_{j=1}^{k}\delta_{j}(x) + \lambda_2\sum_{j=1}^{k}\delta_{-j}(x).
\]
Proof. The proof follows by using the independence of the two PPoK used in the definition of the SPoK.

Remark 4.1. Using (2.10), the pgf of SPoK is given by

\[
G^{S^{k}}(s,t) = \sum_{m=-\infty}^{\infty}s^{m}R_m(t) = e^{-t\big(k(\lambda_1+\lambda_2)-\lambda_1\sum_{j=1}^{k}s^{j}-\lambda_2\sum_{j=1}^{k}s^{-j}\big)}.
\tag{4.36}
\]
Further, the characteristic function of the SPoK is given by
\[
\phi_{S^{k}(t)}(u) = e^{-t\big[k(\lambda_1+\lambda_2)-\lambda_1\sum_{j=1}^{k}e^{iju}-\lambda_2\sum_{j=1}^{k}e^{-iju}\big]}.
\tag{4.37}
\]

4.1 SPoK as a pure birth and death process

In this section, we provide the transition probabilities of the SPoK at time $t+\delta$, given that we started at time $t$. Over such a short interval of length $\delta\to 0$, it is nearly impossible to observe more than $k$ events; in fact, the probability of seeing more than $k$ events is $o(\delta)$.

Proposition 4.4. The transition probabilities of SPoK are given by

\[
\mathbb{P}(S^{k}(t+\delta)=m\,|\,S^{k}(t)=n) =
\begin{cases}
\lambda_1\delta + o(\delta), & m>n,\ m=n+i,\ i=1,2,\ldots,k;\\
\lambda_2\delta + o(\delta), & m<n,\ m=n-i,\ i=1,2,\ldots,k;\\
1-k(\lambda_1+\lambda_2)\delta + o(\delta), & m=n;\\
o(\delta), & \text{otherwise.}
\end{cases}
\tag{4.38}
\]

Proof. Note that for $i=1,2,\ldots,k$, we have
\[
\mathbb{P}(S^{k}(t+\delta)=n+i\,|\,S^{k}(t)=n) = \sum_{j=1}^{k-i}\mathbb{P}(\text{the first process has } i+j \text{ arrivals and the second process has } j \text{ arrivals})
+ \mathbb{P}(\text{the first process has } i \text{ arrivals and the second process has } 0 \text{ arrivals})
\]
\[
= \sum_{j=1}^{k-i}(\lambda_1\delta+o(\delta))\times(\lambda_2\delta+o(\delta)) + (\lambda_1\delta+o(\delta))\times(1-k\lambda_2\delta+o(\delta)) = \lambda_1\delta + o(\delta).
\]
Similarly, for $i=1,2,\ldots,k$, we have
\[
\mathbb{P}(S^{k}(t+\delta)=n-i\,|\,S^{k}(t)=n) = \sum_{j=1}^{k-i}\mathbb{P}(\text{the first process has } j \text{ arrivals and the second process has } i+j \text{ arrivals})
+ \mathbb{P}(\text{the first process has } 0 \text{ arrivals and the second process has } i \text{ arrivals})
\]
\[
= \sum_{j=1}^{k-i}(\lambda_1\delta+o(\delta))\times(\lambda_2\delta+o(\delta)) + (1-k\lambda_1\delta+o(\delta))\times(\lambda_2\delta+o(\delta)) = \lambda_2\delta + o(\delta).
\]
Further,
\[
\mathbb{P}(S^{k}(t+\delta)=n\,|\,S^{k}(t)=n) = \sum_{j=1}^{k}\mathbb{P}(\text{the first process has } j \text{ arrivals and the second process has } j \text{ arrivals})
+ \mathbb{P}(\text{the first process has } 0 \text{ arrivals and the second process has } 0 \text{ arrivals})
\]
\[
= \sum_{j=1}^{k}(\lambda_1\delta+o(\delta))\times(\lambda_2\delta+o(\delta)) + (1-k\lambda_1\delta+o(\delta))\times(1-k\lambda_2\delta+o(\delta)) = 1-k\lambda_1\delta-k\lambda_2\delta+o(\delta).
\]

Remark 4.2. The pmf $R_m(t)$ of the SPoK satisfies the following difference-differential equation
\[
\frac{d}{dt}R_m(t) = -k(\lambda_1+\lambda_2)R_m(t) + \lambda_1\sum_{j=1}^{k}R_{m-j}(t) + \lambda_2\sum_{j=1}^{k}R_{m+j}(t)
= -\lambda_1\sum_{j=1}^{k}(1-B^{j})R_m(t) - \lambda_2\sum_{j=1}^{k}(1-F^{j})R_m(t), \quad m\in\mathbb{Z},
\]
with initial conditions $R_0(0)=1$ and $R_m(0)=0$ for $m\neq 0$. Here $B$ is the backward shift operator defined in (2.19) and $F$ is the forward shift operator, $F^{j}R_m(t)=R_{m+j}(t)$, such that $(1-F)^{\alpha} = \sum_{j=0}^{\infty}\binom{\alpha}{j}(-1)^{j}F^{j}$. Multiplying by $s^{m}$ and summing over all $m$ in the above equation, we get the following differential equation for the pgf
\[
\frac{d}{dt}G^{S^{k}}(s,t) = \left(-k(\lambda_1+\lambda_2)+\lambda_1\sum_{j=1}^{k}s^{j}+\lambda_2\sum_{j=1}^{k}s^{-j}\right)G^{S^{k}}(s,t).
\]
The mean, variance and covariance of the SPoK can be easily calculated by using the pgf:
\[
\mathbb{E}[S^{k}(t)] = \frac{k(k+1)}{2}(\lambda_1-\lambda_2)t;\qquad
\operatorname{Var}[S^{k}(t)] = \frac{1}{6}[k(k+1)(2k+1)](\lambda_1+\lambda_2)t;
\]
\[
\operatorname{Cov}[S^{k}(t),S^{k}(s)] = \frac{1}{6}[k(k+1)(2k+1)](\lambda_1+\lambda_2)s, \quad s<t.
\]

5 Running average of SPoK

In this section, we introduce and study a new stochastic process, the running average of the SPoK.

Definition 5.1. The stochastic process defined by taking the time-scaled integral of the path of the SPoK,
\[
S_A^{k}(t) = \frac{1}{t}\int_0^{t}S^{k}(s)\,ds,
\tag{5.39}
\]
is called the running average of the SPoK. Next we provide the compound Poisson representation of the running average of the SPoK.

Proposition 5.1. The characteristic function $\phi_{S_A^{k}(t)}(u) = \mathbb{E}[e^{iuS_A^{k}(t)}]$ of $S_A^{k}(t)$ is given by
\[
\phi_{S_A^{k}(t)}(u) = \exp\left\{-kt\left[\lambda_1\left(1-\frac{1}{k}\sum_{j=1}^{k}\frac{(e^{iuj}-1)}{iuj}\right)+\lambda_2\left(1-\frac{1}{k}\sum_{j=1}^{k}\frac{(1-e^{-iuj})}{iuj}\right)\right]\right\}, \quad u\in\mathbb{R}.
\tag{5.40}
\]
Proof. The result follows by applying Lemma 3.1 to (4.37) after scaling by $1/t$.

Remark 5.1. The expression in (5.40) has a removable singularity at $u=0$; to remove that singularity we define $\phi_{S_A^{k}(t)}(0)=1$.

Proposition 5.2. Let $Y(t)$ be a compound Poisson process

\[
Y(t) = \sum_{n=1}^{N(t)}J_n,
\tag{5.41}
\]
where $N(t)$ is a Poisson process with rate parameter $k(\lambda_1+\lambda_2)>0$ and $\{J_n\}_{n\geq 1}$ are iid random variables, independent of $N(t)$, with the mixed double uniform density $p_{J_1}$ given below. Then
\[
Y(t) \stackrel{law}{=} S_A^{k}(t).
\]

Proof. Rearranging $\phi_{S_A^{k}(t)}(u)$,
\[
\phi_{S_A^{k}(t)}(u) = \exp\left\{-(\lambda_1+\lambda_2)kt\left[1-\left(\frac{\lambda_1}{\lambda_1+\lambda_2}\frac{1}{k}\sum_{j=1}^{k}\frac{(e^{iuj}-1)}{iuj}+\frac{\lambda_2}{\lambda_1+\lambda_2}\frac{1}{k}\sum_{j=1}^{k}\frac{(1-e^{-iuj})}{iuj}\right)\right]\right\}.
\]
The random variable $J_1$, being mixed double uniformly distributed, has density
\[
p_{J_1}(x) = \sum_{i=1}^{k}p_{V_i}(x)f_{U_i}(x) = \frac{1}{k}\sum_{i=1}^{k}f_{U_i}(x),
\tag{5.42}
\]
where $V_j$ follows a discrete uniform distribution over $(0,k)$ with pmf $p_{V_j}(x) = \mathbb{P}(V_j=x) = \frac{1}{k}$, $j=1,2,\ldots,k$, and the $U_i$ are doubly uniform distributed random variables with density
\[
f_{U_i}(x) = (1-w)\frac{1}{i}\mathbf{1}_{[-i,0]}(x) + w\frac{1}{i}\mathbf{1}_{[0,i]}(x), \quad -i\leq x\leq i,
\]
with $0<w<1$; matching with (5.40) requires $w = \lambda_1/(\lambda_1+\lambda_2)$. Further,
\[
\phi_{J_1}(u) = \frac{\lambda_1}{\lambda_1+\lambda_2}\frac{1}{k}\sum_{j=1}^{k}\frac{(e^{iuj}-1)}{iuj}+\frac{\lambda_2}{\lambda_1+\lambda_2}\frac{1}{k}\sum_{j=1}^{k}\frac{(1-e^{-iuj})}{iuj}.
\]
The characteristic function of $Y(t)$ is
\[
\phi_{Y(t)}(u) = e^{-k(\lambda_1+\lambda_2)t(1-\phi_{J_1}(u))};
\tag{5.43}
\]
putting the characteristic function $\phi_{J_1}(u)$ in the above expression yields the characteristic function of $S_A^{k}(t)$, which completes the proof.

Remark 5.2. The $q$-th order moments of $J_1$ can be calculated by using (3.34) and the Taylor series expansion of the characteristic function $\phi_{J_1}(u)$ around 0, using
\[
\frac{(e^{iuj}-1)}{iuj} = 1 + \sum_{r=1}^{\infty}\frac{(iuj)^{r}}{(r+1)!} \quad\text{and}\quad \frac{(1-e^{-iuj})}{iuj} = 1 + \sum_{r=1}^{\infty}\frac{(-iuj)^{r}}{(r+1)!}.
\]
We have $m_1 = \frac{(k+1)(\lambda_1-\lambda_2)}{4(\lambda_1+\lambda_2)}$ and $m_2 = \frac{1}{18}[(k+1)(2k+1)]$. Further, the mean, variance and covariance of the running average of the SPoK are
\[
\mathbb{E}[S_A^{k}(t)] = \mathbb{E}[N(t)]\mathbb{E}[J_1] = \frac{k(k+1)}{4}(\lambda_1-\lambda_2)t,
\]
\[
\operatorname{Var}[S_A^{k}(t)] = \mathbb{E}[N(t)]\mathbb{E}[J_1^{2}] = \frac{1}{18}[k(k+1)(2k+1)](\lambda_1+\lambda_2)t,
\]
\[
\operatorname{Cov}[S_A^{k}(t),S_A^{k}(s)] = \frac{1}{18}[k(k+1)(2k+1)](\lambda_1-\lambda_2)s - \frac{k^{2}(k+1)^{2}}{16}(\lambda_1-\lambda_2)^{2}s^{2}.
\]

\[
\phi_{S_A^{k}(t)}(u) = \phi_{N_A^{k}(t)}(u).
\]
Corollary 5.2. For $k=1$ this process behaves like the running average of the Skellam process.

Corollary 5.3. The ratios of the mean and of the variance of the running average of the SPoK to those of the SPoK are 1/2 and 1/3, respectively.

6 Time-changed Skellam process of order K

We consider the time-changed SPoK, which can be obtained by subordinating the SPoK $S^{k}(t)$ with an independent L\'evy subordinator $D_f(t)$ satisfying $\mathbb{E}[D_f(t)^{c}]<\infty$ for all $c>0$. The time-changed SPoK is defined by
\[
Z_f(t) = S^{k}(D_f(t)), \quad t\geq 0.
\]
Note that the stable subordinator does not satisfy the condition $\mathbb{E}[D_f(t)^{c}]<\infty$. The MGF of the time-changed SPoK $Z_f(t)$ is given by
\[
\mathbb{E}[e^{\theta Z_f(t)}] = e^{-tf\big(k(\lambda_1+\lambda_2)-\lambda_1\sum_{j=1}^{k}e^{\theta j}-\lambda_2\sum_{j=1}^{k}e^{-\theta j}\big)}.
\]

Theorem 6.1. The pmf $H_f(t) = \mathbb{P}(Z_f(t)=m)$ of the time-changed SPoK is given by
\[
H_f(t) = \sum_{x=\max(0,-m)}^{\infty}\frac{(k\lambda_1)^{m+x}(k\lambda_2)^{x}}{(m+x)!\,x!}\,\mathbb{E}\big[e^{-k(\lambda_1+\lambda_2)D_f(t)}D_f^{m+2x}(t)\big], \quad m\in\mathbb{Z}.
\tag{6.44}
\]

Proof. Let $h_f(x,t)$ be the probability density function of the L\'evy subordinator $D_f(t)$. Using a conditioning argument,
\[
H_f(t) = \int_0^{\infty}R_m(y)h_f(y,t)\,dy
= \int_0^{\infty}e^{-ky(\lambda_1+\lambda_2)}\left(\frac{\lambda_1}{\lambda_2}\right)^{m/2}I_{|m|}(2yk\sqrt{\lambda_1\lambda_2})h_f(y,t)\,dy
\]
\[
= \sum_{x=\max(0,-m)}^{\infty}\frac{(k\lambda_1)^{m+x}(k\lambda_2)^{x}}{(m+x)!\,x!}\int_0^{\infty}e^{-k(\lambda_1+\lambda_2)y}y^{m+2x}h_f(y,t)\,dy
= \sum_{x=\max(0,-m)}^{\infty}\frac{(k\lambda_1)^{m+x}(k\lambda_2)^{x}}{(m+x)!\,x!}\,\mathbb{E}\big[e^{-k(\lambda_1+\lambda_2)D_f(t)}D_f^{m+2x}(t)\big].
\]
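Formula (6.44) becomes fully explicit once the subordinator is specified. For example, for the gamma subordinator in (2.12), $D_f(t)$ has a $\mathrm{Gamma}(pt,\,1/\alpha_g)$ density, so $\mathbb{E}[e^{-cD_f(t)}D_f^{q}(t)] = \frac{\Gamma(pt+q)}{\Gamma(pt)}\frac{\alpha_g^{pt}}{(\alpha_g+c)^{pt+q}}$. The sketch below evaluates (6.44) under this choice and checks the normalization of Proposition 6.1; the gamma specialization and the parameter values are our own illustration, not part of the paper.

```python
import numpy as np
from scipy.special import gammaln

def H_gamma(m, t, k, lam1, lam2, p, a, n_terms=400):
    """PMF (6.44) for Z_f(t) = S^k(D_f(t)) with a gamma subordinator, f(s) = p*log(1 + s/a).

    Uses E[exp(-c D) D**q] = Gamma(p t + q)/Gamma(p t) * a**(p t) / (a + c)**(p t + q)
    for D ~ Gamma(shape = p t, scale = 1/a); terms are evaluated in log space.
    """
    c = k * (lam1 + lam2)
    total = 0.0
    for x in range(max(0, -m), max(0, -m) + n_terms):
        q = m + 2 * x
        log_term = ((m + x) * np.log(k * lam1) + x * np.log(k * lam2)
                    - gammaln(m + x + 1) - gammaln(x + 1)
                    + gammaln(p * t + q) - gammaln(p * t)
                    + p * t * np.log(a) - (p * t + q) * np.log(a + c))
        total += np.exp(log_term)
    return total

t, k, lam1, lam2, p, a = 1.0, 2, 1.0, 0.7, 1.5, 2.0   # illustrative values
print("normalization:", sum(H_gamma(m, t, k, lam1, lam2, p, a) for m in range(-40, 41)))
```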

Proposition 6.1. The state probabilities $H_f(t)$ of the time-changed SPoK satisfy the normalizing condition
\[
\sum_{m=-\infty}^{\infty}H_f(t) = 1.
\]
Proof. Using (6.44), we have
\[
\sum_{m=-\infty}^{\infty}H_f(t) = \sum_{m=-\infty}^{\infty}\int_0^{\infty}e^{-ky(\lambda_1+\lambda_2)}\left(\frac{\lambda_1}{\lambda_2}\right)^{m/2}I_{|m|}(2yk\sqrt{\lambda_1\lambda_2})h_f(y,t)\,dy
= \int_0^{\infty}e^{-ky(\lambda_1+\lambda_2)}e^{ky(\lambda_1+\lambda_2)}h_f(y,t)\,dy
= \int_0^{\infty}h_f(y,t)\,dy = 1.
\]

The mean and covariance of the time-changed SPoK are given by
\[
\mathbb{E}[Z_f(t)] = \frac{k(k+1)}{2}(\lambda_1-\lambda_2)\mathbb{E}[D_f(t)],
\]
\[
\operatorname{Cov}[Z_f(t),Z_f(s)] = \frac{1}{6}[k(k+1)(2k+1)](\lambda_1+\lambda_2)\mathbb{E}[D_f(s)] + \frac{k^{2}(k+1)^{2}}{4}(\lambda_1-\lambda_2)^{2}\operatorname{Var}[D_f(s)].
\]

7 Space fractional Skellam process and tempered space fractional Skellam process

In this section, we introduce time-changed Skellam processes where the time-changes are a stable subordinator and a tempered stable subordinator, respectively. These processes give space-fractional versions of the Skellam process, analogous to the time-fractional version of the Skellam process introduced in [10].

7.1 The space-fractional Skellam process

In this section, we introduce the space-fractional Skellam process (SFSP). Further, for the introduced process, we study its main properties, such as the state probabilities and the governing difference-differential equations of the marginal PMF.

Definition 7.1 (SFSP). Let $N_1(t)$ and $N_2(t)$ be two independent homogeneous Poisson processes with intensities $\lambda_1>0$ and $\lambda_2>0$, respectively. Let $D_{\alpha_1}(t)$ and $D_{\alpha_2}(t)$ be two independent stable subordinators with indices $\alpha_1\in(0,1)$ and $\alpha_2\in(0,1)$, respectively. These subordinators are independent of the Poisson processes $N_1(t)$ and $N_2(t)$. The subordinated stochastic process
\[
S_{\alpha_1,\alpha_2}(t) = N_1(D_{\alpha_1}(t)) - N_2(D_{\alpha_2}(t))
\]
is called a SFSP.

Next we derive the moment generating function (MGF) of the SFSP; we then use the expression for the marginal PMF of the SFPP given in (2.15) to obtain the marginal PMF of the SFSP. The MGF is
\[
M_{\theta}(t) = \mathbb{E}[e^{\theta S_{\alpha_1,\alpha_2}(t)}] = \mathbb{E}[e^{\theta(N_1(D_{\alpha_1}(t))-N_2(D_{\alpha_2}(t)))}] = e^{-t[\lambda_1^{\alpha_1}(1-e^{\theta})^{\alpha_1}+\lambda_2^{\alpha_2}(1-e^{-\theta})^{\alpha_2}]}, \quad \theta\in\mathbb{R}.
\]
In the next result, we obtain the state probabilities of the SFSP.

Theorem 7.1. The PMF $H_k(t) = \mathbb{P}(S_{\alpha_1,\alpha_2}(t)=k)$ of the SFSP is given by
\[
H_k(t) = \left(\sum_{n=0}^{\infty}\frac{(-1)^{k}}{n!(n+k)!}\,{}_1\psi_1\big[(1,\alpha_1);(1-n-k,\alpha_1)\,\big|\,-\lambda_1^{\alpha_1}t\big]\,{}_1\psi_1\big[(1,\alpha_2);(1-n,\alpha_2)\,\big|\,-\lambda_2^{\alpha_2}t\big]\right)\mathbb{I}_{k\geq 0}
\]
\[
\qquad + \left(\sum_{n=0}^{\infty}\frac{(-1)^{|k|}}{n!(n+|k|)!}\,{}_1\psi_1\big[(1,\alpha_1);(1-n,\alpha_1)\,\big|\,-\lambda_1^{\alpha_1}t\big]\,{}_1\psi_1\big[(1,\alpha_2);(1-n-|k|,\alpha_2)\,\big|\,-\lambda_2^{\alpha_2}t\big]\right)\mathbb{I}_{k<0}
\tag{7.45}
\]
for $k\in\mathbb{Z}$.

Proof. Note that $N_1(D_{\alpha_1}(t))$ and $N_2(D_{\alpha_2}(t))$ are independent, hence
\[
\mathbb{P}(S_{\alpha_1,\alpha_2}(t)=k) = \sum_{n=0}^{\infty}\mathbb{P}(N_1(D_{\alpha_1}(t))=n+k)\,\mathbb{P}(N_2(D_{\alpha_2}(t))=n)\,\mathbb{I}_{k\geq 0}
+ \sum_{n=0}^{\infty}\mathbb{P}(N_1(D_{\alpha_1}(t))=n)\,\mathbb{P}(N_2(D_{\alpha_2}(t))=n+|k|)\,\mathbb{I}_{k<0}.
\]
Using (2.15), the result follows.
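Theorem 7.1 is just the discrete convolution of the two SFPP marginals, so $H_k(t)$ can be evaluated numerically by truncating both the inner series (2.15) and the sum over $n$. The following is a rough illustrative sketch with arbitrary parameters (and crude truncations), not code from the paper.

```python
import numpy as np
from scipy.special import gamma, rgamma

def sfpp_pmf(k, t, lam, alpha, n_terms=150):
    """P^alpha(k, t) from (2.15)."""
    r = np.arange(n_terms)
    terms = ((-lam ** alpha) ** r * t ** r / gamma(r + 1)
             * gamma(r * alpha + 1) * rgamma(r * alpha - k + 1))
    return (-1) ** k / gamma(k + 1) * terms.sum()

def sfsp_pmf(k, t, lam1, a1, lam2, a2, n_max=50):
    """H_k(t) of Theorem 7.1: convolution of the two independent SFPP marginals."""
    n = range(n_max)
    if k >= 0:
        return sum(sfpp_pmf(j + k, t, lam1, a1) * sfpp_pmf(j, t, lam2, a2) for j in n)
    return sum(sfpp_pmf(j, t, lam1, a1) * sfpp_pmf(j + abs(k), t, lam2, a2) for j in n)

t, lam1, a1, lam2, a2 = 0.5, 1.0, 0.8, 0.7, 0.6        # illustrative values
print([round(sfsp_pmf(k, t, lam1, a1, lam2, a2), 6) for k in (-2, -1, 0, 1, 2)])
```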

In the next theorem, we discuss the governing differential-difference equation of the marginal PMF of SFSP.

Theorem 7.2. The marginal distribution $H_k(t) = \mathbb{P}(S_{\alpha_1,\alpha_2}(t)=k)$ of the SFSP satisfies the following differential-difference equations
\[
\frac{d}{dt}H_k(t) = -\lambda_1^{\alpha_1}(1-B)^{\alpha_1}H_k(t) - \lambda_2^{\alpha_2}(1-F)^{\alpha_2}H_k(t), \quad k\in\mathbb{Z},
\tag{7.46}
\]
\[
\frac{d}{dt}H_0(t) = -\lambda_1^{\alpha_1}H_0(t) - \lambda_2^{\alpha_2}H_1(t),
\tag{7.47}
\]
with initial conditions $H_0(0)=1$ and $H_k(0)=0$ for $k\neq 0$.

Proof. The proof follows by using the probability generating function.

Remark 7.1. The MGF of the SFSP solves the differential equation

dM (t) θ = M (t)(λα1 (1 eθ)α1 + λα2 (1 e−θ)α2 ). (7.48) dt − θ 1 − 2 − Proposition 7.1. The L´evy density ν (x) of SFSP is given by Sα1,α2

∞ ∞ α1 n1+1 α1 α2 n2+1 α2 νS (x)= λ1 ( 1) δn (x)+ λ ( 1) δ−n (x). α1,α2 − n 1 2 − n 2 n =1 1 n =1 2 X1   X2   Proof. Substituting the L´evy densities ν (x) and ν (x) of N (D (t)) and N (D (t)), respec- Nα1 Nα2 1 α1 2 α2 tively from the equation (2.21), we obtain

ν (x)= ν (x)+ ν (x), Sα1,α2 Nα1 Nα2 which gives the desired result.

7.2 Tempered space-fractional Skellam process (TSFSP)

In this section, we present the tempered space-fractional Skellam process (TSFSP). We discuss the corresponding fractional difference-differential equations, marginal PMFs and moments of this process.

Definition 7.2 (TSFSP). The TSFSP is obtained by taking the difference of two independent tempered space-fractional Poisson processes. Let $D_{\alpha_1,\mu_1}(t)$, $D_{\alpha_2,\mu_2}(t)$ be two independent tempered stable subordinators (TSS) (see [24]) and $N_1(t)$, $N_2(t)$ be two independent Poisson processes which are independent of the TSS. Then the stochastic process
\[
S_{\alpha_1,\alpha_2}^{\mu_1,\mu_2}(t) = N_1(D_{\alpha_1,\mu_1}(t)) - N_2(D_{\alpha_2,\mu_2}(t))
\]
is called the TSFSP.

Theorem 7.3. The PMF $H_k^{\mu_1,\mu_2}(t) = \mathbb{P}(S_{\alpha_1,\alpha_2}^{\mu_1,\mu_2}(t)=k)$ is given by
\[
H_k^{\mu_1,\mu_2}(t) = e^{t(\mu_1^{\alpha_1}+\mu_2^{\alpha_2})}\sum_{n=0}^{\infty}\sum_{m=0}^{\infty}\frac{(-1)^{k}}{n!(n+k)!}\frac{\mu_1^{m}\lambda_1^{-m}}{m!}\,{}_1\psi_1\big[(1,\alpha_1);(1-n-k-m,\alpha_1)\,\big|\,-\lambda_1^{\alpha_1}t\big]
\times \sum_{l=0}^{\infty}\frac{\mu_2^{l}\lambda_2^{-l}}{l!}\,{}_1\psi_1\big[(1,\alpha_2);(1-n-l,\alpha_2)\,\big|\,-\lambda_2^{\alpha_2}t\big]
\tag{7.49}
\]
when $k\geq 0$, and similarly for $k<0$,
\[
H_k^{\mu_1,\mu_2}(t) = e^{t(\mu_1^{\alpha_1}+\mu_2^{\alpha_2})}\sum_{n=0}^{\infty}\sum_{m=0}^{\infty}\frac{(-1)^{|k|}}{n!(n+|k|)!}\frac{\mu_1^{m}\lambda_1^{-m}}{m!}\,{}_1\psi_1\big[(1,\alpha_1);(1-n-m,\alpha_1)\,\big|\,-\lambda_1^{\alpha_1}t\big]
\times \sum_{l=0}^{\infty}\frac{\mu_2^{l}\lambda_2^{-l}}{l!}\,{}_1\psi_1\big[(1,\alpha_2);(1-l-n-|k|,\alpha_2)\,\big|\,-\lambda_2^{\alpha_2}t\big].
\tag{7.50}
\]

Proof. Since N1(Dα1,µ1 (t)) and N2(Dα2,µ2 (t)) are independent,

\[
\mathbb{P}\big(S_{\alpha_1,\alpha_2}^{\mu_1,\mu_2}(t)=k\big) = \sum_{n=0}^{\infty}\mathbb{P}(N_1(D_{\alpha_1,\mu_1}(t))=n+k)\,\mathbb{P}(N_2(D_{\alpha_2,\mu_2}(t))=n)\,\mathbb{I}_{k\geq 0}
+ \sum_{n=0}^{\infty}\mathbb{P}(N_1(D_{\alpha_1,\mu_1}(t))=n)\,\mathbb{P}(N_2(D_{\alpha_2,\mu_2}(t))=n+|k|)\,\mathbb{I}_{k<0},
\]
which gives the marginal PMF of the TSFSP by using (2.23).

Remark 7.2. We use this expression to calculate the marginal distribution of TSFSP. The MGF is obtained by using the conditioning argument. Let fα,µ(x,t) be the density function of Dα,µ(t). Then

\[
\mathbb{E}[e^{\theta N(D_{\alpha,\mu}(t))}] = \int_0^{\infty}\mathbb{E}[e^{\theta N(u)}]f_{\alpha,\mu}(u,t)\,du = e^{-t\{(\lambda(1-e^{\theta})+\mu)^{\alpha}-\mu^{\alpha}\}}.
\tag{7.51}
\]
Using (7.51), the MGF of the TSFSP is
\[
\mathbb{E}[e^{\theta S_{\alpha_1,\alpha_2}^{\mu_1,\mu_2}(t)}] = \mathbb{E}\big[e^{\theta N_1(D_{\alpha_1,\mu_1}(t))}\big]\,\mathbb{E}\big[e^{-\theta N_2(D_{\alpha_2,\mu_2}(t))}\big]
= e^{-t[\{(\lambda_1(1-e^{\theta})+\mu_1)^{\alpha_1}-\mu_1^{\alpha_1}\}+\{(\lambda_2(1-e^{-\theta})+\mu_2)^{\alpha_2}-\mu_2^{\alpha_2}\}]}.
\]
Remark 7.3. We have $\mathbb{E}[S_{\alpha_1,\alpha_2}^{\mu_1,\mu_2}(t)] = t\big(\lambda_1\alpha_1\mu_1^{\alpha_1-1}-\lambda_2\alpha_2\mu_2^{\alpha_2-1}\big)$. Further, the covariance of the TSFSP can be obtained by using (2.26) and
\[
\operatorname{Cov}\big[S_{\alpha_1,\alpha_2}^{\mu_1,\mu_2}(t),S_{\alpha_1,\alpha_2}^{\mu_1,\mu_2}(s)\big] = \operatorname{Cov}[N_1(D_{\alpha_1,\mu_1}(t)),N_1(D_{\alpha_1,\mu_1}(s))] + \operatorname{Cov}[N_2(D_{\alpha_2,\mu_2}(t)),N_2(D_{\alpha_2,\mu_2}(s))]
\]
\[
= \operatorname{Var}\big(N_1(D_{\alpha_1,\mu_1}(\min(t,s)))\big) + \operatorname{Var}\big(N_2(D_{\alpha_2,\mu_2}(\min(t,s)))\big).
\]

Proposition 7.2. The L\'evy density $\nu_{S_{\alpha_1,\alpha_2}^{\mu_1,\mu_2}}(x)$ of the TSFSP is given by
\[
\nu_{S_{\alpha_1,\alpha_2}^{\mu_1,\mu_2}}(x) = \sum_{n_1=1}^{\infty}\mu_1^{\alpha_1-n_1}\binom{\alpha_1}{n_1}\lambda_1^{n_1}\sum_{l_1=1}^{n_1}\binom{n_1}{l_1}(-1)^{l_1+1}\delta_{l_1}(x)
+ \sum_{n_2=1}^{\infty}\mu_2^{\alpha_2-n_2}\binom{\alpha_2}{n_2}\lambda_2^{n_2}\sum_{l_2=1}^{n_2}\binom{n_2}{l_2}(-1)^{l_2+1}\delta_{-l_2}(x), \quad \mu_1,\mu_2>0.
\]
Proof. Adding the L\'evy densities $\nu_{N_{\alpha_1,\mu_1}}(x)$ and $\nu_{N_{\alpha_2,\mu_2}}(x)$ of $N_1(D_{\alpha_1,\mu_1}(t))$ and $N_2(D_{\alpha_2,\mu_2}(t))$, respectively, from equation (2.27), with the second one reflected to account for the downward jumps, leads to
\[
\nu_{S_{\alpha_1,\alpha_2}^{\mu_1,\mu_2}}(x) = \nu_{N_{\alpha_1,\mu_1}}(x) + \nu_{N_{\alpha_2,\mu_2}}(-x).
\]

7.3 Simulation of SFSP and TSFSP

We present algorithms to simulate the sample trajectories of the SFSP and the TSFSP. We use Python 3.7 and its libraries NumPy and Matplotlib for the simulations.

Simulation of SFSP (a Python sketch implementing these steps is given after the algorithm):
Step-1: generate independent rvs $U$, $V$, uniformly distributed in $[0,1]$, for fixed values of the parameters;

Step-2: generate the increments of the $\alpha$-stable subordinator $D_{\alpha}(t)$ (see [25]) with pdf $f_{\alpha}(x,t)$, using the relationship $D_{\alpha}(t+dt)-D_{\alpha}(t) \stackrel{d}{=} D_{\alpha}(dt) \stackrel{d}{=} (dt)^{1/\alpha}D_{\alpha}(1)$, where
\[
D_{\alpha}(1) = \frac{\sin(\alpha\pi U)\,[\sin((1-\alpha)\pi U)]^{1/\alpha-1}}{[\sin(\pi U)]^{1/\alpha}\,|\log V|^{1/\alpha-1}};
\]
Step-3: generate the Poisson distributed increments $N(D_{\alpha}(dt))$ with parameter $\lambda(dt)^{1/\alpha}D_{\alpha}(1)$;

Step-4: cumulative sum of increments gives the space fractional Poisson process N(Dα(t)) sample trajectories;

Step-5: generate N1(Dα1 (t)), N2(Dα2 (t)) and subtract these to get the SFSP Sα1,α2 (t).
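A compact implementation of Steps 1-5 might look as follows; the time grid, parameter values and plotting details are our own choices.

```python
import numpy as np
import matplotlib.pyplot as plt

def stable_increments(alpha, dt, n, rng):
    """n increments D_alpha(dt), using the representation of Step 2 (Steps 1-2)."""
    U = rng.uniform(size=n)
    V = rng.uniform(size=n)
    D1 = (np.sin(alpha * np.pi * U) * np.sin((1 - alpha) * np.pi * U) ** (1 / alpha - 1)
          / (np.sin(np.pi * U) ** (1 / alpha) * np.abs(np.log(V)) ** (1 / alpha - 1)))
    return dt ** (1 / alpha) * D1

def sfpp_path(T, n_steps, lam, alpha, rng):
    """Space-fractional Poisson path N(D_alpha(t)) on a regular grid (Steps 3-4)."""
    dD = stable_increments(alpha, T / n_steps, n_steps, rng)
    dN = rng.poisson(lam * dD)                        # Step 3: Poisson increments
    return np.concatenate(([0], np.cumsum(dN)))       # Step 4: cumulative sum

T, n_steps = 1.0, 1000
rng = np.random.default_rng(5)
grid = np.linspace(0.0, T, n_steps + 1)
# Step 5: subtract two independent SFPP paths (illustrative parameters)
sfsp = sfpp_path(T, n_steps, 5.0, 0.9, rng) - sfpp_path(T, n_steps, 4.0, 0.7, rng)
plt.step(grid, sfsp, where="post")
plt.xlabel("t"); plt.ylabel("SFSP")
plt.show()
```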

We next present the algorithm for generating the sample trajectories of TSFSP.

Simulation of TSFSP (a Python sketch follows the steps):
Use the first two steps of the previous algorithm for generating the increments of the $\alpha$-stable subordinator $D_{\alpha}(t)$.

Step-3: for generating the increments of TSS Dα,µ(t) with pdf fα,µ(x,t), we use the following steps called “acceptance-rejection method”;

(a) generate the stable random variable Dα(dt);

(b) generate a uniform $(0,1)$ rv $W$ (independent of $D_{\alpha}$);
(c) if $W \leq e^{-\mu D_{\alpha}(dt)}$, then $D_{\alpha,\mu}(dt) = D_{\alpha}(dt)$ ("accept"); otherwise go back to (a) ("reject"). Note that here we used $f_{\alpha,\mu}(x,t) = e^{-\mu x+\mu^{\alpha}t}f_{\alpha}(x,t)$, which implies $\frac{f_{\alpha,\mu}(x,dt)}{c\,f_{\alpha}(x,dt)} = e^{-\mu x}$ for $c = e^{\mu^{\alpha}dt}$,
and the ratio is bounded between 0 and 1;
Step-4: generate the Poisson distributed rv $N(D_{\alpha,\mu}(dt))$ with parameter $\lambda D_{\alpha,\mu}(dt)$;
Step-5: the cumulative sum of increments gives the tempered space-fractional Poisson process $N(D_{\alpha,\mu}(t))$ sample trajectories;
Step-6: generate $N_1(D_{\alpha_1,\mu_1}(t))$, $N_2(D_{\alpha_2,\mu_2}(t))$, then take the difference of these to get the sample paths of the TSFSP $S_{\alpha_1,\alpha_2}^{\mu_1,\mu_2}(t)$.
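The acceptance-rejection of Step-3 can be implemented by resampling the stable proposal until acceptance. The sketch below repeats the stable generator so that it runs on its own; parameter choices are illustrative and not from the paper.

```python
import numpy as np

def stable_rv(alpha, dt, rng):
    """One increment D_alpha(dt) (Steps 1-2 of the SFSP algorithm)."""
    U, V = rng.uniform(), rng.uniform()
    D1 = (np.sin(alpha * np.pi * U) * np.sin((1 - alpha) * np.pi * U) ** (1 / alpha - 1)
          / (np.sin(np.pi * U) ** (1 / alpha) * np.abs(np.log(V)) ** (1 / alpha - 1)))
    return dt ** (1 / alpha) * D1

def tempered_stable_rv(alpha, mu, dt, rng):
    """Step-3: acceptance-rejection for the tempered stable increment D_{alpha,mu}(dt)."""
    while True:
        d = stable_rv(alpha, dt, rng)                 # (a) stable proposal
        if rng.uniform() <= np.exp(-mu * d):          # (b)-(c) accept with probability e^{-mu d}
            return d

def tsfpp_path(T, n_steps, lam, alpha, mu, rng):
    """Steps 4-5: tempered space-fractional Poisson path N(D_{alpha,mu}(t))."""
    dD = np.array([tempered_stable_rv(alpha, mu, T / n_steps, rng) for _ in range(n_steps)])
    return np.concatenate(([0], np.cumsum(rng.poisson(lam * dD))))

T, n_steps = 1.0, 500
rng = np.random.default_rng(6)
# Step-6: difference of two independent TSFPP paths (illustrative parameters)
tsfsp = (tsfpp_path(T, n_steps, 5.0, 0.9, 1.0, rng)
         - tsfpp_path(T, n_steps, 4.0, 0.7, 1.5, rng))
print(tsfsp[-1])   # value of the TSFSP at time T
```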

Figure 1: The sample trajectories of SFSP and TSFSP

Acknowledgments: NG would like to thank the Council of Scientific and Industrial Research (CSIR), India, for the award of a research fellowship.

References

[1] J.G. Skellam, The frequency distribution of the difference between two Poisson variables belonging to different populations, J. Roy. Statist. Soc. Ser. A 109 (1946), p. 296.

[2] J.O. Irwin, The frequency distribution of the difference between two independent variates following the same Poisson distribution, J. Roy. Statist. Soc. Ser. A 100 (1937), pp. 415-416.

[3] F.W. Steutel and K. Van Harn, Infinite Divisibility of Probability Distributions on the Real Line, Marcel Dekker, New York, 2004.

[4] Y. Hwang, J. Kim and I. Kweon, Sensor noise modeling using the Skellam distribution: Application to the color edge detection, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (2007), pp. 1-8.

[5] D. Karlis and I. Ntzoufras, Bayesian modeling of football outcomes: Using the Skellam’s distri- bution for the goal difference, IMA J. Manag. Math. 20(2008), pp. 133-145.

[6] E. Bacry, M. Delattre, M. Hoffman and J. Muzy, Modeling microstructure noise with mutually exciting point processes, Quant. Finance 13(2013), pp. 65-77.

[7] E. Bacry, M. Delattre, M. Hoffman and J. Muzy, Some limit theorems for Hawkes processes and applications to financial statistics, Stoch. Proc. Appl. 123(2013), pp. 2475-2499.

[8] O.E. Barndorff-Nielsen, D. Pollard and N. Shephard, Integer-valued L´evy processes and low latency financial econometrics, Quant. Finance, 12(2011), pp. 587-605.

[9] P. Carr, Semi-static hedging of barrier options under Poisson jumps, Int. J. Theor. Appl. Finance, 14(2011), pp. 1091-1111.

[10] A. Kerss, N.N. Leonenko and A. Sikorskii, Fractional Skellam processes with applications to finance, Fract. Calc. Appl. Anal. 17(2014), pp. 532-551.

[11] A. Kumar, N.N. Leonenko and A. Pichler, Fractional risk process in insurance, Math. Financ. Econ. 529(2019), pp. 121-539.

[12] K.V. Buchak and L.M. Sakhno, On the governing equations for Poisson and Skellam processes time-changed by inverse subordinators, Theory Probab. Math. Statist. 98(2018) pp. 87-99.

[13] A. Sengar, A. Maheshwari and N.S. Upadhye, Time-changed Poisson processes of order k, Stoch. Anal. Appl. 38(2020), pp. 124-148.

[14] K.Y. Kostadinova and L.D. Minkova, On the Poisson process of order k, Pliska Stud. Math. Bulgar. 22(2012).

[15] M. Abramowitz and I.A. Stegun, Handbook of Mathematical Functions, Dover, New York, 1974.

[16] R.L. Schilling, R. Song and Z. Vondracek, Bernstein functions: theory and applications, De Gruyter, Berlin, 2012.

[17] E. Orsingher, F. Polito, The space-fractional Poisson process, Statist. Probab. Lett. 82(2012), pp. 852-858.

[18] A. Kilbas, H. Srivastava and J. Trujillo, Theory and Applications of Fractional Differential Equations, Elsevier, 2006.

[19] J. Beran, Statistics for Long-Memory Processes, Chapman & Hall, New York, 1994.

[20] K-I. Sato, Lévy processes and infinitely divisible distributions, Cambridge University Press, Cambridge, 1999.

[21] N. Gupta, A. Kumar and N. Leonenko, Tempered Fractional Poisson Processes and Fractional Equations with Z-Transform, Stoch. Anal. Appl. (2020) (To Appear).

[22] W. Xia, On the distribution of running average of Skellam process, Int. J. Pure Appl. Math. 119(2018), pp. 461-473.

21 [23] A. Maheshwari and P. Vellaisamy, On the long-range dependence of fractional Poisson and negative binomial processes, J. Appl. Probab. 53(2016), pp. 989-1000.

[24] J. Rosiński, Tempering stable processes, Stochastic Process. Appl. 117(2007), pp. 677-707.

[25] D. Cahoy, V. Uchaikin and A. Woyczynski, Parameter estimation for fractional Poisson processes, J. Statist. Plann. Inference, 140(2010), pp. 3106-3120.
