POLSKA AKADEMIA NAUK, INSTYTUT MATEMATYCZNY

DISSERTATIONES MATHEMATICAE (ROZPRAWY MATEMATYCZNE)

KOMITET REDAKCYJNY

ANDRZEJ BIAŁYNICKI-BIRULA, BOGDAN BOJARSKI, ZBIGNIEW CIESIELSKI, JERZY ŁOŚ, ZBIGNIEW SEMADENI, JERZY ZABCZYK (editor), WIESŁAW ŻELAZKO (deputy editor)

CCCLVIII

JOLANTA K. MISIEWICZ

Substable and pseudo-isotropic processes. Connections with the geometry of subspaces of $L_\alpha$-spaces

WARSZAWA 1996

Jolanta K. Misiewicz
Institute of Mathematics
Technical University of Wrocław
Wybrzeże Wyspiańskiego 27
50-370 Wrocław, Poland
E-mail: [email protected]

Published by the Institute of Mathematics, Polish Academy of Sciences
Typeset in TEX at the Institute

PRINTED IN POLAND

© Copyright by Instytut Matematyczny PAN, Warszawa 1996

ISSN 0012-3862

CONTENTS

I. Introduction ...... 5
II. Pseudo-isotropic random vectors ...... 9
 II.1. Symmetric stable vectors ...... 9
 II.2. Pseudo-isotropic random vectors ...... 15
 II.3. Elliptically contoured vectors ...... 23
 II.4. α-symmetric random vectors ...... 27
 II.5. Substable random vectors ...... 32
III. Exchangeability and pseudo-isotropy ...... 35
 III.1. Pseudo-isotropic exchangeable sequences ...... 35
 III.2. Schoenberg-type theorems ...... 40
 III.3. Some generalizations ...... 43
IV. Stable and substable stochastic processes ...... 45
 IV.1. Gaussian processes and Reproducing Kernel Hilbert Spaces ...... 45
 IV.2. Elliptically contoured processes ...... 47
 IV.3. Symmetric stable stochastic processes ...... 50
 IV.4. Spectral representation of symmetric stable processes ...... 56
 IV.5. Substable and pseudo-isotropic stochastic processes ...... 59
 IV.6. Lα-dependent stochastic integrals ...... 62
 IV.7. Random limit theorems ...... 63
V. Infinite divisibility of substable stochastic processes ...... 64
 V.1. Infinitely divisible distributions. Lévy measures ...... 66
 V.2. Approximative logarithm ...... 68
 V.3. Infinite divisibility of substable random vectors ...... 73
 V.4. Infinite divisibility of substable processes ...... 77
References ...... 80
Index ...... 90

1991 Mathematics Subject Classification: Primary 46A15, 60B11; Secondary 60G07, 46B20, 60E07, 60K99.

Received 2.3.1995; revised version 18.9.1995.

I. Introduction

This paper is devoted to a problem which can be expressed in a very simple way in elementary probability theory. Consider a symmetric random vector $X = (X_1, X_2)$ taking values in $\mathbb{R}^2$. For every line $\ell$ in $\mathbb{R}^2$ passing through the origin we define a random variable $\Pi_\ell(X)$ which is the orthogonal projection of $X$ onto $\ell$. In other words, $\Pi_\ell(X) = \langle e_\ell, X\rangle$, where $e_\ell$ is a unit vector contained in the line $\ell$, and $\langle\cdot,\cdot\rangle$ denotes the usual inner product in $\mathbb{R}^2$. The problem is to characterize all random vectors $X$ with the property that all orthogonal projections $\Pi_\ell(X)$ are equal in distribution up to a scale parameter. Equivalently, we can say that with every line $\ell$ there is associated a positive constant $c(\ell)$ such that $\Pi_\ell(X)$ has the same distribution as $c(\ell)\cdot X_1$. Random vectors having this property are called pseudo-isotropic.

The existence of such random vectors is evident: it is enough to notice that multidimensional symmetric Gaussian random vectors and multidimensional symmetric stable vectors are pseudo-isotropic. Another example is given by a random vector uniformly distributed on the unit sphere in $\mathbb{R}^2$. In this case we can see that the distribution of the orthogonal projection does not depend on the direction of the diameter on which we are projecting.

In the analytical description of the problem we want to characterize all symmetric random vectors $X = (X_1, X_2)$ such that their characteristic function $\varphi_X(\xi_1 t, \xi_2 t)$ is the same as the characteristic function $\varphi_{X_1}(c(\xi_1,\xi_2)t)$ of the random variable $c(\xi_1,\xi_2)\cdot X_1$. The function $c\colon \mathbb{R}^2 \to [0,\infty)$ has to be one-homogeneous. The equivalence easily follows from the equality

$$\xi_1 X_1 + \xi_2 X_2 = \sqrt{\xi_1^2 + \xi_2^2}\cdot \Pi_\ell(X),$$

where $\ell$ is a line passing through the origin, containing the unit vector

$$e_\ell = \left(\frac{\xi_1}{\sqrt{\xi_1^2+\xi_2^2}},\ \frac{\xi_2}{\sqrt{\xi_1^2+\xi_2^2}}\right), \qquad c(\xi_1,\xi_2) = \sqrt{\xi_1^2+\xi_2^2}\cdot c(\ell).$$

The considered property of random vectors can be easily generalized to random vectors taking values in spaces bigger than $\mathbb{R}^2$. In the paper we consider pseudo-isotropic random vectors taking values in $\mathbb{R}^n$, pseudo-isotropic sequences of random variables and pseudo-isotropic stochastic processes. Pseudo-isotropic random vectors taking values in infinite-dimensional linear spaces are not mentioned in the paper, but the interested reader can find a lot of useful information in the references.

By studying pseudo-isotropic random vectors and stochastic processes we do not only want to enrich the class of distributions with properties which are interesting and convenient for calculations. We also want to propose another method of studying symmetric stable random vectors and processes, which are strictly connected with the idea of pseudo-isotropy. Firstly, the scale parameter $c(\xi_1,\xi_2)$ can be given by the $L_\alpha$-norm for some $\alpha \in (0,2]$, i.e. there may exist a linear operator $\Re\colon \mathbb{R}^2 \to L_\alpha(S,\Sigma,\nu)$ for some measure space $(S,\Sigma,\nu)$ such that

$$(\ast)\qquad c(\xi_1,\xi_2) = \|\Re(\xi_1,\xi_2)\|_\alpha.$$

This means that the characteristic function of a pseudo-isotropic random vector $X$ can be of the form $\varphi(\|\Re(\xi_1,\xi_2)\|_\alpha)$, while the characteristic function of a symmetric $\alpha$-stable random vector is of the form $\exp\{-\|\Re(\xi_1,\xi_2)\|_\alpha^\alpha\}$. Secondly, representation $(\ast)$ holds under a very weak condition on the pseudo-isotropic random vector $X$: it is enough to assume that there exists $\varepsilon > 0$ such that $E|X_1|^\varepsilon < \infty$. Thirdly, every known function $c\colon \mathbb{R}^2 \to [0,\infty)$ which can appear in the definition of a pseudo-isotropic random vector admits a representation $(\ast)$ for some $\alpha \in (0,2]$.

Historically, 1938, the year in which Schoenberg published three papers on spherically symmetric random vectors and completely monotonic functions (see [215]–[217]), is considered the beginning of the investigation of pseudo-isotropic random vectors. We should remember, however, that even earlier, in the 1920s and 1930s, Paul Lévy and Aleksandr Yakovlevich Khinchin had introduced the idea of stable random variables and vectors, and also that the beginning of the investigations of Gaussian random vectors goes back to the beginning of the 18th century.

Note that spherically symmetric random vectors have a very special property. Namely, both the characteristic function and the multidimensional density function (if the latter exists) are constant on spheres centered at the origin. This was the reason why the theory developed by Schoenberg broke into two parts.
The first one, which we call pseudo-isotropy, describes random vectors and processes having all one-dimensional projections the same up to a scale parameter, which is equivalent to a fixed geometry of the level curves of the characteristic function. The second one, only occasionally appearing in this paper, describes random vectors and processes with a fixed geometry of the level curves of multidimensional density functions, and is usually connected with the de Finetti theorem, where the exchangeability σ-field is specified by the geometry.

Spherically symmetric random vectors and processes were extensively studied and turned out to be very useful in statistics and probability theory. Slightly generalizing the definition to vectors which are linear images of spherically symmetric random vectors, mathematicians considered elliptically contoured random vectors and spherically generated random vectors. We shall also mention here that the well-known sub-Gaussian random vectors and stochastic processes, in both possible definitions (i.e. random vectors which are mixtures of a symmetric Gaussian random vector, and random vectors having all weak moments proportional to the corresponding moments of some Gaussian random vector), are in fact elliptically contoured. More about the history of elliptically contoured random vectors and processes can be found in [44] and [172].

In 1967 Bretagnolle, Dacunha-Castelle and Krivine (see [29]) proved that all positive definite norm-dependent functions on infinite-dimensional $L_\alpha$-spaces are mixtures of characteristic functions of symmetric stable random vectors. This also shows that for $\alpha > 2$

the only positive definite norm-dependent function on an infinite-dimensional $L_\alpha$-space is constant, so the corresponding pseudo-isotropic random vector is zero with probability one. This paper was important for the theory because for the first time symmetric $\alpha$-stable random vectors appeared not only as an example of pseudo-isotropic random vectors, but it was shown that in some cases pseudo-isotropic random vectors have to be mixtures of symmetric stable vectors.

In 1976 Christensen and Ressel (see [47]) showed that a positive definite norm-dependent function on an infinite-dimensional normed space has to be a mixture of the function $\exp\{-\|\xi\|^2\}$. This result, giving only the necessary condition, is weaker than the previous one, because the function $\exp\{-\|\xi\|^2\}$ is positive definite (and defines the characteristic function of some Gaussian random vector) only on a Hilbert space. In 1988 [162] (see also [172], 1990) Misiewicz generalized this result and proved that a positive definite quasi-norm-dependent function on a linear space containing $\ell_\alpha^n$'s uniformly must be a mixture of the function $\exp\{-\|\xi\|^\alpha\}$. All these results connect the theory of pseudo-isotropic random vectors and processes with the well-developed geometry of vector spaces, the idea of spaces having a stable type or cotype. In this paper we pay a lot of attention to the connections between the geometrical properties of the Reproducing Kernel Space (defined for pseudo-isotropic random vectors in analogy to the Reproducing Kernel Hilbert Space, a well-known object in the theory of Gaussian random vectors) and some properties, especially infinite divisibility, of pseudo-isotropic random vectors.

Once we have noticed that the uniform distribution on the unit sphere in $\mathbb{R}^n$ is spherically symmetric, it is not difficult to see that all spherically symmetric distributions on $\mathbb{R}^n$ are mixtures of this special one. For a long time it was the only known theorem characterizing pseudo-isotropic random vectors.
When giving this characterization Schoenberg also gave a list of questions on the existence and characterization of pseudo-isotropic random vectors with different types of quasi-norms. It turned out that even for partial answers we had to wait for more than 50 years. Let me recall here the main of Schoenberg's questions (see [217]):

1. Let $p > 2$, $n > 2$; does there exist a positive number $\alpha$ such that $\exp\{-\|\xi\|_p^\alpha\}$ is positive definite on $\mathbb{R}^n$?

2. Does there exist a positive number $\alpha$ such that $\exp\{-\|\xi\|_\infty^\alpha\}$ is positive definite on $\mathbb{R}^n$?

The questions can be reformulated in the following way: does there exist $\alpha \le 2$ such that $\ell_p^n$, $p > 2$ (respectively $\ell_\infty^n$), embeds isometrically into some $L_\alpha$-space? The equivalence of these two formulations follows immediately from the Lévy spectral theorem for characteristic functions of symmetric stable random vectors taking values in $\mathbb{R}^n$ (see [138], §63). The restriction $\alpha < 2$ was already known to Schoenberg (see [217], §5). There were many attempts to solve these problems. In 1976 Dor [52] showed that for $\alpha > 1$, $\ell_p^2$ embeds isometrically into an $L_\alpha$-space if and only if $\alpha \le p \le 2$ or $p = 2$. The first Schoenberg problem was finally solved in 1991 by Koldobsky [121], who showed that $\ell_p^n$ does not embed into any $L_\alpha$-space if $n \ge 3$, $p > 2$, $\alpha \le 2$.

In 1989 Misiewicz [163] showed that if a random vector taking values in $\mathbb{R}^3$ is pseudo-isotropic with the quasi-norm being the $\ell_\infty$-norm on $\mathbb{R}^3$, then it is equal to zero with

probability one (this also answers the second of Schoenberg's questions in the negative). In 1991 Zastawny [245], and independently Lisitsky [143], showed that if a pseudo-isotropic random vector depends on the $\ell_\alpha$-norm in $\mathbb{R}^3$ for some $\alpha > 2$, then it is zero with probability one. These results are quite important as the first finite-dimensional examples of spaces on which every norm-dependent positive definite function must be constant. The examples known earlier were spaces connected with the result of Bretagnolle, Dacunha-Castelle and Krivine [29], i.e. spaces containing $\ell_\alpha^n$'s uniformly for some $\alpha > 2$.

Another problem lies in finding a full characterization of different types of pseudo-isotropic random vectors on $\mathbb{R}^n$. In fact, except for spherically symmetric (and, consequently, elliptically contoured) random vectors, the full characterization is known only in the case of $\ell_1$-symmetric random vectors, i.e. pseudo-isotropic random vectors with the quasi-norm $c(\xi) = \sum_{k=1}^n |\xi_k|$. This result, given by Cambanis, Keener and Simons in 1983 (see [38]), is based on a very interesting integral formula

$$\int_0^{\pi/2} f\left(\frac{x^2}{\sin^2 t} + \frac{y^2}{\cos^2 t}\right) dt = \int_0^{\pi/2} f\left(\frac{(|x|+|y|)^2}{\sin^2 t}\right) dt,$$

which holds for every measurable function $f$ for which one of these integrals exists. They showed that every $\ell_1$-symmetric random vector in $\mathbb{R}^n$ must be of the form

$$\big(U_1/\sqrt{D_1},\ldots,U_n/\sqrt{D_n}\big)\cdot\Theta,$$

where $U = (U_1,\ldots,U_n)$ has a uniform distribution on the unit sphere in $\mathbb{R}^n$, $D = (D_1,\ldots,D_n)$ has a Dirichlet distribution with parameters $(\frac12,\ldots,\frac12)$, $\Theta$ is a non-negative random variable, and $U$, $D$ and $\Theta$ are independent.

The question of the characterization of all other kinds of pseudo-isotropic random vectors in $\mathbb{R}^n$ remains open. In 1985 Richards (see [199], [200]) proposed a method for solving this problem, but he obtained only some partial results on the general representation of pseudo-isotropic random vectors. Also, the paper of Misiewicz and Richards [171] contains only necessary conditions for $\ell_\alpha$-symmetric random vectors. In fact, it has not even been shown till now that if we consider the set $\mathcal{M}_n(c)$ of all pseudo-isotropic distributions on $\mathbb{R}^n$ with the fixed quasi-norm $c$, then there exists one type of distribution, say $\mu_0$ (like the uniform distribution on the unit sphere in $\mathbb{R}^n$ for the quasi-norm $c(\xi) = \|\xi\|_2$), such that every $\mu \in \mathcal{M}_n(c)$ is a scale mixture of $\mu_0$, even though we know that $\mathcal{M}_n(c)$ is a convex, weakly closed set. However, we show in the paper that for every quasi-normed space $(E,c)$ the set of pseudo-isotropic random vectors with characteristic functions of the form $\varphi(c(\xi))$ is the weak closure of the set of convex linear combinations of its extreme points.

There is still another open problem. Every known quasi-norm $c$, admissible for non-trivial pseudo-isotropic random vectors, is given by formula $(\ast)$ for some $\alpha \in (0,2]$, i.e. $c$ is an $L_\alpha$-norm. But maybe it is possible to obtain an admissible quasi-norm which cannot be given by this formula? For now, we do not even know any method of studying this problem.
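Returning to the Cambanis–Keener–Simons integral identity displayed above, a quick numerical sanity check may be reassuring. The snippet below is only an illustration, not part of the original text; the function $f(u) = 1/(1+u)^2$ and the values $x = 1$, $y = 2$ are arbitrary choices.

```python
# Numerical check (illustration only) of the Cambanis-Keener-Simons identity
#   int_0^{pi/2} f(x^2/sin^2 t + y^2/cos^2 t) dt
#     = int_0^{pi/2} f((|x|+|y|)^2/sin^2 t) dt
# for the arbitrarily chosen f(u) = 1/(1+u)^2, x = 1, y = 2.
import math

def midpoint(g, a, b, n=20000):
    # composite midpoint rule; both integrands are smooth and vanish
    # rapidly at the endpoints of the interval
    h = (b - a) / n
    return h * sum(g(a + (i + 0.5) * h) for i in range(n))

f = lambda u: 1.0 / (1.0 + u)**2
x, y = 1.0, 2.0
lhs = midpoint(lambda t: f(x**2/math.sin(t)**2 + y**2/math.cos(t)**2),
               0.0, math.pi/2)
rhs = midpoint(lambda t: f((abs(x) + abs(y))**2/math.sin(t)**2),
               0.0, math.pi/2)
print(abs(lhs - rhs) < 1e-6)  # True: the two integrals agree
```

The same check passes for other integrable choices of $f$, as the identity requires.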
This paper is meant as a review of the theory of pseudo-isotropic random vectors and stochastic processes. However, we do not give here proofs of all the results, unless they are important for further considerations, completely new, or when the original

paper could be difficult to find. Most attention, however, was paid to the connections between properties of pseudo-isotropic random vectors and processes and the geometrical properties of their Reproducing Kernel Spaces.

Acknowledgements. The author would like to express her gratitude to Professor Czesław Ryll-Nardzewski for many inspiring discussions and critical remarks.

II. Pseudo-isotropic random vectors

In this chapter we give the basic properties of pseudo-isotropic random vectors, including known special cases. Some of the results given here are new. We also give a list of open questions in this area.

II.1. Symmetric stable vectors. Stable random variables and vectors play a crucial role in probability theory. Their investigation was started in the 1920s and 1930s by Paul Lévy and Aleksandr Yakovlevich Khinchin. The literature on this topic is very rich; see, e.g., P. Lévy [138], Gnedenko and Kolmogorov [72], Zolotarev [247], Ibragimov and Linnik [96]. Two books completely devoted to stable stochastic processes have appeared just recently: one by Samorodnitsky and Taqqu [212], the second by Janicki and Weron [98]. Also recently, Ledoux and Talagrand published the book "Probability in Banach Spaces" (see [134]), where stable random variables, vectors and processes are shown to play an important role in structure theorems for Banach spaces.

In this section we concentrate only on that part of the theory of stable distributions which will be useful in the theory of pseudo-isotropic random vectors. Namely, we give here the basic properties and representation theorems for symmetric stable random vectors. We also give some basic properties of the standard strictly positive $\alpha$-stable random variables $\Theta_\alpha$. Finally, we construct the reproducing kernel space for a symmetric $\alpha$-stable random vector in $\mathbb{R}^n$ in analogy to the reproducing kernel Hilbert space for Gaussian random vectors. All symmetric $\alpha$-stable vectors which appear in the paper are non-degenerate.

Definition II.1.1. A random variable $X$ is said to have a stable distribution if for every choice of positive numbers $a$ and $b$ there exist a positive number $c$ and a constant $d$ such that

$$(\mathrm{II.1.1})\qquad aX_1 + bX_2 \stackrel{d}{=} cX + d,$$

where $X_1$, $X_2$ are independent copies of $X$, and where $\stackrel{d}{=}$ denotes equality in distribution. A random variable $X$ is said to have a symmetric stable distribution if it has a stable distribution and $P\{X \in A\} = P\{X \in -A\}$ for every Borel set $A \subset \mathbb{R}$.

Theorem II.1.1. For any stable random variable $X$ there exists a number $\alpha \in (0,2]$ such that for every choice of positive numbers $a$ and $b$ there exists a constant $d$ such that

$$aX_1 + bX_2 \stackrel{d}{=} (a^\alpha + b^\alpha)^{1/\alpha} X + d,$$

where $X_1$, $X_2$ are independent copies of $X$.

See Feller [66], Theorem VI.1.1 for a proof. The constant $\alpha$ is called the index of stability or the characteristic exponent. A stable random variable $X$ with index $\alpha$ is called $\alpha$-stable. Proofs of the following basic properties of symmetric stable random variables can be easily found in almost every book on probability theory (see e.g. Feller [66]).

Consider a symmetric $\alpha$-stable random variable $X$. As $X$ and $-X$ have the same distribution, it is easy to see that for every $a, b \in \mathbb{R}$,

$$aX_1 + bX_2 \stackrel{d}{=} (|a|^\alpha + |b|^\alpha)^{1/\alpha} X,$$

where $X_1$ and $X_2$ are independent copies of $X$. Denote by $\Phi_X(\xi)$ the characteristic function of $X$. Then for all $a, b, \xi \in \mathbb{R}$,

$$\Phi_X(a\xi)\,\Phi_X(b\xi) = \Phi_X\big((|a|^\alpha + |b|^\alpha)^{1/\alpha}\xi\big),$$

and the only solution of this functional equation in the set of characteristic functions is

$$\Phi_X(\xi) = \exp\{-A|\xi|^\alpha\}$$

for some $A > 0$. The constant $A$, or rather the constant $A^{1/\alpha}$, is the scale parameter of the random variable $X$. We denote by $\mathcal{S}(\alpha, c)$ the distribution of the symmetric $\alpha$-stable random variable for which $A = c^\alpha$.

For $\alpha = 2$, the random variable $X$ with distribution $\mathcal{S}(2,c)$ has all moments, i.e. $X \in L_p$ for every $p > 0$. If $0 < \alpha < 2$ then it is not even true that the random variable $X$ with distribution $\mathcal{S}(\alpha, c)$ belongs to $L_\alpha$. However, it can be shown that in that case

$$\lim_{t\to\infty} t^\alpha P\{|X| > t\} = c_\alpha \cdot c^\alpha,$$

where $c_\alpha > 0$ depends only on $\alpha$. Therefore, $X$ has moments of order $r$ for every $0 < r < \alpha$, and

$$(\mathrm{II.1.2})\qquad E|X|^r = c_{\alpha,r}^r\, c^r, \quad 0 < r < \alpha,$$

where $c_{\alpha,r}$ depends only on $\alpha$ and $r$. For $\alpha \in (0,1]$ the standard strictly positive $\alpha$-stable random variable $\Theta_\alpha$ is defined by its Laplace transform $E\exp\{-t\Theta_\alpha\} = \exp\{-t^\alpha\}$, $t \ge 0$. It easily follows from the equality of the corresponding Laplace transforms that $\Theta_\alpha$ is an $\alpha$-stable random variable. Using Bernstein's theorem we also see that $\Theta_\alpha$ is concentrated on the positive half-line. Throughout the paper we will use the notation $\gamma_\alpha^+$ for the distribution of $\Theta_\alpha$. Only in one case is the density of $\gamma_\alpha^+$ given in an explicit form, namely

$$\gamma_{1/2}^+(dx) = \frac{1}{2\sqrt{\pi}}\frac{1}{\sqrt{x^3}}\,e^{-1/(4x)}\,dx, \quad x > 0;$$

for details see Feller [66], Examples II.4(f) and XIII.3(b). If $\alpha \in (0,1)$, then the density of $\Theta_\alpha$ can be obtained by the inverse Fourier transform of its characteristic function.
Substable and pseudo-isotropic processes 11

Namely, we have

$$\gamma_\alpha(dx) = \frac{1}{x\pi}\int_0^\infty \exp\Big\{-t - t^\alpha x^{-\alpha}\cos\frac{\pi\alpha}{2}\Big\}\,\sin\Big(t^\alpha x^{-\alpha}\sin\frac{\pi\alpha}{2}\Big)\,dt\ dx,$$

and the proof of this formula can be found in [96], Theorem 2.3.1(3).
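The scaling identity $\Phi_X(a\xi)\Phi_X(b\xi) = \Phi_X\big((|a|^\alpha+|b|^\alpha)^{1/\alpha}\xi\big)$ satisfied by $\Phi_X(\xi) = \exp\{-A|\xi|^\alpha\}$ can be checked directly. The snippet below is our illustration, not part of the original text; the values of $a$, $b$, $\xi$ and $A$ are arbitrary.

```python
# Direct check that Phi(xi) = exp(-A|xi|^alpha) solves the stability
# functional equation Phi(a xi) Phi(b xi) = Phi((|a|^a + |b|^a)^(1/a) xi).
import math

def char_fn(xi, alpha, A=1.0):
    # characteristic function of a symmetric alpha-stable law
    return math.exp(-A * abs(xi)**alpha)

a, b, xi = 0.7, -1.3, 0.42
for alpha in (0.5, 1.0, 1.5, 2.0):
    c = (abs(a)**alpha + abs(b)**alpha)**(1.0/alpha)
    lhs = char_fn(a*xi, alpha) * char_fn(b*xi, alpha)
    rhs = char_fn(c*xi, alpha)
    assert math.isclose(lhs, rhs), (alpha, lhs, rhs)
print("scaling identity holds for all tested alpha")
```

The check is exact up to floating-point rounding, since both sides equal $\exp\{-A(|a|^\alpha+|b|^\alpha)|\xi|^\alpha\}$.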

Definition II.1.2. A random vector $X = (X_1,\ldots,X_n)$ is said to be symmetric stable in $\mathbb{R}^n$ (notation: S$\alpha$S) if $P\{X \in A\} = P\{X \in -A\}$ for every Borel set $A \subset \mathbb{R}^n$, and if for every choice of $a, b \in \mathbb{R}$ there exists a positive constant $c$ such that

$$aX_1 + bX_2 \stackrel{d}{=} cX,$$

where $X_1$, $X_2$ are independent copies of $X$.

Theorem II.1.2. Let $X = (X_1,\ldots,X_n)$ be a symmetric stable random vector in $\mathbb{R}^n$. Then there exists $\alpha \in (0,2]$ such that all linear combinations of the components of $X$ are symmetric $\alpha$-stable random variables.

Proof. The definition of a symmetric stable random vector $X$ is equivalent to the following condition on its characteristic function $\Phi_X$: for every $a, b \in \mathbb{R}$ there exists a positive constant $c$ such that for every $\xi \in \mathbb{R}^n$,

$$\Phi_{aX}(\xi)\,\Phi_{bX}(\xi) = \Phi_{cX}(\xi).$$

Put $\xi = (\xi_1, 0, \ldots, 0) \in \mathbb{R}^n$. Then the characteristic function $\Phi_{X_1}$ of the first component $X_1$ of the random vector $X$ has the property $\Phi_{aX_1}(\xi_1)\Phi_{bX_1}(\xi_1) = \Phi_{cX_1}(\xi_1)$, which means that $X_1$ is a stable random variable with some index of stability $\alpha \in (0,2]$. Evidently, $X_1$ is symmetric as a component of a symmetric random vector $X$, hence $c^\alpha = |a|^\alpha + |b|^\alpha$.

Consider now a random variable $Y = \langle\xi, X\rangle = \sum_{k=1}^n \xi_k X_k$. Calculating the characteristic function $\Phi_Y$ of $Y$ we get

$$\Phi_{aY}(t)\,\Phi_{bY}(t) = \Phi_{aX}(t\xi)\,\Phi_{bX}(t\xi) = \Phi_{cX}(t\xi) = \Phi_{cY}(t),$$

which means that the random variable $Y$ is stable. As $Y$ is a linear combination of the components of a symmetric random vector $X$, it is also symmetric. If the index of stability for $Y$ is $\beta$, then $c^\beta = |a|^\beta + |b|^\beta$. Comparing with the stability of $X_1$ we have

$$(|a|^\beta + |b|^\beta)^{1/\beta} = (|a|^\alpha + |b|^\alpha)^{1/\alpha}$$

for every $a, b \in \mathbb{R}$; this, however, is possible only if $\alpha = \beta$. Now, $Y$ is a symmetric $\alpha$-stable random variable and thus there exists a positive constant $c(\xi)$ such that

$$\Phi_Y(t) = \exp\{-c(\xi)^\alpha |t|^\alpha\}, \quad t \in \mathbb{R}.$$

Corollary II.1.1. If every linear combination of the components of the random vector $X = (X_1,\ldots,X_n)$ in $\mathbb{R}^n$ is symmetric $\alpha$-stable then $X$ is symmetric $\alpha$-stable.

The next two theorems are known in the literature as the Lévy spectral representation for symmetric stable random vectors in $\mathbb{R}^n$ (see [138], §63).
In the language of the geometry of Banach spaces they can be expressed as follows: Let $X$ be a finite-dimensional linear space equipped with a quasi-norm, i.e. a continuous function $c\colon X \to [0,\infty)$ such that $c(x) = 0 \Leftrightarrow x = 0$ and $c(tx) = |t|\,c(x)$ for every

$t \in \mathbb{R}$, $x \in X$. Then the function $\exp\{-c(x)^\alpha\}$ is positive definite on $X$ if and only if $\alpha \le 2$ and $(X, c)$ embeds isometrically into some $L_\alpha$-space.

Lévy used finite measures $\nu$ on the unit sphere $S^{n-1} \subset \mathbb{R}^n$ (so his "embedding" was taken into the space $L_\alpha(S^{n-1}, \nu)$ with the correspondence $x \leftrightarrow \langle x, y\rangle$) in order to obtain uniqueness of the representation. For the purpose of this paper, however, it is better to omit this restriction.
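The positive definiteness statement can be probed numerically: for $\alpha \le 2$ the function $\exp\{-|t|^\alpha\}$ is a characteristic function on $\mathbb{R}$, so every matrix $\big[\exp\{-|t_i - t_j|^\alpha\}\big]_{i,j}$ built from a finite point set must be positive semi-definite. The sketch below is our illustration only; the grid of points is an arbitrary choice.

```python
# Positive definiteness probe (illustration only): Gram matrices of
# exp(-|t|^alpha) at pairwise differences must be PSD when alpha <= 2.
import numpy as np

def stable_gram(points, alpha):
    # M[i, j] = exp(-|t_i - t_j|^alpha)
    t = np.asarray(points, dtype=float)
    return np.exp(-np.abs(t[:, None] - t[None, :])**alpha)

points = np.linspace(0.0, 3.0, 8)
for alpha in (0.5, 1.0, 1.5, 2.0):
    smallest = np.linalg.eigvalsh(stable_gram(points, alpha)).min()
    assert smallest > -1e-8  # positive semi-definite up to rounding
# For alpha > 2 suitable point sets produce genuinely negative eigenvalues,
# in accordance with the restriction alpha <= 2 stated above.
```

This is of course only a necessary-condition check on finite grids, not a proof of positive definiteness.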

Theorem II.1.3. For every positive finite symmetric measure $\nu$ on $\mathbb{R}^n$ such that $\int_{\mathbb{R}^n} |\langle\xi, x\rangle|^\alpha\,\nu(dx) < \infty$ for every $\xi = (\xi_1,\ldots,\xi_n) \in \mathbb{R}^n$, the formula

$$\Phi(\xi) = \exp\Big\{-\int_{\mathbb{R}^n} |\langle\xi, x\rangle|^\alpha\,\nu(dx)\Big\}, \quad \xi \in \mathbb{R}^n,$$

defines the characteristic function of some symmetric $\alpha$-stable random vector $X = (X_1,\ldots,X_n)$ on $\mathbb{R}^n$.

Proof. Let $\nu$ be a positive finite symmetric measure on $\mathbb{R}^n$. If the characteristic function of the random vector $X = (X_1,\ldots,X_n)$ is given by the function $\Phi$, then evidently $X$ is symmetric $\alpha$-stable. So we only need to show that the function $\Phi$ is indeed a characteristic function of some random vector. To show this, let us define a family of probability measures

$$\mathrm{Exp}(m_\varepsilon) = \exp(-m_\varepsilon(\mathbb{R}^n)) \sum_{k=0}^\infty \frac{m_\varepsilon^{*k}}{k!},$$

where

$$m_\varepsilon(A) \equiv a^{-1}\int_\varepsilon^\infty \nu(A/s)\,s^{-\alpha-1}\,ds$$

for every Borel set $A \subset \mathbb{R}^n$, and where the constant $a$ is defined by

$$a = \int_0^\infty (1 - \cos s)\,s^{-\alpha-1}\,ds.$$

It is easy to see now that $\Phi$ is the characteristic function of the probability measure which is the weak limit of the probability measures $\mathrm{Exp}(m_\varepsilon)$ as $\varepsilon \searrow 0$, because

$$\lim_{\varepsilon\searrow 0}\int_{\mathbb{R}^n} e^{i\langle\xi,x\rangle}\,\mathrm{Exp}(m_\varepsilon)(dx) = \lim_{\varepsilon\searrow 0}\exp\Big\{-\int_{\mathbb{R}^n}(1-\cos\langle\xi,x\rangle)\,m_\varepsilon(dx)\Big\}$$
$$= \lim_{\varepsilon\searrow 0}\exp\Big\{-a^{-1}\int_{\mathbb{R}^n}|\langle\xi,x\rangle|^\alpha\,\nu(dx)\int_\varepsilon^\infty (1-\cos s)\,s^{-\alpha-1}\,ds\Big\} = \exp\Big\{-\int_{\mathbb{R}^n}|\langle\xi,x\rangle|^\alpha\,\nu(dx)\Big\}.$$

Theorem II.1.4. If a random vector $X = (X_1,\ldots,X_n)$ on $\mathbb{R}^n$ is symmetric $\alpha$-stable, then there exists a positive finite measure $\nu$ on $\mathbb{R}^n$ such that

$$E e^{i\langle\xi,X\rangle} = \exp\Big\{-\int_{\mathbb{R}^n}|\langle\xi,x\rangle|^\alpha\,\nu(dx)\Big\} \quad \text{for every } \xi \in \mathbb{R}^n.$$

Proof. We will follow here the proof given by Ledoux and Talagrand in [134]. Recall that if the random variable $Y$ has a symmetric $\alpha$-stable distribution $\mathcal{S}(\alpha, c)$ and $r < \alpha$ then $E|Y|^r = c_{\alpha,r}^r\,c^r$ (see formula (II.1.2)). It follows that for every $\xi = (\xi_1,\ldots,\xi_n) \in \mathbb{R}^n$,

$$E\exp\Big\{i\sum_{k=1}^n \xi_k X_k\Big\} = \exp\{-c(\xi)^\alpha\} = \exp\Big\{-c_{\alpha,r}^{-\alpha}\Big(E\Big|\sum_{k=1}^n \xi_k X_k\Big|^r\Big)^{\alpha/r}\Big\},$$

where $c(\xi)$ is the scale parameter of the random variable $\sum_{k=1}^n \xi_k X_k$. For every $r < \alpha$ define then a positive finite measure $m_r$ on the unit sphere $S = S_\infty^{n-1}$ for the sup-norm $\|\cdot\|_\infty$ on $\mathbb{R}^n$ by setting, for every bounded measurable function $\varphi$ on $S$,

$$\int_S \varphi(y)\,m_r(dy) = c_{\alpha,r}^{-r}\int_{\mathbb{R}^n} \varphi(x/\|x\|_\infty)\,\|x\|_\infty^r\,P_X(dx),$$

where $P_X$ is the distribution of $X = (X_1,\ldots,X_n)$. Hence for any $\xi = (\xi_1,\ldots,\xi_n) \in \mathbb{R}^n$,

$$E\exp\Big\{i\sum_{k=1}^n \xi_k X_k\Big\} = \exp\Big\{-\Big(\int_S \Big|\sum_{k=1}^n \xi_k x_k\Big|^r m_r(dx)\Big)^{\alpha/r}\Big\}.$$

Now the total mass $|m_r|$ of the measure $m_r$ is easily seen to be majorized by

$$|m_r| \le \Big[\inf_{x\in S}\sum_{k=1}^n |x_k|^r\Big]^{-1}\sum_{k=1}^n c(e_k)^r,$$

where $e_k$, $1 \le k \le n$, are the unit vectors of $\mathbb{R}^n$. Therefore $\sup_{r<\alpha}|m_r| < \infty$. Let $m$ be a cluster point (in the $\ast$-weak sense) of $\{m_r : r < \alpha\}$; $m$ is a positive finite measure which is clearly the spectral measure of $X$.
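The stated majorization of the total mass $|m_r|$ can be traced in one line. The following sketch is our addition (not in the original proof as printed); it uses formula (II.1.2) for the coordinates, $E|X_k|^r = c_{\alpha,r}^r\,c(e_k)^r$, together with the bound $\sum_k |x_k|^r \ge \|x\|_\infty^r \inf_{y\in S}\sum_k |y_k|^r$, which holds because $x/\|x\|_\infty \in S$:

```latex
|m_r| = c_{\alpha,r}^{-r}\,E\|X\|_\infty^r
      \le c_{\alpha,r}^{-r}\Big[\inf_{y\in S}\sum_{k=1}^n |y_k|^r\Big]^{-1}
          E\sum_{k=1}^n |X_k|^r
      = \Big[\inf_{y\in S}\sum_{k=1}^n |y_k|^r\Big]^{-1}\sum_{k=1}^n c(e_k)^r .
```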

Remark II.1.1. It is easy to see that for every finite positive symmetric measure $\nu$ on $\mathbb{R}^n$ such that $\int_{\mathbb{R}^n}|\langle\xi,x\rangle|^\alpha\,\nu(dx) < \infty$ for every $\xi = (\xi_1,\ldots,\xi_n) \in \mathbb{R}^n$ we can construct a finite positive symmetric measure $\nu_1$ on $S^{n-1} = \{x \in \mathbb{R}^n : \sum_{k=1}^n x_k^2 = 1\}$ such that for every $\xi \in \mathbb{R}^n$,

$$\int_{\mathbb{R}^n}|\langle\xi,x\rangle|^\alpha\,\nu(dx) = \int_{S^{n-1}}|\langle\xi,x\rangle|^\alpha\,\nu_1(dx).$$

It is enough to use spherical variables and integrate out the radial part. If the characteristic function of a symmetric $\alpha$-stable random vector $X$ is

$$\exp\Big\{-\int_{S^{n-1}}|\langle\xi,x\rangle|^\alpha\,\nu(dx)\Big\},$$

then the symmetric measure $\nu$ is called the spectral measure of $X$. For $0 < \alpha < 2$ the spectral measure of a symmetric $\alpha$-stable random vector is uniquely determined.

Example II.1.1. A random vector $(X_1,\ldots,X_n)$ is symmetric Gaussian if there exists a symmetric positive definite $n\times n$ matrix $\mathcal{R}$ such that the characteristic function is

$$E\exp\Big\{i\sum_{k=1}^n \xi_k X_k\Big\} = \exp\Big\{-\frac12\langle\xi, \mathcal{R}\xi\rangle\Big\}.$$

This means that for every $\xi \in \mathbb{R}^n$ the random variable $\sum_{k=1}^n \xi_k X_k$ has the same distribution as $(\langle\xi,\mathcal{R}\xi\rangle)^{1/2}X_0$, where the random variable $X_0$ has distribution $N(0,1)$. It is easy to see that a symmetric Gaussian random vector is symmetric 2-stable, thus for every

symmetric positive definite $n\times n$ matrix $\mathcal{R}$ there exists a finite positive measure $\nu$ on $S^{n-1}$ such that

$$\frac12\langle\xi,\mathcal{R}\xi\rangle = \int_{S^{n-1}}|\langle\xi,x\rangle|^2\,\nu(dx), \quad \xi \in \mathbb{R}^n.$$

However, in the case of symmetric Gaussian random vectors the spectral measure $\nu$ is not uniquely determined; we have e.g.

$$\sum_{k=1}^n \xi_k^2 = \int_{S^{n-1}}|\langle\xi,x\rangle|^2\,\sum_{k=1}^n \frac12(\delta_{e_k}+\delta_{-e_k})(dx) = \int_{S^{n-1}}|\langle\xi,x\rangle|^2\,c\,\lambda(dx),$$

where $e_k = (0,\ldots,0,1,0,\ldots,0)$, $\lambda$ is the uniform distribution on the unit sphere $S^{n-1}$ and $c$ is a suitable constant.

Example II.1.2. If the spectral measure of a symmetric $\alpha$-stable random vector $(X_1,\ldots,X_n)$ is of the form

$$\nu(dx) = \frac12\sum_{k=1}^n a_k(\delta_{e_k}+\delta_{-e_k})(dx)$$

for some positive constants $a_1,\ldots,a_n$, then the characteristic function of $(X_1,\ldots,X_n)$ can be written as

$$\Phi_X(\xi) = \exp\Big\{-\sum_{k=1}^n a_k|\xi_k|^\alpha\Big\}.$$

It is easy to see that in this case $X$ has independent components. The opposite implication also holds, i.e. if a symmetric $\alpha$-stable random vector $X$ has independent components, then its spectral measure is of the form $\nu(dx) = \frac12\sum_{k=1}^n a_k(\delta_{e_k}+\delta_{-e_k})(dx)$ for some positive constants $a_1,\ldots,a_n$.

Example II.1.3. Let $X = (X_1,\ldots,X_n)$ be a symmetric $\alpha$-stable random vector on $\mathbb{R}^n$ with spectral measure $\nu$ and let $\Theta_\beta$, where $\beta \in (0,1)$, be independent of $X$. Consider the random vector $Y = X\Theta_\beta^{1/\alpha}$. The characteristic function of $Y$ is of the form

$$E e^{i\langle\xi t, Y\rangle} = E\exp\Big\{-\Theta_\beta |t|^\alpha \int_{S^{n-1}}|\langle\xi,x\rangle|^\alpha\,\nu(dx)\Big\} = \exp\Big\{-|t|^{\alpha\beta}\Big(\int_{S^{n-1}}|\langle\xi,x\rangle|^\alpha\,\nu(dx)\Big)^\beta\Big\},$$

for every $\xi \in \mathbb{R}^n$ and $t \in \mathbb{R}$, which means that all linear combinations of components of $Y$ are symmetric $(\alpha\beta)$-stable random variables. From Corollary II.1.1 we see that the random vector $Y$ is also symmetric $(\alpha\beta)$-stable, so by Theorem II.1.4 there exists a finite positive measure $\nu_1$ on $S^{n-1}$ such that

$$E e^{i\langle\xi,Y\rangle} = \exp\Big\{-\int_{S^{n-1}}|\langle\xi,x\rangle|^{\alpha\beta}\,\nu_1(dx)\Big\}.$$

Finally, we see that for every $\alpha \in (0,2]$, $\kappa < \alpha$ and every finite positive measure $\nu$ on $S^{n-1}$ there exists a finite positive measure $\nu_1$ on $S^{n-1}$ such that

$$\Big(\int_{S^{n-1}}|\langle\xi,x\rangle|^\alpha\,\nu(dx)\Big)^{1/\alpha} = \Big(\int_{S^{n-1}}|\langle\xi,x\rangle|^\kappa\,\nu_1(dx)\Big)^{1/\kappa}.$$

To see this, it is enough to put $\beta = \kappa/\alpha$ in the previous considerations.
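Example II.1.2 can be made concrete numerically: for a discrete spectral measure with atoms of mass $a_k/2$ at $\pm e_k$, the spectral integral collapses to $\sum_k a_k|\xi_k|^\alpha$, and the characteristic function factorizes into one-dimensional stable factors, reflecting the independence of the components. The snippet below is our illustration only; $\alpha$ and the weights $a_k$ are arbitrary choices.

```python
# Illustration of Example II.1.2: a spectral measure concentrated on +-e_k
# yields a characteristic function that factorizes over the coordinates.
import math

alpha = 1.5
a = [0.5, 2.0, 1.0]           # nu has mass a_k/2 at +e_k and at -e_k
xi = [0.3, -1.2, 0.7]

# spectral integral  int |<xi, x>|^alpha nu(dx)  over the atoms +-e_k,
# where <xi, +-e_k> = +-xi_k
integral = sum(0.5*ak*abs(x)**alpha + 0.5*ak*abs(-x)**alpha
               for ak, x in zip(a, xi))

phi_joint = math.exp(-integral)                       # Phi_X(xi)
phi_product = math.prod(math.exp(-ak*abs(x)**alpha)   # product of marginals
                        for ak, x in zip(a, xi))
assert math.isclose(phi_joint, phi_product)
```

The equality of the joint characteristic function and the product of the marginal ones is exactly the independence statement of the example.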

For a symmetric $\alpha$-stable random vector $X$ with spectral measure $\nu$ let $\Re\colon \mathbb{R}^n \to L_\alpha(S^{n-1},\nu)$ be the linear operator defined by the formula $\Re(\xi) = \langle\xi,x\rangle$. The characteristic function of the random vector $X$ can now be written in the following, slightly more convenient way:

$$(\mathrm{II.1.4})\qquad E\exp\Big\{i\sum_{k=1}^n \xi_k X_k\Big\} = \exp\{-\|\Re(\xi)\|_\alpha^\alpha\}.$$

Define now a linear space $\mathcal{H}(X)$ as follows:

$$\mathcal{H}(X) = \Re(\mathbb{R}^n) = \{\langle\xi,x\rangle : \xi \in \mathbb{R}^n\} \subset L_\alpha(S^{n-1},\nu).$$

As we have seen in Example II.1.3, for every $\kappa < \alpha$ there exists a finite positive symmetric measure $\nu_1$ on $S^{n-1}$ such that $\mathcal{H}(X)$ embeds isometrically into $L_\kappa(S^{n-1},\nu_1)$. It can also happen that a given space $\mathcal{H}(X)$ connected with the symmetric $\alpha$-stable random vector embeds isometrically into some $L_\beta$-space for some $\beta > \alpha$. By $\alpha_\circ = \alpha_\circ(\Re)$ we will denote the following constant:

$$\alpha_\circ = \sup\{\beta \in (0,2] : \mathcal{H}(X) \text{ embeds isometrically into some } L_\beta\text{-space}\}.$$

Evidently $\alpha_\circ \ge \alpha$. The geometrical properties of the space $\mathcal{H}(X)$ will be the subject of extensive studies in Chapters IV and V. We mention that for a symmetric Gaussian random vector $X$, the space $\mathcal{H}(X)$ is called the Reproducing Kernel Hilbert Space, and it is indeed the Hilbert space defined on $\mathbb{R}^n$ by the inner product

$$\langle\xi,\eta\rangle = E(\langle\xi,X\rangle\langle\eta,X\rangle) = E\Big(\sum_{j,k=1}^n \xi_j\eta_k X_j X_k\Big), \quad \xi,\eta \in \mathbb{R}^n.$$

II.2. Pseudo-isotropic random vectors. In this section we give the definition and basic properties of pseudo-isotropic random vectors. The concept of pseudo-isotropic vectors appeared as a natural generalization of spherically invariant vectors, elliptically contoured vectors, $\alpha$-symmetric vectors (see Cambanis, Keener and Simons [38]) and norm-dependent vectors (see e.g. Bretagnolle, Dacunha-Castelle and Krivine [29]). All these kinds of random vectors were extensively studied, and will be described in Sections II.3 and II.4. The term pseudo-isotropic distributions was introduced by Misiewicz and Scheffer in 1992 (see [172]).
The same generality of definition was obtained by Eaton [58], [59] when defining random variables (probability measures on $\mathbb{R}$) with $n$-dimensional versions; however, in this language it was hard to talk about stochastic processes. We notice that, in spite of the generality of the definition, no example is known yet of a pseudo-isotropic random vector which is not $L_\alpha$-norm-symmetric for some $\alpha \in (0,2]$.

Definition II.2.1. A symmetric random vector $(X_1,\ldots,X_n)$ is said to be pseudo-isotropic if for every $\xi \in \mathbb{R}^n$, $\xi \ne 0$, there exists a positive constant $c(\xi)$ satisfying

$$\sum_{k=1}^n \xi_k X_k \stackrel{d}{=} c(\xi)X_1,$$

where $\stackrel{d}{=}$ denotes equality of distributions. Similarly, a symmetric probability measure (or a symmetric $\sigma$-finite measure) $\mu$ on $\mathbb{R}^n$ is said to be pseudo-isotropic if for every $\xi \in \mathbb{R}^n$,

$\xi \ne 0$, there exists a positive constant $c(\xi)$ such that for every Borel set $\mathcal{A} \subset \mathbb{R}$,

$$\mu\Big\{x \in \mathbb{R}^n : \sum_{k=1}^n \xi_k x_k \in \mathcal{A}\Big\} = \mu\{x \in \mathbb{R}^n : c(\xi)\cdot x_1 \in \mathcal{A}\}.$$

Remark II.2.1. Clearly, a single point mass at the origin is pseudo-isotropic, and mass at the origin can be added to or removed from a pseudo-isotropic measure without destroying pseudo-isotropy. We call a pseudo-isotropic measure pure if it gives no mass to the origin.

Remark II.2.2. Symmetric measures on a single line through the origin are not pseudo-isotropic on $\mathbb{R}^n$ (except for masses at the origin only). However, if $\mu$ is such a measure then every one of its one-dimensional orthogonal projections is obtained from $\mu$ by a non-negative homothety $T_a$, $a \ge 0$, where $T_a\mu(B) = \mu(B/a)$, $T_0\mu = \delta_0\,\mu(\mathbb{R})$.

Remark II.2.3. If $X = (X_1,\ldots,X_n)$ is a symmetric $\alpha$-stable random vector with spectral measure $\nu$, then it is pseudo-isotropic with the function $c$ given by $c(\xi)^\alpha = \|\Re(\xi)\|_\alpha^\alpha$ for the linear operator $\Re\colon \mathbb{R}^n \to L_\alpha(S^{n-1},\nu)$ defined as $\Re(\xi) = \langle\xi,x\rangle$.

Remark II.2.4. The definition of a pseudo-isotropic random vector resembles the following definition (equivalent, for symmetric variables, to Definition II.1.1) of a symmetric stable random variable $X$: for every $n \in \mathbb{N}$ and every choice of $\xi = (\xi_1,\ldots,\xi_n) \in \mathbb{R}^n$ there exists a positive constant $c = c(\xi)$ such that

$$\sum_{k=1}^n \xi_k X_k \stackrel{d}{=} c(\xi)X,$$

where $X_1,\ldots,X_n$ are independent copies of $X$.

Indeed, if $X = (X_1,\ldots,X_n)$ is pseudo-isotropic, then $X_k \stackrel{d}{=} c(e_k)X_1$, so without loss of generality we can assume that the $X_k$'s are identically distributed. Thus we can say that $X_1$, the one-dimensional projection of a pseudo-isotropic random vector $X = (X_1,\ldots,X_n)$, fulfills the $n$th condition of the above definition with the random variables $X_1,\ldots,X_n$ being not necessarily independent copies of the random variable $X_1$. Equivalently, we can say that if a pseudo-isotropic random vector has at least two independent components then it is symmetric $\alpha$-stable for some $\alpha \in (0,2]$.

Remark II.2.5.
Denote by ϕ(t), t R, the characteristic function of the first compo- ∈ nent X1 of the pseudo-isotropic random vector X =(X1,...,Xn). Then the characteristic function of X at the point ξ = (ξ1,...,ξn) is of the form E exp i ξ,X = E exp ic(ξ)X = ϕ(c(ξ)), { h i} { 1} so it has the same level curves as the function c. Moreover, if ϕ(c(ξ)) is the characteristic function of a non-degenerate pseudo-isotropic random vector X = (X1,...,Xn) then there exists another pseudo-isotropic random vector Y = (Y1,...,Yn) with characteristic function ψ(c(ξ)) for which the function ψ considered on [0, ) is positive, decreasing and ∞ one-to-one. To see this consider for example Y = X Θ, where Θ has a symmetric Cauchy · Substable and pseudo-isotropic processes 17

distribution, $E\exp\{it\Theta\} = \exp\{-|t|\}$, with $X$ and $\Theta$ independent. Indeed,
$$E\exp\{i\langle\xi,Y\rangle\} = E\exp\{ic(\xi)X_1\Theta\} = E\exp\{-c(\xi)\cdot|X_1|\} =: \psi(c(\xi)).$$

Proposition II.2.1 (Misiewicz and Scheffer [172], §4.1). Assume that the characteristic function of a pseudo-isotropic vector $X = (X_1,\dots,X_n)$ can be written in two different ways, $\varphi_1(c_1(\xi))$ and $\varphi_2(c_2(\xi))$. Then there exists a positive constant $a$ such that

$$c_1(\xi) = a\,c_2(\xi) \quad\text{and}\quad \varphi_1(t) = \varphi_2(t/a).$$
The basic properties of the function $c$ appearing in the definition of a pseudo-isotropic random vector are described in the following:

Theorem II.2.1 (Misiewicz and Ryll-Nardzewski [170]). If $(X_1,\dots,X_n)$ is a pseudo-isotropic random vector with function $c$, then:
1) $c(t\xi) = |t|\,c(\xi)$ for every $\xi \in \mathbb{R}^n$ and $t \in \mathbb{R}$;
2) $c : \mathbb{R}^n \to [0,\infty)$ is a continuous function;
3) if $\|\cdot\|$ is a norm on $\mathbb{R}^n$, then there exist positive constants $m, M$ such that for every $\xi \in \mathbb{R}^n$, $m\|\xi\| \le c(\xi) \le M\|\xi\|$.

Proof. Without loss of generality (see Remark II.2.5) we can assume that the characteristic function $\varphi(x)$ of $X_1$ is strictly decreasing on $[0,\infty)$. Now the first two properties follow directly from the definition of the function $c$ and the continuity of the corresponding characteristic function. To prove property 3) it is enough to notice that $c$ is a continuous function on the compact set $\{\xi \in \mathbb{R}^n : \|\xi\| = 1\}$.

Every function $c : \mathbb{R}^n \to [0,\infty)$ with the properties given in Theorem II.2.1 will be called a quasi-norm on $\mathbb{R}^n$. Notice that if $c$ is a quasi-norm on $\mathbb{R}^n$, then (by property 3)) there exists a positive constant $K$ such that
$$c(\xi+\eta) \le K\,(c(\xi) + c(\eta)) \quad \forall\,\xi,\eta \in \mathbb{R}^n.$$
The regularity of the level curves of the characteristic function of a pseudo-isotropic random vector imposes some regularity conditions on the distribution of its one-dimensional projections. Namely, we have:

Theorem II.2.2 (Misiewicz [164], Th. 2). Let $(X_1,X_2)$ be a pseudo-isotropic random vector. Then one of the following conditions holds:
1) $P\{X_1 \in U\} > 0$ for every open set $U \subset \mathbb{R}$;
2) $(X_1,X_2)$ is bounded and $c(\xi)^2 = \langle\xi,\mathcal{R}\xi\rangle$ for some symmetric positive definite $2\times 2$ matrix $\mathcal{R}$.

Proof. Without loss of generality assume that $c(1,0) = 1$. Assume also that there exists an open set, and consequently an open interval $(t,s) \subset \mathbb{R}$, such that $P\{t < X_1 < s\} = 0$. Now, for every $\xi \in \mathbb{R}^2$,
$$P\{t < c(\xi)^{-1}(\xi_1 X_1 + \xi_2 X_2) < s\} = P\{t < X_1 < s\} = 0.$$

The sets $A(\xi) = \{x \in \mathbb{R}^2 : c(\xi)^{-1}(\xi_1 x_1 + \xi_2 x_2) \in (t,s)\}$, $\xi \in \mathbb{R}^2$, are open cylinders in $\mathbb{R}^2$, and it is easy to see that
$$B \equiv \{x \in \mathbb{R}^2 : \|x\| > Mt\} \subset \bigcup_{\xi} A(\xi),$$
where $M = \sup\{c(\xi) : \|\xi\| = 1,\ \xi \in \mathbb{R}^2\}$ and $\|\cdot\|$ is the Euclidean norm on $\mathbb{R}^2$. To show that $P\{(X_1,X_2) \in B\} = 0$, notice that for any compact set $K \subset B$ there exists a finite sequence $\xi_{(1)},\dots,\xi_{(n)} \in \mathbb{R}^2$ such that $K \subset A(\xi_{(1)}) \cup \dots \cup A(\xi_{(n)})$, and we obtain
$$P\{(X_1,X_2) \in K\} \le \sum_{k=1}^n P\{(X_1,X_2) \in A(\xi_{(k)})\} = 0.$$
This means that the random vector $(X_1,X_2)$ takes values in the compact set $\mathbb{R}^2 \setminus B$ with probability one; hence, in particular, it has a finite weak second moment, and
$$\infty > E|\xi_1 X_1 + \xi_2 X_2|^2 = E|c(\xi)X_1|^2 = c(\xi)^2\cdot E|X_1|^2,$$
thus the function $c(\xi)$ is defined by the $L_2$-norm on $\mathbb{R}^2$, which ends the proof.

The next theorem was proved by M. Keane and the author in 1992, but the proof presented below has never been published (indicated by "NP").

Theorem II.2.3 (NP). Let $\mu$ be a $\sigma$-finite measure on $\mathbb{R}^2$ which is pure and pseudo-isotropic. Then $\mu(\ell) = 0$ for any straight line $\ell$ in $\mathbb{R}^2$.

Proof. For a measure $\mu$ on $\mathbb{R}^2$, denote by $\mathcal{P}(\mu)$ the collection of measures on $\mathbb{R}$ obtained by projecting $\mu$ orthogonally onto straight lines through the origin of $\mathbb{R}^2$, isometrically identified with $\mathbb{R}$. Our proof consists of two parts.

Part 1. For any point $Q \in \mathbb{R}^2$, $\mu(\{Q\}) = 0$. To see this, suppose that $\mu(\{Q\}) = q > 0$ for some $Q \in \mathbb{R}^2$. Since $\mu$ is pure, $Q \neq (0,0)$. Therefore some projection carries $Q$ over to $(0,0)$, and one of the measures in $\mathcal{P}(\mu)$ has an atom at the origin of mass at least $q$. Since all measures in $\mathcal{P}(\mu)$ are rescalings of the same measure, it follows that each measure in $\mathcal{P}(\mu)$ has an atom of mass $q$ at the origin. Translating back to $\mu$ yields $\mu(\ell) \ge q$ for each line $\ell$ through the origin in $\mathbb{R}^2$, which is impossible if $\mu$ is $\sigma$-finite, since $\mu(\{(0,0)\}) = 0$. Indeed, if $B_k$, $k \in \mathbb{N}$, are such that $\mu(B_k) < \infty$, $B_1 \subset B_2 \subset \dots$, and $\mathbb{R}^2 = \bigcup_{k=1}^\infty B_k$, then there exists at least one $k_\circ$ such that $\mu(B_{k_\circ} \cap \ell) > q/2$ for infinitely many lines $\ell$ through the origin. But then $\mu(B_{k_\circ}) \ge \sum \mu(B_{k_\circ} \cap \ell) = \infty$, a contradiction.

Part 2. For any line $\ell$ in $\mathbb{R}^2$, $\mu(\ell) = 0$. Suppose that $\mu(\ell) = q > 0$. Projecting in the direction of $\ell$ yields a measure in $\mathcal{P}(\mu)$ with an atom of mass $q$, not necessarily at the origin. Hence all measures in $\mathcal{P}(\mu)$ have an atom of mass $q$. This means that for each direction $\Theta$ in $\mathbb{R}^2$ there is a straight line $\ell_\Theta$ in direction $\Theta$ such that $\mu(\ell_\Theta) = q > 0$. Using Part 1, it is easy to see that this contradicts the $\sigma$-finiteness of $\mu$, since $\mu(\{Q\}) = 0$ for all $Q \in \mathbb{R}^2$.

It can easily be seen from the above that if $\mu$ is a $\sigma$-finite measure on $\mathbb{R}^n$ which is pure and pseudo-isotropic, then $\mu(\mathcal{L}) = 0$ for every proper hyperplane $\mathcal{L}$ in $\mathbb{R}^n$. If not, then there exists an $(n-1)$-dimensional hyperplane $\mathcal{L}_1$ with positive measure, and the projection $T$ of the measure $\mu$ onto the line orthogonal to $\mathcal{L}_1$ would have an atom at the point $T(\mathcal{L}_1)$, which is impossible.
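Definition II.2.1 is easy to probe numerically. The sketch below is an illustration, not part of the original text; the i.i.d. standard Gaussian case is an assumed special example, chosen because there $c(\xi)$ is the Euclidean norm. It compares the empirical distribution of $\sum_k \xi_k X_k$ with that of $c(\xi)X_1$:

```python
import numpy as np

# Empirical check of Definition II.2.1 for an i.i.d. standard Gaussian vector,
# for which c(xi) = ||xi||_2 (an assumed, well-known special case).
rng = np.random.default_rng(0)
n, N = 4, 200_000
X = rng.standard_normal((N, n))

xi = np.array([1.0, -2.0, 0.5, 3.0])
c = float(np.linalg.norm(xi))            # c(xi) = ||xi||_2

lhs = X @ xi                             # samples of sum_k xi_k X_k
rhs = c * X[:, 0]                        # samples of c(xi) X_1

# The two samples should share one distribution: compare central quantiles.
q = np.linspace(0.05, 0.95, 19)
gap = np.abs(np.quantile(lhs, q) - np.quantile(rhs, q)).max()
assert gap < 0.15
print("c(xi) =", round(c, 4), "max quantile gap =", round(float(gap), 4))
```

The same experiment run with i.i.d. symmetric $\alpha$-stable coordinates would exhibit $c(\xi) = \|\xi\|_\alpha$ instead.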

Let $c : \mathbb{R}^n \to [0,\infty)$ be a quasi-norm on $\mathbb{R}^n$. We define $\mathcal{M}(c,n)$ as the set of all symmetric probability distributions $\mu$ on $\mathbb{R}$ for which $\hat\mu(c(\xi))$, $\xi \in \mathbb{R}^n$, is the characteristic function of some (of course, pseudo-isotropic) measure on $\mathbb{R}^n$. It is not difficult to show the following (for details see [170]).

Theorem II.2.4 (Misiewicz and Ryll-Nardzewski [170]). For every $n \in \mathbb{N}$ and every quasi-norm $c : \mathbb{R}^n \to [0,\infty)$ the set $\mathcal{M}(c,n)$ has the following properties:
(i) if $\mu_1,\mu_2 \in \mathcal{M}(c,n)$, then $\mu_1 * \mu_2 \in \mathcal{M}(c,n)$;
(ii) if $\mu_1,\mu_2 \in \mathcal{M}(c,n)$ and $0 \le p \le 1$, then $p\mu_1 + (1-p)\mu_2 \in \mathcal{M}(c,n)$;
(iii) if $\{\mu_k\} \subset \mathcal{M}(c,n)$ and $\mu_k \Rightarrow \mu$, then $\mu \in \mathcal{M}(c,n)$;
(iv) if there exists $\mu \in \mathcal{M}(c,n)$, $\mu \neq \delta_0$, then $\mathcal{M}(c,n)$ also contains another measure $\mu_1 \neq \delta_0$ having both a density and a characteristic function infinitely differentiable on $\mathbb{R}\setminus\{0\}$.

Proof. The first three properties are obvious. For (iv) it is enough to take the pseudo-isotropic random vector $X = (X_1,\dots,X_n)$ with characteristic function $\hat\mu(c(\xi))$, $\xi \in \mathbb{R}^n$, and define $\mu_1$ as the distribution of the one-dimensional projection of the random vector $Y = X\cdot\Theta$ for a random variable $\Theta$ having, e.g., a symmetric Gaussian distribution or a symmetric Cauchy distribution.

The following proposition has not been published yet.

Proposition II.2.2 (NP). For every $n \in \mathbb{N}$ and every quasi-norm $c : \mathbb{R}^n \to [0,\infty)$ there exists a set of extreme points $\mathrm{Extr}(c,n) \subset \mathcal{M}(c,n)$ such that
(i) $\delta_0 \in \mathrm{Extr}(c,n)$;
(ii) for every $\mu \in \mathrm{Extr}(c,n)$ and every $a > 0$, the measure $T_a(\mu) \in \mathrm{Extr}(c,n)$;
(iii) the set $\mathcal{M}(c,n)$ is equal to the intersection of the set of all symmetric probability measures and the weak closure of the set of all convex linear combinations of elements of $\mathrm{Extr}(c,n)$.

Proof. Consider the set of measures $\mathcal{M}(c,n)$ as a subset of the symmetric probability measures on $[-\infty,\infty]$. By $K$ we denote the weak closure of $\mathcal{M}(c,n)$. Then $K$ is a convex, weakly closed set of measures on the compact set $[-\infty,\infty]$, so it contains its extreme points $\mathrm{Extr}(K)$.
It is easy to see that $\delta_0 \in \mathrm{Extr}(K)$, and it is also evident that condition (ii) holds for every $\mu \in \mathrm{Extr}(K)$. We only need to show that if $\mu \in \mathrm{Extr}(K)$ then $\mu \in \mathcal{M}(c,n)$ or $\mu((-\infty,\infty)) = 0$. Thus, assume that $\mu \in \mathrm{Extr}(K)$. Then $\mu = \alpha\mu_0 + \frac{1-\alpha}{2}(\delta_{-\infty} + \delta_{\infty})$, where $\mu_0$ is a symmetric probability measure on $\mathbb{R}$. Let $\mu_k \in \mathcal{M}(c,n)$ converge weakly to $\mu$. It is easy to see that
$$\alpha\hat\mu_0(t) = \lim_{r\to\infty}\lim_{k\to\infty}\int_{-r}^{r} e^{itx}\,\mu_k(dx).$$
Let $\nu_k$ denote the pseudo-isotropic measure on $\mathbb{R}^n$ with characteristic function $\hat\mu_k(c(\xi))$. We need to show that
$$\text{(a)}\qquad \lim_{r\to\infty}\lim_{k\to\infty}\int_{\{\|x\|<r\}} e^{i\langle\xi,x\rangle}\,\nu_k(dx) = \alpha\hat\mu_0(c(\xi)).$$
This equality means that the function $\hat\mu_0(c(\xi))$ is positive definite on $\mathbb{R}^n$ as a limit of positive definite functions, thus $\mu_0 \in \mathcal{M}(c,n)$ and, consequently, $\alpha = 1$ or $\alpha = 0$, which was to be shown. In order to prove (a) note that for every $\xi \in S^{n-1}$ we have
$$\text{(b)}\qquad \int e^{i\langle t\xi,x\rangle}\,\mathbf{1}_{\{|\langle\xi,x\rangle|<r\}}\,\nu_k(dx) = \int_{-r/c(\xi)}^{r/c(\xi)} e^{ic(t\xi)x}\,\mu_k(dx),$$
since the image of $\nu_k$ under $x \mapsto \langle\xi,x\rangle$ coincides with the image of $\mu_k$ under $x \mapsto c(\xi)x$, while for the remaining region $B(r,\xi)$ (a cylinder with $(n-1)$-dimensional basis) we have
$$\Big|\int_{B(r,\xi)} e^{i\langle t\xi,x\rangle}\,\nu_k(dx)\Big| \le \nu_k(B(r,\xi)) \le C\varepsilon,$$
which together with (b) completes the proof of (a). Condition (iii) follows trivially from these considerations.

The next theorem, stating that every two-dimensional Banach space embeds isometrically into some $L_1$-space, has been proved by several authors in different ways; see e.g. Ferguson 1962 [67], Herz 1963 [92], Lindenstrauss 1964 [141], Assouad 1979–1980 [14], [15], or Misiewicz and Ryll-Nardzewski 1989 [170]. We recall here an outline of the proof from [170], as the most useful for direct calculations.

Theorem II.2.5. A function $\psi(t,s) = \exp\{-c(t,s)\}$, $t,s \in \mathbb{R}$, is the characteristic function of a pseudo-isotropic, 1-stable random vector $(X_1,X_2)$ if and only if the function $c(t,s)$ defines a norm on $\mathbb{R}^2$.

Proof. If $\psi(t,s)$ is the characteristic function of a symmetric 1-stable random vector

$(X_1,X_2)$, then
$$c(t,s) = \int_{S^1} |tx + sy|\,\nu(dx,dy)$$
for some positive finite measure $\nu$ on $S^1 \subset \mathbb{R}^2$; hence $c$ is an $L_1$-norm on $\mathbb{R}^2$. Now we need only prove that for every norm $c(t,s)$ on $\mathbb{R}^2$ there exists a finite measure $\nu$ on $(0,2\pi]$ such that
$$c(t,s) = \int_0^{2\pi} |t\cos\varphi + s\sin\varphi|\,\nu(d\varphi).$$
Let us define a function $q$ as follows: $q(\varphi) = c(\cos\varphi,\sin\varphi)$, and assume for a while that $q$ has a continuous second derivative (this means that the norm $c$ is smooth enough). In this case the convexity of the set $\{(t,s) : c(t,s) \le 1\}$ together with the homogeneity of the function $c$ is equivalent to the inequality $q'' + q \ge 0$. It is easy to check that
$$4c(t,s) = \int_0^{2\pi} |t\cos\varphi + s\sin\varphi|\,\big(q''(\varphi - \pi/2) + q(\varphi - \pi/2)\big)\,d\varphi = \int_0^{2\pi} |r\cos(\varphi - \varphi_0)|\,\big(q''(\varphi - \pi/2) + q(\varphi - \pi/2)\big)\,d\varphi,$$
where $t = r\cos\varphi_0$, $s = r\sin\varphi_0$, thus $c(t,s) = r\,q(\varphi_0)$. We obtain an explicit formula for the density of the measure $\nu$:
$$\nu(d\varphi) = \tfrac14\big(q''(\varphi - \pi/2) + q(\varphi - \pi/2)\big)\,d\varphi.$$
Less smooth norms can always be approximated by norms which are smooth enough.

Example II.2.1. Let $c(t,s) = (|s|^\alpha + |t|^\alpha)^{1/\alpha}$ for some $\alpha > 1$. It is only a matter of laborious calculations to check that in this case
$$q''(\varphi - \pi/2) + q(\varphi - \pi/2) = q''(\varphi) + q(\varphi) = (\alpha - 1)\,|\cos\varphi\sin\varphi|^{\alpha-2}\,\big(|\cos\varphi|^\alpha + |\sin\varphi|^\alpha\big)^{1/\alpha - 2}$$
for $\varphi \neq \frac{\pi}{2}k$, $k \in \{1,2,3,4\}$. Theorem II.2.5 states that $q'' + q$ is the density of the spectral measure of the two-dimensional symmetric Cauchy random vector $(X_1,X_2)$ with characteristic function $\exp\{-(|s|^\alpha + |t|^\alpha)^{1/\alpha}\}$.

The next theorem (see [172], §4, 1987), apparently rather trivial, states that $L_\alpha$-norm-symmetric random vectors, $\alpha \le 2$, play a crucial role in the theory of pseudo-isotropic vectors. It shows that if there exists a pseudo-isotropic random vector $X = (X_1,\dots,X_n)$ with function $c$ which cannot be expressed as an $L_\alpha$-norm for some $\alpha \in (0,2]$, then $X_1$ does not have any positive moment. We should underline here that the problem of the existence of a quasi-norm $c$ which cannot be expressed as an $L_\alpha$-norm for some $\alpha \le 2$, but for which there exists a non-trivial function $\varphi$ such that $\varphi(c(\cdot))$ is positive definite, is still open. Because of Theorem II.2.6 we are mainly interested in this paper in subspaces of $L_\alpha$-spaces for $\alpha \le 2$.

Theorem II.2.6 (Misiewicz [164], Th. 1). Assume that the random vector $X = (X_1,\dots,X_n) \in \mathbb{R}^n$ is pseudo-isotropic and that there exists $\varepsilon > 0$ such that $E|X_1|^\varepsilon < \infty$. Then there exist a maximal positive number $\alpha \in (0,2]$ and a corresponding finite positive symmetric measure $\nu$ on the unit sphere $S^{n-1} \subset \mathbb{R}^n$ such that
$$c(\xi)^\alpha = \int_{S^{n-1}} \Big|\sum_{k=1}^n \xi_k x_k\Big|^\alpha\,\nu(dx).$$

Proof. Denote by $\nu_1$ the distribution of the random vector $X$. Let $p = \min\{\varepsilon,2\}$. Without loss of generality we can assume that $E|X_1|^p = 1$. Notice that
$$c(\xi)^p = E|c(\xi)X_1|^p = E\Big|\sum_{k=1}^n \xi_k X_k\Big|^p = \int_{\mathbb{R}^n} |\langle\xi,x\rangle|^p\,\nu_1(dx).$$
It follows from Theorem II.1.3 that there exists a symmetric $p$-stable random vector $Y$ with characteristic function $\exp\{-c(\xi)^p\}$. This means that the function $c(\xi)^p$ is negative definite on $\mathbb{R}^n$ (see the definition on page 34). Define now
$$\alpha = \sup\{p \in (0,2] : c(\xi)^p \text{ is a negative definite function on } \mathbb{R}^n\}.$$
Since a limit of negative definite functions is negative definite, it follows that $c(\xi)^\alpha = \lim c(\xi)^p$ for $p \nearrow \alpha$ is negative definite on $\mathbb{R}^n$; consequently the function $\exp\{-c(\xi)^\alpha\}$ is positive definite on $\mathbb{R}^n$, and therefore it is the characteristic function of some random vector $(Z_1,\dots,Z_n)$. For every $t \in \mathbb{R}$ and every $\xi \in \mathbb{R}^n$ we have
$$E\exp\Big\{it\sum_{k=1}^n \xi_k Z_k\Big\} = \exp\{-|t|^\alpha c(\xi)^\alpha\},$$
which means that all one-dimensional projections of the random vector $(Z_1,\dots,Z_n)$ are symmetric $\alpha$-stable, and consequently (see Corollary II.1.1) the random vector $(Z_1,\dots,Z_n)$ is symmetric $\alpha$-stable. Let $\nu$ be the spectral measure of $(Z_1,\dots,Z_n)$; then the characteristic function of $Z$ can be written as
$$E\exp\Big\{i\sum_{k=1}^n \xi_k Z_k\Big\} = \exp\Big\{-\int_{S^{n-1}} |\langle\xi,x\rangle|^\alpha\,\nu(dx)\Big\},$$
which leads to the desired expression for $c(\xi)$.

The next theorem was proven by D. Richards in 1985 (see [199], [200]).

Theorem II.2.7. Let $X = (X_1,\dots,X_n)$ be a pseudo-isotropic random vector on $\mathbb{R}^n$ with characteristic function $\varphi(c(\xi))$, $\xi \in \mathbb{R}^n$. If $\varphi(c(\xi)) \in L_1(\mathbb{R}^n)$, or if $\int_0^\infty r^{n-1}\varphi(r)\,dr < \infty$, then the density function $f(x)$ of $X$ can be written in the following way:
$$f(x) = \frac{1}{(2\pi)^n}\int_0^\infty r^{n-1}\varphi(r)\,I_c(xr)\,dr,$$
where
$$I_c(x) \equiv \int_{\{c(u)=1\}} \cos\langle x,u\rangle\,\omega(du), \quad x \in \mathbb{R}^n,$$
and
$$\omega(du) = \sum_{k=1}^n (-1)^{k+1} u_k\,du_1\dots du_{k-1}\,du_{k+1}\dots du_n.$$

Corollary II.2.1. Under the assumptions of Theorem II.2.7, there exists a probability measure $\nu$ on $\{y \in \mathbb{R}^n : c(y) = 1\}$ such that the density function $f(x)$ of the pseudo-isotropic random vector $X$ can be written as
$$f(x) = C\int_0^\infty r^{n-1}\varphi(r)\,\hat\nu(xr)\,dr, \quad\text{where}\quad \hat\nu(x) = \int_{\{c(y)=1\}} \exp\{i\langle x,y\rangle\}\,\nu(dy).$$

Proof. It was shown by Richards [199] that the restriction of the measure $\omega$ to the set $\{y \in \mathbb{R}^n : c(y) = 1\}$ is finite and positive. Hence the measure $\omega(\cdot)/I_c(0)$ is a probability measure on $\{y \in \mathbb{R}^n : c(y) = 1\}$, and the formula for the density $f(x)$ easily follows from Theorem II.2.7 with $C = I_c(0)/(2\pi)^n$.
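The inversion formula of Theorem II.2.7 can be checked numerically in the rotationally invariant case $n = 2$, where $I_c$ is an integral over the unit circle and the standard Gaussian density must be recovered. The quadrature below is only an illustrative sketch (the grid sizes and the choice $\varphi(r) = e^{-r^2/2}$ are assumptions made for this example):

```python
import numpy as np

def trap(y, x):
    # simple trapezoid rule (avoids np.trapz / np.trapezoid naming differences)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def I2(s, m=2000):
    # I_c for the Euclidean norm on R^2: integral of cos(<x,u>) over the unit
    # circle, which by rotational symmetry depends only on s = ||x||_2.
    theta = np.linspace(0.0, 2.0 * np.pi, m)
    return trap(np.cos(s * np.cos(theta)), theta)

def density(s, rmax=12.0, m=3000):
    # Theorem II.2.7 with n = 2: f(x) = (2 pi)^{-2} int_0^inf r phi(r) I2(||x|| r) dr
    r = np.linspace(0.0, rmax, m)
    phi = np.exp(-r**2 / 2.0)          # characteristic function of the standard Gaussian
    vals = np.array([I2(si) for si in s * r])
    return trap(r * phi * vals, r) / (2.0 * np.pi)**2

for s in (0.0, 0.7, 1.5):
    exact = np.exp(-s**2 / 2.0) / (2.0 * np.pi)   # standard bivariate normal density
    assert abs(density(s) - exact) < 1e-3
```

The Gaussian factor damps the oscillatory integrand, so plain trapezoid quadrature already reproduces the density to three decimal places.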

Proposition II.2.3 (NP). Let $X = (X_1,\dots,X_n)$ and $Y = (Y_1,\dots,Y_n)$ be independent pseudo-isotropic random vectors with characteristic functions $\varphi_1(c_1(\xi))$ and $\varphi_2(c_2(\xi))$ respectively. If $X + Y$ is pseudo-isotropic with characteristic function $\varphi(c(\xi))$, then either there exist positive constants $k_1, k_2$ such that $c_1(\xi) = k_1 c_2(\xi) = k_2 c(\xi)$ for all $\xi \in \mathbb{R}^n$, or there exist positive numbers $m < M$ such that for every $r/s \in [m,M]$ and every $t > 0$,
$$\varphi_1(rt)\,\varphi_2(st) = \varphi(c(r,s)\,t).$$

Proof. If there exists a positive constant $k_1 > 0$ such that $c_1(\xi) = k_1 c_2(\xi)$ for all $\xi \in \mathbb{R}^n$, then the characteristic function of $X + Y$ can be written as $\psi(c_2(\xi))$ for $\psi(t) = \varphi_1(k_1 t)\varphi_2(t)$. By Proposition II.2.1 there exists a positive constant $a$ such that $c(\xi) = a\,c_2(\xi)$, thus $k_2 = k_1/a$. Assume now that there is no $k > 0$ such that $c_1(\xi) = k\,c_2(\xi)$ for all $\xi \in \mathbb{R}^n$, and consider the function $q(\xi) = c_1(\xi)/c_2(\xi)$, $\xi \neq 0$. Since $c_2(\xi) > 0$ for every $\xi \neq 0$, and both functions $c_1$ and $c_2$ are continuous, the function $q$ attains its extremes on the unit sphere $S^{n-1}$ and $q(S^{n-1}) = [m,M] \subset (0,\infty)$. Choose $\xi \in \mathbb{R}^n$ such that $q(\xi) = r/s \in [m,M]$. Then
$$\varphi_1(rt)\cdot\varphi_2(st) = \varphi_1\Big(c_1\Big(\frac{st}{c_2(\xi)}\,\xi\Big)\Big)\cdot\varphi_2\Big(c_2\Big(\frac{st}{c_2(\xi)}\,\xi\Big)\Big) = \varphi\Big(\frac{s\,c(\xi)}{c_2(\xi)}\,t\Big),$$
thus the statement holds with $c(r,s) = s\,c(\xi)/c_2(\xi)$.
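Before moving on, the representation of Theorem II.2.5 can be verified by direct numerical quadrature, which is exactly the kind of calculation the outline above was designed for. The sketch below is illustrative; the choice of the $\ell_\alpha$-norm with $\alpha = 3$ and the finite-difference approximation of $q''$ are assumptions made for this example. For the $\ell_\alpha$-norm $q$ is $\pi/2$-periodic, so the shift by $\pi/2$ in the representation $4c(t,s) = \int_0^{2\pi}|t\cos\varphi + s\sin\varphi|\,(q''+q)(\varphi-\pi/2)\,d\varphi$ drops out:

```python
import numpy as np

alpha = 3.0                                   # l_alpha norm; any norm on R^2 qualifies
def c(t, s):
    return (np.abs(t)**alpha + np.abs(s)**alpha)**(1.0 / alpha)

def q(phi):
    return c(np.cos(phi), np.sin(phi))        # q(phi) = c(cos phi, sin phi)

def trap(y, x):
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

h = 1e-4
phi = np.linspace(0.0, 2.0 * np.pi, 200_001)
# finite-difference q'' + q; convexity of the unit ball makes this non-negative
g = (q(phi + h) - 2.0 * q(phi) + q(phi - h)) / h**2 + q(phi)
assert g.min() > -1e-4

for (t, s) in [(1.0, 0.0), (1.0, 1.0), (-2.0, 0.5)]:
    rep = 0.25 * trap(np.abs(t * np.cos(phi) + s * np.sin(phi)) * g, phi)
    assert abs(rep - c(t, s)) < 1e-3
```

Replacing the finite-difference $q''$ with the closed form of Example II.2.1 gives the same numbers, which is a convenient cross-check of that formula.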

II.3. Elliptically contoured vectors. The investigations of elliptically contoured distributions started in 1938 with the paper [215] of Schoenberg. This paper was devoted n to random vectors invariant under isometries in R and in ℓ2. Later on this concept was generalized to the elliptically contoured random vectors, which are in fact images of vectors invariant under isometries through linear operators. In this paper we recall only some basic properties of elliptically contoured random vectors. For further information we refer the reader to [44], which treats the problem mainly from the statistical point of view, and [172], where emphasis is put on measure theory. Both papers contain rich bibliographies. In this paper we discuss only some characterizing representations and properties of elliptically contoured random vectors, as they can be helpful in the general theory of pseudo-isotropic random vectors. Because of this, from the collection of equivalent defi- nitions of elliptically contoured random vectors we choose here the definition of pseudo- isotropic random vector with the function c specified as a norm defined by an inner product on Rn.

Definition II.3.1. A random vector $X = (X_1,\dots,X_n)$ is elliptically contoured if it is pseudo-isotropic with a function $c : \mathbb{R}^n \to [0,\infty)$ defined by an inner product on $\mathbb{R}^n$; i.e. there exists a symmetric positive definite $n\times n$ matrix $\mathcal{R}$ such that
$$c(\xi)^2 = \langle\xi,\mathcal{R}\xi\rangle \quad \forall\,\xi \in \mathbb{R}^n.$$

Remark II.3.1. If $\mathcal{R} = I$, i.e. if $c(\xi)^2 = \sum_{k=1}^n \xi_k^2$, then elliptically contoured random vectors with such $c$ are also called in the literature rotationally invariant, spherically generated or spherically contoured (see Askey [12], Box [28], Gualtierotti [80], Huang and Cambanis [95], Kelker [116], [117], Kingman [119], [120], Letac [136]).

From now on we will use the notation $\mathcal{EC}(\varphi,\mathcal{R},n)$ for the distribution of an elliptically contoured random vector $X = (X_1,\dots,X_n)$ with $c(\xi)^2 = \langle\xi,\mathcal{R}\xi\rangle$ and $E\exp\{itX_1\} = \varphi(t)$. The following lemma was proved by Crawford (see [48]) in 1977, originally for absolutely continuous distributions:

Lemma II.3.1. Let $X = (X_1,\dots,X_n)$ be elliptically contoured with distribution $\mathcal{EC}(\varphi,\mathcal{R},n)$, $\mathcal{R} = B^TB$, and let $C$ be a non-singular $n\times n$ matrix. If $Y = B^{-1}CX$, then $Y$ is elliptically contoured with distribution $\mathcal{EC}(\varphi, C^TC, n)$.

As a corollary, a random vector $X$ on $\mathbb{R}^n$ is elliptically contoured if and only if there exists a non-degenerate linear operator $B : \mathbb{R}^n \to \mathbb{R}^n$ such that $B^{-1}X$ is rotationally invariant. The next crucial result was proved in 1938 by Schoenberg (see [215], [217]).

Theorem II.3.1. A random vector $X = (X_1,\dots,X_n)$ is rotationally invariant if and only if there exists a non-negative random variable $\Theta$ such that $X \overset{d}{=} (U_1,\dots,U_n)\Theta$, where the random vector $U^{(n)} = (U_1,\dots,U_n)$ is independent of $\Theta$ and has a uniform distribution on the unit sphere $S^{n-1} = \{x \in \mathbb{R}^n : \sum_{k=1}^n x_k^2 = 1\}$.

Proof. It is enough to define $\Theta = \|X\|_2$ and check that $(U_1,\dots,U_n)\Theta$ and $X$ have the same characteristic function.

Consider the random vector $U^{(n)} = (U_1,\dots,U_n)$ defined in Theorem II.3.1. It is evident that the distribution of $U^{(n)}$, being supported on $S^{n-1}$, cannot be absolutely continuous with respect to the Lebesgue measure on $\mathbb{R}^n$. The distribution of $U^{(n)}$ is of sign-symmetric Dirichlet type with parameters $(2,\dots,2;1,\dots,1)$, i.e. the following conditions hold:
(i) $(U_1,\dots,U_n)$ is a sign-symmetric random vector;
(ii) $\sum_{k=1}^n U_k^2 = 1$ with probability one;
(iii) the joint density function of $(U_1,\dots,U_{n-1})$ is
$$\frac{\Gamma(n/2)}{\Gamma(1/2)^n}\Big(1 - \sum_{k=1}^{n-1} u_k^2\Big)_+^{-1/2},$$
where $(a)_+ = a$ whenever $a \ge 0$, and $(a)_+ = 0$ otherwise.

One can show that the joint density function of the first $k$ components $(U_1,\dots,U_k)$, $k < n$, is
$$\frac{\Gamma(n/2)}{\Gamma((n-k)/2)\,\Gamma(1/2)^k}\Big(1 - \sum_{j=1}^{k} u_j^2\Big)_+^{(n-k)/2-1};$$
for details see [81]. Now, we have the following:

Theorem II.3.2. The marginal density function of $(X_1,\dots,X_k)$, $k < n$, is $\alpha$-times monotonic with $\alpha = (n-k)/2$, where a function $g$ is called $\alpha$-times monotonic if it can be written as
$$g(r) = \int_0^\infty (1 - ru)_+^{\alpha-1}\,dF(u),$$
with $F$ a non-decreasing, non-negative function. In the case of $\alpha \in \mathbb{N}$ the function $g$ is $\alpha$-times monotonic if and only if it is $\alpha$-times differentiable and $(-1)^k g^{(k)}(t) \ge 0$ for every $0 \le k \le \alpha$.

Evidently, $U^{(n)}$ is pseudo-isotropic, thus its one-dimensional projections are all the same and
$$E\exp\Big\{i\sum_{k=1}^n \xi_k U_k\Big\} = E\exp\Big\{i\Big(\sum_{k=1}^n \xi_k^2\Big)^{1/2} U_1\Big\} = \frac{\Gamma(n/2)}{\Gamma((n-1)/2)\,\Gamma(1/2)}\int_{-1}^{1}\cos(\|\xi\|_2\,u)\,(1 - u^2)^{(n-3)/2}\,du \equiv \Omega_n(\|\xi\|_2).$$
The function $\Omega_n$, playing an important role in the theory of pseudo-isotropic random vectors, can also be written in the following way:
$$\Omega_n(r) = \frac{2\Gamma(n/2)}{\Gamma((n-1)/2)\,\Gamma(1/2)}\int_0^{\pi/2}\cos(r\sin\varphi)\cos^{n-2}(\varphi)\,d\varphi = \Gamma\Big(\frac{n}{2}\Big)\Big(\frac{2}{r}\Big)^{n/2-1} J_{(n-2)/2}(r),$$
where $J_\nu(r)$ is a Bessel function, i.e. a cylindrical function of the first kind; thus it is a solution of the differential equation (for details see e.g. [76])
$$\frac{d^2 J_\nu(r)}{dr^2} + \frac{1}{r}\,\frac{dJ_\nu(r)}{dr} + \Big(1 - \frac{\nu^2}{r^2}\Big)J_\nu(r) = 0.$$
This implies that
$$\frac{d^2}{dr^2}\,\Omega_n(r) + \frac{n-1}{r}\,\frac{d}{dr}\,\Omega_n(r) + \Omega_n(r) = 0.$$
Now, we have the following:

Theorem II.3.3 (Schoenberg [217]). If $X = (X_1,\dots,X_n)$ is an elliptically contoured random vector with representation $X \overset{d}{=} BU^{(n)}\sqrt{\Theta}$, $\mathcal{R} = B^TB$, then
$$E\exp\Big\{i\sum_{k=1}^n \xi_k X_k\Big\} = \int_0^\infty \Omega_n\big((\langle\xi,\mathcal{R}\xi\rangle)^{1/2}\,r\big)\,\lambda(dr),$$
where $\lambda$ is the distribution of $\sqrt{\Theta}$.

We can see that not every symmetric positive definite function $\varphi$ on $\mathbb{R}$ with $\varphi(0) = 1$ has the property that $\varphi(\|\cdot\|_2)$ is the characteristic function of an elliptically contoured random vector. In 1973 Askey [12] proved the following:

Theorem II.3.4. Let $\varphi : [0,\infty) \to \mathbb{R}$ be continuous and such that $\varphi(0) = 1$, $\lim_{t\to\infty}\varphi(t) = 0$, and $(-1)^k\varphi^{(k)}(t)$ is convex for $k = [n/2]$. Then for every positive definite $n\times n$ matrix $\mathcal{R}$, $\varphi((\xi\mathcal{R}\xi^T)^{1/2})$ is the characteristic function of some elliptically contoured random vector.

Finally, let us calculate the Richards function $I_2(x)$, $x \in \mathbb{R}^n$ (see Theorem II.2.7), for rotationally invariant random vectors, i.e. for the function $c(\xi)^2 = \sum_{k=1}^n \xi_k^2$. We have
$$I_2(x) = \int_{\{\sum_{k=1}^n u_k^2 = 1\}}\cos\langle x,u\rangle\,\omega(du) = 2\int_{\{\sum_{k=1}^{n-1} u_k^2 \le 1\}}\cos\langle x,u\rangle\Big(1 - \sum_{k=1}^{n-1} u_k^2\Big)^{(n-3)/2} du_1\dots du_{n-1},$$
where in the second line $u_n$ is substituted by $(1 - \sum_{k=1}^{n-1} u_k^2)^{1/2}$. We have obtained an expression that is, up to a multiplicative constant, equal to the characteristic function of the random vector $U^{(n)}$; hence
$$I_2(x) = \frac{\Gamma((n-1)/2)\,\Gamma(1/2)}{\Gamma(n/2)}\,\Omega_n(\|x\|_2).$$
It now follows that if a rotationally invariant random vector $X = U^{(n)}\Theta$ with representation $\mathcal{EC}(\varphi,I,n)$ has an integrable characteristic function $\varphi(\|\xi\|_2)$, $\xi \in \mathbb{R}^n$, then its density function can be written as
$$f(x) = \frac{\Gamma((n-1)/2)\,\Gamma(1/2)}{\Gamma(n/2)}\int_0^\infty r^{n-1}\varphi(r)\,\Omega_n(\|x\|_2\,r)\,dr.$$
Let us also notice here that $\varphi(r)$ is the characteristic function of the random variable $\Theta\cdot U_1$.

II.4. α-symmetric random vectors

Definition II.4.1. A symmetric random vector $X = (X_1,\dots,X_n)$ has an $\alpha$-symmetric distribution, $\alpha > 0$, if $X$ is pseudo-isotropic with the function $c : \mathbb{R}^n \to [0,\infty)$ given by
$$c(\xi) = \begin{cases} \big(\sum |\xi_k|^\alpha\big)^{1/\alpha} & \text{if } 0 < \alpha < \infty, \\ \max\{|\xi_1|,\dots,|\xi_n|\} & \text{if } \alpha = \infty. \end{cases}$$

The existence of $\alpha$-symmetric random vectors, at least for $\alpha \in (0,2]$, follows immediately from the existence of symmetric stable random vectors with independent identically distributed coordinates. It turns out, however, that it is not easy to obtain a full characterization of $\alpha$-symmetric random vectors on $\mathbb{R}^n$, except in the case $\alpha = 2$ (which was treated in the previous section). The main reason is the complexity of the formulas appearing in the following lemma, where we calculate the Richards representation (see Theorem II.2.7 and [199] for the proof) of the density function of an $\alpha$-symmetric random vector:

Lemma II.4.1. If the distribution of an $\alpha$-symmetric random vector $X = (X_1,\dots,X_n)$ is absolutely continuous with respect to the Lebesgue measure, then its density function $f(x)$ can be written as
$$\text{(II.4.1)}\qquad f(x) = \frac{1}{(2\pi)^n}\int_0^\infty r^{n-1}\varphi(r)\,I_\alpha(xr)\,dr,$$
where $\varphi(t) = E\exp\{itX_1\}$, and, with the notation $u' = (u_1,\dots,u_n)$, $u'' = (u_1,\dots,-u_n)$, where $u_n = (1 - \sum_{k=1}^{n-1}|u_k|^\alpha)^{1/\alpha}$, the function $I_\alpha(x)$ can be expressed as
$$\int_{\{\sum_{k=1}^{n-1}|u_k|^\alpha \le 1\}}\big(\cos\langle x,u'\rangle + \cos\langle x,u''\rangle\big)\Big(1 - \sum_{k=1}^{n-1}|u_k|^\alpha\Big)^{1/\alpha - 1} du_1\dots du_{n-1}.$$

Proof. Notice first that the characteristic function of $X$, $\varphi(\|\xi\|_\alpha)$, is sign-invariant, i.e. it does not depend on the signs of the components of the vector $\xi$. Thus the density function $f(x)$ has to be sign-invariant as well. Now, using the inverse Fourier transform, we obtain
$$f(x) = \frac{1}{(2\pi)^n}\int_{\mathbb{R}^n}\cos\langle x,\xi\rangle\,\varphi(\|\xi\|_\alpha)\,d\xi_1\dots d\xi_n = \frac{1}{(2\pi)^n}\int_0^\infty\!\!\int_{\mathbb{R}^{n-1}}\big(\cos\langle x,\xi'\rangle + \cos\langle x,\xi''\rangle\big)\,\varphi(\|\xi\|_\alpha)\,d\xi_1\dots d\xi_n,$$
for $\xi' = (\xi_1,\dots,\xi_n)$ and $\xi'' = (\xi_1,\dots,-\xi_n)$. Substituting now $\xi_1 = ru_1,\dots,\xi_{n-1} = ru_{n-1}$, $\xi_n = r(1 - \sum_{k=1}^{n-1}|u_k|^\alpha)^{1/\alpha}$, we get the desired formula.

Example II.4.1. In the case $\alpha = 1$, the expression for $I_1(x)$ becomes especially simple:

$$I_1(x) = \int_{\{\sum_{k=1}^{n-1}|u_k| \le 1\}}\big(\cos\langle x,u'\rangle + \cos\langle x,u''\rangle\big)\,du_1\dots du_{n-1}.$$
For $n = 2$ and $|x| \neq |y|$ it can be easily calculated as
$$I_1(x,y) = 4\,\frac{x\sin x - y\sin y}{x^2 - y^2},$$

and for $n = 3$, $|x| \neq |y| \neq |z| \neq |x|$, it is equal to
$$8\Big[\frac{x^2\cos x}{(x^2-y^2)(z^2-x^2)} + \frac{y^2\cos y}{(x^2-y^2)(y^2-z^2)} + \frac{z^2\cos z}{(y^2-z^2)(z^2-x^2)}\Big].$$

Example II.4.2. In the case $\alpha = \infty$ and $n = 3$, the expression for $I_\infty(x,y,z)$ has been obtained by Misiewicz (see [163]), and is equal to
$$I_\infty(x,y,z) = \frac{2}{xyz}\big\{(-x+y+z)\cos(-x+y+z) + (x-y+z)\cos(x-y+z) + (x+y-z)\cos(x+y-z) - (x+y+z)\cos(x+y+z)\big\},$$
whenever $xyz(-x+y+z)(x-y+z)(x+y-z)(x+y+z) \neq 0$.

Let $\mathcal{P}_+$ denote the set of probability measures on $[0,\infty)$. For every bounded Borel function $f \in L_\infty[0,\infty)$ and $\lambda \in \mathcal{P}_+$, let
$$(f\odot\lambda)(t) := \int_0^\infty f(rt)\,\lambda(dr),$$
the scale mixture of $f$ with respect to the measure $\lambda$. It is easy to see that $(f\odot\lambda)\odot\nu = (f\odot\nu)\odot\lambda$. Further, for $A \subset L_\infty[0,\infty)$, let
$$A\odot\lambda = \{f\odot\lambda : f \in A\}, \qquad A\odot\mathcal{P}_+ = \{f\odot\lambda : f \in A,\ \lambda \in \mathcal{P}_+\}.$$
With a slight change of the notation of [38], we denote by $\Phi_n(\alpha)$, $\alpha > 0$, the set of all functions $\varphi : [0,\infty) \to \mathbb{R}$ such that $\varphi(\|\xi\|_\alpha)$ is a characteristic function (of an $\alpha$-symmetric random vector) on $\mathbb{R}^n$. The set $\Phi_n(\alpha)$ coincides with the set $\{\hat\mu : \mu \in \mathcal{M}(c,n)\}$ for the function $c(\xi) = \|\xi\|_\alpha$; thus it follows from Theorem II.2.4 that

(P1) for every $n \in \mathbb{N}$ and every $\alpha > 0$, $\Phi_n(\alpha)$ is a convex, closed subset of the set of all real characteristic functions on $\mathbb{R}^n$.

If $\varphi \in \Phi_n(\alpha)$ and $\lambda \in \mathcal{P}_+$, then $\varphi\odot\lambda(\|\xi\|_\alpha)$ is the characteristic function of the random vector $X\Theta$, where $X = (X_1,\dots,X_n)$ and $\Theta$ are independent, $\varphi(\|\xi\|_\alpha)$ is the characteristic function of $X$, and $\lambda$ is the probability distribution of the random variable $\Theta$. This implies that

(P2) $\Phi_n(\alpha)\odot\mathcal{P}_+ = \Phi_n(\alpha)$.

It is clear that the marginals of $\alpha$-symmetric distributions are $\alpha$-symmetric as well, hence

(P3) $\Phi_n(\alpha) \subset \Phi_m(\alpha)$ if $n \ge m$.

Proposition II.4.1. For every $n \ge 3$ and every $\alpha > 0$,

(P4) $e^{-t^\beta} \in \Phi_n(\alpha) \iff \beta \le \alpha \le 2$.

History of the proof. Notice first that $e^{-t^\beta} \in \Phi_n(\alpha)$ if and only if $\exp\{-\|\xi\|_\alpha^\beta\}$ is positive definite on $\mathbb{R}^n$, if and only if $\ell_\alpha^n$ embeds isometrically into some $L_\beta$-space. The sufficiency was already known to P. Lévy [138]. The proof can easily be obtained from the construction presented in Example II.1.3, and it does not depend on the dimension of the space $\ell_\alpha^n$. The proof of necessity has a long history, going back to the first investigations of symmetric stable random vectors [138] and the first Schoenberg problem [217] (see also

Introduction). It is easy to see (and was already known to P. Lévy and Schoenberg) that $\beta$ must be less than or equal to 2. In 1963 Herz [92] proved that if $1 < \beta < 2$ and $\ell_\alpha^n$ embeds isometrically into some $L_\beta$-space, then $\beta \le \alpha \le \beta(\beta-1)^{-1}$. In 1973 Witsenhausen [240] proved that if $\alpha > 2.7$ and $n \ge 3$, then $\ell_\alpha^n$ does not embed isometrically into any $L_1$-space. In 1976 Dor [52] (see also [29]) proved that if $\alpha,\beta \in [1,\infty)$ and $\ell_\alpha^n$ embeds isometrically into some $L_\beta$-space, then $1 \le \beta \le \alpha \le 2$. In 1991 Koldobsky [121] proved that if $\alpha > 2$ and $n \ge 3$, then $\ell_\alpha^n$ does not embed isometrically into any $L_\beta$-space, $\beta \le 2$. Note that the result of Koldobsky finally solves, after 53 years, the first Schoenberg question. And in 1995 Grząślewicz and Misiewicz [78] noticed that the previous considerations do not include all the cases where $\alpha < 1$ or $\beta < 1$; they proved that if $0 < \alpha < \beta \le 2$, then $\ell_\alpha^2$ does not embed isometrically into any $L_\beta$-space.

In the case $\alpha \ge 1$, $n = 2$, the existence of $\alpha$-symmetric random vectors can easily be obtained from Theorem II.2.5, which states that for every $\alpha \ge 1$ the function
$$\exp\{-(|\xi_1|^\alpha + |\xi_2|^\alpha)^{1/\alpha}\}$$
is positive definite on $\mathbb{R}^2$; thus it defines an $\alpha$-symmetric 1-stable random vector $(X_1,X_2)$. In Theorem 2.1 of [52] Dor proved that if $1 < \beta < 2 < \alpha$, then $\ell_\alpha^2$ does not embed isometrically into any $L_\beta$-space. Combining these facts with the result of Koldobsky and the other results described in the history of the proof of necessity in Proposition II.4.1, we have

(P5) $e^{-t^\beta} \in \Phi_2(\alpha) \iff \{0 < \beta \le 1,\ \alpha > 2\}$ or $\{\beta \le \alpha \le 2\}$.

A complete characterization of the classes $\Phi_n(\alpha)$ is known only in the cases (P6)–(P9) listed below.

(P6) $\Phi_n(2) = \Omega_n\odot\mathcal{P}_+$,

where $\Omega_n(\|\xi\|_2)$, $\xi \in \mathbb{R}^n$, is the characteristic function of the random vector $U^{(n)} = (U_1,\dots,U_n)$ having a uniform distribution on the unit sphere $S^{n-1} = \{x \in \mathbb{R}^n : \|x\|_2 = 1\}$ (details are given in Section II.3).

(P7) $\Phi_n(1) = \Omega_n\odot\lambda_n\odot\mathcal{P}_+$,

where $\lambda_n$ is the distribution of the random variable $\Theta^{-1/2}$, $\Theta$ having the density function
$$h_n(t) = \frac{\Gamma(n/2)}{\pi^{1/2}\,\Gamma((n-1)/2)}\,t^{-n/2}(t-1)_+^{(n-3)/2}.$$
This fact was proven by Cambanis, Keener and Simons (see [38]) in 1983. The proof was based on the formula for $I_1(x)$ given in Example II.4.1, and also on the following integral formula:
$$\int_0^{\pi/2} f\Big(\frac{x^2}{\sin t} + \frac{y^2}{\cos t}\Big)\,dt = \int_0^{\pi/2} f\Big(\frac{(|x|+|y|)^2}{\sin t}\Big)\,dt,$$
which holds for every measurable function $f$ for which one of the above integrals makes sense.

(P8) $\Phi_n(\infty) = \{1\}$ if $n \ge 3$.

This result was proven by J. Misiewicz in 1989 (see [163]), and the proof was based on the formula for $I_\infty(x)$ given in Example II.4.2. It was shown that every density function given
π/2 π/2 \ \ x2 y2 ( x + y )2 f + dt = f | | | | dt, sin t cos t sin t 0   0   which holds for every measurable function f for which one of the above integrals makes sense. (P8) Φ ( )= 1 if n 3. n ∞ { } ≥ This result was proven by J. Misiewicz in 1989 (see [163]) and the proof was based on the formula for I∞(x) given in Example II.4.2. It was shown that every density function given 30 J. K. Misiewicz

by formula (II.4.1) for I (x), n 3, has one-dimensional marginal densities which are ∞ ≥ concave on the positive half-line, and this is impossible for probability density functions. (P9) Φ (α)= 1 if n 3, α> 2. n { } ≥ This result was proven independently by two authors: Lisitsky in 1991 [143] and Zastawny in 1991 [245]. We give here a sketch of the proof presented by Zastawny. First he defined a class Z(3) of finite-dimensional normed spaces (E, ) with the following properties: k·k 1) dim E 3; ≥ 2) there exist ξ , ξ , ξ E such that the function 1 2 3 ∈ ∂ G(t,a,b)= tξ + aξ + bξ ∂tk 3 1 2k exists for almost all (a,b) R2 with respect to the Lebesgue measure, and, moreover, ∈ G(1,a,b) H(a,b)= L (R2). ξ + aξ + bξ ∈ 1 k 3 1 2k n It is easy to see that if (E, )= ℓα, n 3, α> 2, then choosing for ξ1, ξ2, ξ3 the 3 k·k ≥ standard basis e1, e2, e3 in ℓα, we get H(a,b)=(1+ a α + b α)−1 L (R2). | | | | ∈ 1 Similarly for α = , n 3, choosing for ξ , ξ , ξ the standard basis e , e , e in ℓ3 , ∞ ≥ 1 2 3 1 2 3 ∞ we get 1 H(a,b)= 1 L (R2). max a , b {|a|<1,|b|<1} ⊂ 1 {| | | |} Next, he showed that if the space (E, ) belongs to Z(3), then the only positive definite k·k norm-dependent function on E is constant.

Proposition II.4.2 (Misiewicz and Richards [171]). Let λn,α be the distribution of −1/α Θ , where Θ has the density function hn(t) (see (P7)). Then (P10) Φ (α) λ Φ (α) Φ (α/2). n ⊙ n,α ⊂ n ∩ n Proof. Let ϕ Φ (α), ϕ 1, and let λ . Since Φ (α) = Φ (α), and ∈ n 6≡ ∈ P+ n ⊙P+ n λ , we only need to show that n,α ⊙P+ ⊂P+ ψ ϕ λ Φ (α/2). ≡ ⊙ n,α ∈ n To this end, let X = (X1,...,Xn) be an α-symmetric random vector with charac- teristic function ϕ( ξ ), and let the random vector D = (D ,...,D ) have a Dirichlet k kα 1 n distribution with parameter (1/2,..., 1/2; 1/2) (see [38]) such that X and D are inde- −1/α −1/α pendent. Then the characteristic function of the vector Y = (X1D1 ,...,XnDn ) is n n 1/α E exp i ξ, Y = E E exp i ξ X D−1/α = E ϕ ξ αD−1 . { h i} D X k k k D | k| k n Xk=1 o h Xk=1 i  By the result of Cambanis, Keener and Simons [38], we have

$$ \sum_{k=1}^n |\xi_k|^\alpha D_k^{-1} \overset{d}{=} \Big( \sum_{k=1}^n |\xi_k|^{\alpha/2} \Big)^2 D_1^{-1}. $$

On noting that $D_1$ has the beta distribution with parameters $(1/2, (n-1)/2)$, we obtain
$$ E \exp\{i\langle \xi, Y\rangle\} = \int_0^\infty \varphi\Big( \Big[ \sum_{k=1}^n |\xi_k|^{\alpha/2} \Big]^{2/\alpha} t^{-1/\alpha} \Big)\, h_n(t)\, dt. $$

Proposition II.4.3 (Misiewicz and Richards [171]). Let $0 < \beta < \alpha$, $n \in \mathbb{N}$, and let $\nu_{\alpha,\beta}$ be the distribution of the random variable $\Theta^{1/\alpha}$, for $\Theta = \Theta_{\beta/\alpha}$ with distribution $\gamma^+_{\beta/\alpha}$ (see (II.1.3)). Then

(P11) $\Phi_n(\alpha) \odot \nu_{\alpha,\beta} \subset \Phi_n(\alpha) \cap \Phi_n(\beta)$.

P r o o f. If $\varphi \in \Phi_n(\alpha)$ then there exists a random vector $(X_1,\ldots,X_n) = X$ with characteristic function $\varphi(\|\xi\|_\alpha)$. It easily follows from the previous considerations that $\varphi \odot \nu_{\alpha,\beta} \in \Phi_n(\alpha)$. Now, we only need to construct a random vector $Y$ with characteristic function of the form
$$ E \exp\{i\langle \xi, Y\rangle\} = \psi(\|\xi\|_\beta), $$
where $\psi = \varphi \odot \nu_{\alpha,\beta}$. Let $\Theta_1,\ldots,\Theta_n$ be independent, identically distributed, positive $(\beta/\alpha)$-stable variables with common distribution $\gamma^+_{\beta/\alpha}$, independent of $X$. Then the characteristic function of $Y = (X_1\Theta_1^{1/\alpha},\ldots,X_n\Theta_n^{1/\alpha})$ is
$$ E \exp\{i\langle \xi, Y\rangle\} = E_{\Theta_1,\ldots,\Theta_n} E_X \exp\Big\{ i \sum_{k=1}^n \xi_k X_k \Theta_k^{1/\alpha} \Big\} = E_{\Theta_1,\ldots,\Theta_n}\, \varphi\Big( \Big[ \sum_{k=1}^n |\xi_k|^\alpha \Theta_k \Big]^{1/\alpha} \Big). $$
By the use of Laplace transforms, it is easy to verify that

$$ \sum_{k=1}^n |\xi_k|^\alpha \Theta_k \overset{d}{=} \Big( \sum_{k=1}^n |\xi_k|^\beta \Big)^{\alpha/\beta} \Theta_1 = \|\xi\|_\beta^\alpha\, \Theta_1, $$
hence

$$ E \exp\{i\langle \xi, Y\rangle\} = \int_0^\infty \varphi(\|\xi\|_\beta\, t^{1/\alpha})\, \gamma^+_{\beta/\alpha}(dt) = \psi(\|\xi\|_\beta), $$
which was to be shown.

Corollary II.4.1 (Misiewicz and Richards [171]).

(P12) $\forall \alpha > 0\ \forall n \ge 2 \quad \bigcap_{\beta < \alpha} \Phi_n(\beta) = \{1\}$.

P r o o f. Suppose that for $n \ge 2$, $\varphi \in \Phi_n(\beta)$ for all $\beta < \alpha$, so that, in particular, $\varphi \in \Phi_2(\beta)$ for all $\beta < \alpha$. Without loss of generality (see Remark II.2.5) we can assume that $\varphi$ is a positive and decreasing function on $[0,\infty)$. Then for $t,s \in \mathbb{R}$, the function
$$ \psi(t,s) = \lim_{\beta\to 0^+} \varphi(\|(t,s)\|_\beta) = \begin{cases} \varphi(|t|) & \text{if } s = 0, \\ \varphi(|s|) & \text{if } t = 0, \\ \varphi(\infty) & \text{if } ts \neq 0, \end{cases} $$
would be positive definite on $\mathbb{R}^2$. This is possible only if $\varphi \equiv 1$.

Properties (P10), (P11) and (P12) were obtained by Misiewicz and Richards in 1991 (see [171]).
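The degeneracy in (P12) rests on an elementary fact used in the proof above: $\|(t,s)\|_\beta = (|t|^\beta + |s|^\beta)^{1/\beta}$ blows up as $\beta \to 0^+$ whenever both coordinates are non-zero, while it is unaffected by $\beta$ when one coordinate vanishes. A quick numerical illustration (the test values are arbitrary):

```python
import math

def lbeta_norm(t, s, beta):
    # the l_beta quasi-norm on R^2: (|t|^beta + |s|^beta)^(1/beta)
    return (abs(t) ** beta + abs(s) ** beta) ** (1.0 / beta)

# with both coordinates non-zero the value explodes as beta -> 0+
for beta in (1.0, 0.5, 0.1, 0.05):
    print(beta, lbeta_norm(1.0, 1.0, beta))

# with one coordinate equal to zero it stays constant in beta
print(lbeta_norm(3.0, 0.0, 0.1))
```

This is exactly the limit behind the three-case formula for $\psi(t,s)$ in the proof.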

II.5. Substable random vectors

Definition II.5.1. We say that a random vector $(Y_1,\ldots,Y_n)$ is substable if there exist $\alpha \in (0,2]$, a S$\alpha$S random vector $(X_1,\ldots,X_n)$, and a non-negative random variable $\Theta$ independent of $(X_1,\ldots,X_n)$ such that
$$ (Y_1,\ldots,Y_n) \overset{d}{=} (X_1,\ldots,X_n)\,\Theta^{1/\alpha}, $$
where $\overset{d}{=}$ means equality of distributions.

Sometimes substable random vectors are called scale mixtures of stable random vectors because of the following representation of the characteristic function:
$$ E \exp\Big\{ i \sum_{k=1}^n \xi_k Y_k \Big\} = E \exp\Big\{ i \sum_{k=1}^n \xi_k X_k \Theta^{1/\alpha} \Big\} = E \exp\Big\{ - \int_{S^{n-1}} |\langle \xi\Theta^{1/\alpha}, x\rangle|^\alpha\, \nu(dx) \Big\} $$
$$ = E \exp\{ -\|\Re(\xi\Theta^{1/\alpha})\|_\alpha^\alpha \} = \int_0^\infty \exp\{ -u\,\|\Re(\xi)\|_\alpha^\alpha \}\, \lambda(du), $$
where $\lambda$ (the mixing measure) is the distribution of the random variable $\Theta$, and $\Re : \mathbb{R}^n \to L_\alpha(S^{n-1})$ is the characterizing operator for the symmetric $\alpha$-stable random vector $(X_1,\ldots,X_n)$ defined in (II.1.4). The characteristic function describes the distribution uniquely, and the above characteristic function is uniquely determined by $\alpha$, the operator $\Re$, and the random variable $\Theta$. Therefore we will use the notation $\mathcal{E}(\alpha, \Re, \Theta)$ for the substable random vector with the above characteristic function.

In the book of Samorodnitsky and Taqqu [212] we find another definition of substable random vectors. Namely, only a S$\alpha$S random vector can be $\beta$-substable for some $\alpha < \beta \le 2$, which means that the only possible mixing measure is equal to $\gamma^+_{\alpha/\beta}$ up to a scale parameter. Also, following the corresponding definition for sub-Gaussian random vectors, we can find in some papers that the random vector $(Y_1,\ldots,Y_n)$ is $\alpha$-substable if there exist a S$\alpha$S random vector $(X_1,\ldots,X_n)$ and a constant $c > 0$ such that for every $\beta < \alpha$ and every $(\xi_1,\ldots,\xi_n) \in \mathbb{R}^n$,
$$ E\Big| \sum_{k=1}^n \xi_k Y_k \Big|^\beta = c^\beta \cdot E\Big| \sum_{k=1}^n \xi_k X_k \Big|^\beta. $$
Let us note here that in our definition we have no assumption on moments, but assuming the existence of the corresponding moment, we get the above condition. It may also happen that there exists a random vector $(Y_1,\ldots,Y_n)$ with this moment property which is not a mixture of any stable random vectors (for example, $(Y_1,\ldots,Y_n)$ uniformly distributed on the unit sphere in $\mathbb{R}^n$ is 2-substable in this sense, but it cannot be any mixture of Gaussian random vectors). It is easy to see that for a substable random vector $Y = \mathcal{E}(\alpha, \Re, \Theta)$ we have

(II.5.1) $\quad P\{(Y_1,\ldots,Y_n) \in B\} = \int_0^\infty P\{(X_1,\ldots,X_n) \in B s^{-1/\alpha}\}\, \lambda(ds) = \int_0^\infty \gamma_\alpha(B s^{-1/\alpha})\, \lambda(ds).$
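For a concrete instance of the mixture representation (II.5.1) (an illustration with arbitrary parameters, not taken from the text): take $\alpha = 2$, $X$ standard Gaussian in $\mathbb{R}^n$, and $\Theta$ exponential with mean 1. Then $Y = X\,\Theta^{1/2}$ is substable, with characteristic function $E\exp\{-\Theta\|\xi\|_2^2/2\} = (1+\|\xi\|_2^2/2)^{-1}$, which can be checked by Monte Carlo:

```python
import math, random

random.seed(7)

def substable_sample(n):
    # Y = X * Theta^{1/2}: X standard Gaussian in R^n, Theta ~ Exp(1), independent
    theta = random.expovariate(1.0)
    return [random.gauss(0.0, 1.0) * math.sqrt(theta) for _ in range(n)]

def empirical_chf(xi, n_samples=100_000):
    # E cos(<xi, Y>) estimated from n_samples draws (the ch.f. is real by symmetry)
    acc = 0.0
    for _ in range(n_samples):
        y = substable_sample(len(xi))
        acc += math.cos(sum(a * b for a, b in zip(xi, y)))
    return acc / n_samples

def exact_chf(xi):
    # E exp{-Theta * ||xi||^2 / 2} with Theta ~ Exp(1) equals 1 / (1 + ||xi||^2 / 2)
    u = sum(a * a for a in xi)
    return 1.0 / (1.0 + u / 2.0)

xi = [0.5, -0.3, 0.8]
print(empirical_chf(xi), exact_chf(xi))
```

The two printed values agree up to Monte Carlo error; replacing the exponential by any other non-negative $\Theta$ changes only the mixing measure $\lambda$ in (II.5.1).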

Motivated by formula (II.5.1), let us define the operator "$\circ$", acting on a signed measure $\lambda$ on $[0,\infty)$ through the symmetric $\alpha$-stable measure $\gamma_\alpha$ (or, for $\alpha \le 1$, through the measure $\gamma^+_\alpha$), in the following way:

(II.5.2) $\quad \gamma_\alpha \circ \lambda(B) = \int_0^\infty \gamma_\alpha(B s^{-1/\alpha})\, \lambda(ds)$

$\quad \Big( \gamma^+_\alpha \circ \lambda(B) = \int_0^\infty \gamma^+_\alpha(B s^{-1/\alpha})\, \lambda(ds) \Big).$

Every substable random vector $(Y_1,\ldots,Y_n)$ is, by definition, represented by a symmetric $\alpha$-stable random vector $(X_1,\ldots,X_n)$ and a non-negative random variable $\Theta$. For fixed $\alpha \in (0,2]$ that representation is unique up to a multiplicative constant, which means that if $(Y_1,\ldots,Y_n) = \mathcal{E}(\alpha, \Re, \Theta)$ and $(Y_1,\ldots,Y_n) = \mathcal{E}(\alpha, \Re_1, \Theta_1)$ then there exists $c > 0$ such that $\Re = c\,\Re_1$ and $\Theta = \Theta_1 c^{-\alpha}$. However, we do not have uniqueness of the parameter $\alpha$, as it may happen that a given symmetric $\alpha$-stable random vector $(X_1,\ldots,X_n)$ is also $\beta$-substable for some $\beta \in (\alpha,2]$. In fact, some $\alpha$-stable random vectors are even sub-Gaussian. Given a substable random vector $(Y_1,\ldots,Y_n)$, it seems to be interesting to find the maximal $\alpha \in (0,2]$ for which the representation $(Y_1,\ldots,Y_n) = \mathcal{E}(\alpha, \Re, \Theta)$ is possible.

Lemma II.5.1 (Misiewicz [166]). Let $0 < \alpha < \beta \le 2$. Suppose that $\mathcal{E}(\alpha, \Re, \Theta)$ and $\mathcal{E}(\beta, \Re_1, \Theta_1)$ are two different representations of a substable random vector $(Y_1,\ldots,Y_n)$. Then there exists a constant $c > 0$ such that
$$ c\,\Theta_1 \overset{d}{=} \Theta_{\alpha/\beta}\,\Theta^{\beta/\alpha}, \qquad \|\Re_1(\xi)\|_\beta^\beta = c\,\|\Re(\xi)\|_\alpha^\beta \quad \forall \xi \in \mathbb{R}^n. $$

P r o o f. It follows from the assumption that the characteristic function of the random vector $Y$ can be expressed in two different ways:

$$ E \exp\Big\{ i \sum_{k=1}^n \xi_k Y_k \Big\} = \int_0^\infty \exp\{ -u \|\Re(\xi)\|_\alpha^\alpha \}\, \lambda(du) = \int_0^\infty \exp\{ -u \|\Re_1(\xi)\|_\beta^\beta \}\, \lambda_1(du). $$
Choose $\xi_\circ = (\xi_{\circ k}) \in \mathbb{R}^n$ such that $\|\Re(\xi_\circ)\|_\alpha = 1$, and let $\|\Re_1(\xi_\circ)\|_\beta^\beta = c > 0$. For every $s > 0$ we then have

$$ E \exp\Big\{ i s \sum_{k=1}^n \xi_{\circ k} Y_k \Big\} = \int_0^\infty \exp\{-us^\alpha\}\, \lambda(du) = \int_0^\infty \exp\{-cus^\beta\}\, \lambda_1(du). $$
This means that

$$ \int_0^\infty \exp\{-us^\alpha\}\, \lambda(du) = \int_0^\infty \exp\{-cus^\beta\}\, \lambda_1(du) $$
for every $s > 0$. It is easy to see that

$$ \exp\{-s^\alpha\} = \int_0^\infty \exp\{-s^\beta u\}\, \gamma^+_{\alpha/\beta}(du), \quad s > 0. $$
Now,

$$ \int_0^\infty \exp\{-cs^\beta u\}\, \lambda_1(du) = \int_0^\infty \int_0^\infty \exp\{-s^\beta u^{\beta/\alpha} t\}\, \lambda(du)\, \gamma^+_{\alpha/\beta}(dt). $$

From the uniqueness of the Laplace transform we infer that $c\,\Theta_1 \overset{d}{=} \Theta_{\alpha/\beta}\Theta^{\beta/\alpha}$, and then, immediately,
$$ c^{-1} \|\Re_1(\xi)\|_\beta^\beta = \|\Re(\xi)\|_\alpha^\beta \quad \forall \xi \in \mathbb{R}^n. $$

We will call $(Y_1,\ldots,Y_n) = \mathcal{E}(\alpha_\circ, \Re_\circ, \Theta^\circ)$ the maximal representation for a substable random vector if, for any other representation $\mathcal{E}(\beta, \Re_1, \Theta_1)$, we have $\beta \le \alpha_\circ$.

Let us recall that a function $\varphi : E \to \mathbb{R}$ is negative definite on the linear space $E$ if for every $n \in \mathbb{N}$, every choice of $c_1,\ldots,c_n \in \mathbb{R}$ and $x_1,\ldots,x_n \in E$,
$$ \sum_{i=1}^n c_i = 0 \;\Rightarrow\; \sum_{i,j=1}^n c_i c_j\, \varphi(x_i - x_j) \le 0. $$

Lemma II.5.2 (Misiewicz [166]). For every substable random vector $Y = \mathcal{E}(\alpha, \Re, \Theta)$ there exists a maximal representation $Y = \mathcal{E}(\alpha_\circ, \Re_\circ, \Theta^\circ)$. This representation is unique up to a scale coefficient. Moreover,
$$ \alpha_\circ = \sup\{ \beta \in (0,2] : \|\Re(\xi)\|_\alpha^\beta \text{ is negative definite on } \mathbb{R}^n \}. $$

P r o o f. From the assumption it follows that $Y$ is a scale mixture of a symmetric $\alpha$-stable random vector $X$ with the characteristic function $\exp\{-\|\Re(\xi)\|_\alpha^\alpha\}$. Let us define
$$ \alpha_\circ = \sup\{ \beta \in (0,2] : \exp\{-\|\Re(\xi)\|_\alpha^\beta\} \text{ is positive definite on } \mathbb{R}^n \}. $$
The function $\exp\{-\|\Re(\xi)\|_\alpha^{\alpha_\circ}\}$ is positive definite as a limit of positive definite functions, so it is the characteristic function of a symmetric $\alpha_\circ$-stable random vector $X_\circ$. It is easy to see that
$$ X \overset{d}{=} X_\circ \cdot \Theta_{\alpha/\alpha_\circ}^{1/\alpha_\circ} \quad \text{and} \quad Y \overset{d}{=} X_\circ \cdot (\Theta^{\alpha_\circ/\alpha}\,\Theta_{\alpha/\alpha_\circ})^{1/\alpha_\circ}. $$
Taking now any other representation $Y = \mathcal{E}(\beta, \Re_1, \Theta_1)$, we see that $\exp\{-\|\Re_1(\xi)\|_\beta^\beta\}$ is the characteristic function of a corresponding symmetric $\beta$-stable random vector. It follows from Lemma II.5.1 that there exists a positive constant $c$ such that
$$ \exp\{-\|\Re_1(\xi)\|_\beta^\beta\} = \exp\{-c\,\|\Re(\xi)\|_\alpha^\beta\}. $$
The last equality implies that $\beta \le \alpha_\circ$, as $\alpha_\circ$ is the greatest $\beta$ for which $\exp\{-\|\Re(\xi)\|_\alpha^\beta\}$ is a positive definite function. To conclude the proof we need only use the well-known fact that the function $\varphi$ is negative definite on the linear space $E$ if and only if the function $\exp\{-\varphi(x)\}$ is positive definite on $E$.

Lemma II.5.3 (Misiewicz [166]). Let $X \in \mathbb{R}^n$ be a S$\alpha$S random vector. Then $X$ is $\beta$-substable if and only if $\beta \in [\alpha, \alpha_\circ]$.

P r o o f. If $X$ is $\beta$-substable then $\beta \le \alpha_\circ$ by Lemma II.5.2. Assume that $\beta < \alpha$ and $X = (X_1,\ldots,X_n) = \mathcal{E}(\beta, \Re, \Theta)$, where $\Theta$ has distribution $\lambda$. Considering the characteristic function of the random variable $X_1$, we would have

$$ E \exp\{itX_1\} = e^{-c|t|^\alpha} = \int_0^\infty e^{-c_1|t|^\beta s}\, \lambda(ds) $$
for suitable positive constants $c$ and $c_1$. This, however, is impossible for any positive measure $\lambda$, as $\int e^{-c_1|t|^\beta s}\, \lambda(ds)$ is a completely monotonic function of the argument $|t|^\beta$, while the second derivative of the function $\exp\{-ct^{\alpha/\beta}\}$ changes sign on the positive half-line.

To prove the converse implication it is enough to notice that if $\beta \in [\alpha, \alpha_\circ]$, then $\|\Re(\xi)\|_\alpha^\beta$ is negative definite on $\mathbb{R}^n$, hence $\exp\{-\|\Re(\xi)\|_\alpha^\beta\}$, being a positive definite function, defines a S$\beta$S random vector $Y$ on $\mathbb{R}^n$, and then $X \overset{d}{=} Y \cdot \Theta_{\alpha/\beta}^{1/\beta}$.
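The obstruction used in the first half of the proof can be made concrete. Writing $\gamma = \alpha/\beta > 1$, one computes $(d^2/dt^2)\,e^{-t^\gamma} = e^{-t^\gamma}\big(\gamma^2 t^{2\gamma-2} - \gamma(\gamma-1)t^{\gamma-2}\big)$, which vanishes at $t_* = ((\gamma-1)/\gamma)^{1/\gamma}$ and changes sign there; for $\gamma \le 1$ it is non-negative, consistently with complete monotonicity. A numerical sketch (the value $\gamma = 1.5$ is arbitrary):

```python
import math

def second_derivative(gamma, t):
    # closed form of (d^2/dt^2) exp(-t^gamma) for t > 0
    return math.exp(-t ** gamma) * (
        gamma ** 2 * t ** (2 * gamma - 2) - gamma * (gamma - 1) * t ** (gamma - 2)
    )

gamma = 1.5                                    # plays the role of alpha/beta > 1
t_star = ((gamma - 1) / gamma) ** (1 / gamma)  # the sign-change point
print(second_derivative(gamma, 0.5 * t_star))  # negative: not completely monotone
print(second_derivative(gamma, 2.0 * t_star))  # positive again
print(second_derivative(0.7, 0.5))             # gamma <= 1: stays non-negative
```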

III. Exchangeability and pseudo-isotropy

In this chapter we discuss the main characterization theorems for pseudo-isotropic sequences of random variables. We start from the well-known concept of exchangeable sequences and the theorem known as the de Finetti theorem. We do not prove this theorem in this work; the original proof can be found in [68], but much simpler and more elegant are the proofs of C. Ryll-Nardzewski [208] from 1957 and of O. Kallenberg [104] from 1975.

In Section III.1 we give the main properties of exchangeable pseudo-isotropic sequences of random variables and the main properties of conditionally independent pseudo-isotropic random vectors. In Section III.2 we recall the original proof of the Schoenberg theorem (see [215], [216], [217]). This proof is exceptional in this area because only for $\alpha = 2$ do the level curves of the density function and of the characteristic function of an $\ell_\alpha$-isotropic random vector coincide. We also give the result of Bretagnolle, Dacunha-Castelle and Krivine [29] and its generalizations, as the main characterization theorems for pseudo-isotropic sequences, which are based on the de Finetti theorem.

In Section III.3 we present the analogues of the previous theorems for the situation where the exchangeability and geometrical conditions are only approximately fulfilled. Thus, we give the result of Christensen and Ressel [47] on infinite-dimensional Banach spaces, and Theorem III.3.2 on spaces containing $\ell_\alpha^n$'s uniformly (see [172]). At the end, we give some results on linear spaces equipped with a quasi-norm which can be expressed in the form of a sum of some functions.

III.1. Pseudo-isotropic exchangeable sequences. We will use the notation $\mathbb{R}^{(\infty)}$ for the linear space of all sequences $(\xi_1, \xi_2, \ldots) \in \mathbb{R}^\infty$ having a finite number of non-zero coordinates. Let us recall that a family of measures $\mu_n$ on $\mathbb{R}^n$ is consistent if, for every $n \in \mathbb{N}$ and every Borel set $A \subset \mathbb{R}^n$, we have
$$ \mu_{n+1}(A \times \mathbb{R}) = \mu_n(A). $$
A family of characteristic functions $\varphi_n$ on $\mathbb{R}^n$ is consistent (defines a consistent family of measures) if and only if for every $(\xi_1, \xi_2, \ldots) \in \mathbb{R}^\infty$ we have
$$ \varphi_{n+1}(\xi_1,\ldots,\xi_n,0) = \varphi_n(\xi_1,\ldots,\xi_n). $$
Throughout the paper we will often use the Kolmogorov theorem (see Feller [66], Vol. II, §IV.6, Th. 1), which states that for every consistent family of measures $\mu_n$ on $\mathbb{R}^n$ there exists a sequence of random variables $X_1, X_2, \ldots$ such that, for every $n \in \mathbb{N}$ and for every Borel set $A \subset \mathbb{R}^n$, the following formula holds:
$$ \mu_n(A) = P\{(X_1,\ldots,X_n) \in A\}. $$

Definition III.1.1. We say that a sequence of random variables $X_1, X_2, \ldots$ is exchangeable if for each $n$ the distribution of the random vector $(X_1,\ldots,X_n)$ is invariant under permutations in the sense that $(X_1,\ldots,X_n)$ and $(X_{\pi 1},\ldots,X_{\pi n})$ have the same distribution for every permutation $\pi$ on $\{1,\ldots,n\}$.

We define $\mathcal{F}_n$ as the $\sigma$-field generated by the random variables $g(X_1,\ldots,X_{n+j})$, $j \in \mathbb{N}$, where $g$ is a Borel function on $\mathbb{R}^{n+j}$ satisfying for all permutations $\pi$ on $\{1,\ldots,n\}$ the following condition:
$$ g(x_1,\ldots,x_n, x_{n+1},\ldots,x_{n+j}) = g(x_{\pi 1},\ldots,x_{\pi n}, x_{n+1},\ldots,x_{n+j}). $$
It is clear that $\mathcal{F}_n \supset \mathcal{F}_{n+1}$, and also that
$$ \mathcal{F}_n \searrow \mathcal{F} = \bigcap_{n=1}^\infty \mathcal{F}_n. $$

Theorem III.1.1 (de Finetti). If $X_1, X_2, \ldots$ are exchangeable, then they are conditionally independent given $\mathcal{F}$ in the sense that
$$ P\{X_i \in A_i : i = 1,\ldots,n \mid \mathcal{F}\} = \prod_{i=1}^n P\{X_i \in A_i \mid \mathcal{F}\}. $$

Assume now that a pseudo-isotropic sequence $\{X_k : k \in \mathbb{N}\}$ of random variables is exchangeable, i.e. $\{X_k : k \in \mathbb{N}\} \overset{d}{=} \{X_{\pi(k)} : k \in \mathbb{N}\}$ for every finite permutation $\pi$ of the set $\mathbb{N}$ (in the sense of equality of all finite-dimensional distributions). Then it is easy to see that $c(\xi) = c(\pi(\xi))$ for every finite permutation $\pi$. We will call such a function exchangeable.

From the de Finetti theorem, exchangeability means that the random variables $X_k$, $k \in \mathbb{N}$, are conditionally independent given the exchangeability $\sigma$-field $\mathcal{F}$. In particular, this means that we have the following equality for the conditional characteristic functions:
$$ E\Big( \exp\Big\{ i \sum \xi_k X_k \Big\} \,\Big|\, \mathcal{F} \Big) = \prod_k E\big( \exp\{ i \xi_k X_k \} \mid \mathcal{F} \big). $$
On the other hand, the sequence $\{X_k : k \in \mathbb{N}\}$ is pseudo-isotropic, so
$$ \psi(\xi) \equiv E \exp\Big\{ i \sum \xi_k X_k \Big\} = E \exp\{ i c(\xi) X_1 \} = E\, E(\exp\{ i c(\xi) X_1 \} \mid \mathcal{F}). $$
Following the proof of the de Finetti theorem, we have, for every $k \in \mathbb{N}$,
$$ \varphi_\omega(t) \equiv E(\exp\{itX_1\} \mid \mathcal{F}) = \lim_{n\to\infty} E(\exp\{itX_1\} \mid \mathcal{F}_n) = \lim_{n\to\infty} E(\exp\{itX_k\} \mid \mathcal{F}_n) = E(\exp\{itX_k\} \mid \mathcal{F}). $$
Finally, we have the following equality:
$$ \varphi(c(\xi)) \equiv \psi(\xi) = E \exp\Big\{ i \sum \xi_k X_k \Big\} = E \prod_k \varphi_\omega(\xi_k) = E\,\varphi_\omega(c(\xi)). $$

Example III.1.1. We define an exchangeable function $c$ on $\mathbb{R}^{(\infty)}$ as follows:
$$ c(\xi_1,\ldots,\xi_n)^\alpha = \sum_{i=1}^n |\xi_i|^\alpha + \Big| \sum_{i=1}^n \xi_i \Big|^\alpha, \quad n \in \mathbb{N},\ \xi_i \in \mathbb{R}. $$


It is easy to see that this function has the consistency property, i.e.

$$ c(\xi_1,\ldots,\xi_n,0) = c(\xi_1,\ldots,\xi_n). $$
Define a family of symmetric $\alpha$-stable cylindrical measures $\mu_n$ on $\mathbb{R}^{(\infty)}$, $\alpha \in (0,2]$, such that the measure $\mu_n$, as a measure on $\mathbb{R}^n$, has the characteristic function
$$ \int \ldots \int_{\mathbb{R}^n} \exp\Big\{ i \sum_{k=1}^n \xi_k x_k \Big\}\, \mu_n(dx) = \exp\{ -c(\xi_1,\ldots,\xi_n)^\alpha \}. $$
The consistency property of the function $c$ guarantees the consistency of the family of cylindrical measures $\mu_n$; thus, from the Kolmogorov theorem, there exists a sequence $X_1, X_2, \ldots$ of random variables such that $\mu_n$ is the distribution of the random vector $(X_1,\ldots,X_n)$. Notice that the sequence $X_1, X_2, \ldots$ admits the representation
$$ X_k = Y_k + Z, \quad k \in \mathbb{N}, $$
where $Z, Y_1, Y_2, \ldots$ are independent identically distributed $\mathcal{S}(\alpha,1)$ random variables. As the function $c$ is exchangeable, the sequence of random variables $X_1, X_2, \ldots$ is also exchangeable, and the de Finetti theorem states that this sequence is conditionally independent. According to our representation, it is easy to see that the exchangeability $\sigma$-field $\mathcal{F}$ is the $\sigma$-field generated by the random variable $Z$, thus
$$ \varphi_\omega(t) = E(\exp\{itX_1\} \mid Z) = \exp\{ -|t|^\alpha + itZ \}. $$

Example III.1.2. In order to define an exchangeable symmetric $\alpha$-stable sequence of random variables $X_1, X_2, \ldots$ for $\alpha < 2$ we choose $\beta, \gamma \in (\alpha, 2]$, $\beta < \gamma$, and define a consistent function $c$ by the formula

$$ c(\xi_1,\ldots,\xi_n)^\beta = \sum_{k=1}^n |\xi_k|^\beta + \Big( \sum_{k=1}^n |\xi_k|^\gamma \Big)^{\beta/\gamma}. $$
The corresponding sequence of symmetric $\alpha$-stable variables $X_1, X_2, \ldots$ has finite-dimensional distributions defined by the characteristic functions
$$ E \exp\Big\{ i \sum_{k=1}^n \xi_k X_k \Big\} = \exp\{ -c(\xi_1,\ldots,\xi_n)^\alpha \}. $$
It is easy to see that the sequence $X_1, X_2, \ldots$ admits the representation
$$ X_k = \big( Y_k + Z_k\, \Theta_{\beta/\gamma}^{1/\gamma} \big)\, \Theta_{\alpha/\beta}^{1/\beta}, $$
where the $Y_k$ are independent identically distributed $\mathcal{S}(\beta,1)$ random variables, the $Z_k$ are independent identically distributed $\mathcal{S}(\gamma,1)$ random variables, and $\{Y_k : k \in \mathbb{N}\}$, $\{Z_k : k \in \mathbb{N}\}$, $\Theta_{\beta/\gamma}$, $\Theta_{\alpha/\beta}$ are totally independent. We can see now that the exchangeability $\sigma$-field $\mathcal{F}$ for the sequence $X_1, X_2, \ldots$ is defined by the random variables $\Theta_{\beta/\gamma}$, $\Theta_{\alpha/\beta}$; therefore
$$ \varphi_\omega(t) = E(\exp\{itX_1\} \mid \Theta_{\beta/\gamma}, \Theta_{\alpha/\beta}) = \exp\{ -|t|^\beta \Theta_{\alpha/\beta} - |t|^\gamma\, \Theta_{\beta/\gamma}\, \Theta_{\alpha/\beta}^{\gamma/\beta} \}. $$

Example III.1.3. Choose $\alpha \in (0,2]$, and define a consistent function $c$ on $\mathbb{R}^{(\infty)}$ as follows:
$$ c(\xi_1,\ldots,\xi_n)^\alpha = \frac{1}{2^n} \sum_{S} \Big| \sum_{k \in S} \xi_k \Big|^\alpha, $$


where the first summation runs over all non-empty subsets $S$ of $\{1,\ldots,n\}$. Define now a consistent family of cylindrical measures $\mu_n$ on $\mathbb{R}^{(\infty)}$ such that $\mu_n$, as a measure on $\mathbb{R}^n$,

has the characteristic function
$$ \int \ldots \int_{\mathbb{R}^n} \exp\Big\{ i \sum_{k=1}^n \xi_k x_k \Big\}\, \mu_n(dx) = \exp\{ -c(\xi_1,\ldots,\xi_n)^\alpha \}. $$
From the Kolmogorov theorem, there exists a sequence of random variables $X_1, X_2, \ldots$ such that $\mu_n$ is the distribution of the random vector $(X_1,\ldots,X_n)$. As the function $c$ is exchangeable, there exists an exchangeability $\sigma$-field under which the random variables $X_k$ are conditionally independent. However, this time it is not easy to obtain either an explicit description of this $\sigma$-field, or a stochastic representation of the sequence $X_1, X_2, \ldots$ as a sequence of conditionally independent random variables.

Example III.1.4. Let $\nu$ be a positive finite measure on $(0,2]$. As before, we define an exchangeable sequence of symmetric $\alpha$-stable random variables $X_1, X_2, \ldots$, putting
$$ E \exp\Big\{ i \sum_{k=1}^n \xi_k X_k \Big\} = \exp\{ -c(\xi_1,\ldots,\xi_n)^\alpha \}, $$
where
$$ c(\xi_1,\ldots,\xi_n)^\alpha = \int_\alpha^2 \Big( \sum_k |\xi_k|^p \Big)^{\alpha/p}\, \nu(dp). $$
It is easy to see that, given $p$, the random variables $X_1, X_2, \ldots$ have the same finite-dimensional distributions as $Y_1 \Theta_{\alpha/p}^{1/\alpha}, Y_2 \Theta_{\alpha/p}^{1/\alpha}, \ldots$, where the random variables $Y_1, Y_2, \ldots$ are independent identically distributed $\mathcal{S}(p,1)$, and independent of $\Theta_{\alpha/p}$. This means that
$$ \varphi_\omega(t) = E(\exp\{itX_1\} \mid \mathcal{F}) = \exp\{ -|t|^{p(\omega)} \}. $$
This time, however, it is not easy to specify the exchangeability $\sigma$-field $\mathcal{F}$, or the stochastic representation for the sequence $X_1, X_2, \ldots$
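The consistency and exchangeability claimed for the functions $c$ in Examples III.1.1 and III.1.3 are finite combinatorial facts, so they can be verified mechanically; a sketch (the test point and the value $\alpha = 1.3$ are arbitrary):

```python
import itertools

ALPHA = 1.3  # an arbitrary alpha in (0, 2]

def c_ex1(xi):
    # Example III.1.1: c(xi)^alpha = sum |xi_i|^alpha + |sum xi_i|^alpha
    return (sum(abs(x) ** ALPHA for x in xi) + abs(sum(xi)) ** ALPHA) ** (1 / ALPHA)

def c_ex3(xi):
    # Example III.1.3: c(xi)^alpha = 2^{-n} sum over non-empty S of |sum_{k in S} xi_k|^alpha
    n = len(xi)
    total = 0.0
    for r in range(1, n + 1):
        for s in itertools.combinations(xi, r):
            total += abs(sum(s)) ** ALPHA
    return (total / 2 ** n) ** (1 / ALPHA)

xi = (0.7, -1.2, 0.4)
for c in (c_ex1, c_ex3):
    # consistency: appending a zero coordinate does not change c
    assert abs(c(xi + (0.0,)) - c(xi)) < 1e-12
    # exchangeability: c is invariant under permutations of the coordinates
    vals = [c(p) for p in itertools.permutations(xi)]
    assert max(vals) - min(vals) < 1e-12
print("consistency and exchangeability hold for the test point", xi)
```

For Example III.1.3 the consistency is exact: each non-empty $S \subset \{1,\ldots,n\}$ appears twice among the subsets of $\{1,\ldots,n+1\}$ (as $S$ and $S\cup\{n+1\}$, with the same sum when $\xi_{n+1}=0$), which is compensated by the factor $2^{-(n+1)}$.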

Proposition III.1.1. Assume that the sequence $X_1, X_2, \ldots$ is exchangeable and pseudo-isotropic with characteristic function $\varphi(c(\xi))$. Then:

1) For every $n \in \mathbb{N}$, and every $a \in \mathbb{R}^n$, the sequence of random variables
$$ Y_k = \sum_{j=n(k-1)+1}^{nk} a_{j-n(k-1)}\, X_j $$
is also exchangeable and pseudo-isotropic.

2) $\varphi_\omega$ is a characteristic function for almost every $\omega \in \Omega$.

3) $\varphi$ is a non-negative function.

P r o o f. 1) The sequence $Y_1, Y_2, \ldots$ is evidently pseudo-isotropic with the function $c'$ given by
$$ c'(\xi_1,\ldots,\xi_k) \equiv c(a_1\xi_1,\ldots,a_n\xi_1,\ldots,a_1\xi_k,\ldots,a_n\xi_k). $$
Exchangeability follows from exchangeability of the function $c$.

Property 2) easily follows from the definition of the function $\varphi_\omega$.

To see 3), it is enough to notice that for every $t \in \mathbb{R}$,
$$ \varphi(t) = \varphi\Big( c\Big( \frac{t}{c(1,-1)}, \frac{-t}{c(1,-1)} \Big) \Big) = E\, \Big| \varphi_\omega\Big( \frac{t}{c(1,-1)} \Big) \Big|^2 \ge 0. $$

Proposition III.1.2 (Misiewicz [168]). Assume that a 2-dimensional pseudo-isotropic random vector $(X_1,X_2)$ consists of conditionally independent random variables. If for every $\xi_1, \xi_2 \in \mathbb{R}$,
$$ P\{ \varphi_\omega(\xi_1)\varphi_\omega(\xi_2) = \varphi_\omega(c(\xi_1,\xi_2)) \} = 1, $$
then either $\varphi_\omega \equiv 1$ with probability one, or there exist $\alpha \in (0,2]$ and a non-negative random variable $\Theta$ such that
$$ c(\xi_1,\xi_2)^\alpha = |\xi_1|^\alpha + |\xi_2|^\alpha, $$
and $(X_1,X_2)$ is $\alpha$-substable with representation $(X_1,X_2) \overset{d}{=} (Y_1,Y_2)\,\Theta^{1/\alpha}$, where $Y_1, Y_2$ are independent identically distributed $\mathcal{S}(\alpha,1)$ random variables, independent of $\Theta$.

P r o o f. Notice first that $\varphi_\omega(\xi)$ is non-negative, since for every $\xi \in \mathbb{R}$ we know that with probability one
$$ \varphi_\omega(\xi) = \varphi_\omega(\xi/c(1,-1))\,\varphi_\omega(-\xi/c(1,-1)) = |\varphi_\omega(\xi/c(1,-1))|^2 \ge 0. $$
Now, for almost all $\omega \in \Omega$, and every $\xi_1, \xi_2, t \in \mathbb{R}$, the following functional equation holds:
$$ \varphi_\omega(\xi_1 t)\,\varphi_\omega(\xi_2 t) = \varphi_\omega(c(\xi_1,\xi_2)\,t), $$
where the function $\varphi_\omega$ is non-negative and, as a conditional characteristic function, continuous. It is well known that the only solution of this functional equation (in the set of symmetric characteristic functions) is of the form
$$ \varphi_\omega(t) = \exp\{ -A(\omega)\,|t|^{p(\omega)} \}, $$
where $p(\omega) \in (0,2]$ and $A(\omega) \ge 0$. It may happen that $A(\omega) = 0$ almost everywhere, and then $\varphi_\omega \equiv 1$ almost everywhere. So let $P\{A(\omega) > 0\} > 0$. For every $\omega \in \Omega$ such that $A(\omega) > 0$ we have
$$ \varphi_\omega(\xi_1 t)\,\varphi_\omega(\xi_2 t) = \exp\{ -A(\omega)\,|t|^{p(\omega)} ( |\xi_1|^{p(\omega)} + |\xi_2|^{p(\omega)} ) \} = \varphi_\omega(c(\xi_1,\xi_2)t) = \exp\{ -A(\omega)\,|t|^{p(\omega)} c(\xi_1,\xi_2)^{p(\omega)} \}, $$
which means that $|\xi_1|^{p(\omega)} + |\xi_2|^{p(\omega)} = c(\xi_1,\xi_2)^{p(\omega)}$ for all $\xi_1, \xi_2 \in \mathbb{R}$. As $\ell_p$-norms are essentially different for different $p$, we deduce that $p(\omega) = \mathrm{const} = p \in (0,2]$ for every $\omega$ such that $A(\omega) > 0$. Finally,
$$ \varphi_\omega(t) = \exp\{ -A(\omega)\,|t|^p \}. $$
Now, define $\Theta(\omega) = A(\omega)$, which ends the proof.
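Proposition III.1.2 can be illustrated for $\alpha = 1$ (an illustration with arbitrary parameters, not part of the text): with $Y_1, Y_2$ independent standard Cauchy and $\Theta$ uniform on $\{1,2\}$, independent of them, the vector $(X_1,X_2) = (Y_1,Y_2)\,\Theta$ has $E\exp\{i(\xi_1 X_1 + \xi_2 X_2)\} = E\exp\{-\Theta(|\xi_1|+|\xi_2|)\}$, so the characteristic function depends on $\xi$ only through $\|\xi\|_1$:

```python
import math, random

random.seed(11)

def cauchy():
    # standard Cauchy via inverse CDF: tan(pi * (U - 1/2))
    return math.tan(math.pi * (random.random() - 0.5))

def sample_pair():
    # (X1, X2) = (Y1, Y2) * Theta with alpha = 1 and Theta uniform on {1, 2}
    theta = random.choice((1.0, 2.0))
    return cauchy() * theta, cauchy() * theta

def empirical_chf(xi1, xi2, n=100_000):
    acc = 0.0
    for _ in range(n):
        x1, x2 = sample_pair()
        acc += math.cos(xi1 * x1 + xi2 * x2)
    return acc / n

def exact_chf(xi1, xi2):
    # E exp{-Theta * (|xi1| + |xi2|)} for Theta uniform on {1, 2}
    u = abs(xi1) + abs(xi2)
    return 0.5 * (math.exp(-u) + math.exp(-2 * u))

# two points with the same l1-norm give (up to Monte Carlo error) the same value
print(empirical_chf(0.9, 0.1), empirical_chf(0.5, 0.5), exact_chf(0.5, 0.5))
```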

Proposition III.1.3 (Misiewicz [168]). Let the random vector $X = (X_1,\ldots,X_n)$, $n \ge 4$, consisting of conditionally independent random variables, be pseudo-isotropic with $c(\xi_1,\ldots,\xi_n)^\alpha = \sum |\xi_k|^\alpha$ for some $\alpha > 0$. Then either $(X_1,\ldots,X_n) \equiv 0$, or $\alpha \in (0,2]$ and the random vector $(X_1,\ldots,X_n)$ is $\alpha$-substable with representation
$$ (X_1,\ldots,X_n) \overset{d}{=} (Y_1,\ldots,Y_n)\,\Theta^{1/\alpha}, $$

where $\Theta$ is a non-negative random variable, and $Y_1,\ldots,Y_n$ are independent, identically distributed $\mathcal{S}(\alpha,1)$ random variables, independent of $\Theta$.

P r o o f. It follows from the assumptions that the characteristic function of the random vector $(X_1,\ldots,X_n)$ at $\xi \in \mathbb{R}^n$ can be written as follows:
$$ \varphi(c(\xi)) \equiv E \exp\Big\{ i \sum \xi_k X_k \Big\} = E \prod_k \varphi_\omega(\xi_k) = E \exp\{ i c(\xi) X_1 \} = E\,\varphi_\omega(c(\xi)). $$
Notice first that $\varphi_\omega$ is a non-negative function, since
$$ E(\varphi_\omega(\xi) - \varphi_\omega(-\xi))^2 = E\varphi_\omega^2(\xi) + E\varphi_\omega^2(-\xi) - 2E\varphi_\omega(\xi)\varphi_\omega(-\xi) = \varphi(c(\xi,\xi)) + \varphi(c(-\xi,-\xi)) - 2\varphi(c(\xi,-\xi)) = 0, $$
where we used $c(\xi,\xi) = c(-\xi,-\xi) = c(\xi,-\xi) = 2^{1/\alpha}|\xi|$. In turn,
$$ E(\varphi_\omega(\xi_1)\varphi_\omega(\xi_2) - \varphi_\omega(c(\xi_1,\xi_2)))^2 = E\varphi_\omega^2(\xi_1)\varphi_\omega^2(\xi_2) - 2E\varphi_\omega(\xi_1)\varphi_\omega(\xi_2)\varphi_\omega(c(\xi_1,\xi_2)) + E\varphi_\omega^2(c(\xi_1,\xi_2)) $$
$$ = \varphi(c(\xi_1,\xi_1,\xi_2,\xi_2)) - 2\varphi(c(\xi_1,\xi_2,c(\xi_1,\xi_2))) + \varphi(c(\xi_1,\xi_2)\,c(1,1)) = 0, $$
since, in our case,
$$ c(\xi_1,\xi_1,\xi_2,\xi_2) = c(\xi_1,\xi_2,c(\xi_1,\xi_2)) = c(\xi_1,\xi_2)\,c(1,1) = 2^{1/\alpha}( |\xi_1|^\alpha + |\xi_2|^\alpha )^{1/\alpha}. $$
This means that for every $\xi_1, \xi_2 \in \mathbb{R}$ we have
$$ P\{ \varphi_\omega(\xi_1)\varphi_\omega(\xi_2) = \varphi_\omega(( |\xi_1|^\alpha + |\xi_2|^\alpha )^{1/\alpha}) \} = 1, $$
and the same arguments as in the proof of Proposition III.1.2 end the proof.

III.2. Schoenberg-type theorems. In 1938 Schoenberg [215], [216] proved the following

Theorem III.2.1. A function $\varphi(\|\xi\|)$, $\varphi(0) = 1$, is positive definite on $\ell_2$ if and only if there exists a probability measure $\lambda \in \mathcal{P}_+$ such that

(III.2.1) $\quad \varphi(t) = \int_0^\infty e^{-t^2 s}\, \lambda(ds), \quad t \ge 0.$

P r o o f. We will give here a sketch of the original proof of Schoenberg since, in some sense, it is exceptional in this area. It is based on the fact that for every $n \in \mathbb{N}$, the characteristic function of the random vector $(X_1,\ldots,X_n)$ is rotationally invariant if and only if its distribution is rotationally invariant. Using Remark II.3.2, we find that the density of $X_1$ is an $((n-1)/2)$-times monotonic function and thus, as $n$ tends to infinity, the density $f_1(|x|)$ of $X_1$ is a completely monotonic function. Using the fact that every completely monotonic function can be expressed as a Laplace transform, we deduce that there exists a positive measure $\lambda_1$ on $[0,\infty)$ such that
$$ f_1(|x|) = \int_0^\infty e^{-x^2 s}\, \lambda_1(ds), \quad x \in \mathbb{R}. $$
As $f_1(|x|)$ is the density of a probability distribution, we have

$$ 1 = \int_{-\infty}^\infty f_1(|x|)\, dx = \int_0^\infty (\pi/s)^{1/2}\, \lambda_1(ds). $$

Now it is easy to see that $\lambda(ds) \equiv (\pi/s)^{1/2}\lambda_1(ds)$ defines the desired probability measure $\lambda$. The inverse implication is trivial, as $\exp\{-\sum \xi_k^2\}$ is a positive definite function on $\ell_2$, being the characteristic function of a sequence of symmetric Gaussian random variables, and every mixture of positive definite functions is also positive definite.

Definition III.2.1. A function $c$ on a linear space $E$ with values in $[0,\infty)$ is said to be a quasi-norm iff the following conditions hold:

(i) $c(tv) = |t|\,c(v)$ for every $t \in \mathbb{R}$ and every $v \in E$;

(ii) $c(v) = 0$ if and only if $v = 0$;

(iii) for every $n \in \mathbb{N}$, $n \le \dim(E)$, and every choice of linearly independent $v_1,\ldots,v_n \in E$, the function $c(\sum_{k=1}^n x_k v_k)$ is continuous with respect to $x$ on $\mathbb{R}^n$.

Recently Koldobsky noticed that if a quasi-normed space $(E,c)$ is such that there exists a non-trivial pseudo-isotropic random vector $X$ with the characteristic function $E\exp\{i\langle\xi,X\rangle\} = \varphi(c(\xi)) \not\equiv 1$, then the above definition coincides with the usual definition of quasi-norm on a linear space. We only need to show

Proposition III.2.1 (NP). Assume that $\varphi(c(\xi))$, $\xi \in E$, is the characteristic function of a non-degenerate random vector $X$. Then

(iv) $\exists K > 0\ \forall \xi, \eta \in E \quad c(\xi + \eta) \le K(c(\xi) + c(\eta))$.

P r o o f. For finite-dimensional spaces this is a trivial consequence of property (3) in Theorem II.2.1. The referee proposed the following proof of this fact in infinite-dimensional spaces: notice that for every positive $t$,
$$ P\{ |X_1| > 2t/c(\xi+\eta) \} = P\{ |\langle \xi+\eta, X\rangle| > 2t \} \le P\{ |\langle \xi, X\rangle| > t \} + P\{ |\langle \eta, X\rangle| > t \} = P\{ |X_1| > t/c(\xi) \} + P\{ |X_1| > t/c(\eta) \}. $$
Suppose that $c(\xi), c(\eta) \le 1$, and that we can still make $c(\xi+\eta)$ greater than any given number $K$. Then for any fixed $t > 0$, letting $K \to \infty$ we get $2P\{|X_1| > t\} \ge P\{|X_1| > 0\}$. This leads to a contradiction as $t \to \infty$, unless $X_1 = 0$ with probability 1.

Definition III.2.2. We say that a linear space $E$ equipped with a quasi-norm $c$ contains $\ell_\alpha$ (resp. $\ell_\alpha^n$) isometrically if there exists a linear operator $T : \ell_\alpha \to E$ (resp. $T : \ell_\alpha^n \to E$) such that $c(T(\xi)) = \|\xi\|_\alpha$ for every $\xi \in \ell_\alpha$ (resp. $\xi \in \ell_\alpha^n$).

In 1967, Bretagnolle, Dacunha-Castelle and Krivine (see [29]) proved

Theorem III.2.2. Assume that $E = L_\alpha(S, \mathcal{B}, \mu)$ is an infinite-dimensional space, $\alpha > 0$, and assume that $\varphi(\|\cdot\|_\alpha)$, $\varphi(0) = 1$, is a positive definite function on $E$. Then there exists a probability measure $\lambda \in \mathcal{P}_+$ such that

(III.2.2) $\quad \varphi(t) = \int_0^\infty e^{-t^\alpha s}\, \lambda(ds), \quad t \ge 0.$

If $\alpha > 2$, then $\lambda = \delta_0$. If $\alpha \in (0,2]$, then for every $\lambda \in \mathcal{P}_+$ formula (III.2.2) defines a function $\varphi$ such that $\varphi(\|\cdot\|_\alpha)$ is positive definite on $E$.

P r o o f. In the original proof the authors notice first that every infinite-dimensional $L_\alpha(S,\mathcal{B},\mu)$ space contains $\ell_\alpha$ isometrically, so the function $\varphi(\|\cdot\|_\alpha)$ restricted to $\ell_\alpha$ is also positive definite. The proof of the existence of the measure $\lambda$ is then basically the same as the proof of Theorem III.2.3. The second part of the theorem follows immediately from the negative definiteness of the norm on $L_\alpha(S,\mathcal{B},\mu)$ for $\alpha \in (0,2]$.

Our next theorem (see [168]) is a simple generalization of the result of Bretagnolle, Dacunha-Castelle and Krivine; however, as we will see in Example III.2.1, the generalization is quite significant.

Theorem III.2.3. Let $E$ be an infinite-dimensional linear space equipped with a quasi-norm $c : E \to [0,\infty)$. If, for every $n \in \mathbb{N}$, the space $(E,c)$ contains $\ell_\alpha^n$ isometrically, and $\varphi(c(\cdot))$, $\varphi(0) = 1$, is positive definite on $E$, then there exists a probability measure $\lambda \in \mathcal{P}_+$ such that $\varphi$ is given by formula (III.2.2). If $\alpha > 2$, then the only possible measure $\lambda$ is $\delta_0$.

P r o o f. From the assumption it follows that for every $n \in \mathbb{N}$ there exist $v_{n1},\ldots,v_{nn} \in E$ such that for every $\xi_1,\ldots,\xi_n \in \mathbb{R}$ we have
$$ c\Big( \sum_{k=1}^n \xi_k v_{nk} \Big) = \Big( \sum_{k=1}^n |\xi_k|^\alpha \Big)^{1/\alpha} = \|\xi\|_\alpha. $$
Let $\varphi_n(\xi_1,\ldots,\xi_n) \equiv \varphi(\|\xi\|_\alpha)$. The sequence $\varphi_n$, being a sequence of positive definite functions on $\mathbb{R}^n$, defines a consistent family of cylindrical probability measures on $\mathbb{R}^{(\infty)}$; hence, from the Kolmogorov theorem, there exists a sequence of random variables $X_1, X_2, \ldots$ such that for every $n \in \mathbb{N}$ and every $\xi_1,\ldots,\xi_n \in \mathbb{R}$,
$$ E \exp\Big\{ i \sum_{k=1}^n \xi_k X_k \Big\} = \varphi_n(\xi_1,\ldots,\xi_n) = \varphi(\|\xi\|_\alpha). $$
As the function $\|\cdot\|_\alpha$ is exchangeable on $\mathbb{R}^n$ for every $n \in \mathbb{N}$, the sequence $X_1, X_2, \ldots$ is also exchangeable and, from the de Finetti theorem, conditionally independent. Now it is enough to apply Proposition III.1.3.

Example III.2.1. Let $0 < \alpha < 2$. Consider a linear space $E$ defined as
$$ E = (\ell_\alpha^1 \oplus \ell_\alpha^2 \oplus \ell_\alpha^3 \oplus \ldots)_{\ell_2}. $$
Denoting by $e_k$, $k \in \mathbb{N}$, the canonical basis in this space, we see that the space spanned by $e_{n(n-1)/2+1},\ldots,e_{n(n+1)/2}$ is equal to $\ell_\alpha^n$, which means that $E$ contains $\ell_\alpha^n$ isometrically for every $n \in \mathbb{N}$. If the function $\varphi(\|\cdot\|)$ is positive definite on $E$, then by Theorem III.2.3 the function $\varphi$ is given by formula (III.2.2) for some probability measure $\lambda$ on $[0,\infty)$. On the other hand, the space spanned by $\{e_{n(n-1)+1} : n \in \mathbb{N}\}$ is equal to $\ell_2$; thus it follows from the Schoenberg Theorem that there exists another probability measure $\lambda_1$ on $[0,\infty)$ such that $\varphi$ is given by formula (III.2.1).

Which of these two representations is better, and why? To answer this question, notice that $\exp\{-|t|^\alpha\} = \int \exp\{-t^2 s\}\, \gamma^+_{\alpha/2}(ds)$; hence, denoting by $\Theta$ the random variable with distribution $\lambda$, we obtain
$$ \lambda_1 = \mathcal{L}(\Theta_{\alpha/2} \cdot \Theta^{2/\alpha}), $$

which means, for example, that $\lambda_1$ must be absolutely continuous with respect to the Lebesgue measure, except for possibly having an atom at zero. It is easy to see that for every $\lambda \in \mathcal{P}_+$ formula (III.2.2) defines a function $\varphi$ such that $\varphi(\|\cdot\|)$ is positive definite on $E$. Thus, formula (III.2.2) gives us the full characterization of such functions $\varphi$, while formula (III.2.1) is only of existence type.
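The passage between the two representations is classical for $\alpha = 1$: since $\exp\{-|t|\} = \int \exp\{-t^2 s\}\,\gamma^+_{1/2}(ds)$, the standard Cauchy law is a Gaussian variance mixture. Concretely, if $G, Z$ are independent $N(0,1)$, then $G/|Z|$ is standard Cauchy (characteristic function $e^{-|t|}$), the mixing variable $S = 1/(2Z^2)$ being one-sided $1/2$-stable with Laplace transform $e^{-\sqrt{u}}$. A Monte Carlo sketch (illustration only):

```python
import math, random

random.seed(5)

def cauchy_as_mixture():
    # G / |Z| with G, Z independent N(0,1): a Gaussian scale mixture whose
    # mixing variable S = 1/(2 Z^2) is one-sided 1/2-stable
    return random.gauss(0.0, 1.0) / abs(random.gauss(0.0, 1.0))

def empirical_chf(t, n=100_000):
    return sum(math.cos(t * cauchy_as_mixture()) for _ in range(n)) / n

for t in (0.5, 1.0, 2.0):
    print(t, empirical_chf(t), math.exp(-abs(t)))
```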

III.3. Some generalizations. Let us recall first the following:

Definition III.3.1. We say that a linear space $E$ equipped with a quasi-norm $c$ contains $\ell_\alpha^n$'s uniformly if, for every $\varepsilon > 0$ and every $n \in \mathbb{N}$, there exists a linear operator $T : \ell_\alpha^n \to E$ such that, for every $\xi \in \ell_\alpha^n$,
$$ c(T(\xi)) \le \|\xi\|_\alpha \le (1+\varepsilon)\, c(T(\xi)). $$

In 1983, Christensen and Ressel (see [47]) proved the following theorem:

Theorem III.3.1. Assume that $E$ is an infinite-dimensional Banach space. If $\varphi(\|\cdot\|)$, $\varphi(0) = 1$, is a positive definite function on $E$, then there exists a probability measure $\lambda \in \mathcal{P}_+$ such that
$$ \varphi(t) = \int_0^\infty e^{-t^2 s}\, \lambda(ds), \quad t \ge 0. $$

P r o o f. The proof is based on the Dvoretzky theorem (see [57] or [142]), which states that every infinite-dimensional Banach space contains $\ell_2^n$'s uniformly. The conclusion follows now immediately from Theorem III.3.2 below.

Originally the next theorem was obtained as a simple generalization of Theorem III.3.1. We assume here that the corresponding space $E$ contains $\ell_\alpha^n$'s uniformly, instead of containing $\ell_2^n$'s uniformly. Consequently, the original proof was exactly the same as the proof of Christensen and Ressel [47]; we only had to put "$\alpha$" instead of "2" in the calculations. However, the anonymous referee proposed a much simpler proof, which we present here.

Theorem III.3.2 (Misiewicz [162], see also [172]). Assume that $E$ is an infinite-dimensional linear space equipped with a quasi-norm $c : E \to [0,\infty)$. If $(E,c)$ contains $\ell_\alpha^n$'s uniformly for some $\alpha > 0$, and if $\varphi(c(\cdot))$, $\varphi(0) = 1$, is a positive definite function on $E$, then there exists a probability measure $\lambda \in \mathcal{P}_+$ such that

$$ \varphi(t) = \int_0^\infty e^{-t^\alpha s}\, \lambda(ds), \quad t \ge 0. $$

If $\alpha > 2$, then $\lambda = \delta_0$.

P r o o f. The standard application of ultraproducts (see Lindenstrauss and Tzafriri [142], p. 119) shows that if a quasi-normed space $(E,c)$ contains $\varepsilon$-isometric copies of $\ell_\alpha^n$ for every $n \in \mathbb{N}$ and every $\varepsilon > 0$, then an ultrapower of $E$ contains the space $\ell_\alpha$ isometrically. If $\varphi(c(\cdot))$ is positive definite on $E$, then it is positive definite on any ultrapower of $E$, and therefore it is positive definite on $\ell_\alpha$. The rest is the result of Bretagnolle, Dacunha-Castelle and Krivine.
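Conversely to the existence statements above, for $\alpha \le 2$ every mixture $\varphi(t) = \int e^{-t^\alpha s}\lambda(ds)$ composed with the corresponding norm is positive definite. For $\alpha = 2$ and Euclidean distances this can be spot-checked numerically by testing a Gram matrix for positive semidefiniteness; a sketch with a hypothetical two-point mixing measure and a hand-rolled elimination test:

```python
import math, random

random.seed(3)

def phi(t):
    # phi(t) = 0.4 exp(-t^2 * 0.5) + 0.6 exp(-t^2 * 2.0): a two-point mixing measure
    return 0.4 * math.exp(-t * t * 0.5) + 0.6 * math.exp(-t * t * 2.0)

def is_psd(m, tol=1e-9):
    # Cholesky-style elimination; a symmetric matrix is PSD iff no pivot goes negative
    # (sketch: the near-null pivot branch is only a safeguard for degenerate inputs)
    n = len(m)
    a = [row[:] for row in m]
    for k in range(n):
        if a[k][k] < -tol:
            return False
        if a[k][k] <= tol:
            continue
        for i in range(k + 1, n):
            f = a[i][k] / a[k][k]
            for j in range(k, n):
                a[i][j] -= f * a[k][j]
    return True

pts = [[random.gauss(0, 1) for _ in range(3)] for _ in range(8)]
dist = lambda p, q: math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
gram = [[phi(dist(p, q)) for q in pts] for p in pts]
print(is_psd(gram))
```

The same experiment with $\varphi(t) = e^{-t^\alpha}$ for $\alpha > 2$ would eventually produce a non-PSD Gram matrix for suitable point configurations, in line with $\lambda = \delta_0$ being forced there.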

The next theorem was proved by the author in August 1991, and was inspired by discussions with Prof. P. Ressel.

Theorem III.3.3 (Misiewicz [168]). Let $X_1, X_2, \ldots$ be a pseudo-isotropic sequence of random variables characterized by functions $\varphi$ and $c$. Assume that there exists a continuous one-to-one function $g : [0,\infty) \to [0,\infty)$ such that for every $n \in \mathbb{N}$ there exist linearly independent $e_1,\ldots,e_n \in \mathbb{R}^{(\infty)}$ satisfying
$$ g\Big( c\Big( \sum \xi_i e_i \Big) \Big) = \sum h_i(\xi_i) $$
for some functions $h_i$ and every choice of $\xi_1,\ldots,\xi_n \in \mathbb{R}$. Then there exist $\alpha > 0$ and a probability measure $\lambda$ on $[0,\infty)$ such that

$$ \varphi(t) = \int_0^\infty \exp\{ -|t|^\alpha s \}\, \lambda(ds) \quad \forall t > 0. $$

If $\alpha > 2$, then the only possible $\lambda$ is $\delta_0$.

P r o o f. Without loss of generality we can assume that $\varphi$ is a completely monotonic function (which can be achieved easily by multiplying the original random vector by a symmetric Cauchy random variable). In particular, we can assume that $\varphi$ is one-to-one on the positive half-line. The characteristic function of the sequence $X_1, X_2, \ldots$ can be written as follows:
$$ \varphi\Big( c\Big( \sum \xi_i e_i \Big) \Big) = \varphi\Big( g^{-1}\Big( g\Big( c\Big( \sum \xi_i e_i \Big) \Big) \Big) \Big) = \varphi\Big( g^{-1}\Big( \sum h_i(\xi_i) \Big) \Big) \equiv \psi\Big( \sum h_i(\xi_i) \Big). $$
On the other hand, we have
$$ \varphi\Big( c\Big( \sum \xi_i e_i \Big) \Big) = \varphi\Big( c\Big( \frac{1}{c(e_1)}\, c\Big( \sum \xi_i e_i \Big)\, e_1 \Big) \Big) = \psi\Big( h_1\Big( \frac{1}{c(e_1)}\, c\Big( \sum \xi_i e_i \Big) \Big) \Big). $$
The function $\psi$ is one-to-one, thus
$$ h_1\Big( \frac{1}{c(e_1)}\, c\Big( \sum \xi_i e_i \Big) \Big) = \sum h_i(\xi_i). $$
In particular,
$$ h_1\Big( \frac{c(e_i)}{c(e_1)}\, |t| \Big) = h_i(|t|) $$
for every $i \in \mathbb{N}$ and $t \in \mathbb{R}$. Now, for every $n \in \mathbb{N}$ there exist linearly independent $e_1,\ldots,e_n \in \mathbb{R}^{(\infty)}$ and positive numbers $c_1,\ldots,c_n$ such that
$$ g\Big( c\Big( \sum \xi_i e_i \Big) \Big) = \sum h_1(c_i \xi_i). $$
From the result of Ressel, Theorem 6 in [195], we know that
$$ \exp\Big\{ -\sum h_1(c_i \xi_i |t|) \Big\} = \exp\Big\{ -h_1\Big( \frac{1}{c(e_1)}\, c\Big( \sum \xi_i e_i \Big)\, |t| \Big) \Big\} $$
is a characteristic function on $\mathbb{R}^n$. The left-hand side of the above formula is the characteristic function of the random variable $\sum c_i \xi_i Y_i$, where the $Y_i$ are independent, with the characteristic function $\exp\{-h_1(|t|)\}$, while the right-hand side is the characteristic function of $(c(e_1))^{-1} c(\sum \xi_i e_i)\, Y_1$. This means that there exists $\alpha \in (0,2]$ such that the

random variables Xi are symmetric α-stable, and there exists a positive A such that h ( t )= A t α, 1 | | − | | and g c ξ e = h (c ξ )= A c ξ α = A ξ α. i i 1 i i | i i| i| i| We needed the property  X thatϕ isX a one-to-one functionX onlyX to get the above equality. Now, considering the characteristic function of the original sequence X1,X2,..., we ob- tain ϕ c ξ e = ϕ g−1 g c ξ e = ϕ g−1 A ξ α , i i i i i| i| which means  thatX ϕ g−1( α) is positive  X definite on ℓn for every X n N. Using The- ◦ | · |α α ∈ orem III.2.2, we see that there exists a probability measure λ on [0, ) such that ∞

∞ \ ϕ c ξ e = exp A ξ αt λ(dt), i i − i| i|   X  0 n X o which ends the proof. In 1969 Einhorn (see [62]) proved that the only positive definite norm-dependent function on C[0, 1] is constant. This fact does not require any proof now, in view of universality of the space C[0, 1] and, for example, the result of Bretagnolle, Dacunha- Castelle and Krivine. I am very grateful to A. Koldobsky for bringing to my attention the paper of Aharoni, Maurey and Mityagin [4]. They proved the following result characterizing Banach spaces for which the only possible positive definite norm-dependent function is constant:

Theorem III.3.4. Let $(E, c)$ be a Banach space with a symmetric basis $\{e_i : i \in \mathbb{N}\}$. If $\liminf\, c(\sum_{i=1}^n e_i)/\sqrt{n} = 0$, then every norm-dependent positive definite function on $E$ is constant.

Equivalently, we can say that if a sequence $\{X_k : k \in \mathbb{N}\}$ is exchangeable and pseudo-isotropic with $X_1 \neq 0$, then $\liminf\, c(\sum_{i=1}^n e_i)/\sqrt{n} > 0$.

IV. Stable and substable stochastic processes

In this chapter we define and study symmetric stable and substable stochastic processes. We stress, however, that we identify processes having the same multidimensional distributions; path properties are not of interest in this paper. We define the Reproducing Kernel Space for substable and $L_\alpha$-symmetric processes and study the dependence between geometrical properties of this space and some properties of the corresponding processes.

IV.1. Gaussian processes and Reproducing Kernel Hilbert Spaces. In this section we give only the definition and the main properties of symmetric Gaussian processes. The literature on the subject is very rich, and many aspects of the theory are considered. For this paper the most important is the idea of the Reproducing Kernel Hilbert Space for a given symmetric Gaussian process, and we present here several methods of its construction. We also want to describe the role of the Reproducing Kernel Hilbert Space in studying some properties of the corresponding Gaussian process. For more information see [129].

Definition IV.1.1. Let $T$ be a set and let $(\Omega, \mathcal{F}, \mathbf{P})$ be a probability space. A family $\mathcal{X} = \{X_t : t \in T\}$ of real random variables is called a symmetric Gaussian process if every finite linear combination of elements of $\{X_t : t \in T\}$ is a symmetric Gaussian random variable.

It follows from the above definition that if $\{X_t : t \in T\}$ is a symmetric Gaussian process then for every $n \in \mathbb{N}$, every choice of $t_1, \ldots, t_n \in T$, and every $\xi_1, \ldots, \xi_n \in \mathbb{R}$, there exists a positive number $\sigma$ such that the random variable $\sum_{k=1}^n \xi_k X_{t_k}$ has normal distribution $N(0, \sigma)$. Moreover,
\[
\sigma^2 = E\Bigl(\sum_{k=1}^n \xi_k X_{t_k}\Bigr)^2 = \sum_{j,k=1}^n \xi_j \xi_k E(X_{t_j} X_{t_k}).
\]
One of the possible ways of constructing the Reproducing Kernel Hilbert Space was proposed by Aronszajn (see [11]). In his construction we do not need any stochastic process; the Reproducing Kernel Hilbert Space is built for an arbitrary covariance function defined on an arbitrary set $T$. We have then:

Definition IV.1.2. Let $T$ be any set and let $C$ be a real-valued function on $T \times T$. Then $C$ is called a covariance on $T$ if (a) $C(s,t) = C(t,s)$ for all $s, t \in T$, and (b) $\sum_{j,k=1}^n \xi_j \xi_k C(t_j, t_k) \ge 0$ for every $n \in \mathbb{N}$, every choice of $t_1, \ldots, t_n \in T$ and every choice of $\xi_1, \ldots, \xi_n \in \mathbb{R}$.

The following theorem was proved by Aronszajn in 1950 (see [11]).

Theorem IV.1.1. Let $T$ be any set and let $C$ be a real-valued covariance on $T \times T$. Then there exists a unique Hilbert space $K(C)$ of functions on $T$, satisfying
\[
C(\cdot, t) \in K(C) \quad \text{for each } t \in T,
\]
\[
\langle f, C(\cdot, t)\rangle = f(t) \quad \text{for each } f \in K(C) \text{ and } t \in T.
\]
The class $K(C)$ of functions on $T$ forming the Hilbert space obtained in the above theorem is called the Reproducing Kernel Hilbert Space (for short, RKHS) of the covariance $C$.
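For a finite index set the Aronszajn construction can be made completely explicit, and the reproducing property can be verified directly. The following Python sketch does this for $T = \{0, \ldots, n-1\}$: the RKHS $K(C)$ is then $\mathbb{R}^n$ with inner product $\langle f, g\rangle = f^T C^{-1} g$, and $C(\cdot, t)$ is the $t$-th column of $C$. The covariance matrix used is an arbitrary positive definite illustration, not an object from the text.

```python
import numpy as np

# Sketch for a finite index set T = {0, ..., n-1}: K(C) = R^n with
# <f, g> = f^T C^{-1} g; the kernel section C(., t) is the t-th column of C,
# and the reproducing property <f, C(., t)> = f(t) can be checked directly.
# The covariance below is an arbitrary illustrative choice.
n = 4
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
C = A @ A.T + n * np.eye(n)      # symmetric, strictly positive definite

C_inv = np.linalg.inv(C)
f = rng.standard_normal(n)       # an arbitrary element of K(C)
for t in range(n):
    k_t = C[:, t]                # the kernel section C(., t)
    assert abs(f @ C_inv @ k_t - f[t]) < 1e-8
```

The same computation, read backwards, is the reason the inner product in Theorem IV.1.1 is uniquely determined by the covariance.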
The last theorem gives its existence and uniqueness. Adapting Theorem IV.1.1 to the function
\[
C(t,s) = E(X_t X_s), \qquad t, s \in T,
\]
for a symmetric Gaussian process $\mathcal{X} = \{X_t : t \in T\}$, we see that there exists a Hilbert space $K(C) = H(\mathcal{X})$ of functions on $T$ such that
\[
H(\mathcal{X}) = \{f : f(t) = E(X_t Y_f) \text{ for a unique } Y_f \in \mathcal{L}(\mathcal{X})\},
\]
where $\mathcal{L}(\mathcal{X})$ denotes the linear subspace of $L_2(\Omega, \mathcal{F}, \mathbf{P})$ generated by the random variables $\{X_t : t \in T\}$. Conversely, if $C$ is a covariance on a set $T$ then there exists a symmetric Gaussian process $\{X_t : t \in T\}$, defined on a suitable probability space, such that $C(t,s) = E(X_t X_s)$.

Another method of constructing the Reproducing Kernel Hilbert Space was proposed by, e.g., Kallianpur [110], [108], [109], [111], Dudley [53], and Kuo [129]. They start from the symmetric Gaussian process $\mathcal{X} = \{X_t : t \in T\}$. Consider the space $\mathbb{R}^{(T)}$ of all sequences $\{\xi_t : t \in T\}$ with a finite number of non-zero coordinates, equipped with the inner product
\[
\langle \xi, \eta \rangle = E\Bigl(\sum \xi_t X_t\Bigr)\Bigl(\sum \eta_t X_t\Bigr), \qquad \xi, \eta \in \mathbb{R}^{(T)}.
\]
By the RKHS for the Gaussian process $\mathcal{X}$ we now understand the closure of $\mathbb{R}^{(T)}$ with respect to the norm defined by this inner product. We will try to adapt this construction to symmetric stable and pseudo-isotropic processes.

In some papers (see e.g. [33], [34], [149]) the starting point for the construction of the RKHS for a given symmetric Gaussian process $\mathcal{X}$ (or a Gaussian measure on a vector space) was the space of admissible translates for the process, i.e.,
\[
\{\xi \in \mathbb{R}^T : \mu_{\mathcal{X} - \xi} \prec \mu_{\mathcal{X}}\},
\]
where $\mu_{\mathcal{X} - \xi} \prec \mu_{\mathcal{X}}$ means that the distribution $\mu_{\mathcal{X} - \xi}$ of the stochastic process $\{X_t - \xi_t : t \in T\}$ is absolutely continuous with respect to the distribution $\mu_{\mathcal{X}}$ of the stochastic process $\{X_t : t \in T\}$. This space, equipped with a suitable inner product, is also called the Reproducing Kernel Hilbert Space for the symmetric Gaussian process $\mathcal{X}$.

It turns out that for a given symmetric Gaussian process all these definitions coincide in the sense that all these spaces are isometrically isomorphic. More details on Reproducing Kernel Hilbert Spaces of Gaussian processes and measures can be found in [11], [43], [33], [129], [53], [54].

IV.2. Elliptically contoured processes

Definition IV.2.1. A symmetric stochastic process $\{X_t : t \in T\}$ is called elliptically contoured iff its finite-dimensional projections are all elliptically contoured; i.e., for every $n \in \mathbb{N}$ and every choice of $t_1, \ldots, t_n \in T$ the random vector $(X_{t_1}, \ldots, X_{t_n})$ is elliptically contoured.

Following Definition II.3.1 we thus infer that if $\{X_t : t \in T\}$ is elliptically contoured then for every $n \in \mathbb{N}$ and every $t_1, \ldots, t_n \in T$, there exist a symmetric, non-degenerate $n \times n$ matrix $\Re$ and a real function $\varphi$ on $[0,\infty)$ such that for every $\xi_{t_1}, \ldots, \xi_{t_n} \in \mathbb{R}$,
\[
E \exp\Bigl\{i \sum_{k=1}^n \xi_{t_k} X_{t_k}\Bigr\} = \varphi\bigl((\xi_{t_1}, \ldots, \xi_{t_n})\, \Re\, (\xi_{t_1}, \ldots, \xi_{t_n})^T\bigr).
\]
Notice that there is no uniqueness of the function $\varphi$ or of the matrix $\Re$; they both depend on $n$, on the choice of $t_1, \ldots, t_n$, and on each other. However, using Proposition II.2.1, it is easy to see that we can choose one function $\varphi$ on $[0,\infty)$ such that for every $n \in \mathbb{N}$ and every choice of $t_1, \ldots, t_n$ there exists a non-degenerate symmetric matrix $\Re$ for which the above formula holds for every $\xi_{t_1}, \ldots, \xi_{t_n} \in \mathbb{R}$. Fixing the function $\varphi$ in this formula, one obtains uniqueness of the matrices $\Re = \Re(t_1, \ldots, t_n)$.

Elliptically contoured processes and distributions were extensively studied by many authors in a number of different aspects. Let us only recall how many names they have: elliptically contoured, spherically generated, rotationally invariant, sub-Gaussian, $\ell_2$-dependent, $L_2$-symmetric, isotropic, and many others. The most spectacular application of elliptically contoured processes known to me was found by M. Mendel (see [156]), where they were used in a computer program for a self-learning robot on a production line.

As we have seen in the previous section (see e.g. the Schoenberg theorem), infinite-dimensional elliptically contoured measures and elliptically contoured stochastic processes with an infinite number of linearly independent variables are much more regular than in the finite-dimensional case. However, the full characterization of such processes or measures has not been obtained directly, because of the trouble with the definition of the corresponding linear operator $\Re : \mathbb{R}^{(T)} \to L_2$ in the case when the second moment does not exist. At the beginning, authors followed the classical construction of the Reproducing Kernel Hilbert Space, as it was done for symmetric Gaussian processes. In 1964 Vershik obtained a characterization of elliptically contoured second order processes (see [236]). In 1974 Gualtierotti (see [80]) gave a characterization of spherically invariant cylindrical measures with finite weak second moment on infinite-dimensional separable Hilbert spaces (assuming that his “separable Hilbert space” means: infinite-dimensional separable Hilbert space). In 1977 Crawford proved a similar theorem for infinite-dimensional Banach spaces (see [48]) under the assumption that the elliptically contoured measure has a weak second moment. In 1982 for $\mathbb{R}^\infty$, and in 1984 for infinite-dimensional Banach spaces, Misiewicz obtained the same characterization without any moment assumption (see [158], [160]).
And in 1982 Okazaki, in the paper [180] which, as far as I know, has never been published, gave the same characterization for locally convex spaces. More information about elliptically contoured distributions and processes can be found in the review paper of Chmielewski (see [44]). The final version of the representation theorem for elliptically contoured stochastic processes can now be stated as in the next theorem. For the proof, see e.g. [172].

Theorem IV.2.1. Let $\{X_t : t \in T\}$ be a symmetric stochastic process such that the linear space spanned by $\{X_t : t \in T\}$ is infinite-dimensional. Then the following conditions are equivalent:

1. $\{X_t : t \in T\}$ is elliptically contoured;

2. There exist a Hilbert space $H$, a linear operator $\Re : \mathbb{R}^{(T)} \to H$, and a real-valued function $\varphi$ on $[0,\infty)$ such that
\[
E \exp\Bigl\{i \sum_{t \in T} \xi_t X_t\Bigr\} = \varphi(\|\Re(\xi)\|^2).
\]
The function $\varphi$ turns out to be a completely monotonic function.

3. There exist a non-negative random variable $\Theta$ and a symmetric Gaussian stochastic process $\{Y_t : t \in T\}$ independent of $\Theta$ such that
\[
\{X_t : t \in T\} \overset{d}{=} \{Y_t \cdot \Theta^{1/2} : t \in T\}.
\]
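Condition 3 is easy to illustrate numerically. In the following Python sketch the mixing variable $\Theta$ is taken exponential with mean one (an illustrative choice, not from the text); its Laplace transform is $L(s) = 1/(1+s)$, so the characteristic function of $X = Y \cdot \Theta^{1/2}$ must equal $\varphi(\xi^T C \xi)$ with $\varphi(r) = 1/(1 + r/2)$, which is checked by Monte Carlo.

```python
import numpy as np

# Sketch of condition 3: an elliptically contoured vector as a Gaussian scale
# mixture X = Y * Theta^{1/2}.  Theta ~ Exp(1) is an illustrative mixing law;
# E exp{i<xi, X>} = E_Theta exp{-Theta xi'C xi / 2} = 1 / (1 + xi'C xi / 2).
rng = np.random.default_rng(1)
n, N = 3, 400_000
A = rng.standard_normal((n, n))
C = A @ A.T / n                          # covariance of the Gaussian factor Y

Y = rng.multivariate_normal(np.zeros(n), C, size=N)
Theta = rng.exponential(1.0, size=N)
X = Y * np.sqrt(Theta)[:, None]          # the elliptically contoured sample

xi = np.array([0.7, -0.3, 0.5])
empirical = np.cos(X @ xi).mean()        # E cos<xi, X>, since X is symmetric
predicted = 1.0 / (1.0 + xi @ C @ xi / 2.0)
assert abs(empirical - predicted) < 0.01
```

Note that $\varphi(r) = 1/(1 + r/2)$ is completely monotonic, in agreement with the remark after condition 2.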

In 1982 Hardin published a paper (see [84]) in which he studied the linear regression property for elliptically contoured processes. He showed that one of the possible definitions of linearity of regression is equivalent to the process being elliptically contoured. Namely, he considered the following definition:

Definition IV.2.2. Let $\mathcal{X} = \{X_t : t \in T\} \subset L_1(\Omega, \mathbf{P})$ be a stochastic process and let $\mathcal{L}(\mathcal{X})$ be the real linear space of all finite linear combinations $\sum \xi_k X_{t_k}$. We say that $\mathcal{X}$ has the linear regression property if all regressions in $\mathcal{L}(\mathcal{X})$ are linear, that is,
\[
E(X_0 \mid X_1, \ldots, X_n) \in \mathcal{L}(\mathcal{X}) \quad \text{whenever } X_0, X_1, \ldots, X_n \in \mathcal{L}(\mathcal{X}).
\]
The main result in Hardin's paper is based on the following lemma:

Lemma IV.2.1 (Hardin [84]). Let $\mathcal{X} = \{X_t : t \in T\} \subset L_1(\Omega, \mathbf{P})$ be an elliptically contoured process. Then

1. There exists an inner product $\langle \cdot, \cdot \rangle$ on $\mathcal{L}(\mathcal{X})$ such that
\[
(E|X|)^2 = \langle X, X \rangle \quad \text{for all } X \in \mathcal{L}(\mathcal{X}),
\]
and
\[
E(X \mid Y) = \frac{\langle X, Y \rangle}{\langle Y, Y \rangle} \cdot Y \quad \text{for all } X, Y \neq 0 \text{ in } \mathcal{L}(\mathcal{X});
\]
2. If $X, Y \in \mathcal{L}(\mathcal{X})$, $E|X| = E|Y| = 1$, and $E(X \mid Y) = 0$, then the random vector $(X, Y)$ is rotationally invariant.

Theorem IV.2.2 (Hardin [84]). Let $\mathcal{X} = \{X_t : t \in T\}$ be a stochastic process satisfying at least one of the following two conditions:

a) $\mathcal{X} \subset L_2(\Omega, \mathbf{P})$ and $\dim(\mathcal{L}(\mathcal{X})) \ge 2$;

b) $\mathcal{X} \subset L_1(\Omega, \mathbf{P})$ and $\dim(\mathcal{L}(\mathcal{X})) \ge 3$.

Then $\mathcal{X}$ has the linear regression property if and only if $\mathcal{X}$ is an elliptically contoured process.

Of course, every elliptically contoured process for which regressions are defined has the linear regression property; in this implication the dimension condition is not needed. For the opposite implication, when $\mathcal{X} \subset L_1(\Omega, \mathbf{P})$ the condition $\dim(\mathcal{L}(\mathcal{X})) \ge 3$ is essential. Hardin has given an example ($\mathcal{X} = (X, Y)$, where $X, Y$ are i.i.d. random variables with a symmetric $\alpha$-stable distribution, $\alpha < 2$) of a two-dimensional process which has the linear regression property but which is not elliptically contoured. See also [147], Th. 6.1.1, and [112], Th. 1.4.
Let $\mathcal{X} = \{X_t : t \in T\}$ be an elliptically contoured process with an infinite number of linearly independent random variables. By Theorem IV.2.1 (condition 3) it follows that $\mathcal{X}$ is a scale mixture of a symmetric Gaussian process; thus, assuming that the mixing random variable $\Theta$ has no atom at zero, all finite-dimensional projections of $\mathcal{X}$ are absolutely continuous with respect to the Lebesgue measure. More precisely: for every $n \in \mathbb{N}$ and every choice of $t_1, \ldots, t_n \in T$ the random vector $(X_{t_1}, \ldots, X_{t_n})$ has density of the form
\[
|\Re(t_1, \ldots, t_n)|^{-1/2}\, f_n(x\, \Re(t_1, \ldots, t_n)^{-1} x'), \qquad x \in \mathbb{R}^n,
\]
where $\Re(t_1, \ldots, t_n)$ is the symmetric $n \times n$ matrix $(E X_{t_i} X_{t_j})$ and
\[
f_n(r) = \int_0^\infty (2\pi t)^{-n/2} \exp\Bigl\{-\frac{r}{2t}\Bigr\}\,\lambda(dt),
\]
where $\lambda$ is the distribution of the random variable $\Theta$.

Notice that if $\mathcal{X} = \{X_t : t \in T\} \overset{d}{=} \{Y_t : t \in T\} \cdot \Theta = \mathcal{Y} \cdot \Theta$, with $\Theta$ independent of $\{Y_t : t \in T\}$, then for every radial set $B \subset \mathbb{R}^T$ (i.e., $s \cdot B \subset B$ for every $s \in [0,\infty)$) we have
\[
\mathbf{P}\{\mathcal{X} \in B\} = \mathbf{P}\{\mathcal{Y} \in B\}.
\]
As we know, the linear support of the symmetric Gaussian process $\{Y_t : t \in T\}$ is radial. It is easy to see then that it is equal to the linear support of any elliptically contoured process of the form $\{Y_t : t \in T\} \cdot \Theta^{1/2}$, unless $\mathbf{P}\{\Theta = 0\} = 1$. Denoting by $A(\mathcal{X})$ the set of admissible translations for the stochastic process $\mathcal{X}$, we also have the following lemma, proved independently by Żak [249], and Smoleński and Sztencel [224].

Lemma IV.2.2. Let $\mathcal{X}$ be an elliptically contoured stochastic process with the representation $\mathcal{Y} \cdot \Theta^{1/2} = \{Y_t : t \in T\} \cdot \Theta^{1/2}$. Then either $\mathbf{P}\{\Theta = 0\} = 1$ or $A(\mathcal{X}) = A(\mathcal{Y})$.

P r o o f. Since $A(\mathcal{Y})$ is a radial set, it follows that if $x \in A(\mathcal{Y})$ then $x/\sqrt{t} \in A(\mathcal{Y})$ for every $t > 0$. Take a set $B$ such that $\mathbf{P}\{\mathcal{X} \in B\} = 0$. Since $\mathbf{P}\{\mathcal{X} \in B\} = \int_0^\infty \mathbf{P}\{\mathcal{Y} \in B/\sqrt{t}\}\,\lambda(dt)$, it follows that $\mathbf{P}\{\mathcal{Y} \in B/\sqrt{t}\} = 0$ for $\lambda$-almost all $t$, where $\lambda$ denotes the distribution of the variable $\Theta$. Consequently, $\mathbf{P}\{\mathcal{Y} \in (B - x)/\sqrt{t}\} = 0$ for $\lambda$-almost all $t$, hence $\mathbf{P}\{\mathcal{X} \in (B - x)\} = 0$. Conversely, let $x \notin A(\mathcal{Y})$. Since $A(\mathcal{Y})$ for a Gaussian process $\mathcal{Y}$ is equal to the intersection of all linear spaces of full measure, there exists a linear subspace $L$ of $\mathbb{R}^T$ such that $\mathbf{P}\{\mathcal{Y} \in L\} = 1$ and $x \notin L$. Thus $\mathbf{P}\{\mathcal{X} \in L\} = 1$ and hence $x \notin A(\mathcal{X})$.

In 1976 Zinn (see [246]) asked whether for an $\alpha$-stable distribution $\mu$ the assumption that the symmetric spectral measure $\nu$ of $\mu$ does not charge finite-dimensional sets implies that $A(\mu) = \{0\}$. In 1986 Żak and in 1987 Smoleński and Sztencel (see [249], [224]) proved that the answer is “no”.
Namely, they proved that every $\alpha$-stable elliptically contoured distribution has the spectral measure $\nu$ equivalent to the radial projection of the corresponding Gaussian distribution $\gamma$; i.e., it does not charge finite-dimensional sets. But, as we have seen above, the set of admissible translates for an elliptically contoured $\alpha$-stable distribution coincides with the set of admissible translates for the corresponding Gaussian distribution, so it is not equal to $\{0\}$.

IV.3. Symmetric stable stochastic processes. By $\mathbb{R}^{(T)}$ we denote the linear space of all $\xi \in \mathbb{R}^T$ having a finite number of non-zero coordinates.

Definition IV.3.1. A stochastic process $\mathcal{X} = \{X_t : t \in T\}$ with an arbitrary index set $T$ is called symmetric $\alpha$-stable if each element of $\mathcal{L}(\mathcal{X})$, the set of all finite real combinations $\sum_{t \in T} \xi_t X_t$, $\xi \in \mathbb{R}^{(T)}$, is a symmetric $\alpha$-stable random variable.

Example IV.3.1 (Symmetric $\alpha$-stable Lévy motion). A stochastic process $\mathcal{X} = \{X_t : t \ge 0\}$ is called a symmetric $\alpha$-stable Lévy motion for some $\alpha \in (0,2]$ if

(1) $X_0 = 0$ a.s.;

(2) $\mathcal{X}$ has independent increments;

(3) $X_t - X_s$ has distribution $\mathcal{S}(\alpha, (t-s)^{1/\alpha})$ for any $0 \le s \le t < \infty$.

Observe that the process $\mathcal{X}$ has stationary increments. It is the Brownian motion when $\alpha = 2$. The $\alpha$-stable Lévy motions are $1/\alpha$-self-similar, that is, for all $c > 0$, $\{X_{ct} : t \ge 0\}$ and $\{c^{1/\alpha} X_t : t \ge 0\}$ have the same finite-dimensional distributions. The role that $\alpha$-stable Lévy motion plays among stable processes is similar to the role that the Brownian motion plays among Gaussian processes.
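A symmetric $\alpha$-stable Lévy motion is easy to simulate on a grid from its independent, stationary increments. The sketch below (Python) generates standard S$\alpha$S variates by the Chambers–Mallows–Stuck formula; the parameter values and the grid are illustrative.

```python
import numpy as np

# Sketch of Example IV.3.1: a symmetric alpha-stable Levy motion on a grid,
# built from i.i.d. SaS increments with scale (dt)^{1/alpha}.  The increments
# use the Chambers-Mallows-Stuck formula for symmetric stable variables
# (valid for alpha != 1; alpha = 1 is the Cauchy case tan(V)).
def sas(alpha, size, rng):
    # standard SaS variates, characteristic function exp(-|t|^alpha)
    V = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    return (np.sin(alpha * V) / np.cos(V) ** (1 / alpha)
            * (np.cos((1 - alpha) * V) / W) ** ((1 - alpha) / alpha))

alpha, n_steps, dt = 1.5, 1000, 0.01
rng = np.random.default_rng(2)
increments = dt ** (1 / alpha) * sas(alpha, n_steps, rng)  # X_t - X_s ~ S(alpha, (t-s)^{1/alpha})
X = np.concatenate([[0.0], np.cumsum(increments)])         # X_0 = 0 a.s.
assert X.shape == (n_steps + 1,) and X[0] == 0.0
```

For $\alpha = 2$ the same scheme (up to the scale convention) produces a Brownian path, and rescaling time by $c$ while rescaling space by $c^{1/\alpha}$ leaves the finite-dimensional distributions unchanged, as stated above.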

Since, for a symmetric $\alpha$-stable process $\mathcal{X}$, each $Y = \sum_{t \in T} \xi_t X_t \in \mathcal{L}(\mathcal{X})$ is a symmetric $\alpha$-stable random variable, therefore (see Corollary II.1.1) for every $\xi \in \mathbb{R}^{(T)}$ there exists a non-negative constant $c(\xi)$ such that

\[
\sum_{t \in T} \xi_t X_t \overset{d}{=} c(\xi) X_0,
\]
where $X_0$ is the standard symmetric $\alpha$-stable random variable with the distribution $\mathcal{S}(\alpha, 1)$. In order to construct a spectral representation for the symmetric non-degenerate $\alpha$-stable process, we introduce a quasi-norm on the set $\mathcal{L}(\mathcal{X})$ by defining
\[
\text{(IV.3.1)} \qquad \Bigl\| \sum_{t \in T} \xi_t X_t \Bigr\|_{\mathcal{L}(\mathcal{X})} = c(\xi)^{\alpha \wedge 1}
\]
($\|\cdot\|_{\mathcal{L}(\mathcal{X})}$ is a norm if $\alpha \ge 1$). Schilder [213] has shown that this quasi-norm metrizes convergence in probability on $\mathcal{L}(\mathcal{X})$. By $\mathcal{L}^\circ(\mathcal{X})$ we denote the completion of $\mathcal{L}(\mathcal{X})$ in this metric. It is easy to see that each element of $\mathcal{L}^\circ(\mathcal{X})$ is symmetric $\alpha$-stable. We already know (see Example II.1.3) that for every $n \in \mathbb{N}$ each $n$-dimensional subspace of $\mathcal{L}^\circ(\mathcal{X})$ embeds linearly and isometrically into $L_\alpha(S^{n-1})$. This implies, by Bretagnolle et al. (see [29], Theorem 4, p. 246) for the case $\alpha \ge 1$ and Schreiber (see [218], Corollary 3.3, p. 89) for the case $\alpha < 1$, the following:

Theorem IV.3.1. If $\mathcal{X} = \{X_t : t \in T\}$ is a S$\alpha$S process then there exist a measure space $(E, \mu)$ and a linear isometric embedding $\Re : \mathcal{L}^\circ(\mathcal{X}) \to L_\alpha(E, \mu)$ such that for every $\xi \in \mathbb{R}^{(T)}$,
\[
\text{(IV.3.2)} \qquad E \exp\Bigl\{i \sum_{t \in T} \xi_t X_t\Bigr\} = \exp\Bigl\{-\Bigl\| \sum_{t \in T} \xi_t f_t \Bigr\|_\alpha^\alpha\Bigr\},
\]
where $\{f_t = \Re(X_t) : t \in T\} \subset L_\alpha(E, \mu)$.

Conversely, the Kolmogorov theorem implies that for any choice of functions $\{f_t : t \in T\}$ the formula (IV.3.2) defines a symmetric $\alpha$-stable process $\{X_t : t \in T\}$. For more details see also [113], [85].

Note that the measure $\mu$ on $E$ does not have to be finite; it does not even have to be $\sigma$-finite. To see this, consider a stochastic process $\{X_t : t \ge 0\}$, where the random variables $X_t$ are totally independent with symmetric $\alpha$-stable distribution $\mathcal{S}(\alpha, 1)$. For the space $E$ we can take here $[-1,1]^{[0,\infty)}$ with the measure $\mu$ having atoms of the same weight at every point $\pm e_t$, where $e_t$ has all the coefficients zero except the $t$-th.

The linear operator $\Re : \mathcal{L}^\circ(\mathcal{X}) \to L_\alpha(E, \mu)$ is called the spectral representation of the symmetric $\alpha$-stable process $\mathcal{X}$. In the case where $\mathcal{L}^\circ(\mathcal{X})$ is separable, we may choose $(E, \mu)$ to be $[0,1]$ with the Lebesgue measure. Sometimes in this paper we will identify the spaces $\mathcal{L}(\mathcal{X})$ and $\mathbb{R}^{(T)}$ by
\[
\mathcal{L}(\mathcal{X}) \ni \sum_{t \in T} \xi_t X_t \leftrightarrow \xi \in \mathbb{R}^{(T)},
\]
and, consequently, we will say that the linear operator $\Re$ maps $\mathbb{R}^{(T)}$ into $L_\alpha(E, \mu)$.

By the Reproducing Kernel Space for the symmetric $\alpha$-stable process $\mathcal{X}$ (notation $\mathcal{H}(\mathcal{X})$) we will understand
\[
\mathcal{H}(\mathcal{X}) = \{\Re(Y) : Y \in \mathcal{L}^\circ(\mathcal{X})\} \subset L_\alpha(E, \mu),
\]
or, equivalently, with the identification $\mathbb{R}^{(T)} \equiv \mathcal{L}(\mathcal{X})$, the space $\mathcal{H}(\mathcal{X})$ is the completion of $\Re(\mathbb{R}^{(T)})$ in the space $L_\alpha(E, \mu)$.

When $\alpha = 2$, the space $\mathcal{H}(\mathcal{X})$ is called the Reproducing Kernel Hilbert Space for the Gaussian process $\mathcal{X}$. $\mathcal{H}(\mathcal{X})$ is then a Hilbert space, so any other Hilbert space of the same dimension is isometrically isomorphic to $\mathcal{H}(\mathcal{X})$. Subspaces of an $L_\alpha$-space do not have this property; for example, for every $n \in \mathbb{N}$ and every $1 < \alpha < 2$, there exists an $L_1$-space containing isometrically both $\ell_\alpha^n$ and $\ell_2^n$.

We want to describe how the geometry of the space $\mathcal{H}(\mathcal{X}) \subset L_\alpha$ affects the properties of substable processes. It may happen that $\mathcal{H}(\mathcal{X})$ embeds isometrically into some $L_\beta$-space for some $\beta \in (\alpha, 2]$, which is shown by the following example:

Example IV.3.2. Let $\mathcal{Y} = \{Y_t : t \in T\}$ be any symmetric $\beta$-stable stochastic process (for example, a symmetric $\beta$-stable Lévy motion) for some $\beta \in (0,2]$ with the spectral representation $\Re$, and let $\alpha < \beta$. We define the following process:
\[
\mathcal{X} = \mathcal{Y} \cdot \Theta_{\alpha/\beta}^{1/\beta} = \{X_t = Y_t \cdot \Theta_{\alpha/\beta}^{1/\beta} : t \in T\},
\]
where $\Theta_{\alpha/\beta}$ is independent of $\mathcal{Y}$. It is easy to see that $\mathcal{X}$ is a symmetric $\alpha$-stable stochastic process, as all its finite-dimensional distributions are symmetric $\alpha$-stable:

\[
E \exp\Bigl\{i \sum_{t \in T} \xi_t X_t\Bigr\} = E_\Theta \exp\{-\|\Re(\xi)\|_\beta^\beta \cdot \Theta_{\alpha/\beta}\} = \exp\{-\|\Re(\xi)\|_\beta^\alpha\}.
\]
It follows from Theorem IV.3.1 that there exists a linear operator $\Re_1 : \mathbb{R}^{(T)} \to L_\alpha(E_1, \mu_1)$ such that
\[
E \exp\Bigl\{i \sum_{t \in T} \xi_t X_t\Bigr\} = \exp\{-\|\Re_1(\xi)\|_\alpha^\alpha\}.
\]
Comparing these two formulas, we get
\[
L_\beta(E, \mu) \supset \mathcal{H}(\mathcal{Y}) \equiv \mathcal{H}(\mathcal{X}) \subset L_\alpha(E_1, \mu_1).
\]
Similarly to the finite-dimensional case, we define for a symmetric $\alpha$-stable stochastic process $\mathcal{X} = \{X_t : t \in T\}$ the following parameter:
\[
\alpha_\circ = \sup\{\beta \in (0,2] : \mathcal{H}(\mathcal{X}) \text{ embeds isometrically into some } L_\beta\}
= \sup\{\beta \in (0,2] : \|\Re(\xi)\|_\alpha^\beta \text{ is negative definite on } \mathcal{H}(\mathcal{X})\}.
\]
Every symmetric $\alpha$-stable stochastic process can be treated as being substable with mixing variable $\Theta \equiv 1$. Using Lemma II.5.2 (which does not depend on the dimension of the Reproducing Kernel Space) we conclude that for every symmetric $\alpha$-stable stochastic process $\mathcal{X}$, there exists a maximal $\alpha_\circ$ (coinciding with the one defined above) such that $\mathcal{X} \overset{d}{=} \mathcal{Y} \cdot \Theta_{\alpha/\alpha_\circ}^{1/\alpha_\circ}$, where $\mathcal{Y}$ is symmetric $\alpha_\circ$-stable, and $\Theta_{\alpha/\alpha_\circ}$ is independent of $\mathcal{Y}$.

Let the set $T$ of parameters be fixed. Denote by $\mathcal{MS}(\alpha)$ the set of all symmetric $\alpha$-stable processes $\mathcal{X} = \{X_t : t \in T\}$ for which $\alpha_\circ(\mathcal{H}(\mathcal{X})) = \alpha$. We identify processes having all finite-dimensional projections the same. By $\mathcal{SS}(\alpha)$ we denote the set of all symmetric $\alpha$-stable stochastic processes $\mathcal{X} = \{X_t : t \in T\}$. In view of Lemma II.5.3 it is easy to see that $\mathcal{SS}(\alpha)$ can be written as a sum of disjoint classes:
\[
\mathcal{SS}(\alpha) = \mathcal{MS}(\alpha) \cup \bigcup_{\alpha < \beta \le 2} \mathcal{MS}(\beta) \circ \Theta_{\alpha/\beta}^{1/\beta},
\]
where
\[
\mathcal{MS}(\beta) \circ \Theta_{\alpha/\beta}^{1/\beta} = \{\mathcal{X} = \mathcal{Y} \cdot \Theta_{\alpha/\beta}^{1/\beta} : \Theta_{\alpha/\beta}, \mathcal{Y} \text{ independent},\ \mathcal{Y} \in \mathcal{MS}(\beta)\}
= \{\mathcal{X} \in \mathcal{SS}(\alpha) : \alpha_\circ(\mathcal{H}(\mathcal{X})) = \beta\}.
\]
Example IV.3.3. The “simplest” S$\alpha$S stochastic processes in this sense, i.e. processes from the class $\mathcal{MS}(\alpha)$, are processes with independent increments, since their Reproducing Kernel Spaces contain $\ell_\alpha$ isometrically, thus they do not embed isometrically in any $L_\beta$-space with $\beta \in (\alpha, 2]$. To see this, we take a symmetric $\alpha$-stable process $\mathcal{X} = \{X_t : t \in T\}$ with independent increments and the representing operator $\Re : \mathbb{R}^{(T)} \to L_\alpha$. Choosing an infinite sequence $t_1 < t_2 < \ldots$, $t_i \in T$, we see that $\ell_\alpha = \mathrm{Lin}\{u_j : j \in \mathbb{N}\}$, where $u_j = \Re(e_{t_j} - e_{t_{j-1}})\, \|\Re(e_{t_j} - e_{t_{j-1}})\|_\alpha^{-1}$.

It is known that none of the $L_\alpha$-spaces, $\alpha \in (0,2]$ (or their subspaces), contains $\ell_\beta^n$'s uniformly if $0 < \beta < \alpha$. Moreover, the set of all $\beta \in (0,2]$ for which $\mathcal{H}(\mathcal{X})$ does not contain $\ell_\beta^n$'s uniformly is an open interval, thus there exists
\[
\alpha_s = \inf\{\beta \in (0,2] : \mathcal{H}(\mathcal{X}) \text{ contains } \ell_\beta^n\text{'s uniformly}\},
\]
and $\mathcal{H}(\mathcal{X})$ contains $\ell_{\alpha_s}^n$'s uniformly (for details see e.g. [134]). Finally, for every symmetric $\alpha$-stable process $\mathcal{X}$ we have
\[
\alpha \le \alpha_\circ(\mathcal{H}(\mathcal{X})) \le \alpha_s(\mathcal{H}(\mathcal{X})).
\]
Example IV.3.4. In order to construct an example of a S$\alpha$S stochastic process $\{X_n : n \in \mathbb{N}\}$ for which $\mathcal{H}(\mathcal{X})$ contains $\ell_\alpha^n$'s uniformly but does not contain $\ell_\alpha$ isometrically, let us choose first a sequence $\alpha_n \searrow \alpha$, $\alpha_n \in (0,2]$, such that for every $n \in \mathbb{N}$ and every $x \in \mathbb{R}^n$,
\[
\|x\|_{\alpha_n} \le \|x\|_\alpha \le (1 + 1/n)\|x\|_{\alpha_n}.
\]
It is evident that such a choice is possible. Now we take the following families of totally independent random variables:

$\bullet$ $Y_{lj}$, a symmetric $\alpha_l$-stable random variable with characteristic function $\exp\{-|t|^{\alpha_l}\}$, $l \in \mathbb{N}$, $j = 0, \ldots, l-1$;

$\bullet$ $\Theta_{\alpha_{i+1}/\alpha_i}$, a positive, $(\alpha_{i+1}/\alpha_i)$-stable random variable with Laplace transform $\exp\{-t^{\alpha_{i+1}/\alpha_i}\}$, $i \in \mathbb{N}$;

$\bullet$ $\Theta_{\alpha/\alpha_i}$, a positive, $(\alpha/\alpha_i)$-stable random variable with Laplace transform $\exp\{-t^{\alpha/\alpha_i}\}$, $i \in \mathbb{N}$.

We now construct a consistent family of measures on $\mathbb{R}^{(\infty)}$ by defining a measure $\mu_n$ on $\mathbb{R}^{n(n+1)/2}$ as the distribution of $X_n = (X_1^n, \ldots, X_{n(n+1)/2}^n)$, where

\[
X_k^n = Y_{lj} \cdot \prod_{i=l}^{n} \Theta_{\alpha_{i+1}/\alpha_i}^{1/\alpha_i} \cdot \Theta_{\alpha/\alpha_{n+1}}^{1/\alpha_{n+1}} \qquad \text{for } k = \frac{(l-1)l}{2} + j.
\]
The consistency of the family $\mu_n$ follows from the equality

\[
\Theta_{\alpha_{n+1}/\alpha_n} \cdot \Theta_{\alpha/\alpha_{n+1}}^{\alpha_n/\alpha_{n+1}} \overset{d}{=} \Theta_{\alpha/\alpha_n}.
\]
From the Kolmogorov theorem there exists a symmetric stochastic process $\{Z_k : k \in \mathbb{N}\}$ such that for every $n \in \mathbb{N}$,
\[
(Z_1, \ldots, Z_{n(n+1)/2}) \overset{d}{=} X_n,
\]
and this is the required symmetric $\alpha$-stable process. To see that the space $\mathcal{H}(\mathcal{X})$ for this process contains $\ell_\alpha^n$'s uniformly, let us fix $\varepsilon > 0$, $n \in \mathbb{N}$. We choose $m \in \mathbb{N}$ such that $m \ge \max\{n, 1/\varepsilon\}$. Calculating now the characteristic function of the random vector $(Z_{m(m-1)/2}, \ldots, Z_{m(m-1)/2+n-1})$, we get

\[
E \exp\Bigl\{i \sum_{j=0}^{n-1} \xi_j Z_{m(m-1)/2+j}\Bigr\} = E \exp\Bigl\{i \sum_{j=0}^{n-1} \xi_j X^m_{m(m-1)/2+j}\Bigr\}
= E \exp\Bigl\{i \sum_{j=0}^{n-1} \xi_j Y_{mj} \Theta_{\alpha/\alpha_m}^{1/\alpha_m}\Bigr\}
= \exp\Bigl\{-\Bigl(\sum_{j=0}^{n-1} |\xi_j|^{\alpha_m}\Bigr)^{\alpha/\alpha_m}\Bigr\}.
\]
This means that the $n$-dimensional part of $\mathcal{H}(\mathcal{X})$ corresponding to the set $\{Z_{m(m-1)/2}, \ldots, Z_{m(m-1)/2+n-1}\}$ of random variables is isometric to the space $\ell_{\alpha_m}^n$, and according to our assumptions

\[
\Bigl(\sum_{j=0}^{n-1} |\xi_j|^{\alpha_m}\Bigr)^{1/\alpha_m} \le \Bigl(\sum_{j=0}^{n-1} |\xi_j|^{\alpha}\Bigr)^{1/\alpha} \le (1 + 1/m)\Bigl(\sum_{j=0}^{n-1} |\xi_j|^{\alpha_m}\Bigr)^{1/\alpha_m}.
\]
Noticing that $1 + 1/m < 1 + \varepsilon$ ends the proof.
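One explicit way to choose the sequence $\alpha_n$ in Example IV.3.4 is via the classical comparison $\|x\|_p \le n^{1/p - 1/q}\|x\|_q$ for $p \le q$ on $\mathbb{R}^n$: it suffices to take $\alpha_n \in (\alpha, 2]$ with $n^{1/\alpha - 1/\alpha_n} \le 1 + 1/n$. The following Python sketch verifies the two-sided estimate for such a choice; the value of $\alpha$ and the test vectors are illustrative.

```python
import numpy as np

# Sketch: pick alpha_n > alpha so that n^{1/alpha - 1/alpha_n} <= 1 + 1/n,
# which guarantees ||x||_{alpha_n} <= ||x||_alpha <= (1 + 1/n) ||x||_{alpha_n}
# on R^n by the classical p-norm comparison.  alpha = 1.2 is illustrative.
def pnorm(x, p):
    return np.sum(np.abs(x) ** p) ** (1 / p)

alpha = 1.2
rng = np.random.default_rng(3)
for n in range(2, 8):
    # largest admissible alpha_n, capped at 2 to stay inside (0, 2]
    alpha_n = min(1.0 / (1.0 / alpha - np.log(1 + 1 / n) / np.log(n)), 2.0)
    assert alpha < alpha_n <= 2.0
    for _ in range(50):
        x = rng.standard_normal(n)
        lo, mid = pnorm(x, alpha_n), pnorm(x, alpha)
        assert lo <= mid + 1e-9                  # ||x||_{alpha_n} <= ||x||_alpha
        assert mid <= (1 + 1 / n) * lo + 1e-9    # ||x||_alpha <= (1+1/n) ||x||_{alpha_n}
```

Since $\log(1+1/n)/\log n \to 0$, the capped choice indeed satisfies $\alpha_n \searrow \alpha$.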

Example IV.3.5. We show that it may happen that $\alpha_\circ < \alpha_s$. Let us take:

$\bullet$ $0 < \alpha < \beta < \eta \le 2$;

$\bullet$ $X_1, X_2$, independent identically distributed symmetric $\beta$-stable random variables with characteristic function $\exp\{-|t|^\beta\}$;

$\bullet$ $X_3, X_4, \ldots$, independent identically distributed symmetric $\eta$-stable random variables with characteristic function $\exp\{-|t|^\eta\}$;

$\bullet$ $\Theta_{\alpha/\beta}$;

$\bullet$ $\Theta_{\beta/\eta}$.

We choose all these variables to be totally independent. Now we define a sequence of random variables $Y_n$ in the following way:

\[
Y_n = \begin{cases} X_1 & \text{if } n = 1, \\ X_2 & \text{if } n = 2, \\ X_n \Theta_{\beta/\eta}^{1/\eta} & \text{if } n \ge 3, \end{cases}
\]
and a sequence of random variables $Z_n$ by the formula $Z_n = Y_n \Theta_{\alpha/\beta}^{1/\beta}$. The sequence $Y_n$ is symmetric $\beta$-stable because its one-dimensional projections are all symmetric $\beta$-stable:
\[
E \exp\Bigl\{it \sum_{k=1}^\infty \xi_k Y_k\Bigr\} = E \exp\{it(\xi_1 X_1 + \xi_2 X_2)\} \cdot E \exp\Bigl\{it \sum_{k=3}^\infty \xi_k X_k \Theta_{\beta/\eta}^{1/\eta}\Bigr\}
\]
\[
= \exp\{-|t|^\beta(|\xi_1|^\beta + |\xi_2|^\beta)\} \int_0^\infty \exp\Bigl\{-|t|^\eta \sum_{k=3}^\infty |\xi_k|^\eta u\Bigr\}\,\gamma_{\beta/\eta}^+(du)
\]
\[
= \exp\Bigl\{-|t|^\beta\Bigl(|\xi_1|^\beta + |\xi_2|^\beta + \Bigl(\sum_{k=3}^\infty |\xi_k|^\eta\Bigr)^{\beta/\eta}\Bigr)\Bigr\}
= \exp\{-|t|^\beta(\|(\xi_1, \xi_2)\|_\beta^\beta + \|(\xi_3, \xi_4, \ldots)\|_\eta^\beta)\}.
\]
This means that the sequence $Z_n$ is $\beta$-substable. On the other hand, it is symmetric $\alpha$-stable since
\[
E \exp\Bigl\{it \sum_{k=1}^\infty \xi_k Z_k\Bigr\} = E \exp\Bigl\{it\Bigl(\sum_{k=1}^\infty \xi_k Y_k\Bigr)\Theta_{\alpha/\beta}^{1/\beta}\Bigr\}
\]

\[
= \int_0^\infty \exp\{-|t|^\beta(\|(\xi_1, \xi_2)\|_\beta^\beta + \|(\xi_3, \xi_4, \ldots)\|_\eta^\beta)\,u\}\,\gamma_{\alpha/\beta}^+(du)
= \exp\{-|t|^\alpha(\|(\xi_1, \xi_2)\|_\beta^\beta + \|(\xi_3, \xi_4, \ldots)\|_\eta^\beta)^{\alpha/\beta}\}.
\]
In a similar way we can prove that the sequence $Z_n$ is also $\tau$-substable for every $\alpha < \tau < \beta$. As the geometries of level curves of characteristic functions for the sequences $Y_n$ and $Z_n$ coincide, and the $Y_n$'s have every moment $\tau$, $\tau < \beta$, the linear operator $\Re_\tau : \mathbb{R}^{(\infty)} \to L_\tau$ corresponding to $Z_n$ (also to $Y_n$) can be defined as follows:
\[
\mathbb{R}^{(\infty)} \ni \xi \mapsto \Re_\tau(\xi) = (E|X_1|^\tau)^{-1/\tau} \sum_{k=1}^\infty \xi_k Y_k \in L_\tau(\Omega).
\]
Indeed, we have

\[
\Bigl\| \sum_{k=1}^\infty \xi_k Y_k \Bigr\|_\tau^\tau = E\Bigl| \sum_{k=1}^\infty \xi_k Y_k \Bigr|^\tau
= E\bigl|(\|(\xi_1, \xi_2)\|_\beta^\beta + \|(\xi_3, \xi_4, \ldots)\|_\eta^\beta)^{1/\beta} X_1\bigr|^\tau
= E|X_1|^\tau\, (\|(\xi_1, \xi_2)\|_\beta^\beta + \|(\xi_3, \xi_4, \ldots)\|_\eta^\beta)^{\tau/\beta}.
\]
The above calculations show that the mapping
\[
L_\alpha(\Omega) \ni \Re_\alpha(\xi) \mapsto \Re_\tau(\xi) \in L_\tau(\Omega)
\]
defines an isometry (up to a multiplicative constant) between $\mathcal{H}(\Re_\alpha)$ and $\mathcal{H}(\Re_\tau)$; thus $\mathcal{H}(\Re_\alpha)$ embeds isometrically into some $L_\tau$-space for every $\alpha < \tau < \beta$, and we have $\alpha_\circ = \beta$. On the other hand, $\alpha_s = \eta$ because $\mathcal{H}(\Re_\alpha)$ contains $\ell_\eta$ isometrically.
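The computation that drives Example IV.3.5 (and substability in general) is the subordination identity: if $E\exp\{-s\Theta_p\} = \exp\{-s^p\}$, then mixing the $\beta$-stable characteristic function $\exp\{-|t|^\beta c\}$ over $\Theta_{\alpha/\beta}$ yields $\exp\{-|t|^\alpha c^{\alpha/\beta}\}$. A small Python check of the resulting identity on a grid (all numerical values are illustrative):

```python
import numpy as np

# Sketch: if E exp(-s Theta_p) = exp(-s^p), then substituting
# s = |t|^beta * c with p = alpha/beta gives exp(-|t|^alpha c^{alpha/beta}),
# i.e. a scale mixture of beta-stable laws is alpha-stable.
alpha, beta = 1.1, 1.7
c = 0.8                                  # stands in for the norm expression

def laplace_theta(s):
    # Laplace transform of the positive (alpha/beta)-stable variable
    return np.exp(-s ** (alpha / beta))

t = np.linspace(0.1, 5.0, 50)
mixed = laplace_theta(np.abs(t) ** beta * c)    # E exp{-|t|^beta c Theta}
stable = np.exp(-np.abs(t) ** alpha * c ** (alpha / beta))
assert np.allclose(mixed, stable)
```

The same substitution, with $c = \|(\xi_1,\xi_2)\|_\beta^\beta + \|(\xi_3,\xi_4,\ldots)\|_\eta^\beta$, is exactly the step performed in the display above.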

IV.4. Spectral representation of symmetric stable processes. From Theorem IV.3.1 we know that for every symmetric $\alpha$-stable stochastic process $\mathcal{X} = \{X_t : t \in T\}$ there exist a measure space $(E, \mathcal{B}, \mu)$ and a linear isometric embedding $\Re : \mathcal{L}^\circ(\mathcal{X}) \to L_\alpha(E, \mu)$ such that for every $\xi \in \mathbb{R}^{(T)}$,
\[
E \exp\Bigl\{i \sum_{t \in T} \xi_t X_t\Bigr\} = \exp\Bigl\{-\Bigl\| \sum_{t \in T} \xi_t f_t \Bigr\|_\alpha^\alpha\Bigr\},
\]
where $\{f_t = \Re(X_t) : t \in T\} \subset L_\alpha(E, \mu)$.

As in the book of Samorodnitsky and Taqqu ([212], Sections 3.3 and 3.4), we construct an independently scattered $\sigma$-additive symmetric $\alpha$-stable random measure $Z_\alpha$ on $\mathcal{B}_0 = \{B \in \mathcal{B} : \mu(B) < \infty\}$ such that for every $B \in \mathcal{B}_0$, $Z_\alpha(B)$ is a symmetric $\alpha$-stable random variable with the distribution $\mathcal{S}(\alpha, \mu(B)^{1/\alpha})$. The measure $\mu$ is called the control measure for the $\alpha$-stable random measure $Z_\alpha$.
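For a simple function $f = \sum_k c_k 1_{B_k}$ with disjoint $B_k \in \mathcal{B}_0$, the stable integral introduced below reduces to $\sum_k c_k Z_\alpha(B_k)$, which is S$\alpha$S with scale $(\sum_k |c_k|^\alpha \mu(B_k))^{1/\alpha}$. The following Python sketch computes this scale and the corresponding characteristic function for the Lebesgue control measure on $[0,1]$; the step function is an illustrative choice.

```python
import numpy as np

# Sketch: for a simple function f = sum_k c_k 1_{B_k} with disjoint B_k,
# I(f) = sum_k c_k Z_alpha(B_k) is SaS with scale
# (sum_k |c_k|^alpha mu(B_k))^{1/alpha}, so E exp{i t I(f)} =
# exp{-|t|^alpha int |f|^alpha dmu}.  Control measure: Lebesgue on [0, 1].
alpha = 1.3
edges = np.array([0.0, 0.2, 0.5, 1.0])    # partition of E = [0, 1]
c = np.array([2.0, -1.0, 0.5])            # values of f on the three cells

mu = np.diff(edges)                       # mu(B_k) for Lebesgue measure
scale = np.sum(np.abs(c) ** alpha * mu) ** (1 / alpha)

def char_I(t):
    # characteristic function of the stable integral I(f)
    return np.exp(-np.abs(t) ** alpha * scale ** alpha)

# int |f|^alpha dmu equals scale^alpha by construction
assert abs(scale ** alpha - np.sum(np.abs(c) ** alpha * mu)) < 1e-12
```

General $f \in L_\alpha(E, \mu)$ are then handled by approximation with simple functions, which is how the integral is defined in [212], Section 3.4.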

Let
\[
I(f) = \int_E f(x)\, Z_\alpha(dx)
\]
be the $\alpha$-stable stochastic integral of the function $f \in L_\alpha(E, \mu)$ as defined in [212], Section 3.4. The characteristic function of the random variable $I(f)$ can be written as follows:
\[
E \exp\{itI(f)\} = \exp\Bigl\{-|t|^\alpha \int_E |f(x)|^\alpha\, \mu(dx)\Bigr\}.
\]
We now see that the symmetric $\alpha$-stable stochastic process $\mathcal{X} = \{X_t : t \in T\}$ can be identified with
\[
X_t = I(f_t) = \int_E f_t(x)\, Z_\alpha(dx), \qquad t \in T,
\]
where the function $f_t \in L_\alpha(E, \mu)$ is equal to $\Re(X_t)$.

In this section we consider the S$\alpha$S stochastic processes $\{X_t : t \in T\}$ which admit

a spectral representation of the form
\[
\text{(IV.4.1)} \qquad X_t = \int_\Omega Y_t(\omega)\, dZ_\alpha(\omega), \qquad t \in T,
\]
where $\{Y_t : t \in T\}$ itself is a symmetric stable process, $Y_t \in L_\alpha(\Omega, \mathcal{B}, \mathbf{P})$, $(\Omega, \mathcal{B}, \mathbf{P})$ is a probability space, and $Z_\alpha$ is a random S$\alpha$S measure on $(\Omega, \mathcal{B}, \mathbf{P})$ which is independently scattered. The multi-dimensional characteristic function of such a process is then
\[
\text{(IV.4.2)} \qquad E \exp\Bigl\{i \sum_{t \in T} \xi_t X_t\Bigr\} = \exp\Bigl\{-E\Bigl| \sum_{t \in T} \xi_t Y_t \Bigr|^\alpha\Bigr\}.
\]
Proposition IV.4.1 (Misiewicz [169]). Let $\mathcal{X} = \{X_t : t \in T\}$ be a symmetric $\alpha$-stable stochastic process. Then $\mathcal{X} \notin \mathcal{MS}(\alpha)$ if and only if $\mathcal{X}$ admits representation (IV.4.1) with $\mathcal{Y} = \{Y_t : t \in T\}$ a symmetric $\beta$-stable stochastic process for some $\beta \in (\alpha, 2]$.

P r o o f. If $\mathcal{X} \in \mathcal{SS}(\alpha) \setminus \mathcal{MS}(\alpha)$ then there exist $\beta \in (\alpha, 2]$ and a symmetric $\beta$-stable stochastic process $\mathcal{Z} = \{Z_t : t \in T\}$ such that
\[
\{X_t : t \in T\} \overset{d}{=} \{Z_t \cdot \Theta_{\alpha/\beta}^{1/\beta} : t \in T\},
\]

where $\Theta_{\alpha/\beta}$ is independent of $\mathcal{Z}$. We define $\mathcal{Y} = c^{-1}\mathcal{Z}$ for $c = (E|Y_0|^\alpha)^{1/\alpha}$, where $Y_0$ is a canonical symmetric $\beta$-stable random variable with characteristic function $\exp\{-|t|^\beta\}$. The stochastic process $\mathcal{Z}$ is symmetric $\beta$-stable, thus there exists a spectral representation $\Re : \mathbb{R}^{(T)} \to L_\beta$ such that
\[
E \exp\Bigl\{i \sum_{t \in T} \xi_t Z_t\Bigr\} = \exp\{-\|\Re(\xi)\|_\beta^\beta\},
\]
which means, in particular, that for every $\xi \in \mathbb{R}^{(T)}$ the random variable $\sum \xi_t Z_t$ has the same distribution as $\|\Re(\xi)\|_\beta \cdot Y_0$. Now it is enough to check that the corresponding multidimensional characteristic functions coincide. As $\mathcal{X} = \mathcal{Z} \cdot \Theta_{\alpha/\beta}^{1/\beta}$, its characteristic function is
\[
E \exp\Bigl\{i \sum_{t \in T} \xi_t Z_t \Theta_{\alpha/\beta}^{1/\beta}\Bigr\} = E_\Theta E \exp\{i \|\Re(\xi)\|_\beta Y_0 \Theta_{\alpha/\beta}^{1/\beta}\}
= E_\Theta \exp\{-\|\Re(\xi)\|_\beta^\beta\, \Theta_{\alpha/\beta}\} = \exp\{-\|\Re(\xi)\|_\beta^\alpha\}.
\]

From formula (IV.4.2) we can now calculate the characteristic function of the stochastic process defined as $\int_\Omega Y_t(\omega)\, Z_\alpha(d\omega)$:

\ α α −α E exp i ξt Yt dZα = exp E ξtYt = exp c E ξtZt T − T − T n Xt∈ Ω o n tX∈ o n tX∈ o −α α α = exp c E (ξ) βY0 = exp ( ξ) . {− |kℜ k | } {−kℜ kβ } To get the opposite implication, assume that the symmetric α-stable stochastic process is defined by formula (IV.4.1) for some symmetric β-stable process , β (α, 2]. It is X Y ∈ easy to see that ( ) c ( ) L , which implies that (α). H X ≡ · H Y ⊂ β X 6∈ MS Notice that the set of all symmetric α-stable processes can be written now as a sum

of disjoint classes:
$$\mathcal{MS}(\alpha) \cup \bigcup_{\alpha<\beta\le 2}\mathcal{MS}(\beta)\cdot\Theta_{\alpha/\beta}^{1/\beta} = \mathcal{MS}(\alpha) \cup \bigcup_{\alpha<\beta\le 2}\Big\{\int Y_t\,dZ_{\alpha} : \mathcal{Y}\in\mathcal{MS}(\beta)\Big\}.$$
The set $\mathcal{MS}(\alpha)$ thus seems to be a very special class of symmetric $\alpha$-stable processes; one could think that processes from this class are rather exceptional. On the other hand, processes from the class $\mathcal{MS}(\beta)\cdot\Theta_{\alpha/\beta}^{1/\beta}$ are simply scale mixtures of processes from $\mathcal{MS}(\beta)$, or $\alpha$-stable stochastic integrals of processes from $\mathcal{MS}(\beta)$, so their properties are strictly connected with the properties of processes from $\mathcal{MS}(\beta)$.

In this sense we can divide the study of symmetric stable processes into two areas: one, the properties of processes from the class $\mathcal{MS}(\alpha)$; two, the dependencies between the properties of processes from $\mathcal{MS}(\alpha)$ and the properties of mixtures of such processes. To see this, let us discuss several simple properties.

Let $T$ be an ordered $\mathbb{R}$-vector space. A stochastic process $\mathcal{X} = \{X_t : t\in T\}$ is called stationary if for every choice of $t_0\le\ldots\le t_n$, $t_0,\ldots,t_n\in T$, and every $s\in T$, the random vector $(X_{t_0},\ldots,X_{t_n})$ has the same distribution as $(X_{t_0+s},\ldots,X_{t_n+s})$. A stochastic process $\{X_t : t\in T\}$ has independent increments if for every choice of $t_1<\ldots<t_n$ the increments $X_{t_2}-X_{t_1},\ldots,X_{t_n}-X_{t_{n-1}}$ are independent; it has stationary increments if the distribution of each increment $X_{t+s}-X_t$ depends only on $s$. Finally, $\{X_t : t\in T\}$ is self-similar with power $r$ if for every choice of $t_1,\ldots,t_n,s\in T$ the random vector $(X_{st_1},\ldots,X_{st_n})$ has the same distribution as $|s|^r\cdot(X_{t_1},\ldots,X_{t_n})$. Now we

can formulate the following

Proposition IV.4.2 (Misiewicz [169]). Let $X_t = \int Y_t\,dZ_{\alpha}$ as defined in (IV.4.1) for a symmetric $\beta$-stable stochastic process $\mathcal{Y} = \{Y_t : t\in T\}$, $\beta\in(\alpha,2]$. Then:

1) if $\{Y_t : t\in T\}$ is stationary, then $\{X_t : t\in T\}$ is stationary;

2) if $\{Y_t : t\in T\}$ has stationary increments, then $\{X_t : t\in T\}$ has stationary increments;

3) if $\{Y_t : t\in T\}$ is self-similar with power $r$, then $\{X_t : t\in T\}$ is self-similar with the same power $r$.

P r o o f. To prove 1) it is enough to show that the vectors $(X_{t_1},\ldots,X_{t_n})$ and $(X_{t_1+s},\ldots,X_{t_n+s})$ have the same characteristic function for every choice of $t_1,\ldots,t_n,s\in T$. Let $\xi_1,\ldots,\xi_n\in\mathbb{R}$; then we have
$$E\exp\Big\{i\sum_{k=1}^n\xi_k X_{t_k}\Big\} = \exp\Big\{-E\Big|\sum_{k=1}^n\xi_k Y_{t_k}(\omega)\Big|^{\alpha}\Big\} = \exp\Big\{-E\Big|\sum_{k=1}^n\xi_k Y_{t_k+s}(\omega)\Big|^{\alpha}\Big\} = E\exp\Big\{i\sum_{k=1}^n\xi_k X_{t_k+s}\Big\},$$
which was to be shown. The proof of 2) is very similar.

Assume that $\{Y_t : t\in T\}$ is self-similar with power $r$, i.e. for every $t_1,\ldots,t_n,s\in T$, the random vector $(Y_{st_1},\ldots,Y_{st_n})$ has the same distribution as $|s|^r(Y_{t_1},\ldots,Y_{t_n})$. This means that
$$E\exp\Big\{i\sum_{k=1}^n\xi_k X_{st_k}\Big\} = \exp\Big\{-E\Big|\sum_{k=1}^n\xi_k Y_{st_k}\Big|^{\alpha}\Big\} = \exp\Big\{-E\Big|\sum_{k=1}^n\xi_k|s|^r Y_{t_k}\Big|^{\alpha}\Big\} = E\exp\Big\{i\sum_{k=1}^n\xi_k|s|^r X_{t_k}\Big\},$$
which ends the proof of 3).

For jointly symmetric $\alpha$-stable random variables, $1<\alpha<2$, we introduce (following, for example, [212], Sections 2.7–2.9) the concept of covariation, which, in some sense, plays a role similar to that of the covariance function in the case of Gaussian processes. Let us recall the notation $a^{\langle p\rangle} = |a|^p\operatorname{sgn}(a)$. The covariation $K_{\alpha}(X,Y)$ of random variables $(X,Y)$ with joint symmetric $\alpha$-stable distribution and spectral representation $\Re\colon\mathcal{L}(X,Y)\to L_{\alpha}(S_1,\mu)$ is defined by the formula
$$K_{\alpha}(X,Y) = \frac{1}{\alpha}\lim_{(\xi,\eta)\to(0,1)}\frac{\partial}{\partial\xi}\|\xi\Re(X)+\eta\Re(Y)\|_{\alpha}^{\alpha} = \int_{S_1}s_1 s_2^{\langle\alpha-1\rangle}\,\mu(ds).$$
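The equality of the derivative form and the integral form of the covariation is easy to verify numerically (an illustrative sketch, not from the monograph): for a toy spectral measure supported on finitely many points, a central difference in $\xi$ at $(0,1)$ should reproduce $\int s_1 s_2^{\langle\alpha-1\rangle}\,d\mu$.

```python
import numpy as np

alpha = 1.5
# A toy discrete "spectral measure": mass w[i] at the point (f[i], g[i]),
# where f, g play the roles of Re(X), Re(Y) in L_alpha(S_1, mu).
w = np.array([0.5, 0.3, 0.2])
f = np.array([1.0, -0.7, 0.4])
g = np.array([0.8, 0.5, -1.2])

def q(xi, eta):
    # (1/alpha) * || xi*f + eta*g ||_alpha^alpha with respect to mu
    return (w * np.abs(xi * f + eta * g) ** alpha).sum() / alpha

h = 1e-6
derivative = (q(h, 1.0) - q(-h, 1.0)) / (2 * h)     # d/dxi at (0, 1)
covariation = (w * f * np.sign(g) * np.abs(g) ** (alpha - 1)).sum()
print(derivative, covariation)   # the two numbers agree
```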

If $X$ and $Y$ are jointly symmetric $\alpha$-stable random variables, $\alpha>1$, then $K_{\alpha}(X,Y) = 0$ if and only if $Y$ is orthogonal in the James sense to $X$, i.e. $\|\lambda X + Y\|_{\alpha} \ge \|Y\|_{\alpha}$ in the space $\mathcal{L}(X,Y)$ for every $\lambda\in\mathbb{R}$.

Proposition IV.4.3 (Misiewicz [169]). Let $X_t = \int Y_t\,dZ_{\alpha}$ as defined in (IV.4.1) for a symmetric $\beta$-stable stochastic process $\mathcal{Y} = \{Y_t : t\in T\}$, $\beta\in(\alpha,2]$. Then:

1) if $\{Y_t : t\in T\}$ has independent increments and $EY_t = \mathrm{const}$ (does not depend on $t$), then the increments of $\{X_t : t\in T\}$ are orthogonal in the James sense;

2) $X_t\perp X_s$ in the James sense if and only if $Y_t\perp Y_s$ in the James sense; moreover,
$$K_{\alpha}(X_t,X_s) = E|Y_0|^{\alpha}(K_{\beta}(Y_s,Y_s))^{\alpha/\beta-1}K_{\beta}(Y_t,Y_s),$$
where $Y_0$ has the characteristic function $\exp\{-|t|^{\beta}\}$.

P r o o f. Assume that $\{Y_t : t\in T\}$ has independent increments and let $t<s$. Writing $\|(\xi,\eta)\|_{\beta}$ for the norm determined by $\xi Y_t + \eta Y_s \overset{d}{=} \|(\xi,\eta)\|_{\beta}Y_0$, we have
$$K_{\alpha}(X_t,X_s) = EY_t Y_s^{\langle\alpha-1\rangle} = \frac{1}{\alpha}\lim_{(\xi,\eta)\to(0,1)}\frac{\partial}{\partial\xi}E|\xi Y_t + \eta Y_s|^{\alpha}$$
$$= \frac{1}{\alpha}\lim_{(\xi,\eta)\to(0,1)}\frac{\partial}{\partial\xi}E\big|\|(\xi,\eta)\|_{\beta}Y_0\big|^{\alpha} = \frac{1}{\alpha}E|Y_0|^{\alpha}\lim_{(\xi,\eta)\to(0,1)}\frac{\partial}{\partial\xi}\|(\xi,\eta)\|_{\beta}^{\alpha}$$
$$= E|Y_0|^{\alpha}\lim_{(\xi,\eta)\to(0,1)}(\|(\xi,\eta)\|_{\beta}^{\beta})^{\alpha/\beta-1}\,\frac{1}{\beta}\,\frac{\partial}{\partial\xi}\|(\xi,\eta)\|_{\beta}^{\beta} = E|Y_0|^{\alpha}(K_{\beta}(Y_s,Y_s))^{\alpha/\beta-1}K_{\beta}(Y_t,Y_s).$$
It is now easy to see that $X_t\perp X_s$ in the James sense if and only if $Y_t\perp Y_s$ in the James sense.

IV.5. Substable and pseudo-isotropic stochastic processes

Definition IV.5.1. A symmetric stochastic process $\{X_t : t\in T\}$ is called substable if it is a scale mixture of stable processes, i.e. there exist a symmetric stable stochastic process $\{Y_t : t\in T\}$ and a non-negative random variable $\Theta$ such that
$$(\mathrm{IV.5.1})\qquad \{X_t : t\in T\} \overset{d}{=} \{\Theta^{1/\alpha}Y_t : t\in T\},$$
where $\overset{d}{=}$ denotes equality of all finite-dimensional distributions.

As we have seen before, even a symmetric stable process can be substable, and uniqueness of the representation (IV.5.1) depends on the geometrical properties of the Reproducing Kernel Space $\mathcal{H}(\mathcal{Y})$. For example, if $\{Y_t : t\in T\}$ is symmetric $\alpha$-stable and such that $\mathcal{H}(\mathcal{Y})$ contains $\ell_{\alpha}^n$'s uniformly, and there exists $t_0>0$ such that $P\{\Theta\le t_0\} = 1$, then the representation (IV.5.1) is unique.

Recall that a symmetric stochastic process $\{X_t : t\in T\}$ is pseudo-isotropic if for every $\xi\in\mathbb{R}^{(T)}$,
$$\sum_{t\in T}\xi_t X_t \overset{d}{=} c(\xi)X_0$$

for some symmetric random variable $X_0$, where $c\colon\mathbb{R}^{(T)}\to[0,\infty)$ is a quasi-norm on $\mathbb{R}^{(T)}$. Consider the set of all pseudo-isotropic stochastic processes $\{X_t : t\in T\}$ with fixed quasi-norm $c$. We have seen in Section II.2 and in Chapter III that this class is non-trivial (contains more than the process $\{X_t\equiv 0 : t\in T\}$) if there exist $\alpha\in(0,2]$ and a linear operator $\Re\colon\mathbb{R}^{(T)}\to L_{\alpha}(S,\Sigma,\nu)$ such that for every $\xi\in\mathbb{R}^{(T)}$,
$$(\mathrm{IV.5.2})\qquad c(\xi) = \|\Re(\xi)\|_{\alpha}.$$
If the function $c$ for a pseudo-isotropic stochastic process $\{X_t : t\in T\}$ is given by formula (IV.5.2) we will say that the process is $L_{\alpha}$-symmetric. We do not know whether or not every pseudo-isotropic stochastic process is $L_{\alpha}$-symmetric for some $\alpha\in(0,2]$.

Similarly to the case of stable stochastic processes we define the Reproducing Kernel Space $\mathcal{H}(\mathcal{X})$ for substable, $L_{\alpha}$-symmetric and pseudo-isotropic stochastic processes. If $\{X_t : t\in T\}$ is a substable stochastic process with representation $\{\Theta^{1/\alpha}Y_t : t\in T\}$ for some symmetric stable process $\{Y_t : t\in T\}$, then we identify $\mathcal{H}(\mathcal{X})$ with the space $\mathcal{H}(\mathcal{Y})$. If $\{X_t : t\in T\}$ is an $L_{\alpha}$-symmetric stochastic process with the quasi-norm defined by formula (IV.5.2), then we define the Reproducing Kernel Space $\mathcal{H}(\mathcal{X})$ as the closure in $L_{\alpha}(S,\Sigma,\nu)$ of the space spanned by $\{\Re(\xi) : \xi\in\mathbb{R}^{(T)}\}$. If $\{X_t : t\in T\}$ is a pseudo-isotropic stochastic process with the quasi-norm $c\colon\mathbb{R}^{(T)}\to[0,\infty)$ then we define the Reproducing Kernel Space $\mathcal{H}(\mathcal{X})$ as the closure of $\mathbb{R}^{(T)}$ in the quasi-norm $c$.

Let us also define
$$\alpha_{\circ} = \sup\{\beta\in[\alpha,2] : \mathcal{H}(\mathcal{X})\text{ embeds isometrically into }L_{\beta}\},$$
$$\alpha_{s} = \inf\{\beta\in[\alpha,2] : \mathcal{H}(\mathcal{X})\text{ contains }\ell_{\beta}^n\text{'s uniformly}\}.$$
We have already mentioned that all known examples of non-trivial pseudo-isotropic stochastic processes are $L_{\alpha}$-symmetric for some $\alpha\in(0,2]$, so also $\alpha_{\circ}\in(0,2]$. Some general properties of the Reproducing Kernel Spaces for non-trivial pseudo-isotropic stochastic processes were given in Section III, but the question whether or not it can happen that $\alpha_{\circ} = 0$ is still open. Some more information can be derived from the following theorem proven by V. Tarieladze (see [234], Prop. VI.2.3, or [162]):

Theorem IV.5.1. If the Reproducing Kernel Space for a non-trivial pseudo-isotropic stochastic process $\{X_t : t\in T\}$ is a Banach space, then:

1) $\mathcal{H}(\mathcal{X})$ is isomorphic to a closed subspace of some $L_0$-space;

2) $\mathcal{H}(\mathcal{X})$ has cotype 2;

3) for every $0 < \ldots$

One could think that every $L_{\alpha}$-symmetric stochastic process has to be $\beta$-substable for some $\beta\ge\alpha$, or equivalently, that the result of Bretagnolle, Dacunha-Castelle and Krivine holds not only for spaces containing $\ell_{\alpha}^n$'s uniformly but also for "maximal" subspaces of an $L_{\alpha}$-space, i.e. subspaces which do not embed isometrically into any $L_{\beta}$-space for $\alpha<\beta$. In the example below we show that this is not true, at least in the case $\alpha\le 1$. This

restriction does not seem to be essential, but nothing more can be done until the full representation of the $\ell_{\alpha}$-symmetric measures is known in the finite-dimensional case. We will use here the result of Cambanis, Keener and Simons (see [38] and also Property (P7) in Section II.4), namely their very nice integral formula:
$$Ef(t^2D_1^{-1} + s^2D_2^{-1}) = Ef((|t|+|s|)^2D_1^{-1}),$$
where $(D_1,D_2)$ is a random vector with the Dirichlet distribution with parameters $(1/2,1/2)$, and $f$ is an arbitrary function for which this expectation exists.

Example IV.5.1. Let

• $0<2\alpha<2$,

• $X_1,X_2,\ldots$ be independent identically distributed symmetric $(2\alpha)$-stable random variables with the characteristic function $\exp\{-|t|^{2\alpha}\}$,

• $D = (D_1,D_2)$ be a random vector with the Dirichlet distribution with parameters $(1/2,1/2)$ such that $D$ and $X_1,X_2,\ldots$ are independent.
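The Cambanis–Keener–Simons identity can be illustrated by a Monte Carlo sketch (not part of the monograph; it uses the fact that for two coordinates the Dirichlet$(1/2,1/2)$ vector is $(D_1, 1-D_1)$ with $D_1$ Beta$(1/2,1/2)$-distributed):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400_000

# Dirichlet(1/2, 1/2) on the two-point simplex: D1 ~ Beta(1/2, 1/2), D2 = 1 - D1.
# Clipping guards against exact 0/1 samples of the U-shaped Beta law.
d1 = np.clip(rng.beta(0.5, 0.5, n), 1e-12, 1.0 - 1e-12)
d2 = 1.0 - d1

t, s = 0.7, 0.4
f = lambda u: np.exp(-u)   # any f for which both expectations exist

lhs = f(t**2 / d1 + s**2 / d2).mean()
rhs = f((abs(t) + abs(s))**2 / d1).mean()
print(lhs, rhs)   # the two estimates agree up to Monte Carlo error
```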

We define a sequence of random variables $Y_n$ as follows:
$$Y_n = \begin{cases} X_1 D_1^{-1/2\alpha} & \text{if } n = 1,\\ X_n D_2^{-1/2\alpha} & \text{if } n\ge 2.\end{cases}$$
Calculating the characteristic function of the sequence $Y_n$ we get, for every $\xi\in\mathbb{R}^{(\infty)}$,
$$E\exp\Big\{i\sum_{k=1}^{\infty}\xi_k Y_k\Big\} = E_D\exp\Big\{-|\xi_1|^{2\alpha}D_1^{-1} - \sum_{k=2}^{\infty}|\xi_k|^{2\alpha}D_2^{-1}\Big\}.$$
Using now the integral formula mentioned above we have

$$E\exp\Big\{i\sum_{k=1}^{\infty}\xi_k Y_k\Big\} = E_{D_1}\exp\Big\{-\Big(|\xi_1|^{\alpha} + \Big(\sum_{k=2}^{\infty}|\xi_k|^{2\alpha}\Big)^{1/2}\Big)^2 D_1^{-1}\Big\}$$
$$= \int_0^1 \exp\{-(|\xi_1|^{\alpha} + \|(\xi_2,\xi_3,\ldots)\|_{2\alpha}^{\alpha})^2 x^{-1}\}\,\frac{1}{\pi}\,x^{-1/2}(1-x)^{-1/2}\,dx$$
$$= \int_1^{\infty}\exp\{-(|\xi_1|^{\alpha} + \|(\xi_2,\xi_3,\ldots)\|_{2\alpha}^{\alpha})^2 y\}\,\frac{1}{\pi}\,y^{-1}(y-1)^{-1/2}\,dy$$
$$\equiv \varphi((|\xi_1|^{\alpha} + \|(\xi_2,\xi_3,\ldots)\|_{2\alpha}^{\alpha})^{1/\alpha}).$$
This means that the function $\varphi(\|\cdot\|)$ is positive definite on the space $\mathcal{H}$ which is built from $\mathbb{R}$ and $\ell_{2\alpha}$ joined together by an $\alpha$-norm. Evidently, we have
$$\alpha_{\circ}(\mathcal{H}) = \alpha,\qquad \alpha_s(\mathcal{H}) = 2\alpha,$$
and moreover,
$$\varphi(|t|) = \int_1^{\infty}\exp\{-|t|^{2\alpha}y\}\,\frac{1}{\pi}\,y^{-1}(y-1)^{-1/2}\,dy.$$
Assume now that the theorem of Bretagnolle, Dacunha-Castelle and Krivine holds for the space $\mathcal{H}$ treated as a "maximal" subspace of an $L_{\alpha}$-space, so there exists a non-negative random variable $\Theta$ with distribution $\lambda$ such that
$$\varphi(t) = \int_0^{\infty}\exp\{-|t|^{\alpha}x\}\,\lambda(dx) = \int_0^{\infty}\int_0^{\infty}\exp\{-|t|^{2\alpha}x^2u\}\,\gamma_{1/2}^{+}(du)\,\lambda(dx),$$
where $\gamma_{1/2}^{+}$ denotes the distribution of a positive $\frac12$-stable random variable $\Theta_{1/2}$. From the uniqueness of the Laplace transform we would deduce that the random variable $\Theta^2\Theta_{1/2}$ has the density function $\frac{1}{\pi}\mathbf{1}_{[1,\infty)}(y)\,y^{-1}(y-1)^{-1/2}$, which is impossible, as every mixture of $\Theta_{1/2}$ has support equal to the whole half-line $[0,\infty)$.

IV.6. $L_{\alpha}$-dependent stochastic integrals. In this section we construct the $L_{\alpha}$-dependent stochastic integral of a non-random function $f\in L_{\alpha}(S,\Sigma,m)$, following roughly the construction of the stable integral given e.g. in [212]. A similar construction in a much more general setting was given by Kallenberg in 1993 (see [107]).

Recall that a stochastic process $\mathcal{X} = \{X_t : t\in T\}$ is $L_{\alpha}$-symmetric if it is pseudo-isotropic and there exist a measure space $(S,\Sigma,m)$ and a one-to-one linear operator $\Re\colon\mathbb{R}^{(T)}\to L_{\alpha}(S,\Sigma,m)$ such that the quasi-norm $c\colon\mathbb{R}^{(T)}\to[0,\infty)$ is given by the formula
$$c(\xi) = \|\Re(\xi)\|_{\alpha},\qquad \xi\in\mathbb{R}^{(T)}.$$
We will assume that the family $\{X_t : t\in T\}$ of random variables contains at least countably many linearly independent random variables $X_t$. This assumption is equivalent to the statement that the space $L_{\alpha}(S,\Sigma,m)$ is infinite-dimensional. For convenience we fix a random variable $Y$ such that for every $\xi\in\mathbb{R}^{(T)}$ the random variable $\sum_{t\in T}\xi_t X_t$ has the same distribution as $c(\xi)\cdot Y$.

By $J(f)$ we denote the $L_{\alpha}$-symmetric stochastic integral of the function $f\in L_{\alpha}(S,\Sigma,m)$. In order to construct $J(f)$, we need to specify the finite-dimensional distributions of $\{J(f) : f\in L_{\alpha}(S,\Sigma,m)\}$, show that they are consistent, and then apply the Kolmogorov existence theorem.

Given $f_1,\ldots,f_n\in L_{\alpha}(S,\Sigma,m)$, we define a probability measure $P_{f_1,\ldots,f_n}$ by its characteristic function. First we need to find $\xi_k\in\mathbb{R}^{(T)}$ such that $f_k = \Re(\xi_k)$, $k = 1,\ldots,n$. Such a choice is possible, and it is unique, since we assumed that $\Re^{-1}$ exists. Now, for $\eta\in\mathbb{R}^n$, we put
$$\int\ldots\int_{\mathbb{R}^n}\exp\Big\{i\sum_{k=1}^n\eta_k x_k\Big\}\,P_{f_1,\ldots,f_n}(dx_1,\ldots,dx_n) = E\exp\Big\{i\sum_{k=1}^n\eta_k\langle\xi_k,\mathcal{X}\rangle\Big\} = E\exp\Big\{i\Big\langle\sum_{k=1}^n\eta_k\xi_k,\mathcal{X}\Big\rangle\Big\}$$
$$= E\exp\Big\{i\Big\|\Re\Big(\sum_{k=1}^n\eta_k\xi_k\Big)\Big\|_{\alpha}Y\Big\} = E\exp\Big\{i\Big\|\sum_{k=1}^n\eta_k f_k\Big\|_{\alpha}Y\Big\}.$$
It is easy to see that in this construction $P_{f_1,\ldots,f_n}$ is the distribution of the random vector $(\langle\xi_1,\mathcal{X}\rangle,\ldots,\langle\xi_n,\mathcal{X}\rangle)$, thus the function defined above is a characteristic function, and the consistency of the family $\{P_{f_1,\ldots,f_n} : n\in\mathbb{N},\ f_k\in L_{\alpha}(S,\Sigma,m),\ k=1,\ldots,n\}$ of probability measures follows immediately from the consistency of $\{P_{X_{t_1},\ldots,X_{t_n}} : n\in\mathbb{N},\ t_k\in T,\ k=1,\ldots,n\}$.

Now, by the Kolmogorov existence theorem, there exists a symmetric stochastic process $\{J(f) : f\in L_{\alpha}(S,\Sigma,m)\}$ whose finite-dimensional distributions are given by $P_{f_1,\ldots,f_n}$. Then, for every $f\in L_{\alpha}(S,\Sigma,m)$, the random variable $J(f)$ has (up to a multiplicative constant) the same distribution as $Y$, i.e.
$$J(f) \overset{d}{=} \|f\|_{\alpha}\cdot Y \quad\text{where}\quad \|f\|_{\alpha} = \Big(\int_S|f(x)|^{\alpha}\,m(dx)\Big)^{1/\alpha}.$$
On the other hand, we see that $\exp\{-\|\Re(\xi)\|_{\alpha}^{\alpha}\}$ is a positive definite function on $\mathbb{R}^{(T)}$, since $\exp\{-\|f\|_{\alpha}^{\alpha}\}$ is positive definite on $L_{\alpha}(S,\Sigma,m)$. Similarly to Section IV.4 (see also [212], Chapter 3.2) we construct a stable stochastic integral $I(f)$, $f\in L_{\alpha}(S,\Sigma,m)$, such that the corresponding distribution $P'_{f_1,\ldots,f_n}$ is symmetric $\alpha$-stable with characteristic function $\exp\{-\|\sum_{k=1}^n\eta_k f_k\|_{\alpha}^{\alpha}\}$. Using again the result of Bretagnolle, Dacunha-Castelle and Krivine [29], we obtain the existence of a probability measure $\lambda$ on $[0,\infty)$ such that
$$E\exp\Big\{i\Big\|\sum_{k=1}^n\eta_k f_k\Big\|_{\alpha}Y\Big\} = \int_0^{\infty}\exp\Big\{-\Big\|\sum_{k=1}^n\eta_k f_k\Big\|_{\alpha}^{\alpha}s\Big\}\,\lambda(ds).$$
Choosing now the non-negative random variable $\Theta$ with distribution $\lambda$ to be independent of $I(f)$, we finally obtain
$$J(f) \overset{d}{=} \Theta^{1/\alpha}\cdot I(f),$$
in the sense of equality of finite-dimensional distributions. In [107] Kallenberg showed that if $S$ is a Polish space and $m$ is a $\sigma$-finite measure then we can choose an independently scattered $\sigma$-additive symmetric $\alpha$-stable random measure $Z_{\alpha}$, appearing in the definition of $I(f)$ (see Section IV.4), and the random variable $\Theta$, such that $J(f) = \Theta^{1/\alpha}\cdot I(f)$ a.s.

IV.7. Random limit theorems. Notice that any sequence $\{X_n : n\in\mathbb{N}\}$ of symmetric, jointly $\alpha$-stable random variables fulfills a limit theorem in the sense that there exists a sequence of positive numbers $c(n)$ such that
$$\frac{X_1+\ldots+X_n}{c(n)} \overset{d}{=} X_{(\alpha)},$$
where $X_{(\alpha)}$ is the standard symmetric $\alpha$-stable random variable with distribution $\mathcal{S}(\alpha,1)$. To see this, it is enough to take the coefficient $c(n)$ such that
$$E\exp\Big\{it\sum_{k=1}^n X_k\Big\} = \exp\{-c(n)^{\alpha}|t|^{\alpha}\}.$$
Equivalently, $c(n) = \|\Re(1,\ldots,1)\|_{\alpha}$, where $\Re\colon\mathbb{R}^{(\mathbb{N})}\to L_{\alpha}$ is the spectral representation of the sequence $\{X_n : n\in\mathbb{N}\}$.

A similar property holds for pseudo-isotropic sequences of random variables. Namely, if $\{X_n : n\in\mathbb{N}\}$ is a pseudo-isotropic sequence with quasi-norm $c\colon\mathbb{R}^{(\mathbb{N})}\to[0,\infty)$, then
$$\frac{X_1+\ldots+X_n}{a(n)} \overset{d}{=} X_1,$$
where $a(n) = c(\sum_{k=1}^n e_k)/c(e_1)$, and $\{e_k : k\in\mathbb{N}\}$ is the standard basis in $\mathbb{R}^{(\mathbb{N})}$.
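For independent identically distributed standard summands one has $c(n) = n^{1/\alpha}$, and the limit theorem can be observed by simulation (an illustrative sketch, not part of the monograph; the sampler is the standard Chambers–Mallows–Stuck method for symmetric stable laws):

```python
import numpy as np

rng = np.random.default_rng(2)
alpha = 1.5
n = 200_000

def sym_stable(size):
    # Chambers-Mallows-Stuck sampler for the symmetric alpha-stable law
    # with characteristic function exp(-|t|^alpha).
    u = rng.uniform(-np.pi / 2, np.pi / 2, size)
    w = rng.exponential(1.0, size)
    return (np.sin(alpha * u) / np.cos(u) ** (1 / alpha)
            * (np.cos((1 - alpha) * u) / w) ** ((1 - alpha) / alpha))

x = sym_stable((n, 5))
s = x.sum(axis=1) / 5 ** (1 / alpha)   # (X_1 + ... + X_5) / c(5) with c(n) = n^{1/alpha}

# Both empirical characteristic functions should be close to exp(-|t|^alpha).
for t in (0.5, 1.0):
    print(t, np.cos(t * x[:, 0]).mean(), np.cos(t * s).mean(), np.exp(-abs(t) ** alpha))
```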

In some cases we can obtain a one-dimensional projection of a pseudo-isotropic distribution (in fact, a substable distribution) as the limit distribution in random limit theorems for a sequence of symmetric, jointly stable random variables. More information on different types of random limit theorems can be found, for example, in the papers of Rychlik and Szynal (see e.g. [207], [205], [206]).

Theorem IV.7.1. (NP) Let $\{X_n : n\in\mathbb{N}\}$ be a sequence of independent identically distributed random variables with distribution $\mathcal{S}(\alpha,1)$, and let $\Theta(n)$, $n\in\mathbb{N}$, be independent of $\{X_n : n\in\mathbb{N}\}$, taking values in $\mathbb{N}$. If there exists a positive random variable $\Theta$ such that $\Theta(n)/n$ converges in distribution to $\Theta$, then

$$\lim_{n\to\infty}P\Big\{\frac{X_1+\ldots+X_{\Theta(n)}}{n^{1/\alpha}} < x\Big\} = P\{\Theta^{1/\alpha}X_{(\alpha)} < x\}.$$

V. Infinite divisibility of substable stochastic processes

The problem of infinite divisibility of substable stochastic processes discussed in this chapter appears here as the simplest example of the dependence between geometrical properties of the Reproducing Kernel Space and properties of the corresponding pseudo-isotropic process. This problem, however, can also be considered as a special case of the much more general problem of infinite divisibility of mixtures of infinitely divisible distributions, initiated by Zolotarev (see [248]) in 1958. Zolotarev defined $\mu^{\lambda}$ as a mixture of a probability measure $\mu$ with respect to a probability measure $\lambda$ supported by $[0,\infty)$ by the formula

$$\mu^{\lambda} = \int_0^{\infty}\mu^{*\theta}\,\lambda(d\theta),$$
where $\mu$ should be taken infinitely divisible if $\lambda$ is not concentrated on the natural numbers. It was proven that if $\lambda$ is infinitely divisible then $\mu^{\lambda}$ is infinitely divisible (see Zolotarev [248], and in full generality, Tortrat [232]). This result corresponds to our Proposition V.4.1. Some interesting examples of mixtures $\mu^{\lambda}$ can be found in [73].

Gnedenko posed a very interesting question: is it possible that the mixture $\mu^{\lambda}$ is infinitely divisible for a mixing measure $\lambda$ which does not have this property? One of the first and maybe most spectacular answers is due to Davidson (see [49]). He showed (using the Pólya condition) that every mixture $\mu^{\lambda}$ is infinitely divisible if for every $\theta>0$, $\mu^{*\theta}$ has a characteristic function of the form $\exp\{-v_{\theta}(u)\}$ with $v_{\theta}(u) = v_{\theta}(-u)$, non-negative and concave on the positive half-line. This shows, for example, that every mixture of a symmetric $\alpha$-stable random variable, $\alpha<1$, is infinitely divisible. More interesting examples of infinitely divisible mixtures $\mu^{\lambda}$ with a mixing measure $\lambda$ which is not infinitely divisible can be found in [231], [232], [233].

In Section 1 we give the definition and basic properties of Lévy measures on $\mathbb{R}^n$ and the associated generalized Poisson measures $\mathrm{sExp}(\nu)$ and $\mathrm{Exp}(\nu)$. We also recall the Lévy–Khinchin representation for infinitely divisible probability distributions on $\mathbb{R}^n$.

In Section 2 we define and study a sequence of signed measures $L_N(\mu)$ (called the approximative logarithm) defined through the convolution powers of a probability measure $\mu$ on $\mathbb{R}^n$, in order to find a method of calculating the logarithm of this measure $\mu$, or eventually, the Lévy measure of an infinitely divisible measure $\mu$. We give a collection of sufficient conditions on this sequence for deciding whether or not $\mu$ is a generalized Poisson measure. We underline that the methods and results presented in this section remain true for the mixtures $\mu^{\lambda}$ studied by Zolotarev and Tortrat (see [248], [231], [232], [233]).

In Section 3 we apply this sequence to study infinite divisibility of substable random vectors on $\mathbb{R}^n$. It is well known that infinite divisibility of a substable random vector does not imply infinite divisibility of the mixing variable. We give a sufficient condition on the mixing random variable under which a substable random vector is infinitely divisible.

In Section 4 we study infinite divisibility of substable stochastic processes. This turns out to depend strictly on the geometry of the Reproducing Kernel space $\mathcal{H}(\mathcal{X})$, a subspace of an $L_{\alpha}$-space generated by the corresponding stable process (as defined in Section IV.5). Under some additional geometrical assumptions, infinite substable sequences of random variables are infinitely divisible if and only if their mixing measures are infinitely divisible (similarly to infinite sub-Gaussian sequences). This equivalence does not hold in general.

V.1. Infinitely divisible distributions. Lévy measures. We introduce here the following notation:
$$e(\nu) = \sum_{k=0}^{\infty}\frac{1}{k!}\nu^{*k}$$
for a signed measure $\nu$ with finite variation. The measure $e(\nu)$ could be called the exponential function of the measure $\nu$; this name, however, is historically reserved for the following:
$$\mathrm{Exp}(\nu) = \exp\{-\nu(\mathbb{R}^n)\}\sum_{k=0}^{\infty}\frac{1}{k!}\nu^{*k} = \sum_{k=0}^{\infty}\frac{1}{k!}(\nu-\nu(\mathbb{R}^n)\delta_0)^{*k},$$
which differs from $e(\nu)$ by the normalizing constant $\exp\{-\nu(\mathbb{R}^n)\}$; this constant guarantees that if $\nu$ is a positive measure then $\mathrm{Exp}(\nu)$ is a probability measure.

The concept of the probability measure $\mathrm{Exp}(\nu)$ for a finite positive measure $\nu$ on $\mathbb{R}^n$ was generalized in a natural way to weak limits of such measures. Using the notation $\nu_k\uparrow\nu$ setwise for positive measures $\{\nu_k\}$, $\nu$ on $\mathbb{R}^n$ such that $\nu_k(A)\uparrow\nu(A)$ for every Borel set $A\subset\mathbb{R}^n$, we introduce (see e.g. Linde [139]) the following definition:

Definition V.1.1. A positive $\sigma$-finite Borel measure $\nu$ on $\mathbb{R}^n$ is a Lévy measure if $\nu(\{0\}) = 0$ and there exist a sequence of positive finite measures $\nu_k$ on $\mathbb{R}^n$ and a sequence $\{x_k\}\subset\mathbb{R}^n$ such that $\nu_k\uparrow\nu$ setwise and $\delta_{x_k}*\mathrm{Exp}(\nu_k)$ converges weakly to a probability measure $\mu$ on $\mathbb{R}^n$. The measure $\mu$ is called a generalized Poisson measure with Lévy measure $\nu$.

It can be shown that a positive $\sigma$-finite measure $\nu$ on $\mathbb{R}^n$ with $\nu(\{0\}) = 0$ is a Lévy measure in the sense of this definition if and only if
$$(\mathrm{V.1.1})\qquad \int\ldots\int\min(1,\|x\|^2)\,\nu(dx) < \infty$$
(for the proof see e.g. Linde [139]). In fact it is shown that the above characterization of Lévy measures remains valid if we consider measures on a Hilbert space (see Parthasarathy [181], VI.4.7 and VI.4.8). Notice that if $\int\ldots\int\min(1,\|x\|)\,\nu(dx) < \infty$ for a $\sigma$-finite positive measure $\nu$ on $\mathbb{R}^n$ with $\nu(\{0\}) = 0$, then condition (V.1.1) is satisfied, so $\nu$ is a Lévy measure. Notice also that if $\mu_1$, $\mu_2$ are two generalized Poisson measures with the same Lévy measure $\nu$ then there exists $x_0\in\mathbb{R}^n$ such that $\mu_1 = \delta_{x_0}*\mu_2$.

Definition V.1.2. Let $\nu$ be a Lévy measure on $\mathbb{R}^n$. Then $\mathrm{sExp}(\nu)$ is the generalized

Poisson measure on $\mathbb{R}^n$ defined by its characteristic function as follows: for $\xi\in\mathbb{R}^n$,
$$\mathrm{sExp}(\nu)^{\wedge}(\xi) = \exp\Big\{\int\ldots\int\big(e^{i\langle\xi,x\rangle} - 1 - i\langle\xi,x\rangle\mathbf{1}_{(0,1)}(\|x\|)\big)\,\nu(dx)\Big\}.$$
For a given Lévy measure $\nu$ on $\mathbb{R}^n$ and $\varepsilon\in(0,1)$, if $\nu\{\|x\| = \varepsilon\} = 0$ then the generalized Poisson measure $\mu_{\varepsilon}$ defined by its characteristic function
$$\exp\Big\{\int\ldots\int\big(e^{i\langle\xi,x\rangle} - 1 - i\langle\xi,x\rangle\mathbf{1}_{(0,\varepsilon)}(\|x\|)\big)\,\nu(dx)\Big\},\qquad \xi\in\mathbb{R}^n,$$
differs from $\mathrm{sExp}(\nu)$ only by a translation with respect to the vector $y_{\varepsilon} = (y_1^{\varepsilon},\ldots,y_n^{\varepsilon})\in\mathbb{R}^n$, where, for every $\xi\in\mathbb{R}^n$,
$$\langle\xi,y_{\varepsilon}\rangle = \int\ldots\int_{\varepsilon\le\|x\|<1}\langle\xi,x\rangle\,\nu(dx).$$

Definition V.1.3. If $\nu$ is a symmetric Lévy measure on $\mathbb{R}^n$, or if $\nu$ is a Lévy measure on $\mathbb{R}^n$ such that $\int\ldots\int\min(1,\|x\|)\,\nu(dx) < \infty$, then $\mathrm{Exp}(\nu)$ is the generalized Poisson measure on $\mathbb{R}^n$ defined by its characteristic function
$$\mathrm{Exp}(\nu)^{\wedge}(\xi) = \lim_{\varepsilon\searrow 0}\exp\Big\{\int\ldots\int_{\|x\|>\varepsilon}(e^{i\langle\xi,x\rangle}-1)\,\nu(dx)\Big\},\qquad \xi\in\mathbb{R}^n.$$
If a Lévy measure $\nu$ is concentrated on the positive half-line $(0,\infty)$, we can calculate the Laplace transform of the generalized Poisson measure $\mathrm{sExp}(\nu)$, which is
$$\mathrm{sExp}(\nu)^{\sim}(t) = \exp\Big\{\int_0^{\infty}\big(e^{-ts} - 1 + ts\mathbf{1}_{(0,1)}(s)\big)\,\nu(ds)\Big\}.$$
Moreover, if a Lévy measure $\nu$ is concentrated on $(0,\infty)$ and such that $\int_0^{\infty}\min(1,s)\,\nu(ds) < \infty$, then the Laplace transform of $\mathrm{Exp}(\nu)$ is
$$\mathrm{Exp}(\nu)^{\sim}(t) = \exp\Big\{\int_0^{\infty}(e^{-ts}-1)\,\nu(ds)\Big\}.$$

Definition V.1.4. A probability measure $\mu$ on $\mathbb{R}^n$ is called infinitely divisible if for every $n\in\mathbb{N}$ there exists a probability measure $\mu_n$ such that $\mu = \mu_n^{*n}$, where $\lambda^{*n} \equiv \lambda*\lambda*\ldots*\lambda$ ($n$ times).

There exists a full characterization of infinitely divisible distributions, expressed in the following well-known theorem (see e.g. [139] for the proof):

Theorem V.1.1 (Lévy–Khinchin). If $\mu$ is an infinitely divisible measure on $\mathbb{R}^n$, then
$$\mu = \gamma * \mathrm{sExp}(\nu) * \delta_{x_0},$$
where $\gamma$ is a symmetric Gaussian measure on $\mathbb{R}^n$, $\nu$ is a Lévy measure on $\mathbb{R}^n$, and $x_0\in\mathbb{R}^n$. This decomposition is unique. If a probability measure $\mu$ on $\mathbb{R}^n$ is symmetric and infinitely divisible, then
$$\mu = \gamma * \mathrm{Exp}(\nu),$$
for $\gamma$ a symmetric Gaussian measure on $\mathbb{R}^n$, and $\nu$ a symmetric Lévy measure on $\mathbb{R}^n$. This decomposition is also unique.

Now, for a probability measure $\mu$ on $\mathbb{R}^n$, we want to define a logarithm of this measure. It is evident that this is not always possible.

Definition V.1.5. A signed measure $\nu$ of finite variation is the logarithm of the probability measure $\mu$ (notation $\nu = \mathrm{Log}(\mu)$) if and only if $\mu = e(\nu)$.
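For a finite Lévy measure on $(0,\infty)$, $\mathrm{Exp}(\nu)$ is simply a compound Poisson law, and the Laplace-transform formula above can be checked by simulation (an illustrative sketch, not part of the monograph; the two-atom measure $\nu$ below is a made-up example):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 400_000

# A finite Levy measure on (0, infinity): nu = 2*delta_1 + 1*delta_3.
# Exp(nu) is then compound Poisson: S = 1*P1 + 3*P2 with independent
# P1 ~ Poisson(2) and P2 ~ Poisson(1).
s = 1.0 * rng.poisson(2.0, n) + 3.0 * rng.poisson(1.0, n)

t = 0.5
empirical = np.exp(-t * s).mean()
exact = np.exp(2 * (np.exp(-t * 1.0) - 1) + 1 * (np.exp(-t * 3.0) - 1))
print(empirical, exact)   # exp{ integral (e^{-ts} - 1) nu(ds) }
```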

If $\mathrm{Log}(\mu)$ exists then it is uniquely determined. Indeed, if $\mu = e(\nu_1) = e(\nu_2)$, then the Fourier transform of $\mu$ can be expressed in two different ways:
$$\widehat{\mu}(\xi) = \exp\Big\{\int\ldots\int_{\mathbb{R}^n}e^{i\langle\xi,x\rangle}\,\nu_1(dx)\Big\} = \exp\Big\{\int\ldots\int_{\mathbb{R}^n}e^{i\langle\xi,x\rangle}\,\nu_2(dx)\Big\},$$
and the uniqueness of the Fourier transform for signed measures of finite variation implies that $\nu_1 = \nu_2$.

Notice that if $\mu$ is a probability measure on $\mathbb{R}^n$ and $\nu = \mathrm{Log}(\mu)$, then $1 = \mu(\mathbb{R}^n) = \exp(\nu(\mathbb{R}^n))$, so $\nu(\mathbb{R}^n) = 0$. We also know that the measure $e(\nu)$ is an infinitely divisible probability measure on $\mathbb{R}^n$ if and only if $(\nu - \nu(\{0\})\delta_0)$ is a positive measure on $\mathbb{R}^n\setminus\{0\}$.

V.2. Approximative logarithm. By $E$ we denote here a closed topological semigroup with the neutral element $0\in E$, equipped with the Borel $\sigma$-field $\mathcal{B}$. Further on we will specify $E$ to be either $\mathbb{R}^n$ or $[0,\infty)$, but some results in this section can be formulated in this slightly more general language. In order to find a method for calculating the logarithm of a probability measure $\mu$ on $E$ we will study the following sequence of signed measures, which we could call the approximative logarithm:
$$(\mathrm{V.2.1})\qquad L_N(\mu) = \sum_{k=1}^N\frac{(-1)^{k+1}}{k}(\mu-\delta_0)^{*k}.$$
It is easy to see that for every probability measure $\mu$ concentrated on $E$ we have $L_N(\mu)(E) = 0$, since $L_N(\mu)(E) = \sum_{k=1}^N((-1)^{k+1}/k)(1-1)^k = 0$.

Moreover, by expanding the term $(\mu-\delta_0)^{*k}$ in (V.2.1) and reversing the order of summation, we obtain
$$L_N(\mu) = \sum_{k=1}^N\frac{(-1)^{k+1}}{k}\sum_{j=0}^k\binom{k}{j}(-1)^{k-j}\mu^{*j} = -\delta_0\sum_{k=1}^N\frac{1}{k} + \sum_{j=1}^N(-1)^{j+1}\mu^{*j}\sum_{k=j}^N\frac{1}{k}\binom{k}{j}.$$
Using now the formula (see [188], 4.2.2.56)
$$(\mathrm{V.2.2})\qquad \sum_{k=j}^N\frac{1}{k}\binom{k}{j} = \frac{1}{j}\binom{N}{j},$$
we can get the following, somewhat more convenient, expression for $L_N(\mu)$:
$$(\mathrm{V.2.3})\qquad L_N(\mu) = -\delta_0\sum_{k=1}^N\frac{1}{k} + \sum_{k=1}^N\frac{(-1)^{k+1}}{k}\binom{N}{k}\mu^{*k}.$$
This does not mean yet that $L_N(\mu)$ has an atom at zero of weight $-\sum_{k=1}^N 1/k$, as the measure $\mu$ itself can have an atom at zero. If $\mu$ is concentrated on $[0,\infty)$ or on $(-\infty,0]$ then
$$L_N(\mu)\{0\} = \sum_{k=1}^N\frac{(-1)^{k+1}}{k}(\mu\{0\}-1)^k,$$
and the sequence $L_N(\mu)(\{0\})$ converges to $\ln(\mu\{0\})$.

Lemma V.2.1 (Misiewicz [167]). For every probability measure $\lambda$ on $[0,\infty)$ and every symmetric probability measure $\mu$ on $\mathbb{R}^n$ with strictly positive Fourier transform we have

$$\int_0^{\infty}e^{-ts}\,L_N(\lambda)(ds) \to \log\Big\{\int_0^{\infty}e^{-ts}\,\lambda(ds)\Big\},$$
$$\int\ldots\int_{\mathbb{R}^n}e^{i\langle x,y\rangle}\,L_N(\mu)(dy) \to \log\Big\{\int\ldots\int_{\mathbb{R}^n}e^{i\langle x,y\rangle}\,\mu(dy)\Big\},$$

as $N\to\infty$, for all $t\in[0,\infty)$ and $x\in\mathbb{R}^n$. The convergence is uniform if the corresponding transforms of the measures $\lambda$ or $\mu$ are separated from zero.

P r o o f. Both of these formulas follow from the same argument upon replacing the Laplace transform with the Fourier transform; therefore we will only prove the first one. We have
$$\int_0^{\infty}e^{-ts}\,L_N(\lambda)(ds) = \sum_{k=1}^N\frac{(-1)^{k+1}}{k}\int_0^{\infty}e^{-ts}\,(\lambda-\delta_0)^{*k}(ds) = \sum_{k=1}^N\frac{(-1)^{k+1}}{k}\Big[\int_0^{\infty}e^{-ts}\,(\lambda-\delta_0)(ds)\Big]^k$$
$$= \sum_{k=1}^N\frac{(-1)^{k+1}}{k}\Big[\int_0^{\infty}e^{-ts}\,\lambda(ds) - 1\Big]^k \to \log\Big\{\int_0^{\infty}e^{-ts}\,\lambda(ds)\Big\}$$
as $N\to\infty$, where the convergence follows from the fact that the Laplace transform of any probability measure $\lambda$ on $[0,\infty)$ is strictly positive everywhere on $[0,\infty)$. The case of uniform convergence follows trivially from the above calculations.
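The final convergence is just the logarithmic series $\sum((-1)^{k+1}/k)x^k = \log(1+x)$ with $x$ the Laplace transform minus one; a quick numerical sketch (illustrative, not from the monograph; the exponential mixing law is a made-up example with Laplace transform $1/(1+t)$):

```python
import numpy as np

# lambda = Exp(1) distribution, Laplace transform L(t) = 1/(1+t).
t = 1.0
L = 1.0 / (1.0 + t)

# Partial sums of sum_{k=1}^N ((-1)^{k+1}/k) (L(t)-1)^k approach log L(t),
# since |L(t) - 1| < 1 whenever L(t) > 0.
partial = sum((-1) ** (k + 1) / k * (L - 1.0) ** k for k in range(1, 201))
print(partial, np.log(L))
```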

Notice that even though, under the assumptions of Lemma V.2.1, the sequence of Laplace (Fourier) transforms of the measures $L_N(\lambda)$ converges to a well-defined logarithm of a Laplace (Fourier) transform, the limit function does not have to be the Laplace (Fourier) transform of any measure. But if there exists a signed measure of finite variation whose Laplace (Fourier) transform equals the logarithm of the Laplace (Fourier) transform of some probability measure, then we have the following two lemmas as a simple consequence of Lemma V.2.1:

Lemma V.2.2 (Misiewicz [167]). Let $\lambda$ be a probability measure on $[0,\infty)$ and let $m_1$, $m_2$ be two Lévy measures on $(0,\infty)$. Then $\lambda = \mathrm{sExp}(m_1-m_2)*\delta_{x_0}$ for some $x_0\in\mathbb{R}$ if and only if there exist a sequence $\varepsilon_k\searrow 0$ with $m_1(\{\varepsilon_k\}) = m_2(\{\varepsilon_k\}) = 0$, and a sequence $x_k\in\mathbb{R}$, such that
$$\lim_{N\to\infty}\int_0^{\infty}e^{-ts}\,L_N(\lambda)(ds) = \lim_{k\to\infty}\Big[-tx_k + \int_{\varepsilon_k}^{\infty}(e^{-ts}-1)\,(m_1-m_2)(ds)\Big].$$

P r o o f. From Lemma V.2.1 we have
$$\lim_{N\to\infty}\int_0^{\infty}e^{-ts}\,L_N(\lambda)(ds) = \log\Big\{\int_0^{\infty}e^{-ts}\,\lambda(ds)\Big\}.$$
Notice now that
$$-tx_k + \int_{\varepsilon_k}^{\infty}(e^{-ts}-1)\,(m_1-m_2)(ds) = -t(x_k+y_k) + \int_{\varepsilon_k}^{\infty}\big(e^{-ts}-1+ts\mathbf{1}_{(0,1)}(s)\big)\,(m_1-m_2)(ds),$$
where $y_k = \int_{\varepsilon_k}^1 s\,(m_1-m_2)(ds)$. Since $m_1$ and $m_2$ are Lévy measures on $(0,\infty)$, we have
$$\lim_{k\to\infty}\int_{\varepsilon_k}^{\infty}\big(e^{-ts}-1+ts\mathbf{1}_{(0,1)}(s)\big)\,(m_1-m_2)(ds) = \int_0^{\infty}\big(e^{-ts}-1+ts\mathbf{1}_{(0,1)}(s)\big)\,(m_1-m_2)(ds).$$
It is evident now that if $\lambda = \delta_{x_0}*\mathrm{sExp}(m_1-m_2)$ then $\lim_{k\to\infty}(x_k+y_k) = x_0$. On the other hand, if we assume equality of the limits, then also $(x_k+y_k)$ has to converge, so there exists $x_0\in\mathbb{R}$ such that
$$\log\Big\{\int_0^{\infty}e^{-ts}\,\lambda(ds)\Big\} = -tx_0 + \int_0^{\infty}\big(e^{-ts}-1+ts\mathbf{1}_{(0,1)}(s)\big)\,(m_1-m_2)(ds),$$
which was to be shown.

Lemma V.2.3 (Misiewicz [167]). Let $\mu$ be a symmetric probability measure on $\mathbb{R}^n$ and let $m_1$, $m_2$ be two symmetric Lévy measures on $\mathbb{R}^n$. Then $\mu = \mathrm{Exp}(m_1-m_2)$ if and only if
$$\lim_{N\to\infty}\int\ldots\int_{\mathbb{R}^n}e^{i\langle\xi,y\rangle}\,L_N(\mu)(dy) = \lim_{\varepsilon\searrow 0}\int\ldots\int_{\|y\|>\varepsilon}(e^{i\langle\xi,y\rangle}-1)\,(m_1-m_2)(dy).$$

Example V.2.1. Let $1\ge\alpha>0$ and $\mu = \alpha\delta_0 + (1-\alpha)\delta_1$, regarded as a probability measure on $E = [0,\infty)$. Then we have
$$L_N(\mu) = \sum_{k=1}^N\frac{(-1)^{k+1}}{k}(1-\alpha)^k(\delta_1-\delta_0)^{*k}$$
$$= -\delta_0\sum_{k=1}^N\frac{1}{k}(1-\alpha)^k + \sum_{k=1}^N\frac{(-1)^{k+1}}{k}(1-\alpha)^k\sum_{j=1}^k\binom{k}{j}(-1)^{k-j}\delta_1^{*j}$$
$$= -\delta_0\sum_{k=1}^N\frac{1}{k}(1-\alpha)^k + \sum_{j=1}^N(-1)^{j+1}\delta_j\sum_{k=j}^N\frac{1}{k}\binom{k}{j}(1-\alpha)^k.$$
In order to find the limit coefficients of the $\delta_j$, we regard the distribution of $X_j$, the waiting time for the $j$th success in a Bernoulli sequence with success probability $\alpha$:
$$P\{X_j = k\} = \binom{k-1}{j-1}\alpha^j(1-\alpha)^{k-j},\qquad k = j, j+1,\ldots$$
Consequently,
$$1 = \sum_{k=j}^{\infty}\binom{k-1}{j-1}\alpha^j(1-\alpha)^{k-j} = \Big(\frac{\alpha}{1-\alpha}\Big)^j\sum_{k=j}^{\infty}j\,\frac{1}{k}\binom{k}{j}(1-\alpha)^k,$$
hence
$$(\mathrm{V.2.4})\qquad \sum_{k=j}^{\infty}\frac{1}{k}\binom{k}{j}(1-\alpha)^k = \frac{1}{j}\Big(\frac{1-\alpha}{\alpha}\Big)^j.$$
It is easy to see that the sequence of measures $L_N(\mu)$ converges uniformly on compact sets to the following well-defined signed measure:
$$\nu \equiv \delta_0\ln(\alpha) + \sum_{k=1}^{\infty}\frac{(-1)^{k+1}}{k}\Big(\frac{1-\alpha}{\alpha}\Big)^k\delta_k.$$
For $\alpha>1/2$ the convergence holds also in the variation norm. It already follows from Lemma V.2.1 that the Laplace transform of $L_N(\mu)$ converges pointwise to $\ln(\alpha+(1-\alpha)e^{-t})$, and it is not difficult to see that the latter function is the Laplace transform of the measure $\nu$.
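This convergence is easy to observe numerically; the sketch below (illustrative, not part of the monograph) computes $L_N(\mu)$ for $\alpha = 0.7$ directly from definition (V.2.1) by convolving atom vectors, and compares the atoms with those of $\nu$:

```python
import numpy as np

alpha, N = 0.7, 60
# mu - delta_0 as a signed measure with atoms at the points 0 and 1:
d = np.array([alpha - 1.0, 1.0 - alpha])

# L_N(mu) = sum_{k=1}^N ((-1)^{k+1}/k) (mu - delta_0)^{*k}; convolution powers
# of d give the atoms of (mu - delta_0)^{*k} at 0, 1, ..., k.
L = np.zeros(N + 1)
power = np.array([1.0])          # (mu - delta_0)^{*0} = delta_0
for k in range(1, N + 1):
    power = np.convolve(power, d)
    L[:k + 1] += (-1) ** (k + 1) / k * power

r = (1 - alpha) / alpha
print(L[0], np.log(alpha))       # atom at 0 approaches ln(alpha)
print(L[1], r)                   # atom at 1 approaches r
print(L[2], -r**2 / 2)           # atom at 2 approaches -r^2/2
```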

Lemma V.2.4 (Misiewicz [167]). Let $\mu$ be a probability measure on $E$ such that $\mu(\{0\}) = \alpha > 1/2$. Then there exists a signed measure $\nu$ on $E$ of finite variation such that $L_N(\mu)\to\nu$ as $N\to\infty$, where the convergence holds in the variation norm, and $\mu = e(\nu)$.

P r o o f. We can write $\mu = \alpha\delta_0 + (1-\alpha)\mu'$, where $\mu'$ is a probability measure such that $\mu'(\{0\}) = 0$, and then
$$\mu - \delta_0 = (1-\alpha)(\mu'-\delta_0)$$
is a signed measure with variation $2(1-\alpha) < 1$. Now it is easy to see that $L_N(\mu)$ converges in the variation norm to the measure
$$\nu \equiv \sum_{k=1}^{\infty}\frac{(-1)^{k+1}}{k}(1-\alpha)^k(\mu'-\delta_0)^{*k},$$
with variation bounded by $-\ln(2\alpha-1)$. By Lemma V.2.1 and a bounded convergence argument we see that $\mu$ and $e(\nu)$ have the same Fourier transform, which completes the proof.

Example V.2.2. Let $\mu$ be the distribution of an $E$-valued random vector $X$ and let $\Theta$ be independent of $X$ and such that $p = P\{\Theta = 0\} = 1 - P\{\Theta = 1\}$, $1/2 < p < 1$. Then the distribution of $\Theta X$ is $p\delta_0 + (1-p)\mu$, which has an atom at zero of mass at least $p > 1/2$, so Lemma V.2.4 applies to it.

Lemma V.2.5 (Misiewicz [167]). Assume that $\lambda$ is a signed measure of finite variation concentrated on $E$, with $p = \lambda(E)$. Then
$$L_N(\mathrm{Exp}(\lambda)) = \delta_0\sum_{k=1}^N\frac{(-1)^{k+1}}{k}(e^{-p}-1)^k + \lambda[1-(1-e^{-p})^N] + \sum_{l=2}^{\infty}\frac{(-1)^l}{l!}\lambda^{*l}\Big(\frac{\partial}{\partial p}\Big)^{l-1}(1-e^{-p})^N.$$
Moreover, if $p = 0$ then
$$L_N(e(\lambda)) = L_N(\mathrm{Exp}(\lambda)) = \lambda + \sum_{l=N+1}^{\infty}\frac{(-1)^l}{l!}\lambda^{*l}\Big(\frac{\partial}{\partial p}\Big)^{l-1}(1-e^{-p})^N\Big|_{p=0}.$$

P r o o f. We start from the expression (V.2.3) for the measure $\mu = \mathrm{Exp}(\lambda)$, where $\lambda$ is a finite signed measure with $\lambda(E) = p$. Noticing that $(\mathrm{Exp}(\lambda))^{*k} = \mathrm{Exp}(k\lambda)$, we obtain
$$L_N(\mathrm{Exp}(\lambda)) = -\delta_0\sum_{n=1}^N\frac{1}{n} + \sum_{k=1}^N\frac{(-1)^{k+1}}{k}\binom{N}{k}\mathrm{Exp}(k\lambda)$$
$$= -\delta_0\sum_{n=1}^N\frac{1}{n} + \sum_{k=1}^N\frac{(-1)^{k+1}}{k}\binom{N}{k}e^{-kp}\sum_{l=0}^{\infty}\frac{1}{l!}k^l\lambda^{*l}$$
$$= -\delta_0\sum_{n=1}^N\frac{1}{n} + \delta_0\sum_{k=1}^N\frac{(-1)^{k+1}}{k}\binom{N}{k}e^{-kp} + \sum_{l=1}^{\infty}\frac{1}{l!}\lambda^{*l}\sum_{k=1}^N(-1)^{k+1}\binom{N}{k}e^{-kp}k^{l-1}.$$
Using formula (V.2.2) (see 4.2.2.56 in [188]) it is easy to check that
$$\sum_{k=1}^N\frac{(-1)^{k+1}}{k}(e^{-p}-1)^k = -\sum_{k=1}^N\frac{1}{k} + \sum_{k=1}^N\frac{(-1)^{k+1}}{k}\binom{N}{k}e^{-kp}.$$
Moreover, $k^{l-1}e^{-kp} = (-1)^{l-1}(\partial/\partial p)^{l-1}e^{-kp}$ and $\sum_{k=1}^N(-1)^{k+1}\binom{N}{k}e^{-kp} = 1-(1-e^{-p})^N$. So finally we obtain
$$L_N(\mathrm{Exp}(\lambda)) = \delta_0\sum_{k=1}^N\frac{(-1)^{k+1}}{k}(e^{-p}-1)^k + \lambda[1-(1-e^{-p})^N] + \sum_{l=2}^{\infty}\frac{(-1)^l}{l!}\lambda^{*l}\Big(\frac{\partial}{\partial p}\Big)^{l-1}(1-e^{-p})^N.$$
Since $(\partial/\partial p)^{l-1}(1-e^{-p})^N = (1-e^{-p})^{N-l+1}W_{l-1}(N,e^{-p})$, where $W_{l-1}(N,e^{-p})$ is a polynomial of degree $l-1$ with respect to $N$ and to $e^{-p}$, for every fixed $l\ge 2$ we obtain
$$\lim_{N\to\infty}\Big(\frac{\partial}{\partial p}\Big)^{l-1}(1-e^{-p})^N = 0.$$
It is easy to notice that for $p = \lambda(E\setminus\{0\}) = 0$ we have
$$L_N(\mathrm{Exp}(\lambda)) = \lambda + \sum_{l=N+1}^{\infty}\frac{(-1)^l}{l!}\lambda^{*l}\Big(\frac{\partial}{\partial p}\Big)^{l-1}(1-e^{-p})^N\Big|_{p=0}.$$
It is also easy to see that for every measure $\lambda$ on $(0,\infty)$ separated from zero (supported on $(\varepsilon,\infty)$ for some $\varepsilon>0$),
$$\lim_{N\to\infty}L_N(\mathrm{Exp}(\lambda)) = -p\delta_0 + \lambda,$$
in the sense of convergence on compact sets.

Example V.2.3. Take λ = δ1; then from formula (V.2.3) we get

N N 1 ( 1)k+1 N L (δ )= δ + − δ . N 1 − 0 k k k k Xk=1 Xk=1   Hence, the absolute value of the weight of every atom of the measure LN (δ1) tends to infinity as N , and therefore there does not exist any measure on [0, ) which is → ∞ ∞ the limit of the measures LN (δ1) in any sense. However, we still have

$$\lim_{N\to\infty} \int_0^{\infty} e^{-ts}\, L_N(\delta_1)(ds) = \ln(e^{-t}) = -t.$$
This convergence is pointwise but not uniform in this case. On the other hand, we know that the function $-t$ cannot be written as the Laplace transform of any measure on $[0,\infty)$.

Remarks. Assume that $\mu$ is a probability measure on $\mathbb{R}^n$ and $\nu$ is a signed measure on $\mathbb{R}^n$ with total mass zero and finite variation. Then we have the following:

1. It is easy to see that if $L_N(\mu)$ tends to $\nu$ in variation (the convergence can be weak as well as uniform on compact sets), then $\mu = e(\nu)$.

2. Assume that the Fourier transform $\widehat{\mu}$ of the measure $\mu$ is strictly positive. As the Fourier transform and the inverse Fourier transform are continuous linear one-to-one mappings in the space of all tempered distributions on $\mathbb{R}^n$ (see Rudin [203], Th. 7.15), in view of Lemma V.2.3, $\mu = \operatorname{Exp}(\nu)$ if and only if $L_N(\mu)$ converges to $\nu$ in the tempered distribution sense.

3. Assume that $\mu$ is symmetric with a strictly positive Fourier transform $\widehat{\mu}$, and assume that the sequence of variations $|L_N(\mu)|$ is bounded. Then the function $\log(\widehat{\mu}(x))$, $x \in \mathbb{R}^n$, is continuous, hence, in view of [93], Corollary 33.21, there exists a signed measure $\nu$ on $\mathbb{R}^n$ with total mass zero and finite variation such that $\mu = e(\nu)$.
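A small numerical sketch of Example V.2.3 (plain Python, not from the text): writing $x = e^{-t}$, the Laplace transform of $L_N(\delta_1)$ can be computed exactly from its atoms, and it converges to $\ln x = -t$ even though the individual atom weights $\binom{N}{k}/k$ diverge.

```python
from fractions import Fraction
from math import comb, log

def laplace_LN_delta1(N, x):
    # Laplace transform of L_N(delta_1) at t, with x = e^{-t}:
    # weight -1/k at s = 0 and (-1)^{k+1} C(N,k)/k at s = k, k = 1..N
    return (-sum(Fraction(1, k) for k in range(1, N + 1))
            + sum(Fraction((-1) ** (k + 1), k) * comb(N, k) * x ** k
                  for k in range(1, N + 1)))

x = Fraction(1, 2)                        # x = e^{-t}, i.e. t = log 2
for N in (10, 30, 60):
    print(N, float(laplace_LN_delta1(N, x)))
# the printed values approach ln(x) = -log 2, while the atom weights
# C(N, k)/k grow without bound as N increases
assert abs(float(laplace_LN_delta1(60, x)) + log(2)) < 1e-12
```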

V.3. Infinite divisibility of substable random vectors. Studying sub-Gaussian random variables (i.e. substable with $\alpha = 2$) Kelker [118] found an example of a mixing variable $\Theta$ which is not infinitely divisible while the random variable $X\Theta^{1/2}$, for $X$ being symmetric Gaussian, has this property. In Kelker's example we have $\alpha = 2$, $n = 1$ and the distribution $\lambda$ of the random variable $\Theta$ is equal to $\operatorname{Exp}(m)$, where
$$m(\{x\}) = \begin{cases} 0.26 & \text{for } x = 1, 2, 4, 5,\\ -0.04 & \text{for } x = 3. \end{cases}$$
In this case the random variable $X\Theta^{1/2}$ is infinitely divisible with the Lévy measure having density
$$\frac{1}{\sqrt{2\pi}} \int_0^{\infty} \exp\left(-\frac{x^2}{2u}\right) m(du) \ge 0.$$
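Non-negativity of this mixture can be confirmed on a grid (a plain Python sketch, not from the text): the positive atom at $u = 4$ alone dominates the negative atom at $u = 3$, since $e^{-x^2/8} \ge e^{-x^2/6}$ for all $x$ and $0.26 > 0.04$.

```python
import math

atoms = {1: 0.26, 2: 0.26, 3: -0.04, 4: 0.26, 5: 0.26}  # Kelker's signed measure m

def g(x):
    # (2*pi)^{-1/2} * sum_u exp(-x^2 / (2u)) * m({u})
    return sum(w * math.exp(-x * x / (2 * u))
               for u, w in atoms.items()) / math.sqrt(2 * math.pi)

# non-negative everywhere on a sample grid
assert all(g(x / 10) >= 0 for x in range(-300, 301))
```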

Similar examples were given in [172] for sub-Gaussian random vectors $(Y_1,\dots,Y_n) = \mathcal{E}(2, I, \Theta)$, where the linear operator $I$ corresponds to the Gaussian random vector with independent, identically distributed components, which means that $\|I(\xi)\|^2 = \sum |\xi_k|^2$. For a similar reason as in Kelker's example, the random vector $(Y_1,\dots,Y_n)$ is infinitely divisible for the non-infinitely divisible random variable $\Theta$ with distribution $\lambda = \operatorname{Exp}(m)$, where
$$(\text{V.3.1})\qquad m(\{x\}) = \begin{cases} p & \text{for } x = 1, 2, 4, 5,\\ -q & \text{for } x = 3, \end{cases}$$
provided that $4p - q = 1$, $p, q > 0$, and $0.25 < p$.

Lemma V.3.1 (Misiewicz [166]). For every $S\alpha S$ measure $\gamma_\alpha$, and every finite signed measure $m$ on the positive half-line,
$$\gamma_\alpha \circ \operatorname{Exp}(m) = \operatorname{Exp}(\gamma_\alpha \circ m).$$
Proof. We only need to check that the corresponding characteristic functions coincide. Let $(X_1,\dots,X_n)$ be a symmetric $\alpha$-stable random vector with distribution $\gamma_\alpha$ and density function $f(x_1,\dots,x_n)$, and let $\Theta$ have the distribution $\lambda = \operatorname{Exp}(m)$. The Fourier transform of the measure $\gamma_\alpha \circ \operatorname{Exp}(m)$ is equal to
$$\begin{aligned}
E \exp\Big\{ i \sum_{k=1}^n \xi_k X_k \Theta^{1/\alpha} \Big\}
&= \int_0^{\infty} \exp\{-u\, \|\Re(\xi)\|_\alpha^\alpha\}\, \lambda(du)\\
&= e^{-m(\mathbb{R}_+)} \sum_{k=0}^{\infty} \frac{1}{k!} \int_0^{\infty} \exp\{-u\, \|\Re(\xi)\|_\alpha^\alpha\}\, m^{*k}(du)\\
&= \exp\Big\{ -\int_0^{\infty} \big(1 - \exp\{-u\, \|\Re(\xi)\|_\alpha^\alpha\}\big)\, m(du) \Big\}\\
&= \exp\Big\{ -\int_0^{\infty} \int \dots \int \Big(1 - \cos\Big( u^{1/\alpha} \sum_j \xi_j x_j \Big)\Big) f(x)\, dx\, m(du) \Big\}\\
&= \exp\Big\{ -\int \dots \int \Big(1 - \cos\Big( \sum_j \xi_j y_j \Big)\Big) \int_0^{\infty} f(y u^{-1/\alpha})\, u^{-n/\alpha}\, m(du)\, dy \Big\}\\
&= \exp\Big\{ -\int \dots \int \Big(1 - \cos\Big( \sum_j \xi_j y_j \Big)\Big)\, (\gamma_\alpha \circ m)(dy) \Big\}.
\end{aligned}$$
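The first step of the proof — that the Laplace transform of $\operatorname{Exp}(m)$ equals $\exp\{-\int(1-e^{-us})\,m(du)\}$ — can be sketched numerically for a discrete signed $m$ (here Kelker's atoms, an illustrative choice; plain Python, not from the text):

```python
import math

atoms = {1.0: 0.26, 2.0: 0.26, 3.0: -0.04, 4.0: 0.26, 5.0: 0.26}
p = sum(atoms.values())                        # p = m(E)

def lt_exp_m(s):
    # exp{ -∫ (1 - e^{-us}) m(du) }
    return math.exp(-sum(w * (1 - math.exp(-u * s))
                         for u, w in atoms.items()))

def series_exp_m(s, terms=60):
    # e^{-p} * sum_k (1/k!) (∫ e^{-us} m(du))^k, the transform of Exp(m)
    phi = sum(w * math.exp(-u * s) for u, w in atoms.items())
    return math.exp(-p) * sum(phi ** k / math.factorial(k)
                              for k in range(terms))

for s in (0.1, 0.5, 2.0):
    assert abs(lt_exp_m(s) - series_exp_m(s)) < 1e-12
```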

Example V.3.1. Assume that the $S\alpha S$ random vector $(X_1,\dots,X_n)$ has the characterizing operator $\Re: \mathbb{R}^n \to L_\alpha$ such that
$$\|\Re(\xi)\|_\alpha^\alpha = \sum_{k=1}^n |\xi_k|^\alpha,$$
which means that all the components are i.i.d., symmetric $\alpha$-stable random variables with scale parameter 1. It is well known that every $S\alpha S$ random variable can be treated as sub-Gaussian with the mixing measure $\gamma^{+}_{\alpha/2}$, hence the density of $(X_1,\dots,X_n)$ can be written in the following form:
$$f(x) = \int \dots \int (2\pi)^{-n/2} \exp\Big\{ -\sum_{k=1}^n \frac{x_k^2}{2 s_k} \Big\} \prod_{k=1}^n s_k^{-1/2}\, \gamma^{+}_{\alpha/2}(ds_1) \dots \gamma^{+}_{\alpha/2}(ds_n).$$
Now we define the substable random vector $(Y_1,\dots,Y_n) = \mathcal{E}(\alpha, \Re, \Theta)$ with $\lambda = \operatorname{Exp}(m)$, where $m$ is given by formula (V.3.1). It follows from Lemma V.3.1 that $(Y_1,\dots,Y_n)$ is infinitely divisible provided
$$\int f(y u^{-1/\alpha})\, u^{-n/\alpha}\, m(du) \ge 0, \qquad y \in \mathbb{R}^n.$$
Using the previous representation for the function $f$ and changing the order of integration, we see that this condition holds if
$$(2\pi)^{-n/2} \int \exp\Big\{ -\sum_{k=1}^n \frac{y_k^2}{2 s_k}\, u^{-2/\alpha} \Big\}\, u^{-n/\alpha}\, m(du) \ge 0, \qquad y \in \mathbb{R}^n.$$
Now it is easy to check that the substable random vector $(Y_1,\dots,Y_n)$ is infinitely divisible provided $0.25 < p$.
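The positivity condition $\int f(y u^{-1/\alpha})\, u^{-n/\alpha}\, m(du) \ge 0$ can also be inspected directly in a case with a closed-form stable density. The sketch below (plain Python, an illustrative choice not taken from the text) uses $\alpha = 1$, $n = 1$, where $f$ is the standard Cauchy density, with the atoms of (V.3.1) and Kelker's values $p = 0.26$, $q = 0.04$:

```python
import math

p, q = 0.26, 0.04                  # satisfy 4p - q = 1, p, q > 0
atoms = {1: p, 2: p, 3: -q, 4: p, 5: p}

def mixture_density(x):
    # ∫ f(x u^{-1/α}) u^{-1/α} m(du) for α = 1, n = 1:
    # f(x/u)/u = u / (pi (u^2 + x^2)) for the standard Cauchy f
    return sum(w * u / (math.pi * (u * u + x * x))
               for u, w in atoms.items())

# non-negative on a sample grid (the atom at u = 4 dominates the one at u = 3)
assert all(mixture_density(x / 10) >= 0 for x in range(0, 501))
```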

Recall that for a measure $\nu$ on $[0,\infty)$,
$$\gamma_\alpha \circ \nu(B) = \int_0^{\infty} \gamma_\alpha(B s^{-1/\alpha})\, \nu(ds) \quad \text{for every Borel set } B \subset \mathbb{R}^n,$$
$$\gamma^{+}_\alpha \circ \nu(B) = \int_0^{\infty} \gamma^{+}_\alpha(B s^{-1/\alpha})\, \nu(ds) \quad \text{for every Borel set } B \subset [0,\infty).$$

Proposition V.3.1 (Misiewicz [167]). If $\gamma_\alpha$ is the distribution of an $S\alpha S$ random vector, $\gamma^{+}_\alpha$ is the distribution of $\Theta_\alpha$, and $\lambda$ is a probability measure, then
$$L_N(\gamma_\alpha \circ \lambda) = \gamma_\alpha \circ L_N(\lambda), \qquad L_N(\gamma^{+}_\alpha \circ \lambda) = \gamma^{+}_\alpha \circ L_N(\lambda).$$
Proof. As all these measures have finite variation we only need to show that the corresponding Fourier transforms (respectively, Laplace transforms) coincide, i.e.,
$$\begin{aligned}
\int_{\mathbb{R}^n} \cos\langle \xi, x\rangle\, (L_N(\gamma_\alpha \circ \lambda))(dx)
&= \sum_{n=1}^{N} \frac{(-1)^{n+1}}{n} \int_{\mathbb{R}^n} \cos\langle \xi, x\rangle\, (\gamma_\alpha \circ \lambda - \delta_0)^{*n}(dx)\\
&= \sum_{n=1}^{N} \frac{(-1)^{n+1}}{n} \Big\{ \int_{\mathbb{R}^n} \cos\langle \xi, x\rangle\, (\gamma_\alpha \circ \lambda - \delta_0)(dx) \Big\}^n\\
&= \sum_{n=1}^{N} \frac{(-1)^{n+1}}{n} \Big\{ \int_0^{\infty} \exp\{-\|\Re(\xi)\|_\alpha^\alpha s\}\, \lambda(ds) - 1 \Big\}^n\\
&= \sum_{n=1}^{N} \frac{(-1)^{n+1}}{n} \sum_{k=0}^{n} \binom{n}{k} (-1)^{n-k} \Big\{ \int_0^{\infty} \exp\{-\|\Re(\xi)\|_\alpha^\alpha s\}\, \lambda(ds) \Big\}^k\\
&= \sum_{n=1}^{N} \frac{(-1)^{n+1}}{n} \sum_{k=0}^{n} \binom{n}{k} (-1)^{n-k} \int_0^{\infty} \exp\{-\|\Re(\xi)\|_\alpha^\alpha s\}\, \lambda^{*k}(ds)\\
&= \int_0^{\infty} \exp\{-\|\Re(\xi)\|_\alpha^\alpha s\} \Big( \sum_{n=1}^{N} \frac{(-1)^{n+1}}{n} \sum_{k=0}^{n} \binom{n}{k} (-1)^{n-k} \lambda^{*k} \Big)(ds)\\
&= \int_0^{\infty} \exp\{-\|\Re(\xi)\|_\alpha^\alpha s\}\, L_N(\lambda)(ds) = \int_{\mathbb{R}^n} \cos\langle \xi, x\rangle\, (\gamma_\alpha \circ L_N(\lambda))(dx).
\end{aligned}$$
The proof of the second formula is almost the same if we replace the Fourier transform by the Laplace transform, and will be omitted.
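The heart of this computation is that the Laplace transform of $L_N(\lambda)$ equals $\sum_{n=1}^{N}\frac{(-1)^{n+1}}{n}\big(\widehat{\lambda}(s)-1\big)^n$, where $\widehat{\lambda}$ is the Laplace transform of $\lambda$. A small sketch for the two-atom probability measure $\lambda = \frac{1}{2}(\delta_1+\delta_2)$ (an illustrative choice; plain Python, not from the text), computing $L_N(\lambda)$ directly via convolution powers:

```python
from fractions import Fraction
import math

def convolve(a, b):
    out = {}
    for u, wu in a.items():
        for v, wv in b.items():
            out[u + v] = out.get(u + v, Fraction(0)) + wu * wv
    return out

lam = {1: Fraction(1, 2), 2: Fraction(1, 2)}          # λ = (δ_1 + δ_2)/2
nu = dict(lam)
nu[0] = nu.get(0, Fraction(0)) - 1                    # ν = λ - δ_0

def L_N(N):
    total, power = {}, {0: Fraction(1)}               # power = ν^{*0} = δ_0
    for n in range(1, N + 1):
        power = convolve(power, nu)                   # power = ν^{*n}
        c = Fraction((-1) ** (n + 1), n)
        for u, w in power.items():
            total[u] = total.get(u, Fraction(0)) + c * w
    return total

def laplace(measure, s):
    return sum(float(w) * math.exp(-u * s) for u, w in measure.items())

N, s = 8, 0.7
phi = laplace(lam, s)
direct = laplace(L_N(N), s)
series = sum((-1) ** (n + 1) / n * (phi - 1) ** n for n in range(1, N + 1))
assert abs(direct - series) < 1e-10
```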

We will denote by $f_\alpha(x_1,\dots,x_n)$ the density function for the $S\alpha S$ random vector represented by the linear operator $\Re$. By $L_N^{+}(\lambda)$ we will denote the measure $L_N(\lambda)$ restricted to the set $(0,\infty)$.

Proposition V.3.2 (Misiewicz [167]). The substable random vector $(Y_1,\dots,Y_n) = \mathcal{E}(\alpha, \Re, \Theta)$ is infinitely divisible if and only if for every $x \ne 0$, $x \in \mathbb{R}^n$, there exists a finite, non-negative limit
$$g(x) = \lim_{N\to\infty} \int_0^{\infty} f_\alpha(x_1 u^{-1/\alpha},\dots,x_n u^{-1/\alpha})\, u^{-n/\alpha}\, L_N^{+}(\lambda)(du),$$
and this $g(x)$ is the density of a symmetric Lévy measure on $\mathbb{R}^n$.

Proof. The "if" part of the proof is trivial. So assume that the substable random vector $(Y_1,\dots,Y_n)$ is infinitely divisible. From Lemma V.2.1 and Proposition V.3.1 we find that for every $\xi \in \mathbb{R}^n$,
$$\int_{\mathbb{R}^n} e^{i\langle \xi, x\rangle}\, L_N(\gamma_\alpha \circ \lambda)(dx) \to \log\Big\{ \int_0^{\infty} \exp\{-\|\Re(\xi)\|_\alpha^\alpha s\}\, \lambda(ds) \Big\}.$$
But, for the random vector $(Y_1,\dots,Y_n)$, there exists a symmetric Lévy measure $m$ on $\mathbb{R}^n$ such that

$$\log\Big\{ \int_0^{\infty} \exp\{-\|\Re(\xi)\|_\alpha^\alpha s\}\, \lambda(ds) \Big\} = \int_{\mathbb{R}^n} (e^{i\langle \xi, x\rangle} - 1)\, m(dx),$$
thus,
$$\int_{\mathbb{R}^n} e^{i\langle \xi, x\rangle}\, L_N(\gamma_\alpha \circ \lambda)(dx) \to \int_{\mathbb{R}^n} (e^{i\langle \xi, x\rangle} - 1)\, m(dx).$$

On the other hand, using Proposition V.3.1, we have
$$\begin{aligned}
\int_{\mathbb{R}^n} e^{i\langle \xi, x\rangle}\, L_N(\gamma_\alpha \circ \lambda)(dx)
&= \int_{\mathbb{R}^n} e^{i\langle \xi, x\rangle}\, (\gamma_\alpha \circ L_N(\lambda))(dx)\\
&= \int_0^{\infty} \big(\exp\{-\|\Re(\xi)\|_\alpha^\alpha s\} - 1\big)\, L_N^{+}(\lambda)(ds)\\
&= \int_{\mathbb{R}^n} \int_0^{\infty} \big(e^{i\langle \xi, x\rangle s^{1/\alpha}} - 1\big)\, f_\alpha(x_1,\dots,x_n)\, dx\, L_N^{+}(\lambda)(ds)\\
&= \int_{\mathbb{R}^n} \big(e^{i\langle \xi, x\rangle} - 1\big) \int_0^{\infty} f_\alpha(x_1 s^{-1/\alpha},\dots,x_n s^{-1/\alpha})\, s^{-n/\alpha}\, L_N^{+}(ds)\, dx.
\end{aligned}$$
Thus, finally,

$$\lim_{N\to\infty} \int_{\mathbb{R}^n} \big(e^{i\langle \xi, x\rangle} - 1\big) \int_0^{\infty} f_\alpha(x_1 s^{-1/\alpha},\dots,x_n s^{-1/\alpha})\, s^{-n/\alpha}\, L_N^{+}(ds)\, dx = \int_{\mathbb{R}^n} \big(e^{i\langle \xi, x\rangle} - 1\big)\, m(dx).$$

V.4. Infinite divisibility of substable processes. The problem of infinite divisibility of substable processes was studied in the case of sub-Gaussian processes (i.e. substable processes with $\alpha = 2$). It is well known that infinite divisibility of the mixing variable $\Theta$ guarantees infinite divisibility of the sub-Gaussian process $\{Y_t : t \in T\}$. We also know that if the linear space spanned by a sub-Gaussian process $\{Y_t : t \in T\}$ is infinite-dimensional and the process is infinitely divisible then the mixing variable $\Theta$ is infinitely divisible as well (for details see [161] and [172]). As we have seen before, the latter is no longer true in the finite-dimensional case. Even though this was not explicitly shown, the proof that infinite divisibility of a sub-Gaussian process forces the mixing variable to be infinitely divisible was based on a very special property of Hilbert spaces: any two subspaces of a Hilbert space having the same dimension are identical, in the sense that they are isometrically isomorphic. We have already seen before that subspaces of $L_\alpha$-spaces are much more complicated. Let us start from two easy facts.

Proposition V.4.1 (Misiewicz [166]). If $Y = \{Y_t : t \in T\}$ is a substable stochastic process with infinitely divisible mixing variable $\Theta$, then $Y$ is infinitely divisible.

Proof. As $\Theta$ is an infinitely divisible non-negative random variable, for every $k \in \mathbb{N}$ there exists a non-negative random variable $\Theta_k$ such that $\Theta \stackrel{d}{=} \Theta^{k,1} + \dots + \Theta^{k,k}$, where $\Theta^{k,1},\dots,\Theta^{k,k}$ are independent copies of $\Theta_k$. Now, for every fixed $n \in \mathbb{N}$, and every $t_1,\dots,t_n \in T$, we have
$$X_1(\Theta^{k,1})^{1/\alpha} + \dots + X_k(\Theta^{k,k})^{1/\alpha} \stackrel{d}{=} X_1(\Theta^{k,1} + \dots + \Theta^{k,k})^{1/\alpha} \stackrel{d}{=} (Y_{t_1},\dots,Y_{t_n}),$$
where $X_i = (X^i_{t_1},\dots,X^i_{t_n})$ are independent copies of the random vector $(X_{t_1},\dots,X_{t_n})$, with $X_i$ and $\Theta^{k,j}$ independent. This means that for every $n, k \in \mathbb{N}$ the random vector $(Y_{t_1},\dots,Y_{t_n})$ has the same distribution as the sum of $k$ independent identically distributed random vectors, which completes the proof.

It is easy to see that the above proposition holds if we replace the stochastic processes by random vectors, as the dimension of the space spanned by $\{Y_t : t \in T\}$ does not play any particular role here. This is not the case in the next proposition. We will use the results presented in Chapter III.

Proposition V.4.2 (Misiewicz [166]). Let the substable stochastic process $\{Y_t : t \in T\} \stackrel{d}{=} \{X_t : t \in T\} \cdot \Theta^{1/\alpha}$ admit representation $\mathcal{E}(\alpha, \Re, \Theta)$. If the space $\mathcal{H}(X)$ contains $\ell_\alpha^n$'s uniformly then the mixing variable $\Theta$ is infinitely divisible.

Proof. The characteristic function of the stochastic process $\{Y_t : t \in T\}$ is

$$E \exp\Big\{ i \sum_{t \in T} \xi_t Y_t \Big\} = \int_0^{\infty} \exp\{-\|\Re(\xi)\|_\alpha^\alpha u\}\, \lambda(du) =: \varphi(\|\Re(\xi)\|_\alpha),$$
where $\lambda$ is the distribution of $\Theta$. Now, $\varphi(\|\cdot\|_\alpha)$ is a positive definite function on $L_\alpha(S,\mathcal{B},m)$, since it is a scale mixture of $\exp\{-\|x\|_\alpha^\alpha\}$, which is positive definite on the whole space $L_\alpha(S,\mathcal{B},m)$. Since the substable stochastic process $\{Y_t : t \in T\}$ is infinitely divisible, also all the functions $\varphi^{1/k}(\|\cdot\|_\alpha)$, $k \in \mathbb{N}$, are positive definite on $\mathcal{H}(X)$. As $\mathcal{H}(X)$ contains $\ell_\alpha^n$'s uniformly, by Theorem III.3.2, for every $k \in \mathbb{N}$ there exists a probability measure $\lambda_k$ on $[0,\infty)$ such that

$$\varphi^{1/k}(s) = \int_0^{\infty} \exp\{-s^\alpha u\}\, \lambda_k(du), \qquad s > 0.$$
Now, for every $k \in \mathbb{N}$ and $s > 0$, we have

$$\varphi(s) = (\varphi^{1/k}(s))^k = \Big\{ \int_0^{\infty} \exp\{-s^\alpha u\}\, \lambda_k(du) \Big\}^k = \int_0^{\infty} \exp\{-s^\alpha u\}\, \lambda_k^{*k}(du).$$
The uniqueness of the Laplace transform yields $\lambda_k^{*k} = \lambda$, which completes the proof.
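Propositions V.4.1 and V.4.2 can be illustrated in the simplest sub-Gaussian case $\alpha = 2$ with $\Theta$ exponentially distributed — an infinitely divisible mixing variable chosen for illustration, not taken from the text. Then $Y = X\Theta^{1/2}$ has characteristic function $(1+t^2/2)^{-1}$, and its $k$-th root is realized by the mixing variables $\Theta^{k,j} \sim \operatorname{Gamma}(1/k)$, i.e. $\lambda_k^{*k} = \lambda$; the Monte Carlo sketch below checks this:

```python
import math, random

random.seed(7)

def subgauss_cf(t):
    # E exp(itY) for Y = X * Theta^{1/2}, X standard normal, Theta ~ Exp(1):
    # E exp(-t^2 Theta / 2) = 1 / (1 + t^2/2)
    return 1.0 / (1.0 + t * t / 2.0)

# Theta ~ Exp(1) is infinitely divisible: Theta =_d sum of k i.i.d.
# Gamma(1/k, 1) variables.  X * (Theta^{k,j})^{1/2} then has characteristic
# function (1 + t^2/2)^{-1/k}, the k-th root of subgauss_cf.
k, t, n = 3, 1.0, 200_000
mc = sum(math.exp(-t * t * random.gammavariate(1.0 / k, 1.0) / 2.0)
         for _ in range(n)) / n
assert abs(mc - subgauss_cf(t) ** (1.0 / k)) < 0.01
```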

Proposition V.4.3 (Misiewicz [166]). Assume that the substable stochastic process $\{Y_t : t \in T\}$ admits representation $\mathcal{E}(\alpha, \Re, \Theta)$, and $\alpha_\circ = \alpha_\circ(\mathcal{H}(X))$, $\alpha_s = \alpha_s(\mathcal{H}(X))$. Then:

a) If $\alpha < \alpha_\circ$ and there exists $\tau \in (\alpha, \alpha_\circ]$ such that the random variable $\Theta_{\alpha/\tau}\, \Theta^{\tau/\alpha}$ is infinitely divisible, then the stochastic process is infinitely divisible.

b) If the stochastic process is infinitely divisible, then the random variable $\Theta_{\alpha/\alpha_s}\, \Theta^{\alpha_s/\alpha}$ is infinitely divisible.

Proof. The inequality $\alpha < \alpha_\circ$ means that the function $\|\Re(\xi)\|_\alpha^{\alpha_\circ}$ is negative definite on $\mathbb{R}^{(T)}$, so the function $\exp\{-\|\Re(\xi)\|_\alpha^{\alpha_\circ}\}$ is positive definite on $\mathbb{R}^{(T)}$ and we can define a consistent family of measures $\{\mu_{t_1,\dots,t_n} : n \in \mathbb{N},\ t_j \in T\}$ in such a way that
$$\widehat{\mu}_{t_1,\dots,t_n}(\xi_{t_1},\dots,\xi_{t_n}) = \exp\{-\|\Re(\xi_{t_1},\dots,\xi_{t_n})\|_\alpha^{\alpha_\circ}\}.$$
From the Kolmogorov theorem there exists a stochastic process $\{X_t : t \in T\}$ such that
$$\mu_{t_1,\dots,t_n}(B) = P\{(X_{t_1},\dots,X_{t_n}) \in B\}$$
for every Borel set $B \subset \mathbb{R}^n$. The process $\{X_t : t \in T\}$ is symmetric $\alpha_\circ$-stable as its finite-dimensional projections are all symmetric $\alpha_\circ$-stable. Taking now $\Theta_{\tau/\alpha_\circ}$ and $\Theta_{\alpha/\tau}$ such that $\{X_t : t \in T\}$, $\Theta_{\tau/\alpha_\circ}$ and $\Theta_{\alpha/\tau}$ are totally independent, we see that the stochastic process
$$\{Z_t : t \in T\} = \{X_t\, \Theta_{\tau/\alpha_\circ}^{1/\alpha_\circ} : t \in T\}$$
is a symmetric $\tau$-stable process and, moreover,
$$\{Y_t : t \in T\} \stackrel{d}{=} \{X_t\, \Theta_{\tau/\alpha_\circ}^{1/\alpha_\circ}\, \Theta_{\alpha/\tau}^{1/\tau}\, \Theta^{1/\alpha} : t \in T\} \stackrel{d}{=} \{Z_t\, (\Theta_{\alpha/\tau}\, \Theta^{\tau/\alpha})^{1/\tau} : t \in T\}.$$
By Proposition V.4.1, the stochastic process $\{Y_t : t \in T\}$ is infinitely divisible if the random variable $\Theta_{\alpha/\tau}\, \Theta^{\tau/\alpha}$ is infinitely divisible. This completes the proof of a).

To prove b) assume first that the stochastic process is infinitely divisible. Its characteristic function

$$\varphi(\|\Re(\xi)\|_\alpha) = \int_0^{\infty} \exp\{-\|\Re(\xi)\|_\alpha^\alpha s\}\, \lambda(ds)$$
is positive definite on $\mathbb{R}^{(T)}$, which means that the function

$$\varphi(\|h\|_\alpha) = \int_0^{\infty} \exp\{-\|h\|_\alpha^\alpha s\}\, \lambda(ds)$$
is positive definite on the space $\mathcal{H}(X)$. This space contains $\ell_{\alpha_s}^n$'s uniformly, thus it follows from Theorem III.3.2 that there exists a probability measure $\lambda_1$ on $[0,\infty)$ such that

$$\varphi(\|h\|_\alpha) = \int_0^{\infty} \exp\{-\|h\|_\alpha^{\alpha_s} s\}\, \lambda_1(ds).$$
On the other hand, the stochastic process $\{Y_t : t \in T\}$ is infinitely divisible, hence for every $k \in \mathbb{N}$ the function $\varphi^{1/k}(\|h\|_\alpha)$ is also positive definite on $\mathcal{H}(X)$, and there exists a probability measure $\lambda_k$ such that

$$\varphi^{1/k}(\|h\|_\alpha) = \int_0^{\infty} \exp\{-\|h\|_\alpha^{\alpha_s} s\}\, \lambda_k(ds).$$

The uniqueness of the Laplace transform yields $\lambda_1 = \lambda_k^{*k}$, so the measure $\lambda_1$ is infinitely divisible. Now it is enough to notice that $\lambda_1$ is the distribution of the random variable $\Theta_{\alpha/\alpha_s}\, \Theta^{\alpha_s/\alpha}$.

References

[1] A. de Acosta, Asymptotic behavior of stable measures, Ann. Probab. 5 (1977), 494–499. [2] A. de Acosta, A. Araujo and E. Gine, On Poisson measures, Gaussian measures and the Central Limit Theorem in Banach spaces, Adv. in Probab. 5 (1978), 1–68. [3] R. J. Adler, S. Cambanis and G. Samorodnitsky, On stable Markov processes, Stochastic Process. Appl. 34 (1990), 1–17. [4] I. Aharoni, B. Maurey and B. S. Mityagin, Uniform embeddings of metric spaces and of Banach spaces into Hilbert spaces, Israel J. Math. 52 (1985), 251–265. [5] R. Ahmad, Extension of the normal family to spherical families, Trabajos Estadist. 23 (1972), 51–60. [6] P. M. Alberti, A note on stochastic operators on L1-spaces and convex functions, J. Math. Anal. Appl. 130 (1988), 556–563. [7] A. A. Alzaid, C. R. Rao and D. N. Shanbhag, Elliptical symmetry and exchange- ability with characterizations, J. Multivariate Anal. 33 (1990), 1–16. [8] G. Andersen and T. Kawata, Some integral transforms of characteristic functions, J. Math. Statist. 39 (1968), 1923–1931. [9] T. W. Anderson, The integral of a symmetric unimodal function over a symmetric and some probability inequalities, Proc. Amer. Math. Soc. 6 (1955), 170–176. [10] D. F. Andrews and C. L. Mallows, Scale mixtures of normal distributions, J. Roy. Statist. Soc. 36 (1974), 99–102. [11] N. Aronszajn, Theory of reproducing kernels, Trans. Amer. Math. Soc. 68 (1950), 337–404. [12] R. Askey, Radial characteristic functions, Tech. Report Math. Research Center, Uni- versity of Wisconsin-Madison, 1262. [13] P. Assouad, Un espace hyperm´etrique non plongeable dans un espace L1, C. R. Acad. Sci. Paris 285 (1977), 361–363. [14] —, Plongements isom´etriques dans L1; aspect analytique, in: Initiation Seminar on Anal- ysis, G. Choquet – M. Rogalski – J. Saint-Raymond, 19th Year, Exp. No. 14, Publ. Math. Univ. Pierre et Marie Curie, 41, Univ. Paris VI, Paris, 1980. 
[15] —, Caract´erisations des sous-espaces norm´es de L1 de dimension finie, S´eminaire d’Ana- lyse Fonctionnelle 1979–1980, preprint. [16] J.-G. Bak, D. McMichael, J. Vance andS. Wainger, Fourier transforms of surface 3 area measure on convex surfaces in R , Amer. J. Math. 111 (1989), 633–668. [17] K. Ball, Inequalities and sphere-packing in Lp, Israel J. Math. 58 (1987), 243–256. [18] B. M. Bennett, On certain multivariate non-normal distribution, Proc. Cambridge Philos. Soc. 57 (1961), 434–436. [19] C. Berg, J. P. R Christensen and P. Ressel, Harmonic Analysis on Semigroups. Theory of Positive Definite and Related Functions, Grad. Texts Math. 100, Springer, 1984. [20] C. Berg et P. Ressel, Une forme abstraite du th´eor`eme de Schoenberg, Arch. Math. (Basel) 30 (1978), 55–61. [21] R. H. Berk, Sphericity and the normal law, Ann. Probab. 14 (1986), 696–701. [22] S. M. Berman, Second order random fields over ℓp with homogeneous and isotropic increments, Z. Wahrsch. Verw. Gebiete 12 (1969), 107–126. Substable and pseudo-isotropic processes 81

[23] —, Stationarity, isotropy and sphericity in Lp, ibid. 54 (1980), 21–23. [24] L. Bishop, D. A. S. Fraser andK. W. Ng, Some decompositions of spherical distri- butions, Statist. Hefte 20 (1979), 1–20. [25] E. D. Bolker, A class of convex bodies, Trans. Amer. Math. Soc. 145 (1969), 323–345. [26] C. Borell, Convex measures on locally convex spaces, Ark. Mat. 18 (1974), 239–252. [27] —, Gaussian Radon measures on locally convex spaces, Math. Scand. 38 (1976), 265–284. [28] G. E. P. Box, Spherical distributions, Ann. Math. Statist. 24 (1953), 687–688. [29] J. Bretagnolle, D. Dacunha-Castelle et J. L. Krivine, Lois stables et espaces Lp, in: Symposium on Probability Methods in Analysis, Lecture Notes in Math. 31, Springer, 1967, 48–54. [30] W. Bryc, On bivariate distributions with “rotation invariant” absolute moments, San- khy¯aSer. A 54 (1992), 432–439. [31] —, Normal distributions and characterizations, preprint, 1991. [32] W. Bryc and A. Pluci´nska, A characterization of infinite Gaussian sequences by con- ditional moments, Sankhy¯aSer. A 47 (1985), 166–173. [33] T. Byczkowski, RKHS for Gaussian measures on metric vector spaces, Bull. Polish Acad. Sci. Math. 35 (1987), 93–103. [34] T. Byczkowski andT. Inglot, Gaussian random series on metric vector spaces, Math. Z. 196 (1987), 39–50. [35] S. Cambanis, C. D. Hardin and A. Weron, Ergodic properties of stationary stable processes, Stochastic Process. Appl. 24 (1987), 1–18. [36] —, —, —, Innovations and Wold decompositions of stable sequences, Probab. Theory Related Fields 79 (1988), 1–27. [37] S. Cambanis, S. Huang and G. Simons, On the theory of elliptically contoured dis- tributions, J. Multivariate Anal. 11 (1981), 368–385. [38] S. Cambanis, R. Keener and G. Simons, On α-symmetric distributions, ibid. 13 (1983), 213–233. [39] S. Cambanis and G. Miller, Some path properties of p-th order and symmetric stable processes, Ann. Probab. 8 (1980), 1148–1156. [40] S. Cambanis and G. 
Simons, Probability and expectation inequalities, Z. Wahrsch. Verw. Gebiete 59 (1982), 1–25. [41] S. Cambanis and A. R. Soltani, Prediction of stable processes, spectral and moving average representations, ibid. 66 (1984), 593–612. [42] B. A. Chartres, A geometrical proof of a theorem due to Slepian, SIAM Rev. 5 (1963), 335–341. [43] S. D. Chatterji and V. Mandrekar, Equivalence and singularity of Gaussian mea- sures and applications, Probab. Anal. and Related Topics 1 (1978), 169–197. [44] M. A. Chmielewski, Elliptically symmetric distributions: A review and bibliography, Internat. Statist. Rev. 49 (1981), 67–74. [45] J. Chover, Certain convexity conditions on matrices with applications to Gaussian pro- cesses, Duke Math. J. 29 (1962), 141–150. [46] J. P. R. Christensen and B. C. Ressel, Positive definite functions on abelian semi- groups, Math. Ann. 223 (1976), 253–274. [47] J. P. R. Christensen and B. C. Ressel, Norm dependent positive definite functions on B-spaces, in: Lecture Notes in Math. 990, Springer, 1983, 47–53. [48] J. J. Crawford, Elliptically contoured measures on finite-dimensional Banach spaces, Studia Math. 60 (1977), 15–32. [49] R. Davidson, Arithmetic and other properties of certain Delphic semigroups, Z. Wahrsch. Verw. Gebiete 10 (1968), 146–172. 82 J. K. Misiewicz

[50] S. J. Devlin, R. Gnanadesikan andJ. Kettenring, Some multivariate applications of elliptical distributions, in: Essays in Probability and Statistics, Chapter 24, Shinko Tsusho Co., Tokyo, 1976, 365–393. [51] S. J. Dilworth and A. L. Koldobsky, The Fourier transform of order statistics with applications to Lorenz spaces, preprint of Banach Space Bulletin Board, 1993. [52] L. E. Dor, Potentials and isometric embeddings in L1, Israel J. Math. 24 (1976), 260– 268. [53] R. M. Dudley, Singularity of measures on linear spaces, Z. Wahrsch. Verw. Gebiete 6 (1966), 129–132. [54] R. M. Dudley andM. Kanter, Zero-one laws for stable measures, Proc. Amer. Math. Soc. 45 (1974), 245–252. [55] A. J. Dunn, Estimation of the means of dependent variables, Ann. Math. Statist. 29 (1958), 1095–1111. [56] —, Confidence intervals for the means of dependent normally distributed variables, J. Amer. Statist. Assoc. 54 (1959), 613–621. [57] A. Dvoretzky, Some results on convex bodies and Banach spaces, in: Proc. Internat. Sympos. on Linear Spaces, Academic Press, 1961, 123–160. [58] M. L. Eaton, Characterization of distributions by the identical distribution of linear forms, J. Appl. Probab. 3 (1966), 481–494. [59] —, On the projections of isotropic distributions, Ann. Statist. 9 (1981), 391–400. [60] B. Efron and R. A. Olshen, How broad is the class of normal scale mixtures, ibid. 6 (1978), 1159–1164. [61] A. Ehrhard, Sym´etrisation dans l’espace de Gauss, Math. Scand. 53 (1983), 281–301. [62] S. J. Einhorn, Functions positive definite in C[0, 1], Proc. Amer. Math. Soc. 22 (1969), 702–703. [63] K. T. Fang, S. Kotz andK. W. Ng, Symmetric Multivariate and Related Distributions, Chapman and Hall, London, 1990. [64] C. Fefferman, M. Jodeit andM. D. Perlman, A spherical surface measure inequality for convex sets, Proc. Amer. Math. Soc. 33 (1972), 114–119. [65] J. Feldman, Equivalence and perpendicularity of Gaussian processes, Pacific J. Math. 8 (1958), 699–708. [66] W. 
Feller, An Introduction to Probability Theory and its Applications, Vol. II, Wiley, New York, 1966. [67] T. S. Ferguson, A representation of the symmetric bivariate Cauchy distribution, Ann. Math. Statist. 33 (1962), 1256–1266. [68] B. de Finetti, La pr´evision, ses lois logiques, ses sources subjectives, Ann. Inst. H. Poincar´e7 (1937), 1–68. [69] P. Funk, Uber¨ eine geometrische Anwendung der Abelschen Integralgleichung, Math. Ann. 77 (1916), 129–135. [70] I. N. Gel’fand, M. I. Graev and N. J. Vilenkin, Generalized Functions, Vol. V, Academic Press, New York, 1966. [71] M. Ghosh and E. Pollack, Some properties of multivariate distributions with pdf’s constant on ellipsoids, Comm. Statist. 4 (1975), 1157–1160. [72] B. V. Gnedenko and A. N. Kolmogorov, Limit Distributions for Sums of Indepen- dent Random Variables, Addison-Wesley, Reading, 1954. [73] B. V. Gnedenko and G. Fahim, On a transform theorem, Soviet Math. Dokl. 10 (1969), 769–772. [74] F. S. Gordon and A. M. Mathai, Characterizations of the multivariate normal dis- tribution using regression properties, Ann. Math. Statist. 43 (1972), 205–229. [75] Y. Gordon, Elliptically contoured distributions, Probab. Theory Related Fields 76 (1987), 429–438. Substable and pseudo-isotropic processes 83

[76] I. S. Gradshte˘ın and I. M. Ryzhik, Tables of Integrals, Sums, Series and Products, Fizmatgiz, Moscow, 1962 (in Russian). [77] R. Grz¸a´slewicz, Plane sections of the unit ball of lp, Acta Math. Hungar. 52 (1988), 219–225. [78] R. Grz¸a´slewicz and J. K. Misiewicz, Isometric embeddings of subspaces of Lα-spaces and maximal representation for symmetric stable processes, preprint, 1995. [79] A. F. Gualtierotti, Some remarks on spherically invariant distributions, J. Multivari- ate Anal. 4 (1974), 347–349. [80] —, A likelihood ratio formula for spherically invariant processes, IEEE Trans. Inform. Theory 22 (1976), 610. [81] R. D. Gupta, J. K. Misiewicz and D. St. P. Richards, Infinite sequences with sign-symmetric Liouville type distributions, Probab. Math. Statist. 16 (1996), 29–44. [82] S. Das Gupta, M. L. Eaton, I. Olkin, M. Perlman, L. J. Savage and M. Sobel, Inequalities on the probability content of convex regions for elliptically contoured distri- butions, in: Proceedings of the Sixth Berkeley Symposium on Mathematical Statistics and Probability II, Univ. of California Press, Berkeley, 1972, 241–264. [83] C. D. Hardin, Isometries on subspaces of Lp, Indiana Univ. Math. J. 30 (1981), 449–465. [84] —, On the linearity of regression, Z. Wahrsch. Verw. Gebiete 61 (1982), 293–302. [85] —, On the spectral representation of symmetric stable processes, J. Multivariate Anal. 12 (1982), 385–401. [86] S. Helgason, The Radon Transform, Birkh¨auser, Berlin, 1980. [87] A. Hertle, Gaussian plane and spherical means in separable Hilbert spaces, in: Measure Theory, Springer, 1945, 314–335. [88] —, Gaussian surface measures and the Radon transform on separable Banach spaces, in: Lecture Notes in Math. 794, Springer, 1980, 513–531. [89] —, Zur Radon Transformation von Funktionen und Massen, Dissertation, Univ. Erlan- gen-N¨urnberg, 1979. [90] —, On the asymptotic behaviour of Gaussian spherical integrals, in: Probability in Ba- nach Spaces IV, Lecture Notes in Math. 
990, Springer, 221–234. [91] C. S. Herz, Fourier transforms related to convex sets, Ann. of Math. 75 (1962), 81–92. [92] —, A class of negative definite functions, Proc. Amer. Math. Soc. 14 (1963), 670–676. [93] E. Hewitt and K. A. Ross, Abstract Harmonic Analysis, Vol. II, Grundlehren Math. Wiss. 152, Springer, Berlin, 1970. [94] J. P. Holmes, W. Hudson andJ. D. Mason, Operator-stable laws, multiple exponents and elliptical symmetry, Ann. Probab. 10 (1982), 602–612. [95] S. T. Huang andS. Cambanis, Spherically invariant processes; their nonlinear struc- ture, discrimination, and estimation, J. Multivariate Anal. 9 (1979), 59–83. [96] I. A. Ibragimov and Yu. V. Linnik, Independent and Stationary Sequences of Ran- dom Variables, J. F. C. Kingman (ed.), Wolters–Noordhoff, Groningen, 1971. [97] Pang I-Min, Simple proof of equivalence conditions for measures induced by Gaussian processes, Selected Transl. in Math. Statist. and Probab. 12 (1973), 103–118. [98] A. Janicki and A. Weron, Simulation and Chaotic Behaviour of α-stable Stochastic Processes, Marcel Dekker, New York, 1994. [99] D. R. Jensen, Linear models without moments, Biometrica 66 (1979), 611–617. [100] K. Joag-dev, M. D. Perlman andL. D. Pitt, Association of normal random variables and Slepian’s inequality, Ann. Probab. 11 (1983), 451–455. [101] K. Jogdeo, A simple proof of an inequality for multivariate normal probabilities of rectangles, Ann. Math. Statist. 41 (1970), 1357–1359. [102] M. E. Johnson andJ.S. Ramberg, Elliptically symmetric distributions: Characteriza- tion and random variate generation, A.S.A. Proc. Statist. Comp. Sect. 1977, 262–265. 84 J. K. Misiewicz

[103] Z. J. Jurek, On L´evy (spectral) measures of integral form on Banach spaces, Probab. Math. Statist. 11 (1990), 139–148. [104] O. Kallenberg, Infinitely divisible processes with interchangeable increments and ran- dom measures under convolution, Z. Wahrsch. Verw. Gebiete 32 (1975), 309–321. [105] —, Some new representations in bivariate exchangeability, Probab. Theory Related Fields 77 (1988), 415–455. [106] —, Characterizations and embedding properties in exchangeability, Z. Wahrsch. Verw. Gebiete 60 (1982), 249–281. [107] —, Some linear random functionals characterized by Lα-symmetries, in: Stochastic Pro- cesses: A Festschrift in Honour of Gopinath Kallianpur, Springer, New York, 1983, 171– 180. [108] G. Kallianpur, The role of reproducing kernel Hilbert spaces in the study of Gaus- sian processes, in: Advances in Probability and Related Topics, Vol. 2, P. Ney (ed.), M. Dekker, New York, 1970. [109] —, Abstract Wiener spaces and reproducing kernel Hilbert spaces, Z. Wahrsch. 17 (1971), 345–347. [110] G. Kallianpur and H. Oodaira, The equivalence and singularity of Gaussian pro- cesses, in: Proc. Sympos. on Time Series Analysis, Wiley, New York, 1963, 279–291. [111] —, —, Non-anticipative representations of equivalent Gaussian processes, Ann. Probab. 1 (1973), 104–122. [112] M. Kanter, Linear sample spaces and stable processes, J. Funct. Anal. 9 (1972), 441–459. [113] —, A representation theorem for Lp spaces, Proc. Amer. Math. Soc. 31 (1972), 472–474. [114] —, Stable laws and the embedding of Lp spaces, Amer. Math. Monthly 80 (1973), 403–407. [115] Y. Kasahara and M. Maejima, Weighted sums of i.i.d. random variables attracted to integrals of stable processes, Probab. Theory Related Fields 78 (1988), 75–96. [116] D. Kelker, Distribution theory of spherical distributions and some characterization the- orems, Technical Report rm. 210, dk-1., Michigan State University, 1958. 
[117] —, Distribution theory of spherical distributions and a location-scale parameter general- ization, Sankhy¯aSer. A 32 (1970), 419–438. [118] —, Infinite divisibility and variance mixtures of the normal distribution, Ann. Math. Statist. 42 (1971), 802–808. [119] J. F. C. Kingman, Random walks with spherical symmetry, Acta Math. 109 (1963), 11–53. [120] —, On random sequences with spherical symmetry, Biometrica 59 (1972), 492–494. [121] A. L. Koldobsky, Schoenberg’s problem on positive definite functions, Algebra and Analysis (Leningrad Math. J.) 3 (1991), 78–85. [122] —, Convolution equations in certain Banach spaces, Proc. Amer. Math. Soc. 111 (1991), 755–765. [123] —, A Banach subspace of L1/2 which does not embed in L1 (isometric version), preprint of Banach Space Bulletin Board, 1993. [124] —, Generalized L´evy representation of norms and isometric embeddings into Lp-spaces, Ann. Inst. H. Poincar´e28 (1992), 335–353. [125] —, Common subspaces of Lp-spaces, Proc. Amer. Math. Sci. 122 (1994), 207–212. [126] L. S. Kudina, On decomposition of radially symmetric distributions, Theory Probab. Appl. 20 (1975), 644–648. [127] J. Kuelbs, Positive definite symmetric functions on linear spaces, J. Math. Anal. Appl. 42 (1973), 413–426. [128] —, Representation theorem for symmetric stable processes and stable measures on H, Z. Wahrsch. Verw. Gebiete 26 (1973), 259–271. [129] H.-H. Kuo, Gaussian Measures in Banach Spaces, Lecture Notes in Math. 463, Springer, Berlin, 1973. Substable and pseudo-isotropic processes 85

[130] Yu. G. Kuritsyn, Multidimensional versions and two Schoenberg problems, in: Sta- bility Problems for Stochastic Models, Proc. of the Seminar, Moscow, Inst. for System Studies, Moscow, 1989, 72–79. [131] Yu. G. Kuritsyn and A. V. Shestakov, On α-symmetric distributions, Theory Probab. Appl. 29 (1984), 804–806. [132] S. Kwapie´nand W. A. Woyczy´nski, Random Series and Stochastic Integrals—Single and Multiple, Springer, New York, 1992. [133] A. G. Laurent, Applications of fractional calculus to spherical (radial) probability mod- els and generalizations, in: Fractional Calculus and its Applications, Lecture Notes in Math. 457, Springer, New York, 1974, 256–266. [134] M. Ledoux and M. Talagrand, Probability in Banach Spaces, Isoperimetry and Pro- cesses, Springer, 1991. [135] E. Lehman, Some concepts of dependence, Ann. Math. Statist. 37 (1966), 1137–1153. [136] G. Letac, Isotropy and sphericity; some characterizations of the normal distribution, Ann. Statist. 9 (1981), 408–417. [137] H. M. Leung andS. Cambanis, On the rate distortion of spherically invariant vectors and sequences, IEEE Trans. Inform. Theory IT-24 (1978), 367–373. [138] P. L´evy, Th´eorie de l’addition des variables al´eatoires, Gauthier-Villars, Paris, 1937. [139] W. Linde, Infinitely divisible and stable measures on Banach spaces, Teubner Texte zur Math. 58, Leipzig, 1983. [140] W. Linde andP. Mathe, Conditional symmetries of stable measures on Rn, Ann. Inst. H. Poincar´eSect. B 19 (1983), 57–69. [141] J. Lindenstrauss, On the extension of operators with finite dimensional range, Illinois J. Math. 8 (1964), 488–499. [142] J. Lindenstrauss and L. Tzafriri, Classical Banach Spaces, Lecture Notes in Math. 338, Springer, 1973. [143] A. D. Lisitsky, One more solution of the Schoenberg problem, preprint, 1991. [144] D. Louie, B. S. Rajput and A. Tortrat, A zero-one dichotomy theorem for r-semi- stable laws on infinite dimensional linear spaces, Sankhy¯aSer. A 42 (1980), 9–18. [145] A.L. 
Luczak, Elliptical symmetry and characterization of operator-stable and operator- semi-stable measures, Ann. Probab. 12 (1984), 1217–1223. [146] E. Lukacs, Characteristic Functions, Griffin, London, 1960. [147] E. Lukacs and R. G. Laha, Applications of Characteristic Functions, Hafner, 1964. [148] C. L. Mallows, A note on asymptotic joint normality, Ann. Math. Statist. 43 (1972), 508–515. [149] V. Mandrekar, Multiparameter Gaussian processes and their Markov property, lecture notes, EPF-Lausanne, 1975. [150] D. J. Marcus, Non-stable laws with all projections stable, Z. Wahrsch. Verw. Gebiete 64 (1983), 139–156. [151] M. Marques andS. Cambanis, Admissible and singular translates of stable processes, in: Probability Theory on Vector Spaces IV,La´ncut 1987, Lecture Notes in Math. 1391, Springer, 1989, 239–257. [152] G. Marsaglia, Choosing a point from the surface of a sphere, Ann. Math. Statist. 43 (1972), 645–646. [153] R. D. Martin and S. C. Schwartz, On mixture, quasi-mixture and nearly normal random processes, ibid. 43 (1972), 948–967. [154] E. Masry andS. Cambanis, Spectral density estimation for stationary stable processes, Stochastic Process. Appl. 18 (1984), 1–31. [155] D. K. McGraw and J. F. Wagner, Elliptically symmetric distributions, IEEE Trans. Inform. Theory 14 (1968), 110–120. 86 J. K. Misiewicz

[156] M. B. Mendel, Development of Bayesian parametric theory with applications to control, Doctoral dissertation, MIT, 1989.
[157] L. Mezrag, Théorèmes de factorisation et de prolongement pour les opérateurs à valeurs dans les espaces Lp, pour p < 1, C. R. Acad. Sci. Paris Sér. I 300 (1985), 289–302.
[158] J. K. Misiewicz, Elliptically contoured measures on R^∞, Bull. Acad. Polon. Sci. Sér. Sci. Math. 30 (1982), 283–290.
[159] —, Some remarks on elliptically contoured measures, in: Probability Theory on Vector Spaces III, Lecture Notes in Math. 1080, Springer, 1984, 170–174.
[160] —, Characterization of the elliptically contoured measures on infinite-dimensional Banach spaces, Probab. Math. Statist. 4 (1984), 47–56.
[161] —, Infinite divisibility of elliptically contoured measures, Bull. Polish Acad. Sci. Math. 33 (1985), 73–76.
[162] —, On norm-dependent positive definite functions, Soobshch. Akad. Nauk Gruzin. SSR 130 (1988), 253–256.
[163] —, Positive definite functions on l^∞, Statist. Probab. Lett. 8 (1989), 255–260.
[164] —, Some remarks on measures with n-dimensional versions, Probab. Math. Statist. 13 (1992), 71–76.
[165] —, L1-dependent sequences of random variables, in: L1-Statistical Analysis and Related Methods, Y. Dodge (ed.), Elsevier, 1992, 431–437.
[166] —, Infinite divisibility of substable processes. I. Geometry of subspaces of Lα-spaces, Stochastic Process. Appl. 56 (1995), 101–116.
[167] —, Infinite divisibility of substable processes. II. Logarithm of probability measure, in: Proceedings of XVII Seminar on Stability Problems, Kazan 1995, to appear.
[168] —, Exchangeability and pseudo-isotropy, Demonstratio Math. 29 (1996), 107–122.
[169] —, Some remarks on spectral representation for symmetric stable processes, in: Proceedings of XVII Seminar on Stability Problems, Eger 1994, to appear.
[170] J. K. Misiewicz and C. Ryll-Nardzewski, Norm dependent positive definite functions and measures on vector spaces, in: Probability Theory on Vector Spaces IV, Łańcut 1987, Lecture Notes in Math. 1391, Springer, 1989, 284–292.
[171] J. K. Misiewicz and D. St. P. Richards, Necessary conditions for α-symmetric random vectors, preprint, 1991.
[172] J. K. Misiewicz and C. L. Scheffer, Pseudo-isotropic measures, Nieuw Arch. Wisk. 8 (1990), 111–152.
[173] Y. Mittal, A new mixing condition for stationary Gaussian processes, Ann. Probab. 7 (1979), 724–730.
[174] D. S. Moak, Completely monotonic functions of the form s^{-b}(s^2 + 1)^{-a}, Rocky Mountain J. Math. 17 (1987), 719–725.
[175] D. Nash and M. S. Klamkin, A spherical characterization of the normal distribution, J. Multivariate Anal. 55 (1976), 156–158.
[176] A. Neyman, Representation of Lp-norms and isometric embedding in Lp-spaces, Israel J. Math. 48 (1984), 129–138.
[177] I. Nimmo-Smith, Linear regression and sphericity, Biometrika 66 (1979), 390–392.
[178] J. P. Nolan, Path properties of index-β stable fields, Ann. Probab. 16 (1988), 1596–1607.
[179] —, Continuity of symmetric stable processes, J. Multivariate Anal. 29 (1989), 84–93.
[180] Y. Okazaki, Elliptically contoured measures on locally convex spaces, preprint, 1973.
[181] K. R. Parthasarathy, Probability Measures on Metric Spaces, Academic Press, New York, 1967.
[182] G. P. Patil and M. T. Boswell, Characteristic property of the multivariate normal density function and some of its applications, Ann. Math. Statist. 41 (1970), 1970–1977.
[183] V. J. Paulauskas, Some remarks on multivariate stable distributions, J. Multivariate Anal. 6 (1976), 356–368.
Substable and pseudo-isotropic processes 87

[184] G. Pisier, Factorization of Linear Operators and Geometry of Banach Spaces, CBMS Regional Conf. Ser. in Math. 60, Amer. Math. Soc., 1986.
[185] A. I. Plotkin, Continuation of Lp-isometries, J. Soviet Math. 2 (1974), 143–165.
[186] —, An algebra generated by translation operators and Lp-norms, in: 6, Ul'yanovsk, 1976, 112–121 (in Russian).
[187] G. Pólya, Herleitung des Gauss'schen Fehlergesetzes aus einer Funktionalgleichung, Math. Z. 18 (1923), 96–108.
[188] A. P. Prudnikov, Yu. A. Brychkov and O. I. Marychev, Integrals and Series, Nauka, Moscow, 1981 (in Russian).
[189] J. Radon, Über die Bestimmung von Funktionen durch ihre Integralwerte längs gewisser Mannigfaltigkeiten, Ber. Verh. Sächs. Akad. Wiss. Leipzig Math.-Natur. Kl. 69 (1917), 262–277.
[190] B. S. Rajput, A representation of the characteristic function of a stable probability measure on certain topological vector spaces, J. Multivariate Anal. 6 (1976), 592–600.
[191] —, On the support of certain symmetric stable probability measures on topological vector spaces, Proc. Amer. Math. Soc. 63 (1977), 306–312.
[192] —, On the support of symmetric infinitely divisible and stable probability measures on locally convex topological vector spaces, ibid. 66 (1977), 331–334.
[193] B. S. Rajput and N. N. Vakhania, On the support of Gaussian probability measures on locally convex topological vector spaces, in: Multivariate Analysis IV, P. R. Krishnaiah (ed.), North-Holland, 1977, 297–309.
[194] A. Rényi, On projections of probability distributions, Acta Math. Acad. Sci. Hungar. 3 (1952), 131–141.
[195] P. Ressel, De Finetti-type theorems, an analytical approach, Ann. Probab. 13 (1985), 898–922.
[196] —, Integral representations for distributions of symmetric stochastic processes, Probab. Theory Related Fields 79 (1988), 451–467.
[197] P. Ressel and W. Schmidtchen, A new characterization of Laplace functionals and probability generating functionals, ibid. 88 (1991), 195–213.
[198] W. T. Rhee, On the distribution of the norm for Gaussian measure, Ann. Inst. H. Poincaré Probab. Statist. 20 (1984), 277–286.
[199] D. St. P. Richards, Positive definite symmetric functions on finite dimensional spaces, Statist. Probab. Lett. 3 (1985), 325–329.
[200] —, Positive definite symmetric functions on finite dimensional spaces. I. Application of the Radon transform, J. Multivariate Anal. 19 (1986), 280–298.
[201] H. Rosenthal, On subspaces of Lp, Ann. of Math. 97 (1973), 344–373.
[202] J. Rosiński, On uniqueness of the spectral representation of stable processes, preprint, University of Tennessee, Knoxville, 1993.
[203] W. Rudin, Functional Analysis, McGraw-Hill, 1973.
[204] —, Lp-isometries and equimeasurability, Indiana Univ. Math. J. 25 (1976), 215–228.
[205] Z. Rychlik, On some inequalities for the concentration function of the sum of a random number of independent random variables, Bull. Acad. Polon. Sci. 22 (1974), 65–70.
[206] Z. Rychlik and D. Szynal, On the limit behaviour of sums of a random number of independent random variables, Colloq. Math. 28 (1973), 147–159.
[207] —, —, On the convergence rates in the central limit theorem for the sums of a random number of independent identically distributed random variables, Bull. Acad. Polon. Sci. 22 (1974), 683–690.
[208] C. Ryll-Nardzewski, On stationary sequences of random variables and the de Finetti's equivalence, Colloq. Math. 4 (1957), 149–156.
[209] G. Samorodnitsky, Extrema of skewed stable processes, Stochastic Process. Appl. 30 (1988), 17–39.

[210] G. Samorodnitsky and M. Taqqu, 1/α-self-similar α-stable processes with stationary increments, J. Multivariate Anal. 35 (1990), 308–313.
[211] —, —, Conditional moments and linear regression for stable random variables, Stochastic Process. Appl. 39 (1991), 183–199.
[212] —, —, Stable Non-Gaussian Random Processes: Stochastic Models with Infinite Variance, Chapman & Hall, London, 1993.
[213] M. Schilder, Some structure theorems for the symmetric stable laws, Ann. Math. Statist. 41 (1970), 412–421.
[214] G. Schechtman, Fine embeddings of finite dimensional subspaces of Lp, 1 ≤ p < 2, into ℓ_1^m, Proc. Amer. Math. Soc. 94 (1985), 617–623.
[215] I. J. Schoenberg, Metric spaces and completely monotone functions, Ann. of Math. 38 (1938), 811–841.
[216] —, On certain metric spaces arising from Euclidean spaces by change of metric and their embedding in Hilbert spaces, ibid. 38 (1938), 787–793.
[217] —, Metric spaces and positive definite functions, Trans. Amer. Math. Soc. 44 (1938), 522–536.
[218] M. Schreiber, Quelques remarques sur les caractérisations des espaces Lp, 0 ≤ p < 1, Ann. Inst. H. Poincaré 8 (1972), 83–92.
[219] L. Schwartz, Radon measures on arbitrary topological spaces and cylindrical measures, Tata Institute of Fundamental Research, Oxford Univ. Press, 1973.
[220] A. J. Scott, A note on conservative confidence regions for the means of multivariate normal, Ann. Math. Statist. 38 (1967), 278–280.
[221] Z. Šidák, Rectangular confidence regions for the means of multivariate normal distributions, J. Amer. Statist. Assoc. 62 (1967), 626–633.
[222] —, On multivariate normal probabilities of rectangles, their dependence on correlations, Ann. Math. Statist. 39 (1968), 1425–1434.
[223] D. Slepian, The one-sided barrier problem for Gaussian noise, Bell System Tech. J. 41 (1962), 463–501.
[224] W. Smoleński and R. Sztencel, On admissible translates of sub-Gaussian stable measures, to appear.
[225] F. W. Steutel, A class of infinitely divisible mixtures, Ann. Math. Statist. 39 (1968), 1153–1157.
[226] P. J. Szabłowski, Expansions of E(X | Y + εZ) and their applications to the analysis of elliptically contoured measures, to appear.
[227] —, On the properties of marginal densities and conditional moments of elliptically contoured measures, in: Proceedings 6th Pannonian Sympos. Math. Statist. Probab. Theory, vol. A, 1987, 237–252.
[228] —, From Schoenberg's problem to rotation invariant moments. Not always standard exploitation of Lq-norms, in: L1-Statistical Analysis and Related Methods, Y. Dodge (ed.), Elsevier, 1992, 439–451.
[229] M. Talagrand, On subsets of Lp and p-stable processes, Ann. Inst. H. Poincaré 25 (1989), 153–166.
[230] D. Teichroew, The mixture of normal distributions with different variances, Ann. Math. Statist. 29 (1958), 510–512.
[231] A. Tortrat, Sur les mélanges de lois indéfiniment divisibles, C. R. Acad. Sci. Paris Sér. A-B 269 (1969), 784–786.
[232] —, Mélange de lois et lois indéfiniment divisibles, in: Proc. IVth Conf. Probab. Theory, Braşov, Romania, 1971, 227–244.
[233] —, Lois e(λ) dans les espaces vectoriels et lois stables, Z. Wahrsch. Verw. Gebiete 37 (1976), 175–182.

[234] N. N. Vakhania, W. I. Tarieladze and S. A. Chobanian, Probability Distributions on Banach Spaces, Nauka, Moscow, 1985 (in Russian).
[235] A. I. Velikoivanenko, Multidimensional analogues of the Pólya theorem, Theor. Probab. Math. Statist. 34 (1987), 39–46.
[236] A. M. Vershik, Some characteristic properties of Gaussian stochastic processes, Theory Probab. Appl. 9 (1964), 353–356.
[237] A. Weron, Stable processes and measures; a survey, in: Probability Theory on Vector Spaces III, D. Szynal and A. Weron (eds.), Lecture Notes in Math. 1080, Springer, New York, 1984, 300–364.
[238] —, A remark on disjointness results for stable processes, Studia Math. 105 (1993), 253–254.
[239] R. E. Williamson, Multiply monotone functions and their Laplace transforms, Duke Math. J. 23 (1956), 189–207.
[240] H. S. Witsenhausen, Metric inequalities and the zonoid problem, Proc. Amer. Math. Soc. 40 (1973), 517–520.
[241] A. Wodziński, Reproducing kernels for stable measures on Banach spaces, Report of Technical Univ. of Wrocław, 1985.
[242] J. S. Wolfe, On the unimodality of spherically symmetric stable distribution functions, J. Multivariate Anal. 5 (1975), 236–242.
[243] W. Woyczyński, Geometry and martingales in Banach spaces, Part II: independent increments, in: Adv. in Probab. 4, Dekker, 1978, 267–517.
[244] K. Yao, A representation theorem and its applications to spherically invariant random processes, IEEE Trans. Inform. Theory IT-19 (1973), 600–608.
[245] W. P. Zastawny, Positive definite norm-dependent functions. Solution of the Schoenberg theorem, preprint, 1991.
[246] J. Zinn, Admissible translates of stable measures, Studia Math. 54 (1976), 245–257.
[247] V. M. Zolotarev, One-dimensional Stable Distributions, Transl. Math. Monographs 65, Amer. Math. Soc., Providence, 1986.
[248] —, Distribution of the superposition of infinitely divisible processes, Theory Probab. Appl. 3 (1958), 197–200.
[249] T. Żak, Admissible translates for sub-Gaussian measures, Probab. Math. Statist. 9 (1988), 125–131.

Index

A(X), 50
E(α, ℜ, Θ), 32, 33
EC(ϕ, ℜ, n), 24
Exp(ν), 67
sExp(ν), 66
H(X), 15, 52, 53
I(f), 56, 63
I_1(x), 27
I_2(x), 26
I_c(x), 22
I_∞(x, y, z), 28
J(f), 63
K_α(X, Y), 58
L(X), 50
L°(X), 51
M(c, n), 19
MS(α), 53
P_+, 28
S(α, 1), 10
S(α, c), 10
SS(α), 53
Z_α, 56, 63
Θ_α, 10
Φ_n(α), 28
Φ_n(1), 29
Φ_n(2), 29
Φ_n(∞), 29
Ω_n, 25
α°, 15, 34, 52, 60
α_s, 53, 60
γ_α^+, 10
γ_α^+ ◦ λ, 33, 75
γ_α ◦ λ, 33, 75
ω(du), 22
f ⊙ λ, 28

α-stable random vector, 14
α-stable stochastic integral, 56
α-stable stochastic process, 50
α-symmetric distribution, 27
α-symmetric random vector, 27
α-times monotonic function, 25
approximative logarithm, 68
Bretagnolle, Dacunha-Castelle and Krivine theorem, 41
Christensen and Ressel theorem, 43
conditionally independent random variables, 36
consistent characteristic functions, 35
consistent family of measures, 35
covariation, 58
de Finetti theorem, 36
Dvoretzky theorem, 43
elliptically contoured process, 47
elliptically contoured random vector, 23
exchangeability condition, 36
exchangeable sequence of random variables, 36
generalized Poisson measure, 66
independent increments, 57
independently scattered random measure, 56, 63
infinitely divisible distribution, 67
Kolmogorov theorem, 35
L_α-dependent stochastic integral, 62
L_α-symmetric stochastic process, 60, 62
Lévy–Khinchin theorem, 67
Lévy measure, 66
linear regression property, 49
logarithm of a probability measure, 67
maximal representation for a substable random vector, 34
mixing measure, 32
orthogonality in the James sense, 58
pseudo-isotropic random vector, 15
pure measure, 16
quasi-norm, 11, 17, 41
random limit theorem, 64
Reproducing Kernel Hilbert Space, 46
Reproducing Kernel Space, 52
rotationally invariant random vector, 24
scale mixture of stable random vectors, 32
Schoenberg theorem, 40
self-similar stochastic process, 57
set of admissible translations, 50
space containing ℓ_α^n's uniformly, 43
space of admissible translations, 47
spectral measure, 13, 14
spectral representation of a symmetric α-stable process, 51
spherically contoured random vector, 24
spherically generated random vector, 24
stable distribution, 9
stable random vector, 11
stationary stochastic process, 57
substable random vector, 32
substable stochastic process, 59, 78
symmetric α-stable Lévy motion, 50
symmetric Gaussian process, 46
symmetric stable distribution, 9