IMS Collections Vol. 0 (2009) 251–265
© Institute of Mathematical Statistics, 2009
arXiv: math.PR/0000011

Recovery of Distributions via Moments

Robert M. Mnatsakanov¹,* and Artak S. Hakobyan²,*

West Virginia University

Abstract: The problem of recovering a cumulative distribution function (cdf) and the corresponding density function from its moments is studied. This problem is a special case of the classical moment problem. The results obtained within the moment problem can be applied in many indirect models, e.g., those based on convolutions, mixtures, multiplicative censoring, and right-censoring, where the moments of the unobserved distribution of actual interest can easily be estimated from the transformed moments of the observed distributions. Nonparametric estimation of a quantile function via the moments of a target distribution represents another very interesting area where the moment problem arises. In all such models one can use the procedure that recovers a function via its moments. In this article some properties of the proposed constructions are derived. The uniform rates of convergence of the approximants of a cdf, its density function, and the quantile function are obtained as well.

Contents

1 Introduction . . . . . . 251
2 Some Notations and Assumptions . . . . . . 253
3 Asymptotic Properties of Fα,ν and fα,ν . . . . . . 254
4 Some Applications and Examples . . . . . . 258
Acknowledgments . . . . . . 264
References . . . . . . 264

1. Introduction

The probabilistic Stieltjes moment problem can be described as follows: let a sequence ν = {µ_j, j = 0, 1, . . . } of real numbers be given. Find a probability distribution on the non-negative real line R₊ = [0, ∞) such that µ_j = ∫ t^j dF(t) for j ∈ N = {0, 1, . . . }. The classical Stieltjes moment problem was introduced first by Stieltjes [21].
When the support of the distribution F is compact, say supp{F} = [0, T] with T < ∞, the corresponding problem is known as a Hausdorff moment problem.
There are two important questions related to the Stieltjes (or Hausdorff) moment problem:

(i) If the distribution F exists, is it uniquely determined by the moments {µ_j}?
(ii) How is this uniquely defined distribution F reconstructed?

¹Department of Statistics, West Virginia University, Morgantown, WV 26506, email: [email protected]
²Department of Industrial and Management Systems Engineering, West Virginia University, Morgantown, WV 26506, email: [email protected]
*Work supported by the National Institute for Occupational Safety and Health, contract no. 212-2005-M-12857.
AMS 2000 subject classifications: Primary 62G05; secondary 62G20.
Keywords and phrases: Probabilistic moment problem, M-determinate distribution, Moment-recovered distribution, Uniform rate of convergence.

imsart-coll ver. 2008/08/29 file: Mnatsak.tex date: January 31, 2009

If there is a positive answer to question (i), we say that the distribution F is moment-determinate (M-determinate); otherwise it is M-indeterminate.
In this paper we mainly address the question of recovering an M-determinate distribution (as well as its density and quantile functions) via its moments in the Hausdorff moment problem, i.e., we study question (ii). Another question we focus on here is the estimation problem of an unknown distribution and its quantile function, given the estimated moments of the target distribution.
It is known from the probabilistic moment problem that under suitable conditions an M-determinate distribution is uniquely defined by its moments. There are many articles that investigated the conditions (for example, Carleman's and Krein's conditions) under which distributions are either M-determinate or M-indeterminate; see Akhiezer [2], Feller [6], Lin [10, 11], and Stoyanov [22–24], among others. However, there are very few works dealing with the reconstruction of distributions via their moments. Several inversion formulas were obtained by inverting the moment generating function and the Laplace transform (Shohat and Tamarkin [20], Widder [27], Feller [6], Chauveau et al. [4], and Tagliani and Velasquez [25]). These methods are too restrictive, since there are many distributions for which the moment generating function does not exist even though all the moments are finite.
The reconstruction of an M-determinate cdf by means of mixtures having the same assigned moments as the target distribution has been proposed in Lindsay et al. [12]. Note that this procedure requires the calculation of high-order Hankel determinants, and due to the ill-conditioning of the Hankel matrices the method is not useful when the number of assigned moments is large.
The reconstruction of unknown density functions using the Maximum Entropy principle with specified ordinary and fractional moments has been studied in Kevasan and Kapur [9] and Novi Inverardi et al. [18], among others.
In Mnatsakanov and Ruymgaart [17] the constructions (2.2) and (3.13) (see Sections 2 and 3 below) were introduced, and only their convergence in a weak sense was established.
Different types of convergence of maximum entropy approximants have been studied by Borwein and Lewis [3], Frontini and Tagliani [7], and Novi Inverardi et al. [18], but the rates of approximation have not been established yet. Our construction enables us to derive the uniform rate of convergence for the moment-recovered cdfs Fα,ν and the corresponding quantile function approximant Qα, as well as the uniform convergence of the moment-recovered density approximant fα,ν, as the parameter α → ∞. Other constructions of moment-recovered cdfs and pdfs (see (3.13) and (3.14) in Remark 3.3) were proposed in Mnatsakanov [13, 14], where the uniform and L₁-rates of approximation were established for them.
The paper is organized as follows: in Section 2 we introduce some notations and assumptions, while in Section 3 we study the properties of Fα,ν and fα,ν. Note that our construction also makes it possible to recover different distributions through simple transformations of the moment sequences of given distributions (Theorem 3.1 and Corollary 3.1). In Theorem 3.2 we state the uniform rate of convergence for moment-recovered cdfs. In Corollaries 3.2 and 3.3, and in Theorem 3.3, we apply the constructions (2.2) and (3.11) to recover the pdf f, the quantile function Q, and the corresponding quantile density function q of F, given the moments of F.
In Section 4 some other applications of the constructions (2.2) and (3.11) are discussed: the uniform convergence of the empirical counterparts of (2.2) and (3.11), the rate of approximation of the moment-recovered quantile function (see (4.4) in Section 4), along with the demixing and deconvolution problems in several particular models.
Note that our approach is particularly applicable in situations where other estimators cannot be used, e.g., in situations where only (empirical) moments are available. The results obtained in this paper will not be compared with similar results derived by other methods. We only carry out the calculations of moment-recovered cdfs, pdfs, and quantile functions, and compare them with the target distributions via graphs in several simple examples. We also compare the performances of Fα,ν and fα,ν with the similar constructions studied in Mnatsakanov [13, 14] (see Figures 1 (b) and 3 (b)). The moment-estimated quantile function Q̂α and the well-known Harrell–Davis quantile function estimator Q̂_HD (Sheather and Marron [19]), defined in (4.6) and (4.7), respectively, are compared as well (see Figure 2 (b)).

2. Some Notations and Assumptions

Suppose that the M-determinate cdf F is absolutely continuous with respect to the Lebesgue measure and has support [0, T], T < ∞. Denote the corresponding density function by f. Our method of recovering the cdf F(x), 0 ≤ x ≤ T, is based on an inverse transformation that yields a solution of the Hausdorff moment problem. Let us denote the ordinary moments of F by

(2.1)   µ_{j,F} = ∫ t^j dF(t) = (KF)(j),   j ∈ N,

and assume that the moment sequence ν = (µ_{0,F}, µ_{1,F}, . . . ) determines F uniquely. An approximate inverse of the operator K from (2.1), constructed according to

(2.2)   (K_α^{-1} ν)(x) = Σ_{k=0}^{[αx]} Σ_{j=k}^{∞} ((−α)^{j−k}/(j−k)!) (α^k/k!) µ_{j,F},   0 ≤ x ≤ T,  α ∈ R₊,

is such that K_α^{-1}KF →_w F as α → ∞ (see Mnatsakanov and Ruymgaart [17]). Here →_w denotes the weak convergence of cdfs, i.e., convergence at each continuity point of the limiting cdf.
The success of the inversion formula (2.2) hinges on the convergence

(2.3)   P_α(t, x) = Σ_{k=0}^{[αx]} e^{−αt} (αt)^k/k!  →  1 if t < x,  0 if t > x,

as α → ∞.
This result is immediate from a suitable interpretation of the left-hand side as a sum of Poisson probabilities.
For any moment sequence ν = {ν_j, j ∈ N}, let us denote by F_ν the cdf recovered via Fα,ν = K_α^{-1}ν according to (2.2) as α → ∞, i.e.,

(2.4)   Fα,ν →_w F_ν ,  as α → ∞.

Note that if ν = {µ_{j,F}, j ∈ N} is the moment sequence of F, the statement (2.4) with F_ν = F is proved in Mnatsakanov and Ruymgaart [17].
To recover the pdf f via its moment sequence {µ_{j,F}, j ∈ N}, consider the ratio

(2.5)   fα,ν(x) = ΔFα,ν(x)/Δ ,   Δ = 1/α,

where ΔFα,ν(x) = Fα,ν(x + Δ) − Fα,ν(x) and α → ∞.
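Since formula (2.2) involves an alternating series with very large terms, a direct floating-point evaluation is numerically unstable. The following small sketch (our illustration, not the authors' code; the function and parameter names are ours) evaluates a truncated version of (2.2) in exact rational arithmetic, using the moment sequence µ_j = 1/(j + 1) of the uniform cdf F(x) = x as a test case.

```python
from fractions import Fraction
from math import factorial

def cdf_approx(x, mu, alpha, M=120):
    """Moment-recovered cdf F_{alpha,nu}(x) = (K_alpha^{-1} nu)(x), formula (2.2).
    Exact rational arithmetic avoids the heavy cancellation in the alternating
    inner sum; M truncates the inner summation over j = k + m."""
    total = Fraction(0)
    for k in range(int(alpha * x) + 1):
        inner = sum(Fraction((-alpha) ** m, factorial(m)) * mu(k + m)
                    for m in range(M + 1))
        total += Fraction(alpha ** k, factorial(k)) * inner
    return float(total)

# Moments of the uniform cdf F(x) = x on [0,1]: mu_j = 1/(j+1).
mu = lambda j: Fraction(1, j + 1)
val = cdf_approx(0.5, mu, alpha=20)
# val is close to F(0.5) = 0.5, up to the O(1/alpha) bias quantified in Theorem 3.2
```

With α = 20 the value at x = 0.5 comes out near 0.55 rather than 0.50; this shift of order 1/α is exactly the bias made precise in Theorem 3.2, and it shrinks as α (and the truncation level M with it) grows.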


In the sequel the uniform convergence on any bounded interval from R₊ will be denoted by →_u, while the L₁-norm between two functions f₁ and f₂ is denoted by ||f₁ − f₂||_{L₁}. Note also that the statements from Sections 3 and 4 are valid for distributions defined on any compact [0, T], T < ∞. Without loss of generality we assume that F has support [0, 1].

3. Asymptotic Properties of Fα,ν and fα,ν

In this section we present some asymptotic properties of the moment-recovered cdf Fα,ν and the moment-recovered pdf fα,ν based on the transformation K_α^{-1}ν from (2.2). The uniform approximation rate of Fα,ν and the uniform convergence of fα,ν are derived as well.
Denote the family of all cdfs defined on [0, 1] by F. The construction (2.2) also gives us the possibility to recover two non-linear operators A_k : F × F → F, k = 1, 2, defined as follows: denote the convolution with respect to the multiplication operation on R₊ by

(3.1)   F₁ ⊗ F₂(x) = ∫ F₁(x/τ) dF₂(τ) := A₁(F₁, F₂)(x),   0 ≤ x ≤ 1,

while the convolution with respect to the addition operation is denoted by

F₁ ⋆ F₂(x) = ∫ F₁(x − τ) dF₂(τ) := A₂(F₁, F₂)(x),   0 ≤ x ≤ 2.

For any two moment sequences ν₁ = {µ_{j,F₁}, j ∈ N} and ν₂ = {µ_{j,F₂}, j ∈ N}, let us use the following notations: ν₁ ⊙ ν₂ = {µ_{j,F₁} × µ_{j,F₂}, j ∈ N} and ν₁ ⊕ ν₂ = {ν̄_j, j ∈ N}, where

(3.2)   ν̄_j = Σ_{m=0}^{j} \binom{j}{m} µ_{m,F₁} × µ_{j−m,F₂}.

Also denote by F ∘ φ^{-1} the composition F(φ^{-1}(x)), x ∈ [0, 1], where φ : [0, 1] → [0, 1] is a continuous and increasing function.
Since the cdfs A₁(F₁, F₂) = F₁ ⊗ F₂, A₂(F₁, F₂) = F₁ ⋆ F₂, and F ∘ φ^{-1} have compact support, they all are M-determinate and have the moment sequences ν₁ ⊙ ν₂, ν₁ ⊕ ν₂, and ν = {µ̄_j, j ∈ N}, with

(3.3)   µ̄_j = ∫ [φ(t)]^j dF(t),

respectively.
Hence, applying Theorem 3.1 from Mnatsakanov and Ruymgaart [17], we derive

Theorem 3.1.
(i) If ν = ν₁ ⊙ ν₂, then (2.4) holds with F_ν = F₁ ⊗ F₂;
(ii) If ν = ν₁ ⊕ ν₂, then (2.4) holds with F_ν = F₁ ⋆ F₂;
(iii) If ν = {µ̄_j, j ∈ N}, with µ̄_j defined in (3.3), then (2.4) holds with F_ν = F ∘ φ^{-1}.
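Part (i) admits a quick numerical check (our sketch, not the authors' code; the truncation level M and all names are ours): squaring the uniform moments gives ν = {1/(j + 1)², j ∈ N}, which by (i) should recover F ⊗ F(x) = x − x ln x for F(x) = x. The sketch evaluates (2.2) at one point in exact rational arithmetic.

```python
from fractions import Fraction
from math import factorial, log

def cdf_approx(x, mu, alpha, M=120):
    # formula (2.2) in exact rational arithmetic (inner sum truncated at M)
    total = Fraction(0)
    for k in range(int(alpha * x) + 1):
        inner = sum(Fraction((-alpha) ** m, factorial(m)) * mu(k + m)
                    for m in range(M + 1))
        total += Fraction(alpha ** k, factorial(k)) * inner
    return float(total)

# elementwise product of two uniform moment sequences: 1/(j+1)^2
mu_prod = lambda j: Fraction(1, (j + 1) ** 2)
approx = cdf_approx(0.5, mu_prod, alpha=20)
target = 0.5 - 0.5 * log(0.5)   # (F ⊗ F)(x) = x - x ln x at x = 0.5
# approx ≈ target, up to the O(1/alpha) approximation error
```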


In the following statement let us use the notations µ_F^k = {µ_{j,F}^k, j ∈ N} and F^{⊗k} = F ⊗ · · · ⊗ F for the corresponding k-fold convolution (cf. (3.1)).

Corollary 3.1.
(i) If ν = {µ_{aj+b,F}/µ_{b,F}, j ∈ N} for some a > 0 and b ≥ 0, then (2.4) holds with

(3.4)   F_ν(x) = (1/µ_{b,F}) ∫₀^{x^{1/a}} t^b dF(t);

in particular, if ν = {µ_{aj,F}, j ∈ N}, then F_ν(x) = F(x^{1/a}) in (3.4);
(ii) If ν = β₁ν₁ + β₂ν₂ with β₁ + β₂ = 1, β₁, β₂ ≥ 0, then (2.4) holds with F_ν = β₁F₁ + β₂F₂;
(iii) If ν = {a^j µ_{j,F}, j ∈ N}, then (2.4) holds with F_ν(x) = F(x/a);
(iv) If ν = Σ_{k=1}^{m} β_k µ_F^k, where Σ_{k=1}^{m} β_k = 1, β_k ≥ 0, then (2.4) holds with

F_ν = Σ_{k=1}^{m} β_k F^{⊗k}.

Proof. The statement (i) can be justified in a similar way as in Theorem 3.1. The result in (ii) is a consequence of the linearity of K_α^{-1}ν, while the statement (iii) is a special case of Theorem 3.1 (i), since the distribution with the moments µ_j = a^j, j ∈ N, is degenerate at a. The proof of (iv) follows from Theorem 3.1 (i) and Corollary 3.1 (ii).

Remark 3.1. If a = 1 in Corollary 3.1 (i), then the cdf F_ν from (3.4) represents the biased sampling model with the weight function w(t) = t^b.

The construction (2.2) is also useful in the problem of recovering the quantile function Q(t) = inf{x : F(x) ≥ t} via moments (see (4.5) in Section 4). Let us denote Qα = Fα,ν_Q, where

(3.5)   ν_Q = { ∫₀¹ [F(u)]^j du, j ∈ N } .

The following statement is true:

Corollary 3.2. If F is continuous, then Qα →_w Q, as α → ∞.

Proof. Replacing the functions φ and F in (3.3) by F and the uniform cdf on [0, 1], respectively, we obtain from Theorem 3.1 (iii) that Qα = Fα,ν_Q →_w F_ν = F^{-1} = Q as α → ∞.

Under an additional condition on the smoothness of F one can obtain the uniform rate of convergence in (2.4) and, hence, in Theorem 3.1 too. Consider the following condition:

(3.6)   F″ = f′ is bounded on [0, 1].


Theorem 3.2. If ν = {µ_{j,F}, j ∈ N}, then under the condition (3.6) we have

(3.7)   sup_{0≤x≤1} |Fα,ν(x) − F(x)| = O(1/α),  as α → ∞.

Proof. Let us use the following representation:

P_α(t, x) = P{N_{αt} ≤ αx} = P{S_{[αx]} > αt}.

Here {N_{αt}, t ∈ [0, 1]} is a Poisson process with intensity αt, and S_m = Σ_{k=0}^{m} ξ_k, S₀ = 0, with the ξ_k being iid Exp(1) random variables. To show (3.7), let us use integration by parts in

(3.8)   Fα,ν(x) = (K_α^{-1}ν)(x) = ∫₀¹ Σ_{k=0}^{[αx]} ((αt)^k/k!) Σ_{j=k}^{∞} ((−αt)^{j−k}/(j−k)!) dF(t)
        = ∫₀¹ P_α(t, x) dF(t) = ∫₀¹ P{S_{[αx]} > αt} dF(t)
        = F(t) P{S_{[αx]} > αt} |₀¹ − ∫₀¹ F(t) dP{S_{[αx]} > αt}
        = P{S_{[αx]} > α} + ∫₀¹ F(t) dP{S_{[αx]} ≤ αt} = ∫₀^∞ F(t) dP{S_{[αx]} ≤ αt}.

So the condition (3.6) and the argument used in Adell and de la Cal [1] yield the proof of Theorem 3.2.

Remark 3.2. Assume supp{F} = R₊. In this case Fα,ν(x) = ∫₀^∞ P_α(t, x) dF(t) (cf. (3.8)). According to Mnatsakanov and Klaassen [16] (see the proof of Theorem 3.1), one can derive the exact rate of approximation of Fα,ν in the space L₂(R₊, dF). Namely, under the condition that the pdf f is bounded, say by C > 0, we have

∫₀^∞ (Fα,ν(x) − F(x))² dF(x) ≤ C/α.

Now let us consider the moment-recovered density function fα,ν defined in (2.5), and denote by Δ(f, δ) = sup_{|t−s|≤δ} |f(t) − f(s)|, 0 < δ < 1, the modulus of continuity of f. The following statement is true:

Theorem 3.3. If the pdf f is continuous on [0, 1], then fα,ν →_u f and

(3.9)   ||fα,ν − f|| ≤ Δ(f, δ) + 2||f||/(αδ²) + o(1/α),  as α → ∞.

Proof. Let us derive only (3.9). Since [α(x + 1/α)] = [αx] + 1 for any x ∈ [0, 1], we have

(3.10)   fα,ν(x) = α ( Σ_{k=0}^{[αx]+1} Σ_{j=k}^{∞} ((−α)^{j−k} α^k/((j−k)! k!)) µ_{j,F} − Σ_{k=0}^{[αx]} Σ_{j=k}^{∞} ((−α)^{j−k} α^k/((j−k)! k!)) µ_{j,F} ),

and after simple algebra (3.10) yields

(3.11)   fα,ν(x) = (α^{[αx]+2}/Γ([αx]+2)) Σ_{m=0}^{∞} ((−α)^m/m!) µ_{m+[αx]+1,F}.
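Formula (3.11) lends itself to direct evaluation. In this sketch (ours, not the authors') the moments µ_j = 2/(j + 2) of the density f(t) = 2t are plugged in, again in exact rational arithmetic with the series truncated at M terms; by the gamma-mixture representation derived next, the output is a smoothed value of f near x.

```python
from fractions import Fraction
from math import factorial

def density_approx(x, mu, alpha, M=150):
    """Moment-recovered density f_{alpha,nu}(x), formula (3.11):
    (alpha^(K+2) / Gamma(K+2)) * sum_m (-alpha)^m / m! * mu_{m+K+1}, K = [alpha x]."""
    K = int(alpha * x)
    s = sum(Fraction((-alpha) ** m, factorial(m)) * mu(m + K + 1)
            for m in range(M + 1))
    return float(Fraction(alpha ** (K + 2), factorial(K + 1)) * s)

# Moments of f(t) = 2t on [0,1]: mu_j = 2/(j+2).
mu = lambda j: Fraction(2, j + 2)
val = density_approx(0.5, mu, alpha=20)
# val is a gamma-smoothed value of f near x = 0.5 (f(0.5) = 1); the smoothing
# kernel is centered at ([alpha*x]+2)/alpha, so a visible O(1/alpha) shift remains
```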


Let us denote by

g(t, a, b) = b^a t^{a−1} e^{−bt}/Γ(a),   t ≥ 0,  for any a, b > 0,

a gamma pdf with shape and rate parameters a and b, respectively. Substitution of (2.1) into the right-hand side of (3.11) gives

(3.12)   fα,ν(x) = (α^{[αx]+2}/Γ([αx]+2)) ∫₀¹ Σ_{m=0}^{∞} ((−αt)^m/m!) t^{[αx]+1} dF(t)
         = ∫₀¹ g(t, [αx]+2, α) f(t) dt.

To show (3.9), note that the pdf g in (3.12) has mean ([αx]+2)/α and variance ([αx]+2)/α², respectively. Application of Chebyshev's inequality and splitting the integral on the right-hand side of (3.12) into two parts yields

sup_{0≤x≤1} ( ∫_{|t−x|≤δ} + ∫_{|t−x|>δ} ) |f(t) − f(x)| g(t, [αx]+2, α) dt
  ≤ Δ(f, δ) + 2||f|| sup_{0≤x≤1} ∫_{|t−x|>δ} g(t, [αx]+2, α) dt
  ≤ Δ(f, δ) + 2||f||/(αδ²) + o(1/α),

as α → ∞ and 0 < δ < 1.

Remark 3.3. In Mnatsakanov [13, 14] the uniform and L₁-rates of the moment-recovered approximations of F and f defined, respectively, by

(3.13)   F*α,ν(x) = Σ_{k=0}^{[αx]} Σ_{j=k}^{α} \binom{α}{j} \binom{j}{k} (−1)^{j−k} µ_{j,F}

and

(3.14)   f*α,ν(x) = (Γ(α+2)/Γ([αx]+1)) Σ_{m=0}^{α−[αx]} ((−1)^m µ_{m+[αx],F})/(m! (α−[αx]−m)!),   x ∈ [0, 1],  α ∈ N,

are established. In Section 4 (see Example 4.2) the cdf F(x) = x³ − 3x³ ln x and its density function f(x) = −9x² ln x, 0 ≤ x ≤ 1, are recovered using both the Fα,ν and F*α,ν, and the fα,ν and f*α,ν constructions (see Figures 1 (b) and 3 (b), respectively).
The formulas (3.11) and (3.14), with ν = ν_Q defined according to (3.5), can be used to recover the quantile density function

q(x) = Q′(x) = 1/f(F^{-1}(x)),   x ∈ [0, 1].

For example, consider fα,ν_Q := qα: the application of the first line in (3.12) with F^{-1} instead of F yields

(3.15)   qα(x) = ∫₀¹ g(F(u), [αx]+2, α) du


and the corresponding moment-recovered quantile density function

qα,β(x) = ∫₀¹ g(Fβ,ν(u), [αx]+2, α) du ,   α, β ∈ N.

Here Fβ,ν is a moment-recovered cdf of F. As a consequence of Theorem 3.3 we have the following

Corollary 3.3. If qα = fα,ν_Q, with ν_Q defined in (3.5), and f is continuous on [0, 1] with inf_{0≤x≤1} f(x) ≥ γ > 0, then qα →_u q and

(3.16)   ||qα − q|| ≤ Δ(f, δ)/γ² + 2||f||/(αδ²γ²) + o(1/α),  as α → ∞.

Finally, note that taking ν = ν_Q in (3.14), we derive another approximation q*α = f*α,ν_Q of q based on Beta densities β(·, a, b) with shape parameters a = [αx]+1 and b = α−[αx]+1:

(3.17)   q*α(x) = ∫₀¹ β(F(u), [αx]+1, α−[αx]+1) du .

4. Some Applications and Examples

In this section the construction of the moment-recovered cdf Fα,ν is applied to the problem of nonparametric estimation of a cdf, its density, and quantile functions, as well as to the problem of demixing in exponential, binomial, and negative binomial mixtures, and deconvolution in the error-in-variable model. In Theorem 4.1 we derive the uniform rate of convergence for the empirical counterpart of Fα,ν, denoted by F̃α, i.e., for F̃α = Fα,ν̂, where ν̂ is the sequence of all empirical moments of the sample from F. In Theorem 4.2 we establish the L₁-consistency of the density estimate when the sample X₁, . . . , Xn from F is drawn. In Theorem 4.3 the uniform rate of approximation for the moment-recovered quantile function of F is obtained. Finally, the graphs of moment-recovered cdfs, pdfs, and quantile functions are presented in Figures 1–3.

Direct model. Assume we have n copies X₁, . . . , Xn from a cdf F defined on [0, 1]. Denote by F̂n the ordinary empirical cdf (ecdf) of the sample X₁, . . . , Xn:

F̂n(t) = (1/n) Σ_{i=1}^{n} I_{[0,t]}(X_i) ,   0 ≤ t ≤ 1.

Substitution of the empirical moments

ν̂_j = (1/n) Σ_{i=1}^{n} X_i^j ,   j ∈ N,

instead of µ_{j,F} into (2.2) yields

F̃α(x) = Fα,ν̂(x) = ∫₀¹ P_α(t, x) dF̂n(t) = ∫₀¹ P{S_{[αx]} > αt} dF̂n(t) .

Furthermore, the empirical analogue of (3.8) admits a similar representation:

F̃α(x) = ∫₀^∞ F̂n(t) dP{S_{[αx]} ≤ αt}.
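As a simulated illustration (ours; the sample size, seed, and all names below are our choices): rather than summing (2.2) with empirical moments, which is numerically delicate in floating point, one can evaluate the equivalent form F̃α(x) = (1/n) Σᵢ Pα(Xᵢ, x), which follows directly from F̃α(x) = ∫ Pα(t, x) dF̂n(t).

```python
import random
from math import exp

def poisson_cdf(lam, n):
    # P(Poisson(lam) <= n), accumulated via the stable recursive product
    term, total = exp(-lam), exp(-lam)
    for k in range(1, n + 1):
        term *= lam / k
        total += term
    return total

def F_tilde(x, sample, alpha):
    # moment-empirical cdf: average of the kernel P_alpha(X_i, x) over the sample
    m = int(alpha * x)
    return sum(poisson_cdf(alpha * xi, m) for xi in sample) / len(sample)

random.seed(1)
sample = [random.random() ** 0.5 for _ in range(2000)]   # X = sqrt(U) has cdf F(x) = x^2
est = F_tilde(0.5, sample, alpha=100)
# est should be near F(0.5) = 0.25, up to O(1/alpha) bias and sampling noise
```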


The application of Theorem 3.2 and the asymptotic properties of F̂n yield

Theorem 4.1. If ν = {µ_{j,F}, j ∈ N}, then under the condition (3.6) we have

(4.1)   sup_{0≤x≤1} |F̃α(x) − F(x)| = O(1/√n) + O(1/α)  a.s., as α, n → ∞.

Remark 4.1. In Mnatsakanov and Ruymgaart [17] the weak convergence of the moment-empirical processes {√n (F̃n(t) − F(t)), t ∈ [0, 1]} to the Brownian bridge is obtained.

Of course, when the sample is directly drawn from the cdf F of actual interest, one might use the ordinary ecdf F̂n and the empirical process Un = √n (F̂n − F). The result mentioned in Remark 4.1 yields that, even if the only information available is the empirical moments, we still can construct different test statistics based on the moment-empirical processes Ũn = √n (F̃n − F).
On the other hand, using the construction (3.11), one can estimate the density function f given only the estimated or empirical moments via

(4.2)   fα,ν̂(x) = (α^{[αx]+2}/Γ([αx]+2)) Σ_{m=0}^{∞} ((−α)^m/m!) ν̂_{m+[αx]+1} ,   x ∈ [0, 1].

Remark 4.2. In practice, the parameter α as well as the number of summands in (4.2) (and the number of summands in the inner summation of Fα,ν̂) can be chosen as functions of n: α = α(n) → ∞ and M = M(n) → ∞ as n → ∞, so as to optimize the accuracy of the corresponding estimates. Further analysis is required to derive the asymptotic forms of α(n) and M(n) as n → ∞. This question is currently under investigation and is beyond the scope of the present article.
Note that when the sample from F is given, the construction (4.2) yields the estimate f̂α(x) = fα,ν̂ with ν̂ = {ν̂_j, j ∈ N}:

f̂α(x) = (α/n) Σ_{i=1}^{n} e^{−αX_i} (αX_i)^{[αx]+1}/([αx]+1)! = (1/n) Σ_{i=1}^{n} g(X_i, [αx]+2, α) ,   x ∈ [0, 1].

Here g(·, [αx]+2, α) is defined in (3.12). The estimator f̂α does not represent a traditional kernel density estimator of f. It is defined by a δ-sequence, which consists of gamma density functions of varying shapes (the shape and rate parameters are equal to [αx]+2 and α, respectively). It is natural to use this estimate when supp{F} = [0, ∞), since in this case the supports of f and the gamma kernel densities coincide and one avoids the boundary effect of f̂α (cf. Chen [5]).
Some asymptotic properties, such as the convergence in probability of f̂α uniformly on any bounded interval and the Integrated Mean Squared Error (IMSE) of f̂α, have been studied in Mnatsakanov and Ruymgaart [17] and Chen [5], respectively.
Applying the results from Mnatsakanov and Khmaladze [15], where the necessary and sufficient conditions for L₁-consistency of general kernel density estimates are established, one can prove in a similar way (cf. Mnatsakanov [14]) the following statement:

Theorem 4.2. If f is continuous on [0, 1], then

E ||f̂α − f||_{L₁} → 0 ,  as √α/n → 0 and α, n → ∞.
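A simulated sanity check of the gamma-kernel form of f̂α given above (our illustration; the sample, seed, and names are ours): drawing from F(x) = x², so that f(x) = 2x, the average (1/n) Σ g(Xᵢ, [αx]+2, α) should land near f(x) at interior points.

```python
import random
from math import exp, lgamma, log

def gamma_pdf(t, a, b):
    # g(t, a, b) = b^a t^(a-1) e^(-b t) / Gamma(a), computed on the log scale
    if t <= 0:
        return 0.0
    return exp(a * log(b) + (a - 1) * log(t) - b * t - lgamma(a))

def f_hat(x, sample, alpha):
    a = int(alpha * x) + 2            # varying gamma shape, rate alpha
    return sum(gamma_pdf(xi, a, alpha) for xi in sample) / len(sample)

random.seed(2)
sample = [random.random() ** 0.5 for _ in range(2000)]   # f(x) = 2x on [0,1]
est = f_hat(0.5, sample, alpha=50)
# est should be near f(0.5) = 1, up to smoothing bias and sampling noise
```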


Note here only that the proof of Theorem 4.2 is based on the result of Theorem 1 from Mnatsakanov and Khmaladze [15], combined with the statement of Theorem 3.3, which yields ||fα,ν − f||_{L₁} → 0 (where fα,ν(x) = E f̂α(x)). Here we also use the inequality

g(t, [αx]+2, α) ≤ √α/(√(2π) x) ,   0 < x < 1,

which follows from Stirling's formula for Γ([αx]+2) and the property that the function g(·, [αx]+2, α) defined in (3.12) attains its maximum at t_max = ([αx]+1)/α.

Exponential mixture model. Suppose one observes a random sample Y₁, . . . , Yn from the mixture of exponentials

G(x) = ∫₀^T (1 − e^{−x/τ}) dF(τ) ,   x ≥ 0.

We arrive at the convolution (3.1) with G = Exp(1) ⊗ F. The unknown cdf F can be recovered according to the construction Fα,ν = K_α^{-1}ν with ν = {µ_{j,G}/j!, j ∈ N}. Similarly, given the sample Y₁, . . . , Yn from G and taking Fα,ν̂ = K_α^{-1}ν̂, where ν̂ = {µ̂_{j,G}/j!, j ∈ N}, we obtain the estimate of F. Here {µ̂_{j,G}, j ∈ N} are the empirical moments of the sample Y₁, . . . , Yn. The regularized inversion of the noisy Laplace transform and the L₂-rate of convergence were obtained in Chauveau et al. [4].

Binomial and negative binomial mixture models. Suppose one observes a random sample Y₁, . . . , Yn from the binomial or negative binomial mixture distributions, respectively:

p(x) := P(Y = x) = ∫₀¹ \binom{m}{x} τ^x (1 − τ)^{m−x} dF(τ) ,   x = 0, . . . , m,

p(x) := P(Y = x) = ∫₀¹ (Γ(r + x)/(Γ(r) x!)) (1/(1 + τ))^r (τ/(1 + τ))^x dG(τ) ,   x = 0, 1, . . . ,

where m and r are given positive integers. Assume that the unknown mixing cdfs F and G are such that F has at most (m + 1)/2 support points in (0, 1), while G is a right-continuous cdf on (0, 1). In both models the mixing distributions are identifiable (see, for example, Teicher [26] for the binomial mixture model). Note also that the jth moments of F and G are related to the jth factorial moments of the corresponding Y_i's in the following ways:

µ_{j,F} = (1/m^{[j]}) E(Y₁^{[j]})   and   µ_{j,G} = (1/r_{(j)}) E(Y₁^{[j]}) .

Here y^{[j]} = y(y − 1) · · · (y − j + 1) and r_{(j)} = r(r + 1) · · · (r + j − 1). To estimate F and G one can use the moment-recovered formulas (2.2) or (3.13) with µ_{j,F} and µ_{j,G} defined in the previous two equations, where the theoretical factorial moments are replaced by the corresponding empirical counterparts. The asymptotic properties of the derived estimators of F and G will be studied in a separate work.

Deconvolution problem: error-in-variable model. Assume now we observe a random variable Y = X + U, with cdf G, where U (the error) has some known symmetric distribution F₂, X has a cdf F₁ with support [0, T], and U and X are


independent. This model, known as an error-in-variable model, corresponds to the convolution G = F₁ ⋆ F₂. Assuming that all moments of X and U exist, the moments {ν̄_j, j ∈ N} of Y are described according to (3.2). Hence, given the moments of U (with E(U) = 0), we can recalculate the moments of F₁ as follows: µ_{1,F₁} = ν̄₁, µ_{2,F₁} = ν̄₂ − µ_{2,F₂}, and so on. So, assuming that we have already calculated µ_{k,F₁}, or estimated them by µ*_{k,F₁}, for 1 ≤ k ≤ j − 2, we will have, for any j > 1:

µ_{j,F₁} = ν̄_j − Σ_{m=2}^{j} \binom{j}{m} µ_{m,F₂} × µ_{j−m,F₁}

or, respectively,

µ*_{j,F₁} = µ̂_{j,G} − Σ_{m=2}^{j} \binom{j}{m} µ_{m,F₂} × µ*_{j−m,F₁} ,

given the sample Y₁, . . . , Yn from the cdf G. Now the moment-recovered estimate of F₁ will have the form Fα,ν̂ = K_α^{-1}ν̂, where ν̂ = {µ*_{j,F₁}, j ∈ N}. An alternative construction of a kernel-type estimate of F₁, based on Fourier transforms, is studied in Hall and Lahiri [8], where the √n-consistency and other properties of the estimated moments µ*_{j,F₁}, j ∈ N, are derived as well.

Fig 1. (a) Approximation of G(x) = x − x ln x by Fα,ν and (b) Approximation of G(x³) by Fα,ν and by F*α,ν.

Example 4.1. Consider the moment sequence µ = {1/(j + 1), j ∈ N}. The corresponding moment-recovered distribution Fα,µ = K_α^{-1}µ is a good approximation of F(x) = x already with α = 50 and M = 100.
Assume now that we want to recover the distribution G with corresponding moments ν_{j,G} = 1/(j + 1)², j ∈ N. Since we can represent ν_G = µ ⊙ µ, we conclude from Theorem 3.1 (i) that G = F ⊗ F, with F(x) = x, and hence G(x) = x − x ln x, 0 ≤ x ≤ 1. We plotted the curves of Fα,ν_G (the solid line) and G (the dashed line) in Figure 1 (a). We took α = 50 and M = 200, the number of terms in the inner summation of the formula (2.2). From Figure 1 (a) we can


see that the approximation of G by Fα,ν_G at x = 0 is not as good as inside the interval [0, 1]. This happens because the condition (3.6) from Theorem 3.2 is not valid for the function g′(x) = G″(x) = −1/x.

Example 4.2. To recover the distribution F via the moments ν_j = 9/(j + 3)², j ∈ N, note that ν_j = ν_{aj,G} with a = 1/3. Hence, F(x) = G(x³) = x³ − x³ ln(x³), 0 ≤ x ≤ 1 (Corollary 3.1 (i)). We conducted computations of the moment-recovered cdf Fα,ν when α = 50 and the number of terms in the inner summation of the formula (2.2) is equal to 200. Also, we calculated F*α,ν defined in (3.13) with α = 32. See Figure 1 (b), where we plotted Fα,ν (the solid blue line), F*α,ν (the solid red line), and F (the dashed line), respectively. These two approximations of the cdf F justify a good fit already with α = 50 and M = 200 for the first one and with α = 32 for the second one. From Figure 1 (b) we can see that the performance of F*α,ν is slightly better compared to Fα,ν: F*α,ν does not have the "boundary" effect around x = 1.

Estimation of a quantile function Q and quantile density function q. Assume that a random variable X has a continuous cdf F defined on [0, 1]. To approximate (estimate) the quantile function Q given only the moments (estimated moments) of F, one can use Corollary 3.2. Indeed, after simple algebra, we have

(4.3)   Qα(x) = Fα,ν_Q(x) = ∫₀¹ P_α(F(u), x) du ,   0 ≤ x ≤ 1,

where ν_Q and P_α(·, ·) are defined in (3.5) and (2.3), respectively. Comparing (4.3) and (3.8), we can prove in a similar way (see the proof of Theorem 3.2) the following

Theorem 4.3. If f′ is bounded and inf_{0≤x≤1} f(x) ≥ γ > 0, then

(4.4)   sup_{0≤x≤1} |Qα(x) − Q(x)| = O(1/α) ,  as α → ∞.

Now, given only the moment sequence ν of F, one can construct the approximation Qα,β of Q by substituting the moment-recovered cdf Fβ,ν (instead of F) into the right-hand side of (4.3). Let us denote the corresponding approximant of Q by

(4.5)   Qα,β(x) = ∫₀¹ P_α(Fβ,ν(u), x) du ,   α, β ∈ N.

In Figure 2 (a) we plotted the cdf F(x) = x³ − x³ ln(x³) (the dashed line), introduced in Example 4.2, and its quantile approximation Qα,β (the solid line), when ν = {9/(j + 3)², j ∈ N}, α = β = 100, and M = 200.
Now, let us use the empirical cdf F̂n instead of F in (4.3), (3.15), and (3.17), when the distribution F is observed via the sample X₁, . . . , Xn. These yield the following estimators, respectively, based on the spacings ΔX_{(i)} = X_{(i)} − X_{(i−1)}, i = 1, . . . , n + 1:

(4.6)   Q̂α(x) = Fα,ν̂_Q(x) = ∫₀¹ P_α(F̂n(u), x) du = Σ_{i=1}^{n+1} ΔX_{(i)} P_α((i − 1)/n, x) ,

q̂α(x) = ∫₀¹ g(F̂n(u), [αx]+2, α) du = Σ_{i=1}^{n+1} ΔX_{(i)} g((i − 1)/n, [αx]+2, α) ,


[Figure 2. (a) Approximation of $Q$ by $Q_{\alpha,\beta}$; (b) estimation of $Q(x) = x^{1/3}$ by $\hat Q_\alpha$ and by $\hat Q_{HD}$. Both panels plot $Q(x)$ against $x \in [0, 1]$.]

and
$$
\hat q^*_\alpha(x) = \sum_{i=1}^{n+1} \Delta X_{(i)}\, \beta\Big(\frac{i-1}{n}, [\alpha x] + 1, \alpha - [\alpha x] + 1\Big).
$$
Here $\hat\nu_Q = \{\int_0^1 [\hat F_n(u)]^j\, du,\ j \in \mathbb{N}\}$, while $X_{(i)}$, $i = 1, \dots, n$, with $X_{(0)} = 0$ and $X_{(n+1)} = 1$, are the order statistics of the sample $X_1, \dots, X_n$.

Now, let us compare the curves of $\hat Q_\alpha$ and the well-known Harrell--Davis estimator

$$
(4.7)\qquad \hat Q_{HD}(x) = \sum_{i=1}^{n} X_{(i)}\, \Delta \mathrm{Beta}\Big(\frac{i}{n}, (n+1)x, (n+1)(1-x)\Big),
$$
where $\mathrm{Beta}(\cdot, a, b)$ denotes the cdf of a beta distribution with the shape parameters $a > 0$ and $b > 0$. For asymptotic expressions of the MSE and the bias term of $\hat Q_{HD}$ we refer to Sheather and Marron [19]. Let us generate $n = 100$ independent random variables $X_1, \dots, X_n$ from $F(x) = x^3$, $0 \le x \le 1$. Taking $\alpha = 100$, we estimate (see Figure 2 (b)) the corresponding quantile function $Q(x) = x^{1/3}$, $0 \le x \le 1$ (the dashed line), by means of $\hat Q_\alpha$ (the solid line) and $\hat Q_{HD}$ (the dashed-dotted line), defined in (4.6) and (4.7), respectively. Through simulations we conclude that the asymptotic behaviors of the moment-recovered estimator $\hat Q_\alpha$ and the Harrell--Davis estimator $\hat Q_{HD}$ are similar. The MSE and other properties of $\hat Q_\alpha$, $\hat q_\alpha$, and $\hat q^*_\alpha$ will be presented in a separate article.

Example 4.1 (continued). Assume now that we want to recover the pdf of the distribution $G$ studied in Example 4.1 via the moments $\nu_{j,G} = 1/(j + 1)^2$, $j \in \mathbb{N}$. In Figure 3 (a) we plotted the curves of the moment-recovered density $f_{\alpha,\nu}$ (the solid line) defined by (3.11) and of $g(x) = G'(x) = -\ln x$, $0 \le x \le 1$ (the dashed line), respectively. Here we took $\alpha = 50$ and $M = 200$.

Example 4.2 (continued). Now let us recover the pdf $f(x) = -9x^2 \ln x$, $0 \le x \le 1$, of the distribution $F$ defined in Example 4.2, where $\nu_{j,F} = 9/(j + 3)^2$, $j \in \mathbb{N}$. We applied both approximants $f_{\alpha,\nu}$ and $f^*_{\alpha,\nu}$, defined in (3.11) and (3.14), respectively.
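Returning to the quantile comparison, the Harrell--Davis estimator (4.7) can be sketched directly from the order statistics. To stay within the Python standard library, the increments $\Delta\mathrm{Beta}(i/n, (n+1)x, (n+1)(1-x))$ of the beta cdf are computed by midpoint integration of the beta density (a numerical shortcut of ours, not part of the original construction); the helper names are ours, and a deterministic pseudo-sample from $F(x) = x^3$ again replaces a simulated one.

```python
import math

def hd_weights(n, x, m=50):
    # Increments Delta Beta(i/n, (n+1)x, (n+1)(1-x)) of the beta cdf in (4.7),
    # via midpoint integration of the beta density over each ((i-1)/n, i/n].
    # Requires 0 < x < 1 so that both shape parameters are positive.
    a, b = (n + 1) * x, (n + 1) * (1 - x)
    log_B = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    def pdf(t):
        return math.exp((a - 1) * math.log(t) + (b - 1) * math.log(1 - t) - log_B)
    h = 1.0 / (n * m)
    return [sum(pdf((i - 1) / n + (k + 0.5) * h) for k in range(m)) * h
            for i in range(1, n + 1)]

def Q_HD(x, sample):
    # Harrell-Davis estimator (4.7): weighted average of the order statistics.
    xs = sorted(sample)
    return sum(w * s for w, s in zip(hd_weights(len(xs), x), xs))

# Deterministic pseudo-sample from F(x) = x^3 (stand-in for the simulated sample).
sample = [((i - 0.5) / 100) ** (1 / 3) for i in range(1, 101)]
```

Since the beta weights concentrate around $i \approx (n+1)x$, `Q_HD(x, sample)` behaves as a smoothed sample quantile, consistent with the similarity to $\hat Q_\alpha$ reported above.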


[Figure 3. (a) Approximation of $g(x) = -\ln x$ by $f_{\alpha,\nu}$; (b) approximation of $f(x) = -9x^2 \ln x$ by $f_{\alpha,\nu}$ and $f^*_{\alpha,\nu}$. Both panels plot the densities against $x \in [0, 1]$.]

Namely, we calculated the values of $f_{\alpha,\nu}$ and $f^*_{\alpha,\nu}$ at the points $x = k/\alpha$, $k = 1, 2, \dots, \alpha$. In Figure 3 (b) we plotted the curves of $f_{\alpha,\nu}$ (the blue dashed-dotted line), $f^*_{\alpha,\nu}$ (the red solid line), and $f$ (the black dashed line). Here we took $\alpha = 50$ and $M = 200$ when calculating $f_{\alpha,\nu}$, and $\alpha = 32$ in $f^*_{\alpha,\nu}$. One can see that the performance of $f^*_{\alpha,\nu}$ with $\alpha = 32$ is better than the performance of $f_{\alpha,\nu}$ with $\alpha = 50$ and $M = 200$.

After conducting many calculations of the moment-recovered approximants for several models, we conclude that in the Hausdorff case the accuracy of the formulas (2.2) and (3.11) is not as good as that of the ones defined in (3.13) and (3.14). On the other hand, the constructions (2.2) and (3.11) could be useful in the Stieltjes moment problem as well.

Acknowledgments

The authors are thankful to Estate Khmaladze, E. James Harner, and Cecil Burchfiel for helpful discussions. The work of the first author was supported by the National Institute for Occupational Safety and Health, contract no. 212-2005-M-12857. The findings and conclusions in this paper are those of the authors and do not necessarily represent the views of the National Institute for Occupational Safety and Health.

References

[1] Adell, J. A. and de la Cal, J. (1993). On the uniform convergence of normalized Poisson mixtures to their mixing distribution. J. Statist. Prob. Letters, 18, 227–232.
[2] Akhiezer, N. I. (1965). The Classical Moment Problem and Some Related Questions in Analysis. Oliver & Boyd, Edinburgh.
[3] Borwein, J. M. and Lewis, A. S. (1993). A survey of convergence results for maximum entropy methods. In Djafari, M. and Demoments, G., eds., Maximum Entropy and Bayesian Methods. Kluwer Academic Publishers, 39–48.


[4] Chauveau, D. E., van Rooij, A. C. M. and Ruymgaart, F. H. (1994). Regularized inversion of noisy Laplace transforms. Advances in Appl. Math., 15, 186–201.
[5] Chen, S. X. (2000). Probability density function estimation using gamma kernels. Ann. Inst. Statist. Math., 52, 471–480.
[6] Feller, W. (1971). An Introduction to Probability Theory and its Applications, Vol. II. Wiley, New York.
[7] Frontini, M. and Tagliani, A. (1997). Entropy-convergence in Stieltjes and Hamburger moment problem. Applied Math. and Computation, 88, 39–51.
[8] Hall, P. and Lahiri, S. N. (2008). Estimation of distributions, moments and quantiles in deconvolution problems. Ann. Statist., 36, 2110–2134.
[9] Kevasan, H. K. and Kapur, J. N. (1992). Entropy Optimization Principles with Applications. Academic Press, New York.
[10] Lin, G. D. (1992). Characterizations of distributions via moments. Sankhya, Series A, 54, 128–132.
[11] Lin, G. D. (1997). On the moment problem. J. Statist. Prob. Letters, 35, 85–90.
[12] Lindsay, B. G., Pilla, R. S. and Basak, P. (2000). Moment-based approximations of distributions using mixtures: theory and applications. Ann. Inst. Statist. Math., 52, 215–230.
[13] Mnatsakanov, R. M. (2008). Hausdorff moment problem: Reconstruction of distributions. J. Statist. Prob. Letters, 78, 1612–1618.
[14] Mnatsakanov, R. M. (2008). Hausdorff moment problem: Reconstruction of probability density functions. J. Statist. Prob. Letters, 78, 1869–1877.
[15] Mnatsakanov, R. M. and Khmaladze, E. V. (1981). On L1-convergence of statistical kernel estimators of distribution densities. Soviet Math. Dokl., 23, 633–636.
[16] Mnatsakanov, R. M. and Klaassen, C. A. J. (2003). Estimation of the mixing distribution in the Poisson mixture models. Uncensored and censored samples. Proc. Hawaii International Conference on Statistics and Related Fields, Honolulu, Hawaii, June 4–8, 2003, 1–18, CD: ISSN# 1539–7211.
[17] Mnatsakanov, R. and Ruymgaart, F. H. (2003). Some properties of moment-empirical cdf's with application to some inverse estimation problems. Math. Meth. Statist., 12, 478–495.
[18] Novi Inverardi, P. L., Petri, A., Pontuale, G. and Tagliani, A. (2003). Hausdorff moment problem via fractional moments. Applied Mathematics and Computation, 144, 61–74.
[19] Sheather, S. J. and Marron, J. S. (1990). Kernel quantile estimators. J. Amer. Statist. Assoc., 85, 410–416.
[20] Shohat, J. A. and Tamarkin, J. D. (1943). The Problem of Moments. Amer. Math. Soc., New York.
[21] Stieltjes, T. (1894). Recherches sur les fractions continues. Ann. Sci. Fac. Univ. Toulouse (1894–1895), 8, J1–J122; A5–A47.
[22] Stoyanov, J. (1997). Counterexamples in Probability, 2nd ed. Wiley, Chichester.
[23] Stoyanov, J. (2000). Krein condition in probabilistic moment problems. Bernoulli, 6, 939–949.
[24] Stoyanov, J. (2004). Stieltjes classes for moment indeterminate probability distributions. J. Appl. Probab., 41A, 281–294.
[25] Tagliani, A. and Velasquez, Y. (2003). Numerical inversion of the Laplace transform via fractional moments. Applied Mathematics and Computation, 143, 99–107.
[26] Teicher, H. (1963). Identifiability of finite mixtures. Ann. Math. Statist., 34, 1265–1269.
[27] Widder, D. V. (1934). The inversion of the Laplace integral and the related moment problem. Trans. Amer. Math. Soc., 36, 107–200.
