arXiv:1603.00596v1 [math.ST] 2 Mar 2016

Randomly Weighted Averages: A Multivariate Case

Hazhir Homei

Department of Statistics, Faculty of Mathematical Sciences, University of Tabriz, P.O.Box 51666–17766, Tabriz, IRAN. [email protected]

Preprint submitted to arXiv.org, March 3, 2016

Abstract

Stochastic linear combinations of some random vectors are studied, where the distributions of the random vectors and the joint distribution of their coefficients are Dirichlet. A method is provided for calculating the distribution of these combinations, which has been studied before by some authors. Our main result is a generalization of some existing results with a simpler proof.

Keywords: Multivariate randomly weighted average; Multivariate Stieltjes transform; Dirichlet distributions; Multivariate stable distributions; Randomly Linear Transformations; Dependent Components; Lifetime.

1. Introduction

Let $X_i$ ($1\leq i\leq n$) be the lifetime measured in a lab and $0\leq W_i\leq 1$ the random effect of the environment on it; then the stochastic linear combination $Z=\sum_{i=1}^{n}W_iX_i$ is the average lifetime in the real environment. Recently, several authors have focused on computing the lifetime of systems in real conditions (see Homei (2015)). Indeed, for given random variables $X_1,\ldots,X_n$, the randomly linear combination $Z=\sum_{i=1}^{n}W_iX_i$ is used for problems in lifetime, stochastic matrices, neural networks, and other applications in sociology and biology. It has many applications, including traditional portfolio selection models, the relationship between attitudes and behavior, the number of cancer cells in tumor biology, stream flow in hydrology (Nadarajah & Kotz (2005)), branching processes, infinite particle systems and probabilistic algorithms, and vehicle speed and lifetime (cf. Homei (2015), Rezapour & Alamatsaz (2014) and the references therein); so finding its distribution has attracted the attention of numerous researchers.

In this paper, considering the dependent components and the real environmental conditions, the distribution of lifetimes (in the case of the Dirichlet distributions) is calculated, showing that the main result obtained here covers several previous results and provides a simpler proof (cf. e.g. Johnson & Kotz (1990a), Sethuraman (1994), van Assche (1987), Volodin & Kotz & Johnson (1993)).

The inner product of two random vectors was introduced in Homei (2014), and the exact distribution of this product was investigated for some random vectors with Beta and Dirichlet distributions. In this paper a new generalization for the inner product of two random vectors is introduced. For a random vector $\mathbf{W}'=\langle W_1,\ldots,W_n\rangle$ and a vector $\mathbf{X}=\langle X_1,\ldots,X_n\rangle$ of random vectors (each $X_i$ being $k$-dimensional), the inner product of $\mathbf{X}$ and $\mathbf{W}$ is essentially the linear transformation of $\mathbf{W}$ under the $k\times n$ matrix $\mathbf{X}$, which is $Z=\sum_{i=1}^{n}W_iX_i$. We assume that $\mathbf{W}$ is independent from $X_1,\ldots,X_n$ and has a Dirichlet distribution; also the $X_i$'s have Dirichlet distributions. Identifying the distribution of $Z$ usually requires

(i) either some long computations with combinatorial identities (see e.g. Volodin & Kotz & Johnson (1993), Johnson & Kotz (1990), Johnson & Kotz (1990a), Sethuraman (1994), Homei (2014));

(ii) or advanced techniques that require performing certain transformations and solving differential equations (see e.g. Homei (2012), Soltani & Homei (2009), van Assche (1987), or Homei (2015) and the references therein).

In this paper a new way is introduced to identify the distribution of randomly linear combinations (of Dirichlet distributions) by which a class of stochastic differential equations can be solved; this new method is much simpler than the existing ones.
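To fix ideas, here is a minimal simulation sketch (ours, not from the paper; all numbers are made up) of the univariate object described in the introduction: lab lifetimes $X_i$ averaged with random environmental weights $W_i$ drawn from a Dirichlet distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

X = rng.exponential(scale=100.0, size=5)   # hypothetical lab lifetimes X_1, ..., X_n
W = rng.dirichlet(np.ones(5))              # random weights with 0 <= W_i <= 1, sum W_i = 1

Z = W @ X                                  # the randomly weighted average lifetime
print(Z)
```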

2. The Main Result

Our main result identifies the distribution of the randomly linear combination when the coefficients come from a Dirichlet distribution.

Theorem 2.1. If $X_1,\ldots,X_n$ are independent $k$-variate random vectors with respectively $\mathrm{Dirichlet}(\alpha^{(1)}),\ldots,\mathrm{Dirichlet}(\alpha^{(n)})$ distributions, for some $k$-dimensional vectors $\alpha^{(j)}=\langle\alpha_1^{(j)},\ldots,\alpha_k^{(j)}\rangle$ ($j=1,\ldots,n$), and the random vector $\mathbf{W}=\langle W_1,\ldots,W_n\rangle$ is independent from $X_1,\ldots,X_n$ and has the distribution

$$\mathrm{Dirichlet}\left(\sum_{i=1}^{k}\alpha_i^{(1)},\ldots,\sum_{i=1}^{k}\alpha_i^{(n)}\right),$$

then the randomly linear combination $Z=\sum_{i=1}^{n}W_iX_i$ has the distribution

$$\mathrm{Dirichlet}\left(\sum_{j=1}^{n}\alpha_1^{(j)},\ldots,\sum_{j=1}^{n}\alpha_k^{(j)}\right).$$

(The First) Proof. Let $Y_j$ ($j=1,\ldots,n$) be independent random variables, independent from $(X_1,\ldots,X_n)$, that have the distributions $\Gamma\left(\sum_{i=1}^{k}\alpha_i^{(j)},\frac{1}{\mu}\right)$, respectively. Thus,

$$W_j\stackrel{d}{=}\frac{Y_j}{\sum_{l=1}^{n}Y_l}\qquad\text{for }j=1,\ldots,n.$$

The $k$-dimensional vectors $T_j=Y_jX_j$ have independent components and

$$\left\langle\Gamma\left(\alpha_1^{(j)},\frac{1}{\mu}\right),\ldots,\Gamma\left(\alpha_k^{(j)},\frac{1}{\mu}\right)\right\rangle$$

distribution. Thus, we have

$$\sum_{j=1}^{n}\frac{T_j}{\sum_{l=1}^{n}Y_l}=\sum_{j=1}^{n}\left(\frac{Y_j}{\sum_{l=1}^{n}Y_l}\right)X_j\stackrel{d}{=}\sum_{j=1}^{n}W_jX_j.$$

Now, $\sum_{j=1}^{n}\frac{T_j}{\sum_{l=1}^{n}Y_l}$ has the $\mathrm{Dirichlet}\left(\sum_{j=1}^{n}\alpha_1^{(j)},\ldots,\sum_{j=1}^{n}\alpha_k^{(j)}\right)$ distribution, so $\sum_{j=1}^{n}W_jX_j=Z$ has the same distribution. □

Corollary 2.2. Let the $k$-variate random variables $X_1,\ldots,X_n$ be independent and with a common distribution. If $\mathbf{W}=\langle W_1,\ldots,W_n\rangle$ is independent from $X_1,\ldots,X_n$ and has the $\mathrm{Dirichlet}(k\alpha,\ldots,k\alpha)$ distribution, then the randomly linear combination $Z=\sum_{i=1}^{n}W_iX_i$ has the $\mathrm{Dirichlet}(n\alpha,\ldots,n\alpha)$ distribution if and only if the $X_i$'s ($i=1,\ldots,n$) have $\mathrm{Dirichlet}(\alpha,\ldots,\alpha)$ distributions.

Proof. The “if” part follows from Theorem 2.1, and the “only if” part follows from the theorem of Volodin & Kotz & Johnson (1993). □
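Theorem 2.1 also lends itself to a quick Monte Carlo sanity check. The following sketch (ours, with arbitrary parameters) compares the first two empirical moments of $Z=\sum_j W_jX_j$ with those of the Dirichlet distribution claimed by the theorem:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, N = 3, 4, 200_000

alpha = rng.uniform(0.5, 2.0, size=(n, k))    # hypothetical parameters alpha^(j), row j

# Z = sum_j W_j X_j with W ~ Dirichlet(row sums), X_j ~ Dirichlet(alpha^(j))
W = rng.dirichlet(alpha.sum(axis=1), size=N)                                # (N, n)
X = np.stack([rng.dirichlet(alpha[j], size=N) for j in range(n)], axis=1)   # (N, n, k)
Z = np.einsum('Nj,Njk->Nk', W, X)

# Direct samples from the claimed law Dirichlet(column sums of alpha)
D = rng.dirichlet(alpha.sum(axis=0), size=N)

print(np.allclose(Z.mean(0), D.mean(0), atol=1e-2))   # means agree
print(np.allclose(Z.var(0),  D.var(0),  atol=1e-2))   # variances agree
```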

Some results similar to Corollary 2.2 can be found in e.g. Alamatsaz (1993). The following results are obtained as special cases of Theorem 2.1 and Corollary 2.2:

• Lemma 1 of Sethuraman (1994) (for $n=2$), and
• Theorem of Volodin & Kotz & Johnson (1993) (when $n=k$ and $\alpha_i^{(j)}=a$ for $j=1,\ldots,k$, $i=1,\ldots,k$),

which discuss the multivariate case, or the references below which discuss the single-variable case (note that for $k=2$ the Dirichlet distribution leads to the Beta distribution):

• Theorem of van Assche (1987) (for $n=k=2$, $\alpha_1^{(1)}=\alpha_2^{(1)}=\alpha_1^{(2)}=\alpha_2^{(2)}=\frac{1}{2}$),
• Theorem 2 of Johnson & Kotz (1990a) (for $n=k=2$, $\alpha_1^{(1)}=\alpha_2^{(1)}=\alpha_1^{(2)}=\alpha_2^{(2)}=a$),
• Theorem 2.4 of Homei (2014) (for $k=2$ and $\alpha_1^{(j)}=\alpha$, $\alpha_2^{(j)}=1-\alpha$ for $j=1,\ldots,n$),
• Theorem 1 of Homei (2013) (for $k=2$);

and others; see the references in Homei (2015).

3. Four Other Proofs for Theorem 2.1 and a Variant of It

3.1. Generation Method

The Second Proof. The moment generating functions of the $T_j$'s in the first proof are

$$E\left(e^{t'T_j}\right)=E\,E\left(e^{t'Y_jX_j}\,\middle|\,X_j\right)=E\left(\frac{1}{1-t'X_j}\right)^{\sum_{i=1}^{k}\alpha_i^{(j)}}.$$

By [9, page 77] we have

$$M_{T_j}(t)=\prod_{i=1}^{k}\left(\frac{1}{1-t_i}\right)^{\alpha_i^{(j)}}.$$

So, the components of the vector $T_j$ are independent and have gamma distributions, which, together with the normalization step of the first proof, proves the theorem. □

3.2. Applying Basu's Theorem

The Third Proof. We can write (for $j=1,\ldots,n$)

$$X_j\sim\left\langle\frac{\Gamma_{1j}(\alpha_1^{(j)})}{\sum_{i=1}^{k}\Gamma_{ij}(\alpha_i^{(j)})},\ldots,\frac{\Gamma_{kj}(\alpha_k^{(j)})}{\sum_{i=1}^{k}\Gamma_{ij}(\alpha_i^{(j)})}\right\rangle$$

and

$$\mathbf{W}\sim\left\langle\frac{\sum_{i=1}^{k}\Gamma_{i1}(\alpha_i^{(1)})}{\sum_{j=1}^{n}\sum_{i=1}^{k}\Gamma_{ij}(\alpha_i^{(j)})},\ldots,\frac{\sum_{i=1}^{k}\Gamma_{in}(\alpha_i^{(n)})}{\sum_{j=1}^{n}\sum_{i=1}^{k}\Gamma_{ij}(\alpha_i^{(j)})}\right\rangle.$$

So,

$$Z\stackrel{d}{=}\left\langle\frac{\sum_{j=1}^{n}\Gamma_{1j}(\alpha_1^{(j)})}{\sum_{j=1}^{n}\sum_{i=1}^{k}\Gamma_{ij}(\alpha_i^{(j)})},\ldots,\frac{\sum_{j=1}^{n}\Gamma_{kj}(\alpha_k^{(j)})}{\sum_{j=1}^{n}\sum_{i=1}^{k}\Gamma_{ij}(\alpha_i^{(j)})}\right\rangle.$$

Let us recall that the $\Gamma_{ij}(\alpha_i^{(j)})$'s (for $i=1,\ldots,k$ and $j=1,\ldots,n$) are independent random variables with gamma distributions, which implies the independence of the components of $\mathbf{W}$ from the $X_j$'s (Basu's Theorem). □
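In code, the representation behind this proof is a one-liner: $\mathbf{W}$, the $X_j$'s, and $Z$ are all functions of a single $k\times n$ table of independent gamma variables, and $Z$ is exactly the table's normalized row sums. A small sketch (ours, with arbitrary parameters) makes the identity visible:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 3, 4
alpha = rng.uniform(0.5, 2.0, size=(k, n))      # alpha[i, j] = alpha_i^{(j)}, hypothetical

G = rng.gamma(shape=alpha)                      # independent Gamma_{ij}(alpha_i^{(j)})
X = G / G.sum(axis=0)                           # column j is X_j ~ Dirichlet(alpha^{(j)})
W = G.sum(axis=0) / G.sum()                     # W ~ Dirichlet of the column sums

Z = X @ W                                       # Z = sum_j W_j X_j
print(np.allclose(Z, G.sum(axis=1) / G.sum()))  # True: Z equals the normalized row sums
```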

3.3. Mathematical Induction

The Fourth Proof. For $n=2$ the theorem follows from [13, Lemma 1]. Suppose the theorem holds for $n$ (the induction hypothesis); we prove it for $n+1$ (the induction conclusion). By dividing both sides of

$$\sum_{i=1}^{n+1}Y_iX_i=\left(\sum_{i=1}^{n}Y_i\right)\left(\sum_{j=1}^{n}\frac{Y_j}{\sum_{l=1}^{n}Y_l}X_j\right)+Y_{n+1}X_{n+1}$$

by $\sum_{i=1}^{n+1}Y_i$ and using the induction hypothesis we have

$$\sum_{i=1}^{n+1}\frac{Y_i}{\sum_{l=1}^{n+1}Y_l}X_i=\frac{\sum_{i=1}^{n}Y_i}{\sum_{i=1}^{n+1}Y_i}\left(\sum_{j=1}^{n}\frac{Y_j}{\sum_{l=1}^{n}Y_l}X_j\right)+\frac{Y_{n+1}}{\sum_{i=1}^{n+1}Y_i}X_{n+1},$$

in which the right hand side holds by [13, Lemma 1] (for $n=2$). □

3.4. The Moments Method

The Fifth Proof. The general moments $(s_1,s_2,\ldots,s_k)$ of $Z$ are as follows:

$$E\left(\prod_{j=1}^{k}\left(\sum_{i=1}^{n}W_iX_{ij}\right)^{s_j}\right)=E\left(\prod_{j=1}^{k}\sum_{h_j}\binom{s_j}{h_{1j},h_{2j},\ldots,h_{nj}}\prod_{i=1}^{n}(W_iX_{ij})^{h_{ij}}\right),$$

where $\sum_{h_j}$ denotes summation over all non-negative integer vectors $h_j=(h_{1j},h_{2j},\ldots,h_{nj})$ subject to $\sum_{i=1}^{n}h_{ij}=s_j$ ($j=1,2,\ldots,k$). This equation can be rearranged as

$$=\sum_{h_1}\cdots\sum_{h_k}E\left(\prod_{j=1}^{k}\binom{s_j}{h_{1j},h_{2j},\ldots,h_{nj}}\prod_{i=1}^{n}(W_iX_{ij})^{h_{ij}}\right)$$

$$=\sum_{h_1}\cdots\sum_{h_k}E\left(\prod_{j=1}^{k}\binom{s_j}{h_{1j},h_{2j},\ldots,h_{nj}}\left(\prod_{i=1}^{n}W_i^{h_{i*}}\right)\prod_{j=1}^{k}\prod_{i=1}^{n}X_{ij}^{h_{ij}}\right)$$

(where $h_{i*}=\sum_{j=1}^{k}h_{ij}$) and also

$$=\sum_{h_1}\cdots\sum_{h_k}\prod_{j=1}^{k}\binom{s_j}{h_{1j},h_{2j},\ldots,h_{nj}}E\left(\prod_{i=1}^{n}W_i^{h_{i*}}\right)E\left(\prod_{j=1}^{k}\prod_{i=1}^{n}X_{ij}^{h_{ij}}\right).\qquad(\ddagger)$$

By the Dirichlet distribution we have

$$E\left(\prod_{i=1}^{n}W_i^{h_{i*}}\right)=\frac{\Gamma\!\left(\sum_{i=1}^{n}\sum_{j=1}^{k}\alpha_j^{(i)}\right)}{\Gamma\!\left(\sum_{i=1}^{n}\sum_{j=1}^{k}\alpha_j^{(i)}+\sum_{j=1}^{k}s_j\right)}\prod_{i=1}^{n}\frac{\Gamma\!\left(\sum_{j=1}^{k}\alpha_j^{(i)}+h_{i*}\right)}{\Gamma\!\left(\sum_{j=1}^{k}\alpha_j^{(i)}\right)}$$

and also

$$E\left(\prod_{j=1}^{k}\prod_{i=1}^{n}X_{ij}^{h_{ij}}\right)=\prod_{i=1}^{n}E\left(\prod_{j=1}^{k}X_{ij}^{h_{ij}}\right),$$

and again by the Dirichlet distribution

$$E\left(\prod_{j=1}^{k}X_{ij}^{h_{ij}}\right)=\frac{\Gamma\!\left(\sum_{j=1}^{k}\alpha_j^{(i)}\right)}{\Gamma\!\left(\sum_{j=1}^{k}\alpha_j^{(i)}+h_{i*}\right)}\prod_{j=1}^{k}\frac{\Gamma\!\left(\alpha_j^{(i)}+h_{ij}\right)}{\Gamma\!\left(\alpha_j^{(i)}\right)}.$$

So, by using $(\ddagger)$,

$$=\sum_{h_1}\cdots\sum_{h_k}\prod_{j=1}^{k}\binom{s_j}{h_{1j},h_{2j},\ldots,h_{nj}}\frac{\Gamma\!\left(\sum_{i=1}^{n}\sum_{j=1}^{k}\alpha_j^{(i)}\right)}{\Gamma\!\left(\sum_{i=1}^{n}\sum_{j=1}^{k}\alpha_j^{(i)}+\sum_{j=1}^{k}s_j\right)}\prod_{i=1}^{n}\left(\frac{\Gamma\!\left(\sum_{j=1}^{k}\alpha_j^{(i)}+h_{i*}\right)}{\Gamma\!\left(\sum_{j=1}^{k}\alpha_j^{(i)}\right)}\,\frac{\Gamma\!\left(\sum_{j=1}^{k}\alpha_j^{(i)}\right)}{\Gamma\!\left(\sum_{j=1}^{k}\alpha_j^{(i)}+h_{i*}\right)}\prod_{j=1}^{k}\frac{\Gamma\!\left(\alpha_j^{(i)}+h_{ij}\right)}{\Gamma\!\left(\alpha_j^{(i)}\right)}\right)$$

$$=\frac{\Gamma\!\left(\sum_{i=1}^{n}\sum_{j=1}^{k}\alpha_j^{(i)}\right)}{\Gamma\!\left(\sum_{i=1}^{n}\sum_{j=1}^{k}\alpha_j^{(i)}+\sum_{j=1}^{k}s_j\right)}\sum_{h_1}\cdots\sum_{h_k}\prod_{j=1}^{k}\binom{s_j}{h_{1j},h_{2j},\ldots,h_{nj}}\prod_{j=1}^{k}\prod_{i=1}^{n}\frac{\Gamma\!\left(\alpha_j^{(i)}+h_{ij}\right)}{\Gamma\!\left(\alpha_j^{(i)}\right)}.$$

By considering the fact that the sum of the Dirichlet-multinomial distributions over their support equals one, we have

$$=\frac{\Gamma\!\left(\sum_{i=1}^{n}\sum_{j=1}^{k}\alpha_j^{(i)}\right)}{\Gamma\!\left(\sum_{i=1}^{n}\sum_{j=1}^{k}\alpha_j^{(i)}+\sum_{j=1}^{k}s_j\right)}\prod_{j=1}^{k}\frac{\Gamma\!\left(\sum_{i=1}^{n}\alpha_j^{(i)}+s_j\right)}{\Gamma\!\left(\sum_{i=1}^{n}\alpha_j^{(i)}\right)},$$

which is the general moment of the $k$-variate

$$\mathrm{Dirichlet}\left(\sum_{i=1}^{n}\alpha_1^{(i)},\sum_{i=1}^{n}\alpha_2^{(i)},\ldots,\sum_{i=1}^{n}\alpha_k^{(i)}\right)$$

distribution, and since $Z$ is a bounded random vector, its distribution is uniquely determined by its moments. Thus the proof is complete. □
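As a numerical cross-check of this closed form (our sketch, not part of the paper; `scipy.special.gammaln` is used for stability and the parameters are arbitrary), one can compare it with a Monte Carlo estimate of $E\left(\prod_j Z_j^{s_j}\right)$:

```python
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(3)
n, k, N = 3, 3, 400_000
alpha = rng.uniform(0.5, 2.0, size=(n, k))   # alpha[i, j] = alpha_j^{(i)}, hypothetical
s = np.array([2, 1, 3])                      # the mixed moment (s_1, ..., s_k)

# Closed form: a Dirichlet moment with parameters given by the column sums of alpha
a = alpha.sum(axis=0)
closed = np.exp(gammaln(a.sum()) - gammaln(a.sum() + s.sum())
                + np.sum(gammaln(a + s) - gammaln(a)))

# Monte Carlo estimate of E[prod_j Z_j^{s_j}] from the definition of Z
W = rng.dirichlet(alpha.sum(axis=1), size=N)
X = np.stack([rng.dirichlet(alpha[i], size=N) for i in range(n)], axis=1)
Z = np.einsum('Ni,Nij->Nj', W, X)
mc = np.prod(Z ** s, axis=1).mean()

print(closed, mc)                            # the two values should nearly agree
```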

3.5. A Variant of Theorem 2.1

Theorem 3.1. The distribution of the randomly linear combination $Z=\sum_{i=1}^{n}W_iX_i$ is

$$\mathrm{Dirichlet}\left(\frac{1}{2}+\sum_{i=1}^{n}\alpha_i\right),$$

where $X_1,\ldots,X_n$ are two-dimensional independent multivariate random vectors with

$$\mathrm{Dirichlet}\left(\frac{1}{2}+\alpha_1\right),\ \ldots,\ \mathrm{Dirichlet}\left(\frac{1}{2}+\alpha_n\right)$$

distributions and the random vector $\mathbf{W}=\langle W_1,\ldots,W_n\rangle$ is independent from $(X_1,\ldots,X_n)$ and has the distribution

$$\mathrm{Dirichlet}(\alpha_1,\ldots,\alpha_n).$$

Proof. Let $Y_j$ ($j=1,\ldots,n$) be independent random variables, independent from $(X_1,\ldots,X_n)$, that have the distributions $\Gamma(\alpha_j,\mu)$, respectively. It can be seen, by some classic ways (e.g. $E\left(e^{t'T}\right)=\Psi(t)^{\sum_j\alpha_j}$ from Kubo & Kuo & Namli (2013), Table 2), that the distribution of $T\,(=\sum_j T_j=\sum_j Y_jX_j)$ is the same as the distribution of the $T_j$'s with the parameter $\sum_j\alpha_j$. We can also write $T$ as

$$T=\sum_j Y_jX_j=\left(\sum_i Y_i\right)\left(\sum_j\frac{Y_j}{\sum_i Y_i}X_j\right),$$

and so we have $T=YZ$, in which $Y$ has the gamma distribution with the parameter $\sum_j\alpha_j$, $T$ has the distribution identified above with the parameter $\sum_j\alpha_j$, and $Y$ and $Z$ are independent from each other. Of course, one can define $T'=Y'X'$ in such a way that $T'\stackrel{d}{=}T$, $\sum_j Y_j\stackrel{d}{=}Y'$ and $X'\sim\mathrm{Dirichlet}\left(\frac{1}{2}+\sum_j\alpha_j\right)$. One can conclude that $Z$ and $X'$ have identical distributions by calculating the general moments $(s_1,s_2)$ of $T$ and $T'$, i.e.,

$$E\left(Z_1^{s_1}Z_2^{s_2}\right)=E\left((X'_1)^{s_1}(X'_2)^{s_2}\right).\qquad\Box$$

Actually, the above proof also shows that:

Theorem 3.2. Let $Y_j$ ($j=1,\ldots,n$) be independent random variables, independent from $(X_1,\ldots,X_n)$, that have the distributions $\Gamma(\alpha_j,\mu)$, respectively, where the $X_i$'s are independent from each other and have Dirichlet distributions. If $X$ has a bounded support and the independent random variable $Y$ has the $\Gamma\left(\sum_j\alpha_j,\mu\right)$ distribution such that $\sum_i Y_iX_i\stackrel{d}{=}YX$, then $X$ and $Z=\sum_i W_iX_i$ have identical distributions, where $\mathbf{W}=\langle W_1,\ldots,W_n\rangle$ is independent from the $X_i$'s and has the $\mathrm{Dirichlet}(\alpha_1,\ldots,\alpha_n)$ distribution.

4. Some Applications in Stochastic Differential Equations

In this section, using Theorem 2.1 and Corollary 2.2, we prove some interesting mathematical facts. As an example, consider the following differential equation for each $n$ (cf. Homei (2014)):

$$\frac{(-1)^{n-1}}{(n-1)!}\,\frac{d^{n-1}}{dz^{n-1}}\left(\frac{n}{2}\int_{0}^{1}(1-t)^{(n-3)/2}\,(z^2-t)^{-1/2}\,dt\right)=\left(\frac{1}{\sqrt{z^2-1}}\right)^{n},\tag{1}$$

which could be of interest for some authors, who first guess the solution and then, by using techniques like Leibniz differentiations, change of variables, or integration by parts, prove that it satisfies the equation (by some long inductive arguments). Let us recall that Theorem 1 of Homei (2015) identifies the distribution of (the 1-dimensional) $Z$ from the distributions of the $X_i$'s by means of the differential equation:

$$\frac{(-1)^{n^*-1}}{(n^*-1)!}\,\frac{d^{\,n^*-1}}{dz^{\,n^*-1}}\,\mathcal{S}(F_Z,z)=\prod_{i=1}^{n}\frac{(-1)^{m_i-1}}{(m_i-1)!}\,\frac{d^{\,m_i-1}}{dz^{\,m_i-1}}\,\mathcal{S}(F_{X_i},z),\tag{2}$$

where $F_Y$ denotes the cumulative distribution function of a random variable $Y$ and $\mathcal{S}(F_Y,z)$ is defined by $\mathcal{S}(F_Y,z)=\int_{\mathbb{R}}\frac{1}{z-x}\,F_Y(dx)$ for $z\in\mathbb{C}\cap(\operatorname{supp}F_Y)^{\complement}$, in which $\operatorname{supp}F_Y$ stands for the support of $F_Y$.

Let the distribution of $X_i$ ($i=1,\ldots,n$) be Arcsin, that is, $\mathcal{S}(F_{X_i},z)=\frac{1}{\sqrt{z^2-1}}$. Also, let $m_i=1$ ($i=1,\ldots,n$) and $n^*=n$. Then from the equation (2) we will have

$$\frac{(-1)^{n-1}}{(n-1)!}\,\frac{d^{n-1}}{dz^{n-1}}\,\mathcal{S}(F_Z,z)=\left(\frac{1}{\sqrt{z^2-1}}\right)^{n}.\tag{3}$$

The solution of the equation (3) identifies the Stieltjes transformation of the distribution of $Z$. Alternatively, from Theorem 2.1 (or Corollary 2.2) the distribution of $Z$ is power semicircle (see Homei (2014) for more details). Since the Stieltjes transformation of the power semicircle distribution is $\frac{n}{2}\int_{0}^{1}(1-t)^{(n-3)/2}(z^2-t)^{-1/2}\,dt$ (see e.g. Arizmendi & Perez-Abreu (2010)), by substituting it in the equation (2) we will get the equation (1) immediately.
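For instance (a check we add here, not in the original), for $n=3$ with arcsine $X_i$'s, Theorem 2.1 gives $Z\sim\mathrm{Dirichlet}(3/2,3/2)$, i.e. the semicircle law on $[-1,1]$ after rescaling, whose Stieltjes transform has the known closed form $2\left(z-\sqrt{z^2-1}\right)$; equation (3) can then be verified symbolically:

```python
import sympy as sp

z = sp.symbols('z', positive=True)
S = 2*(z - sp.sqrt(z**2 - 1))   # Stieltjes transform of the semicircle law on [-1, 1]
n = 3

lhs = (-1)**(n - 1) / sp.factorial(n - 1) * sp.diff(S, z, n - 1)
rhs = (1 / sp.sqrt(z**2 - 1))**n
print(sp.simplify(lhs - rhs))   # prints 0, confirming equation (3) for n = 3
```

As another example, consider the moment generating function of the vector $T_j$: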

$$M_{T_j}(t')=E\left(\exp(t'Y_jX_j)\right).$$

Using the double conditional expectation and the fact that the components of $T_j$ are independent with gamma distributions, we have

$$E\left(\frac{1}{1-t'X_j}\right)^{\sum_{i=1}^{k}\alpha_i^{(j)}}=\prod_{i=1}^{k}\left(\frac{1}{1-t_i}\right)^{\alpha_i^{(j)}},$$

which proves Proposition 4.4 of Kerov & Tsilevich (2004) (cf. also Karlin & Micchelli & Rinott (1986), page 77).
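This identity is easy to test numerically; the following sketch (ours, with arbitrary $\alpha^{(j)}$ and $t$, where each $t_i<1$ ensures $1-t'X_j>0$ on the simplex) compares both sides by simulation:

```python
import numpy as np

rng = np.random.default_rng(4)
alpha = np.array([0.7, 1.3, 2.1])            # hypothetical alpha^{(j)}
t = np.array([0.2, -0.4, 0.1])               # any t with all t_i < 1

X = rng.dirichlet(alpha, size=2_000_000)     # samples of X_j ~ Dirichlet(alpha)
lhs = np.mean((1.0 / (1.0 - X @ t)) ** alpha.sum())
rhs = np.prod((1.0 / (1.0 - t)) ** alpha)
print(lhs, rhs)                              # should agree to a few decimal places
```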

5. Conclusions

Some sporadic works of other authors have been unified here; the main result (Theorem 2.1) has several other different proofs (available upon request), each of which can have various applications (the five proofs presented here are dedicated to all the family members of Professor A.R. Soltani). Our method reveals the advantage of the method of Stieltjes transforms for identifying the distribution of stochastic linear combinations, first used by van Assche (1987) and later generalized by others.

References

[1] Alamatsaz, M.H. (1993). On Characterizations of Exponential and Gamma Distributions, Statistics and Probability Letters 17, 315–319.

[2] Arizmendi, O., Perez-Abreu, V. (2010). On the Non-classical Infinite Divisibility of Power Semicircle Distributions, Communications on Stochastic Analysis 4, 161–178.

[3] Homei, H. (2015). A Novel Extension of Randomly Weighted Averages, Statistical Papers 56, 933–946.

[4] Homei, H. (2014). Characterizations of Arcsin and Related Distributions Based on a New Generalized Unimodality, Communications in Statistics – Theory and Methods, to appear. doi:10.1080/03610926.2015.1006788

[5] Homei, H. (2013). Uniform Random Sample and Symmetric Beta Distribution, arXiv:1309.2779 [math.ST].

[6] Homei, H. (2012). Randomly Weighted Averages with Beta Random Proportions, Statistics and Probability Letters 82, 1515–1520.

[7] Johnson, N.L., Kotz, S. (1990). Randomly Weighted Averages: Some Aspects and Extensions, The American Statistician 44, 245–249.

[8] Johnson, N.L., Kotz, S. (1990a). Use of Moments in Deriving Distributions and Some Characterizations, The Mathematical Scientist 15, 42–52.

[9] Karlin, S., Micchelli, C. A., Rinott, Y. (1986). Multivariate Splines: A Probabilistic Perspective, Journal of Multivariate Analysis 20, 69–90.

[10] Kerov, S. V. E., Tsilevich, N. V. (2004). The Markov-Krein Correspondence in Several Dimensions, Journal of Mathematical Sciences 121, 2345–2359.

[11] Nadarajah, S., Kotz, S. (2005). On the Product and Ratio of Gamma and Beta Random Variables, Allgemeines Statistisches Archiv 89, 435–449.

[12] Rezapour, M., Alamatsaz, M.H. (2014). Stochastic Comparison of Lifetimes of Two (n-k+1)-out-of-n Systems with Heterogeneous Dependent Components, Journal of Multivariate Analysis 130, 240–251.

[13] Sethuraman, J. (1994). A Constructive Definition of Dirichlet Priors, Statistica Sinica 4, 639–650.

[14] Soltani, A.R., Homei, H. (2009). Weighted Averages with Random Proportions that are Jointly Uniformly Distributed over the Unit Simplex, Statistics and Probability Letters 79, 1215–1218.

[15] van Assche, W. (1987). A Random Variable Uniformly Distributed Between Two Independent Random Variables, Sankhyā: The Indian Journal of Statistics, Series A 49, 207–211.

[16] Volodin, N.A., Kotz, S., Johnson, N.L. (1993). Use of Moments in Distribution Theory: A Multivariate Case, Journal of Multivariate Analysis 46, 112–119.
