Marquette University e-Publications@Marquette

Mathematics and Statistical Sciences Faculty Research and Publications/College of Arts and Sciences

This paper is NOT THE PUBLISHED VERSION but the author's final, peer-reviewed manuscript. The published version may be accessed by following the link in the citation below.

Communications in Statistics: Theory and Methods, Vol. 42, No. 20 (2013): 3615-3638. DOI. This article is © Taylor & Francis and permission has been granted for this version to appear in e-Publications@Marquette. Taylor & Francis does not grant permission for this article to be further copied/distributed or hosted elsewhere without the express permission from Taylor & Francis.

Sub-Independence: An Expository Perspective

G. G. Hamedani
Department of Mathematics, Statistics and Computer Science, Marquette University, Milwaukee, WI

Abstract
Limit theorems as well as other well-known results in probability and statistics are often based on the distribution of the sums of independent random variables. The concept of sub-independence, which is much weaker than that of independence, is shown to be sufficient to yield the conclusions of these theorems and results. It also provides a measure of dissociation between two random variables which is much stronger than uncorrelatedness.

Keywords: Characteristic function, Independence, Limit theorems, Sub-independence

1. Introduction
Limit theorems, as well as other well-known results in probability and statistics, are often based on the distribution of the sums of independent (and often identically distributed) random variables rather than the joint distribution of the summands. Therefore, the full force of independence of the summands is not required. In other words, it is the convolution of the marginal distributions which is needed, rather than the joint distribution of the summands, which in the case of independence is the product of the marginal distributions.

The concept of sub-independence can help provide solutions to some modeling problems where the variable of interest is the sum of a few components. Examples include household income, the total profit of major firms in an industry, and a regression model $Y = g(X) + \epsilon$ where $g(X)$ and $\epsilon$ are uncorrelated but may not be independent. For example, in Bazargan et al. (2007), the return value of significant wave height ($Y$) is modeled by the sum of a cyclic function of the random delay $D$, $\hat{g}(D)$, and a residual term $\hat{\varepsilon}$. They found that the two components are at least uncorrelated, though not independent, and used sub-independence to compute the distribution of the return value.

Let $X$ and $Y$ be two random variables (rv's) with joint and marginal cumulative distribution functions (cdf's) $F_{X,Y}$, $F_X$, and $F_Y$, respectively. Then $X$ and $Y$ are said to be independent if and only if

(1.1)
$$F_{X,Y}(x,y) = F_X(x)F_Y(y), \quad \text{for all } (x,y)\in\mathbb{R}^2,$$

or equivalently if and only if

(1.2)
$$\varphi_{X,Y}(s,t) = \varphi_X(s)\varphi_Y(t), \quad \text{for all } (s,t)\in\mathbb{R}^2,$$

where $\varphi_{X,Y}(s,t)$, $\varphi_X(s)$, and $\varphi_Y(t)$ are, respectively, the corresponding joint and marginal characteristic functions (cf's). Note that (1.1) and (1.2) are also equivalent to

(1.3)
$$P(X\in A \text{ and } Y\in B) = P(X\in A)P(Y\in B), \quad \text{for all Borel sets } A, B.$$

The concept of sub-independence, as far as we have gathered, was formally introduced by Durairajan (1979), stated as follows: the rv's $X$ and $Y$ with cdf's $F_X$ and $F_Y$ are sub-independent (s.i.) if the cdf of $X+Y$ is given by

(1.4)
$$F_{X+Y}(z) = (F_X * F_Y)(z) = \int_{\mathbb{R}} F_X(z-y)\,dF_Y(y), \quad z\in\mathbb{R},$$

or, equivalently, if and only if

(1.5)
$$\varphi_{X+Y}(t) = \varphi_{X,Y}(t,t) = \varphi_X(t)\varphi_Y(t), \quad \text{for all } t\in\mathbb{R}.$$
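As a purely numerical illustration of (1.4), here is a short sketch (ours, not from the paper; the Uniform(0,1) marginals are an arbitrary choice): under sub-independence, the cdf of $X+Y$ is computed from the marginals alone.

```python
import numpy as np

# Convolution form (1.4): F_{X+Y}(z) = \int F_X(z - y) dF_Y(y).
# With X, Y ~ Uniform(0,1) marginals, (1.4) returns the triangular-law cdf,
# whether or not the joint distribution is the independent one.
FX = lambda x: np.clip(x, 0.0, 1.0)               # Uniform(0,1) cdf
y = np.linspace(0.0, 1.0, 10001)                  # support of Y; dF_Y(y) = dy
for z in (0.5, 1.0, 1.5):
    F_sum = np.trapz(FX(z - y), y)                # numerical version of (1.4)
    exact = z**2/2 if z <= 1 else 1 - (2-z)**2/2  # triangular-law cdf
    print(z, F_sum, exact)
```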

The drawback of the concept of sub-independence in comparison with that of independence has been that the former does not have an equivalent definition in the sense of (1.3), which some believe to be the natural definition of independence. We believe we have now found such a definition, which is stated below. We give two separate definitions, one for the discrete case (Definition 1.1) and the other for the continuous case (Definition 1.2). Let $(X,Y): \Omega\to\mathbb{R}^2$ be a discrete random vector with range $\Re(X,Y) = \{(x_i, y_j): i,j = 1,2,\dots\}$ (finitely or infinitely countable). Consider the events

$$A_i = \{\omega\in\Omega: X(\omega) = x_i\}, \quad B_j = \{\omega\in\Omega: Y(\omega) = y_j\}$$

and

$$A^z = \{\omega\in\Omega: X(\omega)+Y(\omega) = z\}, \quad z\in\Re(X+Y).$$

Definition 1.1
The discrete rv's $X$ and $Y$ are s.i. if for every $z\in\Re(X+Y)$

(1.6)
$$P(A^z) = \sum_{i,j:\, x_i+y_j=z} P(A_i)P(B_j).$$

To see that (1.6) is equivalent to (1.5), suppose $X$ and $Y$ are s.i. via (1.5); then

$$\sum_i \sum_j e^{it(x_i+y_j)} f(x_i, y_j) = \sum_i \sum_j e^{it(x_i+y_j)} f_X(x_i) f_Y(y_j),$$

where $f$, $f_X$, and $f_Y$ are the probability functions of $(X,Y)$, $X$, and $Y$, respectively. Collecting, for each $z\in\Re(X+Y)$, the terms with $x_i+y_j = z$, and using the fact that the coefficients of the distinct exponentials $e^{itz}$ in such an expansion are uniquely determined, we obtain

$$e^{itz}\sum_{i,j:\, x_i+y_j=z} f(x_i, y_j) = e^{itz}\sum_{i,j:\, x_i+y_j=z} f_X(x_i) f_Y(y_j),$$

which implies (1.6). For the converse, assume (1.6) holds and reverse the last two steps above to arrive at (1.5).

For the continuous case, we observe that the half-plane $H = \{(x,y): x+y < 0\}$ can be written as a countable disjoint union of rectangles:

$$H = \bigcup_{i=1}^{\infty} E_i\times F_i,$$

where $E_i$ and $F_i$ are intervals. Now, let $(X,Y): \Omega\to\mathbb{R}^2$ be a continuous random vector and for $c\in\mathbb{R}$ let

$$A_c = \{\omega\in\Omega: X(\omega)+Y(\omega) < c\}$$

and

$$A_i^{(c)} = \left\{\omega\in\Omega: X(\omega) - \frac{c}{2}\in E_i\right\}, \quad B_i^{(c)} = \left\{\omega\in\Omega: Y(\omega) - \frac{c}{2}\in F_i\right\}.$$

Definition 1.2
The continuous rv's $X$ and $Y$ are s.i. if for every $c\in\mathbb{R}$

(1.7)
$$P(A_c) = \sum_{i=1}^{\infty} P\big(A_i^{(c)}\big)P\big(B_i^{(c)}\big).$$

To see that (1.7) is equivalent to (1.4), observe that (LHS of (1.7))

(1.8)
$$P(A_c) = P(X+Y < c) = P\big((X,Y)\in H_c\big),$$

where $H_c = \{(x,y): x+y < c\}$. Now, if $X$ and $Y$ are s.i., then

$$P(A_c) = (P_X\times P_Y)(H_c),$$

where $P_X$, $P_Y$ are probability measures on $\mathbb{R}$ defined by

$$P_X(B) = P(X\in B) \quad \text{and} \quad P_Y(B) = P(Y\in B),$$

and $P_X\times P_Y$ is the product measure. We also observe that (RHS of (1.7))

(1.9)
$$\sum_{i=1}^{\infty} P\big(A_i^{(c)}\big)P\big(B_i^{(c)}\big) = \sum_{i=1}^{\infty} P\left(X\in E_i+\frac{c}{2}\right)P\left(Y\in F_i+\frac{c}{2}\right) = \sum_{i=1}^{\infty}(P_X\times P_Y)\left(\left(E_i+\frac{c}{2}\right)\times\left(F_i+\frac{c}{2}\right)\right).$$

Now, (1.8) and (1.9) will be equal if $H_c = \bigcup_{i=1}^{\infty}\left\{\left(E_i+\frac{c}{2}\right)\times\left(F_i+\frac{c}{2}\right)\right\}$, which is true since the points in $H_c$ are obtained by shifting each point in $H$ over to the right by $\frac{c}{2}$ units and then up by $\frac{c}{2}$ units.

Remark 1.1
(i) Note that $H$ can be written as a union of squares and triangles. The triangles are congruent to $\{(x,y): x\ge 0,\ y\ge 0,\ x+y<1\}$ (equivalently, to $\{(x,y): 0\le y<x,\ 0\le x<1\}$), which in turn can be written as a disjoint union of squares: for example, take $[0,1/2)\times[0,1/2)$, then $[1/2,3/4)\times[0,1/4)$, and so on (see the sketch following this remark).
(ii) The discrete rv's $X$, $Y$, and $Z$ are s.i. if (1.6) holds for any pair and

(1.10)
$$P(A^s) = \sum_{i,j,k:\, x_i+y_j+z_k=s} P(A_i)P(B_j)P(C_k).$$

For the $n$-variate case we need $2^n - n - 1$ equations of the above form.
(iii) The representation (1.7) can be extended to the multivariate case as well.
(iv) For the sake of simplicity of the computations, we use (1.5) and its extension to the multivariate case as our definition of sub-independence throughout this work.
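The dyadic tiling mentioned in Remark 1.1(i) is easy to make concrete. The sketch below (ours, for illustration only) lists the first squares of such a tiling for a triangle of the type appearing in $H$:

```python
def squares(x0, y0, side, depth):
    """Dyadic squares tiling the triangle {x >= x0, y >= y0, (x-x0)+(y-y0) < side}.

    The largest square fills half the triangle; the two remaining half-size
    triangles are tiled recursively, giving a countable disjoint union.
    """
    if depth == 0:
        return []
    h = side / 2
    out = [(x0, y0, h)]                         # square [x0, x0+h) x [y0, y0+h)
    out += squares(x0 + h, y0, h, depth - 1)    # triangle to its right
    out += squares(x0, y0 + h, h, depth - 1)    # triangle above it
    return out

# First few squares for the unit triangle: [0,1/2) x [0,1/2), then
# [1/2,3/4) x [0,1/4), and so on, exactly as in Remark 1.1(i).
for (a, b, h) in squares(0.0, 0.0, 1.0, 3):
    print("[%g, %g) x [%g, %g)" % (a, a + h, b, b + h))
```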

One may ask whether there is a concept between "uncorrelatedness" and "independence" of two random variables. The concept of "sub-independence" seems to be the one: it is much stronger than uncorrelatedness and much weaker than independence. The notion of sub-independence is important in the sense that, under the usual assumptions, Khintchine's Law of Large Numbers and the Lindeberg-Lévy Central Limit Theorem, as well as other important theorems in probability and statistics, hold for a sequence of s.i. random variables. While sub-independence can be substituted for independence in many cases, it is difficult (in general) to find conditions under which the former implies the latter. Even in the case of two discrete identically distributed rv's $X$ and $Y$, the joint distribution can assume many forms consistent with sub-independence. In order for two random variables $X$ and $Y$ to be s.i., the

$$p_i = P(X = x_i), \quad i = 1,2,\dots,n,$$

and

$$q_{ij} = P(X = x_i, Y = x_j), \quad i,j = 1,2,\dots,n,$$

must satisfy the following conditions.

1. $\sum (q_{ij} - p_i p_j) = 0$, where the sum extends over all values of $i$ and $j$ for which $x_i + x_j = z$, and $z$ takes all the values in the set $\{\min(x_i+x_j),\dots,\max(x_i+x_j)\}$.
2. $p_i = \sum_{j=1}^{n} q_{ij} = \sum_{j=1}^{n} q_{ji}$, $i = 1,2,\dots,n$.

This linear system in the $n^2$ variables $q_{ij}$ is considerably underdetermined for all but the smallest values of $n$, especially if a large number of points $(x_i, x_j)$ lie on the line $x+y = z$. On the other hand, the only $q_{ij}$ consistent with independence is $q_{ij} = p_i p_j$. If $X$ and $Y$ are s.i. then, unlike independence, $X$ and $\alpha Y$ are not necessarily s.i. for any real $\alpha\neq 1$, as the following simple examples (discrete and continuous cases, respectively) show.

Example 1.1
Let $X$ and $Y$ be identically distributed rv's with support on the integers 1, 2, 3 and joint probabilities

$$p_{11} = p_{22} = p_{33} = \frac{1}{9}, \quad p_{21} = p_{32} = p_{13} = \frac{2}{9}, \quad p_{12} = p_{23} = p_{31} = 0.$$

Then $X$ and $Y$ are s.i. but $X$ and $-Y$ are not.
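Example 1.1 can be checked mechanically. The following sketch is ours (the small helper law_of_sum is hypothetical, not from the paper); it compares the law of $X+Y$, and of $X-Y$, with the corresponding convolution of the marginals:

```python
import numpy as np

# Example 1.1: joint pmf of (X, Y) on {1,2,3}^2; P[i-1, j-1] = P(X=i, Y=j)
P = np.array([[1, 0, 2],
              [2, 1, 0],
              [0, 2, 1]]) / 9.0
px, py = P.sum(axis=1), P.sum(axis=0)      # marginals: both uniform on {1,2,3}
vals = np.array([1, 2, 3])

def law_of_sum(joint, xv, yv):
    """pmf of X + Y from a joint pmf on the grid xv x yv."""
    out = {}
    for i, x in enumerate(xv):
        for j, y in enumerate(yv):
            out[x + y] = out.get(x + y, 0.0) + joint[i, j]
    return dict(sorted(out.items()))

indep = np.outer(px, py)
print(law_of_sum(P, vals, vals))           # law of X+Y ...
print(law_of_sum(indep, vals, vals))       # ... equals the convolution: s.i.
print(law_of_sum(P, vals, -vals))          # law of X-Y ...
print(law_of_sum(indep, vals, -vals))      # ... differs: X and -Y not s.i.
```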

Example 1.2
Let $X$ and $Y$ have the joint cf given by

$$\varphi_{X,Y}(t_1,t_2) = \exp\{-(t_1^2+t_2^2)/2\}\left[1 + \beta\, t_1t_2(t_1^2-t_2^2)\exp\{(t_1^2+t_2^2)/4\}\right], \quad (t_1,t_2)\in\mathbb{R}^2,$$

where $\beta$ is an appropriate constant. (The corresponding joint probability density function (pdf) is given by

$$f(x,y) = \frac{1}{2\pi}\exp\{-(x^2+y^2)/2\}\left[1 - 16\beta\, p(x,y)\exp\{-(x^2+y^2)/2\}\right], \quad (x,y)\in\mathbb{R}^2,$$

where $p(x,y) = 6xy - 2x^2 - 2y^2 + 4x^2y^2 - 2x^3y - 2xy^3 + 1$.)

Then $X$ and $Y$ are s.i. standard normal rv's, and hence $X+Y$ is normal with mean 0 and variance 2, but $X$ and $-Y$ are not s.i. and, consequently, $X-Y$ does not have a normal distribution.

It is clear that one is interested to know under what conditions sub-independence implies independence. Durairajan (1979) posed this question and gave two specific examples, one for the discrete case and one for the continuous case, in which he claimed that the given conditions (different for each example) are sufficient for the s.i. rv's to be independent. Although his discrete example with the given conditions worked nicely, his example for the continuous case did not. Here is his example: let $X$ and $Y$ be two continuous rv's such that their joint distribution function is $F(\sigma x, y)$ for $\sigma\in\mathbb{R}^+$ and $x > 0$, $y > 0$, with marginal distribution functions $F_X(\sigma x)$ and $F_Y(y)$. If $X$ and $Y$ are s.i. for all $\sigma\in\mathbb{R}^+$, then $X$ and $Y$ are independent. It is not hard to see that under the stated conditions the rv $X$ will have to be degenerate at 0; hence, $X$ will be independent of any rv $Y$. We will revisit the above-mentioned question later in Sec. 2.

The concept of sub-independence defined by (1.5) can be extended to $n\,(>2)$ rv's as follows.

Definition 1.3
The rv's $X_1, X_2, \dots, X_n$ are s.i. if for each subset $\{X_{\alpha_1}, X_{\alpha_2}, \dots, X_{\alpha_r}\}$ of $\{X_1, X_2, \dots, X_n\}$

(1.11)
$$\varphi_{X_{\alpha_1},\dots,X_{\alpha_r}}(t,\dots,t) = \prod_{i=1}^{r}\varphi_{X_{\alpha_i}}(t), \quad \text{for all } t\in\mathbb{R}.$$

As we mentioned before, Durairajan (1979) formally introduced the concept of sub-independence and pointed out that Khintchine's law of large numbers and the Lindeberg-Lévy central limit theorem hold for a sequence of s.i. rv's. The reason we used the word "formally" in the previous sentence is that Lukacs (1970) had used (1.5) implicitly in proving certain results based on cf's, such as Cramér's theorem, but for independent rv's. We will mention them as we go along. As we mentioned earlier, Durairajan (1979) tried to find conditions under which sub-independence and independence are equivalent. It was pointed out by Hamedani (1983) that Durairajan's conditions forced one of the random variables involved to be degenerate, which of course is a trivial case. To show how weak the concept of sub-independence is in comparison with that of independence, even in cases involving the normal distribution, Hamedani (1983) gave the following examples.

Example 1.3
Consider the joint cf

$$\varphi_{X,Y}(t_1,t_2) = \exp\{-(t_1^2+t_2^2)/2\} + \frac{1}{32}\, t_1t_2(t_1^2-t_2^2)\exp\{-(t_1^2+t_2^2)/4\}, \quad (t_1,t_2)\in\mathbb{R}^2.$$

Then $X$, $Y$, $X+Y$, and $X-Y$ are all normal, which implies that (1.5) (the cf version of the definition of sub-independence) holds for $X$ and $Y$ as well as for $X$ and $-Y$, but $X$ and $Y$ are not independent.
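One can read the sub-independence claims of Example 1.3 directly off the cf: the perturbation term carries the factor $t_1t_2(t_1^2-t_2^2)$, which vanishes on both diagonals $t_2 = \pm t_1$, so that

$$\varphi_{X,Y}(t,t) = e^{-t^2} = \varphi_X(t)\varphi_Y(t) \quad \text{and} \quad \varphi_{X,Y}(t,-t) = e^{-t^2} = \varphi_X(t)\varphi_{-Y}(t), \quad t\in\mathbb{R},$$

while for general $(t_1,t_2)$ the joint cf is not the product of the marginal cf's, so $X$ and $Y$ are not independent.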

We can generalize Example 1.3 as follows.

Example 1.4
Given a finite set $\{(a_k, b_k): k = 1,2,\dots,N\}$ in $\mathbb{R}^2$, consider the joint cf

$$\varphi_{X,Y}(t_1,t_2) = \exp\{-(t_1^2+t_2^2)/2\} + t_1t_2(t_1^2-t_2^2)\exp\left\{-\frac{1}{2}\left[c_1 - c_2(t_1^2+t_2^2)\right]\right\}\prod_{k=1}^{N}(b_k^2t_1^2 - a_k^2t_2^2), \quad (t_1,t_2)\in\mathbb{R}^2,$$

where $c_1$ and $c_2$ are suitable constants. Then $X$ and $Y$ are standard normal rv's, $X$ and $Y$ as well as $X$ and $-Y$ are s.i., and moreover

$$\varphi_{X,Y}(a_kt, b_kt) = \varphi_X(a_kt)\varphi_Y(b_kt), \quad \text{for all } t\in\mathbb{R},\ k = 1,2,\dots,N,$$

i.e., $a_kX$ and $b_kY$, $k = 1,2,\dots,N$, are s.i., and of course $a_kX + b_kY$, $k = 1,2,\dots,N$, are all normally distributed, but $X$ and $Y$ are not independent.

Remark 1.2
The set $\{(a_k, b_k): k = 1,2,\dots,N\}$ in Example 1.4 cannot be taken to be infinitely countable: Hamedani and Tata (1975) showed that two normally distributed rv's $X$ and $Y$ are independent if they are uncorrelated and $a_kX$ and $b_kY$, $k = 1,2,\dots$, are s.i., i.e.,

$$\varphi_{X,Y}(a_kt, b_kt) = \varphi_X(a_kt)\varphi_Y(b_kt), \quad \text{for all } t\in\mathbb{R},\ k = 1,2,\dots,$$

where $\{(a_k, b_k): k = 1,2,\dots\}$ is a sequence of distinct points in $\mathbb{R}^2$. In the next section we present the results based on the concept of sub-independence from 1979, the starting point, to 2011, as far as we have been able to gather. We hope this article will be a good starting point for those who are interested in the concept of sub-independence and may be leaning towards using this notion in their work.

2. Results
The results reviewed and established in this section are all based on the concept of sub-independence. We divide the section into a number of subsections, each dealing with a specific distribution and/or subject. The results in each subsection are presented in the chronological order of their appearance, not in that of their importance.

2.1. Characterizations of Normal Distribution and Related Results
We start this subsection with the s.i. version of Cramér's famous theorem (Theorem 1, 1936), which appeared in Hamedani and Walter (1984b).

Theorem 2.1 (Cramér)
If the sum $X+Y$ of the rv's $X$ and $Y$ is normally distributed and these rv's are s.i., then each of $X$ and $Y$ is normally distributed.

Remark 2.1
The proof of Theorem 2.1 can be deduced from Lukacs (1970). The important lemma used in the proof involves showing that the cf of each rv is an entire function. It is somewhat surprising that this lemma is true under a much weaker hypothesis about the relation between the two variables. Hamedani and Walter (1984b) presented a proof of this assertion (see Theorem 2.2 below) which, however, requires an auxiliary condition.

Theorem 2.2
If the sum of two rv's is normally distributed and if the cdf $G$ of their difference satisfies the condition

$$1 - G(w) = O\left(\exp\{-|w|^{1+\varepsilon}\}\right) = G(-w), \quad \text{for all } \varepsilon > 0 \text{ as } w\to\infty,$$

then the cf of each rv is an entire function.

Remark 2.2
It is clear that if $X-Y$ has compact support or has a normal distribution (as in Example 1.3), then the auxiliary condition of Theorem 2.2 is satisfied.

The following simple characterization of the normal distribution, given in Hamedani and Walter (1984b), is based on the concept of sub-independence; it strengthens the characterizations given in Chung (1974) under the assumption of independence.

Proposition 2.1
Let $X$ and $Y$ be s.i.i.d. (sub-independent and identically distributed) rv's with mean 0 and variance 1 such that:
i. $X$ and $-Y$ are s.i., and
ii. $X+Y$ and $X-Y$ are s.i.
Then both $X$ and $Y$ have standard normal distributions.

Remark 2.3
We note that the hypotheses of Proposition 2.1 do not imply that $X$ and $Y$ are independent (see Example 1.3 and Theorem 1 of Hamedani and Tata (1975) for further details), nor that they are jointly normal. This proposition is in spirit close to the Maxwell-Kac-Bernstein Theorem (see Feller, 1971, p. 78).

As with independence, distinct linear combinations of sub-independent rv's need not be sub-independent. However, if they are normal, the following holds (Hamedani and Walter, 1984b).

Proposition 2.2
Let $X$ and $Y$, as well as $X$ and $-Y$, be s.i. normally distributed rv's with the same variance. Then $X+Y$ and $X-Y$ are s.i.

Ahsanullah and Hamedani (1988) made the following observation: if $X$ and $Y$ are i.d. (identically distributed) with mean 0 such that $X$ and $-Y$ are s.i. and $(X-Y)^2/2 \sim \chi^2(1)$ (chi-square with 1 degree of freedom), then $X$ and $Y$ have standard normal distributions. The following example shows that in the absence of sub-independence, $X$ and $Y$ may not be standard normal variables.

Example 2.1
Let $X$ and $Y$ be jointly normally distributed with means equal to $\mu$ and variances equal to $1/(1-\rho)$, where $\rho$ is their correlation coefficient. Then $(X-Y)^2/2 \sim \chi^2(1)$.

Remark 2.4
It can easily be seen that if $X$ and $Y$ are s.i.i.d. and if $X+Y$ is symmetric (about 0), then $(X+Y)^2/2 \sim \chi^2(1)$ if and only if $X$ and $Y$ are standard normal. Ahsanullah et al. (1991) presented two characterizations of the normal distribution based on the chi-square distribution and the notion of sub-independence, stated in Theorems 2.3 and 2.4 below.

Theorem 2.3
Let $X$ and $Y$ be s.i.i.d. non-degenerate rv's. If $X^2$ and $\frac{1}{2}(X+Y)^2$ are i.d. chi-square with one degree of freedom, then the common distribution of $X$ and $Y$ is standard normal.

Theorem 2.4
Let $X_1, X_2, \dots, X_n$ be s.i.i.d. non-degenerate rv's. If $k\bar{X}_k^2$, where $\bar{X}_k = \frac{1}{k}\sum_{i=1}^{k}X_i$, is distributed as chi-square with one degree of freedom for two positive integers $k = m_1$ and $k = m_2$, then the $X_i$'s are normally distributed.

2.2. Characterizations of Reciprocal Distribution and Related Results
A rv $X$ (or its pdf $f_X$) is called reciprocal if its cf is a multiple of a pdf. It is called self-reciprocal if there exist constants $A$ and $\alpha$ such that $Af_X(\alpha t)$ is the cf of $X$. It is strictly self-reciprocal if $(2\pi)^{1/2}f_X(t)$ is the cf of $X$. Using the concepts of reciprocal, self-reciprocal, strictly self-reciprocal, and sub-independence, Hamedani and Walter (1985) reported the following observations (Propositions 2.3-2.5 and Theorem 2.5 below).

Proposition 2.3
Let $X$ and $Y$ be s.i. reciprocal rv's. Then $X+Y$ is reciprocal.

Proposition 2.4
Let $X$ and $Y$ be s.i.i.d. rv's with bounded pdf. Then $X-Y$ is reciprocal.

The following proposition gives a characterization of the normal distribution based on the concepts of sub-independence and self-reciprocality.

Proposition 2.5
Let $X$ be the standard normal rv and let $Y$ be any infinitely divisible rv s.i. of $X$. Then $X+Y$ is self-reciprocal if and only if $Y$ is normally distributed.

The infinitely divisible hypothesis can be weakened, but at the expense of considerable work; see Theorem 4.3 of Hamedani and Walter (1985), stated below.

Theorem 2.5
Let $X$ be the standard normal rv and let $Y$ be strictly self-reciprocal and s.i. of $X$. Then $X+Y$ is self-reciprocal if and only if it is normally distributed.

The following characterization of strictly self-reciprocal distributions based on sub-independence is due to Hamedani and Walter (1987).

Proposition 2.6
Let $X$ be the standard normal rv and $Y$ a symmetric (about 0) rv s.i. of $X$. Then $Y$ is strictly self-reciprocal if and only if the cf $\varphi$ of the rv $X+Y$ satisfies the functional equation

$$\varphi(t) = \frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}\exp\{(s+it)^2/2\}\,\varphi(s)\,ds, \quad \text{for all } t\in\mathbb{R}.$$

2.3. Characterizations of Stable Distributions
Let $X, X_1, X_2, \dots, X_n$ be s.i.i.d. rv's. If $X$ is normally distributed with mean zero, then $\sum_{i=1}^{n}X_i$ and $\sqrt{n}\,X$ are i.d. Hamedani et al. (2004) raised the following question: are there other rv's $X$ for which properties similar to this one hold? Lukacs (1956) proved the following result (restated here in terms of the cf $\varphi$ of $X$) for the i.i.d. case.

Theorem 2.6
Let $\varphi$ be a cf such that for every $n$ and every choice of real numbers $a_1, a_2, \dots, a_n$,

$$\prod_{i=1}^{n}\varphi(a_it) = \varphi(\gamma_n^{1/\alpha}t), \quad t\in\mathbb{R},$$

where $\gamma_n = \sum_{i=1}^{n}|a_i|^{\alpha}$. Then $\varphi$ is the cf of a symmetric stable distribution of order $\alpha$. Laha and Lukacs (1965) improved Theorem 2.6 for the special case of the normal distribution, showing that if a cf $\varphi$ satisfies

$$\varphi(t) = \left(\varphi(\sqrt{n}\,t)\right)^{1/n}, \quad t\in\mathbb{R},$$

then it must be the cf of a normal distribution with mean zero.

Eaton (1966) improved Theorem 2.6 for particular fixed choices of the $a_i$'s and for fixed values of $n$, under the additional assumption that the rv $X$ is symmetric about zero, as follows.

Theorem 2.7
Let $X, X_1, X_2, \dots, X_n$ be i.i.d. symmetric rv's and let $m$ and $n$ be integers, $2 \le m < n$, such that $\log m/\log n$ is irrational. If

(2.3.1)
$$\varphi(t) = \left(\varphi(m^{1/\alpha}t)\right)^{1/m} = \left(\varphi(n^{1/\alpha}t)\right)^{1/n}, \quad t\in\mathbb{R},$$

where $0 < \alpha \le 2$, then $X$ has a symmetric stable distribution of order $\alpha$.

Remark 2.5
It should be noted that the proofs of Theorem 2.6, the result of Laha and Lukacs (1965), and a similar result by Kagan et al. (1973) depend on the Lévy-Khinchine representation of a characteristic function. Eaton's proof does not employ the Lévy-Khinchine representation, owing to the fact that the random variable $X$ is assumed to be symmetric. Hamedani et al. (2004) considered functional equations of type (2.3.1) and investigated their solutions. Since the functions they considered are not cf's, they did not have the Lévy-Khinchine representation at their disposal; nor were they able to assume that their solutions are real-valued, as in the case considered by Eaton (1966). Instead, they extended Theorem 2.7 to $\varphi: \mathbb{R}\to\mathbb{C}$, with appropriate constant multiples of the $X_i$'s assumed to be s.i., and then directed their attention to the case when the solution of their general equation is a cf (see Secs. 2-4 of Hamedani et al., 2004).
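For orientation, the cf $\varphi(t) = \exp\{-|t|^{\alpha}\}$ of a symmetric stable law of order $\alpha$ (location 0, unit scale) does satisfy (2.3.1) for every pair $m, n$:

$$\left(\varphi(m^{1/\alpha}t)\right)^{1/m} = \exp\left\{-\frac{1}{m}\left|m^{1/\alpha}t\right|^{\alpha}\right\} = \exp\{-|t|^{\alpha}\} = \varphi(t),$$

and likewise for $n$; Theorem 2.7 asserts that, under its hypotheses, these are essentially the only solutions.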

2.4. Characterizations of Sub-Independent Random Variables
Mohammadpour and Safe (2002) considered, among other things, a characterization of s.i. $S\alpha S$ (symmetric $\alpha$-stable) rv's with discrete spectral measure. This work was presented at a conference, and their main result is now included in a (2010) article by Mohammadpour, which will be discussed later in this subsection. Mohammadpour (2004) considered jointly (symmetric) Cauchy rv's, defined as follows: the rv's $X_1, X_2, \dots, X_n$ are said to be jointly (symmetric) Cauchy if their joint cf has the form

(2.4.1)
$$\varphi_{\mathbf{X}}(\mathbf{t}) = \exp\left\{-\int_{S_n}|\mathbf{t}'\mathbf{s}|\,\Gamma(d\mathbf{s}) + i\mathbf{t}'\boldsymbol{\mu}\right\},$$

where $\mathbf{X} = (X_1, X_2, \dots, X_n)'$, $\mathbf{t} = (t_1, t_2, \dots, t_n)'\in\mathbb{R}^n$, $\Gamma$ is a finite Borel symmetric measure on the unit sphere $S_n = \{\mathbf{s} = (s_1, s_2, \dots, s_n)'\,|\,\mathbf{s}'\mathbf{s} = 1\}$ of $\mathbb{R}^n$, and $\boldsymbol{\mu} = (\mu_1, \mu_2, \dots, \mu_n)'\in\mathbb{R}^n$. The measure $\Gamma$ is unique and is called the spectral measure of the random vector $\mathbf{X}$. He also called the rv's $X_1, X_2, \dots, X_n$ associated if for any functions $f$ and $g: \mathbb{R}^n\to\mathbb{R}$ non-decreasing in each argument, $\mathrm{Cov}(f(\mathbf{X}), g(\mathbf{X})) \ge 0$ whenever the covariance exists. He then presented two characterizations (Theorems 2.8 and 2.9 below) of the concept of s.i. based on jointly Cauchy distributed rv's. His theorems are clearly in a different direction than the previously stated results and are based on the specific underlying distribution. These theorems, however, may have interesting applications.

Theorem 2.8
Let $X_1, X_2, \dots, X_n$ be jointly Cauchy rv's with joint cf (2.4.1). Then $X_1, X_2, \dots, X_n$ are s.i. if and only if the spectral measure $\Gamma$ satisfies the condition

(2.4.2)
$$\Gamma(S_n^{\pm}) = \Gamma(S_n),$$

where $S_n^{\pm} = \{(s_1, s_2, \dots, s_n)\in S_n\,|\,s_i \ge 0 \text{ for all } i, \text{ or } s_i \le 0 \text{ for all } i\}$.

Theorem 2.9
Let $X_1, X_2, \dots, X_n$ be jointly Cauchy rv's with joint cf (2.4.1). Then sub-independence is a necessary and sufficient condition for association of $X_1, X_2, \dots, X_n$.

Remark 2.6
It was shown, via an example in Mohammadpour (2004), that condition (2.4.2) is not a necessary and sufficient condition for sub-independence of $\alpha$-stable ($\alpha\neq 1$) random variables.
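Condition (2.4.2) is easy to probe numerically when $\Gamma$ is discrete. The sketch below is ours (the atoms and weights are arbitrary choices); it compares $-\ln\varphi_{X_1+X_2}$ with the sum of the marginal $-\ln$ cf's, once for a spectral measure carried by same-sign atoms and once with mixed-sign atoms added:

```python
import numpy as np

# Jointly Cauchy (alpha = 1, mu = 0) with a discrete spectral measure:
# -ln cf(t) = sum_k w_k * |t's_k|, atoms s_k on the unit circle, weights w_k.
def neg_log_cf(t, atoms, w):
    return sum(wk * abs(t @ sk) for sk, wk in zip(atoms, w))

def si_defect(atoms, w, ts=np.linspace(-3.0, 3.0, 13)):
    """max_t | -ln phi_{X1+X2}(t) + ln phi_{X1}(t) + ln phi_{X2}(t) |."""
    e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
    return max(abs(neg_log_cf(t*np.ones(2), atoms, w)
                   - neg_log_cf(t*e1, atoms, w)
                   - neg_log_cf(t*e2, atoms, w)) for t in ts)

r = 1/np.sqrt(2)
same_sign = [np.array([r, r]), np.array([-r, -r])]       # Gamma(S_n^+-) = Gamma(S_n)
mixed = same_sign + [np.array([r, -r]), np.array([-r, r])]
print(si_defect(same_sign, [0.5, 0.5]))                  # ~ 0: sub-independent
print(si_defect(mixed, [0.25, 0.25, 0.25, 0.25]))        # > 0: not s.i.
```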

Mohammadpour (2010) stated the following definitions: a random vector $\mathbf{X}$ is said to be a Lévy stable, $\alpha$-stable, $\alpha-S$, random vector in $\mathbb{R}^n$ if there are parameters $0 < \alpha \le 2$, $\boldsymbol{\mu} = (\mu_1, \mu_2, \dots, \mu_n)'\in\mathbb{R}^n$, a positive definite matrix $A$ of order $n$, and a finite measure $\Gamma$ on the unit sphere $S_n$ of $\mathbb{R}^n$ such that

(2.4.3)
$$-\ln\varphi_{\mathbf{X}}(\mathbf{t}) = \begin{cases}\displaystyle\int_{S_n}|\mathbf{t}'\mathbf{s}|^{\alpha}\left(1 - i\,\mathrm{sgn}(\mathbf{t}'\mathbf{s})\tan\frac{\pi\alpha}{2}\right)\Gamma(d\mathbf{s}) - i\mathbf{t}'\boldsymbol{\mu}, & 0 < \alpha \neq 1 < 2,\\[2mm] \displaystyle\int_{S_n}|\mathbf{t}'\mathbf{s}|\left(1 + i\,\frac{2}{\pi}\,\mathrm{sgn}(\mathbf{t}'\mathbf{s})\ln|\mathbf{t}'\mathbf{s}|\right)\Gamma(d\mathbf{s}) - i\mathbf{t}'\boldsymbol{\mu}, & \alpha = 1,\\[2mm] \mathbf{t}'A\mathbf{t} - i\mathbf{t}'\boldsymbol{\mu}, & \alpha = 2,\end{cases}$$

where, as before, $\mathbf{t} = (t_1, t_2, \dots, t_n)'$. Equation (2.4.3) takes a slightly different form for Lévy stable, $\alpha$-stable, $S\alpha S$ (symmetric $\alpha$-stable), and $\alpha-S$ random vectors (see Mohammadpour, 2010, for the details). He then presented the following theorem.

Theorem 2.10
Let $0 < \alpha < 2$. A stable random vector $\mathbf{X}$ with $-\ln$ cf (2.4.3) is s.i. if and only if for each set $\{l_1, l_2, \dots, l_r\}\subseteq\{1, 2, \dots, n\}$,

(2.4.4)
$$\int_{S_n}\Big|\sum_{j=1}^{r}s_{l_j}\Big|^{\alpha}\,\Gamma(d\mathbf{s}) = \int_{S_n}\sum_{j=1}^{r}|s_{l_j}|^{\alpha}\,\Gamma(d\mathbf{s}),$$

and

(2.4.5)
$$\begin{cases}\displaystyle\int_{S_n}\Big|\sum_{j=1}^{r}s_{l_j}\Big|^{\alpha}\,\mathrm{sgn}\Big(\sum_{j=1}^{r}s_{l_j}\Big)\Gamma(d\mathbf{s}) = \int_{S_n}\sum_{j=1}^{r}|s_{l_j}|^{\alpha}\,\mathrm{sgn}(s_{l_j})\,\Gamma(d\mathbf{s}), & \alpha\neq 1,\\[2mm] \displaystyle\int_{S_n}\ln\Big|\sum_{j=1}^{r}s_{l_j}\Big|\sum_{j=1}^{r}s_{l_j}\,\Gamma(d\mathbf{s}) = \int_{S_n}\sum_{j=1}^{r}\ln|s_{l_j}|\,s_{l_j}\,\Gamma(d\mathbf{s}), & \alpha = 1.\end{cases}$$

2.5. Characterizations of Symmetric Distribution and Related Results
Behboodian (1989) reported the following result: let $X$ and $Y$ be independent and $X+Y$ symmetric. If $X$ is symmetric with cf $\varphi_X(t)\neq 0$ for all $t$, then $Y$ must be symmetric. Hamedani (1995) improved this result as follows (see Theorems 2.11 and 2.12 below).

Theorem 2.11
Let $X$ and $Y$ be s.i.i.d. rv's whose sum $X+Y$ is symmetric. Then $X$ and $Y$ are symmetric rv's.

Remark 2.7
If $X$ and $Y$ are s.i. symmetric rv's then clearly $X+Y$ is symmetric; however, the symmetry of $X+Y$ alone does not imply that the sub-independent (in fact independent) rv's $X$ and $Y$ are symmetric. Theorem 2.11 shows that the symmetry of $X+Y$ implies the symmetry of the s.i. rv's $X$ and $Y$ if the latter are i.d. In the absence of s.i. one may have one of the following interesting cases.

Case (i). $X$ and $Y$ are symmetric and $X+Y$ is also symmetric.
Case (ii). $X$ and $Y$ are symmetric and i.d. but $X+Y$ is not symmetric.

The following Examples 2.2 and 2.3 demonstrate these cases, respectively.

Example 2.2
Let $X$ and $Y$ have the joint pdf

$$f(x,y) = \begin{cases}-\dfrac{1}{2xy^2}, & y < x < -1,\\[1mm] \dfrac{1}{2xy^2}, & 1 < x < y,\\[1mm] 0, & \text{otherwise.}\end{cases}$$

Then, clearly, the rv's $X$ and $Y$ and $X+Y$ are symmetric (indeed every linear combination of $X$ and $Y$ is symmetric).

Example 2.3
Let $U$ be a one-sided stable rv for $\alpha = 1$ (Feller, 1971, p. 542). Then the cf of $U$, $\varphi_U$, is given by

$$\varphi_U(t) = \exp\left\{-|t| - i\,\frac{2}{\pi}\,t\ln(|t|)\right\}, \quad t\in\mathbb{R}.$$

Let $V_1, V_2, V_3$ be i.i.d. with cf $\varphi_U$, and let $X = V_1 - V_2$, $Y = V_1 - V_3$. Then $X+Y = 2V_1 - V_2 - V_3$ and

$$\varphi_X(t) = \varphi_Y(t) = e^{-2|t|}, \quad t\in\mathbb{R},$$
$$\varphi_{X+Y}(t) = \exp\left\{-|4t| - i\left(\frac{4\ln 2}{\pi}\right)t\right\}, \quad t\in\mathbb{R}.$$

We observe that $X$ and $Y$ are i.d. Cauchy rv's symmetric (about 0), but $X+Y$ is a Cauchy rv symmetric about $C = \frac{4\ln 2}{\pi}$.

The following theorem is Theorem 2 of Behboodian (1989), in which the assumption of independence is now replaced by that of sub-independence.

Theorem 2.12
Let $X$ and $Y$ be s.i. and $X+Y$ symmetric. If $X$ is symmetric with cf $\varphi_X(t)\neq 0$ for all $t$, then $Y$ must be symmetric.

Let $p_1X_1, p_2X_2, \dots, p_{n-1}X_{n-1}, -X_n$ be s.i.i.d. rv's, where $p_i > 0$ for $i = 1, 2, \dots, n-1$ and $\sum_{i=1}^{n-1}p_i = 1$, and let $Y = \sum_{i=1}^{n-1}p_iX_i - X_n$. It is clear that if $X_1$ is symmetric about $c$ for some $c\in\mathbb{R}$, then $Y$ is symmetric about 0. The question now is whether or not the converse statement is true. Hamedani and Volkmer (2003) showed that the converse is not true without additional assumptions. To see this, let $\varphi$ be the cf of $X_1$. Note that $\varphi: \mathbb{R}\to\mathbb{C}$ is continuous, $\varphi(0) = 1$ and $\varphi(-t) = \overline{\varphi(t)}$. $X_1$ is symmetric about $c$ if and only if $e^{-ict}\varphi(t)$ is real-valued (see Feller, 1971). Hamedani and Volkmer (2003) gave the following characterization of symmetric rv's.

Theorem 2.13
Assume that $Y$ (given above) is symmetric about 0 and $\varphi(t)$ has no zeros and is right differentiable at 0. Then there exists $c\in\mathbb{R}$ such that $X_1$ is symmetric about $c$.

Remark 2.8
(a) It is clear that Theorem 2.13 holds for the special linear combination $Y = \sum_{i=1}^{n-1}X_i - nX_n$.
(b) It is shown by an example (given below) in Hamedani and Volkmer (2003) that the conclusion of Theorem 2.13 is, in general, false if we drop the assumption that $\varphi$ has no zeros.

Example 2.4
Define $u: \mathbb{R}\to\mathbb{C}$ by $u = \chi_{[0,1/2]} + i\chi_{[3/2,2]}$, where $\chi_{[a,b]}$ is the indicator function of $[a,b]$, and let

$$\varphi(t) = \int_{\mathbb{R}} u(s)\overline{u(s+t)}\,ds.$$

Since $\int_{\mathbb{R}}|u(s)|^2\,ds = 1$, $\varphi$ is the cf of a distribution (see Feller, 1971). The pdf $f$ of this distribution is given by $f(x) = |\hat{u}(x)|^2$, where

$$\hat{u}(x) = \frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}} e^{itx}u(t)\,dt.$$

For further details we refer the reader to their article.

2.6. Characterizations of Poisson and Cauchy Distributions
Here, we first present a sub-independence version of the famous Raikov theorem and then a simple characterization of the Cauchy distribution based on the notion of sub-independence.

Theorem 2.14 (Raikov)
If $X$ and $Y$ are non-negative integer-valued rv's such that $X+Y$ has a Poisson distribution and $X$ and $Y$ are s.i., then each of $X$ and $Y$ has a Poisson distribution.

Remark 2.9
The following two simple observations were made in Bansal et al. (1998): (i) If $W\sim C(0)$ (standard Cauchy), $X = \frac{1}{2}W$, and $Y = -\frac{1}{2W}$, then $X$ and $Y$ are s.i.i.d. with common distribution Cauchy with parameters $\theta = 0$ and $\gamma = \frac{1}{2}$. (ii) If the rv's $W$ and $-\frac{1}{W}$ are s.i.i.d. and the rv $\frac{1}{2}\left(W - \frac{1}{W}\right)\sim C(0)$, then $W\sim C(0)$.

2.7. Central Limit Theorem for Sub-Independent Random Variables
As mentioned before, the well-known Khintchine Law of Large Numbers and Lindeberg-Lévy Central Limit Theorem, as well as other important results, can be stated in terms of s.i. rv's. Hamedani and Walter (1984a) reported several versions of the central limit theorem for s.i.i.d. rv's. These results are stated in Propositions 2.7-2.9 below. For the sake of completeness, however, we first state the following two definitions.

Definition 2.1
Let $R_\lambda$, $\lambda \ge 0$, denote the set of all rv's $X$ such that:
i. $E(|X|^{\lambda}) < \infty$,
ii. $E(X^k) = m_k$ for all $k = 1, 2, \dots, [\lambda]$, where $m_k$ is the $k$th moment of $Z$, the standard normal rv.
Let $M_\lambda$ denote the set of cdf's of $X\in R_\lambda$.

Definition 2.2
Let $d_\lambda$, $\lambda \ge 0$, be the function from $M_\lambda\times M_\lambda$ into $\mathbb{R}$ given by

$$d_\lambda(F,G) = \sup_{t\in\mathbb{R}}\left|E\left(\frac{e^{iXt} - e^{iYt}}{|t|^{\lambda}}\right)\right|,$$

where $F$ and $G$ are, respectively, the cdf's of two rv's $X$ and $Y$.

Proposition 2.7
Let $(X_j)_{j\ge 1}$ be a sequence of s.i.i.d. rv's with mean 0, variance 1, and such that $E(|X|^{\lambda}) < \infty$ for some $\lambda > 2$. Then

$$2^{-\frac{n}{2}}\sum_{j=1}^{2^n}X_j \to Z\sim N(0,1) \quad \text{(standard normal distribution)},$$

in distribution as $n\to\infty$, and their cdf's converge in the metric of $M_\lambda$. Moreover, the rate of convergence is dominated by

$$d_\lambda\big(T_{\sqrt{2}}^{\,n}F, \Phi\big) < 2^{n(1-\lambda/2)}\left(E(|X|^{\lambda}) + E(|Z|^{\lambda})\right),$$

where $\Phi$ is the cdf of $Z$ and $T_{\sqrt{2}}$ is a strictly contractive map on $(M_\lambda, d_\lambda)$.

Remark 2.10
Proposition 2.7 is valid only for the s.i.i.d. case and only for certain indices. However, the extension to the non-i.d. case and all indices is not difficult and is based on the following lemma.

Lemma 2.1
Let $X_1, X_2; Y_1, Y_2$ be pairs of s.i. rv's in $R_\lambda$ and let $\alpha > 0$. Let $F_1, F_2; G_1, G_2$ be their cdf's. Then

$$d_\lambda(F,G) < \alpha^{-\lambda}\{d_\lambda(F_1,G_1) + d_\lambda(F_2,G_2)\},$$

where $F$ is the cdf of $\frac{X_1+X_2}{\alpha}$ and $G$ that of $\frac{Y_1+Y_2}{\alpha}$.

Remark 2.11
It should be observed that Lemma 2.1 holds even when $\frac{X_1+X_2}{\alpha}$ is not necessarily in $R_\lambda$.

Proposition 2.8
Let $(X_j)_{j\ge 1}$ be a sequence of s.i. rv's in $R_\lambda$ for some $\lambda > 2$ whose distribution functions belong to a bounded set in $M_\lambda$. Then

$$\frac{1}{\sqrt{n}}\sum_{j=1}^{n}X_j \to Z \quad \text{in distribution as } n\to\infty.$$

Proposition 2.9
Let $(X_j)_{j\ge 1}$ be a sequence of s.i.i.d. rv's such that $E(|X|^{\lambda}) < \infty$ for some $0 < \lambda < 2$. If:
i. $1 \le \lambda < 2$ and the mean is 0, or
ii. $0 < \lambda < 1$ and $\alpha$ satisfies $\alpha > 2^{1/\lambda}$,
then

$$\alpha^{-n}\sum_{j=1}^{2^n}X_j \to Y \quad \text{in distribution as } n\to\infty,$$

where $Y$ is the rv with the $\delta$-distribution.

Remark 2.12
A similar idea was used by Trotter (1959) to give a proof of the central limit theorem. Although he gives a proof in which the use of cf's is avoided, this limits his result to the case of independent rv's. The approach in Hamedani and Walter (1984a) is considerably different, and their result requires only the much weaker assumption of sub-independence.

2.8. A Different but Equivalent Interpretation of Sub-Independence and Related Results
Ebrahimi et al. (2010) looked at the concept of sub-independence through a different but equivalent definition, which provides a better understanding of this concept. Here we reproduce a good portion of their article, since it treats this notion in a completely different direction than we have followed so far. They presented models for the joint distribution of uncorrelated variables that are not independent, but for which the distribution of the sum is given by the convolution of the marginal distributions. These models are referred to as the summable uncorrelated marginals (SUM) distributions. They are developed utilizing the assumption of sub-independence, which has been employed in the present article in various directions, for the derivation of the distribution of the sum of random variables. One proposition, two lemmas, three definitions, and three examples which follow are due to Ebrahimi et al. (2010). The last example and theorem are new.

We now revisit the definition of sub-independence of the rv's $X_1, X_2, \dots, X_n$. Let $\mathbf{X} = (X_1, X_2, \dots, X_n)'$ be a random vector with cdf $F$ and cf $\Psi(\mathbf{t})$. The components of $\mathbf{X}$ are said to be s.i. if

(2.8.1)
$$\Psi(\mathbf{t}) = \prod_{i=1}^{n}\psi_i(t), \quad \text{for all } \mathbf{t} = (t,t,\dots,t)'\in\mathbb{R}^n,$$

where $\psi_i(t)$ is the cf of $X_i$. We first consider the bivariate case $n = 2$. Let $F$ be the cdf of $\mathbf{X} = (X_1, X_2)'$, and let $\mathbf{X}^* = (X_1^*, X_2^*)'$ denote the random vector with cdf $F^*(x_1,x_2) = F_1(x_1)F_2(x_2)$, where $F_i$, $i = 1, 2$, is the marginal cdf of $X_i$.

Definition 2.3
$F$ is said to be a SUM (summable uncorrelated marginals) bivariate distribution if $X_1 + X_2 \overset{st}{=} X_1^* + X_2^*$, where $\overset{st}{=}$ denotes stochastic equality. Random variables with a SUM joint distribution are referred to as SUM random variables.

It is clear that SUM and sub-independence are equivalent, so the two terminologies can be used interchangeably. It is also clear that the class of SUM rv's is closed under scalar multiplication and under addition of independent SUM vectors. That is, if $\mathbf{X} = (X_1, X_2)'$ is a SUM random vector then so is $a\mathbf{X}$, and if $\mathbf{Y} = (Y_1, Y_2)'$ is another SUM random vector independent of $\mathbf{X}$, then $\mathbf{X}+\mathbf{Y}$ is also a SUM random vector. However, the SUM property is directional, in that $X_1$ and $X_2$ being SUM rv's does not imply that $X_1$ and $aX_2$ are SUM. Definition 2.3 can be generalized to any specific direction by $a_1X_1 + a_2X_2 \overset{st}{=} a_1X_1^* + a_2X_2^*$.

For continuous distributions, Kendall's tau $\tau$ and Spearman's rho $\rho_s$ are given by

$$\tau = 4\int\!\!\int_{\mathbb{R}^2} F(x_1,x_2)f(x_1,x_2)\,dx_1dx_2 - 1$$

and

$$\rho_s = 12\int\!\!\int_{\mathbb{R}^2} F_1(x_1)F_2(x_2)f(x_1,x_2)\,dx_1dx_2 - 3.$$

These measures are invariant under strictly increasing transformations. For a SUM model both measures can be nonzero, one of them can be zero while the other one is not, and both can be zero without the variables being independent.

We define a bivariate SUM copula to be a SUM distribution on the unit square $[0,1]^2$ with uniform marginals. We state the following two lemmas; the second one explains the construction of families of SUM models by linking the univariate pdf's $f_i(x_i)$, $i = 1, 2$.

Lemma 2.2
For any SUM copula, $\rho_s = 0$.

Lemma 2.3
Let $f_i(x_i)$, $i = 1, 2$, be pdf's and $g(x_1,x_2)$ a measurable function. Set

(2.8.2)
$$f_\beta(x_1,x_2) = f_1(x_1)f_2(x_2) + \beta g(x_1,x_2), \quad (x_1,x_2)\in\mathbb{R}^2.$$

Then for some $\beta\in\mathbb{R}$, $f_\beta(x_1,x_2)$ is a SUM pdf with marginal pdf's $f_i(x_i)$, $i = 1, 2$, provided that:
a. $f_\beta(x_1,x_2) \ge 0$,
b. $\int_{\mathbb{R}} g(x_1,x_2)\,dx_1 = \int_{\mathbb{R}} g(x_1,x_2)\,dx_2 = 0$ for all $(x_1,x_2)\in\mathbb{R}^2$, and
c. $\int_{\mathbb{R}} g(c-t, t)\,dt = 0$ for all $c\in\mathbb{R}$.

The next example illustrates Lemmas 2.2 and 2.3.

Example 2.5
Let $f_i(x_i)$, $i = 1, 2$, be two pdf's on $[0,1]$ and set

(2.8.3)
$$f_\beta(x_1,x_2) = f_1(x_1)f_2(x_2) + \beta\sin[2\pi(x_2 - x_1)], \quad (x_1,x_2)\in[0,1]^2,$$

such that for some $\beta\in\mathbb{R}$, $f_\beta(x_1,x_2)$ is a pdf on $[0,1]^2$. Two specific examples are as follows.

i. Let $f_i(x_i) = 1$, $i = 1, 2$, be the pdf of the uniform distribution on $[0,1]$ and $\beta = -\frac{1}{2}$. Then by Lemma 2.2, for (2.8.3), $\rho_s = 0$. It can be shown that $\tau\neq 0$.
ii. Let $f_1(x_1) = \frac{1}{2} + x_1$, $f_2(x_2) = 1$ and $\beta = -\frac{1}{2}$. It can be shown that, for (2.8.3), $\rho_s = -\frac{3}{4\pi^3}$ and $\tau = \frac{\pi-4}{8\pi^3}$.
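The claims in (i) can be verified numerically. A sketch of ours follows (the analytic cdf F below is our own integration of (2.8.3) with uniform marginals; quadrature tolerances are defaults):

```python
import numpy as np
from scipy.integrate import dblquad, quad

beta = -0.5
f = lambda x2, x1: 1.0 + beta*np.sin(2*np.pi*(x2 - x1))   # copula density (2.8.3)(i)

# Lemma 2.3(b),(c): marginals stay Uniform(0,1), and the mass on every line
# x1 + x2 = c matches the independent one (so the copula is SUM).
print(quad(lambda x2: f(x2, 0.3), 0, 1)[0])               # ~ 1
c = 0.9
print(quad(lambda t: f(t, c - t), max(0, c - 1), min(1, c))[0]
      - (min(1, c) - max(0, c - 1)))                      # ~ 0

# Spearman's rho vanishes (Lemma 2.2) ...
rho_s = 12*dblquad(lambda x2, x1: x1*x2*f(x2, x1), 0, 1, 0, 1)[0] - 3
# ... but Kendall's tau does not; F is the joint cdf of (2.8.3)(i).
def F(x1, x2):
    G = (np.sin(2*np.pi*x1) - np.sin(2*np.pi*x2)
         + np.sin(2*np.pi*(x2 - x1)))/(4*np.pi**2)
    return x1*x2 + beta*G
tau = 4*dblquad(lambda x2, x1: F(x1, x2)*f(x2, x1), 0, 1, 0, 1)[0] - 1
print(rho_s, tau)   # rho_s ~ 0, tau ~ 0.013 != 0
```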

For $g(x_1,x_2) = f_1(x_1)f_2(x_2)q(x_1,x_2)$, (2.8.2) takes the following form:

(2.8.4)
$$f_\beta(x_1,x_2) = f_1(x_1)f_2(x_2)[1 + \beta q(x_1,x_2)], \quad (x_1,x_2)\in\mathbb{R}^2,$$

where $f_i(x_i)$, $i = 1, 2$, are the marginal pdf's, $q(x_1,x_2)$ is a measurable bounded function on $\mathbb{R}^2$ with bound $|q(x_1,x_2)| \le B$, and $\beta = B^{-1}$.

2.8.1. Bivariate SUM
The following result presents a method for constructing a bivariate SUM family with given marginal distributions and gives the mutual information measure, Kendall's tau, and Spearman's rho for the family.

Proposition 2.10
Let $f_i(x) = f(x)$, $i = 1, 2$, in (2.8.4) be a symmetric pdf and let the linking function $q(x_1,x_2)$ be such that

(2.8.5)
$$-q(x_1,x_2) = q(x_2,x_1) = q(-x_1,x_2) = q(x_1,-x_2).$$

Then:
a. the bivariate function (2.8.4) is the pdf of a family of SUM distributions with marginals $f_i(x) = f(x)$, $i = 1, 2$, and $(a_1X_1, a_2X_2)$, $a_i = \pm 1$, $i = 1, 2$, are SUM variables;
b. the mutual information for the family is given by
$$M_\beta(X_1,X_2) = \sum_{n=1}^{\infty}\frac{\beta^{2n}}{(2n-1)2n}\,E_2\{E_1[q(X_1,X_2)]^{2n}\},$$
where $E_i$, $i = 1, 2$, denotes expectation with respect to $f_i$;
c. $\tau = \rho_s = 0$.
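To make Proposition 2.10 concrete, here is a sketch with a hypothetical linking function $q(x_1,x_2) = x_1x_2(x_1^2-x_2^2)e^{-(x_1^2+x_2^2)/2}$ (our choice for illustration, not necessarily one of the Ebrahimi et al. examples), which satisfies the symmetries (2.8.5) and is bounded:

```python
import numpy as np
from scipy.integrate import quad

phi = lambda x: np.exp(-x*x/2)/np.sqrt(2*np.pi)    # N(0,1) pdf
# Hypothetical linking function satisfying (2.8.5); |q| <= 8/e^2, so this
# beta keeps f_beta(x1,x2) = phi(x1)phi(x2)[1 + beta*q(x1,x2)] nonnegative.
q = lambda x1, x2: x1*x2*(x1**2 - x2**2)*np.exp(-(x1**2 + x2**2)/2)
beta = 0.5

# (2.8.5): -q(x1,x2) = q(x2,x1) = q(-x1,x2) = q(x1,-x2)
x1, x2 = 0.7, -1.3
print(-q(x1, x2), q(x2, x1), q(-x1, x2), q(x1, -x2))

# SUM check: along every line x1 + x2 = c the beta-term integrates to zero
# (the integrand is antisymmetric about t = c/2), so the pdf of X1 + X2
# coincides with the one obtained under independence.
for c in (-1.0, 0.0, 0.8, 2.0):
    dep = quad(lambda t: phi(c-t)*phi(t)*(1 + beta*q(c-t, t)), -np.inf, np.inf)[0]
    ind = quad(lambda t: phi(c-t)*phi(t), -np.inf, np.inf)[0]
    print(c, dep - ind)                            # ~ 0
```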

Remark 2.13
(a) As applications of Proposition 2.10, Ebrahimi et al. (2010) presented two examples of bivariate pdf's, with appropriate functions $q(x_1,x_2)$, for which $(X_1,X_2)$ has a SUM distribution and $X_1$ and $X_2$ are i.d. $N(0,1)$. For one of the examples they found a closed form for $M_\beta(X_1,X_2)$, and for the other they provided an approximation. For both examples, of course, $\rho = \tau = \rho_s = 0$. We mention here examples in which $(X_1,X_2)$ has a SUM distribution for appropriate functions $q(x_1,x_2)$ and $X_1$ and $X_2$ are i.d. with symmetric pdf's other than $N(0,1)$:

i. Standard Cauchy: $f(x) = \dfrac{1}{\pi(1+x^2)}$, $x\in\mathbb{R}$;
ii. Laplace (double exponential): $f(x) = \dfrac{1}{2\sigma}e^{-|x-\mu|/\sigma}$, $x\in\mathbb{R}$;
iii. Hyperbolic: $f(x) = \dfrac{1}{2\gamma K_1(\alpha\gamma)}e^{-\alpha\sqrt{\gamma^2+(x-\mu)^2}}$, $x\in\mathbb{R}$, where $K_1$ is a modified Bessel function of the second kind;
iv. Logistic or sech-square(d): $f(x) = \dfrac{e^{-(x-\mu)/s}}{s\left(1+e^{-(x-\mu)/s}\right)^2} = \dfrac{1}{4s}\,\mathrm{sech}^2\!\left(\dfrac{x-\mu}{2s}\right)$, where $\mu$ is the mean and $s$ is proportional to the standard deviation;
v. Raised cosine: $f(x) = \dfrac{1}{2s}\left[1+\cos\left(\dfrac{\pi(x-\mu)}{s}\right)\right]$, $\mu - s \le x \le \mu + s$;
vi. Wigner semicircle: $f(x) = \dfrac{2}{\pi r^2}\sqrt{r^2 - x^2}$, $-r < x < r$;
vii. $f(x) = \dfrac{1}{2\pi}\left(\dfrac{\sin(x/2)}{x/2}\right)^2$; $f(x) = \dfrac{2(2+x^2)\sin^2(x/2)+x(x+\sin x)}{2\pi x^2(1+x^2)}$; $f(x) = \dfrac{4\sin^2(x/4)}{\pi x^2}$; $f(x) = \dfrac{4(x - 2\sin(x/2))}{\pi x^3}$, $x\in\mathbb{R}$.

(b) The cf's corresponding to the pdf's in (vii) are, respectively,

$$\varphi_1(t) = \begin{cases}1-|t|, & |t|\le 1,\\ 0, & |t|>1;\end{cases} \qquad \varphi_2(t) = \begin{cases}\tfrac{1}{2}\left(1-|t|+e^{-|t|}\right), & |t|\le 1,\\ \tfrac{1}{2}e^{-|t|}, & |t|>1;\end{cases}$$
$$\varphi_3(t) = \begin{cases}1-2|t|, & |t|\le \tfrac{1}{2},\\ 0, & |t|>\tfrac{1}{2};\end{cases} \qquad \varphi_4(t) = |\varphi_3(t)|^2, \quad t\in\mathbb{R}.$$

(c) The graphs of the first two pdf's in (vii) are bell shaped and can be used to approximate the normal pdf.
(d) Hamedani et al. (2013) presented various examples of bivariate mixture SUM distributions based on the pdf's given in (vii).

2.8.2. Multivariate SUM
We now consider multivariate SUM random variables. Let $F$ be the cdf of $\mathbf{X} = (X_1, X_2, \dots, X_n)'$ and let $\mathbf{X}^* = (X_1^*, X_2^*, \dots, X_n^*)'$ denote the random vector with cdf $F^* = \prod_{i=1}^{n}F_i$, where $F_i$ is the cdf of $X_i$.

Definition 2.4
$F$ is said to be a SUM$_n$ (SUM distribution of order $n$) if $\sum_{i=1}^{n}X_i \overset{st}{=} \sum_{i=1}^{n}X_i^*$.

Definition 2.4 can be extended to a linear combination of the marginals, that is, $\mathbf{a}'\mathbf{X} \overset{st}{=} \mathbf{a}'\mathbf{X}^*$, where $\mathbf{a}' = (a_1, a_2, \dots, a_n)$. A particular case of interest is when $a_i = 0, 1$, which leads to the following extension of Definition 2.4.

Definition 2.5
$F$ is said to be a multivariate SUM distribution if it is SUM$_n$ and all $p$-dimensional marginal distributions, $p < n$, are SUM$_p$. That is, $\mathbf{a}'\mathbf{X} \overset{st}{=} \mathbf{a}'\mathbf{X}^*$ for all $\mathbf{a}$ such that $a_k = 0, 1$ and $\sum_{k=1}^{n}a_k = p \le n$.

The following examples show variants of SUM distributions.

Example 2.6
Let $\mathbf{X} = (X_1, X_2, X_3)'$.
a. Consider the distribution with pdf
$$f_\beta(\mathbf{x}) = \frac{1}{(2\pi)^{3/2}}e^{-\frac{1}{2}\mathbf{x}'\mathbf{x}}\left(1+\beta(x_1-x_2)(x_1-x_3)(x_2-x_3)e^{-\frac{1}{2}\mathbf{x}'\mathbf{x}}\right), \quad \mathbf{x}\in\mathbb{R}^3,$$
where $\beta = B^{-1}$ and

(2.8.6)
$$\left|(x_1-x_2)(x_1-x_3)(x_2-x_3)e^{-\frac{1}{2}\mathbf{x}'\mathbf{x}}\right| \le B.$$

The corresponding cf is
$$\Psi_\beta(\mathbf{t}) = e^{-\frac{1}{2}\mathbf{t}'\mathbf{t}} - \frac{\beta i}{2^{9/2}}(t_1-t_2)(t_1-t_3)(t_2-t_3)e^{-\frac{1}{4}\mathbf{t}'\mathbf{t}}, \quad \mathbf{t}\in\mathbb{R}^3,$$
where $\mathbf{t} = (t_1,t_2,t_3)'$. Clearly, $f_\beta(\mathbf{x})$ is SUM$_3$. It can be shown that $f_\beta(x_i,x_j)$, $i\neq j = 1,2,3$, are SUM$_2$ for all $\beta$ satisfying (2.8.6). So $f_\beta(\mathbf{x})$ is a trivariate SUM distribution. The univariate marginals are $N(0,1)$, so the distributions of $\mathbf{a}'\mathbf{X}$ where $\sum_{k=1}^{3}a_k = p \le 3$ are $N(0,p)$, $p = 2,3$, given by the independent trivariate normal model.
b. Consider the distribution with pdf
$$f_\beta(\mathbf{x}) = \frac{1}{(2\pi)^{3/2}}e^{-\frac{1}{2}\mathbf{x}'\mathbf{x}}\left(1+\beta x_2(x_1^2-x_3^2)e^{-\frac{1}{2}\mathbf{x}'\mathbf{x}}\right), \quad \mathbf{x}\in\mathbb{R}^3,$$
where $\beta = B^{-1}$ and

(2.8.7)
$$\left|x_2(x_1^2-x_3^2)e^{-\frac{1}{2}\mathbf{x}'\mathbf{x}}\right| \le B.$$

The corresponding cf is

$$\Psi_\beta(\mathbf{t}) = e^{-\frac{1}{2}\mathbf{t}'\mathbf{t}} - \frac{\beta i}{2^{9/2}}\,t_2(t_1^2-t_3^2)\,e^{-\frac{1}{4}\mathbf{t}'\mathbf{t}}, \quad \mathbf{t}\in\mathbb{R}^3.$$

Clearly, $f_\beta(\mathbf{x})$ is SUM$_3$. It can be shown that for $\beta\neq 0$, $f_\beta(x_1,x_2)$ and $f_\beta(x_2,x_3)$ are not SUM$_2$, while $f_\beta(x_1,x_3)$ is an independent BVN (bivariate normal) for all $\beta$ satisfying (2.8.7). So $f_\beta(\mathbf{x})$ is SUM$_3$ but not a trivariate SUM distribution. The univariate marginals are $N(0,1)$, so the distribution of $X_1+X_2+X_3$ is $N(0,3)$, given by the independent trivariate normal model.

Example 2.7
Let $\mathbf{X} = (X_1, X_2, \dots, X_n)'$ have pdf
$$f_\beta(\mathbf{x}) = \frac{1}{(2\pi)^{n/2}}e^{-\frac{1}{2}\mathbf{x}'\mathbf{x}}\left(1+\beta(x_1^2-x_2^2)e^{-\frac{1}{2}\mathbf{x}'\mathbf{x}}\prod_{k=1}^{n}x_k\right), \quad \mathbf{x}\in\mathbb{R}^n,$$
where $\beta = B^{-1}$ and
$$\left|(x_1^2-x_2^2)e^{-\frac{1}{2}\mathbf{x}'\mathbf{x}}\prod_{k=1}^{n}x_k\right| \le B.$$
The corresponding cf is
$$\Psi_\beta(\mathbf{t}) = e^{-\frac{1}{2}\mathbf{t}'\mathbf{t}} - \frac{\beta\, i^n}{4}\left(\frac{1}{2\sqrt{2}}\right)^n(t_1^2-t_2^2)\,e^{-\frac{1}{4}\mathbf{t}'\mathbf{t}}\prod_{k=1}^{n}t_k, \quad \mathbf{t}\in\mathbb{R}^n,$$
where $\mathbf{t}' = (t_1, t_2, \dots, t_n)$. Clearly, $f_\beta(\mathbf{x})$ is SUM$_n$. It can be shown that all $p$-dimensional marginals, $p < n$, are independent normal. So $f_\beta(\mathbf{x})$ is a multivariate SUM distribution. The univariate marginals are $N(0,1)$, so the distributions of $\mathbf{a}'\mathbf{X}$ where $\sum_{k=1}^{n}a_k = p \le n$ are $N(0,p)$, $p = 2,3,\dots,n$, given by the independent $n$-variate normal model.
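The "clearly" in Examples 2.6 and 2.7 rests on one observation: each perturbation term above carries a polynomial factor, such as $(t_1-t_2)(t_1-t_3)(t_2-t_3)$ or $(t_1^2-t_2^2)$, that vanishes when all arguments coincide. Hence, on the diagonal,

$$\Psi_\beta(t,t,\dots,t) = e^{-\frac{nt^2}{2}} = \prod_{i=1}^{n}\psi_i(t), \quad t\in\mathbb{R},$$

which is exactly (2.8.1), i.e., the SUM$_n$ property.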

The following example is quite interesting in the sense that any subset of size $r < n$ is s.i. but not independent.

Example 2.8
Let $(X_1, X_2, \dots, X_n)$ have pdf given by

$$f_\beta(\mathbf{x}) = \frac{1}{(2\pi)^{n/2}}e^{-\frac{1}{2}\mathbf{x}'\mathbf{x}}\left\{1+\frac{\beta}{(2c)^{\frac{n+6}{2}}}(x_2^2-x_1^2)\left[12c^2-2c(x_2^2-x_1^2)+x_1^2x_2^2\right]\right.$$
$$\left.\times\left[1+\sum_{k=3}^{n}\left(\frac{1}{4c^2}\right)^{k-2}\prod_{i=3}^{k}(2c-x_i^2)\right]e^{-\left(\frac{1}{4c}-\frac{1}{2}\right)\mathbf{x}'\mathbf{x}}\right\}, \quad \mathbf{x}\in\mathbb{R}^n,$$

where $0 < c < \frac{1}{2}$, $\beta = B^{-1}$ and

$$\left|\frac{1}{(2c)^{\frac{n+6}{2}}}(x_2^2-x_1^2)\left[12c^2-2c(x_2^2-x_1^2)+x_1^2x_2^2\right]\times\left[1+\sum_{k=3}^{n}\left(\frac{1}{4c^2}\right)^{k-2}\prod_{i=3}^{k}(2c-x_i^2)\right]e^{-\left(\frac{1}{4c}-\frac{1}{2}\right)\mathbf{x}'\mathbf{x}}\right| \le B.$$

The cf of $f_\beta$ is

$$\Psi_\beta(t_1,t_2,\dots,t_n) = e^{-\frac{1}{2}\sum_{j=1}^{n}t_j^2} + \beta\, e^{-c\sum_{j=1}^{n}t_j^2}\left(\sum_{k=2}^{n}\prod_{i=1}^{k}t_i^2\right)(t_1^2-t_2^2), \quad (t_1,t_2,\dots,t_n)\in\mathbb{R}^n.$$

Then the $X_j$'s are s.i.i.d. $N(0,1)$. The same is also true for the random vectors $(X_1, X_2, \dots, X_j)$, $j = 2,3,\dots,n-1$. So $X_1, X_2, \dots, X_n$ indeed form a sequence of s.i.i.d. rv's.

For $c = \frac{1}{4}$, we have the pdf

$$f_\beta(\mathbf{x}) = \frac{1}{(2\pi)^{n/2}}e^{-\frac{1}{2}\mathbf{x}'\mathbf{x}}\left\{1+\beta\, 2^{\frac{n+4}{2}}(x_2^2-x_1^2)\left[3-2(x_2^2-x_1^2)+4x_1^2x_2^2\right]\right.$$
$$\left.\times\left[1+\sum_{k=3}^{n}2^{2k-5}\prod_{i=3}^{k}(1-2x_i^2)\right]e^{-\frac{1}{2}\mathbf{x}'\mathbf{x}}\right\}, \quad \mathbf{x}\in\mathbb{R}^n,$$

where $\beta = B_1^{-1}$ and

$$\left|2^{\frac{n+4}{2}}(x_2^2-x_1^2)\left[3-2(x_2^2-x_1^2)+4x_1^2x_2^2\right]\times\left[1+\sum_{k=3}^{n}2^{2k-5}\prod_{i=3}^{k}(1-2x_i^2)\right]e^{-\frac{1}{2}\mathbf{x}'\mathbf{x}}\right| \le B_1.$$

The corresponding cf is

$$\Psi_\beta(t_1,t_2,\dots,t_n) = e^{-\frac{1}{2}\sum_{j=1}^{n}t_j^2} + \beta\, e^{-\frac{1}{4}\sum_{j=1}^{n}t_j^2}\left(\sum_{k=2}^{n}\prod_{i=1}^{k}t_i^2\right)(t_1^2-t_2^2), \quad (t_1,t_2,\dots,t_n)\in\mathbb{R}^n.$$

We end this subsection with a characterization of the multivariate SUM distribution.

Theorem 2.15
Let $\varphi_j$, $j = 1,2,\dots,n$, be cf's and let
$$\Psi_\beta(t_1,t_2,\dots,t_n) = \prod_{j=1}^{n}\varphi_j(t_j) + \beta q(t_1,t_2,\dots,t_n),$$
where $q(\mathbf{t})$, $\mathbf{t}\in\mathbb{R}^n$, is non-negative definite, continuous at the origin, and $q(t,t,\dots,t) = 0$ for $t\in\mathbb{R}$. Then, for some constant $\beta$, $\Psi_\beta$ is the cf of a SUM distribution if $|\Psi_\beta(\mathbf{t})| \le 1$ for all $\mathbf{t}\in\mathbb{R}^n$.

Proof
$\Psi_\beta$ is non-negative definite, continuous at the origin, and $\Psi_\beta(\mathbf{0}) = 1$. Then by Bochner's Theorem $\Psi_\beta$ is a cf.

Remark 2.14
Construction of a desirable $\Psi_\beta$ boils down to choosing an appropriate function $q(t_1, t_2, \dots, t_n)$.

2.9. Equivalence of Sub-Independence and Independence in Special Cases
We raised earlier the question of under what conditions sub-independence implies independence. It is possible to answer this question if the underlying joint distribution has a specific form; for example, jointly normally distributed and uncorrelated rv's are independent. Mohammadpour (2010) gave an answer to our question which again is based on a specific underlying distribution, as follows.

Theorem 2.16
Let $0 < \alpha < 1$ and let $X_1, X_2, \dots, X_n$ be jointly $\alpha$-stable rv's. Then $X_1, X_2, \dots, X_n$ are s.i. if and only if they are independent.

Proposition 2.11
Let $0 < \alpha < 1$. A stable random vector $\mathbf{X} = (X_1, X_2, \dots, X_n)'$ with $-\ln$ cf (2.4.3) is s.i. if and only if all of its components $X_j$'s are independent rv's.

Proposition 2.12
Let $A = (a_{jk})$ be a positive definite matrix such that $a_{jk} \le 0$ for $j\neq k$, $j,k = 1,2,\dots,n$. A sub-Gaussian random vector $\mathbf{X}$ with

$$-\ln\varphi_{\mathbf{X}}(\mathbf{t}) = (\mathbf{t}'A\mathbf{t})^{\alpha/2} - i\mathbf{t}'\boldsymbol{\mu}, \quad 0 < \alpha \le 2,$$

is s.i. if and only if $\alpha = 2$ and $A$ is a diagonal matrix (i.e., the components of $\mathbf{X}$ are independent and normally distributed).

Proposition 2.13
A Gaussian random vector $\mathbf{X} = (X_1, X_2, \dots, X_n)'$ is s.i. if and only if all its components $X_j$'s are independent rv's.

The final result relates the SUM distributions to the well-known notions of POD (Positive Orthant Dependence) and NOD (Negative Orthant Dependence), defined as follows. We mention here that Mohammadpour (2010) has implicitly used POD for the stable distribution, which was discussed earlier.

Definition 2.6
A multivariate distribution $F$ is said to be POD (NOD) if
$$\bar{F}(x_1,x_2,\dots,x_n) \ge (\le) \prod_{i=1}^{n}\bar{F}_i(x_i),$$
where $\bar{F}(x_1,x_2,\dots,x_n) = P(X_1 > x_1, X_2 > x_2, \dots, X_n > x_n)$ and $\bar{F}_i(x_i) = P(X_i > x_i)$.

It should be noted that POD (NOD) are the weakest among all the existing notions of dependence. The special case $n = 2$ is known as positive (negative) quadrant dependence. It is known that under POD (NOD), if $\rho(X_i, X_j) = 0$, then $X_i$ and $X_j$ are pairwise independent, without implying any higher order dependency among the $X_i$'s. For details about POD (NOD) and other notions of dependence see Lehmann (1966) and Barlow and Proschan (1981). The following result, due to Ebrahimi et al. (2010), shows that under POD (NOD), the SUM model implies independence.

Lemma 2.4
Let $\mathbf{X} = (X_1, X_2, \dots, X_n)'$ be a non-negative random vector with a POD (NOD) distribution $F$. Then $F$ is a SUM distribution if and only if $F(\mathbf{x}) = \prod_{i=1}^{n}F_i(x_i)$, where $F_i$ is the cdf of $X_i$.

2.10. Dissociation and Sub-Independence
De Paula (2008) presented a bivariate distribution for which

(2.10.1)
$$E(Y^n|X) = E(Y^n) \quad \text{and} \quad E(X^n|Y) = E(X^n), \quad n = 1,2,\dots,$$

i.e., $X^m$ and $Y^n$ are uncorrelated for all positive integers $m$ and $n$, but $X$ and $Y$ are not independent. De Paula's goal was to exhibit a measure of dissociation between two dependent rv's $X$ and $Y$ beyond the concept of uncorrelatedness of $X$ and $Y$. Hamedani and Volkmer (2009a,b) showed that the rv's considered in De Paula (2008) are not s.i. They then presented a bivariate distribution for which (2.10.1) holds and $X$ and $Y$ are s.i. but not independent. This provides a stronger measure of dissociation between $X$ and $Y$. Here is the example.

Example 2.9
Let

$$\theta(s) = \begin{cases}Ae^{1/(4s^2-1)}, & |s| < \frac{1}{2},\\ 0, & \text{otherwise},\end{cases}$$

where $A > 0$ is chosen so that

$$\int_{-\infty}^{\infty}\theta^2(s)\,ds = 1.$$

The inverse Fourier transform $g$ of $\theta$ is

$$g(x) = (2\pi)^{-1}A\int_{-1/2}^{1/2}e^{1/(4s^2-1)}e^{-isx}\,ds = \pi^{-1}A\int_{0}^{1/2}e^{1/(4s^2-1)}\cos(sx)\,ds.$$

Define $\varphi$ by

$$\varphi(s) = \int_{-\infty}^{\infty}\theta(u)\theta(s+u)\,du.$$

Then $\varphi$ is the cf of a pdf $f(x)$, and $\varphi(s) = 0$ for $|s| \ge 1$. In fact, we have

$$f(x) = 2\pi g^2(x) = 2\pi^{-1}A^2\left(\int_{0}^{1/2}e^{1/(4s^2-1)}\cos(sx)\,ds\right)^2.$$

Now define

$$h(x,y) = f(x)f(y)[1+\cos(x)\cos(3y)], \quad (x,y)\in\mathbb{R}^2.$$

Then $h(x,y)$ is a pdf with marginals $f(x)$, $f(y)$, and it can be shown that $\mathrm{Cov}(X^m, Y^n) = 0$ for $m,n = 1,2,\dots$ and that $X$ and $Y$ are s.i.
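A numerical sanity check of this construction (a sketch of ours; the grid sizes and ranges are arbitrary discretization choices):

```python
import numpy as np

# Example 2.9: theta(s) = A exp{1/(4s^2-1)} on (-1/2,1/2), with ||theta||_2 = 1.
s = np.linspace(-0.5, 0.5, 2001)[1:-1]      # open interval avoids division by 0
theta = np.exp(1.0/(4*s*s - 1.0))
theta /= np.sqrt(np.trapz(theta*theta, s))

# g = inverse Fourier transform of theta (a cosine integral, theta being even);
# f = 2*pi*g^2 is then a pdf whose cf phi vanishes off (-1, 1).
x = np.linspace(-40.0, 40.0, 2001)
g = np.trapz(theta*np.cos(np.outer(x, s)), s, axis=1)/(2*np.pi)
f = 2*np.pi*g*g
print("mass of f:", np.trapz(f, x))         # ~ 1

# For h(x,y) = f(x)f(y)[1 + cos(x)cos(3y)], Cov(X^m, Y^n) factors as
# (int x^m f cos x dx)(int y^n f cos 3y dy); both factors are ~ 0 up to
# quadrature error, because phi and all its derivatives vanish at t = 1, 3.
for m in range(4):
    print(m, np.trapz(x**m*f*np.cos(x), x), np.trapz(x**m*f*np.cos(3*x), x))
```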

Remark 2.15
a. A general version of the above example, with the corresponding technical derivations, can be found in Hamedani and Volkmer (2009a).
b. One can easily generalize the above example to a random vector $(X_1, X_2, \dots, X_n)$.

Acknowledgments
The author is grateful to Barry Arnold for his numerous communications regarding this work, in particular his elegant proof of (1.7) and his suggestion to construct Example 2.8. Thanks go to Hans Volkmer for his invaluable suggestion concerning the definition of sub-independent continuous random variables in terms of events. Thanks also go to Adel Mohammadpour for his careful reading of the first draft of the article. Finally, we are also grateful to the referees for their constructive suggestions improving the presentation of the content of this article.

References
Ahsanullah, M., Bansal, N., Hamedani, G. G. (1991). On characterizations of normal distributions. Calcutta Statist. Assoc. Bull. 41:157-162.
Ahsanullah, M., Hamedani, G. G. (1988). Some characterizations of normal distribution. Calcutta Statist. Assoc. Bull. 37:95-99.
Bansal, N., Hamedani, G. G., Zhang, H., Behboodian, J. (1998). A note on Cauchy, chi-square and normal distributions. Technical Report 448, Marquette University, Dept. of MSCS.
Barlow, R. E., Proschan, F. (1981). Statistical Theory of Reliability and Life Testing: Probability Models. Silver Spring, MD: To Begin With.
Bazargan, H., Bahai, H., Aminzadeh-Gohari, A. (2007). Calculating the return value using a mathematical model of significant wave height. J. Marine Sci. Technol. 12:34-42.
Behboodian, J. (1989). Symmetric sum and symmetric product of two independent random variables. J. Theor. Probab. 2:267-270.
Chung, K. L. (1974). A Course in Probability Theory. New York: Harcourt, Brace and World.
De Paula, A. (2008). Conditional moments and independence. Amer. Statistician 62:219-221.
Durairajan, T. M. (1979). A classroom note on sub-independence. Gujarat Statist. Rev. VI:17-18.
Eaton, M. L. (1966). Characterization of distributions by the identical distribution of linear forms. J. Appl. Probab. 3:481-494.
Ebrahimi, N., Hamedani, G. G., Soofi, E. S., Volkmer, H. (2010). A class of models for uncorrelated random variables. J. Multivariate Anal. 101:1859-1871.
Feller, W. (1971). An Introduction to Probability Theory and Its Applications. Vol. 2. New York: Wiley.
Hamedani, G. G. (1983). Some remarks on a paper of Durairajan. Gujarat Statist. Rev. X:29-30.
Hamedani, G. G. (1995). On symmetric sum of sub-independent random variables. Calcutta Statist. Assoc. Bull. 45:119-124.
Hamedani, G. G., Key, E. S., Volkmer, H. (2004). Solution to a functional equation and its application to stable and stable-type distributions. Statist. Probab. Lett. 69:1-9.
Hamedani, G. G., Tata, M. N. (1975). On the determination of the bivariate normal distribution from distributions of linear combinations of the variables. Amer. Mathemat. Month. 82:913-915.
Hamedani, G. G., Volkmer, H. (2003). A characterization of symmetric random variables. Commun. Statist. Theor. Meth. 32:723-728.
Hamedani, G. G., Volkmer, H. (2009a). Conditional moments, sub-independence and independence. Technical Report 478, Marquette University, Dept. of MSCS.
Hamedani, G. G., Volkmer, H. (2009b). Letter to the Editor. Amer. Statistician 63:295.
Hamedani, G. G., Volkmer, H., Behboodian, J. (2013). A note on sub-independent random variables and a class of bivariate mixture. Studia Sci. Math. Hungar. 49:19-25.
Hamedani, G. G., Walter, G. G. (1984a). A fixed point theorem and its application to the central limit theorem. Arch. Math. 43:258-264.
Hamedani, G. G., Walter, G. G. (1984b). On properties of sub-independent random variables. Bull. Iran. Math. Soc. 11:45-51.
Hamedani, G. G., Walter, G. G. (1985). A characterization of reciprocal random variables. Pub. Instit. Stat. Univ. XXX:45-60.
Hamedani, G. G., Walter, G. G. (1987). On self-reciprocal random variables. Pub. Instit. Stat. Univ. XXXII:45-66.
Kagan, A. M., Linnik, Y., Rao, C. R. (1973). Characterization Problems in Mathematical Statistics. New York: Wiley.
Laha, R. G., Lukacs, E. (1965). On a linear form whose distribution is identical with that of a monomial. Pacific J. Math. 15:207-214.
Lehmann, E. L. (1966). Some concepts of dependence. Ann. Math. Statist. 37:1137-1153.
Lukacs, E. (1956). Characterization of populations by properties of suitable statistics. Proc. Third Berkeley Symp. Mathemat. Statist. Probab. Vol. 2. Berkeley, CA: University of California Press, pp. 195-214.
Lukacs, E. (1970). Characteristic Functions. 2nd ed. London: Griffin.
Mohammadpour, A. (2004). Sub-independence and association for Cauchy random variables. XXXVIèmes Journées de Statistique, SFDS, http://www.agro-montpellier.fr/sfds/CD/textes.htm.
Mohammadpour, A. (2010). Sub-independence of stable random variables. J. Statist. Theor. Applic. 9:29-40.
Mohammadpour, A., Safe, A. (2002). Sub-independence of SαS random vectors with discrete spectral measure. 6th Iran. Statist. Conf., Tarbiat Modarres University, Tehran, Iran.
Trotter, H. F. (1959). An elementary proof of the central limit theorem. Arch. Math. 10:226-234.