Rutcor Research Report

Discrete Moment Problem with the Given Shape of the Distribution

Ersoy Subasi^a, Mine Subasi^b, András Prékopa^c

RRR 41-2005, DECEMBER, 2005

RUTCOR, Rutgers Center for Operations Research, Rutgers University, 640 Bartholomew Road, Piscataway, New Jersey 08854-8003. Telephone: 732-445-3804, Telefax: 732-445-5472, Email: [email protected], http://rutcor.rutgers.edu/~rrr

^a RUTCOR, Rutgers Center for Operations Research, 640 Bartholomew Road, Piscataway, NJ 08854-8003, USA. Email: [email protected]
^b RUTCOR, Rutgers Center for Operations Research, 640 Bartholomew Road, Piscataway, NJ 08854-8003, USA. Email: [email protected]
^c RUTCOR, Rutgers Center for Operations Research, 640 Bartholomew Road, Piscataway, NJ 08854-8003, USA. Email: [email protected]


Abstract. Discrete moment problems with finite, preassigned supports and given shapes of the distributions are formulated and used to obtain sharp upper and lower bounds for expectations of convex functions of discrete random variables, as well as for probabilities that at least one out of n events occurs. The bounds are based on the knowledge of some of the power moments of the random variables involved, or of the binomial moments of the number of events which occur. The bounding problems are formulated as LP's, and dual feasible basis structure theorems as well as the application of the dual method provide us with the results. Applications in PERT and reliability are presented.

Key words: Power Moment Problem, Binomial Moment Problem, Linear Programming, Logconcavity.

1 Introduction

Let ξ be a random variable, the possible values of which are known to be the nonnegative numbers z0 < z1 < ... < zn. Let pi = P(ξ = zi), i = 0, 1, ..., n. Suppose that these probabilities are unknown, but either the power moments

\[
\mu_k = E(\xi^k), \quad k = 1, \dots, m,
\]

or the binomial moments

\[
S_k = E\binom{\xi}{k}, \quad k = 1, \dots, m,
\]

where m < n, are known.

The starting points of our investigation are the following linear programming problems:

min(max){f(z0)p0 + f(z1)p1 + ... + f(zn)pn}

subject to

\[
\begin{aligned}
p_0 + p_1 + \dots + p_n &= 1\\
z_0 p_0 + z_1 p_1 + \dots + z_n p_n &= \mu_1\\
z_0^2 p_0 + z_1^2 p_1 + \dots + z_n^2 p_n &= \mu_2 \qquad (1.1)\\
&\ \ \vdots\\
z_0^m p_0 + z_1^m p_1 + \dots + z_n^m p_n &= \mu_m
\end{aligned}
\]

p0 ≥ 0, p1 ≥ 0, ..., pn ≥ 0,

min(max){f(z0)p0 + f(z1)p1 + ... + f(zn)pn}

subject to

\[
\begin{aligned}
p_0 + p_1 + \dots + p_n &= 1\\
z_0 p_0 + z_1 p_1 + \dots + z_n p_n &= S_1\\
\binom{z_0}{2}p_0 + \binom{z_1}{2}p_1 + \dots + \binom{z_n}{2}p_n &= S_2 \qquad (1.2)\\
&\ \ \vdots\\
\binom{z_0}{m}p_0 + \binom{z_1}{m}p_1 + \dots + \binom{z_n}{m}p_n &= S_m
\end{aligned}
\]

p0 ≥ 0, p1 ≥ 0, ..., pn ≥ 0.

Problems (1.1) and (1.2) are called the power and binomial moment problems, respectively, and have been studied extensively in Prékopa (1988; 1989a, b; 1990) and Boros and Prékopa (1989). The two problems can be transformed into each other by the use of a simple linear transformation (see Prékopa, 1995, Section 5.6). Let A; a0, a1, ..., an and b denote the matrix of the equality constraints in either problem, its columns and the vector of the right-hand side values. We will alternatively use the notation fk instead of f(zk). In this paper we specialize problems (1.1) and (1.2) in the following manner.

(1) In case of problem (1.1) we assume that the function f has positive divided differences of order m + 1, where m is some fixed nonnegative integer satisfying 0 ≤ m ≤ n. The optimum values of problem (1.1) provide us with sharp lower and upper bounds for E[f(ξ)].

(2) In case of problem (1.2) we assume that zi = i, i = 0, ..., n and f0 = 0, fi = 1, i = 1, ..., n. The problem can be used in connection with arbitrary events A1, ..., An, to obtain sharp lower and upper bounds for the probability of the union. In fact, if we define S0 = 1 and

\[
S_k = \sum_{1 \le i_1 < \dots < i_k \le n} P(A_{i_1} \cdots A_{i_k}), \quad k = 1, \dots, n,
\]

then by a well-known theorem (see, e.g., Prékopa, 1995) we have the equation

\[
S_k = E\binom{\xi}{k}, \quad k = 1, \dots, n, \tag{1.3}
\]

where ξ is the number of those events which occur. The equality constraints in (1.2) are just the same as S0 = 1 and the equations in (1.3) for k = 1, ..., m, and the objective function is the probability of ξ ≥ 1 under the distribution p0, ..., pn. The distribution, however, is allowed to vary subject to the constraints, hence the optimum value of problem (1.2) provides us with the best possible bounds for the probability P(ξ ≥ 1), given S1, ..., Sm.

For small m values (m ≤ 4) closed form bounds are presented in the literature. For power moment bounds see Prékopa (1990, 1995). Bounds for the probability of the union have been obtained by Fréchet (1940, 1943) when m = 1, Dawson and Sankoff (1967) when m = 2, Kwerel (1975) when m ≤ 3, and Boros and Prékopa (1989) when m ≤ 4. In the last two papers bounds for the probability that at least r events occur are also presented. For other closed form probability bounds see Prékopa (1995), Galambos and Simonelli (1996). Prékopa (1988, 1989, 1990) discovered that the probability bounds based on the binomial and power moments of the number of events that occur, out of a given collection A1, ..., An, can be obtained as optimum values of discrete moment problems (DMP) and showed that for arbitrary m values simple dual algorithms solve problems (1.1) and (1.2) if f is of type (1) or (2) (and if fr = 1, fk = 0, k ≠ r).

In this paper we formulate and use moment problems with finite, preassigned supports and with given shapes of the probability distribution to obtain sharp lower and upper bounds for unknown probabilities and expectations of higher order convex functions of discrete random variables. We assume that the probability distribution {pi} is either decreasing (Type 1), or increasing (Type 2), or unimodal with a known modus (Type 3). The reasoning goes along the lines presented in the above cited papers by Prékopa. In Section 2 some basic notions and theorems are given.
In Sections 3 and 4 we use the dual feasible basis structure theorems (Prékopa, 1988, 1989, 1990) to obtain sharp bounds for E[f(ξ)] and P(ξ ≥ r) in case of problems where the first two moments are known. In Section 5 we give numerical examples to compare the sharp bounds obtained by the original binomial moment problem and the sharp bounds obtained by the transformed problems: Type 1, Type 2 and Type 3. In Section 6 we present three examples for the application of our bounding technique, where shape information about the unknown probability distribution can be used.
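Problems (1.1) and (1.2) are ordinary linear programs, so on a small support the sharp bounds can be checked numerically. The sketch below enumerates all bases of problem (1.1) with m = 2 and keeps the best primal feasible one; the support z = 0, ..., 5, the function f(z) = z³ and the moment values are illustrative assumptions, not data from the paper.

```python
from itertools import combinations

# Brute-force illustration of the power moment problem (1.1) for m = 2:
# the optimum of the LP is attained at a basic feasible solution, so on
# a small support all bases (column triples) can be enumerated directly.
z = list(range(6))
f = [t**3 for t in z]                 # positive divided differences of order 3
mu1, mu2 = 2.0, 5.0                   # assumed known power moments
cols = [(1.0, float(t), float(t*t)) for t in z]   # a_i = (1, z_i, z_i^2)^T
b = (1.0, mu1, mu2)

def det3(c0, c1, c2):
    """Determinant of the 3x3 matrix with columns c0, c1, c2."""
    return (c0[0]*(c1[1]*c2[2] - c1[2]*c2[1])
          - c1[0]*(c0[1]*c2[2] - c0[2]*c2[1])
          + c2[0]*(c0[1]*c1[2] - c0[2]*c1[1]))

lo, hi = float("inf"), float("-inf")
for i, j, k in combinations(range(len(z)), 3):
    d = det3(cols[i], cols[j], cols[k])
    if abs(d) < 1e-12:
        continue
    # Cramer's rule for the basic solution (p_i, p_j, p_k).
    pi = det3(b, cols[j], cols[k]) / d
    pj = det3(cols[i], b, cols[k]) / d
    pk = det3(cols[i], cols[j], b) / d
    if min(pi, pj, pk) >= -1e-9:      # primal feasible basis
        val = f[i]*pi + f[j]*pj + f[k]*pk
        lo, hi = min(lo, val), max(hi, val)

print(lo, hi)   # sharp bounds for E[f(xi)]: 13.0 and 16.0 here
```

Since f(z) = z³ has positive third order divided differences, Theorem 1 of Section 2 predicts that the minimum is attained at a basis of the form (a0, ai, ai+1) and the maximum at one of the form (aj, aj+1, an), which the enumeration confirms.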

2 Basic Notions and Theorems

Let f be a function on the discrete set Z = {z0, ..., zn}, z0 < z1 < ... < zn. The first order divided differences of f are defined by

\[
[z_i, z_{i+1}]f = \frac{f(z_{i+1}) - f(z_i)}{z_{i+1} - z_i}, \quad i = 0, 1, \dots, n-1.
\]

The kth order divided differences are defined recursively by

\[
[z_i, \dots, z_{i+k}]f = \frac{[z_{i+1}, \dots, z_{i+k}]f - [z_i, \dots, z_{i+k-1}]f}{z_{i+k} - z_i}, \quad k \ge 2.
\]

The function f is said to be kth order convex if all of its kth order divided differences are positive.
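The recursive definition can be turned into a small numerical checker for kth order convexity; the helper function and the test points below are illustrative, not code from the paper.

```python
# Recursive divided differences over points z_0 < ... < z_k
# (standard definition).
def divided_difference(z, fz):
    """Divided difference [z_0, ..., z_k]f over the given points."""
    if len(z) == 1:
        return fz[0]
    return (divided_difference(z[1:], fz[1:])
            - divided_difference(z[:-1], fz[:-1])) / (z[-1] - z[0])

# f(z) = z**3: every third order divided difference equals 1 (the
# leading coefficient), and fourth order ones vanish.
z = [0.0, 1.0, 2.0, 4.0, 7.0]
f = [t**3 for t in z]
print(divided_difference(z[:4], f[:4]))   # -> 1.0
print(divided_difference(z, f))           # -> 0.0
```

In particular, any polynomial of degree m + 1 with positive leading coefficient is (m + 1)st order convex on every discrete set, which is the standing assumption on f in the problems below.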

Theorem 1 (Prékopa, 1988, 1989a, 1989b, 1990) Suppose that all (m + 1)st divided differences of the function f(z), z ∈ {z0, z1, ..., zn}, are positive. Then, in problems (1.1) and (1.2), all bases are dual nondegenerate and the dual feasible bases have the following structures, presented in terms of the subscripts of the basic vectors:

                 m + 1 even                        m + 1 odd
  min problem    {j, j+1, ..., k, k+1}             {0, j, j+1, ..., k, k+1}
  max problem    {0, j, j+1, ..., k, k+1, n}       {j, j+1, ..., k, k+1, n},

where in all parentheses the numbers are arranged in increasing order.

The following theorem is a consequence of the dual feasible basis structure theorem of Prékopa (1988, 1989a, 1989b, 1990) for the case r = 1.

Theorem 2 Suppose that f0 = 0, fi = 1, i = 1, ..., n. Then every dual feasible basis subscript set has one of the following structures:

minimization problem, m + 1 even:
• {0, i, i+1, ..., j, j+1, k, k+1, ..., t, t+1, n};

minimization problem, m + 1 odd:
• {0, i, i+1, ..., j, j+1, k, k+1, ..., t, t+1};

maximization problem, m + 1 even:
• I ⊂ {1, ..., n}, if n − 1 ≥ m,
• {1, i, i+1, ..., j, j+1, k, k+1, ..., t, t+1, n}, if 1 ≤ n − 1,
• {0, 1, i, i+1, ..., j, j+1, k, k+1, ..., t, t+1}, if 1 ≤ n;

maximization problem, m + 1 odd:
• I ⊂ {1, ..., n}, if n − 1 ≥ m,
• {1, i, i+1, ..., j, j+1, k, k+1, ..., t, t+1}, if 1 ≤ n,
• {0, 1, i, i+1, ..., j, j+1, k, k+1, ..., t, t+1, n}, if 1 ≤ n − 1,

where in all parentheses the numbers are arranged in increasing order. Those bases for which I ⊂ {1, ..., n} are dual nondegenerate in the maximization problem if n > m + 1. The bases in all other cases are dual nondegenerate.

3 The Case of the Power Moment Problem

In this section we consider the power moment problem (1.1). We assume that the distribution is either decreasing, or increasing, or unimodal with a known and fixed modus. We give sharp lower and upper bounds for E[f(ξ)] in case of three problem types: the probabilities p0, ..., pn (1) are decreasing; (2) are increasing; (3) form a unimodal sequence.

3.1 TYPE 1: p0 ≥ p1 ≥ ... ≥ pn

We assume that the probabilities p0, ..., pn are unknown but satisfy the above inequalities and the divided differences of order m + 1 of the function f are positive. Let us introduce the variables vi, i = 0, 1, ..., n as follows:

p0 − p1 = v0, ..., pn−1 − pn = vn−1, pn = vn.

Our assumptions imply v0, ..., vn ≥ 0. If we write up problem (1.1) by the use of v0, ..., vn, then we obtain

min(max){f0v0 + (f0 + f1)v1 + ... + (f0 + ... + fn)vn} subject to

a0v0 + (a0 + a1)v1 + ... + (a0 + ... + an)vn = b (3.1)

v0 ≥ 0, v1 ≥ 0, ..., vn ≥ 0,

where ai = (1, zi, ..., zi^m)^T, i = 0, ..., n, and b = (1, µ1, ..., µm)^T. Let A be the (m + 1) × (n + 1) coefficient matrix. In order to use Theorem 1, we need to show that all minors of order m + 1 from A and all minors of order m + 2 from \(\begin{pmatrix} f^T \\ A \end{pmatrix}\) are positive, where f = (f0, f0 + f1, ..., f0 + ... + fn)^T. To show that the first assertion is true we consider the (m + 1) × (m + 1) determinant taken from A:

\[
\det\bigl( a_0 + \dots + a_{i_1},\; a_0 + \dots + a_{i_2},\; \dots,\; a_0 + \dots + a_{i_{m+1}} \bigr), \tag{3.2}
\]

where 0 ≤ i1 < ... < im+1 ≤ n. One can easily show that the determinant (3.2) is equal to

\[
\det\bigl( a_0 + \dots + a_{i_1},\; a_{i_1+1} + \dots + a_{i_2},\; \dots,\; a_{i_m+1} + \dots + a_{i_{m+1}} \bigr)
\]

\[
= \begin{vmatrix}
i_1 + 1 & i_2 - i_1 & \dots & i_{m+1} - i_m\\
z_0 + \dots + z_{i_1} & z_{i_1+1} + \dots + z_{i_2} & \dots & z_{i_m+1} + \dots + z_{i_{m+1}}\\
\vdots & \vdots & & \vdots\\
z_0^m + \dots + z_{i_1}^m & z_{i_1+1}^m + \dots + z_{i_2}^m & \dots & z_{i_m+1}^m + \dots + z_{i_{m+1}}^m
\end{vmatrix}. \tag{3.3}
\]

The determinant (3.3) can be written as a sum of nonnegative determinants. Notice that some of these terms are Vandermonde determinants. Thus, any (m + 1) × (m + 1) determinant taken from the matrix of the equality constraints is positive.

Similarly, the following (m + 2) × (m + 2) determinants taken from \(\begin{pmatrix} f^T \\ A \end{pmatrix}\) are positive, where f is the same as before:

\[
\begin{vmatrix}
f_0 + \dots + f_{i_1} & f_0 + \dots + f_{i_2} & \dots & f_0 + \dots + f_{i_{m+1}}\\
a_0 + \dots + a_{i_1} & a_0 + \dots + a_{i_2} & \dots & a_0 + \dots + a_{i_{m+1}}
\end{vmatrix}, \tag{3.4}
\]

and this is the same as

\[
\begin{vmatrix}
f_0 + \dots + f_{i_1} & f_{i_1+1} + \dots + f_{i_2} & \dots & f_{i_m+1} + \dots + f_{i_{m+1}}\\
a_0 + \dots + a_{i_1} & a_{i_1+1} + \dots + a_{i_2} & \dots & a_{i_m+1} + \dots + a_{i_{m+1}}
\end{vmatrix}
\]

\[
= \begin{vmatrix}
f_0 + \dots + f_{i_1} & f_{i_1+1} + \dots + f_{i_2} & \dots & f_{i_m+1} + \dots + f_{i_{m+1}}\\
i_1 + 1 & i_2 - i_1 & \dots & i_{m+1} - i_m\\
z_0 + \dots + z_{i_1} & z_{i_1+1} + \dots + z_{i_2} & \dots & z_{i_m+1} + \dots + z_{i_{m+1}}\\
\vdots & \vdots & & \vdots\\
z_0^m + \dots + z_{i_1}^m & z_{i_1+1}^m + \dots + z_{i_2}^m & \dots & z_{i_m+1}^m + \dots + z_{i_{m+1}}^m
\end{vmatrix}. \tag{3.5}
\]

Since f has positive divided differences of order m + 1, the determinant (3.5) is positive. Thus, we can use Theorem 1 to find the dual feasible bases in problem (3.1). Below we present, by the use of Theorem 1, the sharp lower and upper bounds for E[f(ξ)] for the cases m = 1 and m = 2, respectively.

Case 1. m = 1

If m = 1, then the LP in (1.1) is equivalent to

min(max){f0v0 + (f0 + f1)v1 + ... + (f0 + ... + fn)vn}

subject to

v0 + 2v1 + ... + (n + 1)vn = 1

z0v0 + (z0 + z1)v1 + ... + (z0 + ... + zn)vn = µ1 (3.6)

v0 ≥ 0, v1 ≥ 0, ..., vn ≥ 0.

Let A be the coefficient matrix of the equality constraints in (3.6). Since m + 1 is even, by Theorem 1, the dual feasible basis of the minimization problem in (3.6), that we designate by Bmin, has the following form:

Bmin = (aj, aj+1),

where 0 ≤ j ≤ n − 1. Similarly, by Theorem 1, the only dual feasible basis of the maximization problem in (3.6), that we designate by Bmax, is given by

Bmax = (a0, an).

First we notice that Bmin is also primal feasible if the following conditions are satisfied:

\[
\begin{aligned}
(j+1)v_j + (j+2)v_{j+1} &= 1\\
(z_0 + \dots + z_j)v_j + (z_0 + \dots + z_{j+1})v_{j+1} &= \mu_1
\end{aligned}
\]

\[
v_j \ge 0, \quad v_{j+1} \ge 0.
\]

Therefore, by solving the equations given above, Bmin is an optimal basis if the index j is determined by the inequality

\[
\frac{z_0 + \dots + z_j}{j+1} \le \mu_1 \le \frac{z_0 + \dots + z_{j+1}}{j+2}. \tag{3.7}
\]
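The index j in (3.7) exists whenever µ1 does not exceed the mean of the uniform distribution on z0, ..., zn (the largest mean a decreasing distribution can have), and since the cumulative averages increase with j it can be located by a linear scan. The support below is an illustrative assumption.

```python
# Locate the index j of inequality (3.7) for a given first moment mu1.
def find_j(z, mu1):
    """Return j with (z_0+...+z_j)/(j+1) <= mu1 <= (z_0+...+z_{j+1})/(j+2)."""
    for j in range(len(z) - 1):
        if sum(z[:j+1])/(j+1) <= mu1 <= sum(z[:j+2])/(j+2):
            return j
    raise ValueError("mu1 not attainable by a decreasing distribution")

z = list(range(6))       # support 0, ..., 5
print(find_j(z, 1.8))    # -> 3, since 6/4 = 1.5 <= 1.8 <= 10/5 = 2.0
```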

In the maximization problem (3.6) there is just one dual feasible basis. It follows that it must also be primal feasible. In case of m = 1 the bounding of E[f(ξ)] is based on the knowledge of µ1. Using the optimal bases Bmin and Bmax, we obtain the following sharp lower and upper bounds for E[f(ξ)]:

\[
\frac{\sum_{i=0}^{j+1} z_i - (j+2)\mu_1}{(j+1)z_{j+1} - \sum_{i=0}^{j} z_i}\,[f_0 + \dots + f_j]
+ \frac{(j+1)\mu_1 - \sum_{i=0}^{j} z_i}{(j+1)z_{j+1} - \sum_{i=0}^{j} z_i}\,[f_0 + \dots + f_{j+1}]
\le E[f(\xi)]
\]
\[
\le \frac{\sum_{i=0}^{n} z_i - (n+1)\mu_1}{\sum_{i=0}^{n} z_i - (n+1)z_0}\, f_0
+ \frac{\mu_1 - z_0}{\sum_{i=0}^{n} z_i - (n+1)z_0}\,[f_0 + \dots + f_n], \tag{3.8}
\]

where j satisfies (3.7).

Case 2. m = 2

If m = 2, then the third order divided differences of f are positive. The bounding of E[f(ξ)] is based on the knowledge of µ1 and µ2. In case of m = 2, problem (1.1) is equivalent to the following problem:

min(max){f0v0 + (f0 + f1)v1 + ... + (f0 + ... + fn)vn}

subject to v0 + 2v1 + ... + (n + 1)vn = 1

z0v0 + (z0 + z1)v1 + ... + (z0 + ... + zn)vn = µ1 (3.9)

\[
z_0^2 v_0 + (z_0^2 + z_1^2)v_1 + \dots + (z_0^2 + \dots + z_n^2)v_n = \mu_2
\]

v0 ≥ 0, v1 ≥ 0, ..., vn ≥ 0.

Let A be the coefficient matrix of the equality constraints in (3.9). Since m + 1 is odd, any dual feasible basis in the minimization (maximization) problem is of the form:

Bmin = (a0, ai, ai+1)   (Bmax = (aj, aj+1, an)).

Let us introduce the notations:

\[
\begin{aligned}
\Sigma^2_{i,j} &= i\sum_{t=i}^{j} z_t^2 - (j-i+1)\sum_{t=0}^{i-1} z_t^2, &
\Sigma_{i,j} &= i\sum_{t=i}^{j} z_t - (j-i+1)\sum_{t=0}^{i-1} z_t,\\
\sigma^2_{i,j} &= \sum_{t=i}^{j} z_t^2 - (j-i+1)z_{i-1}^2, &
\sigma_{i,j} &= \sum_{t=i}^{j} z_t - (j-i+1)z_{i-1},\\
\gamma^2_{i,j} &= \sum_{t=i}^{j} z_t^2 - (j-i+1)z_{j+1}^2, &
\gamma_{i,j} &= \sum_{t=i}^{j} z_t - (j-i+1)z_{j+1},\\
\alpha^2_{i,j} &= (n-j)\sum_{t=i}^{j} z_t^2 - (j-i+1)\sum_{t=j+1}^{n} z_t^2, &
\alpha_{i,j} &= (n-j)\sum_{t=i}^{j} z_t - (j-i+1)\sum_{t=j+1}^{n} z_t. \qquad (3.10)
\end{aligned}
\]

The basis Bmin is also primal feasible if i is determined by the inequality

\[
\frac{i z_0^2 - \sum_{t=1}^{i} z_t^2}{i z_0 - \sum_{t=1}^{i} z_t}
\le \frac{\mu_2 - z_0^2}{\mu_1 - z_0}
\le \frac{(i+1) z_0^2 - \sum_{t=1}^{i+1} z_t^2}{(i+1) z_0 - \sum_{t=1}^{i+1} z_t}. \tag{3.11}
\]

Similarly, the basis Bmax is also primal feasible if j is determined by the inequality

\[
\frac{\Sigma^2_{j+1,n}}{\Sigma_{j+1,n}}
\le \frac{(n+1)\mu_2 - \sum_{s=0}^{n} z_s^2}{(n+1)\mu_1 - \sum_{s=0}^{n} z_s}
\le \frac{\Sigma^2_{j+2,n}}{\Sigma_{j+2,n}}. \tag{3.12}
\]

Using the bases Bmin and Bmax we have the following lower and upper bounds for E[f(ξ)]:

\[
f_0
+ \frac{(\mu_1 - z_0)\sigma^2_{1,i+1} - (\mu_2 - z_0^2)\sigma_{1,i+1}}{\sigma_{1,i}\sigma^2_{1,i+1} - \sigma^2_{1,i}\sigma_{1,i+1}}\Bigl(\sum_{t=1}^{i} f_t - i f_0\Bigr)
+ \frac{(\mu_2 - z_0^2)\sigma_{1,i} - (\mu_1 - z_0)\sigma^2_{1,i}}{\sigma_{1,i}\sigma^2_{1,i+1} - \sigma^2_{1,i}\sigma_{1,i+1}}\Bigl(\sum_{t=1}^{i+1} f_t - (i+1) f_0\Bigr)
\]

\[
\le E[f(\xi)] \le \tag{3.13}
\]

\[
\frac{1}{n+1}\sum_{s=0}^{n} f_s
+ \frac{\Sigma_{j+2,n}\bigl[(n+1)\mu_2 - \sum_{s=0}^{n} z_s^2\bigr] - \Sigma^2_{j+2,n}\bigl[(n+1)\mu_1 - \sum_{s=0}^{n} z_s\bigr]}{\Sigma_{j+1,n}\Sigma^2_{j+2,n} - \Sigma_{j+2,n}\Sigma^2_{j+1,n}}\Bigl(\sum_{s=0}^{j} f_s - \frac{j+1}{n+1}\sum_{s=0}^{n} f_s\Bigr)
\]

\[
+ \frac{\Sigma^2_{j+1,n}\bigl[(n+1)\mu_1 - \sum_{s=0}^{n} z_s\bigr] - \Sigma_{j+1,n}\bigl[(n+1)\mu_2 - \sum_{s=0}^{n} z_s^2\bigr]}{\Sigma_{j+1,n}\Sigma^2_{j+2,n} - \Sigma_{j+2,n}\Sigma^2_{j+1,n}}\Bigl(\sum_{s=0}^{j+1} f_s - \frac{j+2}{n+1}\sum_{s=0}^{n} f_s\Bigr).
\]

The above lower and upper bounds are sharp, because they are the optimum values of the minimization and maximization problems (3.9), respectively.

3.2 TYPE 2: p0 ≤ p1 ≤ ... ≤ pn

We assume that the above inequalities are satisfied and f has positive divided differences of order m + 1. If we introduce the new variables vi, i = 0, 1, ..., n, as follows:

v0 = p0, v1 = p1 − p0, ..., vn = pn − pn−1,

then problem (1.1) can be written as

min(max){(f0 + ... + fn)v0 + (f1 + ... + fn)v1 + ... + fnvn} subject to

(a0 + ... + an)v0 + (a1 + ... + an)v1 + ... + anvn = b (3.14)

v0 ≥ 0, v1 ≥ 0, ..., vn ≥ 0.

All minors of order m + 1 from A and all minors of order m + 2 from \(\begin{pmatrix} f^T \\ A \end{pmatrix}\) are positive, where A is the coefficient matrix of the equality constraints in problem (3.14) and f = (f0 + ... + fn, f1 + ... + fn, ..., fn)^T. In order to show that the first assertion is true we consider the following (m + 1) × (m + 1) determinant taken from A (0 ≤ i1 < ... < im+1 ≤ n):

\[
\det\bigl( a_{i_1} + \dots + a_n,\; a_{i_2} + \dots + a_n,\; \dots,\; a_{i_{m+1}} + \dots + a_n \bigr). \tag{3.15}
\]

One can easily show that the determinant (3.15) is equal to

\[
\det\bigl( a_{i_1} + \dots + a_{i_2-1},\; a_{i_2} + \dots + a_{i_3-1},\; \dots,\; a_{i_{m+1}} + \dots + a_n \bigr)
\]

\[
= \begin{vmatrix}
i_2 - i_1 & i_3 - i_2 & \dots & n - i_{m+1} + 1\\
z_{i_1} + \dots + z_{i_2-1} & z_{i_2} + \dots + z_{i_3-1} & \dots & z_{i_{m+1}} + \dots + z_n\\
\vdots & \vdots & & \vdots\\
z_{i_1}^m + \dots + z_{i_2-1}^m & z_{i_2}^m + \dots + z_{i_3-1}^m & \dots & z_{i_{m+1}}^m + \dots + z_n^m
\end{vmatrix}. \tag{3.16}
\]

The determinant (3.16) can be written as a sum of nonnegative determinants. Notice that some of the terms are Vandermonde determinants. Thus any (m + 1) × (m + 1) determinant taken from the matrix of the equality constraints is positive. We can handle, in a similar way, the following (m + 2) × (m + 2) determinant taken from \(\begin{pmatrix} f^T \\ A \end{pmatrix}\):

\[
\begin{vmatrix}
f_{i_1} + \dots + f_n & f_{i_2} + \dots + f_n & \dots & f_{i_{m+1}} + \dots + f_n\\
a_{i_1} + \dots + a_n & a_{i_2} + \dots + a_n & \dots & a_{i_{m+1}} + \dots + a_n
\end{vmatrix}. \tag{3.17}
\]

It is equal to

\[
\begin{vmatrix}
f_{i_1} + \dots + f_{i_2-1} & f_{i_2} + \dots + f_{i_3-1} & \dots & f_{i_{m+1}} + \dots + f_n\\
a_{i_1} + \dots + a_{i_2-1} & a_{i_2} + \dots + a_{i_3-1} & \dots & a_{i_{m+1}} + \dots + a_n
\end{vmatrix}
\]

\[
= \begin{vmatrix}
f_{i_1} + \dots + f_{i_2-1} & f_{i_2} + \dots + f_{i_3-1} & \dots & f_{i_{m+1}} + \dots + f_n\\
i_2 - i_1 & i_3 - i_2 & \dots & n - i_{m+1} + 1\\
z_{i_1} + \dots + z_{i_2-1} & z_{i_2} + \dots + z_{i_3-1} & \dots & z_{i_{m+1}} + \dots + z_n\\
\vdots & \vdots & & \vdots\\
z_{i_1}^m + \dots + z_{i_2-1}^m & z_{i_2}^m + \dots + z_{i_3-1}^m & \dots & z_{i_{m+1}}^m + \dots + z_n^m
\end{vmatrix}. \tag{3.18}
\]

Since f has positive divided differences of order m + 1, the determinant (3.18) is positive. Thus, we can use Theorem 1 to find the dual feasible bases. We give the sharp lower and upper bounds for E[f(ξ)] in case of m = 1 and m = 2.

Case 1. m = 1

If m = 1, then problem (3.14) is equivalent to

min(max){(f0 + ... + fn)v0 + (f1 + ... + fn)v1 + ... + fnvn}

subject to (n + 1)v0 + nv1 + ... + vn = 1

(z0 + ... + zn)v0 + (z1 + ... + zn)v1 + ... + znvn = µ1 (3.19)

v0 ≥ 0, v1 ≥ 0, ..., vn ≥ 0.

Let A be the coefficient matrix of the equality constraints in (3.19). Since m + 1 is even, by Theorem 1, the dual feasible bases in the minimization and maximization problems in (3.19) are

Bmin = (aj, aj+1),   Bmax = (a0, an),

respectively, where 0 ≤ j ≤ n − 1.

The basis Bmin is also primal feasible if j is determined by the inequality

\[
\frac{z_j + \dots + z_n}{n-j+1} \le \mu_1 \le \frac{z_{j+1} + \dots + z_n}{n-j}. \tag{3.20}
\]

The basis Bmax is the only dual feasible basis, hence it must also be primal feasible. In case of m = 1 the bounds on E[f(ξ)] are based on the knowledge of µ1 and we get the following sharp lower and upper bounds for E[f(ξ)]:

\[
\frac{\sum_{t=j+1}^{n} z_t - (n-j)\mu_1}{\sigma_{j+1,n}}\,[f_j + \dots + f_n]
- \frac{\sum_{t=j}^{n} z_t - (n-j+1)\mu_1}{\sigma_{j+1,n}}\,[f_{j+1} + \dots + f_n]
\le E[f(\xi)]
\]
\[
\le \frac{\mu_1 - z_n}{\gamma_{0,n-1}}\,[f_0 + \dots + f_n]
+ \frac{\sum_{t=0}^{n} z_t - (n+1)\mu_1}{\gamma_{0,n-1}}\, f_n. \tag{3.21}
\]

The bounds are sharp if j satisfies (3.20).

Case 2. m = 2

If m = 2, then the third order divided differences of f are positive and problem (3.14) is equivalent to

min(max){(f0 + ... + fn)v0 + (f1 + ... + fn)v1 + ... + fnvn}

subject to (n + 1)v0 + nv1 + ... + vn = 1

(z0 + ... + zn)v0 + (z1 + ... + zn)v1 + ... + znvn = µ1 (3.22)

\[
(z_0^2 + \dots + z_n^2)v_0 + (z_1^2 + \dots + z_n^2)v_1 + \dots + z_n^2 v_n = \mu_2
\]

v0 ≥ 0, v1 ≥ 0, ..., vn ≥ 0 .

The bounding of E[f(ξ)] is based on the knowledge of µ1 and µ2. Let A be the coefficient matrix of the equality constraints in (3.22). Since m + 1 is odd, any dual feasible bases in the minimization and maximization problems are of the form

Bmin = (a0, ai, ai+1),   Bmax = (aj, aj+1, an),

where 1 ≤ i ≤ n − 1, 0 ≤ j ≤ n − 2. Bmin is also primal feasible if

\[
\frac{\Sigma^2_{i,n}}{\Sigma_{i,n}}
\le \frac{(n+1)\mu_2 - \sum_{t=0}^{n} z_t^2}{(n+1)\mu_1 - \sum_{t=0}^{n} z_t}
\le \frac{\Sigma^2_{i+1,n}}{\Sigma_{i+1,n}} \tag{3.23}
\]

and Bmax is also primal feasible if

\[
\frac{\gamma^2_{j,n-1}}{\gamma_{j,n-1}}
\le \frac{\mu_2 - z_n^2}{\mu_1 - z_n}
\le \frac{\gamma^2_{j+1,n-1}}{\gamma_{j+1,n-1}}. \tag{3.24}
\]

It follows that we have the following lower and upper bounds for E[f(ξ)]:

\[
\frac{1}{n+1}\sum_{t=0}^{n} f_t
+ \frac{\bigl[(n+1)\mu_1 - \sum_{t=0}^{n} z_t\bigr]\Sigma^2_{i+1,n} - \bigl[(n+1)\mu_2 - \sum_{t=0}^{n} z_t^2\bigr]\Sigma_{i+1,n}}{\Sigma_{i,n}\Sigma^2_{i+1,n} - \Sigma^2_{i,n}\Sigma_{i+1,n}}\Bigl[\sum_{t=i}^{n} f_t - \frac{n-i+1}{n+1}\sum_{t=0}^{n} f_t\Bigr]
\]
\[
+ \frac{\bigl[(n+1)\mu_2 - \sum_{t=0}^{n} z_t^2\bigr]\Sigma_{i,n} - \bigl[(n+1)\mu_1 - \sum_{t=0}^{n} z_t\bigr]\Sigma^2_{i,n}}{\Sigma_{i,n}\Sigma^2_{i+1,n} - \Sigma^2_{i,n}\Sigma_{i+1,n}}\Bigl[\sum_{t=i+1}^{n} f_t - \frac{n-i}{n+1}\sum_{t=0}^{n} f_t\Bigr]
\]

\[
\le E[f(\xi)] \le \tag{3.25}
\]

\[
f_n
+ \frac{(\mu_1 - z_n)\gamma^2_{j+1,n-1} - (\mu_2 - z_n^2)\gamma_{j+1,n-1}}{\gamma_{j,n-1}\gamma^2_{j+1,n-1} - \gamma^2_{j,n-1}\gamma_{j+1,n-1}}\Bigl[\sum_{s=j}^{n} f_s - (n-j+1)f_n\Bigr]
\]
\[
+ \frac{(\mu_2 - z_n^2)\gamma_{j,n-1} - (\mu_1 - z_n)\gamma^2_{j,n-1}}{\gamma_{j,n-1}\gamma^2_{j+1,n-1} - \gamma^2_{j,n-1}\gamma_{j+1,n-1}}\Bigl[\sum_{s=j+1}^{n} f_s - (n-j)f_n\Bigr],
\]

where Σ_{i,j}, Σ²_{i,j}, γ_{i,j}, γ²_{i,j} are defined in (3.10), i satisfies (3.23) and j satisfies (3.24).
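The same cross-check as in Section 3.1 works for the increasing case with m = 1: the optimum of the LP (3.19) over all 2 × 2 bases agrees with the closed form built on the index j of (3.20). Support, f and µ1 are illustrative assumptions.

```python
from itertools import combinations

z = list(range(6))
f = [t*t for t in z]
n, mu1 = len(z) - 1, 3.4

# columns of (3.19): (n - i + 1, z_i + ... + z_n); objective: f_i + ... + f_n
cols = [(n - i + 1.0, float(sum(z[i:]))) for i in range(n + 1)]
F = [float(sum(f[i:])) for i in range(n + 1)]

best = float("inf")
for i, j in combinations(range(n + 1), 2):
    d = cols[i][0]*cols[j][1] - cols[j][0]*cols[i][1]
    if abs(d) < 1e-12:
        continue
    vi = (cols[j][1] - cols[j][0]*mu1) / d      # Cramer's rule
    vj = (cols[i][0]*mu1 - cols[i][1]) / d
    if vi >= -1e-9 and vj >= -1e-9:
        best = min(best, F[i]*vi + F[j]*vj)

jj = next(j for j in range(n) if sum(z[j:])/(n-j+1) <= mu1 <= sum(z[j+1:])/(n-j))
sigma = sum(z[jj+1:]) - (n - jj)*z[jj]          # sigma_{j+1,n} of (3.10)
lower = ((sum(z[jj+1:]) - (n-jj)*mu1)*F[jj]
         - (sum(z[jj:]) - (n-jj+1)*mu1)*F[jj+1]) / sigma
print(best, lower)                              # both 13.0 for this data
```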

3.3 TYPE 3: p0 ≤ p1 ≤ ... ≤ pk ≥ pk+1 ≥ ... ≥ pn

Now we assume that the distribution is unimodal with a known modus zk, i.e., p0 ≤ p1 ≤ ... ≤ pk ≥ pk+1 ≥ ... ≥ pn. We also assume that f has positive divided differences of order m + 1. Let us introduce the variables vi, i = 0, 1, ..., n, as follows:

v0 = p0, v1 = p1 − p0, ..., vk = pk − pk−1,

vk+1 = pk+1 − pk+2, ..., vn−1 = pn−1 − pn, vn = pn.

Then problem (1.1) can be written as

min(max){(f0 + ... + fk)v0 + (f1 + ... + fk)v1 + ... + fkvk + fk+1vk+1 + ... + (fk+1 + ... + fn)vn} subject to

(a0 + ... + ak)v0 + (a1 + ... + ak)v1 + ... + akvk + ak+1vk+1 + ... + (ak+1 + ... + an)vn = b

v0 ≥ 0, v1 ≥ 0, ..., vn ≥ 0. (3.26)

Let A be the coefficient matrix of the equality constraints of problem (3.26). One can easily show, by the method that we applied in Sections 3.1 and 3.2, that all minors of order m + 1 from A and all minors of order m + 2 from \(\begin{pmatrix} f^T \\ A \end{pmatrix}\) are positive, where f = (f0 + ... + fk, f1 + ... + fk, ..., fk, fk+1, ..., fk+1 + ... + fn)^T. Thus, if the distribution is unimodal and we know where it takes the largest value, then we can find the dual feasible bases for problem (1.1). This allows for obtaining the bounding formulas for small m values.

Case 1. m = 1

If m = 1, then the problem in (3.26) is equivalent to the problem:

min(max){(f0 + ... + fk)v0 + (f1 + ... + fk)v1 + ... + fkvk + fk+1vk+1 + ... + (fk+1 + ... + fn)vn} subject to

(k + 1)v0 + kv1 + ... + vk + vk+1 + 2vk+2 + ... + (n − k)vn = 1

(z0 + ... + zk)v0 + (z1 + ... + zk)v1 + ... + zkvk + zk+1vk+1 + ... + (zk+1 + ... + zn)vn = µ1

v0 ≥ 0, v1 ≥ 0, ..., vn ≥ 0. (3.27)

Let A be the coefficient matrix of the equality constraints in (3.27). Since m + 1 is even, by the use of Theorem 1, a dual feasible basis of the minimization problem in (3.27) is of the form

Bmin = (aj, aj+1), where 0 ≤ j ≤ n − 1. The only dual feasible basis of the maximization problem (3.27) is

Bmax = (a0, an) .

Bmin is also primal feasible if

\[
\frac{\sum_{t=j}^{k} z_t}{k-j+1} \le \mu_1 \le \frac{\sum_{t=j+1}^{k} z_t}{k-j} \quad \text{if } j+1 \le k, \tag{3.28}
\]
\[
\frac{\sum_{t=k+1}^{j} z_t}{j-k} \le \mu_1 \le \frac{\sum_{t=k+1}^{j+1} z_t}{j-k+1} \quad \text{if } j \ge k+1, \tag{3.29}
\]
\[
z_k \le \mu_1 \le z_{k+1} \quad \text{if } j = k. \tag{3.30}
\]

If j + 1 ≤ k and (3.28) is satisfied, then the sharp lower and upper bounds for E[f(ξ)] are as follows:

\[
\frac{\sum_{t=j+1}^{k} z_t - (k-j)\mu_1}{(k-j+1)\sum_{t=j+1}^{k} z_t - (k-j)\sum_{t=j}^{k} z_t}\sum_{t=j}^{k} f_t
+ \frac{(k-j+1)\mu_1 - \sum_{t=j}^{k} z_t}{(k-j+1)\sum_{t=j+1}^{k} z_t - (k-j)\sum_{t=j}^{k} z_t}\sum_{t=j+1}^{k} f_t
\]
\[
\le E[f(\xi)] \le \tag{3.31}
\]
\[
\frac{\sum_{t=k+1}^{n} z_t - (n-k)\mu_1}{\Sigma_{k+1,n}}\sum_{t=0}^{k} f_t
+ \frac{(k+1)\mu_1 - \sum_{t=0}^{k} z_t}{\Sigma_{k+1,n}}\sum_{t=k+1}^{n} f_t.
\]

If j ≥ k + 1 and (3.29) is satisfied, then the sharp lower and upper bounds for E[f(ξ)] are as follows:

\[
\frac{\sum_{t=k+1}^{j+1} z_t - (j-k+1)\mu_1}{(j-k)\sum_{t=k+1}^{j+1} z_t - (j-k+1)\sum_{t=k+1}^{j} z_t}\sum_{t=k+1}^{j} f_t
+ \frac{(j-k)\mu_1 - \sum_{t=k+1}^{j} z_t}{(j-k)\sum_{t=k+1}^{j+1} z_t - (j-k+1)\sum_{t=k+1}^{j} z_t}\sum_{t=k+1}^{j+1} f_t
\]
\[
\le E[f(\xi)] \le \tag{3.32}
\]
\[
\frac{\sum_{t=k+1}^{n} z_t - (n-k)\mu_1}{\Sigma_{k+1,n}}\sum_{t=0}^{k} f_t
+ \frac{(k+1)\mu_1 - \sum_{t=0}^{k} z_t}{\Sigma_{k+1,n}}\sum_{t=k+1}^{n} f_t.
\]

If j = k and (3.30) is satisfied, then the sharp lower and upper bounds for E[f(ξ)] are the following:

\[
\frac{z_{k+1} - \mu_1}{z_{k+1} - z_k}\, f_k + \frac{\mu_1 - z_k}{z_{k+1} - z_k}\, f_{k+1}
\le E[f(\xi)] \le
\frac{\sum_{t=k+1}^{n} z_t - (n-k)\mu_1}{\Sigma_{k+1,n}}\sum_{t=0}^{k} f_t
+ \frac{(k+1)\mu_1 - \sum_{t=0}^{k} z_t}{\Sigma_{k+1,n}}\sum_{t=k+1}^{n} f_t, \tag{3.33}
\]

where Σ_{i,j} is given in (3.10).

Case 2. m = 2

In case of m = 2, problem (3.26) is equivalent to the following problem:

min(max){(f0 + ... + fk)v0 + (f1 + ... + fk)v1 + ... + fkvk + fk+1vk+1 + ... + (fk+1 + ... + fn)vn}

subject to

(k + 1)v0 + kv1 + ... + vk + vk+1 + 2vk+2 + ... + (n − k)vn = 1

(z0 + ... + zk)v0 + (z1 + ... + zk)v1 + ... + zkvk + zk+1vk+1 + ... + (zk+1 + ... + zn)vn = µ1

\[
(z_0^2 + \dots + z_k^2)v_0 + (z_1^2 + \dots + z_k^2)v_1 + \dots + z_k^2 v_k + z_{k+1}^2 v_{k+1} + \dots + (z_{k+1}^2 + \dots + z_n^2)v_n = \mu_2
\]

v0 ≥ 0, v1 ≥ 0, ..., vn ≥ 0. (3.34)

The third order divided differences of f are positive. The bounds on E[f(ξ)] are based on the knowledge of µ1 and µ2. Let A be the coefficient matrix of the equality constraints in (3.34). Since m + 1 is odd, any dual feasible bases in the minimization and in the maximization problems are of the form Bmin = (a0, ai, ai+1) and Bmax = (aj, aj+1, an), where 1 ≤ i ≤ n − 1, 0 ≤ j ≤ n − 2. The basis Bmin is also primal feasible if one of the following conditions is satisfied:

\[
\frac{\Sigma^2_{i,k}}{\Sigma_{i,k}}
\le \frac{(k+1)\mu_2 - \sum_{t=0}^{k} z_t^2}{(k+1)\mu_1 - \sum_{t=0}^{k} z_t}
\le \frac{\Sigma^2_{i+1,k}}{\Sigma_{i+1,k}}, \quad i+1 \le k, \tag{3.35}
\]
\[
\frac{\Sigma^2_{k+1,i}}{\Sigma_{k+1,i}}
\le \frac{(k+1)\mu_2 - \sum_{t=0}^{k} z_t^2}{(k+1)\mu_1 - \sum_{t=0}^{k} z_t}
\le \frac{\Sigma^2_{k+1,i+1}}{\Sigma_{k+1,i+1}}, \quad i \ge k+1, \tag{3.36}
\]
\[
\frac{\gamma^2_{0,k-1}}{\gamma_{0,k-1}}
\le \frac{(k+1)\mu_2 - \sum_{t=0}^{k} z_t^2}{(k+1)\mu_1 - \sum_{t=0}^{k} z_t}
\le \frac{\gamma^2_{0,k}}{\gamma_{0,k}}, \quad i = k, \tag{3.37}
\]

where Σ_{i,j}, Σ²_{i,j}, γ_{i,j}, γ²_{i,j} are defined in (3.10). The basis Bmax is also primal feasible if one of the following conditions is satisfied:

\[
\frac{\alpha^2_{j,k}}{\alpha_{j,k}}
\le \frac{(n-k)\mu_2 - \sum_{t=k+1}^{n} z_t^2}{(n-k)\mu_1 - \sum_{t=k+1}^{n} z_t}
\le \frac{\alpha^2_{j+1,k}}{\alpha_{j+1,k}}, \quad j+1 \le k, \tag{3.38}
\]
\[
\frac{\alpha^2_{k+1,j}}{\alpha_{k+1,j}}
\le \frac{(n-k)\mu_2 - \sum_{t=k+1}^{n} z_t^2}{(n-k)\mu_1 - \sum_{t=k+1}^{n} z_t}
\le \frac{\alpha^2_{k+1,j+1}}{\alpha_{k+1,j+1}}, \quad j \ge k+1, \tag{3.39}
\]
\[
\frac{\sigma^2_{k+1,n}}{\sigma_{k+1,n}}
\le \frac{(n-k)\mu_2 - \sum_{t=k+1}^{n} z_t^2}{(n-k)\mu_1 - \sum_{t=k+1}^{n} z_t}
\le \frac{\sigma^2_{k+2,n}}{\sigma_{k+2,n}}, \quad j = k, \tag{3.40}
\]

where σ_{i,j}, σ²_{i,j}, α_{i,j}, α²_{i,j} are defined in (3.10).

We have the following sharp lower bound for E[f(ξ)]:

• If i + 1 ≤ k,

\[
\frac{1}{k+1}\sum_{t=0}^{k} f_t
+ \frac{\Sigma^2_{i+1,k}\bigl[(k+1)\mu_1 - \sum_{t=0}^{k} z_t\bigr] - \Sigma_{i+1,k}\bigl[(k+1)\mu_2 - \sum_{t=0}^{k} z_t^2\bigr]}{\Sigma_{i,k}\Sigma^2_{i+1,k} - \Sigma_{i+1,k}\Sigma^2_{i,k}}\Bigl[\sum_{t=i}^{k} f_t - \frac{k-i+1}{k+1}\sum_{t=0}^{k} f_t\Bigr]
\]
\[
+ \frac{\Sigma_{i,k}\bigl[(k+1)\mu_2 - \sum_{t=0}^{k} z_t^2\bigr] - \Sigma^2_{i,k}\bigl[(k+1)\mu_1 - \sum_{t=0}^{k} z_t\bigr]}{\Sigma_{i,k}\Sigma^2_{i+1,k} - \Sigma_{i+1,k}\Sigma^2_{i,k}}\Bigl[\sum_{t=i+1}^{k} f_t - \frac{k-i}{k+1}\sum_{t=0}^{k} f_t\Bigr]. \tag{3.41}
\]

• If i ≥ k + 1,

\[
\frac{1}{k+1}\sum_{t=0}^{k} f_t
+ \frac{\Sigma^2_{k+1,i+1}\bigl[(k+1)\mu_1 - \sum_{t=0}^{k} z_t\bigr] - \Sigma_{k+1,i+1}\bigl[(k+1)\mu_2 - \sum_{t=0}^{k} z_t^2\bigr]}{\Sigma_{k+1,i}\Sigma^2_{k+1,i+1} - \Sigma^2_{k+1,i}\Sigma_{k+1,i+1}}\Bigl[\sum_{t=k+1}^{i} f_t - \frac{i-k}{k+1}\sum_{t=0}^{k} f_t\Bigr]
\]
\[
+ \frac{\Sigma_{k+1,i}\bigl[(k+1)\mu_2 - \sum_{t=0}^{k} z_t^2\bigr] - \Sigma^2_{k+1,i}\bigl[(k+1)\mu_1 - \sum_{t=0}^{k} z_t\bigr]}{\Sigma_{k+1,i}\Sigma^2_{k+1,i+1} - \Sigma^2_{k+1,i}\Sigma_{k+1,i+1}}\Bigl[\sum_{t=k+1}^{i+1} f_t - \frac{i-k+1}{k+1}\sum_{t=0}^{k} f_t\Bigr]. \tag{3.42}
\]

• If i = k,

\[
\frac{1}{k+1}\sum_{t=0}^{k} f_t
+ \frac{\gamma_{0,k}\bigl[(k+1)\mu_2 - \sum_{t=0}^{k} z_t^2\bigr] - \gamma^2_{0,k}\bigl[(k+1)\mu_1 - \sum_{t=0}^{k} z_t\bigr]}{\gamma_{0,k-1}\gamma^2_{0,k} - \gamma_{0,k}\gamma^2_{0,k-1}}\Bigl[f_k - \frac{1}{k+1}\sum_{t=0}^{k} f_t\Bigr]
\]
\[
+ \frac{\gamma^2_{0,k-1}\bigl[(k+1)\mu_1 - \sum_{t=0}^{k} z_t\bigr] - \gamma_{0,k-1}\bigl[(k+1)\mu_2 - \sum_{t=0}^{k} z_t^2\bigr]}{\gamma_{0,k-1}\gamma^2_{0,k} - \gamma_{0,k}\gamma^2_{0,k-1}}\Bigl[f_{k+1} - \frac{1}{k+1}\sum_{t=0}^{k} f_t\Bigr]. \tag{3.43}
\]

The sharp upper bound for E[f(ξ)] can be given as follows:

• If j + 1 ≤ k,

\[
\frac{1}{n-k}\sum_{t=k+1}^{n} f_t
+ \frac{\alpha^2_{j+1,k}\bigl[(n-k)\mu_1 - \sum_{t=k+1}^{n} z_t\bigr] - \alpha_{j+1,k}\bigl[(n-k)\mu_2 - \sum_{t=k+1}^{n} z_t^2\bigr]}{\alpha_{j,k}\alpha^2_{j+1,k} - \alpha^2_{j,k}\alpha_{j+1,k}}\Bigl[\sum_{t=j}^{k} f_t - \frac{k-j+1}{n-k}\sum_{t=k+1}^{n} f_t\Bigr]
\]
\[
+ \frac{\alpha_{j,k}\bigl[(n-k)\mu_2 - \sum_{t=k+1}^{n} z_t^2\bigr] - \alpha^2_{j,k}\bigl[(n-k)\mu_1 - \sum_{t=k+1}^{n} z_t\bigr]}{\alpha_{j,k}\alpha^2_{j+1,k} - \alpha^2_{j,k}\alpha_{j+1,k}}\Bigl[\sum_{t=j+1}^{k} f_t - \frac{k-j}{n-k}\sum_{t=k+1}^{n} f_t\Bigr]. \tag{3.44}
\]

• If j ≥ k + 1,

\[
\frac{1}{n-k}\sum_{t=k+1}^{n} f_t
+ \frac{\alpha^2_{k+1,j+1}\bigl[(n-k)\mu_1 - \sum_{t=k+1}^{n} z_t\bigr] - \alpha_{k+1,j+1}\bigl[(n-k)\mu_2 - \sum_{t=k+1}^{n} z_t^2\bigr]}{\alpha_{k+1,j}\alpha^2_{k+1,j+1} - \alpha^2_{k+1,j}\alpha_{k+1,j+1}}\Bigl[\sum_{t=k+1}^{j} f_t - \frac{j-k}{n-k}\sum_{t=k+1}^{n} f_t\Bigr]
\]
\[
+ \frac{\alpha_{k+1,j}\bigl[(n-k)\mu_2 - \sum_{t=k+1}^{n} z_t^2\bigr] - \alpha^2_{k+1,j}\bigl[(n-k)\mu_1 - \sum_{t=k+1}^{n} z_t\bigr]}{\alpha_{k+1,j}\alpha^2_{k+1,j+1} - \alpha^2_{k+1,j}\alpha_{k+1,j+1}}\Bigl[\sum_{t=k+1}^{j+1} f_t - \frac{j-k+1}{n-k}\sum_{t=k+1}^{n} f_t\Bigr]. \tag{3.45}
\]

• If j = k,

\[
\frac{1}{n-k}\sum_{t=k+1}^{n} f_t
+ \frac{\sigma_{k+2,n}\bigl[(n-k)\mu_2 - \sum_{t=k+1}^{n} z_t^2\bigr] - \sigma^2_{k+2,n}\bigl[(n-k)\mu_1 - \sum_{t=k+1}^{n} z_t\bigr]}{\sigma_{k+1,n}\sigma^2_{k+2,n} - \sigma^2_{k+1,n}\sigma_{k+2,n}}\Bigl[f_k - \frac{1}{n-k}\sum_{t=k+1}^{n} f_t\Bigr]
\]
\[
+ \frac{\sigma^2_{k+1,n}\bigl[(n-k)\mu_1 - \sum_{t=k+1}^{n} z_t\bigr] - \sigma_{k+1,n}\bigl[(n-k)\mu_2 - \sum_{t=k+1}^{n} z_t^2\bigr]}{\sigma_{k+1,n}\sigma^2_{k+2,n} - \sigma^2_{k+1,n}\sigma_{k+2,n}}\Bigl[f_{k+1} - \frac{1}{n-k}\sum_{t=k+1}^{n} f_t\Bigr]. \tag{3.46}
\]

In all formulas given above, Σ_{i,j}, Σ²_{i,j}, σ_{i,j}, σ²_{i,j}, γ_{i,j}, γ²_{i,j}, α_{i,j} and α²_{i,j} are defined as in (3.10).

4 The Case of the Binomial Moment Problem

In case of the binomial moment problem (1.2) we look at the special case where

zi = i, i = 0, ..., n,   f0 = 0, f1 = ... = fn = 1.

In case of m = 2 we give the sharp lower and upper bounds for the probability that at least r = 1 out of n events occur. We look at problem (1.2), but the constraints are supplemented by shape constraints on the unknown probability distribution p0, ..., pn. In the following three subsections we use the same shape constraints that we have used in Sections 3.1–3.3.

4.1 TYPE 1: p0 ≥ p1 ≥ ... ≥ pn

Let m = 2. If we introduce the variables

v0 = p0 − p1, ..., vn−1 = pn−1 − pn, vn = pn,

then problem (1.2) can be written as

min(max){v1 + 2v2 + ... + nvn}

subject to

\[
\begin{aligned}
v_0 + 2v_1 + 3v_2 + 4v_3 + \dots + (n+1)v_n &= 1\\
\binom{2}{2}v_1 + \binom{3}{2}v_2 + \binom{4}{2}v_3 + \dots + \binom{n+1}{2}v_n &= S_1\\
\binom{2}{2}v_2 + \Bigl[\binom{2}{2} + \binom{3}{2}\Bigr]v_3 + \dots + \Bigl[\binom{2}{2} + \dots + \binom{n}{2}\Bigr]v_n &= S_2
\end{aligned}
\]

v0, ..., vn ≥ 0. (4.1)

Taking into account the equation

\[
1 + \binom{3}{2} + \dots + \binom{k}{2} = \frac{(k-1)k(k+1)}{6},
\]

the problem is the same as the following:

\[
\begin{aligned}
\min(\max)\ & \sum_{i=1}^{n} i\,v_i\\
\text{subject to}\quad & \sum_{i=0}^{n} (i+1)\,v_i = 1\\
& \sum_{i=0}^{n} (i+1)i\,v_i = 2S_1 \qquad (4.2)\\
& \sum_{i=0}^{n} (i+1)i(i-1)\,v_i = 6S_2
\end{aligned}
\]

v0, ..., vn ≥ 0 . Problem (4.2) is equivalent to the following:

min(max){v1 + 2v2 + ... + nvn}

subject to

v0 + 2v1 + 3v2 + 4v3 + ... + (n + 1)vn = 1

2v1 + 6v2 + 12v3 + ... + (n + 1)nvn = 2S1 (4.3)

6v2 + 24v3 + ... + (n + 1)n(n − 1)vn = 6S2

v0, ..., vn ≥ 0.

Let A be the coefficient matrix of the equality constraints in (4.3). One can easily show that any 3 × 3 matrix taken from A has a positive determinant. In fact, any 3 × 3 determinant of the form

\[
\begin{vmatrix}
i+1 & j+1 & k+1\\
(i+1)i & (j+1)j & (k+1)k\\
(i+1)i(i-1) & (j+1)j(j-1) & (k+1)k(k-1)
\end{vmatrix}
\]

is positive. Similarly, one can easily show that all minors of order 4 from \(\begin{pmatrix} f^T \\ A \end{pmatrix}\) are positive, where f = (0, 1, 2, ..., n)^T. By Theorem 1, the dual feasible bases of the minimization and maximization problems in (4.3) are of the form

Bmin = (a0, ai, ai+1) and Bmax = (aj, aj+1, an),

respectively, where 1 ≤ i ≤ n − 1 and 0 ≤ j ≤ n − 2. The basis Bmin is also primal feasible if

\[
i - 1 \le \frac{3S_2}{S_1} \le i, \quad \text{i.e., } i = \left\lceil \frac{3S_2}{S_1} \right\rceil. \tag{4.4}
\]

The basis Bmax is also primal feasible if

\[
2(n+j)S_1 - n(j+1) \le 6S_2 \le 2(n+j-1)S_1 - nj. \tag{4.5}
\]

Since a basis that is both primal and dual feasible is optimal, it follows that the objective function values corresponding to these bases are sharp lower and upper bounds for the probability of ξ ≥ 1. Thus, we have the following sharp lower and upper bounds for P(ξ ≥ 1):

\[
\frac{2(2i+1)S_1 - 6S_2}{(i+1)(i+2)}
\le P(\xi \ge 1) \le
\frac{n}{n+1} + \frac{2(2j+n+1)S_1 - 6S_2 - 2n(j+1)}{(n+1)(j+1)(j+2)}. \tag{4.6}
\]

The bounds in (4.6) hold for any i, j, but they are sharp if (4.4) and (4.5) are satisfied.

Remark. We consider the special case zi = i, i = 0, ..., n. Since

\[
\binom{\nu}{1} = \nu \quad \text{and} \quad \binom{\nu}{2} = \frac{\nu(\nu-1)}{2} = \frac{\nu^2 - \nu}{2},
\]

the first and the second binomial moments are equal to

\[
S_1 = \mu_1 \quad \text{and} \quad S_2 = \frac{\mu_2 - \mu_1}{2}.
\]

If we substitute µ1 = S1 and µ2 = 2S2 + S1 in the closed form bounds (3.13) of Section 3, then we obtain the sharp bounds in (4.6).
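The identities in the Remark are easy to confirm numerically; the distribution below is an arbitrary illustrative choice, not data from the paper.

```python
from math import comb

p = [0.4, 0.3, 0.2, 0.1]          # a decreasing distribution on {0, 1, 2, 3}
mu1 = sum(i*pi for i, pi in enumerate(p))
mu2 = sum(i*i*pi for i, pi in enumerate(p))
S1 = sum(comb(i, 1)*pi for i, pi in enumerate(p))
S2 = sum(comb(i, 2)*pi for i, pi in enumerate(p))

print(abs(S1 - mu1) < 1e-12)               # True: S1 = mu1
print(abs(S2 - (mu2 - mu1)/2) < 1e-12)     # True: S2 = (mu2 - mu1)/2
```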

4.2 TYPE 2: p0 ≤ p1 ≤ ... ≤ pn

Let us introduce the variables v0 = p0, v1 = p1 −p0, ..., vn = pn −pn−1. In this case problem (1.2) can be written as

min(max){nv0 + nv1 + (n − 1)v2 + ... + vn} subject to

\[
\begin{aligned}
(n+1)v_0 + n v_1 + (n-1)v_2 + \dots + v_n &= 1\\
\binom{n+1}{2}(v_0 + v_1) + \Bigl[\binom{n+1}{2} - \binom{2}{2}\Bigr]v_2 + \dots + \Bigl[\binom{n+1}{2} - \binom{n}{2}\Bigr]v_n &= S_1\\
\Bigl[\binom{2}{2} + \dots + \binom{n}{2}\Bigr](v_0 + v_1 + v_2) + \Bigl[\binom{3}{2} + \dots + \binom{n}{2}\Bigr]v_3 + \dots + \binom{n}{2}v_n &= S_2
\end{aligned}
\]

\[
v \ge 0. \tag{4.7}
\]

Taking into account the equations

\[
\binom{n+1}{2} - \binom{i}{2} = \frac{(n+i)(n-i+1)}{2}, \quad 2 \le i \le n,
\]

and

\[
\binom{i}{2} + \dots + \binom{n}{2} = \frac{(n+1)n(n-1) - (i-2)(i-1)i}{6}, \quad 2 \le i \le n,
\]

problem (4.7) can be written as

min(max){nv0 + nv1 + (n − 1)v2 + ... + vn}

subject to

(n + 1)v0 + nv1 + (n − 1)v2 + ... + vn = 1

(n+1)n(v0 + v1) + (n+2)(n−1)v2 + ... + (n+i)(n−i+1)vi + ... + 2nvn = 2S1

(n+1)n(n−1)(v0 + v1 + v2) + ... + [(n+1)n(n−1) − (i−2)(i−1)i]vi + ... + 3n(n−1)vn = 6S2

v0, ..., vn ≥ 0. (4.8)

We consider the following 4 × 4 matrix:

\[
B = \begin{pmatrix}
n-i+1 & n-j+1 & n-k+1 & n-l+1\\
n-i+1 & n-j+1 & n-k+1 & n-l+1\\
(n+i)(n-i+1) & (n+j)(n-j+1) & (n+k)(n-k+1) & (n+l)(n-l+1)\\
n^3-n-i^3+3i^2-2i & n^3-n-j^3+3j^2-2j & n^3-n-k^3+3k^2-2k & n^3-n-l^3+3l^2-2l
\end{pmatrix}
\]

for all 0 ≤ i < j < k < l ≤ n. Since det B = 0, we cannot use Theorem 1. Let A be the coefficient matrix of the equality constraints in (4.8). By the use of Theorem 2, dual feasible bases in the minimization and the maximization problems in (4.8) are of the form

\[
B_{\min} = (a_0, a_i, a_{i+1}) \quad \text{and} \quad B_{\max} =
\begin{cases}
(a_0, a_1, a_n) & \text{if } n-1 \ge 1,\\
(a_1, a_j, a_{j+1}) & \text{if } n \ge 1,\\
(a_1, a_s, a_t) & \text{if } n-1 \ge 2,
\end{cases}
\]

respectively, where 1 ≤ i ≤ n − 1, 2 ≤ j, s ≤ n − 1 and s + 1 ≤ t ≤ n. The basis Bmin is also primal feasible if i is determined by the following inequalities:

2(n + i − 1)S1 − 6S2 ≥ in, (4.9)

2[(n − i)(n + i − 1) + 2(i − 1)]S1 − 6(n − i + 1)S2 ≤ n(i − 1)(n − i + 1) . (4.10)

First, we notice that the basis Bmax = (a0, a1, an) is also primal feasible. The basis Bmax = (a1, aj, aj+1) is also primal feasible if j is determined by the following inequalities:

6j(n − j)S2 − 2(n³ − n² − n − j³ + j² + 1)S1 ≤ (n + 1)[(1 − j)(n − 1)(n − j − 1) − j(1 − j²)] , (4.11)

6j(n − j + 1)S2 − 2(n³ − n² − n − j³ + 3j² − 2j + 1)S1 ≥ (n + 1)[n³ + n − j³ + 4j² − 4j + 2 − nj(n − j + 3)] . (4.12)

We also remark that if Bmax = (a1, aj, aj+1) or Bmax = (a1, as, at), then the objective function and the left hand side of the first constraint are the same, and therefore the upper bound is equal to 1. Thus, we have the following lower and upper bounds for P(ξ ≥ 1):

[2(−2i² + n² + 3i + ni − 1)S1 − 6(n − i + 1)S2]/[(n + 1)i(i + 1)(n − i + 1)] + n(i − 1)/[(n + 1)(i + 1)(n − i)]

≤ P(ξ ≥ 1) ≤

min{1, [2(2n − 1)S1 − 6S2]/[n(n + 1)]} . (4.13)

The bounds in (4.13) are sharp if i satisfies (4.9) and (4.10).

4.3 TYPE 3: p0 ≤ ... ≤ pk ≥ ... ≥ pn

Now we assume that the distribution is unimodal with a known mode. If we introduce the variables vi, i = 0, 1, ..., n:

v0 = p0, v1 = p1 − p0, ..., vk = pk − pk−1 ,

vk+1 = pk+1 − pk+2, ..., vn−1 = pn−1 − pn, vn = pn ,

then problem (1.2) can be written as

min(max){kv0 + kv1 + (k − 1)v2 + ... + vk + vk+1 + 2vk+2 + ... + (n − k)vn}

subject to

(k + 1)v0 + kv1 + (k − 1)v2 + ... + vk + vk+1 + ... + (n − k)vn = 1

\binom{k+1}{2}(v0 + v1) + [\binom{k+1}{2} − \binom{i}{2}]vi + ... + kvk + (k + 1)vk+1 + [\binom{k+3}{2} − \binom{k+1}{2}]vk+2 + ... + [\binom{n+1}{2} − \binom{k+1}{2}]vn = S1

[\binom{2}{2} + ... + \binom{k}{2}](v0 + v1 + v2) + [\binom{3}{2} + ... + \binom{k}{2}]v3 + ... + \binom{k}{2}vk + \binom{k+1}{2}vk+1 + [\binom{k+1}{2} + \binom{k+2}{2}]vk+2 + ... + [\binom{k+1}{2} + ... + \binom{n}{2}]vn = S2

v ≥ 0 . (4.14)

This is equivalent to

min(max){kv0 + kv1 + (k − 1)v2 + ... + vk + vk+1 + 2vk+2 + ... + (n − k)vn}

subject to

(k + 1)v0 + kv1 + (k − 1)v2 + ... + vk + vk+1 + ... + (n − k)vn = 1

(k + 1)kv0 + (k + 1)kv1 + ... + (k + i)(k − i + 1)vi + ... + 2kvk + 2(k + 1)vk+1 + ... + (n − k)(n + k + 1)vn = 2S1

(k + 1)k(k − 1)(v0 + v1 + v2) + ... + [(k³ − k) − (i − 2)(i − 1)i]vi + ... + 3k(k − 1)vk + 3(k + 1)kvk+1 + ... + (n − k)(n² + nk + k² − 1)vn = 6S2

v ≥ 0 . (4.15)

Let A be the coefficient matrix of the equality constraints in (4.15). By Theorem 2, a dual feasible basis for the minimization problem in (4.15) is of the form

Bmin = (a0, ai, ai+1),

where 1 ≤ i ≤ n − 1, and a dual feasible basis for the maximization problem in (4.15) is of the form

Bmax = (a0, a1, an) if n − 1 ≥ 1, Bmax = (a1, aj, aj+1) if n ≥ 1, Bmax = (a1, as, at) if n − 1 ≥ 2,

where 2 ≤ j ≤ n − 1, 2 ≤ s ≤ n − 1 and s + 1 ≤ t ≤ n. The basis Bmin = (a0, ai, ai+1) is also primal feasible if one of the following conditions is satisfied:

2(k + i − 1)S1 − 6S2 ≥ ik ,

2[(k − i)(k + i − 1) + 2(i − 1)]S1 − 6(k − i + 1)S2 ≤ k(i − 1)(k − i + 1) if i + 1 ≤ k , (4.16)

2(i + k − 1)S1 − ik ≤ 6S2 ≤ 2(i + k)S1 − k(i + 1) if i ≥ k + 1 , (4.17)

4(k − 1)S1 − k(k − 1) ≤ 6S2 ≤ 4kS1 − k(k + 1) if i = k . (4.18)

The basis Bmax = (a1, aj, aj+1) is also primal feasible if one of the following conditions is satisfied:

6j(k − j)S2 − 2(k³ − k² − k − j³ + j² + 1)S1 ≤ (k + 1)[(1 − j)(k − 1)(k − j − 1) − j(1 − j²)]

6j(k − j + 1)S2 − 2(k³ − k² − k − j³ + 3j² − 2j + 1)S1 ≥ (k + 1)[k³ + k − j³ + 4j² − 4j + 2 − kj(k − j + 3)] if j + 1 ≤ k , (4.19)

[2(j − k)(j + k)S1 − 6(j − k)S2]/[(j + 1)(j − k)] ≤ k + 1 ≤ [2(j + k + 1)S1 − 6S2]/(j + 2) if j ≥ k + 1 , (4.20)

2(2k − 1)S1 − k(k + 1) ≤ 6S2 ≤ 2(2k + 1)S1 − (k + 1)(k + 2) if j = k . (4.21)

We first remark that the basis Bmax = (a0, a1, an) is also primal feasible. We also remark that Bmin = (a0, ak, ak+1) and Bmax = (a1, ak, ak+1) are also primal feasible since k is known. Using the same reasoning that we have used in Section 4.2, one can easily show that the upper bound is equal to 1 if Bmax = (a1, aj, aj+1) or Bmax = (a1, as, at). Three possible cases for the lower and upper bounds for P(ξ ≥ 1) can be given as follows:

Case 1. i + 1 ≤ k:

[2(−2i² + k² + 3i + ki − 1)S1 − 6(k − i + 1)S2]/[(k + 1)i(i + 1)(k − i + 1)] + k(i − 1)/[(k + 1)(i + 1)(k − i)]

≤ P(ξ ≥ 1) ≤

min{1, [2(n + k)S1 − 6S2]/[(n + 1)(k + 1)]} . (4.22)

Case 2. i ≥ k + 1:

k/(k + 1) + [2k(2i + k + 1)S1 − 6kS2 − 2k²(i + 1)]/[(k + 1)(i + 1)(i + 2)]

≤ P(ξ ≥ 1) ≤

min{1, [2(n + k)S1 − 6S2]/[(n + 1)(k + 1)]} . (4.23)

Case 3. i = k:

[6kS1 − 6S2 + k³ − k]/[k(k + 1)(k + 2)] ≤ P(ξ ≥ 1) ≤ min{1, [2(n + k)S1 − 6S2]/[(n + 1)(k + 1)]} . (4.24)

The above inequalities are sharp if the bases are primal feasible too, i.e., the inequalities (4.16), ..., (4.21) are satisfied.

5 Examples

We present four examples to show that if the shape of the distribution is given, then by the use of our bounding methodology we can obtain tighter bounds for P(ξ ≥ 1). First, let us consider the following binomial moment problem, where no shape information is given:

min(max){p1 + ... + pn}

subject to

Σ_{i=0}^{n} pi = 1

Σ_{i=1}^{n} i pi = S1 (5.1)

Σ_{i=2}^{n} \binom{i}{2} pi = S2

pi ≥ 0 , i = 0, ..., n .

Let A be the coefficient matrix of the equality constraints in (5.1). By Theorem 2, a dual feasible basis for the minimization problem in (5.1) is of the form

Bmin = (a0, ai, ai+1),

where 1 ≤ i ≤ n − 1. Similarly, a dual feasible basis for the maximization problem in (5.1) is one of

Bmax = (a0, a1, an) if n − 1 ≥ 1, Bmax = (a1, aj, aj+1) if n ≥ 1, Bmax = (a1, as, at) if n − 1 ≥ 2,

where 2 ≤ j ≤ n − 1, 2 ≤ s ≤ n − 1 and s + 1 ≤ t ≤ n. The primal feasibility of the basis Bmin is ensured if the following condition is satisfied:

i − 1 ≤ 2S2/S1 ≤ i . (5.2)

First, we remark that the basis Bmax = (a0, a1, an) is primal feasible. We also note that the upper bound for P(ξ ≥ 1) is equal to 1 if Bmax = (a0, a1, an) or Bmax = (a1, as, at). The basis Bmax = (a1, aj, aj+1) is also primal feasible if j is determined by the following condition:

j ≤ 2S2/(S1 − 1) ≤ j + 1 . (5.3)

Similarly, the basis Bmax = (a1, as, at) is also primal feasible if

s ≤ 2S2/(S1 − 1) ≤ t . (5.4)

The sharp lower and upper bounds for P(ξ ≥ 1) can be given as follows:

[2iS1 − 2S2]/[i(i + 1)] ≤ P(ξ ≥ 1) ≤ min{1, S1 − 2S2/n} , (5.5)

where i satisfies (5.2).

Example 1. In order to create an example for S1, S2 we take the following probability distribution: p*0 = 0.4, p*1 = 0.3, p*2 = 0.25, p*3 = 0.03, p*4 = 0.02. With these probabilities the binomial moments are

S1 = Σ_{i=1}^{4} i p*i = 0.97 and S2 = Σ_{i=2}^{4} \binom{i}{2} p*i = 0.46 .

We have the following binomial moment problem:

min(max){p1 + p2 + p3 + p4}

subject to

p0 + p1 + p2 + p3 + p4 = 1

Σ_{i=1}^{4} i pi = 0.97

Σ_{i=2}^{4} \binom{i}{2} pi = 0.46

p0, ..., p4 ≥ 0 . (5.6)

Let A be the coefficient matrix of the equality constraints in (5.6). By Theorem 2, we have Bmin = (a0, a1, a2), Bmax = (a0, a1, a4) for problem (5.6). By the use of (5.5), we obtain the following lower and upper bounds:

0.51 ≤ P (ξ ≥ 1) ≤ 0.74 . (5.7)
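The bounds in (5.7) can also be reproduced by solving the two LPs in (5.6) numerically. The sketch below uses scipy.optimize.linprog; the solver is our implementation choice, not part of the original method:

```python
from math import comb
from scipy.optimize import linprog

n, S1, S2 = 4, 0.97, 0.46

# Objective: p1 + p2 + p3 + p4 (coefficient 0 on p0)
c = [0.0] + [1.0] * n

# Equality constraints: total probability and the two binomial moments
A_eq = [
    [1.0] * (n + 1),                            # sum p_i = 1
    [float(i) for i in range(n + 1)],           # sum i p_i = S1
    [float(comb(i, 2)) for i in range(n + 1)],  # sum C(i,2) p_i = S2
]
b_eq = [1.0, S1, S2]

lo = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
hi = linprog([-x for x in c], A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")

print(round(lo.fun, 4), round(-hi.fun, 4))  # 0.51 0.74
```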

Now, we assume that the probability distribution in (5.6) is decreasing, i.e., p0 ≥ ... ≥ p4. In this case problem (5.6) can be transformed to Type 1 problem in Section 4.1 as follows:

min(max){v1 + 2v2 + 3v3 + 4v4}

subject to

v0 + 2v1 + 3v2 + 4v3 + 5v4 = 1

v1 + 3v2 + 6v3 + 10v4 = 0.97

v2 + 4v3 + 10v4 = 0.46

v ≥ 0 . (5.8)

Now let A be the coefficient matrix of the equality constraints in (5.8). By using (4.4) and (4.5), the optimal bases for problem (5.8) are Bmin = (a0, a2, a3) and Bmax = (a1, a2, a4). The following are the lower and upper bounds obtained from (4.6):

0.5783 ≤ P (ξ ≥ 1) ≤ 0.6273. (5.9)
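Equivalently, instead of passing to the v-variables one may keep problem (5.6) in the p-variables and impose the decreasing shape p0 ≥ p1 ≥ ... ≥ p4 as linear inequalities. The sketch below (again using scipy.optimize.linprog as an implementation choice) reproduces the tightened bounds:

```python
from math import comb
from scipy.optimize import linprog

n, S1, S2 = 4, 0.97, 0.46
c = [0.0] + [1.0] * n  # objective p1 + ... + p4

A_eq = [
    [1.0] * (n + 1),
    [float(i) for i in range(n + 1)],
    [float(comb(i, 2)) for i in range(n + 1)],
]
b_eq = [1.0, S1, S2]

# Decreasing shape: p_{i+1} - p_i <= 0, i = 0, ..., n-1
A_ub = []
for i in range(n):
    row = [0.0] * (n + 1)
    row[i], row[i + 1] = -1.0, 1.0
    A_ub.append(row)

kw = dict(A_ub=A_ub, b_ub=[0.0] * n, A_eq=A_eq, b_eq=b_eq,
          bounds=(0, None), method="highs")
lo = linprog(c, **kw)
hi = linprog([-x for x in c], **kw)
print(round(lo.fun, 4), round(-hi.fun, 4))  # 0.5783 0.6273
```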

One can easily see that these bounds are the optimum values of problem (5.6) together with the shape constraint p0 ≥ ... ≥ p4.

Example 2. Let n = 5, S1 = 3.95, S2 = 7. The corresponding binomial problem is

min(max){p1 + p2 + p3 + p4 + p5}

subject to

p0 + p1 + p2 + p3 + p4 + p5 = 1

Σ_{i=1}^{5} i pi = 3.95

Σ_{i=2}^{5} \binom{i}{2} pi = 7

p0, ..., p5 ≥ 0 . (5.10)

From (5.5) we obtain

0.88 ≤ P(ξ ≥ 1) ≤ 1 . (5.11)

Now we assume that the distribution is increasing. Problem (5.10) together with the shape constraint p0 ≤ ... ≤ p5 can be transformed to a Type 2 problem in Section 4.2 as follows:

min(max){5v0 + 5v1 + 4v2 + 3v3 + 2v4 + v5}

subject to

6v0 + 5v1 + 4v2 + 3v3 + 2v4 + v5 = 1

15v0 + 15v1 + 14v2 + 12v3 + 9v4 + 5v5 = 3.95

20v0 + 20v1 + 20v2 + 19v3 + 16v4 + 10v5 = 7

v ≥ 0 . (5.12)

The optimal bases for (5.12) are Bmin = (a0, a4, a5) and Bmax = (a0, a1, a5). By the use of Problem (5.12) and the formulas given in (4.13), the sharp lower and upper bounds for P (ξ ≥ 1) can be found as follows:

0.94 ≤ P (ξ ≥ 1) ≤ 0.97 . (5.13)
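As with the previous example, these bounds can be checked by keeping the p-variables and adding the increasing-shape constraints p0 ≤ ... ≤ p5 to the moment problem, which is equivalent to (5.12). A sketch with scipy.optimize.linprog (our solver choice):

```python
from math import comb
from scipy.optimize import linprog

n, S1, S2 = 5, 3.95, 7.0
c = [0.0] + [1.0] * n  # objective p1 + ... + p5

A_eq = [
    [1.0] * (n + 1),
    [float(i) for i in range(n + 1)],
    [float(comb(i, 2)) for i in range(n + 1)],
]
b_eq = [1.0, S1, S2]

# Increasing shape: p_i - p_{i+1} <= 0, i = 0, ..., n-1
A_ub = []
for i in range(n):
    row = [0.0] * (n + 1)
    row[i], row[i + 1] = 1.0, -1.0
    A_ub.append(row)

kw = dict(A_ub=A_ub, b_ub=[0.0] * n, A_eq=A_eq, b_eq=b_eq,
          bounds=(0, None), method="highs")
lo = linprog(c, **kw)
hi = linprog([-x for x in c], **kw)
print(round(lo.fun, 2), round(-hi.fun, 2))  # 0.94 0.97
```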

Example 3. Let S1 = 8.393, S2 = 34.625. The corresponding binomial problem is

min(max){p1 + ... + p10}

subject to

Σ_{i=0}^{10} pi = 1

Σ_{i=1}^{10} i pi = 8.393

Σ_{i=2}^{10} \binom{i}{2} pi = 34.625

p0, ..., p10 ≥ 0 . (5.14)

Bmin = (a0, a9, a10) and Bmax = (a1, a9, a10) are the optimal bases for (5.14) and from (5.5) we obtain 0.909 ≤ P (ξ ≥ 1) ≤ 1 . (5.15)

By adding the shape constraint p0 ≤ ... ≤ p10, problem (5.14) can be transformed to Type 2 problem in Section 4.2 as follows:

min(max){10v0 + 10v1 + 9v2 + 8v3 + ... + v10}

subject to

Σ_{i=0}^{10} (11 − i)vi = 1

Σ_{i=0}^{10} [(10 + i)(11 − i)/2]vi = 8.393

Σ_{i=0}^{10} [(990 − (i − 2)(i − 1)i)/6]vi = 34.625

v ≥ 0 . (5.16)

By the use of Problem (5.16) and the sharp bound formulas in (4.13), the sharp lower and upper bounds for P(ξ ≥ 1) can be given as follows:

0.975 ≤ P (ξ ≥ 1) ≤ 1 . (5.17)


Example 4. Let n = 4, S1 = 1.93, S2 = 1.27. The corresponding binomial problem is

min(max){p1 + p2 + p3 + p4}

subject to

Σ_{i=0}^{4} pi = 1

Σ_{i=1}^{4} i pi = 1.93

Σ_{i=2}^{4} \binom{i}{2} pi = 1.27

p0, ..., p4 ≥ 0 . (5.22)

By the use of (5.5) and Problem (5.22) we have

0.8633 ≤ P (ξ ≥ 1) ≤ 1 . (5.23)

Now we assume that the distribution is unimodal with mode k = 2. By adding the shape constraint p0 ≤ p1 ≤ p2 ≥ p3 ≥ p4, problem (5.22) can be transformed to a Type 3 problem in Section 4.3 as follows:

min(max){2v0 + 2v1 + v2 + v3 + 2v4}

subject to

3v0 + 2v1 + v2 + v3 + 2v4 = 1

3v0 + 3v1 + 2v2 + 3v3 + 7v4 = 1.93

v0 + v1 + v2 + 3v3 + 9v4 = 1.27

v ≥ 0 . (5.24)

By the use of Problem (5.24) and the closed bound formulas in (4.24), the sharp lower and upper bounds for P(ξ ≥ 1) can be given as follows:

0.8975 ≤ P (ξ ≥ 1) ≤ 1 . (5.25)
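The unimodal case can be checked in the same way: keep the p-variables and impose p0 ≤ p1 ≤ p2 and p2 ≥ p3 ≥ p4 as linear inequalities, which is equivalent to (5.24). A sketch with scipy.optimize.linprog (our solver choice):

```python
from math import comb
from scipy.optimize import linprog

n, k, S1, S2 = 4, 2, 1.93, 1.27
c = [0.0] + [1.0] * n  # objective p1 + ... + p4

A_eq = [
    [1.0] * (n + 1),
    [float(i) for i in range(n + 1)],
    [float(comb(i, 2)) for i in range(n + 1)],
]
b_eq = [1.0, S1, S2]

# Unimodal with mode k: p_i <= p_{i+1} for i < k, p_i >= p_{i+1} for i >= k
A_ub = []
for i in range(n):
    row = [0.0] * (n + 1)
    if i < k:
        row[i], row[i + 1] = 1.0, -1.0   # p_i - p_{i+1} <= 0
    else:
        row[i], row[i + 1] = -1.0, 1.0   # p_{i+1} - p_i <= 0
    A_ub.append(row)

kw = dict(A_ub=A_ub, b_ub=[0.0] * n, A_eq=A_eq, b_eq=b_eq,
          bounds=(0, None), method="highs")
lo = linprog(c, **kw)
hi = linprog([-x for x in c], **kw)
print(round(lo.fun, 4), round(-hi.fun, 4))  # 0.8975 1.0
```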

6 Applications

In this section we present two examples for the application of our bounding technique, where shape information about the unknown probability distribution can be used.

Example 1. Application in PERT. In PERT we are frequently concerned with the problem of approximating the expectation or the values of the probability distribution of the length of the critical path. In the paper by Prékopa et al. (2004) a bounding technique is presented for the c.d.f. of the critical, i.e., the largest path under moment information. In that paper first an enumeration algorithm finds those paths that are candidates to become critical. Then the probability distribution of the path lengths is approximated by a multivariate normal distribution that serves as a basis for the bounding procedure. In the present example we are concerned with only one path length but drop the normal approximation to the path length. Instead, we assume that the random length of each arc follows a beta distribution, as is usually assumed in PERT. Arc lengths are assumed to be independent, thus the probability distribution of the path length is the convolution of beta distributions with different parameters. The p.d.f. of the beta distribution in the interval (0, 1) is defined as

f(x) = [Γ(α + β)/(Γ(α)Γ(β))] x^{α−1}(1 − x)^{β−1}, 0 < x < 1, (6.1)

where Γ(·) is the gamma function,

Γ(p) = ∫₀^∞ x^{p−1} e^{−x} dx, p > 0.

The kth moment of this distribution can easily be obtained by the use of the fact that

∫₀¹ [Γ(α + β)/(Γ(α)Γ(β))] x^{α−1}(1 − x)^{β−1} dx = 1.

In fact,

∫₀¹ x^k f(x) dx = [Γ(α + β)/(Γ(α)Γ(β))] ∫₀¹ x^{k+α−1}(1 − x)^{β−1} dx = [Γ(α + β)/(Γ(α)Γ(β))] · [Γ(k + α)Γ(β)/Γ(k + α + β)] (6.2)

= [(α + k − 1)(α + k − 2) ··· α]/[(α + β + k − 1)(α + β + k − 2) ··· (α + β)] .

If α, β are integers, then using the relation Γ(m) = (m − 1)! the above expression takes a simple form. The beta distribution in PERT is defined over a more general interval (a, b) and we define its p.d.f. as the p.d.f. of a + (b − a)X, where X has p.d.f. given by (6.1). In practical problems the values a, b, α, β are obtained by the use of expert estimations of the shortest, largest and most probable times of accomplishing the job represented by the arc (see, e.g., Battersby, 1970).
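The moment formula (6.2) reduces to a finite product via the relation Γ(x + 1) = xΓ(x), for integer and non-integer parameters alike. A small sketch (the parameter values are illustrative only):

```python
from math import gamma

def beta_moment(alpha: float, beta: float, k: int) -> float:
    """k-th moment of Beta(alpha, beta) on (0, 1), via formula (6.2)."""
    return (gamma(alpha + beta) / (gamma(alpha) * gamma(beta))
            * gamma(k + alpha) * gamma(beta) / gamma(k + alpha + beta))

def beta_moment_product(alpha: float, beta: float, k: int) -> float:
    """Equivalent product form: prod_{r=0}^{k-1} (alpha+r)/(alpha+beta+r)."""
    out = 1.0
    for r in range(k):
        out *= (alpha + r) / (alpha + beta + r)
    return out

# For Beta(2, 3): E[X] = 2/5 and E[X^2] = 1/5
assert abs(beta_moment(2, 3, 1) - 0.4) < 1e-12
assert abs(beta_moment(2, 3, 2) - 0.2) < 1e-12
# Gamma-ratio and product forms agree for non-integer parameters too
assert abs(beta_moment(2.5, 1.5, 3) - beta_moment_product(2.5, 1.5, 3)) < 1e-12
```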

Let n be the number of arcs in a path and assume that each arc length ξi has beta distribution with known parameters ai, bi, αi, βi, i = 1, ..., n. Assume that αi ≥ 1, βi ≥ 1, i = 1, ..., n. We are interested in approximating the values of the c.d.f. of the path length ξ = ξ1 + ... + ξn. The c.d.f. cannot be obtained in closed form, but we know that the p.d.f. of ξ is unimodal. In fact, each ξi has a logconcave p.d.f., hence the sum ξ also has a logconcave p.d.f. (for the proof of this assertion see, e.g., Prékopa 1995), and any logconcave function is also unimodal. In order to apply our bounding methodology we discretize the distribution of ξ by subdividing the interval (Σ_{i=1}^{n} ai, Σ_{i=1}^{n} bi) and handle the corresponding discrete distribution as unknown, but unimodal, such that some of its first m moments are also known. In principle any order moment of ξ is known, but for practical calculation it is enough to use the first few moments, at least in many cases, to obtain a good approximation to the values of the c.d.f. of ξ. The probability functions obtained by the discretizations, using equal length subintervals, are logconcave sequences. Convolutions of logconcave sequences are also logconcave, and any logconcave sequence is unimodal in the sense of Section 3.3. In order to apply our methodology we need to know the mode of the distribution of ξ. A heuristic method to obtain it is the following: we take the sum of the modes of the terms in ξ = ξ1 + ... + ξn and then compute a few probabilities around it.

Example 2. Application in Reliability.

Let A1, ..., An be independent events and define the random variables X1, ..., Xn as the characteristic variables corresponding to the above events, respectively, i.e.,

Xi = 1 if Ai occurs, and Xi = 0 otherwise.

Let pi = P (Xi = 1), i = 1, ..., n. The random variables X1, ..., Xn have logconcave discrete distributions on the nonnegative integers, consequently the distribution of

X = X1 + ... + Xn is also logconcave on the same set. In many applications it is an important problem to compute, or at least approximate, e.g., by the use of probability bounds the probability

X1 + ... + Xn ≥ r, 0 ≤ r ≤ n. (6.3)

If I1, ..., I_{\binom{n}{k}} designate the k-element subsets of the set {1, ..., n} and Jl = {1, ..., n} \ Il, l = 1, ..., \binom{n}{k}, then we have the equation

 n    n k X X Y Y P (X1 + ... + Xn ≥ r) = pi (1 − pj), 0 < r ≤ n (6.4)

k=r l=1 i∈Il j∈Jl If n is large, then the calculation of the probabilities on the right hand side of (6.4) may be hard, even impossible. However, we can calculate lower and upper bounds for the probability on the left hand side of (6.4) by the use of the sums:  n   k  X X Y Sk = pi1 ...pik = pi, k = 1, ..., m, (6.5)

1≤i1<...

However, to compute the binomial moments involved may be extremely difficult, if not impossible. The advantage of our approach is that we use the first few binomial moments S1, ..., Sm, where m is relatively small and we can obtain very good bounds, at least in many cases.
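As an illustration of the approach (with arbitrary, illustrative event probabilities), the sketch below computes S1 and S2 via (6.5) for independent events, the exact probability P(X ≥ 1) = 1 − Π(1 − pi), and the two-sided bound (5.5) with i chosen according to (5.2):

```python
from itertools import combinations
from math import prod, ceil

# Illustrative probabilities of independent events
p = [0.2, 0.3, 0.4]
n = len(p)

# Binomial moments via (6.5): sums of products over k-subsets
S1 = sum(p)
S2 = sum(p[i] * p[j] for i, j in combinations(range(n), 2))

# Exact probability that at least one event occurs
exact = 1.0 - prod(1.0 - pi for pi in p)

# Bound (5.5): i is chosen so that i - 1 <= 2 S2 / S1 <= i, per (5.2)
i = max(1, ceil(2 * S2 / S1))
lower = (2 * i * S1 - 2 * S2) / (i * (i + 1))
upper = min(1.0, S1 - 2 * S2 / n)

assert lower <= exact <= upper
print(lower, exact, upper)
```

With only the first two binomial moments the bound already brackets the exact value tightly; adding higher moments Sk narrows it further.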

References

[1] Boros, E., A. Prékopa (1989). Closed form two-sided bounds for probabilities that exactly r and at least r out of n events occur. Math. Oper. Res. 14, 317–347.

[2] Fréchet, M. (1940/43). Les Probabilités Associées à un Système d'Événements Compatibles et Dépendants. Actualités Scientifiques et Industrielles Nos. 859, 942, Paris.

[3] Galambos, J., I. Simonelli (1996). Bonferroni-type Inequalities with Applications. Springer, New York.

[4] Kwerel, S.M. (1975). Most stringent bounds on aggregated probabilities of partially specified dependent probability systems. J. Amer. Stat. Assoc. 70, 472–479.

[5] Mandelbrot, B.B. (1977). The Fractal Geometry of Nature. Freeman, New York.

[6] Prékopa, A. (1988). Boole-Bonferroni inequalities and linear programming. Oper. Res. 36, 145–162.

[7] Prékopa, A. (1989a). Totally positive linear programming problems. L.V. Kantorovich Memorial Volume, Oxford Univ. Press, New York, 197–207.

[8] Prékopa, A. (1989b). Sharp bounds on probabilities using linear programming. Oper. Res. 38, 227–239.

[9] Prékopa, A. (1990). The discrete moment problem and linear programming. Discrete Applied Mathematics 27, 235–254.

[10] Prékopa, A. (1995). Stochastic Programming. Kluwer Academic Publishers, Dordrecht, Boston.

[11] Prékopa, A. (1996). A brief introduction to linear programming. Math. Scientist 21, 85–111.

[12] Prékopa, A. (2001). Discrete higher order convex functions and their applications. In: Generalized Convexity and Monotonicity (N. Hadjisavvas, J.E. Martinez-Legaz, J-P. Penot, eds.), Lecture Notes in Economics and Mathematical Systems, Springer, 21–47.

[13] Prékopa, A., J. Long, T. Szántai (2004). New bounds and approximations for the probability distribution of the length of the critical path. In: Dynamic Stochastic Optimization (K. Marti, Yu. Ermoliev, G. Pflug, eds.), Lecture Notes in Economics and Mathematical Systems 532, Springer, Berlin, Heidelberg, 293–320.