Median Bounds and Their Application*

Alan Siegel
Department of Computer Science, Courant Institute, New York University, NYC, NY 10012-1185
E-mail: [email protected]

*Supported, in part, by NSF grant CCR-9503793.

Basic methods are given to evaluate or estimate the median for various probability distributions. These methods are then applied to determine the precise median of several nontrivial distributions, including weighted selection, and the sum of heterogeneous Bernoulli Trials conditioned to lie within any centered interval about the mean. These bounds are then used to give simple analyses of algorithms such as interpolation search and some aspects of PRAM emulation.

Key Words: median, Bernoulli Trials, hypergeometric distribution, weighted selection, conditioned Bernoulli Trials, conditioned hypergeometric distribution, interpolation search, PRAM emulation

1. INTRODUCTION

While tail estimates have received significant attention in the probability, statistics, discrete mathematics, and computer science literature, the same cannot be said for medians of probability distributions, and for good reason. First, the number of known results in this area seems to be fairly meager, and even they are not at all well known. Second, there seems to have been very little development of mathematical machinery to establish median estimates for probability distributions. Third (and consequently), median estimates have not been commonly used in the analysis of algorithms, apart from the kinds of analyses frequently used for Quicksort, and the provably good median approximation schemes typified by efficient selection algorithms.

This paper addresses these issues in the following ways. First, a framework (Theorems 2.1 and 2.4) is presented for establishing median estimates. It is strong enough to prove, as simple corollaries, the two or three non-trivial median bounds (not so readily identified) in the literature. Second, several new median results are presented, which are all, apart from one, derived via this framework. Third, median estimates are shown to simplify the analysis of some probabilistic algorithms and processes. Applications include both divide-and-conquer calculations and tail bound estimates for monotone functions of weakly dependent random variables. In particular, a simple analysis is given for the $\log_2 \log_2 n + O(1)$ probe cost of both successful and unsuccessful Interpolation Search, which is less than two probes worse than the best bound but much simpler. Median bounds are also used, for example, to attain a tail bound showing that $n$ random numbers can be sorted in linear time with probability $1 - 2^{-cn}$, for any fixed constant $c$. This result supports the design of a pipelined version of Ranade's Common PRAM emulation algorithm on an $n \times 2^n$ butterfly network with only one column of $2^n$ processors, by showing that each processor can perform a sorting step that was previously distributed among $n$ switches.

The tenor of the majority of median estimates established in Section 2 is that whereas it may be difficult to prove that some explicit integral (or discrete sum) exceeds $\frac{1}{2}$ by some tiny amount, it is often much easier to establish global shape-based characteristics of a function – such as the number of zeros in some interval, or an inequality of the form $f < g$ – by taking a mix of derivatives and logarithmic derivatives to show that, say, $f$ and $g$ both begin at zero, but $g$ grows faster than $f$. This theme characterizes Theorems 2.1 and 2.4, plus all of their applications to discrete random variables.
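As a small illustration of this shape-based style of argument (our own sketch, not one of the paper's theorems, and stated for strictly positive $f$ and $g$ rather than functions that begin at zero): if $f(0) = g(0)$ and the logarithmic derivative of $g$ dominates that of $f$ on an interval, then $g$ dominates $f$ there, without either function ever being evaluated in closed form.

```latex
% Sketch (not from the paper): assume f, g > 0 on [0,b], f(0) = g(0), and
% f'(t)/f(t) <= g'(t)/g(t) for 0 <= t <= b.  Integrating the logarithmic
% derivatives from 0 to x gives
\[
\log\frac{f(x)}{f(0)} \;=\; \int_0^x \frac{f'(t)}{f(t)}\,dt
\;\le\; \int_0^x \frac{g'(t)}{g(t)}\,dt \;=\; \log\frac{g(x)}{g(0)},
\]
% and since f(0) = g(0), exponentiating yields f(x) <= g(x) for 0 <= x <= b.
```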
1.1. Notation, conventions, and background

By supposition, all random variables are taken to be real valued. A median of a random variable $X$ is defined to be any value $x$ where $\mathrm{Prob}\{X \ge x\} \ge \frac{1}{2}$ and $\mathrm{Prob}\{X \le x\} \ge \frac{1}{2}$. Many random variables—including all of the applications in this paper—will have essentially unique medians.

The functions $X$, $Y$, and $Z$ will be random variables. Cumulative distribution functions will be represented by the letters $F$ or $G$, and will have associated density functions $f(x) = F'(x)$ and $g(x) = G'(x)$. The mean of a random variable $X$ will be represented as $E[X]$, and the variable $\mu$ will be used to denote the mean of the random variable of current interest. The variance of $X$ is defined to be $E[X^2] - E[X]^2$, and will sometimes be denoted by the expression $\sigma^2$. The characteristic function $\chi\{\mathit{event}\}$ is defined to be 1 when the Boolean variable $\mathit{event}$ is true, and 0 otherwise.

If $X$ and $Y$ are random variables, the conditional expectation $E[X \mid Y]$ is a new random variable that is defined on the range of $Y$. In particular, if $Y$ is discrete, then $E[X \mid Y]$ is also discrete, and for any $x$ where $\mathrm{Prob}\{Y = x\} \ne 0$,
\[
E[X \mid Y](x) = \frac{E[X \cdot \chi\{Y = x\}]}{\mathrm{Prob}\{Y = x\}} \quad \text{with probability } \mathrm{Prob}\{Y = x\}.
\]
Thus, $E[X \mid Y](x)$ is just the average value of $X$ as restricted to the domain where $Y = x$. Conditional expectations preserve the mean: $E[E[X \mid Y]] = E[X]$. However, they reduce variances: $E[E[X \mid Y]^2] \le E[X^2]$. Intuitively, averaging reduces uncertainty.

We also would like to define the random variable $Z$ as $X$ conditioned on the event $Y = k$. More formally, $Z$ is distributed according to the conditional probability:
\[
\mathrm{Prob}\{Z = s\} \equiv \mathrm{Prob}\{X = s \mid Y = k\} = \frac{\mathrm{Prob}\{X = s \wedge Y = k\}}{\mathrm{Prob}\{Y = k\}}.
\]
Sometimes $Z$ will be defined with the wording $X$ given $Y$, and sometimes it will be formulated as $Z = [X \mid Y = k]$. Here the intention is that $k$ is fixed and $X$ is confined to a portion of its domain. Then the underlying measure is rescaled to be 1 on the subdomain of $X$ where $Y = k$. According to the conventions of probability, this conditioning should be stated in terms of the underlying probability measure for the r.v. pair $(X, Y)$ as opposed to the random variables themselves. Thus, our definition in terms of $X$ and $Y$ and the notation $[X \mid Y = k]$ are conveniences that lie outside of the formal standards.

These definitions all have natural extensions to more general random variables. Indeed, modern probability supports the mathematical development of distributions without having to distinguish between discrete and continuous random variables. While all of the median bounds in this paper concern discrete random variables, almost all of the proofs will be for continuous distributions. When a result also applies to discrete formulations, we will say so without elaboration, since a point mass (or delta function) can be defined as a limit of continuous distributions. In such a circumstance, the only question to resolve would be how to interpret an evaluation at the end of an interval where a point mass is located. For this paper, the issue will always be uneventful.
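To make the median definition concrete, here is a short Python sketch (illustrative only, not part of the paper; the probabilities and helper names are our own choices) that tabulates the exact distribution of a sum of independent heterogeneous Bernoulli Trials and lists the values satisfying both defining inequalities. For the probabilities below, the mean is 2.5 and the unique median is 2.

```python
# Illustrative sketch only (not from the paper): tabulate the exact
# distribution of a sum of heterogeneous Bernoulli trials and check which
# values m satisfy the median definition
#     Prob{X >= m} >= 1/2   and   Prob{X <= m} >= 1/2.

def bernoulli_sum_pmf(ps):
    """Exact pmf of X = X_1 + ... + X_n with Prob{X_i = 1} = p_i, by convolution."""
    pmf = [1.0]                              # distribution of the empty sum
    for p in ps:
        nxt = [0.0] * (len(pmf) + 1)
        for k, q in enumerate(pmf):
            nxt[k] += q * (1.0 - p)          # this trial contributes 0
            nxt[k + 1] += q * p              # this trial contributes 1
        pmf = nxt
    return pmf

def medians(pmf):
    """All integer values m with Prob{X >= m} >= 1/2 and Prob{X <= m} >= 1/2."""
    return [m for m in range(len(pmf))
            if sum(pmf[m:]) >= 0.5 and sum(pmf[:m + 1]) >= 0.5]

ps = [0.9, 0.6, 0.5, 0.3, 0.2]                   # heterogeneous success probabilities
print(sum(ps), medians(bernoulli_sum_pmf(ps)))   # mean 2.5, unique median [2]
```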
If $Z$ is a nonnegative integer valued random variable, its generating function is often defined as $G_Z(x) = E[x^Z] \equiv \sum_j x^j\,\mathrm{Prob}\{Z = j\}$. For more general random variables, the usual variant is $G_Z(\lambda) = E[e^{\lambda Z}] \equiv \int e^{\lambda x}\,\mathrm{Prob}\{Z \in [x, x+dx)\}$. This notation is intended to be meaningful regardless of whether the underlying density for $Z$ comprises point masses, a density function, etc.

One of the advantages of these transformations is that if $X$ and $Y$ are independent, then $G_{X+Y} = G_X \cdot G_Y$ (where all $G$'s use the same formulation). Hoeffding-Chernoff tail estimates are based on generating functions. The basic procedure is as follows. For $\lambda > 0$:
\[
\mathrm{Prob}\{X > a\} = \mathrm{Prob}\{\lambda X > \lambda a\} = \mathrm{Prob}\{e^{-\lambda a} e^{\lambda X} > 1\}
= E[\chi\{e^{-\lambda a} e^{\lambda X} > 1\}] \le E[e^{-\lambda a} e^{\lambda X}] = e^{-\lambda a} E[e^{\lambda X}],
\]
because $\chi\{e^{-\lambda a} e^{\lambda X} > 1\} < e^{-\lambda a} e^{\lambda X}$. This procedure frequently yields a specific algebraic expression $e^{-a\lambda} f(\lambda)$, where by construction $\mathrm{Prob}\{X > a\} < e^{-a\lambda} f(\lambda)$ for any $\lambda > 0$. The task is then to find a good estimate for $\min_{\lambda > 0} e^{-a\lambda} f(\lambda)$ as a function of $a$, which can often be done. If $X = X_1 + X_2 + \cdots + X_n$ is the sum of $n$ independent random variables $X_i$, then the independence can be used to represent $f(\lambda)$ as a product of $n$ individual generating functions $G_{X_i}(\lambda)$. If the random variables have some kind of dependencies, then this procedure does not apply, and an alternative analysis must be sought.

Some of the basic random variables that will be used to build more complex distributions are as follows. A Boolean variable $X$ with mean $p$ satisfies:
\[
X = \begin{cases} 1 & \text{with probability } p; \\ 0 & \text{with probability } 1 - p. \end{cases}
\]
A random variable $X$ that is exponentially distributed with (rate) parameter $\lambda$ satisfies:
\[
\mathrm{Prob}\{X \le t\} = 1 - e^{-\lambda t}, \qquad t \ge 0.
\]
The density function is $f(t) = \lambda e^{-\lambda t}$, and the mean is $\int_0^\infty t \lambda e^{-\lambda t}\,dt = \frac{1}{\lambda}$.

A Poisson random variable $P$ with (rate) parameter $\lambda$ satisfies:
\[
P = j \text{ with probability } e^{-\lambda} \frac{\lambda^j}{j!}, \qquad \text{for } j = 0, 1, 2, \ldots.
\]
A straightforward summation gives
\[
E[P] = \sum_{j > 0} e^{-\lambda} \frac{\lambda^j}{(j-1)!} = \lambda e^{-\lambda} \sum_{j \ge 0} \frac{\lambda^j}{j!} = \lambda.
\]
The hypergeometric distribution corresponds to the selection (without replacement) of $n$ balls from an urn containing $r$ red balls and $g$ green balls, for $n \le r + g$. If $R$ is the number of red balls chosen, then
\[
\mathrm{Prob}\{R = k\} = \frac{\binom{r}{k} \binom{g}{n-k}}{\binom{r+g}{n}}.
\]
Informally, a stochastic process is a random variable that evolves over time.
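As another small Python sketch (again our own illustration, not part of the paper), the hypergeometric probabilities can be tabulated directly from this formula and checked against the median definition of Section 1.1. In the symmetric example below, the mean is 3.5 and both 3 and 4 satisfy the median definition.

```python
# Illustrative sketch only (not from the paper): the hypergeometric pmf
#     Prob{R = k} = C(r,k) * C(g,n-k) / C(r+g,n)
# for n draws without replacement from r red and g green balls, together with
# an exact integer test of the median definition.

from math import comb

def hypergeometric_counts(r, g, n):
    """Unnormalized counts: Prob{R = k} = counts[k] / comb(r+g, n)."""
    return [comb(r, k) * comb(g, n - k) for k in range(n + 1)]

def medians(counts):
    """Values m with Prob{R >= m} >= 1/2 and Prob{R <= m} >= 1/2 (exact arithmetic)."""
    total = sum(counts)
    return [m for m in range(len(counts))
            if 2 * sum(counts[m:]) >= total and 2 * sum(counts[:m + 1]) >= total]

r, g, n = 7, 5, 6                        # 6 draws from 7 red and 5 green balls
counts = hypergeometric_counts(r, g, n)
print(n * r / (r + g), medians(counts))  # mean 3.5; both 3 and 4 are medians here
```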
