Functions of Random Variables


Topic 8: The Expected Value

Outline
• Names for Eg(X)
  - Means
  - Moments
  - Factorial Moments
  - Variance and Standard Deviation
• Generating Functions

Means

If g(x) = x, then µ_X = EX is called variously the distributional mean or the first moment.

• S_n, the number of successes in n Bernoulli trials with success parameter p, has mean np.
• The mean of a geometric random variable with parameter p is (1 − p)/p.
• The mean of an exponential random variable with rate parameter β is 1/β.
• A standard normal random variable has mean 0.

Exercise. Find the mean of a Pareto random variable, whose density is f(x) = βα^β / x^{β+1} for x ≥ α.

Answer. For β > 1,

\[
\int_{-\infty}^{\infty} x f(x)\,dx
= \int_{\alpha}^{\infty} x \cdot \frac{\beta \alpha^{\beta}}{x^{\beta+1}}\,dx
= \beta \alpha^{\beta} \int_{\alpha}^{\infty} x^{-\beta}\,dx
= \beta \alpha^{\beta} \left. \frac{x^{1-\beta}}{1-\beta} \right|_{\alpha}^{\infty}
= \frac{\alpha \beta}{\beta - 1}.
\]

Moments

In analogy with a similar concept in physics, EX^m is called the m-th moment. The second moment in physics is associated with the moment of inertia.

• If X is a Bernoulli random variable, then X = X^m for every m ≥ 1. Thus, EX^m = EX = p.
• For a uniform random variable on [0, 1], the m-th moment is ∫₀¹ x^m dx = 1/(m + 1).
• The third moment of Z, a standard normal random variable, is 0. For the fourth moment, integrate by parts with u(z) = z³, v(z) = −exp(−z²/2), so that u′(z) = 3z² and v′(z) = z exp(−z²/2):

\[
EZ^4 = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} z^4 \exp\left(-\frac{z^2}{2}\right) dz
= \left. -\frac{1}{\sqrt{2\pi}}\, z^3 \exp\left(-\frac{z^2}{2}\right) \right|_{-\infty}^{\infty}
+ \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} 3z^2 \exp\left(-\frac{z^2}{2}\right) dz
= 3\,EZ^2 = 3.
\]

Factorial Moments

If g(x) = (x)_k, where (x)_k = x(x − 1) ⋯ (x − k + 1), then E(X)_k is called the k-th factorial moment. If X is uniformly distributed on {1, 2, ..., n}, then, writing Δ₊ for the forward difference operator and using (x)_k = Δ₊(x)_{k+1}/(k + 1),

\[
E(X)_k = \sum_{x=1}^{n} (x)_k \cdot \frac{1}{n}
= \frac{1}{n} \cdot \frac{1}{k+1} \sum_{x=1}^{n} \Delta_+ (x)_{k+1}
= \frac{1}{n} \cdot \frac{(n+1)_{k+1}}{k+1}.
\]

The Stirling numbers of the second kind give a linear transformation from the factorial moments to the moments.

Variance and Standard Deviation

• If g(x) = (x − µ_X)^m, then E(X − µ_X)^m is called the m-th central moment.
• The most frequently used central moment is the second central moment, σ_X² = E(X − µ_X)², commonly called the variance. We often write Var(X) for the variance:

\[
\sigma_X^2 = \mathrm{Var}(X) = E(X - \mu_X)^2 = EX^2 - 2\mu_X\,EX + \mu_X^2
= EX^2 - 2\mu_X^2 + \mu_X^2 = EX^2 - \mu_X^2.
\]

The square root of the variance is known as the standard deviation and denoted σ_X. If we subtract the mean and divide by the standard deviation, then the result Z = (X − µ)/σ has mean 0 and variance 1. Z is called the standardized version of X.

For T, an exponential random variable with rate β, use the tail formula ET^m = ∫₀^∞ m t^{m−1} P{T > t} dt for nonnegative random variables:

\[
ET^m = \int_0^{\infty} m t^{m-1} \exp(-\beta t)\,dt
= \frac{m}{\beta} \int_0^{\infty} t^{m-1} \beta \exp(-\beta t)\,dt
= \frac{m}{\beta}\, ET^{m-1}.
\]

Thus, by induction, ET^m = m!/β^m, and

\[
\mathrm{Var}(T) = ET^2 - (ET)^2 = \frac{2}{\beta^2} - \left(\frac{1}{\beta}\right)^2 = \frac{1}{\beta^2}.
\]

Exercise. Compute the variance and standard deviation for

1. a single Bernoulli trial:

\[
\mathrm{Var}(X) = EX^2 - \mu_X^2 = p - p^2 = p(1 - p), \qquad \sigma_X = \sqrt{p(1 - p)};
\]

2. a fair die:

\[
EX = \frac{7}{2}, \qquad EX^2 = \frac{1}{6}(1^2 + 2^2 + 3^2 + 4^2 + 5^2 + 6^2) = \frac{91}{6},
\]
\[
\mathrm{Var}(X) = \frac{91}{6} - \left(\frac{7}{2}\right)^2 = \frac{182 - 147}{12} = \frac{35}{12}, \qquad \sigma_X = \sqrt{\frac{35}{12}};
\]

3. Y = a + bX, with the answer in terms of the variance or standard deviation of X:

\[
\mu_Y = a + b\mu_X, \qquad \sigma_Y^2 = E[(Y - \mu_Y)^2] = E[((a + bX) - (a + b\mu_X))^2] = b^2 \sigma_X^2, \qquad \sigma_Y = |b|\,\sigma_X.
\]
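These formulas are easy to sanity-check by simulation. Below is a minimal sketch, assuming NumPy is installed; the seed, parameter values, and sample size are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10**6

# Geometric (failures before the first success): mean (1 - p)/p.
# NumPy's geometric counts trials until the first success, so subtract 1.
p = 0.3
geom = rng.geometric(p, size=n) - 1
print(geom.mean(), (1 - p) / p)

# Exponential with rate beta: mean 1/beta (NumPy parametrizes by scale = 1/beta).
beta = 2.0
expo = rng.exponential(scale=1 / beta, size=n)
print(expo.mean(), 1 / beta)

# Classical Pareto on [alpha, infinity) with shape b > 1: mean alpha*b/(b - 1).
# NumPy's pareto draws the shifted (Lomax) form, so rescale.
alpha, b = 1.5, 3.0
par = alpha * (1 + rng.pareto(b, size=n))
print(par.mean(), alpha * b / (b - 1))

# Fair die: variance 35/12.
die = rng.integers(1, 7, size=n)
print(die.var(), 35 / 12)
```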
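The same computations can be done exactly. This SymPy sketch, again assuming SymPy is installed, reproduces ET^m = m!/β^m for the first few m and the normal fourth moment EZ^4 = 3 by direct integration.

```python
import sympy as sp

t = sp.symbols('t', positive=True)
z = sp.symbols('z', real=True)
beta = sp.symbols('beta', positive=True)

# m-th moment of an exponential with rate beta; closed form is m!/beta**m,
# so each difference below should simplify to 0.
for m in range(1, 5):
    ET_m = sp.integrate(t**m * beta * sp.exp(-beta * t), (t, 0, sp.oo))
    print(m, sp.simplify(ET_m - sp.factorial(m) / beta**m))

# Fourth moment of a standard normal by direct integration; equals 3.
EZ4 = sp.integrate(z**4 * sp.exp(-z**2 / 2) / sp.sqrt(2 * sp.pi),
                   (z, -sp.oo, sp.oo))
print(EZ4)
```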
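The factorial-moment identity for the uniform distribution on {1, 2, ..., n} can likewise be confirmed for concrete values; here n = 10 and k = 3 are arbitrary choices, and sp.ff is SymPy's falling factorial (x)_k.

```python
import sympy as sp

n, k = 10, 3   # arbitrary illustrative values
# E(X)_k computed directly from the definition ...
lhs = sum(sp.ff(x, k) for x in range(1, n + 1)) / sp.Integer(n)
# ... and from the closed form (n+1)_{k+1} / ((k+1) n).
rhs = sp.ff(n + 1, k + 1) / (sp.Integer(k + 1) * n)
print(lhs, rhs, sp.simplify(lhs - rhs) == 0)   # 198 198 True
```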
Skewness

The third moment of the standardized random variable,

\[
\gamma_1 = E\left[\left(\frac{X - \mu}{\sigma}\right)^3\right],
\]

is called the skewness. Random variables with positive skewness have a more pronounced tail of the density on the right. Random variables with negative skewness have a more pronounced tail of the density on the left.

Exercise. Find the skewness of a Bernoulli random variable X.

Answer. The third central moment is

\[
E[(X - p)^3] = (-p)^3 P\{X = 0\} + (1 - p)^3 P\{X = 1\}
= -p^3(1 - p) + (1 - p)^3 p
\]
\[
= p(1 - p)\left(-p^2 + (1 - p)^2\right) = p(1 - p)\left(-p^2 + 1 - 2p + p^2\right) = p(1 - p)(1 - 2p).
\]

Thus,

\[
E\left[\left(\frac{X - p}{\sqrt{p(1 - p)}}\right)^3\right]
= \frac{p(1 - p)(1 - 2p)}{(p(1 - p))^{3/2}}
= \frac{1 - 2p}{\sqrt{p(1 - p)}},
\]

and X is positively skewed if p < 1/2 and negatively skewed if p > 1/2.

Kurtosis

The fourth moment of the standard normal random variable is 3. The kurtosis compares the fourth moment of the standardized random variable to this value:

\[
E\left[\left(\frac{X - \mu}{\sigma}\right)^4\right] - 3.
\]

Random variables with a positive kurtosis are called leptokurtic; lepto means slender. Those with a negative kurtosis are called platykurtic; platy means broad.

Characteristic Function

For generating functions, it is useful to recall that if h has a convergent Taylor series in an interval about the point x = a, then

\[
h(x) = \sum_{n=0}^{\infty} \frac{h^{(n)}(a)}{n!}(x - a)^n,
\]

where h^{(n)}(a) is the n-th derivative of h evaluated at x = a.

If g(x) = exp(iθx), then φ_X(θ) = E exp(iθX) is called the Fourier transform or the characteristic function. Because |g(x)| = |exp(iθx)| = 1, the expectation exists for any random variable.

Moment Generating Function

Similarly, if g(x) = exp(tx), then M_X(t) = E exp(tX) is called the Laplace transform or the moment generating function. To see the basis for this name, note that if we can reverse the order of differentiation and integration, then

\[
\frac{d}{dt} M_X(t) = E[X \exp(tX)], \qquad M_X'(0) = EX,
\]
\[
\frac{d^2}{dt^2} M_X(t) = E[X^2 \exp(tX)], \qquad M_X''(0) = EX^2,
\]

and, in general,

\[
\frac{d^k}{dt^k} M_X(t) = E[X^k \exp(tX)], \qquad M_X^{(k)}(0) = EX^k.
\]

For a standard normal random variable Z, complete the square in the exponent:

\[
M_Z(t) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{tz} \exp\left(-\frac{z^2}{2}\right) dz
= e^{t^2/2}\, \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \exp\left(-\frac{z^2 - 2tz + t^2}{2}\right) dz
\]
\[
= e^{t^2/2}\, \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \exp\left(-\frac{(z - t)^2}{2}\right) dz = e^{t^2/2}.
\]

Writing out the power series,

\[
M_Z(t) = \sum_{n=0}^{\infty} \frac{M_Z^{(n)}(0)}{n!} t^n = \sum_{n=0}^{\infty} \frac{EZ^n}{n!} t^n
\qquad \text{and} \qquad
e^{t^2/2} = \sum_{k=0}^{\infty} \frac{t^{2k}}{2^k k!}.
\]

Thus, the odd moments are 0 and the even moments are

\[
EZ^{2k} = \frac{(2k)!}{2^k k!}, \qquad \text{so} \qquad EZ^4 = \frac{4!}{2^2\, 2!} = 3, \qquad EZ^6 = \frac{6!}{2^3\, 3!} = 15, \ldots
\]

For an exponential random variable T and for t < β,

\[
M_T(t) = \int_0^{\infty} e^{tu} \beta \exp(-\beta u)\, du
= \beta \int_0^{\infty} \exp(u(t - \beta))\, du
= \frac{\beta}{\beta - t} = \frac{1}{1 - t/\beta} = \sum_{m=0}^{\infty} \left(\frac{t}{\beta}\right)^m,
\]

and the moments of T can be determined by equating the coefficients of t^m:

\[
\frac{M_T^{(m)}(0)}{m!} = \frac{ET^m}{m!} = \frac{1}{\beta^m}, \qquad \text{so} \qquad ET^m = \frac{m!}{\beta^m}.
\]

Cumulant Generating Function

The cumulant generating function is defined to be K_X(t) = log M_X(t). The k-th cumulant,

\[
\kappa_k = \frac{d^k}{dt^k} K_X(0),
\]

can be determined from the k-th term in the Taylor series expansion at 0. For the first and second cumulants, using M_X(0) = 1,

\[
K_X'(t) = \frac{M_X'(t)}{M_X(t)}, \qquad K_X'(0) = M_X'(0) = EX,
\]
\[
K_X''(t) = \frac{M_X''(t) M_X(t) - M_X'(t)^2}{M_X(t)^2}, \qquad K_X''(0) = M_X''(0) - M_X'(0)^2 = EX^2 - (EX)^2 = \mathrm{Var}(X).
\]
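As a check on the name "moment generating function", one can differentiate M_Z(t) = exp(t²/2) repeatedly and evaluate at t = 0; the k-th derivative should return EZ^k. A short SymPy sketch, assuming SymPy is installed:

```python
import sympy as sp

t = sp.symbols('t')
M = sp.exp(t**2 / 2)                       # standard normal mgf
for k in range(1, 7):
    print(k, sp.diff(M, t, k).subs(t, 0))  # 0, 1, 0, 3, 0, 15
```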
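The cumulant computation can be verified the same way: for the exponential random variable T, K_T(t) = log(β/(β − t)) should have first cumulant ET = 1/β and second cumulant Var(T) = 1/β². A minimal SymPy sketch:

```python
import sympy as sp

t = sp.symbols('t')
beta = sp.symbols('beta', positive=True)
K = sp.log(beta / (beta - t))                      # cumulant generating function
print(sp.simplify(sp.diff(K, t).subs(t, 0)))       # 1/beta, the mean
print(sp.simplify(sp.diff(K, t, 2).subs(t, 0)))    # beta**(-2), the variance
```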
Probability Generating Function

If X takes values in Z⁺ = {0, 1, 2, ...} and g(x) = z^x, then

\[
\rho_X(z) = Ez^X = \sum_{x=0}^{\infty} P\{X = x\}\, z^x = \sum_{x=0}^{\infty} f_X(x)\, z^x
\]

is called the (probability) generating function. ρ_X is an analytic function with radius of convergence at least 1. Thus, we can obtain the derivatives of ρ_X by differentiating term by term. At z = 0 we obtain

\[
f_X(x) = \frac{1}{x!} \frac{d^x}{dz^x} \rho_X(0).
\]

If we can take derivatives at z = 1, we have

\[
\frac{d}{dz} \rho_X(z) = E[X z^{X-1}], \qquad \rho_X'(1) = EX,
\]
\[
\frac{d^2}{dz^2} \rho_X(z) = E[X(X - 1) z^{X-2}], \qquad \rho_X''(1) = E(X)_2,
\]

and, in general,

\[
\frac{d^k}{dz^k} \rho_X(z) = E[(X)_k z^{X-k}], \qquad \rho_X^{(k)}(1) = E(X)_k.
\]

For X, a geometric random variable,

\[
\rho_X(z) = Ez^X = \sum_{x=0}^{\infty} f_X(x) z^x = \sum_{x=0}^{\infty} p(1 - p)^x z^x = \frac{p}{1 - (1 - p)z}.
\]

Then

\[
\rho_X'(z) = \frac{p(1 - p)}{(1 - (1 - p)z)^2}, \qquad \rho_X'(1) = \frac{1 - p}{p},
\]
\[
\rho_X''(z) = \frac{2p(1 - p)^2}{(1 - (1 - p)z)^3}, \qquad \rho_X''(1) = \frac{2(1 - p)^2}{p^2}.
\]

For X, a binomial random variable based on n Bernoulli trials,

\[
\rho_X(z) = Ez^X = \sum_{x=0}^{n} f_X(x) z^x = \sum_{x=0}^{n} \binom{n}{x} p^x (1 - p)^{n-x} z^x = ((1 - p) + zp)^n
\]

by the binomial theorem.
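The geometric calculation above can be confirmed symbolically; this SymPy sketch (SymPy assumed available) recovers ρ′(1) = (1 − p)/p and ρ′′(1) = 2(1 − p)²/p².

```python
import sympy as sp

z, p = sp.symbols('z p', positive=True)
rho = p / (1 - (1 - p) * z)                          # geometric pgf
print(sp.simplify(sp.diff(rho, z).subs(z, 1)))       # (1 - p)/p
print(sp.simplify(sp.diff(rho, z, 2).subs(z, 1)))    # 2*(1 - p)**2/p**2
```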
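Likewise, the formula f_X(x) = ρ_X^(x)(0)/x! recovers the binomial pmf from its generating function. A sketch with n = 4 and p = 1/3, both arbitrary illustrative values:

```python
import sympy as sp

z = sp.symbols('z')
n, p = 4, sp.Rational(1, 3)               # arbitrary example values
rho = ((1 - p) + z * p)**n                # binomial pgf
for x in range(n + 1):
    # pmf from the x-th derivative of the pgf at z = 0 ...
    via_pgf = sp.diff(rho, z, x).subs(z, 0) / sp.factorial(x)
    # ... compared against the binomial formula directly.
    direct = sp.binomial(n, x) * p**x * (1 - p)**(n - x)
    print(x, via_pgf, sp.simplify(via_pgf - direct) == 0)   # True for each x
```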