Revisiting the Riemann Zeta Function at Positive Even Integers

Total Pages: 16

File Type: pdf, Size: 1020 KB

International Journal of Number Theory, Vol. 14, No. 7 (2018) 1849–1856
© World Scientific Publishing Company
DOI: 10.1142/S1793042118501105

Revisiting the Riemann Zeta function at positive even integers

Krishnaswami Alladi, Department of Mathematics, University of Florida, Gainesville, FL 32611, USA, alladik@ufl.edu
Colin Defant, Department of Mathematics, Princeton University, Princeton, NJ 08544, USA, [email protected]

Received 4 September 2017; Accepted 27 November 2017; Published 26 April 2018

Using Parseval's identity for the Fourier coefficients of $x^k$, we provide a new proof that
$$\zeta(2k) = \frac{(-1)^{k+1} B_{2k} (2\pi)^{2k}}{2(2k)!}.$$

Keywords: Riemann zeta function; even positive integers; Euler's formula; Parseval identity; Bernoulli numbers.

Mathematics Subject Classification 2010: 11M06, 11B68, 11B83

1. Introduction

One of the most famous mathematical problems was the evaluation of
$$\sum_{n=1}^{\infty} \frac{1}{n^2}.$$
The Italian mathematician Pietro Mengoli originally posed this problem in 1644. The problem was later popularized by the Bernoullis, who lived in Basel, Switzerland, so this became known as the Basel Problem [3]. Leonhard Euler solved this brilliantly when he was just 28 years old by showing that the sum in question was equal to $\pi^2/6$. Indeed, this was one of Euler's first major achievements, and it brought him world fame. In solving the Basel Problem, Euler found closed-form evaluations more generally of
$$\zeta(2k) = \sum_{n=1}^{\infty} \frac{1}{n^{2k}}$$
for all even integers $2k \geq 2$. The value of $\zeta(2k)$ is given as a rational multiple of $\pi^{2k}$. More precisely, Euler showed that [6, p. 16]
$$\zeta(2k) = \frac{(-1)^{k+1} B_{2k} (2\pi)^{2k}}{2(2k)!}, \tag{1}$$
where the Bernoulli numbers $B_m$ are defined by the exponential generating function
$$\frac{x}{e^x - 1} = \sum_{m=0}^{\infty} B_m \frac{x^m}{m!}.$$

Since Euler's time, numerous proofs of his formula for $\zeta(2k)$ have been given. Our goal here is to give yet another proof of (1) that involves a new identity for Bernoulli numbers and a different induction process than used previously. We consider the Fourier coefficients $a_n(k)$ and $b_n(k)$ of the function that is periodic of period $2\pi$ and is given by $f(x) = x^k$ on the interval $(-\pi, \pi]$, and we obtain a pair of intertwining recurrences for these coefficients (see (7) and (8) below). We then apply Parseval's theorem to connect these Fourier coefficients with $\zeta(2k)$ and then establish (1) by solving this recurrence. The novelty in our approach is a new identity for Bernoulli numbers that we obtain and use.

A very recent paper by Navas, Ruiz, and Varona [5] establishes new connections between the Fourier coefficients of many fundamental sequences of polynomials, such as the Legendre polynomials and the Gegenbauer polynomials, by making them periodic of period 1. In doing so, these authors consider the Fourier coefficients of $x^k$, but they do not use their ideas to establish Euler's formula (1). Another paper by Kuo [4] provides a recurrence for the values of $\zeta(2k)$ involving only the values $\zeta(2j)$ for $j \leq k/2$ instead of requiring $j \leq k-1$ as we do. Our method is very different from Kuo's. Other recent proofs of Euler's formula appear in [1, 2].

2. Warm Up

For each positive integer $k$, let $a_n(k)$ and $b_n(k)$ be the $n$th Fourier coefficients of the function $x \mapsto x^k$. That is, $a_0(k) = \frac{1}{2\pi}\int_{-\pi}^{\pi} x^k\,dx$, $a_n(k) = \frac{1}{\pi}\int_{-\pi}^{\pi} x^k \cos(nx)\,dx$, and $b_n(k) = \frac{1}{\pi}\int_{-\pi}^{\pi} x^k \sin(nx)\,dx$ for $n, k \geq 1$.
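As a quick sanity check on Euler's formula (1), the sketch below (not part of the paper; the helper name `bernoulli_numbers` and the truncation limits are our own choices) computes the Bernoulli numbers from their defining recurrence and compares the closed form with partial sums of the series defining $\zeta(2k)$.

```python
from fractions import Fraction
from math import comb, factorial, pi

def bernoulli_numbers(n):
    # B_0..B_n from the recurrence sum_{m=0}^{k-1} C(k+1, m) B_m = -(k+1) B_k,
    # i.e. identity (3) below written for the index k+1.
    B = [Fraction(1)]
    for k in range(1, n + 1):
        B.append(-sum(comb(k + 1, m) * B[m] for m in range(k)) / (k + 1))
    return B

B = bernoulli_numbers(8)
for k in (1, 2, 3, 4):
    closed = (-1) ** (k + 1) * float(B[2 * k]) * (2 * pi) ** (2 * k) / (2 * factorial(2 * k))
    partial = sum(1.0 / n ** (2 * k) for n in range(1, 100_000))
    print(f"zeta({2 * k}): formula (1) gives {closed:.9f}, a partial sum gives {partial:.9f}")
```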
Parseval's identity [7, p. 191] informs us that
$$\frac{1}{\pi}\int_{-\pi}^{\pi} x^{2k}\,dx = 2a_0(k)^2 + \sum_{n=1}^{\infty}\bigl(a_n(k)^2 + b_n(k)^2\bigr). \tag{2}$$
Let us apply (2) in the case $k = 1$. We easily calculate that $a_0(1) = a_n(1) = 0$ and $b_n(1) = 2\frac{(-1)^{n+1}}{n}$ for all $n \geq 1$. This implies that
$$\frac{2\pi^2}{3} = \frac{1}{\pi}\int_{-\pi}^{\pi} x^2\,dx = \sum_{n=1}^{\infty} b_n(1)^2 = 4\sum_{n=1}^{\infty}\frac{1}{n^2},$$
so we obtain Euler's classic result $\zeta(2) = \frac{\pi^2}{6}$. In Sec. 4, we use Parseval's identity to obtain a new inductive proof of Euler's famous formula (1).

3. An Identity for Bernoulli Numbers

Before we proceed, let us recall some well-known properties of Bernoulli numbers (see [6, Chap. 1]). In Lemma 3.1, we also establish one new identity involving these numbers.

The first few Bernoulli numbers are $B_0 = 1$, $B_1 = -\frac{1}{2}$, and $B_2 = \frac{1}{6}$. If $m \geq 3$ is odd, then $B_m = 0$. The equation
$$\sum_{m=0}^{n-1}\binom{n}{m} B_m = 0 \tag{3}$$
holds for any integer $n \geq 2$. If $n$ is odd, then
$$\sum_{m=0}^{n}\binom{n}{m} 2^m B_m = 0. \tag{4}$$
One can prove (4) by evaluating the Bernoulli polynomial $B_n(x) = \sum_{\ell=0}^{n}\binom{n}{\ell} B_\ell\, x^{n-\ell}$ at $x = 1/2$ and using the known identity $B_n(x) = (-1)^n B_n(1-x)$.

In what follows, we write $\binom{n}{r_1,\, r_2,\, r_3}$ to denote the trinomial coefficient given by $\frac{n!}{r_1!\, r_2!\, r_3!}$.

Lemma 3.1. For any positive integer $k$,
$$\sum_{i+t \leq \lfloor k/2\rfloor} 2^{2t} B_{2t}\binom{2k+2}{2t,\ 2i+1,\ 2(k-t-i)+1} = (k+1)\,2^{2k} + (-1)^k (k+1)\binom{2k}{k},$$
where the sum ranges over all non-negative integers $i$ and $t$ satisfying $i + t \leq \lfloor k/2\rfloor$.

Proof. The difficulty in evaluating the given sum spawns from its unusual limits of summation. Therefore, we will first evaluate the sum obtained by allowing $i$ and $t$ to range over all non-negative integers satisfying $i + t \leq k$. Because $\sum_{i=0}^{k-t}\binom{2(k-t)+2}{2i+1} = 2^{2k-2t+1}$, we have
$$\sum_{i+t \leq k} 2^{2t} B_{2t}\binom{2k+2}{2t,\ 2i+1,\ 2(k-t-i)+1} = \sum_{t=0}^{k} 2^{2t} B_{2t}\binom{2k+2}{2t}\sum_{i=0}^{k-t}\binom{2(k-t)+2}{2i+1} = 2^{2k+1}\sum_{t=0}^{k} B_{2t}\binom{2k+2}{2t}.$$
Since $B_m = 0$ for all odd $m \geq 3$,
$$\sum_{i+t \leq k} 2^{2t} B_{2t}\binom{2k+2}{2t,\ 2i+1,\ 2(k-t-i)+1} = 2^{2k+1}\left(\sum_{\ell=0}^{2k+1}\binom{2k+2}{\ell} B_\ell - \binom{2k+2}{1} B_1\right) = 2^{2k+1}(k+1). \tag{5}$$
Note that we used (3) along with the fact that $B_1 = -\frac{1}{2}$ to deduce the last equality above. We next compute
$$\sum_{\lfloor k/2\rfloor < i+t \leq k} 2^{2t} B_{2t}\binom{2k+2}{2t,\ 2i+1,\ 2(k-t-i)+1} = \sum_{m=\lfloor k/2\rfloor+1}^{k}\binom{2k+2}{2m+1}\sum_{t=0}^{m} 2^{2t} B_{2t}\binom{2m+1}{2t}$$
$$= \sum_{m=\lfloor k/2\rfloor+1}^{k}\binom{2k+2}{2m+1}\left(\sum_{\ell=0}^{2m+1}\binom{2m+1}{\ell} 2^{\ell} B_\ell - \binom{2m+1}{1}\, 2 B_1\right) = \sum_{m=\lfloor k/2\rfloor+1}^{k}\binom{2k+2}{2m+1}(2m+1),$$
where we have used (4) to see that $\sum_{\ell=0}^{2m+1}\binom{2m+1}{\ell} 2^{\ell} B_\ell = 0$. Therefore,
$$\sum_{\lfloor k/2\rfloor < i+t \leq k} 2^{2t} B_{2t}\binom{2k+2}{2t,\ 2i+1,\ 2(k-t-i)+1} = (2k+2)\sum_{m=\lfloor k/2\rfloor+1}^{k}\left(\binom{2k}{2m} + \binom{2k}{2m-1}\right)$$
$$= (k+1)\sum_{m=\lfloor k/2\rfloor+1}^{k}\left(\binom{2k}{2m} + \binom{2k}{2k-2m} + \binom{2k}{2m-1} + \binom{2k}{2k-2m+1}\right) = (k+1)\,2^{2k} - (-1)^k (k+1)\binom{2k}{k}. \tag{6}$$
Lemma 3.1 now follows if we subtract (6) from (5).

4. The Formula for $\zeta(2k)$

To begin this section, let us use integration by parts to see that for any $n \geq 1$ and $k \geq 2$,
$$a_n(k) = \frac{1}{\pi}\int_{-\pi}^{\pi} x^k \cos(nx)\,dx = -\frac{k}{n}\cdot\frac{1}{\pi}\int_{-\pi}^{\pi} x^{k-1}\sin(nx)\,dx = -\frac{k}{n}\, b_n(k-1). \tag{7}$$
Similarly, if $k$ is odd, then
$$b_n(k) = \frac{1}{\pi}\int_{-\pi}^{\pi} x^k \sin(nx)\,dx = 2\frac{(-1)^{n+1}\pi^{k-1}}{n} + \frac{k}{n}\, a_n(k-1). \tag{8}$$
Now, it is clear from the definitions of $a_n(k)$ and $b_n(k)$ that $a_n(k) = 0$ whenever $k$ is odd and $b_n(k) = 0$ whenever $k$ is even. Hence, $a_n(k)^2 + b_n(k)^2$ is actually equal to $(a_n(k) + b_n(k))^2$.
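The identities above lend themselves to an exact-arithmetic spot check. The sketch below (our own, not part of the paper; the helper names are ours) verifies identity (4) and Lemma 3.1 for small k with rational arithmetic.

```python
from fractions import Fraction
from math import comb, factorial

def bernoulli_numbers(n):
    B = [Fraction(1)]
    for k in range(1, n + 1):
        B.append(-sum(comb(k + 1, m) * B[m] for m in range(k)) / (k + 1))
    return B

def trinomial(n, r1, r2, r3):
    # the trinomial coefficient n! / (r1! r2! r3!)
    return factorial(n) // (factorial(r1) * factorial(r2) * factorial(r3))

B = bernoulli_numbers(20)

# Identity (4): for odd n, sum_m C(n, m) 2^m B_m = 0.
for n in (3, 5, 7, 9, 11):
    assert sum(comb(n, m) * 2**m * B[m] for m in range(n + 1)) == 0

# Lemma 3.1 for k = 1..8.
for k in range(1, 9):
    lhs = sum(4**t * B[2 * t] * trinomial(2 * k + 2, 2 * t, 2 * i + 1, 2 * (k - t - i) + 1)
              for t in range(k // 2 + 1) for i in range(k // 2 + 1 - t))
    rhs = (k + 1) * (4**k + (-1) ** k * comb(2 * k, k))
    assert lhs == rhs

print("identity (4) and Lemma 3.1 hold in all cases tested")
```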
If we appeal to (7) and (8) recursively and use the fact that $b_n(1) = 2\frac{(-1)^{n+1}}{n}$, then a simple inductive argument shows that when $n, k \geq 1$,
$$a_n(k) + b_n(k) = \sum_{\ell=0}^{\lfloor (k-1)/2\rfloor} c_n(k,\ell)\,\frac{\pi^{2\ell}}{n^{k-2\ell}}, \tag{9}$$
where we define
$$c_n(k,\ell) = \begin{cases} \dfrac{2\, k!}{(2\ell+1)!}\,(-1)^{\lfloor k/2\rfloor+\ell+n+1}, & \text{if } 0 \leq \ell \leq \left\lfloor \dfrac{k-1}{2}\right\rfloor; \\[1ex] 0, & \text{otherwise.} \end{cases} \tag{10}$$
Gathering (2), (9), and (10) together yields
$$\frac{2\pi^{2k}}{2k+1} - 2a_0(k)^2 = \frac{1}{\pi}\int_{-\pi}^{\pi} x^{2k}\,dx - 2a_0(k)^2 = \sum_{n=1}^{\infty}\left(\sum_{\ell=0}^{\lfloor (k-1)/2\rfloor} c_n(k,\ell)\,\frac{\pi^{2\ell}}{n^{k-2\ell}}\right)^2$$
$$= \sum_{n=1}^{\infty}\sum_{j=0}^{2\lfloor (k-1)/2\rfloor} r(k,j)\,\frac{\pi^{2j}}{n^{2k-2j}} = \sum_{j=0}^{2\lfloor (k-1)/2\rfloor} r(k,j)\,\pi^{2j}\,\zeta(2k-2j), \tag{11}$$
where
$$r(k,j) = \sum_{i=0}^{j} c_n(k,i)\, c_n(k,j-i) = \sum_{i=\max(0,\, j-\lfloor (k-1)/2\rfloor)}^{\min(\lfloor (k-1)/2\rfloor,\, j)} c_n(k,i)\, c_n(k,j-i)$$
$$= \sum_{i=\max(0,\, j-\lfloor (k-1)/2\rfloor)}^{\min(\lfloor (k-1)/2\rfloor,\, j)} \frac{2\, k!}{(2i+1)!}\,(-1)^{\lfloor k/2\rfloor+i+n+1}\;\frac{2\, k!}{(2(j-i)+1)!}\,(-1)^{\lfloor k/2\rfloor+j-i+n+1} = \frac{4(-1)^j (k!)^2}{(2j+2)!}\sum_{i=\max(0,\, j-\lfloor (k-1)/2\rfloor)}^{\min(\lfloor (k-1)/2\rfloor,\, j)}\binom{2j+2}{2i+1}.$$
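As a numerical companion to (9) and (10) (our own sketch, using mpmath; the precision, ranges, and tolerance are arbitrary choices), the Fourier coefficients computed by quadrature are compared with the closed form.

```python
import mpmath as mp

mp.mp.dps = 30

def a(n, k):
    # a_n(k) = (1/pi) * integral of x^k cos(nx) over (-pi, pi]
    return mp.quad(lambda x: x**k * mp.cos(n * x), [-mp.pi, mp.pi]) / mp.pi

def b(n, k):
    return mp.quad(lambda x: x**k * mp.sin(n * x), [-mp.pi, mp.pi]) / mp.pi

def c(n, k, l):
    # the coefficient c_n(k, l) from (10)
    if 0 <= l <= (k - 1) // 2:
        return 2 * mp.factorial(k) / mp.factorial(2 * l + 1) * (-1) ** (k // 2 + l + n + 1)
    return mp.mpf(0)

for n in (1, 2, 3):
    for k in (1, 2, 3, 4, 5):
        lhs = a(n, k) + b(n, k)
        rhs = sum(c(n, k, l) * mp.pi ** (2 * l) / n ** (k - 2 * l) for l in range((k - 1) // 2 + 1))
        assert mp.almosteq(lhs, rhs, rel_eps=mp.mpf("1e-15"))

print("identity (9) confirmed numerically for n <= 3, k <= 5")
```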
Recommended publications
  • The Unilateral Z–Transform and Generating Functions
    The Unilateral z-Transform and Generating Functions. Recall from "Discrete-Time Linear, Time Invariant Systems and z-Transforms" that the behaviour of a discrete-time LTI system is determined by its impulse response function $h[n]$ and that the z-transform of $h[n]$ is $H(z) = \sum_{k=-\infty}^{\infty} h[k]\, z^{-k}$. If the LTI system is causal, then $h[n] = 0$ for all $n < 0$ and $H(z) = \sum_{k=0}^{\infty} h[k]\, z^{-k}$. Definition 1 (Unilateral z-Transform): the unilateral z-transform of the discrete-time signal $x[n]$ (whether or not $x[n] = 0$ for negative $n$'s) is defined to be $\mathcal{X}(z) = \sum_{n=0}^{\infty} x[n]\, z^{-n}$. When there is any danger of confusing the regular z-transform with the unilateral z-transform, $X(z) = \sum_{n=-\infty}^{\infty} x[n]\, z^{-n}$ is called the bilateral z-transform. Example 2: the signal $x[n] = a^n u[n]$ is zero for all $n < 0$, so the unilateral z-transform of $x[n]$ is the same as the ordinary z-transform. So, as we saw in Example 7 of "Discrete-Time Linear, Time Invariant Systems and z-Transforms", $\mathcal{X}(z) = X(z) = \sum_{n=0}^{\infty} z^{-n} a^n = \frac{1}{1 - z^{-1}a}$, provided that $|z^{-1}a| < 1$, or equivalently $|z| > |a|$. Since the unilateral z-transform of any $x[n]$ is always equal to the bilateral z-transform of the right-sided signal $x[n]u[n]$, the region of convergence of a unilateral z-transform is always the exterior of a circle.
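A small numerical companion to Example 2 (our own sketch; the values of a and z and the truncation point N are arbitrary): the truncated unilateral sum is compared with the closed form 1/(1 - a z^{-1}) at a point with |z| > |a|.

```python
a, z = 0.8, 1.5 + 0.5j            # arbitrary choices with |z| > |a|
N = 2000                          # truncation point for the infinite sum
partial = sum((a / z) ** n for n in range(N))   # sum_{n=0}^{N-1} a^n z^{-n}
closed = 1 / (1 - a / z)                        # 1 / (1 - a z^{-1})
print(abs(partial - closed))                    # ~0: the geometric tail is negligible since |a/z| < 1
```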
  • Ordinary Generating Functions
    CHAPTER 10: Ordinary Generating Functions. Introduction. We'll begin this chapter by introducing the notion of ordinary generating functions and discussing the basic techniques for manipulating them. These techniques are merely restatements and simple applications of things you learned in algebra and calculus. You must master these basic ideas before reading further. In Section 2, we apply generating functions to the solution of simple recursions. This requires no new concepts, but provides practice manipulating generating functions. In Section 3, we return to the manipulation of generating functions, introducing slightly more advanced methods than those in Section 1. If you found the material in Section 1 easy, you can skim Sections 2 and 3. If you had some difficulty with Section 1, those sections will give you additional practice developing your ability to manipulate generating functions. Section 4 is the heart of this chapter. In it we study the Rules of Sum and Product for ordinary generating functions. Suppose that we are given a combinatorial description of the construction of some structures we wish to count. These two rules often allow us to write down an equation for the generating function directly from this combinatorial description. Without such tools, we may get bogged down in lengthy algebraic manipulations. 10.1 What are Generating Functions? In this section, we introduce the idea of ordinary generating functions and look at some ways to manipulate them. This material is essential for understanding later material on generating functions. Be sure to work the exercises in this section before reading later sections! Definition 10.1 (Ordinary generating function, OGF). Suppose we are given a sequence $a_0, a_1, \ldots$
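A minimal illustration of an ordinary generating function in the sense of Definition 10.1 (our own sketch, with SymPy as an assumed tool choice): the all-ones sequence has OGF 1/(1 - x), and the coefficients can be read back off a series expansion.

```python
import sympy as sp

x = sp.symbols("x")
G = 1 / (1 - x)                                   # OGF of the sequence 1, 1, 1, ...
coeffs = sp.series(G, x, 0, 6).removeO()          # 1 + x + x^2 + x^3 + x^4 + x^5
print(sp.Poly(coeffs, x).all_coeffs()[::-1])      # [1, 1, 1, 1, 1, 1]
```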
  • The Bloch-Wigner-Ramakrishnan Polylogarithm Function
    Math. Ann. 286, 613–624 (1990), Springer-Verlag 1990. The Bloch-Wigner-Ramakrishnan polylogarithm function. Don Zagier, Max-Planck-Institut für Mathematik, Gottfried-Claren-Strasse 26, D-5300 Bonn 3, Federal Republic of Germany. To Hans Grauert. The polylogarithm function $\mathrm{Li}_m(x) = \sum_{n=1}^{\infty} \frac{x^n}{n^m}$ appears in many parts of mathematics and has an extensive literature [2]. It can be analytically extended to the cut plane $\mathbb{C}\setminus[1,\infty)$ by defining $\mathrm{Li}_m(x)$ inductively as $\int_0^x \mathrm{Li}_{m-1}(z)\, z^{-1}\, dz$, but then has a discontinuity as $x$ crosses the cut. However, for $m = 2$ the modified function $D(x) = \Im(\mathrm{Li}_2(x)) + \arg(1-x)\log|x|$ extends (real-) analytically to the entire complex plane except for the points $x = 0$ and $x = 1$, where it is continuous but not analytic. This modified dilogarithm function, introduced by Wigner and Bloch [1], has many beautiful properties. In particular, its values at algebraic arguments suffice to express in closed form the volumes of arbitrary hyperbolic 3-manifolds and the values at $s = 2$ of the Dedekind zeta functions of arbitrary number fields (cf. [6] and the expository article [7]). It is therefore natural to ask for a similar real-analytic and single-valued modification of the higher polylogarithm functions $\mathrm{Li}_m$. Such a function $D_m$ was constructed, and shown to satisfy a functional equation relating $D_m(x^{-1})$ and $D_m(x)$, by Ramakrishnan [3]. His construction, which involved monodromy arguments for certain nilpotent subgroups of $GL_m(\mathbb{C})$, is completely explicit, but he does not actually give a formula for $D_m$ in terms of the polylogarithm. In this note we write down such a formula and give a direct proof of the one-valuedness and functional equation.
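As a hedged numerical aside (not from the paper): the function D(x) = Im(Li_2(x)) + arg(1 - x) log|x| can be evaluated with mpmath's polylog. At x = i the correction term vanishes and D(i) equals Catalan's constant, while D vanishes on the real interval (0, 1).

```python
import mpmath as mp

def D(x):
    # Bloch-Wigner dilogarithm: Im(Li_2(x)) + arg(1 - x) * log|x|
    x = mp.mpc(x)
    return mp.im(mp.polylog(2, x)) + mp.arg(1 - x) * mp.log(abs(x))

print(D(1j))         # 0.9159655941...
print(mp.catalan)    # Catalan's constant, the same value
print(D(0.5))        # ~0: D vanishes on the real interval (0, 1)
```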
  • Functions of Random Variables
    Names for Eg(X); Generating Functions. Topic 8: The Expected Value. Functions of Random Variables. Outline: Names for Eg(X) (Means, Moments, Factorial Moments, Variance and Standard Deviation); Generating Functions. Means. If g(x) = x, then $\mu_X = EX$ is called variously the distributional mean, and the first moment. • $S_n$, the number of successes in n Bernoulli trials with success parameter p, has mean np. • The mean of a geometric random variable with parameter p is $(1-p)/p$. • The mean of an exponential random variable with parameter $\beta$ is $1/\beta$. • A standard normal random variable has mean 0. Exercise. Find the mean of a Pareto random variable. $\int_{-\infty}^{\infty} x f(x)\,dx = \int_{\alpha}^{\infty} x\,\frac{\beta\alpha^{\beta}}{x^{\beta+1}}\,dx = \beta\alpha^{\beta}\int_{\alpha}^{\infty} x^{-\beta}\,dx = \beta\alpha^{\beta}\left.\frac{x^{1-\beta}}{1-\beta}\right|_{\alpha}^{\infty} = \frac{\alpha\beta}{\beta-1}$, for $\beta > 1$. Moments. In analogy to a similar concept in physics, $EX^m$ is called the m-th moment. The second moment in physics is associated to the moment of inertia. • If X is a Bernoulli random variable, then $X = X^m$. Thus, $EX^m = EX = p$. • For a uniform random variable on [0, 1], the m-th moment is $\int_0^1 x^m\,dx = 1/(m+1)$. • The third moment for Z, a standard normal random variable, is 0. The fourth moment is $EZ^4 = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} z^4 \exp\!\left(-\frac{z^2}{2}\right)dz = \left.-\frac{1}{\sqrt{2\pi}} z^3\exp\!\left(-\frac{z^2}{2}\right)\right|_{-\infty}^{\infty} + \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} 3z^2\exp\!\left(-\frac{z^2}{2}\right)dz = 3EZ^2 = 3$, using integration by parts with $u(z) = z^3$, $v(z) = -\exp(-z^2/2)$, $u'(z) = 3z^2$, $v'(z) = z\exp(-z^2/2)$. Factorial Moments. If $g(x) = (x)_k$, where $(x)_k = x(x-1)\cdots(x-k+1)$, then $E(X)_k$ is called the k-th factorial moment.
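A quick numerical companion to the Pareto-mean and normal-moment computations above (our own sketch; the parameter values alpha = 2, beta = 3 are arbitrary).

```python
import mpmath as mp

alpha, beta = 2, 3
pareto_mean = mp.quad(lambda x: x * beta * alpha**beta / x**(beta + 1), [alpha, mp.inf])
print(pareto_mean, alpha * beta / (beta - 1))    # both equal 3

z4 = mp.quad(lambda z: z**4 * mp.exp(-z**2 / 2), [-mp.inf, mp.inf]) / mp.sqrt(2 * mp.pi)
print(z4)                                        # 3.0, i.e. E[Z^4] = 3
```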
  • Generating Functions
    Mathematical Database. GENERATING FUNCTIONS. 1. Introduction. The concept of generating functions is a powerful tool for solving counting problems. Intuitively put, its general idea is as follows. In counting problems, we are often interested in counting the number of objects of 'size n', which we denote by $a_n$. By varying n, we get different values of $a_n$. In this way we get a sequence of real numbers $a_0, a_1, a_2, \ldots$ from which we can define a power series (which in some sense can be regarded as an 'infinite-degree polynomial') $G(x) = a_0 + a_1 x + a_2 x^2 + \cdots$. The above $G(x)$ is the generating function for the sequence $a_0, a_1, a_2, \ldots$. In this set of notes we will look at some elementary applications of generating functions. Before formally introducing the tool, let us look at the following example. Example 1.1. (IMO 2001 HK Preliminary Selection Contest) Find the coefficient of $x^{17}$ in the expansion of $(1 + x^5 + x^7)^{20}$. Solution. The only way to form an $x^{17}$ term is to gather two $x^5$ and one $x^7$. Since there are $C^{20}_2 = 190$ ways to choose two $x^5$ from the 20 multiplicands and 18 ways to choose one $x^7$ from the remaining 18 multiplicands, the answer is $190 \times 18 = 3420$. To gain a preliminary insight into how generating functions are related to counting, let us describe the above problem in another way. Suppose there are 20 bags, each containing a $5 coin and a $7 coin. If we can use at most one coin from each bag, in how many different ways can we pay $17, assuming that all coins are distinguishable (i.e.
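The count in Example 1.1 can be confirmed mechanically (our own sketch, using SymPy to expand the product and read off the coefficient).

```python
import sympy as sp

x = sp.symbols("x")
p = sp.expand((1 + x**5 + x**7) ** 20)
print(p.coeff(x, 17))                  # 3420
print(sp.binomial(20, 2) * 18)         # 3420, the count described above
```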
  • 3 Formal Power Series
    MT5821 Advanced Combinatorics. 3 Formal power series. Generating functions are the most powerful tool available to combinatorial enumerators. This week we are going to look at some of the things they can do. 3.1 Commutative rings with identity. In studying formal power series, we need to specify what kind of coefficients we should allow. We will see that we need to be able to add, subtract and multiply coefficients; we need to have zero and one among our coefficients. Usually the integers, or the rational numbers, will work fine. But there are advantages to a more general approach. A favourite object of some group theorists, the so-called Nottingham group, is defined by power series over a finite field. A commutative ring with identity is an algebraic structure in which addition, subtraction, and multiplication are possible, and there are elements called 0 and 1, with the following familiar properties: • addition and multiplication are commutative and associative; • the distributive law holds, so we can expand brackets; • adding 0, or multiplying by 1, don't change anything; • subtraction is the inverse of addition; • 0 ≠ 1. Examples include the integers Z (this is in many ways the prototype); any field (for example, the rationals Q, real numbers R, complex numbers C, or integers modulo a prime p, F_p). Let R be a commutative ring with identity. An element u ∈ R is a unit if there exists v ∈ R such that uv = 1. The units form an abelian group under the operation of multiplication. Note that 0 is not a unit (why?).
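To make the role of units concrete (our own sketch; the helper name reciprocal and the assumption that the constant term is +1 or -1 are ours): the reciprocal of 1 + x + x^2 over Z is computed term by term, and it has integer coefficients precisely because the constant term is a unit of Z.

```python
def reciprocal(a, N):
    # coefficients b_0..b_{N-1} of 1/A(x), assuming the constant term a[0] is +1 or -1
    b = [a[0]]                         # 1/a[0] equals a[0] when a[0] is a unit of Z
    for n in range(1, N):
        b.append(-a[0] * sum(a[k] * b[n - k] for k in range(1, min(n, len(a) - 1) + 1)))
    return b

print(reciprocal([1, 1, 1], 8))        # [1, -1, 0, 1, -1, 0, 1, -1]: 1/(1 + x + x^2) has integer coefficients
```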
  • Formal Power Series License: CC BY-NC-SA
    Formal Power Series. License: CC BY-NC-SA. Emma Franz, April 28, 2015. 1 Introduction. The set S[[x]] of formal power series in x over a set S is the set of functions from the nonnegative integers to S. However, the way that we represent elements of S[[x]] will be as an infinite series, and operations in S[[x]] will be closely linked to the addition and multiplication of finite-degree polynomials. This paper will introduce a bit of the structure of sets of formal power series and then transfer over to a discussion of generating functions in combinatorics. The most familiar conceptualization of formal power series will come from taking coefficients of a power series from some sequence. Let $\{a_n\} = a_0, a_1, a_2, \ldots$ be a sequence of numbers. Then the formal power series associated with $\{a_n\}$ is the series $A(s) = a_0 + a_1 s + a_2 s^2 + \cdots$, where s is a formal variable. That is, we are not treating A as a function that can be evaluated for some $s_0$. In general, $A(s_0)$ is not defined, but we will define $A(0)$ to be $a_0$. 2 Algebraic Structure. Let R be a ring. We define R[[s]] to be the set of formal power series in s over R. Then R[[s]] is itself a ring, with the definitions of multiplication and addition following closely from how we define these operations for polynomials. Let $A(s) = a_0 + a_1 s + a_2 s^2 + \cdots$ and $B(s) = b_0 + b_1 s + b_2 s^2 + \cdots$ be elements of R[[s]]. Then the sum $A(s) + B(s)$ is defined to be $C(s) = c_0 + c_1 s + c_2 s^2 + \cdots$, where $c_i = a_i + b_i$ for all $i \geq 0$.
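A minimal model of the sum and product just described (our own sketch; truncating to finitely many coefficients is an implementation choice, not part of the definition).

```python
def ps_add(a, b):
    # coefficientwise sum: c_i = a_i + b_i
    return [x + y for x, y in zip(a, b)]

def ps_mul(a, b):
    # Cauchy product: c_i = sum_{k=0}^{i} a_k b_{i-k}
    N = min(len(a), len(b))
    return [sum(a[k] * b[i - k] for k in range(i + 1)) for i in range(N)]

A = [1, 1, 0, 0, 0, 0]          # 1 + s
B = [1, -1, 1, -1, 1, -1]       # 1 - s + s^2 - s^3 + ...
print(ps_add(A, B))             # [2, 0, 1, -1, 1, -1]
print(ps_mul(A, B))             # [1, 0, 0, 0, 0, 0]: (1 + s) * (1 - s + s^2 - ...) = 1
```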
  • Functions and Transforms
    Appendix A. Functions and Transforms. A.1 Nonlinear Transforms. Many theoretical and practical problems can be converted into easier forms if, instead of the discrete or continuous distributions, their different transforms are applied, which can be solved more readily. In probability theory, numerous transforms are applied. Denote by F the distribution function of a random variable X and by f the density function if it exists. The general form of the most frequently used transform depending on the real or complex parameter w is $E[w^X] = \int_{-\infty}^{\infty} w^x\, dF(x)$. If the density function exists, then the last Riemann-Stieltjes integral can be rewritten in the form of a Riemann integral as follows: $E[w^X] = \int_{-\infty}^{\infty} w^x f(x)\, dx$. 1. In the general case, setting $w = e^{it}$, $t \in \mathbb{R}$, we have the characteristic function (Fourier-Stieltjes transform) $\varphi(t) = E[e^{itX}] = \int_{-\infty}^{\infty} e^{itx}\, dF(x)$. 2. If the random variable X has a discrete distribution with values 0, 1, ... and probabilities $p_0, p_1, \ldots$ corresponding to them, then setting $w = z$, $|z| < 1$, we get $G(z) = E[z^X] = \int_{-\infty}^{\infty} z^x\, dF(x) = \sum_{k=0}^{\infty} p_k z^k$, which is the generating function of X. 3. The Laplace-Stieltjes transform plays a significant role when considering random variables taking only nonnegative values (usually we consider this type of random variable in queuing theory), which we obtain with $w = e^{-s}$, $s \geq 0$: $\tilde F(s) = E[e^{-sX}] = \int_{0}^{\infty} e^{-sx}\, dF(x)$. (From L. Lakatos et al., Introduction to Queueing Systems with Telecommunication Applications, Springer, 2019, https://doi.org/10.1007/978-3-030-15142-3.)
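A concrete companion to the three transforms above (our own sketch, using a Poisson(lam) distribution, for which E[w^X] = exp(lam*(w - 1)); the parameter values are arbitrary).

```python
import cmath, math

lam = 1.7
pmf = [math.exp(-lam) * lam**k / math.factorial(k) for k in range(200)]

def E(w):                        # E[w^X] by direct summation over the pmf
    return sum(p * w**k for k, p in enumerate(pmf))

t, z, s = 0.6, 0.3, 1.2
print(E(cmath.exp(1j * t)), cmath.exp(lam * (cmath.exp(1j * t) - 1)))   # characteristic function
print(E(z), math.exp(lam * (z - 1)))                                    # generating function
print(E(math.exp(-s)), math.exp(lam * (math.exp(-s) - 1)))              # Laplace-Stieltjes transform
```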
  • Enumerative Combinatorics 2: Formal Power Series
    Enumerative Combinatorics 2: Formal power series. Peter J. Cameron, Autumn 2013. Probably you recognised in the last chapter a few things from analysis, such as the exponential and geometric series; you may know from complex analysis that convergent power series have all the nice properties one could wish. But there are reasons for considering non-convergent power series, as the following example shows. Recall the generating function for the factorials: $F(x) = \sum_{n\geq 0} n!\, x^n$, which converges nowhere. Now consider the following problem. A permutation of $\{1, \ldots, n\}$ is said to be connected if there is no number m with $1 \leq m \leq n-1$ such that the permutation maps $\{1, \ldots, m\}$ to itself. Let $C_n$ be the number of connected permutations of $\{1, \ldots, n\}$. Any permutation is composed of a connected permutation on an initial interval and an arbitrary permutation of the remainder; so $n! = \sum_{m=1}^{n} C_m (n-m)!$. Hence, if $G(x) = 1 - \sum_{n\geq 1} C_n x^n$, we have $F(x)G(x) = 1$, and so $G(x) = 1/F(x)$. Fortunately we can do everything that we require for combinatorics (except some asymptotic analysis) without assuming any convergence properties. 2.1 Formal power series. Let R be a commutative ring with identity. A formal power series over R is, formally, an infinite sequence $(r_0, r_1, r_2, \ldots)$ of elements of R; but we always represent it in the suggestive form $r_0 + r_1 x + r_2 x^2 + \cdots = \sum_{n\geq 0} r_n x^n$. We denote the set of all formal power series by R[[x]].
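The recurrence n! = sum_{m=1}^{n} C_m (n-m)! determines the numbers C_n directly. The sketch below (our own; the helper name connected_counts is ours) reproduces the first few counts of connected permutations.

```python
from math import factorial

def connected_counts(N):
    # solve n! = sum_{m=1}^{n} C_m (n-m)! for C_n, n = 1..N
    C = {}
    for n in range(1, N + 1):
        C[n] = factorial(n) - sum(C[m] * factorial(n - m) for m in range(1, n))
    return C

print(connected_counts(6))   # {1: 1, 2: 1, 3: 3, 4: 13, 5: 71, 6: 461}
```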
  • Our Friends: Transforms
    IEOR 6711: Stochastic Models I. Professor Whitt, Tuesday, September 11. Our Friends: Transforms. 1. Our Friends. Identify the following: (a) generating function (of a sequence); (b) probability generating function (of a probability distribution or of a random variable); (c) exponential generating function (of a sequence); (d) z transform; (e) moment generating function (of a probability distribution or of a random variable); (f) characteristic function (of a probability distribution or of a random variable); (g) Fourier transform (of a function); (h) Laplace transform (of a function). Answers: (a) generating function. The generating function of the sequence $\{a_n : n \geq 0\}$ is $\hat a(z) \equiv \sum_{n=0}^{\infty} a_n z^n$, which is defined where it converges. There exists (theorem) a radius of convergence R such that the series converges absolutely for all z with $|z| < R$ and diverges if $|z| > R$. The series is uniformly convergent on sets of the form $\{z : |z| < R_0\}$, where $0 < R_0 < R$. References: (1) Wilf, Generatingfunctionology; (2) Feller, I, Chapter 11; (3) Grimmett and Stirzaker, Chapter 5. The book An Introduction to Stochastic Processes, by E. P. C. Kao, 1997, Duxbury, provides introductory discussions of both generating functions and Laplace transforms, including numerical inversion algorithms using MATLAB (based on the 1992 and 1995 Abate and Whitt papers). (b) probability generating function. Given a random variable X with a probability mass function (pmf) $p_n \equiv P(X = n)$, the probability generating function of X (really of its probability distribution) is the generating function of the pmf, i.e., $\hat P(z) \equiv E[z^X] \equiv \sum_{n=0}^{\infty} p_n z^n$. (c) exponential generating function. The exponential generating function of the sequence $\{a_n : n \geq 0\}$ is $\hat a_{\exp}(z) \equiv \sum_{n=0}^{\infty} \frac{a_n z^n}{n!}$, which is defined where it converges; see p.
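A small check of the probability generating function definition (our own sketch with a Binomial(n, p) example; the finite-difference step used for the derivative is an arbitrary choice).

```python
from math import comb

n, p, z = 10, 0.3, 0.7
pmf = [comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(n + 1)]

pgf_sum = sum(q * z**k for k, q in enumerate(pmf))     # E[z^X] summed from the pmf
print(pgf_sum, (1 - p + p * z) ** n)                   # the two values agree

eps = 1e-6                                             # crude finite-difference derivative at z = 1
print((sum(q * (1 + eps) ** k for k, q in enumerate(pmf)) - 1) / eps, n * p)   # both ~3.0, the mean np
```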
  • Generating Functions ITT9131 Konkreetne Matemaatika
    Generating Functions. ITT9131 Konkreetne Matemaatika, Chapter Seven: Domino Theory and Change; Basic Maneuvers; Solving Recurrences; Special Generating Functions; Convolutions; Exponential Generating Functions; Dirichlet Generating Functions. Contents: 1 Convolutions (Fibonacci convolution, m-fold convolution, Catalan numbers); 2 Exponential generating functions. Convolutions. Given two sequences $\langle f_0, f_1, f_2, \ldots\rangle = \langle f_n\rangle$ and $\langle g_0, g_1, g_2, \ldots\rangle = \langle g_n\rangle$, the convolution of $\langle f_n\rangle$ and $\langle g_n\rangle$ is the sequence $\langle f_0 g_0,\ f_0 g_1 + f_1 g_0,\ f_0 g_2 + f_1 g_1 + f_2 g_0,\ \ldots\rangle = \left\langle \sum_k f_k g_{n-k}\right\rangle = \left\langle \sum_{k+\ell=n} f_k g_\ell\right\rangle$. If F(z) and G(z) are generating functions on the sequences $\langle f_n\rangle$ and $\langle g_n\rangle$, then their convolution has the generating function $F(z)\cdot G(z)$. Three or more sequences can be convolved analogously, for example $\left\langle \sum_{j+k+\ell=n} f_j g_k h_\ell\right\rangle$, and the generating function of this three-fold convolution is the product $F(z)\cdot G(z)\cdot H(z)$.
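A quick check that convolution corresponds to multiplying generating functions (our own sketch, using numpy.convolve): with f_n = g_n = 1, whose generating function is 1/(1 - z), the convolution is h_n = n + 1, the coefficient sequence of 1/(1 - z)^2.

```python
import numpy as np

f = np.ones(8, dtype=int)
g = np.ones(8, dtype=int)
h = np.convolve(f, g)[:8]     # h_n = sum_k f_k g_{n-k}, truncated to the first 8 terms
print(h)                      # [1 2 3 4 5 6 7 8], i.e. h_n = n + 1
```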
  • Generating Functions Handout
    Chapter 19. Generating Functions and Convolutions. A common complaint in mathematics is the ubiquitous: I can follow the proof when you do it line by line, but how could anyone ever think of doing this! Of all the areas in probability, one of the most appropriate for sounding this complaint is in generating functions. At first glance, it seems like it's making our lives needlessly complex; however, at the end of the chapter you'll have learned how many different problems generating functions solve. Further, the time you spend learning these techniques will continue to pay dividends as you continue your studies, as these are used not just in probability, but throughout mathematical physics. The reason for their helpfulness is that they allow us to package neatly a lot of information about a problem. You should be skeptical as to whether or not this is worthwhile; however, we'll see time and time again that this new viewpoint simplifies the algebra we need to do. We'll give several motivating examples from previous courses of how a change in viewpoint can save you hours of labor, and then describe many of the properties and applications of generating functions. While there are many problems where it's quite difficult to find and use the correct generating function, a lot of useful problems can be handled with a small bag of tricks. Thus, be patient as you read on – the time you spend mastering this material will help you for years to come. In probability, the most important use of generating functions is to understand moments of random variables.