5 Infinite products

5.1 Interpolation

A common and classical problem in calculus is to find a function that takes specified values at certain specified points. For example, it is easy to find a polynomial $p$ for which you have specified $N$ values $p(z_j) = c_j$, $j = 1, \dots, N$. Indeed you can choose the Lagrange interpolating polynomial,
\[
p(z) = \sum_{j=1}^{N} c_j \, \frac{\prod_{i \neq j} (z - z_i)}{\prod_{i \neq j} (z_j - z_i)}.
\]
Written another way, this expresses $p(z)$ as $\sum c_j \ell_j(z)$ where $\ell_j$ is a polynomial for which $\ell_j(z_i) = \delta_{ij}$.

If you wish to find an analytic function that satisfies infinitely many conditions, things are much more complicated. We have already seen, for example, that it is impossible to find an entire function $f$ such that
\[
f(1/j) = \begin{cases} 0, & \text{if $j$ is even} \\ 1/j, & \text{if $j$ is odd.} \end{cases}
\]
(It is of course easy to construct a continuous function with these constraints!)

A natural question to ask is whether one can ever construct anything like the Lagrange interpolating polynomial with infinitely many terms. This would require us to write some form of infinite product of terms $(z - z_j)$. This obviously requires some care, as such an infinite product can easily be zero or unbounded! One of the amazing truths about analytic functions is that essentially all entire functions can be written as infinite products. This was discovered, but not proved, as was so much else, by Euler in about 1750.

We shall begin this section by studying one example:
\[
\frac{\sin \pi z}{\pi z} = \prod_{n=1}^{\infty} \left( 1 - \frac{z^2}{n^2} \right).
\]
This example introduces most of the key ideas. We shall then look more carefully at the theory of infinite products of numbers, such as
\[
\frac{2}{\pi} = \prod_{n=1}^{\infty} \left( 1 - \frac{1}{4n^2} \right),
\]
and infinite products of functions. Later we shall apply the ideas to two further examples, the gamma function and the zeta function.

5.2 A nontrivial example

The basis of our example is the following partial fraction series representation of $\cot \pi z$:
\[
\pi \cot \pi z = \frac{1}{z} + \sum_{n=1}^{\infty} \frac{2z}{z^2 - n^2} \qquad (\text{for } z \neq 0, \pm 1, \pm 2, \dots).
\]
To prove this we consider the contour $C_N$, the boundary of the rectangle bounded by the lines $y = -N$, $y = N$, $x = -(N + \tfrac{1}{2})$ and $x = N + \tfrac{1}{2}$.

[Figure: the rectangle $C_N$ with vertical sides $x = \pm(N + \tfrac{1}{2})$ and horizontal sides $y = \pm N$, with the integers $-N, \dots, 0, 1, \dots, N$ marked on the real axis.]

We first show that on $C_N$ there is a bound for $|\cot \pi z|$ which is independent of $N$; that is, there is a $B$ such that, for all $N = 1, 2, 3, \dots$, $|\cot \pi z| \leq B$ whenever $z \in C_N$. Note that
\begin{align*}
\cos \pi z &= \cos \pi x \cosh \pi y - i \sin \pi x \sinh \pi y \\
\sin \pi z &= \sin \pi x \cosh \pi y + i \cos \pi x \sinh \pi y.
\end{align*}
Therefore, on $x = \pm(N + 1/2)$,
\[
\frac{|\cos \pi z|}{|\sin \pi z|} = \frac{|\sinh \pi y|}{\cosh \pi y} = \frac{|\sinh \pi y|}{\sqrt{1 + \sinh^2 \pi y}} \leq 1,
\]
whilst on $y = \pm N$,
\begin{align*}
\frac{|\cos \pi z|}{|\sin \pi z|}
&= \frac{\sqrt{\cos^2 \pi x \cosh^2 \pi N + \sin^2 \pi x \sinh^2 \pi N}}{\sqrt{\sin^2 \pi x \cosh^2 \pi N + \cos^2 \pi x \sinh^2 \pi N}}
= \frac{\sqrt{\cos^2 \pi x + \sinh^2 \pi N}}{\sqrt{\sin^2 \pi x + \sinh^2 \pi N}} \\
&\leq \sqrt{1 + \frac{1}{\sinh^2 \pi N}} \leq 1 + \frac{1}{\sinh \pi N} \leq 1 + \frac{1}{\pi N} < 2.
\end{align*}
So in fact we can take $B = 2$.
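This bound is easy to check numerically. The following is a minimal sketch (Python with numpy; the sampling resolution and the values of $N$ are arbitrary choices, not part of the proof) that samples $|\cot \pi z|$ along the four sides of $C_N$:

```python
import numpy as np

def max_cot_on_CN(N, samples=2001):
    """Sample |cot(pi z)| on the boundary of the rectangle C_N,
    whose sides are x = +/-(N + 1/2) and y = +/-N."""
    t = np.linspace(-1.0, 1.0, samples)
    sides = np.concatenate([
        (N + 0.5) + 1j * N * t,     # right side:  x =  N + 1/2
        -(N + 0.5) + 1j * N * t,    # left side:   x = -(N + 1/2)
        (N + 0.5) * t + 1j * N,     # top side:    y =  N
        (N + 0.5) * t - 1j * N,     # bottom side: y = -N
    ])
    cot = np.cos(np.pi * sides) / np.sin(np.pi * sides)
    return np.abs(cot).max()

for N in (1, 2, 5, 10):
    print(N, max_cot_on_CN(N))  # every printed maximum is below 2
```

In fact the sampled maxima come out close to $1$, consistent with the $1 + 1/\sinh \pi N$ estimate above, so $B = 2$ is a comfortable bound.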
Suppose now that $z \notin \mathbb{Z}$ and choose $N > |z|$. We will now evaluate
\[
I_{N,z} = \frac{1}{2\pi i} \int_{C_N} \frac{\pi \cot \pi w}{w^2 - z^2} \, dw.
\]
Note that from the above bounds, and using the fact that if $w \in C_N$, then $|w^2| \geq N^2$ and hence $|w^2 - z^2| \geq N^2 - |z|^2$,
\[
|I_{N,z}| \leq \frac{1}{2\pi} \int_{C_N} \frac{|\pi \cot \pi w|}{|w^2 - z^2|} \, |dw| \leq \frac{1}{N^2 - |z|^2} \int_{C_N} |dw| = \frac{1}{N^2 - |z|^2} \cdot 2(4N + 1),
\]
which tends to $0$ as $N \to \infty$.

On the other hand, for any $N$, the integral is just the sum of the residues at the singularities inside $C_N$. The singularities occur where $\sin \pi w = 0$ and where $w^2 - z^2 = 0$, that is, at the integers $w = -N, -(N-1), \dots, N-1, N$ and at $w = \pm z$.

Let
\[
f(w) = \frac{\pi \cot \pi w}{w^2 - z^2}.
\]
Then each integer $n$ is a simple zero of $\sin \pi w$ and thus a simple pole of $f$, and the Laurent expansion there is of the form
\[
f(w) = \frac{c_{-1}}{w - n} + c_0 + c_1 (w - n) + c_2 (w - n)^2 + \cdots
\]
Calculating the residue is easy at such a point:
\[
\operatorname{Res}(f, n) = \lim_{w \to n} (w - n) f(w) = \lim_{w \to n} \frac{\pi (w - n) \cos \pi w}{(w^2 - z^2) \sin \pi w} = \frac{1}{n^2 - z^2}.
\]
A similar argument shows that the residues at $z$ and $-z$ are
\[
\frac{\pi \cot \pi z}{2z} \quad \text{and} \quad \frac{\pi \cot(-\pi z)}{-2z}.
\]
Thus the Residue Theorem gives
\[
I_{N,z} = \operatorname{Res}(f, z) + \operatorname{Res}(f, -z) + \sum_{n=-N}^{N} \operatorname{Res}(f, n)
= \frac{\pi \cot \pi z}{2z} + \frac{\pi \cot(-\pi z)}{-2z} + \sum_{n=-N}^{N} \frac{1}{n^2 - z^2}.
\]
Thus, with $z$ fixed, and taking the limit as $N \to \infty$,
\[
\lim_{N \to \infty} I_{N,z} = 0 = \frac{\pi \cot \pi z}{2z} + \frac{\pi \cot(-\pi z)}{-2z} + \sum_{n=-\infty}^{\infty} \frac{1}{n^2 - z^2}
\]
or, using symmetry and pulling out the $n = 0$ term,
\[
0 = \frac{\pi \cot \pi z}{z} - \frac{1}{z^2} + \sum_{n=1}^{\infty} \frac{2}{n^2 - z^2}.
\]
Therefore, multiplying through by $z$,
\[
\pi \cot \pi z = \frac{1}{z} + \sum_{n=1}^{\infty} \frac{2z}{z^2 - n^2}
\]
as claimed.

Clearly each term on the right-hand side has an antiderivative related to $\log(z^2 - n^2)$, and the left-hand side has an antiderivative related to $\log(\sin \pi z) - \log(\pi z)$. If no one is looking you might write
\[
\log(\sin \pi z) - \log(\pi z) = \log \frac{\sin \pi z}{\pi z} = \sum_{n=1}^{\infty} \log(z^2 - n^2) = \log \prod_{n=1}^{\infty} (z^2 - n^2)
\]
or
\[
\frac{\sin \pi z}{\pi z} = \prod_{n=1}^{\infty} (z^2 - n^2)
\]
before you realize that you have no idea what any of this means! Our task, then, is to turn this into something that does make sense, and to make it rigorous.

Let us begin by considering $\ell_{\sin}(z) = \operatorname{Log} \frac{\sin \pi z}{\pi z}$ on the punctured disk $0 < |z| < 1$.

Lemma 16. $\ell_{\sin}(z)$ is analytic on the punctured disk $0 < |z| < 1$, where, as usual, $\operatorname{Log}$ is the principal branch of $\log$.

Proof. $\operatorname{Log} f(z)$ is analytic in any region where the analytic function $f(z)$ neither vanishes nor takes a negative real value, since this is the cut for the function $\operatorname{Log}$. First note that $\sin \pi z / \pi z$ can only vanish where $\sin \pi z = 0$, and none of those zeros lies inside the punctured disk. Next suppose that the function takes a negative real value $-k$, that is, $\sin \pi z = -k \pi z$ where $k > 0$. Equating real parts gives $\sin \pi x \cosh \pi y = -k \pi x$, or simply $\sin \pi x = -k' \pi x$ where $k' > 0$.

[Figure: the graphs of $f(x) = \sin \pi x$ and $g(x) = -k' \pi x$, $k' > 0$, which can intersect only at $x = 0$ or where $|x| > 1$.]

From the graphs this can only occur if $x = 0$ or $|x| > 1$. So we can assume that $x = 0$ and equate the imaginary parts to obtain $\sinh \pi y = -k \pi y$. Since $k > 0$ the only solution is $y = 0$, and therefore the function $\ell_{\sin}$ is analytic on the punctured disk.

In fact, the function $\sin \pi z / \pi z$ clearly has a removable singularity at $z = 0$, and so we can make $\ell_{\sin}$ analytic at $z = 0$ provided we give $\sin \pi z / \pi z$ the value $1$ there. Consequently $\ell_{\sin}$ is analytic on the disk $|z| < 1$. Moreover, for $0 < |z| < 1$ we have
\[
\frac{d}{dz} \ell_{\sin}(z) = \frac{d}{dz} \operatorname{Log} \frac{\sin \pi z}{\pi z} = \frac{d}{dz} \left( \operatorname{Log} \sin \pi z - \operatorname{Log} z - \operatorname{Log} \pi \right) = \pi \cot \pi z - \frac{1}{z}.
\]
From this we deduce that the derivative of $\ell_{\sin}$ at $z = 0$ must equal $\lim_{z \to 0} \left( \pi \cot \pi z - \frac{1}{z} \right) = 0$.

Note also that
\[
\frac{d}{dz} \operatorname{Log} \left( 1 - \frac{z^2}{n^2} \right) = \frac{-2z/n^2}{1 - z^2/n^2} = \frac{2z}{z^2 - n^2}.
\]
Thus, for $|z| < 1$,
\[
\frac{d}{dz} \ell_{\sin}(z) = \sum_{n=1}^{\infty} \frac{2z}{z^2 - n^2}
= \sum_{n=1}^{\infty} \frac{d}{dz} \operatorname{Log} \left( 1 - \frac{z^2}{n^2} \right)
= \frac{d}{dz} \sum_{n=1}^{\infty} \operatorname{Log} \left( 1 - \frac{z^2}{n^2} \right).
\]
Here we could interchange $d/dz$ and $\sum$ since the series of Logs converges uniformly (on any compact set not containing an integer). One easy way to prove this is to observe that $|\operatorname{Log}(1 - w)| \leq 2|w|$ when $|w| \leq 1/2$. Therefore
\[
\left| \operatorname{Log} \left( 1 - \frac{z^2}{n^2} \right) \right| \leq \frac{2|z|^2}{n^2} \leq \frac{2}{n^2} \qquad \text{for } |z| < 1 \text{ and } n \geq 2.
\]
But $\sum_{n=2}^{\infty} \frac{2}{n^2} < \infty$, and so we have the result by the Weierstrass M-test (we can clearly forget a finite number of terms in the series if we want to).

Therefore, for $|z| < 1$,
\[
\operatorname{Log} \frac{\sin \pi z}{\pi z} = \sum_{n=1}^{\infty} \operatorname{Log} \left( 1 - \frac{z^2}{n^2} \right) + C
\]
for some constant $C$. Substituting $z = 0$ shows that $C = 0$. That is,
\[
\operatorname{Log} \frac{\sin \pi z}{\pi z} = \sum_{n=1}^{\infty} \operatorname{Log} \left( 1 - \frac{z^2}{n^2} \right) = \lim_{N \to \infty} \sum_{n=1}^{N} \operatorname{Log} \left( 1 - \frac{z^2}{n^2} \right).
\]
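This identity is also easy to test numerically. A minimal sketch (Python with numpy; the sample point and truncation levels are arbitrary choices) compares $\operatorname{Log}(\sin \pi z / \pi z)$ with the partial sums of the series; the error should shrink roughly like $1/N$, consistent with the $2|z|^2/n^2$ tail bound above.

```python
import numpy as np

def log_sinc(z):
    """Principal-branch Log(sin(pi z) / (pi z)); np.log is the
    principal branch on complex arguments."""
    return np.log(np.sin(np.pi * z) / (np.pi * z))

def log_series(z, N):
    """Partial sum sum_{n=1}^N Log(1 - z^2/n^2)."""
    n = np.arange(1, N + 1)
    return np.sum(np.log(1 - z**2 / n**2))

z = 0.3 + 0.4j                      # a sample point with |z| = 0.5 < 1
for N in (10, 100, 1000, 10000):
    print(N, abs(log_sinc(z) - log_series(z, N)))
# the printed errors decrease roughly like 1/N
```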
At this point we would like to take the sum inside the logarithm, but we need to remember that $\operatorname{Log} a + \operatorname{Log} b$ is not necessarily $\operatorname{Log} ab$. Rather,
\[
\operatorname{Log} a + \operatorname{Log} b = \ln|a| + i \operatorname{Arg} a + \ln|b| + i \operatorname{Arg} b = \ln|ab| + i (\operatorname{Arg} a + \operatorname{Arg} b).
\]
If $\operatorname{Arg} a + \operatorname{Arg} b \in (-\pi, \pi)$ then everything does work OK.

Exercise: (a) Show that
\[
\left| \operatorname{Arg} \left( 1 - \frac{z^2}{n^2} \right) \right| \leq \sin^{-1} \frac{|z|^2}{n^2}.
\]
(b) Use the fact that $t \leq 1.1 \sin t$ for $t \in [0, 0.25]$ to show that for $|z| < 1$ and $n \geq 2$,
\[
\sin^{-1} \frac{|z|^2}{n^2} \leq \frac{1.1 \, |z|^2}{n^2}.
\]
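The product formula itself can be tested numerically along the same lines. A minimal sketch (Python with numpy; the truncation levels and sample points are arbitrary choices):

```python
import numpy as np

def sinc_product(z, N):
    """Partial product prod_{n=1}^N (1 - z^2/n^2)."""
    n = np.arange(1, N + 1)
    return np.prod(1 - z**2 / n**2)

for z in (0.5, 0.3 + 0.4j):          # sample points, chosen arbitrarily
    exact = np.sin(np.pi * z) / (np.pi * z)
    for N in (100, 10000):
        print(z, N, abs(exact - sinc_product(z, N)))
```

Note that taking $z = 1/2$ makes each factor $1 - 1/(4n^2)$, so the partial products converge to $\sin(\pi/2)/(\pi/2) = 2/\pi$, recovering the product for $2/\pi$ quoted in section 5.1.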