
Complex Numbers and Functions

Richard Crew January 20, 2018

This is a brief review of the basic facts of complex numbers, intended for students in my section of MAP 4305/5304. I will discuss basic facts of complex arithmetic, limits and continuity of complex functions, power series, and functions like the complex exponential, sine and cosine which can be defined by convergent power series. This is a preliminary version and will be added to later.

1 Complex Numbers

1.1 Arithmetic. A complex number is an expression a + bi where a and b are real numbers and i^2 = −1. Here a is the real part of the complex number and bi is the imaginary part. If z is a complex number we write Re(z) and Im(z) for the real and imaginary parts respectively. Two complex numbers are equal if and only if their real and imaginary parts are equal. In particular a + bi = 0 only when a = b = 0. The set of complex numbers is denoted by C. Complex numbers are added, subtracted and multiplied according to the usual rules of algebra:

(a + bi) + (c + di) = (a + c) + (b + d)i    (1.1)
(a + bi) − (c + di) = (a − c) + (b − d)i    (1.2)
(a + bi)(c + di) = (ac − bd) + (ad + bc)i    (1.3)

(note how i^2 = −1 has been used in the last formula). Division is performed by rationalizing the denominator:

(a + bi)/(c + di) = ((a + bi)(c − di))/((c + di)(c − di)) = ((ac + bd) + (bc − ad)i)/(c^2 + d^2)    (1.4)

Note that the denominator only vanishes if c + di = 0, so that a complex number can be divided by any nonzero complex number. If z = a + bi, the complex conjugate or simply conjugate of z is

z̄ = a − bi.    (1.5)

A quick calculation shows that

the conjugate of z + w is z̄ + w̄, and the conjugate of zw is z̄ w̄    (1.6)

for all z, w ∈ C. From 1.3 we see that if z = a + bi then

z z̄ = a^2 + b^2.    (1.7)

Note that this kind of expression appears in the denominator of 1.4, and in fact we could have written 1.4 as

z/w = (z w̄)/(w w̄).    (1.8)

From 1.7 we see that z z̄ ≥ 0 and that z z̄ = 0 if and only if z = 0. The expression z z̄ occurs so frequently that we write

|z| = √(z z̄)    (1.9)

and then 1.8 becomes

z/w = (z w̄)/|w|^2.    (1.10)

One can show that

|z + w| ≤ |z| + |w|,    (1.11)
|zw| = |z| |w|,    (1.12)
|z| = 0 if and only if z = 0.    (1.13)

In fact 1.12 can be checked by direct calculation (hint: square it first). We have already seen that 1.13 is true, and we will prove 1.11 later. The properties 1.11 to 1.13 show that | | acts like the absolute value of a real number, and accordingly we call it the absolute value or magnitude of the complex number.
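
These rules are easy to experiment with numerically. The short Python sketch below (Python and its built-in complex type are assumed here; the sample values are arbitrary) checks the conjugation rules 1.6 and the product rule 1.12 for a pair of specific numbers.

    # Two sample complex numbers; 3 + 4j is Python's notation for 3 + 4i.
    z = 3 + 4j
    w = 1 - 2j

    print(z + w, z - w, z * w, z / w)                            # the four arithmetic operations
    print((z + w).conjugate(), z.conjugate() + w.conjugate())    # rule 1.6 for sums
    print((z * w).conjugate(), z.conjugate() * w.conjugate())    # rule 1.6 for products
    print(abs(z * w), abs(z) * abs(w))                           # rule 1.12, up to rounding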

1.2 Geometry. We can identify a complex number z = a + bi with a vector ⟨a, b⟩ in R^2. The formula 1.1 then says that addition of complex numbers is the same as addition of vectors. Furthermore if we put b = 0 in 1.3 we see that multiplying a complex number by a real one is the same as scalar multiplication. To interpret 1.3 geometrically we introduce the polar form of a complex number. The vector z, viewed as a line segment from the origin to the point z = a + bi, is the hypotenuse of a right triangle. The length of the hypotenuse is √(a^2 + b^2) = |z|, and if θ is the angle the vector z makes with the x-axis, we have a = |z| cos θ, b = |z| sin θ and therefore

z = |z|(cos θ + i sin θ).    (1.14)

This is called the polar form of z. If

w = |w|(cos ψ + i sin ψ)

is another complex number in polar form, then

zw = |z||w|((cos θ cos ψ − sin θ sin ψ) + i(cos θ sin ψ + sin θ cos ψ))

or, using the addition theorems for the cosine and sine,

zw = |z||w|(cos(θ + ψ) + i sin(θ + ψ)).    (1.15)

In other words, to multiply two complex numbers you multiply the magnitudes and add the arguments. For example the nth power of a complex number z = |z|(cos θ + i sin θ) is

z^n = |z|^n(cos(nθ) + i sin(nθ)).

From this we see that any complex number has an nth root: if z = |z|(cos θ + i sin θ) then

w = |z|^(1/n)(cos(θ/n) + i sin(θ/n))

satisfies w^n = z. Again, if z = |z|(cos θ + i sin θ), the angle θ is called the argument of z, written θ = arg z. A great deal of confusion can be avoided by keeping in mind the difference between an angle and a number: if θ is an angle then θ + 2π is the same angle, even though it is different as a number. This causes no problems when adding or subtracting angles, or when multiplying an angle by an integer (it makes no sense to multiply two angles). But there is no unambiguous way to divide an angle by an integer. In fact if you multiply any of the angles

θ, θ + 2π/n, θ + 2·2π/n, ..., θ + (n − 1)·2π/n

by n you get nθ, and these are all the angles whose nth multiple is nθ. From this we see that any nonzero complex number z = |z|(cos θ + i sin θ) has n distinct nth roots, namely

|z|^(1/n) (cos((θ + 2kπ)/n) + i sin((θ + 2kπ)/n))

for k = 0, 1, . . . , n − 1. In other words there is no single-valued nth root function z^(1/n) of z.
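
To see the polar form in action, here is a minimal Python sketch (the standard cmath module is assumed; the value of z and the exponent n are arbitrary choices) that computes the n distinct nth roots described above and verifies that each one raised to the nth power recovers z.

    import cmath

    def nth_roots(z, n):
        """All n nth roots of a nonzero complex number z, via the polar form."""
        r = abs(z) ** (1.0 / n)               # |z|^(1/n)
        theta = cmath.phase(z)                # one choice of the argument of z
        return [r * cmath.exp(1j * (theta + 2 * cmath.pi * k) / n) for k in range(n)]

    z = 1 + 1j
    for w in nth_roots(z, 3):
        print(w, w ** 3)                      # each w**3 agrees with z up to rounding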

2 Functions and Limits

2.1 Functions. A complex function of a complex variable is a rule which to any z ∈ D in some subset D ⊆ C assigns a value f(z) ∈ C. The set D is the domain of the function and the set of values f(z) for all z ∈ D is the range. Functions are added, subtracted, multiplied and divided just like real functions. If z = x + iy then f(z) = u(x, y) + iv(x, y) for some real-valued functions u and v of x and y. From this it looks like complex calculus is going to be four times as complicated as regular calculus, but this will turn out not to be the case.
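
For instance, for f(z) = z^2 one has u(x, y) = x^2 − y^2 and v(x, y) = 2xy. A minimal Python check of this decomposition at one (arbitrary) point:

    # f(z) = z**2 written as u(x, y) + i v(x, y) with u = x^2 - y^2 and v = 2xy.
    def f(z):
        return z * z

    x, y = 2.0, -1.5
    print(f(complex(x, y)), complex(x * x - y * y, 2 * x * y))   # the two agree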

2.2 Limits of functions. Since the complex absolute value | | has the same properties as the usual absolute value, limits can be defined for complex functions in the same way as for real functions: we say

lim_{z→z0} f(z) = L    (2.1)

if for every ε > 0 there is a δ > 0 such that

if 0 < |z − z0| < δ then |f(z) − L| < ε.    (2.2)

This looks like Calculus 1 but is really more complicated since both z and f(z) have real and imaginary parts. If

z = x + iy,   z0 = x0 + iy0,   f(z) = u(x, y) + iv(x, y),   and   L = A + iB, then

|z − z0| ≤ |x − x0| + |y − y0| and |f(z) − L| ≤ |u(x, y) − A| + |v(x, y) − B|.

Using this one can show (give it a try!) that 2.1 is equivalent to two limits of two functions of two variables:

lim_{(x,y)→(x0,y0)} u(x, y) = A   and   lim_{(x,y)→(x0,y0)} v(x, y) = B.    (2.3)

Now in some sense it is “harder” for a function of two variables to have a limit since (x, y) can approach (x0, y0) in infinitely many different ways. Amazingly this will not turn out to be a problem for us.
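
As a purely numerical illustration (the function f(z) = z^2, the point z0 = i, and the directions chosen are arbitrary), one can watch f(z) approach its limit L = −1 at z0 along several different rays:

    import cmath

    # Approach z0 = i along several rays and watch f(z) = z**2 tend to L = -1.
    z0, L = 1j, -1
    for angle in (0.0, cmath.pi / 3, cmath.pi / 2, 2.5):    # directions of approach
        for r in (1e-1, 1e-3, 1e-6):
            z = z0 + r * cmath.exp(1j * angle)
            print(angle, r, abs(z * z - L))                 # shrinks as r does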

The rules of limits from Calculus 1 hold in this new situation: if lim_{z→z0} f(z) = L and lim_{z→z0} g(z) = M then

lim_{z→z0} (f(z) ± g(z)) = L ± M    (2.4)
lim_{z→z0} (f(z)g(z)) = LM    (2.5)
lim_{z→z0} f(z)/g(z) = L/M   if M ≠ 0    (2.6)

2.3 Continuity. A function f(z) is continuous at z0 if

1. z0 is in the domain of f(z),

2. lim_{z→z0} f(z) = f(z0) (in particular, the limit exists).

This is the same definition as in Calculus 1. If f(z) and g(z) are continuous at z0, so are f(z) ± g(z) and f(z)g(z). So is f(z)/g(z) if g(z0) ≠ 0.

3 Derivatives

Having defined limits, we can define derivatives in the usual way: for any function f(z),

df/dz = f'(z) = lim_{h→0} (f(z + h) − f(z))/h    (3.1)

when the limit exists. If it does, we say that f(z) is differentiable at z. Like continuity this is a pointwise property, i.e. it depends both on the function and the point. Since limits of complex functions are a more complicated affair than for real functions, it is in some sense harder for a complex function to have a derivative. It might therefore come as a surprise to learn that most of the functions we will deal with in fact have derivatives at every point of their domain. For example the function f(z) = z^3 − z has a derivative everywhere since

f'(z) = lim_{h→0} ((z + h)^3 − (z + h) − (z^3 − z))/h
      = lim_{h→0} (3z^2 h + 3zh^2 + h^3 − h)/h
      = 3z^2 − 1,

which is exactly the expected result. In fact any polynomial is differentiable everywhere, and the usual formula holds:

d/dz Σ_{i=0}^{n} a_i z^i = Σ_{i=0}^{n} i a_i z^{i−1}.    (3.2)

You will be delighted to learn that the usual rules for computing derivatives hold when the relevant derivatives exist:

(f(z) ± g(z))' = f'(z) ± g'(z)    (3.3)
(f(z)g(z))' = f'(z)g(z) + f(z)g'(z)    (3.4)
(f(z)/g(z))' = (g(z)f'(z) − f(z)g'(z)) / g(z)^2.    (3.5)

Finally, the derivative of a constant is zero. The proofs of these rules (which are the same as in Calculus 1) show that if f(z) and g(z) are differentiable at z0 then so are f(z) ± g(z) and f(z)g(z). So is f(z)/g(z) if g(z0) ≠ 0. One might venture to ask about antiderivatives: if f(z) is given, is there a function F(z) such that F'(z) = f(z)? One imagines that they could somehow be computed by means of an integral in the complex plane. In general the answer is problematic, but for the sort of functions we are concerned with this will mostly not be a problem. But we will see some bad examples later.
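
As a numerical sanity check (the sample point and step sizes are arbitrary, and h is allowed to approach 0 along both the real and the imaginary axis), the difference quotient 3.1 for f(z) = z^3 − z does settle down to 3z^2 − 1:

    # Difference quotients for f(z) = z**3 - z at a sample point.
    def f(z):
        return z ** 3 - z

    z = 1 + 2j
    exact = 3 * z * z - 1
    for h in (1e-2, 1e-5, 1e-2j, 1e-5j):          # real and purely imaginary steps
        approx = (f(z + h) - f(z)) / h
        print(h, abs(approx - exact))             # the error shrinks with |h|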

4 Sequences and Series

4.1 Convergence of sequences and series. We define convergence for sequences of complex numbers in the same way as for sequences of real numbers. If (a_n)_{n≥0} is a sequence of complex numbers, we say that lim_{n→∞} a_n = L if for every ε > 0 there is an integer N such that n ≥ N implies |a_n − L| < ε.

If Σ_{k≥0} a_k is a series whose terms are complex numbers, the nth partial sum of the series is

s_n = Σ_{0≤k<n} a_k

and the series converges to a sum S if lim_{n→∞} s_n = S. A basic example is the geometric series

Σ_{n≥0} a^n    (4.1)

where a is a fixed complex number. Its nth partial sum is

s_n = 1 + a + ··· + a^{n−1}, and if a ≠ 1, this is

s_n = (1 − a^n)/(1 − a)

(to see that this is so, multiply the previous expression for s_n by 1 − a and observe that all but two terms cancel). Now if |a| < 1 then a^n → 0 as n → ∞ and therefore s_n → (1 − a)^{−1}. On the other hand if |a| ≥ 1, the sequence a^n has no limit and then the same is true for s_n. Therefore

Σ_{n≥0} a^n = 1/(1 − a)   if |a| < 1

and the series diverges otherwise.

A series Σ_{n≥0} a_n converges absolutely if the series Σ_{n≥0} |a_n| converges. If a series converges absolutely then it converges (this is not obvious!). A convergent series that does not converge absolutely is conditionally convergent. If a series converges absolutely then its terms can be added up in any desired order and the resulting series will converge to the same value as the original one. This is not true for series that converge but not absolutely. In fact a celebrated theorem says that a conditionally convergent series with real terms can be reordered so as to converge to any desired real number, or to diverge.
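
A minimal numerical sketch of the geometric series (the ratio a is an arbitrary complex number with |a| < 1): the partial sums rapidly approach 1/(1 − a).

    # Partial sums of the geometric series for a complex ratio a with |a| < 1.
    a = 0.3 + 0.4j                                # |a| = 0.5
    target = 1 / (1 - a)
    s, term = 0, 1
    for n in range(60):
        s += term                                 # running partial sum 1 + a + ... + a**n
        term *= a
    print(s, target, abs(s - target))             # the difference is negligible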

4.2 The ratio and root tests. These work the same way as in Calculus 1. If Σ_{n≥0} a_n is a series, the root test considers the limit

lim_{n→∞} |a_n|^{1/n} = L.

If the limit exists and L < 1, the series converges absolutely. If L > 1 or the limit does not exist, the series diverges. If L = 1 the test fails. If a_n ≠ 0 for all n we may consider the limit

lim_{n→∞} |a_{n+1}/a_n| = L.

Again the series converges absolutely if L < 1, diverges if L > 1 or the limit does not exist, and the test fails if L = 1; this is the ratio test. If we apply these tests to the geometric series 4.1 we find

|a_n|^{1/n} = |a^{n+1}/a^n| = |a|

and again we find that the series converges if |a| < 1 and diverges if |a| > 1.

4.3 Power series. A power series centered at z0 is a series of the form

Σ_{n≥0} a_n(z − z0)^n    (4.2)

in which z0 and the a_n may be any complex numbers. We can use the root test to see when it converges; the equality

lim_{n→∞} |a_n(z − z0)^n|^{1/n} = lim_{n→∞} |a_n|^{1/n} · |z − z0|

shows that the answer depends on

L = lim_{n→∞} |a_n|^{1/n}.

If L = 0 the series converges for any z. If L ≠ 0 the series converges if L|z − z0| < 1. If we set R = L^{−1} the series converges absolutely for |z − z0| < R and diverges if |z − z0| > R. If L = 0 we arbitrarily set R = ∞; in any case this R is called the radius of convergence of the series. For any z0 the set |z − z0| < R is the interior of a disk of radius R (or all of C, if R = ∞). We call it the region of absolute convergence of the series. If a_n ≠ 0 for all n one can use the ratio test in the same way. Then

lim_{n→∞} |a_{n+1}/a_n| = L

and R = L^{−1} (or R = ∞ when L = 0) is the radius of convergence. It is clear that a power series like 4.2 defines a function of z

f(z) = Σ_{n≥0} a_n(z − z0)^n    (4.3)

in its region of absolute convergence. This function in fact has a derivative at every point inside this region, namely

f'(z) = Σ_{n≥0} n a_n(z − z0)^{n−1}.    (4.4)

It also has an antiderivative F(z), given by

F(z) = Σ_{n≥0} a_n (z − z0)^{n+1}/(n + 1).    (4.5)

Furthermore these series for f'(z) and F(z) have the same radius of convergence as that of 4.3, and thus the same region of convergence. We have seen for example that

1/(1 − z) = Σ_{n≥0} z^n

for z such that |z| < 1 (i.e. the radius of convergence is 1). Differentiating, we find

1/(1 − z)^2 = Σ_{n≥0} n z^{n−1} = Σ_{n≥1} n z^{n−1} = Σ_{n≥0} (n + 1) z^n

(shift the summation index to get the last equality). Differentiating again and dividing by 2 yields

1/(1 − z)^3 = Σ_{n≥0} ((n + 1)(n + 2)/2) z^n

and so forth.
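
The term-by-term derivative 4.4 is easy to test numerically; the sketch below (the test point is an arbitrary complex number inside the disk |z| < 1, and the truncation length is arbitrary) compares a truncation of Σ_{n≥0} (n + 1) z^n with 1/(1 − z)^2.

    # Compare a truncation of sum_{n>=0} (n+1) z**n with 1/(1 - z)**2 for |z| < 1.
    z = 0.2 - 0.3j
    series = sum((n + 1) * z ** n for n in range(200))
    closed_form = 1 / (1 - z) ** 2
    print(series, closed_form, abs(series - closed_form))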

4.4 Analytic functions. A function f(z) is analytic at z0 ∈ C if it is equal to a power series in some neighborhood of z0, i.e. if

f(z) = Σ_n a_n(z − z0)^n

for some series whose region of absolute convergence lies within the domain of f. The analytic functions are by far the most important class of functions we will look at. They have many wonderful properties: for example if f(z) is analytic at z0 then all derivatives f^(n)(z) exist at z0, and furthermore all of the f^(n)(z) are analytic at z0. By the same token f(z) has an antiderivative defined in a neighborhood of z0, and this is likewise analytic. Taylor's theorem says that if f(z) is analytic at z0 it is equal to exactly one power series centered at z0, namely

f(z) = Σ_{n≥0} (f^(n)(z0)/n!) (z − z0)^n.    (4.6)

In particular if f^(n)(z0) = 0 for all n then f(z) ≡ 0 for all z in a neighborhood of z0. From this we get the following important fact about convergent power series: if

Σ_n a_n(z − z0)^n = 0

for all z in some neighborhood of z0, then a_n = 0 for all n. This is of fundamental importance in solving differential equations by means of power series.
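
As an illustration of Taylor's theorem 4.6, here is a small sketch under stated assumptions: the function 1/(1 − z), the center z0 = 1/2, and the test point are all arbitrary choices, and the coefficients come from f^(n)(z) = n!/(1 − z)^{n+1}, so that f^(n)(1/2)/n! = 2^{n+1}.

    # Taylor series of f(z) = 1/(1 - z) about z0 = 1/2: coefficients are 2**(n + 1).
    z0 = 0.5
    z = 0.7 + 0.2j                                # inside the disk |z - z0| < 1/2
    series = sum(2 ** (n + 1) * (z - z0) ** n for n in range(80))
    print(series, 1 / (1 - z), abs(series - 1 / (1 - z)))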

4.5 Exponential, sine and cosine. Let us now test the power series

Σ_{n≥0} z^n/n!

using the ratio test. Since a_n = 1/n! we have

L = lim_{n→∞} (1/(n + 1)!)/(1/n!) = lim_{n→∞} n!/(n + 1)! = lim_{n→∞} 1/(n + 1) = 0.

We conclude that this sum defines an analytic function of z for all z ∈ C. It is called the exponential function:

e^z = Σ_{n≥0} z^n/n!    (4.7)

since we in fact know that e^x can be calculated by means of this series when x is real. When z is not real we take 4.7 as the definition of e^z. An orgy of computations with the binomial formula shows that

e^{z+w} = e^z e^w    (4.8)

for all z, w ∈ C. We can check in a similar way that the series in

sin z = Σ_{n≥0} (−1)^n z^{2n+1}/(2n + 1)!    (4.9)
cos z = Σ_{n≥0} (−1)^n z^{2n}/(2n)!    (4.10)

converge for all z, and we take them to define sin z and cos z for complex z, since we know that these are equalities for real z. Since i^2 = −1 and i^4 = 1, we find that

i^{2k} = 1 if k is even,   i^{2k} = −1 if k is odd,
i^{2k+1} = i if k is even,   i^{2k+1} = −i if k is odd,

or equivalently i^{2k} = (−1)^k, i^{2k+1} = (−1)^k i.

We now replace z in 4.7 by iz; the result is

e^{iz} = Σ_{n≥0} (iz)^n/n!
       = Σ_{k≥0} (iz)^{2k}/(2k)! + Σ_{k≥0} (iz)^{2k+1}/(2k + 1)!
       = Σ_{k≥0} (−1)^k z^{2k}/(2k)! + i Σ_{k≥0} (−1)^k z^{2k+1}/(2k + 1)!

and therefore

e^{iz} = cos z + i sin z    (4.11)

for all z ∈ C. This celebrated identity is due to Euler and has a number of remarkable consequences. The first is that

e^{πi} = −1,

an equality which packages up all the constants we learn about in Calculus 1. Squaring this yields e^{2πi} = 1 and therefore

e^{z+2πi} = e^z,

or in other words the exponential function is periodic with period 2πi (whereas the trigonometric functions are periodic with period 2π). Replacing z by −z in 4.11 yields

e^{−iz} = cos z − i sin z

and we can then solve for the sine and cosine:

cos z = (e^{iz} + e^{−iz})/2    (4.12)
sin z = (e^{iz} − e^{−iz})/(2i).    (4.13)

If we replace z and w in 4.8 by iz and iw and multiply, we find using 4.11 that

e^{i(z+w)} = cos(z + w) + i sin(z + w)
e^{iz} e^{iw} = (cos z + i sin z)(cos w + i sin w)
             = (cos z cos w − sin z sin w) + i(cos z sin w + sin z cos w).

This and the formulas 4.12, 4.13 then yield

cos(z + w) = cos z cos w − sin z sin w    (4.14)
sin(z + w) = cos z sin w + sin z cos w    (4.15)

or in other words the usual addition theorems for the sine and cosine hold for all complex values. Finally, we see that the polar form of a complex number can be written in exponential form:

z = |z| e^{iθ},   θ = arg z.    (4.16)
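
All of this is easy to check numerically; the sketch below (Python's cmath module is assumed, and the sample value of z is arbitrary) sums a truncation of the series 4.7 against the built-in exponential, then tests Euler's identity 4.11 and the value e^{πi} = −1.

    import cmath
    from math import factorial

    z = 0.7 - 1.2j

    # A truncation of the exponential series 4.7 versus the built-in exponential.
    print(sum(z ** n / factorial(n) for n in range(40)), cmath.exp(z))

    # Euler's identity 4.11 and the special value e^{i*pi} = -1.
    print(cmath.exp(1j * z), cmath.cos(z) + 1j * cmath.sin(z))
    print(cmath.exp(1j * cmath.pi))               # approximately -1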

4.6 The logarithm. Complex powers. The polar form 4.16 of a complex number suggests that it is a simple affair to define the logarithm of a nonzero complex number, namely by

ln z = ln |z| + iθ. (4.17)

The problem of course is that θ is not really a number but an angle, so by this definition if w = log z is a logarithm of z then so, apparently, is w + 2πi, or w plus any integer multiple of 2πi. The same difficulty arises in the case of the inverse trigonometric functions, and for the same reason: the trigonometric functions are periodic. For the inverse trigonometric functions this problem is solved by choosing, for every possible angle, a number representing the angle. For the sine, for example, one takes for sin^{−1} x the number in the interval [−π/2, π/2] representing the angle whose sine is x. For the cosine we use numbers in the interval [0, π] and for the tangent we use the interval (−π/2, π/2). For the logarithm the most common choice is to use one of the intervals [−π, π) or (−π, π]. Note that if we include π on one end of the interval we must exclude it on the other end, since both π and −π can be the argument of the same negative real number. Exactly which choice is made does not much matter, but it needs to be made. Note carefully however one consequence of this procedure: the logarithm is discontinuous along the entire negative real axis, since its imaginary part changes abruptly from π to −π as one crosses the negative real axis in the counterclockwise direction. If one really needs a logarithm function that is continuous in a neighborhood of the negative real axis one should make a choice like [0, 2π). In this case the discontinuity has been moved to the positive real axis. There is no way to avoid this problem completely. A choice of interval in which the imaginary part of the complex logarithm takes its values is called a branch of the logarithm, and the first one mentioned above is called the principal branch. Every branch will have a discontinuity somewhere. Another consequence of choosing a particular branch is that

log zw ≠ log z + log w    (4.18)

in general (think of what happens for the principal branch when z and w are in the second quadrant). In general though the two sides of 4.18 will differ by an integer multiple of 2πi. One runs into the same problem when defining powers with complex (or even nonintegral) exponents. We have seen for example that every nonzero complex number has n distinct nth roots, so which one do we pick for z^{1/n}? The answer is to pick a branch of the logarithm and then define

z^a = e^{a ln z}.    (4.19)

If we use the principal branch of ln z, this definition has the reasonable consequence that z^a is real and positive whenever both z and a are real and positive.

This might not happen with other branches. Because of 4.18 one finds that

(zw)^a ≠ z^a w^a    (4.20)

in general. It is nonetheless true that

z^{a+b} = z^a z^b    (4.21)

for all complex a, b and nonzero complex z, regardless of the choice of a branch of ln z. Because of these problems it is best to avoid using expressions like z^a and use the natural exponential function in its place.
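
These phenomena are easy to see in Python, whose cmath.log uses the principal branch (imaginary part in (−π, π]). In the sketch below (the sample points are arbitrary numbers in the second quadrant) log(zw) and log z + log w differ by 2πi, and a complex power is computed as e^{a ln z}:

    import cmath

    # z and w in the second quadrant: arg z + arg w exceeds pi, so on the principal
    # branch log(z*w) and log(z) + log(w) differ by 2*pi*i, illustrating 4.18.
    z = -1 + 1j
    w = -2 + 1j
    print(cmath.log(z * w), cmath.log(z) + cmath.log(w))
    print(cmath.log(z * w) - (cmath.log(z) + cmath.log(w)))   # about -2*pi*i

    # A complex power z**a computed as e^{a ln z} with the principal branch, as in
    # 4.19; Python's ** for a complex base agrees with this choice here.
    a = 0.5
    print(cmath.exp(a * cmath.log(z)), z ** a)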

4.7 Hyperbolic functions. The hyperbolic functions sinh z, cosh z are defined by

cosh z = (e^z + e^{−z})/2,   sinh z = (e^z − e^{−z})/2,    (4.22)

and they satisfy many identities similar to those for the sine and cosine, for example:

cosh 0 = 1    (4.23)
sinh 0 = 0    (4.24)
cosh(z + 2πi) = cosh z    (4.25)
sinh(z + 2πi) = sinh z    (4.26)
d/dz cosh z = sinh z    (4.27)
d/dz sinh z = cosh z    (4.28)
cosh^2 z − sinh^2 z = 1    (4.29)
cosh(z + w) = cosh z cosh w + sinh z sinh w    (4.30)
sinh(z + w) = sinh z cosh w + cosh z sinh w.    (4.31)

In other words these are basically the twisted sisters of the sine and cosine. These formulas can be checked from the definition, but it is perhaps quicker to use the equalities

sinh z = −i sin(iz),   cosh z = cos(iz)    (4.32)

which follow from the definition 4.22 and the equalities 4.12, 4.13. Their power series expansions are

cosh z = Σ_{n≥0} z^{2n}/(2n)!    (4.33)
sinh z = Σ_{n≥0} z^{2n+1}/(2n + 1)!    (4.34)
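
A quick numerical check (the sample point is arbitrary) of the identity 4.29 and of the relations 4.32 connecting the hyperbolic and trigonometric functions:

    import cmath

    z = 1.1 - 0.4j
    print(cmath.cosh(z) ** 2 - cmath.sinh(z) ** 2)        # approximately 1 (identity 4.29)
    print(cmath.sinh(z), -1j * cmath.sin(1j * z))         # sinh z = -i sin(iz)
    print(cmath.cosh(z), cmath.cos(1j * z))               # cosh z = cos(iz)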
