
Definition: The residue Res(f, c) of a function f(z) at c is the coefficient of (z − c)^{-1} in the Laurent expansion of f at c.

Computation: To compute Res(f, c) at a simple pole c, consider g(z) := (z − c) f(z), which extends analytically to c, and expand
g(z) = g(c) + … + (z − c)^n g^(n)(c)/n! + … .
Then
f(z) = (z − c)^{-1} g(c) + … + (z − c)^{n−1} g^(n)(c)/n! + … .
Thus Res(f, c) = g(c) = (z − c) f(z)|_{z=c}.

If f is analytic in a neighborhood of c, then Res(f, c) = (z − c) f(z)|_{z=c} = 0. The converse is not generally true. At a simple pole c, the residue of f is given by:
Res(f, c) = lim_{z→c} (z − c) f(z).

More generally, if c is a pole of order n, then f(z) = h(z)/(z − c)^n with h analytic at c, and Res(f, c) is given by:

(z − c) f(z)|_{z=c} = (z − c) h(z)/(z − c)^n |_{z=c} = h(z)/(z − c)^{n−1} |_{z=c} = h^(n−1)(z)/(n−1)! |_{z=c} = [(z − c)^n f(z)]^(n−1)/(n−1)! |_{z=c},
where the third equality is obtained by differentiating the numerator and the denominator (n − 1) times, and the evaluations at z = c are to be read as limits z → c. In other words, if c is a pole of order n, then the residue of f at c can be determined by the formula:
Res(f, c) = 1/(n−1)! · lim_{z→c} d^{n−1}/dz^{n−1} [ (z − c)^n f(z) ].

This formula can be very useful in determining the residues for low‐order poles. For higher order poles, the calculations can become unmanageable, and a direct series expansion may be easier.
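For concreteness, here is a minimal symbolic check of the order-n formula, assuming the sympy library is available; the function exp(z)/(z − 1)^3 is a hypothetical example chosen for illustration, not one taken from the text.

```python
# Check Res(f, c) = 1/(n-1)! * lim_{z->c} d^{n-1}/dz^{n-1} [(z-c)^n f(z)]
# for the illustrative pole of order 3 of f(z) = exp(z)/(z-1)^3 at c = 1.
from sympy import symbols, exp, diff, limit, factorial, residue, simplify

z = symbols('z')
c, n = 1, 3
f = exp(z) / (z - c)**n

g = simplify((z - c)**n * f)                       # cancels to exp(z)
formula = limit(diff(g, z, n - 1), z, c) / factorial(n - 1)

print(formula)                                     # E/2
print(residue(f, z, c))                            # E/2, via sympy's Laurent expansion
print(simplify(formula - residue(f, z, c)) == 0)   # True
```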

Application: According to the residue theorem,
∮_γ f(z) dz = 2πi Σ_k I(γ, a_k) Res(f, a_k).

Here, γ is a closed curve, and I(γ, a_k) counts the number of times γ winds around a_k (the k-th point inside the closed curve γ where f is not analytic) in a counter-clockwise manner. Since residues can be computed quite easily (as discussed above), they can be used to easily evaluate a contour integral via the above residue theorem. As an example, the above theorem is used in deriving the Nyquist criterion for stability (#RHP poles of closed-loop = #RHP poles of open-loop + #encirclements of (−1, 0) by the plot of the open-loop frequency response).

The residue theorem and its applications

Oliver Knill Caltech, 1996

This text contains some notes to a three-hour lecture given at Caltech. The lectures start from scratch and contain an essentially self-contained proof of the Jordan normal form theorem, which I had learned from Eugene Trubowitz as an undergraduate at ETH Zürich in the third semester of the standard education at that school. The text also includes two proofs of the fundamental theorem of algebra using complex analysis, and examples showing how residue calculus can help to calculate some definite integrals. Except for the proof of the normal form theorem, the material is contained in standard text books on complex analysis. The notes assume familiarity with partial derivatives and line integrals. I use Trubowitz's approach of using Green's theorem to prove Cauchy's theorem. [When I was an undergraduate, such a direct multivariable link was not in my complex analysis text books (Ahlfors, for example, does not mention Green's theorem in his book).] For the Jordan form section, some linear algebra knowledge is required.

1 The residue theorem

Definition. Let D ⊂ C be open (every point in D has a small disc around it which still is in D). Denote by C^1(D) the differentiable functions D → C. This means that for f(z) = f(x + iy) = u(x + iy) + i v(x + iy) the partial derivatives
∂u/∂x, ∂u/∂y, ∂v/∂x, ∂v/∂y
are continuous, real-valued functions on D.

Definition. Let γ : (a, b) → D be a differentiable curve in D. Define the complex integral
∫_γ f(z) dz = ∫_a^b f(γ(t)) γ̇(t) dt.
If z = x + iy and f = u + iv, we have

∫_γ f(z) dz = ∫_a^b (u ẋ − v ẏ) + i (u ẏ + v ẋ) dt.
The integral for piecewise differentiable γ is obtained by adding the integrals of the pieces. We always assume from now on that all curves γ are piecewise differentiable.

Example. D = {|z| < r} with r > 1, γ : [0, k·2π] → D, γ(t) = (cos(t), sin(t)), f(z) = z^n with k ∈ N, n ∈ Z. Then
∫_γ z^n dz = ∫_0^{k·2π} e^{int} e^{it} i dt = ∫_0^{k·2π} e^{i(n+1)t} i dt.
If n ≠ −1, we get
∫_γ z^n dz = [ e^{i(n+1)t}/(n+1) ]_0^{k·2π} = 0.
If n = −1, we have ∫_γ z^n dz = ∫_0^{k·2π} i dt = k·2πi.
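The example can be checked numerically. The following sketch (assuming numpy is available) discretizes the parametrization γ(t) = e^{it} and confirms that the integral vanishes unless n = −1, in which case it equals k·2πi.

```python
# Integrate z^n over the unit circle traversed k times.
import numpy as np

def loop_integral(n, k, N=4096):
    dt = k * 2 * np.pi / N
    t = np.arange(N) * dt
    z = np.exp(1j * t)          # gamma(t) = e^{it}
    dz = 1j * z                 # gamma'(t)
    return np.sum(z**n * dz) * dt   # Riemann sum of f(gamma(t)) gamma'(t) dt

print(loop_integral(3, 1))      # ~ 0
print(loop_integral(-1, 1))     # ~ 2*pi*i
print(loop_integral(-1, 3))     # ~ 3*2*pi*i
```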

We recall Green's formula for a vector field (u, v) in R²:
∫_γ (u ẋ + v ẏ) dt = ∫_D (v_x − u_y) dx ∧ dy,
where D is the open set enclosed by the closed curve γ = δD and where dx ∧ dy = dx dy is the volume form in the plane. Write dz = dx + i dy, dz̄ = dx − i dy, so that dz̄ ∧ dz = 2i dx ∧ dy. Define for f ∈ C^1(D)
∂f/∂z̄ := (1/2)(∂f/∂x + i ∂f/∂y).

Theorem 1.1 (Complex Green Formula) f ∈ C^1(D), D ⊂ C, γ = δD. Then
∫_γ f(z) dz = ∫_D ∂f/∂z̄ dz̄ ∧ dz.

Proof. Green's theorem applied twice (to the real part with the vector field (u, −v) and to the imaginary part with the vector field (v, u)) shows that

∫_γ f(z) dz = ∫_a^b (u ẋ − v ẏ) + i·(u ẏ + v ẋ) dt
coincides with
∫_D ∂f/∂z̄ dz̄ ∧ dz = ∫_D [ (1/2)(u_x − v_y) + (i/2)(u_y + v_x) ] 2i dx ∧ dy

= ∫_D (−u_y − v_x) dx ∧ dy + i·∫_D (u_x − v_y) dx ∧ dy.

We check that
∂f/∂z̄ = 0 ⇔ ∂u/∂x = ∂v/∂y, ∂v/∂x = −∂u/∂y.
The equations on the right hand side are called the Cauchy-Riemann differential equations.
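As a quick numerical illustration of Theorem 1.1 (a sketch assuming numpy), take the non-holomorphic function f(z) = z̄, for which ∂f/∂z̄ = 1; the formula then predicts ∮_{|z|=1} z̄ dz = 2i · area(D) = 2πi.

```python
import numpy as np

# left-hand side: contour integral of conj(z) over the unit circle
N = 4096
t = np.arange(N) * 2 * np.pi / N
z = np.exp(1j * t)
lhs = np.sum(np.conj(z) * 1j * z) * (2 * np.pi / N)

# right-hand side: 2i times the area of the unit disc, estimated on a grid
x = np.linspace(-1, 1, 801)
X, Y = np.meshgrid(x, x)
area = np.sum(X**2 + Y**2 <= 1.0) * (x[1] - x[0])**2
rhs = 2j * area

print(lhs)    # ~ 6.283j
print(rhs)    # ~ 6.28j, both approximate 2*pi*i
```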

Definition. Denote by C^ω(D) the set of functions in C^1(D) for which ∂f/∂z̄ = 0 for all z ∈ D. Functions f ∈ C^ω(D) are called analytic or holomorphic in D.

Corollary 1.2 (Theorem of Cauchy) f ∈ C^ω(D), D ⊂ C, γ = δD. Then

∫_γ f(z) dz = 0.

Proof. ∫_γ f(z) dz = ∫_D ∂f/∂z̄ dz̄ ∧ dz = 0.

Corollary 1.3 (Cauchy's Integral formula) f ∈ C^ω(D), D simply connected, γ = δD. For any a ∈ D,
f(a) = (1/2πi) ∫_γ f(w)/(w − a) dw.

Proof. Define for small enough ε > 0 the set D_ε = D \ {|z − a| ≤ ε} and the curve γ_ε : t ↦ a + ε e^{it}. (Because D is open, the set D_ε is contained in D for small enough ε.) Because γ ∪ (−γ_ε) = δD_ε, we get by Cauchy's theorem 1.2
∫_γ f(w)/(w − a) dw − ∫_{γ_ε} f(w)/(w − a) dw = 0.
We compute
∫_{γ_ε} f(w)/(w − a) dw = ∫_0^{2π} f(a + ε e^{it})/(a + ε e^{it} − a) · d/dt (a + ε e^{it}) dt = i ∫_0^{2π} f(a + ε e^{it}) dt.
The right hand side converges for ε → 0 to 2πi f(a) because f ∈ C^1(D) implies
|f(a + ε e^{it}) − f(a)| ≤ C·ε.

Corollary 1.4 (Generalized Cauchy Integral formulas) Assume f ∈ C^ω(D), D ⊂ C simply connected, and δD = γ. For all n ∈ N one has f^(n)(z) ∈ C^ω(D) and for any z ∈ D,
f^(n)(z) = (n!/2πi) ∫_γ f(w)/(w − z)^{n+1} dw.

Proof. Just differentiate Cauchy’s integral formula n times.
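A numerical sanity check of Corollaries 1.3 and 1.4 (a sketch assuming numpy; the choice f(w) = e^w, γ the unit circle and a = 0.3 is purely illustrative):

```python
import numpy as np
from math import factorial

N = 4096
t = np.arange(N) * 2 * np.pi / N
w = np.exp(1j * t)                  # the unit circle gamma
dw = 1j * w * (2 * np.pi / N)       # w'(t) dt

def cauchy(f, a, n=0):
    """n-th derivative of f at a via  n!/(2*pi*i) * integral of f(w)/(w-a)^(n+1) dw."""
    return factorial(n) / (2j * np.pi) * np.sum(f(w) / (w - a)**(n + 1) * dw)

a = 0.3
print(cauchy(np.exp, a, 0), np.exp(a))   # both ~ 1.3499
print(cauchy(np.exp, a, 2), np.exp(a))   # the second derivative of exp is exp again
```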

It follows that f ∈ C^ω(D) is arbitrarily often differentiable.

Definition. Let f ∈ C^ω(D \ {a}) and a ∈ D, with D ⊂ C simply connected with boundary γ. Define the residue of f at a as
Res(f, a) := (1/2πi) ∫_γ f(z) dz.
By Cauchy's theorem, the value does not depend on D.

Example. f(z) = (z − a)^{-1} and D = {|z − a| < 1}. Our calculation in the example at the beginning of the section gives Res(f, a) = 1.

A generalization of Cauchy’s theorem is the following residue theorem:

Corollary 1.5 (The residue theorem) f ∈ C^ω(D \ {z_i}_{i=1}^n), D open containing the z_i, with boundary δD = γ. Then
(1/2πi) ∫_γ f(z) dz = Σ_{i=1}^n Res(f, z_i).

Proof. Take ε so small that the discs D_i = {|z − z_i| ≤ ε} are all disjoint and contained in D. Applying Cauchy's theorem to the domain D \ ⋃_{i=1}^n D_i leads to the above formula.
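The statement is easy to test numerically. The sketch below (assuming numpy) uses a hypothetical rational function with two simple poles inside the unit circle and compares the contour integral with 2πi times the sum of the residues.

```python
import numpy as np

p1, p2 = 0.5, -0.4j                       # illustrative pole locations inside |z| < 1
f = lambda z: (3 * z + 2) / ((z - p1) * (z - p2))

N = 8192
t = np.arange(N) * 2 * np.pi / N
z = np.exp(1j * t)
contour = np.sum(f(z) * 1j * z) * (2 * np.pi / N)

res1 = (3 * p1 + 2) / (p1 - p2)           # simple-pole formula: (z - p1) f(z) at z = p1
res2 = (3 * p2 + 2) / (p2 - p1)
print(contour)
print(2j * np.pi * (res1 + res2))         # agrees with the contour integral
```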

2 Calculation of definite integrals

The residue theorem has applications in functional analysis, linear algebra, analytic number theory, quantum field theory, algebraic geometry, Abelian integrals or dynamical systems.

In this section we want to see how the residue theorem can be used to compute definite real integrals.

The first example is the integral-sine
Si(x) = ∫_0^x sin(t)/t dt,
a function which has applications in electrical engineering. It is used also in the proof of the prime number theorem, which states that the function π(x) = #{p ≤ x | p prime} satisfies π(x) ∼ x/log(x) for x → ∞.

Si(∞) = ∫_0^∞ sin(x)/x dx = π/2

Proof. Let f(z) = e^{iz}/z, which satisfies f ∈ C^ω(C \ {0}). For z = x ∈ R, we have Im(f(z)) = sin(x)/x. Define for R > ε > 0 the open set D enclosed by the curve γ = ⋃_{i=1}^4 γ_i, where
γ_1 : t ∈ [ε, R] ↦ t + 0i,
γ_2 : t ∈ [0, π] ↦ R e^{it},
γ_3 : t ∈ [−R, −ε] ↦ t + 0i,
γ_4 : t ∈ [π, 0] ↦ ε e^{it}.

[Figure 1: the contour γ = γ_1 ∪ γ_2 ∪ γ_3 ∪ γ_4, a half-annulus in the upper half plane between the radii ε and R.]


By Cauchy’s theorem

0 = ∫_γ f(z) dz = ∫_ε^R e^{ix}/x dx + ∫_0^π e^{iRe^{it}}/(R e^{it}) · iR e^{it} dt + ∫_{−R}^{−ε} e^{ix}/x dx + ∫_π^0 e^{iεe^{it}}/(ε e^{it}) · iε e^{it} dt.
The imaginary parts of the first and the third integral converge for ε → 0, R → ∞ both to Si(∞). The imaginary part of the fourth integral converges to −π because
lim_{ε→0} ∫_0^π e^{iεe^{it}} i dt = iπ.

The second integral converges to zero for R → ∞ because |e^{iRe^{it}}| = e^{−R sin(t)} ≤ e^{−2Rt/π} for t ∈ (0, π/2], and symmetrically for t ∈ [π/2, π).
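A numerical cross-check of Si(∞) = π/2, assuming the mpmath library is available (quadosc is designed for slowly decaying oscillatory integrands such as sin(x)/x):

```python
import mpmath as mp

f = lambda x: mp.sin(x) / x if x else mp.mpf(1)   # sin(x)/x with the value 1 at x = 0
print(mp.quadosc(f, [0, mp.inf], period=2 * mp.pi))  # ~ 1.5707963267948966
print(mp.pi / 2)                                     # same value
```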

∫_{−∞}^{∞} 1/(1 + x²) dx = π

Proof. Take f(z) = 1/(1 + z²), which has a simple pole a = i in the upper half plane. Define for R > 0 the half-disc D which has as boundary the curve γ = γ_1 ∪ γ_2 with
γ_1 : t ∈ [−R, R] ↦ t + 0i,
γ_2 : t ∈ [0, π] ↦ R e^{it}.
f is analytic in D \ {i}, and by the residue theorem
∫_γ f(z) dz = ∫_{−R}^{R} 1/(1 + x²) dx + ∫_0^π iR e^{it}/(1 + R² e^{2it}) dt = 2πi · Res(f(z), i) = 2πi · lim_{z→i} (z − i) f(z) = π.
The second integral, over the curve γ_2, goes to zero for R → ∞.
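Numerically (a sketch assuming mpmath), both the integral and 2πi times the residue 1/(2i) at the pole i give π:

```python
import mpmath as mp

print(mp.quad(lambda x: 1 / (1 + x**2), [-mp.inf, mp.inf]))   # ~ 3.14159265
res_at_i = 1 / (2j)        # (z - i) f(z) at z = i, since 1 + z^2 = (z - i)(z + i)
print(2j * mp.pi * res_at_i)                                  # = pi
```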

Let f, g be two polynomials with n = deg(g) ≥ 2 + deg(f), such that the poles z_i of h := f/g are not on R_+ ∪ {0}. Then
∫_0^∞ h(x) dx = − Σ_{i=1}^n Res(h(z) log(z), z_i),
where log denotes the branch of the logarithm with arg(z) ∈ [0, 2π).

Proof. Define for R > r > 0 the domain D enclosed by the curves γ_R ∪ γ_− ∪ γ_r ∪ γ_+ with
γ_+ : t ∈ [r, R] ↦ t + 0i (just above the positive real axis),
γ_− : t ∈ [R, r] ↦ t + 0i (just below the positive real axis),
γ_R : t ∈ [0, 2π] ↦ R e^{it},
γ_r : t ∈ [0, 2π] ↦ r e^{−it},
and apply the residue theorem for the function h(z) log(z):
∫_γ h(z) log(z) dz = ∫_{γ_R} h(z) log(z) dz + ∫_{γ_r} h(z) log(z) dz + ∫_{γ_+} h(z) log(z) dz + ∫_{γ_−} h(z) log(z) dz.
Because of the degree assumption, ∫_{γ_R} h(z) log(z) dz → 0 for R → ∞. Because h is analytic near 0 and log(z) goes to ∞ more slowly than 1/z as z → 0, we also get ∫_{γ_r} h(z) log(z) dz → 0 as r → 0. The sum of the last two integrals goes to −(2πi) ∫_0^∞ h(x) dx because on γ_− the branch of the logarithm has increased by 2πi, so
∫_{γ_−} h(z) log(z) dz = − ∫_{γ_+} h(z) (log(z) + 2πi) dz.
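As a concrete illustration (pure standard-library Python; the choice h(x) = 1/(1 + x³) is hypothetical and not from the text), the formula reproduces the known value ∫_0^∞ dx/(1 + x³) = 2π/(3√3):

```python
import cmath, math

def log_branch(z):
    """log with the branch cut on the positive real axis: arg(z) taken in [0, 2*pi)."""
    r, phi = abs(z), cmath.phase(z)     # phase lies in (-pi, pi]
    if phi < 0:
        phi += 2 * math.pi
    return math.log(r) + 1j * phi

poles = [cmath.exp(1j * math.pi * k / 3) for k in (1, 3, 5)]   # roots of z^3 = -1

# residue of h(z)*log(z) at a simple pole p of 1/(1+z^3) is log(p) / (3*p^2)
total = sum(log_branch(p) / (3 * p**2) for p in poles)

print(-total)                              # ~ 1.2092 (imaginary part ~ 0)
print(2 * math.pi / (3 * math.sqrt(3)))    # 1.2092..., the known value of the integral
```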

∫_0^π dθ/(a + cos(θ)) = π/√(a² − 1), a > 1

Proof. Put z = e^{iθ}. Then
a + cos(θ) = a + (z + z^{-1})/2 = (2az + z² + 1)/(2z).
Let γ : θ ↦ e^{iθ}, θ ∈ [0, 2π]. With dθ = dz/(iz),
∫_0^π dθ/(a + cos(θ)) = (1/2) ∫_0^{2π} dθ/(a + cos(θ)) = (1/2) ∫_γ (2/i) dz/(2az + z² + 1).

From the two zeros λ_± = −a ± √(a² − 1) of the polynomial 2az + z² + 1, the root λ_+ is in the unit disc and λ_− outside the unit disc. From the residue theorem, the integral is
(1/i) · 2πi · Res(1/(2az + z² + 1), λ_+) = 2π/(λ_+ − λ_−) = π/√(a² − 1).
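A quick numerical spot check (a sketch assuming numpy) for the illustrative value a = 2, where the formula predicts π/√3 ≈ 1.8138:

```python
import numpy as np

a = 2.0
N = 200000
theta = (np.arange(N) + 0.5) * (np.pi / N)            # midpoint rule on [0, pi]
integral = np.sum(1.0 / (a + np.cos(theta))) * (np.pi / N)

print(integral)                      # ~ 1.81380
print(np.pi / np.sqrt(a**2 - 1))     # 1.81380...
```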

3 Jordan normal form for matrices

As another application of complex analysis, we give an elegant proof of Jordan's normal form theorem in linear algebra with the help of the Cauchy residue calculus.

Let M(n, R) denote the set of real n × n matrices and M(n, C) the set of n × n matrices with complex entries. For A ∈ M(n, C) the characteristic polynomial is
det(λ − A) = ∏_{i=1}^k (λ − λ_i)^{μ_i}.

We simply write λ − A instead of λI − A, where I is the identity matrix. The complex numbers λ_i are called the eigenvalues of A, and the μ_i denote their multiplicities. Clearly Σ_{i=1}^k μ_i = n.
Two matrices A, B ∈ M(n, C) are called similar if there exists an invertible matrix S such that A = S^{-1} B S.

Theorem 3.1 (Jordan normal form theorem)

A ∈ M(n, C) is similar to a block diagonal matrix

  [ [A_1]   0    ...    0   ]
  [   0   [A_2]  ...    0   ]
  [  ...                ... ]
  [   0     0    ...  [A_k] ]

where the blocks

  A_i = [ λ_i  1               ]
        [      λ_i  1          ]
        [           ·    ·     ]
        [                λ_i   ]

are called normal blocks.

Remark. It follows that if all eigenvalues of A are different, then A is diagonalizable.

Denote for λ ≠ λ_i the resolvent matrix
R(λ) = (λ − A)^{-1}.
The function λ ↦ R(λ) is analytic in D = C \ {λ_i} in the sense that for all i, j the functions λ ↦ [R(λ)]_{ij} are analytic in D. The reason is that there exist polynomials α_{ij}(λ) (up to sign, the determinants of the matrices (λ − A)^{(ij)} obtained by deleting the i-th row and the j-th column of λ − A) such that
[R(λ)]_{ij} = α_{ij}(λ)/det(λ − A).

Lemma 3.2 (The resolvent identity)
R(λ) − R(λ′) = (λ′ − λ) R(λ) R(λ′).

Proof. From A(λ − A) = (λ − A)A follows R(λ)A = AR(λ), and we get, by filling in (λ′ − A)R(λ′) = I and (λ − A)R(λ) = I,
R(λ) − R(λ′) = R(λ)(λ′ − A)R(λ′) − (λ − A)R(λ)R(λ′) = (λ′ − λ) R(λ) R(λ′).

Definition. C^ω(D, M(n, C)) denotes the set of functions f : D → M(n, C) such that for all i, j the map z ↦ [f(z)]_{ij} is in C^ω(D). Given a curve γ in D, we define the complex integral ∫ f(z) dz by
[∫ f(z) dz]_{ij} = ∫ [f(z)]_{ij} dz.
Define for δ < min_{i≠j} |λ_i − λ_j|/2 (so that the discs bounded by the circles below are disjoint) and γ_i : t ↦ λ_i + δ e^{it} the matrices
P_i = (1/2πi) ∫_{γ_i} R(λ) dλ,
N_i = (1/2πi) ∫_{γ_i} (λ − λ_i) R(λ) dλ.

Theorem 3.3 (Jordan decomposition of a matrix)

1) P_i P_j = δ_{ij} P_j,
2) Σ_{i=1}^k P_i = I,
3) N_i P_j = δ_{ij} N_i = P_j N_i,
4) N_i N_j = 0 for i ≠ j, P_i(A − λ_i) = N_i, N_i^{μ_i} = 0,
5) A = Σ_{i=1}^k λ_i P_i + Σ_{i=1}^k N_i.

Proof. 1) For i ≠ j we have, using the resolvent identity,
(2πi)² P_i P_j = ∫_{γ_i} R(λ) dλ ∫_{γ_j} R(λ′) dλ′ = ∫_{γ_i} ∫_{γ_j} R(λ) R(λ′) dλ′ dλ
= ∫_{γ_i} ∫_{γ_j} (R(λ) − R(λ′))/(λ′ − λ) dλ′ dλ
= ∫_{γ_i} R(λ) ∫_{γ_j} (λ′ − λ)^{-1} dλ′ dλ + ∫_{γ_j} R(λ′) ∫_{γ_i} (λ − λ′)^{-1} dλ dλ′ = 0,
since λ lies outside γ_j and λ′ lies outside γ_i, so both inner integrals vanish.
On the other hand, with γ′_i : t ↦ λ_i + (δ/2) e^{it},
(2πi)² P_i P_i = ∫_{γ_i} ∫_{γ_i} (R(λ) − R(λ′))/(λ′ − λ) dλ′ dλ = ∫_{γ_i} ∫_{γ′_i} (R(λ) − R(λ′))/(λ′ − λ) dλ′ dλ
= ∫_{γ_i} R(λ) ∫_{γ′_i} (λ′ − λ)^{-1} dλ′ dλ − ∫_{γ′_i} R(λ′) ∫_{γ_i} (λ′ − λ)^{-1} dλ dλ′
= 2πi ∫_{γ′_i} R(λ′) dλ′,
where we used ∫_{γ′_i} (λ′ − λ)^{-1} dλ′ = 0 (λ lies outside γ′_i) and ∫_{γ_i} (λ′ − λ)^{-1} dλ = −2πi (λ′ lies inside γ_i). Hence P_i P_i = P_i.

2) Using Cauchy's theorem we have, for any curve γ_R = {|λ| = R} enclosing all the eigenvalues,
Σ_{i=1}^k P_i = Σ_{i=1}^k (1/2πi) ∫_{γ_i} R(λ) dλ = (1/2πi) ∫_{γ_R} R(λ) dλ
= (1/2πi) ∫_{γ_R} ( R(λ) − (A/λ) R(λ) ) dλ + (1/2πi) ∫_{γ_R} (A/λ) R(λ) dλ.
The claim follows from R(λ) − (A/λ)R(λ) = λ^{-1}(λ − A)R(λ) = λ^{-1} I, from
(1/2πi) ∫_{γ_R} (1/λ) dλ = 1,
and from the fact that for R → ∞
∫_{γ_R} (A/λ) R(λ) dλ → 0,
since |R(λ)_{ij}| ≤ C · |λ|^{-1} for a constant C depending only on A.

3) For i ≠ j, we get with the resolvent identity
(2πi)² N_i P_j = ∫_{γ_i} ∫_{γ_j} (λ − λ_i) R(λ) R(λ′) dλ′ dλ
= ∫_{γ_i} ∫_{γ_j} ((λ − λ_i)/(λ′ − λ)) (R(λ) − R(λ′)) dλ′ dλ
= ∫_{γ_i} ∫_{γ_j} ((λ − λ_i)/(λ′ − λ)) R(λ) dλ′ dλ − ∫_{γ_i} ∫_{γ_j} ((λ − λ_i)/(λ′ − λ)) R(λ′) dλ′ dλ = 0.
Using the curve γ′_i = {|λ − λ_i| = δ/2} we have

(2πi)² N_i P_i = ∫_{γ_i} ∫_{γ′_i} (λ − λ_i) R(λ) R(λ′) dλ′ dλ
= ∫_{γ_i} ∫_{γ′_i} ((λ − λ_i)/(λ′ − λ)) (R(λ) − R(λ′)) dλ′ dλ
= ∫_{γ_i} (λ − λ_i) R(λ) ∫_{γ′_i} (λ′ − λ)^{-1} dλ′ dλ − ∫_{γ′_i} R(λ′) ∫_{γ_i} ((λ − λ_i)/(λ′ − λ)) dλ dλ′
= 2πi ∫_{γ′_i} (λ′ − λ_i) R(λ′) dλ′ = (2πi)² N_i.
The first term vanishes because λ lies outside γ′_i; in the second term the inner integral equals −2πi (λ′ − λ_i), because λ′ lies inside γ_i.

4) NiNj = 0 is left as an exercise. (The calculation goes completely analogue to the already done calculations in 1) or in 3)

(2πi) P_i (A − λ_i) = ∫_{γ_i} R(λ)(A − λ_i) dλ = ∫_{γ_i} R(λ)(A − λ_i) + I dλ

= ∫_{γ_i} R(λ)(A − λ_i) − R(λ)(A − λ) dλ

= ∫_{γ_i} (λ − λ_i) R(λ) dλ = (2πi) N_i,
where we used ∫_{γ_i} I dλ = 0 and R(λ)(A − λ) = −I. Using 3) and 1) we get from the just obtained equality

(2πi) N_i^k = (2πi) P_i (A − λ_i)^k = ∫_{γ_i} R(λ)(A − λ_i)^k dλ

= ∫_{γ_i} R(λ)(A − λ_i)^k − R(λ)(A − λ_i)^{k−1}(A − λ) dλ
= ∫_{γ_i} R(λ)(A − λ_i)^{k−1}(λ − λ_i) dλ
= ∫_{γ_i} R(λ)(A − λ_i)^{k−1}(λ − λ_i) − R(λ)(A − λ_i)^{k−2}(A − λ)(λ − λ_i) dλ
= ∫_{γ_i} R(λ)(A − λ_i)^{k−2}(λ − λ_i)² dλ.
Repeating like this k − 2 more times, we get
(2πi) N_i^k = ∫_{γ_i} (λ − λ_i)^k R(λ) dλ.
(In each step the subtracted term integrates to zero, because R(λ)(A − λ) = −I and the remaining integrand is a polynomial in λ, whose integral over the closed curve γ_i vanishes.)

The claim N_i^{μ_i} = 0 follows from the fact that (λ − λ_i)^{μ_i} R(λ) is analytic near λ_i.
5) From P_i(A − λ_i) = N_i in 4) and using 2) we have
A = A Σ_{i=1}^k P_i = Σ_{i=1}^k ( λ_i P_i + P_i(A − λ_i) ) = Σ_{i=1}^k λ_i P_i + Σ_{i=1}^k N_i.

Remark. It follows that the matrix A leaves invariant the subspaces H_i = P_i H of H = C^n and acts on H_i as
v ↦ λ_i v + N_i v.
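The contour-integral definitions of P_i and N_i are easy to test numerically. The following sketch (assuming numpy; the 3×3 matrix with one genuine Jordan block is a hypothetical example) approximates the integrals by a Riemann sum over the circles γ_i and checks properties 1), 2) and 5):

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 5.0]], dtype=complex)   # eigenvalues 2 (Jordan block) and 5
I = np.eye(3, dtype=complex)

def contour(center, radius=1.0, weight=lambda lam: 1.0, N=2000):
    """(1/2*pi*i) * integral of weight(lam) * R(lam) d(lam) over a circle around center."""
    out = np.zeros((3, 3), dtype=complex)
    dt = 2 * np.pi / N
    for k in range(N):
        lam = center + radius * np.exp(1j * k * dt)
        dlam = 1j * radius * np.exp(1j * k * dt) * dt
        out += weight(lam) * np.linalg.inv(lam * I - A) * dlam
    return out / (2j * np.pi)

P1 = contour(2.0)                                   # projection for eigenvalue 2
P2 = contour(5.0)                                   # projection for eigenvalue 5
N1 = contour(2.0, weight=lambda lam: lam - 2.0)     # nilpotent part for eigenvalue 2

print(np.round(P1, 6))                                        # projection onto the first block
print(np.allclose(P1 @ P1, P1), np.allclose(P1 + P2, I))      # True True
print(np.allclose(2 * P1 + 5 * P2 + N1, A))                   # True (N2 = 0 here)
```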

There is a basis of H_i in which the matrix N_i has 1 in the side diagonal and 0 everywhere else: by 4), we know that μ_i is the smallest k such that N_i^k = 0. It implies that all eigenvalues of N_i are 0. There exists a vector v ∈ H_i such that {N_i^k v}_{k=0}^{μ_i−1} form a basis in H_i. (This is an exercise: any nontrivial relation Σ_j a_j N_i^j v = 0 would imply that N_i had an eigenvalue different from 0.) In that basis, the transformation N_i is the matrix

  N_i = [ 0  1          ]
        [    ·   ·      ]
        [        0   1  ]
        [            0  ]

The matrix A is now a direct sum of Jordan blocks, A = ⊕_{i=1}^k A_i, where

  A_i = [ λ_i  1             ]
        [      ·    ·        ]
        [           λ_i   1  ]
        [                λ_i ]

Exercises:
1) Perform the calculation which had been left out in the above proof: show that

N_i N_j = 0, i ≠ j.
2) Show that if a linear transformation N on a μ-dimensional space has the property N^μ = 0 and N^k ≠ 0 for k < μ, then there is a basis in which N is a Jordan block with zeros in the diagonal.

4 The winding number

Cauchy's integral formula and the residue formula can be expressed more naturally using the notion of the winding number.

Let γ be a closed curve in C avoiding a point a ∈ C. There exists k ∈ Z such that

Lemma 4.1
∫_γ dz/(z − a) = 2πi k.

Proof. Write γ as t ↦ z(t), t ∈ [0, 2π], with z(2π) = z(0), and define
h(t) := ∫_0^t z′(s)/(z(s) − a) ds.
Since h′(t) = z′(t)/(z(t) − a), we get
d/dt [ e^{−h(t)} (z(t) − a) ] = −h′(t) e^{−h(t)} (z(t) − a) + e^{−h(t)} z′(t) = 0,
and e^{−h(t)}(z(t) − a) = e^{−h(0)}(z(0) − a) is a constant. Therefore e^{h(t)} = (z(t) − a)/(z(0) − a), and especially e^{h(2π)} = (z(2π) − a)/(z(0) − a) = 1, which means h(2π) = 2πi k.

Definition. The index or winding number of a closed curve γ with respect to a point a ∉ γ is
n(γ, a) = (1/2πi) ∫_γ dz/(z − a) ∈ Z.
The definition is legitimated by the above lemma.
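Numerically (a sketch assuming numpy), the winding number of the curve t ↦ e^{ikt}, t ∈ [0, 2π], is k around a = 0 and 0 around any point outside the curve:

```python
import numpy as np

def winding(a, k, N=4096):
    t = np.arange(N) * 2 * np.pi / N
    z = np.exp(1j * k * t)                      # unit circle traversed k times
    dz = 1j * k * np.exp(1j * k * t) * (2 * np.pi / N)
    return np.sum(dz / (z - a)) / (2j * np.pi)

print(winding(0.0, 1))     # ~ 1
print(winding(0.0, 3))     # ~ 3
print(winding(2.0, 3))     # ~ 0, the point 2 lies outside the curve
```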

Cauchy's integral formula and the residue theorem hold more generally for any closed curve γ in a simply connected set D. If f ∈ C^ω(D), then
n(γ, z) · f(z) = (1/2πi) ∫_γ f(w)/(w − z) dw.

If f ∈ C^ω(D \ {z_i}_{i=1}^n), then
(1/2πi) ∫_γ f(z) dz = Σ_{i=1}^n n(γ, z_i) Res(f, z_i).

Theorem 4.2 (Argument principle for analytic functions)
Given f ∈ C^ω(D) with D simply connected. Let a_i be the zeros of f in D and γ a closed curve in D avoiding the a_i. Then
Σ_i n(γ, a_i) = (1/2πi) ∫_γ f′(z)/f(z) dz.

Proof. Write
f(z) = (z − a_1)(z − a_2) ··· (z − a_n) g(z),
where g(z) has no zeros in D. We compute

f′(z)/f(z) = 1/(z − a_1) + 1/(z − a_2) + ··· + 1/(z − a_n) + g′(z)/g(z).
Cauchy's theorem gives
∫_γ g′(z)/g(z) dz = 0,
and the formula follows from the definition of the winding number.
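As a numerical illustration of the argument principle (a sketch assuming numpy; the function z³ − 0.2 is an arbitrary illustrative choice), the integral counts the three zeros inside the unit circle:

```python
import numpy as np

f  = lambda z: z**3 - 0.2          # zeros of modulus 0.2**(1/3) ~ 0.585, all inside |z| < 1
df = lambda z: 3 * z**2

N = 8192
t = np.arange(N) * 2 * np.pi / N
z = np.exp(1j * t)
dz = 1j * z * (2 * np.pi / N)

count = np.sum(df(z) / f(z) * dz) / (2j * np.pi)
print(count)     # ~ 3, the number of zeros (each encircled once)
```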

Definition. If f ∈ C^ω(D \ {a}) for a neighborhood D of a, then a is called an isolated singularity of f. If there exists n ∈ N such that
lim_{z→a} (z − a)^{n+1} f(z) = 0,
then a is called a pole of f. The smallest n such that the above limit is zero is called the order of the pole. If an isolated singularity is not a pole, it is called an essential singularity. If f ∈ C^ω(D \ {z_i}) and each z_i is a pole, then f is called meromorphic in D.

Theorem 4.3 (Argument principle for meromorphic functions)

Let f be meromorphic in the simply connected set D, a_i the zeros of f, b_j the poles of f in D, and γ a closed curve avoiding the a_i, b_j. Then

Σ_{i=1}^n n(γ, a_i) − Σ_{j=1}^k n(γ, b_j) = (1/2πi) ∫_γ f′(z)/f(z) dz.

Proof. The function
g(z) := f(z) · (z − b_1)(z − b_2) ··· (z − b_k)
is analytic in D and has the zeros a_i. Write
(1/2πi) ∫_γ [ f′(z)/f(z) + 1/(z − b_1) + 1/(z − b_2) + ··· + 1/(z − b_k) ] dz = (1/2πi) ∫_γ g′(z)/g(z) dz.

The right hand side is, by the argument principle for analytic maps, equal to Σ_i n(γ, a_i). The left hand side is
(1/2πi) ∫_γ f′(z)/f(z) dz + Σ_j n(γ, b_j).

Theorem 4.4 (Generalized argument principle)

Let f be meromorphic in the simply connected set D, a_i the zeros of f, b_j the poles of f in D, γ a closed curve avoiding the a_i, b_j, and g ∈ C^ω(D). Then
Σ_{i=1}^n g(a_i) n(γ, a_i) − Σ_{j=1}^k g(b_j) n(γ, b_j) = (1/2πi) ∫_γ g(z) · f′(z)/f(z) dz.

Proof. Write again
f(z) = h(z) · ∏_{i=1}^n (z − a_i) / ∏_{j=1}^k (z − b_j)
with analytic h which is nowhere zero, and so

g(z) · f′(z)/f(z) = Σ_{i=1}^n g(z)/(z − a_i) − Σ_{j=1}^k g(z)/(z − b_j) + g(z) h′(z)/h(z).
Integrating over γ, the last term gives 0 by Cauchy's theorem, while by the Cauchy integral formula
(1/2πi) ∫_γ g(z)/(z − a_i) dz = n(γ, a_i) g(a_i), (1/2πi) ∫_γ g(z)/(z − b_j) dz = n(γ, b_j) g(b_j),
which proves the claim.

As an application, take h ∈ C^ω(Δ) with Δ = {|z − a| < r} and D = h(Δ), and assume h is injective. For z ∈ Δ put ξ = h(z). The function f(w) = h(w) − ξ has only one zero in Δ, namely w = z. Apply the last theorem with g(w) = w. We get
h^{-1}(ξ) = z = (1/2πi) ∫_{δΔ} w h′(w)/(h(w) − ξ) dw,
which is a formula for the inverse of h.
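The inversion formula can be tested numerically (a sketch assuming numpy; h(w) = e^w on the unit disc and the point z = 0.3 − 0.2i are illustrative choices): the contour integral recovers z from ξ = h(z).

```python
import numpy as np

z0 = 0.3 - 0.2j
xi = np.exp(z0)                      # xi = h(z0); we try to recover z0 from xi

N = 8192
t = np.arange(N) * 2 * np.pi / N
w = np.exp(1j * t)                   # the boundary circle of the unit disc
dw = 1j * w * (2 * np.pi / N)

hw, dhw = np.exp(w), np.exp(w)       # h and h' (both equal exp here)
recovered = np.sum(w * dhw / (hw - xi) * dw) / (2j * np.pi)
print(recovered)                     # ~ 0.3 - 0.2j
```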

Assume f ∈ C^ω(D \ {z_i}) is meromorphic. Denote by Z_f = Z_f(D) the number of zeros of f in D and by P_f = P_f(D) the number of poles of f in D.

Theorem 4.5 (Rouché's theorem) Given meromorphic f, g ∈ C^ω(D \ {z_i}). Assume for D_R = {z | |z − a| < R} ⊂ D, γ = δD_R, that
|f(z) − g(z)| < |g(z)|, z ∈ γ.
Then Z_f − P_f = Z_g − P_g.

Proof. The assumption implies that
|f(z)/g(z) − 1| < 1, z ∈ γ,
and h := f/g therefore maps γ into the right half plane. We can define log(f/g) by requiring that

Im(log(h(z))) = arg(h(z)) ∈ (−π/2, π/2).
The function log(h(z)) is a primitive of h′/h = (f/g)′/(f/g). We therefore have

0 = (1/2πi) ∫_γ (f/g)′/(f/g) dz = (1/2πi) ∫_γ [ f′/f − g′/g ] dz.

Apply the argument principle.

Rouché's theorem leads to another proof of the fundamental theorem of algebra:

Theorem 4.6 (Fundamental theorem of algebra)
Every polynomial f(z) = p(z) = z^n + a_1 z^{n−1} + ··· + a_n has exactly n roots.

Proof. p(z)/z^n = 1 + a_1/z + ··· + a_n/z^n goes to 1 for |z| → ∞. Consider the function g(z) = z^n and D = {|z| < R}. For R sufficiently large, |f(z) − g(z)| < |g(z)| on |z| = R. Rouché's theorem tells us that f and g must have the same number of roots in D; since g(z) = z^n has exactly n roots there (counted with multiplicity), so does f.
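In the same spirit, the following numerical sketch (assuming numpy; the polynomial z^5 + 2z² − z + 3 and the radius R = 3 are hypothetical choices) checks the Rouché hypothesis on |z| = R and then counts the roots via the argument principle:

```python
import numpy as np

p  = lambda z: z**5 + 2 * z**2 - z + 3
dp = lambda z: 5 * z**4 + 4 * z - 1
g  = lambda z: z**5

R, N = 3.0, 8192
t = np.arange(N) * 2 * np.pi / N
z = R * np.exp(1j * t)
dz = 1j * z * (2 * np.pi / N)

print(np.max(np.abs(p(z) - g(z))) < np.min(np.abs(g(z))))     # True: Rouche applies on |z| = R
print(np.sum(dp(z) / p(z) * dz) / (2j * np.pi))               # ~ 5 = deg(p) roots inside
```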