An Introduction to the Conjugate Gradient Method Without the Agonizing Pain
Edition 1 1/4
Jonathan Richard Shewchuk
August 4, 1994

School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213

Abstract

The Conjugate Gradient Method is the most prominent iterative method for solving sparse systems of linear equations. Unfortunately, many textbook treatments of the topic are written with neither illustrations nor intuition, and their victims can be found to this day babbling senselessly in the corners of dusty libraries. For this reason, a deep, geometric understanding of the method has been reserved for the elite brilliant few who have painstakingly decoded the mumblings of their forebears. Nevertheless, the Conjugate Gradient Method is a composite of simple, elegant ideas that almost anyone can understand. Of course, a reader as intelligent as yourself will learn them almost effortlessly. The idea of quadratic forms is introduced and used to derive the methods of Steepest Descent, Conjugate Directions, and Conjugate Gradients. Eigenvectors are explained and used to examine the convergence of the Jacobi Method, Steepest Descent, and Conjugate Gradients. Other topics include preconditioning and the nonlinear Conjugate Gradient Method. I have taken pains to make this article easy to read. Sixty-six illustrations are provided. Dense prose is avoided. Concepts are explained in several different ways. Most equations are coupled with an intuitive interpretation.

Supported in part by the Natural Sciences and Engineering Research Council of Canada under a 1967 Science and Engineering Scholarship and by the National Science Foundation under Grant ASC-9318163. The views and conclusions contained in this document are those of the author and should not be interpreted as representing the official policies, either express or implied, of NSERC, NSF, or the U.S. Government.

Keywords: conjugate gradient method, preconditioning, convergence analysis, agonizing pain

Contents

1. Introduction
2. Notation
3. The Quadratic Form
4. The Method of Steepest Descent
5. Thinking with Eigenvectors and Eigenvalues
   5.1. Eigen do it if I try
   5.2. Jacobi iterations
   5.3. A Concrete Example
6. Convergence Analysis of Steepest Descent
   6.1. Instant Results
   6.2. General Convergence
7. The Method of Conjugate Directions
   7.1. Conjugacy
   7.2. Gram-Schmidt Conjugation
   7.3. Optimality of the Error Term
8. The Method of Conjugate Gradients
9. Convergence Analysis of Conjugate Gradients
   9.1. Picking Perfect Polynomials
   9.2. Chebyshev Polynomials
10. Complexity
11. Starting and Stopping
   11.1. Starting
   11.2. Stopping
12. Preconditioning
13. Conjugate Gradients on the Normal Equations
14. The Nonlinear Conjugate Gradient Method
   14.1. Outline of the Nonlinear Conjugate Gradient Method
   14.2. General Line Search
   14.3. Preconditioning
A. Notes
B. Canned Algorithms
   B1. Steepest Descent
   B2. Conjugate Gradients
   B3. Preconditioned Conjugate Gradients
   B4. Nonlinear Conjugate Gradients with Newton-Raphson and Fletcher-Reeves
   B5. Preconditioned Nonlinear Conjugate Gradients with Secant and Polak-Ribière
C. Ugly Proofs
   C1. The Solution to Ax = b Minimizes the Quadratic Form
   C2. A Symmetric Matrix Has n Orthogonal Eigenvectors
   C3. Optimality of Chebyshev Polynomials
D. Homework

About this Article

An electronic copy of this article is available by anonymous FTP to WARP.CS.CMU.EDU (IP address 128.2.209.103) under the filename quake-papers/painless-conjugate-gradient.ps. A PostScript file containing full-page copies of the figures herein, suitable for transparencies, is available as quake-papers/painless-conjugate-gradient-pics.ps. Most of the illustrations were created using Mathematica.

©1994 by Jonathan Richard Shewchuk. This article may be freely duplicated and distributed so long as no consideration is received in return, and this copyright notice remains intact.

This guide was created to help students learn Conjugate Gradient Methods as easily as possible. Please mail me ([email protected]) comments, corrections, and any intuitions I might have missed; some of these will be incorporated into a second edition. I am particularly interested in hearing about use of this guide for classroom teaching.

For those who wish to learn more about iterative methods, I recommend William L. Briggs' "A Multigrid Tutorial" [2], one of the best-written mathematical books I have read.

Special thanks to Omar Ghattas, who taught me much of what I know about numerical methods, and provided me with extensive comments on the first draft of this article. Thanks also to James Epperson, David O'Hallaron, James Stichnoth, Nick Trefethen, and Daniel Tunkelang for their comments.

To help you skip chapters, here's a dependence graph of the sections:

[Dependence graph of the sections (figure). The nodes are: 1 Introduction, 2 Notation, 3 Quadratic Form, 4 Steepest Descent, 5 Eigenvectors, 6 SD Convergence, 7 Conjugate Directions, 8 Conjugate Gradients, 9 CG Convergence, 10 Complexity, 11 Stop & Start, 12 Preconditioning, 13 Normal Equations, 14 Nonlinear CG.]

This article is dedicated to every mathematician who uses figures as abundantly as I have herein.

1. Introduction

When I decided to learn the Conjugate Gradient Method (henceforth, CG), I read four different descriptions, which I shall politely not identify. I understood none of them. Most of them simply wrote down the method, then proved its properties without any intuitive explanation or hint of how anybody might have invented CG in the first place. This article was born of my frustration, with the wish that future students of CG will learn a rich and elegant algorithm, rather than a confusing mass of equations.

CG is the most popular iterative method for solving large systems of linear equations. CG is effective for systems of the form

    A x = b,    (1)

where x is an unknown vector, b is a known vector, and A is a known, square, symmetric, positive-definite (or positive-indefinite) matrix. (Don't worry if you've forgotten what "positive-definite" means; we shall review it.) These systems arise in many important settings, such as finite difference and finite element methods for solving partial differential equations, structural analysis, circuit analysis, and math homework.

Iterative methods like CG are suited for use with sparse matrices. If A is dense, your best course of action is probably to factor A and solve the equation by backsubstitution. The time spent factoring a dense A is roughly equivalent to the time spent solving the system iteratively; and once A is factored, the system can be backsolved quickly for multiple values of b. Compare this dense matrix with a sparse matrix of larger size that fills the same amount of memory. The triangular factors of a sparse A usually have many more nonzero elements than A itself. Factoring may be impossible due to limited memory, and will be time-consuming as well; even the backsolving step may be slower than iterative solution. On the other hand, most iterative methods are memory-efficient and run quickly with sparse matrices.

I assume that you have taken a first course in linear algebra, and that you have a solid understanding of matrix multiplication and linear independence, although you probably don't remember what those eigenthingies were all about. From this foundation, I shall build the edifice of CG as clearly as I can.

2. Notation

Let us begin with a few de®nitions and notes on notation.

With a few exceptions, I shall use capital letters to denote matrices, lower case letters to denote vectors, and Greek letters to denote scalars. A is an n × n matrix, and x and b are vectors -- that is, n × 1 matrices. Written out fully, Equation 1 is

    [ A_11  A_12  ...  A_1n ] [ x_1 ]     [ b_1 ]
    [ A_21  A_22  ...  A_2n ] [ x_2 ]  =  [ b_2 ]
    [   .     .          .  ] [  .  ]     [  .  ]
    [ A_n1  A_n2  ...  A_nn ] [ x_n ]     [ b_n ]

The inner product of two vectors is written x^T y, and represents the scalar sum Σ_{i=1}^{n} x_i y_i. Note that x^T y = y^T x. If x and y are orthogonal, then x^T y = 0. In general, expressions that reduce to 1 × 1 matrices, such as x^T y and x^T A x, are treated as scalar values.

[Figure 1: Sample two-dimensional linear system 3x_1 + 2x_2 = 2, 2x_1 + 6x_2 = -8. The solution lies at the intersection of the lines.]

A matrix A is positive-definite if, for every nonzero vector x,

    x^T A x > 0.    (2)

This may mean little to you, but don't feel bad; it's not a very intuitive idea, and it's hard to imagine how a matrix that is positive-definite might look differently from one that isn't. We will get a feeling for what positive-definiteness is about when we see how it affects the shape of quadratic forms.

Finally, don't forget the important basic identities (AB)^T = B^T A^T and (AB)^{-1} = B^{-1} A^{-1}.

3. The Quadratic Form

A quadratic form is simply a scalar, quadratic function of a vector with the form

    f(x) = (1/2) x^T A x - b^T x + c,    (3)

where A is a matrix, x and b are vectors, and c is a scalar constant. I shall show shortly that if A is symmetric and positive-definite, f(x) is minimized by the solution to Ax = b.

Throughout this paper, I will demonstrate ideas with the simple sample problem

    A = [ 3  2 ],    b = [  2 ],    c = 0.    (4)
        [ 2  6 ]         [ -8 ]

The system Ax = b is illustrated in Figure 1. In general, the solution x lies at the intersection point of n hyperplanes, each having dimension n - 1. For this problem, the solution is x = [2, -2]^T. The corresponding quadratic form f(x) appears in Figure 2. A contour plot of f(x) is illustrated in Figure 3.
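As a quick numerical cross-check of this sample problem (not part of the original article), the sketch below uses NumPy to verify that x = [2, -2]^T solves Equation 4 and that it minimizes the quadratic form of Equation 3; the function name f is my own.

```python
import numpy as np

# The sample problem of Equation 4.
A = np.array([[3.0, 2.0],
              [2.0, 6.0]])
b = np.array([2.0, -8.0])
c = 0.0

def f(x):
    """Quadratic form of Equation 3: f(x) = 1/2 x^T A x - b^T x + c."""
    return 0.5 * x @ A @ x - b @ x + c

x_star = np.linalg.solve(A, b)
print(x_star)                                          # [ 2. -2.]
print(f(x_star))                                       # -10.0, the minimum value
print(f(x_star + np.array([0.5, 0.0])) > f(x_star))    # True: nearby points are worse
```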

[Figure 2: Graph of a quadratic form f(x).]

[Figure 3: Contours of the quadratic form. Each ellipsoidal curve has constant f(x).]

[Figure 4: Gradient f'(x) of the quadratic form. For every x, the gradient points in the direction of steepest increase of f(x).]

Because A is positive-definite, the surface defined by f(x) is shaped like a paraboloid bowl. (I'll have more to say about this in a moment.)

The gradient of a quadratic form is defined to be

    f'(x) = [ ∂f/∂x_1 (x),  ∂f/∂x_2 (x),  ...,  ∂f/∂x_n (x) ]^T.    (5)

The gradient is a vector field that, for a given point x, points in the direction of greatest increase of f(x). Figure 4 illustrates the gradient vectors for Equation 3 with the constants given in Equation 4. At the bottom of the paraboloid bowl, the gradient is zero. One can minimize f(x) by setting f'(x) equal to zero.

With a little bit of tedious math, one can apply Equation 5 to Equation 3, and derive

    f'(x) = (1/2) A^T x + (1/2) A x - b.    (6)

If A is symmetric, this equation reduces to

    f'(x) = A x - b.    (7)

Setting the gradient to zero, we obtain Equation 1, the linear system we wish to solve. Therefore, the solution to Ax = b is a critical point of f(x). If A is positive-definite as well as symmetric, then this solution is a minimum of f(x), so Ax = b can be solved by finding an x that minimizes f(x). (If A is not symmetric, then Equation 6 hints that CG will find a solution to the system (1/2)(A^T + A) x = b. Note that (1/2)(A^T + A) is symmetric.)

[Figure 5: (a) Quadratic form for a positive-definite matrix. (b) For a negative-definite matrix. (c) For a singular (and positive-indefinite) matrix. A line that runs through the bottom of the valley is the set of solutions. (d) For an indefinite matrix. Because the solution is a saddle point, Steepest Descent and CG will not work. In three dimensions or higher, a singular matrix can also have a saddle.]

Why do symmetric positive-definite matrices have this nice property? Consider the relationship between f at some arbitrary point p and at the solution point x = A^{-1} b. From Equation 3 one can show (Appendix C1) that if A is symmetric (be it positive-definite or not),

    f(p) = f(x) + (1/2) (p - x)^T A (p - x).    (8)

If A is positive-definite as well, then by Inequality 2, the latter term is positive for all p ≠ x. It follows that x is a global minimum of f.

The fact that f(x) is a paraboloid is our best intuition of what it means for a matrix to be positive-definite. If A is not positive-definite, there are several other possibilities. A could be negative-definite -- the result of negating a positive-definite matrix (see Figure 2, but hold it upside-down). A might be singular, in which case no solution is unique; the set of solutions is a line or hyperplane having a uniform value for f. If A is none of the above, then x is a saddle point, and techniques like Steepest Descent and CG will likely fail. Figure 5 demonstrates the possibilities. The values of b and c determine where the minimum point of the paraboloid lies, but do not affect the paraboloid's shape.

Why go to the trouble of converting the linear system into a tougher-looking problem? The methods under study -- Steepest Descent and CG -- were developed and are intuitively understood in terms of minimization problems like Figure 2, not in terms of intersecting hyperplanes such as Figure 1.

4. The Method of Steepest Descent

In the method of Steepest Descent, we start at an arbitrary point x_(0) and slide down to the bottom of the paraboloid. We take a series of steps x_(1), x_(2), ... until we are satisfied that we are close enough to the solution x.

When we take a step, we choose the direction in which f decreases most quickly, which is the direction opposite f'(x_(i)). According to Equation 7, this direction is -f'(x_(i)) = b - A x_(i).

Allow me to introduce a few definitions, which you should memorize. The error e_(i) = x_(i) - x is a vector that indicates how far we are from the solution. The residual r_(i) = b - A x_(i) indicates how far we are from the correct value of b. It is easy to see that r_(i) = -A e_(i), and you should think of the residual as being the error transformed by A into the same space as b. More importantly, r_(i) = -f'(x_(i)), and you should also think of the residual as being the direction of steepest descent. For nonlinear problems, discussed in Section 14, only the latter definition applies. So remember, whenever you read "residual", think "direction of steepest descent."

Suppose we start at x_(0) = [-2, -2]^T. Our first step, along the direction of steepest descent, will fall somewhere on the solid line in Figure 6(a). In other words, we will choose a point

    x_(1) = x_(0) + α r_(0).    (9)

The question is, how big a step should we take?

A line search is a procedure that chooses α to minimize f along a line. Figure 6(b) illustrates this task: we are restricted to choosing a point on the intersection of the vertical plane and the paraboloid. Figure 6(c) is the parabola defined by the intersection of these surfaces. What is the value of α at the base of the parabola?

From basic calculus, α minimizes f when the directional derivative (d/dα) f(x_(1)) is equal to zero. By the chain rule, (d/dα) f(x_(1)) = f'(x_(1))^T (d/dα) x_(1) = f'(x_(1))^T r_(0). Setting this expression to zero, we find that α should be chosen so that r_(0) and f'(x_(1)) are orthogonal (see Figure 6(d)).

There is an intuitive reason why we should expect these vectors to be orthogonal at the minimum. Figure 7 shows the gradient vectors at various points along the search line. The slope of the parabola (Figure 6(c)) at any point is equal to the magnitude of the projection of the gradient onto the line (Figure 7, dotted arrows). These projections represent the rate of increase of f as one traverses the search line. f is minimized where the projection is zero -- where the gradient is orthogonal to the search line.

To determine α, note that f'(x_(1)) = -r_(1), and we have

    r_(1)^T r_(0) = 0
    (b - A x_(1))^T r_(0) = 0
    (b - A (x_(0) + α r_(0)))^T r_(0) = 0
    (b - A x_(0))^T r_(0) - α (A r_(0))^T r_(0) = 0
    (b - A x_(0))^T r_(0) = α (A r_(0))^T r_(0)
    r_(0)^T r_(0) = α r_(0)^T A r_(0)

    α = (r_(0)^T r_(0)) / (r_(0)^T A r_(0)).

[Figure 6: The method of Steepest Descent. (a) Starting at [-2, -2]^T, take a step in the direction of steepest descent of f. (b) Find the point on the intersection of these two surfaces that minimizes f. (c) This parabola is the intersection of surfaces. The bottommost point is our target. (d) The gradient at the bottommost point is orthogonal to the gradient of the previous step.]

[Figure 7: The gradient f' is shown at several locations along the search line (solid arrows). Each gradient's projection onto the line is also shown (dotted arrows). The gradient vectors represent the direction of steepest increase of f, and the projections represent the rate of increase as one traverses the search line. On the search line, f is minimized where the gradient is orthogonal to the search line.]

[Figure 8: Here, the method of Steepest Descent starts at [-2, -2]^T and converges at [2, -2]^T.]

Putting it all together, the method of Steepest Descent is:

    r_(i) = b - A x_(i),    (10)

    α_(i) = (r_(i)^T r_(i)) / (r_(i)^T A r_(i)),    (11)

    x_(i+1) = x_(i) + α_(i) r_(i).    (12)

The example is run until it converges in Figure 8. Note the zigzag path, which appears because each gradient is orthogonal to the previous gradient.
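Putting the same steps into code, here is a minimal sketch (mine, not the article's pseudocode; compare the author's Appendix B1) of Equations 10-12 run on the sample problem. It recomputes the residual from its definition at every iteration; the refinement discussed just below (Equation 13) replaces that recomputation with a cheaper recurrence.

```python
import numpy as np

def steepest_descent(A, b, x, max_iters=1000, tol=1e-10):
    """Sketch of Steepest Descent (Equations 10-12)."""
    for _ in range(max_iters):
        r = b - A @ x                     # Equation 10: residual = direction of steepest descent
        if np.linalg.norm(r) < tol:
            break
        alpha = (r @ r) / (r @ A @ r)     # Equation 11: exact line search along r
        x = x + alpha * r                 # Equation 12: take the step
    return x

A = np.array([[3.0, 2.0], [2.0, 6.0]])
b = np.array([2.0, -8.0])
print(steepest_descent(A, b, x=np.array([-2.0, -2.0])))   # approximately [ 2. -2.]
```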

The algorithm, as written above, requires two matrix-vector multiplications per iteration. The computational cost of Steepest Descent is dominated by matrix-vector products; fortunately, one can be eliminated. By premultiplying both sides of Equation 12 by -A and adding b, we have

    r_(i+1) = r_(i) - α_(i) A r_(i).    (13)

Although Equation 10 is still needed to compute r_(0), Equation 13 can be used for every iteration thereafter. The product A r_(i), which occurs in both Equations 11 and 13, need only be computed once. The disadvantage of using this recurrence is that the sequence defined by Equation 13 is generated without any feedback from the value of x_(i), so that accumulation of floating point roundoff error may cause x_(i) to converge to some point near x. This effect can be avoided by periodically using Equation 10 to recompute the correct residual.

Before analyzing the convergence of Steepest Descent, I must digress to ensure that you have a solid understanding of eigenvectors.

5. Thinking with Eigenvectors and Eigenvalues

After my one course in linear algebra, I knew eigenvectors and eigenvalues like the back of my head. If your instructor was anything like mine, you recall solving problems involving eigendoohickeys, but you never really understood them. Unfortunately, without an intuitive grasp of them, CG won't make sense either. If you're already eigentalented, feel free to skip this section.

Eigenvectors are used primarily as an analysis tool; Steepest Descent and CG do not calculate the value of any eigenvectors as part of the algorithm.¹

5.1. Eigen do it if I try


An eigenvector v of a matrix B is a nonzero vector that does not rotate when B is applied to it (except perhaps to point in precisely the opposite direction). v may change length or reverse its direction, but it won't turn sideways. In other words, there is some scalar constant λ such that Bv = λv. The value λ is an eigenvalue of B. For any constant α, the vector αv is also an eigenvector with eigenvalue λ, because B(αv) = αBv = λαv. In other words, if you scale an eigenvector, it's still an eigenvector.

Why should you care? Iterative methods often depend on applying B to a vector over and over again. When B is repeatedly applied to an eigenvector v, one of two things can happen. If |λ| < 1, then B^i v = λ^i v will vanish as i approaches infinity (Figure 9). If |λ| > 1, then B^i v will grow to infinity (Figure 10). Each time B is applied, the vector grows or shrinks according to the value of |λ|.

[Figure 9: v is an eigenvector of B with a corresponding eigenvalue of -0.5. As i increases, B^i v converges to zero.]

[Figure 10: Here, v has a corresponding eigenvalue of 2. As i increases, B^i v diverges to infinity.]

¹However, there are practical applications for eigenvectors. The eigenvectors of the stiffness matrix associated with a discretized structure of uniform density represent the natural modes of vibration of the structure being studied. For instance, the eigenvectors of the stiffness matrix associated with a one-dimensional uniformly-spaced mesh are sine waves, and to express vibrations as a linear combination of these eigenvectors is equivalent to performing a discrete Fourier transform.

If B is symmetric (and often if it is not), then there exists a set of n linearly independent eigenvectors of B, denoted v_1, v_2, ..., v_n. This set is not unique, because each eigenvector can be scaled by an arbitrary nonzero constant. Each eigenvector has a corresponding eigenvalue, denoted λ_1, λ_2, ..., λ_n. These are uniquely defined for a given matrix. The eigenvalues may or may not be equal to each other; for instance, the eigenvalues of the identity matrix I are all one, and every nonzero vector is an eigenvector of I.

What if B is applied to a vector that is not an eigenvector? A very important skill in understanding linear algebra -- the skill this section is written to teach -- is to think of a vector as a sum of other vectors whose behavior is understood. Consider that the set of eigenvectors {v_i} forms a basis for R^n (because a symmetric B has n eigenvectors that are linearly independent). Any n-dimensional vector can be expressed as a linear combination of these eigenvectors, and because matrix-vector multiplication is distributive, one can examine the effect of B on each eigenvector separately.


In Figure 11, a vector x is illustrated as a sum of two eigenvectors v_1 and v_2. Applying B to x is equivalent to applying B to the eigenvectors, and summing the result. On repeated application, we have B^i x = B^i v_1 + B^i v_2 = λ_1^i v_1 + λ_2^i v_2. If the magnitudes of all the eigenvalues are smaller than one, B^i x will converge to zero, because the eigenvectors that compose x converge to zero when B is repeatedly applied. If one of the eigenvalues has magnitude greater than one, x will diverge to infinity. This is why numerical analysts attach importance to the spectral radius of a matrix:

    ρ(B) = max |λ_i|,    λ_i is an eigenvalue of B.

If we want x to converge to zero quickly, ρ(B) should be less than one, and preferably as small as possible.
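The sketch below (my own illustration; the matrix is constructed for the demonstration and is not from the article) builds a matrix with eigenvalues 0.7 and -2 -- the values used in Figure 11 -- and applies it repeatedly to a sum of its eigenvectors. Because the spectral radius is 2 > 1, the iterates diverge.

```python
import numpy as np

# A hypothetical matrix with eigenvalues 0.7 and -2; the columns of V are its eigenvectors.
V = np.array([[1.0, 1.0],
              [0.0, 1.0]])
B = V @ np.diag([0.7, -2.0]) @ np.linalg.inv(V)

x = V[:, 0] + V[:, 1]                    # x = v1 + v2
for i in range(5):
    x = B @ x                            # each application scales v1 by 0.7 and v2 by -2
    print(i + 1, x, np.linalg.norm(x))

print(max(abs(np.linalg.eigvals(B))))    # spectral radius rho(B) = 2, so B^i x diverges
```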

[Figure 11: The vector x (solid arrow) can be expressed as a linear combination of eigenvectors (dashed arrows), whose associated eigenvalues are λ_1 = 0.7 and λ_2 = -2. The effect of repeatedly applying B to x is best understood by examining the effect of B on each eigenvector. When B is repeatedly applied, one eigenvector converges to zero while the other diverges; hence, B^i x also diverges.]

It's important to mention that there is a family of nonsymmetric matrices that do not have a full set of n independent eigenvectors. These matrices are known as defective, a name that betrays the well-deserved hostility they have earned from frustrated linear algebraists. The details are too complicated to describe in this article, but the behavior of defective matrices can be analyzed in terms of generalized eigenvectors and generalized eigenvalues. The rule that B^i x converges to zero if and only if all the generalized eigenvalues have magnitude smaller than one still holds, but is harder to prove.

Here's a useful fact: the eigenvalues of a positive-definite matrix are all positive. This fact can be proven from the definition of eigenvalue:

    B v = λ v
    v^T B v = λ v^T v.

By the definition of positive-definite, v^T B v is positive (for nonzero v). Hence, λ must be positive also.

5.2. Jacobi iterations

Of course, a procedure that always converges to zero isn't going to help you attract friends. Consider a more useful procedure: the Jacobi Method for solving Ax = b. The matrix A is split into two parts: D, whose diagonal elements are identical to those of A, and whose off-diagonal elements are zero; and E, whose diagonal elements are zero, and whose off-diagonal elements are identical to those of A. Thus, A = D + E. We derive the Jacobi Method:

    A x = b
    D x = -E x + b
    x = -D^{-1} E x + D^{-1} b
    x = B x + z,    where  B = -D^{-1} E  and  z = D^{-1} b.    (14)

Because D is diagonal, it is easy to invert. This identity can be converted into an iterative method by forming the recurrence

    x_(i+1) = B x_(i) + z.    (15)

Given a starting vector x_(0), this formula generates a sequence of vectors. Our hope is that each successive vector will be closer to the solution x than the last. x is called a stationary point of Equation 15, because if x_(i) = x, then x_(i+1) will also equal x.

Now, this derivation may seem quite arbitrary to you, and you're right. We could have formed any number of identities for x instead of Equation 14. In fact, simply by splitting A differently -- that is, by choosing a different D and E -- we could have derived the Gauss-Seidel method, or the method of Successive Over-Relaxation (SOR). Our hope is that we have chosen a splitting for which B has a small spectral radius. Here, I chose the Jacobi splitting arbitrarily for simplicity.
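The following is a minimal sketch of the Jacobi splitting and the recurrence of Equations 14 and 15, assuming A has a nonzero diagonal; the helper name jacobi is mine. Run on the sample problem, it converges to [2, -2]^T because, as computed in Section 5.3, the iteration matrix B has spectral radius √2/3 < 1.

```python
import numpy as np

def jacobi(A, b, x, iters=50):
    """Sketch of the Jacobi Method: split A = D + E, then iterate x <- Bx + z (Equations 14, 15)."""
    D_diag = np.diag(A)                   # diagonal part of A
    E = A - np.diag(D_diag)               # off-diagonal part of A
    B = -E / D_diag[:, None]              # B = -D^{-1} E (D is diagonal, so easy to invert)
    z = b / D_diag                        # z = D^{-1} b
    for _ in range(iters):
        x = B @ x + z                     # Equation 15
    return x

A = np.array([[3.0, 2.0], [2.0, 6.0]])
b = np.array([2.0, -8.0])
print(jacobi(A, b, x=np.array([-2.0, -2.0])))   # approaches [ 2. -2.]
```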


Suppose we start with some arbitrary vector x_(0). For each iteration, we apply B to this vector, then add z to the result. What does each iteration do?

Again, apply the principle of thinking of a vector as a sum of other, well-understood vectors. Express each iterate x_(i) as the sum of the exact solution x and the error term e_(i). Then, Equation 15 becomes

    x_(i+1) = B x_(i) + z
            = B (x + e_(i)) + z
            = B x + z + B e_(i)
            = x + B e_(i)          (by Equation 14),

so

    e_(i+1) = B e_(i).    (16)

Each iteration does not affect the "correct part" of x_(i) (because x is a stationary point); but each iteration does affect the error term. It is apparent from Equation 16 that if ρ(B) < 1, then the error term e_(i) will converge to zero as i approaches infinity. Hence, the initial vector x_(0) has no effect on the inevitable outcome!

Of course, the choice of x_(0) does affect the number of iterations required to converge to x within a given tolerance. However, its effect is less important than that of the spectral radius ρ(B), which determines the speed of convergence. Suppose that v_j is the eigenvector of B with the largest eigenvalue (so that ρ(B) = λ_j). If the initial error e_(0), expressed as a linear combination of eigenvectors, includes a component in the direction of v_j, this component will be the slowest to converge.

[Figure 12: The eigenvectors of A are directed along the axes of the paraboloid defined by the quadratic form f(x). Each eigenvalue is proportional to the steepness of the corresponding slope.]

B is not generally symmetric (even if A is), and may even be defective. However, the rate of convergence of the Jacobi Method depends largely on ρ(B), which depends on A. Unfortunately, Jacobi does not converge for every A, or even for every positive-definite A.

5.3. A Concrete Example

To demonstrate these ideas, I shall solve the system specified by Equation 4. First, we need a method of finding eigenvalues and eigenvectors. By definition, for any eigenvector v with eigenvalue λ,

    A v = λ v = λ I v,
    (λ I - A) v = 0.

Eigenvectors are nonzero, so λI - A must be singular. Then,

    det(λ I - A) = 0.

The determinant of λI - A is called the characteristic polynomial. It is a degree n polynomial in λ whose roots are the set of eigenvalues. The characteristic polynomial of A (from Equation 4) is

    det [ λ - 3    -2   ]  =  λ² - 9λ + 14  =  (λ - 7)(λ - 2),
        [  -2     λ - 6 ]

and the eigenvalues are 7 and 2. To find the eigenvector associated with λ = 7,

    (λ I - A) v = [  4  -2 ] [ v_1 ]  =  0,
                  [ -2   1 ] [ v_2 ]

    4 v_1 - 2 v_2 = 0.

Any solution to this equation is an eigenvector; say, v = [1, 2]^T. By the same method, we find that [-2, 1]^T is an eigenvector corresponding to the eigenvalue 2. In Figure 12, we see that these eigenvectors coincide with the axes of the familiar ellipsoid, and that a larger eigenvalue corresponds to a steeper slope. (Negative eigenvalues indicate that f decreases along the axis, as in Figures 5(b) and 5(d).)
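As a numerical cross-check (a sketch; note that numpy.linalg.eig returns unit-length eigenvectors, so its columns are scaled versions of [1, 2]^T and [-2, 1]^T, possibly in a different order):

```python
import numpy as np

A = np.array([[3.0, 2.0], [2.0, 6.0]])
eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)                             # 7 and 2, possibly in either order
print(eigenvectors)                            # columns proportional to [1, 2]^T and [-2, 1]^T
# Sanity check against the characteristic polynomial lambda^2 - 9*lambda + 14:
print(eigenvalues.sum(), eigenvalues.prod())   # 9.0 14.0
```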

Now, let's see the Jacobi Method in action. Using the constants specified by Equation 4, we have

    x_(i+1) = - [ 1/3   0  ] [ 0  2 ] x_(i)  +  [ 1/3   0  ] [  2 ]
                [  0   1/6 ] [ 2  0 ]           [  0   1/6 ] [ -8 ]

            =   [   0    -2/3 ] x_(i)  +  [  2/3 ]
                [ -1/3     0  ]           [ -4/3 ]

The eigenvectors of B are [√2, 1]^T with eigenvalue -√2/3, and [-√2, 1]^T with eigenvalue √2/3. These are graphed in Figure 13(a); note that they do not coincide with the eigenvectors of A, and are not related to the axes of the paraboloid.

Figure 13(b) shows the convergence of the Jacobi Method. The mysterious path the algorithm takes can be understood by watching the eigenvector components of each successive error term (Figures 13(c), (d), and (e)). Figure 13(f) plots the eigenvector components as arrowheads. These are converging normally at the rate defined by their eigenvalues, as in Figure 11.

I hope that this section has convinced you that eigenvectors are useful tools, and not just bizarre torture devices inflicted on you by your professors for the pleasure of watching you suffer (although the latter is a nice fringe benefit).

6. Convergence Analysis of Steepest Descent

6.1. Instant Results


To understand the convergence of Steepest Descent, let's first consider the case where e_(i) is an eigenvector with eigenvalue λ_e. Then, the residual r_(i) = -A e_(i) = -λ_e e_(i) is also an eigenvector. Equation 12 gives

    e_(i+1) = e_(i) + (r_(i)^T r_(i)) / (r_(i)^T A r_(i))  r_(i)
            = e_(i) + (r_(i)^T r_(i)) / (λ_e r_(i)^T r_(i))  (-λ_e e_(i))
            = 0.

Figure 14 demonstrates why it takes only one step to converge to the exact solution. The point x_(i) lies on one of the axes of the ellipsoid, and so the residual points directly to the center of the ellipsoid. Choosing α_(i) = λ_e^{-1} gives us instant convergence.

[Figure 13: Convergence of the Jacobi Method. (a) The eigenvectors of B are shown with their corresponding eigenvalues. Unlike the eigenvectors of A, these eigenvectors are not the axes of the paraboloid. (b) The Jacobi Method starts at [-2, -2]^T and converges at [2, -2]^T. (c, d, e) The error vectors e_(0), e_(1), e_(2) (solid arrows) and their eigenvector components (dashed arrows). (f) Arrowheads represent the eigenvector components of the first four error vectors. Each eigenvector component of the error is converging to zero at the expected rate based on its eigenvalue.]

[Figure 14: Steepest Descent converges to the exact solution on the first iteration if the error term is an eigenvector.]

For a more general analysis, we must express e_(i) as a linear combination of eigenvectors, and we shall furthermore require these eigenvectors to be orthonormal. It is proven in Appendix C2 that if A is symmetric, there exists a set of n orthogonal eigenvectors of A.

As we can scale eigenvectors arbitrarily, let us choose so that each eigenvector is of unit length. This choice gives us the useful property that

    v_j^T v_k = 1  if j = k,    v_j^T v_k = 0  if j ≠ k.    (17)

Express the error term as a linear combination of eigenvectors

    e_(i) = Σ_{j=1}^{n} ξ_j v_j,    (18)

where ξ_j is the length of each component of e_(i). From Equations 17 and 18 we have the following identities:

    r_(i) = -A e_(i) = -Σ_j ξ_j λ_j v_j,    (19)

    ||e_(i)||² = e_(i)^T e_(i) = Σ_j ξ_j²,    (20)

    e_(i)^T A e_(i) = (Σ_j ξ_j v_j^T)(Σ_j ξ_j λ_j v_j) = Σ_j ξ_j² λ_j,    (21)

    ||r_(i)||² = r_(i)^T r_(i) = Σ_j ξ_j² λ_j²,    (22)

    r_(i)^T A r_(i) = Σ_j ξ_j² λ_j³.    (23)

[Figure 15: Steepest Descent converges to the exact solution on the first iteration if the eigenvalues are all equal.]

Equation 19 shows that r_(i) too can be expressed as a sum of eigenvector components, and the length of these components are -ξ_j λ_j. Equations 20 and 22 are just Pythagoras' Law.

Now we can proceed with the analysis. Equation 12 gives

    e_(i+1) = e_(i) + (r_(i)^T r_(i)) / (r_(i)^T A r_(i))  r_(i)
            = e_(i) + (Σ_j ξ_j² λ_j²) / (Σ_j ξ_j² λ_j³)  r_(i).    (24)

We saw in the last example that, if e_(i) has only one eigenvector component, then convergence is achieved in one step by choosing α_(i) = λ_e^{-1}. Now let's examine the case where e_(i) is arbitrary, but all the eigenvectors have a common eigenvalue λ. Equation 24 becomes

    e_(i+1) = e_(i) + (λ² Σ_j ξ_j²) / (λ³ Σ_j ξ_j²)  (-λ e_(i))
            = 0.

Figure 15 demonstrates why, once again, there is instant convergence. Because all the eigenvalues are equal, the ellipsoid is spherical; hence, no matter what point we start at, the residual must point to the center of the sphere. As before, choose α_(i) = λ^{-1}.
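The one-step convergence claimed above is easy to check numerically. In this sketch (mine), the starting point is chosen so that the error is the eigenvector [1, 2]^T of the sample matrix, whose eigenvalue is 7; the line search then picks α = 1/7 and lands exactly on the solution.

```python
import numpy as np

A = np.array([[3.0, 2.0], [2.0, 6.0]])
b = np.array([2.0, -8.0])
x_star = np.linalg.solve(A, b)           # [ 2. -2.]

x = x_star + np.array([1.0, 2.0])        # error e = [1, 2]^T, an eigenvector with eigenvalue 7
r = b - A @ x
alpha = (r @ r) / (r @ A @ r)            # Equation 11
print(alpha)                             # 0.142857... = 1/7, the reciprocal eigenvalue
print(x + alpha * r)                     # [ 2. -2.]  -- exact convergence in one step
```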

[Figure 16: The energy norm of these two vectors is equal.]

However, if there are several unequal, nonzero eigenvalues, then no choice of α_(i) will eliminate all the eigenvector components, and our choice becomes a sort of compromise. In fact, the fraction in Equation 24 is best thought of as a weighted average of the values of λ_j^{-1}. The weights ξ_j² ensure that longer components of e_(i) are given precedence. As a result, on any given iteration, some of the shorter components of e_(i) might actually increase in length (though never for long). For this reason, the methods of Steepest Descent and Conjugate Gradients are called roughers. By contrast, the Jacobi Method is a smoother, because every eigenvector component is reduced on every iteration. Steepest Descent and Conjugate Gradients are not smoothers, although they are often erroneously identified as such in the mathematical literature.

6.2. General Convergence

To bound the convergence of Steepest Descent in the general case, we shall define the energy norm ||e||_A = (e^T A e)^{1/2} (see Figure 16). This norm is easier to work with than the Euclidean norm, and is in some sense a more natural norm; examination of Equation 8 shows that minimizing ||e_(i)||_A is equivalent to

minimizing f(x_(i)). With this norm, we have

    ||e_(i+1)||²_A = e_(i+1)^T A e_(i+1)
                   = (e_(i)^T + α_(i) r_(i)^T) A (e_(i) + α_(i) r_(i))                                   (by Equation 12)
                   = e_(i)^T A e_(i) + 2 α_(i) r_(i)^T A e_(i) + α_(i)² r_(i)^T A r_(i)                   (by symmetry of A)
                   = ||e_(i)||²_A + 2 (r_(i)^T r_(i))/(r_(i)^T A r_(i)) (-r_(i)^T r_(i)) + ((r_(i)^T r_(i))/(r_(i)^T A r_(i)))² r_(i)^T A r_(i)
                   = ||e_(i)||²_A - (r_(i)^T r_(i))² / (r_(i)^T A r_(i))
                   = ||e_(i)||²_A ( 1 - (r_(i)^T r_(i))² / ((r_(i)^T A r_(i)) (e_(i)^T A e_(i))) )
                   = ||e_(i)||²_A ( 1 - (Σ_j ξ_j² λ_j²)² / ((Σ_j ξ_j² λ_j³)(Σ_j ξ_j² λ_j)) )              (by Identities 21, 22, 23)
                   = ||e_(i)||²_A ω²,    where  ω² = 1 - (Σ_j ξ_j² λ_j²)² / ((Σ_j ξ_j² λ_j³)(Σ_j ξ_j² λ_j)).    (25)

The analysis depends on finding an upper bound for ω. To demonstrate how the weights and eigenvalues affect convergence, I shall derive a result for n = 2. Assume that λ_1 ≥ λ_2. The spectral condition number of A is defined to be κ = λ_1/λ_2 ≥ 1. The slope of e_(i) (relative to the coordinate system defined by the eigenvectors), which depends on the starting point, is denoted μ = ξ_2/ξ_1. We have

    ω² = 1 - (ξ_1² λ_1² + ξ_2² λ_2²)² / ((ξ_1² λ_1 + ξ_2² λ_2)(ξ_1² λ_1³ + ξ_2² λ_2³))
       = 1 - (κ² + μ²)² / ((κ + μ²)(κ³ + μ²)).    (26)

The value of ω, which determines the rate of convergence of Steepest Descent, is graphed as a function of μ and κ in Figure 17. The graph confirms my two examples. If e_(0) is an eigenvector, then the slope μ is zero (or infinite); we see from the graph that ω is zero, so convergence is instant. If the eigenvalues are equal, then the condition number κ is one; again, we see that ω is zero.

Figure 18 illustrates examples from near each of the four corners of Figure 17. These quadratic forms are graphed in the coordinate system defined by their eigenvectors. Figures 18(a) and 18(b) are examples with a large condition number. Steepest Descent can converge quickly if a fortunate starting point is chosen (Figure 18(a)), but is usually at its worst when κ is large (Figure 18(b)). The latter figure gives us our best intuition for why a large condition number can be bad: f(x) forms a trough, and Steepest Descent bounces back and forth between the sides of the trough while making little progress along its length. In Figures 18(c) and 18(d), the condition number is small, so the quadratic form is nearly spherical, and convergence is quick regardless of the starting point.

Holding κ constant (because A is fixed), a little basic calculus reveals that Equation 26 is maximized when μ = ±κ. In Figure 17, one can see a faint ridge defined by this line. Figure 19 plots worst-case starting points for our sample matrix A. These starting points fall on the lines defined by ξ_2 / ξ_1 = ±κ.

[Figure 17: Convergence ω of Steepest Descent as a function of μ (the slope of e_(i)) and κ (the condition number of A). Convergence is fast when μ or κ are small. For a fixed matrix, convergence is worst when μ = ±κ.]

[Figure 18: These four examples represent points near the corresponding four corners of the graph in Figure 17. (a) Large κ, small μ. (b) An example of poor convergence. κ and μ are both large. (c) Small κ and μ. (d) Small κ, large μ.]

[Figure 19: Solid lines represent the starting points that give the worst convergence for Steepest Descent. Dashed lines represent steps toward convergence. If the first iteration starts from a worst-case point, so do all succeeding iterations. Each step taken intersects the paraboloid axes (gray arrows) at precisely a 45° angle. Here, κ = 3.5.]

An upper bound for ω (corresponding to the worst-case starting points) is found by setting μ² = κ²:

    ω² ≤ 1 - 4κ⁴ / (κ⁵ + 2κ⁴ + κ³)
       = (κ⁵ - 2κ⁴ + κ³) / (κ⁵ + 2κ⁴ + κ³)
       = (κ - 1)² / (κ + 1)²,

    ω ≤ (κ - 1) / (κ + 1).    (27)

Inequality 27 is plotted in Figure 20. The more ill-conditioned the matrix (that is, the larger its condition number κ), the slower the convergence of Steepest Descent. In Section 9.2, it is proven that Equation 27 is also valid for n > 2, if the condition number of a symmetric, positive-definite matrix is defined to be

    κ = λ_max / λ_min,

the ratio of the largest to smallest eigenvalue. The convergence results for Steepest Descent are

    ||e_(i)||_A ≤ ( (κ - 1)/(κ + 1) )^i ||e_(0)||_A,    and    (28)

    (f(x_(i)) - f(x)) / (f(x_(0)) - f(x)) = ( (1/2) e_(i)^T A e_(i) ) / ( (1/2) e_(0)^T A e_(0) )    (by Equation 8)
                                          ≤ ( (κ - 1)/(κ + 1) )^{2i}.

[Figure 20: Convergence of Steepest Descent (per iteration) worsens as the condition number of the matrix increases.]
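The per-iteration bound of Equation 28 can be checked numerically on the sample problem, for which κ = 7/2 = 3.5 and (κ - 1)/(κ + 1) = 5/9. The sketch below (my own; helper names are assumptions) measures the energy-norm reduction of each Steepest Descent step and compares it against the bound.

```python
import numpy as np

A = np.array([[3.0, 2.0], [2.0, 6.0]])
b = np.array([2.0, -8.0])
x_star = np.linalg.solve(A, b)

def energy_norm(e):
    return np.sqrt(e @ A @ e)            # ||e||_A

kappa = 7.0 / 2.0                        # lambda_max / lambda_min for this A
bound = (kappa - 1.0) / (kappa + 1.0)    # worst-case per-iteration factor, Equation 28

x = np.array([-2.0, -2.0])               # the starting point of Figure 8
for i in range(5):
    r = b - A @ x
    x_new = x + (r @ r) / (r @ A @ r) * r
    ratio = energy_norm(x_new - x_star) / energy_norm(x - x_star)
    print(i, round(ratio, 4), "<=", round(bound, 4))   # every measured ratio stays below 5/9
    x = x_new
```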

7. The Method of Conjugate Directions

7.1. Conjugacy

Steepest Descent often finds itself taking steps in the same direction as earlier steps (see Figure 8). Wouldn't it be better if, every time we took a step, we got it right the first time? Here's an idea: let's pick a set of orthogonal search directions d_(0), d_(1), ..., d_(n-1). In each search direction, we'll take exactly one step, and that step will be just the right length to line up evenly with x. After n steps, we'll be done.

Figure 21 illustrates this idea, using the coordinate axes as search directions. The first (horizontal) step leads to the correct x_1-coordinate; the second (vertical) step will hit home. Notice that e_(1) is orthogonal to d_(0). In general, for each step we choose a point

    x_(i+1) = x_(i) + α_(i) d_(i).    (29)

[Figure 21: The Method of Orthogonal Directions. Unfortunately, this method only works if you already know the answer.]

To find the value of α_(i), use the fact that e_(i+1) should be orthogonal to d_(i), so that we need never step in the direction of d_(i) again. Using this condition, we have

    d_(i)^T e_(i+1) = 0
    d_(i)^T (e_(i) + α_(i) d_(i)) = 0        (by Equation 29)

    α_(i) = - (d_(i)^T e_(i)) / (d_(i)^T d_(i)).    (30)

Unfortunately, we haven't accomplished anything, because we can't compute α_(i) without knowing e_(i); and if we knew e_(i), the problem would already be solved.


The solution is to make the search directions A-orthogonal instead of orthogonal. Two vectors d_(i) and d_(j) are A-orthogonal, or conjugate, if

    d_(i)^T A d_(j) = 0.

Figure 22(a) shows what A-orthogonal vectors look like. Imagine if this article were printed on bubble gum, and you grabbed Figure 22(a) by the ends and stretched it until the ellipses appeared circular. The vectors would then appear orthogonal, as in Figure 22(b).


Our new requirement is that e_(i+1) be A-orthogonal to d_(i) (see Figure 23(a)). Not coincidentally, this orthogonality condition is equivalent to finding the minimum point along the search direction d_(i), as in Steepest Descent. To see this, set the directional derivative to zero:

    (d/dα) f(x_(i+1)) = 0
    f'(x_(i+1))^T (d/dα) x_(i+1) = 0
    -r_(i+1)^T d_(i) = 0
    d_(i)^T A e_(i+1) = 0.

[Figure 22: These pairs of vectors are A-orthogonal ... because these pairs of vectors are orthogonal.]

Following the derivation of Equation 30, here is the expression for α_(i) when the search directions are A-orthogonal:

    α_(i) = - (d_(i)^T A e_(i)) / (d_(i)^T A d_(i))    (31)
          = (d_(i)^T r_(i)) / (d_(i)^T A d_(i)).    (32)

Unlike Equation 30, we can calculate this expression. Note that if the search vector were the residual, this formula would be identical to the formula used by Steepest Descent.
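Equation 32 is enough to run the Method of Conjugate Directions once a set of A-orthogonal directions is in hand. In the sketch below (mine), the two orthogonal eigenvectors of the sample matrix are used as ready-made conjugate directions (they are A-orthogonal because A v_2 is parallel to v_2, which is orthogonal to v_1); the method lands on the solution in exactly n = 2 steps.

```python
import numpy as np

A = np.array([[3.0, 2.0], [2.0, 6.0]])
b = np.array([2.0, -8.0])

# For this 2x2 example the mutually orthogonal eigenvectors of A are also
# A-orthogonal, so they can serve as conjugate search directions.
directions = [np.array([1.0, 2.0]), np.array([-2.0, 1.0])]
print(directions[0] @ A @ directions[1])        # 0.0: the directions are A-orthogonal

x = np.array([-2.0, -2.0])
for d in directions:                            # n = 2 steps, one per direction
    r = b - A @ x
    alpha = (d @ r) / (d @ A @ d)               # Equation 32
    x = x + alpha * d                           # Equation 29
print(x)                                        # [ 2. -2.] after exactly n steps
```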

[Figure 23: The method of Conjugate Directions converges in n steps. (a) The first step is taken along some direction d_(0). The minimum point x_(1) is chosen by the constraint that e_(1) must be A-orthogonal to d_(0). (b) The initial error e_(0) can be expressed as a sum of A-orthogonal components (gray arrows). Each step of Conjugate Directions eliminates one of these components.]

To prove that this procedure really does compute x in n steps, express the error term as a linear combination of search directions; namely,

    e_(0) = Σ_{j=0}^{n-1} δ_j d_(j).    (33)

The values of δ_j can be found by a mathematical trick. Because the search directions are A-orthogonal, it is possible to eliminate all the δ_j values but one from Expression 33 by premultiplying the expression by d_(k)^T A:

    d_(k)^T A e_(0) = Σ_j δ_j d_(k)^T A d_(j)
    d_(k)^T A e_(0) = δ_k d_(k)^T A d_(k)                                          (by A-orthogonality of d vectors)

    δ_k = (d_(k)^T A e_(0)) / (d_(k)^T A d_(k))
        = (d_(k)^T A (e_(0) + Σ_{i=0}^{k-1} α_(i) d_(i))) / (d_(k)^T A d_(k))       (by A-orthogonality of d vectors)
        = (d_(k)^T A e_(k)) / (d_(k)^T A d_(k)).                                    (by Equation 29)    (34)

By Equations 31 and 34, we find that α_(i) = -δ_i. This fact gives us a new way to look at the error term. As the following equation shows, the process of building up x component by component can also be viewed as a process of cutting down the error term component by component (see Figure 23(b)).

    e_(i) = e_(0) + Σ_{j=0}^{i-1} α_(j) d_(j)
          = Σ_{j=0}^{n-1} δ_j d_(j) - Σ_{j=0}^{i-1} δ_j d_(j)
          = Σ_{j=i}^{n-1} δ_j d_(j).    (35)

After n iterations, every component is cut away, and e_(n) = 0; the proof is complete.

m

–

"

•¯Ô All that is needed now is a set of -orthogonal search directions n . Fortunately, there is a simple way

to generate them, called a conjugate Gram-Schmidt process.

8

 

Û Û Û

Suppose we have a set of linearly independent vectors 0 1  1. The coordinate axes will

m

E E E

"

"

Ô Û

do in a pinch, although more intelligent choices are possible. To construct n , take and subtract out ¤

any components that are not -orthogonal to the previous Ô vectors (see Figure 24). In other words, set

m

, §

Ô Œ Û

0 n 0, and for 0, set "

8 1

·

m § m

"

"

"

Ô Û ±ØÔ

±

) n

n (36)

#

E ±

0 Ü

,ݶ

"

± Œ

where the are de®ned for . To ®nd their values, use the same trick used to ®nd £ :

"

Ù Ü

8 1

·

  

¤ m § ¤ m ¤ m

m m

"

"

"

Ô Ô Û ±ØÔ Ô Ô

£ £ £

±

)

n n n

n n

# ±

0 Ü

 

m m

§ ¤ ¤ ¤ ,

m

"

"

Û Ô Ô £¯Ô Ô Œ

£ £

£

) n

0 n (by -orthogonality of vectors)

n

E µ

¤

 m

Ü

"

Û Ô

§ * £

n

"

£ ¤

m 

m (37)

Ô Ô

£

£

n

n Ü

The difficulty with using Gram-Schmidt conjugation in the method of Conjugate Directions is that all the old search vectors must be kept in memory to construct each new one, and furthermore Θ(n³) operations are required to generate the full set.

[Figure 24: Gram-Schmidt conjugation of two vectors. Begin with two linearly independent vectors u_0 and u_1. Set d_(0) = u_0. The vector u_1 is composed of two components: u*, which is A-orthogonal (or conjugate) to d_(0), and u+, which is parallel to d_(0). After conjugation, only the A-orthogonal portion remains, and d_(1) = u*.]

[Figure 25: The method of Conjugate Directions using the axial unit vectors, also known as Gaussian elimination.]

In fact, if the search vectors are constructed by conjugation of the axial unit vectors, Conjugate Directions becomes equivalent to performing Gaussian elimination (see Figure 25). As a result, the method of Conjugate Directions enjoyed little use until the discovery of CG -- which is a method of Conjugate Directions -- cured these disadvantages.
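A direct transcription of Equations 36 and 37 looks like the sketch below (my own helper names; it keeps every old direction in memory, which is exactly the drawback described above). Starting from the axial unit vectors, it produces an A-orthogonal set for the sample matrix.

```python
import numpy as np

def conjugate_gram_schmidt(A, U):
    """Sketch of conjugate Gram-Schmidt (Equations 36 and 37): turn linearly
    independent vectors u_0..u_{n-1} into A-orthogonal directions d_0..d_{n-1}."""
    D = []
    for u in U:
        d = u.astype(float).copy()
        for d_j in D:
            beta = -(u @ A @ d_j) / (d_j @ A @ d_j)   # Equation 37
            d = d + beta * d_j                        # Equation 36
        D.append(d)
    return D

A = np.array([[3.0, 2.0], [2.0, 6.0]])
U = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]      # the axial unit vectors
d0, d1 = conjugate_gram_schmidt(A, U)
print(d0, d1)                                         # [1. 0.] and [-0.666..., 1.]
print(d0 @ A @ d1)                                    # 0.0 (up to roundoff): A-orthogonal
```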

An important key to understanding the method of Conjugate Directions (and also CG) is to notice that Figure 25 is just a stretched copy of Figure 21! Remember that when one is performing the method of Conjugate Directions (including CG), one is simultaneously performing the method of Orthogonal Directions in a stretched (scaled) space.

7.3. Optimality of the Error Term

Conjugate Directions has an interesting property: it finds at every step the best solution within the bounds of where it's been allowed to explore. Where has it been allowed to explore? Let D_i be the i-dimensional subspace span{d_(0), d_(1), ..., d_(i-1)}; the value e_(i) is chosen from e_(0) + D_i. What do I mean by "best solution"? I mean that Conjugate Directions chooses the value from e_(0) + D_i that minimizes ||e_(i)||_A (see Figure 26). In fact, some authors derive CG by trying to minimize ||e_(i)||_A within e_(0) + D_i.

[Figure 26: In this figure, the shaded area is e_(0) + D_2 = e_(0) + span{d_(0), d_(1)}. The ellipsoid is a contour on which the energy norm is constant. After two steps, Conjugate Directions finds e_(2), the point on e_(0) + D_2 that minimizes ||e||_A.]

In the same way that the error term can be expressed as a linear combination of search directions (Equation 35), its energy norm can be expressed as a summation:

    ||e_(i)||²_A = Σ_{j=i}^{n-1} Σ_{k=i}^{n-1} δ_j δ_k d_(j)^T A d_(k)        (by Equation 35)
                 = Σ_{j=i}^{n-1} δ_j² d_(j)^T A d_(j)                          (by A-orthogonality of d vectors).

Each term in this summation is associated with a search direction that has not yet been traversed. Any other vector e chosen from e_(0) + D_i must have these same terms in its expansion, which proves that e_(i) must have the minimum energy norm.

Having proven the optimality with equations, let's turn to the intuition. Perhaps the best way to visualize the workings of Conjugate Directions is by comparing the space we are working in with a "stretched" space, as in Figure 22. Figures 27(a) and 27(c) demonstrate the behavior of Conjugate Directions in R² and R³; lines that appear perpendicular in these illustrations are orthogonal. On the other hand, Figures 27(b) and 27(d) show the same drawings in spaces that are stretched (along the eigenvector axes) so that the ellipsoidal contour lines become spherical. Lines that appear perpendicular in these illustrations are A-orthogonal.


In Figure 27(a), the Method of Conjugate Directions begins at x_(0), takes a step in the direction of d_(0), and stops at the point x_(1), where the error vector e_(1) is A-orthogonal to d_(0). Why should we expect this to be the minimum point on x_(0) + D_1? The answer is found in Figure 27(b): in this stretched space, e_(1) appears perpendicular to d_(0) because they are A-orthogonal. The error vector e_(1) is a radius of the concentric circles that represent contours on which ||e||_A is constant, so x_(0) + D_1 must be tangent at x_(1) to the circle that x_(1) lies on. Hence, x_(1) is the point on x_(0) + D_1 that minimizes ||e_(1)||_A.

This is not surprising; we have already seen in Section 7.1 that A-conjugacy of the search direction and the error term is equivalent to minimizing f (and therefore ||e||_A) along the search direction. However, after Conjugate Directions takes a second step, minimizing ||e||_A along a second search direction d_(1), why should we expect that ||e||_A will still be minimized in the direction of d_(0)? After taking i steps, why will f(x_(i)) be minimized over all of x_(0) + D_i?

[Figure 27: Optimality of the Method of Conjugate Directions. (a) A two-dimensional problem. Lines that appear perpendicular are orthogonal. (b) The same problem in a "stretched" space. Lines that appear perpendicular are A-orthogonal. (c) A three-dimensional problem. Two concentric ellipsoids are shown; x is at the center of both. The line x_(0) + D_1 is tangent to the outer ellipsoid at x_(1). The plane x_(0) + D_2 is tangent to the inner ellipsoid at x_(2). (d) Stretched view of the three-dimensional problem.]

In Figure 27(b), d_(0) and d_(1) appear perpendicular because they are A-orthogonal. It is clear that d_(1) must point to the solution x, because d_(0) is tangent at x_(1) to a circle whose center is x. However, a three-dimensional example is more revealing. Figures 27(c) and 27(d) each show two concentric ellipsoids. The point x_(1) lies on the outer ellipsoid, and x_(2) lies on the inner ellipsoid. Look carefully at these figures: the plane x_(0) + D_2 slices through the larger ellipsoid, and is tangent to the smaller ellipsoid at x_(2). The point x is at the center of the ellipsoids, underneath the plane.

Looking at Figure 27(c), we can rephrase our question. Suppose you and I are standing at x_(1), and want to walk to the point that minimizes ||e|| on x_(0) + D_2; but we are constrained to walk along the search direction d_(1). If d_(1) points to the minimum point, we will succeed. Is there any reason to expect that d_(1) will point the right way?


Figure 27(d) supplies an answer. Because d_(1) is A-orthogonal to d_(0), they are perpendicular in this diagram. Now, suppose you were staring down at the plane x_(0) + D_2 as if it were a sheet of paper; the sight you'd see would be identical to Figure 27(b). The point x_(2) would be at the center of the paper, and the point x would lie underneath the paper, directly under the point x_(2). Because d_(0) and d_(1) are perpendicular, d_(1) points directly to x_(2), which is the point in x_(0) + D_2 closest to x. The plane x_(0) + D_2 is tangent to the sphere on which x_(2) lies. If you took a third step, it would be straight down from x_(2) to x, in a direction A-orthogonal to D_2.

Another way to understand what is happening in Figure 27(d) is to imagine yourself standing at the

m

¥ ¥

"

å ) solution point , pulling a string connected to a bead that is constrained to lie in 0 n . Each time the

expanding subspace å is enlarged by a dimension, the bead becomes free to move a little closer to you. Now if you stretch the space so it looks like Figure 27(c), you have the Method of Conjugate Directions.

Another important property of Conjugate Directions is visible in these illustrations. We have seen that, at each step, the hyperplane x_(0) + D_i is tangent to the ellipsoid on which x_(i) lies. Recall from Section 4 that the residual at any point is orthogonal to the ellipsoidal surface at that point. It follows that r_(i) is orthogonal to D_i as well. To show this fact mathematically, premultiply Equation 35 by -d_(i)^T A:

    -d_(i)^T A e_(j) = -Σ_{k=j}^{n-1} δ_(k) d_(i)^T A d_(k),                     (38)

    d_(i)^T r_(j) = 0,    i < j    (by A-orthogonality of d-vectors).             (39)

We could have derived this identity by another tack. Recall that once we take a step in a search direction, we need never step in that direction again; the error term is evermore A-orthogonal to all the old search directions. Because r_(i) = -A e_(i), the residual is evermore orthogonal to all the old search directions.

Because the search directions are constructed from the u vectors, the subspace spanned by u_0, ..., u_{i-1} is D_i, and the residual r_(i) is orthogonal to these previous u vectors as well (see Figure 28). This is proven by taking the inner product of Equation 36 and r_(j):

    d_(i)^T r_(j) = u_i^T r_(j) + Σ_{k=0}^{i-1} β_{ik} d_(k)^T r_(j),             (40)
    0 = u_i^T r_(j),    i < j    (by Equation 39).                                (41)

There is one more identity we will use later. From Equation 40 (and Figure 28),

    d_(i)^T r_(i) = u_i^T r_(i).                                                  (42)


Figure 28: Because the search directions d_(0), d_(1) are constructed from the vectors u_0, u_1, they span the same subspace D_2 (the gray-colored plane). The error term e_(2) is A-orthogonal to D_2, the residual r_(2) is orthogonal to D_2, and a new search direction d_(2) is constructed (from u_2) to be A-orthogonal to D_2. The endpoints of u_2 and d_(2) lie on a plane parallel to D_2, because d_(2) is constructed from u_2 by Gram-Schmidt conjugation.

To conclude this section, I note that as with the method of Steepest Descent, the number of matrix-vector products per iteration can be reduced to one by using a recurrence to find the residual:

    r_(i+1) = -A e_(i+1)
            = -A (e_(i) + α_(i) d_(i))
            = r_(i) - α_(i) A d_(i).                                              (43)

8. The Method of Conjugate Gradients

It may seem odd that an article about CG doesn't describe CG until page 30, but all the machinery is now in place. In fact, CG is simply the method of Conjugate Directions where the search directions are constructed by conjugation of the residuals (that is, by setting u_i = r_(i)).

This choice makes sense for many reasons. First, the residuals worked for Steepest Descent, so why not for Conjugate Directions? Second, the residual has the nice property that it's orthogonal to the previous search directions (Equation 39), so it's guaranteed always to produce a new, linearly independent search direction unless the residual is zero, in which case the problem is already solved. As we shall see, there is an even better reason to choose the residual.

Let's consider the implications of this choice. Because the search vectors are built from the residuals, the subspace span{r_(0), r_(1), ..., r_(i-1)} is equal to D_i. As each residual is orthogonal to the previous search directions, it is also orthogonal to the previous residuals (see Figure 29); Equation 41 becomes

    r_(i)^T r_(j) = 0,    i ≠ j.                                                  (44)

Interestingly, Equation 43 shows that each new residual r_(i) is just a linear combination of the previous residual and A d_(i-1). Recalling that d_(i-1) ∈ D_i, this fact implies that each new subspace D_{i+1} is formed from the union of the previous subspace D_i and the subspace A D_i. Hence,

    D_i = span{d_(0), A d_(0), A^2 d_(0), ..., A^{i-1} d_(0)}
        = span{r_(0), A r_(0), A^2 r_(0), ..., A^{i-1} r_(0)}.


Figure 29: In the method of Conjugate Gradients, each new residual is orthogonal to all the previous residuals and search directions; and each new search direction is constructed (from the residual) to be A-orthogonal to all the previous residuals and search directions. The endpoints of r_(2) and d_(2) lie on a plane parallel to D_2 (the shaded subspace). In CG, d_(2) is a linear combination of r_(2) and d_(1).

This subspace is called a Krylov subspace, a subspace created by repeatedly applying a matrix to a vector. It has a pleasing property: because A D_i is included in D_{i+1}, the fact that the next residual r_(i+1) is orthogonal to D_{i+1} (Equation 39) implies that r_(i+1) is A-orthogonal to D_i. Gram-Schmidt conjugation becomes easy, because r_(i+1) is already A-orthogonal to all of the previous search directions except d_(i)!

Recall from Equation 37 that the Gram-Schmidt constants are β_{ij} = -r_(i)^T A d_(j) / (d_(j)^T A d_(j)); let us simplify this expression. Taking the inner product of r_(i) and Equation 43,

    r_(i)^T r_(j+1) = r_(i)^T r_(j) - α_(j) r_(i)^T A d_(j)
    α_(j) r_(i)^T A d_(j) = r_(i)^T r_(j) - r_(i)^T r_(j+1)

    r_(i)^T A d_(j) =   (1/α_(i)) r_(i)^T r_(i),        i = j,
                       -(1/α_(i-1)) r_(i)^T r_(i),      i = j + 1,        (By Equation 44.)
                        0,                              otherwise.

Therefore,

    β_{ij} =  r_(i)^T r_(i) / (α_(i-1) d_(i-1)^T A d_(i-1)),    i = j + 1,
              0,                                                i > j + 1.        (By Equation 37.)

As if by magic, most of the β terms have disappeared. It is no longer necessary to store old search vectors to ensure the A-orthogonality of new search vectors. This major advance is what makes CG as important an algorithm as it is, because both the space complexity and time complexity per iteration are reduced from O(n^2) to O(m), where m is the number of nonzero entries of A. Henceforth, I shall use the abbreviation β_(i) = β_{i,i-1}. Simplifying further:

    β_(i) = r_(i)^T r_(i) / (d_(i-1)^T r_(i-1))          (by Equation 32)
          = r_(i)^T r_(i) / (r_(i-1)^T r_(i-1))          (by Equation 42)


Figure 30: The method of Conjugate Gradients.

Let's put it all together into one piece now. The method of Conjugate Gradients is:

    d_(0) = r_(0) = b - A x_(0),                                                  (45)
    α_(i) = r_(i)^T r_(i) / (d_(i)^T A d_(i))        (by Equations 32 and 42),    (46)
    x_(i+1) = x_(i) + α_(i) d_(i),
    r_(i+1) = r_(i) - α_(i) A d_(i),                                              (47)
    β_(i+1) = r_(i+1)^T r_(i+1) / (r_(i)^T r_(i)),                                (48)
    d_(i+1) = r_(i+1) + β_(i+1) d_(i).                                            (49)

The performance of CG on our sample problem is demonstrated in Figure 30. The name "Conjugate Gradients" is a bit of a misnomer, because the gradients are not conjugate, and the conjugate directions are not all gradients. "Conjugated Gradients" would be more accurate.
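For readers who prefer code, here is a minimal Python (NumPy) sketch of Equations 45-49. It is an illustration only: the function name, stopping test, and small test problem are my own choices, not part of the article.

    import numpy as np

    def conjugate_gradients(A, b, x0, max_iter=None, tol=1e-10):
        # Linear CG following Equations 45-49; A must be symmetric positive-definite.
        x = x0.astype(float).copy()
        r = b - A @ x                     # Equation 45: r_(0) = b - A x_(0)
        d = r.copy()                      # d_(0) = r_(0)
        delta_new = r @ r
        delta_0 = delta_new
        n = len(b) if max_iter is None else max_iter
        for i in range(n):
            if delta_new <= tol * tol * delta_0:
                break
            q = A @ d
            alpha = delta_new / (d @ q)   # Equation 46
            x = x + alpha * d
            r = r - alpha * q             # Equation 47: recurrence for the residual
            delta_old = delta_new
            delta_new = r @ r
            beta = delta_new / delta_old  # Equation 48
            d = r + beta * d              # Equation 49
        return x

    # A small symmetric positive-definite test system (illustration only); its solution is [2, -2].
    A = np.array([[3.0, 2.0], [2.0, 6.0]])
    b = np.array([2.0, -8.0])
    print(conjugate_gradients(A, b, np.array([-2.0, -2.0])))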

9. Convergence Analysis of Conjugate Gradients

CG is complete after n iterations, so why should we care about convergence analysis? In practice, accumulated floating point roundoff error causes the residual to gradually lose accuracy, and cancellation error causes the search vectors to lose A-orthogonality. The former problem could be dealt with as it was for Steepest Descent, but the latter problem is not easily curable. Because of this loss of conjugacy, the mathematical community discarded CG during the 1960s, and interest only resurged when evidence for its effectiveness as an iterative procedure was published in the seventies.

Times have changed, and so has our outlook. Today, convergence analysis is important because CG is commonly used for problems so large it is not feasible to run even n iterations. Convergence analysis is seen less as a ward against floating point error, and more as a proof that CG is useful for problems that are out of the reach of any exact algorithm.

The first iteration of CG is identical to the first iteration of Steepest Descent, so without changes, Section 6.1 describes the conditions under which CG converges on the first iteration.

9.1. Picking Perfect Polynomials

We have seen that at each step of CG, the value e_(i) is chosen from e_(0) + D_i, where

    D_i = span{r_(0), A r_(0), A^2 r_(0), ..., A^{i-1} r_(0)}
        = span{A e_(0), A^2 e_(0), A^3 e_(0), ..., A^i e_(0)}.

Krylov subspaces such as this have another pleasing property. For a fixed i, the error term has the form

    e_(i) = ( I + Σ_{j=1}^{i} ψ_j A^j ) e_(0).

The coefficients ψ_j are related to the values α_(i) and β_(i), but the precise relationship is not important here. What is important is the proof in Section 7.3 that CG chooses the ψ_j coefficients that minimize ||e_(i)||_A.

The expression in parentheses above can be expressed as a polynomial. Let P_i(λ) be a polynomial of degree i. P_i can take either a scalar or a matrix as its argument, and will evaluate to the same; for instance, if P_2(λ) = 2λ^2 + 1, then P_2(A) = 2A^2 + I. This flexible notation comes in handy for eigenvectors, for which you should convince yourself that P_i(A) v = P_i(λ) v. (Note that A v = λ v, A^2 v = λ^2 v, and so on.) Now we can express the error term as

    e_(i) = P_i(A) e_(0),

if we require that P_i(0) = 1. CG chooses this polynomial when it chooses the ψ_j coefficients. Let's examine the effect of applying this polynomial to e_(0). As in the analysis of Steepest Descent, express e_(0) as a linear combination of orthogonal unit eigenvectors,

    e_(0) = Σ_{j=1}^{n} ξ_j v_j,

and we find that

    e_(i)         = Σ_j ξ_j P_i(λ_j) v_j,
    A e_(i)       = Σ_j ξ_j P_i(λ_j) λ_j v_j,
    ||e_(i)||_A^2 = Σ_j ξ_j^2 [P_i(λ_j)]^2 λ_j.


Figure 31: The convergence of CG after i iterations depends on how close a polynomial of degree i can be to zero on each eigenvalue, given the constraint that P_i(0) = 1.

CG finds the polynomial that minimizes this expression, but convergence is only as good as the convergence of the worst eigenvector. Letting Λ(A) be the set of eigenvalues of A, we have

    ||e_(i)||_A^2 ≤ min_{P_i} max_{λ ∈ Λ(A)} [P_i(λ)]^2 Σ_j ξ_j^2 λ_j
                  = min_{P_i} max_{λ ∈ Λ(A)} [P_i(λ)]^2 ||e_(0)||_A^2.            (50)

Figure 31 illustrates, for several values of i, the P_i that minimizes this expression for our sample problem with eigenvalues 2 and 7. There is only one polynomial of degree zero that satisfies P_0(0) = 1, and that is P_0(λ) = 1, graphed in Figure 31(a). The optimal polynomial of degree one is P_1(λ) = 1 - 2λ/9, graphed in Figure 31(b). Note that P_1(2) = 5/9 and P_1(7) = -5/9, and so the energy norm of the error term after one iteration of CG is no greater than 5/9 its initial value. Figure 31(c) shows that, after two iterations, Equation 50 evaluates to zero. This is because a polynomial of degree two can be fit to three points (P_2(0) = 1, P_2(2) = 0, and P_2(7) = 0). In general, a polynomial of degree n can fit n + 1 points, and thereby accommodate n separate eigenvalues.
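As a quick numerical check of the degree-one case (an illustrative Python sketch, not part of the original text):

    # Evaluate the optimal degree-one polynomial P_1(lambda) = 1 - 2*lambda/9
    # at the constraint point and at the sample eigenvalues 2 and 7.
    P1 = lambda lam: 1.0 - 2.0 * lam / 9.0
    print(P1(0.0))   # 1.0, satisfying the constraint P_1(0) = 1
    print(P1(2.0))   # 0.555... =  5/9
    print(P1(7.0))   # -0.555... = -5/9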

The foregoing discussion reinforces our understanding that CG yields the exact result after n iterations, and furthermore proves that CG is quicker if there are duplicated eigenvalues. Given infinite floating point precision, the number of iterations required to compute an exact solution is at most the number of distinct eigenvalues. (There is one other possibility for early termination: x_(0) may already be A-orthogonal to some of the eigenvectors of A. If eigenvectors are missing from the expansion of x_(0), their eigenvalues may be omitted from consideration in Equation 50. Be forewarned, however, that these eigenvectors may be reintroduced by floating point roundoff error.)


Figure 32: Chebyshev polynomials of degree 2, 5, 10, and 49.

We also find that CG converges more quickly when eigenvalues are clustered together (Figure 31(d)) than when they are irregularly distributed between λ_min and λ_max, because it is easier for CG to choose a polynomial that makes Equation 50 small.

If we know something about the characteristics of the eigenvalues of A, it is sometimes possible to suggest a polynomial that leads to a proof of fast convergence. For the remainder of this analysis, however, I shall assume the most general case: the eigenvalues are evenly distributed between λ_min and λ_max, the number of distinct eigenvalues is large, and floating point roundoff occurs.

9.2. Chebyshev Polynomials

A useful approach is to minimize Equation 50 over the range [λ_min, λ_max] rather than at a finite number of points. The polynomials that accomplish this are based on Chebyshev polynomials.

The Chebyshev polynomial of degree i is

    T_i(ω) = (1/2) [ (ω + √(ω^2 - 1))^i + (ω - √(ω^2 - 1))^i ].

(If this expression doesn't look like a polynomial to you, try working it out for i equal to 1 or 2.) Several Chebyshev polynomials are illustrated in Figure 32. The Chebyshev polynomials have the property that |T_i(ω)| ≤ 1 (in fact, they oscillate between 1 and -1) on the domain ω ∈ [-1, 1], and furthermore that |T_i(ω)| is maximum on the domain ω ∉ [-1, 1] among all such polynomials. Loosely speaking, |T_i(ω)| increases as quickly as possible outside the boxes in the illustration.
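As a quick sanity check (an illustrative Python sketch, not part of the original text), the closed form above can be compared against the trigonometric form T_i(ω) = cos(i cos^{-1} ω) used in Appendix C3; on [-1, 1] the two agree:

    import numpy as np

    def cheb_closed(i, w):
        # T_i(w) = 0.5 * [(w + sqrt(w^2 - 1))^i + (w - sqrt(w^2 - 1))^i];
        # a complex square root lets the formula work for |w| < 1 as well.
        s = np.sqrt(complex(w * w - 1.0))
        return (0.5 * ((w + s) ** i + (w - s) ** i)).real

    def cheb_cos(i, w):
        return np.cos(i * np.arccos(w))

    for w in np.linspace(-1.0, 1.0, 9):
        assert abs(cheb_closed(5, w) - cheb_cos(5, w)) < 1e-9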


Figure 33: The polynomial P_2(λ) that minimizes Equation 50 for λ_min = 2 and λ_max = 7 in the general case. This curve is a scaled version of the Chebyshev polynomial of degree 2. The energy norm of the error term after two iterations is no greater than 0.183 times its initial value. Compare with Figure 31(c), where it is known that there are only two eigenvalues.

It is shown in Appendix C3 that Equation 50 is minimized by choosing

    P_i(λ) = T_i( (λ_max + λ_min - 2λ) / (λ_max - λ_min) ) / T_i( (λ_max + λ_min) / (λ_max - λ_min) ).

This polynomial has the oscillating properties of Chebyshev polynomials on the domain λ_min ≤ λ ≤ λ_max (see Figure 33). The denominator enforces our requirement that P_i(0) = 1. The numerator has a maximum value of one on the interval between λ_min and λ_max, so from Equation 50 we have

    ||e_(i)||_A ≤ T_i( (λ_max + λ_min) / (λ_max - λ_min) )^{-1} ||e_(0)||_A
                = T_i( (κ + 1) / (κ - 1) )^{-1} ||e_(0)||_A
                = 2 [ ( (√κ + 1) / (√κ - 1) )^i + ( (√κ - 1) / (√κ + 1) )^i ]^{-1} ||e_(0)||_A.    (51)

The second addend inside the square brackets converges to zero as i grows, so it is more common to express the convergence of CG with the weaker inequality

    ||e_(i)||_A ≤ 2 ( (√κ - 1) / (√κ + 1) )^i ||e_(0)||_A.                        (52)


Figure 34: Convergence of Conjugate Gradients (per iteration) as a function of condition number. Compare with Figure 20.

The first step of CG is identical to a step of Steepest Descent. Setting i = 1 in Equation 51, we obtain Equation 28, the convergence result for Steepest Descent. This is just the linear polynomial case illustrated in Figure 31(b).

Figure 34 charts the convergence per iteration of CG, ignoring the lost factor of 2. In practice, CG usually converges faster than Equation 52 would suggest, because of good eigenvalue distributions or good starting points. Comparing Equations 52 and 28, it is clear that the convergence of CG is much quicker than that of Steepest Descent (see Figure 35). However, it is not necessarily true that every iteration of CG enjoys faster convergence; for example, the first iteration of CG is an iteration of Steepest Descent. The factor of 2 in Equation 52 allows CG a little slack for these poor iterations.

10. Complexity

The dominating operations during an iteration of either Steepest Descent or CG are matrix-vector products. In general, matrix-vector multiplication requires O(m) operations, where m is the number of nonzero entries in the matrix. For many problems, including those listed in the introduction, A is sparse and m ∈ O(n).

Suppose we wish to perform enough iterations to reduce the norm of the error by a factor of ε; that is, ||e_(i)|| ≤ ε ||e_(0)||. Equation 28 can be used to show that the maximum number of iterations required to achieve this bound using Steepest Descent is

    i ≤ ⌈ (1/2) κ ln(1/ε) ⌉,

whereas Equation 52 suggests that the maximum number of iterations CG requires is

    i ≤ ⌈ (1/2) √κ ln(2/ε) ⌉.


Figure 35: Number of iterations of Steepest Descent required to match one iteration of CG.

I conclude that Steepest Descent has a time complexity of O(mκ), whereas CG has a time complexity of O(m√κ). Both algorithms have a space complexity of O(m).

Finite difference and finite element approximations of second-order elliptic boundary value problems posed on d-dimensional domains often have κ ∈ O(n^{2/d}). Thus, Steepest Descent has a time complexity of O(n^2) for two-dimensional problems, versus O(n^{3/2}) for CG; and Steepest Descent has a time complexity of O(n^{5/3}) for three-dimensional problems, versus O(n^{4/3}) for CG.
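These two bounds translate directly into rough iteration-count estimates. The following Python helper is an illustrative sketch (the function names and the sample values of κ and ε are my own, not from the text):

    import math

    def iterations_steepest_descent(kappa, eps):
        # i <= ceil( (1/2) * kappa * ln(1/eps) ), from the bound above (Equation 28).
        return math.ceil(0.5 * kappa * math.log(1.0 / eps))

    def iterations_cg(kappa, eps):
        # i <= ceil( (1/2) * sqrt(kappa) * ln(2/eps) ), from Equation 52.
        return math.ceil(0.5 * math.sqrt(kappa) * math.log(2.0 / eps))

    # Example: condition number 10^4, error reduction by a factor of 10^-6.
    print(iterations_steepest_descent(1e4, 1e-6))   # about 69,078 iterations
    print(iterations_cg(1e4, 1e-6))                 # about 726 iterations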

11. Starting and Stopping

In the preceding presentation of the Steepest Descent and Conjugate Gradient algorithms, several details have been omitted; particularly, how to choose a starting point, and when to stop.

11.1. Starting

There's not much to say about starting. If you have a rough estimate of the value of x, use it as the starting value x_(0). If not, set x_(0) = 0; either Steepest Descent or CG will eventually converge when used to solve linear systems. Nonlinear minimization (coming up in Section 14) is trickier, though, because there may be several local minima, and the choice of starting point will determine which minimum the procedure converges to, or whether it will converge at all.

11.2. Stopping

When Steepest Descent or CG reaches the minimum point, the residual becomes zero, and if Equation 11 or 48 is evaluated an iteration later, a division by zero will result. It seems, then, that one must stop immediately when the residual is zero. To complicate things, though, accumulated roundoff error in the recursive formulation of the residual (Equation 47) may yield a false zero residual; this problem could be resolved by restarting with Equation 45.

Usually, however, one wishes to stop before convergence is complete. Because the error term is not available, it is customary to stop when the norm of the residual falls below a specified value; often, this value is some small fraction of the initial residual (||r_(i)|| < ε ||r_(0)||). See Appendix B for sample code.

12. Preconditioning

Preconditioning is a technique for improving the condition number of a matrix. Suppose that M is a symmetric, positive-definite matrix that approximates A, but is easier to invert. We can solve Ax = b indirectly by solving

    M^{-1} A x = M^{-1} b.                                                        (53)

If κ(M^{-1}A) ≪ κ(A), or if the eigenvalues of M^{-1}A are better clustered than those of A, we can iteratively solve Equation 53 more quickly than the original problem. The catch is that M^{-1}A is not generally symmetric nor definite, even if M and A are.

We can circumvent this difficulty, because for every symmetric, positive-definite M there is a (not necessarily unique) matrix E that has the property that E E^T = M. (Such an E can be obtained, for instance, by Cholesky factorization.) The matrices M^{-1}A and E^{-1}A E^{-T} have the same eigenvalues. This is true because if v is an eigenvector of M^{-1}A with eigenvalue λ, then E^T v is an eigenvector of E^{-1}A E^{-T} with eigenvalue λ:

    (E^{-1} A E^{-T})(E^T v) = E^T E^{-T} E^{-1} A v = E^T M^{-1} A v = λ E^T v.

The system Ax = b can be transformed into the problem

    (E^{-1} A E^{-T}) x̂ = E^{-1} b,        x̂ = E^T x,

which we solve first for x̂, then for x. Because E^{-1} A E^{-T} is symmetric and positive-definite, x̂ can be found by Steepest Descent or CG. The process of using CG to solve this system is called the Transformed Preconditioned Conjugate Gradient Method:

    d̂_(0) = r̂_(0) = E^{-1} b - E^{-1} A E^{-T} x̂_(0),
    α_(i) = r̂_(i)^T r̂_(i) / (d̂_(i)^T E^{-1} A E^{-T} d̂_(i)),
    x̂_(i+1) = x̂_(i) + α_(i) d̂_(i),
    r̂_(i+1) = r̂_(i) - α_(i) E^{-1} A E^{-T} d̂_(i),
    β_(i+1) = r̂_(i+1)^T r̂_(i+1) / (r̂_(i)^T r̂_(i)),
    d̂_(i+1) = r̂_(i+1) + β_(i+1) d̂_(i).


Figure 36: Contour lines of the quadratic form of the diagonally preconditioned sample problem.

This method has the undesirable characteristic that E must be computed. However, a few careful variable substitutions can eliminate E. Setting r̂_(i) = E^{-1} r_(i) and d̂_(i) = E^T d_(i), and using the identities x̂_(i) = E^T x_(i) and E^{-T} E^{-1} = M^{-1}, we derive the Untransformed Preconditioned Conjugate Gradient Method:

    r_(0) = b - A x_(0),
    d_(0) = M^{-1} r_(0),
    α_(i) = r_(i)^T M^{-1} r_(i) / (d_(i)^T A d_(i)),
    x_(i+1) = x_(i) + α_(i) d_(i),
    r_(i+1) = r_(i) - α_(i) A d_(i),
    β_(i+1) = r_(i+1)^T M^{-1} r_(i+1) / (r_(i)^T M^{-1} r_(i)),
    d_(i+1) = M^{-1} r_(i+1) + β_(i+1) d_(i).
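Below is a minimal Python sketch of this Untransformed Preconditioned CG iteration, using the diagonal (Jacobi) preconditioner discussed later in this section. The function names, stopping test, and test matrix are illustrative assumptions, not part of the article.

    import numpy as np

    def preconditioned_cg(A, b, x0, apply_M_inv, max_iter=None, tol=1e-10):
        # Untransformed preconditioned CG; apply_M_inv(r) must return M^{-1} r.
        x = x0.astype(float).copy()
        r = b - A @ x
        d = apply_M_inv(r)               # d_(0) = M^{-1} r_(0)
        delta_new = r @ d                # r^T M^{-1} r
        delta_0 = delta_new
        n = len(b) if max_iter is None else max_iter
        for i in range(n):
            if delta_new <= tol * tol * delta_0:
                break
            q = A @ d
            alpha = delta_new / (d @ q)
            x = x + alpha * d
            r = r - alpha * q
            s = apply_M_inv(r)
            delta_old = delta_new
            delta_new = r @ s
            beta = delta_new / delta_old
            d = s + beta * d
        return x

    # Jacobi (diagonal) preconditioning: M = diag(A), so applying M^{-1} is an elementwise division.
    A = np.array([[3.0, 2.0], [2.0, 6.0]])
    b = np.array([2.0, -8.0])
    print(preconditioned_cg(A, b, np.zeros(2), lambda r: r / np.diag(A)))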

The matrix E does not appear in these equations; only M^{-1} is needed. By the same means, it is possible to derive a Preconditioned Steepest Descent Method that does not use E.

The effectiveness of a preconditioner M is determined by the condition number of M^{-1}A, and occasionally by its clustering of eigenvalues. The problem remains of finding a preconditioner that approximates A well enough to improve convergence enough to make up for the cost of computing the product M^{-1} r_(i) once per iteration. (It is not necessary to explicitly compute M or M^{-1}; it is only necessary to be able to compute the effect of applying M^{-1} to a vector.) Within this constraint, there is a surprisingly rich supply of possibilities, and I can only scratch the surface here.

Intuitively, preconditioning is an attempt to stretch the quadratic form to make it appear more spherical, so that the eigenvalues are close to each other. A perfect preconditioner is M = A; for this preconditioner, M^{-1}A has a condition number of one, and the quadratic form is perfectly spherical, so solution takes only one iteration. Unfortunately, the preconditioning step is solving the system Mx = b, so this isn't a useful preconditioner at all.

The simplest preconditioner is a diagonal matrix whose diagonal entries are identical to those of A. The process of applying this preconditioner, known as diagonal preconditioning or Jacobi preconditioning, is equivalent to scaling the quadratic form along the coordinate axes. (By comparison, the perfect preconditioner M = A scales the quadratic form along its eigenvector axes.) A diagonal matrix is trivial to invert, but is often only a mediocre preconditioner. The contour lines of our sample problem are shown, after diagonal preconditioning, in Figure 36. Comparing with Figure 3, it is clear that some improvement has occurred. The condition number has improved from 3.5 to roughly 2.8. Of course, this improvement is much more beneficial for systems where n ≫ 2.

A more elaborate preconditioner is incomplete Cholesky preconditioning. Cholesky factorization is a technique for factoring a matrix A into the form L L^T, where L is a lower triangular matrix. Incomplete Cholesky factorization is a variant in which little or no fill is allowed; A is approximated by the product L̂ L̂^T, where L̂ might be restricted to have the same pattern of nonzero elements as A, and other elements of L are thrown away. To use L̂ L̂^T as a preconditioner, the solution to L̂ L̂^T w = z is computed by backsubstitution (the inverse of L̂ L̂^T is never explicitly computed). Unfortunately, incomplete Cholesky preconditioning is not always stable.

Many preconditioners, some quite sophisticated, have been developed. Whatever your choice, it is generally accepted that for large-scale applications, CG should nearly always be used with a preconditioner.

13. Conjugate Gradients on the Normal Equations

CG can be used to solve systems where A is not symmetric, not positive-definite, and even not square. A solution to the least squares problem

    min_x ||Ax - b||^2                                                            (54)

can be found by setting the derivative of Expression 54 to zero:

    A^T A x = A^T b.                                                              (55)

If A is square and nonsingular, the solution to Equation 55 is the solution to Ax = b. If A is not square, and Ax = b is overconstrained (that is, has more linearly independent equations than variables), then there may or may not be a solution to Ax = b, but it is always possible to find a value of x that minimizes Expression 54, the sum of the squares of the errors of each linear equation.

A^T A is symmetric and positive (for any x, x^T A^T A x = ||Ax||^2 ≥ 0). If Ax = b is not underconstrained, then A^T A is nonsingular, and methods like Steepest Descent and CG can be used to solve Equation 55. The only nuisance in doing so is that the condition number of A^T A is the square of that of A, so convergence is significantly slower.

An important technical point is that the matrix A^T A is never formed explicitly, because it is less sparse than A. Instead, A^T A is multiplied by d by first finding the product Ad, and then A^T(Ad). Also, numerical stability is improved if the value d^T A^T A d (in Equation 46) is computed by taking the inner product of Ad with itself.
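A Python sketch of this advice follows (illustrative only; the names and the small least-squares example are assumptions, not from the text). Note that A^T A never appears: each iteration computes Ad, then A^T(Ad), and d^T A^T A d is taken as the inner product of Ad with itself.

    import numpy as np

    def cg_normal_equations(A, b, x0, max_iter=50, tol=1e-12):
        # Solve A^T A x = A^T b (Equation 55) without ever forming A^T A explicitly.
        x = x0.astype(float).copy()
        r = A.T @ (b - A @ x)              # residual of the normal equations
        d = r.copy()
        delta_new = r @ r
        delta_0 = delta_new
        for i in range(max_iter):
            if delta_new <= tol * tol * delta_0:
                break
            Ad = A @ d                     # first find A d ...
            alpha = delta_new / (Ad @ Ad)  # ... d^T A^T A d = (A d)^T (A d)
            x = x + alpha * d
            r = r - alpha * (A.T @ Ad)     # ... and then A^T (A d)
            delta_old = delta_new
            delta_new = r @ r
            beta = delta_new / delta_old
            d = r + beta * d
        return x

    # Overconstrained example: fit a line through three points in the least-squares sense.
    A = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0, 2.0])
    print(cg_normal_equations(A, b, np.zeros(2)))   # least-squares coefficients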

14. The Nonlinear Conjugate Gradient Method

CG can be used not only to find the minimum point of a quadratic form, but to minimize any continuous function f(x) for which the gradient f' can be computed. Applications include a variety of optimization problems, such as engineering design, neural net training, and nonlinear regression.

14.1. Outline of the Nonlinear Conjugate Gradient Method

To derive nonlinear CG, there are three changes to the linear algorithm: the recursive formula for the residual cannot be used, it becomes more complicated to compute the step size α, and there are several different choices for β.

In nonlinear CG, the residual is always set to the negation of the gradient: r_(i) = -f'(x_(i)). The search directions are computed by Gram-Schmidt conjugation of the residuals as with linear CG. Performing a line search along this search direction is much more difficult than in the linear case, and a variety of procedures can be used. As with linear CG, a value of α_(i) that minimizes f(x_(i) + α_(i) d_(i)) is found by ensuring that the gradient is orthogonal to the search direction. We can use any algorithm that finds the zeros of the expression [f'(x_(i) + α_(i) d_(i))]^T d_(i).

In linear CG, there are several equivalent expressions for the value of β. In nonlinear CG, these different expressions are no longer equivalent; researchers are still investigating the best choice. Two choices are the Fletcher-Reeves formula, which we used in linear CG for its ease of computation, and the Polak-Ribière formula:

    β^{FR}_(i+1) = r_(i+1)^T r_(i+1) / (r_(i)^T r_(i)),        β^{PR}_(i+1) = r_(i+1)^T (r_(i+1) - r_(i)) / (r_(i)^T r_(i)).

The Fletcher-Reeves method converges if the starting point is sufficiently close to the desired minimum, whereas the Polak-Ribière method can, in rare cases, cycle infinitely without converging. However, Polak-Ribière often converges much more quickly.

Fortunately, convergence of the Polak-Ribière method can be guaranteed by choosing β = max{β^{PR}, 0}. Using this value is equivalent to restarting CG if β^{PR} < 0. To restart CG is to forget the past search directions, and start CG anew in the direction of steepest descent.

Here is an outline of the nonlinear CG method:

    d_(0) = r_(0) = -f'(x_(0)),
    Find α_(i) that minimizes f(x_(i) + α_(i) d_(i)),
    x_(i+1) = x_(i) + α_(i) d_(i),
    r_(i+1) = -f'(x_(i+1)),
    β_(i+1) = r_(i+1)^T r_(i+1) / (r_(i)^T r_(i))    or    β_(i+1) = max{ r_(i+1)^T (r_(i+1) - r_(i)) / (r_(i)^T r_(i)), 0 },
    d_(i+1) = r_(i+1) + β_(i+1) d_(i).

Nonlinear CG comes with few of the convergence guarantees of linear CG. The less similar f is to a quadratic function, the more quickly the search directions lose conjugacy. (It will become clear shortly that the term "conjugacy" still has some meaning in nonlinear CG.) Another problem is that a general function f may have many local minima. CG is not guaranteed to converge to the global minimum, and may not even find a local minimum if f has no lower bound.
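A compact Python rendering of this outline follows. It is a sketch under stated assumptions: the exact line search is replaced by SciPy's scalar minimizer, the Rosenbrock test function is my own choice rather than the function used in the figures, and the restart rule mirrors the one described above.

    import numpy as np
    from scipy.optimize import minimize_scalar

    def nonlinear_cg(f, grad, x0, max_iter=100, tol=1e-8, formula="PR"):
        # Nonlinear CG; the 1-D line search is delegated to a scalar minimizer
        # as a stand-in for the exact line search assumed in the outline.
        x = x0.astype(float).copy()
        r = -grad(x)                                        # r_(0) = -f'(x_(0))
        d = r.copy()
        n = len(x0)
        for i in range(max_iter):
            if r @ r <= tol * tol:
                break
            alpha = minimize_scalar(lambda a: f(x + a * d)).x   # find alpha minimizing f(x + alpha d)
            x = x + alpha * d
            r_new = -grad(x)
            if formula == "PR":
                beta = max(r_new @ (r_new - r) / (r @ r), 0.0)  # Polak-Ribiere, clipped at zero
            else:
                beta = (r_new @ r_new) / (r @ r)                # Fletcher-Reeves
            r = r_new
            d = r + beta * d
            if (i + 1) % n == 0 or r @ d <= 0:                  # periodic / non-descent restart
                d = r.copy()
        return x

    # Rosenbrock test function (a standard nonquadratic example, not from the article).
    f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
    grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                               200 * (x[1] - x[0]**2)])
    print(nonlinear_cg(f, grad, np.array([-1.2, 1.0])))   # should approach the minimum at [1, 1]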

Figure 37 illustrates nonlinear CG. Figure 37(a) is a function with many local minima. Figure 37(b) demonstrates the convergence of nonlinear CG with the Fletcher-Reeves formula. In this example, CG is not nearly as effective as in the linear case; this function is deceptively difficult to minimize. Figure 37(c) shows a cross-section of the surface, corresponding to the first line search in Figure 37(b). Notice that there are several minima; the line search finds a value of α corresponding to a nearby minimum. Figure 37(d) shows the superior convergence of Polak-Ribière CG.

Because CG can only generate n conjugate vectors in an n-dimensional space, it makes sense to restart CG every n iterations, especially if n is small. Figure 38 shows the effect when nonlinear CG is restarted every second iteration. (For this particular example, both the Fletcher-Reeves method and the Polak-Ribière method appear the same.)

14.2. General Line Search



Depending on the value of f', it might be possible to use a fast algorithm to find the zeros of f'^T d. For instance, if f' is polynomial in α, then an efficient algorithm for polynomial zero-finding can be used. However, we will only consider general-purpose algorithms.

Two iterative methods for zero-finding are the Newton-Raphson method and the Secant method. Both methods require that f be twice continuously differentiable. Newton-Raphson also requires that it be possible to compute the second derivative of f(x + αd) with respect to α.

The Newton-Raphson method relies on the Taylor series approximation

    f(x + αd) ≈ f(x) + α [ d/dα f(x + αd) ]_{α=0} + (α^2 / 2) [ d^2/dα^2 f(x + αd) ]_{α=0}
              = f(x) + α [f'(x)]^T d + (α^2 / 2) d^T f''(x) d,                    (56)

whose derivative with respect to α is

    d/dα f(x + αd) ≈ [f'(x)]^T d + α d^T f''(x) d,                                (57)

where f''(x) is the Hessian matrix, the matrix whose entry in row i and column j is ∂^2 f / (∂x_i ∂x_j).


Figure 37: Convergence of the nonlinear Conjugate Gradient Method. (a) A complicated function with many local minima and maxima. (b) Convergence path of Fletcher-Reeves CG. Unlike linear CG, convergence does not occur in two steps. (c) Cross-section of the surface corresponding to the first line search. (d) Convergence path of Polak-Ribière CG.


Figure 38: Nonlinear CG can be more effective with periodic restarts.


Figure 39: The Newton-Raphson method for minimizing a one-dimensional function (solid curve). Starting from the point x, calculate the first and second derivatives, and use them to construct a quadratic approximation to the function (dashed curve). A new point z is chosen at the base of the parabola. This procedure is iterated until convergence is reached.


The function f(x + αd) is approximately minimized by setting Expression 57 to zero, giving

    α = - [f'(x)]^T d / (d^T f''(x) d).

The truncated Taylor series approximates f(x + αd) with a parabola; we step to the bottom of the parabola (see Figure 39). In fact, if f is a quadratic form, then this parabolic approximation is exact, because f'' is just the familiar matrix A. In general, the search directions are conjugate if they are f''-orthogonal. The meaning of "conjugate" keeps changing, though, because f'' varies with x. The more quickly f'' varies with x, the more quickly the search directions lose conjugacy. On the other hand, the closer x_(i) is to the solution, the less f'' varies from iteration to iteration. The closer the starting point is to the solution, the more similar the convergence of nonlinear CG is to that of linear CG.
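A small Python sketch of this Newton-Raphson line search follows (illustrative only; the callback names grad and hess are assumptions). Each step applies α = -(f'^T d) / (d^T f'' d), and for a quadratic form a single step is exact.

    import numpy as np

    def newton_raphson_line_search(grad, hess, x, d, max_steps=5, tol=1e-10):
        # Repeated Newton-Raphson steps along d: alpha = -(f'(x)^T d) / (d^T f''(x) d).
        for _ in range(max_steps):
            alpha = -(grad(x) @ d) / (d @ hess(x) @ d)   # step to the bottom of the local parabola
            x = x + alpha * d
            if abs(alpha) * np.linalg.norm(d) < tol:     # stop when the update becomes tiny
                break
        return x

    # For a quadratic form f(x) = (1/2) x^T A x - b^T x, one step reaches the line minimum exactly.
    A = np.array([[3.0, 2.0], [2.0, 6.0]])
    b = np.array([2.0, -8.0])
    print(newton_raphson_line_search(lambda x: A @ x - b, lambda x: A, np.zeros(2), b))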

To perform an exact line search of a non-quadratic function, repeated steps must be taken along the line until f'^T d is zero; hence, one CG iteration may include many Newton-Raphson iterations. The values of f'^T d and d^T f'' d must be evaluated at each step. These evaluations may be inexpensive if d^T f'' d can be analytically simplified, but if the full matrix f'' must be evaluated repeatedly, the algorithm is prohibitively slow. For some applications, it is possible to circumvent this problem by performing an approximate line search that uses only the diagonal elements of f''. Of course, there are functions for which it is not possible to evaluate f'' at all.

To perform an exact line search without computing f'', the Secant method approximates the second derivative of f(x + αd) by evaluating the first derivative at the distinct points α = 0 and α = σ, where σ is an arbitrary small nonzero number:

    d^2/dα^2 f(x + αd) ≈ ( [ d/dα f(x + αd) ]_{α=σ} - [ d/dα f(x + αd) ]_{α=0} ) / σ
                       = ( [f'(x + σd)]^T d - [f'(x)]^T d ) / σ,                  (58)

which becomes a better approximation to the second derivative as α and σ approach zero. If we substitute Expression 58 for the third term of the Taylor expansion (Equation 56), we have

    f(x + αd) ≈ f(x) + α [f'(x)]^T d + (α^2 / (2σ)) ( [f'(x + σd)]^T d - [f'(x)]^T d ).

Minimize f(x + αd) by setting its derivative to zero:

    α = -σ [f'(x)]^T d / ( [f'(x + σd)]^T d - [f'(x)]^T d ).                      (59)

Like Newton-Raphson, the Secant method also approximates f(x + αd) with a parabola, but instead of choosing the parabola by finding the first and second derivative at a point, it finds the first derivative at two different points (see Figure 40). Typically, we will choose an arbitrary σ on the first Secant method iteration; on subsequent iterations we will choose x + σd to be the value of x + αd from the previous Secant method iteration. In other words, if we let α_[i] denote the value of α calculated during Secant iteration i, then on iteration i + 1 we choose σ = -α_[i].
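Here is a Python sketch of the Secant-method line search of Equation 59, with σ chosen once and thereafter taken from the previous step as just described; it mirrors the inner loop of the canned algorithm in Appendix B5. The callback name grad and the default parameter values are assumptions for illustration.

    import numpy as np

    def secant_line_search(grad, x, d, sigma0=1e-2, max_steps=10, tol=1e-10):
        # Secant-method line search (Equation 59): the second derivative along d is
        # approximated from two first-derivative evaluations; grad(x) returns f'(x).
        eta_prev = grad(x + sigma0 * d) @ d          # [f'(x + sigma d)]^T d for the initial sigma
        alpha = -sigma0
        for _ in range(max_steps):
            eta = grad(x) @ d                        # [f'(x)]^T d
            alpha = alpha * eta / (eta_prev - eta)   # Equation 59, with sigma = -previous alpha
            x = x + alpha * d
            eta_prev = eta
            if abs(alpha) * np.linalg.norm(d) < tol:
                break
        return x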

Both the Newton-Raphson and Secant methods should be terminated when x is reasonably close to the exact solution. Demanding too little precision could cause a failure of convergence, but demanding too fine a precision makes the computation unnecessarily slow and gains nothing, because conjugacy will break down quickly anyway if f''(x) varies much with x.


Figure 40: The Secant method for minimizing a one-dimensional function (solid curve). Starting from the point x, calculate the first derivatives at two different points (here, α = 0 and α = 2), and use them to construct a quadratic approximation to the function (dashed curve). Notice that both curves have the same slope at α = 0, and also at α = 2. As in the Newton-Raphson method, a new point z is chosen at the base of the parabola, and the procedure is iterated until convergence.

Therefore, a quick but inexact line search is often the better policy (for instance, use only a fixed number of Newton-Raphson or Secant method iterations).

Unfortunately, an inexact line search may lead to the construction of a search direction that is not a descent direction. A common solution is to test for this eventuality (is r^T d nonpositive?), and restart CG if necessary by setting d = r.

A bigger problem with both methods is that they cannot distinguish minima from maxima. The result of nonlinear CG generally depends strongly on the starting point, and if CG with the Newton-Raphson or Secant method starts near a local maximum, it is likely to converge to that point.

Each method has its own advantages. The Newton-Raphson method has a better convergence rate, and is to be preferred if d^T f''(x) d can be calculated (or well approximated) quickly (i.e., in O(n) time). The Secant method only requires first derivatives of f, but its success may depend on a good choice of the parameter σ. It is easy to derive a variety of other methods as well. For instance, by sampling f at three different points, it is possible to generate a parabola that approximates f(x + αd) without the need to calculate even a first derivative of f.

14.3. Preconditioning

Nonlinear CG can be preconditioned by choosing a preconditioner M that approximates f'' and has the property that M^{-1} r is easy to compute. In linear CG, the preconditioner attempts to transform the quadratic form so that it is similar to a sphere; a nonlinear CG preconditioner performs this transformation for a region near x_(i).

Figure 41: The preconditioned nonlinear Conjugate Gradient Method, using the Polak-Ribière formula and a diagonal preconditioner. The space has been "stretched" to show the improvement in circularity of the contour lines around the minimum.

Even when it is too expensive to compute the full Hessian f'', it is often reasonable to compute its diagonal for use as a preconditioner. However, be forewarned that if x is sufficiently far from a local minimum, the diagonal elements of the Hessian may not all be positive. A preconditioner should be positive-definite, so nonpositive diagonal elements cannot be allowed. A conservative solution is to not precondition (set M = I) when the Hessian cannot be guaranteed to be positive-definite. Figure 41 demonstrates the convergence of diagonally preconditioned nonlinear CG, with the Polak-Ribière formula, on the same function illustrated in Figure 37. Here, I have cheated by using the diagonal of f'' at the solution point to precondition every iteration.

A Notes

Conjugate Direction methods were probably first presented by Schmidt [14] in 1908, and were independently reinvented by Fox, Huskey, and Wilkinson [7] in 1948. In the early fifties, the method of Conjugate Gradients was discovered independently by Hestenes [10] and Stiefel [15]; shortly thereafter, they jointly published what is considered the seminal reference on CG [11]. Convergence bounds for CG in terms of Chebyshev polynomials were developed by Kaniel [12]. A more thorough analysis of CG convergence is provided by van der Sluis and van der Vorst [16]. CG was popularized as an iterative method for large, sparse matrices by Reid [13] in 1971.

CG was generalized to nonlinear problems in 1964 by Fletcher and Reeves [6], based on work by Davidon [4] and Fletcher and Powell [5]. Convergence of nonlinear CG with inexact line searches was analyzed by Daniel [3]. The choice of β for nonlinear CG is discussed by Gilbert and Nocedal [8].

A history and extensive annotated bibliography of CG to the mid-seventies is provided by Golub and O'Leary [9]. Most research since that time has focused on nonsymmetric systems. A survey of iterative methods for the solution of linear systems is offered by Barrett et al. [1].

B Canned Algorithms

The code given in this section represents efficient implementations of the algorithms discussed in this article.

B1. Steepest Descent

Given the inputs A, b, a starting value x, a maximum number of iterations i_max, and an error tolerance ε < 1:

    i ⇐ 0
    r ⇐ b - Ax
    δ ⇐ r^T r
    δ_0 ⇐ δ
    While i < i_max and δ > ε^2 δ_0 do
        q ⇐ Ar
        α ⇐ δ / (r^T q)
        x ⇐ x + αr
        If i is divisible by 50
            r ⇐ b - Ax
        else
            r ⇐ r - αq
        δ ⇐ r^T r
        i ⇐ i + 1

This algorithm terminates when the maximum number of iterations i_max has been exceeded, or when ||r_(i)|| ≤ ε ||r_(0)||.

The fast recursive formula for the residual is usually used, but once every 50 iterations, the exact residual is recalculated to remove accumulated floating point error. Of course, the number 50 is arbitrary; for large n, √n might be appropriate. If the tolerance is large, the residual need not be corrected at all (in practice, this correction is rarely used). If the tolerance is close to the limits of the floating point precision of the machine, a test should be added after δ is evaluated to check if δ ≤ ε^2 δ_0, and if this test holds true, the exact residual should also be recomputed and δ reevaluated. This prevents the procedure from terminating early due to floating point roundoff error.

B2. Conjugate Gradients

Given the inputs A, b, a starting value x, a maximum number of iterations i_max, and an error tolerance ε < 1:

    i ⇐ 0
    r ⇐ b - Ax
    d ⇐ r
    δ_new ⇐ r^T r
    δ_0 ⇐ δ_new
    While i < i_max and δ_new > ε^2 δ_0 do
        q ⇐ Ad
        α ⇐ δ_new / (d^T q)
        x ⇐ x + αd
        If i is divisible by 50
            r ⇐ b - Ax
        else
            r ⇐ r - αq
        δ_old ⇐ δ_new
        δ_new ⇐ r^T r
        β ⇐ δ_new / δ_old
        d ⇐ r + βd
        i ⇐ i + 1

See the comments at the end of Section B1.

B3. Preconditioned Conjugate Gradients

Given the inputs A, b, a starting value x, a (perhaps implicitly defined) preconditioner M, a maximum number of iterations i_max, and an error tolerance ε < 1:

    i ⇐ 0
    r ⇐ b - Ax
    d ⇐ M^{-1} r
    δ_new ⇐ r^T d
    δ_0 ⇐ δ_new
    While i < i_max and δ_new > ε^2 δ_0 do
        q ⇐ Ad
        α ⇐ δ_new / (d^T q)
        x ⇐ x + αd
        If i is divisible by 50
            r ⇐ b - Ax
        else
            r ⇐ r - αq
        s ⇐ M^{-1} r
        δ_old ⇐ δ_new
        δ_new ⇐ r^T s
        β ⇐ δ_new / δ_old
        d ⇐ s + βd
        i ⇐ i + 1

The statement "s ⇐ M^{-1} r" implies that one should apply the preconditioner, which may not actually be in the form of a matrix.

See also the comments at the end of Section B1.

B4. Nonlinear Conjugate Gradients with Newton-Raphson and Fletcher-Reeves

Given a function f, a starting value x, a maximum number of CG iterations i_max, a CG error tolerance ε < 1, a maximum number of Newton-Raphson iterations j_max, and a Newton-Raphson error tolerance ε' < 1:

    i ⇐ 0
    k ⇐ 0
    r ⇐ -f'(x)
    d ⇐ r
    δ_new ⇐ r^T r
    δ_0 ⇐ δ_new
    While i < i_max and δ_new > ε^2 δ_0 do
        j ⇐ 0
        δ_d ⇐ d^T d
        Do
            α ⇐ -[f'(x)]^T d / (d^T f''(x) d)
            x ⇐ x + αd
            j ⇐ j + 1
        while j < j_max and α^2 δ_d > ε'^2
        r ⇐ -f'(x)
        δ_old ⇐ δ_new
        δ_new ⇐ r^T r
        β ⇐ δ_new / δ_old
        d ⇐ r + βd
        k ⇐ k + 1
        If k = n or r^T d ≤ 0
            d ⇐ r
            k ⇐ 0
        i ⇐ i + 1

This algorithm terminates when the maximum number of iterations i_max has been exceeded, or when ||r_(i)|| ≤ ε ||r_(0)||.

Each Newton-Raphson iteration adds αd to x; the iterations are terminated when each update αd falls below a given tolerance (||αd|| ≤ ε'), or when the number of iterations exceeds j_max. A fast inexact line search can be accomplished by using a small j_max and/or by approximating the Hessian f''(x) with its diagonal.

Nonlinear CG is restarted (by setting d ⇐ r) whenever a search direction is computed that is not a descent direction. It is also restarted once every n iterations, to improve convergence for small n.

The calculation of α may result in a divide-by-zero error. This may occur because the starting point x_(0) is not sufficiently close to the desired minimum, or because f is not twice continuously differentiable. In the former case, the solution is to choose a better starting point or a more sophisticated line search. In the latter case, CG might not be the most appropriate minimization algorithm.

B5. Preconditioned Nonlinear Conjugate Gradients with Secant and Polak-Ribière

Given a function f, a starting value x, a maximum number of CG iterations i_max, a CG error tolerance ε < 1, a Secant method step parameter σ_0 ≠ 0, a maximum number of Secant method iterations j_max, and a Secant method error tolerance ε' < 1:

    i ⇐ 0
    k ⇐ 0
    r ⇐ -f'(x)
    Calculate a preconditioner M ≈ f''(x)
    s ⇐ M^{-1} r
    d ⇐ s
    δ_new ⇐ r^T d
    δ_0 ⇐ δ_new
    While i < i_max and δ_new > ε^2 δ_0 do
        j ⇐ 0
        δ_d ⇐ d^T d
        α ⇐ -σ_0
        η_prev ⇐ [f'(x + σ_0 d)]^T d
        Do
            η ⇐ [f'(x)]^T d
            α ⇐ α η / (η_prev - η)
            x ⇐ x + αd
            η_prev ⇐ η
            j ⇐ j + 1
        while j < j_max and α^2 δ_d > ε'^2
        r ⇐ -f'(x)
        δ_old ⇐ δ_new
        δ_mid ⇐ r^T s
        Calculate a preconditioner M ≈ f''(x)
        s ⇐ M^{-1} r
        δ_new ⇐ r^T s
        β ⇐ (δ_new - δ_mid) / δ_old
        k ⇐ k + 1
        If k = n or β ≤ 0
            d ⇐ s
            k ⇐ 0
        else
            d ⇐ s + βd
        i ⇐ i + 1

This algorithm terminates when the maximum number of iterations i_max has been exceeded, or when ||r_(i)|| ≤ ε ||r_(0)||.

Each Secant method iteration adds αd to x; the iterations are terminated when each update αd falls below a given tolerance (||αd|| ≤ ε'), or when the number of iterations exceeds j_max. A fast inexact line search can be accomplished by using a small j_max. The parameter σ_0 determines the value of σ in Equation 59 for the first step of each Secant method minimization. Unfortunately, it may be necessary to adjust this parameter to achieve convergence.

The Polak-Ribière parameter is β = r_(i+1)^T M^{-1} (r_(i+1) - r_(i)) / (r_(i)^T M^{-1} r_(i)) = (δ_new - δ_mid) / δ_old. Care must be taken that the preconditioner M is always positive-definite. The preconditioner is not necessarily in the form of a matrix.

Nonlinear CG is restarted (by setting d ⇐ r) whenever the Polak-Ribière parameter is negative. It is also restarted once every n iterations, to improve convergence for small n.

Nonlinear CG presents several choices: Preconditioned or not, Newton-Raphson method or Secant method or another method, Fletcher-Reeves or Polak-Ribière. It should be possible to construct any of the variations from the versions presented above. (Polak-Ribière is almost always to be preferred.)

C Ugly Proofs

C1. The Solution to Ax = b Minimizes the Quadratic Form

Suppose A is symmetric. Let x be a point that satisfies Ax = b and minimizes the quadratic form (Equation 3), and let e be an error term. Then

    f(x + e) = (1/2)(x + e)^T A (x + e) - b^T (x + e) + c                 (by Equation 3)
             = (1/2) x^T A x + e^T A x + (1/2) e^T A e - b^T x - b^T e + c    (by symmetry of A)
             = (1/2) x^T A x - b^T x + c + e^T b - b^T e + (1/2) e^T A e
             = f(x) + (1/2) e^T A e.

If A is positive-definite, then the latter term is positive for all e ≠ 0; therefore x minimizes f.

C2. A Symmetric Matrix Has Orthogonal Eigenvectors.

Any matrix has at least one eigenvector. To see this, note that det(A - λI) is a polynomial in λ, and therefore has at least one zero (possibly complex); call it λ_A. The matrix A - λ_A I has determinant zero and is therefore singular, so there must be a nonzero vector v for which (A - λ_A I) v = 0. The vector v is an eigenvector, because A v = λ_A v.

Any symmetric matrix has a full complement of n orthogonal eigenvectors. To prove this, I will demonstrate for the case of a 4 × 4 matrix A and let you make the obvious generalization to matrices of arbitrary size. It has already been proven that A has at least one eigenvector v with eigenvalue λ_A. Let x_1 = v / ||v||, which has unit length. Choose three arbitrary vectors x_2, x_3, x_4 so that the x_i are mutually orthogonal and all of unit length (such vectors can be found by Gram-Schmidt orthogonalization). Let X = [x_1 x_2 x_3 x_4]. Because the x_i are orthonormal, X^T X = I, and so X^T = X^{-1}. Note also that for i ≠ 1, x_1^T A x_i = x_i^T A x_1 = λ_A x_i^T x_1 = 0. Thus,

    X^T A X = [x_1 x_2 x_3 x_4]^T A [x_1 x_2 x_3 x_4] = [ λ_A  0 ]
                                                        [  0   B ],

where B is a 3 × 3 symmetric matrix. B must have some eigenvector w with eigenvalue λ_B. Let ŵ be a vector of length 4 whose first element is zero, and whose remaining elements are the elements of w. Clearly,

    X^{-1} A X ŵ = X^T A X ŵ = [ λ_A  0 ] ŵ = λ_B ŵ.
                               [  0   B ]

In other words, A X ŵ = λ_B X ŵ, and thus X ŵ is an eigenvector of A. Furthermore, x_1^T X ŵ = [1 0 0 0] ŵ = 0, so x_1 and X ŵ are orthogonal. Thus, A has at least two orthogonal eigenvectors!

The more general statement that a symmetric matrix has n orthogonal eigenvectors is proven by induction. For the example above, assume any 3 × 3 matrix (such as B) has 3 orthogonal eigenvectors; call them ŵ_1, ŵ_2, and ŵ_3. Then X ŵ_1, X ŵ_2, and X ŵ_3 are eigenvectors of A, and they are orthogonal because X has orthonormal columns, and therefore maps orthogonal vectors to orthogonal vectors. Thus, A has 4 orthogonal eigenvectors.

C3. Optimality of Chebyshev Polynomials

Chebyshev polynomials are optimal for minimization of expressions like Expression 50 because they increase in magnitude more quickly outside the range [-1, 1] than any other polynomial that is restricted to have magnitude no greater than one inside the range [-1, 1].

The Chebyshev polynomial of degree i,

    T_i(ω) = (1/2) [ (ω + √(ω^2 - 1))^i + (ω - √(ω^2 - 1))^i ],

can also be expressed on the region [-1, 1] as

    T_i(ω) = cos(i cos^{-1} ω),    -1 ≤ ω ≤ 1.

From this expression (and from Figure 32), one can deduce that the Chebyshev polynomials have the property

    |T_i(ω)| ≤ 1,    -1 ≤ ω ≤ 1,

and oscillate rapidly between -1 and 1:

    T_i( cos(kπ / i) ) = (-1)^k,    k = 0, 1, ..., i.

Notice that the i zeros of T_i must fall between the i + 1 extrema of T_i in the range [-1, 1]. For an example, see the five zeros of T_5(ω) in Figure 32.

Similarly, the function

    P_i(λ) = T_i( (λ_max + λ_min - 2λ) / (λ_max - λ_min) ) / T_i( (λ_max + λ_min) / (λ_max - λ_min) )

oscillates in the range ± T_i( (λ_max + λ_min) / (λ_max - λ_min) )^{-1} on the domain [λ_min, λ_max]. P_i(λ) also satisfies the requirement that P_i(0) = 1.

The proof relies on a cute trick. There is no polynomial Q_i(λ) of degree i such that Q_i(0) = 1 and Q_i is better than P_i on the range [λ_min, λ_max]. To prove this, suppose that there is such a polynomial; then, |Q_i(λ)| < T_i( (λ_max + λ_min) / (λ_max - λ_min) )^{-1} on the range [λ_min, λ_max]. It follows that the polynomial P_i - Q_i has a zero at λ = 0, and also has i zeros on the range [λ_min, λ_max]. Hence, P_i - Q_i is a polynomial of degree i with at least i + 1 zeros, which is impossible. By contradiction, I conclude that the Chebyshev polynomial of degree i optimizes Expression 50.

D Homework

For the following questions, assume (unless otherwise stated) that you are using exact arithmetic; there is no floating point roundoff error.

1. (Easy.) Prove that the preconditioned Conjugate Gradient method is not affected if the preconditioning matrix is scaled. In other words, for any nonzero constant c, show that the sequence of steps x_(0), x_(1), ... taken using the preconditioner cM is identical to the steps taken using the preconditioner M.

2. (Hard, but interesting.) Suppose you wish to solve Ax = b for a symmetric, positive-definite n × n matrix A. Unfortunately, the trauma of your linear algebra course has caused you to repress all memories of the Conjugate Gradient algorithm. Seeing you in distress, the Good Eigenfairy materializes and grants you a list of the d distinct eigenvalues (but not the eigenvectors) of A. However, you do not know how many times each eigenvalue is repeated.

Clever person that you are, you mumbled the following algorithm in your sleep this morning:

    Choose an arbitrary starting point x_(0)
    For i ⇐ 0 to d - 1
        r_(i) ⇐ b - A x_(i)
        Remove an arbitrary eigenvalue from the list and call it λ_(i)
        x_(i+1) ⇐ x_(i) + λ_(i)^{-1} r_(i)

No eigenvalue is used twice; on termination, the list is empty.

(a) Show that upon termination of this algorithm, x_(d) is the solution to Ax = b. Hint: Read Section 6.1 carefully. Graph the convergence of a two-dimensional example; draw the eigenvector axes, and try selecting each eigenvalue first. (It is most important to intuitively explain what the algorithm is doing; if you can, also give equations that prove it.)

(b) Although this routine finds the exact solution after d iterations, you would like each intermediate iterate x_(i) to be as close to the solution as possible. Give a crude rule of thumb for how you should choose an eigenvalue from the list on each iteration. (In other words, in what order should the eigenvalues be used?)

(c) What could go terribly wrong with this algorithm if floating point roundoff occurs?

(d) Give a rule of thumb for how you should choose an eigenvalue from the list on each iteration to prevent floating point roundoff error from escalating. Hint: The answer is not the same as that of question (b).

References

[1] Richard Barrett, Michael Berry, Tony Chan, James Demmel, June Donato, Jack Dongarra, Victor Eijkhout, Roldan Pozo, Charles Romine, and Henk van der Vorst, Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods, SIAM, Philadelphia, Pennsylvania, 1993.

[2] William L. Briggs, A Multigrid Tutorial, SIAM, Philadelphia, Pennsylvania, 1987.

[3] James W. Daniel, Convergence of the Conjugate Gradient Method with Computationally Convenient Modifications, Numerische Mathematik 10 (1967), 125-131.

[4] W. C. Davidon, Variable Metric Method for Minimization, Tech. Report ANL-5990, Argonne National Laboratory, Argonne, Illinois, 1959.

[5] R. Fletcher and M. J. D. Powell, A Rapidly Convergent Descent Method for Minimization, Computer Journal 6 (1963), 163-168.

[6] R. Fletcher and C. M. Reeves, Function Minimization by Conjugate Gradients, Computer Journal 7 (1964), 149-154.

[7] L. Fox, H. D. Huskey, and J. H. Wilkinson, Notes on the Solution of Algebraic Linear Simultaneous Equations, Quarterly Journal of Mechanics and Applied Mathematics 1 (1948), 149-173.

[8] Jean Charles Gilbert and Jorge Nocedal, Global Convergence Properties of Conjugate Gradient Methods for Optimization, SIAM Journal on Optimization 2 (1992), no. 1, 21-42.

[9] Gene H. Golub and Dianne P. O'Leary, Some History of the Conjugate Gradient and Lanczos Algorithms: 1948-1976, SIAM Review 31 (1989), no. 1, 50-102.

[10] Magnus R. Hestenes, Iterative Methods for Solving Linear Equations, Journal of Optimization Theory and Applications 11 (1973), no. 4, 323-334. Originally published in 1951 as NAML Report No. 52-9, National Bureau of Standards, Washington, D.C.

[11] Magnus R. Hestenes and Eduard Stiefel, Methods of Conjugate Gradients for Solving Linear Systems, Journal of Research of the National Bureau of Standards 49 (1952), 409-436.

[12] Shmuel Kaniel, Estimates for Some Computational Techniques in Linear Algebra, Mathematics of Computation 20 (1966), 369-378.

[13] John K. Reid, On the Method of Conjugate Gradients for the Solution of Large Sparse Systems of Linear Equations, in Large Sparse Sets of Linear Equations (John K. Reid, ed.), Academic Press, London and New York, 1971, pp. 231-254.

[14] E. Schmidt, Title unknown, Rendiconti del Circolo Matematico di Palermo 25 (1908), 53-77.

[15] Eduard Stiefel, Über einige Methoden der Relaxationsrechnung, Zeitschrift für Angewandte Mathematik und Physik 3 (1952), no. 1, 1-33.

[16] A. van der Sluis and H. A. van der Vorst, The Rate of Convergence of Conjugate Gradients, Numerische Mathematik 48 (1986), no. 5, 543-560.