
A. Useful Relations from Linear Algebra

A basic understanding of the fundamentals of linear algebra is crucial to our development of numerical methods, and it is assumed that the reader is at least familiar with this subject area. Given below is some notation and some of the important relations.

A.1 Notation

(1) In the present context a vector is a vertical column or string. Thus

\[ \bar{v} = \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_m \end{bmatrix} \]

and its transpose \( \bar{v}^T \) is the horizontal row

\[ \bar{v}^T = [v_1, v_2, \ldots, v_m] . \]

(2) A general m × m matrix A can be written

\[ A = (a_{ij}) = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1m} \\ a_{21} & a_{22} & \cdots & a_{2m} \\ \vdots & & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mm} \end{bmatrix} . \]

(3) An alternative notation for A in terms of its columns is

\[ A = [\,\bar{a}_1, \bar{a}_2, \ldots, \bar{a}_m\,] \]

and its transpose A^T is

\[ A^T = \begin{bmatrix} \bar{a}_1^T \\ \vdots \\ \bar{a}_m^T \end{bmatrix} . \]

(4) The inverse of a matrix (if it exists) is written A^{-1} and has the property that A^{-1}A = AA^{-1} = I, where I is the identity matrix.

A.2 Definitions

(1) A is symmetric if A^T = A.
(2) A is skew-symmetric or antisymmetric if A^T = -A.
(3) A is diagonally dominant if a_ii ≥ Σ_{j≠i} |a_ij|, i = 1, 2, ..., m, and a_ii > Σ_{j≠i} |a_ij| for at least one i.
(4) A is orthogonal if the a_ij are real and A^T A = AA^T = I.
(5) \( \bar{A} \) is the complex conjugate of A.
(6) P is a permutation matrix if P\( \bar{v} \) is a simple reordering of \( \bar{v} \).
(7) The trace of a matrix is Σ_i a_ii.
(8) A is normal if A^T A = AA^T.
(9) det[A] is the determinant of A.
(10) A^H is the conjugate (or Hermitian) transpose of A.
(11) If

\[ A = \begin{bmatrix} a & b \\ c & d \end{bmatrix} \]

then det[A] = ad - bc and

\[ A^{-1} = \frac{1}{\det[A]} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix} . \]
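The 2 × 2 inverse formula in (11) is easy to spot-check numerically; the following is a plain-Python sketch with arbitrary sample values, verifying that the constructed inverse times A gives the identity.

```python
# Check the 2x2 inverse formula: A^{-1} = (1/det A) [[d, -b], [-c, a]].
a, b, c, d = 3.0, 1.0, 4.0, 2.0
det = a * d - b * c
assert det != 0.0                      # inverse exists only if det[A] != 0
inv = [[d / det, -b / det], [-c / det, a / det]]
A = [[a, b], [c, d]]

# A^{-1} A should be the identity matrix.
prod = [[sum(inv[i][k] * A[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
for i in range(2):
    for j in range(2):
        assert abs(prod[i][j] - (1.0 if i == j else 0.0)) < 1e-12
```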

A.3 Algebra

We consider only square matrices of the same dimension.

(1) A and B are equal if a_ij = b_ij for all i, j = 1, 2, ..., m.
(2) A + (B + C) = (C + A) + B, etc.
(3) sA = (s a_ij), where s is a scalar.
(4) In general, AB ≠ BA.
(5) Transpose equalities:

\[ (A + B)^T = A^T + B^T , \quad (A^T)^T = A , \quad (AB)^T = B^T A^T . \]

(6) Inverse equalities (if the inverse exists):

\[ (A^{-1})^{-1} = A , \quad (AB)^{-1} = B^{-1} A^{-1} , \quad (A^T)^{-1} = (A^{-1})^T . \]

(7) Any matrix A can be expressed as the sum of a symmetric and a skew-symmetric matrix. Thus

\[ A = \tfrac{1}{2}(A + A^T) + \tfrac{1}{2}(A - A^T) . \]

A.4 Eigensystems

(1) The eigenvalue problem for a matrix A is defined as

\[ A\bar{x} = \lambda \bar{x} \quad \text{or} \quad [A - \lambda I]\bar{x} = 0 \]

and the generalized eigenvalue problem, including the matrix B, as

\[ A\bar{x} = \lambda B\bar{x} \quad \text{or} \quad [A - \lambda B]\bar{x} = 0 . \]

(2) If a matrix with real elements is symmetric, its eigenvalues are all real. If it is skew-symmetric, they are all imaginary.
(3) Gershgorin's theorem: The eigenvalues of a matrix lie in the complex plane in the union of circles having centers located by the diagonal elements, with radii equal to the sum of the absolute values of the corresponding off-diagonal row elements.
(4) In general, an m × m matrix A has n_x linearly independent eigenvectors with n_x ≤ m and n_λ distinct eigenvalues λ_i with n_λ ≤ n_x ≤ m.
(5) A set of eigenvectors is said to be linearly independent if

\[ a\,\bar{x}_m + b\,\bar{x}_n \ne \bar{x}_k , \quad m \ne n \ne k , \]

for any complex a and b and for all combinations of vectors in the set.
(6) If A possesses m linearly independent eigenvectors, then A is diagonalizable, i.e.,

\[ X^{-1} A X = \Lambda , \]

where X is a matrix whose columns are the eigenvectors, and Λ is the diagonal matrix of eigenvalues,

\[ \Lambda = \begin{bmatrix} \lambda_1 & & & \\ & \lambda_2 & & \\ & & \ddots & \\ & & & \lambda_m \end{bmatrix} . \]

If A can be diagonalized, its eigenvectors completely span the space, and A is said to have a complete eigensystem.
(7) If A has m distinct eigenvalues, then A is always diagonalizable. With each distinct eigenvalue there is one associated eigenvector, and this eigenvector cannot be formed from a linear combination of any of the other eigenvectors.
(8) In general, the eigenvalues of a matrix may not be distinct, in which case the possibility exists that it cannot be diagonalized. If the eigenvalues of a matrix are not distinct, but all of the eigenvectors are linearly independent, the matrix is said to be derogatory, and it can still be diagonalized.

(9) If a matrix does not have a complete set of linearly independent eigenvectors, it cannot be diagonalized. The eigenvectors of such a matrix cannot span the space, and the matrix is said to have a defective eigensystem.
(10) Defective matrices cannot be diagonalized, but they can still be put into a compact form by a similarity transform S, such that

\[ S^{-1} A S = J = \begin{bmatrix} J_1 & & & \\ & J_2 & & \\ & & \ddots & \\ & & & J_k \end{bmatrix} , \]

where there are k linearly independent eigenvectors, and J_i is either a Jordan subblock or λ_i.
(11) A Jordan submatrix has the form

\[ J_i = \begin{bmatrix} \lambda_i & 1 & 0 & 0 \\ 0 & \lambda_i & 1 & 0 \\ 0 & 0 & \lambda_i & 1 \\ 0 & 0 & 0 & \lambda_i \end{bmatrix} . \]

(12) Use of the transform S is known as putting A into its Jordan canonical form. A repeated root in a Jordan block is referred to as a defective eigenvalue. For each Jordan submatrix with an eigenvalue λ_i of multiplicity r, there exists one eigenvector. The other r - 1 vectors associated with this eigenvalue are referred to as principal vectors. The complete set of principal vectors and eigenvectors are all linearly independent.
(13) Note that if P is the permutation matrix

\[ P = \begin{bmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{bmatrix} , \quad P^T = P^{-1} = P , \]

then

\[ P^{-1} \begin{bmatrix} \lambda & 1 & 0 \\ 0 & \lambda & 1 \\ 0 & 0 & \lambda \end{bmatrix} P = \begin{bmatrix} \lambda & 0 & 0 \\ 1 & \lambda & 0 \\ 0 & 1 & \lambda \end{bmatrix} . \]

(14) Some of the Jordan blocks may have the same eigenvalue. For example, a 9 × 9 matrix built from Jordan blocks with repeated eigenvalues can be both defective and derogatory, having:
• nine eigenvalues,
• three distinct eigenvalues,
• three Jordan blocks,
• five linearly independent eigenvectors,
• three principal vectors with λ_1,
• one principal vector with λ_2.
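Two of the facts above, that a real symmetric matrix has real eigenvalues (item (2)) and that every eigenvalue lies in a Gershgorin disc (item (3)), can be checked by hand on a 2 × 2 example, where the eigenvalues follow from the quadratic formula. A minimal plain-Python sketch:

```python
import math

# Symmetric 2x2: A = [[2, 1], [1, 5]]. Eigenvalues come from the
# characteristic polynomial lambda^2 - tr*lambda + det = 0.
a11, a12, a22 = 2.0, 1.0, 5.0
tr, det = a11 + a22, a11 * a22 - a12 * a12
disc = tr * tr - 4.0 * det
assert disc >= 0.0                     # symmetric => real eigenvalues
lam1 = 0.5 * (tr + math.sqrt(disc))
lam2 = 0.5 * (tr - math.sqrt(disc))

# Gershgorin: each eigenvalue lies within the disc centered at a diagonal
# element with radius equal to the off-diagonal row sum (here just |a12|).
def in_discs(lam):
    return abs(lam - a11) <= abs(a12) or abs(lam - a22) <= abs(a12)

assert in_discs(lam1) and in_discs(lam2)
```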

A.5 Vector and Matrix Norms

(1) The spectral radius of a matrix A is symbolized by σ(A), such that

\[ \sigma(A) = |\lambda_m|_{\max} , \]

where λ_m are the eigenvalues of the matrix A.
(2) A p-norm of the vector \( \bar{v} \) is defined as

\[ \|\bar{v}\|_p = \left( \sum_{j=1}^{M} |v_j|^p \right)^{1/p} . \]

(3) A p-norm of a matrix A is defined as

\[ \|A\|_p = \max_{\bar{v} \ne 0} \frac{\|A\bar{v}\|_p}{\|\bar{v}\|_p} . \]

(4) Let A and B be square matrices of the same order. All matrix norms must have the properties

\[ \|A\| \ge 0 , \quad \|A\| = 0 \text{ implies } A = 0 \]
\[ \|c \cdot A\| = |c| \cdot \|A\| \]
\[ \|A + B\| \le \|A\| + \|B\| \]
\[ \|A \cdot B\| \le \|A\| \cdot \|B\| . \]
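These four properties can be spot-checked for a particular norm, e.g. the maximum-row-sum norm defined in item (5) below. A plain-Python sketch on arbitrary 2 × 2 examples:

```python
# Check the four matrix-norm properties for the maximum-row-sum norm
# ||A||_inf = max_i sum_j |a_ij|.
def norm_inf(A):
    return max(sum(abs(x) for x in row) for row in A)

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1.0, -2.0], [3.0, 4.0]]
B = [[0.0, 1.0], [-1.0, 2.0]]

assert norm_inf(A) > 0.0                               # positivity
c = -3.0
cA = [[c * x for x in row] for row in A]
assert norm_inf(cA) == abs(c) * norm_inf(A)            # homogeneity
assert norm_inf(add(A, B)) <= norm_inf(A) + norm_inf(B)  # triangle
assert norm_inf(mul(A, B)) <= norm_inf(A) * norm_inf(B)  # submultiplicative
```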

(5) Special p-norms are

\[ \|A\|_1 = \max_{j=1,2,\ldots,M} \sum_{i=1}^{M} |a_{ij}| \quad \text{maximum column sum} \]
\[ \|A\|_\infty = \max_{i=1,2,\ldots,M} \sum_{j=1}^{M} |a_{ij}| \quad \text{maximum row sum} , \]

where \( \|A\|_p \) is referred to as the L_p norm of A.
(6) In general σ(A) does not satisfy the conditions in (4), so in general σ(A) is not a true norm.
(7) When A is normal, σ(A) is a true norm; in fact, in this case it is the L_2 norm.
(8) The spectral radius of A, σ(A), is the lower bound of all the norms of A.

B. Some Properties of Tridiagonal Matrices

B.l Standard Eigensystem for Simple Tridiagonal Matrices

In this work tridiagonal banded matrices are prevalent. It is useful to list some of their properties. Many of these can be derived by solving the simple linear difference equations that arise in deriving recursion relations. Let us consider a simple tridiagonal matrix, i.e. a tridiagonal matrix B(M : a, b, c) with constant scalar elements a, b, and c; see Section 3.3.2. If we examine the conditions under which the determinant of this matrix is zero, we find (by a recursion exercise)

det[B(M : a, b, c)] = 0 if

\[ b + 2\sqrt{ac}\,\cos\left( \frac{m\pi}{M+1} \right) = 0 , \quad m = 1, 2, \ldots, M . \]

From this it follows at once that the eigenvalues of B(a, b, c) are

\[ \lambda_m = b + 2\sqrt{ac}\,\cos\left( \frac{m\pi}{M+1} \right) , \quad m = 1, 2, \ldots, M . \tag{B.1} \]

The right-hand eigenvector of B(a, b, c) that is associated with the eigenvalue λ_m satisfies the equation

\[ B(a, b, c)\,\bar{x}_m = \lambda_m \bar{x}_m \tag{B.2} \]

and is given by

\[ \bar{x}_m = (x_j)_m = \left( \frac{a}{c} \right)^{(j-1)/2} \sin\left[ \frac{jm\pi}{M+1} \right] , \quad j = 1, 2, \ldots, M , \quad m = 1, 2, \ldots, M . \tag{B.3} \]

These vectors are the columns of the right-hand eigenvector matrix, the elements of which are

\[ X = (x_{jm}) = \left( \frac{a}{c} \right)^{(j-1)/2} \sin\left[ \frac{jm\pi}{M+1} \right] , \quad j = 1, 2, \ldots, M , \quad m = 1, 2, \ldots, M . \tag{B.4} \]

Notice that if a = -1 and c = 1,

\[ \left( \frac{a}{c} \right)^{(j-1)/2} = e^{i(j-1)\pi/2} . \tag{B.5} \]

The left-hand eigenvector matrix of B(a, b, c) can be written

\[ X^{-1} = \frac{2}{M+1} \left( \frac{c}{a} \right)^{(m-1)/2} \sin\left[ \frac{mj\pi}{M+1} \right] , \quad m = 1, 2, \ldots, M , \quad j = 1, 2, \ldots, M . \]

In this case notice that if a = -1 and c = 1

\[ \left( \frac{c}{a} \right)^{(m-1)/2} = e^{-i(m-1)\pi/2} . \tag{B.6} \]
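The eigenpairs (B.1) and (B.3) can be verified directly: apply the tridiagonal matrix to the claimed eigenvector and compare with λ_m times the vector. A plain-Python sketch for arbitrary a, b, c with ac > 0:

```python
import math

# Verify (B.1) and (B.3) for B(M: a, b, c): subdiagonal a, diagonal b,
# superdiagonal c, so (Bx)_j = a*x_{j-1} + b*x_j + c*x_{j+1}.
M, a, b, c = 6, 2.0, -1.0, 0.5          # ac = 1 > 0

for m in range(1, M + 1):
    lam = b + 2.0 * math.sqrt(a * c) * math.cos(m * math.pi / (M + 1))
    x = [(a / c) ** ((j - 1) / 2.0) * math.sin(j * m * math.pi / (M + 1))
         for j in range(1, M + 1)]
    xp = [0.0] + x + [0.0]              # pad with boundary zeros
    for j in range(1, M + 1):
        bx = a * xp[j - 1] + b * xp[j] + c * xp[j + 1]
        assert abs(bx - lam * xp[j]) < 1e-10
```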

B.2 Generalized Eigensystem for Simple Tridiagonal Matrices

This system is defined as

\[ \begin{bmatrix} b & c & & \\ a & b & c & \\ & \ddots & \ddots & \ddots \\ & & a & b & c \\ & & & a & b \end{bmatrix} \bar{x} = \lambda \begin{bmatrix} e & f & & \\ d & e & f & \\ & \ddots & \ddots & \ddots \\ & & d & e & f \\ & & & d & e \end{bmatrix} \bar{x} , \]

i.e. B(a, b, c)\( \bar{x} \) = λ B(d, e, f)\( \bar{x} \). In this case one can show after some algebra that

\[ \det[B(a - \lambda d,\; b - \lambda e,\; c - \lambda f)] = 0 \tag{B.7} \]

if

\[ b - \lambda_m e + 2\sqrt{(a - \lambda_m d)(c - \lambda_m f)}\,\cos\left( \frac{m\pi}{M+1} \right) = 0 , \quad m = 1, 2, \ldots, M . \]

If we define

\[ \theta_m = \frac{m\pi}{M+1} , \quad p_m = \cos\theta_m , \]

then

\[ \lambda_m = \frac{eb - 2(cd + af)p_m^2 + 2p_m\sqrt{(ec - fb)(ea - bd) + [(cd - af)p_m]^2}}{e^2 - 4fdp_m^2} . \]

The right-hand eigenvectors are

\[ \bar{x}_m = \left( \frac{a - \lambda_m d}{c - \lambda_m f} \right)^{(j-1)/2} \sin[\,j\theta_m\,] , \quad m = 1, 2, \ldots, M , \quad j = 1, 2, \ldots, M . \]

These relations are useful in studying relaxation methods.

B.3 The Inverse of a Simple Tridiagonal Matrix

The inverse of B(a, b, c) can also be written in analytic form. Let D_M represent the determinant of B(M : a, b, c),

\[ D_M \equiv \det[B(M : a, b, c)] . \]

Defining D_0 to be 1, it is simple to derive the first few determinants, thus

\[ D_0 = 1 , \quad D_1 = b , \quad D_2 = b^2 - ac , \quad D_3 = b^3 - 2abc . \tag{B.8} \]

One can also find the recursion relation

\[ D_M = b D_{M-1} - ac\, D_{M-2} . \tag{B.9} \]

Equation (B.9) is a linear OΔE, the solution of which was discussed in Section 6.3. Its characteristic polynomial is P(E) = E² - bE + ac, and the two roots of P(σ) = 0 result in the solution

\[ D_M = \frac{1}{\sqrt{b^2 - 4ac}} \left[ \left( \frac{b + \sqrt{b^2 - 4ac}}{2} \right)^{M+1} - \left( \frac{b - \sqrt{b^2 - 4ac}}{2} \right)^{M+1} \right] , \quad M = 0, 1, 2, \ldots , \tag{B.10} \]

where we have made use of the initial conditions D_0 = 1 and D_1 = b. In the limiting case when b² - 4ac = 0, one can show that

\[ D_M = (M+1) \left( \frac{b}{2} \right)^M . \]
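The recursion (B.9) and the closed form (B.10) can be cross-checked numerically; the sketch below uses b² - 4ac > 0 so both characteristic roots are real.

```python
import math

# Check the closed form (B.10) against the recursion (B.9),
# D_M = b*D_{M-1} - a*c*D_{M-2}, with D_0 = 1, D_1 = b.
a, b, c = 1.0, 3.0, 1.0              # b^2 - 4ac = 5 > 0
s = math.sqrt(b * b - 4.0 * a * c)
s1, s2 = (b + s) / 2.0, (b - s) / 2.0  # roots of P(sigma) = 0

D = [1.0, b]
for M in range(2, 11):
    D.append(b * D[-1] - a * c * D[-2])

for M in range(11):
    closed = (s1 ** (M + 1) - s2 ** (M + 1)) / (s1 - s2)
    assert abs(closed - D[M]) < 1e-9 * max(1.0, abs(D[M]))
```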

Then for M = 4

\[ B^{-1} = \frac{1}{D_4} \begin{bmatrix} D_3 & -cD_2 & c^2 D_1 & -c^3 \\ -aD_2 & D_1 D_2 & -cD_1 D_1 & c^2 D_1 \\ a^2 D_1 & -aD_1 D_1 & D_2 D_1 & -cD_2 \\ -a^3 & a^2 D_1 & -aD_2 & D_3 \end{bmatrix} \]

and for M = 5

\[ B^{-1} = \frac{1}{D_5} \begin{bmatrix} D_4 & -cD_3 & c^2 D_2 & -c^3 D_1 & c^4 \\ -aD_3 & D_1 D_3 & -cD_1 D_2 & c^2 D_1 D_1 & -c^3 D_1 \\ a^2 D_2 & -aD_1 D_2 & D_2 D_2 & -cD_2 D_1 & c^2 D_2 \\ -a^3 D_1 & a^2 D_1 D_1 & -aD_2 D_1 & D_3 D_1 & -cD_3 \\ a^4 & -a^3 D_1 & a^2 D_2 & -aD_3 & D_4 \end{bmatrix} . \]

The general element d_mn of B^{-1} is as follows.

Upper triangle:

\[ d_{mn} = D_{m-1} D_{M-n} (-c)^{n-m} / D_M , \quad m = 1, 2, \ldots, M-1 , \quad n = m+1, m+2, \ldots, M . \]

Diagonal:

\[ d_{mm} = D_{m-1} D_{M-m} / D_M , \quad m = 1, 2, \ldots, M . \]

Lower triangle:

\[ d_{mn} = D_{M-m} D_{n-1} (-a)^{m-n} / D_M , \quad m = n+1, n+2, \ldots, M , \quad n = 1, 2, \ldots, M-1 . \]
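The element formulas above can be verified by building B(M : a, b, c) and its analytic inverse and checking that their product is the identity. A plain-Python sketch (0-based lists, 1-based m, n in the helper):

```python
# Verify the analytic inverse of B(M: a, b, c) by checking B * B^{-1} = I.
M, a, b, c = 5, 1.0, 4.0, 2.0

D = [1.0, b]                           # D_0, D_1, ... via (B.9)
for k in range(2, M + 1):
    D.append(b * D[-1] - a * c * D[-2])

def d(m, n):                           # 1-based element (m, n) of B^{-1}
    if m < n:
        return D[m - 1] * D[M - n] * (-c) ** (n - m) / D[M]
    if m > n:
        return D[M - m] * D[n - 1] * (-a) ** (m - n) / D[M]
    return D[m - 1] * D[M - m] / D[M]

B = [[b if i == j else a if i == j + 1 else c if i == j - 1 else 0.0
      for j in range(M)] for i in range(M)]
Binv = [[d(i + 1, j + 1) for j in range(M)] for i in range(M)]

for i in range(M):
    for j in range(M):
        s = sum(B[i][k] * Binv[k][j] for k in range(M))
        assert abs(s - (1.0 if i == j else 0.0)) < 1e-9
```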

B.4 Eigensystems of Circulant Matrices

B.4.1 Standard Tridiagonal Matrices

Consider the circulant (see Section 3.3.4) tridiagonal matrix

\[ B_p(M : a, b, c) . \tag{B.11} \]

The eigenvalues are

\[ \lambda_m = b + (a + c)\cos\left( \frac{2\pi m}{M} \right) - i(a - c)\sin\left( \frac{2\pi m}{M} \right) , \quad m = 0, 1, 2, \ldots, M-1 . \tag{B.12} \]

The right-hand eigenvector that satisfies B_p(a, b, c)\( \bar{x}_m \) = λ_m\( \bar{x}_m \) is

\[ \bar{x}_m = (x_j)_m = e^{ij(2\pi m/M)} , \quad j = 0, 1, \ldots, M-1 , \tag{B.13} \]

where i ≡ √-1, and the right-hand eigenvector matrix has the form

\[ X = (x_{jm}) = e^{im(2\pi j/M)} , \quad j = 0, 1, \ldots, M-1 , \quad m = 0, 1, \ldots, M-1 . \]

The left-hand eigenvector matrix with elements x' is

\[ X^{-1} = (x'_{mj}) = \frac{1}{M} e^{-im(2\pi j/M)} , \quad m = 0, 1, \ldots, M-1 , \quad j = 0, 1, \ldots, M-1 . \]

Note that both X and X^{-1} are symmetric and that X^{-1} = \frac{1}{M} X^H, where X^H is the conjugate transpose of X.

B.4.2 General Circulant Systems

Notice the remarkable fact that the elements of the eigenvector matrices X and X^{-1} for the tridiagonal circulant matrix given by (B.11) do not depend on the elements a, b, c in the matrix. In fact, all circulant matrices of order M have the same set of linearly independent eigenvectors, even if they are completely dense. An example of a dense circulant matrix of order M = 4 is

\[ \begin{bmatrix} b_0 & b_1 & b_2 & b_3 \\ b_3 & b_0 & b_1 & b_2 \\ b_2 & b_3 & b_0 & b_1 \\ b_1 & b_2 & b_3 & b_0 \end{bmatrix} . \tag{B.14} \]

The eigenvectors are always given by (B.13), and further examination shows that the elements in these eigenvectors correspond to the elements in a complex harmonic analysis or complex discrete Fourier series.

Although the eigenvectors of a circulant matrix are independent of its elements, the eigenvalues are not. For the element indexing shown in (B.14) they have the general form

\[ \lambda_m = \sum_{j=0}^{M-1} b_j\, e^{i(2\pi jm/M)} , \]

of which (B.12) is a special case.
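The general eigenvalue formula can be verified directly: build a dense circulant matrix, apply it to the DFT eigenvectors of (B.13), and compare with λ_m times the vector. A plain-Python sketch using `cmath`:

```python
import cmath

# Dense circulant of order M with the indexing of (B.14): row i holds
# b_{(j - i) mod M} in column j. Eigenvectors are the DFT vectors
# x_j = exp(2*pi*i*j*m/M); eigenvalues are lambda_m = sum_j b_j w^{jm}.
M = 4
b = [5.0, 1.0, -2.0, 3.0]                       # first row b_0, ..., b_3
C = [[b[(j - i) % M] for j in range(M)] for i in range(M)]

for m in range(M):
    lam = sum(b[j] * cmath.exp(2j * cmath.pi * j * m / M) for j in range(M))
    x = [cmath.exp(2j * cmath.pi * j * m / M) for j in range(M)]
    for i in range(M):
        Cx_i = sum(C[i][j] * x[j] for j in range(M))
        assert abs(Cx_i - lam * x[i]) < 1e-10
```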

B.5 Special Cases Found from Symmetries

Consider a mesh with an even number of interior points. One can seek from the tridiagonal matrix B(2M : a, b, a) the eigenvector subset that has even symmetry when spanning the interval 0 ≤ x ≤ π. For example, we seek the set of eigenvectors \( \bar{x}_m \) for which

\[ \begin{bmatrix} b & a & & & \\ a & b & a & & \\ & \ddots & \ddots & \ddots & \\ & & a & b & a \\ & & & a & b \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_2 \\ x_1 \end{bmatrix} = \lambda_m \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_2 \\ x_1 \end{bmatrix} . \]

This leads to the subsystem of order M, which has the form

\[ \begin{bmatrix} b & a & & & \\ a & b & a & & \\ & \ddots & \ddots & \ddots & \\ & & a & b & a \\ & & & a & b+a \end{bmatrix} \bar{x}_m = \lambda_m \bar{x}_m . \tag{B.15} \]

By folding the known eigenvectors of B(2M : a, b, a) about the center, one can show from previous results that the eigenvalues of (B.15) are

\[ \lambda_m = b + 2a\cos\left( \frac{(2m-1)\pi}{2M+1} \right) , \quad m = 1, 2, \ldots, M , \tag{B.16} \]

and the corresponding eigenvectors are

\[ \bar{x}_m = \sin\left[ \frac{j(2m-1)\pi}{2M+1} \right] , \quad j = 1, 2, \ldots, M . \]

Imposing symmetry about the same interval, but for a mesh with an odd number of points, leads to the matrix

\[ B(M : a, b, a) = \begin{bmatrix} b & a & & & \\ a & b & a & & \\ & \ddots & \ddots & \ddots & \\ & & a & b & a \\ & & & 2a & b \end{bmatrix} . \]

By folding the known eigenvalues of B(2M - 1 : a, b, a) about the center, one can show from previous results that the eigenvalues of this matrix are

\[ \lambda_m = b + 2a\cos\left( \frac{(2m-1)\pi}{2M} \right) , \quad m = 1, 2, \ldots, M , \]

and the corresponding eigenvectors are

\[ \bar{x}_m = \sin\left[ \frac{j(2m-1)\pi}{2M} \right] , \quad j = 1, 2, \ldots, M . \]
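The folded eigensystem (B.16) is easy to confirm numerically: apply the order-M matrix with the modified corner entry b + a to the sine eigenvector and compare with λ_m times the vector. A plain-Python sketch:

```python
import math

# Verify (B.16): the folded system B(M: a, b, a) with corner entry b + a
# has eigenvalues b + 2a*cos((2m-1)*pi/(2M+1)) and sine eigenvectors.
M, a, b = 5, 1.5, -2.0

def matvec(x):
    y = []
    for j in range(M):
        v = b * x[j]
        if j > 0:
            v += a * x[j - 1]
        if j < M - 1:
            v += a * x[j + 1]
        if j == M - 1:
            v += a * x[j]              # corner entry is b + a
        y.append(v)
    return y

for m in range(1, M + 1):
    t = (2 * m - 1) * math.pi / (2 * M + 1)
    lam = b + 2.0 * a * math.cos(t)
    x = [math.sin((j + 1) * t) for j in range(M)]
    y = matvec(x)
    for j in range(M):
        assert abs(y[j] - lam * x[j]) < 1e-10
```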

B.6 Special Cases Involving Boundary Conditions

We consider two special cases for the matrix operator representing the 3-point central difference approximation for the second derivative ∂²/∂x² at all points away from the boundaries, combined with special conditions imposed at the boundaries.

Note: In both cases

\[ m = 1, 2, \ldots, M , \quad j = 1, 2, \ldots, M , \quad -2 + 2\cos(\alpha) = -4\sin^2(\alpha/2) . \]

When the boundary conditions are Dirichlet on both sides,

\[ \begin{bmatrix} -2 & 1 & & & \\ 1 & -2 & 1 & & \\ & 1 & -2 & 1 & \\ & & 1 & -2 & 1 \\ & & & 1 & -2 \end{bmatrix} , \quad \lambda_m = -2 + 2\cos\left( \frac{m\pi}{M+1} \right) , \quad \bar{x}_m = \sin\left[ j\left( \frac{m\pi}{M+1} \right) \right] . \tag{B.17} \]

When one boundary condition is Dirichlet and the other is Neumann (and a diagonal preconditioner is applied to scale the last equation),

\[ \begin{bmatrix} -2 & 1 & & & \\ 1 & -2 & 1 & & \\ & 1 & -2 & 1 & \\ & & 1 & -2 & 1 \\ & & & 1 & -1 \end{bmatrix} , \quad \lambda_m = -2 + 2\cos\left[ \frac{(2m-1)\pi}{2M+1} \right] , \quad \bar{x}_m = \sin\left[ j\left( \frac{(2m-1)\pi}{2M+1} \right) \right] . \tag{B.18} \]

C. The Homogeneous Property of the Euler Equations

The Euler equations have a special property that is sometimes useful in constructing numerical methods. In order to examine this property, let us first inspect Euler's theorem on homogeneous functions. Consider first the scalar case. If F(u, v) satisfies the identity

\[ F(\alpha u, \alpha v) = \alpha^n F(u, v) \tag{C.1} \]

for a fixed n, F is called homogeneous of degree n. Differentiating both sides with respect to α and setting α = 1 (since the identity holds for all α), we find

\[ u \frac{\partial F}{\partial u} + v \frac{\partial F}{\partial v} = n F(u, v) . \tag{C.2} \]
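Identity (C.2) can be checked numerically for any homogeneous function; the sketch below uses the made-up degree-2 example F(u, v) = u² + uv and central-difference derivatives.

```python
# Euler's theorem (C.2) for a function homogeneous of degree n = 2:
# F(u, v) = u^2 + u*v satisfies u*dF/du + v*dF/dv = 2*F(u, v).
def F(u, v):
    return u * u + u * v

u, v, h = 1.3, 0.7, 1e-6
dFdu = (F(u + h, v) - F(u - h, v)) / (2.0 * h)   # central differences
dFdv = (F(u, v + h) - F(u, v - h)) / (2.0 * h)
assert abs(u * dFdu + v * dFdv - 2.0 * F(u, v)) < 1e-7
```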

Consider next the theorem as it applies to systems of equations. If the vector F(Q) satisfies the identity

\[ F(\alpha Q) = \alpha^n F(Q) \tag{C.3} \]

for a fixed n, F is said to be homogeneous of degree n, and we find

\[ \left[ \frac{\partial F}{\partial Q} \right] Q = n F(Q) . \tag{C.4} \]
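For n = 1, (C.4) says the Jacobian times Q reproduces F exactly. The sketch below checks this on a made-up degree-1 vector function (not the actual Euler flux), with its analytic Jacobian.

```python
# The vector form (C.4) with n = 1: for F homogeneous of degree 1,
# [dF/dQ] Q = F(Q), i.e. F = AQ exactly. Hypothetical example:
# F(Q) = (q2^2/q1, q1 + q2), which is homogeneous of degree 1.
def F(q1, q2):
    return [q2 * q2 / q1, q1 + q2]

q1, q2 = 2.0, 3.0
# Analytic Jacobian A = dF/dQ
A = [[-q2 * q2 / (q1 * q1), 2.0 * q2 / q1],
     [1.0, 1.0]]
AQ = [A[0][0] * q1 + A[0][1] * q2, A[1][0] * q1 + A[1][1] * q2]
Fq = F(q1, q2)
assert abs(AQ[0] - Fq[0]) < 1e-12 and abs(AQ[1] - Fq[1]) < 1e-12
```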

Now it is easy to show, by direct use of (C.3), that both E and F in (2.11) and (2.12) are homogeneous of degree 1, and their Jacobians, A and B, are homogeneous of degree 0 (actually the latter is a direct consequence of the former).¹ This being the case, we notice that the expansion of the flux vector in the vicinity of t_n, which, according to (6.105), can be written in general as

\[ E = E_n + A_n(Q - Q_n) + O(h^2) , \quad F = F_n + B_n(Q - Q_n) + O(h^2) , \tag{C.5} \]

becomes

¹ Note that this depends on the form of the equation of state. The Euler equations are homogeneous if the equation of state can be written in the form p = ρf(e), where e is the internal energy per unit mass.

\[ E = A_n Q + O(h^2) , \quad F = B_n Q + O(h^2) , \tag{C.6} \]

since the terms E_n - A_n Q_n and F_n - B_n Q_n are identically zero for homogeneous vectors of degree 1 (see (C.4)). Notice also that, under this condition, the constant term drops out of (6.106). As a final remark, we notice from the chain rule that for any vectors F and Q

\[ \frac{\partial F(Q)}{\partial x} = \left[ \frac{\partial F}{\partial Q} \right] \frac{\partial Q}{\partial x} = A \frac{\partial Q}{\partial x} . \tag{C.7} \]

We notice also that, for a homogeneous F of degree 1, F = AQ and

\[ \frac{\partial F}{\partial x} = \frac{\partial [AQ]}{\partial x} = A \frac{\partial Q}{\partial x} + \left[ \frac{\partial A}{\partial x} \right] Q . \tag{C.8} \]

Therefore, if F is homogeneous of degree 1,

\[ \left[ \frac{\partial A}{\partial x} \right] Q = 0 , \tag{C.9} \]

in spite of the fact that individually [∂A/∂x] and Q are not equal to zero.
