
MATH 304 Linear Algebra
Lecture 36: Complexification. Symmetric and orthogonal matrices.

Complex numbers

$\mathbb{C}$: complex numbers. These are numbers of the form $z = x + iy$, where $x, y \in \mathbb{R}$ and $i^2 = -1$, i.e., $i = \sqrt{-1}$. Alternative notation: $z = x + yi$.

$x$ = real part of $z$, $iy$ = imaginary part of $z$.
$y = 0 \implies z = x$ (real number);
$x = 0 \implies z = iy$ (purely imaginary number).

We add, subtract, and multiply complex numbers as polynomials in $i$ (but keep in mind that $i^2 = -1$). If $z_1 = x_1 + iy_1$ and $z_2 = x_2 + iy_2$, then

\[
z_1 + z_2 = (x_1 + x_2) + i(y_1 + y_2), \qquad
z_1 - z_2 = (x_1 - x_2) + i(y_1 - y_2),
\]
\[
z_1 z_2 = (x_1 x_2 - y_1 y_2) + i(x_1 y_2 + x_2 y_1).
\]

Given $z = x + iy$, the complex conjugate of $z$ is $\bar z = x - iy$. The modulus of $z$ is $|z| = \sqrt{x^2 + y^2}$. Note that
\[
z \bar z = (x + iy)(x - iy) = x^2 - (iy)^2 = x^2 + y^2 = |z|^2,
\]
hence
\[
z^{-1} = \frac{\bar z}{|z|^2}, \qquad (x + iy)^{-1} = \frac{x - iy}{x^2 + y^2}.
\]

Complex exponentials

Definition. For any $z \in \mathbb{C}$ let
\[
e^z = 1 + z + \frac{z^2}{2!} + \dots + \frac{z^n}{n!} + \dots
\]

Remark. A sequence of complex numbers $z_1 = x_1 + iy_1$, $z_2 = x_2 + iy_2, \dots$ converges to $z = x + iy$ if $x_n \to x$ and $y_n \to y$ as $n \to \infty$.

Theorem 1. If $z = x + iy$, $x, y \in \mathbb{R}$, then
\[
e^z = e^x (\cos y + i \sin y).
\]
In particular, $e^{i\phi} = \cos\phi + i\sin\phi$ for $\phi \in \mathbb{R}$.

Theorem 2. $e^{z+w} = e^z \cdot e^w$ for all $z, w \in \mathbb{C}$.

Proposition. $e^{i\phi} = \cos\phi + i\sin\phi$ for all $\phi \in \mathbb{R}$.

Proof: $e^{i\phi} = 1 + i\phi + \frac{(i\phi)^2}{2!} + \dots + \frac{(i\phi)^n}{n!} + \dots$
The sequence $1, i, i^2, i^3, \dots, i^n, \dots$ is periodic: $1, i, -1, -i, 1, i, -1, -i, \dots$ It follows that
\[
e^{i\phi} = \Bigl(1 - \frac{\phi^2}{2!} + \frac{\phi^4}{4!} - \dots + (-1)^k \frac{\phi^{2k}}{(2k)!} + \dots\Bigr)
+ i \Bigl(\phi - \frac{\phi^3}{3!} + \frac{\phi^5}{5!} - \dots + (-1)^k \frac{\phi^{2k+1}}{(2k+1)!} + \dots\Bigr)
= \cos\phi + i\sin\phi.
\]
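These identities are easy to sanity-check numerically. Below is a minimal sketch using Python's built-in complex type and the standard `cmath`/`math` modules; the sample values `z1`, `z2`, and `phi` are arbitrary choices, not part of the lecture.

```python
import cmath
import math

z1, z2 = 3 + 4j, 1 - 2j

# Multiplication matches (x1*x2 - y1*y2) + i*(x1*y2 + x2*y1).
print(z1 * z2)              # (11-2j)

# z * conj(z) = |z|^2, and |3+4i| = 5.
print(z1 * z1.conjugate())  # (25+0j)
print(abs(z1))              # 5.0

# Euler's formula: e^{i*phi} = cos(phi) + i*sin(phi).
phi = 0.7
print(cmath.exp(1j * phi))
print(complex(math.cos(phi), math.sin(phi)))  # same value
```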

Geometric representation

Any complex number $z = x + iy$ is represented by the vector/point $(x, y) \in \mathbb{R}^2$.

[Figure: the point $z = x + iy$ in the plane, at distance $r$ from the origin and at angle $\phi$ from the positive $x$-axis.]

If $x = r\cos\phi$ and $y = r\sin\phi$, then
\[
z = r(\cos\phi + i\sin\phi) = re^{i\phi}.
\]
If $z_1 = r_1 e^{i\phi_1}$ and $z_2 = r_2 e^{i\phi_2}$, then
\[
z_1 z_2 = r_1 r_2 e^{i(\phi_1 + \phi_2)}, \qquad z_1/z_2 = (r_1/r_2) e^{i(\phi_1 - \phi_2)}.
\]

Fundamental Theorem of Algebra

Any polynomial of degree $n \ge 1$ with complex coefficients has exactly $n$ roots (counting with multiplicities).

Equivalently, if
\[
p(z) = a_n z^n + a_{n-1} z^{n-1} + \dots + a_1 z + a_0,
\]
where $a_i \in \mathbb{C}$ and $a_n \ne 0$, then there exist complex numbers $z_1, z_2, \dots, z_n$ such that

\[
p(z) = a_n (z - z_1)(z - z_2) \cdots (z - z_n).
\]
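The theorem can be illustrated numerically: `numpy.roots` returns all $n$ complex roots of a degree-$n$ polynomial. A minimal sketch, with the cubic $z^3 - 1$ as our sample polynomial (not from the lecture):

```python
import numpy as np

# p(z) = z^3 - 1, coefficients listed from a_n down to a_0.
roots = np.roots([1, 0, 0, -1])
print(roots)  # the three cube roots of unity

# Check the factorization p(z) = a_n*(z - z_1)(z - z_2)(z - z_3)
# at an arbitrary test point.
z = 0.5 + 0.25j
print(np.isclose(np.prod(z - roots), z**3 - 1))  # True
```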

Complex eigenvalues/eigenvectors

Example. $A = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$. Then $\det(A - \lambda I) = \lambda^2 + 1$.

Characteristic roots: $\lambda_1 = i$ and $\lambda_2 = -i$.
Associated eigenvectors: $v_1 = (1, -i)$ and $v_2 = (1, i)$:
\[
\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}
\begin{pmatrix} 1 \\ -i \end{pmatrix}
= \begin{pmatrix} i \\ 1 \end{pmatrix}
= i \begin{pmatrix} 1 \\ -i \end{pmatrix},
\qquad
\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}
\begin{pmatrix} 1 \\ i \end{pmatrix}
= \begin{pmatrix} -i \\ 1 \end{pmatrix}
= -i \begin{pmatrix} 1 \\ i \end{pmatrix}.
\]
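The same eigenpairs can be found numerically; a sketch using `numpy.linalg.eig`, which works over $\mathbb{C}$ even for real input (NumPy scales eigenvectors to unit length, so they agree with $v_1, v_2$ only up to a scalar factor):

```python
import numpy as np

A = np.array([[0.0, -1.0],
              [1.0,  0.0]])

eigvals, eigvecs = np.linalg.eig(A)
print(eigvals)  # [0.+1.j 0.-1.j], i.e. lambda = i and lambda = -i

# Verify A v = lambda v for each eigenpair (eigenvectors are the columns).
for lam, v in zip(eigvals, eigvecs.T):
    print(np.allclose(A @ v, lam * v))  # True, True
```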

$v_1, v_2$ is a basis of eigenvectors. In which space?

Complexification

Instead of the real vector space $\mathbb{R}^2$, we consider a complex vector space $\mathbb{C}^2$ (all complex numbers are admissible as scalars). The linear operator $f : \mathbb{R}^2 \to \mathbb{R}^2$, $f(x) = Ax$, is extended to a complex linear operator $F : \mathbb{C}^2 \to \mathbb{C}^2$, $F(x) = Ax$. The vectors $v_1 = (1, -i)$ and $v_2 = (1, i)$ form a basis for $\mathbb{C}^2$.

$\mathbb{C}^2$ is also a real vector space (of real dimension 4). The standard real basis for $\mathbb{C}^2$ is $e_1 = (1, 0)$, $e_2 = (0, 1)$, $ie_1 = (i, 0)$, $ie_2 = (0, i)$. The matrix of the operator $F$ with respect to this basis has the block structure $\begin{pmatrix} A & O \\ O & A \end{pmatrix}$.

Dot product of complex vectors

Dot product of real vectors $x = (x_1, \dots, x_n)$, $y = (y_1, \dots, y_n) \in \mathbb{R}^n$:
\[
x \cdot y = x_1 y_1 + x_2 y_2 + \dots + x_n y_n.
\]
Dot product of complex vectors $x = (x_1, \dots, x_n)$, $y = (y_1, \dots, y_n) \in \mathbb{C}^n$:
\[
x \cdot y = x_1 \bar y_1 + x_2 \bar y_2 + \dots + x_n \bar y_n.
\]
If $z = r + it$ ($r, t \in \mathbb{R}$) then $\bar z = r - it$ and $z \bar z = r^2 + t^2 = |z|^2$. Hence
\[
x \cdot x = |x_1|^2 + |x_2|^2 + \dots + |x_n|^2 \ge 0.
\]
Also, $x \cdot x = 0$ if and only if $x = 0$. The norm is defined by $\|x\| = \sqrt{x \cdot x}$.
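A short numerical sketch of this dot product. Note that the convention above conjugates the *second* factor, while `numpy.vdot` conjugates its *first* argument, so the arguments must be swapped; the sample vectors are ours.

```python
import numpy as np

x = np.array([1 + 1j, 2 - 1j])
y = np.array([2j, 1 + 0j])

# x . y = x_1*conj(y_1) + ... + x_n*conj(y_n)
print(np.sum(x * np.conj(y)))
print(np.vdot(y, x))          # same value: vdot conjugates its first argument

# x . x = |x_1|^2 + ... + |x_n|^2 is real and nonnegative.
print(np.sum(x * np.conj(x)).real)  # 7.0 = |1+i|^2 + |2-i|^2 = 2 + 5
print(np.linalg.norm(x))            # sqrt(7), the norm ||x||
```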

Normal matrices

Definition. An $n \times n$ matrix $A$ is called
• symmetric if $A^T = A$;
• orthogonal if $AA^T = A^T A = I$, i.e., $A^T = A^{-1}$;
• normal if $AA^T = A^T A$.

Theorem. Let $A$ be an $n \times n$ matrix with real entries. Then
(a) $A$ is normal $\iff$ there exists an orthonormal basis for $\mathbb{C}^n$ consisting of eigenvectors of $A$;
(b) $A$ is symmetric $\iff$ there exists an orthonormal basis for $\mathbb{R}^n$ consisting of eigenvectors of $A$.

Example. $A = \begin{pmatrix} 1 & 0 & 1 \\ 0 & 3 & 0 \\ 1 & 0 & 1 \end{pmatrix}$.
• $A$ is symmetric.
• $A$ has three eigenvalues: 0, 2, and 3.
• Associated eigenvectors are $v_1 = (-1, 0, 1)$, $v_2 = (1, 0, 1)$, and $v_3 = (0, 1, 0)$, respectively.
• The vectors $\frac{1}{\sqrt 2} v_1$, $\frac{1}{\sqrt 2} v_2$, $v_3$ form an orthonormal basis for $\mathbb{R}^3$.

Theorem. Suppose $A$ is a normal matrix. Then for any $x \in \mathbb{C}^n$ and $\lambda \in \mathbb{C}$ one has
\[
Ax = \lambda x \iff A^T x = \bar\lambda x.
\]
Thus any normal matrix $A$ shares with $A^T$ all real eigenvalues and the corresponding eigenvectors. Also, $Ax = \lambda x \iff A\bar x = \bar\lambda \bar x$ for any matrix $A$ with real entries.
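The 3×3 example can be checked with `numpy.linalg.eigh`, which is specialized to symmetric (Hermitian) matrices and returns real eigenvalues together with an orthonormal eigenbasis; a minimal sketch:

```python
import numpy as np

A = np.array([[1.0, 0.0, 1.0],
              [0.0, 3.0, 0.0],
              [1.0, 0.0, 1.0]])

eigvals, Q = np.linalg.eigh(A)  # eigenvalues in ascending order
print(eigvals)                  # [0. 2. 3.]

# The columns of Q are the normalized eigenvectors and are orthonormal.
print(np.allclose(Q.T @ Q, np.eye(3)))             # True
print(np.allclose(Q.T @ A @ Q, np.diag(eigvals)))  # True: A diagonalized
```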

Corollary. All eigenvalues $\lambda$ of a symmetric matrix are real ($\bar\lambda = \lambda$). All eigenvalues $\lambda$ of an orthogonal matrix satisfy $\bar\lambda = \lambda^{-1}$, which is equivalent to $|\lambda| = 1$.

Indeed, if $Ax = \lambda x$ with $x \ne 0$, then $A^T x = \bar\lambda x$; for symmetric $A$ (where $A^T = A$) this forces $\bar\lambda = \lambda$, while for orthogonal $A$ it gives $x = A^T A x = \bar\lambda \lambda x$, so $|\lambda|^2 = 1$.
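Both statements are easy to observe numerically; a sketch with one symmetric and one orthogonal sample matrix, both chosen by us:

```python
import numpy as np

# Symmetric matrix: all eigenvalues are real.
S = np.array([[2.0, 1.0],
              [1.0, 2.0]])
print(np.linalg.eigvals(S))  # [3. 1.]: real

# Orthogonal matrix (rotation by 0.7 rad): eigenvalues have |lambda| = 1.
phi = 0.7
Q = np.array([[np.cos(phi), -np.sin(phi)],
              [np.sin(phi),  np.cos(phi)]])
print(np.abs(np.linalg.eigvals(Q)))  # [1. 1.]
```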

Why are orthogonal matrices called orthogonal?

Theorem. Given an $n \times n$ matrix $A$, the following conditions are equivalent:
(i) $A$ is orthogonal: $A^T = A^{-1}$;
(ii) the columns of $A$ form an orthonormal basis for $\mathbb{R}^n$;
(iii) the rows of $A$ form an orthonormal basis for $\mathbb{R}^n$.

Proof: The entries of the matrix $A^T A$ are dot products of columns of $A$, and the entries of $AA^T$ are dot products of rows of $A$, so either product equals $I$ exactly when the corresponding vectors are orthonormal.
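The proof translates directly into a numerical check: $A^T A = I$ says exactly that the columns are pairwise orthogonal unit vectors. A sketch, using a rotation matrix (our choice) as the orthogonal matrix:

```python
import numpy as np

phi = 0.7
A = np.array([[np.cos(phi), -np.sin(phi)],
              [np.sin(phi),  np.cos(phi)]])

# (A^T A)_{jk} is the dot product of columns j and k of A,
# so A^T A = I says the columns are orthonormal.
print(np.allclose(A.T @ A, np.eye(2)))  # True
print(np.allclose(A @ A.T, np.eye(2)))  # True: rows are orthonormal too

# Column dot products, computed directly.
print(A[:, 0] @ A[:, 0], A[:, 0] @ A[:, 1])  # 1.0, 0.0
```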

Thus an orthogonal matrix is the transition matrix from one orthonormal basis to another.

Example. $A_\phi = \begin{pmatrix} \cos\phi & -\sin\phi \\ \sin\phi & \cos\phi \end{pmatrix}$ (rotation of the plane by angle $\phi$).

• $A_\phi A_\psi = A_{\phi+\psi}$
• $A_\phi^{-1} = A_{-\phi} = A_\phi^T$
• $A_\phi$ is orthogonal
• $\det(A_\phi - \lambda I) = (\cos\phi - \lambda)^2 + \sin^2\phi$
• Eigenvalues: $\lambda_1 = \cos\phi + i\sin\phi = e^{i\phi}$, $\lambda_2 = \cos\phi - i\sin\phi = e^{-i\phi}$
• Associated eigenvectors: $v_1 = (1, -i)$, $v_2 = (1, i)$
• The vectors $\frac{1}{\sqrt 2} v_1$ and $\frac{1}{\sqrt 2} v_2$ form an orthonormal basis for $\mathbb{C}^2$

Consider a linear operator $L : \mathbb{R}^n \to \mathbb{R}^n$, $L(x) = Ax$, where $A$ is an $n \times n$ orthogonal matrix.

Theorem. There exists an orthonormal basis for $\mathbb{R}^n$ such that the matrix of $L$ relative to this basis has the diagonal block structure

\[
\begin{pmatrix}
D_{\pm 1} & O & \cdots & O \\
O & R_1 & \cdots & O \\
\vdots & \vdots & \ddots & \vdots \\
O & O & \cdots & R_k
\end{pmatrix},
\]
where $D_{\pm 1}$ is a diagonal matrix whose diagonal entries are equal to $1$ or $-1$, and
\[
R_j = \begin{pmatrix} \cos\phi_j & -\sin\phi_j \\ \sin\phi_j & \cos\phi_j \end{pmatrix}, \qquad \phi_j \in \mathbb{R}.
\]
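This block structure can be exhibited numerically via the real Schur decomposition: for a normal (in particular, orthogonal) matrix the quasi-triangular Schur factor is in fact block diagonal, up to rounding error. A sketch using `scipy.linalg.schur` on a random orthogonal matrix, which we construct via QR factorization of a Gaussian matrix:

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(0)

# Random 5x5 orthogonal matrix: the Q factor of a Gaussian matrix.
A, _ = np.linalg.qr(rng.standard_normal((5, 5)))
print(np.allclose(A.T @ A, np.eye(5)))  # True: A is orthogonal

# Real Schur form A = Z T Z^T with Z orthogonal. Since A is normal,
# T comes out (numerically) block diagonal, with 1x1 blocks equal to
# +/-1 and 2x2 rotation blocks R_j as in the theorem.
T, Z = schur(A, output='real')
print(np.round(T, 6))
print(np.allclose(Z @ T @ Z.T, A))      # True
```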