
APPENDIX I

Complex Numbers

The complex numbers are a set of objects which can be added and multiplied, the sum and product of two complex numbers being also a complex number, and satisfy the following conditions. (1) Every real number is a complex number, and if α, β are real numbers, then their sum and product as complex numbers are the same as their sum and product as real numbers. (2) There is a complex number denoted by i such that i² = −1. (3) Every complex number can be written uniquely in the form a + bi where a, b are real numbers. (4) The ordinary laws of arithmetic concerning addition and multiplication are satisfied. We list these laws: If α, β, γ are complex numbers, then

(αβ)γ = α(βγ) and (α + β) + γ = α + (β + γ). We have α(β + γ) = αβ + αγ, and (β + γ)α = βα + γα. We have αβ = βα, and α + β = β + α. If 1 is the real number one, then 1α = α. If 0 is the real number zero, then 0α = 0. We have α + (−1)α = 0. We shall now draw consequences of these properties. With each complex number a + bi, we associate the vector (a, b) in the plane. Let α = a₁ + a₂i and β = b₁ + b₂i be two complex numbers. Then

α + β = (a₁ + b₁) + (a₂ + b₂)i.

Hence addition of complex numbers is carried out "componentwise" and corresponds to addition of vectors in the plane. For example,

(2 + 3i) + (−1 + 5i) = 1 + 8i.

In multiplying complex numbers, we use the rule i² = −1 to simplify a product and to put it in the form a + bi. For instance, let α = 2 + 3i and β = 1 − i. Then

αβ = (2 + 3i)(1 − i) = 2(1 − i) + 3i(1 − i)
   = 2 − 2i + 3i − 3i²
   = 2 + i − 3(−1)
   = 2 + 3 + i

= 5 + i.
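The two worked examples can be checked with Python's built-in complex type, which implements exactly these rules (a minimal sketch; Python writes the imaginary unit as j):

```python
alpha = 2 + 3j              # 2 + 3i (Python writes i as j)
beta = 1 - 1j               # 1 - i

s = (2 + 3j) + (-1 + 5j)    # componentwise addition
p = alpha * beta            # multiplication simplifies with i^2 = -1

print(s)                    # (1+8j)
print(p)                    # (5+1j)
```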

Let α = a + bi be a complex number. We define ᾱ to be a − bi. Thus if α = 2 + 3i, then ᾱ = 2 − 3i. The complex number ᾱ is called the conjugate of α. We see at once that

αᾱ = a² + b².

With the vector interpretation of complex numbers, we see that αᾱ is the square of the distance of the point (a, b) from the origin. We now have one more important property of complex numbers, which will allow us to divide by complex numbers other than 0. If α = a + bi is a complex number ≠ 0, and if we let

λ = (a − bi)/(a² + b²) = ᾱ/(a² + b²),

then αλ = λα = 1. The proof of this property is an immediate consequence of the law of multiplication of complex numbers, because

αλ = (a + bi)(a − bi)/(a² + b²) = (a² + b²)/(a² + b²) = 1.

The number λ above is called the inverse of α, and is denoted by α⁻¹ or 1/α. If α, β are complex numbers, we often write β/α instead of α⁻¹β (or βα⁻¹), just as we did with real numbers. We see that we can divide by complex numbers ≠ 0.
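The conjugate and the inverse λ = ᾱ/(a² + b²) can likewise be checked numerically (a sketch; the names alpha and lam are ours, not the text's):

```python
alpha = 2 + 3j
a, b = alpha.real, alpha.imag

conj = complex(a, -b)               # the conjugate a - bi
lam = conj / (a*a + b*b)            # lambda = (a - bi)/(a^2 + b^2)

print(alpha * conj)                 # (13+0j): alpha times its conjugate is a^2 + b^2
print(alpha * lam)                  # the inverse: the product is 1 (up to rounding)
print(1 / alpha)                    # Python's own division agrees with lam
```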

We define the absolute value of a complex number α = a₁ + ia₂ to be

|α| = √(a₁² + a₂²).

This absolute value is none other than the norm of the vector (a₁, a₂). In terms of absolute values, we can write

α⁻¹ = ᾱ/|α|²,

provided α ≠ 0. The triangle inequality for the norm of vectors can now be stated for complex numbers. If α, β are complex numbers, then

|α + β| ≤ |α| + |β|.
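The formula |α| = √(a² + b²), the triangle inequality, and the multiplicativity |αβ| = |α||β| can be spot-checked on random complex numbers (a sketch using Python's built-in abs):

```python
import math, random

def norm(z):
    # |a + bi| = sqrt(a^2 + b^2), the norm of the vector (a, b)
    return math.sqrt(z.real**2 + z.imag**2)

random.seed(0)
for _ in range(1000):
    a = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    b = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    assert abs(norm(a) - abs(a)) < 1e-12               # agrees with built-in abs
    assert norm(a + b) <= norm(a) + norm(b) + 1e-12    # triangle inequality
    assert abs(norm(a*b) - norm(a)*norm(b)) < 1e-9     # |ab| = |a||b|
print("ok")
```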

Another property of the absolute value is given in Exercise 5. Using some elementary facts of analysis, we shall now prove:

Theorem. The complex numbers are algebraically closed; in other words, every polynomial f ∈ C[t] of degree ≥ 1 has a root in C.

Proof. We may write

f(t) = aₙtⁿ + aₙ₋₁tⁿ⁻¹ + ⋯ + a₀

with aₙ ≠ 0. For every real R > 0, the function |f| such that

t ↦ |f(t)| is continuous on the closed disc of radius R, and hence has a minimum value on this disc. On the other hand, from the expression

f(t) = aₙtⁿ(1 + aₙ₋₁/(aₙt) + ⋯ + a₀/(aₙtⁿ))

we see that when |t| becomes large, then |f(t)| also becomes large, i.e. given C > 0 there exists R > 0 such that if |t| > R then |f(t)| > C. Consequently, there exists a positive number R₀ such that, if z₀ is a minimum point of |f| on the closed disc of radius R₀, then

|f(t)| ≥ |f(z₀)| for all complex numbers t. In other words, z₀ is an absolute minimum for |f|. We shall prove that f(z₀) = 0.

We express f in the form

f(t) = c₀ + c₁(t − z₀) + ⋯ + cₙ(t − z₀)ⁿ

with constants cᵢ. (We did it in the text, but one also sees it by writing t = z₀ + (t − z₀) and substituting directly in f(t).) If f(z₀) ≠ 0, then c₀ = f(z₀) ≠ 0. Let z = t − z₀, and let m be the smallest integer > 0

such that cₘ ≠ 0. This integer m exists because f is assumed to have degree ≥ 1. Then we can write

f(t) = f₁(z) = c₀ + cₘzᵐ + zᵐ⁺¹g(z)

for some polynomial g, and some polynomial f₁ (obtained from f by changing the variable). Let z₁ be a complex number such that

cₘz₁ᵐ = −c₀,

and consider values of z of type

z = λz₁,

where λ is real, 0 ≤ λ ≤ 1. We have

f(t) = f₁(λz₁) = c₀ − λᵐc₀ + λᵐ⁺¹z₁ᵐ⁺¹g(λz₁)

= c₀[1 − λᵐ + λᵐ⁺¹z₁ᵐ⁺¹c₀⁻¹g(λz₁)].

There exists a number C > 0 such that for all λ with 0 ≤ λ ≤ 1 we have |z₁ᵐ⁺¹c₀⁻¹g(λz₁)| ≤ C, and hence

|f₁(λz₁)| ≤ |c₀|(1 − λᵐ + Cλᵐ⁺¹).

If we can now prove that for sufficiently small λ with 0 < λ < 1 we have

0 < 1 − λᵐ + Cλᵐ⁺¹ < 1,

then for such λ we get |f₁(λz₁)| < |c₀|, thereby contradicting the hypothesis that |f(z₀)| ≤ |f(t)| for all complex numbers t. The left inequality is of course obvious since 0 < λ < 1. The right inequality amounts to Cλᵐ⁺¹ < λᵐ, or equivalently Cλ < 1, which is certainly satisfied for sufficiently small λ. This concludes the proof.
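The two analytic facts used in the proof — |f| becomes large for large |t| and attains a minimum, at which it vanishes — can be illustrated numerically (not a proof; we pick f(t) = t² + 1, whose roots are ±i, and search a grid in the disc of radius 2):

```python
def f(t):
    return t*t + 1          # roots at t = i and t = -i

# grid of step 0.02 restricted to the closed disc of radius 2
pts = [complex(x/50.0, y/50.0) for x in range(-100, 101) for y in range(-100, 101)]
pts = [t for t in pts if abs(t) <= 2]
zmin = min(pts, key=lambda t: abs(f(t)))

print(abs(f(zmin)))           # 0.0 -- the minimum of |f| is attained at a root
print(abs(zmin))              # 1.0 -- the minimizer is one of the points +-i
print(abs(f(2)), abs(f(2j)))  # |f| is large on the boundary of the disc
```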

EXERCISES

1. Express the following complex numbers in the form x + iy, where x, y are real numbers.
(a) (−1 + 3i)⁻¹ (b) (1 + i)(1 − i) (c) (1 + i)i(2 − i) (d) (i − 1)(2 − i)
(e) (7 + πi)(π + i) (f) (2i + 1)πi (g) (√2 + i)(π + 3i) (h) (i + 1)(i − 2)(i + 3)

2. Express the following complex numbers in the form x + iy, where x, y are real numbers.
(a) (1 + i)⁻¹ (b) 1/(3 + i) (c) (2 + i)/(2 − i) (d) 1/(2 − i)
(e) (1 + i)/i (f) i/(1 + i) (g) 2i/(3 − i) (h) 1/(−1 + i)

3. Let α be a complex number ≠ 0. What is the absolute value of α/ᾱ? What is the conjugate of ᾱ?

4. Let α, β be two complex numbers. Show that the conjugate of αβ is ᾱβ̄, and that the conjugate of α + β is ᾱ + β̄.

5. Show that |αβ| = |α||β|.

6. Define addition of n-tuples of complex numbers componentwise, and multiplication of n-tuples of complex numbers by complex numbers componentwise also. If A = (α₁, ..., αₙ) and B = (β₁, ..., βₙ) are n-tuples of complex numbers, define their product

⟨A, B⟩ = α₁β̄₁ + ⋯ + αₙβ̄ₙ

(note the complex conjugation!). Prove the following rules:

HP 1. ⟨A, B⟩ is the conjugate of ⟨B, A⟩.
HP 2. ⟨A, B + C⟩ = ⟨A, B⟩ + ⟨A, C⟩.
HP 3. ⟨αA, B⟩ = α⟨A, B⟩ and ⟨A, αB⟩ = ᾱ⟨A, B⟩.
HP 4. If A ≠ 0, then ⟨A, A⟩ > 0.

7. We assume that you know about the functions sine and cosine, and their addition formulas. Let θ be a real number.
(a) Define e^{iθ} = cos θ + i sin θ.

Show that if θ₁ and θ₂ are real numbers, then

e^{iθ₁}e^{iθ₂} = e^{i(θ₁ + θ₂)}.

Show that any complex number of absolute value 1 can be written in the form e^{it} for some real number t.
(b) Show that any complex number can be written in the form re^{iθ} for some real numbers r, θ with r ≥ 0.
(c) If z₁ = r₁e^{iθ₁} and z₂ = r₂e^{iθ₂} with real r₁, r₂ ≥ 0 and real θ₁, θ₂, show that

z₁z₂ = r₁r₂e^{i(θ₁ + θ₂)}.

(d) If z is a complex number, and n an integer > 0, show that there exists a complex number w such that wⁿ = z. If z ≠ 0, show that there exist n distinct such complex numbers w. [Hint: If z = re^{iθ}, consider first r^{1/n}e^{iθ/n}.]

8. Assuming the complex numbers algebraically closed, prove that every irreducible polynomial over the real numbers has degree 1 or 2. [Hint: Split the polynomial over the complex numbers and pair off complex conjugate roots.]
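Exercise 7 can be explored with the standard cmath module (a sketch; in part (d) the n distinct n-th roots of z = re^{iθ} are r^{1/n}e^{i(θ + 2πk)/n} for k = 0, ..., n − 1):

```python
import cmath, math

# (a) angles add under multiplication: e^{i t1} e^{i t2} = e^{i(t1 + t2)}
t1, t2 = 0.7, 1.9
lhs = cmath.exp(1j*t1) * cmath.exp(1j*t2)
rhs = cmath.exp(1j*(t1 + t2))
print(abs(lhs - rhs))                      # rounding error only

# (d) the n distinct n-th roots of z = r e^{i theta}
z = 5 + 1j
n = 4
r, theta = abs(z), cmath.phase(z)
roots = [r**(1.0/n) * cmath.exp(1j*(theta + 2*math.pi*k)/n) for k in range(n)]
for w in roots:
    print(abs(w**n - z))                   # each root satisfies w^n = z
```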

APPENDIX II

Iwasawa Decomposition and Others

Let SLₙ denote the set of n × n matrices with determinant 1. The purpose of this appendix is to formulate in some general terms results about SLₙ. We shall use the language of group theory, which has not been used previously, so we have to start with the definition of a group. Let G be a set. We are given a mapping G × G → G, which at first we write as a product, i.e. to each pair of elements (x, y) of G we associate an element of G denoted by xy, satisfying the following axioms.

GR 1. The product is associative, namely for all x, y, z ∈ G we have

(xY)Z = x(yz).

GR 2. There is an element e ∈ G such that ex = xe = x for all x ∈ G.

GR 3. Given x ∈ G there exists an element x⁻¹ ∈ G such that

xx⁻¹ = x⁻¹x = e.

It is an easy exercise to show that the element e in GR 2 is uniquely determined, and it is called the unit element. The element x⁻¹ in GR 3 is also easily shown to be uniquely determined, and is called the inverse of x. A set together with a mapping satisfying the three axioms is called a group.

Example. Let G = SLₙ(R). Let the product be the multiplication of matrices. Then SLₙ(R) is a group. Similarly, SLₙ(C) is a group. The unit element is the unit matrix I.

Example. Let G be a group and let H be a subset which contains the unit element, and is closed under taking products and inverses, i.e. if x, y ∈ H then x⁻¹ ∈ H and xy ∈ H. Then H is a group under the "same" product as in G, and is called a subgroup. We shall now consider some important subgroups. Let G = SLₙ(R). Note that the subset consisting of the two elements I, −I is a subgroup. Also note that SLₙ(R) is a subgroup of the group GLₙ(R) (all real matrices with non-zero determinant).

We shall now express Theorem 2.1 of Chapter V in the context of groups and subgroups. Let:

U = subgroup of upper triangular matrices with 1's on the diagonal,

called unipotent.

A = subgroup of diagonal matrices with positive diagonal elements:

a = diag(a₁, ..., aₙ) with aᵢ > 0 for all i.

K = subgroup of real unitary matrices k, satisfying ᵗk = k⁻¹.

Theorem 1 (Iwasawa decomposition). The product map U × A × K → G given by (u, a, k) ↦ uak is a bijection.

Proof. Let e₁, ..., eₙ be the standard unit vectors of Rⁿ (vertical). Let g = (g_{ij}) ∈ G. Then geᵢ = g⁽ⁱ⁾, the i-th column of g.

There exists an upper triangular matrix B = (b_{ij}), so with b_{ij} = 0 if i > j, such that

b₁₁g⁽¹⁾ = e′₁
b₁₂g⁽¹⁾ + b₂₂g⁽²⁾ = e′₂
...
b₁ₙg⁽¹⁾ + ⋯ + bₙₙg⁽ⁿ⁾ = e′ₙ,

such that the diagonal elements are positive, that is b₁₁, ..., bₙₙ > 0, and such that the vectors e′₁, ..., e′ₙ are mutually perpendicular unit vectors. Getting such a matrix B is merely applying the usual Gram-Schmidt orthogonalization process, subtracting a linear combination of previous vectors to get orthogonality, and then dividing by the norms to get unit vectors. Thus

e′ⱼ = Σ_{i=1}^{j} b_{ij}g⁽ⁱ⁾ = Σ_{i=1}^{n} Σ_{q=1}^{n} g_{qi}b_{ij}e_q = Σ_{q=1}^{n} (Σ_{i=1}^{n} g_{qi}b_{ij}) e_q.

Let gB = k ∈ K. Then keⱼ = e′ⱼ, so k maps the orthogonal unit vectors e₁, ..., eₙ to the orthogonal unit vectors e′₁, ..., e′ₙ. Therefore k is unitary, and g = kB⁻¹. Write B = au, where a is the diagonal matrix with aᵢ = bᵢᵢ and u = a⁻¹B is unipotent. Then g⁻¹ = Bk⁻¹ = auk⁻¹ = (aua⁻¹)ak⁻¹, and aua⁻¹ is again unipotent; as g runs over G so does g⁻¹, and this proves the surjection G = UAK. For uniqueness of the decomposition, if g = uak = u′a′k′, let u₁ = u⁻¹u′; using ᵗgg you get a²ᵗu₁⁻¹ = u₁a′². These matrices are lower and upper triangular respectively, with diagonals a², a′², so a = a′, and finally u₁ = I, proving uniqueness.

The elements of U are called unipotent because they are of the form

u(X) = I + X,

where X is strictly upper triangular, and Xⁿ = O. Thus X = u − I is called nilpotent. Let

exp Y = Σ_{j=0}^{∞} Yʲ/j!  and  log(I + X) = Σ_{i=1}^{∞} (−1)^{i+1} Xⁱ/i.

Let 𝔫 denote the space of all strictly upper triangular matrices. Then

exp: 𝔫 → U, Y ↦ exp Y

is a bijection, whose inverse is given by the log series, Y = log(I + X). Note that, because of the nilpotency, the exp and log series are actually polynomials, defining inverse polynomial mappings between U and 𝔫. The bijection actually holds over any field of characteristic 0. The relations

exp log(I + X) = I + X and log exp Y = Y

hold as identities of formal power series. Cf. my Complex Analysis, Chapter II, §3, Exercise 2.
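The claim that exp and log are inverse polynomial mappings between U and 𝔫 can be checked directly on a 3×3 example, where X³ = O, so log(I + X) = X − X²/2 and the exponential series stops at the quadratic term (a sketch with matrices as nested lists):

```python
# matrices as 3x3 nested lists; no external libraries needed
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k]*B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

I = [[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0]]
X = [[0, 2.0, 5.0], [0, 0, 3.0], [0, 0, 0]]   # strictly upper triangular: X^3 = 0

X2 = matmul(X, X)
# log(I + X) = X - X^2/2  (the series stops, since X^3 = 0)
Y = [[X[i][j] - X2[i][j]/2 for j in range(3)] for i in range(3)]

Y2 = matmul(Y, Y)
# exp(Y) = I + Y + Y^2/2  (again the series stops, since Y^3 = 0)
expY = [[I[i][j] + Y[i][j] + Y2[i][j]/2 for j in range(3)] for i in range(3)]

print(expY)    # equals I + X entrywise
```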

Geometric interpretation in dimension 2

Let h₂ be the upper half plane of complex numbers z = x + iy with x, y ∈ R and y > 0, y = y(z). For

g = (a b; c d) ∈ G = SL₂(R), define g(z) = (az + b)(cz + d)⁻¹.

Then G acts on h₂, meaning that the following two conditions are satisfied: If I is the unit matrix, then I(z) = z for all z. For g, g′ ∈ G we have g(g′(z)) = (gg′)(z). Also note the property: If g(z) = z for all z, then g = ±I.

To see that if z ∈ h₂ then g(z) ∈ h₂ also, you will need to check the transformation formula

y(g(z)) = y(z)/|cz + d|²,

proved by direct computation. These statements are proved by (easy) brute force. In addition, for w ∈ h₂, let G_w be the subset of elements g ∈ G such that g(w) = w. Then G_w is a subgroup of G, called the isotropy group of w. Verify that:

Theorem 2. The isotropy group of i is K, i.e. K is the subgroup of elements k ∈ G such that k(i) = i. This is the group of matrices

(cos θ  sin θ; −sin θ  cos θ),

or equivalently, a = d, c = −b, a² + b² = 1.

For x ∈ R and a₁ > 0, let

u(x) = (1 x; 0 1) and a = (a₁ 0; 0 a₁⁻¹).

If g = uak, then u(x)(z) = z + x, so putting y = a₁², we get a(i) = yi, g(i) = uak(i) = ua(i) = yi + x = x + iy.

Thus G acts transitively, and we have a description of the action in terms of the Iwasawa decomposition and the coordinates of the upper half plane.
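In SL₂(R) the decomposition g = uak can be read off from g(i) = x + iy exactly as described above (a sketch; the matrix g below is our example, not the text's):

```python
import math

def mul(A, B):
    # 2x2 matrix product
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def act(g, z):
    # g(z) = (az + b)(cz + d)^(-1)
    (a, b), (c, d) = g
    return (a*z + b) / (c*z + d)

g = [[2.0, 1.0], [3.0, 2.0]]                          # det = 1, so g lies in SL2(R)
z = act(g, 1j)                                        # g(i) = x + iy with y > 0
x, y = z.real, z.imag

u = [[1.0, x], [0.0, 1.0]]                            # u(x)
a = [[math.sqrt(y), 0.0], [0.0, 1.0/math.sqrt(y)]]    # a with a1^2 = y
ua = mul(u, a)
ua_inv = [[ua[1][1], -ua[0][1]], [-ua[1][0], ua[0][0]]]  # det(ua) = 1, so this is its inverse
k = mul(ua_inv, g)                                    # fixes i, hence a rotation matrix in K

print(k)
print(mul(ua, k))                                     # recovers g
```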

Geometric interpretation in dimension 3

We hope you know the quaternions, whose elements are

z = x₁ + x₂i + x₃j + x₄k with x₁, x₂, x₃, x₄ ∈ R,

and i² = j² = k² = −1, ij = k, jk = i, ki = j. Define

z̄ = x₁ − x₂i − x₃j − x₄k.

Then

zz̄ = x₁² + x₂² + x₃² + x₄²,

and we define |z| = (zz̄)^{1/2}. Let h₃ be the upper half space consisting of elements z whose k-component is 0, and x₃ > 0, so we write

z = x₁ + x₂i + yj with y > 0.

Let G = SL₂(C), so elements of G are matrices

g = (a b; c d) with a, b, c, d ∈ C and ad − bc = 1.

As in the case of h₂, define

g(z) = (az + b)(cz + d)⁻¹.

Verify by brute force that if z ∈ h₃ then g(z) ∈ h₃, and that G acts on h₃, namely the two properties listed in the previous example are also satisfied here. Since the quaternions are not commutative, we have to use the quotient as written, (az + b)(cz + d)⁻¹. Also note that the y-coordinate transformation formula for z ∈ h₃ reads the same as for h₂, namely

y(g(z)) = y(z)/|cz + d|².

The group G = SL₂(C) has the Iwasawa decomposition

G = UAK, where:

U = group of elements u(x) = (1 x; 0 1) with x ∈ C;

A = same group as before in the case of SL₂(R); K = complex unitary group of elements k such that ᵗk̄ = k⁻¹.

The previous proof works the same way, BUT you can verify directly:

Theorem 3. The isotropy group G_j is K. If g = uak with u ∈ U, a ∈ A, k ∈ K, u = u(x) and y = y(a), then

g(j) = x + yj.

Thus G acts transitively, and the Iwasawa decomposition follows trivially from this group action (see below). Thus the orthogonalization-type proof can be completely avoided.

Proof of the Iwasawa decomposition from the above two properties. Let g ∈ G and g(j) = x + yj. Let u = u(x) and a be such that y = a₁/a₂ = a₁². Let g′ = ua. Then by the second property, we get g(j) = g′(j), so j = g⁻¹g′(j). By the first property, we get g⁻¹g′ = k for some k ∈ K, so g′k⁻¹ = uak⁻¹ = g, concluding the proof.

The conjugation action

By a homomorphism f: G → G′ of a group into another we mean a mapping which satisfies the properties f(e_G) = e_{G′} (where e_G, e_{G′} are the unit elements), and

f(g₁g₂) = f(g₁)f(g₂) for all g₁, g₂ ∈ G.

A homomorphism is called an isomorphism if it has an inverse homomorphism, i.e. if there exists a homomorphism f′: G′ → G such that ff′ = id_{G′} and f′f = id_G. An isomorphism of G with itself is called an automorphism of G. You can verify at once that the set of automorphisms of G, denoted by Aut(G), is a group. The product in this group is the composition of mappings. Note that a bijective homomorphism is an isomorphism, just as for linear maps. Let X be a set. A bijective map σ: X → X of X with itself is called a permutation. You can verify at once that the set of permutations of X is a group, denoted by Perm(X). By an action of a group G on X we mean a

map

G × X → X, denoted by (g, x) ↦ gx,

satisfying the two properties:

If e is the unit element of G, then ex = x for all x ∈ X. For all g₁, g₂ ∈ G and x ∈ X we have g₁(g₂x) = (g₁g₂)x. This is just a general formulation of action, of which we have seen an example above. Given g ∈ G, the map x ↦ gx of X into itself is a permutation of X. You can verify this directly from the definition, namely the inverse permutation is given by x ↦ g⁻¹x. Let σ(g) denote the permutation associated with g. Then you can also verify directly from the definition that

g ↦ σ(g)

is a homomorphism of G into the group of permutations of X. Conversely, such a homomorphism gives rise to an action of G on X. Let G be a group. The conjugation action of G on itself is defined for g, g′ ∈ G by

c(g)g′ = gg′g⁻¹.

It is immediately verified that the map g ↦ c(g) is a homomorphism of G into Aut(G) (the group of automorphisms of G). Then G also acts on spaces naturally associated to G. Consider the special case when G = SLₙ(R). Let

𝔞 = vector space of diagonal matrices diag(h₁, ..., hₙ) with trace 0, Σhᵢ = 0.
𝔫 = vector space of strictly upper triangular matrices (h_{ij}) with h_{ij} = 0 if i ≥ j.
𝔫⁻ = vector space of strictly lower triangular matrices.
𝔤 = vector space of n × n matrices of trace 0.

Then 𝔤 is the direct sum 𝔞 + 𝔫 + 𝔫⁻, and A acts by conjugation. In fact, 𝔤 is a direct sum of eigenspaces for this action. Indeed, let E_{ij} (i < j) be the matrix with ij-component 1 and all other components 0. Then

c(a)E_{ij} = (aᵢ/aⱼ)E_{ij} = a^{α_{ij}}E_{ij}

by direct computation, defining a^{α_{ij}} = aᵢ/aⱼ. Thus α_{ij} is a homomorphism of A into R⁺ (the positive real multiplicative group). The set of such homomorphisms will be called the set of regular characters, denoted by ℜ(𝔫), because 𝔫 is the direct sum of the 1-dimensional eigenspaces having basis E_{ij} (i < j). We write

𝔫 = ⊕_{α ∈ ℜ(𝔫)} 𝔫_α,

where 𝔫_α is the set of elements X ∈ 𝔫 such that aXa⁻¹ = a^α X. We have similarly

𝔫⁻ = ⊕_{α ∈ ℜ(𝔫⁻)} 𝔫⁻_α.

Note that 𝔞 is the 0-eigenspace for the conjugation action of A. Essentially the same structure holds for SLₙ(C) except that the R-dimension of the eigenspaces 𝔫_α is 2, because 𝔫_α has basis E_α, iE_α. The C-dimension is 1. By an algebra we mean a vector space with a bilinear map into itself, called a product. We make 𝔤 into an algebra by defining the Lie product of X, Y ∈ 𝔤 to be

[X, Y] = XY − YX.

It is immediately verified that this product is bilinear but not associative. We call 𝔤 the Lie algebra of G. Let the space of linear maps L(𝔤, 𝔤) be denoted by End(𝔤), whose elements are called endomorphisms of 𝔤. By definition the regular representation of 𝔤 on itself is the map

𝔤 → End(𝔤)

which to each X ∈ 𝔤 associates the endomorphism L(X) of 𝔤 such that

L(X)(Y) = [X, Y].

Note that X ↦ L(X) is a linear map (Chapter XI, §6, Exercise 7).

Exercise. Verify that denoting L(X) by D_X, we have the derivation property for all Y, Z ∈ 𝔤, namely

D_X[Y, Z] = [D_X Y, Z] + [Y, D_X Z].

Using only the bracket notation, this looks like

[X, [Y, Z]] = [[X, Y], Z] + [Y, [X, Z]].
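The derivation property (equivalently the Jacobi identity just displayed) can be spot-checked on random matrices (a sketch; the bracket is [X, Y] = XY − YX as defined above):

```python
import random

random.seed(1)
n = 3

def rand_mat():
    return [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]

def mul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def bracket(A, B):
    AB, BA = mul(A, B), mul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(n)] for i in range(n)]

X, Y, Z = rand_mat(), rand_mat(), rand_mat()
lhs = bracket(X, bracket(Y, Z))                # D_X [Y, Z]
r1 = bracket(bracket(X, Y), Z)                 # [D_X Y, Z]
r2 = bracket(Y, bracket(X, Z))                 # [Y, D_X Z]
err = max(abs(lhs[i][j] - r1[i][j] - r2[i][j]) for i in range(n) for j in range(n))
print(err)                                     # rounding error only
```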

We use α also to denote the corresponding additive character on 𝔞, given on a diagonal matrix H = diag(h₁, ..., hₙ) by

α_{ij}(H) = hᵢ − hⱼ.

This is the additive version of the multiplicative character previously considered multiplicatively on A. Then each 𝔫_α is also the α-eigenspace for the additive character α, namely for H ∈ 𝔞, we have

[H, E_α] = α(H)E_α,

which you can verify at once from the definition of multiplication of matrices.
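The eigenvalue relation [H, E_{ij}] = (hᵢ − hⱼ)E_{ij} can indeed be verified directly from matrix multiplication (a sketch; the diagonal entries of H below are our choice):

```python
n = 3
h = [2.0, -0.5, -1.5]                          # diagonal of H, trace 0

def E(i, j):
    # matrix with ij-component 1 and all other components 0
    M = [[0.0]*n for _ in range(n)]
    M[i][j] = 1.0
    return M

def mul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

H = [[h[i] if i == j else 0.0 for j in range(n)] for i in range(n)]
for i in range(n):
    for j in range(n):
        if i == j:
            continue
        HE, EH = mul(H, E(i, j)), mul(E(i, j), H)
        brk = [[HE[p][q] - EH[p][q] for q in range(n)] for p in range(n)]
        assert brk[i][j] == h[i] - h[j]        # eigenvalue alpha_ij(H) = h_i - h_j
        # ...and every other entry of the bracket vanishes
        assert sum(abs(brk[p][q]) for p in range(n) for q in range(n)) == abs(h[i] - h[j])
print("verified")
```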

Polar Decompositions

We list here more product decompositions in the notation of groups and subgroups. Let G = SLₙ(C). Let U = U(C) be the set of upper triangular matrices with 1's on the diagonal and components in C. Show that U is a subgroup. Let D be the set of diagonal complex matrices with non-zero diagonal elements. Show that D is a subgroup. Let K be the set of elements k ∈ SLₙ(C) such that ᵗk̄ = k⁻¹. Then K is a subgroup, the complex unitary group. Cf. Chapter VII, §3, Exercise 4. Verify that the proof of the Iwasawa decomposition works in the complex case, that is G = UAK, with the same A in the real and complex cases.

The quadratic map. Let g ∈ G. Define g* = ᵗḡ. Show that

(g₁g₂)* = g₂*g₁*.

An element g ∈ G is hermitian if and only if g = g*. Cf. Chapter VII, §2. Then gg* is hermitian positive definite, i.e. for every v ∈ Cⁿ, v ≠ 0, we have

⟨gg*v, v⟩ > 0.

Theorem 4. Let p ∈ SPosₙ(C) (the hermitian positive definite matrices in SLₙ(C)). Then p has a unique square root in SPosₙ(C).

Proof. See Chapter VIII, §5, Exercise 1.

Let H be a subgroup of G. By a (left) coset of H, we mean a subset of G of the form gH with some g ∈ G. You can easily verify that two cosets are either equal or they are disjoint. By G/H we mean the set of cosets of H in G.

Theorem 5. The quadratic map g ↦ gg* induces a bijection

G/K → SPosₙ(C).

Proof. Exercise. Show injectivity and surjectivity separately.

Theorem 6. The group G has the decomposition (non-unique)

G = KAK.

If g ∈ G is written as a product g = k₁bk₂ with k₁, k₂ ∈ K and b ∈ A, then b is uniquely determined up to a permutation of the diagonal elements.

Proof. Given g ∈ G there exist k₁ ∈ K and b ∈ A such that

gg* = k₁b²k₁⁻¹,

by using Chapter VIII, Theorem 4.4. By the bijection of Theorem 5, there exists k₂ ∈ K such that g = k₁bk₂, which proves the existence of the decomposition. As to the uniqueness, note that b² is the diagonal matrix of eigenvalues of gg*, i.e. the diagonal elements are the roots of the characteristic polynomial, and these roots are uniquely determined up to a permutation, thus proving the theorem.

Note that there is another version of the polar decomposition as follows.

Theorem 7. Abbreviate SPosₙ(C) = P. Then G = PK, and the decomposition of an element g = pk with p ∈ P, k ∈ K is unique.

Proof. The existence is a rephrasing of Chapter VIII, §5, Exercise 4. As to uniqueness, suppose g = pk. The quadratic map gives gg* = pp* = p². The uniqueness of the square root in Theorem 4 shows that p is uniquely determined by g, whence so is k, as was to be shown.
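Theorem 7 can be illustrated for 2×2 matrices without any linear-algebra library: since det(gg*) = |det g|² = 1, the square root of h = gg* is given by the standard 2×2 identity √h = (h + I)/√(tr h + 2) (an assumption we supply, not from the text; it follows from Cayley–Hamilton), and k = p⁻¹g comes out unitary (a sketch; the matrix g is our example):

```python
import math

def mul(A, B):
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def star(A):
    # A* = conjugate transpose
    return [[A[0][0].conjugate(), A[1][0].conjugate()],
            [A[0][1].conjugate(), A[1][1].conjugate()]]

g = [[1+1j, 2+0j], [1j, (3+1j)/2]]            # det g = 1, so g lies in SL2(C)
h = mul(g, star(g))                            # h = g g*, hermitian positive definite, det h = 1

t = (h[0][0] + h[1][1]).real                   # trace of h (real, since h is hermitian)
d = math.sqrt(t + 2.0)                         # sqrt(tr h + 2 sqrt(det h)), with det h = 1
p = [[(h[0][0] + 1)/d, h[0][1]/d],
     [h[1][0]/d, (h[1][1] + 1)/d]]             # p = sqrt(h); det p = sqrt(det h) = 1

p_inv = [[p[1][1], -p[0][1]], [-p[1][0], p[0][0]]]   # adjugate = inverse, since det p = 1
k = mul(p_inv, g)

print(mul(k, star(k)))                         # the identity matrix: k is unitary
print(mul(p, k))                               # recovers g, so g = pk
```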
