Linear Algebra II

Martin Otto
Summer Term 2018

Contents

1  Eigenvalues and Diagonalisation
   1.1  Eigenvectors and eigenvalues
   1.2  Polynomials
        1.2.1  Algebra of polynomials
        1.2.2  Division of polynomials
        1.2.3  Polynomials over the real and complex numbers
   1.3  Upper triangle form
   1.4  The Cayley-Hamilton Theorem
   1.5  Minimal polynomial and diagonalisation
   1.6  Jordan Normal Form
        1.6.1  Block decomposition, part 1
        1.6.2  Block decomposition, part 2
        1.6.3  Jordan normal form

2  Euclidean and Unitary Spaces
   2.1  Euclidean and unitary vector spaces
        2.1.1  The standard scalar products in Rⁿ and Cⁿ
        2.1.2  Cⁿ as a unitary space
        2.1.3  Bilinear and semi-bilinear forms
        2.1.4  Scalar products in euclidean and unitary spaces
   2.2  Further examples
   2.3  Orthogonality and orthonormal bases
        2.3.1  Orthonormal bases
        2.3.2  Orthogonality and orthogonal complements
        2.3.3  Orthogonal and unitary maps
   2.4  Endomorphisms in euclidean or unitary spaces
        2.4.1  The adjoint map
        2.4.2  Diagonalisation of self-adjoint maps and matrices
        2.4.3  Normal maps and matrices

3  Bilinear and Quadratic Forms
   3.1  Matrix representations of bilinear forms
   3.2  Simultaneous diagonalisation
        3.2.1  Symmetric bilinear forms vs. self-adjoint maps
        3.2.2  Principal axes
        3.2.3  Positive definiteness
   3.3  Quadratic forms and quadrics
        3.3.1  Quadrics in Rⁿ
        3.3.2  Projective space Pⁿ
        3.3.3  Appendix: an example from classical mechanics

Notational conventions

In this second part, we adopt the following conventions and notational simplifications over and above the conventions established in part I.

• All bases are understood to be labelled bases, with individual basis vectors listed as, for instance, in B = (b1, …, bn).

• We extend summation notation to products, writing for instance

      ∏_{i=1}^{n} λ_i   for   λ1 · … · λn,

  and also to sums and direct sums of subspaces, as in

      ∑_{i=1}^{k} U_i   for   U1 + ··· + Uk

  and

      ⨁_{i=1}^{k} U_i   for   U1 ⊕ ··· ⊕ Uk.

• In rings (like Hom(V, V)) we use exponentiation notation for iterated products, always identifying the power with exponent zero with the unit element (the neutral element w.r.t. multiplication). For instance, if φ is an endomorphism of V, we write

      φ^k   for   φ ∘ ··· ∘ φ   (k times),   where φ^0 = id_V.

Chapter 1  Eigenvalues and Diagonalisation

One of the core topics of linear algebra concerns the choice of suitable bases for the representation of linear maps. For an endomorphism φ: V → V of the n-dimensional F-vector space V, we want to find a basis B = (b1, …, bn) of V that is specially adapted to make the matrix representation A_φ = [[φ]]_B^B ∈ F^(n,n) as simple as possible for the analysis of the map φ itself.

This chapter starts from the study of eigenvectors of φ. Such an eigenvector is a vector v ≠ 0 that is mapped to a scalar multiple of itself under φ, so that φ(v) = λv for some λ ∈ F, the corresponding eigenvalue. If v is an eigenvector (with eigenvalue λ), it spans a one-dimensional subspace U = {µv : µ ∈ F} ⊆ V, which is invariant under φ (or preserved by φ) in the sense that φ(u) = λu ∈ U for all u ∈ U. In other words, in the direction of v, φ acts as a rescaling with factor λ.
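The defining identity φ(v) = λv can be checked numerically. The following is a minimal sketch (not part of the original notes; it assumes NumPy and uses a made-up sample matrix):

```python
import numpy as np

# Hypothetical sample matrix; symmetric, so it has real eigenvalues (1 and 3).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# np.linalg.eig returns the eigenvalues and, as columns, matching eigenvectors.
eigenvalues, eigenvectors = np.linalg.eig(A)

for lam, v in zip(eigenvalues, eigenvectors.T):
    assert not np.allclose(v, 0)        # an eigenvector is nonzero by definition
    assert np.allclose(A @ v, lam * v)  # the defining identity A v = lam v
```

Each eigenvector spans a one-dimensional invariant subspace on which A acts as rescaling by the corresponding eigenvalue.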
But, for instance, over the standard two-dimensional R-vector space R², an endomorphism need not have any eigenvectors. Consider the example of a rotation through an angle 0 < α < π; this linear map preserves no 1-dimensional subspaces at all. In contrast, we shall see later that any endomorphism of R³ must have at least one eigenvector. In the case of a non-trivial rotation of R³, for instance, the axis of rotation gives rise to an eigenvector with eigenvalue 1.

In fact, eigenvalues and eigenvectors often have further significance, either geometrically or in terms of the phenomena modelled by a linear map. To give an example, consider the homogeneous linear differential equation of the harmonic oscillator

    (d²/dt²) f(t) + c f(t) = 0

with a positive constant c ∈ R, for C∞ functions f: R → R (modelling, for instance, the position of a mass attached to a spring as a function of time t). One may regard this as the problem of finding the eigenvectors for eigenvalue −c of the linear operator d²/dt² that maps a C∞ function to its second derivative. In this case, there are two linearly independent solutions, f(t) = sin(√c t) and f(t) = cos(√c t), which span the solution space of this differential equation. The eigenvalue −c of d²/dt² that we look at here is related to the frequency of the oscillation (which is √c/(2π)).

In quantum mechanics, states of a physical system are modelled as vectors of a C-vector space (e.g., of wave functions). Associated physical quantities (observables) are described by linear (differential) operators on such states, which are endomorphisms of the state space. The eigenvalues of these operators are the possible values for measurements of those observables. Here one seeks bases of the state space made up from eigenvectors (eigenstates) associated with particular values for the quantity under consideration via their eigenvalues. W.r.t.
such a basis, an arbitrary state can be represented as a linear combination (superposition) of eigenstates, accounting for components which each have their definite value for that observable, but mixed in a composite state with different possible outcomes for its measurement.

If V has a basis consisting of eigenvectors of an endomorphism φ: V → V, then w.r.t. that basis, φ is represented by a diagonal matrix, with the eigenvalues as entries on the diagonal and all other entries equal to 0. Matrix arithmetic is often much simpler for diagonal matrices, and it therefore makes sense to apply a corresponding basis transformation to achieve diagonal form where this is possible.

Example 1.0.1  Consider for instance a square matrix A ∈ R^(n,n) (or C^(n,n)). Suppose that there is a regular matrix C (for a basis transformation) such that

    Â = C A C⁻¹ = diag(λ1, …, λn)

is a diagonal matrix. Suppose now that we want to evaluate powers of A: A⁰ = E_n, A¹ = A, A² = AA, …. We note that the corresponding powers of Â are very easily computed; one checks that

    Â^ℓ = diag(λ1^ℓ, …, λn^ℓ).

Then

    A^ℓ = (C⁻¹ Â C)^ℓ = (C⁻¹ Â C) · … · (C⁻¹ Â C)   (ℓ times)   = C⁻¹ Â^ℓ C

is best computed via the detour through Â.

Example 1.0.2  Consider the linear differential equation

    (d/dt) v(t) = A v(t)

for a vector-valued C∞ function v: t ↦ v(t) ∈ Rⁿ, where the matrix A ∈ R^(n,n) is fixed. In close analogy with the one-dimensional case, one can find a solution using the exponential function. Here the exponential function has to be considered as a matrix-valued function on matrices, B ↦ e^B, defined by its series expansion. The function t ↦ v(t) := e^{tA} v0 solves the differential equation for initial value v(0) = v0. Here e^{tA} stands for the series

    ∑_{k=0}^{∞} (t^k A^k)/k!   ∈ R^(n,n),

which can be shown to converge for all t.
The value v(t) = e^{tA} v0 is the result of applying the matrix e^{tA} to the vector v0.

If it so happens that there is a basis relative to which A is similar to a diagonal matrix Â = C A C⁻¹, as in the previous example, then we may solve the differential equation (d/dt) v̂(t) = Â v̂(t) instead, and merely have to express the initial value v0 in the new basis as v̂0 = C v0. The evaluation of the exponential function on the diagonal matrix Â is easy (as in the previous example):

    Â = diag(λ1, …, λn)   ⟹   e^{tÂ} = diag(e^{tλ1}, …, e^{tλn}).

We correspondingly find that the solution (in the adapted basis) is

    v̂(t) = e^{tÂ} v̂0,

which then transforms back into the original basis according to

    v(t) = C⁻¹ e^{tÂ} C v0.

Convention: in this chapter, unless otherwise noted, we fix a finite-dimensional F-vector space V of dimension greater than 0 throughout. We shall occasionally look at specific examples and specify concrete fields F.

1.1  Eigenvectors and eigenvalues

Recall from chapter 3 in part I that Hom(V, V) stands for the space of all linear maps φ: V → V (vector space homomorphisms from V to V, i.e., endomorphisms). Recall that w.r.t. the natural addition and scalar multiplication operations, Hom(V, V) forms an F-vector space; while w.r.t.
