Second Year Mathematics Revision Linear Algebra - Part 2


Rui Tong, UNSW Mathematics Society

Today's plan
1. Eigenvalues and Eigenvectors: Singular Value Decomposition (MATH2601)
2. The Jordan Canonical Form: Finding Jordan Forms; the Cayley-Hamilton Theorem
3. Matrix Exponentials: Computing Matrix Exponentials; Application to Systems of Differential Equations

Singular Values (MATH2601 only section)

Definition 1: Singular values
A singular value of an m × n matrix A is the square root of an eigenvalue of A∗A. Recall: A∗ denotes the adjoint (conjugate transpose) of A.

Definition 2: Singular value decomposition
An SVD for an m × n matrix A is a factorisation A = UΣV∗, where
- U is an m × m unitary matrix,
- V is an n × n unitary matrix,
- Σ is m × n with diagonal entries σ_ii ≥ 0 (these are determined by the singular values, in descending order), and σ_ij = 0 for all i ≠ j.

Nice properties of A∗A

Lemma 1: Properties of A∗A
1. All eigenvalues of A∗A are real and non-negative (even if A has complex entries!).
2. ker(A∗A) = ker(A).
3. rank(A∗A) = rank(A).

The first one is pretty much why everything works.

SVD Algorithm

Algorithm 1: Finding an SVD
1. Find all eigenvalues λ_i of A∗A and write them in descending order. Also find their associated eigenvectors v_i of unit length.
2. Find an orthonormal set of eigenvectors for A∗A. This happens automatically when all eigenvalues are distinct, which will usually be the case; otherwise, apply Gram-Schmidt to any eigenspace of dimension strictly greater than 1.
3. Compute u_i = (1/√λ_i) A v_i for each non-zero eigenvalue.
4. State U and V from the vectors you found, and Σ from the singular values.

Lemma 2: Used to speed up step 1
A∗A and AA∗ share the same non-zero eigenvalues. If rank(A) = r, then A∗A has exactly r non-zero eigenvalues; all its other eigenvalues are 0.

SVD Example

Example 1: MATH2601 2017 Q2 c)
For the matrix
\[ A = \begin{pmatrix} 3 & 1 & 1 \\ -1 & 3 & 1 \end{pmatrix}: \]
1. Find the eigenvalues of AA∗.
2. Explain why the eigenvalues in part 1 are also eigenvalues of A∗A, and state any other eigenvalues of A∗A.
3. Find all eigenvectors of A∗A.
4. Find a singular value decomposition for A.

Part 1: We compute
\[ AA^* = \begin{pmatrix} 3 & 1 & 1 \\ -1 & 3 & 1 \end{pmatrix} \begin{pmatrix} 3 & -1 \\ 1 & 3 \\ 1 & 1 \end{pmatrix} = \begin{pmatrix} 11 & 1 \\ 1 & 11 \end{pmatrix}. \]
This matrix has two eigenvalues, and they sum to tr(AA∗) = 22 and multiply to det(AA∗) = 120. By inspection, λ₁ = 12 and λ₂ = 10.

Part 2: Quoted word for word from the answers: "We know that A∗A and AA∗ have the same nonzero eigenvalues, so 12 and 10 are eigenvalues of A∗A. Also, all eigenvalues of A∗A are real and nonnegative, so its third eigenvalue is 0."

Part 3: We compute
\[ A^*A = \begin{pmatrix} 3 & -1 \\ 1 & 3 \\ 1 & 1 \end{pmatrix} \begin{pmatrix} 3 & 1 & 1 \\ -1 & 3 & 1 \end{pmatrix} = \begin{pmatrix} 10 & 0 & 2 \\ 0 & 10 & 4 \\ 2 & 4 & 2 \end{pmatrix}. \]
For λ = 12:
\[ A^*A - 12I = \begin{pmatrix} -2 & 0 & 2 \\ 0 & -2 & 4 \\ 2 & 4 & -10 \end{pmatrix}. \]
Looking at row 1, arbitrarily set the first component to 1, and then the third component is 1. Equating row 2, the second component is 2.
\[ \Rightarrow\ v_1 = t \begin{pmatrix} 1 \\ 2 \\ 1 \end{pmatrix}. \]
For λ = 10:
\[ A^*A - 10I = \begin{pmatrix} 0 & 0 & 2 \\ 0 & 0 & 4 \\ 2 & 4 & -8 \end{pmatrix}. \]
Rows 1 and 2 force the third component to be 0. Looking at row 3, it is easier to set the second component to 1, and then the first component will be −2.
\[ \Rightarrow\ v_2 = t \begin{pmatrix} -2 \\ 1 \\ 0 \end{pmatrix}. \]
For λ = 0, looking at A∗A itself, there really are many ways we could go about it. But I follow the answers, which arbitrarily set the first component to −1. See if you can then show that
\[ v_3 = t \begin{pmatrix} -1 \\ -2 \\ 5 \end{pmatrix}. \]
Note: in each case, t ∈ ℝ.

Part 4: In each case, choose the value of t that normalises the eigenvectors:
\[ v_1 = \frac{1}{\sqrt{6}} \begin{pmatrix} 1 \\ 2 \\ 1 \end{pmatrix} \ (\lambda_1 = 12), \qquad v_2 = \frac{1}{\sqrt{5}} \begin{pmatrix} -2 \\ 1 \\ 0 \end{pmatrix} \ (\lambda_2 = 10), \qquad v_3 = \frac{1}{\sqrt{30}} \begin{pmatrix} -1 \\ -2 \\ 5 \end{pmatrix} \ (\lambda_3 = 0). \]
Then compute u_i = (1/√λ_i) A v_i for each non-zero eigenvalue:
\[ u_1 = \frac{1}{\sqrt{12}} \cdot \frac{1}{\sqrt{6}} \begin{pmatrix} 3 & 1 & 1 \\ -1 & 3 & 1 \end{pmatrix} \begin{pmatrix} 1 \\ 2 \\ 1 \end{pmatrix} = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \qquad u_2 = \frac{1}{\sqrt{10}} \cdot \frac{1}{\sqrt{5}} \begin{pmatrix} 3 & 1 & 1 \\ -1 & 3 & 1 \end{pmatrix} \begin{pmatrix} -2 \\ 1 \\ 0 \end{pmatrix} = \frac{1}{\sqrt{2}} \begin{pmatrix} -1 \\ 1 \end{pmatrix}. \]
We conclude that an SVD for A is A = UΣV∗, where
\[ U = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix}, \qquad \Sigma = \begin{pmatrix} \sqrt{12} & 0 & 0 \\ 0 & \sqrt{10} & 0 \end{pmatrix}, \qquad V = \begin{pmatrix} \frac{1}{\sqrt{6}} & -\frac{2}{\sqrt{5}} & -\frac{1}{\sqrt{30}} \\ \frac{2}{\sqrt{6}} & \frac{1}{\sqrt{5}} & -\frac{2}{\sqrt{30}} \\ \frac{1}{\sqrt{6}} & 0 & \frac{5}{\sqrt{30}} \end{pmatrix}. \]

Remark
For A = UΣV∗, where A ∈ M_{m×n}(ℂ):
- the columns of U = (u₁ ⋯ u_m) are called the left singular vectors;
- the columns of V = (v₁ ⋯ v_n) are called the right singular vectors.

Word of advice: write some of these numbers very quickly! SVDs are instructive when you know the method, but they always take forever to do.
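As a sanity check, the algorithm above can be run numerically. Here is a minimal sketch using NumPy (deliberately not calling the built-in `np.linalg.svd`; the function name `svd_via_eigen` and the tolerance are my own choices), applied to the matrix A from the worked example:

```python
import numpy as np

def svd_via_eigen(A, tol=1e-9):
    """Follow Algorithm 1: eigendecompose A*A, sort eigenvalues in descending
    order, then take u_i = (1/sqrt(lambda_i)) A v_i for non-zero eigenvalues."""
    m, n = A.shape
    eigvals, eigvecs = np.linalg.eigh(A.conj().T @ A)  # ascending order for Hermitian input
    order = np.argsort(eigvals)[::-1]                  # re-sort into descending order
    eigvals, V = eigvals[order], eigvecs[:, order]
    Sigma = np.zeros((m, n))
    U = np.zeros((m, m))
    for i, lam in enumerate(eigvals):
        if lam > tol:                                  # non-zero eigenvalues come first
            s = np.sqrt(lam)                           # singular value
            Sigma[i, i] = s
            U[:, i] = (A @ V[:, i]) / s
    # if rank(A) < m, the remaining columns of U would still need to be
    # completed to an orthonormal basis; that does not arise in this example
    return U, Sigma, V

A = np.array([[3.0, 1.0, 1.0], [-1.0, 3.0, 1.0]])
U, Sigma, V = svd_via_eigen(A)
print(np.diag(Sigma) ** 2)                      # eigenvalues 12 and 10, as in Part 1
print(np.allclose(U @ Sigma @ V.conj().T, A))   # True: the factorisation recovers A
```

The eigenvectors `eigh` returns may differ from v₁, v₂, v₃ above by sign, but since each u_i is built from the same v_i, the result is still a valid SVD.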
Reduced SVD
I don't see these examined, but I should still mention them.
1. Obtain Σ̂ by removing any zero columns of Σ.
2. Obtain V̂ by removing the corresponding columns of V.
3. Then A = UΣ̂V̂∗.
For the earlier example:
\[ \hat{\Sigma} = \begin{pmatrix} \sqrt{12} & 0 \\ 0 & \sqrt{10} \end{pmatrix}, \qquad \hat{V} = \begin{pmatrix} \frac{1}{\sqrt{6}} & -\frac{2}{\sqrt{5}} \\ \frac{2}{\sqrt{6}} & \frac{1}{\sqrt{5}} \\ \frac{1}{\sqrt{6}} & 0 \end{pmatrix}. \]

The Jordan Canonical Form: Finding Jordan Forms

Jordan Blocks

Definition 3: Jordan blocks
The k × k Jordan block for λ is the matrix
\[ J_k(\lambda) = \begin{pmatrix} \lambda & 1 & 0 & \cdots & 0 \\ 0 & \lambda & 1 & \cdots & 0 \\ 0 & 0 & \lambda & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ 0 & 0 & 0 & \cdots & \lambda \end{pmatrix} \in M_{k \times k}(\mathbb{C}). \]
That is, put λ on every entry along the main diagonal, and a 1 immediately above each λ wherever possible.

Quick examples:
\[ J_3(-4) = \begin{pmatrix} -4 & 1 & 0 \\ 0 & -4 & 1 \\ 0 & 0 & -4 \end{pmatrix}, \qquad J_4(0) = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{pmatrix}. \]

Powers of Jordan Forms
Find the pattern.
\[ J_1(\lambda)^n = \begin{pmatrix} \lambda^n \end{pmatrix}, \qquad J_2(\lambda)^n = \begin{pmatrix} \lambda^n & \binom{n}{1}\lambda^{n-1} \\ 0 & \lambda^n \end{pmatrix}, \qquad J_3(\lambda)^n = \begin{pmatrix} \lambda^n & \binom{n}{1}\lambda^{n-1} & \binom{n}{2}\lambda^{n-2} \\ 0 & \lambda^n & \binom{n}{1}\lambda^{n-1} \\ 0 & 0 & \lambda^n \end{pmatrix}, \]
\[ J_4(\lambda)^n = \begin{pmatrix} \lambda^n & \binom{n}{1}\lambda^{n-1} & \binom{n}{2}\lambda^{n-2} & \binom{n}{3}\lambda^{n-3} \\ 0 & \lambda^n & \binom{n}{1}\lambda^{n-1} & \binom{n}{2}\lambda^{n-2} \\ 0 & 0 & \lambda^n & \binom{n}{1}\lambda^{n-1} \\ 0 & 0 & 0 & \lambda^n \end{pmatrix}. \]

Lemma 3: Computing powers of Jordan forms
1. Start with \(\lambda^n\) on every diagonal entry.
2. Put \(\binom{n}{1}\lambda^{n-1}\) immediately above each \(\lambda^n\) wherever you can.
3. Put \(\binom{n}{2}\lambda^{n-2}\) immediately above each \(\binom{n}{1}\lambda^{n-1}\) wherever you can.
4. Keep doing this, increasing the binomial coefficient and decreasing the power on λ.

Note: this does not go on indefinitely. If you ever bump into \(\binom{n}{n}\), that's the last diagonal you fill; just put 0's everywhere else above.
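Lemma 3 translates directly into code, and checking it against a direct matrix power is a good way to convince yourself of the pattern. A small sketch in NumPy (the helper names `jordan_block` and `jordan_power` are my own):

```python
import numpy as np
from math import comb

def jordan_block(lam, k):
    """k x k Jordan block: lam on the main diagonal, 1 on the superdiagonal."""
    return lam * np.eye(k) + np.diag(np.ones(k - 1), 1)

def jordan_power(lam, k, n):
    """J_k(lam)^n via Lemma 3: the j-th superdiagonal holds C(n, j) * lam^(n - j)."""
    out = np.zeros((k, k))
    for j in range(k):                 # j = 0 is the main diagonal
        if j > n:                      # past C(n, n): only zeros remain above
            break
        out += np.diag(np.full(k - j, comb(n, j) * lam ** (n - j)), j)
    return out

# check the lemma against a direct matrix power
print(jordan_power(-4.0, 3, 2))        # diagonals 16, -8, 1, as Lemma 3 predicts
print(np.allclose(jordan_power(-4.0, 3, 2),
                  np.linalg.matrix_power(jordan_block(-4.0, 3), 2)))  # True
```

The `j > n` cutoff is exactly the note above: once the binomial coefficient reaches \(\binom{n}{n}\), every diagonal further up is zero.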