Fundamentals of Linear Algebra and Optimization CIS515, Some Slides

Total Pages: 16

File Type: pdf, Size: 1020 KB

Fundamentals of Linear Algebra and Optimization, CIS515, Some Slides

Jean Gallier
Department of Computer and Information Science
University of Pennsylvania
Philadelphia, PA 19104, USA
e-mail: [email protected]
© Jean Gallier
November 5, 2013

Contents

1 Basics of Linear Algebra
  1.1 Motivations: Linear Combinations, Linear Independence, Rank
  1.2 Vector Spaces
  1.3 Linear Independence, Subspaces
  1.4 Bases of a Vector Space
  1.5 Linear Maps
  1.6 Matrices
  1.7 Haar Basis Vectors; a Glimpse at Wavelets
  1.8 The Effect of a Change of Bases on Matrices
  1.9 Affine Maps
  1.10 Direct Products, Sums, and Direct Sums
  1.11 The Dual Space E* and Linear Forms
  1.12 Hyperplanes and Linear Forms
  1.13 Transpose of a Linear Map and of a Matrix
  1.14 The Four Fundamental Subspaces
2 Gaussian Elimination, LU, Cholesky, Reduced Echelon
  2.1 Motivating Example: Curve Interpolation
  2.2 Gaussian Elimination and LU-Factorization
  2.3 Gaussian Elimination of Tridiagonal Matrices
  2.4 SPD Matrices and the Cholesky Decomposition
  2.5 Reduced Row Echelon Form
3 Determinants
  3.1 Permutations, Signature of a Permutation
  3.2 Alternating Multilinear Maps
  3.3 Definition of a Determinant
  3.4 Inverse Matrices and Determinants
  3.5 Systems of Linear Equations and Determinants
  3.6 Determinant of a Linear Map
  3.7 The Cayley–Hamilton Theorem
  3.8 Further Readings
4 Vector Norms and Matrix Norms
  4.1 Normed Vector Spaces
  4.2 Matrix Norms
  4.3 Condition Numbers of Matrices
5 Iterative Methods for Solving Linear Systems
  5.1 Convergence of Sequences of Vectors and Matrices
  5.2 Convergence of Iterative Methods
  5.3 Methods of Jacobi, Gauss-Seidel, and Relaxation
  5.4 Convergence of the Methods
6 Euclidean Spaces
  6.1 Inner Products, Euclidean Spaces
  6.2 Orthogonality
  6.3 Linear Isometries (Orthogonal Transformations)
  6.4 The Orthogonal Group, Orthogonal Matrices
  6.5 QR-Decomposition for Invertible Matrices
7 QR-Decomposition for Arbitrary Matrices
  7.1 Orthogonal Reflections
  7.2 QR-Decomposition Using Householder Matrices
8 Basics of Hermitian Geometry
  8.1 Sesquilinear Forms, Hermitian Forms
  8.2 Orthogonality, Duality, Adjoint of a Linear Map
  8.3 Linear Isometries (also called Unitary Transformations)
  8.4 The Unitary Group, Unitary Matrices
9 Eigenvectors and Eigenvalues
  9.1 Eigenvectors and Eigenvalues of a Linear Map
  9.2 Reduction to Upper Triangular Form
  9.3 Location of Eigenvalues
10 Spectral Theorems
  10.1 Normal Linear Maps
  10.2 Self-Adjoint and Other Special Linear Maps
  10.3 Normal and Other Special Matrices
11 Introduction to The Finite Elements
  11.1 A One-Dimensional Problem: Bending of a Beam
  11.2 A Two-Dimensional Problem: An Elastic Membrane
  11.3 Time-Dependent Boundary Problems
12 Singular Value Decomposition and Polar Form
  12.1 Singular Value Decomposition for Square Matrices
  12.2 Singular Value Decomposition for Rectangular Matrices
13 Applications of SVD and Pseudo-inverses
  13.1 Least Squares Problems and the Pseudo-inverse
  13.2 Data Compression and SVD
  13.3 Principal Components Analysis (PCA)
  13.4 Best Affine Approximation
14 Quadratic Optimization Problems
  14.1 Quadratic Optimization: The Positive Definite Case
  14.2 Quadratic Optimization: The General Case
  14.3 Maximizing a Quadratic Function on the Unit Sphere
Bibliography

Chapter 1. Basics of Linear Algebra

1.1 Motivations: Linear Combinations, Linear Independence and Rank

Consider the problem of solving the following system of three linear equations in the three variables x₁, x₂, x₃ ∈ ℝ:

$$\begin{aligned}
x_1 + 2x_2 - x_3 &= 1 \\
2x_1 + x_2 + x_3 &= 2 \\
x_1 - 2x_2 - 2x_3 &= 3.
\end{aligned}$$

One way to approach this problem is to introduce some "column vectors." Let u, v, w, and b be the vectors given by

$$u = \begin{pmatrix} 1 \\ 2 \\ 1 \end{pmatrix} \quad
v = \begin{pmatrix} 2 \\ 1 \\ -2 \end{pmatrix} \quad
w = \begin{pmatrix} -1 \\ 1 \\ -2 \end{pmatrix} \quad
b = \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix}$$

and write our linear system as

$$x_1 u + x_2 v + x_3 w = b.$$

In the above equation, we used implicitly the fact that a vector z can be multiplied by a scalar λ ∈ ℝ, where

$$\lambda z = \lambda \begin{pmatrix} z_1 \\ z_2 \\ z_3 \end{pmatrix} = \begin{pmatrix} \lambda z_1 \\ \lambda z_2 \\ \lambda z_3 \end{pmatrix},$$

and two vectors y and z can be added, where

$$y + z = \begin{pmatrix} y_1 \\ y_2 \\ y_3 \end{pmatrix} + \begin{pmatrix} z_1 \\ z_2 \\ z_3 \end{pmatrix} = \begin{pmatrix} y_1 + z_1 \\ y_2 + z_2 \\ y_3 + z_3 \end{pmatrix}.$$

The set of all vectors with three components is denoted by ℝ^{3×1}. The reason for using the notation ℝ^{3×1} rather than the more conventional notation ℝ³ is that the elements of ℝ^{3×1} are column vectors; they consist of three rows and a single column, which explains the superscript 3×1. On the other hand, ℝ³ = ℝ × ℝ × ℝ consists of all triples of the form (x₁, x₂, x₃), with x₁, x₂, x₃ ∈ ℝ, and these are row vectors. For the sake of clarity, in this introduction, we will denote the set of column vectors with n components by ℝ^{n×1}.

An expression such as x₁u + x₂v + x₃w, where u, v, w are vectors and the xᵢ are scalars (in ℝ), is called a linear combination. Using this notion, the problem of solving our linear system x₁u + x₂v + x₃w = b is equivalent to determining whether b can be expressed as a linear combination of u, v, w.

Now, if the vectors u, v, w are linearly independent, which means that there is no triple (x₁, x₂, x₃) ≠ (0, 0, 0) such that x₁u + x₂v + x₃w = 0₃, it can be shown that every vector in ℝ^{3×1} can be written as a linear combination of u, v, w. Here, 0₃ is the zero vector 0₃ = (0, 0, 0)ᵀ. It is customary to abuse notation and to write 0 instead of 0₃. This rarely causes a problem because in most cases, whether 0 denotes the scalar zero or the zero vector can be inferred from the context.

In fact, every vector z ∈ ℝ^{3×1} can be written in a unique way as a linear combination z = x₁u + x₂v + x₃w. Then, our equation x₁u + x₂v + x₃w = b has a unique solution, and indeed, we can check that

x₁ = 1.4, x₂ = −0.4, x₃ = −0.4

is the solution.

But then, how do we determine that some vectors are linearly independent? One answer is to compute the determinant det(u, v, w) and to check that it is nonzero. In our case,

$$\det(u, v, w) = \begin{vmatrix} 1 & 2 & -1 \\ 2 & 1 & 1 \\ 1 & -2 & -2 \end{vmatrix} = 15,$$

which confirms that u, v, w are linearly independent. Other methods consist of computing an LU-decomposition or a QR-decomposition, or an SVD of the matrix consisting of the three columns u, v, w,

$$A = \begin{pmatrix} u & v & w \end{pmatrix} = \begin{pmatrix} 1 & 2 & -1 \\ 2 & 1 & 1 \\ 1 & -2 & -2 \end{pmatrix}.$$

If we form the vector of unknowns x = (x₁, x₂, x₃)ᵀ, then our linear combination x₁u + x₂v + x₃w can be written in matrix form as

$$x_1 u + x_2 v + x_3 w = \begin{pmatrix} 1 & 2 & -1 \\ 2 & 1 & 1 \\ 1 & -2 & -2 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}.$$

So, our linear system is expressed by

$$\begin{pmatrix} 1 & 2 & -1 \\ 2 & 1 & 1 \\ 1 & -2 & -2 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix},$$

or more concisely as Ax = b.

Now, what if the vectors u, v, w are linearly dependent? For example, if we consider the vectors

$$u = \begin{pmatrix} 1 \\ 2 \\ 1 \end{pmatrix} \quad
v = \begin{pmatrix} 2 \\ 1 \\ -1 \end{pmatrix} \quad
w = \begin{pmatrix} -1 \\ 1 \\ 2 \end{pmatrix},$$

we see that u − v = w, a nontrivial linear dependence. It can be verified that u and v are still linearly independent. Now, for our problem x₁u + x₂v + x₃w = b to have a solution, it must be the case that b can be expressed as a linear combination of u and v. However, it turns out that u, v, b are linearly independent (because det(u, v, b) = −6), so b cannot be expressed as a linear combination of u and v, and thus our system has no solution.

If we change the vector b to b = (3, 3, 0)ᵀ, then b = u + v, and so the system x₁u + x₂v + x₃w = b has the solution x₁ = 1, x₂ = 1, x₃ = 0. Actually, since w = u − v, the above system is equivalent to

(x₁ + x₃)u + (x₂ − x₃)v = b,

and because u and v are linearly independent, the unique solution in x₁ + x₃ and x₂ − x₃ is

x₁ + x₃ = 1, x₂ − x₃ = 1,

which yields an infinite number of solutions parameterized by x₃, namely

x₁ = 1 − x₃, x₂ = 1 + x₃.

In summary, a 3×3 linear system may have a unique solution, no solution, or an infinite number of solutions, depending on the linear independence (and dependence) of the vectors u, v, w, b. This situation can be generalized to any n×n system, and even to any n×m system (n equations in m variables), as we will see later. The point of view where our linear system is expressed in matrix form as Ax = b stresses the fact that the map x ↦ Ax is a linear transformation.
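As a quick numerical check of the computations above, here is a minimal numpy sketch (an illustration, not part of Gallier's slides; numpy is an assumption) that computes det(u, v, w), solves Ax = b, and exhibits the dependent case:

```python
import numpy as np

# Columns u, v, w of A, and the right-hand side b.
u = np.array([1.0, 2.0, 1.0])
v = np.array([2.0, 1.0, -2.0])
w = np.array([-1.0, 1.0, -2.0])
b = np.array([1.0, 2.0, 3.0])

A = np.column_stack([u, v, w])
print(np.linalg.det(A))        # 15.0 -> u, v, w are linearly independent
print(np.linalg.solve(A, b))   # [ 1.4 -0.4 -0.4], the unique solution

# The dependent example: w2 = u2 - v2, so A2 is singular and some right-hand
# sides b have no solution while others have infinitely many.
u2 = np.array([1.0, 2.0, 1.0])
v2 = np.array([2.0, 1.0, -1.0])
w2 = u2 - v2
A2 = np.column_stack([u2, v2, w2])
print(np.linalg.det(A2))       # 0.0 (up to rounding) -> linearly dependent
```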
Recommended publications
  • Linear Algebra Handbook
CS419 Linear Algebra, January 7, 2021

1 What do we need to know? By the end of this booklet, you should know all the linear algebra you need for CS419. More specifically, you'll understand:
• Vector spaces (the important ones, at least)
• Dirac (bra-ket) notation and why it's quite nice to use
• Linear combinations, linearly independent sets, spanning sets and basis sets
• Matrices and linear transformations (they're the same thing)
• Changing basis
• Inner products and norms
• Unitary operations
• Tensor products
• Why we care about linear algebra

2 Vector Spaces. In quantum computing, the 'vectors' we'll be working with are going to be made up of complex numbers. A vector space V over ℂ is a set of vectors with the vital property that αu + βv ∈ V for all α, β ∈ ℂ and u, v ∈ V. Intuitively, this means we can add together and scale up vectors in V, and we know the result is still in V. Our vectors are going to be lists of n complex numbers, v ∈ ℂⁿ, and ℂⁿ will be our most important vector space. Note we can just as easily define vector spaces over ℝ, the set of real numbers. Over the course of this module, we'll see the reasons we use ℂ, but for all this linear algebra, we can stick with ℝ as everyone is happier with real numbers. Rest assured for the entire module, every time you see something like "Consider a vector space V", this vector space will be ℝⁿ or ℂⁿ for some n ∈ ℕ.
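To make the closure property αu + βv ∈ V concrete, here is a tiny numpy sketch (my illustration, not from the CS419 booklet; numpy is an assumption) forming a complex linear combination in ℂ² and confirming the result is again a pair of complex numbers:

```python
import numpy as np

# Two vectors in C^2 and two complex scalars.
u = np.array([1 + 2j, 0 - 1j])
v = np.array([0.5 + 0j, 3 - 4j])
alpha, beta = 2 - 1j, 0 + 3j

# Closure: alpha*u + beta*v is again an element of C^2.
w = alpha * u + beta * v
print(w)        # a length-2 complex vector
print(w.dtype)  # complex128 -> still a vector of complex numbers
```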
  • 23. Eigenvalues and Eigenvectors
23. Eigenvalues and Eigenvectors. 11/17/20

Eigenvalues and eigenvectors have a variety of uses. They allow us to solve linear difference and differential equations. For many non-linear equations, they inform us about the long-run behavior of the system. They are also useful for defining functions of matrices.

23.1 Eigenvalues. We start with eigenvalues. Eigenvalues and Spectrum. Let A be an m × m matrix. An eigenvalue (characteristic value, proper value) of A is a number λ so that A − λI is singular. The spectrum of A is σ(A) = {eigenvalues of A}. Sometimes it's possible to find eigenvalues by inspection of the matrix.

Example 23.1.1: Some Eigenvalues. Suppose

$$A = \begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix}, \qquad B = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}.$$

Here it is pretty obvious that subtracting either I or 2I from A yields a singular matrix. As for matrix B, notice that subtracting I leaves us with two identical columns (and rows), so 1 is an eigenvalue. Less obvious is the fact that subtracting 3I leaves us with linearly dependent columns (and rows), so 3 is also an eigenvalue. We'll see in a moment that 2 × 2 matrices have at most two eigenvalues, so we have determined the spectrum of each: σ(A) = {1, 2} and σ(B) = {1, 3}.

23.2 Finding Eigenvalues: 2 × 2. We have several ways to determine whether a matrix is singular. One method is to check the determinant. It is zero if and only if the matrix is singular. That means we can find the eigenvalues by solving the equation det(A − λI) = 0.
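A quick check of Example 23.1.1 (my sketch, not part of these notes; numpy assumed), using the 2 × 2 fact that det(B − λI) = λ² − tr(B)λ + det(B):

```python
import numpy as np

B = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# det(B - l*I) = (2 - l)^2 - 1 = l^2 - 4l + 3, whose roots are 3 and 1.
coeffs = [1.0, -np.trace(B), np.linalg.det(B)]  # l^2 - tr(B) l + det(B)
print(np.roots(coeffs))       # [3. 1.]
print(np.linalg.eigvals(B))   # [3. 1.] -> sigma(B) = {1, 3}

# Singularity check: B - 1*I and B - 3*I both have zero determinant.
I = np.eye(2)
print(np.linalg.det(B - 1 * I), np.linalg.det(B - 3 * I))  # ~0.0, ~0.0
```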
  • Chapter 2: Linear Algebra User's Manual
Preprint typeset in JHEP style - HYPER VERSION

Chapter 2: Linear Algebra User's Manual
Gregory W. Moore

Abstract: An overview of some of the finer points of linear algebra usually omitted in physics courses. May 3, 2021.

Contents
1. Introduction
2. Basic Definitions Of Algebraic Structures: Rings, Fields, Modules, Vector Spaces, And Algebras
   2.1 Rings
   2.2 Fields (2.2.1 Finite Fields)
   2.3 Modules
   2.4 Vector Spaces
   2.5 Algebras
3. Linear Transformations
4. Basis And Dimension
   4.1 Linear Independence
   4.2 Free Modules
   4.3 Vector Spaces
   4.4 Linear Operators And Matrices
   4.5 Determinant And Trace
5. New Vector Spaces from Old Ones
   5.1 Direct sum
   5.2 Quotient Space
   5.3 Tensor Product
   5.4 Dual Space
6. Tensor spaces
   6.1 Totally Symmetric And Antisymmetric Tensors
   6.2 Algebraic structures associated with tensors (6.2.1 An Approach To Noncommutative Geometry)
7. Kernel, Image, and Cokernel
   7.1 The index of a linear operator
8. A Taste of Homological Algebra
   8.1 The Euler–Poincaré principle
   8.2 Chain maps and chain homotopies
   8.3 Exact sequences of complexes
   8.4 Left- and right-exactness
9. Relations Between Real, Complex, And Quaternionic Vector Spaces
   9.1 Complex structure on a real vector space
   9.2 Real Structure On A Complex Vector Space (9.2.1 Complex Conjugate Of A Complex Vector Space; 9.2.2 Complexification)
   9.3 The Quaternions
   9.4 Quaternionic Structure On A Real Vector Space
   9.5 Quaternionic Structure On Complex Vector Space (9.5.1 Complex Structure On Quaternionic Vector Space; 9.5.2 Summary)
   9.6 Spaces Of Real, Complex, Quaternionic Structures
  • Eigenvalues and Eigenvectors MAT 67L, Laboratory III
Eigenvalues and Eigenvectors. MAT 67L, Laboratory III

Instructions:
(1) Read this document.
(2) The questions labeled "Experiments" are not graded, and should not be turned in. They are designed for you to get more practice with MATLAB before you start working on the programming problems, and they reinforce mathematical ideas.
(3) A subset of the questions labeled "Problems" are graded. You need to turn in MATLAB M-files for each problem via Smartsite. You must read the "Getting started guide" to learn what file names you must use. Incorrect naming of your files will result in zero credit. Every problem should be placed in its own M-file.
(4) Don't forget to have fun!

Eigenvalues. One of the best ways to study a linear transformation f : V → V is to find its eigenvalues and eigenvectors, or in other words solve the equation

f(v) = λv, v ≠ 0.

In this MATLAB exercise we will lead you through some of the neat things you can do with eigenvalues and eigenvectors. First, however, you need to teach MATLAB to compute eigenvectors and eigenvalues. Let's briefly recall the steps you would have to perform by hand. As an example, let's take the matrix of a linear transformation f : ℝ³ → ℝ³ to be (in the canonical basis)

$$M := \begin{pmatrix} 1 & 2 & 3 \\ 2 & 4 & 5 \\ 3 & 5 & 6 \end{pmatrix}.$$

The steps to compute eigenvalues and eigenvectors are:
(1) Calculate the characteristic polynomial P(λ) = det(M − λI).
(2) Compute the roots λᵢ of P(λ). These are the eigenvalues.
(3) For each of the eigenvalues λᵢ, calculate ker(M − λᵢI). The vectors of any basis for ker(M − λᵢI) are the eigenvectors corresponding to λᵢ.
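The lab itself uses MATLAB; purely as an illustration (my translation, not the lab's code; numpy assumed), the same three steps can be sketched in Python:

```python
import numpy as np

M = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 5.0],
              [3.0, 5.0, 6.0]])

# Step 1: characteristic polynomial P(lambda) = det(M - lambda I).
# np.poly returns its coefficients, here for l^3 - 11 l^2 - 4 l + 1.
print(np.poly(M))

# Steps 2 and 3 in one call: the roots of P are the eigenvalues, and each
# column of `vectors` spans the kernel ker(M - l_i I).
values, vectors = np.linalg.eig(M)
for l, x in zip(values, vectors.T):
    print(l, np.allclose(M @ x, l * x))   # check M x = l x for each pair
```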
  • Eigenvalues and Eigenvectors
Eigenvalues and eigenvectors

Definition: For a vector space V over a field K and a linear map f : V → V, an eigenvalue of f is any λ ∈ K for which there exists a vector u ∈ V \ {0} such that f(u) = λu. An eigenvector corresponding to an eigenvalue λ is any vector u such that f(u) = λu. If V is of finite dimension n, then f can be represented by the matrix A = [f]_{XX} ∈ K^{n×n} w.r.t. some basis X of V. This way we get eigenvalues λ ∈ K and eigenvectors x ∈ Kⁿ of matrices — these shall satisfy Ax = λx. The collection of all eigenvalues of a matrix is its spectrum.

Examples — a linear map in the plane ℝ².

The axis symmetry by the axis of the 2nd and 4th quadrant:

$$[f]_{KK} = \begin{pmatrix} 0 & -1 \\ -1 & 0 \end{pmatrix}, \quad \lambda_1 = 1,\ x_1 = c \cdot (-1, 1)^T; \quad \lambda_2 = -1,\ x_2 = c \cdot (1, 1)^T.$$

The rotation by the right angle:

$$[f]_{KK} = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}.$$

No real eigenvalues nor eigenvectors exist.

The projection onto the first coordinate:

$$[f]_{KK} = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \quad \lambda_1 = 0,\ x_1 = c \cdot (0, 1)^T; \quad \lambda_2 = 1,\ x_2 = c \cdot (1, 0)^T.$$

Scaling by the factor 2:

$$[f]_{KK} = \begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix}, \quad \lambda_1 = 2.$$

Every vector is an eigenvector.

A linear map given by a matrix:

$$[f]_{KK} = \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}, \quad \lambda_1 = 1,\ x_1 = c \cdot (0, 1)^T.$$

Eigenvectors and eigenvalues of a diagonal matrix D. The equation …
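A numerical check of these plane maps (my sketch, not from the original notes; numpy assumed), confirming the listed eigenpairs of the reflection and the absence of real eigenvalues for the rotation:

```python
import numpy as np

# Reflection across the axis of the 2nd and 4th quadrant, and the rotation.
reflection = np.array([[0.0, -1.0], [-1.0, 0.0]])
rotation = np.array([[0.0, 1.0], [-1.0, 0.0]])

vals, vecs = np.linalg.eig(reflection)
print(vals)   # eigenvalues 1 and -1 (order may vary)
print(vecs)   # columns proportional to (-1, 1) and (1, 1)

# The rotation has eigenvalues +-i, so no real eigenvalues or eigenvectors.
print(np.linalg.eig(rotation)[0])   # [0.+1.j 0.-1.j]
```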
  • Cluster Monomials Are Dual Canonical
CLUSTER MONOMIALS ARE DUAL CANONICAL
PETER J MCNAMARA

Abstract. Kang, Kashiwara, Kim and Oh have proved that cluster monomials lie in the dual canonical basis, under a symmetric type assumption. This involves constructing a monoidal categorification of a quantum cluster algebra using representations of KLR algebras. We use a folding technique to generalise their results to all Lie types.

Contents: 1. Introduction · 2. The quantum group · 3. Bases of canonical type · 4. Generalised minors · 5. KLR algebras with automorphism · 6. The Grothendieck group construction · 7. Induction, restriction and duality · 8. Cuspidal modules · 9. The quantum unipotent ring and its categorification · 10. R-matrices · 11. Reduction modulo p · 12. Quantum cluster algebras · 13. The initial quiver · 14. Seeds with automorphism · 15. The initial seed · 16. Cluster mutation · 17. Decategorification · 18. Conclusion · References

1. Introduction. Let G be a Kac-Moody group and w an element of its Weyl group. Then associated to w and G there is a unipotent subgroup N(w), whose Lie algebra is spanned by the positive roots α such that wα is negative. Its coordinate ring C[N(w)] has a natural structure of a cluster algebra [GLS11]. This story generalises to the quantum setting [GLS13a, GY1], where the quantised coordinate ring Aq(n(w)) has a quantum cluster algebra structure. In this paper, we work with the quantum version. However, for this introduction, we shall continue with the classical story.

Date: August 25, 2020.
  • Modules and Matrices
Chapter I. Modules and matrices

Apart from the general reference given in the Introduction, for this Chapter we refer in particular to [8] and [20]. Let R be a ring with 1 ≠ 0. We assume most definitions and basic notions concerning left and right modules over R and recall just a few facts. If M is a left R-module, then for every m ∈ M the set

Ann(m) := {r ∈ R | rm = 0_M}

is a left ideal of R. Moreover Ann(M) = ⋂_{m ∈ M} Ann(m) is an ideal of R. The module M is torsion free if Ann(m) = {0} for all non-zero m ∈ M. The regular module ${}_R R$ is the additive group (R, +) considered as a left R-module with respect to the ring product. The submodules of ${}_R R$ are precisely the left ideals of R. A finitely generated R-module is free if it is isomorphic to the direct sum of n copies of ${}_R R$, for some natural number n, namely if it is isomorphic to the module

(0.1) $({}_R R)^n := {}_R R \oplus \cdots \oplus {}_R R$ (n times)

in which the operations are performed component-wise. If R is commutative, then $({}_R R)^n \cong ({}_R R)^m$ only if n = m. So, in the commutative case, the invariant n is called the rank of $({}_R R)^n$. Note that $({}_R R)^n$ is torsion free if and only if R has no zero-divisors. The aim of this Chapter is to determine the structure of finitely generated modules over a principal ideal domain (which are a generalization of finite dimensional vector spaces) and to describe some applications.
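As a tiny concrete case (my illustration, not from the chapter; the helper ann_generator is hypothetical): in the ℤ-module ℤ/nℤ, the annihilator of an element m is the ideal (n / gcd(n, m))ℤ, which a few lines of Python can verify by brute force:

```python
from math import gcd

def ann_generator(n: int, m: int) -> int:
    """In the Z-module Z/nZ, Ann(m) = {r in Z : r*m = 0 mod n} = kZ;
    return the generator k = n // gcd(n, m)."""
    return n // gcd(n, m)

n, m = 12, 8
k = ann_generator(n, m)  # 12 // gcd(12, 8) = 3
# Brute-force check over one period: r*m = 0 (mod n) exactly when k divides r.
assert all(((r * m) % n == 0) == (r % k == 0) for r in range(n))
print(k)  # 3, i.e. Ann(8) = 3Z as an ideal of Z
```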
  • MATH 423 Linear Algebra II Lecture 38: Generalized Eigenvectors. Jordan Canonical Form (Continued)
MATH 423 Linear Algebra II. Lecture 38: Generalized eigenvectors. Jordan canonical form (continued).

Jordan canonical form. A Jordan block is a square matrix of the form

$$J = \begin{pmatrix}
\lambda & 1 & 0 & \cdots & 0 & 0 \\
0 & \lambda & 1 & \cdots & 0 & 0 \\
0 & 0 & \lambda & \ddots & 0 & 0 \\
\vdots & \vdots & \vdots & \ddots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & \lambda & 1 \\
0 & 0 & 0 & \cdots & 0 & \lambda
\end{pmatrix}.$$

A square matrix B is in the Jordan canonical form if it has diagonal block structure

$$B = \begin{pmatrix}
J_1 & O & \cdots & O \\
O & J_2 & \cdots & O \\
\vdots & \vdots & \ddots & \vdots \\
O & O & \cdots & J_k
\end{pmatrix},$$

where each diagonal block Jᵢ is a Jordan block.

Consider an n×n Jordan block J as above. Multiplication by J − λI acts on the standard basis by a chain rule:

0 ← e₁ ← e₂ ← ⋯ ← e_{n−1} ← e_n

Consider a matrix

$$B = \begin{pmatrix}
2 & 1 & 0 & 0 & 0 & 0 \\
0 & 2 & 0 & 0 & 0 & 0 \\
0 & 0 & 3 & 0 & 0 & 0 \\
0 & 0 & 0 & 3 & 1 & 0 \\
0 & 0 & 0 & 0 & 3 & 1 \\
0 & 0 & 0 & 0 & 0 & 3
\end{pmatrix}.$$

This matrix is in Jordan canonical form. Vectors from the standard basis are organized in several chains. Multiplication by B − 2I: 0 ← e₁ ← e₂. Multiplication by B − 3I: 0 ← e₃, and 0 ← e₄ ← e₅ ← e₆.

Generalized eigenvectors. Let L : V → V be a linear operator on a vector space V. Definition. A nonzero vector v is called a generalized eigenvector of L associated with an eigenvalue λ if (L − λI)^k(v) = 0 for some integer k ≥ 1 (here I denotes the identity map on V). The set of all generalized eigenvectors for a particular λ along with the zero vector is called the generalized eigenspace and denoted K_λ.
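A quick numpy verification of the chain structure of B above (my sketch, not from the lecture): B − 3I sends e₆ → e₅ → e₄ → 0, so e₅ and e₆ are generalized eigenvectors for λ = 3 that are not ordinary eigenvectors.

```python
import numpy as np

B = np.array([[2, 1, 0, 0, 0, 0],
              [0, 2, 0, 0, 0, 0],
              [0, 0, 3, 0, 0, 0],
              [0, 0, 0, 3, 1, 0],
              [0, 0, 0, 0, 3, 1],
              [0, 0, 0, 0, 0, 3]], dtype=float)

N = B - 3 * np.eye(6)
e6 = np.eye(6)[:, 5]

print(N @ e6)          # e5: one step down the chain e6 -> e5
print(N @ N @ e6)      # e4: two steps down
print(N @ N @ N @ e6)  # 0: (B - 3I)^3 kills e6, so e6 is a generalized
                       # eigenvector for lambda = 3 (rank 3 in its chain)
```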
  • The Jordan Canonical Form
The Jordan canonical form
Attila Máté, Brooklyn College of the City University of New York
November 14, 2014

Contents
1 Preliminaries
  1.1 Rank-nullity theorem
  1.2 Existence of eigenvalues and eigenvectors
2 Existence of a Jordan decomposition
3 Uniqueness of the Jordan decomposition
4 The minimal polynomial of a linear transformation
  4.1 Existence of the minimal polynomial
  4.2 The minimal polynomial for algebraically closed fields
  4.3 The characteristic polynomial and the Cayley–Hamilton theorem
  4.4 Finding the minimal polynomial
5 Consequences for matrices
  5.1 Representation of vector spaces and linear transformations
  5.2 Similarity transformations
  5.3 Direct sums of matrices
  5.4 Jordan block matrices
  5.5 The Jordan canonical form of a matrix
  5.6 The characteristic polynomial as a determinant
6 An example for finding the minimal polynomial of a matrix
7 The example continued: finding the Jordan canonical form
  7.1 The first two Jordan subspaces
  7.2 The next two Jordan subspaces
  7.3 The Jordan canonical form
8 Matrices and field extensions
  8.1 A further decomposition theorem

1 Preliminaries

1.1 Rank-nullity theorem. Let U and V be vector spaces over a field F, and let T : U → V be a linear transformation. Then the domain of T, denoted as dom(T), is the set U itself. The range of T is the set ra(T) = {Tu : u ∈ U}, and it is a subspace of V; one often writes TU instead of ra(T). The kernel, or null space, of T is the set ker(T) = {u ∈ U : Tu = 0}; the kernel is a subspace of U. A key theorem is the following, often called the Rank-Nullity Theorem: Theorem 1.
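Rank-nullity in action (my sketch, not from the paper; numpy assumed): for a matrix T representing a map ℝⁿ → ℝᵐ, the rank plus the dimension of the kernel equals n, the dimension of the domain.

```python
import numpy as np

# A 3x4 matrix, i.e. a linear map T : R^4 -> R^3 with dom(T) = R^4.
T = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 1.0],
              [1.0, 3.0, 1.0, 2.0]])   # row 3 = row 1 + row 2, so rank < 3

rank = np.linalg.matrix_rank(T)
nullity = T.shape[1] - rank            # dim ker(T), by rank-nullity
print(rank, nullity, rank + nullity)   # 2 2 4 = dim of the domain
```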
  • Unit 4: Matrices, Linear Maps and Change of Basis
Unit 4: Matrices, Linear maps and change of basis
Juan Luis Melero and Eduardo Eyras
September 2018

Contents
1 Linear maps
  1.1 Operations of linear maps (1.1.1 Scaling; 1.1.2 Reflection; 1.1.3 Pure rotation)
  1.2 Definition of linear map
  1.3 Image of a map
  1.4 Kernel (nullspace) of a map
  1.5 Types of maps (1.5.1 Monomorphism (injective or one-to-one map); 1.5.2 Epimorphism (surjective or onto map); 1.5.3 Isomorphism (bijective or "one-to-one and onto" maps))
  1.6 Matrix representation of a linear map
  1.7 Properties of the matrix associated to a linear map, kernel and image (1.7.1 Definitions; 1.7.2 Application of the properties)
2 Change of basis
3 Composition of linear maps
4 Inverse of a linear map
5 Path of linear maps
6 Exercises
7 R practical (7.1 Kernel of a linear map; 7.2 Image of a linear map)

1 Linear maps

1.1 Operations of linear maps. A linear map is an operation on a vector space that transforms one vector into another, and it can be represented as a matrix multiplication:

f_A : ℝⁿ → ℝᵐ, u ↦ v = f_A(u) = Au ∈ ℝᵐ.

For instance, in three dimensions:

$$v = f_A(u) = Au = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix} \begin{pmatrix} u_1 \\ u_2 \\ u_3 \end{pmatrix} = \begin{pmatrix} v_1 \\ v_2 \\ v_3 \end{pmatrix}.$$

Many operations can be represented with linear maps.
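For instance, a pure rotation of the plane is the linear map u ↦ Au with the familiar rotation matrix. The sketch below is my own illustration in Python (not the unit's R practical; rotation_matrix is a hypothetical helper):

```python
import numpy as np

def rotation_matrix(theta: float) -> np.ndarray:
    """Matrix of the linear map f_A : R^2 -> R^2 rotating by angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s],
                     [s,  c]])

A = rotation_matrix(np.pi / 2)       # rotate by 90 degrees
u = np.array([1.0, 0.0])
print(np.round(A @ u, 8))            # [0. 1.]: e1 is sent to e2

# Linearity: f_A(2u + 3w) = 2 f_A(u) + 3 f_A(w) for any u, w.
w = np.array([1.0, 1.0])
print(np.allclose(A @ (2*u + 3*w), 2*(A @ u) + 3*(A @ w)))  # True
```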
  • Generalized Eigenvector - Wikipedia
Generalized eigenvector

In linear algebra, a generalized eigenvector of an n × n matrix A is a vector which satisfies certain criteria which are more relaxed than those for an (ordinary) eigenvector.[1]

Let V be an n-dimensional vector space; let φ be a linear map in L(V), the set of all linear maps from V into itself; and let A be the matrix representation of φ with respect to some ordered basis. There may not always exist a full set of n linearly independent eigenvectors of A that form a complete basis for V. That is, the matrix A may not be diagonalizable.[2][3] This happens when the algebraic multiplicity of at least one eigenvalue λ is greater than its geometric multiplicity (the nullity of the matrix A − λI, or the dimension of its nullspace). In this case, λ is called a defective eigenvalue and A is called a defective matrix.[4]

A generalized eigenvector corresponding to λ, together with the matrix A − λI, generate a Jordan chain of linearly independent generalized eigenvectors which form a basis for an invariant subspace of V.[5][6][7] Using generalized eigenvectors, a set of linearly independent eigenvectors of A can be extended, if necessary, to a complete basis for V.[8] This basis can be used to determine an "almost diagonal matrix" in Jordan normal form, similar to A, which is useful in computing certain matrix functions of A.[9] This matrix is also useful in solving the system of linear differential equations x′ = Ax, where A need not be diagonalizable.[10][11]

Contents: Overview and definition · Examples (Example 1; Example 2) · Jordan chains
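A minimal defective example (my sketch, not from the Wikipedia article; numpy assumed): the shear matrix [[1, 1], [0, 1]] has eigenvalue 1 with algebraic multiplicity 2 but only a one-dimensional eigenspace, so a generalized eigenvector is needed to complete the basis.

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])   # defective: eigenvalue 1 repeated, one eigenvector

N = A - 1.0 * np.eye(2)
print(np.linalg.matrix_rank(N))   # 1 -> nullity 1 < algebraic multiplicity 2

e1 = np.array([1.0, 0.0])  # ordinary eigenvector: N e1 = 0
e2 = np.array([0.0, 1.0])  # generalized eigenvector: N e2 = e1, N^2 e2 = 0
print(N @ e1, N @ e2, N @ N @ e2)   # [0. 0.] [1. 0.] [0. 0.]
```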
  • Inverses of Generalized Vandermonde Matrices, Luis Verde-Star
JOURNAL OF MATHEMATICAL ANALYSIS AND APPLICATIONS 131, 341-353 (1988)

Inverses of Generalized Vandermonde Matrices
LUIS VERDE-STAR
Department of Mathematics, Universidad Autónoma Metropolitana, México D.F. CP. 09340, México
Submitted by G.-C. Rota
Received October 15, 1986

1. INTRODUCTION

Vandermonde matrices appear frequently in the theories of approximation and interpolation, linear difference and differential equations, and algebraic coding. In many cases Vandermonde matrices appear as coefficient matrices of systems of equations which must be solved. In other cases the inverse matrix is required. See for example Davis [3], Kalman [5], and Schumaker [12].

It is well known that under certain simple hypotheses confluent Vandermonde matrices are invertible. This fact is often used to show existence and uniqueness of solutions of a great variety of problems; for instance, Davis [3] does this for several interpolation problems. On the other hand, it seems that there is not much known about the structure and algebraic properties of the inverses of confluent Vandermonde matrices.

In this paper we will prove several matrix equations involving generalized Vandermonde matrices, which give explicit algebraic information about the inverse matrices. For example, Eq. (3.20) says that the inversion of a confluent Vandermonde matrix V is accomplished by multiplying V by certain permutation matrices, a triangular change-of-basis matrix, and a block diagonal matrix whose blocks are triangular Toeplitz matrices. All these matrices are completely determined, they have a simple structure, and they are easily computed. A convenient way to deal with Vandermonde matrices is through their connection with polynomial interpolation.
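To connect with the interpolation remark (my sketch, not from the paper; numpy assumed): applying the inverse of an ordinary Vandermonde matrix solves polynomial interpolation, since V c = y recovers the coefficients of the polynomial through the points (xᵢ, yᵢ).

```python
import numpy as np

# Nodes and values to interpolate: y = p(x) with p(x) = 2x^2 - x + 1.
x = np.array([0.0, 1.0, 2.0])
y = 2 * x**2 - x + 1

# Vandermonde matrix with increasing powers: V[i, j] = x_i ** j.
V = np.vander(x, increasing=True)

# Solving V c = y (i.e., applying V^{-1}) recovers the coefficients of p.
c = np.linalg.solve(V, y)
print(c)   # [ 1. -1.  2.] -> p(x) = 1 - x + 2x^2
```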