LECTURE NOTES on LINEAR ALGEBRA, LINEAR DIFFERENTIAL EQUATIONS, and LINEAR SYSTEMS

Total pages: 16

File type: PDF, size: 1020 KB

A. S. Morse, August 29, 2020

Contents

I  Linear Algebra and Linear Differential Equations  1

1  Fields and Polynomial Rings  3

2  Matrix Algebra  7
  2.1  Matrices: Notation  7
  2.2  Types of Matrices  8
  2.3  Determinants: Review  9
  2.4  Matrix Rank  9
  2.5  Basic Matrix Operations  10
    2.5.1  Transposition  10
    2.5.2  Scalar Multiplication  11
    2.5.3  Addition  11
    2.5.4  Multiplication  12
    2.5.5  Linear Algebraic Equations  14
    2.5.6  Matrix Inversion  15
  2.6  Linear Recursion Equations  17
  2.7  Invariants under Premultiplication  19
    2.7.1  Elementary Row Operations  20
    2.7.2  Echelon Form  21
    2.7.3  Gauss Elimination  22
    2.7.4  Elementary Column Operations  27
    2.7.5  Equivalent Matrices  28

3  Linear Algebra  31
  3.1  Linear Vector Spaces  31
    3.1.1  Subspaces  32
    3.1.2  Subspace Operations  33
    3.1.3  Distributive Rule  34
    3.1.4  Independent Subspaces and Direct Sum  36
    3.1.5  Linear Combination and Span  36
    3.1.6  Linear Independence  36
    3.1.7  Basis  37
    3.1.8  Dimension  40
  3.2  Functions  40
    3.2.1  Linear Transformations  41
    3.2.2  Operations With Linear Transformations  42
    3.2.3  Representations of Linear Transformations  42
    3.2.4  Coordinate Transformations  43
    3.2.5  Defining a Linear Transformation on a Basis  44
    3.2.6  Linear Equations: Existence, Image, Epimorphism  45
    3.2.7  Linear Equations: Uniqueness, Kernel, Monomorphism  47
    3.2.8  Isomorphisms and Isomorphic Vector Spaces  48
    3.2.9  Endomorphisms and Similar Matrices  48

4  Basic Concepts from Analysis  51
  4.1  Normed Vector Spaces  51
    4.1.1  Open Sets  53
  4.2  Continuous and Differentiable Functions  53
  4.3  Convergence  55

5  Ordinary Differential Equations - First Concepts  57
  5.1  Types of Equation  57
  5.2  Modeling  59
  5.3  State Space Systems  62
    5.3.1  Conversion to State-Space Form  63
  5.4  Initial Value Problem  65
  5.5  Picard Iterations  68
    5.5.1  Three Relationships  69
    5.5.2  Convergence  70
    5.5.3  Uniqueness  72
    5.5.4  Summary of Findings  74
  5.6  The Concept of State  77
    5.6.1  Trajectories, Equilibrium Points and Periodic Orbits  78
    5.6.2  Time-Invariant Dynamical Systems  80
  5.7  Simulation  81
    5.7.1  Numerical Techniques  81
    5.7.2  Analog Simulation  83
    5.7.3  Programming Diagrams  84

6  Linear Differential Equations  87
  6.1  Linearization  87
  6.2  Linear Homogeneous Differential Equations  90
    6.2.1  State Transition Matrix  91
    6.2.2  The Matrix Exponential  92
    6.2.3  State Transition Matrix Properties  93
  6.3  Forced Linear Differential Equations  96
    6.3.1  Variation of Constants Formula  96
  6.4  Periodic Homogeneous Differential Equations  97
    6.4.1  Coordinate Transformations  98
    6.4.2  Lyapunov Transformations  99
    6.4.3  Floquet's Theorem  100

7  Matrix Similarity  103
  7.1  Motivation  103
    7.1.1  The Matrix Exponential of a Diagonal Matrix  104
    7.1.2  The State Transition Matrix of a Diagonalizable Matrix  104
  7.2  Eigenvalues  105
    7.2.1  Characteristic Polynomial  105
    7.2.2  Computation of Eigenvectors  106
    7.2.3  Semisimple Matrices  107
    7.2.4  Eigenvectors and Differential Equations  108
  7.3  A-Invariant Subspaces  109
    7.3.1  Direct Sum Decomposition of IK^n  111
  7.4  Similarity Invariants  112
  7.5  The Cayley-Hamilton Theorem  112
  7.6  Minimal Polynomial  113
    7.6.1  The Minimal Polynomial of a Vector  114
    7.6.2  The Minimal Polynomial of a Finite Set of Vectors  114
    7.6.3  The Minimal Polynomial of a Subspace  114
    7.6.4  The Minimal Polynomial of A  115
    7.6.5  A Similarity Invariant  115
  7.7  A Useful Formula  116
    7.7.1  Application  117
  7.8  Cyclic Subspaces  117
    7.8.1  Companion Forms  118
    7.8.2  Rational Canonical Form  120
  7.9  Coprime Decompositions  121
    7.9.1  The Jordan Normal Form  124
  7.10  The Structure of the Matrix Exponential  126
    7.10.1  Real Matrices  128
    7.10.2  Asymptotic Behavior  129
  7.11  Linear Recursion Equations  129

8  Inner Product Spaces  131
  8.1  Definition  131
    8.1.1  Triangle Inequality  132
  8.2  Orthogonality  133
    8.2.1  Orthogonal Vectors  133
    8.2.2  Orthonormal Sets  133
  8.3  Gram's Criterion  134
  8.4  Cauchy-Schwarz Inequality  135
  8.5  Orthogonalization of a Set of Vectors  136
  8.6  Orthogonal Projections  137
  8.7  Bessel's Inequality  138
  8.8  Least Squares  139

9  Normal Matrices  143
  9.1  Algebraic Groups  143
    9.1.1  Orthogonal Group  144
    9.1.2  Unitary Group  145
  9.2  Adjoint Matrix  145
    9.2.1  Orthogonal Complement  145
    9.2.2  Properties of Adjoint Matrices  146
  9.3  Normal Matrices  146
    9.3.1  Diagonalization of a Normal Matrix  147
  9.4  Hermitian, Unitary, Orthogonal and Symmetric Matrices  149
    9.4.1  Hermitian Matrices  149
    9.4.2  Unitary Matrices  150
    9.4.3  Orthogonal Matrices  150
    9.4.4  Symmetric Matrices  150
  9.5  Real Quadratic Forms  152
    9.5.1  Symmetric Quadratic Forms  153
    9.5.2  Change of Variables: Congruent Matrices  153
    9.5.3  Reduction to Principal Axes  153
    9.5.4  Sum of Squares  154
  9.6  Positive Definite Quadratic Forms  156
    9.6.1  Conditions for Positive Definiteness  156
    9.6.2  Matrix Square Roots  157
  9.7  Simultaneous Diagonalization  158
    9.7.1  Constrained Optimization  159
    9.7.2  A Special Case  159

II  Linear Systems  163

10  Introductory Concepts  165
  10.1  Linear Systems  165
  10.2  Continuous-Time Linear Systems  166
    10.2.1  Time-Invariant, Continuous-Time, Linear Systems  166
  10.3  Discrete-Time Linear Systems  168
    10.3.1  Time-Invariant Discrete-Time Systems  168
    10.3.2  Sampled Data Systems  169
  10.4  The Concept of a Realization  170
    10.4.1  Existence of Continuous-Time Linear Realizations  171

11  Controllability  173
  11.1  Reachable States  173
  11.2  Controllability  175
    11.2.1  Controllability Reduction  176
  11.3  Time-Invariant Continuous-Time Linear Systems  178
    11.3.1  Properties of the Controllable Space of (A, B)  181
    11.3.2  Control Reduction for Time-Invariant Systems  182
    11.3.3  Controllable Decomposition  184

12  Observability  187
  12.1  Unobservable States  187
  12.2  Observability  189
    12.2.1  Observability Reduction  190
  12.3  Minimal Systems  192
  12.4  Time-Invariant Continuous-Time Linear Systems  193
    12.4.1  Properties of the Unobservable Space
Recommended publications
  • Formal Power Series - Wikipedia, the Free Encyclopedia
    Formal power series - Wikipedia, the free encyclopedia (http://en.wikipedia.org/wiki/Formal_power_series)

    In mathematics, formal power series are a generalization of polynomials as formal objects, where the number of terms is allowed to be infinite; this means giving up the possibility of substituting arbitrary values for the indeterminates. This perspective contrasts with that of power series, whose variables designate numerical values and whose series therefore only have a definite value if convergence can be established. Formal power series are often used merely to represent the whole collection of their coefficients. In combinatorics, they provide representations of numerical sequences and of multisets, and for instance allow giving concise expressions for recursively defined sequences regardless of whether the recursion can be explicitly solved; this is known as the method of generating functions.

    Contents:
    1 Introduction
    2 The ring of formal power series
      2.1 Definition of the formal power series ring
        2.1.1 Ring structure
        2.1.2 Topological structure
        2.1.3 Alternative topologies
      2.2 Universal property
    3 Operations on formal power series
      3.1 Multiplying series
      3.2 Power series raised to powers
      3.3 Inverting series
      3.4 Dividing series
      3.5 Extracting coefficients
      3.6 Composition of series
        3.6.1 Example
      3.7 Composition inverse
      3.8 Formal differentiation of series
    4 Properties
      4.1 Algebraic properties of the formal power series ring
      4.2 Topological properties of the formal power series
    [Show full text]
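    The "method of generating functions" mentioned in the excerpt can be sketched with truncated coefficient lists: inverting the series 1 - x - x^2 recovers the Fibonacci numbers as its coefficients. A minimal pure-Python sketch (the helper names `mul` and `inverse` are illustrative, not from any library):

    ```python
    # Formal power series as truncated coefficient lists [a0, a1, ..., a_{n-1}].
    # Sketch of the generating-function method: the coefficients of
    # 1/(1 - x - x^2) are the Fibonacci numbers 1, 1, 2, 3, 5, ...

    def mul(f, g, n):
        """Cauchy product of two series, truncated to n coefficients."""
        h = [0] * n
        for i, a in enumerate(f[:n]):
            for j, b in enumerate(g[:n]):
                if i + j < n:
                    h[i + j] += a * b
        return h

    def inverse(f, n):
        """Multiplicative inverse of a series with invertible constant term.
        Solves sum_{j<=i} f[j] * g[i-j] = [i == 0] for g, term by term."""
        assert f[0] != 0
        g = [0] * n
        g[0] = 1 if f[0] == 1 else 1 / f[0]   # stay integral when f[0] == 1
        for i in range(1, n):
            s = sum(f[j] * g[i - j] for j in range(1, min(i, len(f) - 1) + 1))
            g[i] = -s * g[0]
        return g

    fib = inverse([1, -1, -1], 10)   # 1/(1 - x - x^2)
    print(fib)  # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
    ```

    Truncation order n is the only approximation here; within it the arithmetic is exact, which is exactly the sense in which the series are "formal" rather than convergent.
    
    
    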
  • New Foundations for Geometric Algebra
    Text published in the electronic journal Clifford Analysis, Clifford Algebras and their Applications vol. 2, No. 3 (2013) pp. 193-211. New foundations for geometric algebra. Ramon González Calvet, Institut Pere Calders, Campus Universitat Autònoma de Barcelona, 08193 Cerdanyola del Vallès, Spain. E-mail: [email protected]

    Abstract. New foundations for geometric algebra are proposed based upon the existing isomorphisms between geometric and matrix algebras. Each geometric algebra always has a faithful real matrix representation with a periodicity of 8. On the other hand, each matrix algebra is always embedded in a geometric algebra of a convenient dimension. The geometric product is also isomorphic to the matrix product, and many vector transformations such as rotations, axial symmetries and Lorentz transformations can be written in a form isomorphic to a similarity transformation of matrices. We take up the idea Dirac applied when developing the relativistic electron equation: he took a basis of matrices for the geometric algebra instead of a basis of geometric vectors. Of course, this way of understanding geometric algebra requires new definitions: the geometric vector space is defined as the algebraic subspace that generates the rest of the matrix algebra by addition and multiplication; isometries are simply defined as the similarity transformations of matrices as shown above; and finally the norm of any element of the geometric algebra is defined as the nth root of the determinant of its representative matrix of order n. The main idea of this proposal is an arithmetic point of view: it reverses the roles of matrix and geometric algebras, in the sense that geometric algebra becomes a way of accessing, working with and understanding the most fundamental conception of matrix algebra as the algebra of transformations of multiple quantities.
    [Show full text]
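    The isomorphism between a geometric algebra and a matrix algebra that the abstract takes as its starting point can be seen in miniature: the complex numbers embed faithfully in the 2 × 2 real matrices, with the geometric product carried to the matrix product and |z|^2 to the determinant of the representative matrix, in the spirit of the paper's norm definition. A minimal sketch, not code from the paper (all helper names are invented for illustration):

    ```python
    # Sketch: embed a + bi as the 2x2 real matrix [[a, -b], [b, a]].
    # The product of two complex numbers maps to the matrix product,
    # and |z|^2 equals the determinant of the representative matrix.

    def rep(a, b):
        """Real matrix representation of the complex number a + bi."""
        return [[a, -b], [b, a]]

    def matmul(X, Y):
        return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
                for i in range(2)]

    def det(X):
        return X[0][0] * X[1][1] - X[0][1] * X[1][0]

    z, w = (1, 2), (3, -1)            # 1 + 2i and 3 - i
    M = matmul(rep(*z), rep(*w))      # product of the representations
    print(M)                          # [[5, -5], [5, 5]]  -> represents 5 + 5i
    print(complex(*z) * complex(*w))  # (5+5j), the same product in C
    print(det(rep(*z)))               # 5 == |1 + 2i|^2
    ```
    
    
    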
  • 5. Polynomial Rings
    5. Polynomial rings. Let R be a commutative ring. A polynomial of degree n in an indeterminate (or variable) x with coefficients in R is an expression of the form

    f = f(x) = a_0 + a_1 x + ··· + a_n x^n,

    where a_0, …, a_n ∈ R and a_n ≠ 0. We say that a_0, …, a_n are the coefficients of f and that a_n x^n is the highest degree term of f. A polynomial is determined by its coefficients. Two polynomials f = Σ_{i=0}^m a_i x^i and g = Σ_{i=0}^n b_i x^i are equal if m = n and a_i = b_i for i = 0, 1, …, n.

    The degree of a polynomial is a non-negative integer. The polynomials of degree zero are just the elements of R; these are called constant polynomials, or simply constants. Let R[x] denote the set of all polynomials in the variable x with coefficients in R. The addition and multiplication of polynomials are defined in the usual manner: if f(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + ··· and g(x) = b_0 + b_1 x + b_2 x^2 + b_3 x^3 + ··· are two elements of R[x], then their sum is

    f(x) + g(x) = (a_0 + b_0) + (a_1 + b_1)x + (a_2 + b_2)x^2 + (a_3 + b_3)x^3 + ···

    and their product is defined by

    f(x) · g(x) = a_0 b_0 + (a_0 b_1 + a_1 b_0)x + (a_0 b_2 + a_1 b_1 + a_2 b_0)x^2 + (a_0 b_3 + a_1 b_2 + a_2 b_1 + a_3 b_0)x^3 + ···

    Let 1 denote the constant polynomial 1 and 0 denote the constant polynomial zero.
    [Show full text]
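    The sum and product formulas above translate directly into code when a polynomial is stored as its coefficient list [a_0, a_1, …, a_n]. A small sketch over R = ℤ (the function names are illustrative):

    ```python
    # Polynomials over a commutative ring as coefficient lists [a0, a1, ..., an],
    # mirroring the sum and product formulas above (here R is the integers).

    def poly_add(f, g):
        n = max(len(f), len(g))
        f = f + [0] * (n - len(f))
        g = g + [0] * (n - len(g))
        return [a + b for a, b in zip(f, g)]

    def poly_mul(f, g):
        # Coefficient of x^k in f*g is the convolution sum_{i+j=k} a_i * b_j.
        h = [0] * (len(f) + len(g) - 1)
        for i, a in enumerate(f):
            for j, b in enumerate(g):
                h[i + j] += a * b
        return h

    f = [1, 2]        # 1 + 2x
    g = [3, 0, 1]     # 3 + x^2
    print(poly_add(f, g))  # [4, 2, 1]     -> 4 + 2x + x^2
    print(poly_mul(f, g))  # [3, 6, 1, 2]  -> 3 + 6x + x^2 + 2x^3
    ```
    
    
    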
  • Understanding the Spreading Power of All Nodes in a Network: a Continuous-Time Perspective
    Understanding the spreading power of all nodes in a network: a continuous-time perspective. Glenn Lawyer, Max Planck Institute for Informatics, Saarbrücken, Germany. Submitted to Proceedings of the National Academy of Sciences of the United States of America.

    Centrality measures such as the degree, k-shell, or eigenvalue centrality can identify a network's most influential nodes, but are rarely usefully accurate in quantifying the spreading power of the vast majority of nodes which are not highly influential. The spreading power of all network nodes is better explained by considering, from a continuous-time epidemiological perspective, the distribution of the force of infection each node generates. The resulting metric, the Expected Force (ExF), accurately quantifies node spreading power under all primary epidemiological models across a wide range of archetypical human contact networks. When node power is low, influence is a function of neighbor degree. As power increases, a node's own degree becomes more important.

    […] When this ratio exceeds this critical range, the dynamics approach the SI system as a limiting case. Several approaches to quantifying the spreading power of all nodes have recently been proposed, including the accessibility [16, 17], the dynamic influence [11], and the impact [7] (see Supplementary Note S1). These extend earlier approaches to measuring centrality by explicitly incorporating spreading dynamics, and have been shown to be both distinct from previous centrality measures and more highly correlated with epidemic outcomes [7, 11, 18]. Yet they retain the common foundation.
    [Show full text]
  • Similarity Preserving Linear Maps on Upper Triangular Matrix Algebras ୋ
    Linear Algebra and its Applications 414 (2006) 278-287, www.elsevier.com/locate/laa. Similarity preserving linear maps on upper triangular matrix algebras. Guoxing Ji, Baowei Wu, College of Mathematics and Information Science, Shaanxi Normal University, Xi'an 710062, PR China. Received 14 June 2005; accepted 5 October 2005. Available online 1 December 2005. Submitted by P. Šemrl.

    Abstract. In this note we consider similarity preserving linear maps on the algebra of all n × n complex upper triangular matrices Tn. We give characterizations of similarity invariant subspaces in Tn and similarity preserving linear injections on Tn. Furthermore, we consider linear injections on Tn preserving similarity in Mn as well. © 2005 Elsevier Inc. All rights reserved. AMS classification: 47B49; 15A04. Keywords: Matrix; Upper triangular matrix; Similarity invariant subspace; Similarity preserving linear map.

    1. Introduction. In the last few decades, many researchers have considered linear preserver problems on matrix or operator spaces. For example, there are many research works on linear maps which preserve spectrum (cf. [4,5,11]), rank (cf. [3]), nilpotency (cf. [10]) and similarity (cf. [2,6-8]), and so on. Many useful techniques have been developed; see [1,9] for some general techniques and background. Hiai [2] gave a characterization of the similarity preserving linear map on the algebra of all complex n × n matrices. This research was supported by the National Natural Science Foundation of China (no. 10571114), the Natural Science Basic Research Plan in Shaanxi Province of China (program no.
    [Show full text]
  • Formal Power Series Rings, Inverse Limits, and I-Adic Completions of Rings
    Formal power series rings, inverse limits, and I-adic completions of rings.

    Formal semigroup rings and formal power series rings. We next want to explore the notion of a (formal) power series ring in finitely many variables over a ring R, and show that it is Noetherian when R is. But we begin with a definition in much greater generality. Let S be a commutative semigroup (which will have identity 1_S = 1) written multiplicatively. The semigroup ring of S with coefficients in R may be thought of as the free R-module with basis S, with multiplication defined by the rule

    (Σ_{i=1}^h r_i s_i)(Σ_{j=1}^k r'_j s'_j) = Σ_{s∈S} (Σ_{s_i s'_j = s} r_i r'_j) s.

    We next want to construct a much larger ring in which infinite sums of multiples of elements of S are allowed. In order to ensure that multiplication is well-defined, from now on we assume that S has the following additional property:

    (#) For all s ∈ S, the set {(s_1, s_2) ∈ S × S : s_1 s_2 = s} is finite.

    Thus, each element of S has only finitely many factorizations as a product of two elements. For example, we may take S to be the set of all monomials {x_1^{k_1} ··· x_n^{k_n} : (k_1, …, k_n) ∈ N^n} in n variables. For this choice of S, the usual semigroup ring R[S] may be identified with the polynomial ring R[x_1, …, x_n] in n indeterminates over R. We next construct a formal semigroup ring denoted R[[S]]: we may think of this ring formally as consisting of all functions from S to R, but we shall indicate elements of the ring notationally as (possibly infinite) formal sums Σ_{s∈S} r_s s, where the function corresponding to this formal sum maps s to r_s for all s ∈ S.
    [Show full text]
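    The multiplication rule above can be exercised for the monomial choice of S: an element of R[S] becomes a dict from exponent tuples to coefficients, and the product gathers each r_i r'_j into the coefficient of s whenever s_i s'_j = s. A hypothetical sketch (the name `sring_mul` is invented for illustration; S here is finitely supported, so property (#) is not needed):

    ```python
    # Sketch of the semigroup ring R[S] with S the monomials x1^k1 ... xn^kn:
    # elements are dicts mapping exponent tuples to coefficients, and the
    # product collects r_i * r'_j into the coefficient of s whenever
    # s_i * s_j = s, exactly as in the multiplication rule quoted above.

    def sring_mul(f, g):
        h = {}
        for s1, r1 in f.items():
            for s2, r2 in g.items():
                s = tuple(e1 + e2 for e1, e2 in zip(s1, s2))  # s1 * s2 in S
                h[s] = h.get(s, 0) + r1 * r2
        return {s: r for s, r in h.items() if r != 0}

    # (x + y) * (x - y) = x^2 - y^2 in Z[x, y]
    f = {(1, 0): 1, (0, 1): 1}
    g = {(1, 0): 1, (0, 1): -1}
    print(sring_mul(f, g))  # {(2, 0): 1, (0, 2): -1}
    ```

    With exponent tuples as keys this is exactly the identification of R[S] with the polynomial ring R[x_1, …, x_n] mentioned in the excerpt.
    
    
    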
  • Similarity to Symmetric Matrices Over Finite Fields
    Finite Fields and Their Applications 4, 261-274 (1998), Article No. FF980216. Similarity to Symmetric Matrices over Finite Fields. Joel V. Brawley, Clemson University, Clemson, South Carolina 29631, and Timothy C. Teitloff, Kennesaw State University, Kennesaw, Georgia 30144. E-mail: tteitloff@ksumail.kennesaw.edu. Communicated by Zhe-Xian Wan. Received October 22, 1996; revised March 31, 1998.

    It has been known for some time that every polynomial with coefficients from a finite field is the minimum polynomial of a symmetric matrix with entries from the same field. What have remained unknown, however, are the possible sizes for the symmetric matrices with a specified minimum polynomial and, in particular, the least possible size. In this paper we answer these questions using only the prime factorization of the given polynomial. Closely related is the question of whether or not a given matrix over a finite field is similar to a symmetric matrix over that field. Although partial results on that question have been published before, this paper contains a complete characterization. © 1998 Academic Press.

    1. INTRODUCTION. This paper seeks to solve two problems:

    PROBLEM 1. Determine if a given polynomial with coefficients from Fq, the finite field of q elements, is the minimum polynomial of a symmetric matrix, and if so, determine the possible sizes of such symmetric matrices.

    PROBLEM 2. Characterize those matrices over Fq that are similar over Fq to symmetric matrices.
    [Show full text]
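    Problem 1 can be explored by brute force in the smallest case: enumerate the symmetric 2 × 2 matrices over F_2 and group them by characteristic polynomial x^2 + (tr A)x + det A (signs are immaterial in characteristic 2, and for 2 × 2 matrices the characteristic polynomial is a convenient stand-in for the minimum polynomial). This is only a toy illustration, not the paper's method:

    ```python
    # Tiny illustration of Problem 1 over F_2: enumerate all symmetric 2x2
    # matrices over the field of two elements and bucket them by the pair
    # (tr A mod 2, det A mod 2), i.e. by characteristic polynomial.
    from itertools import product

    found = {}
    for a, b, d in product([0, 1], repeat=3):
        A = [[a, b], [b, d]]
        tr = (a + d) % 2
        det = (a * d - b * b) % 2
        found.setdefault((tr, det), []).append(A)

    # x^2 + x + 1 (the irreducible quadratic over F_2) is tr = 1, det = 1:
    print(found[(1, 1)])  # [[[0, 1], [1, 1]], [[1, 1], [1, 0]]]
    ```

    Every monic quadratic over F_2 shows up in some bucket, consistent with the known fact quoted in the abstract that every polynomial over a finite field is the minimum polynomial of some symmetric matrix.
    
    
    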
  • The Method of Coalgebra: Exercises in Coinduction
    The Method of Coalgebra: exercises in coinduction. Jan Rutten, CWI & RU, [email protected]. Draft d.d. 8 July 2018 (comments are welcome).

    Contents:
    1 Introduction 7
      1.1 The method of coalgebra 7
      1.2 History, roughly and briefly 7
      1.3 Exercises in coinduction 8
      1.4 Enhanced coinduction: algebra and coalgebra combined 8
      1.5 Universal coalgebra 8
      1.6 How to read this book 8
      1.7 Acknowledgements 9
    2 Categories - where coalgebra comes from 11
      2.1 The basic definitions 11
      2.2 Category theory in slogans 12
      2.3 Discussion 16
    3 Algebras and coalgebras 19
      3.1 Algebras 19
      3.2 Coalgebras 21
      3.3 Discussion 23
    4 Induction and coinduction 25
      4.1 Inductive and coinductive definitions 25
      4.2 Proofs by induction and coinduction 28
      4.3 Discussion 32
    5 The method of coalgebra 33
      5.1 Basic types of coalgebras 34
      5.2 Coalgebras, systems, automata … 34
    6 Dynamical systems 37
      6.1 Homomorphisms of dynamical systems 38
      6.2 On the behaviour of dynamical systems 41
      6.3 Discussion 44
    7 Stream systems 45
      7.1 Homomorphisms and bisimulations of stream systems 46
      7.2 The final system of streams 52
      7.3 Defining streams by coinduction 54
      7.4 Coinduction: the bisimulation proof method 59
      7.5 Moessner's Theorem 66
      7.6 The heart of the matter: circularity 72
      7.7 Discussion 76
    8 Deterministic automata 77
      8.1 Basic definitions 78
      8.2 Homomorphisms and bisimulations of automata
    [Show full text]
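    The coinductive stream definitions of Chapter 7 (a stream is a head together with a rule producing its tail, never built in full) have a convenient lazy stand-in in Python generators. A sketch, not from the book:

    ```python
    # A lazy stand-in for coinductively defined streams: each step is
    # productive, yielding the head and deferring the rest of the stream.
    # This realizes the classic corecursive Fibonacci stream.
    from itertools import islice

    def fib_stream():
        a, b = 0, 1
        while True:      # infinite by design; only observation is finite
            yield a
            a, b = b, a + b

    print(list(islice(fib_stream(), 8)))  # [0, 1, 1, 2, 3, 5, 8, 13]
    ```

    The generator never terminates, just as a stream has no base case; `islice` plays the role of the finite observations by which two streams are compared in a bisimulation proof.
    
    
    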
  • Block Matrices in Linear Algebra
    PRIMUS: Problems, Resources, and Issues in Mathematics Undergraduate Studies. ISSN: 1051-1970 (Print), 1935-4053 (Online). Journal homepage: https://www.tandfonline.com/loi/upri20

    Block Matrices in Linear Algebra. Stephan Ramon Garcia and Roger A. Horn. To cite this article: Stephan Ramon Garcia & Roger A. Horn (2020) Block Matrices in Linear Algebra, PRIMUS, 30:3, 285-306, DOI: 10.1080/10511970.2019.1567214. To link to this article: https://doi.org/10.1080/10511970.2019.1567214. Accepted author version posted online: 05 Feb 2019. Published online: 13 May 2019. Copyright © Taylor & Francis Group, LLC.

    Abstract: Linear algebra is best done with block matrices. As evidence in support of this thesis, we present numerous examples suitable for classroom presentation. Keywords: Matrix, matrix multiplication, block matrix, Kronecker product, rank, eigenvalues.

    1. INTRODUCTION. This paper is addressed to instructors of a first course in linear algebra, who need not be specialists in the field. We aim to convince the reader that linear algebra is best done with block matrices. In particular, flexible thinking about the process of matrix multiplication can reveal concise proofs of important theorems and expose new results. Viewing linear algebra from a block-matrix perspective gives an instructor access to useful techniques, exercises, and examples.
    [Show full text]
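    The block-multiplication fact underlying the article's thesis is easy to check numerically: partition two conformable matrices into blocks and compare the blockwise product with the ordinary product. A sketch using NumPy (an assumption of this example; the paper itself contains no code):

    ```python
    # Block-multiplication rule: for conformal 2x2 partitions, the (1,1)
    # block of A @ B is A11 @ B11 + A12 @ B21, and similarly for the rest.
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.integers(-5, 5, size=(4, 4))
    B = rng.integers(-5, 5, size=(4, 4))

    # Partition each matrix into four 2x2 blocks.
    A11, A12, A21, A22 = A[:2, :2], A[:2, 2:], A[2:, :2], A[2:, 2:]
    B11, B12, B21, B22 = B[:2, :2], B[:2, 2:], B[2:, :2], B[2:, 2:]

    blockwise = np.block([
        [A11 @ B11 + A12 @ B21, A11 @ B12 + A12 @ B22],
        [A21 @ B11 + A22 @ B21, A21 @ B12 + A22 @ B22],
    ])
    print(np.array_equal(blockwise, A @ B))  # True
    ```

    The same identity is what makes block-triangular determinant and eigenvalue arguments work, which is the kind of classroom payoff the article advertises.
    
    
    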
  • NEW TRENDS in SYZYGIES: 18W5133
    NEW TRENDS IN SYZYGIES: 18w5133. Giulio Caviglia (Purdue University), Jason McCullough (Iowa State University). June 24, 2018 - June 29, 2018.

    1 Overview of the Field. Since the pioneering work of David Hilbert in the 1890s, syzygies have been a central area of research in commutative algebra. The idea is to approximate arbitrary modules by free modules. Let R be a Noetherian, commutative ring and let M be a finitely generated R-module. A free resolution is an exact sequence of free R-modules ··· → F_{i+1} → F_i → ··· → F_0 such that H_0(F_•) ≅ M. Elements of F_i are called ith syzygies of M. When R is local or graded, we can choose a minimal free resolution of M, meaning that a basis for F_i is mapped onto a minimal set of generators of Ker(F_{i-1} → F_{i-2}). In this case, we get uniqueness of minimal free resolutions up to isomorphism of complexes. Therefore, invariants defined by minimal free resolutions yield invariants of the module being resolved. In most cases, minimal resolutions are infinite, but over regular rings, like polynomial and power series rings, all finitely generated modules have finite minimal free resolutions. Beyond the study of invariants of modules, syzygies have connections to algebraic geometry, algebraic topology, and combinatorics. Statements like the Total Rank Conjecture connect algebraic topology with free resolutions. Bounds on Castelnuovo-Mumford regularity and projective dimension, as with the Eisenbud-Goto Conjecture and Stillman's Conjecture, have implications for the computational complexity of Gröbner bases and computational algebra systems. Green's Conjecture provides a link between graded free resolutions and the geometry of canonical curves.
    [Show full text]
  • Linear Systems
    Linear Systems. Professor Yi Ma, Professor Claire Tomlin. GSI: Somil Bansal. Scribe: Chih-Yuan Chiu. Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, Berkeley, CA, U.S.A. May 22, 2019.

    Contents:
    Preface 5
    Notation 7
    1 Introduction 9
      1.1 Lecture 1 9
    2 Linear Algebra Review 15
      2.1 Lecture 2 15
      2.2 Lecture 3 22
      2.3 Lecture 3 Discussion 28
      2.4 Lecture 4 30
      2.5 Lecture 4 Discussion 35
      2.6 Lecture 5 37
      2.7 Lecture 6 42
    3 Dynamical Systems 45
      3.1 Lecture 7 45
      3.2 Lecture 7 Discussion 51
      3.3 Lecture 8 55
      3.4 Lecture 8 Discussion 60
      3.5 Lecture 9 61
      3.6 Lecture 9 Discussion 67
      3.7 Lecture 10 72
      3.8 Lecture 10 Discussion 86
    4 System Stability 91
      4.1 Lecture 12 91
      4.2 Lecture 12 Discussion 101
      4.3 Lecture 13 106
      4.4 Lecture 13 Discussion 117
      4.5 Lecture 14 120
      4.6 Lecture 14 Discussion 126
      4.7 Lecture 15 127
      4.8 Lecture 15 Discussion 148
    5 Controllability and Observability 157
      5.1 Lecture 16 157
      5.2 Lecture 17 163
      5.3 Lectures 16, 17 Discussion 176
      5.4 Lecture 18 182
      5.5 Lecture 19 185
      5.6 Lecture 20 194
      5.7 Lectures 18, 19, 20 Discussion 211
      5.8 Lecture 21 216
      5.9 Lecture 22 222
    6 Additional Topics 233
      6.1 Lecture 11 233
      6.2 Hamilton-Jacobi-Bellman Equation 254
    A Appendix to Lecture 12 259
      A.1 Cayley-Hamilton Theorem: Alternative Proof 1
    [Show full text]
  • Tensor Spaces - the Basics
    Tensor spaces - the basics. S. Gill Williamson. arXiv:1510.02428v1 [math.AC] 8 Oct 2015.

    Abstract. We present the basic concepts of tensor products of vector spaces, exploiting the special properties of vector spaces as opposed to more general modules. Introduction (1), Basic multilinear algebra (2), Tensor products of vector spaces (3), Tensor products of matrices (4), Inner products on tensor spaces (5), Direct sums and tensor products (6), Background concepts and notation (7), Discussion and acknowledgements (8). This material draws upon [Mar73] and relates to subsequent work [Mar75].

    1. Introduction. We start with an example. Let x = (a, b)^T ∈ M_{2,1} and y = (c, d) ∈ M_{1,2}, where M_{m,n} denotes the m × n matrices over the real numbers, R. The set of matrices M_{m,n} forms a vector space under matrix addition and multiplication by real numbers, R. The dimension of this vector space is mn. We write dim(M_{m,n}) = mn.

    Define a function ν(x, y) = xy = [[ac, ad], [bc, bd]] ∈ M_{2,2} (the matrix product of x and y). The function ν has domain V_1 × V_2, where V_1 = M_{2,1} and V_2 = M_{1,2} are vector spaces of dimension dim(V_i) = 2, i = 1, 2. The range of ν is the vector space P = M_{2,2}, which has dim(P) = 4. The function ν is bilinear in the following sense:

    (1.1) ν(r_1 x_1 + r_2 x_2, y) = r_1 ν(x_1, y) + r_2 ν(x_2, y)
    (1.2) ν(x, r_1 y_1 + r_2 y_2) = r_1 ν(x, y_1) + r_2 ν(x, y_2)

    for any r_1, r_2 ∈ R, x, x_1, x_2 ∈ V_1, and y, y_1, y_2 ∈ V_2. We denote the set of all such bilinear functions by M(V_1, V_2 : P).
    [Show full text]
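    The bilinear map ν(x, y) = xy in the excerpt is just the product of a 2 × 1 column with a 1 × 2 row (an outer product), and property (1.1) can be checked numerically. A pure-Python sketch with illustrative helper names:

    ```python
    # The bilinear map from the excerpt: nu(x, y) = x y, the outer product of
    # a column x = (a, b)^T with a row y = (c, d), landing in M_{2,2}.

    def nu(x, y):
        return [[xi * yj for yj in y] for xi in x]

    def add(X, Y):
        return [[p + q for p, q in zip(r, s)] for r, s in zip(X, Y)]

    def scale(r, X):
        return [[r * p for p in row] for row in X]

    x1, x2, y = [1, 2], [3, -1], [4, 5]
    r1, r2 = 2, -3

    # Property (1.1): nu(r1*x1 + r2*x2, y) == r1*nu(x1, y) + r2*nu(x2, y)
    lhs = nu([r1 * a + r2 * b for a, b in zip(x1, x2)], y)
    rhs = add(scale(r1, nu(x1, y)), scale(r2, nu(x2, y)))
    print(lhs == rhs)  # True
    ```

    Linearity in the second argument, property (1.2), follows by the symmetric computation, which is what makes ν a bilinear function in M(V_1, V_2 : P).
    
    
    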