Linear Algebra


University of Cambridge
Mathematics Tripos Part IB: Linear Algebra
Michaelmas, 2017

Lectures by A. M. Keating
Notes by Qiangru Kuang

Contents

1 Vector Space
  1.1 Definitions
  1.2 Vector Subspace
  1.3 Span, Linear Independence & Basis
  1.4 Dimension
  1.5 Direct Sum
2 Linear Map
  2.1 Definitions
  2.2 Isomorphism of Vector Spaces
  2.3 Linear Maps as Vector Space
    2.3.1 Matrices, an Interlude
    2.3.2 Representation of Linear Maps by Matrices
    2.3.3 Change of Bases
    2.3.4 Elementary Matrices and Operations
3 Dual Space & Dual Map
  3.1 Definitions
  3.2 Dual Map
  3.3 Double Dual
4 Bilinear Form I
5 Determinant & Trace
  5.1 Trace
  5.2 Determinant
  5.3 Determinant of Linear Maps
  5.4 Determinant of Block-triangular Matrices
  5.5 Volume Interpretation of Determinant
  5.6 Determinant of Elementary Operation
  5.7 Column Expansion & Adjugate Matrices
  5.8 Application: System of Linear Equations
6 Endomorphism
  6.1 Definitions
  6.2 Polynomial Ring, an Aside
  6.3 Characteristic Polynomial of Endomorphism
7 Bilinear Form II
  7.1 Symmetric Bilinear Forms
  7.2 Sesquilinear Form
  7.3 Hermitian Form
  7.4 Alternating Form
8 Inner Product Space
  8.1 Definitions
  8.2 Orthonormal Basis
  8.3 Orthogonal Complements & Projections
  8.4 Adjoints
  8.5 Self-adjoint Maps & Isometries
    8.5.1 Spectral Theory for Self-adjoint Maps
    8.5.2 Spectral Theory for Unitary Maps
    8.5.3 Application to Bilinear Forms
Index

1 Vector Space

Convention. Throughout this course, F denotes a general field. If you wish, think of it as R or C.

1.1 Definitions

Definition (Vector space). An F-vector space (or a vector space over F) is an abelian group (V, +) equipped with a function, called scalar multiplication:

F × V → V
(λ, v) ↦ λ · v

satisfying the axioms
• distributivity over vectors: λ(v1 + v2) = λv1 + λv2,
• distributivity over scalars: (λ1 + λ2)v = λ1v + λ2v,
• λ(µv) = (λµ)v,
• 1 · v = v.
The additive unit of V is denoted by 0.

Example.
1. For all n ∈ N, F^n is the space of column vectors of length n with entries in F. It is a vector space under entry-wise addition and entry-wise scalar multiplication.
2. M_{m,n}(F), the set of m × n matrices with entries in F, with entry-wise addition and entry-wise scalar multiplication.
3. For any set X, R^X = {f : X → R}, the set of R-valued functions on X, with addition and scalar multiplication defined pointwise. For instance, (f1 + f2)(x) = f1(x) + f2(x).

Exercise.
1. Check that the above examples satisfy the axioms.
2. Show that 0 · v = 0 and (−1) · v = −v for all v ∈ V.

1.2 Vector Subspace

Definition (Vector subspace). Let V be an F-vector space. A subset U ⊆ V is a subspace, denoted U ≤ V, if
• 0 ∈ U,
• U is closed under addition: ∀u1, u2 ∈ U, u1 + u2 ∈ U,
• U is closed under scalar multiplication: ∀u ∈ U, ∀λ ∈ F, λu ∈ U.

Exercise. If U is a subspace of V, then U is also an F-vector space.

Example.
1.
V = R^R, the set of all functions from R to itself, has a (proper) subspace C(R), the space of continuous functions on R, as continuous functions are closed under addition and scalar multiplication. C(R) in turn has a proper subspace P(R), the set of all polynomials over R.
2. {(x1, x2, x3) ∈ R^3 : x1 + x2 + x3 = t}, where t is some fixed constant, is a subspace of R^3 if and only if t = 0.

Proposition 1.1. Let V be an F-vector space and U, W ≤ V. Then U ∩ W ≤ V.

Proof.
• 0 ∈ U and 0 ∈ W, so 0 ∈ U ∩ W.
• Suppose u, w ∈ U ∩ W. Fix λ, µ ∈ F. As U ≤ V, λu + µw ∈ U. As W ≤ V, λu + µw ∈ W, so λu + µw ∈ U ∩ W. Take λ = µ = 1 for closure under addition, and µ = 0 for closure under scalar multiplication.

Example. V = R^3, U = {(x, y, z) : x = 0}, W = {(x, y, z) : y = 0}; then U ∩ W = {(x, y, z) : x = y = 0}.

Note. The union of a family of subspaces is almost never a subspace. For example, take V = R^2 and let U, W be the x- and y-axes.

Definition (Sum of vector spaces). Let V be an F-vector space and U, W ≤ V. The sum of U and W is the set

U + W = {u + w : u ∈ U, w ∈ W}.

Example. With U and W as in the previous example, U + W = V.

Proposition 1.2. U + W ≤ V.

Proof.
• 0 = 0 + 0 ∈ U + W,
• for u1, u2 ∈ U and w1, w2 ∈ W, (u1 + w1) + (u2 + w2) = (u1 + u2) + (w1 + w2) ∈ U + W,
• similarly for scalar multiplication; left as an exercise.

Note. U + W is the smallest subspace containing both U and W. This is because all elements of the form u + w must lie in any such subspace, by closure under addition.

Definition (Quotient vector space). Let V be an F-vector space and U ≤ V. The quotient space V/U is the abelian group V/U equipped with the scalar multiplication

F × V/U → V/U
(λ, v + U) ↦ λv + U.

Proposition 1.3. This is well-defined, and V/U is an F-vector space.

Proof. First check that it is well-defined. Suppose v1 + U = v2 + U ∈ V/U. Then v1 − v2 ∈ U. Now use closure under scalar multiplication and distributivity:

λv1 − λv2 = λ(v1 − v2) ∈ U,

so λv1 + U = λv2 + U ∈ V/U.
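The well-definedness step just checked can be illustrated numerically. The following is a minimal sketch (the subspace U = span{(1, 1, 0)} ≤ R^3, the representatives, and the helper `in_subspace` are invented for illustration): two representatives of the same coset v + U are scaled by λ, and the scaled vectors are verified to differ by an element of U, so they represent the same coset λv + U.

```python
import numpy as np

def in_subspace(u_basis, x, tol=1e-10):
    """Check x in span(u_basis): appending x as a column must not raise the rank."""
    A = np.column_stack(u_basis)
    return (np.linalg.matrix_rank(np.column_stack([A, x]), tol=tol)
            == np.linalg.matrix_rank(A, tol=tol))

# U = span{(1, 1, 0)} <= R^3, an arbitrary illustrative subspace
u_basis = [np.array([1.0, 1.0, 0.0])]

v1 = np.array([2.0, 3.0, 5.0])
v2 = v1 + 4 * u_basis[0]   # v1 + U = v2 + U, since v1 - v2 = -4*(1,1,0) lies in U
lam = -3.0

# lam*v1 and lam*v2 differ by lam*(v1 - v2), an element of U,
# so they define the same coset lam*v + U
assert in_subspace(u_basis, lam * v1 - lam * v2)
```

The same rank test also detects vectors outside U, which is what makes it a usable (floating-point) stand-in for the membership check in the proof.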
Now check the vector space axioms of V/U, which follow from the axioms for V:
• λ(µ(v + U)) = λ(µv + U) = λ(µv) + U = (λµ)v + U = (λµ)(v + U),
• the other axioms are left as an exercise.

1.3 Span, Linear Independence & Basis

Definition (Span). Let V be an F-vector space and S ⊆ V a subset. The span of S,

⟨S⟩ = { Σ_{s∈S} λs s : λs ∈ F },

is the set of all finite linear combinations of elements of S (i.e. all but finitely many of the λs are zero).

Remark. ⟨S⟩ is the smallest subspace of V containing all elements of S.

Convention. ⟨∅⟩ = {0}.

Example.
1. V = R^3, S = {(1, 0, 0), (0, 1, 2), (3, −2, −4)}; then ⟨S⟩ = {(a, b, 2b) : a, b ∈ R}.
2. For any set X, R^X is a vector space. For x ∈ X, define δx : X → R by δx(x) = 1 and δx(y) = 0 for all y ≠ x. Then

⟨δx : x ∈ X⟩ = {f ∈ R^X : f has finite support}.

Definition (Span). S spans V if ⟨S⟩ = V.

Definition (Finite-dimensional). V is finite-dimensional over F if it is spanned by a finite set.

Definition (Linear independence). The vectors v1, ..., vn are linearly independent over F if

Σ_{i=1}^n λi vi = 0 ⟹ λi = 0 for all i.

A subset S ⊆ V is linearly independent if every finite subset of S is linearly independent. A subset is linearly dependent if it is not linearly independent.

Example. In the first example above, the three vectors are not linearly independent.

Exercise. The set {δx : x ∈ X} is linearly independent.

Definition (Basis). S is a basis of V if it is linearly independent and spans V.

Example.
1. F^n has standard basis {e1, e2, ..., en}, where ei is the column vector with 1 in the ith entry and 0 elsewhere.
2. V = C over C has natural basis {1}, but over R it has natural basis {1, i}.
3. V = P(R), the space of real polynomials, has natural basis {1, x, x^2, ...}. It is an exercise to check this carefully.

Lemma 1.4. Let V be an F-vector space. The vectors v1, ..., vn form a basis of V if and only if each vector v ∈ V has a unique expression

v = Σ_{i=1}^n λi vi, λi ∈ F.

Proof.
• (⇒): Fix v ∈ V.
The vi span V, so there exist λi ∈ F such that v = Σ λi vi. Suppose also v = Σ µi vi for some µi ∈ F. Then the difference

Σ (µi − λi) vi = 0.

Since the vi are linearly independent, µi − λi = 0 for all i.
• (⇐): The vi span V by assumption. Suppose Σ_{i=1}^n λi vi = 0. Note that 0 = Σ_{i=1}^n 0 · vi. By applying uniqueness to 0, λi = 0 for all i.

Lemma 1.5. If v1, ..., vn span V over F, then some subset of v1, ..., vn is a basis of V over F.

Proof. If v1, ..., vn are linearly independent then we are done. Otherwise, for some ℓ there exist α1, ..., α_{ℓ−1} ∈ F such that

vℓ = Σ_{i=1}^{ℓ−1} αi vi.

(If Σ λi vi = 0 with not all λi zero, take ℓ maximal with λℓ ≠ 0; then αi = −λi/λℓ.)
Now v1, ..., v_{ℓ−1}, v_{ℓ+1}, ..., vn still span V. Continue iteratively until we reach a linearly independent set.

Theorem 1.6 (Steinitz Exchange Lemma). Let V be a finite-dimensional vector space over F. Take v1, ..., vm to be linearly independent and w1, ..., wn to span V. Then m ≤ n, and moreover, after reordering the wi if necessary, the vectors v1, ..., vm, w_{m+1}, ..., wn span V.
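The iterative sieve in the proof of Lemma 1.5 can be sketched numerically over R: keep each vector unless it already lies in the span of the vectors kept so far. This is only an illustration (the helper `extract_basis` is invented, and numpy's floating-point rank stands in for exact field arithmetic); it is run on the linearly dependent spanning set S from the span example above.

```python
import numpy as np

def extract_basis(vectors, tol=1e-10):
    """Greedy form of Lemma 1.5's step over R: keep v unless it already
    lies in the span of the vectors kept so far (checked via matrix rank)."""
    kept = []
    for v in vectors:
        candidate = kept + [v]
        if np.linalg.matrix_rank(np.column_stack(candidate), tol=tol) == len(candidate):
            kept.append(v)
    return kept

# The spanning set of its span from the earlier example; it is linearly
# dependent, because (3, -2, -4) = 3*(1, 0, 0) - 2*(0, 1, 2).
S = [np.array([1.0, 0.0, 0.0]),
     np.array([0.0, 1.0, 2.0]),
     np.array([3.0, -2.0, -4.0])]

basis = extract_basis(S)
assert len(basis) == 2   # <S> = {(a, b, 2b) : a, b in R} is 2-dimensional
```

The dependent vector (3, −2, −4) is discarded, exactly as in the proof, and the two survivors form a basis of ⟨S⟩.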