Modeling, Inference and Clustering for Equivalence Classes of 3-D Orientations
Chuanlong Du, Iowa State University

Iowa State University Capstones, Theses and Dissertations
Graduate Theses and Dissertations
2014

Modeling, inference and clustering for equivalence classes of 3-D orientations
Chuanlong Du, Iowa State University

Follow this and additional works at: https://lib.dr.iastate.edu/etd
Part of the Statistics and Probability Commons

Recommended Citation
Du, Chuanlong, "Modeling, inference and clustering for equivalence classes of 3-D orientations" (2014). Graduate Theses and Dissertations. 13738. https://lib.dr.iastate.edu/etd/13738

This Dissertation is brought to you for free and open access by the Iowa State University Capstones, Theses and Dissertations at Iowa State University Digital Repository. It has been accepted for inclusion in Graduate Theses and Dissertations by an authorized administrator of Iowa State University Digital Repository. For more information, please contact [email protected].

Modeling, inference and clustering for equivalence classes of 3-D orientations

by

Chuanlong Du

A dissertation submitted to the graduate faculty in partial fulfillment of the requirements for the degree of DOCTOR OF PHILOSOPHY

Major: Statistics

Program of Study Committee:
Stephen Vardeman, Co-major Professor
Daniel Nordman, Co-major Professor
Dan Nettleton
Huaiqing Wu
Guang Song

Iowa State University
Ames, Iowa
2014

Copyright © Chuanlong Du, 2014. All rights reserved.

DEDICATION

I would like to dedicate this dissertation to my mother Jinyan Xu. She always encourages me to follow my heart and to seek my dreams. Her words have always inspired me and encouraged me to face the difficulties and challenges I came across in my life. I would also like to dedicate this dissertation to my wife Lisha Li, without whose support and help I would not have been able to complete this work.

TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES
ACKNOWLEDGEMENTS
ABSTRACT

CHAPTER 1. GENERAL INTRODUCTION
  1.1 Introduction
  1.2 Dissertation Organization

CHAPTER 2. BAYESIAN INFERENCE FOR A NEW CLASS OF DISTRIBUTIONS ON EQUIVALENCE CLASSES OF 3-D ORIENTATIONS WITH APPLICATIONS TO MATERIALS SCIENCE
  Abstract
  2.1 Introduction
  2.2 Models for Equivalence Classes of Rotation Matrices
    2.2.1 UARS([S], κ) Models for Equivalence Classes of Rotation Matrices
    2.2.2 Likelihood-based Confidence Regions for the SMF([S], κ) Distribution on [Ω]
  2.3 One-Sample Bayes Inference for the SMF([S], κ) Distribution on [Ω]
    2.3.1 Cone-Shaped Credible Regions for [S]
    2.3.2 One-Sample Bayes Methods
    2.3.3 Simulation Results and Comparison to Likelihood-based Methods
  2.4 Comparison to Inferences Based on SMF(S, κ) Models on Ω for Preprocessed Data
    2.4.1 A First Comparison Based on Repeated Measurements
    2.4.2 Another Comparison Based on Real Data
    2.4.3 A Comparison Based on a Small Simulation
  2.5 Extensions of Models to Other Equivalence Classes of Rotations
  2.6 Conclusion

CHAPTER 3. A METHOD FOR MAPPING GRAINS IN EBSD SCANS OF MATERIAL SPECIMENS USING SPATIALLY INFORMED CLUSTERING OF 3-D ORIENTATIONS
  Abstract
  3.1 Introduction
  3.2 Distance Between Orienlocations
    3.2.1 Distance Between Orientations
    3.2.2 Distance Between Equivalence Classes of Orientations
    3.2.3 Euclidean Distance
    3.2.4 A Combined Distance of Orientations and Locations for Orienlocations
    3.2.5 Penalized Distances for Orientations and Locations
  3.3 Clustering Orienlocations
  3.4 Application to Real Data with Discussions
    3.4.1 Non-smoothed Grain Maps
    3.4.2 Smoothed Grain Maps
  3.5 A Simulation Study of the Clustering Algorithm
  3.6 Conclusion

CHAPTER 4. A CLASS OF STATIONARY AND ERGODIC MARKOV CHAINS DEFINED ON PARTITIONS OF A FINITE SET WITH APPLICATIONS IN BAYESIAN CLUSTERING
  Abstract
  4.1 Introduction
  4.2 Definition of Markov Chains on Partitions
  4.3 Some Important Properties of the Du(n, G, F) Process
  4.4 An Illustration of the Use of the Du(n, G, F) Process in Bayesian Clustering
  4.5 Conclusion

CHAPTER 5. GENERAL CONCLUSIONS

APPENDIX A. UARS(S, κ) MODELS FOR ROTATION MATRICES
APPENDIX B. DETAILED PROOFS OF THEOREMS IN CHAPTER 4.3
BIBLIOGRAPHY

LIST OF TABLES

2.1 Values of tuning parameters ρ and σ.
2.2 Coverage rates (as percentages) for κ and [S] for nominally 95% Bayes regions, inverted likelihood ratio test regions and Wald regions, for some choices of (n, κ).
3.1 Artificial data for a toy example of agglomerative hierarchical clustering.
4.1 Simulated data from normal distributions.

LIST OF FIGURES

1.1 On the left, three edges of a (non-rotated) cube are aligned with a standard coordinate reference frame in R^3; each individual edge has a direction corresponding to a column of the 3 × 3 identity matrix I3. The right figure illustrates a cube after a rotation; the resulting orientation can be represented by a 3 × 3 rotation matrix O = [x' y' z'], where the columns of O indicate how the columns of I3 (i.e., edges of the cube) move upon rotation. (Further, in the Euler axis-angle representation of a rotation from Appendix A, the rotation can be described by "turning" the cube through an angle r about a fixed axis in the direction of u ∈ R^3, ||u|| = 1.)
1.2 An illustration of 6 labelings (coordinate systems) on a cuboid based on selecting the 2 bottom left points as the origin.
2.1 Histograms of rotation angles at locations 114, where the angles are expressed as multiples of π/6, showing concentrations at (0, 3, 4, 6) · π/6 = (0, π/2, 2π/3, π) (as expected under cube labeling ambiguities).
2.2 Variance of C(r|κ) against κ.
2.3 Cone-shaped region for [S].
2.4 Histograms of posterior draws for κ for SMF(S, κ) and SMF([S], κ) models (repeated real measurements example).
2.5 Histograms of posterior draws for κ for SMF(S, κ) and SMF([S], κ) models (second real data example).
2.6 Histograms of posterior draws for κ for SMF(S, κ) and SMF([S], κ) models (simulated data example).
2.7 Right-hand coordinate systems on a symmetric cuboid.
3.1 Grain map with ω = 1000, co = 4, ao = 0, cl = 4, al = 0 and n = 25.
3.2 Smoothed grain map with ω = 1000, co = 4, ao = 0, cl = 4, al = 0 and n = 25.
3.3 Each grain map corresponds to a different factor combination (ω, co, ao, cl, al, n) (the first line of parameters above the plots), where n = 25 is the maximal number of clusters allowed. The 3 rows of grain maps correspond to ω of 1000, 100 and 10 respectively. The 3 columns of grain maps penalize no distances, the Euclidean distances and the orientation distances respectively. The second line of parameters above each plot contains the number of points smoothed out in each smoothing round.
3.4 Each grain map corresponds to a different factor combination (ω, co, ao, cl, al, n) (the first line of parameters above the plots), where n = 20 is the maximal number of clusters allowed. The 3 rows of grain maps correspond to ω of 5, 1 and 0 respectively. The 3 columns of grain maps penalize no distances, the Euclidean distances and the orientation distances respectively, except the bottom right one. The second line of parameters above each plot contains the number of points smoothed out in each smoothing round.
3.5 Best grain maps and corresponding parameters used to produce these grain maps for different combinations of κ_out and κ_in. The 3 rows of grain maps correspond to κ_out of 10, 5 and 3, and the 3 columns of grain maps correspond to κ_in of 10, 20 and 50.

ACKNOWLEDGEMENTS

I would like to take this opportunity to express my thanks to those who helped me with various aspects of conducting research and the writing of this dissertation. First and foremost, I thank Dr. Steve Vardeman and Dr. Dan Nordman for their guidance, patience and support throughout this research and the writing of this dissertation. Their insights and words of encouragement have often inspired me and renewed my hopes for completing my graduate education. Second, I'd like to thank Dr. Dan Nettleton for supporting me as a research assistant during my last 3 years at Iowa State University. I would also like to thank my committee members for their efforts and contributions to this work.

ABSTRACT

Investigating cubic crystalline structures of specimens is an important way to study properties of materials. Crystals in metal specimens have internally homogeneous orientations relative to a pre-chosen reference coordinate system. Clusters of crystals in the metal with locally similar orientations constitute so-called "grains." The nature of these grains (shape, size, etc.) affects physical properties (e.g., hardness, conductivity, etc.) of the material. Electron backscatter diffraction (EBSD) machines are often used to measure orientations of crystals in metal specimens. However, orientations reported by EBSD machines are in truth equivalence classes of crystallographically symmetric orientations. Motivated by the materials science applications, we formulate parametric probability models for "unlabeled orientation data." This amounts to developing models on equivalence classes of 3-D rotations. A Bayesian method is developed for inference on the parameters of these models; it is generally superior to large-sample methods based on likelihood estimation. We also propose an algorithm for clustering equivalence classes of 3-D orientations.
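To give a concrete feel for the objects described in the abstract and in Figure 1.1, the short Python sketch below (not code from the dissertation; the function names and the misorientation-angle distance are illustrative assumptions) builds a rotation matrix from an Euler axis-angle pair, enumerates the 24 proper rotations of the cube that create the labeling ambiguity in EBSD data, and measures a distance between two equivalence classes of orientations under one common convention (relabelings act by right multiplication).

```python
# A minimal, illustrative sketch of "unlabeled" 3-D orientations under cubic symmetry.
# Assumptions (not from the dissertation): equivalence classes are [O] = {O*S : S in the
# 24-element cubic rotation group}, and the class distance is the smallest misorientation angle.

import itertools
import numpy as np


def axis_angle_to_matrix(u, r):
    """Rotation by angle r (radians) about unit axis u, via Rodrigues' formula."""
    u = np.asarray(u, dtype=float)
    u = u / np.linalg.norm(u)
    K = np.array([[0.0, -u[2], u[1]],
                  [u[2], 0.0, -u[0]],
                  [-u[1], u[0], 0.0]])          # cross-product matrix of u
    return np.eye(3) + np.sin(r) * K + (1.0 - np.cos(r)) * (K @ K)


def cubic_symmetry_group():
    """The 24 proper rotations of the cube: signed permutation matrices with determinant +1."""
    mats = []
    for perm in itertools.permutations(range(3)):
        for signs in itertools.product([1.0, -1.0], repeat=3):
            P = np.zeros((3, 3))
            for col, (row, s) in enumerate(zip(perm, signs)):
                P[row, col] = s
            if np.isclose(np.linalg.det(P), 1.0):
                mats.append(P)
    return mats                                  # len(mats) == 24


SYM = cubic_symmetry_group()


def class_distance(O1, O2):
    """Smallest misorientation angle between the equivalence classes [O1] and [O2]."""
    best = np.pi
    for S in SYM:
        R = O1 @ S @ O2.T                        # compare the relabeled O1 against O2
        cos_angle = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
        best = min(best, np.arccos(cos_angle))
    return best


if __name__ == "__main__":
    O = axis_angle_to_matrix([0.0, 0.0, 1.0], np.pi / 7)   # some orientation
    S = SYM[5]                                              # one cube relabeling
    # O and O @ S are different matrices but represent the same unlabeled orientation,
    # so the class distance is (numerically) zero:
    print(class_distance(O, O @ S))
```

In this convention, the distance is zero exactly when two reported orientations differ only by a cube relabeling, which is the sense in which EBSD output is an equivalence class rather than a single rotation matrix.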