Unimodular Modules
Paul Camion
Total pages: 16
File type: PDF, size: 1020 KB
Recommended publications
Theory of Angular Momentum Rotations About Different Axes Fail to Commute
CHAPTER 3: Theory of Angular Momentum. This chapter is concerned with a systematic treatment of angular momentum and related topics. The importance of angular momentum in modern physics can hardly be overemphasized. A thorough understanding of angular momentum is essential in molecular, atomic, and nuclear spectroscopy; angular-momentum considerations play an important role in scattering and collision problems as well as in bound-state problems. Furthermore, angular-momentum concepts have important generalizations: isospin in nuclear physics, SU(3) and SU(2) ⊗ U(1) in particle physics, and so forth.
3.1. Rotations and Angular Momentum Commutation Relations. … followed by a 90° rotation about the z-axis. The net results are different, as we can see from Figure 3.1. [Figure 3.1 not reproduced; only its axis labels survive extraction.] Our first basic task is to work out quantitatively the manner in which rotations about different axes fail to commute. To this end, we first recall how to represent rotations in three dimensions by 3 × 3 real, orthogonal matrices. Consider a vector $V$ with components $V_x$, $V_y$, and $V_z$. When we rotate, the three components become some other set of numbers, $V_x'$, $V_y'$, and $V_z'$. The old and new components are related via a 3 × 3 orthogonal matrix $R$:
$$\begin{pmatrix} V_x' \\ V_y' \\ V_z' \end{pmatrix} = R \begin{pmatrix} V_x \\ V_y \\ V_z \end{pmatrix}, \qquad (3.1.1a)$$
$$R R^T = R^T R = 1, \qquad (3.1.1b)$$
where the superscript $T$ stands for the transpose of a matrix. It is a property of orthogonal matrices that
$$\sqrt{V_x^2 + V_y^2 + V_z^2} = \sqrt{V_x'^2 + V_y'^2 + V_z'^2} \qquad (3.1.2)$$
is automatically satisfied.
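As a quick numerical check of the excerpt's claim, here is a minimal NumPy sketch of ours (not from the book): it builds the 3 × 3 orthogonal matrices for 90° rotations about the x- and z-axes, verifies $RR^T = 1$, and shows the two rotations fail to commute.

```python
import numpy as np

def rot_x(theta):
    """3x3 rotation matrix about the x-axis by angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[1, 0, 0],
                     [0, c, -s],
                     [0, s, c]])

def rot_z(theta):
    """3x3 rotation matrix about the z-axis by angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0],
                     [s, c, 0],
                     [0, 0, 1]])

Rx, Rz = rot_x(np.pi / 2), rot_z(np.pi / 2)

# Orthogonality: R R^T = R^T R = 1, so vector lengths are preserved (eq. 3.1.2).
assert np.allclose(Rx @ Rx.T, np.eye(3))

# Rotations about different axes fail to commute: x-then-z differs from z-then-x.
print(np.allclose(Rz @ Rx, Rx @ Rz))  # False
```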
Introduction
Introduction. This dissertation is a reading of chapters 16 (Introduction to Integer Linear Programming) and 19 (Totally Unimodular Matrices: fundamental properties and examples) of the book Theory of Linear and Integer Programming, Alexander Schrijver, John Wiley & Sons, 1986.
Chapter one is a collection of basic definitions (polyhedron, polyhedral cone, polytope, etc.) and the statement of the decomposition theorem of polyhedra. Chapter two is "Introduction to Integer Linear Programming". A finite set of vectors $a_1, \ldots, a_t$ is a Hilbert basis if each integral vector $b$ in $\operatorname{cone}\{a_1, \ldots, a_t\}$ is a non-negative integral combination of $a_1, \ldots, a_t$. We shall prove the Hilbert basis theorem: each rational polyhedral cone $C$ is generated by an integral Hilbert basis. Further, an analogue of Carathéodory's theorem is proved: if a system $a_1 x \le \beta_1, \ldots, a_m x \le \beta_m$ has no integral solution, then there are $2^n$ or fewer constraints among the above inequalities which already have no integral solution. Chapter three contains some basic results on totally unimodular matrices. The main theorem is due to Hoffman and Kruskal: let $A$ be an integral matrix; then $A$ is totally unimodular if and only if for each integral vector $b$ the polyhedron $\{x \mid x \ge 0;\ Ax \le b\}$ is integral. Next, seven equivalent characterizations of total unimodularity are proved; these characterizations are due to Hoffman & Kruskal, Ghouila-Houri, Camion, and R. E. Gomory. Basic examples of totally unimodular matrices are incidence matrices of bipartite graphs and of directed graphs, and network matrices. We prove the König–Egerváry theorem for bipartite graphs.
Chapter 1: Preliminaries. Definition 1.1 (Polyhedron): A polyhedron $P$ is the set of points satisfying a finite number of linear inequalities, i.e., $P = \{x \in \mathbb{R}^n \mid Ax \le b\}$, where $(A, b)$ is an $m \times (n + 1)$ matrix.
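Total unimodularity (every square submatrix has determinant −1, 0, or +1) can be checked by brute force on small instances. A hypothetical helper of ours, not from the dissertation, applied to an incidence matrix of a bipartite graph, one of the basic examples mentioned above:

```python
import numpy as np
from itertools import combinations

def is_totally_unimodular(A):
    """Brute-force check: every square submatrix has determinant in {-1, 0, 1}.
    Exponential in the matrix size; suitable for small examples only."""
    m, n = A.shape
    for k in range(1, min(m, n) + 1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                d = round(np.linalg.det(A[np.ix_(rows, cols)]))
                if d not in (-1, 0, 1):
                    return False
    return True

# Vertex-edge incidence matrix of a bipartite graph (rows: vertices u1, u2,
# v1, v2; columns: edges) -- a classic totally unimodular matrix.
A = np.array([[1, 1, 0],
              [0, 0, 1],
              [1, 0, 0],
              [0, 1, 1]])
print(is_totally_unimodular(A))  # True
```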
Chapter Four Determinants
Chapter Four: Determinants. In the first chapter of this book we considered linear systems and we picked out the special case of systems with the same number of equations as unknowns, those of the form $T\vec{x} = \vec{b}$ where $T$ is a square matrix. We noted a distinction between two classes of $T$'s. While such systems may have a unique solution or no solutions or infinitely many solutions, if a particular $T$ is associated with a unique solution in any system, such as the homogeneous system $\vec{b} = \vec{0}$, then $T$ is associated with a unique solution for every $\vec{b}$. We call such a matrix of coefficients 'nonsingular'. The other kind of $T$, where every linear system for which it is the matrix of coefficients has either no solution or infinitely many solutions, we call 'singular'. Through the second and third chapters the value of this distinction has been a theme. For instance, we now know that nonsingularity of an $n \times n$ matrix $T$ is equivalent to each of these:
- a system $T\vec{x} = \vec{b}$ has a solution, and that solution is unique;
- Gauss-Jordan reduction of $T$ yields an identity matrix;
- the rows of $T$ form a linearly independent set;
- the columns of $T$ form a basis for $\mathbb{R}^n$;
- any map that $T$ represents is an isomorphism;
- an inverse matrix $T^{-1}$ exists.
So when we look at a particular square matrix, the question of whether it is nonsingular is one of the first things that we ask. This chapter develops a formula to determine this. (Since we will restrict the discussion to square matrices, in this chapter we will usually simply say 'matrix' in place of 'square matrix'.) More precisely, we will develop infinitely many formulas, one for 1 × 1 matrices, one for 2 × 2 matrices, etc.
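A small numerical illustration of the test the chapter is building toward, a sketch of ours rather than anything from the book: a nonzero determinant certifies every equivalent condition in the list above.

```python
import numpy as np

T = np.array([[2.0, 1.0],
              [1.0, 1.0]])   # nonsingular: det = 1
S = np.array([[1.0, 2.0],
              [2.0, 4.0]])   # singular: second row is twice the first

print(np.linalg.det(T))            # 1.0 (nonzero -> nonsingular)
print(np.linalg.solve(T, [3, 2]))  # the unique solution [1. 1.]
print(np.linalg.inv(T))            # the inverse T^{-1} exists

print(np.isclose(np.linalg.det(S), 0))  # True -> singular
```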
2014 CBK Linear Algebra Honors.Pdf
PETERS TOWNSHIP SCHOOL DISTRICT, CORE BODY OF KNOWLEDGE: LINEAR ALGEBRA HONORS, GRADE 12. For each of the sections that follow, students may be required to analyze, recall, explain, interpret, apply, or evaluate the particular concept being taught.
Course Description: This college-level mathematics course will cover linear algebra and matrix theory, emphasizing topics useful in other disciplines such as physics and engineering. Key topics include solving systems of equations, evaluating vector spaces, and performing linear transformations and matrix representations. Linear Algebra Honors is designed for the extremely capable student who has completed one year of calculus.
Systems of Linear Equations:
- Categorize a linear equation in n variables
- Formulate a parametric representation of a solution set
- Assess a system of linear equations to determine if it is consistent or inconsistent
- Apply concepts to use back-substitution and Gaussian elimination to solve a system of linear equations
- Investigate the size of a matrix and write an augmented or coefficient matrix from a system of linear equations
- Apply concepts to use matrices and Gauss-Jordan elimination to solve a system of linear equations
- Solve a homogeneous system of linear equations
- Design, set up, and solve a system of equations to fit a polynomial function to a set of data points (see the sketch after this list)
- Design, set up, and solve a system of equations to represent a network
Matrices:
- Categorize matrices as equal
- Construct a sum matrix
- Construct a product matrix
- Assess two matrices as compatible
- Apply matrix multiplication
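The polynomial-fitting outcome above reduces to setting up and solving a square linear system. A minimal sketch of ours (the data points and names are our own choices, not from the curriculum):

```python
import numpy as np

# Fit a quadratic p(t) = c0 + c1*t + c2*t^2 through three data points by
# setting up and solving the 3x3 Vandermonde system V c = y.
t = np.array([0.0, 1.0, 2.0])
y = np.array([1.0, 2.0, 5.0])

V = np.vander(t, 3, increasing=True)  # rows are [1, t, t^2]
c = np.linalg.solve(V, y)             # Gaussian elimination under the hood
print(c)  # [1. 0. 1.]  ->  p(t) = 1 + t^2
```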
Section 2.4–2.5 Partitioned Matrices and LU Factorization
Section 2.4–2.5, Partitioned Matrices and LU Factorization (Gexin Yu, [email protected], College of William and Mary). In real-world problems, systems can have huge numbers of equations and unknowns. Standard computation techniques are inefficient in such cases, so we need to develop techniques which exploit the internal structure of the matrices. In most cases, the matrices of interest have lots of zeros. One approach to simplifying the computation is to partition a matrix into blocks.
Example:
$$A = \begin{bmatrix} 3 & 0 & -1 & 5 & 9 & -2 \\ -5 & 2 & 4 & 0 & -3 & 1 \\ -8 & -6 & 3 & 1 & 7 & -4 \end{bmatrix}$$
This partition can also be written as the following 2 × 3 block matrix:
$$A = \begin{bmatrix} A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & A_{23} \end{bmatrix}$$
In the block form, we have blocks such as $A_{11} = \begin{bmatrix} 3 & 0 & -1 \\ -5 & 2 & 4 \end{bmatrix}$, and so on.
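A sketch of ours reproducing the partition from the example via NumPy slicing, and checking that matrix products can be computed block-by-block (the right factor B is an arbitrary matrix we introduce for illustration):

```python
import numpy as np

A = np.array([[ 3,  0, -1,  5,  9, -2],
              [-5,  2,  4,  0, -3,  1],
              [-8, -6,  3,  1,  7, -4]])

# The 2 x 3 block partition from the example: row cut after row 2,
# column cuts after columns 3 and 5.
A11, A12, A13 = A[:2, :3], A[:2, 3:5], A[:2, 5:]
A21, A22, A23 = A[2:, :3], A[2:, 3:5], A[2:, 5:]
print(A11)  # [[ 3  0 -1]
            #  [-5  2  4]]

# Block multiplication: for conformally partitioned A and B,
# the top block-row of AB is A11 B1 + A12 B2 + A13 B3.
B = np.arange(12).reshape(6, 2)
B1, B2, B3 = B[:3], B[3:5], B[5:]
top = A11 @ B1 + A12 @ B2 + A13 @ B3
assert np.allclose(top, (A @ B)[:2])
```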
Linear Algebra and Matrix Theory
Linear Algebra and Matrix Theory. Chapter 1 - Linear Systems, Matrices and Determinants. This is a very brief outline of some basic definitions and theorems of linear algebra. We will assume that you know elementary facts such as how to add two matrices, how to multiply a matrix by a number, how to multiply two matrices, what an identity matrix is, and what a solution of a linear system of equations is. Hardly any of the theorems will be proved. More complete treatments may be found in the following references.
1. References
(1) S. Friedberg, A. Insel and L. Spence, Linear Algebra, Prentice-Hall.
(2) M. Golubitsky and M. Dellnitz, Linear Algebra and Differential Equations Using Matlab, Brooks-Cole.
(3) K. Hoffman and R. Kunze, Linear Algebra, Prentice-Hall.
(4) P. Lancaster and M. Tismenetsky, The Theory of Matrices, Academic Press.
2. Linear Systems of Equations and Gaussian Elimination
The solutions, if any, of a linear system of equations
$$a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n = b_1$$
$$a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n = b_2$$
$$\vdots$$
$$a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n = b_m \qquad (2.1)$$
may be found by Gaussian elimination. The permitted steps are as follows.
(1) Both sides of any equation may be multiplied by the same nonzero constant.
(2) Any two equations may be interchanged.
(3) Any multiple of one equation may be added to another equation.
Instead of working with the symbols for the variables (the $x_i$), it is easier to place the coefficients (the $a_{ij}$) and the forcing terms (the $b_i$) in a rectangular array called the augmented matrix of the system. …
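The three permitted steps translate directly into operations on the rows of the augmented matrix. A tiny worked sketch of ours (the example system is our own):

```python
import numpy as np

# Augmented matrix [A | b] for the system: x + 2y = 5, 3x + 4y = 11.
M = np.array([[1.0, 2.0, 5.0],
              [3.0, 4.0, 11.0]])

M[1] -= 3 * M[0]   # step (3): add a multiple of one row to another
M[1] /= M[1, 1]    # step (1): scale a row by a nonzero constant
M[0] -= 2 * M[1]   # back-substitute to reach reduced form
print(M)           # [[1. 0. 1.]
                   #  [0. 1. 2.]]  ->  x = 1, y = 2
```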
Block Matrices in Linear Algebra
PRIMUS: Problems, Resources, and Issues in Mathematics Undergraduate Studies, ISSN 1051-1970 (print), 1935-4053 (online).
To cite this article: Stephan Ramon Garcia & Roger A. Horn (2020) Block Matrices in Linear Algebra, PRIMUS, 30(3), 285-306, DOI: 10.1080/10511970.2019.1567214. https://doi.org/10.1080/10511970.2019.1567214. Accepted author version posted online 05 Feb 2019; published online 13 May 2019.
Abstract: Linear algebra is best done with block matrices. As evidence in support of this thesis, we present numerous examples suitable for classroom presentation.
Keywords: matrix, matrix multiplication, block matrix, Kronecker product, rank, eigenvalues
1. INTRODUCTION. This paper is addressed to instructors of a first course in linear algebra, who need not be specialists in the field. We aim to convince the reader that linear algebra is best done with block matrices. In particular, flexible thinking about the process of matrix multiplication can reveal concise proofs of important theorems and expose new results. Viewing linear algebra from a block-matrix perspective gives an instructor access to useful techniques, exercises, and examples.
Perfect, Ideal and Balanced Matrices: a Survey
Perfect, Ideal and Balanced Matrices. Michele Conforti, Gérard Cornuéjols, Ajai Kapoor and Kristina Vušković. December.
Abstract: In this paper we survey results and open problems on perfect, ideal and balanced matrices. These matrices arise in connection with the set packing and set covering problems, two integer programming models with a wide range of applications. We concentrate on some of the beautiful polyhedral results that have been obtained in this area in the last thirty years. This survey first appeared in Ricerca Operativa.
(Dipartimento di Matematica Pura ed Applicata, Università di Padova, Via Belzoni, Padova, Italy; Graduate School of Industrial Administration, Carnegie Mellon University, Schenley Park, Pittsburgh, PA, USA; Department of Mathematics, University of Kentucky, Lexington, KY, USA. This work was supported in part by NSF grant DMI and ONR grant N….)
Introduction. The integer programming models known as set packing and set covering have a wide range of applications. As examples, the set packing model occurs in the phasing of traffic lights (Stoer) and in pattern recognition (Lee, Shan and Yang), and the set covering model in scheduling crews for railways, airlines and buses (Caprara, Fischetti and Toth), in location theory, and in vehicle routing. Sometimes, due to the special structure of the constraint matrix, the natural linear programming relaxation yields an optimal solution that is integer, thus solving the problem. We investigate conditions under which this integrality property holds. Let A be a matrix. This matrix is perfect if the fractional …
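The integrality property the survey investigates can be seen on a tiny set-packing instance whose constraint matrix is the incidence matrix of a bipartite graph (totally unimodular, so the LP relaxation has an integral optimal vertex). A sketch of ours assuming SciPy's linprog; the instance is our own:

```python
import numpy as np
from scipy.optimize import linprog

# Set packing: max 1.x subject to Ax <= 1, 0 <= x <= 1, where A is the
# vertex-edge incidence matrix of a bipartite graph (a maximum-matching LP).
A = np.array([[1, 1, 0],   # u1 lies on edges e1, e2
              [0, 0, 1],   # u2 lies on edge  e3
              [1, 0, 0],   # v1 lies on edge  e1
              [0, 1, 1]])  # v2 lies on edges e2, e3

# linprog minimizes, so negate the objective to maximize the packing size.
res = linprog(c=-np.ones(3), A_ub=A, b_ub=np.ones(4), bounds=(0, 1))
print(res.x)  # an integral optimum, e.g. [1. 0. 1.]: pick edges e1 and e3
```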
Order Number 9807786: Effectiveness in Representations of Positive Definite Quadratic Forms. Icaza Pérez, María Inés, Ph.D.
Order Number 9807786. Effectiveness in Representations of Positive Definite Quadratic Forms. Icaza Pérez, María Inés, Ph.D., The Ohio State University, 1992. UMI, 300 N. Zeeb Rd., Ann Arbor, MI 48106.
Effectiveness in Representations of Positive Definite Quadratic Forms. A dissertation presented in partial fulfillment of the requirements for the degree Doctor of Philosophy in the Graduate School of The Ohio State University, by María Inés Icaza Pérez, The Ohio State University, 1992. Dissertation Committee: John S. Hsia, Paul Ponomarev, Daniel B. Shapiro. Approved by: Adviser, Department of Mathematics.
Acknowledgements. I begin these acknowledgements by first expressing my gratitude to Professor John S. Hsia. Beyond accepting me as his student, he provided guidance and encouragement which has proven invaluable throughout the course of this work. I thank Professors Paul Ponomarev and Daniel B. Shapiro for reading my thesis and for their helpful suggestions. I also express my appreciation to Professor Shapiro for giving me the opportunity to come to The Ohio State University and for helping me during the first years of my studies. My recognition goes to my undergraduate teacher Professor Ricardo Baeza for teaching me mathematics and the discipline of hard work. I thank my classmates from the Quadratic Forms Seminar, especially Y. Y. Shao, for all the cheerful mathematical conversations we had during these months. These years were more enjoyable thanks to my friends from here and there. I thank them all.
1.5 Elementary Matrices and a Method for Finding the Inverse
1.5 Elementary Matrices and a Method for Finding the Inverse.
Definition 1: An $n \times n$ matrix is called an elementary matrix if it can be obtained from $I_n$ by performing a single elementary row operation.
Reminder, the elementary row operations:
1. Multiply a row by $k \in \mathbb{R}$
2. Exchange two rows
3. Add a multiple of one row to another
Theorem 1: If the elementary matrix $E$ results from performing a certain row operation on $I_n$ and $A$ is an $m \times n$ matrix, then $EA$ is the matrix that results when the same row operation is performed on $A$. Proof: has to be done for each elementary row operation.
Example 1:
$$A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}, \qquad E = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$$
$E$ was obtained from $I_2$ by exchanging the two rows.
$$EA = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} a & b \\ c & d \end{bmatrix} = \begin{bmatrix} c & d \\ a & b \end{bmatrix}$$
$EA$ is the matrix which results from $A$ by exchanging the two rows (to be expected according to the theorem above).
Theorem 2: Every elementary matrix is invertible, and the inverse is also an elementary matrix.
Theorem 3: If $A$ is an $n \times n$ matrix then the following statements are equivalent:
1. $A$ is invertible
2. $Ax = 0$ has only the trivial solution
3. The reduced echelon form of $A$ is $I_n$
4. $A$ can be expressed as a product of elementary matrices.
Proof: Prove the theorem by proving (a) ⇒ (b) ⇒ (c) ⇒ (d) ⇒ (a).
(a) ⇒ (b): Assume $A$ is invertible and $x_0$ is a solution of $Ax = 0$; then $Ax_0 = 0$, and multiplying both sides with $A^{-1}$ gives $(A^{-1}A)x_0 = 0 \Leftrightarrow I_n x_0 = 0 \Leftrightarrow x_0 = 0$.
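Theorems 1 and 2 are easy to see numerically; a small sketch of ours (not from the text), using the row operation "row 2 += 5 · row 1":

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# E: obtained from I_2 by the single row operation "row2 += 5 * row1".
E = np.eye(2)
E[1, 0] = 5.0

# Theorem 1: EA equals A with the same row operation applied.
B = A.copy()
B[1] += 5 * B[0]
assert np.allclose(E @ A, B)

# Theorem 2: the inverse of E is elementary too ("row2 -= 5 * row1").
E_inv = np.eye(2)
E_inv[1, 0] = -5.0
assert np.allclose(E @ E_inv, np.eye(2))
```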
Matrices in Companion Rings, Smith Forms, and the Homology of 3
Matrices in companion rings, Smith forms, and the homology of 3-dimensional Brieskorn manifolds. Vanni Noferini and Gerald Williams. July 26, 2021.
Abstract: We study the Smith forms of matrices of the form $f(C_g)$ where $f(t), g(t) \in R[t]$, where $R$ is an elementary divisor domain and $C_g$ is the companion matrix of the (monic) polynomial $g(t)$. Prominent examples of such matrices are circulant matrices, skew-circulant matrices, and triangular Toeplitz matrices. In particular, we reduce the calculation of the Smith form of the matrix $f(C_g)$ to that of the matrix $F(C_G)$, where $F, G$ are quotients of $f(t), g(t)$ by some common divisor. This allows us to express the last non-zero determinantal divisor of $f(C_g)$ as a resultant. A key tool is the observation that a matrix ring generated by $C_g$ – the companion ring of $g(t)$ – is isomorphic to the polynomial ring $Q_g = R[t]/\langle g(t) \rangle$. We relate several features of the Smith form of $f(C_g)$ to the properties of the polynomial $g(t)$ and the equivalence classes $[f(t)] \in Q_g$. As an application we let $f(t)$ be the Alexander polynomial of a torus knot and $g(t) = t^n - 1$, and calculate the Smith form of the circulant matrix $f(C_g)$. By appealing to results concerning cyclic branched covers of knots and cyclically presented groups, this provides the homology of all Brieskorn manifolds $M(r, s, n)$ where $r, s$ are coprime.
Keywords: Smith form, elementary divisor domain, circulant, cyclically presented group, Brieskorn manifold, homology.
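For $g(t) = t^n - 1$ the companion matrix $C_g$ is the cyclic shift, and $f(C_g)$ is the circulant whose first column holds the coefficients of $f$. A small sketch of ours illustrating just this structural fact (the paper's Smith-form machinery is not reproduced); the trefoil's Alexander polynomial $t^2 - t + 1$ serves as $f$:

```python
import numpy as np

n = 5
# Companion matrix of g(t) = t^n - 1: the cyclic shift matrix
# (entry (i, j) is 1 exactly when j = i - 1 mod n).
C = np.roll(np.eye(n, dtype=int), 1, axis=0)

def f_of(C, coeffs):
    """Evaluate f(C) = sum_k coeffs[k] * C^k for a square matrix C."""
    out = np.zeros_like(C)
    P = np.eye(len(C), dtype=int)
    for a in coeffs:
        out = out + a * P
        P = P @ C
    return out

# f(t) = 1 - t + t^2, the Alexander polynomial of the trefoil knot.
M = f_of(C, [1, -1, 1])
print(M)  # a circulant: each column is a cyclic shift of the previous one
```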
An Algorithmic Proof of Suslin's Stability Theorem over Polynomial Rings
An Algorithmic Proof of Suslin's Stability Theorem over Polynomial Rings. Hyungju Park (Dept. of Mathematics, University of California, Berkeley; [email protected]) and Cynthia Woodburn (Dept. of Mathematics, Pittsburg State University; [email protected]). arXiv:alg-geom/9405003v1, 10 May 1994.
Abstract: Let $k$ be a field. Then Gaussian elimination over $k$ and the Euclidean division algorithm for the univariate polynomial ring $k[x]$ allow us to write any matrix in $SL_n(k)$ or $SL_n(k[x])$, $n \ge 2$, as a product of elementary matrices. Suslin's stability theorem states that the same is true for the multivariate polynomial ring $SL_n(k[x_1,\ldots,x_m])$ with $n \ge 3$. As Gaussian elimination gives us an algorithmic way of finding an explicit factorization of the given matrix into elementary matrices over a field, we develop a similar algorithm over polynomial rings.
1. Introduction. Immediately after proving the famous Serre's Conjecture (the Quillen-Suslin theorem, nowadays) in 1976 [11], A. Suslin went on [12] to prove the following $K_1$-analogue of Serre's Conjecture, which is now known as Suslin's stability theorem:
Let $R$ be a commutative Noetherian ring and $n \ge \max(3, \dim(R) + 2)$. Then any $n \times n$ matrix $A = (f_{ij})$ of determinant 1, with $f_{ij}$ being elements of the polynomial ring $R[x_1,\ldots,x_m]$, can be written as a product of elementary matrices over $R[x_1,\ldots,x_m]$.
Definition 1: For any ring $R$, an $n \times n$ elementary matrix $E_{ij}(a)$ over $R$ is a matrix of the form $I + a \cdot e_{ij}$ where $i \ne j$, $a \in R$, and $e_{ij}$ is the $n \times n$ matrix whose $(i, j)$ component is 1 and all other components are zero.
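Over a field, the constant-matrix case really is just Gaussian elimination with row additions only. A sketch of ours for $SL_2(\mathbb{Q})$ using exact rational arithmetic (the paper's actual contribution, the multivariate polynomial case, needs far more machinery):

```python
from fractions import Fraction

def E(i, j, a):
    """2x2 elementary matrix I + a*e_ij (i != j), as in Definition 1."""
    M = [[Fraction(1), Fraction(0)], [Fraction(0), Fraction(1)]]
    M[i][j] = Fraction(a)
    return M

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def factor_sl2(A):
    """Write A in SL_2(Q) as a product of elementary matrices E_ij(a).
    Left-multiplying by E's reduces A to I; inverting them gives the factors."""
    A = [[Fraction(x) for x in row] for row in A]
    ops = []                  # records (i, j, a) for each E(i, j, a) applied
    def row_add(i, j, a):     # row_i += a * row_j, i.e. A <- E(i, j, a) A
        ops.append((i, j, a))
        A[i][0] += a * A[j][0]; A[i][1] += a * A[j][1]
    if A[1][0] == 0:
        row_add(1, 0, 1)                     # make the (2,1) entry nonzero
    row_add(0, 1, (1 - A[0][0]) / A[1][0])   # force the (1,1) entry to 1
    row_add(1, 0, -A[1][0])                  # clear (2,1); det = 1 forces (2,2) = 1
    row_add(0, 1, -A[0][1])                  # clear (1,2): A is now the identity
    # E_k ... E_1 A = I, so A = E_1^{-1} ... E_k^{-1}, and E_ij(a)^{-1} = E_ij(-a).
    return [E(i, j, -a) for (i, j, a) in ops]

A = [[3, 5], [1, 2]]          # det = 3*2 - 5*1 = 1
P = [[Fraction(1), Fraction(0)], [Fraction(0), Fraction(1)]]
for F in factor_sl2(A):
    P = mul(P, F)
print(P == [[3, 5], [1, 2]])  # True: the factors multiply back to A
```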