Linear Algebra in Maple®
Recommended publications
-
The SageManifolds Project
Tensor calculus with free software: the SageManifolds project. Éric Gourgoulhon (Laboratoire Univers et Théories (LUTH), CNRS / Observatoire de Paris / Université Paris Diderot, 92190 Meudon, France, http://luth.obspm.fr/~luthier/gourgoulhon/) and Michał Bejger (Centrum Astronomiczne im. M. Kopernika (CAMK), Warsaw, Poland, http://users.camk.edu.pl/bejger/). Encuentros Relativistas Españoles 2014, Valencia, 1-5 September 2014.

Outline: 1. Differential geometry and tensor calculus on a computer; 2. Sage: a free mathematics software; 3. The SageManifolds project; 4. SageManifolds at work: the Mars-Simon tensor example; 5. Conclusion and perspectives.

In 1969, during his PhD under Pirani's supervision at King's College, Ray d'Inverno wrote ALAM (Atlas Lisp Algebraic Manipulator) and used it to compute the Riemann tensor of the Bondi metric. The original calculations had taken Bondi and his collaborators 6 months; the computation with ALAM took 4 minutes and led to the discovery of 6 errors in the original paper [J.E.F. Skea, Applications of SHEEP (1994)]. In the early 1970s, ALAM was rewritten in the LISP programming language, thereby becoming machine independent, and was renamed LAM. The descendant of LAM, called SHEEP (!), was initiated in 1977 by Inge Frick. Since then, many software packages for tensor calculus have been developed.
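The Riemann-tensor computation mentioned above is exactly what SageManifolds automates. As a hedged illustration (our own example, not from the talk), here is a minimal sketch of such a computation for a round 2-sphere in stereographic coordinates; it assumes a SageMath session with the manifolds functionality available, and the method names reflect the SageManifolds API as we understand it:

```python
# Minimal sketch, to be run inside a SageMath session (Python-based).
# We build a 2-sphere in stereographic coordinates and compute curvature.
M = Manifold(2, 'S^2')              # a 2-dimensional smooth manifold
X = M.chart('x y')                  # stereographic chart (sphere minus a point)
x, y = X[:]
g = M.metric('g')                   # declare a metric on M
f = 4 / (1 + x**2 + y**2)**2        # conformal factor of the round metric
g[0, 0] = f
g[1, 1] = f
Riem = g.riemann()                  # Riemann curvature tensor, computed symbolically
print(Riem.display())
```
-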
Using Macaulay2 Effectively in Practice
Using Macaulay2 effectively in practice. Mike Stillman ([email protected]), Department of Mathematics, Cornell. 22 July 2019 / IMA Sage/M2.

Macaulay2 at a glance: the project started in 1993 with Dan Grayson and Mike Stillman, and is open source. Key computations: Gröbner bases, free resolutions, Hilbert functions, and applications of these. Rings, modules, and chain complexes are first-class objects. The language is comfortable for mathematicians, yet powerful, expressive, and fun to program in. It is now a community project: the Journal of Software for Algebra and Geometry (started in 2009; it now handles Macaulay2, Singular, Gap, and Cocoa; original editors Greg Smith and Amelia Taylor), a strong community including about 2 workshops per year, and user-contributed packages (about 200 so far), each with documentation and tests, tested every night and distributed with M2. There is lots of activity: over 2000 math papers refer to Macaulay2.

History, 1976-1978 (my undergrad years at Urbana): E. Graham Evans asked me to write a program to compute syzygies, from Hilbert's algorithm from 1890. It really didn't work on computers of the day (and probably might still be an issue!). Instead, we did the computation degree by degree, with no finishing condition, and used Buchsbaum-Eisenbud's "What makes a complex exact" (by hand!) to see if the resulting complex was exact. Winfried Bruns was there too. A very exciting time.

History, 1978-1983 (my grad years, with Dave Bayer, at Harvard): I tried to do "real mathematics", but Dave Bayer (basically) rediscovered Groebner bases and saw that they gave an algorithm for computing all syzygies. I got excited, dropped what I was doing, and we programmed (in Pascal), in less than one week, the first version of what would be Macaulay.
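The Gröbner-basis engine is the core computation this excerpt keeps returning to. As a small illustration in Python (using sympy rather than Macaulay2's own language, so this is only an analogue of what M2 does, on an ideal of our own choosing):

```python
from sympy import groebner, symbols

x, y = symbols('x y')

# Gröbner basis of the ideal (x^2 + y, x*y - 1) in lexicographic order.
# Reduction modulo such a basis decides ideal membership, the step
# underlying the syzygy and free-resolution computations described above.
G = groebner([x**2 + y, x*y - 1], x, y, order='lex')
print(G)   # basis: [x + y**2, y**3 + 1]
```
-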
Linear Systems and Control: A First Course (Course Notes for AAE 564)
Linear Systems and Control: A First Course (course notes for AAE 564). Martin Corless, School of Aeronautics & Astronautics, Purdue University, West Lafayette, Indiana. [email protected]. August 25, 2008.

Contents:
1 Introduction
  1.1 Ingredients
  1.2 Some notation
  1.3 MATLAB
2 State space representation of dynamical systems
  2.1 Linear examples
    2.1.1 A first example
    2.1.2 The unattached mass
    2.1.3 Spring-mass-damper
    2.1.4 A simple structure
  2.2 Nonlinear examples
    2.2.1 A first nonlinear system
    2.2.2 Planar pendulum
    2.2.3 Attitude dynamics of a rigid body
    2.2.4 Body in central force motion
    2.2.5 Ballistics in drag
    2.2.6 Double pendulum on cart
    2.2.7 Two-link robotic manipulator
  2.3 Discrete-time examples
    2.3.1 The discrete unattached mass
  2.4 General representation
    2.4.1 Continuous-time
    2.4.2 Discrete-time
    2.4.3 Exercises
  2.5 Vectors
    2.5.1 Vector spaces and R^n
    2.5.2 R^2 and pictures
    2.5.3 Derivatives
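The spring-mass-damper of section 2.1.3 is the canonical route into state space form. A hedged sketch in Python (the parameter values are our own, and the conversion follows the standard z' = Az + Bu recipe rather than anything specific to these notes):

```python
import numpy as np

# Spring-mass-damper: m*q'' + c*q' + k*q = u(t).
# With state z = [q, q'], this becomes z' = A z + B u, y = C z.
m, c, k = 1.0, 0.5, 2.0          # hypothetical mass, damping, stiffness

A = np.array([[0.0,   1.0],
              [-k/m, -c/m]])     # system matrix
B = np.array([[0.0],
              [1.0/m]])          # input matrix
C = np.array([[1.0, 0.0]])       # we observe the position q

# Eigenvalues of A determine stability of the unforced system.
print(np.linalg.eigvals(A))      # a complex pair with negative real part
```
-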
Scientific Computing: An Introductory Survey
Scientific Computing: An Introductory Survey, Chapter 4: Eigenvalue Problems. Prof. Michael T. Heath, Department of Computer Science, University of Illinois at Urbana-Champaign. Copyright © 2002; reproduction permitted for noncommercial, educational use only.

Outline: 1. Eigenvalue Problems; 2. Existence, Uniqueness, and Conditioning; 3. Computing Eigenvalues and Eigenvectors.

Eigenvalue problems occur in many areas of science and engineering, such as structural analysis. Eigenvalues are also important in analyzing numerical methods. The theory and algorithms apply to complex matrices as well as real matrices; with complex matrices, we use the conjugate transpose, A^H, instead of the usual transpose, A^T.

Standard eigenvalue problem: given an n × n matrix A, find a scalar λ and a nonzero vector x such that A x = λ x. Here λ is an eigenvalue, and x is a corresponding eigenvector.
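A hedged sketch of the standard problem in Python with NumPy (the test matrices are our own choices):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Solve the standard eigenvalue problem A x = lambda x.
eigvals, eigvecs = np.linalg.eig(A)
lam, x = eigvals[0], eigvecs[:, 0]
assert np.allclose(A @ x, lam * x)   # verify the defining relation

# For a complex matrix, the conjugate transpose A^H replaces A^T:
B = np.array([[1.0, 1j], [-1j, 2.0]])
B_H = B.conj().T                     # Hermitian (conjugate) transpose
assert np.allclose(B, B_H)           # B is Hermitian, so its eigenvalues are real
print(np.linalg.eigvalsh(B))
```
-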
Triangular Factorization
Chapter 1: Triangular Factorization. This chapter deals with the factorization of arbitrary matrices into products of triangular matrices. Since the solution of a linear n × n system can be easily obtained once the matrix is factored into the product of triangular matrices, we will concentrate on the factorization of square matrices. Specifically, we will show that an arbitrary n × n matrix A has the factorization PA = LU, where P is an n × n permutation matrix, L is an n × n unit lower triangular matrix, and U is an n × n upper triangular matrix. In connection with this factorization we will discuss pivoting, i.e., row interchange, strategies. We will also explore circumstances under which A may be factored in the forms A = LU or A = LL^T. Our results for a square system will be given for a matrix with real elements but can easily be generalized to complex matrices. The corresponding results for a general m × n matrix will be accumulated in Section 1.4: in the general case, an arbitrary m × n matrix A has the factorization PA = LU, where P is an m × m permutation matrix, L is an m × m unit lower triangular matrix, and U is an m × n matrix having row echelon structure.

1.1 Permutation matrices and Gauss transformations. We begin by defining permutation matrices and examining the effect of premultiplying or postmultiplying a given matrix by such matrices. We then define Gauss transformations and show how they can be used to introduce zeros into a vector. Definition 1.1: An m × m permutation matrix is a matrix whose columns consist of a rearrangement of the m unit vectors e^(j), j = 1, ..., m, in R^m, i.e., a rearrangement of the columns (or rows) of the m × m identity matrix.
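A hedged sketch of the PA = LU factorization in Python (the test matrix is our own; note that SciPy's lu uses the opposite convention A = PLU, which we reconcile below using P^{-1} = P^T for permutation matrices):

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[0.0, 2.0, 1.0],
              [1.0, 1.0, 0.0],
              [2.0, 1.0, 1.0]])

# SciPy factors A = P @ L @ U, so P.T @ A = L @ U matches the
# chapter's convention PA = LU.
P, L, U = lu(A)
assert np.allclose(A, P @ L @ U)
assert np.allclose(P.T @ A, L @ U)
assert np.allclose(np.diag(L), 1.0)      # L is *unit* lower triangular
assert np.allclose(U, np.triu(U))        # U is upper triangular
```
-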
Problems in Abstract Algebra
STUDENT MATHEMATICAL LIBRARY, Volume 82. Problems in Abstract Algebra, by A. R. Wadsworth. American Mathematical Society, Providence, Rhode Island. DOI 10.1090/stml/082. Editorial Board: Satyan L. Devadoss, John Stillwell (Chair), Erica Flapan, Serge Tabachnikov. 2010 Mathematics Subject Classification: primary 00A07, 12-01, 13-01, 15-01, 20-01. For additional information and updates on this book, visit www.ams.org/bookpages/stml-82. ISBN 9781470435837; LCCN 2016057500; LC record available at https://lccn.loc.gov/2016057500. Library of Congress subjects: Algebra, Abstract – Textbooks; covering problem books, field theory and polynomials, commutative algebra, linear and multilinear algebra and matrix theory, and group theory and generalizations.
-
Final Exam Outline
Final Exam Outline.

1.1, 1.2 Scalars and vectors in R^n. The magnitude of a vector. Row and column vectors. Vector addition. Scalar multiplication. The zero vector. Vector subtraction. Distance in R^n. Unit vectors. Linear combinations. The dot product. The angle between two vectors. Orthogonality. The CBS and triangle inequalities. Theorem 1.5.

2.1, 2.2 Systems of linear equations. Consistent and inconsistent systems. The coefficient and augmented matrices. Elementary row operations. The echelon form of a matrix. Gaussian elimination. Equivalent matrices. The reduced row echelon form. Gauss-Jordan elimination. Leading and free variables. The rank of a matrix. The rank theorem. Homogeneous systems. Theorem 2.2.

2.3 The span of a set of vectors. Linear dependence and independence. Determining whether a set {v_1, ..., v_k} of vectors in R^n is linearly dependent or independent.

3.1, 3.2 Matrix operations: scalar multiplication, matrix addition, matrix multiplication. The zero and identity matrices. Partitioned matrices. The outer product form. The standard unit vectors. The transpose of a matrix. A system of linear equations in matrix form. Algebraic properties of the transpose.

3.3 The inverse of a matrix. Invertible (nonsingular) matrices. The uniqueness of the solution to Ax = b when A is invertible. The inverse of a nonsingular 2 × 2 matrix. Elementary matrices and EROs. Equivalent conditions for invertibility. Using Gauss-Jordan elimination to invert a nonsingular matrix. Theorems 3.7, 3.9.

3.5 Subspaces of R^n. {0} and R^n as subspaces of R^n. The subspace span(v_1, ..., v_k). The row, column and null spaces of a matrix.
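Several items on this outline (dot products, angles, the CBS inequality, rank, inverses) are easy to check numerically. A hedged sketch in Python with NumPy (the example vectors and matrix are our own):

```python
import numpy as np

u = np.array([1.0, 2.0, 2.0])
v = np.array([2.0, 0.0, 1.0])

# Angle between two vectors via the dot product: cos(theta) = u.v / (|u||v|)
cos_theta = (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
theta = np.arccos(cos_theta)

# CBS inequality: |u.v| <= |u| |v|
assert abs(u @ v) <= np.linalg.norm(u) * np.linalg.norm(v)

# Rank and invertibility
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
assert np.linalg.matrix_rank(A) == 2     # full rank, hence invertible
A_inv = np.linalg.inv(A)
assert np.allclose(A @ A_inv, np.eye(2))
```
-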
Singular Value Decomposition and Its Numerical Computations
Singular Value Decomposition and its numerical computations. Wen Zhang, Anastasios Arvanitis and Asif Al-Rasheed.

ABSTRACT. The Singular Value Decomposition (SVD) is widely used in many engineering fields. Due to the important role that the SVD plays in real-time computations, we study its numerical characteristics and implement the numerical methods for calculating it. Generally speaking, there are two approaches to obtaining the SVD of a matrix: the direct method and the indirect method. The first transforms the original matrix to a bidiagonal matrix and then computes the SVD of this resulting matrix. The second obtains the SVD through the eigen pairs of another square matrix. In this project, we implement these two kinds of methods and develop combined methods for computing the SVD. Finally, we compare these methods with the built-in function in Matlab (svd) regarding timings and accuracy.

1. INTRODUCTION. The singular value decomposition is a factorization of a real or complex matrix, and it is used in many applications. Let A be a real or complex m × n matrix. Then the SVD of A is A = U Σ V^T, where U is an m × m orthogonal matrix, Σ is an m × n rectangular diagonal matrix, and V^T is the transpose of the n × n orthogonal matrix V. The diagonal entries of Σ are known as the singular values of A. The m columns of U and the n columns of V are called the left singular vectors and right singular vectors of A, respectively. Both U and V are orthogonal matrices.
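Both routes described in the abstract can be imitated in a few lines of Python with NumPy; a hedged sketch (the random test matrix is our own, and np.linalg.svd stands in for Matlab's svd):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))

# Direct route (library call): A = U @ diag(s) @ Vt
U, s, Vt = np.linalg.svd(A, full_matrices=False)
assert np.allclose(A, U @ np.diag(s) @ Vt)

# Indirect route: the singular values are the square roots of the
# eigenvalues of the square matrix A^T A (its "eigen pairs").
w = np.linalg.eigvalsh(A.T @ A)          # eigenvalues in ascending order
s_indirect = np.sqrt(w[::-1])            # singular values, descending
assert np.allclose(s, s_indirect)
```
-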
Diagonalizable Matrix - Wikipedia, the Free Encyclopedia
From Wikipedia (http://en.wikipedia.org/wiki/Matrix_diagonalization): In linear algebra, a square matrix A is called diagonalizable if it is similar to a diagonal matrix, i.e., if there exists an invertible matrix P such that P^{-1}AP is a diagonal matrix. If V is a finite-dimensional vector space, then a linear map T : V → V is called diagonalizable if there exists a basis of V with respect to which T is represented by a diagonal matrix. Diagonalization is the process of finding a corresponding diagonal matrix for a diagonalizable matrix or linear map.[1] A square matrix which is not diagonalizable is called defective.

Diagonalizable matrices and maps are of interest because diagonal matrices are especially easy to handle: their eigenvalues and eigenvectors are known, and one can raise a diagonal matrix to a power by simply raising the diagonal entries to that same power. Geometrically, a diagonalizable matrix is an inhomogeneous dilation (or anisotropic scaling): it scales the space, as does a homogeneous dilation, but by a different factor in each direction, determined by the scale factors on each axis (diagonal entries).

Characterisation. The fundamental fact about diagonalizable maps and matrices is expressed by the following: an n-by-n matrix A over the field F is diagonalizable if and only if the sum of the dimensions of its eigenspaces is equal to n, which is the case if and only if there exists a basis of F^n consisting of eigenvectors of A.
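A hedged sketch of diagonalization in Python with NumPy, including the powers-of-a-matrix payoff mentioned above (the matrix is our own example, chosen with distinct eigenvalues so it is diagonalizable):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Columns of P are eigenvectors of A; D holds the eigenvalues.
eigvals, P = np.linalg.eig(A)
D = np.diag(eigvals)

# A is diagonalizable here: P is invertible and P^{-1} A P = D.
assert np.allclose(np.linalg.inv(P) @ A @ P, D)

# Powers become trivial: A^5 = P D^5 P^{-1}, with D^5 computed entrywise.
A5 = P @ np.diag(eigvals**5) @ np.linalg.inv(P)
assert np.allclose(A5, np.linalg.matrix_power(A, 5))
```
-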
Row Echelon Form in MATLAB
Row Echelon Form Matlab Lightless and refrigerative Klaus always seal disorderly and interknitting his coati. Telegraphic and crooked Ozzie always kaolinizing tenably and bell his cicatricles. Hateful Shepperd amalgamating, his decors mistiming purifies eximiously. The row echelon form of columns are both stored as One elementary transformations which matlab supports both above are equivalent, row echelon form may instead be used here we also stops all. Learn how we need your given value as nonzero column, row echelon form matlab commands that form, there are called parametric surfaces intersecting in matlab file make this? This article helpful in row echelon form matlab. There has multiple of satisfy all row echelon form matlab. Let be defined by translating from a sum each variable becomes a matrix is there must have already? We drop now vary the Second Derivative Test to determine the type in each critical point of f found above. Matrices in Matlab Arizona Math. The floating point, not change ababaarimes indicate that are displayed, and matrices with row operations for two general solution is a matrix is a line. The matlab will sum or graphing calculators for row echelon form matlab has nontrivial solution as a way: form for each componentwise operation or column echelon form. For accurate part, not sure to explictly give to appropriate was of equations as a comment before entering the appropriate matrices into MATLAB. If necessary, interchange rows to leaving this entry into the first position. But you with matlab do i mean a row echelon form matlab works by spaces and matrices, or multiply matrices. -
An Alternative Algorithm for Computing the Betti Table of a Monomial Ideal
AN ALTERNATIVE ALGORITHM FOR COMPUTING THE BETTI TABLE OF A MONOMIAL IDEAL. Maria-Laura Torrente and Matteo Varbaro.

Abstract. In this paper we develop a new technique to compute the Betti table of a monomial ideal. We present a prototype implementation of the resulting algorithm and we perform numerical experiments suggesting a very promising efficiency. On the way to describing the method, we also prove new constraints on the shape of the possible Betti tables of a monomial ideal.

1. Introduction. For many years syzygies, and more generally free resolutions, have been central to purely theoretical aspects of algebraic geometry; more recently, after the connection between algebra and statistics was initiated by Diaconis and Sturmfels in [DS98], free resolutions have also become an important tool in statistics (for instance, see [D11, SW09]). As a consequence, it is fundamental to have efficient algorithms to compute them. The usual approach uses Gröbner bases and exploits a result of Schreyer (for more details see [Sc80, Sc91] or [Ei95, Chapter 15, Section 5]). The packages for free resolutions of the most used computer algebra systems, like [Macaulay2, Singular, CoCoA], are based on these techniques. In this paper, we introduce a new algorithmic method to compute the minimal graded free resolution of any finitely generated graded module over a polynomial ring such that some (possibly non-minimal) graded free resolution is known a priori. We describe this method and present the resulting algorithm in the case of monomial ideals in a polynomial ring, in which situation we always have a starting non-minimal graded free resolution.
-
Linear Algebra and Matrix Theory
Linear Algebra and Matrix Theory, Chapter 1: Linear Systems, Matrices and Determinants. This is a very brief outline of some basic definitions and theorems of linear algebra. We will assume that you know elementary facts such as how to add two matrices, how to multiply a matrix by a number, how to multiply two matrices, what an identity matrix is, and what a solution of a linear system of equations is. Hardly any of the theorems will be proved. More complete treatments may be found in the following references.

1. References
(1) S. Friedberg, A. Insel and L. Spence, Linear Algebra, Prentice-Hall.
(2) M. Golubitsky and M. Dellnitz, Linear Algebra and Differential Equations Using Matlab, Brooks-Cole.
(3) K. Hoffman and R. Kunze, Linear Algebra, Prentice-Hall.
(4) P. Lancaster and M. Tismenetsky, The Theory of Matrices, Academic Press.

2. Linear Systems of Equations and Gaussian Elimination
The solutions, if any, of a linear system of equations

(2.1)
a11 x1 + a12 x2 + ... + a1n xn = b1
a21 x1 + a22 x2 + ... + a2n xn = b2
...
am1 x1 + am2 x2 + ... + amn xn = bm

may be found by Gaussian elimination. The permitted steps are as follows.
(1) Both sides of any equation may be multiplied by the same nonzero constant.
(2) Any two equations may be interchanged.
(3) Any multiple of one equation may be added to another equation.

Instead of working with the symbols for the variables (the xi), it is easier to place the coefficients (the aij) and the forcing terms (the bi) in a rectangular array called the augmented matrix of the system:

[ a11  a12  ...  a1n  b1 ]
[ a21  a22  ...  a2n  b2 ]
[ ...                    ]
[ am1  am2  ...  amn  bm ]
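The three permitted steps above are exactly what a forward-elimination code performs on the augmented matrix. A hedged sketch in Python (a minimal teaching implementation with partial pivoting, not production code; the test system is our own):

```python
import numpy as np

def gaussian_elimination(A, b):
    """Solve A x = b by forward elimination with partial pivoting
    followed by back substitution. Minimal sketch for illustration."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        # Step (2): interchange rows to bring the largest pivot up.
        p = k + np.argmax(np.abs(A[k:, k]))
        A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]
        for i in range(k + 1, n):
            # Step (3): add a multiple of row k to row i to zero A[i, k].
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # Back substitution; dividing by the pivot is step (1) in disguise.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[ 2.0,  1.0, -1.0],
              [-3.0, -1.0,  2.0],
              [-2.0,  1.0,  2.0]])
b = np.array([8.0, -11.0, -3.0])
x = gaussian_elimination(A, b)
print(x)                          # [ 2.  3. -1.]
assert np.allclose(A @ x, b)
```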