Orthogonal Reduction to the Row Echelon Form
Total pages: 16
File type: PDF, size: 1020 KB
Recommended publications
-
arXiv:2003.06292v1 [math.GR] 12 Mar 2020
ALGORITHMS IN LINEAR ALGEBRAIC GROUPS
SUSHIL BHUNIA, AYAN MAHALANOBIS, PRALHAD SHINDE, AND ANUPAM SINGH
ABSTRACT. This paper presents some algorithms in linear algebraic groups. These algorithms solve the word problem and compute the spinor norm for orthogonal groups. This gives us an algorithmic definition of the spinor norm. We compute the double coset decomposition with respect to a Siegel maximal parabolic subgroup, which is important in computing infinite-dimensional representations for some algebraic groups.
1. INTRODUCTION
The spinor norm was first defined by Dieudonné and Kneser using Clifford algebras. Wall [21] defined the spinor norm using bilinear forms. These days, to compute the spinor norm, one uses the definition of Wall. In this paper, we develop a new definition of the spinor norm for split and twisted orthogonal groups. Our definition of the spinor norm is rich in the sense that it is algorithmic in nature: one can now compute the spinor norm using a Gaussian elimination algorithm that we develop in this paper. This paper can be seen as an extension of our earlier work in the book chapter [3], where we described Gaussian elimination algorithms for orthogonal and symplectic groups in the context of public key cryptography.
In computational group theory, one always looks for algorithms to solve the word problem. For a group G defined by a set of generators, ⟨X⟩ = G, the problem is to write g ∈ G as a word in X: we say that this is the word problem for G (for details, see [18, Section 1.4]). Brooksbank [4] and Costi [10] developed algorithms similar to ours for classical groups over finite fields.
-
On the Eigenvalues of Euclidean Distance Matrices
On the eigenvalues of Euclidean distance matrices
A. Y. ALFAKIH*, Department of Mathematics and Statistics, University of Windsor, Windsor, Ontario N9B 3P4, Canada. E-mail: [email protected]
Volume 27, N. 3, pp. 237–250, 2008. Copyright © 2008 SBMAC. ISSN 0101-8205. www.scielo.br/cam
Abstract. In this paper, the notion of equitable partitions (EP) is used to study the eigenvalues of Euclidean distance matrices (EDMs). In particular, EP is used to obtain the characteristic polynomials of regular EDMs and non-spherical centrally symmetric EDMs. The paper also presents methods for constructing cospectral EDMs and EDMs with exactly three distinct eigenvalues.
Mathematical subject classification: 51K05, 15A18, 05C50.
Key words: Euclidean distance matrices, eigenvalues, equitable partitions, characteristic polynomial.
1 Introduction
An n × n nonzero matrix D = (d_ij) is called a Euclidean distance matrix (EDM) if there exist points p^1, p^2, ..., p^n in some Euclidean space R^r such that d_ij = ||p^i − p^j||^2 for all i, j = 1, ..., n, where || · || denotes the Euclidean norm.
Let p^i, i ∈ N = {1, 2, ..., n}, be the set of points that generate an EDM D. An m-partition of D is an ordered sequence π = (N_1, N_2, ..., N_m) of nonempty disjoint subsets of N whose union is N. The subsets N_1, ..., N_m are called the cells of the partition. The n-partition of D where each cell consists
#760/08. Received: 07/IV/08. Accepted: 17/VI/08. *Research supported by the Natural Sciences and Engineering Research Council of Canada and MITACS.
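To make the definition concrete, here is a minimal NumPy sketch, ours rather than the paper's (the helper name edm is an assumption), that builds an EDM from a list of generating points:

import numpy as np

def edm(points):
    # Build D with d_ij = ||p_i - p_j||^2 from an (n, r) array of points in R^r
    P = np.asarray(points, dtype=float)
    sq = np.sum(P * P, axis=1)                  # ||p_i||^2 for each point
    # ||p_i - p_j||^2 = ||p_i||^2 - 2 p_i . p_j + ||p_j||^2
    return sq[:, None] - 2.0 * (P @ P.T) + sq[None, :]

D = edm([[0, 0], [1, 0], [0, 1]])   # three corners of a unit square in R^2
print(D)                            # squared pairwise distances, zero diagonal
print(np.linalg.eigvalsh(D))        # a nonzero EDM has exactly one positive eigenvalue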
-
Implementation of Gaussian Elimination
International Journal of Innovative Technology and Exploring Engineering (IJITEE), ISSN: 2278-3075, Volume-5 Issue-11, April 2016
Implementation of Gaussian Elimination
Awatif M. A. Elsiddieg
Abstract: Gaussian elimination is an algorithm for solving systems of linear equations; it can also be used to find the rank of any matrix, and we use Gauss-Jordan elimination to find the inverse of a nonsingular square matrix. This work gives basic concepts in section (1), shows what pivoting is, and implements Gaussian elimination to solve a system of linear equations. In section (2) we find the rank of any matrix. In section (3) we use Gaussian elimination to find the inverse of a nonsingular square matrix, and we compare the method with the Gauss-Jordan method. In section (4), a practical implementation of the method: we inherit the computational features of Gaussian elimination using programs in the Matlab software.
Keywords: Gaussian elimination, algorithm, Gauss-Jordan, method, computation, features, programs in Matlab, software.
II. PIVOTING
The objective of pivoting is to make an element above or below a leading one into a zero. The "pivot" or "pivot element" is an element on the left-hand side of a matrix that you want the elements above and below to be zero. Normally, this element is a one. If you can find a book that mentions pivoting, it will usually tell you that you must pivot on a one. If you restrict yourself to the three elementary row operations, then this is a true statement. However, if you are willing to combine the second and third elementary row operations, you come up
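The paper's programs are in Matlab; as an illustration of the same idea, here is a short Python sketch, ours rather than the paper's, of Gaussian elimination with partial pivoting (the function name gauss_solve is an assumption):

import numpy as np

def gauss_solve(A, b):
    # Solve Ax = b by Gaussian elimination with partial pivoting:
    # at each step, pivot on the largest-magnitude entry in the column,
    # which is why one need not insist on pivoting on an exact one.
    A = np.asarray(A, dtype=float).copy()
    b = np.asarray(b, dtype=float).copy()
    n = len(b)
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))   # choose the pivot row
        if p != k:
            A[[k, p]] = A[[p, k]]             # swap rows of A and of b
            b[[k, p]] = b[[p, k]]
        for i in range(k + 1, n):             # eliminate below the pivot
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    x = np.zeros(n)                           # back substitution
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

print(gauss_solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0]))   # -> [0.8, 1.4]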
-
Naïve Gaussian Elimination. Jamie Trahan, Autar Kaw, Kevin Martin. University of South Florida, United States of America. [email protected]
nbm_sle_sim_naivegauss.nb
Naïve Gaussian Elimination
Jamie Trahan, Autar Kaw, Kevin Martin
University of South Florida, United States of America
[email protected]
Introduction
One of the most popular numerical techniques for solving simultaneous linear equations is the Naïve Gaussian Elimination method. The approach is designed to solve a set of n equations with n unknowns, [A][X] = [C], where [A] is the n×n square coefficient matrix, [X] is the n×1 solution vector, and [C] is the n×1 right-hand-side array. Naïve Gauss consists of two steps:
1) Forward Elimination: In this step, the unknown is eliminated in each equation starting with the first equation. This way, the equations are "reduced" to one equation and one unknown in each equation.
2) Back Substitution: In this step, starting from the last equation, each of the unknowns is found.
To learn more about Naïve Gauss Elimination as well as the pitfalls of the method, click here. A simulation of the Naïve Gauss Method follows.
Section 1: Input Data
Below are the input parameters to begin the simulation. This is the only section that requires user input. Once the values are entered, Mathematica will calculate the solution vector [X].
• Number of equations, n:
n = 4
• n×n coefficient matrix, [A]:
A = {{1, 10, 100, 1000}, {1, 15, 225, 3375}, {1, 20, 400, 8000}, {1, 22.5, 506.25, 11391}}; A // MatrixForm
• n×1 right-hand-side array, [RHS]:
RHS = {227.04, 362.78, 517.35, 602.97}; RHS // MatrixForm
Section 2: Naïve Gaussian Elimination Method
The following sections divide Naïve Gauss elimination into two steps: 1) Forward Elimination, 2) Back Substitution. To conduct Naïve Gauss Elimination, Mathematica will join the [A] and [RHS] matrices into one augmented matrix, [C], that will facilitate the process of forward elimination.
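For readers outside Mathematica, a Python transcription of the naive method on the notebook's example system (ours, not the authors' code) might look like this:

import numpy as np

# The notebook's example system [A][X] = [RHS]
A = np.array([[1.0, 10.0,  100.0,  1000.0],
              [1.0, 15.0,  225.0,  3375.0],
              [1.0, 20.0,  400.0,  8000.0],
              [1.0, 22.5, 506.25, 11391.0]])
rhs = np.array([227.04, 362.78, 517.35, 602.97])

C = np.hstack([A, rhs[:, None]])   # augmented matrix [C]
n = len(rhs)

# Step 1: forward elimination (naive: no pivoting, assumes nonzero pivots)
for k in range(n - 1):
    for i in range(k + 1, n):
        C[i, k:] -= (C[i, k] / C[k, k]) * C[k, k:]

# Step 2: back substitution, starting from the last equation
x = np.zeros(n)
for i in range(n - 1, -1, -1):
    x[i] = (C[i, -1] - C[i, i+1:n] @ x[i+1:n]) / C[i, i]

print(x)                        # the solution vector [X]
print(np.allclose(A @ x, rhs))  # sanity check: True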
-
4.1 Rank of a Matrix
page 1 of Section 4.1
CHAPTER 4: MATRICES CONTINUED
SECTION 4.1: RANK OF A MATRIX
rank list
Given matrix M, the following are equal:
(1) maximal number of independent cols (i.e., dim of the col space of M)
(2) maximal number of independent rows (i.e., dim of the row space of M)
(3) number of cols with pivots in the echelon form of M
(4) number of nonzero rows in the echelon form of M
You know that (1) = (3) and (2) = (4) from Section 3.1. To see that (3) = (4), just stare at some echelon forms. This one number (that all four things equal) is called the rank of M. As a special case, a zero matrix is said to have rank 0.
how row ops affect rank
Row ops don't change the rank because they don't change the max number of independent cols or rows.
example 1
[1 2 -1 0; 2 4 -2 0; 4 8 -4 0] has rank 1 (maximally one independent col, by inspection)
[0 0 0; 0 0 0; 0 0 0] has rank 0
example 2
Let M = [2 5 4 0; 0 0 0 1; -2 1 -1 0; 2 11 7 0].
To find the rank of M, use row ops R3 = R1 + R3, R4 = -R1 + R4, R2 ↔ R3, R4 = -R2 + R4 to get the unreduced echelon form [2 5 4 0; 0 6 3 0; 0 0 0 1; 0 0 0 0].
Cols 1, 2, 4 have pivots. So the rank of M is 3.
how the rank is limited by the size of the matrix
If A is 7×4 then its rank is either 0 (if it's the zero matrix), 1, 2, 3 or 4. The rank can't be 5 or larger because there can't be 5 independent cols when there are only 4 cols to begin with.
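Example 2 can be checked numerically; this quick sketch is ours, not the section's. NumPy's matrix_rank computes the rank from singular values rather than row ops, but it returns the same number:

import numpy as np

M = np.array([[ 2,  5,  4, 0],
              [ 0,  0,  0, 1],
              [-2,  1, -1, 0],
              [ 2, 11,  7, 0]])

# SVD-based rank; agrees with the hand row-reduction above
print(np.linalg.matrix_rank(M))   # -> 3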
-
Rotation Matrix - Wikipedia, the Free Encyclopedia
Rotation matrix
From Wikipedia, the free encyclopedia
In linear algebra, a rotation matrix is a matrix that is used to perform a rotation in Euclidean space. For example the matrix
R = [cos θ  −sin θ; sin θ  cos θ]
rotates points in the xy-Cartesian plane counterclockwise through an angle θ about the origin of the Cartesian coordinate system. To perform the rotation, the position of each point must be represented by a column vector v, containing the coordinates of the point. A rotated vector is obtained by using the matrix multiplication Rv (see below for details).
In two and three dimensions, rotation matrices are among the simplest algebraic descriptions of rotations, and are used extensively for computations in geometry, physics, and computer graphics. Though most applications involve rotations in two or three dimensions, rotation matrices can be defined for n-dimensional space.
Rotation matrices are always square, with real entries. Algebraically, a rotation matrix in n dimensions is an n × n special orthogonal matrix, i.e. an orthogonal matrix whose determinant is 1: det R = 1. The set of all rotation matrices forms a group, known as the rotation group or the special orthogonal group. It is a subset of the orthogonal group, which includes reflections and consists of all orthogonal matrices with determinant 1 or −1, and of the special linear group, which includes all volume-preserving transformations and consists of matrices with determinant 1.
Contents
1 Rotations in two dimensions
1.1 Non-standard orientation
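A minimal sketch of the rotation just described (ours, not the article's; the helper name rotation is an assumption):

import numpy as np

def rotation(theta):
    # Counterclockwise rotation of the plane through angle theta (radians)
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s],
                     [s,  c]])

R = rotation(np.pi / 2)      # a quarter turn
v = np.array([1.0, 0.0])     # a point on the x-axis, as a column vector
print(R @ v)                 # -> [0, 1]: rotated onto the y-axis
print(np.linalg.det(R))      # -> 1.0: special orthogonal, as the article says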
-
§9.2 Orthogonal Matrices and Similarity Transformations
§9.2 Orthogonal Matrices and Similarity Transformations
Def: A matrix Q ∈ R^{n×n} is said to be orthogonal if its columns {q^(1), q^(2), ..., q^(n)} form an orthonormal set in R^n.
Thm: Suppose the matrix Q ∈ R^{n×n} is orthogonal. Then
(i) Q is invertible with Q^{-1} = Q^T;
(ii) for any x, y ∈ R^n, (Qx)^T (Qy) = x^T y;
(iii) for any x ∈ R^n, ||Qx||_2 = ||x||_2.
Ex:
H = [1/2 1/2 1/2 1/2; −1/2 1/2 −1/2 1/2; −1/2 −1/2 1/2 1/2; 1/2 −1/2 −1/2 1/2], and H^T H = I.
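The theorem's claims are easy to verify numerically for the example matrix H (a sketch of ours, not part of the lecture):

import numpy as np

# The lecture's example matrix H, transcribed from the slide
H = 0.5 * np.array([[ 1,  1,  1,  1],
                    [-1,  1, -1,  1],
                    [-1, -1,  1,  1],
                    [ 1, -1, -1,  1]])

print(np.allclose(H.T @ H, np.eye(4)))        # True: columns are orthonormal
print(np.allclose(np.linalg.inv(H), H.T))     # True: Q^{-1} = Q^T

x, y = np.random.randn(4), np.random.randn(4)
print(np.isclose((H @ x) @ (H @ y), x @ y))   # True: inner products preserved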
-
The Jordan Canonical Forms of complex orthogonal and skew-symmetric matrices. Roger A. Horn, Dennis I. Merino
Linear Algebra and its Applications 302–303 (1999) 411–421. www.elsevier.com/locate/laa
The Jordan Canonical Forms of complex orthogonal and skew-symmetric matrices
Roger A. Horn (Department of Mathematics, University of Utah, 155 South 1400 East, Salt Lake City, UT 84112-0090, USA), Dennis I. Merino (Department of Mathematics, Southeastern Louisiana University, Hammond, LA 70402-0687, USA)
Received 25 August 1999; accepted 3 September 1999. Submitted by B. Cain. Dedicated to Hans Schneider.
Abstract: We study the Jordan Canonical Forms of complex orthogonal and skew-symmetric matrices, and consider some related results. © 1999 Elsevier Science Inc. All rights reserved.
Keywords: Canonical form; Complex orthogonal matrix; Complex skew-symmetric matrix
1. Introduction and notation
Every square complex matrix A is similar to its transpose, A^T ([2, Section 3.2.3] or [1, Chapter XI, Theorem 5]), and the similarity class of the n-by-n complex symmetric matrices is all of M_n [2, Theorem 4.4.9], the set of n-by-n complex matrices. However, other natural similarity classes of matrices are non-trivial and can be characterized by simple conditions involving the Jordan Canonical Form. For example, A is similar to its complex conjugate, Ā (and hence also to its adjoint, A* = Ā^T), if and only if A is similar to a real matrix [2, Theorem 4.1.7]; the Jordan Canonical Form of such a matrix can contain only Jordan blocks with real eigenvalues and pairs of Jordan blocks of the form J_k(λ) ⊕ J_k(λ̄) for non-real λ. We denote by J_k(λ) the standard upper triangular k-by-k Jordan block with eigenvalue λ.
* Corresponding author.
-
Linear Programming
Linear Programming. Lecture 1: Linear Algebra Review
Outline: 1. Linear Algebra Review; 2. Linear Algebra Review; 3. Block Structured Matrices; 4. Gaussian Elimination Matrices; 5. Gauss-Jordan Elimination (Pivoting)
Matrices in R^{m×n}: a matrix A ∈ R^{m×n} can be written entrywise as
A = [a_11 a_12 ... a_1n; a_21 a_22 ... a_2n; ...; a_m1 a_m2 ... a_mn],
by columns as A = [a_•1 a_•2 ... a_•n], or by rows as A = [a_1•; a_2•; ...; a_m•].
Its transpose is
A^T = [a_11 a_21 ... a_m1; a_12 a_22 ... a_m2; ...; a_1n a_2n ... a_mn] = [a_•1^T; a_•2^T; ...; a_•n^T] = [a_1•^T a_2•^T ... a_m•^T].
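A small sketch (ours, not the slides') of the column, row, and transpose correspondences in NumPy:

import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])   # A in R^{m x n} with m = 2, n = 3

print(A[:, 0])   # column a_{.1}: [1, 4]
print(A[1, :])   # row a_{2.}:    [4, 5, 6]

# Transposing swaps the roles: row j of A^T is column j of A
print(np.array_equal(A.T[0, :], A[:, 0]))   # True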
-
Fifth Lecture
Lecture #3: Orthogonal Matrices and Matrix Norms
We repeat the definition of an orthogonal set and an orthonormal set.
Definition 1. A set of k vectors {u_1, u_2, ..., u_k}, where each u_i ∈ R^n, is said to be orthogonal with respect to the inner product (·, ·) if (u_i, u_j) = 0 for i ≠ j. The set is said to be orthonormal if it is orthogonal and (u_i, u_i) = 1 for i = 1, 2, ..., k.
The definition of an orthogonal matrix is related to the definition for vectors, but with a subtle difference.
Definition 2. The matrix U = (u_1, u_2, ..., u_k) ∈ R^{n×k} whose columns form an orthonormal set is said to be left orthogonal. If k = n, that is, U is square, then U is said to be an orthogonal matrix.
Note that the columns of (left) orthogonal matrices are orthonormal, not merely orthogonal. Square complex matrices whose columns form an orthonormal set are called unitary.
Example 1. Here are some common 2 × 2 orthogonal matrices:
U = [1 0; 0 1]
U = √0.5 [1 −1; 1 1]
U = [0.8 0.6; 0.6 −0.8]
U = [cos θ sin θ; −sin θ cos θ]
Let x ∈ R^n. Then
||Ux||_2^2 = (Ux)^T (Ux) = x^T U^T U x = x^T x = ||x||_2^2,
so ||Ux||_2 = ||x||_2. This property is called orthogonal invariance; it is an important and useful property of the two-norm and orthogonal transformations. That is, orthogonal transformations DO NOT AFFECT the two-norm; there is no comparable property for the one-norm or ∞-norm.
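A numerical illustration of orthogonal invariance (ours, not the lecture's): the two-norm of Ux matches that of x, while the one-norm generally does not.

import numpy as np

theta = 0.7
U = np.array([[ np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])

x = np.array([3.0, -4.0])
print(np.linalg.norm(U @ x, 2), np.linalg.norm(x, 2))  # both 5.0: two-norm preserved
print(np.linalg.norm(U @ x, 1), np.linalg.norm(x, 1))  # differ: no such property for the one-norm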
-
Advanced Linear Algebra (MA251) Lecture Notes: Contents
Algebra I – Advanced Linear Algebra (MA251) Lecture Notes
Derek Holt and Dmitriy Rumynin, year 2009 (revised at the end)
Contents
1 Review of Some Linear Algebra ... 3
1.1 The matrix of a linear map with respect to a fixed basis ... 3
1.2 Change of basis ... 4
2 The Jordan Canonical Form ... 4
2.1 Introduction ... 4
2.2 The Cayley-Hamilton theorem ... 6
2.3 The minimal polynomial ... 7
2.4 Jordan chains and Jordan blocks ... 9
2.5 Jordan bases and the Jordan canonical form ... 10
2.6 The JCF when n = 2 and 3 ... 11
2.7 The general case ... 14
2.8 Examples ... 15
2.9 Proof of Theorem 2.9 (non-examinable) ... 16
2.10 Applications to difference equations ... 17
2.11 Functions of matrices and applications to differential equations ... 19
3 Bilinear Maps and Quadratic Forms ... 21
3.1 Bilinear maps: definitions ... 21
3.2 Bilinear maps: change of basis ... 22
3.3 Quadratic forms: introduction ... 22
3.4 Quadratic forms: definitions ... 25
3.5 Change of variable under the general linear group ... 26
3.6 Change of variable under the orthogonal group ... 29
3.7 Applications of quadratic forms to geometry ... 33
3.7.1 Reduction of the general second degree equation ... 33
3.7.2 The case n = 2 ... 34
3.7.3 The case n = 3 ... 34
3.8 Unitary, hermitian and normal matrices ... 35
3.9 Applications to quantum mechanics ... 41
4 Finitely Generated Abelian Groups ... 44
4.1 Definitions ... 44
4.2 Subgroups, cosets and quotient groups ... 45
4.3 Homomorphisms and the first isomorphism theorem ... 48
4.4 Free abelian groups ... 50
4.5 Unimodular elementary row and column operations and the Smith normal form for integral matrices
-
Gaussian Elimination and LU Decomposition (Supplement for MA511)
GAUSSIAN ELIMINATION AND LU DECOMPOSITION (SUPPLEMENT FOR MA511)
D. ARAPURA
Gaussian elimination is the go-to method for all basic linear algebra classes, including this one. We now summarize the main ideas.
1. Matrix multiplication
The rule for multiplying matrices is, at first glance, a little complicated. If A is m × n and B is n × p, then C = AB is defined and of size m × p with entries
c_ij = Σ_{k=1}^{n} a_ik b_kj.
The main reason for defining it this way will be explained when we talk about linear transformations later on.
THEOREM 1.1. (1) Matrix multiplication is associative, i.e. A(BC) = (AB)C. (Henceforth, we will drop parentheses.) (2) If A is m × n and I_{m×m} and I_{n×n} denote the identity matrices of the indicated sizes, then A I_{n×n} = I_{m×m} A = A. (We usually just write I; the size is understood from context.)
On the other hand, matrix multiplication is almost never commutative; in other words, generally AB ≠ BA when both are defined. So we have to be careful not to inadvertently use it in a calculation. Given an n × n matrix A, the inverse, if it exists, is another n × n matrix A^{-1} such that
A A^{-1} = I,   A^{-1} A = I.
A matrix is called invertible or nonsingular if A^{-1} exists. In practice, it is only necessary to check one of these.
THEOREM 1.2. If B is n × n and AB = I or BA = I, then A is invertible and B = A^{-1}.
Proof. The proof of invertibility of A will need to wait until we talk about determinants.
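As an illustration of the entry formula, here is a naive triple-loop multiplication in Python (a sketch of ours, not the supplement's; the helper name matmul is an assumption), checked against NumPy's built-in product and showing that AB ≠ BA in general:

import numpy as np

def matmul(A, B):
    # c_ij = sum over k of a_ik * b_kj, exactly as in the text
    m, n = A.shape
    n2, p = B.shape
    assert n == n2, "inner dimensions must agree"
    C = np.zeros((m, p))
    for i in range(m):
        for j in range(p):
            for k in range(n):
                C[i, j] += A[i, k] * B[k, j]
    return C

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])
print(matmul(A, B))                          # [[2, 1], [4, 3]]
print(np.array_equal(matmul(A, B), A @ B))   # True
print(np.array_equal(A @ B, B @ A))          # False: not commutative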