Chapter 1. Matrix Algebra: Definitions and Operations

1.1 Matrices

Matrices play very important roles in the computation and analysis of several engineering problems. First, matrices allow for compact notation. As discussed below, matrices are collections of objects arranged in rows and columns. The symbols representing these collections can then induce an algebra, in which different operations such as matrix addition or matrix multiplication can be defined. (This compact representation has become even more significant today with the advent of computer software that allows simple statements such as A+B or A*B to be evaluated directly.) Aside from the convenience in representation, once the matrices have been constructed, other internal properties can be derived and assessed. Properties such as determinant, rank, trace, eigenvalues and eigenvectors (all to be defined later) determine characteristics of the systems from which the matrices were obtained. These properties can then help in the analysis and improvement of the systems under study.

It can be argued that some problems may be solved without the use of matrices. However, as the complexity of the problem increases, matrices can help improve the tractability of the solution.

Definition 1.1 A matrix is a collection of objects, called the elements of the matrix, arranged in rows and columns.

These elements could be numbers,

\[
A = \begin{pmatrix} 1 & 0 & 0.3 \\ -2 & 3+i & -\tfrac{1}{2} \end{pmatrix}, \qquad \text{with } i = \sqrt{-1},
\]

or functions,

\[
B = \begin{pmatrix} 1 & 2x(t) + a \\ \int \sin(\omega t)\,dt & dy/dt \end{pmatrix}.
\]

We restrict the discussion to matrices that contain elements for which binary operations such as addition, subtraction, multiplication and division among the elements make algebraic sense. To distinguish the elements from the collection, we refer to the valid elements of the matrix as scalars. Thus, a scalar is not the same as a matrix having only one row and one column.

We will denote the element of matrix A positioned at the ith row and jth column as $a_{ij}$. We will use capital letters to denote matrices. For example, let matrix A have m rows and n columns,

\[
A = \begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{m1} & a_{m2} & \cdots & a_{mn}
\end{pmatrix}
\]

We will also use the symbol "[=]" to denote "has the size", i.e., A [=] m x n means A has m rows and n columns.

A row vector is simply a matrix having one row,

\[
v = (v_1, v_2, \ldots, v_n).
\]

If v has n elements, then v is said to have length n. Likewise, a column vector is simply a matrix having one column,

\[
v = \begin{pmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{pmatrix}.
\]

By default, "vector" will imply a column vector, unless it has been specified to be a row vector.

A square matrix is a matrix with the same number of columns and rows. Special cases include:

1. L, a lower triangular matrix

   \[
   L = \begin{pmatrix}
   \ell_{11} & 0 & 0 & \cdots & 0 \\
   \ell_{21} & \ell_{22} & 0 & \cdots & 0 \\
   \ell_{31} & \ell_{32} & \ell_{33} & \cdots & 0 \\
   \vdots & \vdots & \vdots & \ddots & \vdots \\
   \ell_{n1} & \ell_{n2} & \ell_{n3} & \cdots & \ell_{nn}
   \end{pmatrix}
   \]

2. U, an upper triangular matrix

   \[
   U = \begin{pmatrix}
   u_{11} & u_{12} & u_{13} & \cdots & u_{1n} \\
   0 & u_{22} & u_{23} & \cdots & u_{2n} \\
   0 & 0 & u_{33} & \cdots & u_{3n} \\
   \vdots & \vdots & \vdots & \ddots & \vdots \\
   0 & 0 & 0 & \cdots & u_{nn}
   \end{pmatrix}
   \]

3. D, a diagonal matrix

   \[
   D = \begin{pmatrix}
   d_{11} & 0 & 0 & \cdots & 0 \\
   0 & d_{22} & 0 & \cdots & 0 \\
   0 & 0 & d_{33} & \cdots & 0 \\
   \vdots & \vdots & \vdots & \ddots & \vdots \\
   0 & 0 & 0 & \cdots & d_{nn}
   \end{pmatrix}
   \]

   A shorthand notation is D = diag(d_{11}, d_{22}, ..., d_{nn}).

4. I, the identity matrix, I = diag(1, 1, ..., 1). We will also use $I_n$ to denote an identity matrix of size n.
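Since the chapter notes that software can evaluate statements such as A+B or A*B directly, a brief illustration may be helpful. The following Python/NumPy sketch is an added example, not part of the original text; the matrix entries are arbitrary. It builds the special square matrices defined above and checks sizes in the sense of the [=] notation.

    import numpy as np

    # An arbitrary 3 x 3 square matrix used only for illustration.
    A = np.array([[1.0, 2.0, 3.0],
                  [4.0, 5.0, 6.0],
                  [7.0, 8.0, 9.0]])

    L = np.tril(A)            # lower triangular part: entries above the diagonal set to 0
    U = np.triu(A)            # upper triangular part: entries below the diagonal set to 0
    D = np.diag(np.diag(A))   # diagonal matrix diag(a11, a22, a33)
    I3 = np.eye(3)            # identity matrix I_3 = diag(1, 1, 1)

    # The size notation A [=] m x n corresponds to the shape attribute.
    print(A.shape)            # (3, 3), i.e., A [=] 3 x 3
    print(L)
    print(D)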
1.2 Matrix Operations

1. Matrix Addition. Let A = (a_{ij}), B = (b_{ij}), C = (c_{ij}); then A + B = C if and only if $c_{ij} = a_{ij} + b_{ij}$.

   Condition: A, B and C all have the same size.

2. Scalar Matrix Multiplication. Let A = (a_{ij}), B = (b_{ij}), and $\alpha$ a scalar (e.g., a real number or a complex number); then $\alpha A = B$ if and only if $b_{ij} = \alpha\, a_{ij}$.

   Condition: A and B have the same size.

3. Matrix Multiplication. Let A [=] m x n, B [=] n x p, C [=] m x p; then

   \[
   A * B = C
   \]

   if and only if

   \[
   c_{ij} = \sum_{\ell=1}^{n} a_{i\ell}\, b_{\ell j}.
   \]

   Remarks:

   (a) A shorthand notation is AB.

   (b) For the operation AB, we say A pre-multiplies B and B post-multiplies A.

   (c) When the number of columns of A is equal to the number of rows of B, we say that A is conformable with B for the operation AB.

   (d) In general, AB is not equal to BA. For those special cases in which AB = BA, we say that A commutes with B.

4. Hadamard-Schur Product. Let A = (a_{ij}), B = (b_{ij}), C = (c_{ij}); then

   \[
   A \circ B = C
   \]

   if and only if $c_{ij} = a_{ij} b_{ij}$.

   Condition: A, B and C all have the same size.

5. Kronecker (Direct) Product. Let A = (a_{ij}) [=] m x n and B be any matrix; then

   \[
   A \otimes B = C = \begin{pmatrix}
   a_{11}B & a_{12}B & \cdots & a_{1n}B \\
   a_{21}B & a_{22}B & \cdots & a_{2n}B \\
   \vdots & \vdots & \ddots & \vdots \\
   a_{m1}B & a_{m2}B & \cdots & a_{mn}B
   \end{pmatrix}
   \]

6. Transpose. Let A = (a_{ij}); then the transpose of A, denoted $A^T$, is obtained by interchanging the positions of rows and columns. (In other journals and books, the transpose symbol is an apostrophe, i.e., A' instead of $A^T$.) For example, suppose A is given by

   \[
   A = \begin{pmatrix} a & b & c & d \\ e & f & g & h \end{pmatrix}
   \]

   then the transpose is given by

   \[
   A^T = \begin{pmatrix} a & e \\ b & f \\ c & g \\ d & h \end{pmatrix}
   \]

   If $A = A^T$, then A is said to be symmetric. If $A = -A^T$, then A is said to be skew-symmetric.

   If the elements of the matrix come from the complex number field, then a related operation is the conjugate transpose $A^* = (\bar{a}_{ji})$, where A = (a_{ij}) and $\bar{a}$ is the complex conjugate of a. If $A = A^*$, then A is said to be Hermitian. If $A = -A^*$, then A is said to be skew-Hermitian.

7. Vectorization. Let A = (a_{ij}) [=] m x n; then

   \[
   x = \mathrm{vec}(A) = \begin{pmatrix} a_{11} \\ a_{21} \\ \vdots \\ a_{m1} \\ \vdots \\ a_{1n} \\ a_{2n} \\ \vdots \\ a_{mn} \end{pmatrix}
   \]

8. Determinant. Let A be a square matrix of size n; then the determinant of A, written det(A) or |A|, is given by

   \[
   |A| = \sum_{k_1 \neq k_2 \neq \cdots \neq k_n} p(k_1, \ldots, k_n)\, a_{1,k_1} a_{2,k_2} \cdots a_{n,k_n} \tag{1.1}
   \]

   where $p(k_1, \ldots, k_n) = (-1)^h$ is called the permutation index and h is equal to the number of flips needed to make the sequence $\{k_1, k_2, k_3, \ldots, k_n\}$ equal to the sequence $\{1, 2, 3, \ldots, n\}$.

   Example 1.1 Let A be a 3 x 3 matrix; then the determinant of A is obtained as follows:

       k1 k2 k3    h    (-1)^h a_{1,k1} a_{2,k2} a_{3,k3}
       1  2  3     0     +a_{11} a_{22} a_{33}
       1  3  2     1     -a_{11} a_{23} a_{32}
       2  1  3     1     -a_{12} a_{21} a_{33}
       2  3  1     2     +a_{12} a_{23} a_{31}
       3  1  2     2     +a_{13} a_{21} a_{32}
       3  2  1     1     -a_{13} a_{22} a_{31}

   \[
   |A| = a_{11}a_{22}a_{33} - a_{11}a_{23}a_{32} - a_{12}a_{21}a_{33} + a_{12}a_{23}a_{31} + a_{13}a_{21}a_{32} - a_{13}a_{22}a_{31}
   \]

   ♦♦♦

   From the definition given, we expect the summation to consist of n! terms. This definition is not usually used when doing actual determinant calculations. Instead, it is used more for proving theorems that involve determinants. It is crucial to remember that (1.1) is the definition of a determinant (and not the computational method using the cofactor that is developed below).

9. Cofactor of $a_{ij}$. Let $A_{\downarrow ij}$ denote the matrix obtained by deleting the ith row and jth column of A; then the cofactor of $a_{ij}$, denoted cof($a_{ij}$), is given by $(-1)^{i+j} |A_{\downarrow ij}|$.

   Using cofactors, the determinant of a matrix can be obtained recursively as follows:

   (a) The determinant of a 1 x 1 matrix is equal to that element, e.g., |(a)| = a.

   (b) The determinant of an n x n matrix can be obtained by column expansion,

   \[
   |A| = \sum_{i=1}^{n} a_{ik}\, \mathrm{cof}(a_{ik}) \qquad \text{(k is any one fixed column)}
   \]

   or by row expansion,

   \[
   |A| = \sum_{j=1}^{n} a_{kj}\, \mathrm{cof}(a_{kj}) \qquad \text{(k is any one fixed row)}
   \]

   A short numerical sketch of this recursive expansion is given below.
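To make the cofactor expansion concrete, here is a small Python/NumPy sketch, added as an illustration rather than taken from the original text. The function name det_by_cofactor and the choice of always expanding along the first row are assumptions made for the example; the result can be checked against numpy.linalg.det.

    import numpy as np

    def det_by_cofactor(A):
        """Determinant via row expansion along the first row (k = 1)."""
        A = np.asarray(A, dtype=float)
        n = A.shape[0]
        if n == 1:                     # |(a)| = a
            return A[0, 0]
        total = 0.0
        for j in range(n):
            # A with row 1 and column j+1 deleted (the minor matrix A_{down 1,j+1}).
            minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)
            cof = (-1) ** j * det_by_cofactor(minor)   # cof(a_{1,j+1}) = (-1)^{1+(j+1)} |minor|
            total += A[0, j] * cof
        return total

    A = np.array([[2.0, 1.0, 0.0],
                  [1.0, 3.0, 4.0],
                  [0.0, 5.0, 6.0]])
    print(det_by_cofactor(A))          # -10.0
    print(np.linalg.det(A))            # reference value from NumPy

For a 3 x 3 matrix this recursion expands into the same six signed products as the permutation formula (1.1) in Example 1.1.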
10. Matrix Adjoint. The matrix adjoint of a square matrix A, denoted adj(A), is obtained by first replacing each element $a_{ij}$ by its cofactor and then taking the transpose of the resulting matrix:

   \[
   \begin{pmatrix}
   a_{11} & a_{12} & \cdots & a_{1n} \\
   a_{21} & a_{22} & \cdots & a_{2n} \\
   \vdots & \vdots & \ddots & \vdots \\
   a_{n1} & a_{n2} & \cdots & a_{nn}
   \end{pmatrix}
   \xrightarrow{\text{replace with cofactors}}
   \begin{pmatrix}
   \mathrm{cof}(a_{11}) & \mathrm{cof}(a_{12}) & \cdots & \mathrm{cof}(a_{1n}) \\
   \mathrm{cof}(a_{21}) & \mathrm{cof}(a_{22}) & \cdots & \mathrm{cof}(a_{2n}) \\
   \vdots & \vdots & \ddots & \vdots \\
   \mathrm{cof}(a_{n1}) & \mathrm{cof}(a_{n2}) & \cdots & \mathrm{cof}(a_{nn})
   \end{pmatrix}
   \xrightarrow{\text{transpose}}
   \begin{pmatrix}
   \mathrm{cof}(a_{11}) & \mathrm{cof}(a_{21}) & \cdots & \mathrm{cof}(a_{n1}) \\
   \mathrm{cof}(a_{12}) & \mathrm{cof}(a_{22}) & \cdots & \mathrm{cof}(a_{n2}) \\
   \vdots & \vdots & \ddots & \vdots \\
   \mathrm{cof}(a_{1n}) & \mathrm{cof}(a_{2n}) & \cdots & \mathrm{cof}(a_{nn})
   \end{pmatrix}
   \]

11. Trace of a Square Matrix. The trace of an n x n matrix A, denoted tr(A), is given by

   \[
   \mathrm{tr}(A) = \sum_{i=1}^{n} a_{ii}
   \]

12. Inverse of a Square Matrix. The matrix denoted by $A^{-1}$ is called the inverse of A if and only if

   \[
   A^{-1} A = A A^{-1} = I
   \]

   where I is the identity matrix.

   Condition: The inverse of a square matrix exists only if its determinant is not equal to zero. A matrix whose determinant is zero is called a singular matrix.

   Lemma 1.1 The inverse of a square matrix A can be obtained using the identity

   \[
   A^{-1} = \frac{1}{|A|}\, \mathrm{adj}(A) \tag{1.2}
   \]

   (see page 23 for proof)

   Note that even though only nonsingular square matrices have inverses, all square matrices can still have matrix adjoints. A numerical illustration of (1.2) follows.
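As a numerical check of Lemma 1.1, the following Python/NumPy sketch (an added illustration, not from the original text; the helper names cofactor_matrix and adjoint are chosen here for clarity) builds adj(A) from cofactors and compares adj(A)/|A| with NumPy's built-in inverse.

    import numpy as np

    def cofactor_matrix(A):
        """Matrix of cofactors cof(a_ij) of a square matrix A."""
        A = np.asarray(A, dtype=float)
        n = A.shape[0]
        C = np.zeros((n, n))
        for i in range(n):
            for j in range(n):
                # Minor: A with row i and column j deleted.
                minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
                C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
        return C

    def adjoint(A):
        """Matrix adjoint adj(A): transpose of the cofactor matrix."""
        return cofactor_matrix(A).T

    A = np.array([[4.0, 7.0],
                  [2.0, 6.0]])

    A_inv = adjoint(A) / np.linalg.det(A)   # Lemma 1.1: A^{-1} = adj(A) / |A|
    print(A_inv)
    print(np.linalg.inv(A))                 # agrees with NumPy's inverse

Note that the division step fails exactly when |A| = 0, i.e., when A is singular, consistent with the condition stated above, while adj(A) itself is still defined.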