Eigenvalues and Eigenvectors

CHAPTER 7  Eigenvalues and Eigenvectors

7.1 ELEMENTARY PROPERTIES OF EIGENSYSTEMS

Up to this point, almost everything was either motivated by or evolved from the consideration of systems of linear algebraic equations. But we have come to a turning point, and from now on the emphasis will be different. Rather than being concerned with systems of algebraic equations, many topics will be motivated or driven by applications involving systems of linear differential equations and their discrete counterparts, difference equations.

For example, consider the problem of solving the system of two first-order linear differential equations $du_1/dt = 7u_1 - 4u_2$ and $du_2/dt = 5u_1 - 2u_2$. In matrix notation, this system is
\[
\begin{pmatrix} u_1' \\ u_2' \end{pmatrix} = \begin{pmatrix} 7 & -4 \\ 5 & -2 \end{pmatrix} \begin{pmatrix} u_1 \\ u_2 \end{pmatrix} \quad\text{or, equivalently,}\quad \mathbf{u}' = A\mathbf{u}, \tag{7.1.1}
\]
where $\mathbf{u}' = \binom{u_1'}{u_2'}$, $A = \bigl(\begin{smallmatrix} 7 & -4 \\ 5 & -2 \end{smallmatrix}\bigr)$, and $\mathbf{u} = \binom{u_1}{u_2}$. Because solutions of a single equation $u' = \lambda u$ have the form $u = \alpha e^{\lambda t}$, we are motivated to seek solutions of (7.1.1) that also have the form
\[
u_1 = \alpha_1 e^{\lambda t} \quad\text{and}\quad u_2 = \alpha_2 e^{\lambda t}. \tag{7.1.2}
\]
Differentiating these two expressions and substituting the results in (7.1.1) yields
\[
\begin{aligned}
\alpha_1 \lambda e^{\lambda t} &= 7\alpha_1 e^{\lambda t} - 4\alpha_2 e^{\lambda t} \\
\alpha_2 \lambda e^{\lambda t} &= 5\alpha_1 e^{\lambda t} - 2\alpha_2 e^{\lambda t}
\end{aligned}
\;\Longrightarrow\;
\begin{aligned}
\alpha_1 \lambda &= 7\alpha_1 - 4\alpha_2 \\
\alpha_2 \lambda &= 5\alpha_1 - 2\alpha_2
\end{aligned}
\;\Longrightarrow\;
\begin{pmatrix} 7 & -4 \\ 5 & -2 \end{pmatrix} \begin{pmatrix} \alpha_1 \\ \alpha_2 \end{pmatrix} = \lambda \begin{pmatrix} \alpha_1 \\ \alpha_2 \end{pmatrix}.
\]
In other words, solutions of (7.1.1) having the form (7.1.2) can be constructed provided solutions for $\lambda$ and $\mathbf{x} = \binom{\alpha_1}{\alpha_2}$ in the matrix equation $A\mathbf{x} = \lambda\mathbf{x}$ can be found. Clearly, $\mathbf{x} = \mathbf{0}$ trivially satisfies $A\mathbf{x} = \lambda\mathbf{x}$, but $\mathbf{x} = \mathbf{0}$ provides no useful information concerning the solution of (7.1.1). What we really need are scalars $\lambda$ and nonzero vectors $\mathbf{x}$ that satisfy $A\mathbf{x} = \lambda\mathbf{x}$. Writing $A\mathbf{x} = \lambda\mathbf{x}$ as $(A - \lambda I)\mathbf{x} = \mathbf{0}$ shows that the vectors of interest are the nonzero vectors in $N(A - \lambda I)$. But $N(A - \lambda I)$ contains nonzero vectors if and only if $A - \lambda I$ is singular. Therefore, the scalars of interest are precisely the values of $\lambda$ that make $A - \lambda I$ singular or, equivalently, the $\lambda$'s for which $\det(A - \lambda I) = 0$. These observations motivate the definition of eigenvalues and eigenvectors. (The words eigenvalue and eigenvector are derived from the German word eigen, which means owned by or peculiar to. Eigenvalues and eigenvectors are sometimes called characteristic values and characteristic vectors, proper values and proper vectors, or latent values and latent vectors.)

Eigenvalues and Eigenvectors
For an $n \times n$ matrix $A$, scalars $\lambda$ and vectors $\mathbf{x}_{n \times 1} \neq \mathbf{0}$ satisfying $A\mathbf{x} = \lambda\mathbf{x}$ are called eigenvalues and eigenvectors of $A$, respectively, and any such pair $(\lambda, \mathbf{x})$ is called an eigenpair for $A$. The set of distinct eigenvalues, denoted by $\sigma(A)$, is called the spectrum of $A$.
• $\lambda \in \sigma(A) \iff A - \lambda I \text{ is singular} \iff \det(A - \lambda I) = 0$.  (7.1.3)
• $\{\mathbf{x} \neq \mathbf{0} \mid \mathbf{x} \in N(A - \lambda I)\}$ is the set of all eigenvectors associated with $\lambda$. From now on, $N(A - \lambda I)$ is called an eigenspace for $A$.
• Nonzero row vectors $\mathbf{y}^*$ such that $\mathbf{y}^*(A - \lambda I) = \mathbf{0}$ are called left-hand eigenvectors for $A$ (see Exercise 7.1.18 on p. 503).

Geometrically, $A\mathbf{x} = \lambda\mathbf{x}$ says that under transformation by $A$, eigenvectors experience only changes in magnitude or sign; the orientation of $A\mathbf{x}$ in $\Re^n$ is the same as that of $\mathbf{x}$. The eigenvalue $\lambda$ is simply the amount of "stretch" or "shrink" to which the eigenvector $\mathbf{x}$ is subjected when transformed by $A$. Figure 7.1.1 depicts the situation in $\Re^2$.

[Figure 7.1.1: a vector $\mathbf{x}$ and its image $A\mathbf{x} = \lambda\mathbf{x}$ lying along the same line.]

Let's now face the problem of finding the eigenvalues and eigenvectors of the matrix $A = \bigl(\begin{smallmatrix} 7 & -4 \\ 5 & -2 \end{smallmatrix}\bigr)$ appearing in (7.1.1).
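Before working through this computation by hand, it can help to see what a numerical routine produces for the same matrix. The sketch below is not part of the original text; it assumes Python with NumPy and simply verifies the defining relation $A\mathbf{x} = \lambda\mathbf{x}$ for the computed eigenpairs.

```python
# A minimal numerical check, assuming NumPy (not part of the original text).
import numpy as np

A = np.array([[7.0, -4.0],
              [5.0, -2.0]])

# np.linalg.eig returns the eigenvalues and unit-norm eigenvectors (as columns).
eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)  # [3. 2.] -- the order is not guaranteed

# Verify the defining relation A x = lambda x for each computed eigenpair.
for lam, x in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ x, lam * x)
```

Note that the library normalizes each eigenvector to unit length; any nonzero scalar multiple would serve equally well, which is exactly the point of the eigenspace description below.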
As noted in (7.1.3), the eigenvalues are the scalars $\lambda$ for which $\det(A - \lambda I) = 0$. Expansion of $\det(A - \lambda I)$ produces the second-degree polynomial
\[
p(\lambda) = \det(A - \lambda I) = \begin{vmatrix} 7 - \lambda & -4 \\ 5 & -2 - \lambda \end{vmatrix} = \lambda^2 - 5\lambda + 6 = (\lambda - 2)(\lambda - 3),
\]
which is called the characteristic polynomial for $A$. Consequently, the eigenvalues for $A$ are the solutions of the characteristic equation $p(\lambda) = 0$ (i.e., the roots of the characteristic polynomial), and they are $\lambda = 2$ and $\lambda = 3$.

The eigenvectors associated with $\lambda = 2$ and $\lambda = 3$ are simply the nonzero vectors in the eigenspaces $N(A - 2I)$ and $N(A - 3I)$, respectively. But determining these eigenspaces amounts to nothing more than solving the two homogeneous systems $(A - 2I)\mathbf{x} = \mathbf{0}$ and $(A - 3I)\mathbf{x} = \mathbf{0}$. For $\lambda = 2$,
\[
A - 2I = \begin{pmatrix} 5 & -4 \\ 5 & -4 \end{pmatrix} \longrightarrow \begin{pmatrix} 1 & -4/5 \\ 0 & 0 \end{pmatrix} \;\Longrightarrow\; \begin{array}{l} x_1 = (4/5)x_2 \\ x_2 \text{ is free} \end{array}
\;\Longrightarrow\; N(A - 2I) = \left\{ \mathbf{x} \,\middle|\, \mathbf{x} = \alpha \begin{pmatrix} 4/5 \\ 1 \end{pmatrix} \right\}.
\]
For $\lambda = 3$,
\[
A - 3I = \begin{pmatrix} 4 & -4 \\ 5 & -5 \end{pmatrix} \longrightarrow \begin{pmatrix} 1 & -1 \\ 0 & 0 \end{pmatrix} \;\Longrightarrow\; \begin{array}{l} x_1 = x_2 \\ x_2 \text{ is free} \end{array}
\;\Longrightarrow\; N(A - 3I) = \left\{ \mathbf{x} \,\middle|\, \mathbf{x} = \beta \begin{pmatrix} 1 \\ 1 \end{pmatrix} \right\}.
\]
In other words, the eigenvectors of $A$ associated with $\lambda = 2$ are all nonzero multiples of $\mathbf{x} = (4/5 \;\; 1)^T$, and the eigenvectors associated with $\lambda = 3$ are all nonzero multiples of $\mathbf{y} = (1 \;\; 1)^T$. Although there are an infinite number of eigenvectors associated with each eigenvalue, each eigenspace is one dimensional, so, for this example, there is only one independent eigenvector associated with each eigenvalue.

Let's complete the discussion concerning the system of differential equations $\mathbf{u}' = A\mathbf{u}$ in (7.1.1). Coupling (7.1.2) with the eigenpairs $(\lambda_1, \mathbf{x})$ and $(\lambda_2, \mathbf{y})$ of $A$ computed above produces two solutions of $\mathbf{u}' = A\mathbf{u}$, namely,
\[
\mathbf{u}_1 = e^{\lambda_1 t}\mathbf{x} = e^{2t} \begin{pmatrix} 4/5 \\ 1 \end{pmatrix} \quad\text{and}\quad \mathbf{u}_2 = e^{\lambda_2 t}\mathbf{y} = e^{3t} \begin{pmatrix} 1 \\ 1 \end{pmatrix}.
\]
It turns out that all other solutions are linear combinations of these two particular solutions; more is said in §7.4 on p. 541.

Below is a summary of some general statements concerning features of the characteristic polynomial and the characteristic equation.

Characteristic Polynomial and Equation
• The characteristic polynomial of $A_{n \times n}$ is $p(\lambda) = \det(A - \lambda I)$. The degree of $p(\lambda)$ is $n$, and the leading term in $p(\lambda)$ is $(-1)^n \lambda^n$.
• The characteristic equation for $A$ is $p(\lambda) = 0$.
• The eigenvalues of $A$ are the solutions of the characteristic equation or, equivalently, the roots of the characteristic polynomial.
• Altogether, $A$ has $n$ eigenvalues, but some may be complex numbers (even if the entries of $A$ are real numbers), and some eigenvalues may be repeated.
• If $A$ contains only real numbers, then its complex eigenvalues must occur in conjugate pairs; i.e., if $\lambda \in \sigma(A)$, then $\bar{\lambda} \in \sigma(A)$.

Proof. The fact that $\det(A - \lambda I)$ is a polynomial of degree $n$ whose leading term is $(-1)^n \lambda^n$ follows from the definition of determinant given in (6.1.1). If
\[
\delta_{ij} = \begin{cases} 1 & \text{if } i = j, \\ 0 & \text{if } i \neq j, \end{cases}
\]
then
\[
\det(A - \lambda I) = \sum_{p} \sigma(p)\,(a_{1p_1} - \delta_{1p_1}\lambda)(a_{2p_2} - \delta_{2p_2}\lambda) \cdots (a_{np_n} - \delta_{np_n}\lambda)
\]
is a polynomial in $\lambda$. The highest power of $\lambda$ is produced by the term
\[
(a_{11} - \lambda)(a_{22} - \lambda) \cdots (a_{nn} - \lambda),
\]
so the degree is $n$, and the leading term is $(-1)^n \lambda^n$. The discussion given earlier contained the proof that the eigenvalues are precisely the solutions of the characteristic equation, but, for the sake of completeness, it's repeated below:
\[
\lambda \in \sigma(A) \iff A\mathbf{x} = \lambda\mathbf{x} \text{ for some } \mathbf{x} \neq \mathbf{0} \iff (A - \lambda I)\mathbf{x} = \mathbf{0} \text{ for some } \mathbf{x} \neq \mathbf{0} \iff A - \lambda I \text{ is singular} \iff \det(A - \lambda I) = 0.
\]
The fundamental theorem of algebra is a deep result that insures every polynomial of degree $n$ with real or complex coefficients has $n$ roots, but some roots may be complex numbers (even if all the coefficients are real), and some roots may be repeated. Consequently, $A$ has $n$ eigenvalues, but some may be complex, and some may be repeated. The fact that complex eigenvalues of real matrices must occur in conjugate pairs is a consequence of the fact that the roots of a polynomial with real coefficients occur in conjugate pairs.
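The statements in the summary are easy to check numerically. The following sketch (again assuming NumPy; not part of the original text) recovers the characteristic polynomial of the matrix from (7.1.1). One caveat on sign conventions: np.poly returns the coefficients of the monic polynomial $\det(\lambda I - A) = (-1)^n p(\lambda)$, which coincides with $p(\lambda)$ here because $n = 2$ is even.

```python
# A hedged sketch, assuming NumPy: recover the characteristic polynomial
# and its roots for the matrix of (7.1.1).
import numpy as np

A = np.array([[7.0, -4.0],
              [5.0, -2.0]])

# np.poly(A) gives the coefficients of det(lambda*I - A), highest degree first.
coeffs = np.poly(A)
print(coeffs)            # [ 1. -5.  6.]  i.e., lambda^2 - 5*lambda + 6
print(np.roots(coeffs))  # [3. 2.] -- the eigenvalues, as the summary asserts
```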
Example 7.1.1

Problem: Determine the eigenvalues and eigenvectors of $A = \bigl(\begin{smallmatrix} 1 & -1 \\ 1 & 1 \end{smallmatrix}\bigr)$.

Solution: The characteristic polynomial is
\[
\det(A - \lambda I) = \begin{vmatrix} 1 - \lambda & -1 \\ 1 & 1 - \lambda \end{vmatrix} = (1 - \lambda)^2 + 1 = \lambda^2 - 2\lambda + 2,
\]
so the characteristic equation is $\lambda^2 - 2\lambda + 2 = 0$. Application of the quadratic formula yields
\[
\lambda = \frac{2 \pm \sqrt{-4}}{2} = \frac{2 \pm 2\sqrt{-1}}{2} = 1 \pm i,
\]
so the spectrum of $A$ is $\sigma(A) = \{1 + i,\; 1 - i\}$. Notice that the eigenvalues are complex conjugates of each other, as they must be because complex eigenvalues of real matrices must occur in conjugate pairs. Now find the eigenspaces. For $\lambda = 1 + i$,
\[
A - \lambda I = \begin{pmatrix} -i & -1 \\ 1 & -i \end{pmatrix} \longrightarrow \begin{pmatrix} 1 & -i \\ 0 & 0 \end{pmatrix} \;\Longrightarrow\; N(A - \lambda I) = \operatorname{span}\left\{ \begin{pmatrix} i \\ 1 \end{pmatrix} \right\}.
\]
For $\lambda = 1 - i$,
\[
A - \lambda I = \begin{pmatrix} i & -1 \\ 1 & i \end{pmatrix} \longrightarrow \begin{pmatrix} 1 & i \\ 0 & 0 \end{pmatrix} \;\Longrightarrow\; N(A - \lambda I) = \operatorname{span}\left\{ \begin{pmatrix} -i \\ 1 \end{pmatrix} \right\}.
\]
In other words, the eigenvectors associated with $\lambda_1 = 1 + i$ are all nonzero multiples of $\mathbf{x}_1 = (i \;\; 1)^T$, and the eigenvectors associated with $\lambda_2 = 1 - i$ are all nonzero multiples of $\mathbf{x}_2 = (-i \;\; 1)^T$. In previous sections, you could be successful by thinking only in terms of real numbers and by dancing around those statements and issues involving complex numbers. But this example makes it clear that avoiding complex numbers, even when dealing with real matrices, is no longer possible; very innocent looking matrices, such as the one in this example, can possess complex eigenvalues and eigenvectors.
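A numerical routine reports the same conjugate pair. A minimal sketch, assuming NumPy as before (not part of the original text):

```python
# A minimal sketch, assuming NumPy: the real matrix of Example 7.1.1 has
# complex eigenvalues, and they occur as a conjugate pair.
import numpy as np

A = np.array([[1.0, -1.0],
              [1.0,  1.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)  # [1.+1.j 1.-1.j]
assert np.isclose(eigenvalues[0], np.conj(eigenvalues[1]))

# Each computed eigenvector is a complex scalar multiple of (i, 1)^T or (-i, 1)^T,
# so the defining relation A x = lambda x still holds exactly as in the real case.
for lam, x in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ x, lam * x)
```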