3D Geometry Basics (For Robotics) Lecture Notes


Marc Toussaint
Machine Learning & Robotics lab, FU Berlin
Arnimallee 7, 14195 Berlin, Germany

October 30, 2011

This document introduces some basic geometry, focussing on 3D transformations, and introduces proper conventions for notation. There exist one-to-one implementations of the concepts and equations in libORS.

1 Rotations

There are many ways to represent rotations in SO(3). We restrict ourselves to three basic ones: rotation matrix, rotation vector, and quaternion. The rotation vector is also the most natural representation of a "rotation velocity" (angular velocity). Euler angles or roll-pitch-yaw are an alternative, but they have singularities and I don't recommend using them in practice.

A rotation matrix is a matrix R ∈ ℝ^{3×3} which is orthonormal (columns and rows are orthogonal unit vectors, implying determinant 1). While a 3×3 matrix has 9 degrees of freedom (DoFs), the constraints of orthogonality and determinant 1 reduce this: the set of rotation matrices has only 3 DoFs (the local Lie algebra is 3-dimensional).

- The application of R to a vector x is simply the matrix-vector product Rx.
- Concatenation of two rotations R1 and R2 is the normal matrix-matrix product R1 R2.
- Inversion is the transpose, R⁻¹ = Rᵀ.

A rotation vector is an unconstrained vector w ∈ ℝ³. The vector's direction w̄ = w/|w| determines the rotation axis; the vector's length |w| = θ determines the rotation angle (in radians, using the right-thumb convention).

- The application of a rotation described by w ∈ ℝ³ to a vector x ∈ ℝ³ is given by Rodrigues' formula

    w · x = cos θ x + sin θ (w̄ × x) + (1 − cos θ) w̄ (w̄ᵀ x)   (1)

  where θ = |w| is the rotation angle and w̄ = w/θ the unit-length rotation axis.
- The inverse rotation is described by the negative of the rotation vector.
- Concatenation is non-trivial in this representation and we don't discuss it here. In practice, a rotation vector is first converted to a rotation matrix or quaternion.
- Conversion to a matrix: for every vector w ∈ ℝ³ we define its skew-symmetric matrix as

    ŵ = [  0   −w3   w2 ]
        [  w3   0   −w1 ]
        [ −w2   w1   0  ]   (2)

  Note that such skew-symmetric matrices are related to the cross product: w × v = ŵ v, where the cross product is rewritten as a matrix product. The rotation matrix R(w) that corresponds to a given rotation vector w is

    R(w) = exp(ŵ)   (3)
         = cos θ I + sin θ ŵ/θ + (1 − cos θ) w wᵀ/θ²   (4)

  The exp function is called the exponential map, generating a group element (= rotation matrix) from an element of the Lie algebra (= skew matrix). The second formula is Rodrigues' formula: the first term is a diagonal matrix (I is the 3D identity matrix), the second term is the skew-symmetric part, and the last term the symmetric part (wwᵀ is also called the outer product).

Angular velocity & derivative of a rotation matrix: We represent angular velocities by a vector w ∈ ℝ³: the direction w̄ determines the rotation axis, the length |w| is the rotation velocity (in radians per second). When a body's orientation at time t is described by a rotation matrix R(t) and the body's angular velocity is w, then

    Ṙ(t) = ŵ R(t).   (5)

(That's intuitive to see for a rotation about the x-axis with velocity 1.) Some insights from this relation: since R(t) must always be a rotation matrix (fulfilling orthogonality and determinant 1), its derivative Ṙ(t) must also fulfill certain constraints; in particular, it can only live in a 3-dimensional subspace. It turns out that the derivative Ṙ of a rotation matrix R must always be a skew-symmetric matrix ŵ times R; anything else would be inconsistent with the constraints of orthogonality and determinant 1.

Note also that, assuming R(0) = I, the solution to the differential equation Ṙ(t) = ŵ R(t) can be written as R(t) = exp(t ŵ), where the exponential function notation denotes the more general exponential map used in the context of Lie groups. It also follows that R(w) from (3) is the rotation matrix you get when you rotate for 1 second with the angular velocity described by w.
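As a concrete check of the formulas above, here is a minimal numpy sketch (my own illustration, not code from libORS): it builds the skew-symmetric matrix of Eq. (2), computes R(w) via Rodrigues' formula (4), and verifies that applying R agrees with the direct vector formula (1).

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix w_hat of Eq. (2), so that w x v = w_hat @ v."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def rotation_matrix(w):
    """Rotation matrix R(w) via Rodrigues' formula, Eq. (4)."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:                  # near-zero rotation: R ~ I
        return np.eye(3)
    return (np.cos(theta) * np.eye(3)
            + np.sin(theta) * skew(w) / theta
            + (1 - np.cos(theta)) * np.outer(w, w) / theta**2)

def rotate_vector(w, x):
    """Apply the rotation described by w directly to x, Eq. (1)."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return x.copy()
    wn = w / theta                     # unit-length rotation axis
    return (np.cos(theta) * x
            + np.sin(theta) * np.cross(wn, x)
            + (1 - np.cos(theta)) * wn * (wn @ x))

w = np.array([0.1, 0.4, -0.2])         # example rotation vector
x = np.array([1.0, 0.0, 0.0])
R = rotation_matrix(w)
assert np.allclose(R @ x, rotate_vector(w, x))   # (1) and (4) agree
assert np.allclose(R.T @ R, np.eye(3))           # orthonormality of R
```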
A quaternion: (I'm not describing the general definition here, only the "quaternion to represent rotation" definition.) A quaternion is a unit-length 4D vector r ∈ ℝ⁴; the first entry r0 is related to the rotation angle θ via r0 = cos(θ/2), and the last three entries r̄ ≡ r_{1:3} are related to the unit-length rotation axis w̄ via r̄ = sin(θ/2) w̄.

- The inverse of a quaternion is given by negating r̄: r⁻¹ = (r0, −r̄) (or, alternatively, by negating r0).
- The concatenation of two rotations r, r' is given by the quaternion product

    r ◦ r' = (r0 r0' − r̄ᵀ r̄', r0 r̄' + r0' r̄ + r̄ × r̄')   (6)

- The application of a rotation quaternion r to a vector x can be expressed by converting the vector first to the quaternion (0, x) and then computing

    r · x = (r ◦ (0, x) ◦ r⁻¹)_{1:3}.   (7)

  I think it is a bit more efficient to first convert the rotation quaternion r to the equivalent rotation matrix R, given by

    R = [ 1 − r22 − r33   r12 − r03       r13 + r02     ]
        [ r12 + r03       1 − r11 − r33   r23 − r01     ]
        [ r13 − r02       r23 + r01       1 − r11 − r22 ]   with r_ij := 2 r_i r_j.   (8)

  (Note: in comparison to (3) this does not require computing a sin or cos.) Inversely, the quaternion r for a given matrix R is

    r0 = ½ √(1 + tr R)   (9)
    r3 = (R21 − R12)/(4 r0)   (10)
    r2 = (R13 − R31)/(4 r0)   (11)
    r1 = (R32 − R23)/(4 r0).   (12)

Angular velocity → quaternion velocity: Given an angular velocity w ∈ ℝ³ and a current quaternion r(t) ∈ ℝ⁴, what is the time derivative ṙ(t) (in analogy to Eq. (5))? For simplicity, let's first assume |w| = 1. For a small time interval δ, w generates a rotation vector δw, which converts to the quaternion

    Δr = (cos(δ/2), sin(δ/2) w).   (13)

That rotation is concatenated on the left of the original quaternion,

    r(t + δ) = Δr ◦ r(t).   (14)

Now we take the derivative w.r.t. δ and evaluate it at δ = 0: every cos(δ/2) differentiates to −½ sin(δ/2), which evaluates to zero, and every sin(δ/2) differentiates to ½ cos(δ/2), which evaluates to ½, so we have

    ṙ(t) = ½ (−wᵀ r̄, r0 w + w × r̄) = ½ (0, w) ◦ r(t).   (15)

Here (0, w) ∈ ℝ⁴ is a four-vector; for |w| = 1 it is a normalized quaternion. However, due to the linearity, the equation holds for any w.
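The quaternion operations (6)-(8) translate almost line by line into code. The following sketch (again my own illustration, not libORS) uses the scalar-first layout r = (r0, r1, r2, r3) from the text and checks that applying a rotation via Eq. (7) matches the matrix of Eq. (8):

```python
import numpy as np

def quat_mult(r, s):
    """Concatenation r ∘ s of Eq. (6), scalar part first."""
    r0, rv = r[0], r[1:]
    s0, sv = s[0], s[1:]
    return np.concatenate(([r0 * s0 - rv @ sv],
                           r0 * sv + s0 * rv + np.cross(rv, sv)))

def quat_inv(r):
    """Inverse of a unit quaternion: negate the vector part."""
    return np.array([r[0], -r[1], -r[2], -r[3]])

def quat_apply(r, x):
    """Rotate x via Eq. (7): the vector part of r ∘ (0, x) ∘ r^-1."""
    return quat_mult(quat_mult(r, np.concatenate(([0.0], x))), quat_inv(r))[1:]

def quat_to_matrix(r):
    """Rotation matrix of Eq. (8), with r_ij := 2 r_i r_j."""
    p = 2.0 * np.outer(r, r)           # p[i, j] = 2 r_i r_j
    return np.array([
        [1 - p[2,2] - p[3,3], p[1,2] - p[0,3],     p[1,3] + p[0,2]],
        [p[1,2] + p[0,3],     1 - p[1,1] - p[3,3], p[2,3] - p[0,1]],
        [p[1,3] - p[0,2],     p[2,3] + p[0,1],     1 - p[1,1] - p[2,2]]])

theta = 0.7                            # rotation by theta about the z-axis
r = np.array([np.cos(theta / 2), 0.0, 0.0, np.sin(theta / 2)])
x = np.array([1.0, 2.0, 3.0])
assert np.allclose(quat_apply(r, x), quat_to_matrix(r) @ x)   # (7) agrees with (8)
```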
Quaternion velocity → angular velocity: The following is relevant when taking derivatives w.r.t. the parameters of a quaternion, e.g., for a ball joint represented as a quaternion. Given ṙ, we have

    ṙ ◦ r⁻¹ = ½ (0, w) ◦ r ◦ r⁻¹ = ½ (0, w)   (16)

which allows us to read off the angular velocity w induced by a change of quaternion. However, the zero scalar part on the RHS holds only iff ṙ is orthogonal to r, i.e., ṙᵀ r = ṙ0 r0 + ṙ_{1:3}ᵀ r̄ = 0 (see (6)). In case ṙᵀ r ≠ 0, the change in length of the quaternion does not represent any angular velocity; in typical kinematics engines a non-unit length is simply ignored. Therefore one first orthogonalizes, ṙ ← ṙ − r (ṙᵀ r).

As a special case of application, consider computing the partial derivatives w.r.t. the quaternion coordinates, where ṙ ranges over the unit vectors e0, .., e3. In this case the orthogonalization becomes simply ei ← ei − r ri, and

    (ei − ri r) ◦ r⁻¹ = ei ◦ r⁻¹ − ri (1, 0, 0, 0)   (17)
    wi = 2 [ei ◦ r⁻¹]_{1:3}   (18)

where wi is the rotation vector implied by ṙ = ei.

In case the original quaternion r wasn't normalized (which can happen, e.g., if a standard optimization algorithm searches in the quaternion configuration space), then r actually represents the normalized quaternion r/√(rᵀr), and (due to the linearity of the above) the rotation vector implied by ṙ = ei is

    wi = (2/√(rᵀr)) [ei ◦ r⁻¹]_{1:3}.   (19)

2 Transformations

We consider two types of transformations here: either static (translation + rotation) or dynamic (translation + velocity + rotation + angular velocity). The first maps between two static reference frames, the latter between moving reference frames, e.g. between reference frames attached to moving rigid bodies.

2.1 Static transformations

Concerning static transformations, there are again different representations:

A homogeneous matrix is a 4×4 matrix of the form

    T = [ R  t ]
        [ 0  1 ]   (20)

where R is a 3×3 matrix (a rotation in our case) and t a 3-vector (a translation).

For a dynamic transform (t, r, v, w), concatenating (t1, r1, v1, w1) and (t2, r2, v2, w2) includes the components

    v = v1 + w1 × (r1 · t2) + r1 · v2   (26)
    r = r1 ◦ r2   (27)
    w = w1 + r1 · w2   (28)

For completeness, a footnote in the full notes also describes how accelerations transform, including the case when the transform itself is accelerating. The inverse (t', r', v', w') = (t, r, v, w)⁻¹ of a dynamic transform is given as

    t' = −r⁻¹ · t   (35)
    r' = r⁻¹   (36)
    v' = r⁻¹ · (w × t − v)   (37)
    w' = −r⁻¹ · w   (38)
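The homogeneous-matrix representation (20) makes concatenation and inversion of static transforms one-liners. A short sketch under the same caveats (illustrative numpy code, not libORS); the closed-form inverse T⁻¹ = [Rᵀ, −Rᵀ t; 0, 1] used below is the standard rigid-transform inverse.

```python
import numpy as np

def homogeneous(R, t):
    """Build the 4x4 homogeneous matrix T of Eq. (20) from R and t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def invert(T):
    """Closed-form inverse of a rigid transform: [R^T, -R^T t; 0, 1]."""
    R, t = T[:3, :3], T[:3, 3]
    return homogeneous(R.T, -R.T @ t)

def apply(T, x):
    """Apply T to a 3D point x using homogeneous coordinates."""
    return (T @ np.append(x, 1.0))[:3]

# Example: frame A -> B is a 90-degree rotation about z plus a shift;
# frame B -> C is a pure translation.
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
T_ab = homogeneous(Rz, np.array([1.0, 0.0, 0.0]))
T_bc = homogeneous(np.eye(3), np.array([0.0, 2.0, 0.0]))

T_ac = T_ab @ T_bc                     # concatenation is a matrix product
x = np.array([0.5, 0.5, 0.0])
assert np.allclose(apply(invert(T_ac), apply(T_ac, x)), x)
```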