Canonical Forms of Pseudo-Orthogonal Matrices


CANONICAL FORMS OF PSEUDO-ORTHOGONAL MATRICES

BY G. ABRAHAM, B. M. ARTHUR AND P. SWAMIDHAS
(Madras Christian College, Madras)

Received May 15, 1971

(Communicated by Prof. P. L. Bhatnagar, F.A.Sc.)

ABSTRACT

Canonical forms are found for all real four-dimensional matrices of the pseudo-orthogonal group, which differs from the Lorentz group only in that its metric has two plus signs and two minus signs.

INTRODUCTION

Real orthogonal matrices in n dimensions have a very simple canonical form: we can construct a basis in which the matrix appears as the direct sum of a diagonal matrix, with +1 and −1 in the diagonal, and two-dimensional rotations.¹ In the same way, canonical forms for Lorentz matrices in n dimensions have been derived,²,³ the largest matrix in the direct sums being of order 3.

A common feature of the canonical forms for orthogonal and Lorentz matrices is that each component matrix in the direct sum is of the form exp B, where B is antisymmetric (for orthogonal matrices) or pseudo-antisymmetric (for Lorentz matrices), the latter being defined by

B^t G' = -G' B

where B^t is the transpose of B and G′ is a diagonal matrix with one element −1 and all others +1.

The canonical forms have been used by Wigner³ for finding the representations of the Lorentz group, which are fundamental in the theory of elementary particles. Canonical forms for other pseudo-orthogonal groups have been derived⁴ for similar applications.

1. In this paper we derive canonical forms for real pseudo-orthogonal matrices L of order 4, defined by

L^t G L = G   (1)

where G = [g_{ij}] is a diagonal matrix with two elements +1 and two elements −1. Matrices defined by (1) are matrices of isometric transformations in a pseudo-Euclidean space in which the inner product of two vectors x = (a_1, a_2, a_3, a_4) and y = (b_1, b_2, b_3, b_4) is defined by

(x, y) = \sum_{i,j} g_{ij} a_i b_j   (2)

A set of 4 vectors e_1, e_2, e_3, e_4 is said to be a pseudo-orthonormal basis if (e_i, e_j) = g_{ij}.

The first step in the reduction of an isometric transformation to a canonical form is to make a list of the possible Jordan normal forms. This is considerably simplified by the following theorem⁵: if an isometric transformation of a non-degenerate bilinear-metric space contains the elementary divisor (\lambda - a)^m with multiplicity k, then it also contains the elementary divisor (\lambda - a^{-1})^m with multiplicity k.

To derive a canonical form, we must obtain a pseudo-orthonormal set of basis vectors from linear combinations of the basis vectors x_i of the Jordan form. This is a generalisation of the familiar orthogonalising procedure for unitary spaces. We use Lagrange's algorithm for quadratic forms to transform the Gram matrix [(x_i, x_j)] to the diagonal form [g_{ij}]. Only Jordan forms for which this is possible will be retained in our list. This method will be illustrated in the two following sections.

2. Let us denote by L_1 the pseudo-orthogonal matrices L which have a complex eigenvalue β with |β| ≠ 1. From the theorem quoted above and from the fact that L is real, it immediately follows that L_1 has 4 different eigenvalues, β, β̄, β⁻¹ and β̄⁻¹. It is simpler to derive at first a canonical form for the corresponding pseudo-antisymmetric matrices B, which are defined by B^t G = -G B and have the property⁶ that exp B is a pseudo-orthogonal matrix.
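Indeed, if B^t G = −GB then B^t = −GBG (since G² = I), so (exp B)^t G exp B = G exp(−B) G G exp B = G exp(−B) exp B = G; that is, exp B satisfies (1). The short sketch below is an illustrative check of ours, not part of the paper: the metric ordering chosen for G, the random test matrix, and the use of NumPy/SciPy are assumptions of the sketch. It uses the easily verified fact that every matrix of the form B = GS with S antisymmetric is pseudo-antisymmetric.

```python
import numpy as np
from scipy.linalg import expm

# One possible ordering of a metric with two plus and two minus signs.
G = np.diag([1.0, -1.0, -1.0, 1.0])

rng = np.random.default_rng(0)
S = rng.standard_normal((4, 4))
S = S - S.T                    # antisymmetric: S^t = -S

B = G @ S                      # pseudo-antisymmetric: B^t G = -G B
assert np.allclose(B.T @ G, -G @ B)

L = expm(B)                    # exp B should satisfy L^t G L = G
print(np.allclose(L.T @ G @ L, G))   # True
```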
Let us denote by B_1 the matrices B whose eigenvalues are of the form α, ᾱ, −α and −ᾱ, where α = a + ib and neither a nor b is zero. Obviously the Jordan forms for the L_1 and B_1 matrices are diagonal. Let the eigenvalue equations of B_1 be

B_1 x_i = \alpha_i x_i   (3)

where α_1 = α, α_2 = ᾱ, α_3 = −α, α_4 = −ᾱ and x_1 = u_1 + iu_2, x_2 = x̄_1, x_3 = u_3 + iu_4, x_4 = x̄_3. Then, using the antisymmetry property of B_1, we can show that the Gram matrix [(u_i, u_j)] has the form

[(u_i, u_j)] = \frac{1}{2} \begin{pmatrix} 0 & 0 & p & q \\ 0 & 0 & q & -p \\ p & q & 0 & 0 \\ q & -p & 0 & 0 \end{pmatrix}   (4)

whose determinant is (p² + q²)²/16. Since x_1, x_2, x_3, x_4 form a basis, the matrix (4) is non-singular, and so p and q cannot both be zero. Using the method of Lagrange (for diagonalising quadratic forms) we can find 4 vectors

e_i = \sum_{k=1}^{4} a_{ki} u_k = \sum_{k=1}^{4} t_{ki} x_k   (5)

such that they form a pseudo-orthonormal basis. The fact that such a basis can be formed shows that the set of eigenvalues we are considering is an allowed one. If p > 0, we can define the vectors e_i by the equations

e_1 = p^{-1/2} (1 + \mu^2)^{-1/2} (u_1 + u_3 + \mu u_2 + \mu u_4),
e_3 = p^{-1/2} (1 + \mu^2)^{-1/2} (u_1 - u_3 + \mu u_2 - \mu u_4),
e_2 = p^{-1/2} (u_2 + u_4), \qquad e_4 = p^{-1/2} (u_2 - u_4)   (6)

where µ = q/p. If p < 0, we must replace p^{1/2} by (−p)^{1/2} in (6). From (3) we derive

B_1 u_1 = a u_1 - b u_2, \qquad B_1 u_3 = -a u_3 + b u_4,
B_1 u_2 = b u_1 + a u_2, \qquad B_1 u_4 = -b u_3 - a u_4   (7)

From (6) and (7) we can find the matrix B_{1,0} = [b_{ij}] defined by

B_1 e_j = \sum_i b_{ij} e_i   (8)

B_{1,0} is found to be

B_{1,0} = \begin{pmatrix} O_2 & A_1 \\ A_1 & O_2 \end{pmatrix}   (9)

where

A_1 = \begin{pmatrix} a + \mu b & (1 + \mu^2)^{1/2}\, b \\ -(1 + \mu^2)^{1/2}\, b & a - \mu b \end{pmatrix}   (10)

When p = 0, we can define the basis vectors by

e_1 = q^{-1/2} (u_1 + u_4), \quad e_2 = q^{-1/2} (u_2 - u_3), \quad e_3 = q^{-1/2} (u_1 - u_4), \quad e_4 = q^{-1/2} (u_2 + u_3)

if q > 0, and by the same equations with (−q)^{1/2} instead of q^{1/2} if q < 0. Then we obtain a canonical form which is the same as (9) and (10) with µ = 0.

We now go back to the L_1 matrices. Put β = exp α′, where α′ = a′ + ib′ with a′ ≠ 0 and b′ ≠ 0. The eigenvalue equations can be written in the form

L_1 x_i' = \beta_i x_i'   (11)

where β_1 = β, β_2 = β̄, β_3 = β⁻¹, β_4 = β̄⁻¹ and x_1′ = u_1′ + iu_2′, x_2′ = x̄_1′, x_3′ = u_3′ + iu_4′, x_4′ = x̄_3′. Then, using the pseudo-orthogonal property of L_1, we can show that the Gram matrix [(u_i′, u_j′)] has the same form as that of [(u_i, u_j)] given in (4), where now p = 2(u_1′, u_3′) and q = 2(u_2′, u_3′). This enables us to define a basis e_i′ by putting e_i′ for e_i and u_i′ for u_i in (6). Since the vectors e_i′ are linear combinations of the vectors x_i′, we can put

e_i' = \sum_k t_{ki} x_k', \qquad x_i' = \sum_k s_{ki} e_k'   (12)

From (11) and (12) we have

L e_j' = \sum_k t_{kj} L x_k' = \sum_k t_{kj} \beta_k x_k' = \sum_{i,k} s_{ik} \beta_k t_{kj} e_i' = \sum_i l_{ij} e_i'

where

l_{ij} = \sum_k s_{ik} \beta_k t_{kj}   (13)

Similarly, from (3), (5) and (8) we have

b_{ij} = \sum_k s_{ik} \alpha_k t_{kj}   (14)

If T = [t_{ij}], L_{1,0} = [l_{ij}], and D, D′ are the diagonal matrices with α_j, α_j′ respectively as diagonal elements, then from (13) and (14)

L_{1,0} = T^{-1} \exp(D')\, T = \exp(T^{-1} D' T) = \exp B'_{1,0}   (15)

where B′_{1,0} is defined as in (9) and (10) with a′, b′ instead of a and b. L_{1,0} is therefore a canonical form for pseudo-orthogonal matrices L which have a complex eigenvalue β whose modulus is not equal to 1.
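As a check on (9), (10) and (15), the following numerical sketch may help; it is ours, not the paper's. The parameter values a, b, µ are arbitrary, and G = diag(1, −1, −1, 1) is the signature delivered by the basis (6).

```python
import numpy as np
from scipy.linalg import expm

a, b, mu = 0.7, 1.3, 0.4                 # arbitrary test values (a, b nonzero)
r = np.sqrt(1.0 + mu**2)

A1 = np.array([[a + mu*b,  r*b],
               [-r*b,      a - mu*b]])   # eq. (10)
O2 = np.zeros((2, 2))
B10 = np.block([[O2, A1],
                [A1, O2]])               # eq. (9)

G = np.diag([1.0, -1.0, -1.0, 1.0])      # signature of the basis (6)
assert np.allclose(B10.T @ G, -G @ B10)  # B_{1,0} is pseudo-antisymmetric

L10 = expm(B10)                          # eq. (15)
assert np.allclose(L10.T @ G @ L10, G)   # L_{1,0} is pseudo-orthogonal

# The eigenvalues of L_{1,0} are exp(±(a±ib)), as required for the class L_1.
print(np.sort_complex(np.linalg.eigvals(L10)))
print(np.sort_complex(np.exp(np.array([a+1j*b, a-1j*b, -a-1j*b, -a+1j*b]))))
```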
3. We consider next the class of pseudo-orthogonal matrices L_2 which have the Jordan form

J_2 = \begin{pmatrix} e^{i\theta} & 1 \\ 0 & e^{i\theta} \end{pmatrix} \oplus \begin{pmatrix} e^{-i\theta} & 1 \\ 0 & e^{-i\theta} \end{pmatrix}   (16)

where θ is real and non-zero. As in the previous section, we can construct a set of pseudo-orthonormal basis vectors from linear combinations of the basis vectors of the Jordan form (16). A canonical form for this class of matrices is

L_{2,0} = \begin{pmatrix} X(\theta) & -X(\tfrac{\pi}{2} + \theta) \\ X(\tfrac{\pi}{2} + \theta) & X(\theta) \end{pmatrix} = \exp \begin{pmatrix} O_2 & A_2 \\ -A_2 & O_2 \end{pmatrix}

where

X(\theta) = \begin{pmatrix} \cos\theta + \mu \sin\theta & -\mu \sin\theta \\ \mu \sin\theta & \cos\theta - \mu \sin\theta \end{pmatrix}   (17)

A_2 = \theta I_2 + \mu \begin{pmatrix} -1 & 1 \\ -1 & 1 \end{pmatrix}   (18)

The parameter µ arises in the same way as in the canonical form L_{1,0}.

We find only one more allowed Jordan form with all complex eigenvalues, namely the diagonal form with four different eigenvalues e^{iθ_1}, e^{-iθ_1}, e^{iθ_2}, e^{-iθ_2}. Unlike the two previous cases, we can find a canonical form which is the direct sum of two 2 × 2 matrices, namely the sum of two proper rotations in 2-dimensional Euclidean space, that is

L_{3,0} = R(\theta_1) \oplus R(\theta_2)   (19)

where

R(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}

All the pseudo-orthogonal matrices considered so far have positive determinant and can be put in the exponential form. When the eigenvalues are e^{iθ}, e^{-iθ}, 1, −1, the matrix has negative determinant, and the canonical form is

L_{4,0} = R(\theta) \oplus (1) \oplus (-1)   (20)

4. The remaining allowed Jordan forms have only real eigenvalues. For the Jordan form

J_5 = \begin{pmatrix} \pm e^{\varphi} & 1 \\ 0 & \pm e^{\varphi} \end{pmatrix} \oplus \begin{pmatrix} \pm e^{-\varphi} & 1 \\ 0 & \pm e^{-\varphi} \end{pmatrix}   (21)

where φ ≠ 0, the corresponding canonical form is

L_{5,0} = \pm \begin{pmatrix} H(\varphi) & P(\varphi) \\ P'(\varphi) & H(\varphi) \end{pmatrix} = \pm \exp \begin{pmatrix} \varphi K & P(0) \\ P'(0) & \varphi K \end{pmatrix}   (22)

where

H(\varphi) = \begin{pmatrix} \cosh\varphi & \sinh\varphi \\ \sinh\varphi & \cosh\varphi \end{pmatrix}, \quad P(\varphi) = \frac{e^{\varphi}}{2} \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}, \quad P'(\varphi) = \frac{e^{-\varphi}}{2} \begin{pmatrix} -1 & 1 \\ 1 & -1 \end{pmatrix}, \quad K = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}   (23)

For the Jordan form

J_6 = \begin{pmatrix} \pm 1 & 1 \\ 0 & \pm 1 \end{pmatrix} \oplus \begin{pmatrix} \pm 1 & 1 \\ 0 & \pm 1 \end{pmatrix}   (24)

the canonical form is

L_{6,0} = \pm \begin{pmatrix} I_2 & P(0) \\ P'(0) & I_2 \end{pmatrix} = \pm \exp \begin{pmatrix} O_2 & P(0) \\ P'(0) & O_2 \end{pmatrix}   (25)

where

P(0) = \frac{1}{2} \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}, \qquad P'(0) = \frac{1}{2} \begin{pmatrix} -1 & 1 \\ 1 & -1 \end{pmatrix}   (26)

are the matrices of (23) taken at φ = 0. Since P(0) P′(0) = P′(0) P(0) = O_2, the exponential series in (25) terminates after the linear term.
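The exponentials in (17), (18) and in (22), (23) both rest on the block identity exp[[O, A],[−A, O]] = [[cos A, sin A],[−sin A, cos A]] and its hyperbolic analogue. The sketch below verifies both canonical forms numerically; it is ours, not the paper's, the parameter values are arbitrary, and the metric ordering G = diag(1, −1, 1, −1) is the one belonging to the bases used above.

```python
import numpy as np
from scipy.linalg import expm

I2, O2 = np.eye(2), np.zeros((2, 2))
G = np.diag([1.0, -1.0, 1.0, -1.0])       # metric ordering for these bases

# --- Class L_2, eqs. (17)-(18) ---
theta, mu = 0.9, 0.35                     # arbitrary test values
N = np.array([[-1.0, 1.0],
              [-1.0, 1.0]])               # nilpotent: N @ N = 0
A2 = theta*I2 + mu*N                      # eq. (18)

def X(t):                                 # eq. (17)
    return np.cos(t)*I2 - np.sin(t)*mu*N

L20 = expm(np.block([[O2, A2], [-A2, O2]]))
assert np.allclose(L20, np.block([[X(theta),           -X(np.pi/2 + theta)],
                                  [X(np.pi/2 + theta),  X(theta)]]))
assert np.allclose(L20.T @ G @ L20, G)    # L_{2,0} satisfies (1)

# --- Class L_5, eqs. (22)-(23) ---
phi = 0.6                                 # arbitrary nonzero value
K = np.array([[0.0, 1.0], [1.0, 0.0]])
P  = lambda f: 0.5*np.exp(f)  * np.array([[ 1.0, 1.0], [1.0,  1.0]])
Pp = lambda f: 0.5*np.exp(-f) * np.array([[-1.0, 1.0], [1.0, -1.0]])
H = np.cosh(phi)*I2 + np.sinh(phi)*K      # eq. (23)

L50 = expm(np.block([[phi*K, P(0)], [Pp(0), phi*K]]))
assert np.allclose(L50, np.block([[H, P(phi)], [Pp(phi), H]]))
assert np.allclose(L50.T @ G @ L50, G)    # L_{5,0} satisfies (1)
print("all checks passed")
```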