Lecture 13: Complex Eigenvalues & Factorization

Consider the rotation matrix
$$A = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}. \qquad (1)$$
To find its eigenvalues, we write
$$A - \lambda I = \begin{bmatrix} -\lambda & -1 \\ 1 & -\lambda \end{bmatrix}$$
and calculate its determinant:
$$\det(A - \lambda I) = \lambda^2 + 1 = 0.$$
We see that $A$ has only complex eigenvalues $\lambda = \pm\sqrt{-1} = \pm i$. Therefore it is impossible to diagonalize the rotation matrix using real matrices; in general, a matrix with complex eigenvalues cannot be diagonalized over the real numbers. In this lecture, we shall study matrices with complex eigenvalues.

Since the eigenvalues of a real matrix are roots of a characteristic polynomial with real coefficients, complex eigenvalues always appear in conjugate pairs: if $\lambda_0 = a + bi$ is a complex eigenvalue, so is its conjugate $\bar{\lambda}_0 = a - bi$. For any complex eigenvalue, we can find its (complex) eigenvectors in the same way as we did for real eigenvalues.

Example 13.1. For the matrix $A$ in (1) above, find eigenvectors.

Solution. We already know the eigenvalues: $\lambda_0 = i$ and $\bar{\lambda}_0 = -i$. For $\lambda_0 = i$, we solve $(A - iI)\vec{x} = \vec{0}$ by performing (complex) row operations, noting that $i^2 = -1$:
$$A - iI = \begin{bmatrix} -i & -1 \\ 1 & -i \end{bmatrix} \xrightarrow{\,-iR_1 + R_2 \to R_2\,} \begin{bmatrix} -i & -1 \\ 0 & 0 \end{bmatrix} \implies -ix_1 - x_2 = 0 \implies x_2 = -ix_1.$$
Taking $x_1 = 1$,
$$\vec{u} = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 1 \\ -i \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \end{bmatrix} + i\begin{bmatrix} 0 \\ -1 \end{bmatrix}$$
is an eigenvector for $\lambda = i$. For $\lambda = -i$, we proceed similarly:
$$A + iI = \begin{bmatrix} i & -1 \\ 1 & i \end{bmatrix} \xrightarrow{\,iR_1 + R_2 \to R_2\,} \begin{bmatrix} i & -1 \\ 0 & 0 \end{bmatrix} \implies ix_1 - x_2 = 0,$$
so taking $x_1 = 1$,
$$\vec{v} = \begin{bmatrix} 1 \\ i \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \end{bmatrix} + i\begin{bmatrix} 0 \\ 1 \end{bmatrix}$$
is an eigenvector for $\lambda = -i$. We note that $\bar{\lambda}_0 = -i$ is the conjugate of $\lambda_0 = i$, and $\vec{v} = \bar{\vec{u}}$ (the conjugate of the eigenvector $\vec{u}$) is an eigenvector for $\bar{\lambda}_0$. This observation can be justified as a general rule by taking conjugates in the eigenvector equation (recall that $A$ is real, so $\bar{A} = A$):
$$(A - \lambda_0 I)\vec{u} = \vec{0} \implies \overline{(A - \lambda_0 I)\vec{u}} = \vec{0} \implies (\bar{A} - \bar{\lambda}_0 I)\bar{\vec{u}} = \vec{0} \implies (A - \bar{\lambda}_0 I)\bar{\vec{u}} = \vec{0}.$$

Remark. In general, if $\vec{u}$ is an eigenvector associated with $\lambda$, then $\bar{\vec{u}}$, the conjugate of $\vec{u}$, is an eigenvector associated with $\bar{\lambda}$, the conjugate of $\lambda$. Therefore, for each pair $\lambda, \bar{\lambda}$ of eigenvalues, we only need to compute eigenvectors for $\lambda$; the eigenvectors for $\bar{\lambda}$ can then be obtained easily by taking conjugates.

Though $A$ is not diagonalizable in the classic sense, we can still simplify it by introducing a so-called "block-diagonal" matrix.

Example 13.2. For the matrix $A$ in (1) above, which has complex eigenvalues, we choose $P$ and $D$ as follows. Pick one complex eigenvalue and one of its eigenvectors:
$$\lambda_0 = i, \qquad \vec{u} = \begin{bmatrix} 1 \\ -i \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \end{bmatrix} + i\begin{bmatrix} 0 \\ -1 \end{bmatrix},$$
and separate their real and imaginary parts ($\operatorname{Re}(a+bi) = a$, $\operatorname{Im}(a+bi) = b$):
$$\operatorname{Re}(\lambda_0) = 0, \quad \operatorname{Im}(\lambda_0) = 1, \quad \operatorname{Re}(\vec{u}) = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \quad \operatorname{Im}(\vec{u}) = \begin{bmatrix} 0 \\ -1 \end{bmatrix}.$$
We then assemble $D$ and $P$ as
$$D = \begin{bmatrix} \operatorname{Re}(\lambda_0) & \operatorname{Im}(\lambda_0) \\ -\operatorname{Im}(\lambda_0) & \operatorname{Re}(\lambda_0) \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}, \qquad P = [\operatorname{Re}(\vec{u}),\ \operatorname{Im}(\vec{u})] = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}.$$
Note that $D$ is not really a diagonal matrix. We can verify that $AP = PD$:
$$AP = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}\begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}, \qquad PD = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}\begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}.$$
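This construction can also be checked numerically. The sketch below is not part of the original notes; it uses Python/NumPy and works for whichever eigenvector `np.linalg.eig` returns, because $A\vec{u} = \lambda\vec{u}$ with $\lambda = a + bi$ and $\vec{u} = \vec{x} + i\vec{y}$ already forces $A[\vec{x}\ \vec{y}] = [\vec{x}\ \vec{y}]\begin{bmatrix} a & b \\ -b & a \end{bmatrix}$.

```python
import numpy as np

# Rotation matrix from (1).
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])

evals, evecs = np.linalg.eig(A)

# Pick the eigenvalue with positive imaginary part (lambda_0 = i)
# together with one of its complex eigenvectors.
k = int(np.argmax(evals.imag))
lam, u = evals[k], evecs[:, k]

# Assemble P and D exactly as in Example 13.2.
P = np.column_stack([u.real, u.imag])
D = np.array([[ lam.real, lam.imag],
              [-lam.imag, lam.real]])

print(np.allclose(A @ P, P @ D))   # True: AP = PD
```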
In general, if a matrix $A$ has complex eigenvalues, it may be similar to a block-diagonal matrix $B$, i.e., there may exist an invertible matrix $P$ such that $AP = PB$, where $B$ has the form
$$B = \begin{bmatrix} C_1 & 0 & \cdots & 0 \\ 0 & C_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & C_r \end{bmatrix}, \qquad P = [P_1, P_2, \ldots, P_r]. \qquad (2)$$
Each $C_k$ is either a real eigenvalue $\lambda_k$ (a $1 \times 1$ matrix), or a $2 \times 2$ matrix (called a block) corresponding to a complex eigenvalue $\lambda_k = \operatorname{Re}(\lambda_k) + i\operatorname{Im}(\lambda_k)$, of the form
$$C_k = \begin{bmatrix} \operatorname{Re}(\lambda_k) & \operatorname{Im}(\lambda_k) \\ -\operatorname{Im}(\lambda_k) & \operatorname{Re}(\lambda_k) \end{bmatrix}, \qquad (3)$$
and $P$ is assembled accordingly: if $C_k$ is a real eigenvalue, the corresponding column $P_k$ of $P$ is a (real) eigenvector; if $C_k$ is a $2 \times 2$ block as in (3) associated with a complex eigenvalue $\lambda_k$, then $P_k$ consists of two columns, $P_k = [\operatorname{Re}(\vec{u}_k),\ \operatorname{Im}(\vec{u}_k)]$, where $\vec{u}_k$ is a complex eigenvector for $\lambda_k$ (note that both $\operatorname{Re}(\vec{u}_k)$ and $\operatorname{Im}(\vec{u}_k)$ are real vectors).

Example 13.3. Some examples of block-diagonal matrices of the form (2), written in block-matrix form versus ordinary matrix form.

1. A $3 \times 3$ matrix with one diagonal entry and one $2 \times 2$ block:
$$\begin{bmatrix} 2 & 0 \\ 0 & \begin{bmatrix} 2 & 3 \\ -3 & 2 \end{bmatrix} \end{bmatrix} = \begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & 3 \\ 0 & -3 & 2 \end{bmatrix}, \qquad (\lambda_1 = 2,\ \lambda_2 = 2 + 3i,\ \lambda_3 = 2 - 3i).$$

2. A $4 \times 4$ matrix with two diagonal entries and one $2 \times 2$ block:
$$\begin{bmatrix} -1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & \begin{bmatrix} -4 & 2 \\ -2 & -4 \end{bmatrix} \end{bmatrix} = \begin{bmatrix} -1 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 \\ 0 & 0 & -4 & 2 \\ 0 & 0 & -2 & -4 \end{bmatrix}, \qquad (\lambda_1 = -1,\ \lambda_2 = 2,\ \lambda_3 = -4 + 2i,\ \lambda_4 = -4 - 2i).$$

3. A $4 \times 4$ matrix with two $2 \times 2$ blocks:
$$\begin{bmatrix} \begin{bmatrix} -4 & 2 \\ -2 & -4 \end{bmatrix} & 0 \\ 0 & \begin{bmatrix} 2 & 3 \\ -3 & 2 \end{bmatrix} \end{bmatrix} = \begin{bmatrix} -4 & 2 & 0 & 0 \\ -2 & -4 & 0 & 0 \\ 0 & 0 & 2 & 3 \\ 0 & 0 & -3 & 2 \end{bmatrix}, \qquad (\lambda_1 = -4 + 2i,\ \lambda_2 = -4 - 2i,\ \lambda_3 = 2 + 3i,\ \lambda_4 = 2 - 3i).$$

The process of finding a block-diagonal matrix $B$ of the form (2) and an invertible $P$ such that $P^{-1}AP = B$ is called factorization. Note that if all eigenvalues are real, factorization is the same as diagonalization.

Factorization process:

Step #1. Find all eigenvalues.

Step #2. For each real eigenvalue $\lambda$, find a basis of the eigenspace $\operatorname{Null}(A - \lambda I)$. For each pair of complex eigenvalues $\lambda$ and $\bar{\lambda}$, find a (complex) basis of $\operatorname{Null}(A - \lambda I)$, and write each complex eigenvector in the basis in the form $\vec{u} = \operatorname{Re}(\vec{u}) + i\operatorname{Im}(\vec{u})$. Note that the total number of vectors obtained this way must equal the dimension; otherwise $A$ is not factorizable.

Step #3. Assemble $P$ and $B$ as in (2): if $C_1$ is a real eigenvalue, the corresponding column $P_1$ is one of its eigenvectors from the basis; if $C_1$ is a $2 \times 2$ block as in (3), the corresponding $P_1$ consists of the two columns $[\operatorname{Re}(\vec{u}),\ \operatorname{Im}(\vec{u})]$.

Remark. Suppose that $A$ is an upper (or lower) block-triangular matrix of the form
$$A = \begin{bmatrix} B_1 & * & \cdots & * \\ 0 & B_2 & \cdots & * \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & B_r \end{bmatrix},$$
where each $B_i$ is a square matrix (the sizes may differ). Then
$$\det(A) = \det(B_1)\det(B_2)\cdots\det(B_r), \qquad \det(A - \lambda I) = \det(B_1 - \lambda I)\det(B_2 - \lambda I)\cdots\det(B_r - \lambda I).$$
Therefore the eigenvalues of $A$ are exactly the collection of all eigenvalues of $B_1, \ldots, B_r$.

Example 13.4. Factorize
$$A = \begin{bmatrix} 2 & 9 & 0 & 2 \\ -1 & 2 & 1 & 0 \\ 0 & 0 & 3 & 0 \\ 0 & 0 & 1 & -1 \end{bmatrix}.$$

Solution. Step 1. Find eigenvalues. We notice that
$$A - \lambda I = \begin{bmatrix} 2-\lambda & 9 & 0 & 2 \\ -1 & 2-\lambda & 1 & 0 \\ 0 & 0 & 3-\lambda & 0 \\ 0 & 0 & 1 & -1-\lambda \end{bmatrix} = \begin{bmatrix} \begin{bmatrix} 2-\lambda & 9 \\ -1 & 2-\lambda \end{bmatrix} & * \\ 0 & \begin{bmatrix} 3-\lambda & 0 \\ 1 & -1-\lambda \end{bmatrix} \end{bmatrix}$$
is upper block-triangular. Thus
$$\det(A - \lambda I) = \det\begin{bmatrix} 2-\lambda & 9 \\ -1 & 2-\lambda \end{bmatrix} \cdot \det\begin{bmatrix} 3-\lambda & 0 \\ 1 & -1-\lambda \end{bmatrix} = \left[(2-\lambda)^2 + 9\right]\left[(3-\lambda)(-1-\lambda)\right] = 0.$$
Hence
$$(2-\lambda)^2 + 9 = 0 \implies \lambda^2 - 4\lambda + 13 = 0 \implies \lambda = 2 \pm 3i,$$
or
$$(3-\lambda)(-1-\lambda) = 0 \implies \lambda = 3,\ \lambda = -1.$$
There are two real eigenvalues $\lambda_1 = 3$, $\lambda_2 = -1$, and one pair of complex eigenvalues $\mu = 2 + 3i$ and $\bar{\mu} = 2 - 3i$, with real and imaginary parts $\operatorname{Re}(\mu) = 2$, $\operatorname{Im}(\mu) = 3$.
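As a quick sanity check (not part of the original notes), the eigenvalues found in Step 1 can be confirmed numerically:

```python
import numpy as np

# Matrix A from Example 13.4.
A = np.array([[ 2.0, 9.0, 0.0,  2.0],
              [-1.0, 2.0, 1.0,  0.0],
              [ 0.0, 0.0, 3.0,  0.0],
              [ 0.0, 0.0, 1.0, -1.0]])

print(np.sort_complex(np.linalg.eigvals(A)))
# Expected, up to ordering and round-off: [-1.+0.j  2.-3.j  2.+3.j  3.+0.j]
```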
Step 2. We next find eigenvectors.

For $\lambda_1 = 3$, we carry out row operations:
$$A - 3I = \begin{bmatrix} -1 & 9 & 0 & 2 \\ -1 & -1 & 1 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & -4 \end{bmatrix}
\xrightarrow{\substack{R_2 - R_1 \to R_2 \\ R_3 \leftrightarrow R_4}}
\begin{bmatrix} -1 & 9 & 0 & 2 \\ 0 & -10 & 1 & -2 \\ 0 & 0 & 1 & -4 \\ 0 & 0 & 0 & 0 \end{bmatrix}
\longrightarrow
\begin{bmatrix} 1 & 0 & 0 & -\tfrac{19}{5} \\ 0 & 1 & 0 & -\tfrac{1}{5} \\ 0 & 0 & 1 & -4 \\ 0 & 0 & 0 & 0 \end{bmatrix}.$$
The system becomes
$$x_1 - \tfrac{19}{5}x_4 = 0, \qquad x_2 - \tfrac{1}{5}x_4 = 0, \qquad x_3 - 4x_4 = 0.$$
Taking $x_4 = 5$, an eigenvector is
$$\vec{v}_1 = \begin{bmatrix} 19 \\ 1 \\ 20 \\ 5 \end{bmatrix}, \qquad \lambda_1 = 3.$$

For $\lambda_2 = -1$, we proceed as follows:
$$A + I = \begin{bmatrix} 3 & 9 & 0 & 2 \\ -1 & 3 & 1 & 0 \\ 0 & 0 & 4 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}
\longrightarrow
\begin{bmatrix} 3 & 9 & 0 & 2 \\ -1 & 3 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}
\xrightarrow{\,3R_2 + R_1 \to R_1\,}
\begin{bmatrix} 0 & 18 & 0 & 2 \\ -1 & 3 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}
\xrightarrow{\substack{R_1/18 \to R_1 \\ (-3)R_1 + R_2 \to R_2}}
\begin{bmatrix} 0 & 1 & 0 & \tfrac{1}{9} \\ -1 & 0 & 0 & -\tfrac{1}{3} \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}
\longrightarrow
\begin{bmatrix} 1 & 0 & 0 & \tfrac{1}{3} \\ 0 & 1 & 0 & \tfrac{1}{9} \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}.$$
We obtain
$$x_1 = -\tfrac{1}{3}x_4, \qquad x_2 = -\tfrac{1}{9}x_4, \qquad x_3 = 0.$$
Taking $x_4 = -9$,
$$\vec{v}_2 = \begin{bmatrix} 3 \\ 1 \\ 0 \\ -9 \end{bmatrix}, \qquad \lambda_2 = -1.$$

For $\mu = 2 + 3i$, we proceed analogously:
$$A - \mu I = \begin{bmatrix} 2-(2+3i) & 9 & 0 & 2 \\ -1 & 2-(2+3i) & 1 & 0 \\ 0 & 0 & 3-(2+3i) & 0 \\ 0 & 0 & 1 & -1-(2+3i) \end{bmatrix} = \begin{bmatrix} -3i & 9 & 0 & 2 \\ -1 & -3i & 1 & 0 \\ 0 & 0 & 1-3i & 0 \\ 0 & 0 & 1 & -3-3i \end{bmatrix},$$
and row reduction gives
$$\xrightarrow{\substack{R_2 - R_3/(1-3i) \to R_2 \\ R_4 - R_3/(1-3i) \to R_4 \\ R_1 - 3iR_2 \to R_1}}
\begin{bmatrix} 0 & 0 & 0 & 2 \\ -1 & -3i & 0 & 0 \\ 0 & 0 & 1-3i & 0 \\ 0 & 0 & 0 & -3-3i \end{bmatrix}
\xrightarrow{\substack{R_3/(1-3i) \to R_3 \\ R_1 + \frac{2}{3+3i}R_4 \to R_1 \\ R_4/(-3-3i) \to R_4}}
\begin{bmatrix} 0 & 0 & 0 & 0 \\ -1 & -3i & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}.$$
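The remaining computation, reading a complex eigenvector for $\mu$ off this reduced matrix and assembling $P$ and $B$, follows Steps 2 and 3 of the factorization process above. The whole factorization can also be checked numerically. The sketch below is not part of the original notes (the function name `real_block_factorization` is ours, and it assumes the eigenvalues of $A$ are distinct); it builds $P$ and $B$ exactly as in (2) and (3) and verifies $AP = PB$ for the matrix of Example 13.4.

```python
import numpy as np

def real_block_factorization(A, tol=1e-10):
    """Assemble a real P and block-diagonal B with A P = P B.

    Real eigenvalues contribute one real eigenvector column and a 1x1 block;
    each conjugate pair a + bi (b > 0) contributes the columns Re(u), Im(u)
    of a complex eigenvector u and the 2x2 block [[a, b], [-b, a]] of (3).
    Assumes the eigenvalues of A are distinct.
    """
    evals, evecs = np.linalg.eig(A)
    n = A.shape[0]
    P = np.zeros((n, n))
    B = np.zeros((n, n))
    used = np.zeros(len(evals), dtype=bool)
    col = 0
    for k, lam in enumerate(evals):
        if used[k]:
            continue
        u = evecs[:, k]
        if abs(lam.imag) < tol:            # real eigenvalue: 1x1 block
            P[:, col] = u.real
            B[col, col] = lam.real
            used[k] = True
            col += 1
        elif lam.imag > 0:                 # complex pair: 2x2 block as in (3)
            a, b = lam.real, lam.imag
            P[:, col] = u.real
            P[:, col + 1] = u.imag
            B[col:col + 2, col:col + 2] = [[a, b], [-b, a]]
            used[k] = True
            used[int(np.argmin(np.abs(evals - np.conj(lam))))] = True  # its conjugate
            col += 2
    return P, B

# Matrix A from Example 13.4.
A = np.array([[ 2.0, 9.0, 0.0,  2.0],
              [-1.0, 2.0, 1.0,  0.0],
              [ 0.0, 0.0, 3.0,  0.0],
              [ 0.0, 0.0, 1.0, -1.0]])

P, B = real_block_factorization(A)
print(np.allclose(A @ P, P @ B))   # True, i.e. P^{-1} A P = B
```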