Semisimple and Unipotent Elements


Chapter 3

Semisimple and unipotent elements

3.1 Jordan decomposition

3.1.1 Jordan decomposition in GL(V)

Let us first recall some facts from linear algebra; see for example [Bou58] for proofs. Let $V$ be a vector space.

Definition 3.1.1 (ı) We call semisimple any endomorphism of $V$ which is diagonalisable. Equivalently, if $\dim V$ is finite, its minimal polynomial is separable.
(ıı) We call nilpotent (resp. unipotent) any endomorphism $x$ such that $x^n = 0$ for some $n$ (resp. such that $x - \mathrm{Id}$ is nilpotent).
(ııı) We call locally finite any endomorphism $x$ such that for all $v \in V$, the span of $\{x^n(v) : n \in \mathbb{N}\}$ is finite-dimensional.
(ıv) We call locally nilpotent (resp. locally unipotent) any endomorphism $x$ such that for all $v \in V$ there exists an $n$ with $x^n(v) = 0$ (resp. such that $\mathrm{Id} - x$ is locally nilpotent).

Fact 3.1.2 Let $x$ and $y$ in $\mathfrak{gl}(V)$ be such that $x$ and $y$ commute.
(ı) If $x$ is semisimple, then it is locally finite.
(ıı) If $x$ and $y$ are semisimple, then so are $x + y$ and $xy$.
(ııı) If $x$ and $y$ are locally nilpotent, then so are $x + y$ and $xy$.
(ıv) If $x$ and $y$ are locally unipotent, then so is $xy$.

Theorem 3.1.3 (Additive Jordan decomposition) Let $x \in \mathfrak{gl}(V)$ be locally finite.
(ı) There exists a unique decomposition $x = x_s + x_n$ in $\mathfrak{gl}(V)$ such that $x_s$ is semisimple, $x_n$ is locally nilpotent, and $x_s$ and $x_n$ commute.
(ıı) There exist polynomials $P$ and $Q$ in $k[T]$ such that $x_s = P(x)$ and $x_n = Q(x)$. In particular $x_s$ and $x_n$ commute with any endomorphism commuting with $x$.
(ııı) If $U \subset W \subset V$ are subspaces such that $x(W) \subset U$, then $x_s$ and $x_n$ also map $W$ into $U$.
(ıv) If $x(W) \subset W$, then $(x|_W)_s = (x_s)|_W$, $(x|_W)_n = (x_n)|_W$, $(x|_{V/W})_s = (x_s)|_{V/W}$ and $(x|_{V/W})_n = (x_n)|_{V/W}$.

Definition 3.1.4 The element $x_s$ (resp. $x_n$) is called the semisimple part (resp. nilpotent part) of $x \in \mathrm{End}(V)$. The decomposition $x = x_s + x_n$ is called the Jordan-Chevalley decomposition.

Corollary 3.1.5 (Multiplicative Jordan decomposition) Let $x \in \mathfrak{gl}(V)$ be locally finite and invertible.
(ı) There exists a unique decomposition $x = x_s x_u$ in $\mathrm{GL}(V)$ such that $x_s$ is semisimple, $x_u$ is locally unipotent, and $x_s$ and $x_u$ commute.
(ıı) The elements $x_s$ and $x_u$ commute with any endomorphism commuting with $x$.
(ııı) If $U \subset W \subset V$ are subspaces such that $x(W) \subset U$, then $x_s$ and $x_u$ also map $W$ into $U$.
(ıv) If $x(W) \subset W$, then $(x|_W)_s = (x_s)|_W$, $(x|_W)_u = (x_u)|_W$, $(x|_{V/W})_s = (x_s)|_{V/W}$ and $(x|_{V/W})_u = (x_u)|_{V/W}$.

Proof. We simply have to write $x = x_s + x_n$. Because $x$ is invertible, so is $x_s$, thus we may set $x_u = \mathrm{Id} + x_s^{-1} x_n$. Then $x_s x_u = x_s + x_n = x$, and $x_u$ is easily seen to be locally unipotent and to satisfy the above properties.
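Before passing to algebraic groups, the finite-dimensional case of Theorem 3.1.3 and Corollary 3.1.5 can be checked by direct computation. The following is a minimal sketch in Python using SymPy, assuming the eigenvalues of $A$ lie in the base field; the helper jordan_parts is our own illustrative name, not a library routine.

```python
import sympy as sp

def jordan_parts(A):
    """Additive Jordan decomposition A = A_s + A_n of a square matrix A
    whose eigenvalues lie in the base field (Theorem 3.1.3)."""
    P, J = A.jordan_form()                          # A = P * J * P**-1
    D = sp.diag(*[J[i, i] for i in range(J.rows)])  # diagonal of the Jordan form
    A_s = P * D * P.inv()                           # semisimple part
    A_n = A - A_s                                   # nilpotent part
    return sp.simplify(A_s), sp.simplify(A_n)

A = sp.Matrix([[2, 1], [0, 2]])
A_s, A_n = jordan_parts(A)
print(A_s, A_n)                 # Matrix([[2, 0], [0, 2]]) Matrix([[0, 1], [0, 0]])
print(A_s * A_n == A_n * A_s)   # True: the two parts commute

# Multiplicative decomposition (Corollary 3.1.5): for invertible A,
# A_u = Id + A_s**-1 * A_n is unipotent and A = A_s * A_u.
A_u = sp.eye(2) + A_s.inv() * A_n
print(A_u)                      # Matrix([[1, 1/2], [0, 1]])
print(A_s * A_u == A)           # True
```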
3.1.2 Jordan decomposition in G

Theorem 3.1.6 Let $G$ be an algebraic group and let $\mathfrak{g}$ be its Lie algebra.
(ı) For any $g \in G$, there exists a unique couple $(g_s, g_u) \in G^2$ such that $g = g_s g_u$, $\rho(g_s) = \rho(g)_s$ and $\rho(g_u) = \rho(g)_u$.
(ıı) For any $\eta \in \mathfrak{g}$, there exists a unique couple $(\eta_s, \eta_n) \in \mathfrak{g}^2$ such that $\eta = \eta_s + \eta_n$, $d_{e_G}\rho(\eta_s) = (d_{e_G}\rho(\eta))_s$ and $d_{e_G}\rho(\eta_n) = (d_{e_G}\rho(\eta))_n$.
(ııı) If $\phi : G \to G'$ is a morphism of algebraic groups, then $\phi(g_s) = \phi(g)_s$, $\phi(g_u) = \phi(g)_u$, $d_{e_G}\phi(\eta_s) = (d_{e_G}\phi(\eta))_s$ and $d_{e_G}\phi(\eta_n) = (d_{e_G}\phi(\eta))_n$.

Proof. Let us first note that because $\rho$ and $d_e\rho$ are faithful, the uniqueness for $g \in G$ and $\eta \in \mathfrak{g}$ follows from the uniqueness of the Jordan decomposition of $\rho(g)$ and $d_e\rho(\eta)$. We first prove (ı) and (ıı) for $\mathrm{GL}(V)$.

Proposition 3.1.7 Let $g \in \mathrm{GL}(V)$ and $X \in \mathfrak{gl}(V)$.
(ı) If $g$ is semisimple (resp. unipotent), then so is $\rho(g)$.
(ıı) If $X$ is semisimple (resp. nilpotent), then so is $d_e\rho(X)$.
Therefore, if $g = g_s g_u$ and $X = X_s + X_n$ are the Jordan decompositions in $\mathrm{GL}(V)$ and $\mathfrak{gl}(V)$, then $\rho(g) = \rho(g_s)\rho(g_u)$ and $d_e\rho(X) = d_e\rho(X_s) + d_e\rho(X_n)$ are the Jordan decompositions of $\rho(g)$ and $d_e\rho(X)$.

Proof. Assume that $g$ or $X$ is semisimple (resp. unipotent or nilpotent), and let $(f_i)$ be a basis of $V$ in which this endomorphism is diagonal (resp. upper triangular with 1 or 0 on the diagonal). Recall also that for $f \in k[G]$ with $\Delta(f) = \sum_i a_i \otimes u_i$ we have
$$\rho(g)f = \sum_i a_i u_i(g) \quad \text{and} \quad d_e\rho(X)f = \sum_i a_i X(u_i).$$
Applying this to the coordinate functions $T_{i,j}$ we get
$$\rho(g)T_{i,j} = \sum_k T_{k,j}(g)\, T_{i,k} \quad \text{and} \quad d_e\rho(X)T_{i,j} = \sum_k T_{i,k}\, X(T_{k,j}).$$
But if $g$ and $X$ are diagonal, then $T_{i,j}(g) = \delta_{i,j}\lambda_i$ and $X(T_{i,j}) = \delta_{i,j}\lambda_i$. We thus get
$$\rho(g)T_{i,j} = \lambda_j T_{i,j} \quad \text{and} \quad d_e\rho(X)T_{i,j} = \lambda_j T_{i,j}.$$
Furthermore, for $\det$ we have $\Delta(\det) = \det \otimes \det$, thus
$$\rho(g)\det = \det(g)\det \quad \text{and} \quad d_e\rho(X)\det = X(\det)\det = \mathrm{Tr}(X)\det.$$
We thus have in this case a basis of eigenvectors (the monomials in the $T_{i,j}$ times powers of $\det^{-1}$). If $g$ is unipotent or $X$ is nilpotent, the same computation shows that, in the basis of monomials ordered lexicographically, $\rho(g)$ and $d_e\rho(X)$ are triangular with diagonal coefficients equal to those of $g$ (i.e. 1) or to $\mathrm{Tr}(X) = 0$, hence $\rho(g)$ is locally unipotent and $d_e\rho(X)$ is locally nilpotent.

We are therefore left to prove (ııı) to conclude. We deal with two cases which suffice: $\phi : G \to G'$ injective or surjective. Any morphism can be decomposed into two such morphisms by factorising through its image.

Assume that $\phi$ is a closed immersion. Then we have $k[G'] \to k[G] = k[G']/I$. Let $g \in G$ (resp. $\eta \in \mathfrak{g}$) and let $g = g_s g_u$ (resp. $\eta = \eta_s + \eta_n$) be the Jordan decomposition of $g$ (resp. $\eta$) in $G'$ (resp. $\mathfrak{g}'$). We need to prove that these decompositions lie in $G$ (resp. in $\mathfrak{g}$). For this we check that $\rho(g_s)I = I$, $\rho(g_u)I = I$, $d_e\rho(\eta_s)I \subset I$ and $d_e\rho(\eta_n)I \subset I$. But $I$ is a vector subspace of $k[G']$ which is stable under $\rho(g)$ (resp. $d_e\rho(\eta)$); by Theorem 3.1.3 (ııı) and Corollary 3.1.5 (ııı) it is therefore also stable under all these maps. This, applied to the inclusion of any algebraic group $G$ in some $\mathrm{GL}(V)$, implies the existence of the decomposition.

Assume now that $\phi$ is surjective. This in particular implies that $\phi^\sharp : k[G'] \to k[G]$ is injective. Let $g \in G$ (resp. $\eta \in \mathfrak{g}$) and let $g = g_s g_u$ (resp. $\eta = \eta_s + \eta_n$) be the Jordan decomposition of $g$ (resp. $\eta$) in $G$ (resp. $\mathfrak{g}$). We may realise $k[G']$ as a $\rho(G)$-submodule of $k[G]$. For $f \in k[G']$, $g \in G$ and $g' \in G'$, we have
$$\rho(g)f(g') = f(g'\phi(g)).$$
We thus have the formula $\rho(g)|_{k[G']} = \rho(\phi(g))$. Applying this to $g$, $g_s$ and $g_u$, we have
$$\rho(\phi(g)) = \rho(\phi(g_s))\rho(\phi(g_u)) = \rho(g_s)|_{k[G']}\,\rho(g_u)|_{k[G']},$$
but as $\rho(g_s)$ and $\rho(g_u)$ are semisimple and unipotent, so are their restrictions; thus this is the Jordan decomposition of $\rho(\phi(g))$, and hence of $\phi(g)$.

The above submodule structure means that we have an action of $G$ on $G'$, whose comorphism $a_{G'}$ is given by
$$a_{G'} = (\mathrm{Id} \otimes \phi^\sharp) \circ \Delta_{G'}.$$
Note that $a_{G'} = \Delta_G|_{k[G']}$. Thus for $f \in k[G']$ we have
$$d_e\rho(d_e\phi(\eta))f = (\mathrm{Id} \otimes d_e\phi(\eta)) \circ \Delta_{G'}(f) = (\mathrm{Id} \otimes \eta) \circ (\mathrm{Id} \otimes \phi^\sharp) \circ \Delta_{G'}(f) = (\mathrm{Id} \otimes \eta)\,a_{G'}(f) = (\mathrm{Id} \otimes \eta)\,\Delta_G(f) = \eta \cdot f.$$
Therefore $d_e\rho(d_e\phi(\eta)) = d_e\rho(\eta)|_{k[G']}$ and the same argument as above applies.

3.2 Semisimple, unipotent and nilpotent elements

Definition 3.2.1 (ı) Let $g \in G$; then $g$ is called semisimple (resp. unipotent) if $g = g_s$ (resp. $g = g_u$).
(ıı) Let $\eta \in \mathfrak{g}$; then $\eta$ is called semisimple (resp. nilpotent) if $\eta = \eta_s$ (resp. $\eta = \eta_n$).
(ııı) We denote by $G_s$ (resp. $G_u$) the set of semisimple (resp. unipotent) elements in $G$.
(ıv) We denote by $\mathfrak{g}_s$ (resp. $\mathfrak{g}_n$) the set of semisimple (resp. nilpotent) elements in $\mathfrak{g}$.

Fact 3.2.2 If $g \in G$ (resp. $\eta \in \mathfrak{g}$) is both semisimple and unipotent (resp. both semisimple and nilpotent), then $g = e$ (resp. $\eta = 0$).
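To illustrate Definition 3.2.1 in the case $G = \mathrm{GL}_2$, here is a short sketch, again in SymPy; the predicates is_semisimple and is_unipotent are our own illustrative names. It also makes Fact 3.2.2 concrete: a unipotent matrix has all eigenvalues equal to 1, so if it is moreover diagonalisable it is conjugate to the identity, hence equal to it.

```python
import sympy as sp

def is_semisimple(g):
    """Semisimple = diagonalisable (Definition 3.1.1)."""
    return g.is_diagonalizable()

def is_unipotent(g):
    """Unipotent = (g - Id) nilpotent; (g - Id)**n = 0 suffices for n = dim V."""
    n = g.rows
    return (g - sp.eye(n)) ** n == sp.zeros(n, n)

g1 = sp.Matrix([[2, 0], [0, 3]])   # semisimple, not unipotent
g2 = sp.Matrix([[1, 5], [0, 1]])   # unipotent, not semisimple
print(is_semisimple(g1), is_unipotent(g1))   # True False
print(is_semisimple(g2), is_unipotent(g2))   # False True

# Fact 3.2.2: a matrix that is both diagonalisable and unipotent has a
# basis of eigenvectors with eigenvalue 1, so it must be the identity.
g3 = sp.eye(2)
print(is_semisimple(g3) and is_unipotent(g3))  # True, and g3 = e
```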
Remark 3.2.3 Note that for general Lie algebras the Jordan decomposition does not always exist. This shows that not every Lie algebra is the Lie algebra of an algebraic group. In general, if $\mathrm{char}(k) = p > 0$, then for an algebraic group $G$ defined over the field $k$ with Lie algebra $\mathfrak{g} = \mathrm{Der}_k(k[G], k[G])^{\lambda(G)}$, we have an additional structure called the p-operation, given by taking the $p$-th power of a derivation ($p$-fold composition). This maps invariant derivations to invariant derivations.

Definition 3.2.4 A p-Lie algebra is a Lie algebra $\mathfrak{g}$ together with a map $x \mapsto x^{[p]}$, called the p-operation, such that
• $(\lambda x)^{[p]} = \lambda^p x^{[p]}$,
• $\mathrm{ad}(x^{[p]}) = \mathrm{ad}(x)^p$,
• $(x + x')^{[p]} = x^{[p]} + x'^{[p]} + \sum_{i=1}^{p-1} i^{-1} s_i(x, x')$,
where $x, x' \in \mathfrak{g}$, $\lambda \in k$, and $s_i(x, x')$ is the coefficient of $a^i$ in $\mathrm{ad}(ax + x')^{p-1}(x')$.
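As an example, for $\mathfrak{g} = \mathfrak{gl}_n$ over a field of characteristic $p$ the p-operation is the $p$-th matrix power, $x^{[p]} = x^p$. For $p = 2$ the last axiom reads $(x + x')^{[2]} = x^{[2]} + x'^{[2]} + [x, x']$, since $s_1(x, x')$ is the coefficient of $a$ in $\mathrm{ad}(ax + x')(x') = a[x, x']$. The following sketch verifies this identity in $\mathfrak{gl}_2(\mathbb{F}_2)$ by reducing integer matrices mod 2.

```python
import sympy as sp

p = 2
x = sp.Matrix([[0, 1], [0, 0]])
y = sp.Matrix([[0, 0], [1, 0]])

def mod_p(M):
    """Reduce an integer matrix mod p, i.e. work in gl_2(F_p)."""
    return M.applyfunc(lambda entry: entry % p)

lhs = mod_p((x + y) ** p)          # (x + x')^[p] with x^[p] = x^p
s1 = x * y - y * x                 # s_1(x, x') = [x, x'] when p = 2
rhs = mod_p(x ** p + y ** p + s1)
print(lhs == rhs)                  # True: the third axiom holds for p = 2
```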