1 Matrix Lie Groups

1.1 Definition of a Matrix Lie Group

We begin with a very important class of groups, the general linear groups. The groups we will study in this book will all be subgroups (of a certain sort) of one of the general linear groups. This chapter makes use of various standard results from linear algebra that are summarized in Appendix B. This chapter also assumes basic facts and definitions from the theory of abstract groups; the necessary information is provided in Appendix A.

Definition 1.1. The general linear group over the real numbers, denoted GL(n; R), is the group of all n × n invertible matrices with real entries. The general linear group over the complex numbers, denoted GL(n; C), is the group of all n × n invertible matrices with complex entries.

The general linear groups are indeed groups under the operation of matrix multiplication: the product of two invertible matrices is invertible, the identity matrix is an identity for the group, an invertible matrix has (by definition) an inverse, and matrix multiplication is associative.

Definition 1.2. Let M_n(C) denote the space of all n × n matrices with complex entries.

Definition 1.3. Let A_m be a sequence of complex matrices in M_n(C). We say that A_m converges to a matrix A if each entry of A_m converges (as m → ∞) to the corresponding entry of A (i.e., if (A_m)_{kl} converges to A_{kl} for all 1 ≤ k, l ≤ n).

Definition 1.4. A matrix Lie group is any subgroup G of GL(n; C) with the following property: if A_m is any sequence of matrices in G, and A_m converges to some matrix A, then either A ∈ G or A is not invertible.

The condition on G amounts to saying that G is a closed subset of GL(n; C). (This does not necessarily mean that G is closed in M_n(C).) Thus, Definition 1.4 is equivalent to saying that a matrix Lie group is a closed subgroup of GL(n; C).

The condition that G be a closed subgroup, as opposed to merely a subgroup, should be regarded as a technicality, in that most of the interesting subgroups of GL(n; C) have this property. (Most of the matrix Lie groups G we will consider have the stronger property that if A_m is any sequence of matrices in G and A_m converges to some matrix A, then A ∈ G, i.e., that G is closed in M_n(C).)

1.1.1 Counterexamples

An example of a subgroup of GL(n; C) which is not closed (and hence is not a matrix Lie group) is the set of all n × n invertible matrices all of whose entries are real and rational. This is in fact a subgroup of GL(n; C), but not a closed subgroup. That is, one can easily have a sequence of invertible matrices with rational entries converging to an invertible matrix with some irrational entries. (In fact, every real invertible matrix is the limit of some sequence of invertible matrices with rational entries.)

Another example of a group of matrices which is not a matrix Lie group is the following subgroup of GL(2; C). Let a be an irrational real number and let

    G = \left\{ \begin{pmatrix} e^{it} & 0 \\ 0 & e^{ita} \end{pmatrix} \;\middle|\; t \in \mathbb{R} \right\}.

Clearly, G is a subgroup of GL(2; C). Because a is irrational, the matrix −I is not in G: to make e^{it} equal to −1, we must take t to be an odd integer multiple of π, in which case ta cannot be an odd integer multiple of π. On the other hand (Exercise 1), by taking t = (2n + 1)π for a suitably chosen integer n, we can make ta arbitrarily close to an odd integer multiple of π. Hence, we can find a sequence of matrices in G which converges to −I, and so G is not a matrix Lie group. See Exercise 1 and Exercise 18 for more information.
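This second counterexample is easy to probe numerically. Below is a minimal sketch assuming the illustrative choice a = √2 (the choice of a, the search range, and all names here are mine, not the text's): it scans t = (2n + 1)π and records elements of G that come ever closer to −I, even though −I itself lies outside G.

```python
import numpy as np

# Numerical sketch of the counterexample above (a = sqrt(2) is an
# illustrative assumption, not a choice made in the text).
# For t = (2n+1)*pi the (1,1) entry e^{it} equals -1 exactly, so we
# search for n making the (2,2) entry e^{ita} close to -1 as well.
a = np.sqrt(2.0)

def g(t):
    """The element of G with parameter t: diag(e^{it}, e^{ita})."""
    return np.diag([np.exp(1j * t), np.exp(1j * t * a)])

best_dist = np.inf
for n in range(1, 50_000):           # enlarge the range to get even closer
    t = (2 * n + 1) * np.pi
    dist = np.linalg.norm(g(t) + np.eye(2))   # distance from g(t) to -I
    if dist < best_dist:
        best_dist = dist
        print(f"n = {n:6d}   ||g(t) - (-I)|| = {dist:.3e}")

# The printed distances shrink toward 0: matrices of G approach -I,
# yet -I is not in G, so G is not closed and hence not a matrix Lie group.
```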
1.2 Examples of Matrix Lie Groups

Mastering the subject of Lie groups involves not only learning the general theory but also familiarizing oneself with examples. In this section, we introduce some of the most important examples of (matrix) Lie groups.

1.2.1 The general linear groups GL(n; R) and GL(n; C)

The general linear groups (over R or C) are themselves matrix Lie groups. Of course, GL(n; C) is a subgroup of itself. Furthermore, if A_m is a sequence of matrices in GL(n; C) and A_m converges to A, then by the definition of GL(n; C), either A is in GL(n; C) or A is not invertible.

Moreover, GL(n; R) is a subgroup of GL(n; C), and if A_m ∈ GL(n; R) and A_m converges to A, then the entries of A are real. Thus, either A is not invertible or A ∈ GL(n; R).

1.2.2 The special linear groups SL(n; R) and SL(n; C)

The special linear group (over R or C) is the group of n × n invertible matrices (with real or complex entries) having determinant one. Both of these are subgroups of GL(n; C). Furthermore, if A_m is a sequence of matrices with determinant one and A_m converges to A, then A also has determinant one, because the determinant is a continuous function. Thus, SL(n; R) and SL(n; C) are matrix Lie groups.

1.2.3 The orthogonal and special orthogonal groups, O(n) and SO(n)

An n × n real matrix A is said to be orthogonal if the column vectors that make up A are orthonormal, that is, if

    \sum_{l=1}^{n} A_{lj} A_{lk} = \delta_{jk}, \qquad 1 \le j, k \le n.

(Here δ_{jk} is the Kronecker delta, equal to 1 if j = k and equal to zero if j ≠ k.) Equivalently, A is orthogonal if it preserves the inner product, namely if ⟨x, y⟩ = ⟨Ax, Ay⟩ for all vectors x, y in R^n. (Angled brackets denote the usual inner product on R^n, ⟨x, y⟩ = \sum_k x_k y_k.) Still another equivalent definition is that A is orthogonal if A^{tr} A = I, i.e., if A^{tr} = A^{-1}. (Here, A^{tr} is the transpose of A, (A^{tr})_{kl} = A_{lk}.) See Exercise 2.

Since det A^{tr} = det A, we see that if A is orthogonal, then det(A^{tr} A) = (det A)^2 = det I = 1. Hence, det A = ±1 for all orthogonal matrices A. This tells us in particular that every orthogonal matrix must be invertible. Moreover, if A is an orthogonal matrix, then

    ⟨A^{-1}x, A^{-1}y⟩ = ⟨A(A^{-1}x), A(A^{-1}y)⟩ = ⟨x, y⟩.

Thus, the inverse of an orthogonal matrix is orthogonal. Furthermore, the product of two orthogonal matrices is orthogonal, since if A and B both preserve inner products, then so does AB. Thus, the set of orthogonal matrices forms a group.

The set of all n × n real orthogonal matrices is the orthogonal group O(n), and it is a subgroup of GL(n; C). The limit of a sequence of orthogonal matrices is orthogonal, because the relation A^{tr} A = I is preserved under taking limits. Thus, O(n) is a matrix Lie group.

The set of n × n orthogonal matrices with determinant one is the special orthogonal group SO(n). Clearly, this is a subgroup of O(n), and hence of GL(n; C). Moreover, both orthogonality and the property of having determinant one are preserved under limits, and so SO(n) is a matrix Lie group. Since elements of O(n) already have determinant ±1, SO(n) is "half" of O(n).

Geometrically, elements of O(n) are either rotations or combinations of rotations and reflections. The elements of SO(n) are just the rotations. See also Exercise 6.
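The defining conditions for O(n) and SO(n) are easy to check numerically. Here is a minimal sketch (the 2 × 2 matrices below are my own illustrative choices, not examples from the text): a rotation satisfies A^{tr}A = I with det A = +1, while a reflection satisfies A^{tr}A = I with det A = −1.

```python
import numpy as np

# A minimal numerical check (illustrative matrices, not from the text)
# of the defining conditions for O(2) and SO(2).
theta = 0.7
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
reflection = np.array([[1.0,  0.0],
                       [0.0, -1.0]])      # reflection across the x-axis

for name, A in [("rotation", rotation), ("reflection", reflection)]:
    is_orthogonal = np.allclose(A.T @ A, np.eye(2))    # A^tr A = I
    print(f"{name:10s}  orthogonal: {is_orthogonal}  det: {np.linalg.det(A):+.0f}")

# rotation    orthogonal: True  det: +1   -> element of SO(2)
# reflection  orthogonal: True  det: -1   -> element of O(2) but not SO(2)
```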
1.2.4 The unitary and special unitary groups, U(n) and SU(n)

An n × n complex matrix A is said to be unitary if the column vectors of A are orthonormal, that is, if

    \sum_{l=1}^{n} \overline{A_{lj}}\, A_{lk} = \delta_{jk}.

Equivalently, A is unitary if it preserves the inner product, namely if ⟨x, y⟩ = ⟨Ax, Ay⟩ for all vectors x, y in C^n. (Angled brackets here denote the inner product on C^n, ⟨x, y⟩ = \sum_k \overline{x_k}\, y_k; we adopt the convention of putting the complex conjugate on the left.) Still another equivalent definition is that A is unitary if A^* A = I, i.e., if A^* = A^{-1}. (Here, A^* is the adjoint of A, (A^*)_{jk} = \overline{A_{kj}}.) See Exercise 3.

Since det A^* is the complex conjugate of det A, we see that if A is unitary, then det(A^* A) = |det A|^2 = det I = 1. Hence, |det A| = 1 for all unitary matrices A. This, in particular, shows that every unitary matrix is invertible.

The same argument as for the orthogonal group shows that the set of unitary matrices forms a group. The set of all n × n unitary matrices is the unitary group U(n), and it is a subgroup of GL(n; C). The limit of unitary matrices is unitary, so U(n) is a matrix Lie group. The set of unitary matrices with determinant one is the special unitary group SU(n). It is easy to check that SU(n) is a matrix Lie group.

Note that a unitary matrix can have determinant e^{iθ} for any θ, and so SU(n) is a smaller subset of U(n) than SO(n) is of O(n). (Specifically, SO(n) has the same dimension as O(n), whereas SU(n) has dimension one less than that of U(n).) See also Exercise 8.
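The remark about determinants can also be seen concretely. The following sketch (the matrix is my own illustrative choice, not the book's) checks the unitarity condition A^*A = I and shows that the determinant of a unitary matrix can be any point on the unit circle, while always having modulus 1.

```python
import numpy as np

# A small sketch (illustrative matrix, not from the text): diag(e^{i*theta}, 1)
# is unitary for every theta, and its determinant e^{i*theta} can be any
# complex number of modulus 1.
theta = 1.3
U = np.diag([np.exp(1j * theta), 1.0])

is_unitary = np.allclose(U.conj().T @ U, np.eye(2))   # A* A = I
det = np.linalg.det(U)
print(f"unitary: {is_unitary}   det = {det:.3f}   |det| = {abs(det):.3f}")

# |det| = 1 always, but det itself need not be +1 or -1, which is why SU(n)
# sits inside U(n) with dimension one less, unlike SO(n) inside O(n).
```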