A Comprehensive Introduction to Grassmann Manifolds

An Introduction to Grassmann Manifolds and their Matrix Representation

Daniel Karrasch*†
Technische Universität München
Center for Mathematics, M3
Boltzmannstr. 3, 85748 Garching bei München, Germany

September 26, 2017

* These notes were written while I was doing my PhD at Fachrichtung Mathematik, Technische Universität Dresden, 01062 Dresden, Germany. Funding by ESF grant "Landesinnovationspromotion" No. 080942988 is gratefully acknowledged.
† E-mail: [email protected]

In the following we give a self-contained introduction to Grassmann manifolds and their representation by matrix manifolds. The presentation is based on [6, 1], and all results are taken from these references if not stated otherwise. Familiarity with calculus and elementary notions of differential geometry is assumed; for a textbook reference see, for instance, [12].

Throughout, let $n \in \mathbb{N}_{>0}$ and $k \in \{1, \dots, n\}$. The object of investigation is the set of all $k$-dimensional linear subspaces $V \subseteq \mathbb{R}^n$, i.e.

$$\operatorname{Gr}(k, \mathbb{R}^n) := \{ V \subseteq \mathbb{R}^n;\ V \text{ is a linear subspace},\ \dim V = k \}.$$

The objective is to establish $\operatorname{Gr}(k, \mathbb{R}^n)$ as a Riemannian (differentiable) manifold.

1 Topological & Metric Structure

In this section, we follow [6]. We consider $A \in \mathbb{R}^{n \times k}$ as

$$A = (a_{ij})_{i \in \{1,\dots,n\},\, j \in \{1,\dots,k\}} = \begin{pmatrix} a_1 & \cdots & a_k \end{pmatrix},$$

where $a_i = (a_{ji})_{j \in \{1,\dots,n\}} \in \mathbb{R}^n$, $i \in \{1, \dots, k\}$. We endow $\mathbb{R}^{n \times k}$ with the norm induced by the Frobenius inner product, i.e.

$$\|A\| = \sqrt{\operatorname{tr}(A^\top A)} = \bigl|\bigl(|a_1|_2, \dots, |a_k|_2\bigr)\bigr|_2 = \sqrt{\textstyle\sum_{i,j} a_{ij}^2}. \qquad (1)$$

We define the (non-compact) Stiefel manifold as

$$\operatorname{St}(k, \mathbb{R}^n) := \{ A \in \mathbb{R}^{n \times k};\ \operatorname{rk}(A) = k \} = \{ A = (a_1 \cdots a_k) \in \mathbb{R}^{n \times k};\ a_1, \dots, a_k \text{ are linearly independent} \}.$$

$\operatorname{St}(k, \mathbb{R}^n)$ is often referred to as the set of all $k$-frames (of linearly independent vectors) in $\mathbb{R}^n$. Note that $\operatorname{St}(k, \mathbb{R}^n)$ is an open subset of $\mathbb{R}^{n \times k}$, as the preimage of the open set $\mathbb{R} \setminus \{0\}$ under the continuous map $A \mapsto \det(A^\top A)$.
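The defining rank condition is straightforward to check numerically. The following sketch (in NumPy; the helper name `in_stiefel` is ours, not from the notes) tests membership in $\operatorname{St}(k, \mathbb{R}^n)$ via the same map $A \mapsto \det(A^\top A)$ used in the openness argument above:

```python
import numpy as np

def in_stiefel(A, tol=1e-9):
    """Check A ∈ St(k, R^n), i.e. rk(A) = k, via det(A^T A) ≠ 0."""
    return abs(np.linalg.det(A.T @ A)) > tol

A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # full column rank
B = np.array([[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]])   # rank 1: second column = 2 * first
print(in_stiefel(A), in_stiefel(B))
```

Since $\det(A^\top A)$ is the squared volume of the parallelepiped spanned by the columns of $A$, it vanishes exactly when the columns are linearly dependent.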
We define the compact Stiefel manifold as

$$\operatorname{St}^*(k, \mathbb{R}^n) := \{ A \in \operatorname{St}(k, \mathbb{R}^n);\ A^\top A = I_k \}.$$

Clearly, $\operatorname{St}^*(k, \mathbb{R}^n)$ is bounded (by $\sqrt{k}$) and closed (as the preimage of the closed set $\{I_k\}$ under the continuous map $A \mapsto A^\top A$), and hence a compact subset of $\operatorname{St}(k, \mathbb{R}^n)$, with the inclusion $\operatorname{St}^*(k, \mathbb{R}^n) \hookrightarrow \operatorname{St}(k, \mathbb{R}^n)$. It is often referred to as the set of all orthonormal $k$-frames in $\mathbb{R}^n$.

We consider the map

$$\pi \colon \operatorname{St}(k, \mathbb{R}^n) \to \operatorname{Gr}(k, \mathbb{R}^n), \quad A = (a_1 \cdots a_k) \mapsto \operatorname{span}\{a_1, \dots, a_k\}, \qquad (2)$$

and endow $\operatorname{Gr}(k, \mathbb{R}^n)$ with the final topology with respect to $\pi$, which we call the Grassmann topology, i.e. $U \subseteq \operatorname{Gr}(k, \mathbb{R}^n)$ is open if and only if $\pi^{-1}[U]$ is open in $\operatorname{St}(k, \mathbb{R}^n)$. Let $f \in L(\mathbb{R}^k, \mathbb{R}^n)$ be injective and let $A \in \operatorname{St}(k, \mathbb{R}^n)$ be its matrix representation with respect to the canonical bases of $\mathbb{R}^k$ and $\mathbb{R}^n$. Then $\pi(A)$ can equivalently be considered as $f[\mathbb{R}^k]$, i.e. the image of $f$.

1.1 Lemma. Consider $\operatorname{Gr}(k, \mathbb{R}^n)$ with the Grassmann topology.

(i) The map $\pi$ is surjective, and for any $A \in \operatorname{St}(k, \mathbb{R}^n)$ we have
$$\pi^{-1}[\pi(A)] = \{ AP;\ P \in \operatorname{GL}(k, \mathbb{R}) \}. \qquad (3)$$

(ii) The map $\pi$ is continuous and open.

Proof. Obviously, $\pi$ is surjective and continuous. Furthermore, $A, B \in \operatorname{St}(k, \mathbb{R}^n)$ span the same subspace, i.e. $\pi(A) = \pi(B)$, if and only if there exists $P \in \operatorname{GL}(k, \mathbb{R})$ such that $B = AP$. To see the sufficiency, note that by assumption we have $A = BP$ and $B = AQ = BPQ$ for some $P, Q \in \mathbb{R}^{k \times k}$, from which we directly read off that $P = Q^{-1} \in \operatorname{GL}(k, \mathbb{R})$. The necessity is obvious. For an open subset $V \subseteq \operatorname{St}(k, \mathbb{R}^n)$ we have $\pi^{-1}[\pi[V]] = \bigcup_{P \in \operatorname{GL}(k, \mathbb{R})} VP$, i.e. $\pi^{-1}[\pi[V]]$ is a union of open subsets. Thus, $\pi[V] \subseteq \operatorname{Gr}(k, \mathbb{R}^n)$ is open by the definition of the Grassmann topology. □

Eq. (3) identifies $\operatorname{Gr}(k, \mathbb{R}^n)$ with the quotient space

$$\operatorname{St}(k, \mathbb{R}^n)/\operatorname{GL}(k, \mathbb{R}) := \{ A[\operatorname{GL}(k, \mathbb{R})];\ A \in \operatorname{St}(k, \mathbb{R}^n) \}, \qquad (4)$$

which can be endowed with the structure of a quotient manifold; see [2, Section 3.4]. However, we define a topological and differentiable structure on $\operatorname{Gr}(k, \mathbb{R}^n)$ directly. We consider the restriction of $\pi$ to $\operatorname{St}^*(k, \mathbb{R}^n)$, i.e.
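Lemma 1.1(i) can be illustrated numerically: right-multiplying a frame by an invertible matrix changes the basis but not the spanned subspace, which we detect by comparing orthogonal projection matrices. (A minimal NumPy sketch; the helper `proj` is ours, not from the notes.)

```python
import numpy as np

rng = np.random.default_rng(0)

def proj(A):
    """Orthogonal projection onto the column span of A (A of full column rank)."""
    return A @ np.linalg.inv(A.T @ A) @ A.T

A = rng.standard_normal((5, 2))               # a 2-frame in R^5
P = rng.standard_normal((2, 2)) + 3 * np.eye(2)  # generically invertible
assert abs(np.linalg.det(P)) > 1e-8

# span(AP) = span(A): the associated orthogonal projections coincide
print(np.allclose(proj(A @ P), proj(A)))
```

Since a subspace is uniquely determined by its orthogonal projection, equality of `proj(A @ P)` and `proj(A)` is exactly the statement $\pi(AP) = \pi(A)$.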
$\pi^* = \pi|_{\operatorname{St}^*(k, \mathbb{R}^n)}$, together with the "Gram–Schmidt orthonormalization map", i.e. $\operatorname{GS}(A)$ is the matrix obtained by applying the Gram–Schmidt orthonormalization method to the columns $a_1, \dots, a_k$ of $A$. It is well known that the Gram–Schmidt method defines a continuous map. Thus, we have $\pi^* \circ \operatorname{GS} = \pi$. Clearly, $\pi^*$ is surjective and continuous.

Next, we show a characterization of the continuity of functions $f$ that are defined on a topological space endowed with the final topology with respect to some other function and that map to another topological space.

1.2 Lemma. Consider $\operatorname{Gr}(k, \mathbb{R}^n)$ with the Grassmann topology. Let $\Omega$ be a topological space and $f \colon \operatorname{Gr}(k, \mathbb{R}^n) \to \Omega$. Then the following statements are equivalent:

(a) $f$ is continuous;
(b) $f \circ \pi$ is continuous;
(c) $f \circ \pi^*$ is continuous.

Proof. (a) ⇔ (b) follows from the definition of the final topology with respect to $\pi$. (b) ⇒ (c) is obvious, since $\pi^*$ is the restriction of $\pi$. (c) ⇒ (b) follows from the representation $f \circ \pi = f \circ \pi^* \circ \operatorname{GS}$ and the continuity of $\operatorname{GS}$. □

One can show that the set

$$\Delta := \bigl\{ (A, B) \in \operatorname{St}(k, \mathbb{R}^n)^2;\ \exists\, P \in \operatorname{GL}(k, \mathbb{R}) \colon AP = B \bigr\}$$

is closed in $\operatorname{St}(k, \mathbb{R}^n) \times \operatorname{St}(k, \mathbb{R}^n)$ by realizing that $(A, B) \in \Delta$ if and only if $\operatorname{rk}\begin{pmatrix} A & B \end{pmatrix} = k$. This condition can be characterized by the requirement that all (continuous) $(k+1)$-minors of $\begin{pmatrix} A & B \end{pmatrix}$ vanish; the set of matrices satisfying this condition is hence closed.

Since $\operatorname{Gr}(k, \mathbb{R}^n) = \pi^*[\operatorname{St}^*(k, \mathbb{R}^n)]$, $\pi^*$ is continuous and $\operatorname{St}^*(k, \mathbb{R}^n)$ is compact, we have that $\operatorname{Gr}(k, \mathbb{R}^n)$ is compact, too, and it remains to prove that the Grassmann topology is Hausdorff. To this end, let $L, M \in \operatorname{Gr}(k, \mathbb{R}^n)$ with $L \neq M$, $A \in \pi^{-1}[\{L\}]$, $B \in \pi^{-1}[\{M\}]$. By Lemma 1.1 we have $(A, B) \notin \Delta$, and since $\Delta$ is closed, there exist neighborhoods $U_A$ and $U_B$ of $A$ and $B$, respectively, such that $(U_A \times U_B) \cap \Delta = \varnothing$. By Lemma 1.1, $\pi[U_A]$ and $\pi[U_B]$ are disjoint open sets containing $\pi(A) = L$ and $\pi(B) = M$, respectively. Summarizing, we obtain the following result.

1.3 Proposition. $\operatorname{Gr}(k, \mathbb{R}^n)$ is a compact Hausdorff space with respect to the Grassmann topology.
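The identity $\pi^* \circ \operatorname{GS} = \pi$ can be probed in the same spirit. In the sketch below (NumPy; the substitution is ours), Gram–Schmidt is replaced by the thin QR factorization, which likewise orthonormalizes the columns while preserving their span:

```python
import numpy as np

rng = np.random.default_rng(1)

def proj(A):
    """Orthogonal projection onto the column span of A."""
    return A @ np.linalg.inv(A.T @ A) @ A.T

def gs(A):
    """Orthonormalize the columns of A (thin QR in place of classical Gram-Schmidt)."""
    Q, _ = np.linalg.qr(A)
    return Q

A = rng.standard_normal((6, 3))
Q = gs(A)
print(np.allclose(Q.T @ Q, np.eye(3)))   # GS(A) ∈ St*(3, R^6)
print(np.allclose(proj(Q), proj(A)))     # pi*(GS(A)) = pi(A): same subspace
```

QR and Gram–Schmidt agree up to column signs, which is irrelevant here since only the spanned subspace, i.e. the projection, is compared.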
Often, $\operatorname{Gr}(k, \mathbb{R}^n)$ is endowed and investigated with the so-called gap metric, which we introduce next.

1.4 Definition (Gap metric). We define

$$\Theta \colon \operatorname{Gr}(k, \mathbb{R}^n) \times \operatorname{Gr}(k, \mathbb{R}^n) \to \mathbb{R}_{\geq 0}, \quad (L, M) \mapsto \|\Pi_L - \Pi_M\|, \qquad (5)$$

where $\Pi_X$ denotes the orthogonal projection onto a subspace $X$ and $\|\cdot\|$ denotes the operator norm. $\Theta$ is called the gap metric on $\operatorname{Gr}(k, \mathbb{R}^n)$.

In the literature, $\Theta(L, M)$ is also referred to as the opening between the subspaces $L$ and $M$; it was originally introduced in [11]. By the Projection Theorem for Hilbert spaces, there is a one-to-one correspondence between (closed) subspaces $U$ and orthogonal projections $\Pi_U$ with $\operatorname{im} \Pi_U = U$ and $\ker \Pi_U = U^\perp$. Therefore, it is easy to see that $\Theta$ is indeed a metric on $\operatorname{Gr}(k, \mathbb{R}^n)$. Next we show some further properties of the gap metric.

1.5 Proposition ([7, Theorems 13.1.1 & 13.1.2]). For $L, M \in \operatorname{Gr}(k, \mathbb{R}^n)$,

(i) one has
$$\Theta(L, M) = \max\Bigl\{ \sup_{x \in S \cap L} d(x, M),\ \sup_{x \in S \cap M} d(x, L) \Bigr\}, \qquad (6)$$
where $d(x, M) := \inf\{ |x - y|;\ y \in M \}$ and $S$ denotes the unit sphere in $\mathbb{R}^n$;

(ii) $\Theta(L, M) = \Theta(L^\perp, M^\perp) \leq 1$;

(iii) $\Theta(L, M) < 1 \iff L \cap M^\perp = L^\perp \cap M = \{0\} \iff \mathbb{R}^n = L \oplus M^\perp = L^\perp \oplus M$.

Proof. (i) Let $x \in L \cap S$. Then $\Pi_M x \in M$, and it follows that
$$|x - \Pi_M x| = |(\Pi_L - \Pi_M) x| \leq \|\Pi_L - \Pi_M\|.$$
Therefore,
$$\sup_{x \in S \cap L} d(x, M) \leq \|\Pi_L - \Pi_M\|.$$
Analogously, we have
$$\sup_{x \in S \cap M} d(x, L) \leq \|\Pi_L - \Pi_M\|,$$
and we obtain
$$\max\Bigl\{ \sup_{x \in S \cap L} d(x, M),\ \sup_{x \in S \cap M} d(x, L) \Bigr\} \leq \|\Pi_L - \Pi_M\| = \Theta(L, M).$$

To derive the complementary inequality, observe that by the Projection Theorem in Hilbert spaces we have
$$\rho_L := \sup_{x \in S \cap L} d(x, M) = \sup_{x \in S \cap L} |x - \Pi_M x| = \sup_{x \in S \cap L} |(I - \Pi_M) x|,$$
$$\rho_M := \sup_{x \in S \cap M} d(x, L) = \sup_{x \in S \cap M} |x - \Pi_L x| = \sup_{x \in S \cap M} |(I - \Pi_L) x|.$$
Consequently, we have for every $x \in \mathbb{R}^n$ that
$$|(I - \Pi_L) \Pi_M x| \leq \rho_M |\Pi_M x| \quad \text{and} \quad |(I - \Pi_M) \Pi_L x| \leq \rho_L |\Pi_L x|. \qquad (7)$$
We estimate
$$\begin{aligned}
|\Pi_M (I - \Pi_L) x|^2 &= \langle \Pi_M (I - \Pi_L) x, \Pi_M (I - \Pi_L) x \rangle = \langle \Pi_M (I - \Pi_L) x, (I - \Pi_L) x \rangle \\
&= \langle \Pi_M (I - \Pi_L) x, (I - \Pi_L)(I - \Pi_L) x \rangle = \langle (I - \Pi_L) \Pi_M (I - \Pi_L) x, (I - \Pi_L) x \rangle \\
&\leq |(I - \Pi_L) \Pi_M (I - \Pi_L) x| \, |(I - \Pi_L) x|,
\end{aligned}$$
thus by Eq. (7)
$$|\Pi_M (I - \Pi_L) x|^2 \leq \rho_M |\Pi_M (I - \Pi_L) x| \, |(I - \Pi_L) x|,$$
and hence
$$|\Pi_M (I - \Pi_L) x| \leq \rho_M |(I - \Pi_L) x|. \qquad (8)$$
On the other hand, using $\Pi_M - \Pi_L = \Pi_M (I - \Pi_L) - (I - \Pi_M) \Pi_L$ and the orthogonality of $\Pi_M$, we obtain by Eqs.
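The gap metric (5) and Proposition 1.5(ii) are easy to probe numerically. A minimal sketch (NumPy; the helpers `proj` and `gap` are ours, not from the notes) computes $\Theta$ as the operator (spectral) norm of the difference of orthogonal projections:

```python
import numpy as np

rng = np.random.default_rng(2)

def proj(A):
    """Orthogonal projection onto the column span of A, via a thin QR factorization."""
    Q, _ = np.linalg.qr(A)
    return Q @ Q.T

def gap(PL, PM):
    """Gap metric Theta: operator (spectral) norm of the difference of projections."""
    return np.linalg.norm(PL - PM, 2)

n, k = 6, 2
PL = proj(rng.standard_normal((n, k)))   # projection onto a random 2-plane L in R^6
PM = proj(rng.standard_normal((n, k)))   # projection onto a random 2-plane M in R^6

theta = gap(PL, PM)
# Pi_{L^perp} = I - Pi_L, so the gap of the orthogonal complements is
theta_perp = gap(np.eye(n) - PL, np.eye(n) - PM)

print(theta <= 1 + 1e-12, np.isclose(theta, theta_perp))
```

Since $\Pi_{L^\perp} = I - \Pi_L$, the identity $\Theta(L, M) = \Theta(L^\perp, M^\perp)$ is immediate from the definition; the bound $\Theta \leq 1$ for equal-dimensional subspaces is the substantive part of (ii).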