34 | Principal Component Analysis


34.1 Introduction

Principal component analysis (PCA) is a useful method to reduce the dimensionality of a multivariate data set $Y \in \mathbb{R}^{n \times m}$. It is closely related to the singular value decomposition (SVD) of a matrix. The SVD of a data matrix $Y \in \mathbb{R}^{n \times m}$ is defined as a matrix decomposition

$$Y = U \Sigma V^T, \qquad (34.1)$$

where $U \in \mathbb{R}^{n \times n}$ is an orthogonal matrix, $\Sigma \in \mathbb{R}^{n \times m}$ is a rectangular diagonal matrix, and $V \in \mathbb{R}^{m \times m}$ is an orthogonal matrix. The diagonal entries $\sigma_{ii}$ of $\Sigma$ for $i = 1, \dots, \min(n, m)$, referred to as the singular values of $Y$, correspond to the square roots of the non-zero eigenvalues of both the matrices $Y^T Y \in \mathbb{R}^{m \times m}$ and $YY^T \in \mathbb{R}^{n \times n}$. The $n$ columns of $U$ are referred to as the left-singular vectors and correspond to the eigenvectors of $YY^T$, while the $m$ columns of $V$ are referred to as the right-singular vectors and correspond to the eigenvectors of $Y^T Y$.

The aim of the current chapter is to review the linear algebra terminology involved in both SVD and PCA and, more importantly, to endow the linear algebraic concepts of SVD with some data analytic intuition. Before we provide an outline of the current chapter, we give two examples of the application of PCA in functional neuroimaging.

Example 1

In fMRI one is often interested in the BOLD signal time-series of anatomical regions of interest, for example as the data basis for biophysical modelling approaches (Chapter 43). If a region of interest comprises both voxels that exhibit MR signal increases for a given experimental perturbation and other voxels that exhibit MR signal decreases for the same perturbation, averaging the voxel time-series over space can artificially create an average time-series that exhibits no modulation by the experimental perturbation - despite the fact that both voxel populations were in fact responsive to the experimental perturbation (Figure 34.1A, left panel). This effect can be mitigated by instead summarizing the region's MR signal time-series by the first eigenvector of the region's voxel-by-voxel covariance matrix, sometimes referred to as the first eigenmode (Figure 34.1A, right panel). On the other hand, if the voxel MR signal time-series within a region of interest are spatially coherent, then the average time-series and the first eigenvector of the voxel MR time-series matrix do not differ much.

Example 2

In biophysical modelling approaches for event-related potentials, such as dynamic causal modelling (Chapter 44), the data correspond to a matrix whose dimensions are the number of electrodes and the number of peri-event time-bins. For computational efficiency, this potentially large matrix can be projected onto a smaller matrix of feature timecourses. Only this reduced matrix is then subjected to biophysical modelling using the DCM framework. As an example, the leftmost panel of Figure 34.1B visualizes an event-related potential EEG electrode × data sample matrix. The central panel of Figure 34.1B depicts the feature representation of these data, comprising the five eigenvectors of the data covariance matrix that are associated with the largest variance, and the rightmost panel visualizes the reconstructed data based on these PCA results only. Notably, the reconstructed data based on the PCA-selected features are virtually identical to the original data.

To get at the inner workings of both PCA and SVD we have to revisit some elementary concepts from matrix theory and vector algebra.
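To make the SVD relations in (34.1) and the eigenmode summary of Example 1 concrete, the following is a minimal numpy sketch on synthetic data. The region size, event timecourse, and noise level are hypothetical choices for illustration and are not taken from the chapter or from Figure 34.1.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic region of interest: 100 voxels x 250 scans (TRs).
# Half the voxels respond positively to a boxcar event, half negatively.
n_vox, n_tr = 100, 250
event = np.zeros(n_tr)
event[50:100] = 1.0
event[150:200] = 1.0
signs = np.concatenate([np.ones(n_vox // 2), -np.ones(n_vox // 2)])
Y = np.outer(signs, event) + 0.2 * rng.standard_normal((n_vox, n_tr))

# SVD of the (row-)centered data matrix: Yc = U S V^T, cf. (34.1)
Yc = Y - Y.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(Yc, full_matrices=False)

# Singular values are the square roots of the eigenvalues of Yc Yc^T
evals = np.linalg.eigvalsh(Yc @ Yc.T)[::-1]  # sort descending
assert np.allclose(s[:5] ** 2, evals[:5])

# The spatial average cancels the two voxel populations ...
avg_timecourse = Yc.mean(axis=0)
# ... while the first eigenmode's timecourse (the first right-singular
# vector, i.e. the data projected onto the first spatial eigenvector,
# up to scale) retains the event-related modulation.
eigenmode_timecourse = Vt[0]

print("corr(average, event):   %.2f" % np.corrcoef(avg_timecourse, event)[0, 1])
print("corr(eigenmode, event): %.2f" % abs(np.corrcoef(eigenmode_timecourse, event)[0, 1]))
```

On such data the spatial average correlates only weakly with the event, while the first eigenmode recovers it up to an arbitrary sign; the sign ambiguity is inherent to eigenvectors and singular vectors.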
We proceed as follows: we first review some fundamentals of matrix eigenanalysis, including the notions of eigenvalues, eigenvectors, and diagonalization of real symmetric matrices. We then review some essential prerequisites from vector space theory, including the notions of abstract vector spaces, linear vector combinations, vector space bases, orthogonal and orthonormal bases, vector projections, and vector coordinate transforms. In essence, PCA corresponds to a coordinate transform of a data set onto a basis that is formed by the eigenvectors of its empirical covariance matrix. In this transformed space, the data features have zero covariance and are hence maximally informative. This property can be used to remove redundant features from the data set and hence allows for data compression.

Figure 34.1. PCA applications in functional neuroimaging. (A) Eigenmode analysis as spatial summary measure for region-of-interest timecourse extraction. For the current example, it is assumed that a region of interest comprises two voxel populations, one of which is positively modulated by some temporal event of interest (left panel, upper half of voxels), the other of which is negatively modulated by the same event of interest (left panel, lower half of voxels). The right panel depicts the resulting spatial average, which exhibits no systematic variation with the temporal event of interest, as well as the first eigenmode of the voxel × TR matrix shown on the left, which retains the event-related modulation. (B) PCA for feature selection and dimensionality reduction. The leftmost panel depicts an EEG event-related potential electrode × data samples matrix. Using PCA, these data can be compressed to the feature representation shown in the central panel. Notably, the reconstructed data based on this feature representation are virtually identical to the original data, as shown in the rightmost panel.

34.2 Eigenanalysis

An intuitive understanding of the concepts of eigenanalysis requires familiarity with differential equations. We will thus here strive only for a formal understanding. Let $A \in \mathbb{R}^{m \times m}$ be a square matrix. Any vector $v \in \mathbb{R}^m, v \neq 0$ that fulfils the equation

$$Av = \lambda v \qquad (34.2)$$

for a scalar $\lambda \in \mathbb{R}$ is called an eigenvector of $A$. The scalar $\lambda$ is called an eigenvalue of $A$. Each eigenvector has an associated eigenvalue, and eigenvalues for different eigenvectors can be identical. Note that if $v \in \mathbb{R}^m$ is an eigenvector with eigenvalue $\lambda$, then $av \in \mathbb{R}^m$ with $a \in \mathbb{R}, a \neq 0$ is also an eigenvector with the same eigenvalue $\lambda$. Therefore, one assumes without loss of generality that eigenvectors have length one, i.e., $v^T v = 1$.

Computing eigenvectors and eigenvalues

Eigenvectors and eigenvalues of a matrix $A \in \mathbb{R}^{m \times m}$ can be computed as follows. First, from the definition of eigenvectors and eigenvalues we have

$$Av = \lambda v \Leftrightarrow Av - \lambda v = 0 \Leftrightarrow (A - \lambda I)v = 0. \qquad (34.3)$$

This shows that we are interested in a vector $v \in \mathbb{R}^m$ and a scalar $\lambda \in \mathbb{R}$ such that the matrix product of $(A - \lambda I)$ and $v$ results in the zero vector $0 \in \mathbb{R}^m$. A trivial solution would be to set $v = 0$, but this is not allowed by the definition of an eigenvector. If $v \neq 0$, we must adjust $\lambda$ and $v$ such that $v$ is an element of the nullspace of $A - \lambda I$.
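The eigenvalue equation (34.2) and its nullspace reformulation (34.3) can be checked directly in numpy; the 2 × 2 matrix below is a hypothetical example chosen so the arithmetic is easy to follow.

```python
import numpy as np

# A small symmetric matrix (hypothetical example for illustration).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# np.linalg.eig returns the eigenvalues and unit-length eigenvectors
# as the columns of V, i.e. A V = V diag(lambdas).
lambdas, V = np.linalg.eig(A)

for i in range(len(lambdas)):
    v, lam = V[:, i], lambdas[i]
    # Eigenvalue equation (34.2): A v = lambda v
    assert np.allclose(A @ v, lam * v)
    # Equivalently (34.3): v lies in the nullspace of (A - lambda I)
    assert np.allclose((A - lam * np.eye(2)) @ v, 0.0)
    # Scaling an eigenvector yields an eigenvector with the same eigenvalue
    assert np.allclose(A @ (3.0 * v), lam * (3.0 * v))

print("eigenvalues:", lambdas)  # 3 and 1 for this matrix (order not guaranteed)
```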
The nullspace of a matrix $M \in \mathbb{R}^{m \times m}$, here denoted by $N(M)$, is the set of all vectors $w \in \mathbb{R}^m$ that are mapped onto the zero vector, i.e.,

$$N(M) = \{w \in \mathbb{R}^m \mid Mw = 0\}. \qquad (34.4)$$

If the nullspace of a matrix contains any element other than the zero vector, the matrix is noninvertible (singular). This holds because the zero vector is always mapped onto the zero vector by premultiplication with any matrix. If another vector is also mapped onto the zero vector, we would not know which vector to assign to the zero vector when inverting the matrix multiplication, and hence the matrix cannot be invertible. We know that determinants can be checked to see whether a matrix is invertible or not, and we can make use of this here: if a matrix is not invertible, then its determinant must be zero. Therefore, we are searching for all scalars $\lambda \in \mathbb{R}$ such that

$$\chi_A(\lambda) := \det(A - \lambda I) = 0. \qquad (34.5)$$

The expression $\det(A - \lambda I)$, conceived as a function of $\lambda$, is referred to as the characteristic polynomial of $A$ because, written in full, it corresponds to a polynomial in $\lambda$. Formulation of the characteristic polynomial then allows for the following strategy to compute eigenvalues and eigenvectors of matrices:

1. Solve $\chi_A(\lambda) = 0$ for its zero-crossings (also referred to as roots) $\lambda_i^*, i = 1, 2, \dots$ The roots of the characteristic polynomial are the eigenvalues of $A$.

2. Substitute the values $\lambda_i^*$ in (34.3), which yields the systems of linear equations

$$(A - \lambda_i^* I)\, v_i = 0 \qquad (34.6)$$

and solve these systems for the associated eigenvectors $v_i, i = 1, 2, \dots$

For small matrices with nice properties such as symmetry, the above strategy can be applied by hand. In practice, matrices are usually larger than 3 × 3, and eigenanalysis problems are usually solved using numerical computing.

Eigenvalues and eigenvectors of symmetric matrices

We next consider how eigenvalues and eigenvectors can be used to decompose or diagonalize matrices. To this end, assume that the square matrix $A \in \mathbb{R}^{m \times m}$ is symmetric, for example because $A$ is a covariance matrix. A corollary of a fundamental result from linear algebra, known as the spectral theorem, asserts that symmetric matrices of size $m \times m$ have $m$ real eigenvalues $\lambda_1, \dots, \lambda_m$ (not necessarily distinct) with $m$ associated mutually orthogonal eigenvectors $q_1, \dots, q_m \in \mathbb{R}^m$.
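As a numerical counterpart to the spectral theorem, the sketch below diagonalizes a symmetric matrix with numpy's symmetric eigensolver; the covariance matrix is built from hypothetical random data rather than from any data set used in this chapter.

```python
import numpy as np

rng = np.random.default_rng(1)

# A symmetric matrix, constructed as the empirical covariance matrix
# of some hypothetical data (m = 4 features, 500 samples).
X = rng.standard_normal((500, 4))
A = np.cov(X, rowvar=False)          # A is 4 x 4 and symmetric

# eigh is specialized for symmetric matrices: it returns real
# eigenvalues (in ascending order) and orthonormal eigenvectors.
lambdas, Q = np.linalg.eigh(A)

# The eigenvectors form an orthonormal basis: Q^T Q = I ...
assert np.allclose(Q.T @ Q, np.eye(4))
# ... and they diagonalize A: A = Q diag(lambda) Q^T
assert np.allclose(A, Q @ np.diag(lambdas) @ Q.T)
```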