Practical Linear Algebra: A Geometry Toolbox, Fourth Edition
Chapter 4: Changing Shapes: Linear Maps in 2D

Gerald Farin & Dianne Hansford
A K Peters/CRC Press, www.farinhansford.com/books/pla
(c) 2021 Farin & Hansford

Outline

1. Introduction to Linear Maps in 2D
2. Skew Target Boxes
3. The Matrix Form
4. Linear Spaces
5. Scalings
6. Reflections
7. Rotations
8. Shears
9. Projections
10. Application: Free-form Deformations
11. Areas and Linear Maps: Determinants
12. Composing Linear Maps
13. More on Matrix Multiplication
14. Matrix Arithmetic Rules
15. WYSK

Introduction to Linear Maps in 2D

2D linear maps (a rotation and a scaling) applied repeatedly to a square illustrate the theme of this chapter. Geometry has two parts:

1. the description of the objects,
2. how these objects can be changed (transformed).

Transformations are also called maps, and they may be described using the tools of matrix operations: linear maps. Matrices were first introduced by H. Grassmann in 1844 and became the basis of linear algebra.

Skew Target Boxes

Revisit the mapping of the unit square to a rectangular target box and examine the part of the mapping that is a linear map. The unit square is defined by $e_1$ and $e_2$, and a vector $v$ in the $[e_1, e_2]$-system is

$$v = v_1 e_1 + v_2 e_2.$$

Now $v$ is mapped to a vector $v'$ by

$$v' = v_1 a_1 + v_2 a_2,$$

which duplicates the $[e_1, e_2]$-geometry in the $[a_1, a_2]$-system. This is a change of basis. The two equations above are linear combinations.

Example: take the $[a_1, a_2]$-coordinate system with the same origin and

$$a_1 = \begin{pmatrix} 2 \\ 1 \end{pmatrix}, \qquad a_2 = \begin{pmatrix} -2 \\ 4 \end{pmatrix}.$$

Given $v = \begin{pmatrix} 1/2 \\ 1 \end{pmatrix}$ in the $[e_1, e_2]$-system,

$$v' = \frac{1}{2} \begin{pmatrix} 2 \\ 1 \end{pmatrix} + 1 \cdot \begin{pmatrix} -2 \\ 4 \end{pmatrix} = \begin{pmatrix} -1 \\ 9/2 \end{pmatrix}.$$

So $v'$ has components $(1/2, 1)^T$ with respect to the $[a_1, a_2]$-system and components $(-1, 9/2)^T$ with respect to the $[e_1, e_2]$-system.

The Matrix Form

The components of a subscripted vector are written with a double subscript,

$$a_1 = \begin{pmatrix} a_{1,1} \\ a_{2,1} \end{pmatrix},$$

where the vector component index precedes the vector subscript. The components of $v'$ in the $[e_1, e_2]$-system are

$$\begin{pmatrix} -1 \\ 9/2 \end{pmatrix} = \frac{1}{2} \begin{pmatrix} 2 \\ 1 \end{pmatrix} + 1 \cdot \begin{pmatrix} -2 \\ 4 \end{pmatrix}.$$

Writing this linear combination in matrix notation,

$$\begin{pmatrix} -1 \\ 9/2 \end{pmatrix} = \begin{pmatrix} 2 & -2 \\ 1 & 4 \end{pmatrix} \begin{pmatrix} 1/2 \\ 1 \end{pmatrix}.$$

A $2 \times 2$ matrix has 2 rows and 2 columns, defined as the vectors $a_1$ and $a_2$.

In general,

$$v' = \begin{pmatrix} a_{1,1} & a_{1,2} \\ a_{2,1} & a_{2,2} \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = Av,$$

where $A$ is a $2 \times 2$ matrix whose elements $a_{1,1}$ and $a_{2,2}$ form the diagonal. $v'$ is the image of $v$, and $v$ is the pre-image of $v'$; $v'$ is in the range of the map, and $v$ is in the domain of the map.

The product $Av$ has two components:

$$Av = v_1 a_1 + v_2 a_2 = \begin{pmatrix} v_1 a_{1,1} + v_2 a_{1,2} \\ v_1 a_{2,1} + v_2 a_{2,2} \end{pmatrix}.$$

Each component is obtained as a dot product between the corresponding row of the matrix and $v$. Example:

$$\begin{pmatrix} 0 & -1 \\ 2 & 4 \end{pmatrix} \begin{pmatrix} -1 \\ 4 \end{pmatrix} = \begin{pmatrix} 0 \cdot (-1) + (-1) \cdot 4 \\ 2 \cdot (-1) + 4 \cdot 4 \end{pmatrix} = \begin{pmatrix} -4 \\ 14 \end{pmatrix}.$$

The column space of $A$ is the set of all $v'$ formed as linear combinations of the columns of $A$.

The $[e_1, e_2]$-system can be interpreted as a matrix with columns $e_1$ and $e_2$, the $2 \times 2$ identity matrix:

$$[e_1, e_2] \equiv \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}.$$

Falk's scheme is a neat way to write matrix-times-vector, with the vector placed above and to the right of the matrix so each result entry lines up with its row and column; for example,

$$\begin{pmatrix} 2 & -2 \\ 1 & 4 \end{pmatrix} \begin{pmatrix} 2 \\ 1/2 \end{pmatrix} = \begin{pmatrix} 3 \\ 4 \end{pmatrix}.$$

Here $(2 \times 2)(2 \times 1)$ results in a $(2 \times 1)$ vector: the interior dimensions (both 2) must be identical, and the outer dimensions (2 and 1) indicate the size of the resulting vector or matrix.
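To make the two readings of the product $Av$ concrete, here is a minimal numpy sketch (not from the book; the variable names are ours) that computes $v'$ for the example matrix above, both as a linear combination of the columns $a_1, a_2$ and as row-by-row dot products:

```python
import numpy as np

# Columns of A from the skew-target-box example above.
a1 = np.array([2.0, 1.0])
a2 = np.array([-2.0, 4.0])
A = np.column_stack([a1, a2])     # A = [[2, -2], [1, 4]]
v = np.array([0.5, 1.0])          # coordinates in the [e1, e2]-system

# Column view: v' = v1*a1 + v2*a2, a linear combination of the columns.
v_prime_columns = v[0] * a1 + v[1] * a2

# Row view: each component of Av is a dot product of a row of A with v.
v_prime_rows = np.array([A[0, :] @ v, A[1, :] @ v])

assert np.allclose(v_prime_columns, A @ v)
assert np.allclose(v_prime_rows, A @ v)
print(A @ v)                      # [-1.   4.5], i.e., (-1, 9/2)
```

Both views give the same vector; the column view is the change-of-basis reading, the row view is the dot-product reading.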
Matrix addition:

$$\begin{pmatrix} a_{1,1} & a_{1,2} \\ a_{2,1} & a_{2,2} \end{pmatrix} + \begin{pmatrix} b_{1,1} & b_{1,2} \\ b_{2,1} & b_{2,2} \end{pmatrix} = \begin{pmatrix} a_{1,1} + b_{1,1} & a_{1,2} + b_{1,2} \\ a_{2,1} + b_{2,1} & a_{2,2} + b_{2,2} \end{pmatrix}.$$

The matrices must be of the same dimensions. Distributive law: $Av + Bv = (A + B)v$.

The transpose matrix, denoted $A^T$, is formed by interchanging the rows and columns of $A$:

$$A = \begin{pmatrix} 1 & -2 \\ 3 & 5 \end{pmatrix} \quad \text{then} \quad A^T = \begin{pmatrix} 1 & 3 \\ -2 & 5 \end{pmatrix}.$$

We may think of a vector $v$ as a matrix:

$$v = \begin{pmatrix} -1 \\ 4 \end{pmatrix} \quad \text{then} \quad v^T = \begin{pmatrix} -1 & 4 \end{pmatrix}.$$

Identities:

$$[A + B]^T = A^T + B^T, \qquad [A^T]^T = A, \qquad [cA]^T = cA^T.$$

A symmetric matrix satisfies $A = A^T$. Example:

$$\begin{pmatrix} 5 & 8 \\ 8 & 1 \end{pmatrix}.$$

There are no restrictions on the diagonal elements; every other element equals the element about the diagonal with reversed indices. For a $2 \times 2$ matrix: $a_{2,1} = a_{1,2}$. The $2 \times 2$ zero matrix:

$$\begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}.$$

Matrix rank: the number of linearly independent column (row) vectors. For a $2 \times 2$ matrix, the columns define an $[a_1, a_2]$-system.

- Full rank = 2: $a_1$ and $a_2$ are linearly independent.
- Rank deficient: a matrix that does not have full rank. If $a_1$ and $a_2$ are linearly dependent, the matrix has rank 1; such a matrix is also called singular.
- The only matrix with rank zero is the zero matrix.

The ranks of $A$ and $A^T$ are equal.

Linear Spaces

2D linear maps act on vectors in 2D linear spaces, also known as 2D vector spaces. The standard operations in a linear space are addition and scalar multiplication of vectors; $v' = v_1 a_1 + v_2 a_2$ expresses the linearity property. Linear maps, i.e., matrices, are characterized by the preservation of linear combinations:

$$A(au + bv) = aAu + bAv.$$

Let's break this statement down into its two basic elements: scalar multiplication and addition.

Matrices preserve scalings: $A(cu) = cAu$. Example: with

$$A = \begin{pmatrix} -1 & 1/2 \\ 0 & -1/2 \end{pmatrix}, \qquad u = \begin{pmatrix} 1 \\ 2 \end{pmatrix}, \qquad v = \begin{pmatrix} -1 \\ 4 \end{pmatrix},$$

let $c = 2$:

$$\begin{pmatrix} -1 & 1/2 \\ 0 & -1/2 \end{pmatrix} \left( 2 \begin{pmatrix} 1 \\ 2 \end{pmatrix} \right) = 2 \begin{pmatrix} -1 & 1/2 \\ 0 & -1/2 \end{pmatrix} \begin{pmatrix} 1 \\ 2 \end{pmatrix} = \begin{pmatrix} 0 \\ -2 \end{pmatrix}.$$

Matrices preserve sums (distributive law): $A(u + v) = Au + Av$.

Matrices preserve linear combinations:

$$A(3u + 2v) = \begin{pmatrix} -1 & 1/2 \\ 0 & -1/2 \end{pmatrix} \left( 3 \begin{pmatrix} 1 \\ 2 \end{pmatrix} + 2 \begin{pmatrix} -1 \\ 4 \end{pmatrix} \right) = \begin{pmatrix} 6 \\ -7 \end{pmatrix}$$

and

$$3Au + 2Av = 3 \begin{pmatrix} -1 & 1/2 \\ 0 & -1/2 \end{pmatrix} \begin{pmatrix} 1 \\ 2 \end{pmatrix} + 2 \begin{pmatrix} -1 & 1/2 \\ 0 & -1/2 \end{pmatrix} \begin{pmatrix} -1 \\ 4 \end{pmatrix} = \begin{pmatrix} 6 \\ -7 \end{pmatrix}.$$
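As a quick numerical check of the preservation of linear combinations, here is a short numpy sketch using the slides' own $A$, $u$, and $v$ (an illustration of ours, not code from the book):

```python
import numpy as np

A = np.array([[-1.0,  0.5],
              [ 0.0, -0.5]])
u = np.array([1.0, 2.0])
v = np.array([-1.0, 4.0])

# Map the combination, then combine the mapped vectors: both give (6, -7).
lhs = A @ (3 * u + 2 * v)
rhs = 3 * (A @ u) + 2 * (A @ v)

assert np.allclose(lhs, rhs)      # A(3u + 2v) = 3Au + 2Av
print(lhs)                        # [ 6. -7.]
```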
Scalings

Uniform scaling:

$$v' = \begin{pmatrix} 1/2 & 0 \\ 0 & 1/2 \end{pmatrix} v = \begin{pmatrix} v_1/2 \\ v_2/2 \end{pmatrix}.$$

General scaling:

$$v' = \begin{pmatrix} s_{1,1} & 0 \\ 0 & s_{2,2} \end{pmatrix} v, \qquad \text{for example} \qquad v' = \begin{pmatrix} 1/2 & 0 \\ 0 & 2 \end{pmatrix} v.$$

Scaling affects the area of the object:

- scaling by $s_{1,1}$ in the $e_1$-direction changes the area by a factor of $s_{1,1}$;
- similarly for $s_{2,2}$ and the $e_2$-direction.

The total effect is a factor of $s_{1,1} s_{2,2}$. The action of a matrix is illustrated by its action ellipse.

Reflections

A special scaling:

$$v' = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} v = \begin{pmatrix} v_1 \\ -v_2 \end{pmatrix}.$$

Here $v$ is reflected about the $e_1$-axis, the line $x_2 = 0$. A reflection maps each vector about a line through the origin.

Reflection about the line $x_1 = x_2$:

$$v' = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} v = \begin{pmatrix} v_2 \\ v_1 \end{pmatrix}.$$

Reflections change the sign of the area due to a change in orientation:

- rotating $e_1$ into $e_2$ moves in a counterclockwise direction;
- rotating $a_1$ into $a_2$ moves in a clockwise direction.

Is this a reflection?

$$v' = \begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix} v.$$

Check the orientation as $a_1$ rotates to $a_2$: counterclockwise, the same as the $e_1, e_2$ orientation. This is a 180° rotation, not a reflection; its action ellipse is a circle.

Rotations

Rotate $e_1$ and $e_2$ around the origin to

$$e_1' = \begin{pmatrix} \cos\alpha \\ \sin\alpha \end{pmatrix} \qquad \text{and} \qquad e_2' = \begin{pmatrix} -\sin\alpha \\ \cos\alpha \end{pmatrix}.$$

These are the column vectors of the rotation matrix

$$R = \begin{pmatrix} \cos\alpha & -\sin\alpha \\ \sin\alpha & \cos\alpha \end{pmatrix}.$$

The rotation matrix for $\alpha = 45°$:

$$R = \begin{pmatrix} \sqrt{2}/2 & -\sqrt{2}/2 \\ \sqrt{2}/2 & \sqrt{2}/2 \end{pmatrix}.$$

Rotations form a special class of transformations called rigid body motions. The action ellipse is a circle, and rotations do not change areas.

Shears

A shear maps a rectangle to a parallelogram. Example:

$$v = \begin{pmatrix} 0 \\ 1 \end{pmatrix} \longrightarrow v' = \begin{pmatrix} d_1 \\ 1 \end{pmatrix}.$$

The shear in matrix form:

$$\begin{pmatrix} d_1 \\ 1 \end{pmatrix} = \begin{pmatrix} 1 & d_1 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 0 \\ 1 \end{pmatrix}.$$

Application: generating italic fonts from standard ones.

A shear along the $e_1$-axis applied to an arbitrary vector:

$$v' = \begin{pmatrix} 1 & d_1 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = \begin{pmatrix} v_1 + v_2 d_1 \\ v_2 \end{pmatrix}.$$

A shear along the $e_2$-axis:

$$v' = \begin{pmatrix} 1 & 0 \\ d_2 & 1 \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = \begin{pmatrix} v_1 \\ v_1 d_2 + v_2 \end{pmatrix}.$$

What is the shear that achieves

$$v = \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} \longrightarrow v' = \begin{pmatrix} v_1 \\ 0 \end{pmatrix}?$$

A shear parallel to the $e_2$-axis:

$$v' = \begin{pmatrix} v_1 \\ 0 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ -v_2/v_1 & 1 \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \end{pmatrix}.$$

Shears do not change areas: in the rectangle-to-parallelogram sketch, both shapes have the same base and the same height.

Projections

Parallel projections project all vectors in a parallel direction; in 2D, all vectors are projected onto a line. Example:

$$\begin{pmatrix} 3 \\ 0 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} 3 \\ 1 \end{pmatrix}.$$

An orthogonal projection meets the line at an angle of incidence of 90°; otherwise the projection is oblique. A perspective projection has a projection direction that is not constant, so it is not a linear map.

Orthogonal projections are important for best approximation; oblique projections are important in application fields such as computer graphics and architecture. The main property of a projection is that it reduces dimensionality. The action ellipse is a straight line segment, covered twice.

Construction: choose a unit vector $u$ to define the line onto which to project. The projections of $e_1$ and $e_2$ are the column vectors of the matrix $A$:

$$a_1 = \frac{u \cdot e_1}{\|u\|^2} u = u_1 u, \qquad a_2 = \frac{u \cdot e_2}{\|u\|^2} u = u_2 u,$$

so

$$A = \begin{pmatrix} u_1 u & u_2 u \end{pmatrix} = u u^T.$$

Example: $u = [\cos 30° \;\; \sin 30°]^T$.

Since the columns of the projection matrix $A = u u^T$ are linearly dependent, $A$ has rank one. The map reduces dimensionality, so the area after the map is zero. A projection matrix is idempotent: $A = AA$; geometrically, once a vector has been projected onto a line, applying the same projection leaves the result unchanged. Example:

$$u = \begin{pmatrix} 1/\sqrt{2} \\ 1/\sqrt{2} \end{pmatrix} \quad \text{then} \quad A = \begin{pmatrix} 0.5 & 0.5 \\ 0.5 & 0.5 \end{pmatrix}.$$

Action of the projection matrix on …
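The projection construction is easy to verify numerically. The following sketch (our own illustration, assuming numpy) builds $A = u u^T$ for $u = (\cos 30°, \sin 30°)^T$ and checks the rank-one and idempotence properties stated above:

```python
import numpy as np

theta = np.radians(30.0)
u = np.array([np.cos(theta), np.sin(theta)])   # unit vector defining the line

A = np.outer(u, u)                             # A = u u^T; columns are u1*u and u2*u

assert np.allclose(A @ A, A)                   # idempotent: projecting twice = projecting once
assert np.linalg.matrix_rank(A) == 1           # columns linearly dependent: rank one
assert np.isclose(np.linalg.det(A), 0.0)       # area after the map is zero

w = np.array([3.0, 1.0])
print(A @ w, (u @ w) * u)                      # both give the projection of w onto the line
```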