Identities Generalizing the Theorems of Pappus and Desargues



Roger D. Maddux, Department of Mathematics, Iowa State University, Ames, IA 50011, USA; [email protected]

Citation: Maddux, R.D. Identities Generalizing the Theorems of Pappus and Desargues. Symmetry 2021, 13, 1382. https://doi.org/10.3390/sym13081382

Abstract: The theorems of Pappus and Desargues (for the projective plane over a field) are generalized here by two identities involving determinants and cross products. These identities are proved to hold in the three-dimensional vector space over a field. They are closely related to the Arguesian identity in lattice theory and to Cayley–Grassmann identities in invariant theory.

Keywords: Pappus theorem; Desargues theorem; projective plane over a field

1. Introduction

Identities that generalize the theorems of Pappus and Desargues come from two primary sources: lattice theory and invariant theory. Schützenberger [1] showed in 1945 that the theorem of Desargues can be expressed as an identity (the Arguesian identity) that holds in the lattice of subspaces of a projective geometry just in case the geometry is Desarguesian. Jónsson [2] showed in 1953 that the Arguesian identity holds in every lattice of commuting equivalence relations. The further literature on Arguesian lattices includes [3–22]. Two identities similar to those in this paper were published in 1974 [23] (with different meanings for the operations), as examples in the context of invariant theory. For proofs, see [24] (p. 150, p. 156). Further literature on such identities includes [25–39].

The identities in lattice theory use join and meet and apply to lattices. The identities in invariant theory use the wedge (exterior) product (corresponding to join), the bracket (a multilinear alternating operation like the determinant), and an operation corresponding to meet.
Academic Editor: Ivan Chajda
Received: 1 July 2021; Accepted: 22 July 2021; Published: 29 July 2021
Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

The identities apply to Cayley–Grassmann algebras (exterior algebras with a bracket operation). The connections between the two kinds of identities are not straightforward and are studied in several of the papers listed above.

The identities presented in this paper have a more mundane origin. In the spring semester of 1997, the author taught the second semester of college geometry out of the text by Ryan [40]. The text's analytical approach included the fact that three points (or lines) in the projective plane over a field are collinear (or concurrent) if and only if the determinant of their homogeneous coordinates is zero; see (6) below. Pappus and Desargues both assert that if certain collinearities or concurrences hold, then others do; that is, if some numbers are zero, then so are others. Clearly, then, there must be algebraic formulas that assert such dependences between determinants and from which the two theorems can be deduced. Such formulas were found and included in the course notes [41]; parts of those notes appear in this paper.

Section 2 presents the two identities along with definitions of their operations. They use cross products and determinants rather than the joins, meets, and brackets of invariant theory [24]. Section 3 is a review of the projective plane over a field. Section 4 derives the theorems of Pappus and Desargues from the identities. Section 5 shows how the cross product relates to join and meet, followed by a Discussion, an Acknowledgement, and an Appendix A containing proofs of the identities.
Good survey papers on the theorems of Pappus and Desargues include [42,43].

Copyright: © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

2. Cross Products and Determinants

Let F be a field. The elements of F³ are written as column vectors. Let

$$r \in F, \quad a = \begin{bmatrix} x_0 \\ x_1 \\ x_2 \end{bmatrix} \in F^3, \quad b = \begin{bmatrix} y_0 \\ y_1 \\ y_2 \end{bmatrix} \in F^3, \quad c = \begin{bmatrix} z_0 \\ z_1 \\ z_2 \end{bmatrix} \in F^3.$$

1. The scalar product of r and a is

$$ra = \begin{bmatrix} r x_0 \\ r x_1 \\ r x_2 \end{bmatrix} \in F^3.$$

2. The inner product of a and b is

$$\langle a, b \rangle = x_0 y_0 + x_1 y_1 + x_2 y_2 \in F.$$

3. The cross product of a and b is

$$a \times b = \begin{bmatrix} x_1 y_2 - x_2 y_1 \\ x_2 y_0 - x_0 y_2 \\ x_0 y_1 - x_1 y_0 \end{bmatrix} \in F^3.$$

4. The determinant of a, b, c is

$$\det[a, b, c] = \begin{vmatrix} x_0 & y_0 & z_0 \\ x_1 & y_1 & z_1 \\ x_2 & y_2 & z_2 \end{vmatrix} = x_0 y_1 z_2 + x_2 y_0 z_1 + x_1 y_2 z_0 - x_0 y_2 z_1 - x_2 y_1 z_0 - x_1 y_0 z_2 \in F.$$

Identity (1) in the next theorem generalizes the theorem of Pappus and identity (2) generalizes the theorem of Desargues. They can be proved by expanding both sides in 18 variables (3 for each of 6 vectors) and obtaining the same result. A proof of Theorem 1 using properties of cross products and determinants is in Appendix A. In Theorem 2, the theorems of Pappus and Desargues are proved using Theorem 1.

Theorem 1. For any field F and any a, b, c, a′, b′, c′ ∈ F³,

$$\det[(a \times b') \times (a' \times b),\ (b \times c') \times (b' \times c),\ (c \times a') \times (c' \times a)]$$
$$= \det[a', b', b] \det[b', c', c] \det[c', a', a] \det[a, b, c] + \det[b, a, a'] \det[a, c, c'] \det[c, b, b'] \det[a', b', c'], \tag{1}$$

$$\det[(a \times b) \times (a' \times b'),\ (b \times c) \times (b' \times c'),\ (c \times a) \times (c' \times a')] = \det[a \times a', b \times b', c \times c'] \det[a, b, c] \det[a', b', c']. \tag{2}$$

3. The Projective Plane over F

The points A, B, … of the projective plane over F are the one-dimensional subspaces of the three-dimensional vector space F³ over F, and its lines l, m, … are the two-dimensional subspaces of F³.
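Because both sides of identities (1) and (2) in Theorem 1 above are polynomials in 18 variables, they can be spot-checked by exact integer computation. The following sketch is not part of the paper (the helper names are mine); it tests both identities on random integer vectors using the component formulas of Section 2:

```python
import random

def cross(a, b):
    # Cross product in F^3, componentwise as in Section 2
    x0, x1, x2 = a
    y0, y1, y2 = b
    return (x1*y2 - x2*y1, x2*y0 - x0*y2, x0*y1 - x1*y0)

def det(a, b, c):
    # Determinant of the matrix with columns a, b, c
    x0, x1, x2 = a
    y0, y1, y2 = b
    z0, z1, z2 = c
    return (x0*y1*z2 + x2*y0*z1 + x1*y2*z0
            - x0*y2*z1 - x2*y1*z0 - x1*y0*z2)

random.seed(0)
rand_vec = lambda: tuple(random.randint(-5, 5) for _ in range(3))

for _ in range(100):
    # a1, b1, c1 play the roles of a', b', c'
    a, b, c, a1, b1, c1 = (rand_vec() for _ in range(6))
    # Identity (1), generalizing Pappus
    lhs1 = det(cross(cross(a, b1), cross(a1, b)),
               cross(cross(b, c1), cross(b1, c)),
               cross(cross(c, a1), cross(c1, a)))
    rhs1 = (det(a1, b1, b) * det(b1, c1, c) * det(c1, a1, a) * det(a, b, c)
            + det(b, a, a1) * det(a, c, c1) * det(c, b, b1) * det(a1, b1, c1))
    assert lhs1 == rhs1
    # Identity (2), generalizing Desargues
    lhs2 = det(cross(cross(a, b), cross(a1, b1)),
               cross(cross(b, c), cross(b1, c1)),
               cross(cross(c, a), cross(c1, a1)))
    rhs2 = det(cross(a, a1), cross(b, b1), cross(c, c1)) * det(a, b, c) * det(a1, b1, c1)
    assert lhs2 == rhs2
```

Integer arithmetic is exact, so each passing sample is a genuine instance of the identities over the rationals; the symbolic proof in Appendix A covers all fields at once.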
A point A is on a line l if A is a subset of l:

$$A \text{ is on } l \iff A \subseteq l. \tag{3}$$

Every non-zero vector in F³ determines both a point and a line. Suppose 0 ≠ a ∈ F³. Then A = {ra : r ∈ F} is the 1-dimensional subspace generated by a, so A is a point in the projective plane over F, and l = {x : ⟨x, a⟩ = 0} is the 2-dimensional subspace of vectors perpendicular to a, so l is a line in the projective plane over F. The vector a is referred to as homogeneous coordinates for the point A and the line l. The incidence relation between a point and a line holds when their homogeneous coordinates are perpendicular, for if 0 ≠ a, b ∈ F³ then

$$\text{point } \{ra : r \in F\} \text{ is on line } \{x : \langle x, b \rangle = 0\} \iff \langle a, b \rangle = 0. \tag{4}$$

An axiom of projective geometry (every two points are on a unique line) takes the following form in the projective plane over a field: every two 1-dimensional subspaces are contained in a unique 2-dimensional subspace. The 2-dimensional subspace containing two 1-dimensional subspaces is the set of sums of two vectors, one from each subspace. To notate this, we extend the notion of addition from individual vectors to sets of vectors. For arbitrary sets of vectors A, B ⊆ F³, let A + B be the set of their sums:

$$A + B = \{a + b : a \in A,\ b \in B\}. \tag{5}$$

If A and B are distinct points, then there are linearly independent non-zero vectors a, b ∈ F³ such that A = {ra : r ∈ F} and B = {sb : s ∈ F}. The unique line containing both A and B is the 2-dimensional subspace generated by a and b, consisting of all their linear combinations,

$$A + B = \{ra + sb : r, s \in F\}.$$

The cross product a × b is perpendicular to both a and b, so any vector perpendicular to a × b will be in the unique line A + B containing A and B (and conversely), that is,

$$A + B = \{x : \langle x, a \times b \rangle = 0\}.$$

Thus, points with homogeneous coordinates a and b determine a line with homogeneous coordinates a × b.
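Concretely, joining two points is a single cross product, and incidence (4) is an inner-product test. A small sketch, not from the paper (the helper names and sample coordinates are mine):

```python
def cross(a, b):
    # Cross product in F^3, componentwise as in Section 2
    x0, x1, x2 = a
    y0, y1, y2 = b
    return (x1*y2 - x2*y1, x2*y0 - x0*y2, x0*y1 - x1*y0)

def inner(a, b):
    # Inner product <a, b>
    return sum(x * y for x, y in zip(a, b))

# Homogeneous coordinates of two distinct points A and B
a = (1, 2, 3)
b = (2, 0, 1)

# Homogeneous coordinates of the line A + B joining them
line = cross(a, b)  # (2, 5, -4)

# Incidence (4): a point is on a line iff the inner product
# of their homogeneous coordinates is zero
assert inner(a, line) == 0  # A is on A + B
assert inner(b, line) == 0  # B is on A + B
```

Any non-zero scalar multiple of `line` names the same line, just as any non-zero scalar multiple of `a` names the same point.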
Suppose two lines, l and m, have homogeneous coordinates a, b ∈ F³, respectively:

$$l = \{x : \langle x, a \rangle = 0\}, \quad m = \{x : \langle x, b \rangle = 0\}.$$

Note that a and b are linearly independent, since otherwise l = m (and there is only one line, not two, as postulated). An axiom for projective planes is that distinct lines meet at a unique point. In the projective plane over a field, this axiom asserts the fact that distinct 2-dimensional subspaces contain a unique 1-dimensional subspace, namely their intersection. Thus, the intersection l ∩ m of the lines l and m is the unique point that occurs on both lines. When l and m have coordinates a and b, their intersection is the 1-dimensional subspace of F³ generated by any non-zero vector perpendicular to both a and b, such as a × b:

$$l \cap m = \{r(a \times b) : r \in F\}.$$

Thus, two lines with homogeneous coordinates a and b meet at a point with homogeneous coordinates a × b.

Suppose 0 ≠ a, b, c ∈ F³. Let A, B, and C be the points, and l, m, and n the lines, with homogeneous coordinates a, b, and c, respectively:

$$A = \{ra : r \in F\}, \quad B = \{rb : r \in F\}, \quad C = \{rc : r \in F\},$$
$$l = \{x : \langle x, a \rangle = 0\}, \quad m = \{x : \langle x, b \rangle = 0\}, \quad n = \{x : \langle x, c \rangle = 0\}.$$

By definition, A, B, C are collinear iff A + B = A + C = B + C, and l, m, n are concurrent iff l ∩ m ∩ n is a point.
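Dually, meeting two lines is also a single cross product, and collinearity of three points (equivalently, concurrency of three lines) reduces to a vanishing determinant of their homogeneous coordinates. A sketch, not from the paper (the helper names and sample coordinates are mine):

```python
def cross(a, b):
    # Cross product in F^3, componentwise as in Section 2
    x0, x1, x2 = a
    y0, y1, y2 = b
    return (x1*y2 - x2*y1, x2*y0 - x0*y2, x0*y1 - x1*y0)

def det(a, b, c):
    # Determinant of the matrix with columns a, b, c
    x0, x1, x2 = a
    y0, y1, y2 = b
    z0, z1, z2 = c
    return (x0*y1*z2 + x2*y0*z1 + x1*y2*z0
            - x0*y2*z1 - x2*y1*z0 - x1*y0*z2)

# Two distinct lines l and m with homogeneous coordinates u and v
u = (1, 0, -1)
v = (0, 1, -1)

# Homogeneous coordinates of the point where they meet
p = cross(u, v)  # (1, 1, 1)

# The meet lies on both lines (incidence via inner product)
assert sum(x * y for x, y in zip(p, u)) == 0
assert sum(x * y for x, y in zip(p, v)) == 0

# Collinearity: a, b, and a + b span only a 2-dimensional subspace,
# so the determinant of their coordinates vanishes
a, b = (1, 2, 3), (2, 0, 1)
c = tuple(x + y for x, y in zip(a, b))
assert det(a, b, c) == 0
```

The determinant test is the bridge to Section 4: Pappus and Desargues each assert that certain of these determinants vanish whenever certain others do.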