Notes for Math 152: Linear Systems, Spring 2013

Total pages: 16

File type: PDF, size: 1020 KB

Notes for Math 152: Linear Systems, Spring 2013. Richard Froese and Brian Wetton. © 2010 Richard Froese and Brian Wetton. Permission is granted to make and distribute copies of this document provided the copyright notice and this permission notice are preserved on all copies.

Contents

1 Introduction .... 5
  1.1 Course Goals .... 5
  1.2 About the Subject .... 6
    1.2.1 Connection to Geometry .... 6
    1.2.2 Linear Systems .... 6
    1.2.3 Eigen-analysis .... 7
  1.3 About These Notes .... 8
  1.4 About The Computer Labs .... 9
2 Vectors and Geometry .... 10
  2.1 Chapter Introduction .... 10
  2.2 Vectors .... 10
    2.2.1 Multiplication by a number and vector addition .... 11
    2.2.2 Co-ordinates .... 12
    2.2.3 Properties of vector addition and scalar multiplication .... 16
    2.2.4 MATLAB: basic scalar and vector operations .... 17
    2.2.5 Problems .... 18
  2.3 Geometrical Aspects of Vectors .... 19
    2.3.1 Length of a vector .... 19
    2.3.2 The dot product .... 19
    2.3.3 Projections .... 22
    2.3.4 MATLAB: norm and dot commands .... 23
    2.3.5 Problems .... 23
  2.4 Determinants and the Cross Product .... 25
    2.4.1 The determinant in two and three dimensions .... 25
    2.4.2 The cross product .... 27
    2.4.3 The triple product and the determinant in three dimensions .... 30
    2.4.4 MATLAB: assigning matrices and det and cross commands .... 31
    2.4.5 MATLAB: generating scripts with the MATLAB editor .... 31
    2.4.6 MATLAB: floating point representation of real numbers .... 32
    2.4.7 Problems .... 33
  2.5 Lines and Planes .... 34
    2.5.1 Describing linear sets .... 34
    2.5.2 Lines in two dimensions: Parametric form .... 34
    2.5.3 Lines in two dimensions: Equation form .... 35
    2.5.4 Lines in three dimensions: Parametric form .... 35
    2.5.5 Lines in three dimensions: Equation form .... 36
    2.5.6 Planes in three dimensions: Parametric form .... 37
    2.5.7 Planes in three dimensions: Equation form .... 37
    2.5.8 Problems .... 37
  2.6 Introduction to Linear Systems .... 40
    2.6.1 Description of points and the geometry of solutions to systems of equations .... 40
    2.6.2 Describing the whole plane in two dimensions and all of space in three dimensions .... 42
    2.6.3 Linear dependence and independence .... 43
    2.6.4 Problems .... 44
  2.7 Additional Topics .... 45
    2.7.1 Application: rotational motion .... 45
    2.7.2 Application: 3-D graphics .... 46
  2.8 Solutions to Chapter Problems .... 49
3 Solving Linear Systems .... 63
  3.1 Linear Systems .... 63
    3.1.1 General Form of Linear Systems .... 63
    3.1.2 Solving Linear Systems by Substitution .... 63
    3.1.3 Elementary row (equation) operations .... 64
    3.1.4 Augmented Matrices .... 65
    3.1.5 Problems .... 66
  3.2 Gaussian Elimination .... 67
    3.2.1 Using MATLAB for row reductions .... 73
    3.2.2 Problems .... 74
  3.3 Homogeneous Equations .... 76
    3.3.1 Properties of solutions of homogeneous systems .... 77
    3.3.2 Connection of solutions to homogeneous and inhomogeneous systems .... 77
    3.3.3 Problems .... 79
  3.4 Geometric Applications .... 80
    3.4.1 Problems .... 81
  3.5 Resistor Networks .... 82
    3.5.1 Elements of Basic Circuits .... 82
    3.5.2 Two Simple Examples Made Complicated .... 84
    3.5.3 Loop Currents .... 86
    3.5.4 Alternate Presentation of Resistor Networks .... 90
    3.5.5 Problems .... 93
  3.6 Additional Topics .... 96
    3.6.1 Quadratic Functions .... 96
    3.6.2 Least squares fit .... 98
    3.6.3 Equilibrium configuration of hanging weights and springs .... 99
    3.6.4 Problems .... 101
  3.7 Solutions to Chapter Problems .... 102
4 Matrices and Determinants .... 117
  4.1 Matrix operations .... 117
    4.1.1 MATLAB .... 120
    4.1.2 Problems .... 120
  4.2 Linear Transformations and Matrices .... 121
    4.2.1 Linear Transformations .... 121
    4.2.2 Rotations in two dimensions .... 122
    4.2.3 Projections in two dimensions .... 123
    4.2.4 Reflections in two dimensions .... 124
    4.2.5 Every linear transformation is multiplication by a matrix .... 125
    4.2.6 Composition of linear transformations and matrix product .... 127
    4.2.7 Problems .... 127
  4.3 Application: random walks .... 129
    4.3.1 Problems .... 134
  4.4 The Transpose .... 136
    4.4.1 MATLAB .... 138
    4.4.2 Problems .... 138
  4.5 Matrix Inverses .... 138
    4.5.1 Computing the inverse .... 141
    4.5.2 Inverses of Products .... 144
    4.5.3 MATLAB .... 145
    4.5.4 Problems .... 145
  4.6 Return to Resistor Networks .... 146
  4.7 Determinants .... 150
    4.7.1 Definition of Determinants .... 150
    4.7.2 Determinants of Triangular matrices .... 151
    4.7.3 Summary of determinant calculation rules .... 152
    4.7.4 Calculation of determinant using row operations .... 153
    4.7.5 More expansion formulae .... 153
    4.7.6 MATLAB .... 155
    4.7.7 Problems .... 155
  4.8 Additional Topics .... 157
    4.8.1 Application: General Least Squares .... 157
    4.8.2 Least squares solutions .... 158
    4.8.3 Problems .... 159
    4.8.4 Elementary matrices .... 159
    4.8.5 Problems .... 162
    4.8.6 Exchanging two rows changes the sign of the determinant .... 163
    4.8.7 The determinant is linear in each row separately .... 164
    4.8.8 Adding a multiple of one row to another doesn't change the determinant .... 165
    4.8.9 The determinant of QA .... 166
    4.8.10 The determinant of A is zero exactly when A is not invertible .... 167
    4.8.11 The product formula: det(AB) = det(A) det(B) .... 167
    4.8.12 The determinant of the transpose .... 168
    4.8.13 An impractical formula for the inverse .... 169
    4.8.14 Cramer's rule, an impractical way to solve systems .... 170
    4.8.15 Problems .... 170
  4.9 Solutions to Chapter Problems .... 170
5 Complex numbers .... 193
  5.1 Complex arithmetic .... 193
  5.2 Complex exponential .... 194
  5.3 Polar representation of a complex number .... 196
  5.4 MATLAB .... 196
  5.5 Problems .... 197
  5.6 Solutions to Chapter Problems .... 197
6 Eigen-analysis .... 200
  6.1 Eigenvalues and eigenvectors .... 200
    6.1.1 Computing the eigenvalues and eigenvectors .... 201
    6.1.2 Complex eigenvalues and eigenvectors .... 207
    6.1.3 MATLAB .... 209
    6.1.4 Problems .... 210
  6.2 Eigenanalysis simplifies matrix powers .... 211
    6.2.1 Problems .... 215
  6.3 Systems of linear differential equations .... 215
    6.3.1 Problems .... 221
  6.4 LCR circuits .... 222
    6.4.1 Capacitors and inductors .... 222
    6.4.2 Differential equations for LCR circuits .... 223
    6.4.3 Alternate description of LCR circuits .... 225
    6.4.4 Problems .... 227
  6.5 Additional Topics .... 227
    6.5.1 Diagonalization .... 227
    6.5.2 Computing high powers of a matrix .... 229
    6.5.3 Another formula for the determinant .... 230
    6.5.4 The matrix exponential and differential equations .... …
Recommended publications
  • DERIVATIONS AND PROJECTIONS ON JORDAN TRIPLES: An Introduction to Nonassociative Algebra, Continuous Cohomology, and Quantum Functional Analysis
    DERIVATIONS AND PROJECTIONS ON JORDAN TRIPLES: An introduction to nonassociative algebra, continuous cohomology, and quantum functional analysis. Bernard Russo. July 29, 2014. This paper is an elaborated version of the material presented by the author in a three hour minicourse at the V International Course of Mathematical Analysis in Andalusia, at Almeria, Spain, September 12-16, 2011. The author wishes to thank the scientific committee for the opportunity to present the course and the organizing committee for their hospitality. The author also personally thanks Antonio Peralta for his collegiality and encouragement. The minicourse on which this paper is based had its genesis in a series of talks the author had given to undergraduates at Fullerton College in California. I thank my former student Dana Clahane for his initiative in running the remarkable undergraduate research program at Fullerton College of which the seminar series is a part. With their knowledge only of the product rule for differentiation as a starting point, these enthusiastic students were introduced to some aspects of the esoteric subject of nonassociative algebra, including triple systems as well as algebras. Slides of these talks and of the minicourse lectures, as well as other related material, can be found at the author's website (www.math.uci.edu/~brusso). Conversely, these undergraduate talks were motivated by the author's past and recent joint works on derivations of Jordan triples ([116],[117],[200]), which are among the many results discussed here. Part I (Derivations) is devoted to an exposition of the properties of derivations on various algebras and triple systems in finite and infinite dimensions, the primary questions addressed being whether the derivation is automatically continuous and to what extent it is an inner derivation.
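The "product rule" starting point mentioned in the abstract can be made concrete in a small matrix algebra: for a fixed matrix a, the inner derivation D(x) = ax − xa satisfies the Leibniz rule D(xy) = D(x)y + xD(y). A minimal numerical sketch (not from the paper; the 2×2 matrices below are arbitrary illustrative choices):

```python
def matmul(A, B):
    # product of two 2x2 matrices
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def matadd(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def matsub(A, B):
    return [[A[i][j] - B[i][j] for j in range(2)] for i in range(2)]

def D(a, x):
    # inner derivation induced by a: D(x) = a x - x a
    return matsub(matmul(a, x), matmul(x, a))

a = [[1, 2], [3, 4]]
x = [[0, 1], [1, 0]]
y = [[2, 0], [1, 1]]

# Leibniz (product) rule: D(xy) = D(x) y + x D(y)
lhs = D(a, matmul(x, y))
rhs = matadd(matmul(D(a, x), y), matmul(x, D(a, y)))
print(lhs == rhs)  # True
```

The identity holds for any choice of a, x, y, which is exactly what makes the commutator the prototype of a derivation.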
  • Triple Product Formula and the Subconvexity Bound of Triple Product L-Function in Level Aspect
    TRIPLE PRODUCT FORMULA AND THE SUBCONVEXITY BOUND OF TRIPLE PRODUCT L-FUNCTION IN LEVEL ASPECT. YUEKE HU. Abstract. In this paper we derived a nice general formula for the local integrals of the triple product formula whenever one of the representations has sufficiently higher level than the other two. As an application we generalized Venkatesh and Woodbury's work on the subconvexity bound of the triple product L-function in the level aspect, allowing joint ramifications, higher ramifications, general unitary central characters and general special values of local epsilon factors. 1. Introduction. 1.1. Triple product formula. Let $F$ be a number field. Let $\pi_i$, $i = 1, 2, 3$ be three irreducible unitary cuspidal automorphic representations, such that the product of their central characters is trivial:
$$(1.1)\qquad \prod_i \omega_{\pi_i} = 1.$$
Let $\Pi = \pi_1 \otimes \pi_2 \otimes \pi_3$. Then one can define the triple product L-function $L(\Pi, s)$ associated to them. It was first studied in [6] by Garrett in classical language, where an explicit integral representation was given. In particular the triple product L-function has analytic continuation and a functional equation. Later on Piatetski-Shapiro and Rallis in [19] reformulated his work in adelic language. In this paper we shall study the following integral representing the special value of the triple product L-function (see Section 2.2 for more details):
$$(1.2)\qquad \left|\int_{Z_{\mathbb{A}} D^\times(F)\backslash D^\times(\mathbb{A})} f_1(g)\, f_2(g)\, f_3(g)\, dg\right|^2 = \frac{\zeta_F^2(2)\, L(\Pi, 1/2)}{8\, L(\Pi, \mathrm{Ad}, 1)} \prod_v I'_v(f_{1,v}, f_{2,v}, f_{3,v}).$$
Here $f_i \in \pi_i^D$ for a specific quaternion algebra $D$, and $\pi_i^D$ is the image of $\pi_i$ under the Jacquet-Langlands correspondence.
  • Vectors, Matrices and Coordinate Transformations
    S. Widnall, 16.07 Dynamics, Fall 2009. Lecture notes based on J. Peraire, Version 2.0. Lecture L3 - Vectors, Matrices and Coordinate Transformations. By using vectors and defining appropriate operations between them, physical laws can often be written in a simple form. Since we will be making extensive use of vectors in Dynamics, we will summarize some of their important properties. Vectors. For our purposes we will think of a vector as a mathematical representation of a physical entity which has both magnitude and direction in a 3D space. Examples of physical vectors are forces, moments, and velocities. Geometrically, a vector can be represented as an arrow. The length of the arrow represents its magnitude. Unless indicated otherwise, we shall assume that parallel translation does not change a vector, and we shall call vectors satisfying this property free vectors. Thus, two vectors are equal if and only if they are parallel, point in the same direction, and have equal length. Vectors are usually typed in boldface and scalar quantities appear in lightface italic type, e.g. the vector quantity A has magnitude, or modulus, A = |A|. In handwritten text, vectors are often expressed using the arrow or underbar notation, e.g. $\vec{A}$, $\underline{A}$. Vector Algebra. Here, we introduce a few useful operations which are defined for free vectors. Multiplication by a scalar. If we multiply a vector A by a scalar α, the result is a vector B = αA, which has magnitude B = |α|A. The vector B is parallel to A and points in the same direction if α > 0.
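The scalar-multiplication rule above (B = αA has magnitude |α||A| and the direction of A up to sign) is easy to check numerically. A small sketch, with an arbitrarily chosen vector and scalar:

```python
import math

def scale(alpha, A):
    # scalar multiple of a vector given by its components
    return [alpha * a for a in A]

def norm(A):
    # Euclidean length of a vector
    return math.sqrt(sum(a * a for a in A))

A = [3.0, -4.0, 12.0]   # |A| = 13
alpha = -2.0
B = scale(alpha, A)

# |B| = |alpha| * |A|
print(norm(B))                                       # 26.0
print(math.isclose(norm(B), abs(alpha) * norm(A)))   # True
```

Since α < 0 here, B points opposite to A but remains parallel to it, which is the case the lecture notes describe next.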
  • Matrices and Tensors
    APPENDIX: MATRICES AND TENSORS. A.1. INTRODUCTION AND RATIONALE. The purpose of this appendix is to present the notation and most of the mathematical techniques that are used in the body of the text. The audience is assumed to have been through several years of college-level mathematics, which included the differential and integral calculus, differential equations, functions of several variables, partial derivatives, and an introduction to linear algebra. Matrices are reviewed briefly, and determinants, vectors, and tensors of order two are described. The application of this linear algebra to material that appears in undergraduate engineering courses on mechanics is illustrated by discussions of concepts like the area and mass moments of inertia, Mohr's circles, and the vector cross and triple scalar products. The notation, as far as possible, will be a matrix notation that is easily entered into existing symbolic computational programs like Maple, Mathematica, Matlab, and Mathcad. The desire to represent the components of three-dimensional fourth-order tensors that appear in anisotropic elasticity as the components of six-dimensional second-order tensors, and thus represent these components in matrices of tensor components in six dimensions, leads to the nontraditional part of this appendix. This is also one of the nontraditional aspects in the text of the book, but a minor one. This is described in §A.11, along with the rationale for this approach. A.2. DEFINITION OF SQUARE, COLUMN, AND ROW MATRICES. An r-by-c matrix, M, is a rectangular array of numbers consisting of r rows and c columns.
  • Jordan Triple Systems with Completely Reducible Derivation Or Structure Algebras
    Pacific Journal of Mathematics, Vol. 113, No. 1, March 1984. JORDAN TRIPLE SYSTEMS WITH COMPLETELY REDUCIBLE DERIVATION OR STRUCTURE ALGEBRAS. ERHARD NEHER. We prove that a finite-dimensional Jordan triple system over a field k of characteristic zero has a completely reducible structure algebra iff it is a direct sum of a trivial and a semisimple ideal. This theorem depends on a classification of Jordan triple systems with completely reducible derivation algebra in the case where k is algebraically closed. As another application we characterize real Jordan triple systems with compact automorphism group. The main topic of this paper is finite-dimensional Jordan triple systems over a field of characteristic zero which have a completely reducible derivation algebra. The history of the subject begins with [7], where G. Hochschild proved, among other results, that for an associative algebra 𝔄 the derivation algebra is semisimple iff 𝔄 itself is semisimple. Later on R. D. Schafer considered in [18] the case of a Jordan algebra 𝔍. His result was that Der 𝔍 is semisimple if and only if 𝔍 is semisimple with each simple component of dimension not equal to 3 over its center. This theorem was extended by K.-H. Helwig, who proved in [6]: Let 𝔍 be a Jordan algebra which is finite-dimensional over a field of characteristic zero. Then the following are equivalent: (1) Der 𝔍 is completely reducible and every derivation of 𝔍 has trace zero, (2) 𝔍 is semisimple, (3) the bilinear form on Der 𝔍 given by $(D_1, D_2) \mapsto \operatorname{trace}(D_1 D_2)$ is non-degenerate and every derivation of 𝔍 is inner.
  • Fully Homomorphic Encryption Over Exterior Product Spaces
    FULLY HOMOMORPHIC ENCRYPTION OVER EXTERIOR PRODUCT SPACES by DAVID WILLIAM HONORIO ARAUJO DA SILVA, B.S.B.A., Universidade Potiguar (Brazil), 2012. A thesis submitted to the Graduate Faculty of the University of Colorado Colorado Springs in partial fulfillment of the requirements for the degree of Master of Science, Department of Computer Science, 2017. © Copyright by David William Honorio Araujo da Silva 2017. All Rights Reserved. This thesis for the Master of Science degree by David William Honorio Araujo da Silva has been approved for the Department of Computer Science by C. Edward Chow, Chair; Carlos Paz de Araujo; Jonathan Ventura. 9 December 2017. Honorio Araujo da Silva, David William (M.S., Computer Science). Fully Homomorphic Encryption Over Exterior Product Spaces. Thesis directed by Professor C. Edward Chow. ABSTRACT: In this work I propose a new symmetric fully homomorphic encryption powered by Exterior Algebra and Product Spaces, more specifically by Geometric Algebra as a mathematical language for creating cryptographic solutions, which is organized and presented as the Enhanced Data-Centric Homomorphic Encryption (EDCHE), invented by Dr. Carlos Paz de Araujo, Professor and Associate Dean at the University of Colorado Colorado Springs, in the Electrical Engineering department. Given GA as a mathematical language, EDCHE is the framework for developing solutions for cryptology, such as encryption primitives and sub-primitives. In 1978 Rivest et al. introduced the idea of an encryption scheme able to provide security and the manipulation of encrypted data, without decrypting it. With such an encryption scheme, it would be possible to process encrypted data in a meaningful way.
  • Math 102 -- Linear Algebra I -- Study Guide
    Math 102 Linear Algebra I. Stefan Martynkiw. These notes are adapted from lecture notes taught by Dr. Alan Thompson and from "Elementary Linear Algebra: 10th Edition" by Howard Anton. Picture above sourced from (http://i.imgur.com/RgmnA.gif).
    Table of Contents
    Chapter 3 – Euclidean Vector Spaces .... 7
      3.1 – Vectors in 2-space, 3-space, and n-space .... 7
        Theorem 3.1.1 – Algebraic Vector Operations without components .... 7
        Theorem 3.1.2 .... 7
      3.2 – Norm, Dot Product, and Distance .... 7
        Definition 1 – Norm of a Vector .... 7
        Definition 2 – Distance in Rn .... 7
        Dot Product .... 8
        Definition 3 – Dot Product .... 8
        Definition 4 – Dot Product, Component by component .... 8
  • Generalized Cross Product Peter Lewintan
    A coordinate-free view on "the" generalized cross product. Peter Lewintan. To cite this version: Peter Lewintan. A coordinate-free view on "the" generalized cross product. 2021. hal-03233545. HAL Id: hal-03233545, https://hal.archives-ouvertes.fr/hal-03233545. Preprint submitted on 24 May 2021. HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. A coordinate-free view on "the" generalized cross product. Peter Lewintan*. May 24, 2021. Abstract: The higher dimensional generalization of the cross product is associated with an adequate matrix multiplication. This coordinate-free view allows for a better understanding of the underlying algebraic structures, among which are generalizations of Grassmann's, Jacobi's and Room's identities. Moreover, such a view provides the higher dimensional analogue of the decomposition of the vector Laplacian, which itself gives an explicit coordinate-free Helmholtz decomposition in arbitrary dimensions n ≥ 2. AMS 2020 subject classification: 15A24, 15A69, 47A06, 35J05. Keywords: Generalized cross product, Jacobi's identity, generalized curl, coordinate-free view, vector Laplacian, Helmholtz decomposition. 1 Introduction. The interplay between different differential operators is at the basis not only of pure analysis but also of many applied mathematical considerations.
  • Review of Matrices and Block Structures
    APPENDIX A. Review of Matrices and Block Structures. Numerical linear algebra lies at the heart of modern scientific computing and computational science. Today it is not uncommon to perform numerical computations with matrices having millions of components. The key to understanding how to implement such algorithms is to exploit underlying structure within the matrices. In these notes we touch on a few ideas and tools for dissecting matrix structure. Specifically we are concerned with the block structure of matrices. 1. Rows and Columns. Let $A \in \mathbb{R}^{m \times n}$ so that $A$ has $m$ rows and $n$ columns. Denote the element of $A$ in the $i$th row and $j$th column as $A_{ij}$. Denote the $m$ rows of $A$ by $A_{1\cdot}, A_{2\cdot}, A_{3\cdot}, \dots, A_{m\cdot}$ and the $n$ columns of $A$ by $A_{\cdot 1}, A_{\cdot 2}, A_{\cdot 3}, \dots, A_{\cdot n}$. For example, if
$$A = \begin{bmatrix} 3 & 2 & -1 & 5 & 7 & 3 \\ 2 & 27 & 32 & -100 & 0 & 0 \\ -89 & 0 & 47 & 22 & -21 & 33 \end{bmatrix},$$
then $A_{2,4} = -100$,
$$A_{1\cdot} = \begin{bmatrix} 3 & 2 & -1 & 5 & 7 & 3 \end{bmatrix},\quad A_{2\cdot} = \begin{bmatrix} 2 & 27 & 32 & -100 & 0 & 0 \end{bmatrix},\quad A_{3\cdot} = \begin{bmatrix} -89 & 0 & 47 & 22 & -21 & 33 \end{bmatrix}$$
and
$$A_{\cdot 1} = \begin{bmatrix} 3 \\ 2 \\ -89 \end{bmatrix},\ A_{\cdot 2} = \begin{bmatrix} 2 \\ 27 \\ 0 \end{bmatrix},\ A_{\cdot 3} = \begin{bmatrix} -1 \\ 32 \\ 47 \end{bmatrix},\ A_{\cdot 4} = \begin{bmatrix} 5 \\ -100 \\ 22 \end{bmatrix},\ A_{\cdot 5} = \begin{bmatrix} 7 \\ 0 \\ -21 \end{bmatrix},\ A_{\cdot 6} = \begin{bmatrix} 3 \\ 0 \\ 33 \end{bmatrix}.$$
Exercise 1.1. If
$$C = \begin{bmatrix} 3 & -4 & 1 & 1 & 0 & 0 \\ 2 & 2 & 0 & 0 & 1 & 0 \\ -1 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 2 & 1 & 4 \\ 0 & 0 & 0 & 1 & 0 & 3 \end{bmatrix},$$
what are $C_{4,4}$, $C_{\cdot 4}$ and $C_{4\cdot}$? For example, $C_{2\cdot} = \begin{bmatrix} 2 & 2 & 0 & 0 & 1 & 0 \end{bmatrix}$ and $C_{\cdot 2} = \begin{bmatrix} -4 \\ 2 \\ 0 \\ 0 \\ 0 \end{bmatrix}$.
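The row/column extraction described above translates directly into code. A sketch in Python (an illustrative translation, not from the appendix; it stores a matrix as a list of rows and uses 1-based helpers mirroring the $A_{i\cdot}$ and $A_{\cdot j}$ notation, with entries following the appendix's 3×6 running example as far as it is recoverable):

```python
# A 3x6 matrix stored as a list of rows.
A = [
    [  3,  2, -1,    5,   7,  3],
    [  2, 27, 32, -100,   0,  0],
    [-89,  0, 47,   22, -21, 33],
]

def row(M, i):
    # i-th row, 1-based, corresponding to A_{i.}
    return M[i - 1]

def col(M, j):
    # j-th column, 1-based, corresponding to A_{.j}
    return [r[j - 1] for r in M]

print(A[1][3])      # entry A_{2,4}: -100
print(row(A, 3))    # [-89, 0, 47, 22, -21, 33]
print(col(A, 4))    # [5, -100, 22]
```

Exploiting block structure in large matrices starts from exactly this kind of row/column (and sub-block) slicing.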
  • Scalar and Vector Triple Products
    OLLSCOIL NA hEIREANN MA NUAD, THE NATIONAL UNIVERSITY OF IRELAND MAYNOOTH. MATHEMATICAL PHYSICS. EE112 Engineering Mathematics II. Vector Triple Products. Prof. D. M. Heffernan and Mr. S. Pouryahya. 3 Vectors: Triple Products. 3.1 The Scalar Triple Product. The scalar triple product, as its name may suggest, results in a scalar as its result. It is a means of combining three vectors via a cross product and a dot product. Given the vectors
$$\mathbf{A} = A_1\,\mathbf{i} + A_2\,\mathbf{j} + A_3\,\mathbf{k},\quad \mathbf{B} = B_1\,\mathbf{i} + B_2\,\mathbf{j} + B_3\,\mathbf{k},\quad \mathbf{C} = C_1\,\mathbf{i} + C_2\,\mathbf{j} + C_3\,\mathbf{k},$$
a scalar triple product will involve a dot product and a cross product, $\mathbf{A} \cdot (\mathbf{B} \times \mathbf{C})$. It is necessary to perform the cross product before the dot product when computing a scalar triple product:
$$\mathbf{B} \times \mathbf{C} = \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ B_1 & B_2 & B_3 \\ C_1 & C_2 & C_3 \end{vmatrix} = \mathbf{i}\begin{vmatrix} B_2 & B_3 \\ C_2 & C_3 \end{vmatrix} - \mathbf{j}\begin{vmatrix} B_1 & B_3 \\ C_1 & C_3 \end{vmatrix} + \mathbf{k}\begin{vmatrix} B_1 & B_2 \\ C_1 & C_2 \end{vmatrix}.$$
Since $\mathbf{A} = A_1\,\mathbf{i} + A_2\,\mathbf{j} + A_3\,\mathbf{k}$, one can take the dot product to find that
$$\mathbf{A} \cdot (\mathbf{B} \times \mathbf{C}) = A_1\begin{vmatrix} B_2 & B_3 \\ C_2 & C_3 \end{vmatrix} - A_2\begin{vmatrix} B_1 & B_3 \\ C_1 & C_3 \end{vmatrix} + A_3\begin{vmatrix} B_1 & B_2 \\ C_1 & C_2 \end{vmatrix},$$
which is simply Important Formula 3.1:
$$\mathbf{A} \cdot (\mathbf{B} \times \mathbf{C}) = \begin{vmatrix} A_1 & A_2 & A_3 \\ B_1 & B_2 & B_3 \\ C_1 & C_2 & C_3 \end{vmatrix}.$$
The usefulness of being able to write the scalar triple product as a determinant is not only due to convenience in calculation but also due to the following property of determinants. Note 3.1. Exchanging any two adjacent rows in a determinant changes the sign of the original determinant. Thus,
$$\mathbf{B} \cdot (\mathbf{A} \times \mathbf{C}) = \begin{vmatrix} B_1 & B_2 & B_3 \\ A_1 & A_2 & A_3 \\ C_1 & C_2 & C_3 \end{vmatrix} = -\begin{vmatrix} A_1 & A_2 & A_3 \\ B_1 & B_2 & B_3 \\ C_1 & C_2 & C_3 \end{vmatrix} = -\mathbf{A} \cdot (\mathbf{B} \times \mathbf{C}).$$
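Both the determinant form of the scalar triple product and the row-swap sign change can be verified numerically. A short sketch with arbitrarily chosen vectors (component formulas only, no libraries):

```python
def cross(B, C):
    # component formula for the 3D cross product B x C
    return [B[1]*C[2] - B[2]*C[1],
            B[2]*C[0] - B[0]*C[2],
            B[0]*C[1] - B[1]*C[0]]

def dot(A, B):
    return sum(a * b for a, b in zip(A, B))

def det3(M):
    # cofactor expansion of a 3x3 determinant along the first row
    (a, b, c), (d, e, f), (g, h, i) = M
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

A, B, C = [1, 2, 3], [4, 5, 6], [7, 8, 10]

# the scalar triple product equals the determinant with rows A, B, C
print(dot(A, cross(B, C)))   # -3
print(det3([A, B, C]))       # -3
# exchanging two rows flips the sign: B . (A x C) = -A . (B x C)
print(dot(B, cross(A, C)))   # 3
```

This is the same determinant identity as Formula 3.1, checked on one concrete triple of vectors.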
  • Short Communication on Introduction of Exterior Algebra
    Mathematica Eterna. Short Communication 2020. Short Communication on an Introduction to Exterior Algebra. Mohammad Adil*. Associate Professor, Department of Mathematics, Assiut University, Egypt. In mathematics, the exterior product or wedge product of vectors is an algebraic construction used in geometry to study areas, volumes, and their higher-dimensional analogues. The exterior product of two vectors u and v, denoted by u ∧ v, is called a bivector and lives in a space called the exterior square, a vector space that is distinct from the original space of vectors. The magnitude of u ∧ v can be interpreted as the area of the parallelogram with sides u and v, which in three dimensions can also be computed using the cross product of the two vectors. Like the cross product, the exterior product is anticommutative, meaning that u ∧ v = −(v ∧ u). The exterior product extends to the complete exterior algebra, so that it makes sense to multiply any two elements of the algebra. Equipped with this product, the exterior algebra is an associative algebra, which means that α ∧ (β ∧ γ) = (α ∧ β) ∧ γ for any elements α, β, γ. The k-vectors have degree k, meaning that they are sums of products of k vectors; when elements of different degrees are multiplied, the degrees add, like multiplication of polynomials. An element of the algebra is in general not only a k-blade but a sum of k-blades; the k-blades, since they are simple products of vectors, are called the simple elements of the algebra, and the rank of any k-vector is defined to be the smallest number of simple elements of which it is a sum.
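In the plane, the wedge product of two vectors has a single component, the coefficient of e1 ∧ e2, which is exactly the signed area of the parallelogram the abstract describes. A minimal sketch (the vectors are illustrative choices):

```python
def wedge2(u, v):
    # coefficient of e1 ^ e2 in u ^ v for plane vectors:
    # the signed area of the parallelogram with sides u and v
    return u[0] * v[1] - u[1] * v[0]

u, v = (3.0, 0.0), (1.0, 2.0)

print(wedge2(u, v))                      # 6.0  (parallelogram area)
print(wedge2(u, v) == -wedge2(v, u))     # True (anticommutativity)
print(wedge2(u, u))                      # 0.0  (u ^ u = 0)
```

The anticommutativity check is the 2D shadow of the general rule u ∧ v = −(v ∧ u), and u ∧ u = 0 follows from it.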
  • 3 Two-Dimensional Linear Algebra
    3 Two-Dimensional Linear Algebra. 3.1 Basic Rules. Vectors are the way we represent points in 2, 3 and 4, 5, 6, ..., n, ...-dimensional space. There even exist spaces of vectors which are said to be infinite-dimensional! A vector can be considered as an object which has a length and a direction, or it can be thought of as a point in space, with coordinates representing that point. In terms of coordinates, we shall use the notation
$$\mathbf{x} = \begin{pmatrix} x \\ y \end{pmatrix}$$
for a column vector. This vector denotes the point in the xy-plane which is x units in the horizontal direction and y units in the vertical direction from the origin. We shall write $\mathbf{x}^T = (x, y)$ for the row vector which corresponds to $\mathbf{x}$, and the superscript T is read transpose. This is the operation which flips between two equivalent representations of a vector, but since they represent the same point, we can sometimes be quite cavalier and not worry too much about whether we have a vector or its transpose. To avoid the problem of running out of letters when using vectors we shall also write vectors in the form $(x_1, x_2)$ or $(x_1, x_2, x_3, \dots, x_n)$. If we write $\mathbb{R}$ for the real, one-dimensional numbers, then we shall also write $\mathbb{R}^2$ to denote the two-dimensional space of all vectors of the form (x, y). We shall use this notation too, and $\mathbb{R}^3$ then means three-dimensional space. By analogy, n-dimensional space is the set of all n-tuples $\{(x_1, x_2, \dots, x_n) : x_1, x_2, \dots, x_n \in \mathbb{R}\}$, where $\mathbb{R}$ is the one-dimensional number line.
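The column-vector/transpose bookkeeping above can be mirrored in code by storing a column vector as a list of single-entry rows, so that transposing is a generic row/column flip. A small sketch (the vector (3, 4) is an arbitrary choice):

```python
def transpose(M):
    # flip rows and columns of a matrix stored as a list of rows
    return [list(r) for r in zip(*M)]

x = [[3], [4]]          # the column vector (3, 4)^T
xT = transpose(x)       # the corresponding row vector (3, 4)

print(xT)                    # [[3, 4]]
print(transpose(xT) == x)    # True: (x^T)^T = x
```

The round trip (x^T)^T = x is the precise sense in which the column and row forms are "two equivalent representations" of the same point.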