Basics from Linear Algebra


Definition. A vector space is a set $V$ together with an operation of addition $+ : V \times V \to V$, written $\vec w + \vec v = +(\vec v, \vec w)$ for $\vec v, \vec w \in V$, and an operation of multiplication by a scalar $\cdot : \mathbb{R} \times V \to V$, written $r\vec v = \cdot(r, \vec v)$ for $r \in \mathbb{R}$ and $\vec v \in V$, such that the following hold:

(1) We have $\vec v + (\vec w + \vec u) = (\vec v + \vec w) + \vec u$ for all $\vec v, \vec w, \vec u \in V$.
(2) We have $\vec v + \vec w = \vec w + \vec v$ for all $\vec v, \vec w \in V$.
(3) There exists an element $\vec 0 \in V$ such that for every $\vec v \in V$ we have $\vec 0 + \vec v = \vec v + \vec 0 = \vec v$. (One can prove that if an element $\vec 0$ with this property exists, then such an element is unique.)
(4) For every $\vec v \in V$ there exists an element $\vec w \in V$ such that $\vec v + \vec w = \vec w + \vec v = \vec 0$. Again, one can show that for any given $\vec v$ an element $\vec w$ with this property is unique, and it is denoted $\vec w = -\vec v$.
(5) For every $\vec v \in V$ we have $1 \cdot \vec v = \vec v$.
(6) For every $r \in \mathbb{R}$ and all $\vec v, \vec w \in V$ we have $r(\vec v + \vec w) = r\vec v + r\vec w$.
(7) For every $r, s \in \mathbb{R}$ and every $\vec v \in V$ we have $(r + s)\vec v = r\vec v + s\vec v$.

Elements $\vec v$ of a vector space $V$ are called vectors.

Examples:
(1) If $n \ge 1$ is an integer, then the Euclidean space $\mathbb{R}^n$, with the standard operations of addition and multiplication by a scalar, is a vector space.
(2) The set $M_{n,n}(\mathbb{R})$ of all $n \times n$ matrices with entries in $\mathbb{R}$, with the standard operations of matrix addition and multiplication by a scalar, is a vector space.
(3) If $X$ is a nonempty set, then the set $F(X, \mathbb{R})$ of all functions $f : X \to \mathbb{R}$, with pointwise addition and pointwise multiplication by a scalar, is a vector space. That is, for $f, g : X \to \mathbb{R}$, the function $f + g : X \to \mathbb{R}$ is defined by $(f + g)(x) = f(x) + g(x)$ for all $x \in X$. Similarly, if $r \in \mathbb{R}$ and $f : X \to \mathbb{R}$, then the function $rf : X \to \mathbb{R}$ is defined by $(rf)(x) := r f(x)$, where $x \in X$.

Basic properties of vector spaces. Let $V$ be a vector space. Then:
(1) We have $0 \cdot \vec v = \vec 0$ for every $\vec v \in V$.
(2) We have $(-1) \cdot \vec v = -\vec v$ for all $\vec v \in V$.

Definition. Let $V$ be a vector space and let $\vec v_1, \ldots, \vec v_m \in V$ be $m$ vectors in $V$ (where $m \ge 1$).
We say that $\vec v_1, \ldots, \vec v_m$ are linearly independent in $V$ if whenever $c_1, \ldots, c_m \in \mathbb{R}$ are such that $c_1 \vec v_1 + \cdots + c_m \vec v_m = \vec 0$, then $c_1 = \cdots = c_m = 0$. The vectors $\vec v_1, \ldots, \vec v_m$ are linearly dependent if they are not linearly independent. Thus $\vec v_1, \ldots, \vec v_m$ are linearly dependent if and only if there exist $c_1, \ldots, c_m \in \mathbb{R}$ such that $c_1 \vec v_1 + \cdots + c_m \vec v_m = \vec 0$ but $c_i \ne 0$ for some $i$.

Example.
(1) The vectors $\vec v_1 = (0, 1, 3)$, $\vec v_2 = (-1, 1, 2) \in \mathbb{R}^3$ are linearly independent in $\mathbb{R}^3$.
(2) The vectors $\vec v_1 = (0, 1, 3)$, $\vec v_2 = (-1, 1, 2)$, $\vec v_3 = (-2, 3, 7) \in \mathbb{R}^3$ are linearly dependent in $\mathbb{R}^3$. Indeed, $1 \cdot \vec v_1 + 2 \vec v_2 + (-1) \vec v_3 = (0, 0, 0) = \vec 0$.
(3) The vectors $\vec v_1 = (0, 1, 3)$, $\vec v_2 = (0, 0, 0) \in \mathbb{R}^3$ are linearly dependent in $\mathbb{R}^3$. Indeed, $0 \cdot \vec v_1 + 1 \cdot \vec v_2 = (0, 0, 0) = \vec 0$ and $1 \ne 0$.
(4) The functions $x$, $x^2$, $5^x$ are linearly independent in $F(\mathbb{R}, \mathbb{R})$ (try to prove this fact).

Recall that the Euclidean space $\mathbb{R}^n$ is also equipped with the dot-product operation
$$(x^1, \ldots, x^n) \cdot (y^1, \ldots, y^n) = x^1 y^1 + \cdots + x^n y^n.$$
Recall that for $\vec x = (x^1, \ldots, x^n) \in \mathbb{R}^n$ the norm or length of $\vec x$ is
$$\|\vec x\| := \sqrt{\vec x \cdot \vec x} = \sqrt{(x^1)^2 + \cdots + (x^n)^2}.$$
Thus we always have $\|\vec x\| \ge 0$ and, moreover, $\|\vec x\| = 0$ if and only if $\vec x = \vec 0$.

A system of vectors $\vec v_1, \ldots, \vec v_m \in \mathbb{R}^n$ is called orthogonal if $\vec v_i \cdot \vec v_j = 0$ for all $i \ne j$, $1 \le i, j \le m$.

Fact: Let $\vec v_1, \ldots, \vec v_m \in \mathbb{R}^n$ be an orthogonal system of vectors such that $\vec v_i \ne \vec 0$ for $i = 1, \ldots, m$. Then the vectors $\vec v_1, \ldots, \vec v_m$ are linearly independent.

Proof. Suppose $c_1, \ldots, c_m \in \mathbb{R}$ are such that
$$c_1 \vec v_1 + \cdots + c_m \vec v_m = \vec 0.$$
Let $i \in \{1, \ldots, m\}$ be arbitrary, and take the dot product of the above equation with $\vec v_i$. Then
$$(c_1 \vec v_1 + \cdots + c_m \vec v_m) \cdot \vec v_i = \vec 0 \cdot \vec v_i = 0,$$
$$c_1 (\vec v_1 \cdot \vec v_i) + \cdots + c_m (\vec v_m \cdot \vec v_i) = 0.$$
Because $\vec v_1, \ldots, \vec v_m \in \mathbb{R}^n$ is, by assumption, an orthogonal system, in the above sum all terms $\vec v_j \cdot \vec v_i$ with $j \ne i$ are equal to $0$.
Thus we get
$$c_i (\vec v_i \cdot \vec v_i) = c_i \|\vec v_i\|^2 = 0.$$
Since, again by assumption, $\vec v_i \ne \vec 0$, we have $\|\vec v_i\| > 0$. Therefore from $c_i \|\vec v_i\|^2 = 0$ we get $c_i = 0$. Since $i \in \{1, \ldots, m\}$ was arbitrary, we conclude that $c_1 = \cdots = c_m = 0$. Thus $\vec v_1, \ldots, \vec v_m$ are linearly independent, as claimed.

Definition. Let $V$ be a vector space and let $W \subseteq V$ be a subset. The subset $W$ is called a linear subspace of $V$ if it satisfies the following properties:
(1) $\vec 0 \in W$.
(2) Whenever $\vec v \in W$ and $r \in \mathbb{R}$, then $r\vec v \in W$.
(3) For every $\vec v, \vec w \in W$ we have $\vec v + \vec w \in W$.

If $W$ is a linear subspace of $V$, we write $W \le V$. Note that if $W \le V$ then $W$ is itself a vector space, with the operations of addition and multiplication by a scalar restricted from $V$.

Examples:
(1) The set $W = \{(x, y) \in \mathbb{R}^2 \mid y = 3x\}$ is a linear subspace of $\mathbb{R}^2$.
(2) The set $W = \{(x, y) \in \mathbb{R}^2 \mid y = 3x + 1\}$ is not a linear subspace of $\mathbb{R}^2$.
(3) The set $W = \{f : \mathbb{R} \to \mathbb{R} : f(3) = 0\}$ is a linear subspace of $F(\mathbb{R}, \mathbb{R})$.
(4) The set $W = \{f : \mathbb{R} \to \mathbb{R} : f(3) = 2f(5)\}$ is a linear subspace of $F(\mathbb{R}, \mathbb{R})$.
(5) The set $W = \{f : \mathbb{R} \to \mathbb{R} : f \text{ is continuous}\}$ is a linear subspace of $F(\mathbb{R}, \mathbb{R})$.
(6) If $A \in M_{2,2}(\mathbb{R})$ is a $2 \times 2$ matrix, then
$$\ker(A) := \left\{ (x^1, x^2) \in \mathbb{R}^2 \;\middle|\; A \begin{bmatrix} x^1 \\ x^2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \right\}$$
is a linear subspace of $\mathbb{R}^2$.
(7) Let $V$ be a vector space and let $S \subseteq V$ be a nonempty subset. The span of $S$ is defined as
$$\mathrm{Span}(S) := \{ r_1 \vec v_1 + \cdots + r_n \vec v_n \mid n \ge 1,\ \vec v_1, \ldots, \vec v_n \in S,\ \text{and } r_1, \ldots, r_n \in \mathbb{R} \}.$$
(Note that $n$ in the above definition is not fixed, so that $\mathrm{Span}(S)$ consists of all finite linear combinations of elements of $S$.) Then $\mathrm{Span}(S)$ is a linear subspace of $V$.
(8) For $S = \{(0, 1, 2), (0, 0, -1)\} \subseteq \mathbb{R}^3$, try to prove that $\mathrm{Span}(S) = \{(0, y, z) \mid y, z \in \mathbb{R} \text{ are arbitrary}\}$.

Definition. Let $V$ be a vector space. A collection of vectors $\vec v_1, \ldots, \vec v_n \in V$ is called a basis of $V$ if the vectors $\vec v_1, \ldots, \vec v_n$ are linearly independent and if $\mathrm{Span}(\vec v_1, \ldots, \vec v_n) = V$.

Fact.
A collection of vectors $\vec v_1, \ldots, \vec v_n \in V$ is a basis of $V$ if and only if for every $\vec v \in V$ there exists a unique $n$-tuple of real numbers $c_1, \ldots, c_n$ such that $c_1 \vec v_1 + \cdots + c_n \vec v_n = \vec v$.

Basic properties of bases:
(1) If $\vec v_1, \ldots, \vec v_n \in V$ and $\vec w_1, \ldots, \vec w_m \in V$ are bases of $V$, then $n = m$. For this reason, if a vector space $V$ admits a finite basis $\vec v_1, \ldots, \vec v_n \in V$, then the number $n$ is called the dimension of $V$ and denoted $n = \dim V$. If a vector space $V$ does not admit a finite basis, we set $\dim V := \infty$.
(2) If $\vec v_1, \ldots, \vec v_m \in V$ is a linearly independent collection of vectors, then $\vec v_1, \ldots, \vec v_m$ is a basis of the linear subspace $\mathrm{Span}(\vec v_1, \ldots, \vec v_m)$.
(3) If $\dim V = n < \infty$ and $\vec v_1, \ldots, \vec v_m \in V$ is a linearly independent collection of vectors, then $m \le n$ and there exist vectors $\vec v_{m+1}, \ldots, \vec v_n \in V$ such that $\vec v_1, \ldots, \vec v_m, \vec v_{m+1}, \ldots, \vec v_n$ is a basis of $V$.
(4) If $\dim(V) = n < \infty$ and $W \le V$, then $\dim W \le n$.

Examples:
(1) $\dim \mathbb{R}^n = n$, and $\vec e_1, \ldots, \vec e_n$ is a basis of $\mathbb{R}^n$, where $\vec e_i = (0, \ldots, 1, \ldots, 0)$ with $1$ occurring in the $i$-th position.
(2) $\dim F(X, \mathbb{R}) = |X|$, the cardinality of the set $X$. In particular, $\dim F(X, \mathbb{R}) < \infty$ if and only if $X$ is a finite set.
(3) $\dim M_{n,n}(\mathbb{R}) = n^2$.
(4) Let $\vec v_1, \ldots, \vec v_n \in \mathbb{R}^n$ be $n$ vectors in $\mathbb{R}^n$ and let $A = [\vec v_1 \mid \vec v_2 \mid \cdots \mid \vec v_n]$ be the $n \times n$ matrix whose $i$-th column is the vector $\vec v_i$. Then $\vec v_1, \ldots, \vec v_n$ is a basis of $\mathbb{R}^n$ if and only if $\det(A) \ne 0$.
(5) Let $\vec v_1, \ldots, \vec v_n \in \mathbb{R}^n$ be an orthogonal system of vectors such that $\vec v_i \ne \vec 0$ for $i = 1, \ldots, n$. Then the vectors $\vec v_1, \ldots, \vec v_n$ form a basis of $\mathbb{R}^n$.

Definition. Let $V$ and $W$ be vector spaces. A function $T : V \to W$ is called a linear map if for every $\vec v_1, \vec v_2 \in V$ and $r_1, r_2 \in \mathbb{R}$ we have
$$T(r_1 \vec v_1 + r_2 \vec v_2) = r_1 T(\vec v_1) + r_2 T(\vec v_2).$$

Basic facts:
(1) If $T : V \to W$ is a linear map, then $T(\vec 0) = \vec 0$.
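Several of the facts above can be checked numerically. The sketch below, which assumes NumPy is available, tests linear independence via the rank of the matrix whose columns are the given vectors, applies the determinant criterion for a basis of $\mathbb{R}^n$, and illustrates that an orthogonal system of nonzero vectors is independent. The helper `linearly_independent` is an illustration, not part of the notes.

```python
import numpy as np

# Example vectors from the notes: v1, v2 are linearly independent in R^3,
# while v1, v2, v3 are linearly dependent (1*v1 + 2*v2 + (-1)*v3 = 0).
v1 = np.array([0.0, 1.0, 3.0])
v2 = np.array([-1.0, 1.0, 2.0])
v3 = np.array([-2.0, 3.0, 7.0])

def linearly_independent(*vectors):
    # Vectors are linearly independent iff the matrix having them as
    # columns has rank equal to the number of vectors.
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == len(vectors)

print(linearly_independent(v1, v2))      # True
print(linearly_independent(v1, v2, v3))  # False

# Determinant criterion: v1, ..., vn form a basis of R^n iff
# det([v1 | ... | vn]) != 0. Extending v1, v2 by e1 = (1, 0, 0)
# yields a basis of R^3 (an instance of basis property (3)).
e1 = np.array([1.0, 0.0, 0.0])
A = np.column_stack([v1, v2, e1])
print(np.linalg.det(A) != 0)             # True: v1, v2, e1 is a basis

# An orthogonal system of nonzero vectors is linearly independent
# (and, with n vectors in R^n, a basis).
w1 = np.array([1.0, 1.0, 0.0])
w2 = np.array([1.0, -1.0, 0.0])
w3 = np.array([0.0, 0.0, 2.0])
assert w1 @ w2 == 0 and w1 @ w3 == 0 and w2 @ w3 == 0  # pairwise orthogonal
print(linearly_independent(w1, w2, w3))  # True
```

A rank test is used instead of solving $c_1 \vec v_1 + \cdots + c_m \vec v_m = \vec 0$ directly, since exact floating-point solutions of the homogeneous system are sensitive to rounding.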