Linear Algebra


E. Kowalski, ETH Zürich. Fall 2015 and Spring 2016. Version of September 15, 2016. [email protected]

Contents

Chapter 1. Preliminaries
Chapter 2. Vector spaces and linear maps
  2.1. Matrices and vectors
  2.2. Matrix products
  2.3. Vector spaces and linear maps
  2.4. Subspaces
  2.5. Generating sets
  2.6. Linear independence and bases
  2.7. Dimension
  2.8. Properties of dimension
  2.9. Matrices and linear maps
  2.10. Solving linear equations
  2.11. Applications
Chapter 3. Determinants
  3.1. Axiomatic characterization
  3.2. Example
  3.3. Uniqueness and existence of the determinant
  3.4. Properties of the determinant
  3.5. The Vandermonde determinant
  3.6. Permutations
Chapter 4. Endomorphisms
  4.1. Sums and direct sums of vector spaces
  4.2. Endomorphisms
  4.3. Eigenvalues and eigenvectors
  4.4. Some special endomorphisms
Chapter 5. Euclidean spaces
  5.1. Properties of the transpose
  5.2. Bilinear forms
  5.3. Euclidean scalar products
  5.4. Orthogonal bases, I
  5.5. Orthogonal complement
  5.6. Adjoint, I
  5.7. Self-adjoint endomorphisms
  5.8. Orthogonal endomorphisms
  5.9. Quadratic forms
  5.10. Singular values decomposition
Chapter 6. Unitary spaces
  6.1. Hermitian forms
  6.2. Orthogonal bases, II
  6.3. Orthogonal complement, II
  6.4. Adjoint, II
  6.5. Unitary endomorphisms
  6.6. Normal and self-adjoint endomorphisms, II
  6.7. Singular values decomposition, II
Chapter 7. The Jordan normal form
  7.1. Statement
  7.2. Proof of the Jordan normal form
Chapter 8. Duality
  8.1. Dual space and dual basis
  8.2. Transpose of a linear map
  8.3. Transpose and matrix transpose
Chapter 9. Fields
  9.1. Definition
  9.2. Characteristic of a field
  9.3. Linear algebra over arbitrary fields
  9.4. Polynomials over a field
Chapter 10. Quotient vector spaces
  10.1. Motivation
  10.2. General definition and properties
  10.3. Examples
Chapter 11. Tensor products and multilinear algebra
  11.1. The tensor product of vector spaces
  11.2. Examples
  11.3. Exterior algebra
Appendix: dictionary

CHAPTER 1. Preliminaries

We assume knowledge of:
- Proof by induction and by contradiction
- Complex numbers
- Basic set-theoretic definitions and notation
- Definitions of maps between sets, and especially injective, surjective and bijective maps
- Finite sets and cardinality of finite sets (in particular with respect to maps between finite sets).

We denote by $\mathbf{N} = \{1, 2, \ldots\}$ the set of all natural numbers. (In particular, $0 \notin \mathbf{N}$.)

CHAPTER 2. Vector spaces and linear maps

Until the formal definition of a field is stated, a field $K$ is either $\mathbf{Q}$, $\mathbf{R}$, or $\mathbf{C}$.

2.1. Matrices and vectors

Consider a system of linear equations
$$
\begin{cases}
a_{11}x_1 + \cdots + a_{1n}x_n = b_1\\
\quad\vdots\\
a_{m1}x_1 + \cdots + a_{mn}x_n = b_m
\end{cases}
$$
with $n$ unknowns $(x_1, \ldots, x_n)$ in $K$ and coefficients $a_{ij}$ in $K$, $b_i$ in $K$. It is represented concisely by the equation $f_A(x) = b$, where $A = (a_{ij})_{1 \le i \le m,\, 1 \le j \le n}$ is the matrix
$$
A = \begin{pmatrix} a_{11} & \cdots & a_{1n}\\ \vdots & & \vdots\\ a_{m1} & \cdots & a_{mn} \end{pmatrix}
$$
with $m$ rows and $n$ columns, $b$ is the column vector
$$
b = \begin{pmatrix} b_1\\ \vdots\\ b_m \end{pmatrix},
$$
$x$ is the column vector with coefficients $x_1, \ldots, x_n$, and $f_A$ is the map $f_A \colon K^n \to K^m$ defined by
$$
f_A\begin{pmatrix} x_1\\ \vdots\\ x_n \end{pmatrix} = \begin{pmatrix} a_{11}x_1 + \cdots + a_{1n}x_n\\ \vdots\\ a_{m1}x_1 + \cdots + a_{mn}x_n \end{pmatrix}.
$$

We use the notation $M_{m,n}(K)$ for the set of all matrices with $m$ rows and $n$ columns and coefficients in $K$, and $K^n$ or $M_{n,1}(K)$ for the set of all column vectors with $n$ rows and coefficients in $K$. We will also use the notation $K_n$ for the space of row vectors with $n$ columns.

We want to study the equation $f_A(x) = b$ by composing with other maps: if $f_A(x) = b$, then $g(f_A(x)) = g(b)$ for any map $g$ defined on $K^m$. If $g$ is bijective, then conversely if $g(f_A(x)) = g(b)$, we obtain $f_A(x) = b$ by applying the inverse map $g^{-1}$ to both sides.
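Concretely, $f_A$ just evaluates the left-hand side of each equation of the system at $x$. A minimal sketch (not from the notes; the function name `f_A` and the use of Python's `Fraction` to model $K = \mathbf{Q}$ are my own choices):

```python
# Illustrative sketch: the map f_A of Section 2.1, applied to a concrete
# 2x3 example over K = Q, with rationals modelled by Fraction.
from fractions import Fraction


def f_A(A, x):
    """Apply the linear map f_A : K^n -> K^m, i.e. compute the column
    vector whose i-th entry is a_{i1} x_1 + ... + a_{in} x_n."""
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]


# A 2x3 matrix A and a vector x in K^3.
A = [[Fraction(1), Fraction(2), Fraction(0)],
     [Fraction(0), Fraction(1), Fraction(3)]]
x = [Fraction(1), Fraction(1), Fraction(2)]

b = f_A(A, x)
print(b)  # [Fraction(3, 1), Fraction(7, 1)]
```

With this `A` and `b`, the vector `x` is exactly a solution of the system $f_A(x) = b$.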
We do this with $g$ also defined using a matrix. This leads to matrix products.

2.2. Matrix products

Theorem 2.2.1. Let $m$, $n$, $p$ be natural numbers. Let $A \in M_{m,n}(K)$ be a matrix with $m$ rows and $n$ columns, and let $B \in M_{p,m}(K)$ be a matrix with $p$ rows and $m$ columns. Write
$$
A = (a_{ij})_{1 \le i \le m,\, 1 \le j \le n}, \qquad B = (b_{ki})_{1 \le k \le p,\, 1 \le i \le m}.
$$
Consider the map $f$ obtained by composition
$$
K^n \xrightarrow{\ f_A\ } K^m \xrightarrow{\ f_B\ } K^p,
$$
that is, $f = f_B \circ f_A$. Then we have $f = f_C$, where $C \in M_{p,n}(K)$ is the matrix $C = (c_{kj})_{1 \le k \le p,\, 1 \le j \le n}$ with
$$
c_{kj} = b_{k1}a_{1j} + b_{k2}a_{2j} + \cdots + b_{km}a_{mj} = \sum_{i=1}^{m} b_{ki}a_{ij}.
$$

Proof. Let $x = (x_j)_{1 \le j \le n} \in K^n$. We compute $f(x)$ and check that this is the same as $f_C(x)$. First we get by definition $f_A(x) = y$, where $y = (y_i)_{1 \le i \le m}$ is the column vector such that
$$
y_i = a_{i1}x_1 + \cdots + a_{in}x_n = \sum_{j=1}^{n} a_{ij}x_j.
$$
Then we get $f(x) = f_B(y)$, which is the column vector $(z_k)_{1 \le k \le p}$ with
$$
z_k = b_{k1}y_1 + \cdots + b_{km}y_m = \sum_{i=1}^{m} b_{ki}y_i.
$$
Inserting the value of $y_i$ in this expression, this is
$$
z_k = \sum_{i=1}^{m} b_{ki} \sum_{j=1}^{n} a_{ij}x_j = \sum_{j=1}^{n} c_{kj}x_j,
$$
where
$$
c_{kj} = b_{k1}a_{1j} + \cdots + b_{km}a_{mj} = \sum_{i=1}^{m} b_{ki}a_{ij}.
$$

Exercise 2.2.2. Take
$$
A = \begin{pmatrix} a & b\\ c & d \end{pmatrix} \quad\text{and}\quad B = \begin{pmatrix} x & y\\ z & t \end{pmatrix}
$$
and check the computation completely.

Definition 2.2.3 (Matrix product). For $A$ and $B$ as above, the matrix $C$ is called the matrix product of $B$ and $A$, and is denoted $C = BA$.

Example 2.2.4. (1) For $A \in M_{m,n}(K)$ and $x \in K^n$, if we view $x$ as a column vector with $n$ rows, we can compute the product $Ax$ corresponding to the composition
$$
K \xrightarrow{\ f_x\ } K^n \xrightarrow{\ f_A\ } K^m.
$$
Using the formula defining $f_x$ and $f_A$ and the matrix product, we see that
$$
Ax = \begin{pmatrix} a_{11}x_1 + \cdots + a_{1n}x_n\\ \vdots\\ a_{m1}x_1 + \cdots + a_{mn}x_n \end{pmatrix} = f_A(x).
$$
This means that $f_A$ can also be interpreted as the map that sends a vector $x$ to the matrix product $Ax$.

(2) Consider $B \in M_{p,m}(K)$ and $A \in M_{m,n}(K)$. Let $C = BA$, and write $A = (a_{ij})$, $B = (b_{ki})$, $C = (c_{kj})$. Consider an integer $j$, $1 \le j \le n$, and let $A_j$ be the $j$-th column of $A$:
$$
A_j = \begin{pmatrix} a_{1j}\\ \vdots\\ a_{mj} \end{pmatrix}.
$$
We can then compute the matrix product $BA_j$, which is an element of $M_{p,1}(K) = K^p$:
$$
BA_j = \begin{pmatrix} b_{11}a_{1j} + b_{12}a_{2j} + \cdots + b_{1m}a_{mj}\\ \vdots\\ b_{p1}a_{1j} + b_{p2}a_{2j} + \cdots + b_{pm}a_{mj} \end{pmatrix}.
$$
Comparing with the definition of $C$, we see that
$$
BA_j = \begin{pmatrix} c_{1j}\\ \vdots\\ c_{pj} \end{pmatrix}
$$
is the $j$-th column of $C$. So the columns of the matrix product are obtained by products of matrices with column vectors.

Proposition 2.2.5 (Properties of the matrix product). (1) Given positive integers $m$, $n$ and matrices $A$ and $B$ in $M_{m,n}(K)$, we have $f_A = f_B$ if and only if $A = B$. In particular, if a map $f \colon K^n \to K^m$ is of the form $f = f_A$ for some matrix $A \in M_{m,n}(K)$, then this matrix is unique.

(2) Given positive integers $m$, $n$, $p$ and $q$, and matrices
$$
A = (a_{ij})_{1 \le i \le m,\, 1 \le j \le n}, \qquad B = (b_{ki})_{1 \le k \le p,\, 1 \le i \le m}, \qquad C = (c_{lk})_{1 \le l \le q,\, 1 \le k \le p},
$$
defining maps
$$
K^n \xrightarrow{\ f_A\ } K^m \xrightarrow{\ f_B\ } K^p \xrightarrow{\ f_C\ } K^q,
$$
we have the equality of matrix products
$$
C(BA) = (CB)A.
$$
In particular, for any $n \ge 1$, the product of matrices is an operation on $M_{n,n}(K)$ (the product of matrices $A$ and $B$ which both have $n$ rows and $n$ columns is a matrix of the same size), and it is associative: $A(BC) = (AB)C$ for all matrices $A$, $B$, $C$ in $M_{n,n}(K)$.

Proof. (1) For $1 \le i \le n$, consider the particular vector $e_i$ with all coefficients $0$, except that the $i$-th coefficient is $1$:
$$
e_1 = \begin{pmatrix} 1\\ 0\\ \vdots\\ 0 \end{pmatrix}, \quad e_2 = \begin{pmatrix} 0\\ 1\\ \vdots\\ 0 \end{pmatrix}, \quad \ldots, \quad e_n = \begin{pmatrix} 0\\ \vdots\\ 0\\ 1 \end{pmatrix}.
$$
Computing the matrix product $f_A(e_i) = Ae_i$, we find that
$$
(2.1)\qquad f_A(e_i) = \begin{pmatrix} a_{1i}\\ \vdots\\ a_{mi} \end{pmatrix},
$$
which is the $i$-th column of the matrix $A$. Therefore, if $f_A = f_B$, the $i$-th columns of $A$ and $B$ are the same (since $f_A(e_i) = f_B(e_i)$), which means that $A$ and $B$ are the same (since this is true for all columns).

(2) Since composition of maps is associative, we get $(f_C \circ f_B) \circ f_A = f_C \circ (f_B \circ f_A)$, that is, $f_{CB} \circ f_A = f_C \circ f_{BA}$, or even $f_{(CB)A} = f_{C(BA)}$, which by (1) means that $(CB)A = C(BA)$.

Exercise 2.2.6. Check directly, using the formula for the matrix product, that $C(BA) = (CB)A$.

Now we define two additional operations on matrices and vectors: (1) addition; (2) multiplication by an element $t \in K$.

Definition 2.2.7 (Operations and special matrices).
(1) For $m$, $n$ natural numbers and $A = (a_{ij})_{1 \le i \le m,\, 1 \le j \le n}$, $B = (b_{ij})_{1 \le i \le m,\, 1 \le j \le n}$ matrices in $M_{m,n}(K)$, the sum $A + B$ is the matrix
$$
A + B = (a_{ij} + b_{ij})_{1 \le i \le m,\, 1 \le j \le n} \in M_{m,n}(K).
$$
(2) For $t \in K$, for $m$, $n$ natural numbers and $A = (a_{ij})_{1 \le i \le m,\, 1 \le j \le n}$ a matrix in $M_{m,n}(K)$, the product $tA$ is the matrix
$$
tA = (ta_{ij})_{1 \le i \le m,\, 1 \le j \le n} \in M_{m,n}(K).
$$
(3) For $m$, $n$ natural numbers, the zero matrix $0_{mn}$ is the matrix in $M_{m,n}(K)$ with all coefficients $0$.
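The product formula of Theorem 2.2.1, the column description of Example 2.2.4 (2), associativity from Proposition 2.2.5 (2), and the entrywise operations of Definition 2.2.7 can all be checked on small examples. A minimal sketch (the helper names `mat_mul`, `mat_add`, `scalar_mul`, `column` are my own, not the notes'; matrices are plain lists of rows over the integers):

```python
def mat_mul(B, A):
    """C = BA with c_kj = sum_i b_ki * a_ij (Theorem 2.2.1), for
    B with p rows and m columns, and A with m rows and n columns."""
    p, m, n = len(B), len(A), len(A[0])
    return [[sum(B[k][i] * A[i][j] for i in range(m)) for j in range(n)]
            for k in range(p)]


def mat_add(A, B):
    """Entrywise sum A + B = (a_ij + b_ij), Definition 2.2.7 (1)."""
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]


def scalar_mul(t, A):
    """Product tA = (t a_ij), Definition 2.2.7 (2)."""
    return [[t * a for a in row] for row in A]


def column(A, j):
    """The j-th column of A (0-indexed), as an element of M_{m,1}."""
    return [[row[j]] for row in A]


A = [[1, 2], [3, 4], [5, 6]]   # 3 x 2, so f_A : K^2 -> K^3
B = [[1, 0, 1], [0, 1, 1]]     # 2 x 3, so f_B : K^3 -> K^2
C = mat_mul(B, A)              # 2 x 2, represents f_B o f_A
print(C)                       # [[6, 8], [8, 10]]

# Example 2.2.4 (2): the j-th column of BA is B times the j-th column of A.
assert all(mat_mul(B, column(A, j)) == column(C, j) for j in range(2))

# Proposition 2.2.5 (2): D(BA) = (DB)A.
D = [[2, 1], [1, 2]]
assert mat_mul(D, mat_mul(B, A)) == mat_mul(mat_mul(D, B), A)

# Definition 2.2.7: A + A equals 2A entrywise.
assert mat_add(A, A) == scalar_mul(2, A)
```

Note the shape bookkeeping: `mat_mul(B, A)` requires the number of columns of `B` to equal the number of rows of `A`, exactly as the composition $f_B \circ f_A$ requires matching domains and codomains.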
Recommended publications

  • Quantum Information
  • Lecture Notes: Qubit Representations and Rotations
  • Multivector Differentiation and Linear Algebra, 17th Santaló Summer School
  • Appendix A: Spinors in Four Dimensions
  • Formal Power Series (Wikipedia)
  • Euclidean Vector Space and Euclidean Affine Space
  • Representations of SU(2)
  • Handout 9: More Matrix Properties; the Transpose
  • MAS4107 Linear Algebra 2: Linear Maps and ...
  • Glossary of Linear Algebra Terms
  • Vector Differential Calculus
  • Exterior Powers