Matrix Calculations: Determinants and Basis Transformation


A. Kissinger
Institute for Computing and Information Sciences
Radboud University Nijmegen
Version: autumn 2017

Outline
• Determinants
• Change of basis
• Matrices and basis transformations

Last time
• Any linear map can be represented as a matrix:

      f(v) = A · v        g(v) = B · v

• We saw that composing linear maps can be done by multiplying their matrices:

      f(g(v)) = A · B · v

• Matrix multiplication is pretty easy:

      (1 2; 3 4) · (1 −1; 0 4) = (1·1 + 2·0  1·(−1) + 2·4; 3·1 + 4·0  3·(−1) + 4·4) = (1 7; 3 13)

  ...so if we can solve other stuff by matrix multiplication, we are pretty happy.

Last time
• For example, we can solve systems of linear equations

      A · x = b

  ...by finding the inverse of a matrix: x = A⁻¹ · b.
• There is an easy shortcut formula for 2 × 2 matrices:

      A = (a b; c d)   ⟹   A⁻¹ = 1/(ad − bc) · (d −b; −c a)

  ...as long as ad − bc ≠ 0.
• We'll see today that "ad − bc" is an example of a special number we can compute for any square matrix (not just 2 × 2), called the determinant.
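The 2 × 2 shortcut can be spot-checked in code. A minimal sketch in Python (our own illustration, not part of the slides; the name `inverse2x2` is made up here):

```python
def inverse2x2(a, b, c, d):
    """Invert the matrix (a b; c d) using the ad - bc shortcut formula."""
    det = a * d - b * c
    if det == 0:
        raise ValueError("ad - bc = 0, so the matrix is not invertible")
    # A^{-1} = 1/(ad - bc) * (d -b; -c a)
    return ((d / det, -b / det), (-c / det, a / det))

# Sanity check on the matrix (1 2; 3 4) from the multiplication example:
# det = 1*4 - 2*3 = -2, so the inverse is (-2 1; 1.5 -0.5).
print(inverse2x2(1, 2, 3, 4))   # ((-2.0, 1.0), (1.5, -0.5))
```

Multiplying the result back against (1 2; 3 4) gives the identity, as the slide's condition ad − bc ≠ 0 promises.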
Determinants

What a determinant does
For an n × n matrix A, the determinant det(A) is a number (in R). It satisfies:

      det(A) = 0  ⟺  A is not invertible
               ⟺  A⁻¹ does not exist
               ⟺  A has fewer than n pivots in its echelon form

Determinants have useful properties, but calculating determinants involves some work.

Determinant of a 2 × 2 matrix
• Assume A = (a b; c d).
• Recall that the inverse A⁻¹ exists if and only if ad − bc ≠ 0, and in that case:

      A⁻¹ = 1/(ad − bc) · (d −b; −c a)

• In this 2 × 2 case we define:

      det(a b; c d) = |a b; c d| = ad − bc

• Thus, indeed: det(A) = 0 ⟺ A⁻¹ does not exist.

Determinant of a 2 × 2 matrix: example
• Example:

      P = (0.8 0.1; 0.2 0.9) = 1/10 · (8 1; 2 9)

• Then:

      det(P) = 8/10 · 9/10 − 1/10 · 2/10 = 72/100 − 2/100 = 70/100 = 7/10

• We have already seen that P⁻¹ exists, so the determinant must be non-zero.

Determinant of a 3 × 3 matrix
• Assume A = (a11 a12 a13; a21 a22 a23; a31 a32 a33).
• Then one defines:

      det A = |a11 a12 a13; a21 a22 a23; a31 a32 a33|
            = + a11 · |a22 a23; a32 a33|
              − a21 · |a12 a13; a32 a33|
              + a31 · |a12 a13; a22 a23|

• Methodology:
  • take the entries ai1 from the first column, with alternating signs (+, −, +);
  • for each, take the determinant of the square submatrix obtained by deleting the first column and the i-th row.
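The first-column expansion translates almost literally into a recursive function. A sketch (our own code, not from the slides; shown with integer entries so the results are exact):

```python
def det(m):
    """Determinant by cofactor expansion along the first column."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for i in range(n):
        # Delete row i and the first column to get the submatrix.
        sub = [row[1:] for k, row in enumerate(m) if k != i]
        sign = (-1) ** i            # alternating signs +, -, +, ...
        total += sign * m[i][0] * det(sub)
    return total

print(det([[1, 2], [3, 4]]))        # 1*4 - 2*3 = -2
```

This does O(n!) work, which is why the slides later point to Gaussian elimination for anything beyond small matrices.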
Determinant of a 3 × 3 matrix, example

      |1 2 −1; 5 3 4; −2 0 1|
        = 1 · |3 4; 0 1| − 5 · |2 −1; 0 1| + (−2) · |2 −1; 3 4|
        = 1 · (3 − 0) − 5 · (2 − 0) − 2 · (8 + 3)
        = 3 − 10 − 22
        = −29

The general, n × n case

      |a11 a12 ... a1n; a21 a22 ... a2n; ... ; an1 an2 ... ann|
        = + a11 · |a22 ... a2n; a32 ... a3n; ... ; an2 ... ann|
          − a21 · |a12 ... a1n; a32 ... a3n; ... ; an2 ... ann|
          + a31 · |a12 ... a1n; a22 ... a2n; a42 ... a4n; ... ; an2 ... ann|
          − ...
          ± an1 · |a12 ... a1n; ... ; a(n−1)2 ... a(n−1)n|

      (where the last sign ± is + if n is odd and − if n is even)

Then each of the smaller determinants is computed recursively.

Applications
• Determinants detect when a matrix is invertible.
• Though we showed an inefficient way to compute determinants, there is an efficient algorithm using, you guessed it... Gaussian elimination!
• Solutions to non-homogeneous systems can be expressed directly in terms of determinants using Cramer's rule (wiki it!).
• Most importantly: determinants will be used to calculate eigenvalues in the next lecture.

Change of basis

Vectors in a basis
Recall: a basis for a vector space V is a set of vectors B = {v1, ..., vn} in V such that:
1. they uniquely span V, i.e. for all v ∈ V there exist unique a1, ..., an such that:

      v = a1·v1 + ... + an·vn

Because of this, we use a special notation for this linear combination:

      (a1; ... ; an)_B := a1·v1 + ... + an·vn
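The "efficient algorithm using Gaussian elimination" works because row-reducing to triangular form changes the determinant in controlled ways: a row swap flips its sign, adding a multiple of one row to another leaves it unchanged, and the determinant of a triangular matrix is just the product of its diagonal. A hedged sketch of that idea (our own code, plain floating point):

```python
def det_gauss(m):
    """Determinant via Gaussian elimination: O(n^3) instead of O(n!)."""
    a = [list(row) for row in m]   # work on a copy
    n = len(a)
    sign = 1
    for col in range(n):
        # Find a pivot row; an all-zero column means fewer than n pivots, so det = 0.
        pivot = next((r for r in range(col, n) if a[r][col] != 0), None)
        if pivot is None:
            return 0
        if pivot != col:
            a[col], a[pivot] = a[pivot], a[col]
            sign = -sign           # each row swap flips the sign
        for r in range(col + 1, n):
            factor = a[r][col] / a[col][col]
            a[r] = [x - factor * y for x, y in zip(a[r], a[col])]
    result = sign
    for i in range(n):
        result *= a[i][i]          # product of the diagonal of the triangular form
    return result

print(det_gauss([[1, 2, -1], [5, 3, 4], [-2, 0, 1]]))   # approximately -29
```

Up to floating-point rounding this agrees with the cofactor expansion of the 3 × 3 example above.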
Same vector, different outfits
The same vector can look different, depending on the choice of basis. Consider the standard basis:

      S = {(1; 0), (0; 1)}

vs. another basis:

      B = {(100; 0), (100; 1)}

Is this a basis? Yes...
• It's independent because: (100 100; 0 1) has 2 pivots.
• It's spanning because... we can make every vector in S using linear combinations of vectors in B:

      (1; 0) = 1/100 · (100; 0)        (0; 1) = (100; 1) − (100; 0)

  ...so we can also make any vector in R².

Same vector, different outfits

      S = {(1; 0), (0; 1)}        B = {(100; 0), (100; 1)}

Examples:

      (100; 0)_S = (1; 0)_B        (300; 1)_S = (2; 1)_B
      (1; 0)_S = (1/100; 0)_B      (0; 1)_S = (−1; 1)_B

Why???
• Many find the idea of multiple bases confusing the first time around. S = {(1; 0), (0; 1)} is a perfectly good basis for R². Why bother with others?
1. Some vector spaces don't have one "obvious" choice of basis. Example: subspaces S ⊆ Rⁿ.
2. Sometimes it is way more efficient to write a vector with respect to a different basis, e.g.:

      (93718234; −438203; 110224; −5423204980; ...)_S = (1; 1; 0; 0; ...)_B

3. The choice of basis for vectors affects how we write matrices as well. Often this can be done cleverly. Examples: JPEGs, MP3s, search engine rankings, ...

Transforming bases, part I
• Problem: given a vector written in B = {(100; 0), (100; 1)}, how can we write it in the standard basis?
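One way to see the answer before deriving it: just expand the linear combination that the B-coordinates stand for. A small sketch (our own code; `to_standard` is a made-up helper name):

```python
B = [(100, 0), (100, 1)]    # the basis vectors of B

def to_standard(coords, basis):
    """Expand coordinates in the given basis into a standard-basis vector."""
    n = len(basis[0])
    out = [0] * n
    for c, v in zip(coords, basis):
        for i in range(n):
            out[i] += c * v[i]   # accumulate c * (basis vector)
    return tuple(out)

print(to_standard((1, 0), B))   # (100, 0)
print(to_standard((2, 1), B))   # (300, 1) -- matching the (300; 1)_S = (2; 1)_B example
```

Packaging this computation as a single matrix multiplication is exactly what the derivation below does.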
• Just use the definition:

      (x; y)_B = x · (100; 0) + y · (100; 1) = (100x + 100y; y)_S

• Or, as matrix multiplication:

      (100 100; 0 1) · (x; y) = (100x + 100y; y)

  (the matrix is applied to a vector in basis B and yields a vector in basis S)
• Let T_{B⇒S} be the matrix whose columns are the basis vectors of B. Then T_{B⇒S} transforms a vector written in B into a vector written in S.

Transforming bases, part II
• How do we transform back? We need T_{S⇒B}, which undoes the matrix T_{B⇒S}.
• Solution: use the inverse! T_{S⇒B} := (T_{B⇒S})⁻¹
• Example:

      (T_{B⇒S})⁻¹ = (100 100; 0 1)⁻¹ = (1/100 −1; 0 1)

• ...which indeed gives:

      (1/100 −1; 0 1) · (a; b) = ((a − 100b)/100; b)

Transforming bases, part IV
• How about two non-standard bases?

      B = {(100; 0), (100; 1)}        C = {(−1; 2), (1; 2)}

• Problem: translate a vector (a; b)_B to (a′; b′)_C.
• Solution: do this in two steps:

      T_{B⇒S} · v                                          (first translate from B to S...)
      T_{S⇒C} · T_{B⇒S} · v = (T_{C⇒S})⁻¹ · T_{B⇒S} · v   (...then translate from S to C)

Transforming bases, example
• For the bases:

      B = {(100; 0), (100; 1)}        C = {(−1; 2), (1; 2)}

• ...we need to find a′ and b′ such that:

      (a′; b′)_C = (a; b)_B

• Translating both sides to the standard basis gives:

      (−1 1; 2 2) · (a′; b′) = (100 100; 0 1) · (a; b)

• This we can solve using the matrix inverse:

      (a′; b′) = (−1 1; 2 2)⁻¹ · (100 100; 0 1) · (a; b)
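The two-step translation (T_{C⇒S})⁻¹ · T_{B⇒S} can be checked numerically. A sketch (our own code; the 2 × 2 inverse uses the ad − bc formula from earlier in the lecture):

```python
def matmul(m, v):
    """Multiply a 2x2 matrix by a 2-vector."""
    return (m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1])

def inv2(m):
    """Inverse of a 2x2 matrix via the ad - bc shortcut."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return ((d / det, -b / det), (-c / det, a / det))

# Columns are the basis vectors: B = {(100,0), (100,1)}, C = {(-1,2), (1,2)}.
T_BS = ((100, 100), (0, 1))
T_CS = ((-1, 1), (2, 2))

def b_to_c(v):
    """Translate B-coordinates to C-coordinates: first B -> S, then S -> C."""
    return matmul(inv2(T_CS), matmul(T_BS, v))

# (1; 0)_B is (100; 0)_S; in C it comes out as (-50; 50),
# and indeed -50*(-1,2) + 50*(1,2) = (100, 0).
print(b_to_c((1, 0)))   # (-50.0, 50.0)
```

The same composition pattern works for any pair of bases, as long as the change-of-basis matrices are invertible.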