Matrices, Transposes, and Inverses


Matrices, transposes, and inverses
Math 40, Introduction to Linear Algebra
Wednesday, February 1, 2012

Matrix-vector multiplication: two views

1st perspective: Ax is a linear combination of the columns of A.

  [1 2 3] [ 4]       [1]       [2]       [3]   [ 4]
  [2 1 5] [-3]  =  4 [2]  -  3 [1]  +  2 [5] = [15]
          [ 2]

2nd perspective: Ax is computed as the dot product of the rows of A with the vector x.

  row 1 of A dotted with x:  (1, 2, 3) . (4, -3, 2) =  4
  row 2 of A dotted with x:  (2, 1, 5) . (4, -3, 2) = 15

Notice that the number of columns of A equals the number of rows of x. This is a requirement in order for matrix-vector multiplication to be defined.

Matrix multiplication

What sizes of matrices can be multiplied together? For an m x n matrix A and an n x p matrix B, the matrix product AB is an m x p matrix. The "inner" dimensions (n and n) must match; the "outer" dimensions (m and p) become the dimensions of AB.

If A is a square matrix and k is a positive integer, we define A^k = A . A ... A (k factors).

Properties of matrix multiplication

Most of the properties that we expect to hold for matrix multiplication do...

  A(B + C) = AB + AC
  (AB)C = A(BC)
  k(AB) = (kA)B = A(kB)   for a scalar k

... except commutativity! In general, AB != BA.

Matrix multiplication is not commutative

Problems with hoping AB and BA are equal:
- BA may not be well-defined (e.g., A is a 2 x 3 matrix, B is a 3 x 5 matrix).
- Even if AB and BA are both defined, AB and BA may not be the same size (e.g., A is a 2 x 3 matrix, B is a 3 x 2 matrix).
- Even if AB and BA are both defined and of the same size, they still may not be equal. For example,

  [1 1] [1 0]   [2 1]           [1 0] [1 1]   [1 1]
  [0 1] [1 1] = [1 1]    but    [1 1] [0 1] = [1 2]

Truth or fiction?

Question 1: For n x n matrices A and B, is A^2 - B^2 = (A - B)(A + B)?

No!  (A - B)(A + B) = A^2 + AB - BA - B^2, and in general AB - BA != 0.

Question 2: For n x n matrices A and B, is (AB)^2 = A^2 B^2?

No!  (AB)^2 = ABAB, and in general ABAB != AABB = A^2 B^2.

Matrix transpose

Definition. The transpose of an m x n matrix A is the n x m matrix A^T obtained by interchanging the rows and columns of A, i.e., (A^T)_ij = A_ji for all i, j.
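Both perspectives on Ax can be spot-checked numerically. A minimal NumPy sketch, using the 2 x 3 matrix and the vector from the example above:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [2, 1, 5]])
x = np.array([4, -3, 2])

# 1st perspective: Ax as a linear combination of the columns of A
col_view = sum(x[j] * A[:, j] for j in range(A.shape[1]))

# 2nd perspective: Ax as dot products of the rows of A with x
row_view = np.array([A[i, :] @ x for i in range(A.shape[0])])

# Both agree with the built-in matrix-vector product
print(col_view)   # [ 4 15]
print(row_view)   # [ 4 15]
print(A @ x)      # [ 4 15]
```

Either loop computes the same vector as `A @ x`; the two views differ only in how the arithmetic is grouped.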
Example.

      [1  3  5  2]            [1  5]
  A = [5  3  2 -1]      A^T = [3  3]
                              [5  2]
                              [2 -1]

The transpose operation can be viewed as flipping the entries about the diagonal.

Definition. A square matrix A is symmetric if A^T = A.

Properties of the transpose

  (1) (A^T)^T = A                    (apply twice -- get back to where you started)
  (2) (A + B)^T = A^T + B^T
  (3) (cA)^T = c A^T   for a scalar c
  (4) (AB)^T = B^T A^T

To prove (4), we show that the (i, j) entries agree:

  [(AB)^T]_ij = (AB)_ji = sum_k A_jk B_ki = sum_k (B^T)_ik (A^T)_kj = [B^T A^T]_ij.

Exercise. Prove that for any matrix A, the matrix A^T A is symmetric.

Special matrices

Definition. A matrix with all zero entries is called a zero matrix and is denoted 0.

      [0 0 0 0]
  0 = [0 0 0 0]
      [0 0 0 0]

Definition. A square matrix is upper-triangular if all entries below the main diagonal are zero. (There is an analogous definition for a lower-triangular matrix.)

      [1 2 4]
  A = [0 6 0]
      [0 0 3]

Definition. A square matrix whose off-diagonal entries are all zero is called a diagonal matrix.

      [3  0  0  0]
  A = [0 -2  0  0]
      [0  0 -4  0]
      [0  0  0  1]

Identity matrix

Definition. The identity matrix, denoted I_n, is the n x n diagonal matrix with all ones on the diagonal.

       [1 0 0]
  I3 = [0 1 0]
       [0 0 1]

Important property of the identity matrix: if A is an m x n matrix, then I_m A = A and A I_n = A. If A is a square matrix, then IA = A = AI.

The notion of inverse

Exploration. Let's think about inverses first in the context of real numbers. Say we have the equation

  3x = 2

and we want to solve for x. To do so, we multiply both sides by 1/3 to obtain

  (1/3)(3x) = (1/3)(2)   =>   x = 2/3.

For R, 1/3 is the multiplicative inverse of 3, since (1/3)(3) = 1.
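Before moving from scalar inverses to matrix inverses, the transpose identities above (and the exercise) can be verified numerically. A sketch using small random integer matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 5, size=(2, 4))   # 2 x 4
B = rng.integers(-5, 5, size=(4, 3))   # 4 x 3, so AB is defined
C = rng.integers(-5, 5, size=(2, 4))   # same shape as A, so A + C is defined
c = 7

# (1) transposing twice returns the original matrix
assert (A.T.T == A).all()
# (2) the transpose of a sum is the sum of the transposes
assert ((A + C).T == A.T + C.T).all()
# (3) a scalar factors through the transpose
assert ((c * A).T == c * A.T).all()
# (4) the transpose of a product reverses the order of the factors
assert ((A @ B).T == B.T @ A.T).all()

# Exercise: A^T A is symmetric for any matrix A
S = A.T @ A
assert (S == S.T).all()
print("all transpose identities check out")
```

Note that in (4) the order must reverse: `A.T @ B.T` is not even defined here, since A^T is 4 x 2 and B^T is 3 x 4.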
The inverse of a matrix

Now consider the following system of linear equations:

   3x1 - 5x2 =  6
  -2x1 + 3x2 = -1

which we want to solve for x1 and x2. Notice that we can rewrite these equations as a single matrix equation:

  [ 3 -5] [x1]   [ 6]
  [-2  3] [x2] = [-1]
     A      x      b

How do we isolate the vector x by itself on the left-hand side? Multiply both sides of the matrix equation by [-3 -5; -2 -3] -- chosen precisely so that its product with A equals the identity matrix I:

  [-3 -5] [ 3 -5] [x1]   [-3 -5] [ 6]
  [-2 -3] [-2  3] [x2] = [-2 -3] [-1]

  [1 0] [x1]   [-13]
  [0 1] [x2] = [ -9]
    I

Thus

  [x1]   [-13]
  [x2] = [ -9]

and the solution to the system is x1 = -13 and x2 = -9.
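The same elimination can be mirrored in NumPy. A sketch that solves the 2 x 2 system 3x1 - 5x2 = 6, -2x1 + 3x2 = -1 by multiplying b by the inverse of the coefficient matrix (here `np.linalg.inv` computes the multiplier matrix for us):

```python
import numpy as np

A = np.array([[3.0, -5.0],
              [-2.0, 3.0]])
b = np.array([6.0, -1.0])

A_inv = np.linalg.inv(A)   # the multiplier that turns A into I

# A^{-1} A really is the identity (up to floating-point round-off)
assert np.allclose(A_inv @ A, np.eye(2))

x = A_inv @ b
assert np.allclose(x, [-13.0, -9.0])
assert np.allclose(A @ x, b)   # the solution satisfies the original system
print(x)
```

In practice `np.linalg.solve(A, b)` is preferred over forming the inverse explicitly, but the inverse makes the "multiply both sides" step of the derivation literal.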
Lecture 7   Math 40, Spring '12, Prof. Kindred   Page 1

Matrix inverses

Definition. A square matrix A is invertible (or nonsingular) if there exists a matrix B such that AB = I and BA = I. (We say B is an inverse of A.)

Example.

  A = [2 7]   is invertible, because for   B = [ 4 -7]
      [1 4]                                    [-1  2]

we have

  AB = [2 7] [ 4 -7]   [1 0]
       [1 4] [-1  2] = [0 1] = I

and likewise

  BA = [ 4 -7] [2 7]   [1 0]
       [-1  2] [1 4] = [0 1] = I.

The notion of an inverse matrix only applies to square matrices.
- For rectangular matrices of full rank, there are one-sided inverses.
- For matrices in general, there are pseudoinverses, which are a generalization of matrix inverses.

Example. Find the inverse of A = [1 1; 1 1]. We have

  [1 1] [a b]   [1 0]        [a + c  b + d]   [1 0]
  [1 1] [c d] = [0 1]   =>   [a + c  b + d] = [0 1]

  =>  a + c = 1  and  a + c = 0.   Impossible!

Therefore, A = [1 1; 1 1] is not invertible (it is singular).

Take-home message: not all square matrices are invertible.

Important questions:
- When does a square matrix have an inverse?
- If it does have an inverse, how do we compute it?
- Can a matrix have more than one inverse?
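Both examples are easy to confirm in NumPy: the pair A, B from the invertible example multiplies to the identity in both orders, while asking for the inverse of the singular matrix [1 1; 1 1] raises an error:

```python
import numpy as np

# The invertible example: B is an inverse of A
A = np.array([[2, 7],
              [1, 4]])
B = np.array([[4, -7],
              [-1, 2]])
assert (A @ B == np.eye(2)).all()
assert (B @ A == np.eye(2)).all()

# The singular example: [[1, 1], [1, 1]] has no inverse
C = np.array([[1.0, 1.0],
              [1.0, 1.0]])
try:
    np.linalg.inv(C)
    inverted = True
except np.linalg.LinAlgError:
    inverted = False
assert not inverted
print("A is invertible; C is singular")
```

`np.linalg.LinAlgError` is NumPy's way of reporting the "Impossible!" step above: no matrix [a b; c d] can satisfy the defining equations.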
Theorem. If A is invertible, then its inverse is unique.

Proof. Assume A is invertible. Suppose, by way of contradiction, that the inverse of A is not unique, i.e., let B and C be two distinct inverses of A. Then, by definition of inverse, we have

  BA = I = AB   (1)      and      CA = I = AC.   (2)

Using (1) and (2),

  B = BI = B(AC) = (BA)C = IC = C,

contradicting the assumption that B and C are distinct. Hence the inverse of A is unique.