Chapter 2 Linear Algebra

Introduction

We discuss vectors, matrices, transposes, covariance, correlation, diagonal and inverse matrices, orthogonality, subspaces and eigenanalysis. An alternative source for much of this material is the excellent book by Strang.

Transposes and Inner Products

A collection of variables may be treated as a single entity by writing them as a vector. For example, the three variables x_1, x_2 and x_3 may be written as the vector

    x = [ x_1 ]
        [ x_2 ]
        [ x_3 ]

Bold face type is often used to denote vectors; scalars (single variables) are written in normal type. Vectors can be written as column vectors, where the variables go down the page, or as row vectors, where the variables go across the page. It needs to be made clear, when using vectors, whether x means a row vector or a column vector. Most often it will mean a column vector, and in our text it will always mean a column vector unless we say otherwise. To turn a column vector into a row vector we use the transpose operator

    x^T = [ x_1  x_2  x_3 ]

The transpose operator also turns row vectors into column vectors. We now define the inner product of two vectors

    x^T y = [ x_1  x_2  x_3 ] [ y_1 ]
                              [ y_2 ]
                              [ y_3 ]

          = x_1 y_1 + x_2 y_2 + x_3 y_3

          = Σ_{i=1}^{3} x_i y_i

which is seen to be a scalar. The outer product of two vectors produces a matrix

    x y^T = [ x_1 ] [ y_1  y_2  y_3 ]
            [ x_2 ]
            [ x_3 ]

          = [ x_1 y_1  x_1 y_2  x_1 y_3 ]
            [ x_2 y_1  x_2 y_2  x_2 y_3 ]
            [ x_3 y_1  x_3 y_2  x_3 y_3 ]

An N × M matrix has N rows and M columns. The ij-th entry of a matrix is the entry on the j-th column of the i-th row. Given a matrix A (matrices are also often written in bold type), the ij-th entry is written as A_ij. When applying the transpose operator to a matrix, the i-th row becomes the i-th column. That is, if

    A = [ a_11  a_12  a_13 ]
        [ a_21  a_22  a_23 ]
        [ a_31  a_32  a_33 ]

then

    A^T = [ a_11  a_21  a_31 ]
          [ a_12  a_22  a_32 ]
          [ a_13  a_23  a_33 ]

A matrix is symmetric if A_ij = A_ji. Another way to say this is that, for symmetric matrices, A = A^T.

Two matrices can be multiplied if the number of columns in the first matrix equals the number of rows in the second. Multiplying A, an N × M matrix, by B, an M × K matrix, results in C, an N × K matrix. The ij-th entry in C is the inner product between the i-th row in A and the j-th column in B. As an example,

    [ 1  2 ] [ 5  6 ]   [ 19  22 ]
    [ 3  4 ] [ 7  8 ] = [ 43  50 ]

Given two matrices A and B we note that

    (AB)^T = B^T A^T

Properties of matrix multiplication: matrix multiplication is associative, (AB)C = A(BC), and distributive, A(B + C) = AB + AC, but not commutative, AB ≠ BA in general.

Types of matrices

Covariance matrices

In the previous chapter the covariance σ_xy between two variables x and y was defined. Given p variables there are p × p covariances to take account of. If we write the covariance between variables x_i and x_j as σ_ij, then all the covariances can be summarised in a covariance matrix, which we write below for p = 3

    C = [ σ_1^2  σ_12   σ_13  ]
        [ σ_21   σ_2^2  σ_23  ]
        [ σ_31   σ_32   σ_3^2 ]

The i-th diagonal element is the covariance between the i-th variable and itself, which is simply the variance of that variable; we therefore write σ_i^2 instead of σ_ii. Also note that, because σ_ij = σ_ji, covariance matrices are symmetric.

We now look at computing a covariance matrix from a given data set. Suppose we have p variables and that a single observation x_i (a row vector) consists of measuring these variables, and suppose there are N such observations. We now make a matrix X by putting each x_i into the i-th row. The matrix X is therefore an N × p matrix whose rows are made up of different observation vectors. If all the variables have zero mean, then the covariance matrix can be evaluated as

    C = (1/N) X^T X

This is a multiplication of a p × N matrix, X^T, by an N × p matrix, X, which results in a p × p matrix.
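As a quick numerical illustration (this sketch is not part of the original notes, and the data matrix is just random numbers generated for the example), the formula C = (1/N) X^T X can be computed and checked in Python/NumPy:

    import numpy as np

    # Illustrative data set: N = 1000 observations of p = 3 variables,
    # stored as the rows of an N x p data matrix X, as in the text.
    rng = np.random.default_rng(0)
    N, p = 1000, 3
    X = rng.standard_normal((N, p))

    # The formula C = (1/N) X^T X assumes zero-mean variables,
    # so subtract the column means first.
    X = X - X.mean(axis=0)

    C = (X.T @ X) / N                       # p x p covariance matrix

    # Sanity checks: C is symmetric and its diagonal holds the variances.
    assert np.allclose(C, C.T)
    assert np.allclose(np.diag(C), X.var(axis=0))

    # Agrees with NumPy's own estimator (bias=True gives the 1/N normalisation).
    assert np.allclose(C, np.cov(X, rowvar=False, bias=True))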
To illustrate the use of covariance matrices for time series, the figure below shows three time series which have a given covariance matrix C_1 and mean vector m_1.

[Figure: Three time series having the covariance matrix C_1 and mean vector m_1 shown in the text, plotted against time t. The top and bottom series have high covariance, but none of the other pairings do.]

Diagonal matrices

A diagonal matrix is a square matrix (M = N) in which all the entries are zero except along the diagonal, for example

    D = [ d_11  0     0    ]
        [ 0     d_22  0    ]
        [ 0     0     d_33 ]

There is also a more compact notation for the same matrix

    D = diag(d_11, d_22, d_33)

If a covariance matrix is diagonal it means that the covariances between variables are zero, that is, the variables are all uncorrelated. Non-diagonal covariance matrices are known as full covariance matrices. If V is a vector of variances, V = [σ_1^2, σ_2^2, σ_3^2]^T, then the corresponding diagonal covariance matrix is V_d = diag(V).

The correlation matrix

The correlation matrix, R, can be derived from the covariance matrix by the equation

    R = B C B

where B is a diagonal matrix of inverse standard deviations

    B = diag(1/σ_1, 1/σ_2, 1/σ_3)

The identity matrix

The identity matrix is a diagonal matrix with ones along the diagonal. Multiplication of any matrix X by the identity matrix results in X; that is, IX = X. The identity matrix is the matrix equivalent of multiplying by 1 for scalars.

The Matrix Inverse

Given a matrix X, its inverse X^{-1} is defined by the properties

    X^{-1} X = I
    X X^{-1} = I

where I is the identity matrix. The inverse of a diagonal matrix with entries d_ii is another diagonal matrix with entries 1/d_ii, which satisfies the definition of an inverse. More generally, however, the calculation of inverses involves a lot more computation.

Before looking at the general case we first consider the problem of solving simultaneous equations. These constitute relations between a set of input or independent variables, x_i, and a set of output or dependent variables, y_i. Each input-output pair constitutes an observation. In the following example we consider just N = 3 observations and p = 3 dimensions per observation

     2 w_1 +   w_2 +   w_3 =  5
     4 w_1 - 6 w_2         = -2
    -2 w_1 + 7 w_2 + 2 w_3 =  9

which can be written in matrix form

    [  2  1  1 ] [ w_1 ]   [  5 ]
    [  4 -6  0 ] [ w_2 ] = [ -2 ]
    [ -2  7  2 ] [ w_3 ]   [  9 ]

or, more compactly, as X w = y.

This system of equations can be solved in a systematic way by subtracting multiples of the first equation from the second and third equations, and then subtracting multiples of the second equation from the third. For example, subtracting twice the first equation from the second and -1 times the first from the third gives

     2 w_1 +   w_2 +   w_3 =   5
           - 8 w_2 - 2 w_3 = -12
             8 w_2 + 3 w_3 =  14

Then subtracting -1 times the second from the third gives

     2 w_1 +   w_2 +   w_3 =   5
           - 8 w_2 - 2 w_3 = -12
                       w_3 =   2

This process is known as forward elimination. We can then substitute the value for w_3 from the third equation into the second, and so on. This process is back-substitution. The two processes together are known as Gaussian elimination. Following this through for our example we get w = [1, 1, 2]^T.

When we come to invert a matrix (as opposed to solving a system of equations, as in the previous example) we start with the equation

    A A^{-1} = I

and just write down all the entries in the A and I matrices in one big matrix

    [  2  1  1 | 1  0  0 ]
    [  4 -6  0 | 0  1  0 ]
    [ -2  7  2 | 0  0  1 ]

We then perform forward elimination until the part of the matrix corresponding to A equals the identity matrix (we do not perform back-substitution, but instead continue with forward elimination until we get a diagonal matrix); the matrix on the right is then A^{-1}. This is because, in the equation above, if A becomes I then the left-hand side is A^{-1} and the right side must equal the left side. We get

    [ 1  0  0 | 12/16  -5/16  -6/16 ]
    [ 0  1  0 |  4/8   -3/8   -2/8  ]
    [ 0  0  1 |  -1      1      1   ]

This process is known as the Gauss-Jordan method. For more details see Strang's excellent book on Linear Algebra, from which this example was taken.
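As a numerical check (again, not part of the original notes), the same 3 × 3 system and its inverse can be handled with NumPy; np.linalg.solve uses an LU factorisation, i.e. Gaussian elimination with pivoting, rather than forming the inverse explicitly:

    import numpy as np

    # The system X w = y from the worked example above.
    X = np.array([[ 2.,  1.,  1.],
                  [ 4., -6.,  0.],
                  [-2.,  7.,  2.]])
    y = np.array([5., -2., 9.])

    # Solve directly (LU factorisation, i.e. Gaussian elimination).
    w = np.linalg.solve(X, y)               # -> array([1., 1., 2.])

    # Forming the inverse explicitly reproduces the Gauss-Jordan result
    # and gives the same solution via w = X^{-1} y.
    X_inv = np.linalg.inv(X)
    assert np.allclose(X_inv @ X, np.eye(3))
    assert np.allclose(X_inv @ y, w)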
Inverses can be used to solve equations of the form X w = y. This is achieved by multiplying both sides by X^{-1}, giving

    w = X^{-1} y

Hence

    [ w_1 ]   [ 12/16  -5/16  -6/16 ] [  5 ]
    [ w_2 ] = [  4/8   -3/8   -2/8  ] [ -2 ]
    [ w_3 ]   [  -1      1      1   ] [  9 ]

which also gives w = [1, 1, 2]^T. The inverse of a product of matrices is given by

    (AB)^{-1} = B^{-1} A^{-1}

Only square matrices are invertible because, for y = Ax, if y and x are of different dimension then we will not necessarily have a one-to-one mapping between them.

Orthogonality

The length of a d-element vector x is written as ||x||, where

    ||x||^2 = Σ_{i=1}^{d} x_i^2 = x^T x

[Figure: Two vectors x and y, with the third side of the triangle given by x - y. These vectors will be orthogonal if they obey Pythagoras' relation, i.e. that the sum of the squares of the sides equals the square of the hypotenuse.]

Two vectors x and y are orthogonal if

    ||x||^2 + ||y||^2 = ||x - y||^2

That is, if

    x_1^2 + ... + x_d^2 + y_1^2 + ... + y_d^2 = (x_1 - y_1)^2 + ... + (x_d - y_d)^2

Expanding the terms on the right and rearranging leaves only the cross-terms

    x_1 y_1 + ... + x_d y_d = 0
    x^T y = 0

That is, two vectors are orthogonal if their inner product is zero.

Angles between vectors

[Figure: Working out the angle between two vectors a and b; a makes an angle α with the horizontal axis, b makes an angle β, and ||b - a|| is the length of the side joining them.]

Given a vector b = [b_1, b_2]^T and a vector a = [a_1, a_2]^T, we can work out that

    cos α = a_1 / ||a||
    sin α = a_2 / ||a||
    cos β = b_1 / ||b||
    sin β = b_2 / ||b||

Now, the angle between the vectors is θ = β - α, and cos θ = cos(β - α), which we can expand using the trig identity

    cos(β - α) = cos β cos α + sin β sin α

Hence

    cos θ = (a_1 b_1 + a_2 b_2) / (||a|| ||b||)

More generally, we have

    cos θ = a^T b / (||a|| ||b||)

Because cos 90° = 0, this again shows that vectors are orthogonal if a^T b = 0. Also, because |cos θ| ≤ 1, where |x| denotes the absolute value of x, we have

    |a^T b| ≤ ||a|| ||b||

which is known as the Schwarz Inequality.
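A short NumPy sketch (not from the original notes; the vectors are arbitrary values picked for illustration) checking the orthogonality condition, the angle formula and the Schwarz inequality:

    import numpy as np

    a = np.array([3., 4.])
    b = np.array([-4., 3.])
    c = np.array([1., 2.])

    # Orthogonality of a and b: inner product zero, and Pythagoras' relation holds.
    assert np.isclose(a @ b, 0.0)
    assert np.isclose(np.linalg.norm(a)**2 + np.linalg.norm(b)**2,
                      np.linalg.norm(a - b)**2)

    # Angle between two general vectors: cos(theta) = a^T c / (||a|| ||c||).
    cos_theta = (a @ c) / (np.linalg.norm(a) * np.linalg.norm(c))
    theta = np.degrees(np.arccos(cos_theta))
    print(f"angle between a and c: {theta:.1f} degrees")   # about 10.3

    # Schwarz inequality: |a^T c| <= ||a|| ||c||.
    assert abs(a @ c) <= np.linalg.norm(a) * np.linalg.norm(c)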
