
Background Material

• A computer vision "encyclopedia": CVonline.
  http://homepages.inf.ed.ac.uk/rbf/CVonline/
• Linear algebra review and Matlab tutorial:
  - Eero Simoncelli, "A Geometric View of Linear Algebra"
    http://www.cns.nyu.edu/~eero/NOTES/geomLinAlg.pdf
  - Michael Jordan's slightly more in-depth linear algebra review
    http://www.cs.brown.edu/courses/cs143/Materials/linalg_jordan_86.pdf
  - Online introductory linear algebra book by Jim Hefferon
    http://joshua.smcvt.edu/linearalgebra/

Assigned Reading

• Eero Simoncelli, "A Geometric View of Linear Algebra"
  http://www.cns.nyu.edu/~eero/NOTES/geomLinAlg.pdf

Notation

• Standard math textbook notation:
  - Scalars are italic Times Roman: n, N
  - Vectors are bold lowercase: x
  - Row vectors are denoted with a transpose: x^T
  - Matrices are bold uppercase: M
  - Transformations are calligraphic letters: T

Overview

• Vectors in R^n
• Bases and transformations
• Inverse transformations
• Eigendecomposition
• Singular Value Decomposition

Warm-up: Vectors in R^n

• We can think of vectors in two ways:
  - as points in a multidimensional space with respect to some coordinate system
  - as translations of a point in a multidimensional space, e.g., a translation of the origin (0,0)

Vectors in R^n

• Notation: x = (x_1, x_2, \ldots, x_n) \in R^n
• Length of a vector:

  \|x\| = \sqrt{x_1^2 + x_2^2 + \cdots + x_n^2} = \sqrt{\sum_{i=1}^{n} x_i^2}
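As a quick numerical check of the length formula, here is a minimal numpy sketch (Python stands in for the course's Matlab; the vector values are made up for illustration):

    import numpy as np

    x = np.array([3.0, 4.0])                 # illustrative vector in R^2

    length_manual = np.sqrt(np.sum(x**2))    # sqrt of the sum of squared components
    length_builtin = np.linalg.norm(x)       # numpy's built-in Euclidean norm

    print(length_manual, length_builtin)     # both print 5.0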

Scalar Product

• The dot product (or scalar product) of two vectors. Example in R^2:

  x \cdot y = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \cdot \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} = x_1 y_1 + x_2 y_2 = s

• Notation:

  x \cdot y = x^T y = \begin{bmatrix} x_1 & \cdots & x_n \end{bmatrix} \begin{bmatrix} y_1 \\ \vdots \\ y_n \end{bmatrix}

• It is the projection of one vector onto another:

  x \cdot y = \|x\| \, \|y\| \cos\theta

• We will use the last two notations to denote the dot product.
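The three notations agree numerically. A small numpy sketch with illustrative values:

    import numpy as np

    x = np.array([1.0, 2.0])
    y = np.array([3.0, 4.0])

    s1 = np.dot(x, y)     # x . y
    s2 = x @ y            # x^T y (for 1-D arrays, @ is the dot product)

    # Angle form: ||x|| ||y|| cos(theta)
    theta = np.arccos(s1 / (np.linalg.norm(x) * np.linalg.norm(y)))
    s3 = np.linalg.norm(x) * np.linalg.norm(y) * np.cos(theta)

    print(s1, s2, s3)     # all equal 11.0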

Scalar Product

• Commutative: x \cdot y = y \cdot x
• Distributive: (x + y) \cdot z = x \cdot z + y \cdot z
• Scalar factors pull out:

  (cx) \cdot y = c\,(x \cdot y), \qquad x \cdot (cy) = c\,(x \cdot y), \qquad (c_1 x) \cdot (c_2 y) = (c_1 c_2)(x \cdot y)

• Non-negativity: x \cdot x \ge 0, with equality only for x = 0
• Orthogonality: \forall x \ne 0, y \ne 0: \; x \cdot y = 0 \Leftrightarrow x \perp y

Norms in R^n

• Euclidean norm (sometimes called the 2-norm):

  \|x\| = \|x\|_2 = \sqrt{x \cdot x} = \sqrt{x_1^2 + x_2^2 + \cdots + x_n^2} = \sqrt{\sum_{i=1}^{n} x_i^2}

• The length of a vector is defined to be its (Euclidean) norm.
• A unit vector is a vector of length 1.
• The non-negativity properties also hold for the norm: \|x\| \ge 0, with \|x\| = 0 only for x = 0.
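These properties are easy to verify numerically. A minimal numpy sketch with made-up values:

    import numpy as np

    x = np.array([1.0, 2.0])
    y = np.array([3.0, 4.0])
    z = np.array([5.0, 6.0])
    c1, c2 = 2.0, -3.0

    assert np.isclose(x @ y, y @ x)                            # commutative
    assert np.isclose((x + y) @ z, x @ z + y @ z)              # distributive
    assert np.isclose((c1 * x) @ (c2 * y), c1 * c2 * (x @ y))  # scalars pull out

    # Orthogonality: perpendicular nonzero vectors have zero dot product
    assert np.isclose(np.array([1.0, 0.0]) @ np.array([0.0, 1.0]), 0.0)

    # Norm properties: non-negative, zero only for the zero vector
    assert np.linalg.norm(x) >= 0.0
    assert np.linalg.norm(np.zeros(2)) == 0.0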

Bases and Transformations

• We will look at:
  - Bases
  - Orthogonality
  - Change of basis (linear transformation)
  - Matrices and matrix operations

Linear Dependence

• Linear combination of vectors x_1, x_2, \ldots, x_n:

  c_1 x_1 + c_2 x_2 + \cdots + c_n x_n

• A set of vectors X = \{x_1, x_2, \ldots, x_n\} is linearly dependent if there exists a vector x_i \in X that is a linear combination of the rest of the vectors.
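One way to test dependence numerically is to compare the rank of the matrix whose columns are the vectors against the number of vectors. A minimal numpy sketch (values chosen for illustration):

    import numpy as np

    # Columns are vectors x1, x2, x3 in R^2; here x3 = 2*x1 + 3*x2,
    # so the set is linearly dependent.
    X = np.array([[1.0, 0.0, 2.0],
                  [0.0, 1.0, 3.0]])

    print(np.linalg.matrix_rank(X))   # 2 < 3 columns => linearly dependent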

Linear Dependence

• In R^n:
  - sets of n+1 vectors are always linearly dependent
  - there can be at most n linearly independent vectors

Bases (Examples in R^2)

Bases

• A basis is a linearly independent set of vectors that spans the "whole space"; i.e., we can write every vector in our space as a linear combination of vectors in that set.
• Every set of n linearly independent vectors in R^n is a basis of R^n.
• A basis is called:
  - orthogonal, if every basis vector is orthogonal to all other basis vectors
  - orthonormal, if additionally all basis vectors have length 1

Bases

• The standard basis in R^n is made up of a set of unit vectors:

  \hat{e}_1, \hat{e}_2, \ldots, \hat{e}_n

• We can write a vector in terms of its standard basis:

  x = x_1 \hat{e}_1 + x_2 \hat{e}_2 + \cdots + x_n \hat{e}_n

• Observation: to find the coefficient x_i for a particular basis vector, we project our vector onto it:

  x_i = \hat{e}_i \cdot x
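A quick numpy check of this observation (the vector is made up for illustration):

    import numpy as np

    x = np.array([4.0, 7.0, 2.0])      # illustrative vector in R^3
    E = np.eye(3)                      # columns are the standard basis e1, e2, e3

    # Coefficient for each basis vector = projection of x onto that vector
    coeffs = np.array([E[:, i] @ x for i in range(3)])
    print(coeffs)                      # [4. 7. 2.] -- recovers the components of x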

Change of Basis

• Suppose we have a new basis B = [b_1 \cdots b_n], b_i \in R^m, and a vector x \in R^m that we would like to represent in terms of B.
• Compute the new components:

  \tilde{x} = B^{-1} x

• When B is orthonormal:

  \tilde{x} = \begin{bmatrix} b_1^T x \\ \vdots \\ b_n^T x \end{bmatrix}

  - \tilde{x}_i is the projection of x onto b_i
  - Note the use of a dot product.

Outer Product

• The outer product of two vectors:

  x \circ y = x y^T = \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix} \begin{bmatrix} y_1 & \cdots & y_m \end{bmatrix} = M

• A matrix M that is the outer product of two vectors is a matrix of rank 1.

Matrix Multiplication – dot product

• Matrix multiplication can be expressed using dot products:

  BA = \begin{bmatrix} b_1^T \\ \vdots \\ b_m^T \end{bmatrix} \begin{bmatrix} a_1 & \cdots & a_n \end{bmatrix} = \begin{bmatrix} b_1 \cdot a_1 & \cdots & b_1 \cdot a_n \\ \vdots & \ddots & \vdots \\ b_m \cdot a_1 & \cdots & b_m \cdot a_n \end{bmatrix}

Matrix Multiplication – outer product

• Matrix multiplication can be expressed using a sum of outer products:

  BA = \begin{bmatrix} b_1 & \cdots & b_n \end{bmatrix} \begin{bmatrix} a_1^T \\ \vdots \\ a_n^T \end{bmatrix} = b_1 a_1^T + b_2 a_2^T + \cdots + b_n a_n^T = \sum_{i=1}^{n} b_i \circ a_i
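A minimal numpy sketch verifying that both forms give the usual product (random illustrative data):

    import numpy as np

    rng = np.random.default_rng(0)
    B = rng.standard_normal((4, 3))
    A = rng.standard_normal((3, 5))

    # Dot-product form: entry (i, j) of BA is (row i of B) . (column j of A)
    dot_form = np.array([[B[i, :] @ A[:, j] for j in range(A.shape[1])]
                         for i in range(B.shape[0])])

    # Outer-product form: BA = sum over i of (column i of B) outer (row i of A)
    outer_form = sum(np.outer(B[:, i], A[i, :]) for i in range(B.shape[1]))

    print(np.allclose(B @ A, dot_form), np.allclose(B @ A, outer_form))   # True True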

Rank of a Matrix

• The rank of a matrix is the number of linearly independent columns (equivalently, rows).

Singular Value Decomposition: D = USV^T

• A matrix D \in R^{I_1 \times I_2} has a column space and a row space.
• SVD orthogonalizes these spaces and decomposes D:

  D = U S V^T

  (U contains the left singular vectors/eigenvectors; V contains the right singular vectors/eigenvectors)
• Rewrite D as the sum of a minimum number of rank-1 matrices:

  D = \sum_{r=1}^{R} \sigma_r \, u_r \circ v_r
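A small numpy sketch rebuilding D from its rank-1 terms (random illustrative data):

    import numpy as np

    rng = np.random.default_rng(1)
    D = rng.standard_normal((5, 3))

    U, s, Vt = np.linalg.svd(D, full_matrices=False)

    # Rebuild D as a sum of rank-1 matrices sigma_r * (u_r outer v_r)
    D_rebuilt = sum(s[r] * np.outer(U[:, r], Vt[r, :]) for r in range(len(s)))
    print(np.allclose(D, D_rebuilt))   # True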

Matrix SVD Properties: D = USV^T

• Rank decomposition: the sum of the minimum number of rank-1 matrices

  D = \sum_{r=1}^{R} \sigma_r \, u_r \circ v_r = \sigma_1 u_1 v_1^T + \sigma_2 u_2 v_2^T + \cdots + \sigma_R u_R v_R^T

• Multilinear rank decomposition:

  D = \sum_{r_1=1}^{R_1} \sum_{r_2=1}^{R_2} \sigma_{r_1 r_2} \, u_{r_1} \circ v_{r_2}
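Truncating the rank decomposition after the k largest singular values gives a rank-k approximation of D; a minimal numpy sketch with random illustrative data:

    import numpy as np

    rng = np.random.default_rng(2)
    D = rng.standard_normal((6, 4))

    U, s, Vt = np.linalg.svd(D, full_matrices=False)

    # Keep only the k largest singular values: a rank-k approximation of D
    k = 2
    D_k = sum(s[r] * np.outer(U[:, r], Vt[r, :]) for r in range(k))

    print(np.linalg.matrix_rank(D_k))   # 2
    print(np.linalg.norm(D - D_k))      # Frobenius-norm approximation error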

Matrix Inverse

Some Matrix Properties

Matlab Tutorial
