
Linear Algebra & why is linear algebra useful in computer vision?

References:
- Any book on linear algebra!
- [HZ] – chapters 2, 4

Some of the slides in this lecture are courtesy of Prof. Octavia I. Camps, Penn State University.

Why is linear algebra useful in computer vision?

• Representation – 3D points in the scene – 2D points in the image

• Coordinates will be used to – Perform geometrical transformations – Associate 3D with 2D points

• Images are matrices of numbers – Find properties of these numbers

Agenda

1. Basic definitions and properties
2. Geometrical transformations
3. SVD and its applications

Vectors (i.e., 2D or 3D vectors)

P = [x, y, z]   (a point in the 3D world)

p = [x, y]   (a point in the image)

Vectors (i.e., 2D vectors)

[Figure: a 2D vector v from the origin to a point P, making an angle θ with the x1 axis]

Magnitude: ||v|| = sqrt(x1² + x2²)

If ||v|| = 1, v is a UNIT vector

Orientation: θ = tan⁻¹(x2 / x1)

Vector Addition

v + w

Vector Subtraction

v − w

Scalar Product

av

Inner (dot) Product

v · w = ||v|| ||w|| cos α

The inner product is a SCALAR!

Orthonormal basis: the unit vectors i and j along the x1 and x2 axes satisfy ||i|| = ||j|| = 1 and i · j = 0

Vector (cross) Product

u = v × w

The cross product is a VECTOR!

Magnitude: ||u|| = ||v × w|| = ||v|| ||w|| sin α

Orientation: u is orthogonal to both v and w (right-hand rule)

Vector Product Computation

i = (1, 0, 0), ||i|| = 1, i = j × k
j = (0, 1, 0), ||j|| = 1, j = k × i
k = (0, 0, 1), ||k|| = 1, k = i × j

For v = (x1, x2, x3) and w = (y1, y2, y3):

u = v × w = (x2 y3 − x3 y2) i + (x3 y1 − x1 y3) j + (x1 y2 − x2 y1) k
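A minimal NumPy sketch of these operations (the vectors v and w below are arbitrary example values), checking the dot product, the cross product, and the magnitude identity:

```python
import numpy as np

# Example vectors (arbitrary values, just for illustration)
v = np.array([1.0, 2.0, 3.0])
w = np.array([4.0, 5.0, 6.0])

# Inner (dot) product: a scalar, equal to ||v|| ||w|| cos(alpha)
dot = np.dot(v, w)

# Cross product: a vector orthogonal to both v and w
u = np.cross(v, w)

# Check the magnitude identity ||v x w|| = ||v|| ||w|| sin(alpha)
cos_alpha = dot / (np.linalg.norm(v) * np.linalg.norm(w))
sin_alpha = np.sqrt(1.0 - cos_alpha**2)
assert np.isclose(np.linalg.norm(u),
                  np.linalg.norm(v) * np.linalg.norm(w) * sin_alpha)

# u is orthogonal to both v and w
assert np.isclose(np.dot(u, v), 0.0) and np.isclose(np.dot(u, w), 0.0)
```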

Matrices

An×m = [ a11  a12  ...  a1m
         a21  a22  ...  a2m
          .    .          .
         an1  an2  ...  anm ]

(e.g., an image is a matrix whose entries are the pixels' intensity values)

Sum: C = A + B, where cij = aij + bij

A and B must have the same dimensions!

Example: [1 2; 3 4] + [5 6; 7 8] = [6 8; 10 12]

Matrices

Product: Cn×p = An×m Bm×p, where cij = Σk aik bkj

A and B must have compatible dimensions (the number of columns of A must equal the number of rows of B)!

Transpose

Definition: Cm×n = (An×m)T, where cij = aji

Identities:

(A + B)T = AT + BT
(AB)T = BT AT

If A = AT, then A is symmetric

Determinant

det(A) is a useful value computed from the elements of a square matrix A

det [a11] = a11

det [a11 a12; a21 a22] = a11 a22 − a12 a21

det [a11 a12 a13; a21 a22 a23; a31 a32 a33] = a11 a22 a33 + a12 a23 a31 + a13 a21 a32 − a13 a22 a31 − a23 a32 a11 − a33 a12 a21

Matrix Inverse

The inverse does not exist for all matrices; it is necessary (but not sufficient) that the matrix is square.

AA−1 = A−1A = I

A−1 = [a11 a12; a21 a22]−1 = (1 / det A) · [a22 −a12; −a21 a11],   det A ≠ 0

If det A = 0, A does not have an inverse.
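A short NumPy sketch (with made-up example matrices) of the operations above: sum, product, transpose identities, determinant, and inverse:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])   # example square matrix
B = np.array([[0.0, 1.0],
              [4.0, 2.0]])

C = A + B                    # sum: same dimensions required
D = A @ B                    # product: inner dimensions must match

# Transpose identities
assert np.allclose((A + B).T, A.T + B.T)
assert np.allclose((A @ B).T, B.T @ A.T)

# Determinant and inverse (the inverse exists only if det(A) != 0)
det_A = np.linalg.det(A)     # 2*3 - 1*1 = 5
if not np.isclose(det_A, 0.0):
    A_inv = np.linalg.inv(A)
    assert np.allclose(A @ A_inv, np.eye(2))
```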

2D Geometrical Transformations

2D Translation

P' = P + t

2D Translation

x' = x + tx
y' = y + ty
(where P = (x, y), P' = (x', y'), and t = (tx, ty))

2D Translation using Matrices

P' = P + t = [x + tx, y + ty]T = [1 0 tx; 0 1 ty] · [x, y, 1]T

Homogeneous Coordinates

• Multiply the coordinates by a non-zero scalar and add an extra coordinate equal to that scalar. For example, (x, y) → (λx, λy, λ); with λ = 1, (x, y) → (x, y, 1)

Back to Cartesian Coordinates:

• Divide by the last coordinate and eliminate it. For example, (x, y, w) → (x/w, y/w)

• NOTE: in our example the scalar was 1

2D Translation using Homogeneous Coordinates

P' = [x + tx, y + ty, 1]T = [1 0 tx; 0 1 ty; 0 0 1] · [x, y, 1]T = T · P

Scaling

P’

P Scaling Equation P’ sy y

P y

x sx x Scaling & Translating

P’’

P

P’=S·P P’’=T·P’ P’’=T · P’=T ·(S · P)=(T · S)·P = A · P Scaling & Translating

Translating & Scaling = Scaling & Translating ?

In general, no: matrix multiplication does not commute, so T · S ≠ S · T.
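A quick numerical check (arbitrary example values) confirms that the order matters:

```python
import numpy as np

S = np.diag([2.0, 3.0, 1.0])            # scaling with sx = 2, sy = 3
T = np.array([[1.0, 0.0, 4.0],
              [0.0, 1.0, 5.0],
              [0.0, 0.0, 1.0]])          # translation by (4, 5)

P = np.array([1.0, 1.0, 1.0])            # point (1, 1) in homogeneous coordinates

print(T @ S @ P)   # scale first, then translate -> [ 6.  8.  1.]
print(S @ T @ P)   # translate first, then scale -> [10. 18.  1.]
```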

P’

P Rotation

Counter-clockwise rotation by an angle θ

P’ y’ θ y P x’ x Degrees of Freedom

R is 2×2 → 4 elements, but a 2D rotation has only 1 degree of freedom (the angle θ)

Note: R belongs to the category of normal matrices and satisfies many interesting properties, e.g., R · RT = RT · R = I and det(R) = 1.

Rotation + Scaling + Translation

P' = (T R S) P

If sx = sy, this is a similarity transformation!
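A sketch, with arbitrary example parameters, of composing rotation, scaling, and translation into a single homogeneous matrix, and of checking the rotation properties noted above:

```python
import numpy as np

theta = np.deg2rad(30.0)             # example angle
c, s = np.cos(theta), np.sin(theta)

R = np.array([[c,  -s,  0.0],
              [s,   c,  0.0],
              [0.0, 0.0, 1.0]])      # rotation (1 DOF)
S = np.diag([2.0, 2.0, 1.0])         # uniform scaling, sx = sy = 2
T = np.array([[1.0, 0.0, 4.0],
              [0.0, 1.0, 5.0],
              [0.0, 0.0, 1.0]])      # translation by (4, 5)

# Rotation properties: orthogonal 2x2 block with determinant +1
assert np.allclose(R[:2, :2] @ R[:2, :2].T, np.eye(2))
assert np.isclose(np.linalg.det(R[:2, :2]), 1.0)

A = T @ R @ S                        # similarity transformation (sx = sy)
P = np.array([1.0, 0.0, 1.0])        # point (1, 0) in homogeneous coordinates
P_prime = A @ P
```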

Transformation in 2D

- Isometries
- Similarities
- Affinities
- Projective

Transformation in 2D

Isometries [Euclidean]:

- Preserve distances
- 3 DOF (1 rotation + 2 translation)
- Regulate the motion of rigid objects

Transformation in 2D

Similarities:

- Preserve:
  - ratios of lengths
  - angles
- 4 DOF

Transformation in 2D

Affinities:

- Preserve:
  - parallel lines
  - ratios of areas
  - ratios of lengths on collinear lines
  - others…
- 6 DOF

Transformation in 2D

Projective:

- 8 DOF
- Preserve:
  - cross ratio of 4 collinear points
  - collinearity
  - and a few others…
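As a sketch only (the homography H below is an arbitrary example), a projective transformation is applied as a general 3×3 matrix on homogeneous coordinates, followed by the divide-by-last-coordinate step introduced earlier:

```python
import numpy as np

H = np.array([[1.0,   0.2,   3.0],
              [0.1,   1.0,   5.0],
              [0.001, 0.002, 1.0]])   # example homography (8 DOF, defined up to scale)

p = np.array([10.0, 20.0, 1.0])       # point (10, 20) in homogeneous coordinates
q = H @ p
x, y = q[:2] / q[2]                   # back to Cartesian coordinates
```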

Eigenvalues and Eigenvectors

An eigenvalue λ and eigenvector u satisfy

Au = λu where A is a square matrix.

• Multiplying u by A scales u by λ

Eigenvalues and Eigenvectors

Rearranging the previous equation gives the system

Au − λu = (A − λI)u = 0, which has a non-trivial solution if and only if det(A − λI) = 0.

• The eigenvalues are the roots of this determinant, which is a polynomial in λ.

• Substitute the resulting eigenvalues back into Au = λu and solve to obtain the corresponding eigenvector.
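A minimal NumPy sketch (example matrix) that computes eigenvalues and eigenvectors and checks Au = λu:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])            # example symmetric matrix

eigvals, eigvecs = np.linalg.eig(A)    # eigenvalues 3 and 1 (order may vary)

for i in range(len(eigvals)):
    u = eigvecs[:, i]                  # eigenvector i (column of eigvecs)
    lam = eigvals[i]
    assert np.allclose(A @ u, lam * u)  # A u = lambda u
```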

Singular Value Decomposition

UΣVT = A

• Where U and V are orthogonal matrices, and Σ is a diagonal matrix. For example:

Singular Value Decomposition

• The singular values on the diagonal of Σ are ordered so that σ1 ≥ σ2 ≥ …

A Numerical Example

• Look at how the multiplication works out, left to right:
• Column 1 of U gets scaled by the first value from Σ.

• The resulting vector gets multiplied by row 1 of VT to produce a contribution to the columns of A.

A Numerical Example

• Each product of (column i of U) · (value i from Σ) · (row i of VT) produces a component of the final A.

A Numerical Example

We can look at Σ to see that the first column has a large effect in this example, while the second column has a much smaller effect.
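This left-to-right reading can be checked directly: A is the sum of rank-one terms (column i of U) · (value i from Σ) · (row i of VT). A small sketch with an arbitrary example matrix:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0],
              [0.0, 2.0]])                      # arbitrary example matrix

U, sigma, Vt = np.linalg.svd(A, full_matrices=False)

# Rebuild A as a sum of rank-one contributions
A_rebuilt = np.zeros_like(A)
for i in range(len(sigma)):
    A_rebuilt += sigma[i] * np.outer(U[:, i], Vt[i, :])

assert np.allclose(A, A_rebuilt)
```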

SVD Applications

• For this image, using only the first 10 of 300 singular values produces a recognizable reconstruction
• So, SVD can be used for image compression
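A hedged sketch of this idea: keeping only the first k singular values gives a low-rank approximation of an image matrix. Here `image` is a random placeholder array; any grayscale image loaded as a 2D array would work the same way:

```python
import numpy as np

def compress(image, k):
    """Rank-k approximation of a 2D image matrix via truncated SVD."""
    U, sigma, Vt = np.linalg.svd(image, full_matrices=False)
    return U[:, :k] @ np.diag(sigma[:k]) @ Vt[:k, :]

# Random data standing in for a real grayscale image
image = np.random.rand(300, 300)
approx = compress(image, 10)   # keep only the first 10 singular values
```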

Principal Component Analysis

• Remember, columns of U are the Principal Components of the data: the major patterns that can be added to produce the columns of the original matrix
• One use of this is to construct a matrix where each column is a separate data sample
• Run SVD on that matrix, and look at the first few columns of U to see patterns that are common among the columns
• This is called Principal Component Analysis (or PCA) of the data samples

Principal Component Analysis

• Often, raw data samples have a lot of redundancy and patterns
• PCA can allow you to represent data samples as weights on the principal components, rather than using the original raw form of the data
• By representing each sample as just those weights, you can represent just the “meat” of what’s different between samples
• This minimal representation makes machine learning and other algorithms much more efficient
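A minimal PCA sketch along these lines, using random data as a stand-in for real samples (each column is one sample, as described above):

```python
import numpy as np

# Data matrix: each column is one (made-up) data sample
X = np.random.rand(100, 50)           # 100-dimensional samples, 50 of them

# Center the data: the principal components are patterns of deviation from the mean sample
X_centered = X - X.mean(axis=1, keepdims=True)

U, sigma, Vt = np.linalg.svd(X_centered, full_matrices=False)

k = 5
components = U[:, :k]                  # first k principal components
weights = components.T @ X_centered    # each sample represented by k weights instead of 100 numbers
```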