Linear Algebra Review
Slides by Juan Carlos Niebles*, Ranjay Krishna* and Maxime Voisin*
*Stanford Vision and Learning Lab
Stanford University, 27-Sep-2018

Another, very in-depth linear algebra review from CS229 is available here:
http://cs229.stanford.edu/section/cs229-linalg.pdf
And a video discussion of linear algebra from EE263 is here (lectures 3 and 4):
https://see.stanford.edu/Course/EE263
Outline
• Vectors and matrices
  – Basic Matrix Operations
  – Determinants, norms, trace
  – Special Matrices
• Transformation Matrices
  – Homogeneous coordinates
  – Translation
• Matrix inverse
• Matrix rank
• Eigenvalues and Eigenvectors
• Matrix Calculus
Vectors and matrices
• Vectors and matrices are just collections of ordered numbers that represent something: pixel brightness, speed, location in space, etc. We'll define some common uses and standard operations on them.
Vector
• A column vector v ∈ ℝ^(n×1), where

  v = [v1]
      [v2]
      [⋮ ]
      [vn]

• A row vector v^T ∈ ℝ^(1×n), where v^T = [v1 v2 … vn]
  – ^T denotes the transpose operation
• We'll default to column vectors in this class
• You'll want to keep track of the orientation of your vectors when programming in Python
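The orientation bookkeeping above can be seen directly in numpy (a minimal sketch; the vector values are illustrative):

```python
import numpy as np

# An illustrative 3-component column vector, stored as a 3x1 array.
v = np.array([[1.0], [2.0], [3.0]])
print(v.shape)    # (3, 1)

# Its transpose is a 1x3 row vector.
print(v.T.shape)  # (1, 3)

# Careful: a plain 1-D numpy array has no row/column orientation at all,
# and transposing it does nothing.
u = np.array([1.0, 2.0, 3.0])
print(u.shape)    # (3,)
print(u.T.shape)  # (3,)
```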
Vector
• Some vectors have a geometric interpretation, others don't
  – Some vectors have a geometric interpretation: points from the origin. We can make calculations like the "distance" between 2 vectors.
  – Other vectors don't have a geometric interpretation: vectors can represent any kind of data (pixels, gradients at an image keypoint, etc.). Such vectors are just data, but we can still make calculations like the "distance" between 2 vectors.
Matrix
• A matrix A ∈ ℝ^(m×n) is an array of numbers with size m by n, i.e. m rows and n columns.
• If m = n, we say that A is square.
Images
• Python represents an image as a matrix of pixel brightnesses
• Pixels are indexed [row, column]; the upper-left corner is (0, 0), and Python indices start at 0
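A tiny sketch of this convention with a synthetic "image" (the pixel values are made up, not from a real photo):

```python
import numpy as np

# A 4x5 synthetic grayscale image: brightnesses stored as an 8-bit matrix.
img = np.arange(20, dtype=np.uint8).reshape(4, 5)

# Pixels are indexed [row, column], and indices start at 0,
# so the upper-left pixel is img[0, 0].
print(img[0, 0])   # upper-left corner
print(img.shape)   # (rows, columns) = (4, 5)
```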
Images
• An image can be represented as a matrix of pixels, or as a vector of pixels.

Color Images
• Grayscale images have 1 number per pixel, and are stored as an m × n matrix.
• Color images have 3 numbers per pixel (red, green, and blue brightnesses, i.e. RGB), and are stored as an m × n × 3 matrix.

Basic Matrix Operations
• We will discuss:
  – Addition
  – Scaling
  – Dot product
  – Multiplication
  – Transpose
  – Inverse / pseudoinverse
  – Determinant / trace
Matrix Operations
• Addition: performed elementwise. We can only add a matrix with matching dimensions, or a scalar.
• Scaling: multiplying by a scalar scales every entry.
• Good to know for Python assignments!
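The addition and scaling rules above, sketched in numpy (illustrative values):

```python
import numpy as np

# Elementwise addition requires matching dimensions...
A = np.array([[1, 2], [3, 4]])
B = np.array([[10, 20], [30, 40]])
print(A + B)   # elementwise sum

# ...or a scalar, which numpy adds to every entry.
print(A + 1)

# Scaling multiplies every entry by the scalar.
print(3 * A)
```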
Vector Norms
• Examples of vector norms: the ℓ1 norm ‖x‖₁ = Σᵢ |xᵢ|, the ℓ2 norm ‖x‖₂ = sqrt(Σᵢ xᵢ²), and the ℓ∞ norm ‖x‖∞ = maxᵢ |xᵢ|
• More formally, a norm is any function f that satisfies the following 4 properties:
  – Non-negativity: For all x, f(x) ≥ 0
  – Definiteness: f(x) = 0 if and only if x = [0, 0, …, 0]
  – Homogeneity: For all x and scalars t, f(tx) = |t| f(x)
  – Triangle inequality: For all x, y, f(x + y) ≤ f(x) + f(y)
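The common norms can be checked with numpy (illustrative vector values):

```python
import numpy as np

x = np.array([3.0, -4.0])

# Common norms of a vector.
l1 = np.linalg.norm(x, 1)         # sum of absolute values
l2 = np.linalg.norm(x)            # Euclidean length (default)
linf = np.linalg.norm(x, np.inf)  # largest absolute entry

print(l1, l2, linf)               # 7.0 5.0 4.0

# Spot-check the triangle inequality for one pair of vectors.
y = np.array([1.0, 2.0])
assert np.linalg.norm(x + y) <= np.linalg.norm(x) + np.linalg.norm(y)
```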
Vector Operation: inner product
• Inner product (dot product)
  – Multiply corresponding entries of two vectors and add up the result: x·y = Σᵢ xᵢyᵢ
  – x·y is also ‖x‖ ‖y‖ cos(the angle between x and y)
Vector Operation: inner product
• Inner product (dot product): A·B = ‖A‖ ‖B‖ cos(θ)
• If B is a unit vector, then A·B gives the length of A which lies in the direction of B:
  A·B = ‖A‖ cos(θ) × 1
Matrix Operations
• Multiplication
  – The product of two matrices: AB
  – Each entry in the result is: (that row of A) dot product with (that column of B)
  – Many uses, which will be covered later
Matrix Operations
• Multiplication example:
  – Each entry of the matrix product is made by taking the dot product of the corresponding row in the left matrix with the corresponding column in the right one.
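The entry-by-entry rule above, sketched with numpy (illustrative matrices):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])    # 2x3
B = np.array([[7, 8],
              [9, 10],
              [11, 12]])     # 3x2

C = A @ B                    # 2x2 product

# Each entry is (row of A) dot (column of B):
assert C[0, 1] == A[0, :] @ B[:, 1]
print(C)
```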
Matrix Operations
• The product of two matrices AB is defined when the number of columns of A equals the number of rows of B
• Powers
  – By convention, we can refer to the matrix product AA as A^2, AAA as A^3, etc.
  – Important: only square matrices can be multiplied that way! (Make sure you understand why.)
Matrix Operations
• Transpose a matrix: flip the matrix, so row 1 becomes column 1: (A^T)ᵢⱼ = Aⱼᵢ
• A useful identity: (ABC)^T = C^T B^T A^T
Matrix Operations
• Determinant: det(A) returns a scalar
  – Represents the area (or volume) of the parallelogram described by the vectors in the rows of the matrix
  – For A = [a b; c d], det(A) = ad - bc
  – Properties: det(AB) = det(A) det(B), det(A^T) = det(A), and det(A^-1) = 1/det(A) for invertible A
Matrix Operations
• Trace: tr(A) is the sum of the diagonal entries of A; it returns a scalar
  – Invariant to a lot of transformations, so it's used sometimes in proofs. (Rarely in this class though.)
  – Properties: tr(A + B) = tr(A) + tr(B), tr(AB) = tr(BA)
• Norms
  – Vector norms (we've talked about them earlier)
  – Norms can also be defined for matrices, such as the Frobenius norm: ‖A‖_F = sqrt(Σᵢ Σⱼ Aᵢⱼ²)

Special Matrices
• Identity matrix I
  – Square matrix, 1's along diagonal, 0's elsewhere
  – I · [a matrix A] = [that matrix A]
• Diagonal matrix
  – Square matrix with numbers along diagonal, 0's elsewhere
  – [diagonal matrix D] · [another matrix B] scales the rows of matrix B
Special Matrices
• Symmetric matrix: A^T = A
• Skew-symmetric matrix: A^T = -A
Transformation Matrices
• Matrix multiplication can be used to transform vectors. A matrix used in this way is called a transformation matrix.
Transformation: scaling
• Matrices can be used to transform vectors in useful ways, through multiplication: x' = Ax
• The simplest transformation is scaling: initial vector x, scaling matrix A, scaled vector x'
• Initial vector → multiply by the scaling matrix → scaled vector
• (Verify to yourself that the matrix multiplication works out this way)
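A scaling transform in numpy (the scale factors are illustrative):

```python
import numpy as np

# A scaling matrix that stretches x by 2 and shrinks y by half.
A = np.array([[2.0, 0.0],
              [0.0, 0.5]])
x = np.array([3.0, 4.0])

x_prime = A @ x   # x' = Ax
print(x_prime)    # [6.0, 2.0]
```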
Transformation: rotation
• Initial vector → rotated vector (rotation by θ = 30°)

Before learning how to rotate a vector, let's understand how to do something slightly different
• How can you convert a vector represented in frame "0" to a new, rotated coordinate frame "1"?
  – It is the same data point, but it is represented in a new, rotated coordinate frame
  – (x, y) are its coordinates in frame "0"; (x', y') are its coordinates in frame "1"
• Remember what a vector is: [component in the direction of the frame's x axis, component in the direction of the y axis]
• Goal: How do we go from (x, y) to (x', y')?
• Answer: we use dot products!
  – x' (the new x coordinate) is [original vector] dot [the new x axis]: the length of the original vector which lies in the direction of the new x axis
  – y' (the new y coordinate) is [original vector] dot [the new y axis]: the length of the original vector which lies in the direction of the new y axis
• So:
  – x' = [x, y] dot [the new x axis]
  – y' = [x, y] dot [the new y axis]
  – Two stacked dot products: it is a matrix-vector multiplication! A 2x2 rotation matrix R holds the new axes in its rows
• Answer: using a matrix-vector multiplication, P' = R P
Now, here's how we do a rotation!
• Until now, we've seen how to convert a data point represented in frame "0" to a new, rotated coordinate frame "1" (P' = R P). We rotated the frame to the left.
• Insight: rotating the frame to the left == rotating the data point to the right
  – Suppose we express a point in the new coordinate system "1"
  – If we plot the result in the original coordinate system, we have rotated the point
• Thus, rotation matrices can be used as operators to rotate vectors: rotation by an angle θ. We'll usually think of them in that sense.

2D Rotation Matrix
• Counter-clockwise rotation of P by an angle θ: P' = R P, with

  R = [cos θ   -sin θ]
      [sin θ    cos θ]

  so that
  x' = x cos θ - y sin θ
  y' = x sin θ + y cos θ
Transformation Matrices
• Multiple transformation matrices can be used to transform a point: p' = R₂ R₁ S p
• The effect of this is to apply their transformations one after the other, from right to left
• In the example above, the result is (R₂ (R₁ (S p)))
• The result is exactly the same if we multiply the matrices first, to form a single transformation matrix: p' = (R₂ R₁ S) p
But notice, we can’t add a constant! This is sufficient for scale, rotate, skew transformations.
– – In general, a matrix multiplication lets us linearly combine components of a vector
• Homogeneous system Stanford University Linear Algebra Review 27-Sep-2018 50 AND translate
Homogeneous system
• The (somewhat hacky) solution? Stick a "1" at the end of every vector:
  – Now we can rotate, scale, and skew like before, AND translate (note how the multiplication works out, above)
  – This is called "homogeneous coordinates"
• In homogeneous coordinates, the multiplication works out so the rightmost column of the matrix is a vector that gets added
• Generally, a homogeneous transformation matrix will have a bottom row of [0 0 1], so that the result has a "1" at the bottom too
• One more thing we might want: to divide the result by something
  – For example, we may want to divide by a coordinate, to make things scale down as they get farther away in a camera image
  – Matrix multiplication can't actually divide
  – So, by convention, in homogeneous coordinates, we'll divide the result by its last coordinate after doing a matrix multiplication
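A quick numpy sketch of the homogeneous-coordinate trick (the translation values are illustrative):

```python
import numpy as np

# Homogeneous point: stick a "1" at the end of (x, y).
p = np.array([2.0, 3.0, 1.0])

# Translation by (tx, ty) = (5, -1): the rightmost column gets added.
T = np.array([[1.0, 0.0,  5.0],
              [0.0, 1.0, -1.0],
              [0.0, 0.0,  1.0]])

p_prime = T @ p
# By convention, divide by the last coordinate after multiplying
# (a no-op here, since the bottom row [0 0 1] keeps it at 1).
p_prime = p_prime / p_prime[-1]
print(p_prime[:2])   # [7.0, 2.0]
```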
P’ t P 2D Translation using Homogeneous Coordinates Stanford University Linear Algebra Review 27-Sep-2018 54 P ù ú ú ú û y x 1 é ê ê ê ë ) × ù ú ú ú û ) 1 , y x 1 , y 1 t t t y , , x x t 0 0 1 ( ( ® 0 0 1 ® é ê ê ê ë ) ) y y = t , , ù ú ú ú û x x y x t ( t t ( = 1 + + = x y P t ê ê ë é ê ® ' P P’ tx t P x
y ty
Coordinates 2D Translation using Homogeneous Stanford University Linear Algebra Review 27-Sep-2018 55 P ù ú ú ú û y x 1 é ê ê ê ë ) × ù ú ú ú û ) 1 , y x 1 , y 1 t t t y , , x x t 0 0 1 ( ( ® 0 0 1 ® ê ê ê ë é ) ) y y = t , , ú ú ú û ù x x y x t ( t t ( = 1 + + = x y P t é ê ê ê ë ® ' P P’ tx t P x
y ty
Coordinates 2D Translation using Homogeneous Stanford University Linear Algebra Review 27-Sep-2018 56 P ù ú ú ú û y x 1 é ê ê ê ë ) × ù ú ú ú û ) 1 , y x 1 , y 1 t t t y , , x x t 0 0 1 ( ( ® 0 0 1 ® ê ê ê ë é ) ) y y = t , , ú ú ú û ù x x y x t ( t t ( = 1 + + = x y P t é ê ê ê ë ® ' P P’ tx t P x
y ty
Coordinates 2D Translation using Homogeneous Stanford University Linear Algebra Review 27-Sep-2018 57 P ù ú ú ú û y x 1 é ê ê ê ë ) × ù ú ú ú û ) 1 , y x 1 , y 1 t t t y , , x x t 0 0 1 ( ( ® 0 0 1 ® é ê ê ê ë ) ) y y = t , , ù ú ú ú û x x y x t ( t t ( = 1 + + = x y P t ê ê ë é ê ® ' P P’ tx t P x
y ty
Coordinates 2D Translation using Homogeneous Stanford University Linear Algebra Review 27-Sep-2018 58 P ù ú ú ú û t y x 1 P é ê ê ê ë × ) × ù ú ú ú û ) 1 , T y x 1 , y 1 t t t y = , , x x P t 0 0 1 ( × ( ú û ù t 1 ® 0 0 1 ® ê ê ê ë é ) ) y y I 0 = t , é ê ë , ú ú ú û ù x x y x t = ( t t ( = 1 + + = x y P t é ê ê ê ë ® ' P P’ tx t P x
y ty
Coordinates 2D Translation using Homogeneous Stanford University Linear Algebra Review 27-Sep-2018 59 P’ P Scaling Stanford University Linear Algebra Review 27-Sep-2018 60 ) ) y 1 y , s y , y s x , x x s x ) ( s 1 , ( = y ' , ® x P ( ) y ® y ® s ) ) , y y x , x , s x x ( ( ( = = ' = P P P x P’ x s x P Scaling Equation y y Stanford University y s Linear Algebra Review 27-Sep-2018 61 ) ) y 1 y , s y , y s x , x x s x ) ( s 1 , ( = y ' , ® x P ( ) y ® y ® s ) ) , ù ú ú ú û y y x y x 1 , x , é ê ê ê ë s x x × ( ( ( ù ú ú ú û = 0 0 1 = ' = P P P y 0 0 s x x P’ 0 0 x s s ê ê ê ë é = ú û ù ú ú x y x P x y 1 s s
ê ë é ê ê
Scaling Equation ®
y ' Stanford University y
y P s Linear Algebra Review 27-Sep-2018 62 ) ) y P 1 y × , s y S , y s = x , x x P s x × ) ( s 1 ù ú û , ( = 0 1 y ' , ® x P ' ( ) 0 S y é ê ë ® y ® s = ) ) , ù ú ú ú û y y x y x 1 , x , é ê ê ê ë s x x × ( ( ( ù ú ú ú û = 0 0 1 = ' = P P P y 0 0 s S x x P’ 0 0 x s s ê ê ê ë é = ú û ù ú ú x y x P x y 1 s s ê ë ê é ê
Scaling Equation
® y ' y Stanford University
y P s Linear Algebra Review 27-Sep-2018 63 P’’ P’=S∙P P’’=T∙P’
P P’’=T · P’=T ·(S · P)= T · S ·P Scaling & Translating Stanford University Linear Algebra Review 27-Sep-2018 64 % ' ' ' & y x 1 = % ' ' ' & " $ $ $ # & % ' y x 1 t 1 $ $ # " $ ' ' & % ' ' 0 S 0 0 1 " $ # = y 0 0 % ' ' ' ' & s x y t t x 0 0 s + + 1 y x $ $ # " $ $ y x ⋅ ' ' & % ' ' s s y x $ $ # " $ $ 1 t t = % ' ' ' & 0 0 1 y x 1 0 0 1 $ $ $ # " ' ' ' & % ' $ $ $ $ # " y x 1 t t = P ⋅ y 0 0 s A S ⋅ x T 0 0 s
Scaling & Translating = '
' $ $ # " $ $ Stanford University
P = Linear Algebra Review 27-Sep-2018 65 % ' ' ' & y x 1 = % ' ' ' & " $ $ $ # % ' & y x 1 t 1 " $ $ $ # % ' ' ' ' & 0 S 0 0 1 " $ # = y 0 0 % ' ' ' ' & s x y t t x 0 0 s + + 1 y x " $ $ $ $ # y x ⋅ % ' ' ' ' & s s y x " $ $ $ $ # 1 t t = % ' ' ' & 0 0 1 y x 1 0 0 1 $ # " $ $ ' ' & % ' ' $ $ # " $ $ y x 1 t t = P ⋅ y 0 0 s S ⋅ x T 0 0 s
Scaling & Translating =
' ' " $ $ $ $ # Stanford University
P = Linear Algebra Review 27-Sep-2018 66 ù ú ú ú û x y t t + + 1 y x y x s s é ê ê ê ë = ù ú ú ú û y x 1 é ê ê ê ë ù ú ú ú û y x 1 t t y 0 0 s x 0 0 s é ê ê ê ë = ù ú ú ú û y x 1 é ê ê ê ë ù ú ú ú û 0 0 1 y 0 0 s = Scaling & Translating x 0 0 s é ê ê ê ë ù ú ú ú û y x 1 t t 0 0 1 0 0 1 é ê ê ê ë = P × S × T
= '
' ' P
Stanford University Translating & Scaling ! Linear Algebra Review 27-Sep-2018 67 ù ú ú ú û x y t t + + 1 y x y x s s é ê ê ê ë = = ù ú ú ú û ù ú ú ú û y x 1 y 1 x é ê ê ê ë é ê ê ê ë ù ú ú ú û y x ù ú ú ú û 1 t t y x 1 t t y 0 0 ù ú ú ú û s x y 0 0 1 t t x y x 0 0 s s s é ê ê ê ë 0 0 1 1 = + + é ê ê ê ë ù ú ú ú û ù ú ú ú û y x y x 1 y x 0 0 1 é ê ê ê ë ù ú ú ú û s s 0 0 1 é ê ê ê ë y = 0 0 y s 0 0 s ù ú ú ú û = Scaling & Translating y x 1 x x é ê ê ê ë 0 0 0 0 s s ù ú ú ú û é ê ê ê ë é ê ê ê ë ù ú ú ú û y x y x t t 1 t t 1 = y x s s P 0 0 1 × y T 0 0 1 0 0 s é ê ê ê ë × = S P x = × 0 0 s ' ' S é ê ê ê ë ' × T = P
= '
' ' P
Stanford University Translating & Scaling ! Linear Algebra Review 27-Sep-2018 68 ù ú ú ú û x y t t + + 1 y x y x s s é ê ê ê ë = = ù ú ú ú û ù ú ú ú û y x 1 y 1 x é ê ê ê ë é ê ê ê ë ù ú ú ú û y x ù ú ú ú û 1 t t y x 1 t t y 0 0 ù ú ú ú û s x y 0 0 1 t t x y x 0 0 s s s é ê ê ê ë 0 0 1 1 = + + é ê ê ê ë ù ú ú ú û ù ú ú ú û y x y x 1 y x 0 0 1 é ê ê ê ë ù ú ú ú û s s 0 0 1 é ê ê ê ë y = 0 0 y s 0 0 s ù ú ú ú û = Scaling & Translating y x 1 x x é ê ê ê ë 0 0 0 0 s s ù ú ú ú û é ê ê ê ë é ê ê ê ë ù ú ú ú û y x y x t t 1 t t 1 = y x s s P 0 0 1 × y T 0 0 1 0 0 s é ê ê ê ë × = S P x = × 0 0 s ' ' S é ê ê ê ë ' × T = P
= '
' ' P
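The order-dependence above can be verified numerically (illustrative scale factors and translation):

```python
import numpy as np

sx, sy, tx, ty = 2.0, 3.0, 1.0, 1.0

S = np.array([[sx, 0.0, 0.0], [0.0, sy, 0.0], [0.0, 0.0, 1.0]])
T = np.array([[1.0, 0.0, tx], [0.0, 1.0, ty], [0.0, 0.0, 1.0]])

p = np.array([1.0, 1.0, 1.0])

# Scale first, then translate: (sx*x + tx, sy*y + ty, 1)
print(T @ S @ p)   # [3. 4. 1.]
# Translate first, then scale: (sx*(x + tx), sy*(y + ty), 1)
print(S @ T @ p)   # [4. 6. 1.]
```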
Rotation
• P' = R P: counter-clockwise rotation by an angle θ

  [x']   [cos θ  -sin θ] [x]        x' = x cos θ - y sin θ
  [y'] = [sin θ   cos θ] [y],  i.e. y' = x sin θ + y cos θ

Rotation Matrix Properties
• Note: R belongs to the category of normal matrices, and satisfies many interesting properties:
  – R · R^T = R^T · R = I, i.e. R^-1 = R^T
  – det(R) = 1
• The rows of a rotation matrix are always mutually perpendicular (a.k.a. orthogonal) unit vectors (and so are its columns)
• The transpose of a rotation matrix produces a rotation in the opposite direction
Rotation Equation
• In homogeneous coordinates:
  (x, y) → (x, y, 1) → (x cos θ - y sin θ, x sin θ + y cos θ, 1)

  [x']   [cos θ  -sin θ  0] [x]
  [y'] = [sin θ   cos θ  0] [y]
  [1 ]   [0        0     1] [1]
Scaling + Rotation + Translation
• Chaining all three: P' = (T R S) P, with

  T = [1 0 tx]    R = [cos θ  -sin θ  0]    S = [sx 0  0]
      [0 1 ty]        [sin θ   cos θ  0]        [0  sy 0]
      [0 0 1 ]        [0        0     1]        [0  0  1]

• This is the form of the general-purpose transformation matrix: a 2×2 rotate-and-scale block, a translation column, and a bottom row of [0 0 1]
• The inverse of a transformation matrix reverses its effect
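A composed transform and its inverse in numpy (the angle, scales, and translation are illustrative):

```python
import numpy as np

theta = np.pi / 2
sx, sy, tx, ty = 2.0, 2.0, 1.0, 0.0

S = np.diag([sx, sy, 1.0])
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
T = np.array([[1.0, 0.0, tx],
              [0.0, 1.0, ty],
              [0.0, 0.0, 1.0]])

# One general-purpose matrix: scale, then rotate, then translate.
M = T @ R @ S

p = np.array([1.0, 0.0, 1.0])
print(M @ p)   # scale -> (2,0), rotate 90 deg -> (0,2), translate -> (1,2)

# The inverse transformation reverses the effect.
assert np.allclose(np.linalg.inv(M) @ (M @ p), p)
```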
Inverse
• Given a matrix A, its inverse A^-1 is a matrix such that A A^-1 = A^-1 A = I
  – E.g. the inverse of the diagonal matrix diag(2, 3) is diag(1/2, 1/3)
• The inverse does not always exist. If A^-1 exists, A is invertible or non-singular. Otherwise, it's singular.
• Useful identities, for matrices that are invertible:
  – (A^-1)^-1 = A
  – (AB)^-1 = B^-1 A^-1
  – (A^-1)^T = (A^T)^-1
Matrix Operations
• Pseudoinverse
  – Fortunately, there are workarounds to solve AX = B in these situations. And Python can do them!
  – Instead of taking an inverse, directly ask Python to solve for X in AX = B, by typing np.linalg.solve(A, B)
  – Python will try appropriate numerical methods and return the value of X which solves the equation
  – If there is no exact solution, or if there are many solutions, use np.linalg.lstsq (which applies the pseudoinverse): it returns the closest solution in the first case, and the smallest one in the second
Matrix Operations
• Python example:

  >> import numpy as np
  >> x = np.linalg.solve(A, B)
  x = [ 1.0000, -0.5000 ]

Matrix rank
• The rank of a transformation matrix tells you how many dimensions it transforms a vector to.
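A fully self-contained version of the solve workflow (the A and b below are illustrative, not the slide's):

```python
import numpy as np

# Solve Ax = b for an invertible A.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])

x = np.linalg.solve(A, b)
assert np.allclose(A @ x, b)   # x really solves the system

# For singular or non-square A, least squares is the workaround;
# here it agrees with the exact solution.
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(x_ls, x)
print(x)
```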
Linear independence
• Suppose we have a set of vectors v1, …, vn
• If we can express v1 as a linear combination of the other vectors v2 … vn, then v1 is linearly dependent on the other vectors: the direction v1 can be expressed as a combination of the directions v2 … vn
  – E.g. v1 = 0.7 v2 - 0.7 v4
• If no vector is linearly dependent on the rest of the set, the set is linearly independent
• Common case: a set of vectors v1, …, vn is always linearly independent if each vector is perpendicular to every other vector (and non-zero)
• Figures: a linearly independent set vs. a set that is not linearly independent
Matrix rank
• Column/row rank
  – The column rank is the maximum number of linearly independent columns of A; the row rank is the maximum number of linearly independent rows
  – Column rank always equals row rank
• Matrix rank: rank(A) = column rank = row rank
Matrix rank
• For transformation matrices, the rank tells you the dimensions of the output
• E.g. if the rank of A is 1, then the transformation p' = Ap maps points onto a line
• Here's a matrix with rank 1: all points get mapped to the line y = 2x
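A rank-1 example in numpy. The matrix below is a hypothetical choice (the slide's original matrix was lost in extraction), picked so that every output lands on y = 2x:

```python
import numpy as np

# Illustrative rank-1 matrix: its second row is twice its first.
A = np.array([[1.0, 1.0],
              [2.0, 2.0]])

print(np.linalg.matrix_rank(A))   # 1

# Every output p' = A p lies on the line y = 2x.
for p in [np.array([1.0, 0.0]), np.array([-3.0, 5.0])]:
    p_prime = A @ p
    assert np.isclose(p_prime[1], 2 * p_prime[0])
```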
Matrix rank
• If an m × m matrix has rank m, we say it's "full rank"
  – It maps an m × 1 vector uniquely to another m × 1 vector
  – An inverse matrix can be found: it is invertible
• If rank < m, we say the matrix is "singular"
  – At least one dimension is getting collapsed; there is no way to look at the result and tell what the input was
  – Its inverse does not exist
• The inverse also doesn't exist for non-square matrices
Eigenvector and Eigenvalue
• An eigenvector x of a linear transformation A is a non-zero vector that, when A is applied to it, does not change direction.
• Applying A to the eigenvector only scales the eigenvector by the scalar value λ, called an eigenvalue:
  Ax = λx
Eigenvector and Eigenvalue
• We want to find all the eigenvalues of A:
  Ax = λx
• Which can be written as:
  Ax = (λI)x
• Therefore:
  (A - λI)x = 0
• We can solve for the eigenvalues by solving (A - λI)x = 0
• Since we are looking for non-zero x, we can instead solve the above equation as:
  det(A - λI) = 0
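numpy solves det(A - λI) = 0 for us (the diagonal matrix is an illustrative choice, so the answer is easy to read off):

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 3.0]])

# Eigenvalues are the roots of det(A - lambda*I) = 0.
eigvals, eigvecs = np.linalg.eig(A)
print(sorted(eigvals))   # [2.0, 3.0] for this diagonal matrix

# Each column v of eigvecs satisfies A v = lambda v.
for lam, v in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ v, lam * v)
```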
Properties
• The trace of A is equal to the sum of its eigenvalues: tr(A) = Σᵢ λᵢ
• The determinant of A is equal to the product of its eigenvalues
• The rank of A is equal to the number of non-zero eigenvalues of A
• The eigenvalues of a diagonal matrix D = diag(d1, . . . , dn) are just the diagonal entries d1, . . . , dn
Spectral theory
• We call an eigenvalue λ and an associated eigenvector an eigenpair
• The set of all eigenvalues of A is called its spectrum: λ(A)
• The space of vectors where (A - λI)x = 0 is often called the eigenspace of A associated with the eigenvalue λ
Spectral theory
• The magnitude of the largest eigenvalue (in magnitude) is called the spectral radius:
  ρ(A) = max { |λ| : λ ∈ λ(A) }
  – where λ(A) is the set of all eigenvalues of A
Spectral theory
• The spectral radius is bounded by the infinity norm of a matrix:
  ρ(A) ≤ ‖A‖∞
  – Proof: Turn to a partner and prove this!
• Proof: Let λ and v be an eigenpair of A. Then
  |λ| ‖v‖∞ = ‖λv‖∞ = ‖Av‖∞ ≤ ‖A‖∞ ‖v‖∞,
  and dividing both sides by ‖v‖∞ > 0 gives |λ| ≤ ‖A‖∞.
Diagonalization
• An n × n matrix A is diagonalizable if it has n linearly independent eigenvectors.
• Most square matrices (in a sense that can be made mathematically rigorous) are diagonalizable:
  – Matrices with n distinct eigenvalues are diagonalizable
  – Normal matrices are diagonalizable
• Lemma: Eigenvectors associated with distinct eigenvalues are linearly independent.
Diagonalization
• Eigenvalue equation: A V = V D
  – Where D is a diagonal matrix of the eigenvalues, and the columns of V are the eigenvectors
• Assuming all λᵢ's are unique: A = V D V^-1
• Remember that the inverse of an orthogonal matrix is just its transpose, and the eigenvectors are orthogonal: A = V D V^T
Symmetric matrices
• Properties:
  – For a symmetric matrix A, all the eigenvalues are real
  – The eigenvectors of A are orthonormal
Symmetric matrices
• Therefore: A = V D V^T, where V is orthonormal
• So, what can you say about the vector x that satisfies the following optimization?
  maximize x^T A x  subject to  ‖x‖₂² = 1
  – It is the same as finding the eigenvector that corresponds to the largest eigenvalue of A.
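This claim can be spot-checked numerically. The random symmetric matrix below is illustrative; `np.linalg.eigh` is the symmetric-matrix eigensolver and returns eigenvalues in ascending order:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2   # a random symmetric matrix

# For symmetric A, the max of x^T A x over unit vectors is the largest
# eigenvalue, attained at the corresponding eigenvector.
eigvals, eigvecs = np.linalg.eigh(A)
v_max = eigvecs[:, -1]   # eigenvector for the largest eigenvalue

assert np.isclose(v_max @ A @ v_max, eigvals[-1])

# Random unit vectors never beat it (numerical spot check).
for _ in range(100):
    x = rng.standard_normal(4)
    x /= np.linalg.norm(x)
    assert x @ A @ x <= eigvals[-1] + 1e-9
```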
Some applications of Eigenvalues
• PageRank
• Schrodinger's equation
• PCA
  – We are going to use it to compress images in future classes
Matrix Calculus
• The Gradient
  – Let a function f: ℝ^(m×n) → ℝ take as input a matrix A of size m × n and return a real value. Then the gradient of f, ∇_A f(A), is an m × n matrix of partial derivatives.
  – Every entry in the matrix is: (∇_A f(A))ᵢⱼ = ∂f(A) / ∂Aᵢⱼ
  – The size of ∇_A f(A) is always the same as the size of A. So if A is just a vector x:
    ∇ₓ f(x) = [∂f/∂x1, ∂f/∂x2, …, ∂f/∂xn]^T
Exercise
• Example: let f(x) = b^T x = Σᵢ bᵢxᵢ for a known vector b. Find ∇ₓ f(x).
• Each partial derivative is ∂f(x)/∂xₖ = ∂/∂xₖ Σᵢ bᵢxᵢ = bₖ
• From this we can conclude that: ∇ₓ b^T x = b
• The Gradient: properties
  – ∇ₓ(f(x) + g(x)) = ∇ₓ f(x) + ∇ₓ g(x)
  – ∇ₓ(t f(x)) = t ∇ₓ f(x) for a scalar t
Matrix Calculus
• The Hessian
  – For a function f of an n-dimensional vector x, the Hessian matrix with respect to x, written ∇²ₓ f(x) or simply as H, is the n × n matrix of second partial derivatives.
• The Hessian
  – Each entry can be written as: Hᵢⱼ = ∂²f(x) / (∂xᵢ ∂xⱼ)
  – Exercise: Why is the Hessian always symmetric?
derivatives don’t matter as long as the second derivative exists and is continuous. This is known as Schwarz's theorem: The order of partial The Hessian is always symmetric, because Each entry can be written as:
Stanford University
• •
• Note that the Hessian is not the gradient of the whole gradient of a vector (this is not defined). It is actually the gradient of every entry of the gradient of the vector.
  – E.g. the first column of the Hessian is the gradient of ∂f(x)/∂x1.
Exercise
• Example: let f(x) = x^T A x = Σᵢ Σⱼ Aᵢⱼ xᵢ xⱼ, with A symmetric. Find ∇ₓ f(x).
• To compute each partial derivative ∂f(x)/∂xₖ, divide the summation into 3 parts depending on whether:
  – i == k (and j ≠ k),
  – j == k (and i ≠ k), or
  – i == j == k
• Adding the three parts:
  ∂f(x)/∂xₖ = Σⱼ Aₖⱼ xⱼ + Σᵢ Aᵢₖ xᵢ = 2 Σᵢ Aₖᵢ xᵢ (using the symmetry of A)
• From this we can conclude that: ∇ₓ x^T A x = 2Ax
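A derived gradient like this can always be sanity-checked against finite differences (the A and x below are illustrative):

```python
import numpy as np

def f(x, A):
    return x @ A @ x   # f(x) = x^T A x

# Symmetric test matrix and a test point.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
x = np.array([0.5, -1.0])

analytic = 2 * A @ x   # the gradient derived above

# Numerical gradient by central differences.
eps = 1e-6
numeric = np.zeros_like(x)
for k in range(len(x)):
    e = np.zeros_like(x)
    e[k] = eps
    numeric[k] = (f(x + e, A) - f(x - e, A)) / (2 * eps)

assert np.allclose(analytic, numeric, atol=1e-5)
print(analytic)
```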
What we have learned
• Vectors and matrices
  – Basic Matrix Operations
  – Special Matrices
• Transformation Matrices
  – Homogeneous coordinates
  – Translation
• Matrix inverse
• Matrix rank
• Eigenvalues and Eigenvectors
• Matrix Calculus