
Math 54 Cheat Sheet

Vector spaces

Subspace: If u and v are in W, then u + v is in W, and cu is in W.
Nul(A): Solutions of Ax = 0. Row-reduce A.
Row(A): Space spanned by the rows of A. Row-reduce A and choose the rows that contain the pivots.
Col(A): Space spanned by the columns of A. Row-reduce A and choose the columns of A that contain the pivots.
Rank(A) = Dim(Col(A)) = number of pivots.
Rank-Nullity theorem: Rank(A) + dim(Nul(A)) = n, where A is m × n.
Linear transformation: T(u + v) = T(u) + T(v) and T(cu) = cT(u), where c is a number. T is one-to-one if T(u) = 0 ⇒ u = 0. T is onto if Col(T) = R^m.
Linear independence: a1 v1 + a2 v2 + ··· + an vn = 0 ⇒ a1 = a2 = ··· = an = 0. To show lin. ind., form the matrix A of the vectors and show that Nul(A) = {0}.
Linear dependence: a1 v1 + a2 v2 + ··· + an vn = 0 for some a1, a2, ..., an not all zero.
Span: Set of linear combinations of v1, ..., vn.
Basis B for V: A linearly independent set such that Span(B) = V. To show something is a basis, show it is linearly independent and spans. To find a basis from a collection of vectors, form the matrix A of the vectors and find Col(A). To find a basis for a vector space, take any element of that v.s., express it as a linear combination of 'simpler' vectors, and then show those vectors form a basis.
Dimension: Number of elements in a basis. To find dim, find a basis and count its elements.
Theorem: If V has a basis of n vectors, then every basis of V must have n vectors.
Basis theorem: If V is an n-dim v.s., then any lin. ind. set with n elements is a basis, and any set of n elts. which spans V is a basis.
Matrix of a lin. transf. T with respect to bases B and C: For every vector v in B, evaluate T(v) and express T(v) as a linear combination of the vectors in C. Put the coefficients in a column vector, and then form the matrix of the column vectors you found!
Coordinates: To find [x]_B, express x in terms of the vectors in B. x = P_B [x]_B, where P_B is the matrix whose columns are the vectors in B.
Change of basis: [x]_C = P_{C←B} [x]_B (think of C as the new, cool basis). [C | B] → [I | P_{C←B}]; P_{C←B} is the matrix whose columns are [b]_C, where b is in B.

Invertible matrix theorem: If A is invertible, then: A is row-equivalent to I, A has n pivots, T(x) = Ax is one-to-one and onto, Ax = b has a unique solution for every b, A^T is invertible, det(A) ≠ 0, the columns of A form a basis for R^n, Nul(A) = {0}, Rank(A) = n.
Inverse: [a b; c d]^{-1} = (1/(ad − bc)) [d −b; −c a], and in general [A | I] → [I | A^{-1}].

Diagonalization

Diagonalizability: A is diagonalizable if A = P D P^{-1} for some diagonal D and invertible P. A and B are similar if A = P B P^{-1} for some invertible P.
Theorem: A is diagonalizable ⇔ A has n linearly independent eigenvectors.
Theorem: IF A has n distinct eigenvalues, THEN A is diagonalizable, but the opposite is not always true!!!!
Notes: A can be diagonalizable even if it is not invertible (Ex: A = [0 0; 0 0]). Not all matrices are diagonalizable (Ex: [1 1; 0 1]).
Consequence: A = P D P^{-1} ⇒ A^n = P D^n P^{-1}.
How to diagonalize: To find the eigenvalues, calculate det(A − λI) and find the roots. To find the eigenvectors, for each λ find a basis for Nul(A − λI), which you do by row-reducing A − λI. Then A = P D P^{-1}, where D = diagonal matrix of eigenvalues and P = matrix of eigenvectors.
Rational roots theorem: If p(λ) = 0 has a rational root r = a/b, then a divides the constant term of p, and b divides the leading coefficient. Use this to guess zeros of p. Once you have a zero that works, use long division!
Complex eigenvalues: If λ = a + bi and v is an eigenvector, then A = P C P^{-1}, where P = [Re(v) Im(v)] and C = [a b; −b a]. C is a scaling by √(det(A)) followed by a rotation by θ: (1/√(det(A))) C = [cos(θ) sin(θ); −sin(θ) cos(θ)].
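A quick numerical check of the diagonalization recipe above; a minimal sketch (the matrix is an arbitrary example, and NumPy is assumed, not part of the sheet):

    import numpy as np

    # Arbitrary example matrix (eigenvalues 5 and 2).
    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])

    eigvals, P = np.linalg.eig(A)   # columns of P are eigenvectors
    D = np.diag(eigvals)

    # A = P D P^{-1}, and the consequence A^n = P D^n P^{-1}
    assert np.allclose(A, P @ D @ np.linalg.inv(P))
    assert np.allclose(np.linalg.matrix_power(A, 5),
                       P @ np.diag(eigvals**5) @ np.linalg.inv(P))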
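The complex-eigenvalue factorization A = PCP^{-1} can be checked the same way (again a sketch; the matrix is made up, with eigenvalues 2 ± i):

    import numpy as np

    A = np.array([[1.0, -2.0],
                  [1.0,  3.0]])
    lam, V = np.linalg.eig(A)
    l, v = lam[0], V[:, 0]                  # one eigenpair is enough
    a, b = l.real, l.imag
    P = np.column_stack([v.real, v.imag])   # P = [Re(v) Im(v)]
    C = np.array([[a, b],
                  [-b, a]])
    assert np.allclose(A, P @ C @ np.linalg.inv(P))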
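Looking back at the coordinates and change-of-basis entries, here is a sketch with made-up bases for R^2 (NumPy assumed):

    import numpy as np

    PB = np.array([[1.0, 1.0],
                   [0.0, 1.0]])   # columns: the basis B
    PC = np.array([[2.0, 0.0],
                   [1.0, 1.0]])   # columns: the basis C

    x = np.array([3.0, 2.0])
    xB = np.linalg.solve(PB, x)      # [x]_B from x = P_B [x]_B
    P_CB = np.linalg.solve(PC, PB)   # the [C | B] -> [I | P_{C<-B}] reduction
    xC = P_CB @ xB                   # [x]_C = P_{C<-B} [x]_B
    assert np.allclose(PC @ xC, x)   # both coordinate vectors describe x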
Orthogonality

u, v orthogonal if u · v = 0. ||u|| = √(u · u).
{u1, ..., un} is orthogonal if ui · uj = 0 for i ≠ j, and orthonormal if in addition ui · ui = 1.
W⊥: Set of vectors v which are orthogonal to every w in W.
If {u1, ..., un} is an orthogonal basis, then: y = c1 u1 + ··· + cn un ⇒ cj = (y · uj)/(uj · uj).
Orthogonal matrix: Q has orthonormal columns! Consequences: Q^T Q = I, Q Q^T = orthogonal projection on Col(Q), ||Qx|| = ||x||, (Qx) · (Qy) = x · y.
Orthogonal projection: If {u1, ..., uk} is an orthogonal basis for W, then the orthogonal projection of y on W is: ŷ = ((y · u1)/(u1 · u1)) u1 + ··· + ((y · uk)/(uk · uk)) uk. y − ŷ is orthogonal to ŷ, and the shortest distance btw y and W is ||y − ŷ||.
Gram-Schmidt: Start with B = {u1, ..., un}. Let:
v1 = u1
v2 = u2 − ((u2 · v1)/(v1 · v1)) v1
v3 = u3 − ((u3 · v1)/(v1 · v1)) v1 − ((u3 · v2)/(v2 · v2)) v2
Then {v1, ..., vn} is an orthogonal basis for Span(B), and if wi = vi/||vi||, then {w1, ..., wn} is an orthonormal basis for Span(B).
QR-factorization: To find Q, apply G-S to the columns of A. Then R = Q^T A.
Least-squares: To solve Ax = b in the least-squares way, solve A^T A x̂ = A^T b. The least-squares solution makes ||Ax − b|| smallest. Also x̂ = R^{-1} Q^T b, where A = QR.
Inner product spaces: ⟨f, g⟩ = ∫_a^b f(t) g(t) dt. G-S applies with this inner product as well.
Cauchy-Schwarz: |⟨u, v⟩| ≤ ||u|| ||v||. Triangle inequality: ||u + v|| ≤ ||u|| + ||v||.

Symmetric matrices (A = A^T)

A symmetric matrix has n real eigenvalues, is always diagonalizable, and is orthogonally diagonalizable (A = P D P^T with P an orthogonal matrix; this is equivalent to symmetry!).
Theorem: If A is symmetric, then any two eigenvectors from different eigenspaces are orthogonal.
How to orthogonally diagonalize: First diagonalize, then apply G-S on each eigenspace and normalize. Then P = matrix of (orthonormal) eigenvectors, D = diagonal matrix of eigenvalues.
Quadratic forms: To find the matrix, put the xi²-coefficients on the diagonal, and evenly distribute the other terms. For example, if the x1x2-term is 6, then the (1,2)th and (2,1)th entries of A are 3. Then orthogonally diagonalize A = P D P^T and substitute x = Py (so y = P^T x); the quadratic form becomes λ1 y1² + ··· + λn yn², where the λi are the eigenvalues.
Spectral decomposition: A = λ1 u1 u1^T + λ2 u2 u2^T + ··· + λn un un^T.
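A minimal Gram-Schmidt sketch following the recipe from the orthogonality section (the input vectors are arbitrary; NumPy assumed):

    import numpy as np

    def gram_schmidt(U):
        """Orthogonalize the columns of U: v_i = u_i minus its projections."""
        V = []
        for u in U.T:
            v = u.copy()
            for w in V:
                v = v - ((u @ w) / (w @ w)) * w   # subtract proj of u on w
            V.append(v)
        return np.column_stack(V)

    U = np.array([[1.0, 1.0],
                  [1.0, 2.0],
                  [0.0, 1.0]])              # two vectors in R^3
    V = gram_schmidt(U)
    assert abs(V[:, 0] @ V[:, 1]) < 1e-12   # columns are orthogonal
    W = V / np.linalg.norm(V, axis=0)       # normalize -> orthonormal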
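A least-squares sketch (the overdetermined system is made up; np.linalg.qr stands in for hand Gram-Schmidt):

    import numpy as np

    A = np.array([[1.0, 0.0],
                  [1.0, 1.0],
                  [1.0, 2.0]])
    b = np.array([6.0, 0.0, 0.0])

    # Normal equations: A^T A x̂ = A^T b
    x_hat = np.linalg.solve(A.T @ A, A.T @ b)

    # Same answer via QR: x̂ = R^{-1} Q^T b
    Q, R = np.linalg.qr(A)
    assert np.allclose(x_hat, np.linalg.solve(R, Q.T @ b))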
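For the symmetric-matrix entries, np.linalg.eigh returns an orthonormal eigenvector matrix directly; a sketch using the quadratic form x1² + 6 x1 x2 + x2² (off-diagonal entries 3, as in the example above):

    import numpy as np

    A = np.array([[1.0, 3.0],
                  [3.0, 1.0]])      # matrix of x1^2 + 6 x1 x2 + x2^2

    lam, P = np.linalg.eigh(A)      # symmetric: orthonormal eigenvectors
    assert np.allclose(P.T @ P, np.eye(2))           # P is orthogonal
    assert np.allclose(A, P @ np.diag(lam) @ P.T)    # A = P D P^T

    # Spectral decomposition: A = sum of lam_i * u_i u_i^T
    S = sum(l * np.outer(u, u) for l, u in zip(lam, P.T))
    assert np.allclose(A, S)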
Second-order and Higher-order ODEs

Linear independence: f, g, h are linearly independent if a f(t) + b g(t) + c h(t) = 0 ⇒ a = b = c = 0. To show linear dependence, do it directly. To show linear independence, form the Wronskian:
W(t) = det [f(t) g(t); f'(t) g'(t)] (for 2 functions), W(t) = det [f(t) g(t) h(t); f'(t) g'(t) h'(t); f''(t) g''(t) h''(t)] (for 3 functions).
If W(t) ≠ 0 for some t, the functions are linearly independent.
Undetermined coefficients: First find the homogeneous solution y0; then y = y0 + yp. If f(t) = (polynomial of degree m) e^{at} cos(βt) or sin(βt), guess:
yp(t) = t^s (Am t^m + ··· + A1 t + A0) e^{at} cos(βt) + t^s (Bm t^m + ··· + B1 t + B0) e^{at} sin(βt),
where s = the multiplicity of a + iβ as a root of the aux. equation (s = 0 if it is not a root). cos always goes with sin and vice-versa; also, you have to look at a + iβ as one entity.
Variation of parameters: First, make sure the leading coefficient (usually the coeff. of y'') is = 1. Then y = y0 + yp as above. Now suppose yp(t) = v1(t) y1(t) + v2(t) y2(t), where y1 and y2 are your hom. solutions. Then:
[y1 y2; y1' y2'] [v1'; v2'] = [0; f(t)].
Invert the matrix and solve for v1' and v2', integrate to get v1 and v2, and finally use yp(t) = v1(t) y1(t) + v2(t) y2(t).
Useful formulas: [a b; c d]^{-1} = (1/(ad − bc)) [d −b; −c a], 1/(a + bi) = (a − bi)/(a² + b²), ∫ sec(t) dt = ln|sec(t) + tan(t)|, ∫ tan(t) dt = ln|sec(t)|, ∫ tan²(t) dt = tan(t) − t, ∫ ln(t) dt = t ln(t) − t.

Systems of differential equations

To solve x' = Ax: x(t) = A e^{λ1 t} v1 + B e^{λ2 t} v2 + C e^{λ3 t} v3 (the λi are your eigenvalues, the vi are your eigenvectors).
Fundamental matrix: Matrix whose columns are the solutions, without the constants (the columns are solutions and linearly independent).
Complex eigenvalues: If λ = α + iβ and v = a + ib, then: x(t) = A (e^{αt} cos(βt) a − e^{αt} sin(βt) b) + B (e^{αt} sin(βt) a + e^{αt} cos(βt) b).
Notes: You only need to consider one complex eigenvalue; for real eigenvalues, use the formula above.
Generalized eigenvectors: If you only find one eigenvector v (even though there are supposed to be 2), then solve the following equation for u: (A − λI) u = v (one solution is enough). Then: x(t) = A e^{λt} v + B (t e^{λt} v + e^{λt} u).
Undetermined coefficients: First find the hom. solution. Then for xp, proceed just like regular undetermined coefficients, except that the coefficients are vectors: instead of guessing xp(t) = a e^t + b cos(t) with numbers a, b, guess xp(t) = a e^t + b cos(t) where a = (a1, a2) and b are vectors.
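The Wronskian test can be illustrated numerically at a single point (the functions are illustrative; independence only needs W(t) ≠ 0 at one t):

    import numpy as np

    t = 0.0
    # f = e^t, g = e^{2t} are independent: W(0) = 2 - 1 = 1 ≠ 0.
    W = np.linalg.det(np.array([[np.exp(t),   np.exp(2*t)],
                                [np.exp(t), 2*np.exp(2*t)]]))
    assert abs(W) > 1e-12

    # f = e^t, g = 3 e^t are dependent: W(t) = 0 for every t.
    W_dep = np.linalg.det(np.array([[np.exp(t), 3*np.exp(t)],
                                    [np.exp(t), 3*np.exp(t)]]))
    assert abs(W_dep) < 1e-12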
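A sketch checking the eigenvalue recipe for x' = Ax against direct numerical integration (the matrix and initial condition are made up; SciPy is assumed):

    import numpy as np
    from scipy.integrate import solve_ivp

    A = np.array([[ 0.0,  1.0],
                  [-2.0, -3.0]])      # made-up system; eigenvalues -1, -2
    lam, V = np.linalg.eig(A)
    x0 = np.array([1.0, 0.0])
    c = np.linalg.solve(V, x0)        # constants from x(0) = c1 v1 + c2 v2

    t_end = 1.5
    x_eig = (V * np.exp(lam * t_end)) @ c   # c1 e^{λ1 t} v1 + c2 e^{λ2 t} v2
    sol = solve_ivp(lambda t, x: A @ x, (0.0, t_end), x0,
                    rtol=1e-10, atol=1e-12)
    assert np.allclose(x_eig, sol.y[:, -1], atol=1e-6)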
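And a sketch of the generalized-eigenvector case, with a made-up defective matrix (NumPy assumed; lstsq just picks one solution of (A − λI)u = v, which is enough):

    import numpy as np

    lam = 2.0
    A = np.array([[2.0, 1.0],
                  [0.0, 2.0]])        # only one eigenvector for λ = 2
    v = np.array([1.0, 0.0])          # the eigenvector: (A - 2I)v = 0
    u, *_ = np.linalg.lstsq(A - lam*np.eye(2), v, rcond=None)
    assert np.allclose((A - lam*np.eye(2)) @ u, v)

    # Check that x(t) = t e^{λt} v + e^{λt} u solves x' = Ax at a sample t:
    t = 0.7
    x  = t*np.exp(lam*t)*v + np.exp(lam*t)*u
    xp = (1 + lam*t)*np.exp(lam*t)*v + lam*np.exp(lam*t)*u
    assert np.allclose(xp, A @ x)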