Math 54 Cheat Sheet

Vector spaces

Subspace: If u and v are in W, then u + v is in W, and cu is in W.
Nul(A): Solutions of Ax = 0. Row-reduce A.
Row(A): Space spanned by the rows of A: row-reduce A and choose the rows that contain the pivots.
Col(A): Space spanned by the columns of A: row-reduce A and choose the columns of A that contain the pivots.
Rank(A): = dim(Col(A)) = number of pivots.
Rank-Nullity theorem: Rank(A) + dim(Nul(A)) = n, where A is m × n.
Linear transformation: T(u + v) = T(u) + T(v), T(cu) = cT(u), where c is a number. T is one-to-one if T(u) = 0 ⇒ u = 0. T is onto if Col(T) = R^m.
Linear independence: a_1v_1 + a_2v_2 + ··· + a_nv_n = 0 ⇒ a_1 = a_2 = ··· = a_n = 0. To show lin. ind., form the matrix A of the vectors and show that Nul(A) = {0}.
Linear dependence: a_1v_1 + a_2v_2 + ··· + a_nv_n = 0 for some a_1, a_2, ··· , a_n not all zero.
Span: Set of linear combinations of v_1, ··· , v_n.
Basis B for V: A linearly independent set such that Span(B) = V. To show sthg is a basis, show it is linearly independent and spans. To find a basis from a collection of vectors, form the matrix A of the vectors and find Col(A). To find a basis for a vector space, take any element of that v.s. and express it as a linear combination of 'simpler' vectors; then show those vectors form a basis.
Dimension: Number of elements in a basis. To find dim, find a basis and count its elements.
Theorem: If V has a basis of n vectors, then every basis of V must have n vectors.
Basis theorem: If V is an n-dim v.s., then any lin. ind. set with n elements is a basis, and any set of n elts. which spans V is a basis.
Coordinates: To find [x]_B, express x in terms of the vectors in B. x = P_B [x]_B, where P_B is the matrix whose columns are the vectors in B.
Change of basis: [x]_C = P_{C←B} [x]_B (think of C as the new, cool basis). [C | B] → [I | P_{C←B}]; P_{C←B} is the matrix whose columns are [b]_C, where b is in B.
Matrix of a lin. transf. T with respect to bases B and C: For every vector v in B, evaluate T(v), and express T(v) as a linear combination of the vectors in C. Put the coefficients in a column vector, and then form the matrix of the column vectors you found!
Invertible matrix theorem: If A is invertible, then: A is row-equivalent to I, A has n pivots, T(x) = Ax is one-to-one and onto, Ax = b has a unique solution for every b, A^T is invertible, det(A) ≠ 0, the columns of A form a basis for R^n, Nul(A) = {0}, Rank(A) = n.
[a b; c d]^{-1} = (1/(ad − bc)) [d −b; −c a]
[A | I] → [I | A^{-1}]

Diagonalization

Diagonalizability: A is diagonalizable if A = PDP^{-1} for some diagonal D and invertible P. A and B are similar if A = PBP^{-1} for P invertible.
Theorem: A is diagonalizable ⇔ A has n linearly independent eigenvectors.
Theorem: IF A has n distinct eigenvalues, THEN A is diagonalizable, but the opposite is not always true!!!!
Notes: A can be diagonalizable even if it's not invertible (Ex: [0 0; 0 0]). Not all matrices are diagonalizable (Ex: [1 1; 0 1]).
How to diagonalize: To find the eigenvalues, calculate det(A − λI) and find the roots of that. To find the eigenvectors, for each λ find a basis for Nul(A − λI), which you do by row-reducing A − λI. Then A = PDP^{-1}, where D = diagonal matrix of eigenvalues, P = matrix of eigenvectors.
Consequence: A = PDP^{-1} ⇒ A^n = PD^nP^{-1}
Rational roots theorem: If p(λ) = 0 has a rational root r = a/b, then a divides the constant term of p, and b divides the leading coefficient. Use this to guess zeros of p. Once you have a zero that works, use long division!
Complex eigenvalues: If λ = a + bi and v is an eigenvector, then A = PCP^{-1}, where P = [Re(v) Im(v)], C = [a b; −b a]. C is a scaling by √det(A) followed by a rotation by θ: C = √det(A) [cos(θ) sin(θ); −sin(θ) cos(θ)].
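A quick numerical check of the diagonalization recipe above — a minimal numpy sketch (the matrix A is an arbitrary example, not from the course):

```python
import numpy as np

# Minimal sketch: verify A = P D P^{-1} numerically.
# numpy returns the eigenvectors as the columns of P, matching the convention above.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])       # arbitrary example matrix (eigenvalues 5 and 2)

eigvals, P = np.linalg.eig(A)    # eigenvalues and matrix of eigenvectors
D = np.diag(eigvals)             # D = diagonal matrix of eigenvalues

assert np.allclose(A, P @ D @ np.linalg.inv(P))               # A = P D P^{-1}
assert np.allclose(np.linalg.matrix_power(A, 5),
                   P @ np.diag(eigvals**5) @ np.linalg.inv(P))  # A^5 = P D^5 P^{-1}
```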
Orthogonality

u, v orthogonal if u · v = 0. ||u|| = √(u · u).
{u_1 ··· u_n} is orthogonal if u_i · u_j = 0 for i ≠ j, orthonormal if in addition u_i · u_i = 1.
W^⊥: Set of v which are orthogonal to every w in W.
If {u_1 ··· u_n} is an orthogonal basis, then: y = c_1u_1 + ··· + c_nu_n ⇒ c_j = (y · u_j)/(u_j · u_j).
Orthogonal matrix Q has orthonormal columns! Consequences: Q^T Q = I, QQ^T = orthogonal projection on Col(Q), ||Qx|| = ||x||, (Qx) · (Qy) = x · y.
Orthogonal projection: If {u_1 ··· u_k} is a basis for W, then the orthogonal projection of y on W is: ŷ = ((y · u_1)/(u_1 · u_1)) u_1 + ··· + ((y · u_k)/(u_k · u_k)) u_k. y − ŷ is orthogonal to ŷ, and the shortest distance btw y and W is ||y − ŷ||.
Gram-Schmidt: Start with B = {u_1, ··· , u_n}. Let:
v_1 = u_1
v_2 = u_2 − ((u_2 · v_1)/(v_1 · v_1)) v_1
v_3 = u_3 − ((u_3 · v_1)/(v_1 · v_1)) v_1 − ((u_3 · v_2)/(v_2 · v_2)) v_2
Then {v_1 ··· v_n} is an orthogonal basis for Span(B), and if w_i = v_i/||v_i||, then {w_1 ··· w_n} is an orthonormal basis for Span(B).
QR-factorization: To find Q, apply G-S to the columns of A. Then R = Q^T A.
Least-squares: To solve Ax = b in the least-squares way, solve A^T A x̂ = A^T b. The least-squares solution makes ||Ax − b|| smallest. Also x̂ = R^{-1} Q^T b, where A = QR.
Inner product spaces: f · g = ∫_a^b f(t)g(t) dt. G-S applies with this inner product as well.
Cauchy-Schwarz: |u · v| ≤ ||u|| ||v||
Triangle inequality: ||u + v|| ≤ ||u|| + ||v||

Symmetric matrices (A = A^T)

A has n real eigenvalues, is always diagonalizable, and is orthogonally diagonalizable (A = PDP^T, P an orthogonal matrix; this is equivalent to symmetric!).
Theorem: If A is symmetric, then any two eigenvectors from different eigenspaces are orthogonal.
How to orthogonally diagonalize: First diagonalize, then apply G-S on each eigenspace and normalize. Then P = matrix of (orthonormal) eigenvectors, D = diagonal matrix of eigenvalues.
Spectral decomposition: A = λ_1u_1u_1^T + λ_2u_2u_2^T + ··· + λ_nu_nu_n^T
Quadratic forms: To find the matrix, put the x_i²-coefficients on the diagonal, and evenly distribute the other terms. For example, if the x_1x_2-term is 6, then the (1,2)th and (2,1)th entries of A are 3. Then orthogonally diagonalize A = PDP^T. Then let x = Py (i.e. y = P^T x); the quadratic form becomes λ_1y_1² + ··· + λ_ny_n², where λ_i are the eigenvalues.
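To see that the two least-squares routes above agree, a minimal numpy sketch (A and b are made-up example data):

```python
import numpy as np

# Minimal sketch: least squares two ways, normal equations vs. QR.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])       # made-up example: 3 equations, 2 unknowns
b = np.array([6.0, 0.0, 0.0])

x_normal = np.linalg.solve(A.T @ A, A.T @ b)   # solve A^T A x = A^T b

Q, R = np.linalg.qr(A)                         # reduced QR factorization A = QR
x_qr = np.linalg.solve(R, Q.T @ b)             # x = R^{-1} Q^T b

assert np.allclose(x_normal, x_qr)             # both give the same x̂
print(x_normal)                                # x̂ minimizes ||Ax - b||
```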
Second-order and higher-order differential equations

Wronskian: f, g, h are linearly independent if af(t) + bg(t) + ch(t) = 0 ⇒ a = b = c = 0. To show linear dependence, do it directly. To show linear independence, form the Wronskian:
W̃(t) = [f(t) g(t); f′(t) g′(t)] (for 2 functions)
W̃(t) = [f(t) g(t) h(t); f′(t) g′(t) h′(t); f″(t) g″(t) h″(t)] (for 3 functions)
Then pick a point t_0 where det(W̃(t_0)) is easy to evaluate. If det ≠ 0, then f, g, h are linearly independent! Try to look for simplifications before you differentiate.
Fundamental solution set: f, g, h form one if they are solutions and linearly independent.
Largest interval of existence: First make sure the leading coefficient equals 1. Then look at the domain of each term. For each domain, consider the part of the interval which contains the initial condition. Finally, intersect the intervals and change any brackets to parentheses.
Harmonic oscillator: my″ + by′ + ky = 0 (m = inertia, b = damping, k = stiffness)
Homogeneous solutions: Auxiliary equation: Replace the equation by a polynomial, so y‴ becomes r³ etc. Then find the zeros (use the rational roots theorem and long division, see the 'Diagonalization' section). 'Simple zeros' give you e^{rt}, repeated zeros (multiplicity m) give you Ae^{rt} + Bte^{rt} + ··· + Zt^{m−1}e^{rt}, complex zeros r = a ± bi give you Ae^{at}cos(bt) + Be^{at}sin(bt).
Undetermined coefficients: y(t) = y_0(t) + y_p(t), where y_0 solves the hom. eqn. (equation = 0) and y_p is a particular solution. To find y_p:
If the inhom. term is Ct^m e^{rt}, then: y_p = t^s(A_mt^m + ··· + A_1t + A_0)e^{rt}, where if r is a root of the aux. eqn. with multiplicity m, then s = m, and if r is not a root, then s = 0.
If the inhom. term is Ct^m e^{at}sin(βt), then: y_p = t^s(A_mt^m + ··· + A_1t + A_0)e^{at}cos(βt) + t^s(B_mt^m + ··· + B_1t + B_0)e^{at}sin(βt), where s = m if a + βi is also a root of the aux. eqn. with multiplicity m (s = 0 if not). cos always goes with sin and vice-versa; also, you have to look at a + βi as one entity.
Variation of parameters: First, make sure the leading coefficient (usually the coeff. of y″) is = 1. Then y = y_0 + y_p as above. Now suppose y_p(t) = v_1(t)y_1(t) + v_2(t)y_2(t), where y_1 and y_2 are your hom. solutions. Then:
[y_1 y_2; y_1′ y_2′][v_1′; v_2′] = [0; f(t)]
Invert the matrix and solve for v_1′ and v_2′, integrate to get v_1 and v_2, and finally use: y_p(t) = v_1(t)y_1(t) + v_2(t)y_2(t).
Useful formulas: [a b; c d]^{-1} = (1/(ad − bc)) [d −b; −c a], 1/(a + bi) = (a − bi)/(a² + b²), ∫sec(t) = ln|sec(t) + tan(t)|, ∫tan(t) = ln|sec(t)|, ∫tan²(t) = tan(t) − t, ∫ln(t) = t ln(t) − t

Systems of differential equations

To solve x′ = Ax: x(t) = Ae^{λ_1t}v_1 + Be^{λ_2t}v_2 + Ce^{λ_3t}v_3 (λ_i are your eigenvalues, v_i are your eigenvectors)
Fundamental matrix: Matrix whose columns are the solutions, without the constants (the columns are solutions and linearly independent).
Complex eigenvalues: If λ = α + iβ and v = a + ib, then: x(t) = A(e^{αt}cos(βt)a − e^{αt}sin(βt)b) + B(e^{αt}sin(βt)a + e^{αt}cos(βt)b). Notes: You only need to consider one complex eigenvalue. For real eigenvalues, use the formula above.
Generalized eigenvectors: If you only find one eigenvector v (even though there are supposed to be 2), then solve the following equation for u: (A − λI)u = v (one solution is enough). Then: x(t) = Ae^{λt}v + B(te^{λt}v + e^{λt}u).
Undetermined coefficients: First find the hom. solution. Then for x_p, just like regular undetermined coefficients, except that instead of guessing x_p(t) = ae^t + b cos(t) with numbers, you guess x_p(t) = ae^t + b cos(t) where a = (a_1, a_2) and b are vectors. Then plug into x′ = Ax + f and solve for a etc.
Variation of parameters: First hom. solution x_h(t) = Ax_1(t) + Bx_2(t). Then suppose x_p(t) = v_1(t)x_1(t) + v_2(t)x_2(t), then solve W̃(t)[v_1′; v_2′] = f, where W̃(t) = [x_1(t) | x_2(t)]. Multiply both sides by W̃(t)^{-1}, integrate and solve for v_1(t), v_2(t), and plug back into x_p. Finally, x = x_h + x_p.
Matrix exponential: e^{At} = Σ_{n=0}^∞ A^n t^n/n!. To calculate e^{At}, either diagonalize: A = PDP^{-1} ⇒ e^{At} = Pe^{Dt}P^{-1}, where e^{Dt} is diagonal with entries e^{λ_it}. Or, if A only has one eigenvalue λ with multiplicity m, use e^{At} = e^{λt} Σ_{n=0}^{m−1} (A − λI)^n t^n/n!. The solution of x′ = Ax is then x(t) = e^{At}c, where c is a constant vector. See the sketch after this section.

Coupled mass-spring system

Case N = 2: Equation: x″ = Ax, A = [−2 1; 1 −2]
Proper frequencies: Eigenvalues of A are λ = −1, −3, so the proper frequencies are ±i, ±√3 i (± square roots of the eigenvalues).
Proper modes: v_1 = [sin(π/3); sin(2π/3)] = [√3/2; √3/2], v_2 = [sin(2π/3); sin(4π/3)] = [√3/2; −√3/2]
Case N = 3: Equation: x″ = Ax, A = [−2 1 0; 1 −2 1; 0 1 −2]
Proper frequencies: Eigenvalues of A are λ = −2, −2 − √2, −2 + √2, so the proper frequencies are ±√2 i, ±√(2 + √2) i, ±√(2 − √2) i.
Proper modes: v_1 = [sin(π/4); sin(2π/4); sin(3π/4)] = [√2/2; 1; √2/2], v_2 = [sin(2π/4); sin(4π/4); sin(6π/4)] = [1; 0; −1], v_3 = [sin(3π/4); sin(6π/4); sin(9π/4)] = [√2/2; −1; √2/2]
General case (just in case!): Equation: x″ = Ax, A = the N × N tridiagonal matrix with −2 on the diagonal and 1 just above and below it.
Proper frequencies: ±2i sin(kπ/(2(N + 1))), k = 1, 2, ··· , N
Proper modes: v_k = [sin(kπ/(N+1)); sin(2kπ/(N+1)); ··· ; sin(Nkπ/(N+1))]
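A minimal sketch of x(t) = e^{At}c, assuming scipy is available; it cross-checks expm against the eigenvalue formula above (A is just the N = 2 spring matrix reused as an arbitrary first-order example):

```python
import numpy as np
from scipy.linalg import expm

# Minimal sketch: solve x' = Ax, x(0) = c, via x(t) = e^{At} c,
# and cross-check against x(t) = a_1 e^{λ1 t} v1 + a_2 e^{λ2 t} v2.
A = np.array([[-2.0, 1.0],
              [1.0, -2.0]])      # arbitrary example matrix
c = np.array([1.0, 0.0])         # initial condition x(0)
t = 0.7

x_expm = expm(A * t) @ c         # matrix-exponential route

# Eigenvalue route: write c = P a, then x(t) = P diag(e^{λi t}) a.
eigvals, P = np.linalg.eig(A)
a = np.linalg.solve(P, c)
x_eig = P @ (np.exp(eigvals * t) * a)

assert np.allclose(x_expm, x_eig)
```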
Partial differential equations

Heat/Wave equations:
Step 1: Suppose u(x, t) = X(x)T(t), plug this into the PDE, and separate X-terms and T-terms. Then we get X″(x)/X(x) = λ, so X″ = λX. Then find a differential equation for T. Note: If you have an α-term, put it with T.
Step 2: Deal with X″ = λX. Use the boundary conditions to find X(0) etc. (if you have ∂u/∂x, you might have X′(0) instead of X(0)).
Step 3: Case 1: λ = ω², then X(x) = Ae^{ωx} + Be^{−ωx}; then find a contradiction. Case 2: λ = 0, then X(x) = Ax + B; then either find X(x) = 0 (contradiction), or find X(x) = A. Case 3: λ = −ω², then X(x) = A cos(ωx) + B sin(ωx). Then solve for ω, usually ω = πm/L. Also, if case 2 works, you should find cos; if case 2 doesn't work, you should find sin. Finally, λ = −ω², and X(x) = whatever you found in case 3 w/o the constant.
Step 4: Solve for T(t) with the λ you found. Remember that for the heat equation: T′ = λT ⇒ T(t) = A_me^{λt}. And for the wave equation: T″ = λT ⇒ T(t) = A_m cos(ωt) + B_m sin(ωt).
Step 5: Then u(x, t) = Σ_{m=0}^∞ T(t)X(x) (if case 2 works), u(x, t) = Σ_{m=1}^∞ T(t)X(x) (if case 2 doesn't work!).
Step 6: Use u(x, 0) and plug in t = 0. Then use Fourier cosine or sine series, or just 'compare', i.e. if u(x, 0) = 4 sin(2πx) + 3 sin(3πx), then A_2 = 4, A_3 = 3, and A_m = 0 if m ≠ 2, 3.
Step 7 (only for wave equation): Use ∂u/∂t(x, 0): Differentiate Step 5 with respect to t and set t = 0. Then use Fourier cosine or sine series, or 'compare'.

Full Fourier series: f defined on (−T, T):
f(x) ∼ Σ_{m=0}^∞ a_m cos(πmx/T) + b_m sin(πmx/T), where:
a_0 = (1/(2T)) ∫_{−T}^{T} f(x) dx
a_m = (1/T) ∫_{−T}^{T} f(x) cos(πmx/T) dx
b_0 = 0
b_m = (1/T) ∫_{−T}^{T} f(x) sin(πmx/T) dx
Cosine series: f defined on (0, T):
f(x) ∼ Σ_{m=0}^∞ a_m cos(πmx/T), where:
a_0 = (2/(2T)) ∫_0^T f(x) dx (not a typo)
a_m = (2/T) ∫_0^T f(x) cos(πmx/T) dx
Sine series: f defined on (0, T):
f(x) ∼ Σ_{m=1}^∞ b_m sin(πmx/T), where:
b_0 = 0
b_m = (2/T) ∫_0^T f(x) sin(πmx/T) dx
Orthogonality formulas:
∫_{−T}^{T} cos(πmx/T) sin(πnx/T) dx = 0
∫_{−T}^{T} cos(πmx/T) cos(πnx/T) dx = 0 if m ≠ n
∫_{−T}^{T} sin(πmx/T) sin(πnx/T) dx = 0 if m ≠ n
Convergence: The Fourier series F goes to f(x) if f is continuous at x, and if f has a jump at x, F goes to the average of the jump. Finally, at the endpoints, F goes to the average of the left/right endpoint values.
Tabular integration: (IBP: ∫f′g = fg − ∫fg′) To integrate ∫f(t)g(t) dt where f is a polynomial, make a table whose first row is f(t) and g(t). Then differentiate f as many times as needed until you get 0, and antidifferentiate g as many times until it aligns with the 0 for f. Then multiply the diagonal terms and do + first term − second term etc.
Nonhomogeneous heat equation:
Equation: ∂u/∂t = β ∂²u/∂x² + P(x), u(0, t) = U_1, u(L, t) = U_2, u(x, 0) = f(x)
Then u(x, t) = v(x) + w(x, t), where:
v(x) = [U_2 − U_1 + ∫_0^L ∫_0^z (1/β)P(s) ds dz] (x/L) + U_1 − ∫_0^x ∫_0^z (1/β)P(s) ds dz
and w(x, t) solves the hom. eqn.:
∂w/∂t = β ∂²w/∂x², w(0, t) = 0, w(L, t) = 0, w(x, 0) = f(x) − v(x)
The integral just means 'antidifferentiate and plug in'.
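A minimal numerical sketch of Steps 3–6 for the heat equation with u(0, t) = u(L, t) = 0: sine coefficients b_m computed with scipy's quad, then summed (f, L, β are arbitrary example choices):

```python
import numpy as np
from scipy.integrate import quad

# Minimal sketch: heat equation u_t = β u_xx on (0, L), u(0,t) = u(L,t) = 0,
# u(x,0) = f(x). Sine coefficients b_m = (2/L) ∫_0^L f(x) sin(πmx/L) dx,
# then u(x,t) = Σ b_m e^{−β(πm/L)² t} sin(πmx/L).
L, beta = 1.0, 1.0                 # arbitrary example choices
f = lambda x: x * (L - x)          # example initial temperature profile

def b(m):
    integrand = lambda x: f(x) * np.sin(np.pi * m * x / L)
    return (2.0 / L) * quad(integrand, 0.0, L)[0]

def u(x, t, terms=50):
    return sum(b(m) * np.exp(-beta * (np.pi * m / L) ** 2 * t)
               * np.sin(np.pi * m * x / L) for m in range(1, terms + 1))

print(u(0.5, 0.0))   # ≈ f(0.5) = 0.25: the series reproduces f at t = 0
```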
D'Alembert's formula: ONLY works for the wave equation and −∞ < x < ∞:
u(x, t) = (1/2)(f(x + αt) + f(x − αt)) + (1/(2α)) ∫_{x−αt}^{x+αt} g(s) ds,
where u_tt = α²u_xx, u(x, 0) = f(x), ∂u/∂t(x, 0) = g(x).

Laplace equation:
Same as for Heat/Wave, but T(t) becomes Y(y), and we get Y″(y) = −λY(y). Also, instead of writing Y(y) = A_me^{ωy} + B_me^{−ωy}, write Y(y) = A_m cosh(ωy) + B_m sinh(ωy). Remember cosh(0) = 1, sinh(0) = 0.
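A minimal sketch evaluating d'Alembert's formula numerically (f, g, α are arbitrary example choices):

```python
import numpy as np
from scipy.integrate import quad

# Minimal sketch: d'Alembert's solution of u_tt = α² u_xx on the whole line,
# u(x,0) = f(x), u_t(x,0) = g(x):
# u(x,t) = (f(x+αt) + f(x−αt))/2 + (1/(2α)) ∫_{x−αt}^{x+αt} g(s) ds
alpha = 2.0                       # wave speed (arbitrary example)
f = lambda x: np.exp(-x**2)       # initial displacement
g = lambda x: 0.0 * x             # initial velocity

def u(x, t):
    integral = quad(g, x - alpha * t, x + alpha * t)[0]
    return 0.5 * (f(x + alpha * t) + f(x - alpha * t)) + integral / (2 * alpha)

print(u(0.0, 1.0))   # the initial bump has split into two halves moving at ±α
```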