
Math Tripos Part IA: Differential Equations

Michael Li

December 17, 2014

Basic calculus Informal treatment of differentiation as a limit, the chain rule, Leibniz's rule, Taylor series, informal treatment of O and o notation and l'Hôpital's rule; integration as an area, fundamental theorem of calculus, integration by substitution and parts. [3] Informal treatment of partial derivatives, geometrical interpretation, statement (only) of symmetry of mixed partial derivatives, chain rule, implicit differentiation. Informal treatment of differentials, including exact differentials. Differentiation of an integral with respect to a parameter. [2]

First-order linear differential equations Equations with constant coefficients: exponential growth, comparison with discrete equations, series solution; modelling examples including radioactive decay. Equations with non-constant coefficients: solution by integrating factor. [2]

Nonlinear first-order equations Separable equations. Exact equations. Sketching solution trajectories. Equilibrium solutions, stability by perturbation; examples, including logistic equation and chemical kinetics. Discrete equations: equilibrium solutions, stability; examples including the logistic map. [4]

Higher-order linear differential equations Complementary function and particular integral, linear independence, Wronskian (for second-order equations), Abel's theorem. Equations with constant coefficients and examples including radioactive sequences, comparison in simple cases with difference equations, reduction of order, resonance, transients, damping. Homogeneous equations. Response to step and impulse function inputs; introduction to the notions of the Heaviside step-function and the Dirac delta-function. Series solutions including statement only of the need for the logarithmic solution. [8]

Multivariate functions: applications Directional derivatives and the gradient vector. Statement of Taylor series for functions on R^n. Local extrema of real functions, classification using the Hessian matrix. Coupled first-order systems: equivalence to single higher-order equations; solution by matrix methods. Non-degenerate phase portraits local to equilibrium points; stability. Simple examples of first- and second-order partial differential equations, solution of the wave equation in the form f(x + ct) + g(x − ct). [5]

Contents

1 Differentiation
  1.1 Differentiation
  1.2 Small o and big O notations
  1.3 Methods of Differentiation
  1.4 L'Hôpital's Rule

2 Integration
  2.1 Integration
  2.2 Methods of integration
  2.3 Integration by Parts

3 Partial Differentiation
  3.1 Partial Differentiation
  3.2 Chain rule
  3.3 Implicit Differentiation
  3.4 Differentiation of Integral w.r.t. parameter in Integrand

4 First-order differential equations
  4.1 The Exponential Function
  4.2 First-order Linear Homogeneous Ordinary Differential Equations
    4.2.1 Discrete equation
  4.3 Forced (inhomogeneous) equations
  4.4 Non-constant coefficients
  4.5 Nonlinear first-order equations
    4.5.1 Separable Equations
    4.5.2 Exact Equations
  4.6 Solution curves (Trajectories)
    4.6.1 Stability of Fixed Points
    4.6.2 Perturbation Analysis
    4.6.3 Autonomous systems
  4.7 Logistic Equation
    4.7.1 Fighting for limited resources
  4.8 Discrete equations (Difference Equations)
    4.8.1 Behaviour of the Logistic map

5 Second-order Differential Equations
  5.1 Constant Coefficients
    5.1.1 Complementary function
    5.1.2 Detuning
    5.1.3 Method for Second Complementary Function
    5.1.4 Phase Space
  5.2 Particular Integrals
    5.2.1 Guessing
    5.2.2 Detuning
    5.2.3 Variation of parameters
  5.3 Homogeneous equations, a.k.a. Linear equidimensional equations
    5.3.1 Solving
  5.4 Difference Equations
    5.4.1 Guesswork
  5.5 Transients and damping
    5.5.1 Free (natural) response f = 0
    5.5.2 Underdamped
    5.5.3 Critically damped
    5.5.4 Overdamped
    5.5.5 Forcing
  5.6 Impulses and Point Forces
  5.7 Heaviside Step Function
    5.7.1 Solving differential equations with discontinuous functions

6 Series Solution
  6.1 Behaviour near x = x0

7 Directional Derivatives
  7.0.1 The gradient vector of f(x, y)
  7.1 Stationary Points
  7.2 Taylor series for Multivariable Functions
  7.3 Classification of stationary points
    7.3.1 Determination of definiteness
    7.3.2 Contours of f(x, y)

8 Systems of Linear differential equations
  8.1 Phase-space trajectories
  8.2 Nonlinear dynamical systems
    8.2.1 Equilibrium (fixed) points
    8.2.2 Stability

9 Partial differential equations (PDEs)
  9.1 First-order wave equation
  9.2 Second-order wave equation

10 The Diffusion Equation

1 Differentiation

1.1 Differentiation

Definition. The derivative of a function f(x) with respect to an independent variable x is the rate of change of f(x) with x, or symbolically:

\[ \frac{df}{dx} = \lim_{h\to 0}\frac{f(x+h) - f(x)}{h} \]
f(x) is differentiable at a point x = a iff the limit exists. (For example, |x| is not differentiable at x = 0.)

Note. $\frac{df}{dx}$, $f'(x)$ and $\frac{d}{dx}f(x)$ all mean the same thing. Higher derivatives follow the same rule: $\frac{d^2f}{dx^2} = f''(x) = \frac{d}{dx}\left(\frac{d}{dx}f(x)\right)$. However, $f'$ always denotes the derivative with respect to the argument.

1.2 Small o and big O notations

Definition.

(i) $f(x) = o(g(x))$ as $x \to x_0$ if $\lim_{x\to x_0}\frac{f(x)}{g(x)} = 0$, i.e. f(x) is much smaller than g(x).

(ii) $f(x) = O(g(x))$ as $x \to x_0$ if $\frac{f(x)}{g(x)}$ is bounded as $x \to x_0$, i.e. f(x) is about the size of g(x).

Note. Bounded does not mean the limit exists. Also, $f(x) = o(g(x)) \Rightarrow f(x) = O(g(x))$.

Corollary. $f(x_0 + h) = f(x_0) + f'(x_0)h + o(h)$.

Proof. By the definition of the derivative and the small o notation,
\[ f'(x_0) = \frac{f(x_0+h) - f(x_0)}{h} + \frac{o(h)}{h}, \]
and the result follows.
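As a quick numerical sanity check (a sketch, not part of the original notes; the test function $x^3$ is an arbitrary illustrative choice), the forward difference should approach the derivative with an error that shrinks like $h$, matching $f(x_0+h) = f(x_0) + f'(x_0)h + o(h)$:

```python
# Forward-difference approximation to df/dx from the limit definition.
def forward_difference(f, x, h):
    return (f(x + h) - f(x)) / h

f = lambda x: x**3
exact = 12.0  # f'(2) = 3 * 2**2

for h in (1e-2, 1e-4, 1e-6):
    approx = forward_difference(f, 2.0, h)
    # The error is roughly f''(2) * h / 2 = 6h, i.e. O(h).
    print(h, abs(approx - exact))
```

Shrinking h by a factor of 100 shrinks the error by roughly the same factor, as the o(h) remainder predicts.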

1.3 Methods of Differentiation

Theorem (Chain rule). Given f(x) = F(g(x)), then
\[ \frac{df}{dx} = \frac{dF}{dg}\frac{dg}{dx}. \]

Proof. Assuming that $\frac{dg}{dx}$ exists and is therefore finite, we have
\begin{align*}
\frac{df}{dx} &= \lim_{h\to 0}\frac{F(g(x+h)) - F(g(x))}{h} \\
&= \lim_{h\to 0}\frac{F(g(x)) + (hg'(x) + o(h))F'(g(x)) + o(hg'(x) + o(h)) - F(g(x))}{h} \\
&= g'(x)F'(g(x)) = \frac{dF}{dg}\frac{dg}{dx}.
\end{align*}

Theorem (Leibniz's Rule). Given f = uv, then
\[ f^{(n)}(x) = \sum_{r=0}^{n}\binom{n}{r}u^{(r)}v^{(n-r)}, \]
where $f^{(n)}$ is the n-th derivative of f.

1.4 L'Hôpital's Rule

Theorem. If f(x) and g(x) are differentiable at $x_0$, and $\lim_{x\to x_0}f(x) = \lim_{x\to x_0}g(x) = 0$, then
\[ \lim_{x\to x_0}\frac{f(x)}{g(x)} = \lim_{x\to x_0}\frac{f'(x)}{g'(x)}. \]

Proof. From Taylor's theorem, we have $f(x) = f(x_0) + (x - x_0)f'(x_0) + o(x - x_0)$, and similarly for g(x). Thus
\begin{align*}
\lim_{x\to x_0}\frac{f(x)}{g(x)} &= \lim_{x\to x_0}\frac{f(x_0) + (x-x_0)f'(x_0) + o(x-x_0)}{g(x_0) + (x-x_0)g'(x_0) + o(x-x_0)} \\
&= \lim_{x\to x_0}\frac{f'(x_0) + \frac{o(x-x_0)}{x-x_0}}{g'(x_0) + \frac{o(x-x_0)}{x-x_0}} \\
&= \lim_{x\to x_0}\frac{f'(x)}{g'(x)}.
\end{align*}

2 Integration

2.1 Integration

Definition. An integral is a limit of an (infinite) sum, or the "area under the graph":
\[ \int_a^b f(x)\,dx = \lim_{\Delta x\to 0}\sum_{n=0}^{N} f(x_n)\,\Delta x. \]
If f is differentiable, the total area under the graph from a to b is
\[ \lim_{N\to\infty}\sum_{n=0}^{N-1}\big(f(x_n)\Delta x\big) + N\cdot O(\Delta x^2) = \lim_{N\to\infty}\sum_{n=0}^{N-1}\big(f(x_n)\Delta x\big) + O(\Delta x) = \int_a^b f(x)\,dx, \]
due to the missing "triangle" of area $O(\Delta x^2)$ on each strip in the sum.

Theorem (Fundamental Theorem of Calculus). Let $F(x) = \int_a^x f(t)\,dt$. Then $F'(x) = f(x)$.

Proof.
\begin{align*}
\frac{d}{dx}F(x) &= \lim_{h\to 0}\frac{1}{h}\int_x^{x+h} f(t)\,dt \\
&= \lim_{h\to 0}\frac{1}{h}\big[f(x)h + O(h^2)\big] \\
&= f(x).
\end{align*}

Similarly, we have $\frac{d}{dx}\int_x^b f(t)\,dt = -f(x)$ and $\frac{d}{dx}\int_a^{g(x)} f(t)\,dt = f(g(x))g'(x)$.

Note. We write $\int f(x)\,dx = \int^x f(t)\,dt$; the unspecified lower limit gives the constant of integration $+C$.
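The "area under the graph" definition, and the $O(\Delta x)$ error of the finite sum, can be seen directly in a small sketch (not from the notes; $x^2$ on $[0, 1]$ is an arbitrary test integrand with exact value $\tfrac{1}{3}$):

```python
# Left-hand Riemann sum: the limit definition of the integral, truncated
# at a finite number of strips of width Delta x = (b - a)/n.
def riemann_sum(f, a, b, n):
    dx = (b - a) / n
    return sum(f(a + k * dx) for k in range(n)) * dx

approx = riemann_sum(lambda x: x**2, 0.0, 1.0, 100_000)
# The missing "triangles" give an O(Delta x) total error, here ~5e-6.
print(abs(approx - 1/3))
```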

2.2 Methods of integration

Useful hints for trigonometric substitution:

Useful identity                 Part of integrand    Substitution
cos²θ + sin²θ = 1               √(1 − x²)            x = sin θ
1 + tan²θ = sec²θ               1 + x²               x = tan θ
cosh²u − sinh²u = 1             √(x² − 1)            x = cosh u
cosh²u − sinh²u = 1             √(1 + x²)            x = sinh u
1 − tanh²u = sech²u             1 − x²               x = tanh u

2.3 Integration by Parts

Integrating the product rule $(uv)' = u'v + uv'$, we have:
\[ \int uv'\,dx = uv - \int vu'\,dx. \]

Example 1 (THE example for Integration by Parts). Consider $\int \log x\,dx$. Let $u = \log x$ and $v' = 1$. Then $u' = \frac{1}{x}$ and $v = x$. So we have
\begin{align*}
\int \log x\,dx &= x\log x - \int dx \\
&= x\log x - x + C.
\end{align*}

3 Partial Differentiation

3.1 Partial Differentiation

Definition (Partial derivative). Given a function of several variables f(x, y), the partial derivative of f with respect to x is the rate of change of f as x varies, keeping y constant:

\[ \left(\frac{\partial f}{\partial x}\right)_y = \lim_{\delta x\to 0}\frac{f(x+\delta x, y) - f(x, y)}{\delta x} \]

Example 2. Consider $f(x, y) = x^2 + y^3 + e^{xy^2}$. Just treat all variables to be held constant as constants, e.g.
\[ \left(\frac{\partial f}{\partial x}\right)_y = 2x + y^2 e^{xy^2}. \]

Note. If the variables to be kept constant are not specified, e.g. simply $\frac{\partial f}{\partial x}$, it is assumed that ALL other variables are held constant. We also write $f_x = \frac{\partial f}{\partial x}$ and $f_{xy} = \frac{\partial^2 f}{\partial y\,\partial x}$. Also, $f_{xy} = f_{yx}$.

3.2 Chain rule

Consider an arbitrary displacement in any direction $(x, y) \to (x + \delta x, y + \delta y)$. We have
\begin{align*}
\delta f &= f(x+\delta x, y+\delta y) - f(x, y) \\
&= f(x+\delta x, y+\delta y) - f(x+\delta x, y) + f(x+\delta x, y) - f(x, y) \\
&= f_y(x+\delta x, y)\,\delta y + o(\delta y) + f_x(x, y)\,\delta x + o(\delta x) \\
&= \big(f_y(x, y) + O(\delta x)\big)\delta y + o(\delta y) + f_x(x, y)\,\delta x + o(\delta x).
\end{align*}
Taking the limit as $\delta x, \delta y \to 0$, we have
\[ df = \frac{\partial f}{\partial x}\,dx + \frac{\partial f}{\partial y}\,dy. \]

Note. We use "d" for differentials, on the understanding that we will sum (integrate) them or divide by another infinitesimal before evaluating.

3.3 Implicit Differentiation

Consider the contour surface of a function F(x, y, z) given by F(x, y, z) = const. This implicitly defines z = z(x, y). For example, if $F(x, y, z) = xy^2 + yz^2 + z^5x = 5$, then we can solve for $x = \frac{5 - yz^2}{y^2 + z^5}$, but z(x, y) cannot be written down explicitly because a quintic equation has no general solution in radicals. Nevertheless, derivatives of z(x, y) can still be found by differentiating F(x, y, z) = const w.r.t. x holding y constant, e.g.
\[ \frac{\partial}{\partial x}\big(xy^2 + yz^2 + z^5x\big) = \frac{\partial}{\partial x}5 = 0 \;\Rightarrow\; y^2 + z^5 + (2yz + 5z^4x)\frac{\partial z}{\partial x} = 0 \;\Rightarrow\; \frac{\partial z}{\partial x} = -\frac{y^2 + z^5}{2yz + 5z^4x}. \]

Theorem (Multivariable implicit differentiation). Given an equation
\[ F(x, y, z) = c \]
for some constant c, we have
\[ \left(\frac{\partial z}{\partial x}\right)_y = -\frac{\partial F/\partial x}{\partial F/\partial z}. \]

Proof. Differentiating F w.r.t. x while holding y constant, the chain rule gives
\[ \left(\frac{\partial F}{\partial x}\right)_y = \frac{\partial F}{\partial x}\left(\frac{\partial x}{\partial x}\right)_y + \frac{\partial F}{\partial y}\left(\frac{\partial y}{\partial x}\right)_y + \frac{\partial F}{\partial z}\left(\frac{\partial z}{\partial x}\right)_y = 0. \]
Since $(\partial x/\partial x)_y = 1$ and $(\partial y/\partial x)_y = 0$,
\[ \left(\frac{\partial z}{\partial x}\right)_y = -\frac{\partial F/\partial x}{\partial F/\partial z}. \]

Note. Reciprocals hold for partial derivatives as long as the variables being held constant are the same.

3.4 Differentiation of Integral w.r.t. parameter in Integrand

Consider a family of functions f(x, c). Define $I(b, c) = \int_a^b f(x, c)\,dx$. We have
\begin{align*}
\frac{\partial I}{\partial c} &= \lim_{\delta c\to 0}\frac{1}{\delta c}\left[\int_a^b f(x, c+\delta c)\,dx - \int_a^b f(x, c)\,dx\right] \\
&= \lim_{\delta c\to 0}\int_a^b \frac{f(x, c+\delta c) - f(x, c)}{\delta c}\,dx \\
&= \int_a^b \frac{\partial f}{\partial c}\,dx.
\end{align*}
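A minimal numerical check of this result (a sketch, not from the notes; the example $I(c) = \int_0^1 x^c\,dx = \frac{1}{c+1}$ is an assumption chosen for its known closed form):

```python
import math

# For I(c) = integral of x**c over [0, 1] = 1/(c+1), differentiating under
# the integral sign predicts dI/dc = integral of x**c * ln(x) = -1/(c+1)**2.
# The midpoint rule avoids the endpoint x = 0 where ln(x) blows up.
def midpoint(f, a, b, n=200_000):
    dx = (b - a) / n
    return sum(f(a + (k + 0.5) * dx) for k in range(n)) * dx

c = 1.0
lhs = -1 / (c + 1)**2                                   # exact dI/dc
rhs = midpoint(lambda x: x**c * math.log(x), 0.0, 1.0)  # integral of df/dc
print(lhs, rhs)  # both close to -0.25
```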

If $I(b(x), c(x)) = \int_a^{b(x)} f(y, c(x))\,dy$, then by the chain rule, we have
\[ \frac{dI}{dx} = \frac{\partial I}{\partial b}\frac{db}{dx} + \frac{\partial I}{\partial c}\frac{dc}{dx} = f(b, c)\,b'(x) + c'(x)\int_a^b \frac{\partial f}{\partial c}\,dy. \]

4 First-order differential equations

4.1 The Exponential Function

Consider $f(x) = a^x$, where a > 0 is constant. Then
\[ \frac{df}{dx} = \lim_{h\to 0}\frac{a^{x+h} - a^x}{h} = a^x\lim_{h\to 0}\frac{a^h - 1}{h} = \lambda a^x = \lambda f(x), \]
where
\[ \lambda = \lim_{h\to 0}\frac{a^h - 1}{h} = \text{const} = f'(0). \]
We define the function $f(x) = e^x$ by $\frac{df}{dx} = f(x)$ (i.e. λ = 1) with f(0) = 1. We can prove that $e = \lim_{k\to\infty}\left(1 + \frac{1}{k}\right)^k$. So if $y = a^x = e^{x\ln a}$, then $\frac{dy}{dx} = (\ln a)e^{x\ln a} = (\ln a)a^x$, i.e. $\lambda = \ln a$.

Note. ln x ≡ loge x ≡ log x

4.2 First-order Linear Homogeneous Ordinary Differential Equations

Definition. An eigenfunction of an operator is a function that changes only by a constant factor when the operator is applied.

Example 3. $\frac{d}{dx}(e^{mx}) = m(e^{mx})$, so $e^{mx}$ is an eigenfunction of the differential operator $\frac{d}{dx}$.

Theorem. Any linear, homogeneous ordinary differential equation with constant coefficients has solutions of the form $e^{mx}$. Linear means the dependent variable appears only linearly (no powers); homogeneous means y = 0 is a solution; constant coefficients means the independent variable does not appear explicitly in the coefficients.

(i) Because equations are linear and homogeneous, any multiple of a solution is also a solution. So y = Aemx is a solution for any A.

(ii) An nth-order linear differential equation has only n independent solutions, so $y = Ae^{3x/5}$ is the general solution of $5y' - 3y = 0$.

(iii) Any solution of an nth-order linear differential equation is a linear combination of eigenfunctions.

Note. Determine A by applying a boundary condition at some specified value of x (usually x = 0)

4.2.1 Discrete equation

Example 4. $5y' - 3y = 0$, with $y = y_0$ at $x = 0$.

Approximate by $5\frac{y_{n+1} - y_n}{h} - 3y_n = 0$, i.e. $y_{n+1} \approx \left(1 + \frac{3h}{5}\right)y_n$ (compound interest). Apply repeatedly:
\[ y_n = \left(1 + \frac{3h}{5}\right)y_{n-1} = \left(1 + \frac{3h}{5}\right)^n y_0. \]
For a given value of x, choose $h = \frac{x}{n}$, so
\[ y_n = y_0\left(1 + \frac{3x}{5n}\right)^n. \]
Take the limit as $n \to \infty$:
\[ y(x) = \lim_{n\to\infty} y_0\left(1 + \frac{3x}{5n}\right)^n = y_0 e^{3x/5}, \]
in agreement with the solution of the differential equation.
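The compound-interest limit above can be watched numerically (a sketch, assuming $y_0 = 1$ and $x = 1$ for concreteness):

```python
import math

# Discrete approximation y_n = y0 * (1 + 3x/(5n))**n to the ODE 5y' = 3y;
# as n grows this should approach the exact solution y0 * e**(3x/5).
def discrete_solution(x, n, y0=1.0):
    return y0 * (1 + 3 * x / (5 * n))**n

exact = math.exp(3 / 5)
for n in (10, 1000, 100_000):
    # The error shrinks roughly like 1/n.
    print(n, abs(discrete_solution(1.0, n) - exact))
```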

4.3 Forced (inhomogeneous) equations

Example 5.

(i) 5y0 − 3y = 10

(ii) 5y0 − 3y = ex

Solution

(i) This is called constant forcing. We just need to guess a constant particular solution, and clearly $y_p = -\frac{10}{3}$ will do.

(ii) This is called eigenfunction forcing. Since $e^x$ is an eigenfunction, we guess $y_p = Ae^x$; substituting gives $5A - 3A = 1$, so $A = \frac{1}{2}$.

4.4 Non-constant coefficients

Given the form $y' + p(x)y = c(x)$, we can use an integrating factor $\mu = e^{\int p\,dx}$ to solve:
\[ (\mu y)' = c(x)\mu \;\Rightarrow\; y = \frac{\int c(x)\mu\,dx}{\mu}. \]
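To make the method concrete, here is a hedged sketch on an assumed example (the equation $y' + 2xy = x$ is not from the notes): the integrating factor is $\mu = e^{x^2}$, giving $y = \frac{1}{2} + Ce^{-x^2}$, and with $y(0) = 0$ we get $C = -\frac{1}{2}$. Comparing against a crude forward-Euler integration:

```python
import math

# Integrating-factor solution of y' + 2xy = x with y(0) = 0:
# mu = exp(x**2), (mu*y)' = x*mu  =>  y = 1/2 - exp(-x**2)/2.
def exact(x):
    return 0.5 - 0.5 * math.exp(-x * x)

def euler(x_end, steps=100_000):
    h = x_end / steps
    x, y = 0.0, 0.0
    for _ in range(steps):
        y += h * (x - 2 * x * y)  # rearranged ODE: y' = x - 2xy
        x += h
    return y

print(exact(1.0), euler(1.0))  # both near 0.3161
```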

4.5 Nonlinear first-order equations

In general, a first-order equation has the form
\[ Q(x, y)\frac{dy}{dx} + P(x, y) = 0. \]

4.5.1 Separable Equations

Definition. The equation is separable if it can be manipulated into the form
\[ g(y)\,dy = p(x)\,dx, \]
in which case the solution can be found by integrating both sides:
\[ \int g(y)\,dy = \int p(x)\,dx. \]

Example 6.
\begin{align*}
(x^2y - 3y)\frac{dy}{dx} - 2xy^2 &= 4x \\
\Rightarrow\; \frac{dy}{dx} &= \frac{4x + 2xy^2}{x^2y - 3y} = \frac{2x(2 + y^2)}{y(x^2 - 3)} \\
\Rightarrow\; \int\frac{y}{2 + y^2}\,dy &= \int\frac{2x}{x^2 - 3}\,dx \\
\Rightarrow\; (y^2 + 2)^{1/2} &= A(x^2 - 3).
\end{align*}

4.5.2 Exact Equations

Definition.
\[ Q(x, y)\frac{dy}{dx} + P(x, y) = 0 \]
is an exact equation iff the differential form
\[ Q(x, y)\,dy + P(x, y)\,dx \]
is exact, i.e. there is a function f(x, y) of which the above form is the differential df. For exact equations,
\[ df = \frac{\partial f}{\partial x}\,dx + \frac{\partial f}{\partial y}\,dy = P(x, y)\,dx + Q(x, y)\,dy \;\Rightarrow\; \frac{\partial f}{\partial x} = P,\quad \frac{\partial f}{\partial y} = Q. \]
Taking mixed second partial derivatives, we need the equality
\[ \frac{\partial P}{\partial y} = \frac{\partial Q}{\partial x}. \]

Note. If $\frac{\partial P}{\partial y} = \frac{\partial Q}{\partial x}$ throughout a simply-connected domain D, then $P\,dx + Q\,dy$ is an exact differential of a single-valued function in D. A domain D is simply connected if any closed curve in D can be shrunk to a point in D without leaving D (no holes, unlike a torus).

If the equation is exact, then the solution f = const can be found by integrating
\[ df = P(x, y)\,dx + Q(x, y)\,dy. \]

Example 7.
\[ 6y(y - x)\frac{dy}{dx} + (2x - 3y^2) = 0 \;\Rightarrow\; (2x - 3y^2)\,dx + 6y(y - x)\,dy = 0. \]
One can check it is exact ($\partial P/\partial y = \partial Q/\partial x = -6y$). Integrating $\partial f/\partial x = 2x - 3y^2$,
\[ f = x^2 - 3xy^2 + h(y). \]
Then
\[ \frac{\partial f}{\partial y} = -6xy + \frac{dh}{dy} = 6y^2 - 6xy \;\Rightarrow\; \frac{dh}{dy} = 6y^2 \;\Rightarrow\; h = 2y^3 + C \;\Rightarrow\; f = x^2 - 3xy^2 + 2y^3 + C. \]
Since the equation was df = 0:
\[ x^2 - 3xy^2 + 2y^3 = \text{const}. \]

4.6 Solution curves (Trajectories)

Example 8.
\[ \frac{dy}{dt} = t(1 - y^2). \]
This is a separable equation, so after calculation we have
\[ y = \frac{A - e^{-t^2}}{A + e^{-t^2}}. \]
If we determine A, e.g. from y(0) = 0, we can plot the graph.

Can we understand the nature of the family of solutions without solving the equation? In general, consider equations of the form
\[ \frac{dy}{dt} = f(y, t). \]
We consider the slope $\frac{dy}{dt}$ at different points (and curves on which it is constant, called isoclines) to obtain the vector field and hence the solution curves. The curves cannot cross if f(y, t) is single-valued.

4.6.1 Stability of Fixed Points

When y is close to 1 it is pushed towards 1, but when y is close to −1 it is pushed away from −1. Mathematically, for a fixed point y = c (a value making $\frac{dy}{dt} = 0$ for all t): if $\frac{dy}{dt} < 0$ when y > c and $\frac{dy}{dt} > 0$ when y < c, then c is stable. If the signs of the derivative are reversed, c is unstable.

4.6.2 Perturbation Analysis

Suppose that y = a is a fixed point, and consider a nearby solution $y = a + \epsilon(t)$. Then
\begin{align*}
\frac{d\epsilon}{dt} &= f(a + \epsilon, t) \\
&= f(a, t) + \epsilon\frac{\partial f}{\partial y}(a, t) + O(\epsilon^2) \\
&= \epsilon\frac{\partial f}{\partial y}(a, t) + O(\epsilon^2),
\end{align*}
since f(a, t) = 0; so to leading order $\dot\epsilon = \left[\frac{\partial f}{\partial y}(a, t)\right]\epsilon$.

4.6.3 Autonomous systems

This name refers to equations of the form
\[ \frac{dy}{dt} = f(y). \]
Near a fixed point y = a, where f(a) = 0, the same procedure gives
\[ \dot\epsilon = \left[\frac{df}{dy}(a)\right]\epsilon = k\epsilon \;\Rightarrow\; \epsilon = \epsilon_0 e^{kt}. \]

Example 9 (Chemical Reaction).

NaOH + HCl → H2O + NaCl

Initially the concentrations are [NaOH] = $a_0$, [HCl] = $b_0$, [H₂O] = 0, [NaCl] = 0. If the reactants are in dilute solution (in water, say) then the reaction rate is proportional to ab. Writing c for the product concentration,
\[ \frac{dc}{dt} = \lambda ab \text{ for some } \lambda \;\Rightarrow\; \frac{dc}{dt} = \lambda(a_0 - c)(b_0 - c) = f(c). \]
Taking $a_0 < b_0$, the graph of f(c) is a parabola, and it is clear that $c = a_0$ is a stable equilibrium while $c = b_0$ is an unstable one.

4.7 Logistic Equation

For a population of size y with birth rate $\alpha y$ and death rate $\beta y$, we get
\[ \frac{dy}{dt} = (\alpha - \beta)y \;\Rightarrow\; y = y_0 e^{(\alpha-\beta)t}. \]
The population increases or decreases exponentially depending on whether the birth rate exceeds the death rate or vice versa.

4.7.1 Fighting for limited resources

The probability of one individual finding a piece of food is ∝ y, so the probability of two finding the same piece is ∝ y². If food is scarce then they "fight" (to the death), so the death rate due to fighting (competition) is $\gamma y^2$:
\[ \frac{dy}{dt} = (\alpha - \beta)y - \gamma y^2 = ry\left(1 - \frac{y}{Y}\right), \quad\text{where } r = \alpha - \beta \text{ and } Y = \frac{r}{\gamma}. \]
y = 0 is an unstable fixed point and y = Y is a stable fixed point. When y is small, $y \ll Y$, then
\[ \dot y \simeq ry, \]
so the population grows exponentially, but eventually the equilibrium y = Y is reached.

4.8 Discrete equations (Difference Equations)

Evolution of a species may occur discretely (e.g. births in spring, deaths more common in winter) rather than continuously, so a better model might be of the form
\[ x_{n+1} = \lambda x_n(1 - x_n). \]
To see where this comes from, discretize the logistic equation $\frac{dy}{dt} = ry\left(1 - \frac{y}{Y}\right)$ by approximating the left-hand side:
\[ \frac{y_{n+1} - y_n}{\Delta t} \simeq ry_n\left(1 - \frac{y_n}{Y}\right) \;\Rightarrow\; y_{n+1} \simeq y_n + r\Delta t\,y_n\left(1 - \frac{y_n}{Y}\right) = (1 + r\Delta t)\,y_n\left(1 - \frac{r\Delta t}{1 + r\Delta t}\frac{y_n}{Y}\right). \]
Setting
\[ x_n = \frac{r\Delta t}{1 + r\Delta t}\frac{y_n}{Y}, \qquad \lambda = 1 + r\Delta t, \]
this becomes $x_{n+1} = \lambda x_n(1 - x_n)$. This is the discrete logistic equation or logistic map. It is of the general form $x_{n+1} = f(x_n)$.

4.8.1 Behaviour of the Logistic map

Fixed points: $x_{n+1} = x_n \Rightarrow f(x_n) = x_n$. For the logistic map,
\[ \lambda x_n(1 - x_n) = x_n \;\Rightarrow\; x_n = 0 \text{ or } x_n = 1 - \frac{1}{\lambda}. \]
For λ < 1 the population dies out. For 1 < λ < 2, we have a stable fixed point approached monotonically. For 2 < λ < 3, we have the same converging sequence, but it is oscillatory (the sequence forms a decaying "box" around the stable point). For $3 < \lambda < 1 + \sqrt{6}$, we have a 2-cycle (the orbit oscillates between two values).
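These regimes can be observed directly by iterating the map (a sketch; the specific λ values 2.5 and 3.2 are arbitrary choices inside the ranges quoted above):

```python
# Iterate the logistic map x_{n+1} = lam * x_n * (1 - x_n) for a while,
# then inspect where the orbit has settled.
def iterate_logistic(lam, x0=0.2, n=2_000):
    x = x0
    for _ in range(n):
        x = lam * x * (1 - x)
    return x

# lam = 2.5 (in 2 < lam < 3): converges to the fixed point 1 - 1/lam = 0.6.
print(iterate_logistic(2.5))

# lam = 3.2 (in 3 < lam < 1 + sqrt(6)): a 2-cycle, so successive iterates
# alternate between two values but every second iterate repeats.
x = iterate_logistic(3.2)
x_next = 3.2 * x * (1 - x)
x_next2 = 3.2 * x_next * (1 - x_next)
print(x, x_next, x_next2)  # x and x_next differ; x and x_next2 agree
```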

5 Second-order Differential Equations

5.1 Constant Coefficients

Definition. A second-order linear ordinary differential equation with constant coefficients can be expressed as
\[ ay'' + by' + cy = f(x). \]
To solve it:

(i) Find the complementary function satisfying the homogeneous equation $ay'' + by' + cy = 0$.

(ii) Find a particular integral that satisfies the full equation.

5.1.1 Complementary function

$e^{\lambda x}$ is an eigenfunction of $\frac{d}{dx}$ and $\frac{d^2}{dx^2} = \frac{d}{dx}\left(\frac{d}{dx}\right)$. Complementary functions have the form
\[ y_c = e^{\lambda x} \;\Rightarrow\; y_c' = \lambda e^{\lambda x} \;\Rightarrow\; y_c'' = \lambda^2 e^{\lambda x} \;\Rightarrow\; a\lambda^2 + b\lambda + c = 0. \]
This is the characteristic equation. It has two roots, giving two complementary functions
\[ y_1 = e^{\lambda_1 x}, \qquad y_2 = e^{\lambda_2 x} \]
if $\lambda_1, \lambda_2$ are distinct. Then $y_1, y_2$ are linearly independent and complete: they form a basis for the solution space. The general complementary function is
\[ y_c = Ae^{\lambda_1 x} + Be^{\lambda_2 x}. \]

Example 10. $y'' - 5y' + 6y = 0 \Rightarrow \lambda_1 = 2,\ \lambda_2 = 3 \Rightarrow y_c = Ae^{2x} + Be^{3x}$.
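Example 10 can be verified in a few lines (a sketch, not part of the notes; the quadratic-formula helper assumes distinct real roots):

```python
import math

# Roots of the characteristic equation a*lam**2 + b*lam + c = 0.
def characteristic_roots(a, b, c):
    disc = math.sqrt(b * b - 4 * a * c)  # assumes distinct real roots
    return sorted(((-b - disc) / (2 * a), (-b + disc) / (2 * a)))

lam1, lam2 = characteristic_roots(1, -5, 6)
print(lam1, lam2)  # 2.0 3.0

# Check y = A e**(2x) + B e**(3x) solves y'' - 5y' + 6y = 0 exactly,
# using the known derivatives of the exponentials.
A, B, x = 1.5, -0.5, 0.7
y   = A * math.exp(2 * x) + B * math.exp(3 * x)
dy  = 2 * A * math.exp(2 * x) + 3 * B * math.exp(3 * x)
d2y = 4 * A * math.exp(2 * x) + 9 * B * math.exp(3 * x)
print(abs(d2y - 5 * dy + 6 * y))  # 0 up to rounding
```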

5.1.2 Detuning

Example 11. $y'' - 4y' + 4y = 0$. The characteristic equation has only one root, λ = 2, so we detune: consider
\[ y'' - 4y' + (4 - \epsilon^2)y = 0 \;\Rightarrow\; y_c = e^{2x}\left[Ae^{\epsilon x} + Be^{-\epsilon x}\right] = e^{2x}\left[A + B + \epsilon x(A - B) + O(\epsilon^2)\right]. \]
Choose $A + B = \alpha$ and $\epsilon(A - B) = \beta$, so
\[ A = \frac{1}{2}\left(\alpha + \frac{\beta}{\epsilon}\right), \qquad B = \frac{1}{2}\left(\alpha - \frac{\beta}{\epsilon}\right), \]
with α, β held fixed as ε → 0. Then
\[ y_c = e^{2x}\left[\alpha + \beta x + O(\epsilon)\right] \to e^{2x}(\alpha + \beta x) \text{ as } \epsilon \to 0. \]

This demonstrates the general rule: if $y_1(x)$ is a degenerate solution of a linear differential equation with constant coefficients, then $y_2(x) = xy_1(x)$ is an independent complementary function.

5.1.3 Method for Second Complementary Function

This is associated with a degenerate solution of the homogeneous equation. We look for a solution in the form $y_2(x) = v(x)y_1(x)$.

Example 12. $y'' - 4y' + 4y = 0$ with $y_1 = e^{2x}$. Try $y_2 = ve^{2x}$:
\[ y_2' = (v' + 2v)e^{2x}, \qquad y_2'' = (v'' + 4v' + 4v)e^{2x}. \]
Substituting gives $v'' = 0$, i.e. $v = \alpha + \beta x$, recovering the same solution as before.

5.1.4 Phase Space

A differential equation of nth order
\[ a_n(x)y^{(n)} + a_{n-1}(x)y^{(n-1)} + \cdots + a_0(x)y = f(x) \]
determines the nth derivative $y^{(n)}(x)$, and hence all higher derivatives, in terms of $y(x), y'(x), \ldots, y^{(n-1)}(x)$. Therefore we know the Taylor series, which we can use to extend y to values around x.

Definition. Define a solution vector $\mathbf{Y}(x) = (y(x), y'(x), \ldots, y^{(n-1)}(x))$ for each value of x. As x varies, $\mathbf{Y}(x)$ traces out a trajectory in phase space.

The solutions $y_1(x)$ and $y_2(x)$ are independent solutions of the differential equation iff $\mathbf{Y}_1, \mathbf{Y}_2$ are linearly independent vectors in phase space, i.e. iff the Wronskian
\[ W(x) = \begin{vmatrix} y_1 & y_2 \\ y_1' & y_2' \end{vmatrix} \neq 0. \]

Theorem (Abel's Theorem). Write the equation in standard form $y'' + p(x)y' + q(x)y = 0$. Then either $W \equiv 0$ for all x, or $W \neq 0$ for all x. If two solutions are independent for some particular value of x, then they are independent for all values of x.

Proof. If $y_1$ and $y_2$ are both solutions, then
\[ y_1'' + py_1' + qy_1 = 0 \;\Rightarrow\; y_2y_1'' + py_2y_1' + qy_2y_1 = 0, \]
\[ y_2'' + py_2' + qy_2 = 0 \;\Rightarrow\; y_1y_2'' + py_1y_2' + qy_1y_2 = 0. \]
Subtracting, we have
\[ y_2y_1'' - y_1y_2'' + p(y_2y_1' - y_1y_2') = 0 \;\Rightarrow\; -W' - pW = 0 \;\Rightarrow\; W = W_0 e^{-\int p(x)\,dx}, \]
where $W_0$ is a constant. The exponential function is never zero, so either $W_0 = 0$ and $W \equiv 0$, or W is never 0.

Note. Any linear nth-order differential equation is equivalent to a system of n first-order equations. It can be shown that for the system $\dot{\mathbf{Y}} + A\mathbf{Y} = 0$ we have
\[ W' + \mathrm{Tr}(A)W = 0 \;\Rightarrow\; W = W_0 e^{-\int \mathrm{Tr}(A)\,dx}, \]
so Abel's theorem holds in general.

5.2 Particular Integrals

5.2.1 Guessing

If the forcing terms are simple, we can "guess" the particular integral:

f(x)                            y_p(x)
e^{mx}                          Ae^{mx}
sin kx, cos kx                  A sin kx + B cos kx
polynomial p_n(x)               q_n(x) = a_n x^n + ... + a_1 x + a_0

Remember that the equation is linear, so each forcing term can be considered separately.

5.2.2 Detuning

As before, if resonance occurs (the forcing is itself a complementary function), try a particular integral of the form $y_p = At\cos\omega_0 t$: the oscillator vibrates at its natural frequency with a linearly growing amplitude. Alternatively, for a forcing term $\sin\omega_0 t$, one could try $y_p = C(\sin\omega t - \sin\omega_0 t)$ and then take the limit $\omega \to \omega_0$. (The former approach is recommended in exams.)

5.2.3 Variation of parameters

Let $y_1(x)$ and $y_2(x)$ be linearly independent complementary functions of the ODE $y'' + p(x)y' + q(x)y = f(x)$. Seek a particular solution
\[ y_p = uy_1 + vy_2 \quad (a) \]
with u(x), v(x) chosen so that
\[ y_p' = uy_1' + vy_2', \quad (b) \]
i.e. imposing $u'y_1 + v'y_2 = 0$. Differentiating (b),
\[ y_p'' = (uy_1'' + u'y_1') + (vy_2'' + v'y_2'). \quad (c) \]
Considering (c) + p·(b) + q·(a) and using the ODE, we get $y_1'u' + y_2'v' = f$. Together with $y_1u' + y_2v' = 0$, we have two simultaneous equations for $u'$ and $v'$:
\[ \begin{pmatrix} y_1 & y_2 \\ y_1' & y_2' \end{pmatrix}\begin{pmatrix} u' \\ v' \end{pmatrix} = \begin{pmatrix} 0 \\ f \end{pmatrix} \;\Rightarrow\; \begin{pmatrix} u' \\ v' \end{pmatrix} = \frac{1}{W}\begin{pmatrix} y_2' & -y_2 \\ -y_1' & y_1 \end{pmatrix}\begin{pmatrix} 0 \\ f \end{pmatrix}, \]
so $u' = -\frac{y_2}{W}f$ and $v' = \frac{y_1}{W}f$.

Example 13.

$y'' + 4y = \sin 2x$: take $y_1 = \cos 2x$, $y_2 = \sin 2x$, so $W = y_1y_2' - y_2y_1' = 2$. Then
\[ u' = -\frac{\sin^2 2x}{2}, \qquad v' = \frac{\sin 2x\cos 2x}{2} = \frac{\sin 4x}{4}, \]
\[ u = -\frac{x}{4} + \frac{\sin 4x}{16}, \qquad v = -\frac{\cos 4x}{16}, \]
\[ y_p = uy_1 + vy_2 = -\frac{x}{4}\cos 2x + \frac{1}{16}(\sin 4x\cos 2x - \cos 4x\sin 2x) = \frac{1}{16}\sin 2x - \frac{x}{4}\cos 2x. \]
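The particular integral from Example 13 can be checked numerically (a sketch; the second derivative is approximated by a central difference):

```python
import math

# Candidate particular integral for y'' + 4y = sin(2x).
def yp(x):
    return math.sin(2 * x) / 16 - x * math.cos(2 * x) / 4

# Residual of the ODE: should be ~0 if yp really is a solution.
def residual(x, h=1e-4):
    d2 = (yp(x + h) - 2 * yp(x) + yp(x - h)) / h**2  # central difference
    return d2 + 4 * yp(x) - math.sin(2 * x)

for x in (0.3, 1.0, 2.5):
    print(x, residual(x))  # all near zero
```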

5.3 Homogeneous equations, a.k.a. Linear equidimensional equations

Definition. This class of equations looks like
\[ ax^2y'' + bxy' + cy = f(x). \]

5.3.1 Solving

(i) Try the eigenfunction $y = x^k$ to reach the characteristic equation $ak(k-1) + bk + c = 0$; then find a particular solution.

(ii) Alternatively, the substitution $z = \ln x$ transforms the equation into
\[ a\frac{d^2y}{dz^2} + (b - a)\frac{dy}{dz} + cy = f(e^z), \]
which has constant coefficients. This method agrees with the one above.

(iii) For degenerate solutions, use $x^k\ln x$. The same applies to resonant forcing.

5.4 Difference Equations

Definition. This class of equations is defined by
\[ ay_{n+2} + by_{n+1} + cy_n = f(n). \]

Solve in a similar way to differential equations: the difference operator $D[y_n] = y_{n+1}$ has eigenfunctions $y_n = k^n$. The general complementary function comes from the characteristic equation
\[ ak^2 + bk + c = 0, \]
so the solutions are of the form
\[ y_n^{(c)} = Ak_1^n + Bk_2^n \text{ if } k_1 \neq k_2, \qquad y_n^{(c)} = (A + Bn)k^n \text{ if } k_1 = k_2 = k. \]

5.4.1 Guesswork

f_n                             y_n (guess)
k^n                             Ak^n  (Ank^n if degeneracy occurs)
n^p (polynomial)                A_1 n^p + A_2 n^{p-1} + ... + A_p

5.5 Transients and damping

For physical systems there is often a restoring force together with damping. Consider a car of mass M with a vertical force F(t) acting on it. We can model the wheels as springs (F = kx) with a "shock absorber" ($F = l\dot x$):
\[ M\ddot x = F(t) - kx - l\dot x \;\Rightarrow\; \ddot x + \frac{l}{M}\dot x + \frac{k}{M}x = \frac{F(t)}{M}. \]
Write $t = \tau\sqrt{M/k}$, where τ is dimensionless. Then we obtain
\[ \ddot x + 2\kappa\dot x + x = f(\tau), \]
where $\dot x$ now means $\frac{dx}{d\tau}$, $\kappa = \frac{l}{2\sqrt{kM}}$ and $f = \frac{F}{k}$.

5.5.1 Free (natural) response f = 0

\[ \ddot x + 2\kappa\dot x + x = 0. \]
We try $x = e^{\lambda\tau}$:
\[ \lambda^2 + 2\kappa\lambda + 1 = 0 \;\Rightarrow\; \lambda = -\kappa \pm \sqrt{\kappa^2 - 1} = \lambda_1, \lambda_2. \]

5.5.2 Underdamped

If κ < 1, we have
\[ x = e^{-\kappa\tau}\left(A\sin\sqrt{1-\kappa^2}\,\tau + B\cos\sqrt{1-\kappa^2}\,\tau\right). \]
The period is $\frac{2\pi}{\sqrt{1-\kappa^2}}$ and the amplitude decays on a characteristic timescale of $O\!\left(\frac{1}{\kappa}\right)$. Note that damping increases the period; as κ → 1, the oscillation period tends to ∞.

5.5.3 Critically damped

If κ = 1, then $x = (A + B\tau)e^{-\kappa\tau}$. The rise time and decay time are both $O\!\left(\frac{1}{\kappa}\right) = O(1)$, so the dimensional rise and decay times are $O(\sqrt{M/k})$.

5.5.4 Overdamped

If κ > 1, then $x = Ae^{\lambda_1\tau} + Be^{\lambda_2\tau}$ with $\lambda_2 < \lambda_1 < 0$. The decay time is $O(1/|\lambda_1|)$ (set by the slower root) and the rise time is $O(1/|\lambda_2|)$. Note that it is possible to get a large initial increase in amplitude.

5.5.5 Forcing

In a forced system, the complementary functions typically determine the transient response, while the particular integral determines the long-time (asymptotic) response. For example, if $f(\tau) = \sin\tau$, try $x_p = C\sin\tau + D\cos\tau$; substituting gives $C = 0$ and $D = -\frac{1}{2\kappa}$, so $x_p = -\frac{1}{2\kappa}\cos\tau$. The general solution is
\[ x = Ae^{\lambda_1\tau} + Be^{\lambda_2\tau} - \frac{1}{2\kappa}\cos\tau \sim -\frac{1}{2\kappa}\cos\tau \text{ as } \tau\to\infty, \]
since $\mathrm{Re}(\lambda_{1,2}) < 0$.

Note. The forced response is π/2 out of phase with the forcing.

5.6 Impulses and Point Forces

Consider, say, a bouncing ball. We do not need to know the details of the contact force F(t); we only note that it acts for a short time $O(\epsilon)$, and to make the mathematics convenient we take the limit ε → 0. Newton's second law gives
\[ m\ddot x = F(t) - mg. \]
Integrating over the impact,
\[ \int_{T-\epsilon}^{T+\epsilon} m\frac{d^2x}{dt^2}\,dt = \int_{T-\epsilon}^{T+\epsilon} F(t)\,dt - \int_{T-\epsilon}^{T+\epsilon} mg\,dt \;\Rightarrow\; \left[m\frac{dx}{dt}\right]_{T-\epsilon}^{T+\epsilon} = I - 2\epsilon mg, \]
where $I = \int_{T-\epsilon}^{T+\epsilon} F(t)\,dt$ is the impulse.

As ε → 0 gravity's contribution vanishes, and writing the impulse as I we have
\[ I = \left[m\frac{dx}{dt}\right]_{T^-}^{T^+}: \]
we only need the integral properties of $F(t;\epsilon)$. Consider a family of functions $D(t;\epsilon)$ such that
\[ \lim_{\epsilon\to 0} D(t;\epsilon) = 0 \text{ for all } t \neq 0, \qquad \lim_{\epsilon\to 0}\int_{-\infty}^{\infty} D(t;\epsilon)\,dt = 1. \]
We define the Dirac delta function by
\[ \delta(x) = \lim_{\epsilon\to 0} D(x;\epsilon), \]
on the understanding that we can only use its integral properties (its value at 0 is undefined). For example,
\[ \int_{-\infty}^{\infty} g(x)\delta(x)\,dx = \lim_{\epsilon\to 0}\int_{-\infty}^{\infty} g(x)D(x;\epsilon)\,dx = g(0)\lim_{\epsilon\to 0}\int_{-\infty}^{\infty} D(x;\epsilon)\,dx = g(0). \]
In general, the integral is non-zero only if the point where the delta is centred lies strictly between the limits of integration, and its value is then the value of g at that point. If a limit of integration coincides with that point, the integral is undefined.

5.7 Heaviside Step Function

Definition.
\[ H(x) = \int_{-\infty}^{x}\delta(t)\,dt, \]
so H(x) = 0 for x < 0, H(x) = 1 for x > 0, and H(x) is undefined at x = 0.

5.7.1 Solving differential equations with discontinuous functions

Separate the equation into the part before the discontinuity and the part after it, then match the two solutions using the boundary conditions (continuity of the function).

6 Series Solution

Consider equations of the following type:
\[ p(x)y''(x) + q(x)y'(x) + r(x)y = 0. \]

Definition. A point $x = x_0$ is an ordinary point if $\frac{q}{p}$ and $\frac{r}{p}$ have Taylor series (are analytic) about $x_0$. If it is not an ordinary point, it is a regular singular point if the two fractions do have Taylor series when the equation is expressed in the form
\[ P(x)(x - x_0)^2 y''(x) + Q(x)(x - x_0)y'(x) + R(x)y = 0, \]
i.e. if $\frac{Q}{P}$ and $\frac{R}{P}$ are analytic at $x_0$. If these two fractions do not have Taylor series either, it is an irregular singular point.

Example 14. $(1 - x^2)y'' - 2xy' + 2y = 0$: x = 0 is an ordinary point; $x = \pm 1$ are regular singular points.

Proposition 1. If $x_0$ is an ordinary point, then the equation has two linearly independent solutions of the form
\[ y = \sum_{n=0}^{\infty} a_n(x - x_0)^n, \]
convergent in some range about $x_0$. If $x_0$ is a regular singular point, then the equation has at least one solution of the form
\[ y = \sum_{n=0}^{\infty} a_n(x - x_0)^{n+\sigma} = (x - x_0)^{\sigma}f(x), \quad a_0 \neq 0. \]
This is called a Frobenius series; f(x) is analytic, and σ can be any complex number.

Example 15. $(1 - x^2)y'' - 2xy' + 2y = 0$. Try the solution
\[ y = \sum_{n=0}^{\infty} a_n x^n. \]
Writing the equation in equidimensional form (multiplying through by $x^2$ and pairing each derivative with a matching power of x),
\[ (1 - x^2)x^2y'' - 2x^2(xy') + 2x^2y = 0, \]
substitution gives
\[ \sum_n a_n\big((1 - x^2)n(n-1) - 2x^2 n + 2x^2\big)x^n = 0. \]
The coefficient of $x^n$ gives the general recurrence relation
\[ n(n-1)a_n + \big(-(n-2)(n-3) - 2(n-2) + 2\big)a_{n-2} = 0 \;\Rightarrow\; n(n-1)a_n = (n^2 - 3n)a_{n-2}. \]
For n = 0 and n = 1 this reads $0\cdot a_0 = 0$ and $0\cdot a_1 = 0$, so $a_0$ and $a_1$ are arbitrary. For n > 1,
\[ a_n = \frac{n-3}{n-1}a_{n-2} = \frac{n-3}{n-1}\cdot\frac{n-5}{n-3}a_{n-4} = \cdots \;\Rightarrow\; a_{2k} = -\frac{1}{2k-1}a_0, \quad a_{2k+1} = 0 \ (k \geq 1). \]
Hence
\[ y = a_0\left[1 - x^2 - \frac{x^4}{3} - \frac{x^6}{5} - \cdots\right] + a_1 x = a_0\left[1 - \frac{x}{2}\ln\left(\frac{1+x}{1-x}\right)\right] + a_1 x. \]

Example 16 (Frobenius Series).

\[ 4xy'' + 2(1 - x^2)y' - xy = 0. \]
Multiplying by x to reach equidimensional form,
\[ 4x^2y'' + 2(1 - x^2)(xy') - x^2y = 0. \]
Now x = 0 is a regular singular point, so try a Frobenius series $y = \sum_n a_n x^{n+\sigma}$. The coefficient of $x^{n+\sigma}$ gives
\[ \big[4(n+\sigma)(n+\sigma-1) + 2(n+\sigma)\big]a_n + \big[-2(n+\sigma-2) - 1\big]a_{n-2} = 0, \]
i.e.
\[ 2(n+\sigma)(2n+2\sigma-1)a_n = (2n+2\sigma-3)a_{n-2}. \]
n = 0 gives the indicial equation for the index σ:
\[ 2\sigma(2\sigma - 1)a_0 = 0 \;\Rightarrow\; \sigma = 0,\ \tfrac{1}{2}. \]
If σ = 0, the recurrence is
\[ 2n(2n-1)a_n = (2n-3)a_{n-2}. \]
For n = 0, $a_0$ is arbitrary; for n = 1 we get $a_1 = a_3 = a_5 = \cdots = 0$. So
\[ y = a_0\left[1 + \frac{1}{4\cdot 3}x^2 + \frac{5}{8\cdot 7\cdot 4\cdot 3}x^4 + \cdots\right]. \]
If σ = ½, the recurrence is
\[ (2n+1)2n\,a_n = (2n-2)a_{n-2}. \]
Again the leading coefficient (relabelled $b_0$) is arbitrary and there are no odd terms, so
\[ y = x^{1/2}b_0\left[1 + \frac{1}{2\cdot 5}x^2 + \frac{3}{2\cdot 5\cdot 4\cdot 9}x^4 + \cdots\right]. \]
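Series solutions like these are easy to check numerically. Here is a sketch (not from the notes) verifying the even series of Example 15 against its closed form $1 - \frac{x}{2}\ln\frac{1+x}{1-x}$, valid for |x| < 1:

```python
import math

# Truncated even series from Example 15 with a0 = 1, a1 = 0:
# y = 1 - x**2 - x**4/3 - x**6/5 - ...  (a_{2k} = -1/(2k-1))
def series(x, terms=60):
    total = 1.0
    for k in range(1, terms):
        total -= x**(2 * k) / (2 * k - 1)
    return total

def log_form(x):
    return 1 - (x / 2) * math.log((1 + x) / (1 - x))

for x in (0.1, 0.3, 0.5):
    print(x, abs(series(x) - log_form(x)))  # tiny truncation error
```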

6.1 Behaviour near $x = x_0$

The indicial equation has two roots $\sigma_1$ and $\sigma_2$, with $\sigma_2 \geq \sigma_1$. If $\sigma_2 - \sigma_1$ is not an integer, then there are two linearly independent Frobenius solutions
\[ y = (x - x_0)^{\sigma_1}\sum_{n=0}^{\infty} a_n(x - x_0)^n + (x - x_0)^{\sigma_2}\sum_{n=0}^{\infty} b_n(x - x_0)^n. \]
If $\sigma_2 - \sigma_1$ is an integer, then there is one Frobenius solution of the form
\[ y_1 = (x - x_0)^{\sigma_2}\sum_{n=0}^{\infty} a_n(x - x_0)^n, \]
corresponding to the larger root $\sigma_2$. The other solution is (usually) of the form
\[ y_2 = \ln(x - x_0)\,y_1 + \sum_{n=0}^{\infty} b_n(x - x_0)^{n+\sigma_1}. \]

7 Directional Derivatives

Consider a displacement $d\mathbf{s} = (dx, dy)$ of a function f(x, y). The change during that displacement is
\[ df = \frac{\partial f}{\partial x}dx + \frac{\partial f}{\partial y}dy = (dx, dy)\cdot\left(\frac{\partial f}{\partial x}, \frac{\partial f}{\partial y}\right) = d\mathbf{s}\cdot\nabla f. \]
Here $\left(\frac{\partial f}{\partial x}, \frac{\partial f}{\partial y}\right)$ are the components of the gradient of f, denoted $\mathrm{grad}\,f$ or $\nabla f$. Write $d\mathbf{s} = \hat{\mathbf{s}}\,ds$ where $|\hat{\mathbf{s}}| = 1$. Then
\[ \frac{df}{ds} = \hat{\mathbf{s}}\cdot\nabla f, \]
where $\frac{df}{ds}$ is the directional derivative of f in the direction $\hat{\mathbf{s}}$. This can be taken as the definition of $\nabla f$.

7.0.1 The gradient vector of f(x, y)

\[ \frac{df}{ds} = |\nabla f|\cos\theta, \]
where θ is the angle between $\nabla f$ and $d\mathbf{s}$. Therefore the maximum rate of change occurs when $d\mathbf{s}$ points along $\nabla f$.

(i) $\nabla f$ has magnitude equal to the maximum rate of change of f over all directions (in the x–y plane) at a point, and it points in the direction in which f increases most rapidly.

(ii) If $d\mathbf{s}$ is a displacement along a contour of f, then
\[ \frac{df}{ds} = 0 \;\Rightarrow\; \hat{\mathbf{s}}\cdot\nabla f = 0, \]
so $\nabla f$ is orthogonal to the contour.

7.1 Stationary Points

There is always one direction in which $\frac{df}{ds} = 0$, namely parallel to a contour of f. Local maxima and minima have
\[ \frac{df}{ds} = \hat{\mathbf{s}}\cdot\nabla f = 0 \]
for all directions, so $\nabla f = \mathbf{0}$ there.

(i) Near a maximum/minimum, $\nabla f$ points toward/away from the point.

(ii) At saddle points, contours cross (and contours cross only at saddle points).

7.2 Taylor series for Multivariable Functions

Consider a finite displacement $\delta\mathbf{s}$ along a straight line in the x–y plane. Then
\[ \delta s\frac{d}{ds} \equiv \delta\mathbf{s}\cdot\nabla, \]
so the Taylor series along the line is
\[ f(\mathbf{s}) = f(\mathbf{s}_0 + \delta\mathbf{s}) = f(\mathbf{s}_0) + \delta s\frac{df}{ds} + \frac{1}{2}(\delta s)^2\frac{d^2f}{ds^2} + \cdots = f(\mathbf{s}_0) + \delta\mathbf{s}\cdot\nabla f + \frac{1}{2}(\delta s)^2(\hat{\mathbf{s}}\cdot\nabla)(\hat{\mathbf{s}}\cdot\nabla)f + \cdots \]
Now
\[ \delta\mathbf{s}\cdot\nabla f = \delta x\frac{\partial f}{\partial x} + \delta y\frac{\partial f}{\partial y}, \]
and expanding the second-order term,
\begin{align*}
(\delta s)^2(\hat{\mathbf{s}}\cdot\nabla)(\hat{\mathbf{s}}\cdot\nabla)f &= (\delta s)^2\left(\hat{s}_x\frac{\partial}{\partial x} + \hat{s}_y\frac{\partial}{\partial y}\right)\left(\hat{s}_x\frac{\partial f}{\partial x} + \hat{s}_y\frac{\partial f}{\partial y}\right) \\
&= \delta x^2\frac{\partial^2 f}{\partial x^2} + \delta x\,\delta y\frac{\partial^2 f}{\partial x\partial y} + \delta y\,\delta x\frac{\partial^2 f}{\partial y\partial x} + \delta y^2\frac{\partial^2 f}{\partial y^2} \\
&= \begin{pmatrix}\delta x & \delta y\end{pmatrix}\begin{pmatrix} f_{xx} & f_{xy} \\ f_{yx} & f_{yy} \end{pmatrix}\begin{pmatrix}\delta x \\ \delta y\end{pmatrix},
\end{align*}
where
\[ \begin{pmatrix} f_{xx} & f_{xy} \\ f_{yx} & f_{yy} \end{pmatrix} \equiv \nabla\nabla f \]
is the Hessian matrix. In general, the coordinate-free form of the Taylor expansion is
\[ f(\mathbf{x}) = f(\mathbf{x}_0) + \delta\mathbf{x}\cdot\nabla f(\mathbf{x}_0) + \frac{1}{2}\delta\mathbf{x}\cdot\nabla\nabla f\cdot\delta\mathbf{x} + \cdots \]

7.3 Classification of stationary points

At a stationary point $\mathbf{x}_0$, we know that $\nabla f(\mathbf{x}_0) = \mathbf{0}$. So at a point near the stationary point,
\[ f(\mathbf{x}) \approx f(\mathbf{x}_0) + \frac{1}{2}\delta\mathbf{x}\cdot H\cdot\delta\mathbf{x}, \]
where $H = \nabla\nabla f(\mathbf{x}_0)$ is the Hessian matrix. At a minimum, $\delta\mathbf{x}\cdot H\cdot\delta\mathbf{x} > 0$ for all $\delta\mathbf{x}$, so H is positive definite. Similarly, at a maximum, $\delta\mathbf{x}\cdot H\cdot\delta\mathbf{x} < 0$ for all $\delta\mathbf{x}$, so H is negative definite. At a saddle, $\delta\mathbf{x}\cdot H\cdot\delta\mathbf{x}$ is indefinite. If det H = 0, we need to look at higher derivatives. Now note that $H = \nabla\nabla f$ is symmetric ($f_{xy} = f_{yx}$), so H is diagonalizable. With respect to the axes in which H is diagonal (the principal axes), we have
\[ \delta\mathbf{x}\cdot H\cdot\delta\mathbf{x} = \lambda_1(\delta x_1)^2 + \lambda_2(\delta x_2)^2 + \cdots + \lambda_n(\delta x_n)^2, \]
where $\lambda_1, \lambda_2, \ldots, \lambda_n$ are the eigenvalues of H. So H is positive definite iff $\lambda_i > 0$ for all i, and negative definite iff $\lambda_i < 0$ for all i. If the eigenvalues have mixed signs, the point is a saddle point.
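The eigenvalue criterion can be packaged into a small helper for the 2×2 case (an illustrative sketch, not from the notes; the eigenvalue formula is the standard closed form for a symmetric 2×2 matrix):

```python
import math

# Classify a stationary point from the 2x2 Hessian [[fxx, fxy], [fxy, fyy]]
# via its eigenvalues (real, since the Hessian is symmetric).
def classify(fxx, fxy, fyy):
    mean = (fxx + fyy) / 2
    spread = math.sqrt(((fxx - fyy) / 2)**2 + fxy**2)
    lam1, lam2 = mean - spread, mean + spread
    if lam1 > 0 and lam2 > 0:
        return "minimum"        # positive definite
    if lam1 < 0 and lam2 < 0:
        return "maximum"        # negative definite
    if lam1 * lam2 < 0:
        return "saddle"         # indefinite
    return "degenerate"         # det H = 0: look at higher derivatives

print(classify(2, 0, 2))    # f = x**2 + y**2    -> minimum
print(classify(2, 0, -2))   # f = x**2 - y**2    -> saddle
print(classify(-2, 0, -2))  # f = -(x**2 + y**2) -> maximum
```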

7.3.1 Determination of definiteness

Definition (Signature of Hessian matrix). The signature of H is the pattern of signs of the leading subdeterminants:

|H1| = f_xx,  |H2| = |f_xx f_xy; f_yx f_yy|,  ···,  |Hn| = |H|

Proposition 2. H is positive definite if and only if the signature is +, +, ···, +. H is negative definite if and only if the signature is −, +, −, ···, with |Hn| having sign (−1)ⁿ. Otherwise, H is indefinite.
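The signature test can also be sketched numerically, by taking the signs of the leading subdeterminants. A small example assuming numpy; the helper name `signature` is ours:

```python
import numpy as np

def signature(H):
    """Signs of the leading principal subdeterminants |H1|, ..., |Hn|."""
    n = H.shape[0]
    return [int(np.sign(np.linalg.det(H[:k, :k]))) for k in range(1, n + 1)]

H_min = np.array([[2.0, 1.0], [1.0, 2.0]])   # positive definite
H_max = -H_min                                # negative definite
print(signature(H_min))  # [1, 1]   i.e. +, +
print(signature(H_max))  # [-1, 1]  i.e. -, +
```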

7.3.2 Contours of f(x, y)

Consider H in 2 dimensions, with axes chosen along the principal axes of H, so H = [λ1 0; 0 λ2]. Write x − x0 = (X, Y). Then near x0,

f = constant ⇒ δx · H · δx = constant, i.e. λ1X² + λ2Y² = constant.

At a maximum or minimum, λ1 and λ2 have the same sign, so these contours are locally ellipses. At a saddle point, they have opposite signs and the contours are locally hyperbolae.

8 Systems of Linear Differential Equations

Consider two dependent variables y1(t) and y2(t) with

ẏ1 = a y1 + b y2 + f1(t)

ẏ2 = c y1 + d y2 + f2(t)

In matrix form, Ẏ = M Y + F, where

Y = (y1, y2)ᵀ,  M = [a b; c d],  F = (f1, f2)ᵀ

This can be reduced to a single second-order linear ODE by differentiating and eliminating y2:

ÿ1 − (a + d)ẏ1 + (ad − bc)y1 = f(t)

where f(t) = ḟ1 − d f1 + b f2. Note that the coefficients are −tr M and det M.
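The reduction can be spot-checked numerically: the eigenvalues of M coincide with the roots of the characteristic equation λ² − (tr M)λ + det M = 0 of the reduced second-order ODE. A sketch assuming numpy; the matrix entries are arbitrary illustrations:

```python
import numpy as np

# A hypothetical coefficient matrix for Y' = M Y (values chosen for illustration)
M = np.array([[1.0, 2.0],
              [3.0, 0.0]])

# Eigenvalues of the first-order system...
eig = np.sort(np.linalg.eigvals(M))

# ...equal the roots of the characteristic equation of the reduced
# second-order ODE:  lambda^2 - (tr M) lambda + (det M) = 0
roots = np.sort(np.roots([1.0, -np.trace(M), np.linalg.det(M)]))

print(np.allclose(eig, roots))  # True
```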

Transferring back is equally easy:

Given ÿ + aẏ + by = f, let y1 = y and y2 = ẏ, so that ẏ1 = y2 and ẏ2 = −b y1 − a y2 + f.

This transfers the equation back into a system of two linear first-order differential equations. A complementary solution Yc = v e^{λt} of the homogeneous system Ẏ = M Y has to satisfy:

Mv = λv

So v is an eigenvector of M with corresponding eigenvalue λ. Thus, in the non-degenerate case (two independent eigenvectors), the general solution is

Y = A v1 e^{λ1 t} + B v2 e^{λ2 t} + Yp(t)

where v1, v2 are the eigenvectors, λ1, λ2 the corresponding eigenvalues, and Yp(t) a particular integral for the forcing F.
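The eigenvector construction can be sketched numerically: build Y(t) from the eigen-decomposition of M and verify by finite differences that it satisfies Ẏ = M Y. A hedged example assuming numpy; M and the initial condition are illustrative choices:

```python
import numpy as np

M = np.array([[0.0, 1.0],
              [-2.0, -3.0]])          # illustrative matrix; eigenvalues -1, -2

lam, V = np.linalg.eig(M)             # columns of V are eigenvectors v1, v2

Y0 = np.array([1.0, 0.0])             # initial condition Y(0)
coeffs = np.linalg.solve(V, Y0)       # A, B from Y(0) = A v1 + B v2

def Y(t):
    """Unforced solution Y = A v1 e^{lam1 t} + B v2 e^{lam2 t}."""
    return V @ (coeffs * np.exp(lam * t))

# Check that Y really solves Y' = M Y, using a centred finite difference
t, h = 0.7, 1e-6
dY = (Y(t + h) - Y(t - h)) / (2 * h)
print(np.allclose(dY, M @ Y(t), atol=1e-6))  # True
```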

8.1 Phase-space trajectories

There are three possible cases of solutions, corresponding to the three possible types of eigenvalue pair of M:

(i) If λ1, λ2 are real with opposite signs (λ1λ2 < 0), wlog λ1 > 0 > λ2, then the equilibrium is a saddle point.

(ii) If λ1, λ2 are real with λ1λ2 > 0, wlog |λ1| ≥ |λ2|, then the phase portrait is a node.

If both λ1, λ2 < 0, then the arrows point towards the equilibrium point and we say there is a stable node. If both are positive, they point outwards and there is an unstable node.

(iii) If λ1, λ2 are complex conjugates, then we obtain a spiral

If Re(λ1,2) < 0, the trajectories spiral inwards; if Re(λ1,2) > 0, they spiral outwards; if Re(λ1,2) = 0, we have ellipses with a common centre. Whether the spiral is anticlockwise or clockwise (its mirror image) can be determined by considering the eigenvectors.
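The three cases can be collected into a small classifier that inspects the eigenvalues of M. A sketch assuming numpy; the function name and the tolerance are our own:

```python
import numpy as np

def classify_equilibrium(M, tol=1e-12):
    """Classify the origin of Y' = M Y from the eigenvalues of M."""
    l1, l2 = np.linalg.eigvals(M)
    if abs(l1.imag) > tol:                       # complex conjugate pair
        re = l1.real
        if re < -tol:
            return "stable spiral"
        if re > tol:
            return "unstable spiral"
        return "centre"                          # ellipses about the origin
    l1, l2 = l1.real, l2.real
    if l1 * l2 < 0:
        return "saddle"
    return "stable node" if max(l1, l2) < 0 else "unstable node"

print(classify_equilibrium(np.array([[0.0, 1.0], [-1.0, 0.0]])))   # centre
print(classify_equilibrium(np.array([[2.0, 1.0], [1.0, 2.0]])))    # unstable node
print(classify_equilibrium(np.array([[0.0, 1.0], [2.0, 0.0]])))    # saddle
```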

8.2 Nonlinear dynamical systems

Consider the second-order autonomous system (i.e. t does not appear explicitly on the right-hand side)

ẋ = f(x, y)
ẏ = g(x, y)

It can be difficult to solve the equations, but we can learn a lot about phase-space trajectories of these solutions by studying the equilibria and their stability.

8.2.1 Equilibrium (fixed) points

Definition (Equilibrium point). An equilibrium point is a point x0 = (x0, y0) at which ẋ = ẏ = 0.

Clearly this occurs when f(x0, y0) = g(x0, y0) = 0. We solve these simultaneously for x0, y0.

8.2.2 Stability

Write x = x0 + ξ, y = y0 + η. Then

ξ̇ = f(x0 + ξ, y0 + η)
  = f(x0, y0) + ξ ∂f/∂x(x0) + η ∂f/∂y(x0) + O(ξ², η²)

and similarly for η̇. Since f(x0, y0) = g(x0, y0) = 0, if ξ, η ≪ 1 then

(ξ̇, η̇)ᵀ = [f_x  f_y; g_x  g_y] (ξ, η)ᵀ

with the partial derivatives evaluated at x0. This is a linear system, and we can determine its character from the eigensolutions.
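The linearization can be illustrated numerically by estimating the Jacobian (f_x f_y; g_x g_y) at an equilibrium with central differences. A sketch assuming numpy; the particular system below is an illustrative Lotka–Volterra-type example, not one from the notes:

```python
import numpy as np

# Illustrative nonlinear system:
#   x' = f(x, y) = x (1 - y),   y' = g(x, y) = y (x - 1)
def f(x, y): return x * (1.0 - y)
def g(x, y): return y * (x - 1.0)

x0, y0 = 1.0, 1.0                  # equilibrium: f = g = 0 here

# Jacobian (f_x f_y; g_x g_y) at the equilibrium, by central differences
h = 1e-6
J = np.array([
    [(f(x0 + h, y0) - f(x0 - h, y0)) / (2*h), (f(x0, y0 + h) - f(x0, y0 - h)) / (2*h)],
    [(g(x0 + h, y0) - g(x0 - h, y0)) / (2*h), (g(x0, y0 + h) - g(x0, y0 - h)) / (2*h)],
])

print(np.round(J, 6))                # approximately [[0, -1], [1, 0]]
eigenvalues = np.linalg.eigvals(J)   # approximately +/- i: linearly, a centre
```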

9 Partial differential equations (PDEs)

9.1 First-order wave equation

Consider an equation of the form

∂y/∂t = c ∂y/∂x

with c constant. Recall that along a path x = x(t), so that y = y(x(t), t),

dy/dt = ∂y/∂x dx/dt + ∂y/∂t

by the chain rule. Now we choose a path along which

dx/dt = −c.    (1)

Along such paths,

dy/dt = −c ∂y/∂x + ∂y/∂t = 0.    (2)

So we have replaced the original partial differential equation with a pair of ordinary differential equations. (2) tells us y is constant along the path; (1) says x = x0 − ct, with x0 constant, i.e. x + ct = x0. So we have a family of paths, each labelled by x0, and along each path y is constant. For each value of x0, y takes a constant value f(x0) along the path. So

y = f(x + ct), which is the general solution. As we move up the time axis, we simply take the t = 0 solution and translate it to the left (for c > 0).
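That y = f(x + ct) solves ∂y/∂t = c ∂y/∂x can be spot-checked with finite differences for any smooth profile f. A minimal sketch assuming numpy; tanh is an arbitrary choice of profile:

```python
import numpy as np

c = 2.0
f = np.tanh                         # any differentiable profile will do

def y(x, t):
    return f(x + c * t)             # claimed general solution

# Check y_t = c y_x at an arbitrary point, by central finite differences
x0, t0, h = 0.3, 1.5, 1e-6
y_t = (y(x0, t0 + h) - y(x0, t0 - h)) / (2 * h)
y_x = (y(x0 + h, t0) - y(x0 - h, t0)) / (2 * h)
print(np.isclose(y_t, c * y_x, atol=1e-6))  # True
```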

The paths we have identified are called the "characteristics" of the wave equation. (In this example they are the straight lines x + ct = constant, along which y is constant.) We can also solve forced equations, such as

∂y/∂t + 5 ∂y/∂x = e^{−t},  y(x, 0) = e^{−x²}.

Along paths with dx/dt = 5 we obtain dy/dt = e^{−t}, so y = A − e^{−t} on the paths x = x0 + 5t. At t = 0, y = A − 1 and x = x0, so the given initial condition yields A − 1 = e^{−x0²}, i.e. A = 1 + e^{−x0²}. So

y = 1 + e^{−(x−5t)²} − e^{−t}.

9.2 Second-order wave equation

We consider equations of the form

∂²y/∂t² = c² ∂²y/∂x²

For example, if ρ(x) is the mass per unit length of a string, then the transverse force ma = ρ ∂²y/∂t² is proportional to ∂²y/∂x² (the curvature of the string). This is often known as the "hyperbolic equation" because its form resembles that of a hyperbola (x²/a² − y²/b² = 1), but the connection stops there. If c is constant, we write

∂²y/∂t² − c² ∂²y/∂x² = 0 ⇒ (∂/∂t + c ∂/∂x)(∂/∂t − c ∂/∂x) y = 0

If y = f(x + ct), then the second operator annihilates it: (∂/∂t − c ∂/∂x) f(x + ct) = c f′ − c f′ = 0. So y = f(x + ct) is a solution. Since the operators commute, y = g(x − ct) is also a solution (the first operator annihilates it). So the general solution is

y = f(x + ct) + g(x − ct).

This shows that the solution is a superposition of waves travelling to the left and waves travelling to the right. We can show that this is indeed the most general solution by substituting ξ = x + ct and η = x − ct: using the chain rule,

y_tt − c² y_xx ≡ −4c² y_ξη = 0.

Integrating twice gives y = f(ξ) + g(η).

How many conditions do we need for a unique solution? In PDEs, we have to count conditions in all variables: in this case, we need 2 boundary conditions (in x) and 2 initial conditions (in t).

Example 17. Solve the wave equation subject to

y = 1/(1 + x²), ∂y/∂t = 0 at t = 0;  y → 0 as x → ±∞.

The solution has the form

y = f(x + ct) + g(x − ct)

At t = 0:

f(x) + g(x) = 1/(1 + x²)

∂y/∂t = c f′(x) − c g′(x) = 0 at t = 0.

So f′ = g′, hence f = g up to an additive constant which cancels in the sum; we may take f = g. The solution is then:

f(x) = g(x) = (1/2) · 1/(1 + x²)

So:

y = (1/2) [1/(1 + (x + ct)²) + 1/(1 + (x − ct)²)]
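The solution of Example 17 can be verified numerically: check the initial conditions and that y_tt = c² y_xx holds, using finite differences. A sketch assuming numpy; c = 1 is an arbitrary choice of wave speed:

```python
import numpy as np

c = 1.0

def y(x, t):
    """d'Alembert solution of Example 17 (wave speed c)."""
    return 0.5 * (1.0 / (1.0 + (x + c*t)**2) + 1.0 / (1.0 + (x - c*t)**2))

# Initial conditions: y(x, 0) = 1/(1 + x^2) and y_t(x, 0) = 0
x = np.linspace(-5, 5, 101)
h = 1e-5
print(np.allclose(y(x, 0), 1 / (1 + x**2)))                    # True
print(np.allclose((y(x, h) - y(x, -h)) / (2*h), 0, atol=1e-8)) # True

# PDE: y_tt = c^2 y_xx at an arbitrary point
x0, t0, h = 0.4, 0.9, 1e-4
y_tt = (y(x0, t0 + h) - 2*y(x0, t0) + y(x0, t0 - h)) / h**2
y_xx = (y(x0 + h, t0) - 2*y(x0, t0) + y(x0 - h, t0)) / h**2
print(np.isclose(y_tt, c**2 * y_xx, atol=1e-5))                # True
```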

10 The Diffusion Equation

Definition. Heat conduction in a solid in one dimension is modelled by the diffusion equation:

∂T/∂t = κ ∂²T/∂x²

This is a parabolic PDE, where T(x, t) is the temperature and the constant κ is the diffusivity.

Example 18. An infinitely long bar heated at one end. Note in general that the rate of change of T is proportional to the curvature of the temperature profile. There is a similarity solution of the diffusion equation, valid on an infinite domain, in which

T(x, t) = θ(η), where η = x/(2√(κt)).

Computing the derivatives:

∂²T/∂x² = (1/(4κt)) θ''(η)
∂T/∂t = −(η/(2t)) θ'

In the diffusion equation:

−(η/(2t)) θ' = κ (1/(4κt)) θ''  ⇒  θ'' + 2η θ' = 0

This is an ordinary differential equation for θ(η). Using an integrating factor, treating θ' as the variable, we have:

θ' = A e^{−η²}
⇒ θ = A ∫₀^η e^{−u²} du + B = α erf(η) + B

where

erf(y) = (2/√π) ∫₀^y e^{−u²} du.

For a bar of length L, this solution is valid while √(κt) ≪ L, i.e. t ≪ L²/κ.
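That θ = erf(η) satisfies θ'' + 2ηθ' = 0 can be spot-checked with finite differences, using `math.erf` from the Python standard library; the evaluation point and step are arbitrary:

```python
import math

# theta(eta) = erf(eta) should satisfy theta'' + 2 eta theta' = 0
def theta(eta):
    return math.erf(eta)

eta, h = 0.8, 1e-5
d1 = (theta(eta + h) - theta(eta - h)) / (2 * h)             # theta'
d2 = (theta(eta + h) - 2 * theta(eta) + theta(eta - h)) / h**2  # theta''
print(abs(d2 + 2 * eta * d1) < 1e-4)  # True
```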
