
Fourier Series and the Fourier Transform (Course Notes for Math 308H - Spring 2016)

Dr. Michael S. Pilant

April 26, 2016

1 Self-Adjoint Matrices

If A is a self-adjoint matrix, then (by definition)

\[ A = A^* = (\bar{A})^T = \bar{A}^T \]

where $A^T$ is the usual transpose, and $\bar{A}$ is the complex conjugate of $A$ (replacing $i$ by $-i$). Self-adjoint $n \times n$ matrices have many important properties:

1. They have a full set of $n$ linearly independent eigenvectors.
2. Eigenvectors corresponding to distinct eigenvalues are mutually orthogonal (and eigenvectors for a repeated eigenvalue can be chosen to be orthogonal).
3. All eigenvalues are real.

Recall that the inner product of two vectors in $\mathbb{R}^n$ is defined by

\[ \langle x, y \rangle = \sum_{k=1}^{n} \bar{x}_k\, y_k \]

where we take the conjugate (to allow complex-valued vectors).

If two eigenvectors $\xi_m, \xi_n$ are orthogonal (perpendicular), then their inner product vanishes, that is,

\[ \langle \xi_m, \xi_n \rangle = 0 \quad \text{if } m \neq n. \]

This implies that every vector in $\mathbb{R}^n$ can be written (uniquely!) in terms of the eigenvectors:

\[ x = \sum_{k=1}^{n} c_k\, \xi_k \]

Taking the inner product of this relation with $\xi_m$, we have

\[ \langle x, \xi_m \rangle = \sum_{k=1}^{n} c_k \langle \xi_k, \xi_m \rangle \]

Each of the terms $\langle \xi_k, \xi_m \rangle$ is zero except when $k = m$, which yields

\[ \langle x, \xi_m \rangle = c_m \langle \xi_m, \xi_m \rangle \]

This immediately gives us the $m$th coefficient of $x$ in the eigenvector expansion:

\[ c_m = \frac{\langle x, \xi_m \rangle}{\langle \xi_m, \xi_m \rangle} = \frac{\langle x, \xi_m \rangle}{\|\xi_m\|^2} \]

To simplify things in what follows, we assume that we normalize the eigenvectors so that they all have unit length. This is done by

\[ \xi_m \to \frac{\xi_m}{\|\xi_m\|} \]

With this simplifying assumption we have the all-important identity

\[ x = \sum_{k=1}^{n} c_k\, \xi_k = \sum_{k=1}^{n} \langle x, \xi_k \rangle\, \xi_k \]

This simple (but fundamental) representation allows us to solve matrix equations and matrix ordinary differential equations as if they were just single equations.
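As a numerical illustration (my own sketch, not part of the original notes; it assumes NumPy, whose numpy.linalg.eigh returns orthonormal eigenvectors of a real symmetric matrix):

import numpy as np

# A real symmetric (hence self-adjoint) matrix, chosen arbitrarily for illustration
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# eigh returns real eigenvalues and orthonormal eigenvectors (the columns of Q)
lam, Q = np.linalg.eigh(A)

# Coefficients of an arbitrary x in the eigenvector basis: c_k = <x, xi_k>
x = np.array([1.0, -2.0, 0.5])
c = Q.T @ x

# Reconstruct x from the expansion x = sum_k c_k xi_k
print(np.allclose(Q @ c, x))   # True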

2 Solving Self-Adjoint Matrix Equations

Suppose we wish to solve the following matrix system of equations

\[ Ax = b \]

where $A$ is a self-adjoint $n \times n$ matrix. Since $x$ and $b$ have unique eigenvector expansions, we have

\[ A \sum_{k=1}^{n} x_k\, \xi_k = \sum_{k=1}^{n} b_k\, \xi_k \]

or

\[ \sum_{k=1}^{n} x_k\, A \xi_k = \sum_{k=1}^{n} b_k\, \xi_k \]

or

\[ \sum_{k=1}^{n} x_k \lambda_k\, \xi_k = \sum_{k=1}^{n} b_k\, \xi_k \]

Since the expansion is unique, we must have $x_k \lambda_k = b_k$ for each $k$, that is,

\[ x_k = \frac{b_k}{\lambda_k} \]

This gives us the solution of the matrix equation in terms of $b$:

\[ b = \sum_{k=1}^{n} b_k\, \xi_k \;\longrightarrow\; x = \sum_{k=1}^{n} \frac{b_k}{\lambda_k}\, \xi_k \]

So solving the matrix system of equations is as simple as computing the coefficients of the right-hand side and dividing by the appropriate eigenvalues. (Note that we must have $\lambda_k \neq 0$ for $A$ to be invertible!)
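Continuing the numerical sketch from Section 1 (again my own illustration, assuming NumPy; the matrix and right-hand side are made up):

import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 0.0, -1.0])

# Eigenvalues lam[k] and orthonormal eigenvectors Q[:, k] of the self-adjoint A
lam, Q = np.linalg.eigh(A)

# b_k = <b, xi_k>, then x_k = b_k / lam_k, then x = sum_k x_k xi_k
x = Q @ ((Q.T @ b) / lam)      # requires lam[k] != 0, i.e. A invertible

print(np.allclose(A @ x, b))   # True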

3 Solving Self-Adjoint ODEs

Suppose we have the following system of ODEs:

\[ \frac{dx}{dt} = Ax + b(t) \]

where $A$ is a constant, self-adjoint matrix. Since the solution $x(t)$ is in $\mathbb{R}^n$ for each time $t$, we can write

\[ x(t) = \sum_{k=1}^{n} c_k(t)\, \xi_k \]

Substituting this relation into the matrix ODE gives us

\[ \frac{dx}{dt} = \sum_{k=1}^{n} c_k'(t)\, \xi_k = A \sum_{k=1}^{n} c_k(t)\, \xi_k + \sum_{k=1}^{n} b_k(t)\, \xi_k \]

Because of the uniqueness of the eigenvector expansion, we have a system of $n$ independent scalar ODEs:

\[ c_k'(t) = \lambda_k c_k(t) + b_k(t) \]

Initial conditions are determined by

\[ x_0 = x(0) = \sum_{k=1}^{n} c_k(0)\, \xi_k \]

and therefore

\[ c_k(0) = \langle x(0), \xi_k \rangle \]

The initial value problem for $c_k(t)$ can be easily solved by the method of integrating factors.
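For reference (this formula is standard, though not spelled out in the notes above): multiplying by the integrating factor $e^{-\lambda_k t}$ and integrating gives the explicit solution

\[ c_k(t) = e^{\lambda_k t}\, c_k(0) + \int_0^t e^{\lambda_k (t - s)}\, b_k(s)\, ds \]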

4 Self-Adjoint Operators

We can generalize the notion of inner product from vectors to functions by

\[ \langle f, g \rangle = \int \bar{f}(x)\, g(x)\, dx \]

Specifically, consider the space of functions defined on $[0, 1]$, where

\[ \langle f, g \rangle = \int_0^1 \bar{f}(x)\, g(x)\, dx \]

Let $L$ denote a linear operator on this space, that is, an operator satisfying

\[ L[f + g] = L[f] + L[g], \qquad L[cf] = c\, L[f] \]

The adjoint of $L$ is denoted by $L^*$ and is formally defined by

\[ \langle L^* f, g \rangle = \langle f, Lg \rangle \]

for all $f, g$. Specifically, we will consider the case

\[ L = -\frac{d^2}{dx^2} \]

defined on functions with two continuous derivatives that vanish at $x = 0$ and $x = 1$. In this case we have

\[ \langle f, Lg \rangle = -\int_0^1 \bar{f}(x)\, g''(x)\, dx \]

Integrating by parts (twice), we have the identity

\[ \langle f, Lg \rangle = -\int_0^1 \bar{f}(x)\, g''(x)\, dx = -\bar{f}(x)\, g'(x)\Big|_0^1 + \bar{f}'(x)\, g(x)\Big|_0^1 - \int_0^1 \bar{f}''(x)\, g(x)\, dx \]

The boundary terms vanish (since $f$ and $g$ vanish at the endpoints), and we have

\[ \langle L^* f, g \rangle \equiv \langle f, Lg \rangle = \langle Lf, g \rangle \]

So L is self-adjoint.
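A quick numerical sanity check (my own illustration, not from the notes): discretizing $L = -d^2/dx^2$ with the standard second-order finite difference on a uniform grid, with zero boundary values, produces a symmetric matrix, so the discrete analogue of $\langle f, Lg \rangle = \langle Lf, g \rangle$ holds automatically:

import numpy as np

N = 100                            # number of interior grid points
h = 1.0 / (N + 1)                  # grid spacing on [0, 1]
x = np.linspace(h, 1.0 - h, N)     # interior points (boundary values are 0)

# Standard finite-difference approximation of L = -d^2/dx^2 with Dirichlet BCs
L = (2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / h**2

f = np.sin(np.pi * x)              # two test functions vanishing at the endpoints
g = x * (1.0 - x)

# Discrete inner products <f, Lg> and <Lf, g> agree because L is symmetric
print(np.allclose(f @ (L @ g), (L @ f) @ g))   # True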

5 Eigenvalues of Self-Adjoint Operators

To find the eigenvalues of $L$, we need to find a nonzero function $u$ satisfying

\[ -u''(x) = Lu = \lambda u(x), \qquad u(0) = u(1) = 0 \]

This has a very simple solution, the eigenvalue/eigenfunction pairs

\[ \lambda_k = k^2 \pi^2, \qquad u_k(x) = \sin(k\pi x) \]

As in the case of self-adjoint matrices, these functions are orthogonal, that is,

\[ \langle \sin(m\pi x), \sin(n\pi x) \rangle = \int_0^1 \sin(m\pi x)\, \sin(n\pi x)\, dx = \begin{cases} 0, & m \neq n \\[4pt] \dfrac{1}{2}, & m = n \end{cases} \]

This leads to the eigenfunction expansion

\[ f(x) = \sum_{k=1}^{\infty} f_k \sin(k\pi x) \]

where

\[ f_k = 2 \int_0^1 f(x)\, \sin(k\pi x)\, dx \]

This is called the Fourier Sine Series expansion of $f(x)$.

There is a corresponding Fourier Cosine Series expansion of f(x) given by

\[ f(x) = \frac{1}{2} f_0 + \sum_{k=1}^{\infty} f_k \cos(k\pi x) \]

where

\[ f_k = 2 \int_0^1 f(x)\, \cos(k\pi x)\, dx \]
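As a concrete check (my own example, assuming NumPy; the test function $f(x) = x(1-x)$ is arbitrary), the sine coefficients can be approximated by numerical integration and the partial sum compared against $f$:

import numpy as np

x = np.linspace(0.0, 1.0, 2001)
dx = x[1] - x[0]
f = x * (1.0 - x)              # a sample function with f(0) = f(1) = 0

# f_k = 2 * integral_0^1 f(x) sin(k pi x) dx, approximated by a Riemann sum
def sine_coeff(k):
    return 2.0 * np.sum(f * np.sin(k * np.pi * x)) * dx

# Partial sum of the Fourier sine series with 25 terms
partial = sum(sine_coeff(k) * np.sin(k * np.pi * x) for k in range(1, 26))

print(np.max(np.abs(f - partial)))   # small; the series converges rapidly here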

6 Partial Differential Equations with Self-Adjoint Operators

One very famous (and important) example of a partial differential equation is called the heat equation:

\[ \frac{\partial u}{\partial t} - \frac{\partial^2 u}{\partial x^2} = f(x, t) \]

with initial condition $u(x, 0) = u_0(x)$ and boundary conditions $u(0, t) = u(1, t) = 0$.

Using the eigenfunctions of $L$ (where $Lu = -u_{xx}$) for this domain, which are

\[ \xi_n(x) = \sin(n\pi x) \]

we substitute

\[ u(x, t) = \sum_{k=1}^{\infty} c_k(t)\, \sin(k\pi x) \]

into the partial differential equation to get

\[ \sum_{k=1}^{\infty} c_k'(t)\, \sin(k\pi x) = \frac{\partial u}{\partial t} = u_{xx}(x, t) + f(x, t) = \sum_{k=1}^{\infty} c_k(t)\, (-k^2\pi^2) \sin(k\pi x) + \sum_{k=1}^{\infty} f_k(t)\, \sin(k\pi x) \]

Again, using the uniqueness of the eigenfunction expansion, we must have

\[ c_k'(t) = -k^2 \pi^2 c_k(t) + f_k(t) \]

with initial conditions

\[ c_k(0) = 2\, \langle u(x, 0), \sin(k\pi x) \rangle \]

and $f_k$ determined by

\[ f_k(t) = 2\, \langle f(x, t), \sin(k\pi x) \rangle \]

The partial differential equation reduces to an independent set of scalar ODEs.
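Putting the pieces together, here is a minimal sketch of this procedure for the homogeneous case $f = 0$ (my own illustration, assuming NumPy; the initial data $u_0(x) = x(1-x)$ is arbitrary), where each coefficient ODE has the explicit solution $c_k(t) = c_k(0)\, e^{-k^2\pi^2 t}$:

import numpy as np

x = np.linspace(0.0, 1.0, 1001)
dx = x[1] - x[0]
u0 = x * (1.0 - x)                        # initial condition u(x, 0)

K = 50                                    # number of sine modes retained
k = np.arange(1, K + 1)
modes = np.sin(np.pi * np.outer(k, x))    # row j holds sin(k_j pi x)

# c_k(0) = 2 <u(x,0), sin(k pi x)>, approximated by a Riemann sum
c0 = 2.0 * (modes @ u0) * dx

# With f = 0, each coefficient decays: c_k(t) = c_k(0) exp(-k^2 pi^2 t)
t = 0.1
c_t = c0 * np.exp(-(k * np.pi) ** 2 * t)

# Reassemble the solution u(x, t) = sum_k c_k(t) sin(k pi x)
u_t = c_t @ modes
print(u_t.max())                          # the peak has decayed from u0.max() = 0.25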

7 Fourier Transform

If we look at a larger domain $[0, T]$, the eigenfunctions of $L[u] = -u''(x)$ are $\sin((k\pi/T)x)$ and the eigenvalues are $k^2\pi^2/T^2$. Even though the eigenvalues are still discrete, they are spaced more closely together. In the limit as $T \to \infty$ we get a continuum of eigenvalues. This leads us to try to expand functions $f(x)$ in terms of $\exp(ikx)$. Since there is a continuum of eigenvalues, we use an integral instead of a discrete sum:

\[ f(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} F(k)\, e^{ikx}\, dk \]

One can show that

\[ F(k) = \int_{-\infty}^{\infty} f(x)\, e^{-ikx}\, dx \]

$F(k)$ is called the Fourier Transform of $f(x)$. The Fourier transform and its inverse transform are closely related - they differ only by a constant and a minus sign in the exponent. The Fourier transform is defined for all functions which are piecewise continuous and satisfy

\[ \int_{-\infty}^{\infty} |f(x)|\, dx < \infty \]
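As an illustration (my own example, assuming NumPy), the transform integral can be approximated by a Riemann sum and checked against the known closed form $F(k) = \sqrt{\pi}\, e^{-k^2/4}$ for the Gaussian $f(x) = e^{-x^2}$:

import numpy as np

x = np.linspace(-10.0, 10.0, 4001)     # truncate the infinite domain
dx = x[1] - x[0]
f = np.exp(-x**2)                      # f(x) = exp(-x^2), rapidly decaying

def fourier_transform(k):
    # Riemann-sum approximation of F(k) = integral f(x) exp(-i k x) dx
    return np.sum(f * np.exp(-1j * k * x)) * dx

k = 2.0
exact = np.sqrt(np.pi) * np.exp(-k**2 / 4.0)   # known transform of exp(-x^2)
print(abs(fourier_transform(k) - exact))       # very small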

8 Discrete Fourier Transform

If we have a vector $x = (x_0, \ldots, x_{N-1})$ of finite length $N$, we can write

\[ X_k = \sum_{n=0}^{N-1} x_n\, e^{-i 2\pi k n / N} \]

and the inverse

\[ x_n = \frac{1}{N} \sum_{k=0}^{N-1} X_k\, e^{i 2\pi k n / N} \]

This is often written in the more symmetric form

\[ X_k = \frac{1}{\sqrt{N}} \sum_{n=0}^{N-1} x_n\, e^{-i 2\pi k n / N} \]

\[ x_n = \frac{1}{\sqrt{N}} \sum_{k=0}^{N-1} X_k\, e^{i 2\pi k n / N} \]
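In practice these sums are computed with the fast Fourier transform. A minimal check (my own example, assuming NumPy, whose numpy.fft.fft and numpy.fft.ifft use the unsymmetric $1/N$ convention shown first) that the definition and the library agree:

import numpy as np

N = 8
x = np.random.rand(N)

# Direct evaluation of X_k = sum_n x_n exp(-i 2 pi k n / N)
n = np.arange(N)
X_direct = np.array([np.sum(x * np.exp(-2j * np.pi * k * n / N)) for k in range(N)])

# The FFT computes the same sums in O(N log N)
X_fft = np.fft.fft(x)
print(np.allclose(X_direct, X_fft))        # True

# The inverse transform recovers x (note the 1/N factor in np.fft.ifft)
print(np.allclose(np.fft.ifft(X_fft), x))  # True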