
Summary of Lecture 5, Oct. 6:

Sect. 2.6. Dual Spaces. If $V$ is a vector space over a field $F$, the dual space $V^*$ is defined as the space of all linear maps from $V$ into $F$. It is clearly a vector space. If $f \in V^*$ and $v \in V$, it clarifies things to write $\langle f, v\rangle$ instead of $f(v)$. In this way, the roles of $f$ and $v$ are seen to be symmetric. Then $\langle f, v\rangle$ is bilinear in $f$ and $v$. Further, (by definition) if $\langle f, v\rangle = 0$ for all $v \in V$, we have $f = 0$. Similarly, if $\langle f, v\rangle = 0$ for all $f \in V^*$, we have $v = 0$. (If $v \neq 0$, there exists $f \in V^*$ such that $f(v) = 1$: extend $v$ to a basis of $V$ and use the existence of a linear transformation whose values on basis vectors may be arbitrarily assigned, taking $f(v) = 1$ and $f = 0$ on the remaining basis vectors.)

For any basis $\beta = \{x_1, \dots, x_n\}$ of $V$, we can find $f_1, \dots, f_n \in V^*$ with $\langle f_i, x_j\rangle = \delta_{ij}$ (the Kronecker delta). Then these $f_i$ form a basis of $V^*$, since if $f \in V^*$ with $f(x_i) = a_i$, then $\sum a_i f_i = f$, since they agree on the basis vectors. Note the important formula

$f = \sum_i \langle f, x_i\rangle f_i$

where the $f_i$ and $x_i$ are dual bases.
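As a concrete illustration (an example added here, not from the lecture): take $V = F^2$ with the basis $x_1 = (1,0)$, $x_2 = (1,1)$. The dual basis is

$f_1(a,b) = a - b, \qquad f_2(a,b) = b,$

since $f_1(x_1) = 1$, $f_1(x_2) = 0$, $f_2(x_1) = 0$, $f_2(x_2) = 1$. For, say, $f(a,b) = a + b$, the formula above gives $f = \langle f, x_1\rangle f_1 + \langle f, x_2\rangle f_2 = 1\cdot f_1 + 2\cdot f_2 = (a - b) + 2b = a + b$, as it should.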

$V^{**} = V$ (for $V$ finite dimensional), since any $x \in V$ can be regarded as a linear functional on $V^*$: think of $\langle f, x\rangle$ as a function of $f$. It is a linear functional. By dimension considerations, the functionals obtained this way consist of all of $V^{**}$. More formally, for $x \in V$, define $\psi(x)$ as a linear functional on $V^*$ according to the formula:

$\psi(x)(f) = f(x)$

Then $\psi : V \to V^{**}$. Now you must prove that $\psi$ is linear, 1-1, and onto.
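A sketch of that proof (filled in here; the lecture leaves it as an exercise): linearity follows directly from the definition, since

$\psi(ax + y)(f) = f(ax + y) = af(x) + f(y) = \big(a\psi(x) + \psi(y)\big)(f).$

For 1-1: if $\psi(x) = 0$, then $\langle f, x\rangle = 0$ for all $f \in V^*$, so $x = 0$ by the remark above. For onto: $\psi$ is 1-1 and $\dim V^{**} = \dim V^* = \dim V$ (each dual basis has $n$ elements), so the image of $\psi$ is all of $V^{**}$.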

Let $T : V \to W$. Then $\langle f, Tv\rangle$ is bilinear in $f \in W^*$ and $v \in V$, so for fixed $f$ it is linear in $v$. So there is a $g \in V^*$ such that $\langle f, Tv\rangle = \langle g, v\rangle$. Here $g$ depends linearly on $f$, and we write $g = T^t(f)$. Note that $T^t$ maps $W^*$ into $V^*$. Thus, if $T : V \to W$ then $T^t : W^* \to V^*$ with

$\langle f, T(v)\rangle = \langle T^t(f), v\rangle$

Suppose $T : V \to W$, and bases are chosen for $V$ and $W$. Then there are dual bases in $V^*$ and $W^*$. Now show that if these dual bases are used, the matrix of $T^t$ is the transpose of the matrix of $T$.
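A sketch of that computation (added here, under the convention that $A = (A_{ij})$ is the matrix of $T$, i.e. $Tx_j = \sum_i A_{ij} y_i$ for bases $\{x_j\}$ of $V$ and $\{y_i\}$ of $W$ with dual bases $\{f_j\}$ and $\{g_i\}$):

$\langle T^t(g_i), x_j\rangle = \langle g_i, Tx_j\rangle = \big\langle g_i, \sum_k A_{kj} y_k\big\rangle = A_{ij},$

so by the formula $f = \sum_j \langle f, x_j\rangle f_j$ above, $T^t(g_i) = \sum_j A_{ij} f_j$. The $(j,i)$ entry of the matrix of $T^t$ is thus $A_{ij}$; that matrix is $A^t$.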

Sect. 2.7. Linear Differential Equations. $V = C^\infty(\mathbb{R})$. For convenience, these are complex valued functions over the field of complex numbers. If $p(t)$ is an $n$-th degree polynomial, and $L$ is a transformation of $V$, then it makes sense to talk about $p(L)$. We look at the transformation $L = D$, where $D$ is the differentiation operator. Then

$p(D) = D^n + a_{n-1}D^{n-1} + \cdots + a_1D + a_0I$ is an $n$-th order differential operator on $V$. An $n$-th order differential equation with constant coefficients is simply the equation $p(D)y = 0$. Another operator on $V$ is left multiplication by a function $x(t)$. We write this as $x(t)I$ to stress $x$ as operator, rather than as element of $V$. Now compute $D(x(t)I)$ by finding its action on any function $y \in V$:

$D(x(t)I)y = D(x(t)y) = x(t)y' + x'(t)y = \big((x(t)I)D + (x'(t)I)\big)y$

So $D(xI) = (xI)D + x'I$. In particular, for $x(t) = e^{\lambda t}$, we have $x' = \lambda x$, so

$D(e^{\lambda t}I) = (e^{\lambda t}I)D + \lambda(e^{\lambda t}I)$, or $(e^{-\lambda t}I)\,D\,(e^{\lambda t}I) = D + \lambda I$. Corollary: The equation $(D - \lambda I)y = 0$ has the one dimensional solution space generated by $y = e^{\lambda t}$, namely $y = ce^{\lambda t}$.
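To spell out how the corollary follows from the conjugation identity (a step filled in here): replacing $\lambda$ by $-\lambda$ gives $(e^{\lambda t}I)\,D\,(e^{-\lambda t}I) = D - \lambda I$, and since $e^{\lambda t}I$ is invertible,

$(D - \lambda I)y = 0 \iff D(e^{-\lambda t}y) = 0 \iff e^{-\lambda t}y = c \iff y = ce^{\lambda t}.$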

Corollary: $D - \lambda I$ is onto. Using the factorization of $p(t)$ into linear factors over $\mathbb{C}$, $p(D)$ is a composition of operators of this form, hence onto.
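One way to see that $D - \lambda I$ is onto (a standard integrating factor computation, added here): given $g \in V$, set

$y(t) = e^{\lambda t}\int_0^t e^{-\lambda s}g(s)\,ds.$

Then $y' = \lambda y + e^{\lambda t}e^{-\lambda t}g(t) = \lambda y + g$, i.e. $(D - \lambda I)y = g$, and $y \in C^\infty(\mathbb{R})$ since $g$ is.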

Theorem. (p. 126, Lemma 2.) If $V$ is a vector space, and $U$ and $T$ are operators on $V$ such that $U$ is onto, and $N(T)$ and $N(U)$ are finite dimensional, then $N(TU)$ is finite dimensional and $\dim N(TU) = \dim N(T) + \dim N(U)$.

$V \xrightarrow{\;U\;} V \xrightarrow{\;T\;} V \qquad (U \text{ onto})$

Proof: Let $u_1, \dots, u_p$ be a basis for $N(T)$ and $v_1, \dots, v_q$ be a basis for $N(U)$. Since $U$ is onto, we have $u_i = U(w_i)$ for some $w_i$, $i = 1, \dots, p$. We claim $w_1, \dots, w_p, v_1, \dots, v_q$ is a basis for $N(TU)$. Spanning: suppose $TU(v) = 0$. Then $Uv \in N(T)$, so $Uv = \sum a_i u_i = \sum a_i Uw_i$. But this gives $U(v - \sum a_i w_i) = 0$, so $v - \sum a_i w_i \in N(U)$, so $v - \sum a_i w_i = \sum b_j v_j$. Linear independence: apply $U$ to a linear combination that equals $0$; the $v_j$ terms vanish, leaving $\sum a_i u_i = 0$, so the $a_i = 0$, and then the independence of the $v_j$ gives the result.

Corollary. If $p(D)$ is an $n$-th order linear differential operator with constant coefficients, the solution space of $p(D)y = 0$ has dimension $n$. Proof by induction on the degree of $p(t)$, factoring off one linear factor $D - \lambda I$ at a time and applying the theorem (each such factor is onto with one dimensional null space).
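For example (a standard illustration added here): $p(t) = t^2 - 3t + 2 = (t-1)(t-2)$ gives $p(D) = (D - I)(D - 2I)$, so the equation

$y'' - 3y' + 2y = 0$

has the two dimensional solution space spanned by $e^t$ and $e^{2t}$: each factor contributes the one dimensional null space of $D - \lambda I$ with $\lambda = 1, 2$.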

It is now a simple matter to show that the solution set of the equation $p(D)y = x(t)$ is a coset of the $n$-dimensional kernel of $p(D)$. For if $p(D)y_0 = x(t)$ for some $y_0$ (possible since $p(D)$ is onto) and $y$ is any other solution of $p(D)y = x(t)$, then $p(D)(y - y_0) = 0$, so $y - y_0$ is in the kernel of $p(D)$.
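A small example of this coset description (added here): for $y' - y = 1$, i.e. $(D - I)y = 1$, the constant $y_0 = -1$ is one solution, so the full solution set is

$y = ce^t - 1, \qquad c \in \mathbb{C},$

the coset $y_0 + N(D - I)$.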
