
Supplementary Problems for Chapter 2

1. Let u_1, u_2, \ldots, u_n be continuous, real-valued functions on an interval [a, b] of the real axis. Show that a necessary and sufficient condition for these functions to be linearly dependent there is that the Gram matrix A = (a_{ij}) be singular (for the Gram matrix, a_{ij} = \int_a^b u_i(x) u_j(x)\, dx). Suppose first that these functions are linearly dependent. Then there exist c_1, c_2, \ldots, c_n, not all zero, such that

c_1 u_1(x) + c_2 u_2(x) + \cdots + c_n u_n(x) = 0, \qquad x \in [a, b].

Multiplying by u_j(x) and integrating gives

\sum_{k=1}^{n} a_{jk} c_k = 0 \qquad (j = 1, 2, \ldots, n),

i.e., Ac = 0 for a non-zero vector c. This implies that A is singular. Conversely, suppose A is singular; choose a non-zero vector c = (c_1, c_2, \ldots, c_n)^t so that Ac = 0, form the sum above with these constants c_j, square it, and integrate:

" n #2 n Z b X Z b X ckuk(x) dx = cjuj(x)uk(x)ck dx. a k=1 a j,k=1

The last sum is c^t A c = 0. Thus the integral of the squared function vanishes. For a continuous function this can only happen if the function vanishes, i.e., c_1 u_1(x) + \cdots + c_n u_n(x) = 0 for this choice of the constants.
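As a quick numerical illustration (not part of the original problem), the Gram matrix can be formed by quadrature and its singularity tested. The sketch below is my own; the three functions chosen are deliberately dependent, and scipy.integrate.quad is assumed available.

# Sketch: test linear dependence of continuous functions on [a, b]
# via the Gram matrix A with a_ij = integral_a^b u_i(x) u_j(x) dx.
import numpy as np
from scipy.integrate import quad

a, b = 0.0, 1.0
# Example family: the third function equals 2*(first) + (second),
# so the set is linearly dependent and A should be singular.
funcs = [lambda x: 1.0,
         lambda x: x,
         lambda x: 2.0 + x]

n = len(funcs)
A = np.array([[quad(lambda x, ui=ui, uj=uj: ui(x) * uj(x), a, b)[0]
               for uj in funcs] for ui in funcs])

# A zero determinant (up to round-off) signals linear dependence.
print("det A =", np.linalg.det(A))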

In the next two problems, y is an n-component vector and A an n × n matrix with entries a_{ij}(t) that are defined and continuous for all real t. Explain how the solution may be expressed in quadratures, but do not give the full expressions.

2. Consider the system of n equations

\frac{dy}{dt} = A(t)\, y \qquad (1)

and suppose the matrix A is upper-triangular: a_{ij}(t) = 0 if i > j. Explain how the solution of this equation can be obtained through the solution of n first-order equations (do not try to give the complete formula for the solution). The last equation is

\frac{dy_n}{dt} = a_{nn}(t)\, y_n,

which can be solved in the form

y_n(t) = c_n \exp\left\{ \int_{t_0}^{t} a_{nn}(s)\, ds \right\}.

The next-to-last equation is then

\frac{dy_{n-1}}{dt} = a_{(n-1)(n-1)}(t)\, y_{n-1} + a_{(n-1)n}(t)\, y_n(t).

Since y_n(t) is known from the first step, this is an inhomogeneous first-order equation which can be solved by a known formula, giving y_{n-1} as an explicit function of t. The remaining equations, for the components y_{n-2} down to y_1, can be solved successively in this manner.
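A small numerical sketch of this successive solution, under my own choice of a 2 × 2 upper-triangular A(t) and with scipy's solve_ivp standing in for the explicit quadrature of each scalar equation:

# Sketch: solve dy/dt = A(t) y with A upper-triangular by working upward
# from the last component, one scalar first-order equation at a time.
import numpy as np
from scipy.integrate import solve_ivp

def A(t):                      # illustrative upper-triangular coefficients
    return np.array([[1.0, np.sin(t)],
                     [0.0, np.cos(t)]])

t0, t1 = 0.0, 2.0
c = np.array([1.0, 2.0])       # initial data y(t0) = c
n = len(c)

sols = [None] * n
for i in range(n - 1, -1, -1): # i = n-1, ..., 0: last component first
    def rhs(t, y, i=i):
        known = sum(A(t)[i, j] * sols[j].sol(t)[0] for j in range(i + 1, n))
        return A(t)[i, i] * y + known
    sols[i] = solve_ivp(rhs, (t0, t1), [c[i]], dense_output=True, rtol=1e-8)

# Compare with a direct integration of the full system.
full = solve_ivp(lambda t, y: A(t) @ y, (t0, t1), c, rtol=1e-8)
print([s.sol(t1)[0] for s in sols], full.y[:, -1])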

3. Consider (1) but now with a_{ij}(t) = a_j(t), i.e., a_{ij} is independent of i. Let the initial data be y_i(t_0) = c_i.

Note that \dot{y}_i = \dot{y}_j for each i, j = 1, \ldots, n. We may therefore write

y_2(t) = y_1(t) + \gamma_2, \quad y_3(t) = y_1(t) + \gamma_3, \ \ldots \qquad (2)

where \gamma_i = c_i - c_1. Substituting these into the differential equation for y_1 now gives

\dot{y}_1 = a(t)\, y_1 + b(t), \qquad y_1(t_0) = c_1,

where

a(t) = \sum_{j=1}^{n} a_j(t), \qquad b(t) = \sum_{j=2}^{n} \gamma_j a_j(t).

The equation for y_1 can be solved in quadratures, and the remaining components y_2, \ldots, y_n can then be obtained from equation (2).
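The reduction can be checked numerically; the coefficient functions a_j(t) and initial data below are my own illustrative choices, and scipy's solve_ivp is assumed.

# Sketch: when a_ij(t) = a_j(t) every component has the same derivative,
# so y_i(t) = y_1(t) + (c_i - c_1) and y_1 solves a single scalar equation.
import numpy as np
from scipy.integrate import solve_ivp

a_funcs = [lambda t: 1.0, lambda t: np.sin(t), lambda t: t]   # a_1, a_2, a_3
c = np.array([1.0, 2.0, -1.0])                                # initial data
t0, t1 = 0.0, 1.5
gamma = c - c[0]                                              # gamma_i = c_i - c_1

def a(t): return sum(f(t) for f in a_funcs)                   # sum_j a_j(t)
def b(t): return sum(g * f(t) for g, f in zip(gamma[1:], a_funcs[1:]))

# Scalar equation for y_1, then recover the other components from (2).
y1 = solve_ivp(lambda t, y: a(t) * y + b(t), (t0, t1), [c[0]], rtol=1e-9)
y_reduced = y1.y[0, -1] + gamma

# Direct integration of the full system for comparison.
A = lambda t: np.tile([f(t) for f in a_funcs], (3, 1))
full = solve_ivp(lambda t, y: A(t) @ y, (t0, t1), c, rtol=1e-9)
print(y_reduced, full.y[:, -1])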

Answers to Selected Problems in Chapter 2

1. PS 2.3.1

• 3) 1, x, e^{-x} are a basis of solutions. Their Wronskian is e^{-x}.

• 4) Solutions are 1, x, \cos x, \sin x with Wronskian = 1.

• 6) The formula for the determinant |a| is

|a| = \sum \epsilon_{i_1 i_2 \cdots i_n}\, a_{1 i_1} a_{2 i_2} \cdots a_{n i_n},

where the symbol \epsilon_{i_1 i_2 \cdots i_n} has the value \pm 1 depending on whether the string i_1 i_2 \cdots i_n is an even or an odd permutation of 12 \cdots n, and the sum is extended over all permutations. If each entry a_{ij}(x) is a differentiable function of x, then differentiating each product gives a sum of n terms; collecting these over all permutations expresses the derivative of |a| as a sum of n determinants, which can easily be identified with the given formula.

If the determinant is the Wronskian, so that a_{1k} = u_k(x), a_{2k} = u_k'(x), etc., it is easily seen that each determinant on the right-hand side in the formula has two rows equal and vanishes, with the exception of the last. In that determinant, the bottom row is (u_1^{(n)}, \ldots, u_n^{(n)}). This remaining determinant can likewise be reduced by using the differential equation on each term in the bottom row. There results a sum of determinants having proportional rows, leaving only one term, -p_1(x)\,|a(x)|. This provides the formula in question.

• 7) One solution, by inspection, of the inhomogeneous problem is u_P(x) = x. The homogeneous problem has the basis u_1 = 1, u_2 = \sin x, u_3 = \cos x, so the most general solution is

u(x) = x + c_1 + c_2 \sin x + c_3 \cos x

for arbitrary constants c_1, c_2, c_3.
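A quick symbolic check (sympy assumed) that this expression satisfies u''' + u' = 1, the equation inferred here from the stated particular solution and homogeneous basis rather than taken from the original problem statement:

# Sketch: verify that u = x + c1 + c2*sin x + c3*cos x solves u''' + u' = 1,
# with the equation inferred from the stated basis and u_P = x.
import sympy as sp

x, c1, c2, c3 = sp.symbols('x c1 c2 c3')
u = x + c1 + c2 * sp.sin(x) + c3 * sp.cos(x)
print(sp.simplify(sp.diff(u, x, 3) + sp.diff(u, x)))   # expected output: 1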

• 8) The influence function has the form G(x, s) = c_1(s) + c_2(s) \sin x + c_3(s) \cos x, satisfying the conditions (see equation 2.31) G(s, s) = G_x(s, s) = 0, G_{xx}(s, s) = 1. These give the three equations

c_1(s) + c_2(s) \sin s + c_3(s) \cos s = 0,

c_2(s) \cos s - c_3(s) \sin s = 0,

-c_2(s) \sin s - c_3(s) \cos s = 1.

The solution is easily found to be

G(x, s) = 1 - \sin x \sin s - \cos x \cos s = 1 - \cos(x - s).
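The three conditions at x = s and the reduction to 1 - \cos(x - s) can be verified symbolically; the short sympy sketch below is my own addition.

# Sketch: verify that G(x, s) = 1 - sin x sin s - cos x cos s satisfies
# G(s,s) = G_x(s,s) = 0, G_xx(s,s) = 1, and equals 1 - cos(x - s).
import sympy as sp

x, s = sp.symbols('x s')
G = 1 - sp.sin(x) * sp.sin(s) - sp.cos(x) * sp.cos(s)

print(sp.simplify(G.subs(x, s)))                      # G(s,s)    -> 0
print(sp.simplify(sp.diff(G, x).subs(x, s)))          # G_x(s,s)  -> 0
print(sp.simplify(sp.diff(G, x, 2).subs(x, s)))       # G_xx(s,s) -> 1
print(sp.simplify(G - (1 - sp.cos(x - s))))           # identity  -> 0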

• 12) With v(x) = \Phi(x) w(x),

v'(x) = \Phi'(x) w(x) + \Phi(x) w'(x) = A(x) \Phi(x) w(x) + \Phi(x) w'(x),

so, since \Phi' = A\Phi, the equation reduces to \Phi(x) w'(x) = r(x), or

w(x) = \int_{x_0}^{x} \Phi^{-1}(s)\, r(s)\, ds;

therefore a particular integral is

\Phi(x) \int_{x_0}^{x} \Phi^{-1}(s)\, r(s)\, ds.
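A numerical sketch of this particular integral for a small constant-coefficient example (the matrix A and forcing r below are my own choices; numpy/scipy assumed), taking \Phi(x) = e^{xA} as the fundamental matrix:

# Sketch: particular integral v_P(x) = Phi(x) * integral_{x0}^{x} Phi^{-1}(s) r(s) ds
# for v' = A v + r, using Phi(x) = expm(x A) for a constant matrix A.
import numpy as np
from scipy.linalg import expm
from scipy.integrate import quad_vec

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])              # illustrative constant coefficients
r = lambda s: np.array([np.sin(s), 1.0])  # illustrative forcing term
x0, x = 0.0, 1.0

Phi = lambda t: expm(t * A)               # fundamental matrix with Phi(0) = I
integral, _ = quad_vec(lambda s: np.linalg.solve(Phi(s), r(s)), x0, x)
vP = Phi(x) @ integral

# Check: v_P should satisfy v' - A v = r at x (finite-difference test).
h = 1e-6
vP_h = Phi(x + h) @ (integral + quad_vec(lambda s: np.linalg.solve(Phi(s), r(s)), x, x + h)[0])
print((vP_h - vP) / h - A @ vP, r(x))     # the two printed vectors should agree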

• 13) Write the fundamental matrix solution in the form

\Phi(x) = \begin{pmatrix}
u_1^{(1)} & u_1^{(2)} & \cdots & u_1^{(n)} \\
u_2^{(1)} & u_2^{(2)} & \cdots & u_2^{(n)} \\
\vdots & \vdots & \ddots & \vdots \\
u_n^{(1)} & u_n^{(2)} & \cdots & u_n^{(n)}
\end{pmatrix}

in terms of n linearly independent column vectors \{u^{(j)}\}, j = 1, 2, \ldots, n. Then differentiating the determinant |\Phi(x)| leads to n determinants:

\frac{d|\Phi(x)|}{dx} = \begin{vmatrix}
u_1^{(1)\,\prime} & u_1^{(2)\,\prime} & \cdots & u_1^{(n)\,\prime} \\
u_2^{(1)} & u_2^{(2)} & \cdots & u_2^{(n)} \\
\vdots & \vdots & \ddots & \vdots \\
u_n^{(1)} & u_n^{(2)} & \cdots & u_n^{(n)}
\end{vmatrix} + \text{ETC},

where ETC means the sum of the remaining n - 1 determinants. In the determinant above, using the differential equations

u_1^{(1)\,\prime} = \sum_{j=1}^{n} a_{1j}\, u_j^{(1)}

in each of the terms of the first row leads, after noting that it can be written as the sum of n determinants of which n - 1 vanish, to the formula

\frac{d|\Phi(x)|}{dx} = a_{11}(x)\, |\Phi(x)| + \text{ETC}.

Treating the remaining terms in the sum ETC in the same way leads to

\frac{d|\Phi(x)|}{dx} = a(x)\, |\Phi(x)|,

where a(x) = \mathrm{Tr}\, A(x) = \sum_{j=1}^{n} a_{jj}(x).
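This trace formula can be checked numerically for an illustrative A(x) by integrating \Phi' = A\Phi and comparing |\Phi(x)| with the exponential of the integrated trace; the coefficient matrix below is my own choice, and numpy/scipy are assumed.

# Sketch: check d|Phi|/dx = Tr A(x) |Phi|, i.e.
# |Phi(x)| = |Phi(x0)| * exp( integral_{x0}^{x} Tr A(s) ds ).
import numpy as np
from scipy.integrate import solve_ivp, quad

A = lambda x: np.array([[np.sin(x), 1.0],
                        [x,         np.cos(x)]])   # illustrative coefficients
x0, x1 = 0.0, 2.0

# Integrate Phi' = A(x) Phi with Phi(x0) = I, flattening the matrix for solve_ivp.
rhs = lambda x, p: (A(x) @ p.reshape(2, 2)).ravel()
sol = solve_ivp(rhs, (x0, x1), np.eye(2).ravel(), rtol=1e-10, atol=1e-12)

det_numeric = np.linalg.det(sol.y[:, -1].reshape(2, 2))
trace_int, _ = quad(lambda s: np.trace(A(s)), x0, x1)
print(det_numeric, np.exp(trace_int))              # the two values should agree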
