
1967

A numerical method of characteristics for solving hyperbolic partial differential equations

David Lenz Simpson

Iowa State University


Recommended Citation Simpson, David Lenz, "A numerical method of characteristics for solving hyperbolic partial differential equations " (1967). Retrospective Theses and Dissertations. 3430. https://lib.dr.iastate.edu/rtd/3430

This dissertation has been microfilmed exactly as received. 68-2862

SIMPSON, David Lenz, 1938- A NUMERICAL METHOD OF CHARACTERISTICS FOR SOLVING HYPERBOLIC PARTIAL DIFFERENTIAL EQUATIONS.

Iowa State University, Ph.D., 1967

Mathematics

University Microfilms, Inc., Ann Arbor, Michigan

A NUMERICAL METHOD OF CHARACTERISTICS

FOR SOLVING HYPERBOLIC PARTIAL DIFFERENTIAL EQUATIONS

by

David Lenz Simpson

A Dissertation Submitted to the

Graduate Faculty in Partial Fulfillment of

The Requirements for the Degree of

DOCTOR OF PHILOSOPHY

Major Subject: Mathematics

Approved:

Signature was redacted for privacy.

In Charge of Major Work

Signature was redacted for privacy.

Head of Major Department

Signature was redacted for privacy.

Dean of Graduate College

Iowa State University of Science and Technology Ames, Iowa

1967

TABLE OF CONTENTS

I. INTRODUCTION

II. DISCUSSION OF THE ALGORITHM

III. DISCUSSION OF LOCAL ERROR, EXISTENCE AND UNIQUENESS THEOREMS

IV. STABILITY OF THE PREDICTOR, CORRECTOR AND ALGORITHM

V. DISCUSSION OF STARTING TECHNIQUES AND CHANGE OF STEP-SIZE

VI. NUMERICAL EXAMPLES

VII. SUMMARY

VIII. BIBLIOGRAPHY

IX. ACKNOWLEDGMENT

X. APPENDIX

I. INTRODUCTION

In this paper we consider an algorithm for the solution of

quasi-linear hyperbolic partial differential equations by the method of

characteristics. The method of characteristics is playing an increasingly

important role in the sciences, and particularly in propagation

type problems. The propagation problems arising in the theory of gas

dynamics, the theory of long water waves and the theory of plasticity, to

name a few, are well suited to the method of characteristics.

The restriction to quasi-linear hyperbolic partial differential

equations is understandable in view of the dependence of the method of

characteristics on two sets of real characteristics.

In this work, the method of characteristics is considered only for

problems in two independent variables. Although this limitation is implicit in nearly all practical applications of the method, there do exist

certain procedures that utilize the special features of the characteristics

for problems in more than two independent variables.

The method of characteristics reduces the partial differential

equation to a family of initial value problems. Although a substantial theory of ordinary differential equations is available, the need for discrete variable methods is well known.

Thus there is a clear need for a high-accuracy numerical method that can be easily applied to a wide range of practical problems.

Chapter two deals with the conversion of the quasi-linear, second order, hyperbolic partial differential equation into characteristic normal form. The equivalence of the two problems is remarked upon, and the existence of a solution to the characteristic normal form system, along with the differentiability of such solutions, is proved.

Chapter three investigates the truncation error involved in the predictor and the corrector. We also affirm the existence and uniqueness of solutions of these two systems used to approximate the true solution of the characteristic normal form system.

In chapter four we discuss the stability or lack of stability of the predictor and corrector as well as the stability of the algorithm. The order and the degree of the algorithm is also defined here.

A starting procedure is recommended in chapter five in addition to a procedure for altering the step-size of the characteristic mesh.

Chapter six illustrates the algorithm with a numerical example.

II. DISCUSSION OF THE ALGORITHM

The second order, quasi-linear, hyperbolic partial differential

equation as it arises in most practical situations has the form

(2.1)  a u_xx + 2b u_xy + c u_yy + e = 0,

where u_xx = ∂²u/∂x², u_xy = ∂²u/∂x∂y, u_yy = ∂²u/∂y², and the coefficients a, b, c and e, due to the quasi-linearity, are functions of x, y, u, p and q (p = u_x, q = u_y). We mention that due to the hyperbolic nature of the equation we are restricted to a region R of the xy-plane where

(2.2)  b² − ac > 0.

We will however insist on a slightly more restrictive condition, namely

that we stay within a region R' where the discriminant is bounded away

from zero:

(2.3)  b² − ac ≥ d > 0,

for some constant d.

We also assume that along an initial curve,

(2.4)  y = f(x), a < x < b,

u, p and q are known and are given by

(2.5)  u = u₁(x), p = p₁(x), q = q₁(x).

Furthermore we assume the initial curve is not a characteristic.

Though the partial differential equation arising in practice is generally of the form (2.1), we find this particular form inappropriate for a formal discussion of convergence, stability, and other facets connected with its solution. For this reason we will convert equation (2.1) to its so-called "characteristic normal form,"

(2.6a)  Σ_{k=1}^{5} a^{ik} ∂s^k/∂ξ = 0, i = 1(1)3;  Σ_{k=1}^{5} a^{ik} ∂s^k/∂η = 0, i = 4(1)5,

with s¹ = x, s² = y, s³ = u, s⁴ = p, s⁵ = q, and

(2.6b)  (a^{ik}) =

        ( −λ₊    1    0    0    0  )
        ( e/a    0    0    1    λ₋ )
        ( −p    −q    1    0    0  )
        ( −λ₋    1    0    0    0  )
        ( e/a    0    0    1    λ₊ )

where

        λ₊ = (b + √(b² − ac))/a,   λ₋ = (b − √(b² − ac))/a.

It is well known ([3], [13]) that a great number of hyperbolic problems in two independent variables, including equation (2.1) as well as the general second order equation

(2.7)  F(x, y, f, f_x, f_y, f_xx, f_xy, f_yy) = 0,

may be reduced to a characteristic normal form.
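Since the reduction above only uses the coefficients of (2.1), the two characteristic slopes λ± = (b ± √(b² − ac))/a can be evaluated pointwise. A minimal sketch (the function name and the sample wave equation are our own illustration, not from the thesis):

```python
import math

def char_slopes(a, b, c):
    """Characteristic slopes dy/dx = (b +/- sqrt(b^2 - ac)) / a for
    a*u_xx + 2b*u_xy + c*u_yy + e = 0, hyperbolic where b^2 - ac > 0."""
    disc = b * b - a * c
    if disc <= 0:
        raise ValueError("not hyperbolic: b^2 - ac must be positive")
    root = math.sqrt(disc)
    return (b + root) / a, (b - root) / a

# Wave equation u_xx - u_yy = 0: a = 1, b = 0, c = -1, so the
# characteristics are the familiar lines of slope +1 and -1.
lam_plus, lam_minus = char_slopes(1.0, 0.0, -1.0)
```

For quasi-linear problems the coefficients, and hence the slopes, depend on x, y, u, p, q, so in practice this evaluation is repeated at every grid point.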

We will be assuming that the data from the initial curve (2.4)/(2.5) is prescribed on the line

(2.8)  ξ + η = 1

in the ξη-plane, i.e. x = x₁(ξ, 1−ξ), y = y₁(ξ, 1−ξ), u = u₁(ξ, 1−ξ), p = p₁(ξ, 1−ξ), q = q₁(ξ, 1−ξ), 0 ≤ ξ ≤ 1. We will refer to the initial curve as

(2.9)  c = {(ξ, η) | ξ + η = 1, 0 ≤ ξ ≤ 1, 0 ≤ η ≤ 1}.

Since new characteristic coordinates can be introduced by replacing ξ with any function of ξ and by replacing η with any function of η, and since the initial curve is not itself a characteristic, there is no actual loss of generality in taking it to be of the special form (2.8).

It has been shown [7] that equation (2.1) and the characteristic normal form (2.6) are equivalent.

In the characteristic normal form (2.6) the characteristics emanating from the initial curve (2.8) are the lines

(2.10)  ξ = c₁ and η = 1 − c₁, 0 ≤ c₁ ≤ 1.

For any high-order algorithm we must be certain that the solutions of (2.6)/(2.8) are sufficiently differentiable. To establish the needed differentiability we adapt a more general theorem by A. Douglas [5].

Theorem 1.1 (A. Douglas): Assume

a) a^{ik}(S) ∈ C⁴ for S = (s¹, s², s³, s⁴, s⁵) ∈ U,

b) |det(a^{ik}(S))| ≥ d > 0 for S ∈ U,

c) s^k ∈ C⁴, k = 1(1)5, and S ∈ U on the initial curve c,

where U is a closed region of the 5-dimensional space, and let D be defined as the triangular region enclosed by the initial curve c and the lines ξ = 1, η = 1. Then in a certain region D* ⊂ D, with D* containing the initial curve c, there exists a unique set of solutions s^k, k = 1(1)5, of (2.6)/(2.8) which are four-times differentiable with respect to ξ and η.

In view of later applications we will assume D* to have the following properties (which are made possible by proceeding to a subregion of the original D*):

a) D* is closed.

b) Given some ε > 0,

(2.11)  S' = (s¹(ξ, η) + ε₁, . . . , s⁵(ξ, η) + ε₅) ∈ U

if (ξ, η) ∈ D* and |ε_k| ≤ ε, k = 1(1)5.

In the following we will assume that an ε > 0 has been chosen and that D* is the region just described. Within this region we are investigating the numerical solution of the problem.

The predictor-corrector algorithm we now introduce to solve the system (2.6), with initial conditions prescribed on c, is based upon approximating the ten partial derivatives in (2.6) by forward and backward differences. We impose a square mesh of side h on the triangular region D. The calculated value at a point (ξ_ℓ, η_m) of the characteristic grid depends upon six back points, three along each of the characteristics ξ = ξ_ℓ and η = η_m. The partial derivatives ∂/∂ξ and ∂/∂η act as ordinary derivatives when applied along a line ξ = c₁ or η = c₂. Therefore we will use the following well known ([10], [11]) forward and backward difference operators, truncated after third differences.

(2.12a)  f'(x) ≈ (1/h)(Δ − ½Δ² + ⅓Δ³) f(x),

(2.12b)  f'(x) ≈ (1/h)(∇ + ½∇² + ⅓∇³) f(x).

We also note the use of the subscripted operators Δ₁, ∇₁ when the forward and backward operators are applied to the first variable of a function F(ξ, η), and Δ₂, ∇₂ when they are applied to the second variable in the argument; i.e.

Δ₁F(x, y) = F(x + h, y) − F(x, y),

Δ₂F(x, y) = F(x, y + h) − F(x, y),

with ∇₁, ∇₂ defined analogously.

With the partial derivatives in (2.6) replaced by forward differences (2.12a) truncated after third differences, we get our predictor system for the solution at the (ℓ, m) point on our characteristic grid:

(2.13)  Σ_{k=1}^{5} a^{ik}(S_{ℓ−3,m}) (Δ₁ − ½Δ₁² + ⅓Δ₁³) s^k_{ℓ−3,m} = 0, i = 1(1)3;

        Σ_{k=1}^{5} a^{ik}(S_{ℓ,m−3}) (Δ₂ − ½Δ₂² + ⅓Δ₂³) s^k_{ℓ,m−3} = 0, i = 4(1)5.

Expanding the differences, this may be written as

(2.14)  Σ_{k=1}^{5} a^{ik}(S_{ℓ−3,m}) Σ_{v=0}^{3} α_v s^k_{ℓ−v,m} = 0, i = 1(1)3;

        Σ_{k=1}^{5} a^{ik}(S_{ℓ,m−3}) Σ_{v=0}^{3} α_v s^k_{ℓ,m−v} = 0, i = 4(1)5,

with s^k_{ℓm} = s^k(ℓh, mh),

(2.15)  α₀ = 1, α₁ = −9/2, α₂ = 9, α₃ = −11/2,

and the a^{ik}'s as in (2.6b).

With the partial derivatives in (2.6) replaced by backward differences, we get our corrector system for the solution at the (ℓ, m) point on our characteristic grid:

(2.16)  Σ_{k=1}^{5} a^{ik}(S_{ℓm}) (∇₁ + ½∇₁² + ⅓∇₁³) s^k_{ℓm} = 0, i = 1(1)3;

        Σ_{k=1}^{5} a^{ik}(S_{ℓm}) (∇₂ + ½∇₂² + ⅓∇₂³) s^k_{ℓm} = 0, i = 4(1)5.

Expanding the differences in (2.16) we get

(2.17)  Σ_{k=1}^{5} a^{ik}(S_{ℓm}) Σ_{v=0}^{3} β_v s^k_{ℓ−v,m} = 0, i = 1(1)3;

        Σ_{k=1}^{5} a^{ik}(S_{ℓm}) Σ_{v=0}^{3} β_v s^k_{ℓ,m−v} = 0, i = 4(1)5,

(2.18)  β₀ = 1, β₁ = −18/11, β₂ = 9/11, β₃ = −2/11,

and again the a^{ik}'s as in (2.6b).

Thus we have a linear system (2.14) to solve for the predicted solution at the (ℓ, m) characteristic grid point. We then use this predicted solution in the non-linear corrector system (2.17) in an iterative manner to find our final approximate solution to the system of partial differential equations. We will later comment on the recommended number of iterations.
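The predict-then-iterate structure can be illustrated on a single equation u' = f(u) along one characteristic, a scalar stand-in for (2.6). In the sketch below (the test problem u' = -u², the step size, and the use of exact starting values are our own choices; starting techniques are the subject of chapter five), the predictor enforces the forward-difference relation at the point three steps back, which is linear in the new value, and the corrector enforces the backward-difference relation at the new point and is iterated from the predicted value:

```python
def f(u):                       # toy right-hand side; exact solution is 1/(1+x)
    return -u * u

h, n = 0.01, 100
u = [1.0 / (1.0 + v * h) for v in range(3)]   # three exact starting values
for k in range(3, n + 1):
    u3, u2, u1 = u[k - 3], u[k - 2], u[k - 1]
    # Predictor: (1/h)(u_k/3 - 3u_{k-1}/2 + 3u_{k-2} - 11u_{k-3}/6) = f(u_{k-3}),
    # solved directly for the unknown u_k (it enters linearly).
    pred = 3.0 * (h * f(u3) + 1.5 * u1 - 3.0 * u2 + (11.0 / 6.0) * u3)
    # Corrector: (1/h)(11u_k/6 - 3u_{k-1} + 3u_{k-2}/2 - u_{k-3}/3) = f(u_k),
    # an implicit relation solved by fixed-point iteration from the predicted value.
    c = pred
    for _ in range(4):
        c = (6.0 / 11.0) * (h * f(c) + 3.0 * u1 - 1.5 * u2 + u3 / 3.0)
    u.append(c)
```

The corrector iteration contracts with a factor of order h, so a handful of iterations per step suffices; the final values track the exact solution 1/(1+x) to roughly O(h³).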

III. DISCUSSION OF LOCAL ERROR, EXISTENCE AND UNIQUENESS THEOREMS

As mentioned in chapter two, within the triangle D of the ξη-plane we introduce a square lattice with mesh-size h > 0. Further, we always assume L₀ = 1/h > 0 to be an integer.

We use the following notation and definitions:

a) Z^k_{ℓm} = z^k(ℓh, mh) for values of the true solution of (2.6) and (2.8).

b) S^k_{ℓm} for the values approximating Z^k_{ℓm} at the point (ℓh, mh) of the lattice obtained by the finite difference algorithm; also

Z_{ℓm} = (Z¹_{ℓm}, Z²_{ℓm}, Z³_{ℓm}, Z⁴_{ℓm}, Z⁵_{ℓm}),  S_{ℓm} = (S¹_{ℓm}, S²_{ℓm}, S³_{ℓm}, S⁴_{ℓm}, S⁵_{ℓm}).

c) The (ℓ, m) may take integral values within

D_h = {(ℓ, m) | ℓh + mh ≥ 1, 0 ≤ ℓ ≤ L₀, 0 ≤ m ≤ L₀};

D*_h will denote the part of D_h corresponding to D*, defined in chapter two.
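For concreteness, the lattice D_h and its diagonals ℓ + m − L₀ = constant (the order in which values are computed, used heavily in chapter four) can be enumerated directly. A small sketch with h = 1/4, our own illustration:

```python
def lattice_by_diagonal(L0):
    """Points (l, m) of D_h = {l + m >= L0, 0 <= l, m <= L0},
    grouped by diagonal n = l + m - L0 (n = 0 is the initial curve)."""
    diags = {}
    for l in range(L0 + 1):
        for m in range(L0 + 1):
            if l + m >= L0:
                diags.setdefault(l + m - L0, []).append((l, m))
    return diags

diags = lattice_by_diagonal(4)   # h = 1/4, so L0 = 1/h = 4
```

Diagonal 0 carries the L₀ + 1 initial-curve points, and each successive diagonal is one point shorter, ending with the single corner point (L₀, L₀).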

We are assuming the following bounds:

(3.1)  |a^{ik}(S)| ≤ H and |a^{ik}_j(S)| = |∂a^{ik}/∂s^j| ≤ H for S ∈ U;

       |∂^r s^k/∂ξ^r| ≤ U₁ and |∂^r s^k/∂η^r| ≤ U₁ for (ξ, η) ∈ D*, r = 0(1)4.

We now want to examine the local error or truncation error of the predictor operator (2.14). To aid us we note a lemma, [10].

Lemma 3.1: Let f(x) have a continuous derivative of order q + 1 in J. Then for every μ = 0(1)q there exists a number α in the interval containing the x_i's such that

f'(x_μ) = P'(x_μ) + (f^{(q+1)}(α)/(q+1)!) L'(x_μ),

where P(x) denotes the q-th degree approximating polynomial which agrees with f(x) at x_μ, μ = 0(1)q, and L(x) = Π_{i=0}^{q} (x − x_i). The proof of lemma 3.1 can be found in a number of books, e.g. [10], [11].

We now state and prove the following lemma concerning the error in using forward differences (2.12a), truncated after third differences, to approximate the partial derivatives ∂s^k/∂ξ and ∂s^k/∂η.

Lemma 3.2: Let s^k(ξ, η) ∈ C⁴ in both ξ and η. Then for ξ = ℓh, η = mh, m constant, there exists a constant α₁ on the ξ-interval [(ℓ−3)h, ℓh] along the line η = mh such that

(3.2)  ∂s^k((ℓ−3)h, mh)/∂ξ = (1/h)(Δ₁ − ½Δ₁² + ⅓Δ₁³) s^k((ℓ−3)h, mh) + E¹_{kℓm},

where E¹_{kℓm} = −(h³/4) ∂⁴s^k(α₁, mh)/∂ξ⁴. And for ξ = ℓh, ℓ constant, and η = mh there exists a constant α₂ on the η-interval [(m−3)h, mh] on the line ξ = ℓh such that

(3.3)  ∂s^k(ℓh, (m−3)h)/∂η = (1/h)(Δ₂ − ½Δ₂² + ⅓Δ₂³) s^k(ℓh, (m−3)h) + E²_{kℓm},

where E²_{kℓm} = −(h³/4) ∂⁴s^k(ℓh, α₂)/∂η⁴.

Proof: Let η = mh, m constant. Then s^k is a function of the one variable ξ, and hence we can apply lemma 3.1 with q = 3 and x_μ = (ℓ − 3 + μ)h, μ = 0(1)3; the forward-difference expression in (3.2) is precisely the derivative of the interpolating polynomial at the node (ℓ−3)h. Here

L'(ξ) = (ξ − (ℓ−2)h)(ξ − (ℓ−1)h)(ξ − ℓh)
+ (ξ − (ℓ−3)h)(ξ − (ℓ−1)h)(ξ − ℓh)
+ (ξ − (ℓ−3)h)(ξ − (ℓ−2)h)(ξ − ℓh)
+ (ξ − (ℓ−3)h)(ξ − (ℓ−2)h)(ξ − (ℓ−1)h),

and thus L'((ℓ−3)h) = −3!h³. Therefore

E¹_{kℓm} = (∂⁴s^k(α₁, mh)/∂ξ⁴)(−3!h³)/4! = −(h³/4) ∂⁴s^k(α₁, mh)/∂ξ⁴,

which establishes (3.2). Equation (3.3) is proved in a completely analogous manner.

We now state and prove a theorem giving the truncation error of our predictor operator.

Theorem 3.1: Let Z(ξ, η) be the actual solution of (2.6) and assume Z has continuous fourth partial derivatives. Then the truncation error in using (2.14) to approximate (2.6) and (2.8) at the point (ℓ, m) is given by

(3.4)  τ^i_{ℓm} = (3h⁴/4) Σ_{k=1}^{5} a^{ik}(Z_{ℓ−3,m}) ∂⁴z^k(α₁, mh)/∂ξ⁴, i = 1(1)3;

       τ^i_{ℓm} = (3h⁴/4) Σ_{k=1}^{5} a^{ik}(Z_{ℓ,m−3}) ∂⁴z^k(ℓh, α₂)/∂η⁴, i = 4(1)5,

where α₁ is a point on the ξ-interval [(ℓ−3)h, ℓh] on the line η = mh and α₂ is a point on the η-interval [(m−3)h, mh] on the line ξ = ℓh.

Proof: Using the results of lemma 3.2 we write, for i = 1(1)3,

(3.5)  Σ_{k=1}^{5} a^{ik}(Z_{ℓ−3,m}) (Δ₁ − ½Δ₁² + ⅓Δ₁³) Z^k_{ℓ−3,m} = h Σ_{k=1}^{5} a^{ik}(Z_{ℓ−3,m}) [∂z^k((ℓ−3)h, mh)/∂ξ − E¹_{kℓm}] = −h Σ_{k=1}^{5} a^{ik}(Z_{ℓ−3,m}) E¹_{kℓm},

since Z satisfies (2.6) at the point ((ℓ−3)h, mh); similarly, for i = 4(1)5,

Σ_{k=1}^{5} a^{ik}(Z_{ℓ,m−3}) (Δ₂ − ½Δ₂² + ⅓Δ₂³) Z^k_{ℓ,m−3} = −h Σ_{k=1}^{5} a^{ik}(Z_{ℓ,m−3}) E²_{kℓm}.

Expanding the differences in (3.5) and multiplying by 3 (recall that the α_v of (2.15) are three times the coefficients of Δ₁ − ½Δ₁² + ⅓Δ₁³), we get

(3.6)  Σ_{k=1}^{5} a^{ik}(Z_{ℓ−3,m}) Σ_{v=0}^{3} α_v Z^k_{ℓ−v,m} + 3h Σ_{k=1}^{5} a^{ik}(Z_{ℓ−3,m}) E¹_{kℓm} = 0, i = 1(1)3;

       Σ_{k=1}^{5} a^{ik}(Z_{ℓ,m−3}) Σ_{v=0}^{3} α_v Z^k_{ℓ,m−v} + 3h Σ_{k=1}^{5} a^{ik}(Z_{ℓ,m−3}) E²_{kℓm} = 0, i = 4(1)5.

We note that the first term in each of equations (3.6) is precisely our predictor operator operating on the actual solution Z; thus the truncation error is

τ^i_{ℓm} = −3h Σ_{k=1}^{5} a^{ik}(Z_{ℓ−3,m}) E¹_{kℓm} = (3h⁴/4) Σ_{k=1}^{5} a^{ik}(Z_{ℓ−3,m}) ∂⁴z^k(α₁, mh)/∂ξ⁴, i = 1(1)3;

τ^i_{ℓm} = −3h Σ_{k=1}^{5} a^{ik}(Z_{ℓ,m−3}) E²_{kℓm} = (3h⁴/4) Σ_{k=1}^{5} a^{ik}(Z_{ℓ,m−3}) ∂⁴z^k(ℓh, α₂)/∂η⁴, i = 4(1)5.

Using equations (3.1) we note the following bound for the truncation error of the predictor operator as stated in theorem 3.1:

(3.7)  |τ^i_{ℓm}| ≤ (15/4) H U₁ h⁴, i = 1(1)5.
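The h⁴ behavior of the predictor's local error can be spot-checked on a single smooth function: applying the expanded coefficients α_v along one line and subtracting 3h f'(base) should leave a defect of roughly (3/4)h⁴ f'''' that shrinks sixteen-fold when h is halved. A small sketch (the test function sin and the step sizes are our own choices):

```python
import math

# alpha_v: the expansion of 3*(D - D^2/2 + D^3/3) on four equally spaced points
alpha = [1.0, -4.5, 9.0, -5.5]

def predictor_defect(f, df, base, h):
    """sum_v alpha_v f(base + (3-v)h) - 3h f'(base);
    the analysis above predicts this to be about (3/4) h^4 f''''."""
    s = sum(a * f(base + (3 - v) * h) for v, a in enumerate(alpha))
    return s - 3.0 * h * df(base)

d1 = predictor_defect(math.sin, math.cos, 1.0, 1e-2)
d2 = predictor_defect(math.sin, math.cos, 1.0, 5e-3)
```

For sin the fourth derivative is sin itself, so the ratio d1/(h⁴ sin 1) should sit near 3/4 for small h.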

We will now investigate the truncation error of the corrector operator (2.17). We first state a lemma concerning the error in using backward differences (2.12b), truncated after third differences, to approximate the partial derivatives. This lemma is analogous to lemma 3.2 for forward differences.

Lemma 3.3: Let s^k(ξ, η) have continuous fourth partial derivatives. Then for ξ = ℓh, η = mh, m constant, there exists a constant ᾱ₁ on the ξ-interval [(ℓ−3)h, ℓh] along the line η = mh such that

(3.8)  ∂s^k(ℓh, mh)/∂ξ = (1/h)(∇₁ + ½∇₁² + ⅓∇₁³) s^k(ℓh, mh) + Ē¹_{kℓm},

where Ē¹_{kℓm} = (h³/4) ∂⁴s^k(ᾱ₁, mh)/∂ξ⁴. And for ξ = ℓh, ℓ constant, and η = mh there exists a constant ᾱ₂ on the η-interval [(m−3)h, mh] on the line ξ = ℓh such that

(3.9)  ∂s^k(ℓh, mh)/∂η = (1/h)(∇₂ + ½∇₂² + ⅓∇₂³) s^k(ℓh, mh) + Ē²_{kℓm},

where Ē²_{kℓm} = (h³/4) ∂⁴s^k(ℓh, ᾱ₂)/∂η⁴. The proof is along the same lines as lemma 3.2 (here L'(ℓh) = 3!h³) and will be omitted.

Theorem 3.2: Let Z(ξ, η) be the actual solution of (2.6) and assume Z has continuous fourth partial derivatives. Then the truncation error in using (2.17) to approximate (2.6) and (2.8) for the solution at the (ℓ, m) point is given by

(3.10)  τ̄^i_{ℓm} = −(3h⁴/22) Σ_{k=1}^{5} a^{ik}(Z_{ℓm}) ∂⁴z^k(ᾱ₁, mh)/∂ξ⁴, i = 1(1)3;

        τ̄^i_{ℓm} = −(3h⁴/22) Σ_{k=1}^{5} a^{ik}(Z_{ℓm}) ∂⁴z^k(ℓh, ᾱ₂)/∂η⁴, i = 4(1)5,

where ᾱ₁ is a point on the ξ-interval [(ℓ−3)h, ℓh] on the line η = mh and ᾱ₂ is a point on the η-interval [(m−3)h, mh] on the line ξ = ℓh. The proof follows the same lines as the proof of theorem 3.1, using lemma 3.3 and the fact that the β_v of (2.18) are 6/11 times the coefficients of ∇₁ + ½∇₁² + ⅓∇₁³.

Again using equations (3.1), we can establish a bound on the corrector truncation error:

(3.11)  |τ̄^i_{ℓm}| ≤ (15/22) H U₁ h⁴, i = 1(1)5.

We remarked earlier that the predictor operator yields a linear

system to solve. In the next lemma we deal with the solvability of the

system (2.14).

Lemma 3.4: Given that

a) (3.1) holds,

b) |det(a^{ik}(Z))| ≥ d > 0 for Z ∈ U,

c) ‖S_{ℓ'm'} − Z_{ℓ'm'}‖ ≤ ch^p with p ≥ 1 for ℓ' + m' < ℓ + m,

then (2.14) is solvable for the s^k_{ℓm} if h is sufficiently small.

Proof (adapted from a proof of a more general lemma by Stetter, [13]): Assumption b) guarantees the existence of an ε > 0 for which

(3.12)  |det(a^{ik}(Z) + e^{ik})| ≥ d/2

if |e^{ik}| ≤ ε, Z ∈ U. We now consider a^{ik}(S_{ℓ−3,m}), i = 1(1)3, and a^{ik}(S_{ℓ,m−3}), i = 4(1)5. By the differentiability of the a^{ik}, the bounds a), and hypothesis c) we have

|a^{ik}(S_{ℓ−3,m}) − a^{ik}(Z_{ℓ−3,m})| ≤ 5H ‖S_{ℓ−3,m} − Z_{ℓ−3,m}‖ ≤ 5Hch^p,

and thus for some h₁ > 0 and h < h₁ we have

|a^{ik}(S_{ℓ−3,m}) − a^{ik}(Z_{ℓ−3,m})| ≤ ε/3, i = 1(1)3, k = 1(1)5.

In a completely analogous manner we get

|a^{ik}(S_{ℓ,m−3}) − a^{ik}(Z_{ℓ,m−3})| ≤ ε/3, i = 4(1)5, k = 1(1)5,

for some h₂ > 0 and h < h₂. Also, since Z is continuous, there exists an h₃ > 0 such that for h < h₃ the distance ‖Z_{ℓ−3,m} − Z_{ℓ,m−3}‖ is small enough that

|a^{ik}(Z_{ℓ,m−3}) − a^{ik}(Z_{ℓ−3,m})| ≤ ε/3, i = 4(1)5, k = 1(1)5.

Now for h < h₄ = min(h₁, h₂, h₃) consider

(3.13)  det( a^{ik}(S_{ℓ−3,m}), i = 1(1)3;  a^{ik}(S_{ℓ,m−3}), i = 4(1)5 ),

the determinant of the coefficient matrix of the linear system (2.14). In the determinant (3.13) we add and subtract a term in each element as follows: a^{ik}(S_{ℓ−3,m}) is replaced by

a^{ik}(S_{ℓ−3,m}) − a^{ik}(Z_{ℓ−3,m}) + a^{ik}(Z_{ℓ−3,m}) for i = 1(1)3, k = 1(1)5,

and a^{ik}(S_{ℓ,m−3}) is replaced by

a^{ik}(S_{ℓ,m−3}) − a^{ik}(Z_{ℓ−3,m}) + a^{ik}(Z_{ℓ−3,m}) for i = 4(1)5, k = 1(1)5.

Then if we let

e^{ik} = a^{ik}(S_{ℓ−3,m}) − a^{ik}(Z_{ℓ−3,m}), i = 1(1)3, k = 1(1)5,

e^{ik} = a^{ik}(S_{ℓ,m−3}) − a^{ik}(Z_{ℓ−3,m}), i = 4(1)5, k = 1(1)5,

we have |e^{ik}| ≤ ε by the three estimates above, and hence (3.13) equals

|det(a^{ik}(Z_{ℓ−3,m}) + e^{ik})| ≥ d/2 > 0

by (3.12). This establishes the solvability of (2.14).

In chapter two we mentioned that the non-linear system arising from our corrector (2.17) would be solved for S_{ℓm} by iteration, using a predicted value S⁽⁰⁾_{ℓm} from (2.14). We will prove the existence of a unique solution of (2.17) in the vicinity of S⁽⁰⁾_{ℓm}. The following lemma is adapted from a general theorem by Stetter [13], who in turn used a fixed point theorem of the type due to Weissinger ([2], pp. 36-38).

Lemma 3.5: Given that

a) (3.1) holds,

b) ‖S_{ℓ'm'} − Z_{ℓ'm'}‖ ≤ ch^p with p ≥ 1 for ℓ' + m' < ℓ + m,

c) |det(a^{ik}(Z))| ≥ d > 0 for Z ∈ U,

d) ‖S⁽⁰⁾_{ℓm} − Z_{ℓm}‖ ≤ C₀h^{p₀} with 1 ≤ p₀ ≤ p,

then within a certain neighborhood B_{ℓm} of S⁽⁰⁾_{ℓm} there exists a unique solution of (2.17) if h is sufficiently small.

Proof: Let U(S) denote the 5×5 matrix (a^{ik}(S)), and let U*(S) denote the matrix whose first three rows are zero and whose last two rows are those of U(S):

U*(S) = ( 0 . . . 0 ; 0 . . . 0 ; 0 . . . 0 ; a⁴¹(S) . . . a⁴⁵(S) ; a⁵¹(S) . . . a⁵⁵(S) ),

and set Ũ(S) = [U(S)]⁻¹ U*(S). Then we may write (2.17) as

S_{ℓm} = F_{ℓm}(S_{ℓm}) = −Σ_{v=1}^{3} β_v S_{ℓ−v,m} − Ũ(S_{ℓm}) ΔS,

with

ΔS = Σ_{v=1}^{3} β_v (S_{ℓ,m−v} − S_{ℓ−v,m}).

We note that

‖ΔS‖ ≤ C₁h for h < h₁, for some h₁ > 0.

The proof of this inequality depends on hypothesis b):

‖ΔS‖ ≤ Σ_{v=1}^{3} |β_v| ‖S_{ℓ,m−v} − S_{ℓ−v,m}‖

≤ Σ_{v=1}^{3} |β_v| (‖S_{ℓ,m−v} − Z_{ℓ,m−v}‖ + ‖Z_{ℓ,m−v} − Z_{ℓ−v,m}‖ + ‖Z_{ℓ−v,m} − S_{ℓ−v,m}‖)

≤ Σ_{v=1}^{3} |β_v| (2ch^p + ‖Z_{ℓ,m−v} − Z_{ℓ−v,m}‖),

and, since by the derivative bounds in a)

‖Z_{ℓ,m−v} − Z_{ℓ−v,m}‖ ≤ U₁vh + U₁vh = 2U₁vh,

the inequality follows, with C₁ = Σ_{v=1}^{3} |β_v|(2c + 2U₁v) for h ≤ 1.

The existence of U⁻¹ for sufficiently small h, i.e. h < h₂ for some h₂ > 0, is proved like lemma 3.4. Assumptions a) and c) establish the boundedness of the a^{ik}'s, of their partial derivatives, and hence of the elements of Ũ.

To show that S_{ℓm} = F_{ℓm}(S_{ℓm}) has a unique solution in a certain region, we first remark that

‖F_{ℓm}(S⁽⁰⁾_{ℓm}) − S⁽⁰⁾_{ℓm}‖ ≤ C°h

for h < h₃, for some h₃ > 0 and some constant C° > 0. This follows by inserting S⁽⁰⁾_{ℓm} into F_{ℓm} and using d), the bound ‖ΔS‖ ≤ C₁h, the boundedness of Ũ, and the corrector truncation bound (3.11).

We now choose for the neighborhood B_{ℓm} ⊂ U the complete region

‖S − S⁽⁰⁾_{ℓm}‖ ≤ 3C°h, h ≤ h_v,

where h_v satisfies

a) h_v = min(h₁, h₂, h₃, h₄),

b) (C₀ + 3C°)h_v ≤ ε, i.e. B_{ℓm} ⊂ U, with ε defined as in (2.11).

Then F_{ℓm} satisfies a Lipschitz condition in B_{ℓm}: if S⁽¹⁾ and S⁽²⁾ are two arbitrary vectors in B_{ℓm}, then, estimating Ũ(S⁽¹⁾)ΔS − Ũ(S⁽²⁾)ΔS by the mean value theorem together with ‖ΔS‖ ≤ C₁h,

‖F_{ℓm}(S⁽¹⁾) − F_{ℓm}(S⁽²⁾)‖ ≤ C₄h ‖S⁽¹⁾ − S⁽²⁾‖ = L ‖S⁽¹⁾ − S⁽²⁾‖,

and for h sufficiently small, h < h₄, we have L < 1/2. (We define the matrix norm as the maximum row sum, ‖A‖ = max_i Σ_k |a^{ik}|.)

Now define the iteration S⁽ⁿ⁺¹⁾_{ℓm} = F_{ℓm}(S⁽ⁿ⁾_{ℓm}). Then

‖S⁽ⁿ⁺¹⁾_{ℓm} − S⁽ⁿ⁾_{ℓm}‖ ≤ L ‖S⁽ⁿ⁾_{ℓm} − S⁽ⁿ⁻¹⁾_{ℓm}‖ ≤ Lⁿ ‖S⁽¹⁾_{ℓm} − S⁽⁰⁾_{ℓm}‖ ≤ Lⁿ C°h,

hence for m > n

‖S⁽ᵐ⁾_{ℓm} − S⁽ⁿ⁾_{ℓm}‖ ≤ Σ_{i=n}^{m−1} Lⁱ C°h ≤ C°h Lⁿ/(1 − L),

and since L < 1/2,

lim_{m,n→∞} ‖S⁽ᵐ⁾_{ℓm} − S⁽ⁿ⁾_{ℓm}‖ = 0,

so the iterates converge to a solution S_{ℓm}. We also note that

‖S⁽ᵐ⁾_{ℓm} − Z_{ℓm}‖ ≤ ‖S⁽ᵐ⁾_{ℓm} − S⁽¹⁾_{ℓm}‖ + ‖S⁽¹⁾_{ℓm} − S⁽⁰⁾_{ℓm}‖ + ‖S⁽⁰⁾_{ℓm} − Z_{ℓm}‖ ≤ 2C°h + C°h + C₀h ≤ (C₀ + 3C°)h,

and therefore all iterates stay within the neighborhood B_{ℓm} ⊂ U.

To establish the uniqueness of the solution we assume a second solution T_{ℓm} exists. Then T_{ℓm} = F_{ℓm}(T_{ℓm}) and S_{ℓm} = F_{ℓm}(S_{ℓm}). However,

‖T_{ℓm} − S_{ℓm}‖ = ‖F_{ℓm}(T_{ℓm}) − F_{ℓm}(S_{ℓm})‖ ≤ L ‖T_{ℓm} − S_{ℓm}‖,

and since L < 1/2 we must have T_{ℓm} = S_{ℓm}.

IV. STABILITY OF THE PREDICTOR, CORRECTOR AND ALGORITHM

Chapter four is primarily concerned with the stability of our algorithm; however, before stating or proving a stability theorem we will define several terms.

In a characteristic finite-difference method such as we are considering, only values along ξ = ξ_ℓ and η = η_m are used for the computation of a value at (ξ_ℓ, η_m).

Definition. Degree N: We call a characteristic finite-difference method of "degree N" if only values at the N preceding points on each characteristic contribute to the value at (ξ_ℓ, η_m).

From the description of our algorithm in chapter two we see that it is a method of "degree 3."

At this time we state a general finite-difference technique from which both our predictor and our corrector may be obtained. We then prove a stability theorem for this general technique, from which we can evaluate the stability of our predictor and corrector.

The general method computes S_{ℓm} from

(4.1)  Σ_{k=1}^{K} a^{ik}(Σ_{μ=0}^{N₂} δ_μ S_{ℓ−μ,m}) Σ_{v=0}^{N₂} a_v S^k_{ℓ−v,m} = 0, i = 1(1)K';

       Σ_{k=1}^{K} a^{ik}(Σ_{μ=0}^{N₁} δ_μ S_{ℓ,m−μ}) Σ_{v=0}^{N₁} a_v S^k_{ℓ,m−v} = 0, i = K'(1)K.

We assume the following relation holds for the coefficients a_v, with f(x) sufficiently differentiable:

(4.2)  Σ_{v=0}^{N₂} a_v f(x − vh) − hf'(x) = c h^{p₁} f^{(p₁)}(x₁), p₁ ≥ 2,

with x₁ in the interval [x − N₂h, x].

We assume that the starting values for the computation already contain numerical errors:

(4.3)  S^k_{ℓm} = Z^k_{ℓm} + ε^k_{ℓm},

for ℓh + mh = 1 + vh, k = 1(1)5, v = n − 1, n − 2, . . . , 0, if we are computing the value on the n-th diagonal.

Note: We will refer to the initial curve as the zero diagonal; hence the first values actually computed are on the third diagonal, and the point (ℓ, m) is on the (ℓ + m − L)-th diagonal, where L = 1/h.

Furthermore, the numerical solution of (4.1) will introduce errors, which we denote by

(4.4)  Σ_{k=1}^{K} a^{ik}(Σ_{μ=0}^{N₂} δ_μ S_{ℓ−μ,m}) Σ_{v=0}^{N₂} a_v S^k_{ℓ−v,m} = φ^i_{ℓm}, i = 1(1)K';

       Σ_{k=1}^{K} a^{ik}(Σ_{μ=0}^{N₁} δ_μ S_{ℓ,m−μ}) Σ_{v=0}^{N₁} a_v S^k_{ℓ,m−v} = φ^i_{ℓm}, i = K'(1)K.

The ε's are called "initial errors," the φ's "computing errors." We will assume

(4.4a)  |φ^i_{ℓm}| ≤ kh^{p'} and |ε^k_{ℓm}| ≤ kh^{p'}.

Definition. Class B(k, p'): The numerical result of an integration algorithm for a given problem, e.g. (2.6)/(2.8), is called of "class B(k, p')," k > 0, p' > 0, if the relations (4.4a) hold for 0 < h < h₀ (for some h₀ > 0), for all (ℓ, m) ∈ D_h.

The following definition of "stable convergence" was introduced in a similar manner by Dahlquist [4] for ordinary differential equations.

Definition. Stable convergence: Let S_{ℓm} be obtained from the initial values (4.3) by the procedure (4.1) with a mesh size h, with computing errors (4.4). Then the algorithm is called stably convergent if there exists a function P*(h; k, p'), defined for 0 ≤ h ≤ h₀, continuous and increasing with respect to h, for which

sup_{B(k,p')} sup_{(ℓ,m) ∈ D_h} ‖S_{ℓm} − Z_{ℓm}‖ ≤ P*(h; k, p')

and

(4.5)  lim_{h→0⁺} P*(h; k, p') = 0.

Note: The first supremum is extended over all results of class B(k, p'); however, since our algorithm uses three back values along each characteristic, and since both our predictor and our corrector are of order three, we are only interested in results of class B(k, 3), with k the maximum of the constants appearing above and HU₁.

Definition. Stable convergence of order p: A method is called stably convergent of order p if it is stably convergent and if there exists a positive number p and a constant P > 0 for which, whenever p' ≥ p,

P*(h; k, p') ≤ Ph^p, P constant.

As a final preliminary, before we state and prove our stability theorem, consider:

Lemma 4.1: Let B be a linear difference operator with constant coefficients:

B x_n = Σ_{v=0}^{N} a_v x_{n−v},

and let x_n be the solution of

B x_n = b_n

with the initial values x_v = x̄_v, v = 0(1)N−1. If the eigenvalues of B, i.e. the zeros of the polynomial

P(z) = Σ_{v=0}^{N} a_v z^{N−v},

are such that their magnitude does not exceed unity, and the zeros whose magnitude equals unity are simple, then

(4.6)  |x_n| ≤ K₁ ( Σ_{v=0}^{N−1} |x̄_v| + Σ_{n'=N}^{n} |b_{n'}| )

for some constant K₁.

The proof of this lemma may be found in [8] and will be omitted.
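The root condition of lemma 4.1 can be checked numerically for the two degree-3 operators of chapter two. The coefficient lists below are the expansions of (Δ − Δ²/2 + Δ³/3) and (∇ + ∇²/2 + ∇³/3) on four equally spaced points; numpy is assumed for the root finding:

```python
import numpy as np

# Coefficients a_v of P(z) = sum_v a_v z^(N-v), highest power first.
predictor = [1/3, -3/2, 3, -11/6]   # expansion of the forward operator
corrector = [11/6, -3, 3/2, -1/3]   # expansion of the backward operator

pred_moduli = sorted(abs(np.roots(predictor)))
corr_moduli = sorted(abs(np.roots(corrector)))
```

Both polynomials have the simple zero z = 1, but the predictor's remaining complex pair has modulus √5.5 ≈ 2.35, violating the root condition, while the corrector's remaining pair has modulus √(2/11) ≈ 0.43, satisfying it. This anticipates theorem 4.2: the predictor alone is not stably convergent, the corrector is.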

The following stability theorem is adapted from a more general theorem due to Stetter, [13].

Theorem 4.1: Let the following assumptions hold:

a) with regard to the problem (2.6)/(2.8):

a1) Z ∈ C^{p₁} and Z ∈ U on the initial curve c;

a2) a^{ik}(Z) ∈ C^{p₁} for Z ∈ U, with p₁ as defined in (4.2);

a3) |det(a^{ik}(Z))| ≥ d > 0 for Z ∈ U;

b) with regard to the finite-difference method (4.1), the approximation relations (4.2) hold.

Then the following condition is both sufficient and necessary for the stable convergence of the numerical procedure (4.1): the zeros z_γ, γ = 1(1)N, of

P(z) = Σ_{v=0}^{N} a_v z^{N−v}

satisfy |z_γ| ≤ 1, and P'(z_γ) ≠ 0 if |z_γ| = 1. Furthermore, if the result of the computations is of class B(k, p') with p' ≥ p = p₁ − 1, then the convergence is of order p.

Note: The method of proof of sufficiency is that of displaying a function P*(h; k, p') as described in the definition of stable convergence. The proof of necessity takes the form of counterexamples.

Proof: (We prove the more general result concerning the order of the stability.)

We use the following difference operators along a characteristic:

(4.7)  Δ_ℓ S_{ℓm} = Σ_{v=0}^{N₂} a_v S_{ℓ−v,m},  F_ℓ S_{ℓm} = Σ_{v=0}^{N₂} δ_v S_{ℓ−v,m},

with the barred operators Δ̄_ℓ, F̄_ℓ denoting the same sums applied to Z. The meaning of Δ_m, Δ̄_m, F_m, F̄_m is analogous. Furthermore we set

(4.8)  V^k_{ℓm} = S^k_{ℓm} − Z^k_{ℓm}.

"Interior points" in the Taylor formula sense are marked by an asterisk.

We deal only with the equations along η = constant (i = 1(1)K'), since the corresponding relations for ξ = constant arise in an analogous way. Since the second subscript remains constant, and since we are dealing only with the subscript-ℓ operators of (4.7), we do not write these subscripts. We form the difference

(4.9)  Δ^i = Σ_{k=1}^{K} {a^{ik}(F S_ℓ)(Δ S^k_ℓ) − a^{ik}(F Z_ℓ)(Δ Z^k_ℓ)}

= Σ_{k=1}^{K} {[a^{ik}(F S_ℓ) − a^{ik}(F Z_ℓ)] Δ Z^k_ℓ + a^{ik}(F S_ℓ)[Δ S^k_ℓ − Δ Z^k_ℓ]}

= Σ_{k=1}^{K} {b^{ik}_ℓ (F V^k_ℓ) + a^{ik}_ℓ (Δ V^k_ℓ)},

where, by the mean value theorem,

(4.10)  a^{ik}_ℓ = a^{ik}(F S_ℓ) and b^{ik}_ℓ = Σ_{j=1}^{K} a^{ij}_k(F* S_ℓ) Δ Z^j_ℓ.

Note: We use the bracket around a subscript to mean that the operator in question ignores this term.

Since Δ V^k_ℓ = Σ_{v=0}^{N₂} a_v V^k_{ℓ−v} and F V^k_ℓ = Σ_{v=0}^{N₂} δ_v V^k_{ℓ−v}, we now define

(4.11)  c^{ik}_{(ℓ),v} = a_v a^{ik}_ℓ + δ_v b^{ik}_ℓ,

and equation (4.9) becomes

(4.12)  Δ^i = Σ_{k=1}^{K} Σ_{v=0}^{N₂} c^{ik}_{(ℓ),v} V^k_{ℓ−v}.

Considering the errors (4.3) and (4.4) we write

(4.13)  Δ^i_{ℓm} = φ^i_{ℓm} − τ^i_{ℓm},

the difference between the computing error of (4.4) and the truncation error of the operator applied to Z. We use (4.13) to rewrite (4.12) as

(4.14)  Σ_{k=1}^{K} c^{ik}_{(ℓ),0} V^k_{ℓm} = φ^i_{ℓm} − τ^i_{ℓm} − Σ_{k=1}^{K} Σ_{v=1}^{N₂} c^{ik}_{(ℓ),v} V^k_{ℓ−v,m}, i = 1(1)K'.

Note that the right side of (4.14) does not contain V_{ℓm}.

An expression similar to (4.14) would be obtained along the characteristic ξ = constant.

We note that the use of the mean value theorem in (4.9) assumes that the F* S_ℓ also lie in U. That this is the case is shown later.

We now regard (4.14) as a system of linear difference equations for the linear combinations Σ_{k=1}^{K} c^{ik} V^k:

(4.15)  Σ_{k=1}^{K} c^{ik}_{(n)} V^k_{ℓ,m} = e^i_{ℓ,m},

with n = ℓ along the η = constant characteristic, i = 1(1)K', and n = m along the ξ = constant characteristic, i = K'(1)K. The inhomogeneities e^i_{ℓm} contain V^k_{ℓ'm'} with ℓ' + m' < ℓ + m only, i.e. only values of V^k on the diagonals preceding the (ℓ + m − L)-th diagonal.

are precisely the difference equations we want to solve. However, as k noted above the inhomogeneities contain ^' + m' < £ + tn and the only such values that we know are those to the left of the first computed diagonal N » max(Np N^). Hence, we first solve the homogeneous difference- 33

equations

i=K'(l)K,

with initial conditions

K

N^v.rn k=l

K and i = K>(OK, k=l

V = [N - - 1], respectively. We note that the initial conditions only involve V^^'s from diagonals to the left of the N diagonal and thus are assumed known. We, therefore can solve these difference-equations and

thus we know

K

N^Am ^A,2-N' ' k=l

K

N^L=^CN,mÙN,m. k=l

for all points (&,m) on the N diagonal. Using these k equations we solve

k for the k unknown values of V on the N diagonal. Then we consider the

finite-difference equations

\

i = K'(l)K,

with initial conditions 34

N+l^vm / t,A-N \a-v' ' ~ k=]

and N+l = //> Crl^N ,m V^ï+v .m ^ := <' > "< k=1

V - [N - + 1](1)[N], respectively. We note that the initial conditions

involve the values of on the N diagonal but these we have just found

in the-preceding problem. Therefore this problem is solvable and we use 1^ the solutions to find the V^^'s on the N + 1 diagonal. Continuing in this manner we find the V^^'s on all diagonals through the (Jl+m-L -l)^^. We then consider the finite difference equations

i = l(l)K'

'='m i=K'(i)K.

These are the equations we actually wanted to solve and since we now know k i all the V^^'s on preceding diagonals the e^^ term is a known constant and the problem is solvable. The above paragraphs concerning the solution of

(4.15) can be written analytically:

= 4m «nOl4™-L~) ' " '

with initial conditions

K

k=l

K

n4v =^Cn.m "tvX i-K'(l)K, k=l 35

V = [n - N2](l)[n - 1]; n = [N] (1) [il+m-L ]

Then we can write the solution of (4.15) as

Z+m-L

K ! = KDk', n^&m' -ik yk n=N (4.18) (&m) &m k=l 2+m-L

r i = K' (1)K. Zn £m n=N

Once again we return to considering only the equations along η = constant, i = 1(1)K', and also delete certain subscripts as before.

Applying lemma 4.1, and the hypothesis of the theorem concerning the zeros of the polynomial P(z), to the solution (4.18) of (4.15), we get

(4.19)  |Σ_{k=1}^{K} c^{ik}_{(ℓm)} V^k_{ℓm}| ≤ K₁ [ Σ_{v} |ₙt^i_v| + Σ_{j=N}^{ℓ+m−L} ( Σ_{v=j−N₂}^{j−1} |Σ_{k} c^{ik}_v V^k| + |φ^i_j| + |τ^i_j| ) ],

where K₁ is some positive constant.

We introduce bounds for the errors:

(4.20)  |V^k_{ℓ,m}| ≤ V_n for ℓ + m − L = n, k = 1(1)K.

The sequence of V_n is assumed to be non-decreasing.

By using (4.2) and (3.1), and assuming the truncation errors are such that |τ^i_{ℓm}| ≤ Th^{p+1} for h < h₁,

(for some h₁ > 0), p = p₁ − 1, we obtain the following estimates for sufficiently small h, h < h₂ (for some h₂ > 0), with each bound H_b and J of order one:

(4.21)  |b^{ik}_ℓ| ≤ H_b h and |c^{ik}_{(ℓ),v}| ≤ J.

The estimates on |τ^i| make use of the assumption p' ≥ p from the definition of stable convergence of order p.

From these estimates we get

(4.22)  Σ_{v=j−N₂}^{j−1} |Σ_{k=1}^{K} c^{ik}_v V^k| ≤ K N₂ J V_{j−1},  |e^i_j| ≤ K N h H_b V_{j−1} + Th^{p+1} + kh^{p'}.

The estimates for |Σ_{k} c^{ik} V^k| obtained by introducing the above estimates into (4.19) may be turned into estimates for the V^k themselves, in the manner in which we obtained the V^k's in the solution of (4.15). By lemma 3.4 and hypothesis a3), the determinant of (c^{ik}_{(ℓm),0}) is bounded away from zero for sufficiently small h, i.e. h < h₃ (for some h₃ > 0): if ‖S_{ℓm} − Z_{ℓm}‖ ≤ R₁h for R₁ some constant, then |c^{ik}_{(ℓm),0} − a₀ a^{ik}(Z_{ℓm})| ≤ R₂h for some constant R₂, by (4.21), (3.1) and the definition of the c's; therefore we have, for sufficiently small h, h < h₄ (for some h₄ > 0), |det(c^{ik}_{(ℓm),0})| ≥ d' > 0 for (ℓ, m) ∈ D_h. Hence (c^{ik}_{(ℓm),0})⁻¹ exists, and we need only multiply the estimates for |Σ_k c^{ik} V^k| by the bound for its norm (we are using the "maximum row sums" norm) to obtain the desired estimates for the V^k's.

Combining terms of equal structure we finally have, for sufficiently small h, i.e. h < h₆ (for some h₆ > 0), with the D_i's constants of order one, and for N ≤ ℓ + m - L'' ≤ L̄ = max(ℓ + m - L''):

(4.23) V_{ℓ+m-L''} ≤ D₁ h Σ_{j=N}^{ℓ+m-L''-1} V_j + D₂ h^{p'+1}.

The proof of sufficiency is completed when we show that there exist constants E₀, E₁ such that

(4.24) V_n ≤ E₀ h^{p'} e^{E₁ h n}

for sufficiently small h < h₇ (for some h₇ > 0) and n ≤ L̄, because we will then choose

P*(h; ε, p') = E₀ h^{p'} e^{E₁ h L̄},

which is a non-decreasing function of h with P*(h; ε, p') ≤ P h^{p'} and lim_{h→0} P*(h; ε, p') = 0. We prove the existence of E₀, E₁ by induction:

Let

E₀ = K₀,

E₁ = (1/h) ln(1 + 2θ(N-1)[hD₁ + D₂]),

where K₀ is defined in the definition of stable convergence of order p.

Setting, in (4.26) and (4.27), h₈ and h₉ so that the computed values remain in U (with ε defined as in (2.11)), we then define

h₀ = min[h₁, h₂, h₄, h₅, h₆, h₇, h₈, h₉],

with the h_i's defined above.

From (4.4a) and p' ≤ p we have that (4.24) holds for V_{N-v}, v = 1(1)N-1.

We now make the induction assumption that (4.24) holds for n ≤ ℓ + m - L'' - 1 and show that this assumption implies that it holds for V_n, n = ℓ + m - L''.

From (4.27) and (2.11), we conclude Z_{ℓ'm'} ∈ U for ℓ' + m' ≤ ℓ + m - 1.

In case γ = 0, we use lemma 3.5 to conclude that Z_{ℓ'm'} ∈ U for ℓ' + m' = ℓ + m. These are facts we assumed in using the mean value theorem earlier.

Furthermore, using the fact that the V_n's are non-decreasing along with (4.23) and (4.24), and summing the resulting geometric factors e^{E₁hj}, j = N(1)ℓ+m-L''-1, the choice of E₀ and E₁ yields

V_{ℓ+m-L''} ≤ E₀ h^{p'} e^{E₁h(ℓ+m-L'')},

which is (4.24) for n = ℓ + m - L'' and completes the induction.
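The passage from the summed recursion (4.23) to the exponential bound (4.24) is a discrete Gronwall argument. As a hedged numerical sketch (our illustration with arbitrary constants A and B, not the thesis constants D₁, D₂, E₀, E₁), the worst case of a recursion V_n ≤ A + Bh Σ_{j<n} V_j grows exactly like A(1 + Bh)^n, which stays below A e^{Bhn}:

```python
import math

A, B, h, n_max = 1.0, 2.0, 0.01, 200

# Worst case of the recursion V_n <= A + B*h*sum_{j<n} V_j: take equality
# at every step.  Induction gives the closed form V_n = A*(1 + B*h)**n.
V = []
for n in range(n_max):
    V.append(A + B * h * sum(V))

# Discrete Gronwall bound: (1 + B*h)**n <= exp(B*h*n) for every n.
bound_ok = all(V[n] <= A * math.exp(B * h * n) for n in range(n_max))
```

This is the mechanism by which E₀ and E₁ can be chosen so that the induction step closes.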

The necessity of the theorem was demonstrated by Dahlquist [4] by showing examples of ordinary differential equations that were unstable when the hypothesis was violated.

Theorem 4.2: Let the following assumptions with regard to the problem (2.6)/(2.8) hold:

a) Z ∈ C⁴ and Z ∈ U on the initial curve C,

b) a^{ik}(Z) ∈ C⁴ for Z ∈ D̄,

c) |det(a^{ik}(Z))| ≥ d > 0 for Z ∈ Ū.

Then the predictor (2.14) is not stably convergent, but the corrector (2.17) is stably convergent.

Proof: The predictor (2.14) is written as

Σ_{k=1}^{5} Σ_{v=0}^{3} α^{ik}_v S^k_{ℓ-v,m} + h Σ_{k=1}^{5} Σ_{v=0}^{3} γ^{ik}_v φ^{ik}_{ℓ-v,m} = 0,  i = 1(1)3.

We note that this is the general operator (4.4) if we take N = 3, K' = 3, K = 5. We also note that γ₀ = γ₂ = γ₃ = 0 but, more important, the α_v's yield the polynomial P(Z), written as

P(Z) = -1 + 9Z - 8Z³,

which has zeros Z₁ = 1, Z₂ = (-2 + √6)/4, Z₃ = (-2 - √6)/4; since |Z₃| = (2 + √6)/4 > 1, we have by theorem 4.1 that (2.14) is not stably convergent.

The corrector (2.17) has the form

Σ_{k=1}^{5} Σ_{v=0}^{3} β^{ik}_v S^k_{ℓ-v,m} + h Σ_{k=1}^{5} Σ_{v=0}^{3} δ^{ik}_v φ^{ik}_{ℓ-v,m} = 0,  i = 1(1)3.

This has the form of the general operator (4.4) also if we take L'' = 0, N = 3, K' = 3, K = 5. We see that β₀ = 1, β₁ = -18/11, β₂ = 9/11, β₃ = -2/11. Also, by lemma 3.3 we have that our coefficients β_v satisfy (4.2) with p = 4. Assuming sufficiently accurate starting values, we have by lemmas 3.2 and 3.3 that the numerical result of (2.17) is accurate of order three, i.e. p' = 3. The polynomial of theorem 4.1 is

P(Z) = Σ_{v=0}^{3} β_v Z^{3-v},

or

P(Z) = Z³ - (18/11)Z² + (9/11)Z - 2/11,

and has zeros

Z₁ = 1,  Z₂ = (7 + i√39)/22,  Z₃ = (7 - i√39)/22,

and |Z_i| ≤ 1, i = 1(1)3; therefore the hypotheses of theorem 4.1 are satisfied and (2.17) is stably convergent of order three.
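The root condition for the corrector can also be checked mechanically. A short sketch (ours, not part of the thesis Fortran program), assuming the polynomial P(Z) = Z³ - (18/11)Z² + (9/11)Z - 2/11 as read above; Z = 1 is a root, and deflation leaves the quadratic Z² - (7/11)Z + 2/11, whose complex pair has modulus √(2/11) < 1:

```python
import cmath

# Corrector polynomial P(Z) = Z^3 - (18/11)Z^2 + (9/11)Z - 2/11.
coeffs = [1.0, -18.0 / 11.0, 9.0 / 11.0, -2.0 / 11.0]

def P(z):
    total = 0.0 + 0.0j
    for c in coeffs:
        total = total * z + c          # Horner evaluation
    return total

# Deflate the known root Z = 1: remaining factor is Z^2 - (7/11)Z + 2/11.
b, c = -7.0 / 11.0, 2.0 / 11.0
disc = cmath.sqrt(b * b - 4.0 * c)     # = i*sqrt(39)/11 (negative discriminant)
z2 = (-b + disc) / 2.0                 # (7 + i*sqrt(39))/22
z3 = (-b - disc) / 2.0                 # (7 - i*sqrt(39))/22

roots = [1.0 + 0.0j, z2, z3]
```

All three moduli are at most one, which is exactly the hypothesis of theorem 4.1.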

We mentioned earlier that we would make a statement concerning the recommended number of iterations of the corrector. This is often an important consideration when discussing so-called predictor-corrector methods.

Since the corrector (2.17) is stably convergent of order three, this means that it would be wise to continue our iterations only until

(4.28) |S^{(k)}_{ℓm} - S_{ℓm}| ≤ M h⁴

for some constant M, where S^{(k)}_{ℓm} denotes the k-th iterative value of the corrector while S_{ℓm} denotes the actual solution of the corrector. The number of iterations needed to bring this about is noted in the following lemma.

Lemma 4.2: Under the assumptions of lemma 3.5, using (2.14) as predictor and (2.17) as corrector, (4.28) holds after one iteration.

Proof:

|S^{(k)}_{ℓm} - S_{ℓm}| ≤ 25H₁C₁h |S^{(k-1)}_{ℓm} - S_{ℓm}|

≤ (25H₁C₁)^k h^k |S^{(0)}_{ℓm} - S_{ℓm}|

= (25H₁C₁)^k h^k |S^{(0)}_{ℓm} - Z_{ℓm} + Z_{ℓm} - S_{ℓm}|

≤ (25H₁C₁)^k h^k (C₂h³ + B h³)

≤ M h^{k+3}.

Hence if we choose k = 1 we have the desired result (4.28).
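The economy asserted by lemma 4.2 — a single application of the corrector already attains the corrector's order — is familiar from predictor-corrector methods for ordinary differential equations. As a hedged illustration (a standard second-order Adams pair on y' = -y, not the thesis predictor (2.14) and corrector (2.17)), one PECE sweep per step preserves the corrector's order: halving h divides the error by about four.

```python
import math

def pece(h, t_end=1.0):
    """AB2 predictor + one trapezoidal correction (PECE) for y' = -y, y(0) = 1."""
    f = lambda y: -y
    y_prev, y = 1.0, math.exp(-h)      # exact value used as the one starting value
    t = h
    while t < t_end - 1e-12:
        # Predict with two-step Adams-Bashforth, then correct once (trapezoid).
        y_pred = y + h * (1.5 * f(y) - 0.5 * f(y_prev))
        y_corr = y + 0.5 * h * (f(y) + f(y_pred))
        y_prev, y = y, y_corr
        t += h
    return y

err1 = abs(pece(0.01) - math.exp(-1.0))
err2 = abs(pece(0.005) - math.exp(-1.0))
ratio = err1 / err2                    # close to 4 for a second-order method
```

A second correction would roughly square the iteration error but cannot improve the order, which is why one iteration is the recommended choice.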

V. DISCUSSION OF STARTING TECHNIQUES AND CHANGE OF STEP-SIZE

A number of possibilities for acquiring the needed starting values for the algorithm present themselves. A "Runge-Kutta type" was briefly considered, but the high number of functional evaluations that appeared necessary for a Runge-Kutta adaptation to the problem caused us to look elsewhere. A number of predictor-corrector type starting routines using lower order operators, which were more in the spirit of our algorithm, were outlined in [2]; however, in keeping with our desire to maintain an "easily applied" high accuracy method, we suggest a simple one-step starting technique. By a one step starting method we mean a method using only the information one diagonal behind the desired diagonal, e.g. [6] pp. 212-214, [15]. The advantages of using such a starting technique include:

a) the ease of application,

b) the known convergence and stability of many one step methods,

c) the availability of numerical programs for their application.

The main disadvantage is the large number of grid points, i.e. small step-size, necessary to give back values of sufficient accuracy. For instance, assume we want a step-size of h for our main predictor-corrector technique, and assume we are using a one step starting procedure which uses first forward differences. That is, partial derivatives will be approximated by

(5.1) f'(x_p) = (1/h_s) Δf(x_{p-1}).

The truncation error in such a procedure would be of the form

(5.2) |E| ≤ M h_s².
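The quadratic dependence of (5.2) on h_s is the usual local-error behavior of a one-step method built on first forward differences. A hedged sketch (our illustration on y' = y, not the thesis starting routine): halving the step divides the single-step error by about four, so the error behaves like M h_s².

```python
import math

def one_step_error(h):
    """Error of a single forward-difference (Euler) step for y' = y, y(0) = 1."""
    return abs(math.exp(h) - (1.0 + h))   # exact solution minus one Euler step

e1 = one_step_error(0.1)
e2 = one_step_error(0.05)
ratio = e1 / e2   # close to 4: the error behaves like M * h**2
```

With the choice h_s = h² made below, the starting error is thus of size M h⁴, matching the accuracy of the main method.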

Hence to achieve sufficient accuracy in our back values we would choose h_s = h². The number of calculations necessary to get started, even with a step-size of h = .1 for the main predictor-corrector, would be

(5.3) Σ_{i=0}^{20} (100 - i) = 1890.

This would take us out to the second diagonal ξ + η = 1 + .2, from where the main predictor-corrector would take over. As grim as this may seem, we note that the number of calculations needed to start may be greatly reduced as follows.

Assume the overall step-size desired is h. Then we choose for a starting step-size on the initial curve

(5.4) h_s = h/4^i,

where i is such that 4^i > 1/h. This particular choice accomplishes two things; one is that

h_s < h²

and thus the desired accuracy in back values is achieved. The other will be discussed below.

Using the h_s step-size on the initial curve, we use the one step technique to compute the solutions at the (h_s)^{-1} points on the diagonal ξ + η = 1 + h_s and also at the (h_s)^{-1} - 1 points on the diagonal ξ + η = 1 + 2h_s. We then have three back values on each characteristic, and therefore we shift to our predictor-corrector algorithm. Using the predictor-corrector technique we compute the (h_s)^{-1} - 2 and (h_s)^{-1} - 3 points on the diagonals ξ + η = 1 + 3h_s and ξ + η = 1 + 4h_s respectively. Hereafter we will refer to the diagonal ξ + η = 1 + kh_s as the kh_s diagonal. So far all the calculations have been with the step-size h_s, while the desired step-size for the algorithm is h = 4^i h_s. Hence we now discard the h_s diagonal and use only the values on the initial curve and the 2h_s and 4h_s diagonals to compute the values on the 6h_s diagonal, then use the 2h_s, 4h_s, 6h_s values to compute the 8h_s values. At this point we can discard the 2h_s and 6h_s diagonals and use the initial curve and the 4h_s and 8h_s diagonals to compute the values on the 12h_s diagonal, then in turn use the 4h_s, 8h_s, 12h_s diagonals to compute values on the 16h_s diagonal. We continue to extend the step-size in this manner until we achieve the h step-size. It now becomes evident why we chose h_s in the manner of (5.4), i.e. so the expanding step method will pass through the diagonals ξ + η = 1 + h and ξ + η = 1 + 2h. Once we have reached the 2h diagonal in our expanding step process, we quit expanding and simply continue with the step-size h.

We note the number of calculations required for the h = .1 step is 576, as opposed to the 1890 in (5.3).
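Both counts can be reproduced mechanically. The sketch below (our reconstruction of the bookkeeping, with the per-diagonal point counts as assumed in (5.3)) recovers the 1890 figure and shows why the choice h_s = h/4^i makes the doubling schedule h_s, 2h_s, 4h_s, ... land exactly on the h and 2h diagonals:

```python
# Naive start for h = .1: h_s = h**2 = .01, so the first diagonal holds about
# 100 points and 21 diagonals (out to xi + eta = 1 + .2) are filled one
# h_s-step at a time, each diagonal one point shorter than the last.
naive = sum(100 - i for i in range(21))          # the 1890 of (5.3)

# Expanding start for h = .1: (5.4) gives h_s = h / 4**i with 4**i > 1/h,
# so i = 2 and h = 16 * h_s.  Working in integer units of h_s, the step
# doubles at each stage of the expanding process.
h_units = 4 ** 2                                 # h expressed in units of h_s
step, visited = 1, []
while step <= 2 * h_units:
    visited.append(step)                         # diagonals 1, 2, 4, ..., 32
    step *= 2
```

Because h = 4^i h_s is a power of two times h_s, both the h diagonal and the 2h diagonal appear in the doubling schedule, which is the second property promised for (5.4).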

If at any time during the calculations we wish to increase the step-size (double it or more), this process may be employed again.

We have advocated our algorithm as a practical, easy to apply method for the solution of certain hyperbolic partial differential equations, and for this reason we feel a word of explanation is in order.

In the above procedure, if a step-size of h = .01 is desired for the over-all algorithm, then (5.4) requires 4^i > 1/h = 100; hence for h = .01 we need i = 4 and thus

h_s = .01/256 = 1/25,600.

Therefore we would require 25,601 data points on the initial curve to get

started. This is a somewhat prohibitive number, but we mention now that the choice for the number of points was arrived at so as to satisfy certain "sufficiency" requirements within the development of our theory, and it is very possible that a much smaller number would be adequate. We would suggest, however, that the method of choosing the starting values should continue to contain the factor which makes the expanding step size pass through the h and 2h diagonals. This is accomplished by simply choosing i less than required by (5.4). We also mention that, due to the high accuracy of the method, a step-size in the overall technique as small as .01 may not be necessary in most practical situations.

If at a certain point in our calculations the difference between the predicted value and the corrected value becomes so large as to warrant a change in the step-size, we suggest the following procedure. From the new desired step-size we calculate a new starting step-size h_s, then use an interpolation technique, e.g. Lagrangian interpolation, to find newly spaced data points on the last satisfactorily computed diagonal. From these points we use the one step technique to compute two additional diagonals, then shift to the predictor-corrector again as described earlier in this chapter.
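The re-spacing step can be sketched with the classical Lagrange interpolation formula; the helper below is our illustration (hypothetical function and variable names, not taken from the thesis program). Four equally spaced values on the last good diagonal reproduce any cubic exactly:

```python
def lagrange(nodes, values, x):
    """Evaluate the Lagrange interpolating polynomial through (nodes, values) at x."""
    total = 0.0
    for j, (xj, yj) in enumerate(zip(nodes, values)):
        term = yj
        for m, xm in enumerate(nodes):
            if m != j:
                term *= (x - xm) / (xj - xm)   # Lagrange basis factor
        total += term
    return total

# Re-space sample data on a diagonal: old spacing 0.1, new spacing 0.075 (say).
old_x = [0.0, 0.1, 0.2, 0.3]
old_u = [x**3 - 2 * x + 1 for x in old_x]      # sample values along the diagonal
new_x = [0.075 * k for k in range(5)]
new_u = [lagrange(old_x, old_u, x) for x in new_x]
```

In practice one would interpolate each of the five carried variables (x, y, u, p, q) at the newly spaced grid points before restarting the one step technique.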

VI. NUMERICAL EXAMPLES

To illustrate our algorithm we solve two hyperbolic partial differential equations, the first of which is linear and the second a genuine quasi-linear partial differential equation. Both problems are rather simple in nature, but this facilitates finding the transformation to the ξη-plane so that we can compare actual and calculated solutions.

Example 1. Consider the partial differential equation

(6.1) U_xx - 4x² U_yy - 6x = 0.

This has the general form of the partial differential equation (2.1) with a = 1, b = 0, c = -4x², e = -6x. We have

(6.2) b² - ac = 4x² > 0

for all x ≠ 0; hence the equation is hyperbolic away from the origin.

We consider the initial curve as

(6.3) C = {(x,y) | 1 ≤ x ≤ 2, y = 0}

with u = x³, p = 3x², q = 2 on C. Solving analytically, the slope of the characteristic is given by

dy/dx = (b ± √(b² - ac))/a = ±2x,

and therefore the characteristics are

(6.4) y = x² + c₁,  y = c₂ - x².

Hence we make the transformation

ξ' = y + x²,  η' = y - x²,

which in turn changes (6.1) into

2U_ξ' - 2U_η' - 8(ξ' - η')U_ξ'η' - 6x = 0,

or

(6.5) U_ξ'η' = (U_ξ' - U_η')/(4(ξ' - η')) - 3(ξ' - η')^{1/2}/(4√2 (ξ' - η')).

The actual solution of (6.1) with initial conditions (6.3) is

(6.6) U(x, y) = x³ + 2y,

which we should get by using a method such as Picard's method of successive approximations on (6.5).

The transformations necessary to put the problem into the form suggested in chapter two, i.e. initial curve ξ + η = 1, are given by:

(6.7) ξ = √(x² + y) - 1,  η = -√(x² - y) + 2.

Also then

(6.8) x = [((ξ + 1)² + (η - 2)²)/2]^{1/2},  y = [(ξ + 1)² - (η - 2)²]/2.
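Under our reading of (6.7)/(6.8), the pair is easy to sanity-check numerically: (6.7) sends the initial curve y = 0, 1 ≤ x ≤ 2 onto ξ + η = 1, (6.8) inverts (6.7), and the exact solution (6.6) reproduces the true value listed for (ξ, η) = (.3, 1) in Table 10.1. A sketch:

```python
import math

def to_xi_eta(x, y):
    """(6.7): xi = sqrt(x^2 + y) - 1,  eta = 2 - sqrt(x^2 - y)."""
    return math.sqrt(x * x + y) - 1.0, 2.0 - math.sqrt(x * x - y)

def to_xy(xi, eta):
    """(6.8): the inverse transformation back to the xy-plane."""
    x = math.sqrt(((xi + 1.0) ** 2 + (eta - 2.0) ** 2) / 2.0)
    y = ((xi + 1.0) ** 2 - (eta - 2.0) ** 2) / 2.0
    return x, y

# A point of the initial curve y = 0 maps onto the diagonal xi + eta = 1.
xi, eta = to_xi_eta(1.5, 0.0)

# Round trip through (6.8)/(6.7), and the exact solution U = x^3 + 2y
# at the grid point (xi, eta) = (.3, 1) of Table 10.1.
x, y = to_xy(0.3, 1.0)
u = x ** 3 + 2.0 * y
```

The computed u agrees with the tabulated true value 2.249845 to the figures printed.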

The actual values of x, y, u, p, q and the computed values are compared in Table 10.1 of the appendix.

We note that the I.B.M. 360 Model 50 was used and that the machine time needed for the actual calculation of the five variables at 36 grid points was 5.96 seconds. It should be mentioned that this time also includes the evaluation of the actual solutions using (6.8) and (6.6).

Example 2. Consider the partial differential equation

(6.9) 3U_xx - 2U U_xy - U² U_yy = -3 sin x.

This has the general form of the partial differential equation (2.1) with a = 3, b = -u, c = -u², e = 3 sin x.

(6.10) b² - ac = u² + 3u² = 4u² > 0

for all u ≠ 0; hence (6.9) is hyperbolic away from u = 0.

We consider the initial curve as

(6.11) C = {(x,y) | π/2 ≤ x ≤ π, y = 0}

with u = sin x, p = cos x, q = 0 on C. The actual solution is

(6.12) u = sin x.

We note that attempting to solve analytically immediately leads to trouble, since the slope of the characteristics will be given by

(6.13) dy/dx = (b ± √(b² - ac))/a = (-u ± 2u)/3, i.e. dy/dx = u/3 or dy/dx = -u,

which depends on the unknown solution u. Knowing the solution, we see that the characteristics are given as

(6.14) y = cos x + c₁,  y = -(1/3) cos x + c₂.

The transformation necessary to put the problem in the form (2.6)/(2.8) is

(6.15) ξ = (2/π) cos⁻¹(cos x - y) - 1,  η = 2 - (2/π) cos⁻¹(3y + cos x).
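As with example 1, our reading of (6.15) can be checked against Table 10.2: on the initial curve y = 0 it collapses to ξ = 2x/π - 1, η = 2 - 2x/π (so ξ + η = 1), and at the tabulated true point for (ξ, η) = (.3, 1) it returns the grid coordinates. A sketch:

```python
import math

def to_xi_eta(x, y):
    """(6.15): xi = (2/pi) acos(cos x - y) - 1,  eta = 2 - (2/pi) acos(3y + cos x)."""
    xi = (2.0 / math.pi) * math.acos(math.cos(x) - y) - 1.0
    eta = 2.0 - (2.0 / math.pi) * math.acos(3.0 * y + math.cos(x))
    return xi, eta

# On the initial curve y = 0, pi/2 <= x <= pi, the map gives xi + eta = 1.
xi0, eta0 = to_xi_eta(0.75 * math.pi, 0.0)

# True grid point from Table 10.2 for (xi, eta) = (.3, 1).
xi, eta = to_xi_eta(1.918234, 0.1134971)
```

The constants cos x - y and 3y + cos x are exactly the quantities held fixed along the two characteristic families (6.14).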

The actual values of x, y, u, p, q and the computed values are compared in Table 10.2 of the appendix.

The machine time for the calculation of the five variables at the 36 grid points is 5.91 seconds. Again, this time includes the time necessary to compute the actual results using (6.12) and the x and y found from (6.15).

In both examples a step-size of h = .1 in the (ξ,η)-plane was used, while in addition example two was also run using a step of h = .05.

VII. SUMMARY

We have demonstrated an algorithm which we feel is easy to apply and yet gives high enough accuracy so that a somewhat realistic step-size may be used. The truncation error has been exhibited, as has the stability of the method. Also, procedures for obtaining starting values and for increasing or decreasing the step-size of the computation have been discussed. The examples of chapter six, though admittedly simple in nature, do show the method to be applicable.

To assume that the solving of quasi-linear hyperbolic second order partial differential equations is now complete would of course be foolish. The physical situations which often give rise to the hyperbolic equations also cause many difficulties which our method may not be able to handle.

Therefore, though we admit shortcomings in our method, we feel that it can be a significant contribution to the solving of many hyperbolic partial differential equations, and its simplicity and ease of application should make it especially desirable for the technicians and engineers who must find solutions to hyperbolic partial differential equations as they arise in their various applied fields.

Future work in this area could include a process for determining a realistic step-size which in turn would not cause great instability, and also some possible adaptation of the technique for systems of partial differential equations. Looking into certain perturbations of the coefficients of the predictor to cause it to become stable might prove interesting. Further processes for changing step-size would always be welcome.

VIII. BIBLIOGRAPHY

1. Abbott, Michael B. An introduction to the method of characteristics. New York, N.Y., American Elsevier Publishing Company, Inc. 1966.

2. Collatz, L. The numerical treatment of differential equations. 3rd ed. Berlin, Germany, Springer-Verlag OHG. 1960.

3. Courant, R. and D. Hilbert. Methoden der mathematischen Physik. Bd. 2. Berlin, Germany, Springer-Verlag OHG. 1937.

4. Dahlquist, G. Convergence and stability in the numerical integration of ordinary differential equations. Mathematica Scandinavica 4: 33-53. 1956.

5. Douglis, A. Some existence theorems for hyperbolic systems of partial differential equations in two independent variables. Communications on Pure and Applied Mathematics 5: 119-154. 1952.

6. Fox, L. Numerical solution of ordinary and partial differential equations. London, England, Pergamon Press. 1962.

7. Garabedian, P. R. Partial differential equations. New York, N.Y., John Wiley and Sons, Inc. 1964.

8. Gelfond, A. O. Differenzenrechnung. Berlin, Germany, VEB Deutscher Verlag der Wissenschaften. 1958.

9. Hamming, R. W. Numerical methods for scientists and engineers. New York, N.Y., McGraw-Hill Book Co., Inc. 1962.

10. Henrici, Peter. Discrete variable methods in ordinary differential equations. New York, N.Y., John Wiley and Sons, Inc. 1962.

11. Hildebrand, F. B. Introduction to numerical analysis. New York, N.Y., McGraw-Hill Book Co., Inc. 1956.

12. Lax, P. and R. Richtmyer. Survey of the stability of linear finite difference equations. Communications on Pure and Applied Mathematics 9: 267-293. 1956.

13. Sauer, R. Anfangswertprobleme bei partiellen Differentialgleichungen. 2. Aufl. Berlin, Germany, Springer-Verlag OHG. 1958.

14. Stetter, Hans J. On the convergence of characteristic finite- difference methods of high accuracy for quasi-linear hyperbolic equations. Numerische Mathematik 3: 321-344. 1961.

15. Thomas, L. Computation of one-dimensional compressible flows including shocks. Communications on Pure and Applied Mathematics 7: 195-206. 1954.

IX. ACKNOWLEDGMENT

The author wishes to express his appreciation to his major professor, Dr. R. J. Lambert, for the inspiration and suggestions given him during the preparation of this thesis. The author also wishes to thank his wife,

Marlys, for her patience, encouragement and helpfulness not only during the preparation of this thesis but throughout his graduate studies.

Sincere thanks must also be given to Dr. T. R. Rogge for suggesting the area of study and to Martha Marks, who prepared the Fortran program used to compute the illustrations.

X. APPENDIX

Table 10.1 gives selected values of the solution of example 1. The arrangement in the table has the predicted solution on the first line and the corrected solution on the second line, followed by the true solution on line three. The step-size is h = .1 in the ξη-plane, which for this example also corresponded to h = .1 in the xy-plane.

Table 10.2 gives selected values of the solution of example 2. The arrangement differs from table 10.1 in that the solutions were calculated for two different step sizes. Therefore the first three lines are predicted, corrected, and true values corresponding to a step-size of h = .1 in the ξη-plane, while the next two lines give first the corrected value and then the predicted value for a step-size of .05 in the ξη-plane.

Table 10.1. Example 1

(ξ, η)    x    y    u    p    q

1.159710 .3450069 2.249858 4.034993 1.999974 (.3, 1) 1.159742 .3449935 2.249829 4.034983 2.000003 1.159740 .3449988 2.249845 4.034993 2.000000

1.258941 .3750066 2.745449 . 4.754964 1.999979 (.4,.9) 1.258966 .3749961 2.745441 4.754978 2.000000 1.258966 .3750000 2.745456 4.754988 2.000000

1.358286 .4050178 3.316107 5.534983 1.999988 (.5,.8) 1.358306 .4049978 3.316058 5.534982 2.000002 1.358306 .4050006 3.316073 5.534992 2.000000

1.457718 .4350194 3.967703 6.374958 1.999987 (.6,.7) 1.457736 .4350023 3.967683 6.374977 1.999998 1.457736 .4349999 3.967686 6.374990 2.000000

1.557228 .4650115 4.706324 7.274977 1.999988 (.7,.6) 1.557240 .4649972 4.706286 7.274979 2.000000 1.557240 .4650001 4.706304 7.274992 2.000000

1.656790 .4949865 5.537869 8.234965 1.999989 (.8,.5) 1.656803 .4949941 5.537892 8.234983 2.000002 1.656803 .4949984 5.537915 8.234989 2.000000

1.756406 .5250024 6.468520 9.254961 1.999989 (.9,.4) 1.756415 .5249996 6.468533 9.254980 1.999998 1.756415 .5250000 6.468532 9.254983 2.000000

1.856063 .5550298 7.504211 10.33499 1.999995 ( 1,.3) 1.856068 .5550003 7.504116 10.33498 1.999999 1.856070 .5550007 7.504159 10.33499 2.000000

1.334145 .7800083 3.934813 5.339972 1.999992 (.6, 1) 1.334174 .7799730 3.934711 5.339944 2.000018 1.334165 .7799987 3.934808 5.339992 2.000000

1.431768 .8400106 4.615141 6.149949 1.999991 (.7,.9) 1.431784 .8399801 4.615058 6.149946 2.000006 1.431780 .8400001 4.615144 6.149988 2.000000

1.529688 .8999967 5.379457 7.019955 2.000005 (.8,.8) 1.529706 .8999586 5.379327 7.019941 2.000024 1.529704 .8999991 5.379497 7.019983 2.000000

1.627868 .9600067 6.233860 7.949941 1.999937 (.9,.7) 1.627878 .9599857 6.233778 7.949935 2.000000 1.627881 .9600000 6.233879 7.949989 2.000000

Table 10.1. (Continued)

(ξ, η)    x    y    u    p    q

1.726263 1.020024 7.184367 8.939964 1.999990 ( 1..6) 1.726260 1.019981 7.184156 8.939930 2.000001 1.726266 1.020000 7.184270 8.939990 2.000000

1.518218 1.304967 6.109352 6.914911 2.000001 (.9, 1) 1.518226 1.304938 6.109266 6.914897 2.000032 1.518221 1.304998 6.109494 6.914992 2.000000

1.613981 1.394972 6.994365 7.814896 2.000006 ( 1,.9) 1.613993 1.394932 6.994164 7.814869 2.000025 1.614000 1.395000 6.994466 7.814990 2.000000

1.581121 1.499970 6.952685 7.499884 2.000010 ( 1, 1) 1.581133 1.499903 6.952461 7.499857 2.000044 1.581138 1.500000 6.952844 7.499997 2.000000

Table 10.2. Example 2

(ξ, η)    x    y    u    p    q

(.3, 1)  1.918342  .1134700  .9396315  -.3404970  .0000791
         1.918211  .1134942  .9403558  -.3404834  .0000305
         1.918234  .1134971  .9402479  -.3404904  0
         1.918219  .1134989  .9402829  -.3404932  .0000133
         1.918249  .1134951  .9402133  -.3404922  .0000038

(.4,.9)  2.071560  .1077807  .8766435  -.4799467  .0001798
         2.071349  .1078393  .8774123  -.4799432  .0000467
         2.071388  .1078377  .8772985  -.4799448  0
         2.071367  .1078390  .8773371  -.4799433  .0000152
         2.071401  .1078327  .8772982  -.4799533  .0000190

(.5,.8)  2.224112  .0994324  .7935173  -.6076003  .0002987
         2.223746  .0995289  .7943849  -.6075785  .0000696
         2.223810  .0995225  .7942562  -.6075829  0
         2.223780  .0995279  .7942991  -.6075775  .0000333
         2.223819  .0995173  .7942190  -.6075809  .0000133

(.6,.7)  2.375470  .0886332  .6928014  -.7202894  .0004535
         2.374864  .0887687  .6938648  -.7202513  .0001029
         2.374971  .0887567  .6937056  -.7202585  0
         2.374916  .0887613  .6937733  -.7202463  .0000372
         2.374986  .0887499  .6936760  -.7202516  .0000172

(.7,.6)  2.524788  .0756434  .5779075  -.8152576  .0006774
         2.523677  .0758207  .5794105  -.8151907  .0001560
         2.523870  .0758054  .5791797  -.8151997  0
         2.523771  .0758136  .5792828  -.8151804  .0000631
         2.523908  .0757959  .5791311  -.8151981  .0000383

Table 10.2. (Continued)

(ξ, η)    x    y    u    p    q

(.8,.5)  2.670330  .0608187  .4535474  -.8901504  -.0008347
         2.667874  .0609967  .4562616  -.8900584  .0001223
         2.668290  .0609873  .4558281  -.8900677  0
         2.668111  .0609988  .4560099  -.8900490  .0000774
         2.668376  .0609760  .4559314  -.8900666  .0000668

(.9,.4)  2.808777  .0447919  .3265280  -.9429348  -.0007790
         2.801074  .0446090  .3340214  -.9430150  -.0005164
         2.802388  .0446680  .3327369  -.9430196  0
         2.801827  .0446580  .3332767  -.9430048  .0000909
         2.802629  .0446680  .3325043  -.9430094  .0000034

( 1,.3)  2.923045  .0275705  .2169504  -.9724282  .0028429
         2.904961  .0271398  .2343387  -.9727547  .0015220
         2.907610  .0272485  .2318525  -.9727509  0
         2.906131  .0271704  .2332630  -.9727772  .0011335
         2.908156  .0273657  .2314353  -.9726878  .0006227

(.6, 1)  2.222999  .2022011  .7944250  -.6067734  .0001707
         2.222353  .2022696  .7956838  -.6067268  .0002317
         2.222775  .2022539  .7948844  -.6067607  0
         2.222667  .2022561  .7950125  -.6067380  .0000093
         2.222730  .2022492  .7949448  -.6067477  .0000553

(.7,.9)  2.356923  .1835930  .7063426  -.7073756  .0001828
         2.355836  .1836574  .7078475  -.7073134  .0002403
         2.356554  .1836433  .7068517  -.7073616  0
         2.356392  .1836427  .7070227  -.7073224  .0000600
         2.356483  .1836344  .7069292  -.7073351  .0000168

Table 10.2. (Continued)

(ξ, η)    x    y    u    p    q

(.8,.8)  2.483130  .1604928  .6117725  -.7905350  -.0000944
         2.481133  .1604701  .6138763  -.7904930  -.0000182
         2.482495  .1605099  .6124035  -.7905452  0
         2.482235  .1605070  .6126375  -.7905015  .0000355
         2.482374  .1604989  .6125221  -.7905163  -.0000198

(.9,.7)  2.596180  .1335788  .5189552  -.8541352  -.000576
         2.591759  .1331470  .5227890  -.8542940  -.0015912
         2.594927  .1334248  .5198410  -.8542630  0
         2.594356  .1333787  .5203495  -.8542367  -.0002665
         2.594544  .1333867  .5202103  -.8542483  -.0002286

( 1,.6)  2.685350  .1035987  .4414014  -.8967326  .0016931
         2.679547  .1026709  .4457511  -.8971737  .0029313
         2.683608  .1030540  .4421412  -.8969453  0
         2.682636  .1029475  .4429722  -.8970230  .0009252
         2.682629  .1029891  .4430990  -.8970028  .0007391

(.9, 1)  2.403648  .2466838  .6731681  -.7408557  .0013446
         2.401974  .2465099  .6745157  -.7409108  .0018932
         2.405005  .2469223  .6717663  -.7407654  0
         2.404514  .2468657  .6721439  -.7407354  .0002163
         2.404554  .2468621  .6721168  -.7407476  .0002378

( 1,.9)  2.477965  .2107419  .6167945  -.7892860  .0017356
         2.476727  .2104149  .6171340  -.7895852  .0029100
         2.480151  .2108919  .6142549  -.7891075  0
         2.479439  .2107865  .6147785  -.7892106  .0006997
         2.479395  .2108064  .6149278  -.7891995  .0006306

Table 10.2. (Continued)

(ξ, η)    x    y    u    p    q

( 1, 1)  2.416325  .2497352  .6642475  -.7503090  -.0021606
         2.415678  .2495222  .6640882  -.7505503  -.0027941
         2.418857  .2500000  .6614384  -.7499994  0
         2.418188  .2498966  .6619072  -.7501077  -.0006428
         2.418161  .2499178  .6620398  -.7500986  -.0005627

Figure 1. Characteristics

Figure 2. Characteristics