AERO97028

Introductory Mathematics

2020/2021

Table of Contents:

Introductory Mathematics
1 Function Expansion & Transforms
1.1 Power Series
1.1.1 Taylor Series
1.1.2 Fourier Series
1.1.3 Complex Fourier series
1.1.4 Termwise Integration and Differentiation
1.1.5 Fourier series of Odd and Even functions
1.2 Integral Transforms
1.2.1 Fourier Transform
1.2.2 Laplace Transform
2. Vector Spaces, Vector Fields & Operators
2.1 Scalar (inner) product of vector fields
2.1.1 Lp norms
2.2 Vector product of vector fields
2.3 Vector operators
2.3.1 Gradient of a scalar field
2.3.2 Divergence of a vector field
2.3.3 Curl of a vector field
2.4 Repeated Vector Operations – The Laplacian
3. Linear Algebra, Matrices & Eigenvectors
3.1 Basic definitions and notation
3.2 Multiplication of matrices and multiplication of vectors and matrices
3.2.1 Matrix multiplication
3.2.2 Traces and determinants of square Cayley products
3.2.3 The Kronecker product
3.3 Matrix Rank and the Inverse of a full rank matrix
3.3.1 Full Rank matrices
3.3.2 Solutions of linear equations
3.3.3 Preservation of positive definiteness
3.3.4 A lower bound on the rank of a matrix product
3.3.5 Inverse of products and sums of matrices
3.4 Eigensystems
3.5 Diagonalisation of symmetric matrices
4. Generalised Vector Calculus – Integral Theorems
4.1 The gradient theorem for line integrals
4.2 Green's Theorem
4.3 Stokes' Theorem
4.4 Divergence Theorem
5. Ordinary Differential Equations
5.1 First-Order Linear Differential Equations
5.2 Second-Order Linear Differential Equations
5.3 Initial-Value and Boundary-Value Problems
5.4 Non-homogeneous linear equations
6. Partial Differential Equations
6.1 Introduction to Differential Equations
6.2 Initial Conditions and Boundary Conditions
6.3 Linear and Nonlinear Equations
6.4 Examples of PDEs
6.5 Three types of Second-Order PDEs
6.6 Solving PDEs using Separation of Variables Method
6.6.1 The Heat Equation
6.6.2 The Wave Equation

Introductory Mathematics

What is Mathematics?

Different schools of thought, particularly in philosophy, have put forth radically different definitions of mathematics. All are controversial and there is no consensus.

Leading definitions:

1. Aristotle defined mathematics as: The science of quantity. In Aristotle's classification of the sciences, discrete quantities were studied by arithmetic, continuous quantities by geometry.
2. Auguste Comte's definition tried to explain the role of mathematics in coordinating phenomena in all other fields: The science of indirect measurement, 1851. The "indirectness" in Comte's definition refers to determining quantities that cannot be measured directly, such as the distance to planets or the size of atoms, by means of their relations to quantities that can be measured directly.
3. Benjamin Peirce: Mathematics is the science that draws necessary conclusions, 1870.
4. Bertrand Russell: All Mathematics is Symbolic Logic, 1903.
5. Walter Warwick Sawyer: Mathematics is the classification and study of all possible patterns, 1955.

Most contemporary reference works define mathematics mainly by summarizing its main topics and methods:

6. Oxford English Dictionary: The abstract science which investigates deductively the conclusions implicit in the elementary conceptions of spatial and numerical relations, and which includes as its main divisions geometry, arithmetic, and algebra, 1933.
7. American Heritage Dictionary: The study of the measurement, properties, and relationships of quantities and sets, using numbers and symbols, 2000.

Other playful, metaphorical, and poetic definitions:

8. Bertrand Russell: The subject in which we never know what we are talking about, nor whether what we are saying is true, 1901.
9. Charles Darwin: A mathematician is a blind man in a dark room looking for a black cat which isn't there.
10. G. H. Hardy: A mathematician, like a painter or poet, is a maker of patterns. If his patterns are more permanent than theirs, it is because they are made with ideas, 1940.


Field of Mathematics

Mathematics can, broadly speaking, be subdivided into the study of quantity, structure, space, and change (i.e. arithmetic, algebra, geometry, and analysis). In addition to these main concerns, there are also subdivisions dedicated to exploring links from the heart of mathematics to other fields: to logic, to set theory (foundations), to the empirical mathematics of the various sciences (applied mathematics), and more recently to the rigorous study of uncertainty.

Mathematical awards

Arguably the most prestigious award in mathematics is the Fields Medal, established in 1936 and now awarded every four years. The Fields Medal is often considered a mathematical equivalent to the Nobel Prize.

The Wolf Prize in Mathematics, instituted in 1978, recognizes lifetime achievement, and another major international award, the Abel Prize, was introduced in 2003. The Chern Medal was introduced in 2010 to recognize lifetime achievement. These accolades are awarded in recognition of a particular body of work, which may be innovational, or provide a solution to an outstanding problem in an established field.

A famous list of 23 open problems, called Hilbert's problems, was compiled in 1900 by the German mathematician David Hilbert. This list achieved great celebrity among mathematicians, and at least nine of the problems have now been solved. A new list of seven important problems, titled the Millennium Prize Problems, was published in 2000. A solution to each of these problems carries a $1 million reward, and only one (the Riemann hypothesis) is duplicated from Hilbert's problems.

Mathematics in Aeronautics

Mathematics in aeronautics includes calculus, differential equations, and linear algebra, among other topics.

Calculus¹

Calculus has been an integral part of man's intellectual training and heritage for the last twenty-five hundred years. Calculus is the mathematical study of change, in the same way that geometry is the study of shape and algebra is the study of operations and their application to solving equations. It has two major branches: differential calculus (concerning rates of change and slopes of curves) and integral calculus (concerning accumulation of quantities and the areas under and between curves); these two branches are related to each other by the fundamental theorem of calculus. Both branches make use of the fundamental notions of convergence of infinite sequences and infinite series to a well-defined limit. Generally, modern calculus is considered to have been developed in the 17th century by Isaac Newton and Gottfried Leibniz;

¹ Extracted from: Boyer, Carl Benjamin. The History of the Calculus and Its Conceptual Development. Courier Dover Publications, 1949.

today calculus has widespread uses in science, engineering and economics and can solve many problems that algebra alone cannot.

Differential and integral calculus is one of the great achievements of the human mind. The fundamental definitions of the calculus, those of the derivative and the integral, are now so clearly stated in textbooks on the subject, and the operations involving them are so readily mastered, that it is easy to forget the difficulty with which these basic concepts have been developed. Frequently a clear and adequate understanding of the fundamental notions underlying a branch of knowledge has been achieved comparatively late in its development. This has never been more aptly demonstrated than in the rise of the calculus. The precision of statement and the facility of application which the rules of the calculus early afforded were in a measure responsible for the fact that mathematicians were insensible to the delicate subtleties required in the logical development of the discipline. They sought to establish the calculus in terms of the conceptions found in the traditional geometry and algebra which had been developed from spatial intuition. During the eighteenth century, however, the inherent difficulty of formulating the underlying concepts became increasingly evident, and it then became customary to speak of the "metaphysics of the calculus", thus implying the inadequacy of mathematics to give a satisfactory exposition of the bases. With the clarification of the basic notions (which, in the nineteenth century, was given in terms of precise mathematical terminology) a safe course was steered between the intuition of the concrete in nature (which may lurk in geometry and algebra) and the mysticism of imaginative speculation (which may thrive on transcendental metaphysics). The derivative has throughout its development been thus precariously situated between the scientific phenomenon of velocity and the philosophical noumenon of motion.

The history of integration is similar. On the one hand, it had offered ample opportunity for interpretations by positivistic thought in terms either of approximations or of the compensation of errors, views based on the admitted approximate nature of scientific measurements and on the accepted doctrine of superimposed effects. On the other hand, it has at the same time been regarded by idealistic metaphysics as a manifestation that beyond the finitism of sensory percipiency there is a transcendent infinite which can be but asymptotically approached by human experience and reason. Only the precision of their mathematical definition, the work of the nineteenth century, enables the derivative and the integral to maintain their autonomous position as abstract concepts, perhaps derived from, but nevertheless independent of, both physical description and metaphysical explanation.


1 Function Expansion & Transforms

A series expansion is a representation of a particular function as a sum of powers in one of its variables, or as a sum of powers of another function $f(x)$. There are many areas in engineering, such as the motion of fluids, the transfer of heat or the processing of signals, where quantities of interest are expressed as functions of independent variables. Therefore, it is important for us to understand how to handle such functions in the governing equations. In this chapter, we will cover infinite series, convergence and power series. Furthermore, in engineering, transforming from one form to another plays a major role in analysis and design. An area of continuing importance is the use of Laplace, Fourier, and other transforms in fields such as communication, control and signal processing. These will be covered later in this chapter.

1.1 Power Series

We must therefore give meaning to an infinite sum of constants, using this to give meaning to an infinite sum of functions. When the functions being added are the simple powers $(x - x_0)^k$, the sum is called a Taylor (power) series, and if $x_0 = 0$, a Maclaurin series.

When the functions are trig terms such as 푠푖푛(푘푥) or 푐표푠(푘푥), the series might be a Fourier series, certain infinite sums of trig functions that can be made to represent arbitrary functions, even functions with discontinuities. This type of infinite series is also generalized to sums of other functions such as Legendre polynomials. Eventually, solutions of differential equations will be given in terms of infinite sums of Bessel functions, themselves infinite series.

1.1.1 Taylor Series

Having understood sequences, series and power series, we will now focus on one of the main topics: Taylor polynomials. The Taylor polynomial expansion is given by:

$$f(x) = p_n(x) + \frac{1}{n!}\int_a^x (x - t)^n f^{(n+1)}(t)\,dt \qquad (1)$$

Where the 푛-th degree Taylor polynomial 푝푛(푥) is given by:

$$p_n(x) = f(a) + \frac{f'(a)}{1!}(x - a) + \cdots + \frac{f^{(n)}(a)}{n!}(x - a)^n \qquad (2)$$

When 푎 = 0, the series is also called Maclaurin series.

7

Two conditions apply:

1. $f(x), f^{(1)}(x), \cdots, f^{(n+1)}(x)$ are continuous in a closed interval containing $x = a$.
2. $x$ is any point in the interval.

A Taylor series represents a function at a given value as an infinite sum of terms that are calculated from the values of the function's derivatives.

Therefore, the Taylor series of a function $f(x)$ about a value $a$ is the power series

$$f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}(x - a)^n \qquad (3)$$

Example 1.1: Find the Maclaurin series of the function $f(x) = e^x$ and its radius of convergence.

Solution: If $f(x) = e^x$, then $f^{(n)}(x) = e^x$, so $f^{(n)}(0) = e^0 = 1$ for all $n$.

Therefore, the Taylor series for $f$ at 0 (which is the Maclaurin series) is:

$$f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(0)}{n!}x^n = \sum_{n=0}^{\infty} \frac{x^n}{n!} = 1 + \frac{x}{1!} + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots$$

To find the radius of convergence, let $a_n = x^n/n!$. Then,

$$\left|\frac{a_{n+1}}{a_n}\right| = \left|\frac{x^{n+1}}{(n+1)!}\cdot\frac{n!}{x^n}\right| = \frac{|x|}{n+1} \to 0 < 1$$

So, by the Ratio Test, the series converges for all $x$ and the radius of convergence is $R = \infty$.

The conclusion we can draw from example 1.1 is that if 푒푥 has a power series expansion at 0, then:

$$e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!}$$

So now, under what circumstances is a function equal to the sum of its Taylor series? Or if 푓 has derivatives of all orders, when is it that equation (3) is true?


With any convergent series, this means that $f(x)$ is the limit of the sequence of partial sums. In the case of the Taylor series, the partial sums can be written as in equation (2), where:

$$p_n(x) = f(a) + \frac{f'(a)}{1!}(x - a) + \frac{f''(a)}{2!}(x - a)^2 + \cdots + \frac{f^{(n)}(a)}{n!}(x - a)^n$$

For the example of $f(x) = e^x$, the results from Example 1.1 show that the Taylor polynomials at 0 (or Maclaurin polynomials) with $n = 1, 2$ and $3$ are:

$$p_1(x) = 1 + x$$

$$p_2(x) = 1 + x + \frac{x^2}{2!}$$

$$p_3(x) = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!}$$
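These polynomial approximations are easy to check numerically. The sketch below is a minimal Python illustration (the helper name maclaurin_exp is our own, not from the notes):

```python
import math

def maclaurin_exp(x, n):
    """Evaluate the n-th degree Maclaurin polynomial of e^x at x."""
    return sum(x**k / math.factorial(k) for k in range(n + 1))

# Compare p_1, p_2, p_3 (and a higher-degree polynomial) with exp(1).
for n in (1, 2, 3, 10):
    print(n, maclaurin_exp(1.0, n), math.exp(1.0))
```

As $n$ grows, the partial sums approach $e = 2.71828\ldots$, consistent with $R = \infty$.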

In general, 푓 (푥) is the sum of its Taylor series if

$$f(x) = \lim_{n\to\infty} p_n(x) \qquad (4)$$

If we let

$$R_n(x) = f(x) - p_n(x) \quad\text{so that}\quad f(x) = p_n(x) + R_n(x) \qquad (5)$$

Then, 푅푛(푥) is called the remainder of the Taylor series.

If we can show that $\lim_{n\to\infty} R_n(x) = 0$, then it follows from equation (5) that:

$$\lim_{n\to\infty} p_n(x) = \lim_{n\to\infty}\left[f(x) - R_n(x)\right] = f(x) - \lim_{n\to\infty} R_n(x) = f(x)$$

We have therefore proved the following:

If $f(x) = p_n(x) + R_n(x)$, where $p_n$ is the $n$th-degree Taylor polynomial of $f$ at $a$, and

$$\lim_{n\to\infty} R_n(x) = 0 \qquad (6)$$

for $|x - a| < R$, then $f$ is equal to the sum of its Taylor series on the interval $|x - a| < R$.


Therefore, if 푓 has 푛 + 1 derivatives in an interval 퐼 that contains the number 푎, then for 푥 in 퐼 there is a number 푧 strictly between 푥 and 푎 such that the remainder term in the Taylor series can be expressed as

$$R_n(x) = \frac{f^{(n+1)}(z)}{(n+1)!}(x - a)^{n+1} \qquad (7)$$

Example 1.2: Find the Maclaurin series for sin 푥 and prove that it represents sin 푥 for all 푥.

Solution: First, we arrange our computation in two columns as follows:

$$f(x) = \sin x \qquad f(0) = 0$$
$$f^{(1)}(x) = \cos x \qquad f^{(1)}(0) = 1$$
$$f^{(2)}(x) = -\sin x \qquad f^{(2)}(0) = 0$$
$$f^{(3)}(x) = -\cos x \qquad f^{(3)}(0) = -1$$
$$f^{(4)}(x) = \sin x \qquad f^{(4)}(0) = 0$$

Since the derivatives repeat in a cycle of four, we can write the Maclaurin series as follows:

$$f(0) + \frac{f^{(1)}(0)}{1!}x + \frac{f^{(2)}(0)}{2!}x^2 + \frac{f^{(3)}(0)}{3!}x^3 + \frac{f^{(4)}(0)}{4!}x^4 + \cdots = 0 + \frac{1}{1!}x + \frac{0}{2!}x^2 + \frac{-1}{3!}x^3 + \frac{0}{4!}x^4 + \cdots$$

$$= x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots$$

$$= \sum_{k=0}^{\infty} (-1)^k\frac{x^{2k+1}}{(2k+1)!}$$


You can try with different types of functions, and you will get a Maclaurin series table that looks like this:

$$\frac{1}{1-x} = \sum_{n=0}^{\infty} x^n = 1 + x + x^2 + x^3 + \cdots \qquad R = 1$$
$$e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!} = 1 + \frac{x}{1!} + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots \qquad R = \infty$$
$$\sin x = \sum_{n=0}^{\infty} (-1)^n\frac{x^{2n+1}}{(2n+1)!} = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots \qquad R = \infty$$
$$\cos x = \sum_{n=0}^{\infty} (-1)^n\frac{x^{2n}}{(2n)!} = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \cdots \qquad R = \infty$$
$$\tan^{-1} x = \sum_{n=0}^{\infty} (-1)^n\frac{x^{2n+1}}{2n+1} = x - \frac{x^3}{3} + \frac{x^5}{5} - \frac{x^7}{7} + \cdots \qquad R = 1$$
$$\ln(1+x) = \sum_{n=1}^{\infty} (-1)^{n-1}\frac{x^n}{n} = x - \frac{x^2}{2} + \frac{x^3}{3} - \frac{x^4}{4} + \cdots \qquad R = 1$$

Example 1.3: Find the first 3 terms of the Taylor series for the function $\sin\pi x$ centered at $a = 0.5$. Use your answer to find an approximate value of $\sin\left(\frac{\pi}{2} + \frac{\pi}{10}\right)$.

Solution: Let us first do the derivatives for the function given:

$f(x) = \sin\pi x$. Therefore, $f^{(1)}(x) = \pi\cos\pi x$, $f^{(2)}(x) = -\pi^2\sin\pi x$, $f^{(3)}(x) = -\pi^3\cos\pi x$, $f^{(4)}(x) = \pi^4\sin\pi x$, and so on.

Substituting this back into equation (3), we get:

$$\sin\pi x = \sin\frac{\pi}{2} + \frac{\left(x - \frac{1}{2}\right)^2}{2!}(-\pi^2) + \frac{\left(x - \frac{1}{2}\right)^4}{4!}\pi^4 + \cdots = 1 - \pi^2\frac{\left(x - \frac{1}{2}\right)^2}{2!} + \pi^4\frac{\left(x - \frac{1}{2}\right)^4}{4!} + \cdots$$


Therefore,

$$\sin\pi\left(\frac{1}{2} + \frac{1}{10}\right) = 1 - \pi^2\frac{\left(\frac{1}{10}\right)^2}{2!} + \pi^4\frac{\left(\frac{1}{10}\right)^4}{4!} + \cdots = 1 - 0.0493 + 0.0004 = 0.9511$$
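A quick numerical cross-check of this approximation (a Python sketch, assuming only the standard math library):

```python
import math

x = 0.5 + 0.1                              # so that pi*x = pi/2 + pi/10
approx = 1 - math.pi**2 * (x - 0.5)**2 / math.factorial(2) \
           + math.pi**4 * (x - 0.5)**4 / math.factorial(4)
print(approx)                 # 0.95106...
print(math.sin(math.pi * x))  # 0.95106..., matching to four decimal places
```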

1.1.2 Fourier Series

As mentioned previously, a Fourier series decomposes periodic functions into a sum of sines and cosines (trigonometric terms or complex exponentials). For a periodic function 푓(푥), periodic on [−퐿, 퐿], its Fourier series representation is:

$$f(x) = 0.5a_0 + \sum_{n=1}^{\infty}\left\{a_n\cos\left(\frac{n\pi x}{L}\right) + b_n\sin\left(\frac{n\pi x}{L}\right)\right\} \qquad (8)$$

where 푎0, 푎푛 and 푏푛 are the Fourier coefficients and they can be written as:

$$a_0 = \frac{1}{L}\int_{-L}^{L} f(x)\,dx \qquad (9)$$

$$a_n = \frac{1}{L}\int_{-L}^{L} f(x)\cos\left(\frac{n\pi x}{L}\right)dx \qquad (10)$$

$$b_n = \frac{1}{L}\int_{-L}^{L} f(x)\sin\left(\frac{n\pi x}{L}\right)dx \qquad (11)$$

where the period is $p = 2L$. Equation (8) is also called the real Fourier series.

Two conditions apply:

1. $f(x)$ is piecewise continuous on the closed interval $[-L, L]$. A function is said to be piecewise continuous on the closed interval $[a, b]$ provided that it is continuous there, with at most a finite number of exceptions where, at worst, we would find a removable or jump discontinuity. At both a removable and a jump discontinuity, the one-sided limits $f(t^+) = \lim_{x\to t^+} f(x)$ and $f(t^-) = \lim_{x\to t^-} f(x)$ exist and are finite.


2. A sum of continuous and periodic functions converges pointwise to a possibly discontinuous and non-periodic function. This was a startling realisation for mathematicians of the early nineteenth century.

Example 1.4: Find the Fourier series of $f(x) = x^2$, $-1 < x < 1$.

Solution: In this example, period, 푝 = 2, but we know that 푝 = 2퐿, therefore, 퐿 = 1.

First, let us find $a_0$. From equation (9),

$$a_0 = \frac{1}{L}\int_{-L}^{L} f(x)\,dx = \int_{-1}^{1} x^2\,dx = \frac{2}{3}$$

so that $0.5a_0 = \frac{1}{3}$.

Next, let us find $b_n$. From equation (11),

$$b_n = \frac{1}{L}\int_{-L}^{L} f(x)\sin\left(\frac{n\pi x}{L}\right)dx = \int_{-1}^{1} x^2\sin(n\pi x)\,dx = 0$$

since the integrand is odd over a symmetric interval.

Finally, we will find $a_n$. From equation (10),

$$a_n = \frac{1}{L}\int_{-L}^{L} f(x)\cos\left(\frac{n\pi x}{L}\right)dx = \int_{-1}^{1} x^2\cos(n\pi x)\,dx$$

Solving using integration by parts, we get:

$$a_n = \left.\frac{2x\cos(n\pi x)}{n^2\pi^2}\right|_{-1}^{1} = \frac{2}{n^2\pi^2}\left[(-1)^n + (-1)^n\right] = \frac{4(-1)^n}{n^2\pi^2}$$

Therefore, the Fourier series can be written as:

$$f(x) = \frac{1}{3} + \sum_{n=1}^{\infty} \frac{4(-1)^n}{n^2\pi^2}\cos(n\pi x)$$
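We can sanity-check this result by evaluating a truncated version of the series; a minimal Python sketch (the function name fourier_x2 is our own):

```python
import math

def fourier_x2(x, N=50):
    """Partial sum of the Fourier series of f(x) = x^2 on (-1, 1)."""
    s = 1.0 / 3.0
    for n in range(1, N + 1):
        s += 4 * (-1)**n / (n**2 * math.pi**2) * math.cos(n * math.pi * x)
    return s

for x in (0.0, 0.5, 1.0):
    print(x, fourier_x2(x), x**2)   # the partial sums track x^2 closely
```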

1.1.3 Complex Fourier series

A function $f(x)$ can also be expressed as a complex Fourier series, defined as:

$$f(x) = \sum_{n=-\infty}^{+\infty} c_n e^{in\pi x/L} \qquad (12)$$

where

$$c_n = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(x)e^{-inx}\,dx \qquad (13)$$

(written here for the case $L = \pi$, i.e. period $2\pi$).

We know that:

$$e^{ix} = \cos x + i\sin x$$

$$e^{-ix} = \cos x - i\sin x$$

$$e^{ix} - e^{-ix} = 2i\sin x \qquad (14)$$

$$e^{ix} + e^{-ix} = 2\cos x$$

Therefore, from equation (13),

$$c_n = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(x)e^{-inx}\,dx = \frac{1}{2}\left[\frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\cos nx\,dx - \frac{i}{\pi}\int_{-\pi}^{\pi} f(x)\sin nx\,dx\right]$$


Hence, we can write:

$$c_n = \frac{1}{2}(a_n - ib_n), \quad n > 0; \qquad c_n = \frac{1}{2}(a_{-n} + ib_{-n}), \quad n < 0; \qquad c_0 = \frac{1}{2}a_0$$

Example 1.5: Write the complex Fourier series of $f(x) = 2\sin x - \cos 10x$.

Solution: Here, we can expand the function by substituting the sin and cos functions from equation (14), we get:

$$f(x) = 2\,\frac{e^{ix} - e^{-ix}}{2i} - \frac{e^{10ix} + e^{-10ix}}{2} = \frac{1}{i}e^{ix} - \frac{1}{i}e^{-ix} - \frac{1}{2}e^{10ix} - \frac{1}{2}e^{-10ix}$$

Therefore:

$$c_1 = \frac{1}{i} = -i, \qquad c_{-1} = -\frac{1}{i} = i, \qquad c_{10} = -\frac{1}{2}, \qquad c_{-10} = -\frac{1}{2}$$

1.1.4 Termwise Integration and Differentiation

Parseval’s Identity

Consider a Fourier series below and expand it

$$f(x) = a_0 + \sum_{n=1}^{N}\{a_n\cos nx + b_n\sin nx\} = a_0 + a_1\cos x + b_1\sin x + a_2\cos 2x + b_2\sin 2x + \cdots$$

Squaring it, we get:


$$f^2(x) = a_0^2 + \sum_{n=1}^{N}\left(a_n^2\cos^2 nx + b_n^2\sin^2 nx\right) + 2a_0\sum_{n=1}^{N}\left(a_n\cos nx + b_n\sin nx\right) + 2a_1\cos x\,b_1\sin x + 2a_1\cos x\sum_{n=2}^{N}\left(a_n\cos nx + b_n\sin nx\right) + \cdots + 2a_N\cos Nx\,b_N\sin Nx$$

Integrating both sides, we get:

$$\int_{-\pi}^{\pi} f^2(x)\,dx = \int_{-\pi}^{\pi}\left\{a_0^2 + \sum_{n=1}^{N}\left(a_n^2\cos^2 nx + b_n^2\sin^2 nx\right) + \cdots\right\}dx = 2\pi a_0^2 + \sum_{n=1}^{N}\left(\pi a_n^2 + \pi b_n^2\right) + 0$$

Parseval’s Identity can be written as:

$$\frac{1}{L}\int_{-L}^{L} |f(x)|^2\,dx = 2|a_0|^2 + \sum_{n=1}^{\infty}\left(|a_n|^2 + |b_n|^2\right) \qquad (15)$$

If:

a) $f(x)$ is continuous and $f'(x)$ is piecewise continuous on $[-L, L]$,
b) $f(L) = f(-L)$,
c) $f''(x)$ exists at $x$ in $(-L, L)$,

then:

$$f'(x) = \sum_{n=1}^{\infty} \frac{n\pi}{L}\left(-a_n\sin\frac{n\pi x}{L} + b_n\cos\frac{n\pi x}{L}\right) \qquad (16)$$

Example 1.6: From Example 1.4, we found that the Fourier series of $f(x) = x^2$ on $(-1, 1)$ is:

$$x^2 = \frac{1}{3} + \sum_{n=1}^{\infty} \frac{4(-1)^n}{n^2\pi^2}\cos(n\pi x)$$

Use Parseval's identity to evaluate $\sum_{n=1}^{\infty} 1/n^4$.


Solution: Applying Parseval's identity to the equation above, we get:

$$2\left(\frac{1}{3}\right)^2 + \sum_{n=1}^{\infty} \frac{16}{n^4\pi^4} = \int_{-1}^{1} x^4\,dx = \frac{2}{5}$$

$$\Rightarrow \sum_{n=1}^{\infty} \frac{16}{n^4\pi^4} = \frac{2}{5} - \frac{2}{9} = \frac{8}{45}$$

$$\Rightarrow \sum_{n=1}^{\infty} \frac{1}{n^4} = \frac{\pi^4}{90}$$
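This classical result is easy to confirm numerically; a minimal Python check:

```python
import math

partial = sum(1 / n**4 for n in range(1, 100001))
print(partial)              # 1.082323233...
print(math.pi**4 / 90)      # 1.082323233..., in agreement
```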

1.1.5 Fourier series of Odd and Even functions

A function $f(x)$ is called an even or symmetric function if it has the property

$$f(-x) = f(x) \qquad (17)$$

i.e. the function value for a particular negative value of x is the same as that for the corresponding positive value of x. The graph of an even function is therefore reflection-symmetric about the y-axis.

Figure 1.1: Square waves showing an even function

A function 푓(푥) is called an 표푑푑 or 푎푛푡푖푠푦푚푚푒푡푟푖푐 function if

$$f(-x) = -f(x) \qquad (18)$$

i.e. the function value for a particular negative value of x is numerically equal to that for the corresponding positive value of x but opposite in sign. Such functions are symmetric about the origin.


Figure 1.2: Example of odd function

A function that is neither even nor odd can be represented as the sum of an even and an odd function.

Cosine waves are even, so any Fourier cosine series representation of a periodic function has even symmetry. A function $f(x)$ defined on $[0, L]$ can be extended as an even periodic function ($b_n = 0$). Therefore, the Fourier series representation of an even function is:

$$f(x) = 0.5a_0 + \sum_{n=1}^{\infty} a_n\cos\left(\frac{n\pi x}{L}\right), \qquad a_n = \frac{2}{L}\int_0^L f(x)\cos\left(\frac{n\pi x}{L}\right)dx \qquad (19)$$

Similarly, sine waves are odd, so any Fourier sine series representation of a periodic function has odd symmetry. Therefore, a function $f(x)$ defined on $[0, L]$ can be extended as an odd periodic function ($a_n = 0$) and the Fourier series representation of an odd function is:

$$f(x) = \sum_{n=1}^{\infty} b_n\sin\left(\frac{n\pi x}{L}\right), \qquad b_n = \frac{2}{L}\int_0^L f(x)\sin\left(\frac{n\pi x}{L}\right)dx \qquad (20)$$

Example 1.7: If 푓(푥) is even, show that

(a) $a_n = \frac{2}{L}\int_0^L f(x)\cos\left(\frac{n\pi x}{L}\right)dx$

(b) $b_n = 0$

Solution: For an even function, we can write the equation as:

$$a_n = \frac{1}{L}\int_{-L}^{L} f(x)\cos\frac{n\pi x}{L}\,dx = \frac{1}{L}\int_{-L}^{0} f(x)\cos\frac{n\pi x}{L}\,dx + \frac{1}{L}\int_{0}^{L} f(x)\cos\frac{n\pi x}{L}\,dx$$

Letting $x = -u$, we can rewrite:


$$\frac{1}{L}\int_{-L}^{0} f(x)\cos\frac{n\pi x}{L}\,dx = \frac{1}{L}\int_{0}^{L} f(-u)\cos\left(\frac{-n\pi u}{L}\right)du = \frac{1}{L}\int_{0}^{L} f(u)\cos\left(\frac{n\pi u}{L}\right)du$$

Since by definition of an even function f(-u) = f(u). Then:

$$a_n = \frac{1}{L}\int_{0}^{L} f(u)\cos\left(\frac{n\pi u}{L}\right)du + \frac{1}{L}\int_{0}^{L} f(x)\cos\frac{n\pi x}{L}\,dx = \frac{2}{L}\int_{0}^{L} f(x)\cos\frac{n\pi x}{L}\,dx$$

To show that 푏푛 = 0, we can write the expression as

$$b_n = \frac{1}{L}\int_{-L}^{L} f(x)\sin\left(\frac{n\pi x}{L}\right)dx = \frac{1}{L}\int_{-L}^{0} f(x)\sin\left(\frac{n\pi x}{L}\right)dx + \frac{1}{L}\int_{0}^{L} f(x)\sin\left(\frac{n\pi x}{L}\right)dx$$

If we make the transformation x=-u in the first integral on the right of the equation above, we obtain:

$$\frac{1}{L}\int_{-L}^{0} f(x)\sin\left(\frac{n\pi x}{L}\right)dx = \frac{1}{L}\int_{0}^{L} f(-u)\sin\left(-\frac{n\pi u}{L}\right)du = -\frac{1}{L}\int_{0}^{L} f(u)\sin\left(\frac{n\pi u}{L}\right)du = -\frac{1}{L}\int_{0}^{L} f(x)\sin\left(\frac{n\pi x}{L}\right)dx$$

Therefore, substituting this into the equation for 푏푛, we get

$$b_n = -\frac{1}{L}\int_{0}^{L} f(x)\sin\left(\frac{n\pi x}{L}\right)dx + \frac{1}{L}\int_{0}^{L} f(x)\sin\left(\frac{n\pi x}{L}\right)dx = 0$$

1.2 Integral Transforms

An integral transform is any transform of the following form

$$F(w) = \int_{x_1}^{x_2} K(w, x)f(x)\,dx \qquad (21)$$

With the following inverse transform


$$f(x) = \int_{w_1}^{w_2} K^{-1}(w, x)F(w)\,dw \qquad (22)$$

1.2.1 Fourier Transform

A Fourier series expansion of a function $f(x)$ of a real variable $x$ with a period of $2L$ is defined over a finite interval $-L \le x \le L$. If the interval becomes infinite ($L \to \infty$) and we sum over a continuous range of frequencies, we then obtain the Fourier integral

$$f(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty} F(w)e^{iwx}\,dw \qquad (23)$$

with the coefficients

$$F(w) = \int_{-\infty}^{\infty} f(x)e^{-iwx}\,dx \qquad (24)$$

Equation (24) is the Fourier transform of $f(x)$. The Fourier integral is also known as the inverse Fourier transform of $F(w)$. In this case, $x_1 = w_1 = -\infty$, $x_2 = w_2 = \infty$ and $K(w, x) = e^{-iwx}$. The Fourier transform takes a function of one variable (e.g. time in seconds) which lives in the time domain to a second function which lives in the frequency domain, changing the basis of the function to cosines and sines.

Example 1.8: Find the Fourier transform of

$$f(t) = \begin{cases} t & : \; -1 \le t \le 1 \\ 0 & : \; \text{elsewhere} \end{cases}$$

Solution: Recalling the Fourier transform in equation (24), we can write

$$F(w) = \int_{-\infty}^{\infty} f(x)e^{-iwx}\,dx = \int_{-1}^{1} t\,e^{-iwt}\,dt$$

By applying integration by parts, we get:

$$= \left[\frac{t}{-iw}e^{-iwt}\right]_{-1}^{1} - \int_{-1}^{1}\frac{e^{-iwt}}{-iw}\,dt$$

We can also rewrite $-\frac{1}{i} = i$; therefore:

$$= \left[\frac{it}{w}e^{-iwt}\right]_{-1}^{1} + \frac{1}{iw}\left[\frac{e^{-iwt}}{-iw}\right]_{-1}^{1} = \left[\frac{it}{w}e^{-iwt}\right]_{-1}^{1} + \left[\frac{1}{w^2}e^{-iwt}\right]_{-1}^{1}$$

$$= \frac{i}{w}\left(e^{-iw} + e^{iw}\right) + \frac{1}{w^2}\left(e^{-iw} - e^{iw}\right)$$

$$= \frac{2i}{w}\cdot\frac{e^{-iw} + e^{iw}}{2} - \frac{2i}{w^2}\cdot\frac{e^{iw} - e^{-iw}}{2i}$$

$$= \frac{2i}{w}\cos w - \frac{2i}{w^2}\sin w = \frac{2i}{w}\left(\cos w - \frac{\sin w}{w}\right)$$
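As a cross-check, the transform can be approximated numerically and compared with the closed form; a small Python sketch (trapezoidal quadrature, with a helper name of our own choosing):

```python
import numpy as np

def fourier_transform_f(w, n=200001):
    """Trapezoidal approximation of F(w) = integral of t*exp(-iwt) over [-1, 1]."""
    t = np.linspace(-1.0, 1.0, n)
    y = t * np.exp(-1j * w * t)
    dt = t[1] - t[0]
    return np.sum((y[:-1] + y[1:]) / 2) * dt

w = 2.0
analytic = 2j / w * (np.cos(w) - np.sin(w) / w)
print(fourier_transform_f(w), analytic)   # both approximately -0.8708j
```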

1.2.2 Laplace Transform

The Laplace transform is an example of an integral transform that converts a differential equation into an algebraic equation. The Laplace transform of a function $f(t)$ of a variable $t$ is defined as the integral

$$F(s) = \mathcal{L}\{f(t)\} = \int_0^{\infty} f(t)e^{-st}\,dt \qquad (25)$$

where $s$ is a positive real parameter that serves as a supplementary variable. The conditions are: if $f(t)$ is piecewise continuous on $(0, \infty)$ and of exponential order ($|f(t)| \le Ke^{\alpha t}$ for some $K$ and $\alpha > 0$), then $F(s)$ exists for $s > \alpha$. Several Laplace transforms are given in the table below, where $a$ is a constant and $n$ is an integer.

Example 1.9: Find the Laplace transforms of the following functions:

$$f(t) = \begin{cases} 3 & : \; 0 < t < 5 \\ 0 & : \; t > 5 \end{cases}$$


Solution:

$$\mathcal{L}\{f(t)\} = \int_0^{\infty} f(t)e^{-st}\,dt = \int_0^5 3e^{-st}\,dt + \int_5^{\infty} 0\cdot e^{-st}\,dt$$

$$= 3\left[\frac{e^{-st}}{-s}\right]_0^5 = 3\left[\frac{e^{-5s}}{-s} - \frac{1}{-s}\right] = \frac{3}{s}\left(1 - e^{-5s}\right)$$

Example 1.10: Find the Laplace transforms of the following functions:

$$f(t) = \begin{cases} t & : \; 0 < t < a \\ b & : \; t > a \end{cases}$$

Solution:

$$\mathcal{L}\{f(t)\} = \int_0^{\infty} f(t)e^{-st}\,dt = \int_0^a t\,e^{-st}\,dt + \int_a^{\infty} b\,e^{-st}\,dt$$

$$= \left[-\frac{t}{s}e^{-st} - \frac{1}{s^2}e^{-st}\right]_0^a + b\left[\frac{e^{-st}}{-s}\right]_a^{\infty}$$

$$= e^{-as}\left(-\frac{a}{s} - \frac{1}{s^2}\right) - \left(0 - \frac{1}{s^2}\right) - \frac{b}{s}\left(0 - e^{-as}\right)$$

$$= \frac{1}{s^2} + \left[\frac{b - a}{s} - \frac{1}{s^2}\right]e^{-as}$$

Example 1.11: Determine the Laplace transform of the function below:

$$f(t) = 5 - 3t + 4\sin 2t - 6e^{4t}$$

Solution: First, let's transform the terms one by one:

$$\mathcal{L}\{5\} = \frac{5}{s}, \quad \mathrm{Re}(s) > 0$$

$$\mathcal{L}\{t\} = \frac{1}{s^2}, \quad \mathrm{Re}(s) > 0$$

$$\mathcal{L}\{\sin 2t\} = \frac{2}{s^2 + 4}, \quad \mathrm{Re}(s) > 0$$

$$\mathcal{L}\{e^{4t}\} = \frac{1}{s - 4}, \quad \mathrm{Re}(s) > 4$$

Therefore, by linearity property,

$$\mathcal{L}\{f(t)\} = \mathcal{L}\{5 - 3t + 4\sin 2t - 6e^{4t}\} = \mathcal{L}\{5\} - 3\mathcal{L}\{t\} + 4\mathcal{L}\{\sin 2t\} - 6\mathcal{L}\{e^{4t}\} = \frac{5}{s} - \frac{3}{s^2} + \frac{8}{s^2 + 4} - \frac{6}{s - 4}$$
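These hand computations can be verified with a computer algebra system; a minimal sketch assuming SymPy is available:

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
f = 5 - 3*t + 4*sp.sin(2*t) - 6*sp.exp(4*t)
F = sp.laplace_transform(f, t, s, noconds=True)
print(sp.simplify(F))   # should agree with 5/s - 3/s**2 + 8/(s**2+4) - 6/(s-4)
```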

LAPLACE TRANSFORMS

$f(x) = \mathcal{L}^{-1}\{F(s)\}$ | $F(s) = \mathcal{L}\{f(x)\}$
$a$ | $\dfrac{a}{s}$
$x$ | $\dfrac{1}{s^2}$
$x^n$ | $\dfrac{n!}{s^{n+1}}$
$e^{ax}$ | $\dfrac{1}{s-a}$
$\sin ax$ | $\dfrac{a}{s^2+a^2}$
$\cos ax$ | $\dfrac{s}{s^2+a^2}$
$\sinh ax$ | $\dfrac{a}{s^2-a^2}$
$\cosh ax$ | $\dfrac{s}{s^2-a^2}$


2. Vector Spaces, Vector Fields & Operators

In the context of physics, we are often interested in a quantity or property which varies in a smooth and continuous way over some one-, two-, or three-dimensional region of space. This constitutes either a scalar field or a vector field, depending on the nature of the property. In this chapter, we consider the relationship between a scalar field involving a variable potential and a vector field involving a 'field', where this means force per unit mass or charge. The properties of scalar and vector fields are described, along with how they lead to important concepts, such as that of a conservative field, and the important and useful Gauss and Stokes theorems. Finally, examples will be given to demonstrate the ideas of vector analysis.

There are basically four types of functions involving scalars and vectors:

• Scalar functions of a scalar, 푓(푥) • Vector function of a scalar, 풓(푡) • Scalar function of a vector, 휑(풓) • Vector function of a vector, 푨(풓)

1. The vector $\mathbf{x}$ is normalised if $\mathbf{x}^T\mathbf{x} = 1$.
2. The vectors $\mathbf{x}$ and $\mathbf{y}$ are orthogonal if $\mathbf{x}^T\mathbf{y} = 0$.
3. The vectors $\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n$ are linearly independent if the only numbers which satisfy the equation $a_1\mathbf{x}_1 + a_2\mathbf{x}_2 + \cdots + a_n\mathbf{x}_n = 0$ are $a_1 = a_2 = \cdots = a_n = 0$.
4. The vectors $\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n$ form a basis for an $n$-dimensional vector space if any vector $\mathbf{x}$ in the vector space can be written as a linear combination of vectors in the basis, thus $\mathbf{x} = a_1\mathbf{x}_1 + a_2\mathbf{x}_2 + \cdots + a_n\mathbf{x}_n$, where $a_1, a_2, \cdots, a_n$ are scalars.

Figure 2.1: Components of a vector


For example, a vector A from the origin in the figure above to a point P in the 3-dimensions takes the form

$$\mathbf{A} = a_x\mathbf{i} + a_y\mathbf{j} + a_z\mathbf{k} \qquad (26)$$

where $\{\mathbf{i}, \mathbf{j}, \mathbf{k}\}$ are unit vectors along the $\{x, y, z\}$ axes, respectively. The vector components $\{a_x, a_y, a_z\}$ are the corresponding distances along the axes. The length or magnitude of vector $\mathbf{A}$ is

$$|\mathbf{A}| = \sqrt{a_x^2 + a_y^2 + a_z^2} \qquad (27)$$

2.1 Scalar (inner) product of vector fields

The scalar product of vector fields is also called the dot product. For example, if we have two vectors $\mathbf{A} = (A_1, A_2, A_3)$ and $\mathbf{B} = (B_1, B_2, B_3)$, then:

$$\langle\mathbf{A}, \mathbf{B}\rangle = \mathbf{A}\cdot\mathbf{B} = \mathbf{A}^T\mathbf{B} = A_1B_1 + A_2B_2 + A_3B_3 \qquad (28)$$

We can also write

$$\mathbf{A}\cdot\mathbf{B} = \|\mathbf{A}\|\|\mathbf{B}\|\cos\theta \qquad (29)$$

where $\theta$ is the angle between $\mathbf{A}$ and $\mathbf{B}$ satisfying $0 \le \theta \le \pi$. The inner product of vectors is a scalar. The scalar product obeys the product laws listed below:

Product laws:

1. Commutative: $\mathbf{A}\cdot\mathbf{B} = \mathbf{B}\cdot\mathbf{A}$
2. Associative: $m\mathbf{A}\cdot n\mathbf{B} = mn\,\mathbf{A}\cdot\mathbf{B}$
3. Distributive: $\mathbf{A}\cdot(\mathbf{B} + \mathbf{C}) = \mathbf{A}\cdot\mathbf{B} + \mathbf{A}\cdot\mathbf{C}$
4. Cauchy-Schwarz inequality: $\mathbf{A}\cdot\mathbf{B} \le (\mathbf{A}\cdot\mathbf{A})^{1/2}(\mathbf{B}\cdot\mathbf{B})^{1/2}$

Note that a relation such as 푨 ∙ 푩 = 푨 ∙ 푪 does not imply that 푩 = 푪, as

푨 ∙ 푩 − 푨 ∙ 푪 = 푨 ∙ (푩 − 푪) = 0 (30)

Therefore, the correct conclusion is that 푨 is perpendicular to the vector 푩 − 푪.

Example 2.1: Determine the angle between 푨 = 〈1,3, −2〉 and 푩 = 〈−2, 4, −1〉.


Solution: All we need to do here is rewrite equation (29) as:

$$\cos\theta = \frac{\mathbf{A}\cdot\mathbf{B}}{\|\mathbf{A}\|\|\mathbf{B}\|}$$

We will first have to compute the individual parameters

$$\mathbf{A}\cdot\mathbf{B} = 12, \qquad \|\mathbf{A}\| = \sqrt{14}, \qquad \|\mathbf{B}\| = \sqrt{21}$$

Hence, the angle between the vectors is:

$$\cos\theta = \frac{12}{\sqrt{14}\sqrt{21}} = 0.69985 \;\Rightarrow\; \theta = \cos^{-1}(0.69985) = 45.58°$$
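The same computation takes only a few lines numerically; a minimal NumPy sketch:

```python
import numpy as np

A = np.array([1, 3, -2])
B = np.array([-2, 4, -1])
cos_theta = A @ B / (np.linalg.norm(A) * np.linalg.norm(B))
print(np.degrees(np.arccos(cos_theta)))   # 45.58 degrees, as above
```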

2.1.1 Lp norms

There are many norms that can be defined for vectors. One type is the $L_p$ norm, often denoted $\|\cdot\|_p$. For $p \ge 1$, the $p$-norm is defined as:

$$\|x\|_p = \left(\sum_{i=1}^{n} |x_i|^p\right)^{1/p}, \qquad x = [x_1, \cdots, x_n]^T \qquad (31)$$

There are a few types of norms such as the following:

1. $\|x\|_1 = \sum_i |x_i|$, also called the Manhattan norm because it corresponds to sums of distances along coordinate axes, as one would travel along the rectangular street plan of Manhattan.
2. $\|x\|_2 = \sqrt{\sum_i |x_i|^2}$, also called the Euclidean norm, the Euclidean length, or just the length of the vector.
3. $\|x\|_\infty = \max_i |x_i|$, also called the max norm or the Chebyshev norm.


Some relationships of norms are as below:

$$\|x\|_\infty \le \|x\|_2 \le \|x\|_1$$
$$\|x\|_\infty \le \|x\|_2 \le \sqrt{n}\,\|x\|_\infty \qquad (32)$$
$$\|x\|_2 \le \|x\|_1 \le \sqrt{n}\,\|x\|_2$$

If we define the inner-product-induced norm $\|x\| = \sqrt{\langle x, x\rangle}$, then:

$$(\|x\| + \|y\|)^2 \ge \|x + y\|^2, \qquad \|x + y\|^2 = \|x\|^2 + \|y\|^2 + 2\langle x, y\rangle \qquad (33)$$

Example 2.2: Given a vector 푣⃗ = 푖⃗ − 4푗⃗ + 5푘⃗⃗, determine the Manhattan norm, Euclidean length and Chebyshev norm.

Solution: So, if we re-write the vector 푣⃗ as 푣⃗ = (1, −4,5), then we can calculate the norms easily.

A. Manhattan norm (one-norm):

$$\|\vec{v}\|_1 = \sum_i |v_i| = |1| + |-4| + |5| = 10$$

B. Euclidean norm (two-norm):

$$\|\vec{v}\|_2 = \sqrt{\sum_i |v_i|^2} = \sqrt{|1|^2 + |-4|^2 + |5|^2} = \sqrt{42}$$

C. Chebyshev norm (infinity norm):

$$\|\vec{v}\|_\infty = \max_i |v_i| = \max\{|1|, |-4|, |5|\} = 5$$

Therefore, we can see that

$$\|x\|_\infty \le \|x\|_2 \le \|x\|_1: \qquad 5 \le \sqrt{42} \le 10$$
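These norms are available directly in NumPy; a quick check of the example above:

```python
import numpy as np

v = np.array([1, -4, 5])
print(np.linalg.norm(v, 1))       # 10.0  (Manhattan)
print(np.linalg.norm(v))          # 6.4807... = sqrt(42) (Euclidean)
print(np.linalg.norm(v, np.inf))  # 5.0   (Chebyshev)
```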

2.2 Vector product of vector fields

The vector product of vector fields is also called the cross product. For example, if we have two vectors $\mathbf{A} = (A_1, A_2, A_3)$ and $\mathbf{B} = (B_1, B_2, B_3)$, then:

$$\mathbf{A}\times\mathbf{B} = (A_2B_3 - A_3B_2,\; A_3B_1 - A_1B_3,\; A_1B_2 - A_2B_1) \qquad (34)$$

The cross product of the vectors 푨 and 푩, is orthogonal to both 푨 and 푩, forms a right-handed system with 푨 and 푩, and has length given by:

$$\|\mathbf{A}\times\mathbf{B}\| = \|\mathbf{A}\|\|\mathbf{B}\|\sin\theta \qquad (35)$$

where $\theta$ is the angle between $\mathbf{A}$ and $\mathbf{B}$ satisfying $0 \le \theta \le \pi$. The vector product of two vectors is a vector. A few additional properties of the cross product are listed below:

1. Scalar multiplication: $(a\mathbf{A})\times(b\mathbf{B}) = ab(\mathbf{A}\times\mathbf{B})$
2. Distributive law: $\mathbf{A}\times(\mathbf{B} + \mathbf{C}) = \mathbf{A}\times\mathbf{B} + \mathbf{A}\times\mathbf{C}$
3. Anticommutation: $\mathbf{B}\times\mathbf{A} = -\mathbf{A}\times\mathbf{B}$
4. Nonassociativity: $\mathbf{A}\times(\mathbf{B}\times\mathbf{C}) = (\mathbf{A}\cdot\mathbf{C})\mathbf{B} - (\mathbf{A}\cdot\mathbf{B})\mathbf{C}$

If we break down equation (34), we can rewrite the cross product of vectors $\mathbf{A}$ and $\mathbf{B}$ as:

$$\mathbf{A}\times\mathbf{B} = \begin{vmatrix}A_2 & A_3\\ B_2 & B_3\end{vmatrix}\mathbf{i} - \begin{vmatrix}A_1 & A_3\\ B_1 & B_3\end{vmatrix}\mathbf{j} + \begin{vmatrix}A_1 & A_2\\ B_1 & B_2\end{vmatrix}\mathbf{k} = \begin{vmatrix}\mathbf{i} & \mathbf{j} & \mathbf{k}\\ A_1 & A_2 & A_3\\ B_1 & B_2 & B_3\end{vmatrix}$$

Example 2.3: If 푨 = (3, −2, −2) and 푩 = (−1, 0, 5), compute 푨 × 푩 and find the angle between the two vectors.

Solution: It's a very simple solution here; all we have to do is compute the cross product first:

$$\mathbf{A}\times\mathbf{B} = \begin{vmatrix}-2 & -2\\ 0 & 5\end{vmatrix}\mathbf{i} - \begin{vmatrix}3 & -2\\ -1 & 5\end{vmatrix}\mathbf{j} + \begin{vmatrix}3 & -2\\ -1 & 0\end{vmatrix}\mathbf{k} = -10\mathbf{i} - 13\mathbf{j} - 2\mathbf{k}$$


The angle between the two vectors is given by $\|\mathbf{A}\times\mathbf{B}\| = \|\mathbf{A}\|\|\mathbf{B}\|\sin\theta$. Rearranging equation (35), we get:

$$\sin\theta = \frac{\|\mathbf{A}\times\mathbf{B}\|}{\|\mathbf{A}\|\|\mathbf{B}\|} = \frac{\sqrt{(-10)^2 + (-13)^2 + (-2)^2}}{\sqrt{3^2 + (-2)^2 + (-2)^2}\sqrt{(-1)^2 + 0^2 + 5^2}} = \frac{\sqrt{273}}{\sqrt{17}\sqrt{26}} \;\Rightarrow\; \theta = 51.80°$$
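A short NumPy check of both the cross product and the angle:

```python
import numpy as np

A = np.array([3, -2, -2])
B = np.array([-1, 0, 5])
C = np.cross(A, B)
sin_theta = np.linalg.norm(C) / (np.linalg.norm(A) * np.linalg.norm(B))
print(C)                                  # [-10 -13  -2]
print(np.degrees(np.arcsin(sin_theta)))   # 51.80 degrees
```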

2.3 Vector operators

Certain differential operations may be performed on scalar and vector fields. These have a wide range of applications in the physical sciences. The most important tasks are those of finding the gradient of a scalar field and the divergence and curl of a vector field. In the following topics, we will discuss the mathematical and geometrical definitions of these, which rely on concepts of integrating vector quantities along lines and over surfaces. At the heart of these differential operations is the vector operator $\nabla$, called del (or nabla), which in Cartesian coordinates is defined as:

$$\nabla \equiv \mathbf{i}\frac{\partial}{\partial x} + \mathbf{j}\frac{\partial}{\partial y} + \mathbf{k}\frac{\partial}{\partial z} \qquad (36)$$

2.3.1 Gradient of a scalar field

The gradient of a scalar field 휑(푥, 푦, 푧) is defined as

$$\mathrm{grad}\,\varphi = \nabla\varphi = \frac{\partial\varphi}{\partial x}\mathbf{i} + \frac{\partial\varphi}{\partial y}\mathbf{j} + \frac{\partial\varphi}{\partial z}\mathbf{k} \qquad (37)$$

Clearly, ∇φ is a vector field whose 푥, 푦 and 푧 components are the first partial derivatives of 휑(푥, 푦, 푧) with respect to 푥, 푦 and 푧.


Example 2.4: Find the gradient of the scalar field 휑 = 푥푦2푧3.

Solution: We can easily solve this problem by using equation (37), so the gradient of the scalar field 휑 = 푥푦2푧3 is

$$\mathrm{grad}\,\varphi = \frac{\partial\varphi}{\partial x}\mathbf{i} + \frac{\partial\varphi}{\partial y}\mathbf{j} + \frac{\partial\varphi}{\partial z}\mathbf{k} = y^2z^3\mathbf{i} + 2xyz^3\mathbf{j} + 3xy^2z^2\mathbf{k}$$

If we consider a surface in 3D space with $\varphi(\mathbf{r}) = \text{constant}$, then the direction normal (i.e. perpendicular) to the surface at the point $\mathbf{r}$ is the direction of $\mathrm{grad}\,\varphi$. The greatest rate of change of $\varphi(\mathbf{r})$ is the magnitude of $\mathrm{grad}\,\varphi$.

Figure 2.2: Direction of the gradient ($\nabla\varphi$ is normal to the surface $\varphi = \text{constant}$)

In physical situations, we may have a potential, 휑, which varies over a particular region and this constitutes a field 퐸, satisfying:

$$E = -\nabla\varphi = -\left(\frac{\partial\varphi}{\partial x}\mathbf{i} + \frac{\partial\varphi}{\partial y}\mathbf{j} + \frac{\partial\varphi}{\partial z}\mathbf{k}\right)$$

Example 2.5: Calculate the electric field at point (푥, 푦, 푧) due to a charge 푞1 at (2, 0, 0) and a charge 푞2 at (-2, 0, 0) where charges are in coulombs and distances are in metres.

Solution: We need the expression for the electric potential of a point charge, which is given by:

$$\varphi = k_c\frac{q}{r}$$

where $r$ is the distance from the charge and $k_c$ is the Coulomb constant:

$$k_c = \frac{1}{4\pi\epsilon_0}$$

Therefore, the potential at the point $(x, y, z)$ is

$$\varphi(x, y, z) = \frac{q_1}{4\pi\epsilon_0\sqrt{(2-x)^2 + y^2 + z^2}} + \frac{q_2}{4\pi\epsilon_0\sqrt{(2+x)^2 + y^2 + z^2}}$$

As a result, the components of the field $E = -\nabla\varphi$ are

$$E_x = -\frac{q_1(2-x)}{4\pi\epsilon_0\{(2-x)^2 + y^2 + z^2\}^{3/2}} + \frac{q_2(2+x)}{4\pi\epsilon_0\{(2+x)^2 + y^2 + z^2\}^{3/2}}$$

$$E_y = \frac{q_1 y}{4\pi\epsilon_0\{(2-x)^2 + y^2 + z^2\}^{3/2}} + \frac{q_2 y}{4\pi\epsilon_0\{(2+x)^2 + y^2 + z^2\}^{3/2}}$$

$$E_z = \frac{q_1 z}{4\pi\epsilon_0\{(2-x)^2 + y^2 + z^2\}^{3/2}} + \frac{q_2 z}{4\pi\epsilon_0\{(2+x)^2 + y^2 + z^2\}^{3/2}}$$

Example 2.6: The function that describes the temperature at any point in the room is given by:

$$T(x, y, z) = 100\cos\left(\frac{x}{10}\right)\sin\left(\frac{y}{10}\right)\cos z$$

Find the gradient of 푇, the direction of greatest change in temperature in the room at point (10휋, 10휋, 휋) and the rate of change of temperature at this point.

Solution: First, let’s find the gradient of the function 푇, which is given by equation (37):

$$\nabla T = \frac{\partial T}{\partial x}\mathbf{i} + \frac{\partial T}{\partial y}\mathbf{j} + \frac{\partial T}{\partial z}\mathbf{k}$$

$$= \left[-10\sin\left(\frac{x}{10}\right)\sin\left(\frac{y}{10}\right)\cos z\right]\mathbf{i} + \left[10\cos\left(\frac{x}{10}\right)\cos\left(\frac{y}{10}\right)\cos z\right]\mathbf{j} - \left[100\cos\left(\frac{x}{10}\right)\sin\left(\frac{y}{10}\right)\sin z\right]\mathbf{k}$$

Therefore, at the point (10휋, 10휋, 휋) in the room, the direction of the greatest change in temperature is:


∇ 푇 = 0풊 − 10풋 + 0풌

And the rate of change of temperature at this point is the magnitude of the gradient, which is

|∇ 푇| = √(−10)2 = 10
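The gradient and its evaluation at the point can be verified symbolically; a minimal sketch assuming SymPy:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
T = 100 * sp.cos(x/10) * sp.sin(y/10) * sp.cos(z)
grad_T = [sp.diff(T, v) for v in (x, y, z)]
point = {x: 10*sp.pi, y: 10*sp.pi, z: sp.pi}
print([g.subs(point) for g in grad_T])   # [0, -10, 0]
```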

2.3.2 Divergence of a vector field

The divergence of a vector field 푨(푥, 푦, 푧) is defined as the dot product of the operator ∇ and 푨:

$$\mathrm{div}\,\mathbf{A} = \nabla\cdot\mathbf{A} = \frac{\partial A_1}{\partial x} + \frac{\partial A_2}{\partial y} + \frac{\partial A_3}{\partial z} \qquad (38)$$

where 퐴1, 퐴2 and 퐴3 are the 푥−, 푦 − and 푧 − components of 푨. Clearly, ∇ ∙ 푨 is a scalar field. Any vector field 푨 for which ∇ ∙ 푨 = 0 is said to be solenoidal.

Example 2.7: Find the divergence of a vector field 푨 = 푥2푦2푖⃗ + 푦2푧2푗⃗ + 푥2푧2푘⃗⃗

Solution: This is a straightforward example, using equation (38) we can solve this easily:

$$\nabla\cdot\mathbf{A} = \frac{\partial A_1}{\partial x} + \frac{\partial A_2}{\partial y} + \frac{\partial A_3}{\partial z} = 2xy^2 + 2yz^2 + 2x^2z = 2(xy^2 + yz^2 + x^2z)$$

Example 2.8: Find the divergence of a vector field 푭 = (푦푧푒푥푦, 푥푧푒푥푦, 푒푥푦 + 3 cos 3푧)

Solution: Again, using equation (38) we can solve this easily:

$$\nabla\cdot\mathbf{F} = \frac{\partial F_1}{\partial x} + \frac{\partial F_2}{\partial y} + \frac{\partial F_3}{\partial z} = y^2ze^{xy} + x^2ze^{xy} - 9\sin 3z$$

The value of the scalar div 푨 at point 푟 gives the rate at which the material is expanding or flowing away from the point 푟 (outward flux per unit volume).


2.3.2.1 Theorem involving Divergence

The divergence theorem, also known as the Gauss theorem, relates a surface integral and a volume integral within a vector field. Let $\mathbf{F}$ be a vector field, $S$ be a closed surface and $\mathcal{R}$ be the region inside of $S$; then:

$$\iint_S \mathbf{F}\cdot d\mathbf{A} = \iiint_{\mathcal{R}} \nabla\cdot\mathbf{F}\,dV \qquad (39)$$

Example 2.9: Evaluate the following

$$\iint_S (3x\mathbf{i} + 2y\mathbf{j})\cdot d\mathbf{A}$$

where $S$ is the sphere $x^2 + y^2 + z^2 = 9$.

Solution: We could parameterize the surface and evaluate the surface integral, but it is much faster to use the divergence theorem. Since:

$$\mathrm{div}\,(3x\mathbf{i} + 2y\mathbf{j}) = \frac{\partial}{\partial x}(3x) + \frac{\partial}{\partial y}(2y) + \frac{\partial}{\partial z}(0) = 5$$

The divergence theorem gives:

$$\iint_S (3x\mathbf{i} + 2y\mathbf{j})\cdot d\mathbf{A} = \iiint_{\mathcal{R}} 5\,dV = 5\times(\text{volume of sphere}) = 5\times\frac{4}{3}\pi(3)^3 = 180\pi$$

Example 2.10: Evaluate the following

$$\iint_S (y^2z\,\mathbf{i} + y^3\mathbf{j} + xz\,\mathbf{k})\cdot d\mathbf{A}$$

where $S$ is the boundary of the cube defined by $-1 \le x \le 1$, $-1 \le y \le 1$, and $0 \le z \le 2$.


Solution: First, let's compute the divergence of the given field:

$$\mathrm{div}\,(y^2z\,\mathbf{i} + y^3\mathbf{j} + xz\,\mathbf{k}) = \frac{\partial}{\partial x}(y^2z) + \frac{\partial}{\partial y}(y^3) + \frac{\partial}{\partial z}(xz) = 3y^2 + x$$

The divergence theorem gives:

$$\iint_S (y^2z\,\mathbf{i} + y^3\mathbf{j} + xz\,\mathbf{k})\cdot d\mathbf{A} = \iiint_{\mathcal{R}} (3y^2 + x)\,dV = \int_0^2\int_{-1}^{1}\int_{-1}^{1} (3y^2 + x)\,dx\,dy\,dz = 2\int_{-1}^{1} 6y^2\,dy = 8$$
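The volume-integral side of this computation can be checked symbolically; a minimal sketch assuming SymPy:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
div_F = sp.diff(y**2 * z, x) + sp.diff(y**3, y) + sp.diff(x*z, z)
vol = sp.integrate(div_F, (x, -1, 1), (y, -1, 1), (z, 0, 2))
print(div_F)   # 3*y**2 + x
print(vol)     # 8
```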

2.3.3 Curl of a vector field

The vector product (cross product) of the operator $\nabla$ and the vector $\mathbf{A}$ is known as the curl or rotation of $\mathbf{A}$. Thus, in Cartesian coordinates, we can write:

$$\mathrm{curl}\,\mathbf{A} = \nabla\times\mathbf{A} = \begin{vmatrix}\mathbf{i} & \mathbf{j} & \mathbf{k}\\ \dfrac{\partial}{\partial x} & \dfrac{\partial}{\partial y} & \dfrac{\partial}{\partial z}\\ A_1 & A_2 & A_3\end{vmatrix} \qquad (40)$$

Therefore:

$$\mathrm{curl}\,\mathbf{A} = \nabla\times\mathbf{A} = \left(\frac{\partial A_3}{\partial y} - \frac{\partial A_2}{\partial z}\right)\mathbf{i} + \left(\frac{\partial A_1}{\partial z} - \frac{\partial A_3}{\partial x}\right)\mathbf{j} + \left(\frac{\partial A_2}{\partial x} - \frac{\partial A_1}{\partial y}\right)\mathbf{k} \qquad (41)$$

where 푨 = (퐴1, 퐴2, 퐴3). The vector curl 푨 at point r gives the local rotation (or vorticity) of the material at point r. The direction of curl 푨 is the axis of rotation and half the magnitude of curl 푨 is the rate of rotation or angular frequency of the rotation.


Example 2.11: Find the curl of a vector field 풂 = 푥2푦2푧2푖⃗ + 푦2푧2푗⃗ + 푥2푧2푘⃗⃗

Solution: This is a straightforward question. All we have to do is put the field into the determinant form of equation (40):

$$\nabla\times\mathbf{a} = \begin{vmatrix}\mathbf{i} & \mathbf{j} & \mathbf{k}\\ \dfrac{\partial}{\partial x} & \dfrac{\partial}{\partial y} & \dfrac{\partial}{\partial z}\\ x^2y^2z^2 & y^2z^2 & x^2z^2\end{vmatrix}$$

$$= \left[\frac{\partial}{\partial y}(x^2z^2) - \frac{\partial}{\partial z}(y^2z^2)\right]\mathbf{i} - \left[\frac{\partial}{\partial x}(x^2z^2) - \frac{\partial}{\partial z}(x^2y^2z^2)\right]\mathbf{j} + \left[\frac{\partial}{\partial x}(y^2z^2) - \frac{\partial}{\partial y}(x^2y^2z^2)\right]\mathbf{k}$$

$$= -2\left[y^2z\,\mathbf{i} + (xz^2 - x^2y^2z)\,\mathbf{j} + x^2yz^2\,\mathbf{k}\right]$$

2.3.3.1 Theorem involving Curl

The theorem involving the curl of vectors is better known as Stokes' theorem. Consider a surface $S$ in $\mathbb{R}^3$ that has a closed non-intersecting boundary $C$, with the topology of, say, one half of a tennis ball. That is, "if we move along C and fall to our left, we hit the side of the surface where the normal vectors are sticking out". Stokes' theorem states that for a vector field $\mathbf{F}$ within which the surface is situated:

$$\oint_C \mathbf{F}\cdot d\mathbf{r} = \iint_S (\nabla\times\mathbf{F})\cdot\mathbf{n}\,dS \qquad (42)$$

The theorem can be useful in either direction: sometimes the line integral is easier than the surface integral, and sometimes vice-versa.

Example 2.12: Evaluate the line integral of the function $\mathbf{F}(x, y, z) = \langle x^2y^3, e^{xy+z}, x + z^2\rangle$ around the circle $x^2 + z^2 = 1$ in the plane $y = 0$, oriented counterclockwise as viewed from the positive $y$-direction.

Solution: Whenever we want to integrate a vector field around a closed curve, and it looks like the computation might be messy, think of applying Stokes' Theorem. The circle $C$ in question is the positively-oriented boundary of the disc $S$ given by $x^2 + z^2 \le 1$, $y = 0$, with the unit normal vector $\mathbf{n}$ pointing in the positive $y$-direction. That is, $\mathbf{n} = \mathbf{j} = \langle 0, 1, 0\rangle$.


Stokes' Theorem tells us that:

$$\oint_C \mathbf{F}\cdot d\mathbf{r} = \iint_S (\nabla\times\mathbf{F})\cdot\mathbf{n}\,dS$$

Evaluating the curl of 푭, we get:

$$\nabla\times\mathbf{F} = \begin{vmatrix}\mathbf{i} & \mathbf{j} & \mathbf{k}\\ \dfrac{\partial}{\partial x} & \dfrac{\partial}{\partial y} & \dfrac{\partial}{\partial z}\\ x^2y^3 & e^{xy+z} & x + z^2\end{vmatrix} = -e^{xy+z}\mathbf{i} - \mathbf{j} + \left(ye^{xy+z} - 3x^2y^2\right)\mathbf{k}$$

$$(\nabla\times\mathbf{F})\cdot\mathbf{n} = \left(-e^{xy+z}\mathbf{i} - \mathbf{j} + (ye^{xy+z} - 3x^2y^2)\mathbf{k}\right)\cdot\langle 0, 1, 0\rangle = -1$$

$$\oint_C \mathbf{F}\cdot d\mathbf{r} = \iint_S (\nabla\times\mathbf{F})\cdot\mathbf{n}\,dS = \iint_S (-1)\,dS = -\mathrm{area}(S) = -\pi$$

2.4 Repeated Vector Operations – The Laplacian

So far, note the following:

i. grad must operate on a scalar field and gives a vector field in return;
ii. div operates on a vector field and gives a scalar field in return; and
iii. curl operates on a vector field and gives a vector field in return.

In addition to the vector relations involving del ($\nabla$) mentioned above, there are six other combinations in which del appears twice. The most important one, which involves a scalar, is:


$$\mathrm{div}\,\mathrm{grad}\,\varphi = \nabla\cdot\nabla\varphi = \nabla^2\varphi \qquad (43)$$

where $\varphi(x, y, z)$ is a scalar point function. The operator $\nabla^2 = \nabla\cdot\nabla$, also known as the Laplacian, takes a particularly simple form in Cartesian coordinates:

$$\nabla^2 = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2} \qquad (44)$$

When applied to a vector, it yields a vector, which is given in Cartesian coordinates:

$$\nabla^2\mathbf{A} = \frac{\partial^2\mathbf{A}}{\partial x^2} + \frac{\partial^2\mathbf{A}}{\partial y^2} + \frac{\partial^2\mathbf{A}}{\partial z^2} \qquad (45)$$

The cross product of two dels operating on a scalar function yields

$$\nabla\times\nabla\varphi = \mathrm{curl}\,\mathrm{grad}\,\varphi = \begin{vmatrix}\mathbf{i} & \mathbf{j} & \mathbf{k}\\ \dfrac{\partial}{\partial x} & \dfrac{\partial}{\partial y} & \dfrac{\partial}{\partial z}\\ \dfrac{\partial\varphi}{\partial x} & \dfrac{\partial\varphi}{\partial y} & \dfrac{\partial\varphi}{\partial z}\end{vmatrix} = 0 \qquad (46)$$

If $\nabla\times\mathbf{A} = 0$ for a vector field $\mathbf{A}$, then $\mathbf{A} = \nabla\varphi$ for some scalar field $\varphi$. In this case, $\mathbf{A}$ is irrotational.

Similarly,

$$\nabla\cdot(\nabla\times\mathbf{A}) = \mathrm{div}\,\mathrm{curl}\,\mathbf{A} = 0 \qquad (47)$$

Finally, a useful expansion is given by:

$$\nabla\times(\nabla\times\mathbf{A}) = \mathrm{curl}\,\mathrm{curl}\,\mathbf{A} = \nabla(\nabla\cdot\mathbf{A}) - \nabla^2\mathbf{A} \qquad (48)$$

Other forms for other coordinate systems for ∇2 are as follows:

1. Spherical polar coordinates:

$$\nabla^2 = \frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2\frac{\partial}{\partial r}\right) + \frac{1}{r^2\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial}{\partial\theta}\right) + \frac{1}{r^2\sin^2\theta}\frac{\partial^2}{\partial\phi^2} \qquad (49)$$


2. Two-dimensional polar coordinates:

$$\nabla^2 = \frac{\partial^2}{\partial r^2} + \frac{1}{r}\frac{\partial}{\partial r} + \frac{1}{r^2}\frac{\partial^2}{\partial\theta^2} \qquad (50)$$

3. Cylindrical coordinates:

$$\nabla^2 = \frac{\partial^2}{\partial r^2} + \frac{1}{r}\frac{\partial}{\partial r} + \frac{1}{r^2}\frac{\partial^2}{\partial\theta^2} + \frac{\partial^2}{\partial z^2} \qquad (51)$$

Several other useful relations are summarised below:

DEL OPERATOR RELATIONS

Let $\varphi$ and $\psi$ be scalar fields and $\mathbf{A}$ and $\mathbf{B}$ be vector fields.

Sum of fields:
$$\nabla(\varphi + \psi) = \nabla\varphi + \nabla\psi$$
$$\nabla\cdot(\mathbf{A} + \mathbf{B}) = \nabla\cdot\mathbf{A} + \nabla\cdot\mathbf{B}$$
$$\nabla\times(\mathbf{A} + \mathbf{B}) = \nabla\times\mathbf{A} + \nabla\times\mathbf{B}$$

Product of fields:
$$\nabla(\varphi\psi) = \varphi(\nabla\psi) + \psi(\nabla\varphi)$$
$$\nabla\cdot(\varphi\mathbf{A}) = \varphi(\nabla\cdot\mathbf{A}) + (\nabla\varphi)\cdot\mathbf{A}$$
$$\nabla\times(\varphi\mathbf{A}) = \varphi(\nabla\times\mathbf{A}) + (\nabla\varphi)\times\mathbf{A}$$
$$\nabla\cdot(\mathbf{A}\times\mathbf{B}) = \mathbf{B}\cdot(\nabla\times\mathbf{A}) - \mathbf{A}\cdot(\nabla\times\mathbf{B})$$
$$\nabla\times(\mathbf{A}\times\mathbf{B}) = \mathbf{A}(\nabla\cdot\mathbf{B}) + (\mathbf{B}\cdot\nabla)\mathbf{A} - \mathbf{B}(\nabla\cdot\mathbf{A}) - (\mathbf{A}\cdot\nabla)\mathbf{B}$$
$$\nabla(\mathbf{A}\cdot\mathbf{B}) = \mathbf{A}\times(\nabla\times\mathbf{B}) + \mathbf{B}\times(\nabla\times\mathbf{A}) + (\mathbf{B}\cdot\nabla)\mathbf{A} + (\mathbf{A}\cdot\nabla)\mathbf{B}$$

Laplacian:
$$\nabla\cdot(\nabla\varphi) = \nabla^2\varphi$$
$$\nabla\times(\nabla\times\mathbf{A}) = \nabla(\nabla\cdot\mathbf{A}) - \nabla^2\mathbf{A}$$


Example 2.13: If $\mathbf{A} = 2yz\,\mathbf{i} - x^2y\,\mathbf{j} + xz^2\mathbf{k}$, $\mathbf{B} = x^2\mathbf{i} + yz\,\mathbf{j} - xy\,\mathbf{k}$ and $\phi = 2x^2yz^3$, find (a) $(\mathbf{A}\cdot\nabla)\phi$ (b) $\mathbf{A}\cdot\nabla\phi$ (c) $\mathbf{B}\times\nabla\phi$ (d) $\nabla^2\phi$

Solution:

(a)
$$(\mathbf{A}\cdot\nabla)\phi = \left[(2yz\,\mathbf{i} - x^2y\,\mathbf{j} + xz^2\mathbf{k})\cdot\left(\frac{\partial}{\partial x}\mathbf{i} + \frac{\partial}{\partial y}\mathbf{j} + \frac{\partial}{\partial z}\mathbf{k}\right)\right]\phi$$

$$= \left[2yz\frac{\partial}{\partial x} - x^2y\frac{\partial}{\partial y} + xz^2\frac{\partial}{\partial z}\right]2x^2yz^3$$

$$= 2yz(4xyz^3) - x^2y(2x^2z^3) + xz^2(6x^2yz^2) = 8xy^2z^4 - 2x^4yz^3 + 6x^3yz^4$$

(b)
$$\nabla\phi = \frac{\partial}{\partial x}(2x^2yz^3)\mathbf{i} + \frac{\partial}{\partial y}(2x^2yz^3)\mathbf{j} + \frac{\partial}{\partial z}(2x^2yz^3)\mathbf{k} = 4xyz^3\mathbf{i} + 2x^2z^3\mathbf{j} + 6x^2yz^2\mathbf{k}$$

Therefore

$$\mathbf{A}\cdot\nabla\phi = (2yz\,\mathbf{i} - x^2y\,\mathbf{j} + xz^2\mathbf{k})\cdot(4xyz^3\mathbf{i} + 2x^2z^3\mathbf{j} + 6x^2yz^2\mathbf{k}) = 8xy^2z^4 - 2x^4yz^3 + 6x^3yz^4$$

(c) $\nabla\phi = 4xyz^3\mathbf{i} + 2x^2z^3\mathbf{j} + 6x^2yz^2\mathbf{k}$, therefore:

$$\mathbf{B}\times\nabla\phi = \begin{vmatrix}\mathbf{i} & \mathbf{j} & \mathbf{k}\\ x^2 & yz & -xy\\ 4xyz^3 & 2x^2z^3 & 6x^2yz^2\end{vmatrix} = (6x^2y^2z^3 + 2x^3yz^3)\mathbf{i} + (-4x^2y^2z^3 - 6x^4yz^2)\mathbf{j} + (2x^4z^3 - 4xy^2z^4)\mathbf{k}$$

(d)
$$\nabla^2\phi = \frac{\partial^2}{\partial x^2}(2x^2yz^3) + \frac{\partial^2}{\partial y^2}(2x^2yz^3) + \frac{\partial^2}{\partial z^2}(2x^2yz^3) = 4yz^3 + 0 + 12x^2yz$$


3. Linear Algebra, Matrices & Eigenvectors

In many practical systems, there naturally arises a set of quantities that can conveniently be represented as a certain dimensional array, referred to as a matrix. If matrices were simply a way of representing arrays of numbers, they would have only marginal utility as a means of visualising data. However, a whole branch of mathematics has evolved, involving the manipulation of matrices, which has become a powerful tool for the solution of many problems.

For example, consider the set of 푛 linear equations with 푛 unknowns

$$a_{11}Y_1 + a_{12}Y_2 + \cdots + a_{1n}Y_n = 0$$
$$a_{21}Y_1 + a_{22}Y_2 + \cdots + a_{2n}Y_n = 0$$
$$\cdots\cdots\cdots\cdots\cdots\cdots$$
$$a_{n1}Y_1 + a_{n2}Y_2 + \cdots + a_{nn}Y_n = 0 \qquad (52)$$

The necessary and sufficient condition for the set to have a non-trivial solution (other than 푌1 = 푌2 = ⋯ = 푌푛 = 0) is that the determinant of the array of coefficients is zero: 푑푒푡(퐴) = 0.

3.1 Basic definitions and notation

A matrix is an array of numbers with 푚 rows and 푛 columns. The (i, j)th element is the element found in row 푖 and column 푗.

For example, have a look at the matrix below. This matrix has $m = 2$ rows and $n = 3$ columns, and therefore the matrix order is $2\times 3$. The $(i, j)$th element is $a_{ij}$:

$$A = \begin{bmatrix}a_{11} & a_{12} & a_{13}\\ a_{21} & a_{22} & a_{23}\end{bmatrix} \qquad (53)$$

Matrices may be categorized based on the properties of its elements. Some basic definitions include:

1. The transpose of matrix $A$ (written $A^T$) is formed by interchanging element $a_{ij}$ with element $a_{ji}$. Therefore:

$$A^T = (a_{ji}), \qquad (A + B)^T = A^T + B^T, \qquad (AB)^T = B^T A^T \qquad (54)$$

A symmetric matrix equals its transpose, $A = A^T$.


2. A diagonal matrix is a square matrix ($m = n$) whose only non-zero elements lie along the leading diagonal. For example:

$$\mathrm{diag}\,A = \begin{bmatrix}a_{11} & 0 & 0\\ 0 & a_{22} & 0\\ 0 & 0 & a_{33}\end{bmatrix}$$

A diagonal can also be written for a list of matrices as:

$$\mathrm{diag}\,(a_{11}, a_{22}, \cdots, a_{nn})$$

which denotes the block diagonal matrix with elements $a_{11}, a_{22}, \cdots, a_{nn}$ along the diagonal and zeros elsewhere. A matrix formed in this way is sometimes called a direct sum of $a_{11}, a_{22}, \cdots, a_{nn}$, and the operation is denoted by $\oplus$:

$$a_{11}\oplus\cdots\oplus a_{nn} = \mathrm{diag}\,(a_{11}, a_{22}, \cdots, a_{nn})$$

3. In a square matrix of order $n$, the diagonal containing elements $a_{11}, a_{22}, \cdots, a_{nn}$ is called the principal or leading diagonal. The sum of the elements in this diagonal is called the trace of the $n\times n$ square matrix $A$; hence:

$$\mathrm{Trace}\,(A) = \mathrm{Tr}\,(A) = \sum_i a_{ii} \qquad (55)$$

We can also note a few more properties of the trace:

$$\mathrm{Tr}(A) = \mathrm{Tr}(A^T), \qquad \mathrm{Tr}(cA) = c\,\mathrm{Tr}(A), \qquad \mathrm{Tr}(A + B) = \mathrm{Tr}(A) + \mathrm{Tr}(B) \qquad (56)$$

4. Determinant of a square 푛 × 푛 matrix 퐴 is denoted as det(퐴) or |퐴|. It is determined by:

$$|A| = \sum_{j=1}^{n} a_{ij}\,a_{(ij)} \qquad (57)$$

where:

$$a_{(ij)} = (-1)^{i+j}\,|A_{(i)(j)}| \qquad (58)$$

with |퐴(푖)(푗)| denoting the submatrix that is formed from 퐴 by removing the 푖th row and the 푗th column.

Determinant of a matrix can also be defined as the following:


$$|AB| = |A||B|, \qquad |A| = |A^T|, \qquad |cA| = c^n|A| \qquad (59)$$

5. Adjugate of a 푛 × 푛 matrix 퐴 is defined as an 푛 × 푛 matrix of the cofactors of the elements of the transposed matrix. Therefore, we can write Adjugate of 푛 × 푛 matrix 퐴 as:

$$\mathrm{adj}(A) = \left(a_{(ji)}\right) = \left(a_{(ij)}\right)^T \qquad (60)$$

Adjugate has an interesting property:

퐴 푎푑푗(퐴) = 푎푑푗(퐴)퐴 = |퐴|퐼 (61)

3.2 Multiplication of matrices and multiplication of vectors and matrices

3.2.1 Matrix multiplication

If we let $A$ be of order $m\times n$ and $B$ of order $n\times p$, then the product of the two matrices $A$ and $B$ is

$$C = AB \qquad (62)$$

or

$$c_{ij} = \sum_{k=1}^{n} a_{ik}b_{kj} \qquad (63)$$

where the resulting matrix $C$ is of order $m\times p$.

Square matrices obey the laws expressed as below:

Associative: 퐴(퐵퐶) = (퐴퐵)퐶 (64)

Distributive: (퐴 + 퐵)퐶 = 퐴퐶 + 퐵퐶, (퐵 + 퐶)퐴 = 퐵퐴 + 퐶퐴 (65)

Matrix Polynomials

Polynomials in square matrices are similar to the more familiar polynomials in scalars. Let us consider:

$$p(A) = b_0I + b_1A + \cdots + b_kA^k \qquad (66)$$

The value of this polynomial is a matrix. The general theory of polynomials holds, and we have the useful factorizations of monomials:


For any positive integer $k$:
$$I - A^k = (I - A)(I + A + \cdots + A^{k-1}) \qquad (67)$$

For an odd positive integer $k$:
$$I + A^k = (I + A)(I - A + \cdots + A^{k-1}) \qquad (68)$$

3.2.2 Traces and determinants of square Cayley products

The useful property of the trace for the matrix 퐴 and 퐵 that are conformable for the multiplication 퐴퐵 and 퐵퐴 is

푇푟(퐴퐵) = 푇푟 (퐵퐴) (69)

This is obvious from the definitions of matrix multiplication and the trace. Due to the associativity of matrix multiplication, equation (69) can be further extended to:

푇푟(퐴퐵퐶) = 푇푟 (퐵퐶퐴) = 푇푟(퐶퐴퐵) (70)

If 퐴 and 퐵 are square matrices conformable for multiplication, then an important property of the determinant is

|퐴퐵| = |퐴||퐵| (71)

Or we can write the equation as:

$$\left|\begin{bmatrix}A & 0\\ -I & B\end{bmatrix}\right| = |A||B| \qquad (72)$$

3.2.3 The Kronecker product

Kronecker multiplication, denoted by $\otimes$, is not commutative, but it is associative. Therefore, $A\otimes B$ may not equal $B\otimes A$. Let us have an $m\times m$ matrix $A$ and an $n\times n$ matrix $B$. We can then form an $mn\times mn$ matrix $C$ by defining the direct product as:


$$C = A\otimes B = \begin{bmatrix}a_{11}B & a_{12}B & \cdots & a_{1m}B\\ a_{21}B & a_{22}B & \cdots & a_{2m}B\\ \vdots & \vdots & & \vdots\\ a_{m1}B & a_{m2}B & \cdots & a_{mm}B\end{bmatrix} \qquad (73)$$

To be more specific, let $A$ and $B$ be $2\times 2$ matrices:

$$A = \begin{bmatrix}a_{11} & a_{12}\\ a_{21} & a_{22}\end{bmatrix}, \qquad B = \begin{bmatrix}b_{11} & b_{12}\\ b_{21} & b_{22}\end{bmatrix}$$

The Kronecker product matrix 퐶 is the 4 × 4 matrix

$$C = A\otimes B = \begin{bmatrix}a_{11}b_{11} & a_{11}b_{12} & a_{12}b_{11} & a_{12}b_{12}\\ a_{11}b_{21} & a_{11}b_{22} & a_{12}b_{21} & a_{12}b_{22}\\ a_{21}b_{11} & a_{21}b_{12} & a_{22}b_{11} & a_{22}b_{12}\\ a_{21}b_{21} & a_{21}b_{22} & a_{22}b_{21} & a_{22}b_{22}\end{bmatrix}$$

The determinant of the Kronecker product of two square matrices, an $m\times m$ matrix $A$ and an $n\times n$ matrix $B$, has a simple relationship to the determinants of the individual matrices:

$$|A\otimes B| = |A|^n|B|^m \qquad (74)$$
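Both the direct product and the determinant identity can be checked numerically; a minimal NumPy sketch with matrices of our own choosing:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])          # m x m with m = 2, det(A) = -2
B = np.diag([2.0, 1.0, 1.0])        # n x n with n = 3, det(B) = 2
C = np.kron(A, B)                   # the (mn) x (mn) Kronecker product
print(C.shape)                      # (6, 6)
print(np.linalg.det(C))             # -32.0 (up to rounding)
print(np.linalg.det(A)**3 * np.linalg.det(B)**2)   # |A|^n |B|^m = -32.0
```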

Assuming the matrices are conformable for the indicated operations, some additional properties of Kronecker products are as follows:

$$(aA)\otimes(bB) = ab(A\otimes B) = (abA)\otimes B = A\otimes(abB) \qquad (75)$$

where $a$ and $b$ are scalars.

$$(A + B)\otimes C = A\otimes C + B\otimes C \qquad (76)$$

$$(A\otimes B)\otimes C = A\otimes(B\otimes C) \qquad (77)$$

$$(A\otimes B)^T = A^T\otimes B^T \qquad (78)$$

$$(A\otimes B)(C\otimes D) = AC\otimes BD \qquad (79)$$


3.3 Matrix Rank and the Inverse of a full rank matrix

The linear dependence or independence of the vectors forming the rows or columns of a matrix is an important characteristic of the matrix. The maximum number of linearly independent vectors is called the rank of the matrix, 푟푎푛푘 (퐴). Multiplication by a non-zero scalar does not change the linear dependence of vectors. Therefore, for the scalar 푎 with 푎 ≠ 0, we have

푟푎푛푘 (푎퐴) = 푟푎푛푘(퐴) (80)

For an $n\times m$ matrix $A$,

푟푎푛푘 (퐴) ≤ min(푛, 푚) (81)

Example 3.1: Find the rank of the matrix 퐴 below:

$$A = \begin{bmatrix}1 & 2 & 1\\ -2 & -3 & 1\\ 3 & 5 & 0\end{bmatrix}$$

Solution: First, we note that this is a $3\times 3$ matrix. If we look closely, we can see that the first two rows are linearly independent. However, the third row is dependent on the first and second rows, since $\text{Row 1} - \text{Row 2} = \text{Row 3}$. Therefore, the rank of matrix $A$ is 2.
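For larger matrices this inspection becomes impractical; NumPy computes the rank directly:

```python
import numpy as np

A = np.array([[ 1,  2, 1],
              [-2, -3, 1],
              [ 3,  5, 0]])
print(np.linalg.matrix_rank(A))   # 2
```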

3.3.1 Full Rank matrices

If the rank of a matrix is the same as its smaller dimension, we say the matrix is of full rank. In the case of a non-square matrix, we say the matrix is of full row rank or full column rank to emphasise which dimension is the smaller one. A matrix has full row rank when its rows are linearly independent, while a matrix has full column rank when its columns are linearly independent. For a square matrix, the matrix has full rank when all rows and columns are linearly independent, equivalently when the determinant of the matrix is not zero.

Rank of product of two matrices is less than or equals to the lesser rank of the two, or:

푟푎푛푘 (퐴퐵) ≤ min(푟푎푛푘(퐴), 푟푎푛푘(퐵)) (82)

Rank of sum of two matrices is less than or equals to the sum of their ranks, or:

푟푎푛푘 (퐴 + 퐵) ≤ 푟푎푛푘 (퐴) + 푟푎푛푘 (퐵) (83)


From equation (83), we can also write:

|푟푎푛푘 (퐴) − 푟푎푛푘 (퐵)| ≤ 푟푎푛푘 (퐴 + 퐵) (84)

3.3.2 Solutions of linear equations

An application of vectors and matrices involve systems of linear equations:

$$a_{11}x_1 + \cdots + a_{1m}x_m = b_1$$
$$\vdots$$
$$a_{n1}x_1 + \cdots + a_{nm}x_m = b_n \qquad (85)$$

or

$$Ax = b \qquad (86)$$

In this system, $A$ is called the coefficient matrix. An $x$ that satisfies this system of equations is then called a solution to the system. For a given $A$ and $b$, a solution may or may not exist. A system for which a solution exists is said to be consistent; otherwise, it is inconsistent. A linear system $A_{n\times m}x = b$ is consistent if and only if:

$$\mathrm{rank}([A\,|\,b]) = \mathrm{rank}(A) \qquad (87)$$

Namely, the space spanned by the columns of 퐴 is the same as that spanned by the columns of 퐴 and the vector 푏; therefore, 푏 must be a linear combination of the columns of 퐴. A special case that yields equation (87) for any 푏 is:

$$\mathrm{rank}(A_{n\times m}) = n \qquad (88)$$

And so if 퐴 is of full row rank, the system is consistent regardless of the value of 푏. In this case, of course, the number of rows of 퐴 must not be greater than the number of columns. A square system in which 퐴 is non-singular is clearly consistent, and the solution is given by:

푥 = 퐴−1푏 (89)

3.3.3 Preservation of positive definiteness

A certain type of product of a full rank matrix and a positive definite matrix preserves not only the rank but also the positive definiteness. If $C$ is $n\times n$ and positive definite and $A$ is $n\times m$ of rank $m$ ($m \le n$), then $A^TCA$ is positive definite. To understand this, let us assume the matrices $C$ and $A$ as described. Let $x$ be any $m$-vector such that $x \ne 0$, and let $y = Ax$. Because $A$ has full column rank, $y \ne 0$; we then have:

$$x^T(A^TCA)x = (Ax)^TC(Ax) = y^TCy > 0 \qquad (90)$$

Therefore, to summarise:

1. If 퐶 is positive definite and 퐴 is of full column rank, then 퐴푇퐶퐴 is positive definite.

Furthermore, we then have the converse:

2. If 퐴푇퐶퐴 is positive definite, then 퐴 is of full column rank.

For otherwise there exists an 푥 ≠ 0 such that 퐴푥 = 0, and so 푥푇(퐴푇퐶퐴)푥 = 0.

3.3.4 A lower bound on the rank of a matrix product

Equation (82) gives an upper bound on the rank of the product of two matrices: the rank cannot be greater than the rank of either of the factors. Now we develop a lower bound on the rank of the product of two matrices when one of them is square.

If 퐴 is an 푛 × 푛 (square) and 퐵 is a matrix with n rows, then:

푟푎푛푘 (퐴퐵) ≥ 푟푎푛푘 (퐴) + 푟푎푛푘 (퐵) − 푛 (91)

3.3.5 Inverse of products and sums of matrices

The inverse of the Cayley product of two nonsingular matrices of the same size is particularly easy to form. If 퐴 and 퐵 are square full rank matrices of the same size, then:

$$(AB)^{-1} = B^{-1}A^{-1} \qquad (92)$$

$$A(I + A)^{-1} = (I + A^{-1})^{-1} \qquad (93)$$

$$(A + BB^T)^{-1}B = A^{-1}B(I + B^TA^{-1}B)^{-1} \qquad (94)$$

$$(A^{-1} + B^{-1})^{-1} = A(A + B)^{-1}B \qquad (95)$$

$$A - A(A + B)^{-1}A = B - B(A + B)^{-1}B \qquad (96)$$

$$A^{-1} + B^{-1} = A^{-1}(A + B)B^{-1} \qquad (97)$$

$$(I + AB)^{-1} = I - A(I + BA)^{-1}B \qquad (98)$$

$$(I + AB)^{-1}A = A(I + BA)^{-1} \qquad (99)$$

$$(A\otimes B)^{-1} = A^{-1}\otimes B^{-1} \qquad (100)$$

Note: When 퐴 and/or 퐵 are not full rank, the inverse may not exist.

3.4 Eigensystems

Suppose $A$ is an $n\times n$ matrix. The number $\lambda$ is said to be an eigenvalue of $A$ if, for some non-zero vector $\mathbf{x}$, $A\mathbf{x} = \lambda\mathbf{x}$. Any non-zero vector $\mathbf{x}$ for which this equation holds is called an eigenvector for eigenvalue $\lambda$, or an eigenvector of $A$ corresponding to eigenvalue $\lambda$.

How to find eigenvalues and eigenvectors? To determine whether 휆 is an eigenvalue of 퐴, we need to determine whether there are any non-zero solutions to the matrix equation 퐴풙 = 휆풙. To do this, we can define the following:

(a) The eigenvalues of a square matrix $A$ are the numbers $\lambda$ that satisfy $|A - \lambda I| = 0$.
(b) The eigenvectors of a square matrix $A$ are the vectors $\mathbf{x}$ that satisfy $(A - \lambda I)\mathbf{x} = 0$.

There are two theorems involved in the eigensystems and they are:

1. The eigenvalues of any real symmetric matrix are real.
2. The eigenvectors of any real symmetric matrix corresponding to different eigenvalues are orthogonal.

Example 3.2: Let 퐴 be a square matrix as below. Find the eigenvalues and eigenvectors of matrix 퐴

$$A = \begin{bmatrix}1 & 1\\ 2 & 2\end{bmatrix}$$

Solution: To find the eigenvalues, we will need to find the determinant of |퐴 − 휆퐼| = 0, therefore:


$$|A - \lambda I| = \left|\begin{pmatrix}1 & 1\\ 2 & 2\end{pmatrix} - \lambda\begin{pmatrix}1 & 0\\ 0 & 1\end{pmatrix}\right| = \begin{vmatrix}1-\lambda & 1\\ 2 & 2-\lambda\end{vmatrix} = (1-\lambda)(2-\lambda) - 2 = \lambda^2 - 3\lambda$$

So, the eigenvalues are the solutions of $\lambda^2 - 3\lambda = 0$. The equation factorises as $\lambda(\lambda - 3) = 0$, with solutions $\lambda = 0$ and $\lambda = 3$. Hence the eigenvalues of $A$ are 0 and 3.

Now to find the eigenvectors for eigenvalue 0 we can solve the system (퐴 − 0퐼)풙 = 0, that is 퐴풙 = 0, or

퐴풙 = 0

$$\begin{bmatrix}1 & 1\\ 2 & 2\end{bmatrix}\begin{bmatrix}x_1\\ x_2\end{bmatrix} = \begin{bmatrix}0\\ 0\end{bmatrix}$$

We then have to solve

$$x_1 + x_2 = 0, \qquad 2x_1 + 2x_2 = 0$$

which gives $x_1 = -x_2$. Choosing $x_2 = 1$ gives $x_1 = -1$. Therefore, an eigenvector for eigenvalue 0 is:

$$\mathbf{x} = \begin{bmatrix}-1\\ 1\end{bmatrix}$$

Similarly, to find the eigenvector for eigenvalue 3, we will solve (퐴 − 3퐼)풙 = 0 which is:

(퐴 − 3퐼)풙 = 0

$$\begin{bmatrix}-2 & 1\\ 2 & -1\end{bmatrix}\begin{bmatrix}x_1\\ x_2\end{bmatrix} = \begin{bmatrix}0\\ 0\end{bmatrix}$$

This is equivalent to the equations

−2푥1 + 푥2 = 0 , 2푥1 − 푥2 = 0

which gives $x_2 = 2x_1$. If we choose $x_1 = 1$, we then obtain the eigenvector

$$\mathbf{x} = \begin{bmatrix}1\\ 2\end{bmatrix}$$

Example 3.3: Suppose that

$$A = \begin{bmatrix}4 & 0 & 4\\ 0 & 4 & 4\\ 4 & 4 & 8\end{bmatrix}$$

Find the eigenvalues of $A$ and obtain one eigenvector for each eigenvalue.

Solution: To find the eigenvalues, we solve $|A - \lambda I| = 0$:

$$|A - \lambda I| = \begin{vmatrix}4-\lambda & 0 & 4\\ 0 & 4-\lambda & 4\\ 4 & 4 & 8-\lambda\end{vmatrix}$$

$$= (4-\lambda)\begin{vmatrix}4-\lambda & 4\\ 4 & 8-\lambda\end{vmatrix} + 4\begin{vmatrix}0 & 4-\lambda\\ 4 & 4\end{vmatrix}$$

$$= (4-\lambda)\left((4-\lambda)(8-\lambda) - 16\right) + 4\left(-4(4-\lambda)\right)$$

$$= (4-\lambda)\left((4-\lambda)(8-\lambda) - 16 - 16\right) = (4-\lambda)\left(\lambda^2 - 12\lambda + 32 - 32\right) = (4-\lambda)\,\lambda\,(\lambda - 12)$$

Therefore, we can solve |퐴 − 휆퐼| = 0 and the eigenvalues are 4, 0, 12.

To find the eigenvectors for eigenvalue 4, we solve the equation $(A - 4I)\mathbf{x} = 0$, that is,

$$\begin{bmatrix}0 & 0 & 4\\ 0 & 0 & 4\\ 4 & 4 & 4\end{bmatrix}\begin{bmatrix}x_1\\ x_2\\ x_3\end{bmatrix} = \begin{bmatrix}0\\ 0\\ 0\end{bmatrix}$$


The equations we get out of the equation above are:

$$4x_3 = 0, \qquad 4x_3 = 0, \qquad 4x_1 + 4x_2 + 4x_3 = 0$$

Therefore, 푥3 = 0 and 푥2 = −푥1. Choosing 푥1 = 1, we get the eigenvector

$$\mathbf{x} = \begin{bmatrix}1\\ -1\\ 0\end{bmatrix}$$

Similar solution for 휆 = 0, the eigenvector is:

1 풙 = [ 1 ] −1

and the solution for 휆 = 12 gives the eigenvector:

\mathbf{x} = \begin{pmatrix} 1 \\ 1 \\ 2 \end{pmatrix}
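As a numerical cross-check of Example 3.3 (a sketch, assuming numpy is available), np.linalg.eig recovers the same eigenvalues, and the eigenvectors for distinct eigenvalues come out orthogonal, exactly as the two theorems above predict:

import numpy as np

A = np.array([[4.0, 0.0, 4.0],
              [0.0, 4.0, 4.0],
              [4.0, 4.0, 8.0]])

w, V = np.linalg.eig(A)        # eigenvectors are the columns of V
print(np.round(w, 10))         # 12, 4, 0 (in some order)

for lam, v in zip(w, V.T):
    print(np.allclose(A @ v, lam * v))   # A v = lambda v for each pair

print(np.round(V.T @ V, 10))   # off-diagonal entries vanish (orthogonality)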

3.5 Diagonalisation of symmetric matrices

A square matrix 푈 is said to be orthogonal if its inverse (if it exists) equals its transpose. Therefore:

U^{-1} = U^T, \quad \text{or equivalently,} \quad U U^T = U^T U = I   (101)

If 푈 is a real orthogonal matrix of order 푛 × 푛 and 퐴 is a real matrix of the same order then 푈푇퐴푈 is called the orthogonal transform of 퐴.

Note: Since U^{-1} = U^T for orthogonal U, the equality U^T A U = D is the same as U^{-1} A U = D; the diagonal entries of D are the eigenvalues of A, and the columns of U are the corresponding eigenvectors.

The theorems involving diagonalization of a symmetric matrix are as follows:


1. If 퐴 is a symmetric matrix of order 푛 × 푛, then it is possible to find an orthogonal matrix 푈 of the same order such that the orthogonal transform of 퐴 with respect to 푈 is diagonal, and the diagonal elements of the transform are the eigenvalues of 퐴.

2. Cayley-Hamilton Theorem: A real square matrix satisfies its own characteristic equation (i.e. its own eigenvalue equation).

A^n + a_{n-1}A^{n-1} + a_{n-2}A^{n-2} + \cdots + a_1 A + a_0 I = 0

where

a_0 = (-1)^n |A|, \qquad a_{n-1} = (-1)^{n-1}\,\mathrm{tr}(A)

3. Trace Theorem: The sum of the eigenvalues of a matrix 퐴 equals the sum of the diagonal elements of 퐴, denoted 푇푟(퐴).

4. Determinant Theorem: The product of the eigenvalues of 퐴 equals the determinant of 퐴.
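Theorems 2-4 are easy to check numerically for a small random matrix. A sketch, assuming numpy is available (np.poly returns the characteristic-polynomial coefficients of a matrix, highest power first):

import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
w = np.linalg.eigvals(A)

print(np.isclose(w.sum(), np.trace(A)))        # Trace Theorem
print(np.isclose(w.prod(), np.linalg.det(A)))  # Determinant Theorem

c = np.poly(A)   # [1, c2, c1, c0] for n = 3
p_A = (np.linalg.matrix_power(A, 3) + c[1] * np.linalg.matrix_power(A, 2)
       + c[2] * A + c[3] * np.eye(3))
print(np.allclose(p_A, 0))                     # Cayley-Hamilton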

Example 3.4: Working with the same matrix as in Example 3.3, find the orthogonal matrix U and show that U^T A U = D:

A = \begin{pmatrix} 4 & 0 & 4 \\ 0 & 4 & 4 \\ 4 & 4 & 8 \end{pmatrix}

Solution: As we already observed, matrix A is symmetric, and we have calculated the three distinct eigenvalues 4, 0, 12 (in that order) and the eigenvectors associated with them are:

\begin{pmatrix} 1 \\ -1 \\ 0 \end{pmatrix}, \quad \begin{pmatrix} 1 \\ 1 \\ -1 \end{pmatrix}, \quad \begin{pmatrix} 1 \\ 1 \\ 2 \end{pmatrix}

Now these eigenvectors are not of length 1. For example, the first eigenvector has a length of √(1² + (−1)² + 0²) = √2. So, if we divide each entry by √2, we will indeed obtain an eigenvector of length 1:

\begin{pmatrix} 1/\sqrt{2} \\ -1/\sqrt{2} \\ 0 \end{pmatrix}

We can similarly normalize the other two vectors and therefore we obtain:


\begin{pmatrix} 1/\sqrt{3} \\ 1/\sqrt{3} \\ -1/\sqrt{3} \end{pmatrix}, \quad \begin{pmatrix} 1/\sqrt{6} \\ 1/\sqrt{6} \\ 2/\sqrt{6} \end{pmatrix}

Now we can form the matrix 푈 whose columns are these normalized eigenvectors:

U = \begin{pmatrix} 1/\sqrt{2} & 1/\sqrt{3} & 1/\sqrt{6} \\ -1/\sqrt{2} & 1/\sqrt{3} & 1/\sqrt{6} \\ 0 & -1/\sqrt{3} & 2/\sqrt{6} \end{pmatrix}

Therefore, U is orthogonal and U^T A U = D = diag(4, 0, 12).
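A numerical sanity check of this diagonalisation (a sketch, assuming numpy is available):

import numpy as np

A = np.array([[4.0, 0.0, 4.0],
              [0.0, 4.0, 4.0],
              [4.0, 4.0, 8.0]])
s2, s3, s6 = np.sqrt(2.0), np.sqrt(3.0), np.sqrt(6.0)
U = np.array([[ 1/s2, 1/s3, 1/s6],
              [-1/s2, 1/s3, 1/s6],
              [ 0.0, -1/s3, 2/s6]])

print(np.allclose(U @ U.T, np.eye(3)))   # U is orthogonal
print(np.round(U.T @ A @ U, 10))         # diag(4, 0, 12)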


4. Generalised Vector Calculus – Integral Theorems

The four fundamental theorems of vector calculus are generalisations of the fundamental theorem of calculus, which equates the integral of the derivative G'(t) to the values of G(t) at the interval boundary points:

\int_a^b G'(t)\, dt = G(b) - G(a)   (102)

Similarly, the fundamental theorems of vector calculus state that an integral of some type of derivative over some object is equal to the values of the function along the boundary of that object. The four fundamental theorems are the gradient theorem for line integrals, Green’s theorem, Stokes’ theorem and the divergence theorem.

4.1 The gradient theorem for line integrals

The Gradient Theorem is also referred to as the Fundamental Theorem of Calculus for Line Integrals. It represents the generalisation of an integration along an axis, e.g. dx or dy, to the integration of vector fields along arbitrary curves, C, in their base space. It is expressed by

\int_C \nabla f \cdot d\mathbf{s} = f(\mathbf{q}) - f(\mathbf{p})   (103)

where p and q are the endpoints of C. This means the line integral of the gradient of some function is just the difference of the function evaluated at the endpoints of the curve. In particular, this means that the integral of ∇f depends only on the endpoints, not on the curve itself. A few notes to remember when using this theorem:

i. For closed curves, the line integral is zero:

\oint_C \nabla f \cdot d\mathbf{s} = 0

ii. Gradient fields are path independent: if F = ∇f, then the line integral between two points P and Q does not depend on the path connecting the two points.

iii. The theorem holds in any dimension. In one dimension, it reduces to the fundamental theorem of calculus, Equation (102).

iv. The theorem justifies the name conservative for gradient vector fields.


Example 4.1: Let f(x, y, z) = x² + y⁴ + z. Find the line integral of the vector field F(x, y, z) = ∇f(x, y, z) along the path s(t) = ⟨cos(5t), sin(2t), t²⟩ from t = 0 to t = 2π.

Solution:

At t = 0, s(0) = ⟨1, 0, 0⟩, therefore f(s(0)) = 1.

At t = 2π, s(2π) = ⟨1, 0, 4π²⟩, therefore f(s(2π)) = 1 + 4π².

Hence:

\int_C \nabla f \cdot d\mathbf{s} = f(\mathbf{s}(2\pi)) - f(\mathbf{s}(0)) = (1 + 4\pi^2) - 1 = 4\pi^2

4.2 Green’s Theorem

Let’s first define some notation. Consider a domain 풟 whose boundary 풞 is a simple closed curve – that is, a closed curve that does not intersect itself (see Figure 4.1 below). We follow standard usage and denote the boundary curve 풞 by 휕풟. The counterclockwise orientation of 휕풟 is called the boundary orientation. When you traverse the boundary in this direction, the domain lies to your left (see Figure 4.1).

Figure 4.1. The boundary of 풟 is a simple closed curve 풞 that is denoted by 휕풟. The boundary is oriented in the counterclockwise direction.

We have two notations for the line integral of F = ⟨F₁, F₂⟩, which are:


\int_C \mathbf{F} \cdot d\mathbf{s} \quad \text{and} \quad \int_C F_1\, dx + F_2\, dy   (104)

If 풞 is parametrized by c(t) = (x(t), y(t)) for a ≤ t ≤ b, then

dx = x'(t)\, dt, \qquad dy = y'(t)\, dt

\int_C F_1\, dx + F_2\, dy = \int_a^b \left[ F_1(x(t), y(t))\, x'(t) + F_2(x(t), y(t))\, y'(t) \right] dt   (105)

In this section, we will assume that the components of all vector fields have continuous partial derivatives, and also that 풞 is smooth (풞 has a parametrization with derivatives of all orders) or piecewise smooth (a finite union of smooth curves joined together at corners).

Green’s Theorem: Let 풟 be a domain whose boundary 휕풟 is a simple closed curve, oriented counterclockwise. Then:

\int_{\partial\mathcal{D}} F_1\, dx + F_2\, dy = \iint_{\mathcal{D}} \left( \frac{\partial F_2}{\partial x} - \frac{\partial F_1}{\partial y} \right) dA   (106)

Proof: A complete proof is quite technical, so we shall make the simplifying assumption that the boundary of 풟 can be described as the union of two graphs y = g(x) and y = f(x), with g(x) ≤ f(x), as in Figure 4.2, and also as the union of two graphs x = g₁(y) and x = f₁(y), with g₁(y) ≤ f₁(y), as in Figure 4.3.

Green’s Theorem splits up into two equations, one for F1 and one for F2:

\int_{\partial\mathcal{D}} F_1\, dx = -\iint_{\mathcal{D}} \frac{\partial F_1}{\partial y}\, dA   (107)

\int_{\partial\mathcal{D}} F_2\, dy = \iint_{\mathcal{D}} \frac{\partial F_2}{\partial x}\, dA   (108)

In other words, Green’s Theorem is obtained by adding equations (107) and (108). To prove equation (107), we write:


\int_{\partial\mathcal{D}} F_1\, dx = \oint_{\mathcal{C}_1} F_1\, dx + \oint_{\mathcal{C}_2} F_1\, dx   (109)

where 풞₁ is the graph of y = g(x) and 풞₂ is the graph of y = f(x), oriented as in Figure 4.2. To compute these line integrals, we parametrize the graphs from left to right using t as the parameter:

Graph of y = g(x): c₁(t) = (t, g(t)), a ≤ t ≤ b

Graph of y = f(x): c₂(t) = (t, f(t)), a ≤ t ≤ b

Since 풞2 is oriented from right to left, the line integral over 휕풟 is the difference

\int_{\partial\mathcal{D}} F_1\, dx = \oint_{\mathcal{C}_1} F_1\, dx - \oint_{\mathcal{C}_2} F_1\, dx

Figure 4.2. The boundary curve 휕풟 is the union of the graphs of y = g(x) and y = f(x), oriented counterclockwise. Figure 4.3. The boundary curve 휕풟 is also the union of the graphs of x = g₁(y) and x = f₁(y), oriented counterclockwise.

In both parametrizations, x = t, so dx = dt and by Equation (105),

\int_{\partial\mathcal{D}} F_1\, dx = \int_{t=a}^{b} F_1(t, g(t))\, dt - \int_{t=a}^{b} F_1(t, f(t))\, dt   (110)


Now, the key step is to apply the Fundamental Theorem of Calculus to ∂F₁/∂y (t, y) as a function of y with t held constant:

F_1(t, f(t)) - F_1(t, g(t)) = \int_{y=g(t)}^{f(t)} \frac{\partial F_1}{\partial y}(t, y)\, dy

Substituting the integral on the right in Equation (110), we obtain Equation (107)

\int_{\partial\mathcal{D}} F_1\, dx = -\int_{t=a}^{b} \int_{y=g(t)}^{f(t)} \frac{\partial F_1}{\partial y}(t, y)\, dy\, dt = -\iint_{\mathcal{D}} \frac{\partial F_1}{\partial y}\, dA

Equation (108) is proved in a similar fashion, by expressing 휕풟 as the union of the graphs of x = f1 (y) and x = g1 (y).

Recall that if curl F = 0 in a simply connected region, then the line integral along a closed curve is zero, and if two curves connect the same two points then the line integrals along those curves agree. In that case, Equation (106) becomes:

\frac{\partial F_2}{\partial x} - \frac{\partial F_1}{\partial y} = 0

Example 4.2: Verify Green’s Theorem for the line integral along the unit circle 풞, oriented counterclockwise

\oint_{\mathcal{C}} xy^2\, dx + x\, dy

Solution: Step 1. Evaluate the line integral directly.

We use the standard parametrization of the unit circle:

x = \cos\theta, \quad y = \sin\theta, \qquad dx = -\sin\theta\, d\theta, \quad dy = \cos\theta\, d\theta

The integrand in the line integral is


xy^2\, dx + x\, dy = \cos\theta \sin^2\theta\, (-\sin\theta\, d\theta) + \cos\theta\, (\cos\theta\, d\theta)

= \left( -\cos\theta \sin^3\theta + \cos^2\theta \right) d\theta

and

\oint_{\mathcal{C}} xy^2\, dx + x\, dy = \int_0^{2\pi} \left( -\cos\theta \sin^3\theta + \cos^2\theta \right) d\theta

= -\left. \frac{\sin^4\theta}{4} \right|_0^{2\pi} + \frac{1}{2}\left. \left( \theta + \frac{1}{2}\sin 2\theta \right) \right|_0^{2\pi}

= 0 + \frac{1}{2}(2\pi + 0)

= \pi

Step 2: Evaluate the line integral using Green’s Theorem.

In this example, F₁ = xy² and F₂ = x, so

\frac{\partial F_2}{\partial x} - \frac{\partial F_1}{\partial y} = \frac{\partial}{\partial x}x - \frac{\partial}{\partial y}xy^2 = 1 - 2xy

According to Green’s Theorem, from Equation (106):

\oint_{\mathcal{C}} xy^2\, dx + x\, dy = \iint_{\mathcal{D}} \left( \frac{\partial F_2}{\partial x} - \frac{\partial F_1}{\partial y} \right) dA = \iint_{\mathcal{D}} (1 - 2xy)\, dA

where 풟 is the disk x² + y² ≤ 1 enclosed by 풞. The integral of 2xy over 풟 is zero by symmetry – the contributions for positive and negative x cancel. We can check this directly:

-\iint_{\mathcal{D}} 2xy\, dA = -2\int_{x=-1}^{1} \int_{y=-\sqrt{1-x^2}}^{\sqrt{1-x^2}} xy\, dy\, dx = -\int_{x=-1}^{1} \left. x y^2 \right|_{y=-\sqrt{1-x^2}}^{\sqrt{1-x^2}} dx = 0

Therefore,

\iint_{\mathcal{D}} \left( \frac{\partial F_2}{\partial x} - \frac{\partial F_1}{\partial y} \right) dA = \iint_{\mathcal{D}} 1\, dA = \mathrm{Area}(\mathcal{D}) = \pi
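Both sides of Green's Theorem in this example can also be evaluated numerically. A sketch, assuming scipy is available (quad for the parametrized line integral of Step 1, dblquad for the double integral of Step 2):

import numpy as np
from scipy import integrate

# Step 1 integrand in theta, from the unit-circle parametrization
lhs, _ = integrate.quad(
    lambda th: -np.cos(th) * np.sin(th)**3 + np.cos(th)**2,
    0.0, 2.0 * np.pi)

# Step 2: double integral of 1 - 2xy over the unit disk
rhs, _ = integrate.dblquad(
    lambda y, x: 1.0 - 2.0 * x * y,
    -1.0, 1.0,
    lambda x: -np.sqrt(1.0 - x**2),
    lambda x: np.sqrt(1.0 - x**2))

print(lhs, rhs, np.pi)   # all approximately 3.14159...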


4.3 Stokes’ Theorem

Stokes’ Theorem is an extension of Green’s Theorem to three dimensions, in which circulation is related to a surface integral in ℝ³ (rather than to a double integral in the plane). In order to state it, let’s first introduce some definitions and terminology.

Figure 4.4 shows three surfaces with different types of boundaries. The boundary of a surface is denoted as 휕푆. Observe that the boundary in (A) is a single, simple closed curve and the boundary in (B) consists of three closed curves. The surface in (C) is called a closed surface because its boundary is empty. In this case, we write 휕푆 = 0.

Figure 4.4. Surfaces and their boundaries.

Recall that an orientation is a continuously varying choice of unit normal vector at each point of a surface S. When S is oriented, we can specify an orientation of 휕푆, called the boundary orientation.

Imagine that you are a unit vector walking along the boundary curve. The boundary orientation is the direction for which the surface is on your left as you walk. For example, the boundary of the surface in Figure 4.5 consists of two curves, 풞₁ and 풞₂.

In Figure 4.5 (A), the normal vector points to the outside. The woman (representing the normal vector) is walking along 풞1 and has the surface to her left, so she is walking in the positive direction. The curve 풞2 is oriented in the opposite direction because she would have to walk along 풞2 in that direction to keep the surface to her left.

The boundary orientations in Figure 4.5 (B) are reversed because the opposite normal has been selected to orient the surface.


Figure 4.5. The orientation of the boundary 휕푆 for each of the two possible orientations of the surface S.

Recall from Chapter 2: all that is left to do is to define curl. The curl of a vector field F = ⟨F₁, F₂, F₃⟩ is a vector field defined by the symbolic determinant

\mathrm{curl}(\mathbf{F}) = \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ \partial/\partial x & \partial/\partial y & \partial/\partial z \\ F_1 & F_2 & F_3 \end{vmatrix}

= \left( \frac{\partial F_3}{\partial y} - \frac{\partial F_2}{\partial z} \right)\mathbf{i} - \left( \frac{\partial F_3}{\partial x} - \frac{\partial F_1}{\partial z} \right)\mathbf{j} + \left( \frac{\partial F_2}{\partial x} - \frac{\partial F_1}{\partial y} \right)\mathbf{k}

Recall from Chapter 2 that the curl is the symbolic cross product

\mathrm{curl}(\mathbf{F}) = \nabla \times \mathbf{F}

where ∇ is the del “operator” (also called “nabla”):

\nabla = \left\langle \frac{\partial}{\partial x}, \frac{\partial}{\partial y}, \frac{\partial}{\partial z} \right\rangle

It is straightforward to check that curl obeys the linearity rules:

curl (F + G) = curl (F) + curl (G)

curl (c F) = c curl (F) (c being any constant)


Now, going back to Stokes’ Theorem, let’s assume that S is an oriented surface with parametrization G : 풟 → S, where 풟 is a domain in the plane bounded by smooth, simple closed curves, and G is one-to-one and regular, except possibly on the boundary of 풟. More generally, S may be a finite union of surfaces of this type. The surfaces in applications we consider, such as spheres, cubes and graphs of functions, satisfy these conditions.

For surface S described above, Stokes’ Theorem gives:

\oint_{\partial S} \mathbf{F} \cdot d\mathbf{s} = \iint_S \mathrm{curl}(\mathbf{F}) \cdot d\mathbf{s}   (111)

The integral on the left is defined relative to the boundary orientation of 휕푆. If S is closed (that is, 휕푆 is empty), then the surface integral on the right is zero.

Proof: Each side of Equation (111) is equal to a sum over the components of F:

\oint_{\mathcal{C}} \mathbf{F} \cdot d\mathbf{s} = \oint_{\mathcal{C}} F_1\, dx + F_2\, dy + F_3\, dz

\iint_S \mathrm{curl}(\mathbf{F}) \cdot d\mathbf{s} = \iint_S \mathrm{curl}(F_1\mathbf{i}) \cdot d\mathbf{s} + \iint_S \mathrm{curl}(F_2\mathbf{j}) \cdot d\mathbf{s} + \iint_S \mathrm{curl}(F_3\mathbf{k}) \cdot d\mathbf{s}

The proof consists of showing that the F1-, F2-, and F3- terms are separately equal.

We will prove this under the simplifying assumption that S is the graph z = f(x, y) lying over a domain in the xy-plane. Furthermore, we will carry out the details only for the F₁-terms. The calculation for the F₂- and F₃-components is similar.

Thus we shall prove that

\oint_{\mathcal{C}} F_1\, dx = \iint_S \mathrm{curl}(F_1\mathbf{i}) \cdot d\mathbf{s}   (112)


Figure 4.6.

Orient S with the upward-pointing normal as in Figure 4.6 and let 풞 = 휕푆 be the boundary curve. Let 풞₀ be the boundary of 풟 in the xy-plane, and let c₀(t) = (x(t), y(t)) (for a ≤ t ≤ b) be a counterclockwise parametrization of 풞₀ as in Figure 4.6. The boundary curve 풞 projects onto 풞₀, so 풞 has parametrization

퐜(푡) = (푥(푡), 푦(푡), 푓(푥(푡), 푦(푡)))

And thus

\oint_{\mathcal{C}} F_1(x, y, z)\, dx = \int_a^b F_1\big(x(t), y(t), f(x(t), y(t))\big)\, \frac{dx}{dt}\, dt

The integral on the right is precisely the integral we obtain by integrating F₁(x, y, f(x, y)) dx over the curve 풞₀ in the plane ℝ². In other words,

\oint_{\mathcal{C}} F_1(x, y, z)\, dx = \int_{\mathcal{C}_0} F_1(x, y, f(x, y))\, dx

By applying Green’s Theorem to the integral on the right,


\oint_{\mathcal{C}} F_1(x, y, z)\, dx = -\iint_{\mathcal{D}} \frac{\partial}{\partial y} F_1(x, y, f(x, y))\, dA

By the Chain Rule,

\frac{\partial}{\partial y} F_1(x, y, f(x, y)) = F_{1y}(x, y, f(x, y)) + F_{1z}(x, y, f(x, y))\, f_y(x, y)

So, we finally obtain

\oint_{\mathcal{C}} F_1\, dx = -\iint_{\mathcal{D}} \left( F_{1y}(x, y, f(x, y)) + F_{1z}(x, y, f(x, y))\, f_y(x, y) \right) dA   (113)

To finish the proof, we will compute the surface integral of curl (퐹1푖⃗) using the parametrization G (x, y) = (x, y, f(x, y)) of S:

(Note that n is the upward-pointing normal)

퐧 = 〈−푓푥(푥, 푦), −푓푦(푥, 푦), 1〉

\mathrm{curl}(F_1\mathbf{i}) \cdot \mathbf{n} = \langle 0, F_{1z}, -F_{1y} \rangle \cdot \langle -f_x(x, y), -f_y(x, y), 1 \rangle

= -F_{1z}(x, y, f(x, y))\, f_y(x, y) - F_{1y}(x, y, f(x, y))

\iint_S \mathrm{curl}(F_1\mathbf{i}) \cdot d\mathbf{s} = -\iint_{\mathcal{D}} \left( F_{1z}(x, y, f(x, y))\, f_y(x, y) + F_{1y}(x, y, f(x, y)) \right) dA   (114)

The right-hand sides of Equation (113) and Equation (114) are equal. This proves Equation (112)

Example 4.3: Let F(x, y, z) = −푦2푖⃗ + 푥푗⃗ + 푧2푘⃗⃗ and 풞 is the curve of intersection of the plane y + z = 2 and the cylinder x2 + y2 = 1 (Orient 풞 to be counterclockwise when viewed from above). Evaluate

\int_{\mathcal{C}} \mathbf{F} \cdot d\mathbf{r}


Solution: We first compute curl F for F(x, y, z) = −y² i + x j + z² k:

\mathrm{curl}\,\mathbf{F} = \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ \partial/\partial x & \partial/\partial y & \partial/\partial z \\ -y^2 & x & z^2 \end{vmatrix} = (1 + 2y)\,\mathbf{k}

There are many surfaces with boundary 풞. The most convenient choice, though, is the elliptical region S in the plane y + z = 2 that is bounded by 풞. If we orient S upward, 풞 has the induced positive orientation.

The projection 풟 of S onto the xy-plane is the disk x² + y² ≤ 1, so by using the equation z = 2 − y and applying Stokes’ Theorem, we obtain:

\int_{\mathcal{C}} \mathbf{F} \cdot d\mathbf{r} = \iint_S \mathrm{curl}\,\mathbf{F} \cdot d\mathbf{S} = \iint_{\mathcal{D}} (1 + 2y)\, dA

= \int_0^{2\pi} \int_0^1 (1 + 2r\sin\theta)\, r\, dr\, d\theta

= \int_0^{2\pi} \left[ \frac{r^2}{2} + \frac{2r^3}{3}\sin\theta \right]_0^1 d\theta

= \int_0^{2\pi} \left( \frac{1}{2} + \frac{2}{3}\sin\theta \right) d\theta

= \frac{1}{2}(2\pi) + 0 = \pi
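The two sides of Stokes' Theorem in this example can be checked numerically as well. A sketch, assuming scipy is available; the curve 풞 is parametrized as (cos t, sin t, 2 − sin t), and the surface side reuses the polar-coordinate integral above:

import numpy as np
from scipy import integrate

def circulation(t):
    x, y, z = np.cos(t), np.sin(t), 2.0 - np.sin(t)
    dx, dy, dz = -np.sin(t), np.cos(t), -np.cos(t)
    return -y**2 * dx + x * dy + z**2 * dz   # F . dr/dt

line, _ = integrate.quad(circulation, 0.0, 2.0 * np.pi)

surf, _ = integrate.dblquad(
    lambda r, th: (1.0 + 2.0 * r * np.sin(th)) * r,
    0.0, 2.0 * np.pi, 0.0, 1.0)

print(line, surf, np.pi)   # all approximately 3.14159...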

4.4 Divergence Theorem

In section 4.2, Green’s Theorem was given; in its vector (flux) version it reads

\oint_C \mathbf{F} \cdot \mathbf{n}\, ds = \iint_{\mathcal{D}} \mathrm{div}\, \mathbf{F}(x, y)\, dA

where 퐶 is the positively oriented boundary curve of the plane region 풟. If we were seeking to extend this theorem to vector fields on ℝ³, we might make the guess that

\iint_S \mathbf{F} \cdot \mathbf{n}\, dS = \iiint_E \mathrm{div}\, \mathbf{F}(x, y, z)\, dV   (115)

where S is the boundary surface of the solid region E.

Let E be a simple solid region, let S be the boundary surface of E with positive (outward) orientation, and let F be a vector field whose component functions have continuous partial derivatives on an open region that contains E. Then the Divergence Theorem can be written as:

\iint_S \mathbf{F} \cdot d\mathbf{S} = \iiint_E \mathrm{div}\, \mathbf{F}\, dV   (116)

Note that the Divergence Theorem is also often called Gauss’ Theorem.

Example 4.4: Evaluate

\iint_S \mathbf{F} \cdot d\mathbf{S}

where F(x, y, z) = xy i + (y² + e^{xz²}) j + sin(xy) k and S is the surface of the region E bounded by the parabolic cylinder z = 1 − x² and the planes z = 0, y = 0, y + z = 2.


Solution: It would be extremely difficult to evaluate the given surface integral directly: we would have to evaluate four surface integrals corresponding to the four pieces of S.

Also, the divergence of F is much less complicated than F itself:

\mathrm{div}\,\mathbf{F} = \frac{\partial}{\partial x}(xy) + \frac{\partial}{\partial y}\left( y^2 + e^{xz^2} \right) + \frac{\partial}{\partial z}\sin(xy)

= y + 2y

= 3y

So, we will use the Divergence Theorem to transform the given surface integral into a triple integral. The easiest way to evaluate triple integral is to express E as a type 3 region:

E = \left\{ (x, y, z) \;\middle|\; -1 \le x \le 1,\; 0 \le z \le 1 - x^2,\; 0 \le y \le 2 - z \right\}

Then, if we use Equation (116), we will have:

\iint_S \mathbf{F} \cdot d\mathbf{S} = \iiint_E \mathrm{div}\, \mathbf{F}\, dV = \iiint_E 3y\, dV

= 3\int_{-1}^{1} \int_0^{1-x^2} \int_0^{2-z} y\, dy\, dz\, dx

= 3\int_{-1}^{1} \int_0^{1-x^2} \frac{(2-z)^2}{2}\, dz\, dx

= \frac{3}{2} \int_{-1}^{1} \left[ -\frac{(2-z)^3}{3} \right]_0^{1-x^2} dx

= -\frac{1}{2} \int_{-1}^{1} \left[ (x^2 + 1)^3 - 8 \right] dx

= -\int_0^1 \left( x^6 + 3x^4 + 3x^2 - 7 \right) dx

= \frac{184}{35}
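The value 184/35 can be confirmed numerically with scipy's tplquad (a sketch; note that tplquad expects the innermost integration variable as the first argument of the integrand):

import numpy as np
from scipy import integrate

val, _ = integrate.tplquad(
    lambda y, z, x: 3.0 * y,        # innermost variable (y) first
    -1.0, 1.0,                      # x limits
    0.0, lambda x: 1.0 - x**2,      # z limits
    0.0, lambda x, z: 2.0 - z)      # y limits
print(val, 184.0 / 35.0)            # both approximately 5.2571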


5. Ordinary Differential Equations

5.1 First-Order Linear Differential Equations

The first-order linear differential equation takes the form

\frac{dy}{dx} + P(x)\, y = Q(x)   (117)

where 푃 and 푄 are continuous functions on a given interval.

Let’s take an easy example of a linear equation 푥푦′ + 푦 = 2푥, for 푥 ≠ 0. We can rewrite this equation as:

y' + \frac{1}{x}\, y = 2   (118)

Using the Product Rule, we can rewrite the original equation as

푥푦′ + 푦 = (푥푦)′

Now we can rewrite the above equation as

(푥푦)′ = 2푥

Now if we integrate both sides, we get

xy = x^2 + C \qquad \text{or} \qquad y = x + \frac{C}{x}

We can solve every first-order linear differential equation in a similar fashion by multiplying both sides of Equation (117) by a suitable function I(x) called an integrating factor. We try to find 퐼 so that the left side of Equation (117), when multiplied by 퐼(푥), becomes the derivative of the product 퐼(푥)푦:

퐼(푥)(푦′ + 푃(푥)푦) = (퐼(푥)푦)′ (119)

If we can find such a function I, then Equation (117) becomes

(퐼(푥)푦)′ = 퐼(푥)푄(푥)

Integrating both sides, we would have


I(x)\, y = \int I(x) Q(x)\, dx + C

So the solution would be

y(x) = \frac{1}{I(x)} \left[ \int I(x) Q(x)\, dx + C \right]   (120)

To find such an 퐼, we expand Equation (119) and cancel terms

퐼(푥)푦′ + 퐼(푥)푃(푥)푦 = (퐼(푥)푦)′ = 퐼′(푥)푦 + 퐼(푥)푦′

퐼(푥)푃(푥) = 퐼′(푥)

This is a separable differential equation for 퐼, which we solve as follows:

\int \frac{dI}{I} = \int P(x)\, dx

\ln|I| = \int P(x)\, dx

I = A\, e^{\int P(x)\, dx}

where A = ±e^C. Let’s take A = 1, as we are looking for a particular integrating factor:

I(x) = e^{\int P(x)\, dx}   (121)

Therefore, to solve a linear differential equation y' + P(x)y = Q(x), multiply both sides by the integrating factor I(x) = e^{∫P(x)dx} and integrate both sides.
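Computer algebra systems carry out exactly this integrating-factor procedure. A minimal sketch, assuming sympy is available, applied to the introductory example xy' + y = 2x:

import sympy as sp

x = sp.symbols('x', positive=True)
y = sp.Function('y')

# y' + (1/x) y = 2, i.e. Equation (118); here I(x) = x and (xy)' = 2x
ode = sp.Eq(y(x).diff(x) + y(x) / x, 2)
print(sp.dsolve(ode, y(x)))   # y(x) = C1/x + x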

Example 5.1: Find the solution of the initial-value problem

x^2 y' + xy = 1, \qquad x > 0, \qquad y(1) = 2

Solution: We must first divide both sides by the coefficient of 푦’ to put the differential equation into standard form

y' + \frac{1}{x}\, y = \frac{1}{x^2}, \qquad x > 0   (122)


The integrating factor is

I(x) = e^{\int (1/x)\, dx} = e^{\ln x} = x

Multiplication of Equation (122) by 푥 gives

xy' + y = \frac{1}{x} \qquad \text{or} \qquad (xy)' = \frac{1}{x}

Then:

xy = \int \frac{1}{x}\, dx = \ln x + C

y = \frac{\ln x + C}{x}

Since 푦(1) = 2, we have

2 = \frac{\ln 1 + C}{1} = C

Therefore, the solution to the initial-value problem is

y = \frac{\ln x + 2}{x}

5.2 Second-Order Linear Differential Equations

A second-order linear differential equation has the form

P(x)\frac{d^2y}{dx^2} + Q(x)\frac{dy}{dx} + R(x)\, y = G(x)   (123)

where 푃, 푄, 푅 and 퐺 are continuous functions. In this section, we will only cover the case where 퐺(푥) = 0 for all 푥 in Equation (123). Such equations are called homogeneous linear differential equations. Hence, the form of a second-order homogeneous linear differential equation is


P(x)\frac{d^2y}{dx^2} + Q(x)\frac{dy}{dx} + R(x)\, y = 0   (124)

If 퐺(푥) ≠ 0 for some 푥, Equation (123) is nonhomogeneous and will be dealt with in section 5.4.

Two basic facts enable us to solve homogeneous linear differential equations.

A. If we know two solutions 푦1 and 푦2 of such an equation, then the linear combination 푦 = 푐1푦1(푥) + 푐2푦2(푥) is also a solution. Therefore, if 푦1(푥) and 푦2(푥) are both solutions of the linear homogeneous equation and 푐1 and 푐2 are any constants, then the function in Equation (125) below is also a solution of Equation (124)

푦(푥) = 푐1푦1(푥) + 푐2푦2(푥) (125)

Let’s prove this: since 푦1 and 푦2 are solutions of Equation (124), we have

P(x)\, y_1'' + Q(x)\, y_1' + R(x)\, y_1 = 0

P(x)\, y_2'' + Q(x)\, y_2' + R(x)\, y_2 = 0

Therefore, using the basic rules for differentiation, we have

P(x)\, y'' + Q(x)\, y' + R(x)\, y = P(x)(c_1 y_1 + c_2 y_2)'' + Q(x)(c_1 y_1 + c_2 y_2)' + R(x)(c_1 y_1 + c_2 y_2)

= P(x)(c_1 y_1'' + c_2 y_2'') + Q(x)(c_1 y_1' + c_2 y_2') + R(x)(c_1 y_1 + c_2 y_2)

= c_1\left[ P(x) y_1'' + Q(x) y_1' + R(x) y_1 \right] + c_2\left[ P(x) y_2'' + Q(x) y_2' + R(x) y_2 \right]

= c_1(0) + c_2(0) = 0

Thus, 푦 = 푐1푦1 + 푐2푦2 is a solution of Equation (124).

B. The second means of solving the equation says that the general solution is a linear combination of two linearly independent solutions 푦1 and 푦2. This means that neither 푦1 nor 푦2 is a constant multiple of the other. For instance, the functions f(x) = x² and g(x) = 5x² are linearly dependent, but f(x) = eˣ and g(x) = xeˣ are linearly independent. Therefore, if 푦1 and 푦2 are linearly independent solutions of Equation (124), and 푃(푥) is never 0, then the general solution is given by:


푦(푥) = 푐1푦1(푥) + 푐2푦2(푥) (126)

where 푐1 and 푐2 are arbitrary constants.

In general, it is not easy to discover solutions to a second-order linear differential equation. But it is always possible to do so if the coefficients 푃, 푄 and 푅 are constant functions, i.e., if the differential equation has the form

a y'' + b y' + c y = 0   (127)

where 푎, 푏 and 푐 are constants and 푎 ≠ 0.

We know that the exponential function y = e^{rx} (where r is a constant) has the property that its derivative is a constant multiple of itself, i.e., y' = r e^{rx}. Furthermore, y'' = r² e^{rx}. If we substitute these expressions into Equation (127), we get:

a r^2 e^{rx} + b r e^{rx} + c e^{rx} = 0

(a r^2 + b r + c)\, e^{rx} = 0

But e^{rx} is never 0. Therefore, y = e^{rx} is a solution of Equation (127) if r is a root of the equation

a r^2 + b r + c = 0   (128)

Equation (128) is called the auxiliary equation (or characteristic equation) of the differential equation ay'' + by' + cy = 0. Notice that it is an algebraic equation obtained from the differential equation by replacing y'' by r², y' by r and y by 1.

Sometimes the roots 푟1 and 푟2 of the auxiliary equation can be found by factoring. Sometimes they are found by using the quadratic formula:

r_1 = \frac{-b + \sqrt{b^2 - 4ac}}{2a}, \qquad r_2 = \frac{-b - \sqrt{b^2 - 4ac}}{2a}   (129)

From Equation (129), let’s look at the expression b² − 4ac.


Case A. If b² − 4ac > 0

In this case, the roots 푟1 and 푟2 of the auxiliary equation are real and distinct. If the roots 푟1 and 푟2 of the auxiliary equation ar² + br + c = 0 are real and unequal, then the general solution of ay'' + by' + cy = 0 is

y = c_1 e^{r_1 x} + c_2 e^{r_2 x}   (130)

Case B. If b² − 4ac = 0

In this case, 푟1 = 푟2, that is, the roots of the auxiliary equation are real and equal. If the auxiliary equation ar² + br + c = 0 has only one real root 푟, then the general solution of ay'' + by' + cy = 0 is

y = c_1 e^{rx} + c_2 x e^{rx}   (131)

Case C. If b² − 4ac < 0

In this case, the roots 푟1 and 푟2 of the auxiliary equation are complex numbers, and we can write

r_1 = \alpha + i\beta, \qquad r_2 = \alpha - i\beta

where 훼 and 훽 are real numbers. In fact, we can write:

\alpha = \frac{-b}{2a}, \qquad \beta = \frac{\sqrt{4ac - b^2}}{2a}

Then, using Euler’s equation

e^{i\theta} = \cos\theta + i\sin\theta

So, we can write the solution of the differential equation as

y = C_1 e^{r_1 x} + C_2 e^{r_2 x}

= C_1 e^{(\alpha + i\beta)x} + C_2 e^{(\alpha - i\beta)x}

= C_1 e^{\alpha x}(\cos\beta x + i\sin\beta x) + C_2 e^{\alpha x}(\cos\beta x - i\sin\beta x)

= e^{\alpha x}\left[ (C_1 + C_2)\cos\beta x + i(C_1 - C_2)\sin\beta x \right]

= e^{\alpha x}\left[ c_1 \cos\beta x + c_2 \sin\beta x \right]

where 푐1 = 퐶1 + 퐶2, 푐2 = 푖(퐶1 − 퐶2). This gives all solutions (real and complex) of the differential equation. The solution is real when the constants 푐1 and 푐2 are real.

Therefore, if the roots of the auxiliary equation ar² + br + c = 0 are the complex numbers r₁ = α + iβ, r₂ = α − iβ, then the general solution of ay'' + by' + cy = 0 is

y = e^{\alpha x}\left( c_1 \cos\beta x + c_2 \sin\beta x \right)   (132)
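The three cases can be reproduced with a computer algebra system. A short sketch, assuming sympy is available, with one auxiliary equation per case of the discriminant:

import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

cases = {
    'distinct real roots': y(x).diff(x, 2) - 3*y(x).diff(x) + 2*y(x),  # r = 1, 2
    'repeated real root':  y(x).diff(x, 2) + 2*y(x).diff(x) + y(x),    # r = -1
    'complex roots':       y(x).diff(x, 2) + 2*y(x).diff(x) + 5*y(x),  # r = -1 +/- 2i
}
for name, lhs in cases.items():
    print(name, sp.dsolve(sp.Eq(lhs, 0), y(x)))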

5.3 Initial-Value and Boundary-Value Problems

An initial-value problem for the second-order Equation (123) or Equation (124) consists of finding a solution 푦 of the differential equation that also satisfies initial conditions of the form

y(x_0) = y_0, \qquad y'(x_0) = y_1

where 푦0 and 푦1 are given constants. If 푃, 푄, 푅 and 퐺 are continuous on an interval and 푃(푥) ≠ 0 there, then the existence and uniqueness of a solution to this initial-value problem is guaranteed.

Example 5.2: Solve the initial-value problem

푦′′ + 푦′ − 6푦 = 0 푦(0) = 1 푦′(0) = 0

Solution: The auxiliary equation is then

r^2 + r - 6 = (r - 2)(r + 3) = 0

Therefore, the roots are 푟 = 2 and −3. So, the general solution (given by Equation (130)) is

y(x) = c_1 e^{2x} + c_2 e^{-3x}

Differentiating this equation, we get:

y'(x) = 2c_1 e^{2x} - 3c_2 e^{-3x}

To satisfy the initial conditions, we require that

푦(0) = 푐1 + 푐2 = 1


y'(0) = 2c_1 - 3c_2 = 0

Solving for 푐1 and 푐2, we get

c_1 = \frac{3}{5}, \qquad c_2 = \frac{2}{5}

Substituting these values, the solution of the initial-value problem is

y(x) = \frac{3}{5}e^{2x} + \frac{2}{5}e^{-3x}

Example 5.3: Solve the initial-value problem

푦′′ + 푦 = 0 푦(0) = 2 푦′(0) = 3

Solution: The auxiliary equation here is r² + 1 = 0, or r² = −1, whose roots are ±i. Thus 훼 = 0, 훽 = 1, and since e^{0x} = 1, the general solution is

푦 (푥) = 푐1 cos 푥 + 푐2 sin 푥 (133)

Differentiating Equation (133), we get

푦′ (푥) = −푐1 sin 푥 + 푐2 cos 푥

The initial conditions become

y(0) = c_1 = 2, \qquad y'(0) = c_2 = 3

Therefore, the solution of the initial-value problem is

푦 (푥) = 2 cos 푥 + 3 sin 푥

A boundary-value problem however consists of finding a solution y of the differential equation that also satisfies boundary conditions of the form

푦(푥0) = 푦0 푦(푥1) = 푦1


In contrast with the situation for initial-value problems, a boundary-value problem does not always have a solution.

Example 5.4: Solve the boundary-value problem

푦′′ + 2푦′ + 푦 = 0 푦(0) = 1 푦(1) = 3

Solution: The auxiliary equation is

r^2 + 2r + 1 = 0 \qquad \text{or} \qquad (r + 1)^2 = 0

whose only root is 푟 = −1. Therefore, the general solution is:

y(x) = c_1 e^{-x} + c_2 x e^{-x}

The boundary conditions are satisfied if

푦(0) = 푐1 = 1

y(1) = c_1 e^{-1} + c_2 e^{-1} = 3

The first condition gives 푐1 = 1, so the second condition becomes

e^{-1} + c_2 e^{-1} = 3

Solving this equation for 푐2 by first multiplying through by 푒, we get

1 + 푐2 = 3푒 so 푐2 = 3푒 − 1

Thus, the solution of the boundary-value problem is

y(x) = e^{-x} + (3e - 1)\, x e^{-x}


Summary:

Solutions of ay'' + by' + cy = 0 are as follows:

Roots of ar² + br + c = 0 | General solution

r₁, r₂ real and distinct | y = c₁e^{r₁x} + c₂e^{r₂x}

r₁ = r₂ = r | y = c₁e^{rx} + c₂xe^{rx}

r₁, r₂ complex: α ± iβ | y = e^{αx}(c₁ cos βx + c₂ sin βx)

5.4 Non-homogeneous linear differential equation

Remember from section 5.2 that the second-order nonhomogeneous linear differential equation with constant coefficients has the form

a y'' + b y' + c y = G(x)   (134)

where 푎, 푏 and 푐 are constants and 퐺 is a continuous function. The related homogeneous equation (Equation (127)) is also called the complementary equation and is important in solving the nonhomogeneous equation.

The general solution of the nonhomogeneous differential equation (Equation (134)) can be written as

푦(푥) = 푦푝(푥) + 푦푐(푥) (135)

where 푦푝 is a particular solution of Equation (134) and 푦푐 is the general solution of the complementary Equation (127).

Example 5.5: Solve the equation y'' + y' − 2y = x²

Solution: The auxiliary equation for 푦′′ + 푦′ − 2푦 = 0 is

r^2 + r - 2 = (r - 1)(r + 2) = 0

With roots 푟 = 1 and −2. So the solution of the complementary equation is


y_c = c_1 e^{x} + c_2 e^{-2x}

Since G(x) = x² is a polynomial of degree 2, we seek a particular solution of the form

y_p(x) = A x^2 + B x + C

Then

y_p' = 2Ax + B

y_p'' = 2A

Substituting these into the given differential equation, we get

(2A) + (2Ax + B) - 2(Ax^2 + Bx + C) = x^2

-2Ax^2 + (2A - 2B)x + (2A + B - 2C) = x^2

Polynomials are equal when their coefficients are equal. Thus

-2A = 1, \qquad 2A - 2B = 0, \qquad 2A + B - 2C = 0

The solution of this system of equations is

A = -\frac{1}{2}, \qquad B = -\frac{1}{2}, \qquad C = -\frac{3}{4}

A particular solution is therefore

y_p(x) = -\frac{1}{2}x^2 - \frac{1}{2}x - \frac{3}{4}

And the general solution, according to Equation (135), is

y = y_p + y_c = c_1 e^{x} + c_2 e^{-2x} - \frac{1}{2}x^2 - \frac{1}{2}x - \frac{3}{4}


Example 5.6: Solve y'' + 4y = e^{3x}

Solution: The auxiliary equation is r² + 4 = 0 with roots ±2i, so the solution of the complementary equation is

푦푐 = 푐1 cos 2푥 + 푐2 sin 2푥

For a particular solution, we try y_p(x) = Ae^{3x}. Then y_p'(x) = 3Ae^{3x} and y_p''(x) = 9Ae^{3x}. Substituting into the differential equation, we have

9Ae^{3x} + 4Ae^{3x} = e^{3x}

so 13Ae^{3x} = e^{3x}, and

A = \frac{1}{13}

Therefore,

y_p(x) = \frac{1}{13}e^{3x}

And the general solution is

y(x) = c_1 \cos 2x + c_2 \sin 2x + \frac{1}{13}e^{3x}
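As a quick check of Example 5.6 (a sketch, assuming sympy is available):

import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

sol = sp.dsolve(sp.Eq(y(x).diff(x, 2) + 4*y(x), sp.exp(3*x)), y(x))
print(sol)   # y(x) = C1*sin(2x) + C2*cos(2x) + exp(3x)/13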


6. Partial Differential Equations

6.1 Introduction to Differential Equations

Although we have introduced the ordinary differential equation in Chapter 5, let’s just recap and get a bit into the details of differential equations. A differential equation is an equation that relates the derivatives of a (scalar) function depending on one or more variables. For example,

\frac{d^4u}{dx^4} + \frac{d^2u}{dx^2} + u^3 = \cos x   (136)

is a differential equation for the function u(x) depending on a single variable x, while

\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} - u   (137)

is a differential equation involving a function u(t, x, y) of three variables.

A differential equation is called ordinary if the function u depends on only a single variable, and partial if it depends on more than one variable. The order of a differential equation is that of the highest-order derivative that appears in the equation. Thus, Equation (136) is a fourth-order ordinary differential equation (ODE) while Equation (137) is a second-order partial differential equation (PDE).

There are two common notations for partial derivatives, and we shall use them interchangeably. The first, used in Equation (136) and Equation (137), is the familiar Leibniz notation that employs a d to denote ordinary derivatives of a function of a single variable and the 휕 symbol (usually pronounced “dee”) for partial derivatives of functions of more than one variable. An alternative, more compact notation employs subscripts to indicate partial derivatives. For example, u_t represents ∂u/∂t, u_xx is used for ∂²u/∂x², and u_xxy for ∂³u/∂x²∂y. Thus, in subscript notation, the partial differential equation (137) is written as:

푢푡 = 푢푥푥 + 푢푦푦 − 푢 (138)

6.2 Initial Conditions and Boundary Conditions

How many solutions does a partial differential equation have? In general, lots! The solutions to dynamical ordinary differential equations are singled out by the imposition of initial conditions, resulting in an initial value problem. On the other hand, equations modelling equilibrium phenomena require boundary conditions to specify their solutions uniquely, resulting in a boundary-value problem.

For partial differential equations modeling dynamic processes, the number of initial conditions required depends on the highest-order time derivative that appears in the equation. On bounded domains, one must also impose suitable boundary conditions in order to uniquely characterise the solution and hence the subsequent dynamical behavior of the physical system. The combination of the partial differential equation, the initial conditions, and the boundary conditions leads to an initial-boundary value problem. We will encounter and solve many important examples of such problems throughout this section.

6.3 Linear and Nonlinear Equations

Linearity means that all instances of the unknown and its derivatives enter the equation linearly. We can use the concept of a linear operator 퓛. Such an operator is assembled by summing the basic differential operators, with either constant coefficients or, more generally, coefficients depending on the independent variables. A linear differential equation has the form:

퓛[푢] = 0 (139)

For example, if 퓛 = ∂²/∂x² + 1, then 퓛[u] = u_xx + u.

The operator 퓛 is called linear if

퓛(u + v) = 퓛u + 퓛v \quad \text{and} \quad 퓛(cu) = c퓛u   (140)

for any functions u, v and any constant c.

Example 6.1: Is the heat equation 푢푡 − 푢푥푥 = 0 linear or non-linear?

Solution:

퓛(푢 + 푣) = (푢 + 푣)푡 − (푢 + 푣)푥푥 = 푢푡 + 푣푡 − 푢푥푥 − 푣푥푥 = (푢푡 − 푢푥푥) + (푣푡 − 푣푥푥) = 퓛푢 + 퓛푣

And

퓛(푐푢) = (푐푢)푡 − (푐푢)푥푥 = 푐푢푡 − 푐푢푥푥 = 푐(푢푡 − 푢푥푥) = 푐퓛푢.

Therefore, the heat equation is a linear equation, since it is given by a linear operator.


Example 6.2: Is the Burger’s equation 푢푡 + 푢푢푥 = 0 linear or non-linear?

Solution:

퓛(푢 + 푣) = (푢 + 푣)푡 + (푢 + 푣)(푢 + 푣)푥 = 푢푡 + 푣푡 + (푢 + 푣)(푢푥 + 푣푥)

= (푢푡 + 푢푢푥) + (푣푡 + 푣푣푥) + 푢푣푥 + 푣푢푥 ≠ 퓛푢 + 퓛푣

Therefore, the Burger’s equation is a non-linear differential equation.

Equation (139) is also called a homogeneous linear PDE, while Equation (141) below:

퓛[푢] = 푔(푥, 푦)   (141)

is called an inhomogeneous linear equation. If u_h is a solution to the homogeneous Equation (139), and u_p is a particular solution to the inhomogeneous Equation (141), then u_h + u_p is also a solution to the inhomogeneous Equation (141). Indeed,

퓛(푢ℎ + 푢푝) = 퓛푢ℎ + 퓛푢푝 = 0 + 푔 = 푔

Therefore, in order to find the general solution to the inhomogeneous Equation (141), it is enough to find the general solution of the homogeneous Equation (139), and add to this a particular solution of the inhomogeneous equation (check that the difference of any two solutions of the inhomogeneous equation is a solution of the homogeneous equation). In this sense, there is similarity between ODEs and PDEs, since this principle relies only on the linearity of the operator 퓛.

Notice that where the solution of an ODE contains arbitrary constants, the solution to a PDE contains arbitrary functions.

The potential degree of non-linearity embedded in a first-order PDE leads to the following classification:

Linear, constant coefficient: 푎, 푏, 푐 are constant
Linear: 푎, 푏, 푐 are functions of x and y only
Semi-linear: 푎, 푏 are functions of x and y; 푐 may depend on u
Quasi-linear: 푎, 푏, 푐 are functions of x, y and u
Non-linear: the derivatives carry exponents, e.g. (u_x)², or derivative cross-terms exist, e.g. u_x u_y


Let’s assume a first-order PDE in the form:

a(x, y)\frac{\partial u(x, y)}{\partial x} + b(x, y)\frac{\partial u(x, y)}{\partial y} = c(x, y, u(x, y))   (142)

Hence, Equation (142) represents a semi-linear PDE, because it permits mild non-linearity in the source term c(x, y, u(x, y)).

6.4 Examples of PDEs

Some examples of PDEs of physical significance are listed below:

u_x + u_y = 0 \quad Transport equation   (143)

u_t + u u_x - \nu u_{xx} = 0 \quad Viscous Burger’s equation   (144)

u_t + u u_x = 0 \quad Inviscid Burger’s equation   (145)

u_{xx} + u_{yy} = 0 \quad Laplace’s equation   (146)

u_{tt} - u_{xx} = 0 \quad Wave equation   (147)

u_t - u_{xx} = 0 \quad Heat equation   (148)

u_t + u u_x + u_{xxx} = 0 \quad Korteweg–de Vries equation   (149)

6.5 Three types of Second-Order PDEs

The classification theory of real linear second-order PDEs for a scalar-valued function 푢(푡, 푥) depending on two variables proceeds as follows. The most general such equation has the form

\mathcal{L}[u] = A u_{tt} + B u_{tx} + C u_{xx} + D u_t + E u_x + F u = G   (150)

where the coefficients 퐴, 퐵, 퐶, 퐷, 퐸, 퐹 are all allowed to be functions of (푡, 푥), as is the inhomogeneity or forcing function 퐺(푡, 푥). The equation is homogeneous if and only if 퐺 ≡ 0. We assume that at least one of the leading coefficients 퐴, 퐵, 퐶 is not identically zero, since otherwise the equation degenerates to a first-order equation. The key quantity that determines the type of such a PDE is its discriminant:


\Delta = B^2 - 4AC   (151)

This should (and for good reason) remind you of the discriminant of the quadratic equation

Q(x, y) = Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0   (152)

Therefore, at a point (푡, 푥), the linear second-order PDE Equation (150) is called:

i. Hyperbolic, if Δ(t, x) > 0
ii. Parabolic, if Δ(t, x) = 0 but A² + B² + C² ≠ 0
iii. Elliptic, if Δ(t, x) < 0

In particular:

• The wave equation (Equation (147)), u_tt − u_xx = 0, has discriminant Δ = 4, and is hyperbolic.
• The heat equation (Equation (148)), u_xx − u_t = 0, has discriminant Δ = 0, and is parabolic.
• The Laplace equation (Equation (146)), u_xx + u_yy = 0, has discriminant Δ = −4, and is elliptic.

Example 6.3: The Tricomi equation from the theory of supersonic aerodynamics is written as:

x\frac{\partial^2 u}{\partial t^2} - \frac{\partial^2 u}{\partial x^2} = 0

Comparing the equation above to Equation (150), we find that

퐴 = 푥, 퐵 = 0, 퐶 = −1 while 퐷 = 퐸 = 퐹 = 퐺 = 0

The discriminant in this particular case is:

\Delta = B^2 - 4AC = 4x

Hence, the equation is hyperbolic when 푥 > 0, elliptic when 푥 < 0, and parabolic on the transition line 푥 = 0. In this physical model, the hyperbolic region corresponds to supersonic flow, while the subsonic region is of elliptic type. The transitional parabolic boundary represents the shock line between the sub- and supersonic regions – the familiar sonic boom as an airplane crosses the sound barrier.

6.6 Solving PDEs using Separation of Variables Method

The separation of variables method is used for solving key PDEs in their two-independent-variables incarnations. For the wave and heat equations (Equations (147) and (148), respectively), the variables are time, t, and a single space coordinate, x, leading to initial-boundary value problems modelling the dynamic behavior of a one-dimensional medium. For the Laplace equation (Equation (146)), the variables represent space coordinates, 푥 and 푦, and the associated boundary value problems model the equilibrium configuration of a planar body, e.g., the deformation of a membrane.

In order to use the separation of variables method, we must be working with a linear homogeneous PDE with linear homogeneous boundary conditions. The method relies upon the assumption that a function of the form

u(x, t) = \varphi(x)\, G(t)   (153)

will be a solution to a linear homogeneous PDE in 푥 and 푡. This is called a product solution and, provided the boundary conditions are also linear and homogeneous, it will also satisfy the boundary conditions.

6.6.1 The Heat Equation

Let’s start with the one-dimensional heat equation:

\frac{\partial u}{\partial t} = k \frac{\partial^2 u}{\partial x^2}   (154)

Let the initial and boundary conditions be:

푢(푥, 0) = 푓(푥) 푢(0, 푡) = 0 푢(퐿, 푡) = 0

So, we have the heat equation with fixed boundary conditions (that are also homogeneous) and an initial condition. The separation of variables method tells us to assume that the solution will take the form of the product (Equation (153)),

푢(푥, 푡) = 휑(푥)퐺(푡)

Substituting Equation (153) into Equation (154), we obtain

\frac{\partial}{\partial t}\big( \varphi(x) G(t) \big) = k \frac{\partial^2}{\partial x^2}\big( \varphi(x) G(t) \big)

\varphi(x)\frac{dG}{dt} = k\, G(t)\frac{d^2\varphi}{dx^2}


Therefore, we can factor the 휑(푥) out of the time derivative and similarly we can factor 퐺(푡) out of the spatial derivative. Also note that after we have factored these out, we no longer have partial derivatives left in the problem. In the time derivative, we are only differentiating 퐺(푡) with respect to 푡, and this is now an ordinary derivative. Likewise, in the spatial derivative, we are now only differentiating 휑(푥) with respect to 푥, so again we have an ordinary derivative.

Now, to solve the equation, we want to get all the 푡’s on one side of the equation and all the 푥’s on the other side. In other words, we want to “separate the variables”. In this case, we can just divide both sides by 휑(푥)퐺(푡), but this is not always the case. So, dividing gives us:

\frac{1}{G}\frac{dG}{dt} = k\frac{1}{\varphi}\frac{d^2\varphi}{dx^2} \quad \Longrightarrow \quad \frac{1}{kG}\frac{dG}{dt} = \frac{1}{\varphi}\frac{d^2\varphi}{dx^2}

Let’s pause here for a bit. How is it possible that a function of 푡 only can be equal to a function of 푥 only, regardless of the choice of 푡 and/or 푥? There is only one way this can be true: if both sides of the equation are in fact equal to the same constant. So, we must have

\frac{1}{kG}\frac{dG}{dt} = \frac{1}{\varphi}\frac{d^2\varphi}{dx^2} = -\lambda   (155)

where −휆 is called the separation constant and is arbitrary.

The next step is to acknowledge that we can take Equation (155) and split it into the following two ordinary differential equations.

\frac{dG}{dt} = -k\lambda G, \qquad \frac{d^2\varphi}{dx^2} = -\lambda\varphi

Both of these are very simple differential equations. However, since we do not know what 휆 is, we can’t solve them yet.

The last step in the process is to make sure our product solution (Equation (153)) satisfies the boundary conditions, so let’s substitute it into both of them:

푢(0, 푡) = 휑(0)퐺(푡) = 0 푢(퐿, 푡) = 휑(퐿)퐺(푡) = 0

Let’s consider the first one. We have two options. Either 휑(0) = 0 or 퐺(푡) = 0 for every 푡. However, if we have 퐺(푡) = 0 for every 푡, then we will also have 푢(푥, 푡) = 0. Instead, let’s assume that we must have


휑(0) = 0. Likewise, from the second boundary condition, we will get 휑(퐿) = 0 to avoid having a trivial solution.

Now, let’s try and solve the problem. Note the general solution of the spatial differential equation for the three cases of 휆:

• Case (i), 휆 > 0: 휑(푥) = 푐1 cos(√휆푥) + 푐2 sin(√휆푥)
• Case (ii), 휆 = 0: 휑(푥) = 푎 + 푏푥, 퐺(푡) = 푐
• Case (iii), 휆 < 0: gives only the trivial solution satisfying the PDE and boundary conditions, as shown below.

Let’s look at case (i), 휆 > 0

We now know that the solution to the differential equation is

휑(푥) = 푐1 cos(√휆푥) + 푐2 sin(√휆푥)

Applying the first boundary condition gives:

0 = 휑(0) = 푐1

Now, applying the second boundary condition, and using the above result gives:

0 = 휑(퐿) = 푐2 sin(퐿√휆)

Now we are after non-trivial solutions and therefore we must have:

sin(퐿√휆) = 0

퐿√휆 = 푛휋 푛 = 1, 2, 3, ….

The positive eigenvalues and their corresponding eigenfunctions of this boundary problem are:

\lambda_n = \left( \frac{n\pi}{L} \right)^2, \qquad \varphi_n(x) = \sin\left( \frac{n\pi x}{L} \right), \qquad n = 1, 2, 3, \ldots

Let’s look at case (ii), 휆 = 0

The solution to the differential equation is:


휑(푥) = 푐1 + 푐2푥

Applying the boundary conditions, we get

0 = 휑(0) = 푐1

0 = 휑(퐿) = 푐2퐿 ⟹ 푐2 = 0

So, in this case, the only solution is the trivial solution, so 휆 = 0 is not an eigenvalue for this boundary value problem.

Let’s look at case (iii), 휆 < 0

Here, the solution to the differential equation is

휑(푥) = 푐1 cosh(√−휆푥) + 푐2 sinh(√−휆푥)

Applying the first boundary condition gives:

0 = 휑(0) = 푐1

Now, applying the second boundary condition gives:

0 = 휑(퐿) = 푐2 sinh(퐿√−휆)

So, we are assuming 휆 < 0 and so 퐿√−휆 ≠ 0 and this means that sinh(퐿√−휆) ≠ 0. Therefore, we must have 푐2 = 0 and again, we can only get the trivial solution in this case.

Therefore, there will be no negative eigenvalues for this boundary value problem.

Hence, the complete list of eigenvalues and eigenfunctions for this problem are:

\lambda_n = \left( \frac{n\pi}{L} \right)^2, \qquad \varphi_n(x) = \sin\left( \frac{n\pi x}{L} \right), \qquad n = 1, 2, 3, \ldots

Now, let’s solve the time differential equation,

\frac{dG}{dt} = -k\lambda_n G


This is a simple linear first order differential equation and therefore the solution is:

G(t) = c\, e^{-k\lambda_n t} = c\, e^{-k(n\pi/L)^2 t}

Now that we have solved both ordinary differential equations, we can finally write down a solution. The product solution is therefore

u_n(x, t) = B_n \sin\left( \frac{n\pi x}{L} \right) e^{-k(n\pi/L)^2 t}, \qquad n = 1, 2, 3, \ldots

Please note that we have denoted the product solution by 푢푛 to acknowledge that each value of 푛 results in a different solution. Also note that we’ve changed 푐 to 퐵푛 to denote that it may be different for each value of 푛 as well.
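For a concrete initial condition f(x), the full solution is a superposition of these product solutions, with the coefficients B_n computed as Fourier sine coefficients of f (as done in Example 6.4 below). The following sketch, assuming numpy is available, evaluates such a truncated series; the choice f(x) = x(1 − x) with L = k = 1 and 50 terms is purely illustrative:

import numpy as np

def heat_series(f, L=1.0, k=1.0, n_terms=50):
    xq = np.linspace(0.0, L, 2001)          # quadrature grid for the B_n
    n = np.arange(1, n_terms + 1)
    B = np.array([2.0 / L * np.trapz(f(xq) * np.sin(m * np.pi * xq / L), xq)
                  for m in n])

    def u(x, t):
        modes = np.sin(np.outer(x, n) * np.pi / L) * np.exp(-k * (n * np.pi / L)**2 * t)
        return modes @ B
    return u

u = heat_series(lambda x: x * (1.0 - x))
print(u(np.array([0.25, 0.5, 0.75]), 0.0))   # approximately f at t = 0
print(u(np.array([0.25, 0.5, 0.75]), 0.5))   # decays towards zero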

Example 6.4: Solve the initial-boundary value problem

푢푡 = 푢푥푥 0 < 푥 < 2, 푡 > 0

푢(푥, 0) = 푥2 − 푥 + 1 0 ≤ 푥 ≤ 2

푢(0, 푡) = 1, 푢(2, 푡) = 3 푡 > 0

Find lim푡→+∞푢(푥, 푡).

Solution:

First, we need to obtain a function 푣 that satisfies 푣푡 = 푣푥푥 and takes 0 boundary conditions. So, let

푣(푥, 푡) = 푢(푥, 푡) + (푎푥 + 푏) (156) where 푎 and 푏 are constants to be determined. Then,

v_t = u_t

v_{xx} = u_{xx}

Thus,

v_t = v_{xx}


We need Equation (156) to take zero boundary conditions for 푣(0, 푡) and 푣(2, 푡):

푣(0, 푡) = 0 = 푢(0, 푡) + 푏 = 1 + 푏 ⟹ 푏 = −1

푣(2, 푡) = 0 = 푢(2, 푡) + 2푎 − 1 = 2푎 + 2 ⟹ 푎 = −1

Therefore, Equation (156) becomes

푣(푥, 푡) = 푢(푥, 푡) − 푥 − 1 (157)

The new problem now is

푣푡 = 푣푥푥

푣(푥, 0) = (푥2 − 푥 + 1) − 푥 − 1 = 푥2 − 2푥

푣(0, 푡) = 푣(2, 푡) = 0

Let’s solve the problem for 푣 using separation of variables method.

Let 푣(푥, 푡) = 휑(푥)퐺(푡)

Which gives (Equation (155)):

\frac{1}{G}\frac{dG}{dt} = \frac{1}{\varphi}\frac{d^2\varphi}{dx^2} = -\lambda

From d²φ/dx² + λφ = 0,

We get,

휑푛(푥) = 푎푛 cos(√휆푥) + 푏푛 sin(√휆푥)

Using boundary conditions, we have

푣(0, 푡) = 휑(0)퐺(푡) = 0 푣(2, 푡) = 휑(2)퐺(푡) = 0


Therefore, 휑(0) = 휑(2) = 0

Hence,

휑푛(0) = 푎푛 = 0

휑푛(푥) = 푏푛 sin(√휆푥)

\varphi_n(2) = b_n \sin(2\sqrt{\lambda}) = 0 \;\Longrightarrow\; 2\sqrt{\lambda} = n\pi \;\Longrightarrow\; \lambda_n = \left( \frac{n\pi}{2} \right)^2

Therefore,

\varphi_n(x) = b_n \sin\frac{n\pi x}{2}, \qquad \lambda_n = \left( \frac{n\pi}{2} \right)^2

With these values of 휆푛, we solve

\frac{dG}{dt} + \lambda G = 0

which can be written as

\frac{dG}{dt} + \left( \frac{n\pi}{2} \right)^2 G = 0

And we get:

G_n(t) = c_n e^{-(n\pi/2)^2 t}

Therefore,

v(x, t) = \sum_{n=1}^{\infty} \varphi_n(x) G_n(t) = \sum_{n=1}^{\infty} \tilde{c}_n\, e^{-(n\pi/2)^2 t} \sin\frac{n\pi x}{2}

Coefficients 푐푛̃ are obtained using the initial condition:

v(x, 0) = \sum_{n=1}^{\infty} \tilde{c}_n \sin\frac{n\pi x}{2} = x^2 - 2x


\tilde{c}_n = \int_0^2 (x^2 - 2x)\sin\frac{n\pi x}{2}\, dx = \begin{cases} 0 & n \text{ even} \\ -\dfrac{32}{(n\pi)^3} & n \text{ odd} \end{cases}

Therefore,

v(x, t) = \sum_{n\ \mathrm{odd}} \left( -\frac{32}{(n\pi)^3} \right) e^{-(n\pi/2)^2 t} \sin\frac{n\pi x}{2}

We now use Equation (157) to convert back to function 푢:

푢(푥, 푡) = 푣(푥, 푡) + 푥 + 1

u(x, t) = \sum_{n\ \mathrm{odd}} \left( -\frac{32}{(n\pi)^3} \right) e^{-(n\pi/2)^2 t} \sin\frac{n\pi x}{2} + x + 1

And finally,

\lim_{t\to+\infty} u(x, t) = x + 1
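Evaluating the truncated series numerically confirms both the initial condition and the long-time limit (a sketch, assuming numpy is available):

import numpy as np

def u(x, t, n_terms=199):
    n = np.arange(1, n_terms + 1, 2)               # odd n only
    terms = -32.0 / (n * np.pi)**3 * np.exp(-(n * np.pi / 2)**2 * t)
    return np.sin(np.outer(x, n) * np.pi / 2) @ terms + x + 1

x = np.array([0.5, 1.0, 1.5])
print(u(x, 0.0))    # approximately x^2 - x + 1 = [0.75, 1.0, 1.75]
print(u(x, 10.0))   # approximately x + 1, the limit as t -> infinity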

6.6.2 The Wave Equation

Let’s start with a wave equation as follows:

\frac{\partial^2 u}{\partial t^2} = c^2 \frac{\partial^2 u}{\partial x^2}   (158)

The initial and boundary conditions are as follows:

u(x, 0) = f(x), \qquad \frac{\partial u}{\partial t}(x, 0) = g(x)

푢(0, 푡) = 0 푢 (퐿, 푡) = 0

One of the main differences is now we have two initial conditions. So, let’s start with the product solution:


푢(푥, 푡) = 휑(푥)ℎ(푡)

Substituting the two boundary conditions gives:

휑(0) = 0 휑(퐿) = 0

Substituting the product solution into the differential equation (Equation (158)), separating, and introducing a separation constant gives:

\frac{\partial^2}{\partial t^2}\big( \varphi(x) h(t) \big) = c^2 \frac{\partial^2}{\partial x^2}\big( \varphi(x) h(t) \big)

\varphi(x)\frac{d^2h}{dt^2} = c^2 h(t)\frac{d^2\varphi}{dx^2}

\frac{1}{c^2 h}\frac{d^2h}{dt^2} = \frac{1}{\varphi}\frac{d^2\varphi}{dx^2} = -\lambda

We moved the 푐2 to the left side for convenience and chose −휆 for the separation constant so the differential equation for 휑 would match a known (and solved) case.

The two ordinary differential equations we get from separation of variables methods are:

\frac{d^2h}{dt^2} + c^2\lambda h = 0, \qquad \frac{d^2\varphi}{dx^2} + \lambda\varphi = 0

휑(0) = 0 휑(퐿) = 0

We solved this boundary value problem when solving the heat equation in section 6.6.1, so the eigenvalues and eigenfunctions for this problem are:

\lambda_n = \left( \frac{n\pi}{L} \right)^2, \qquad \varphi_n(x) = \sin\left( \frac{n\pi x}{L} \right), \qquad n = 1, 2, 3, \ldots

The first ordinary differential equation is now

\frac{d^2h}{dt^2} + \left( \frac{n\pi c}{L} \right)^2 h = 0

and because the coefficient of ℎ is clearly positive, the solution is


h(t) = c_1 \cos\left( \frac{n\pi c t}{L} \right) + c_2 \sin\left( \frac{n\pi c t}{L} \right)

Since there is no reason to think that either of the coefficients above are zero, we then get two product solutions,

u_n(x, t) = A_n \cos\left( \frac{n\pi c t}{L} \right) \sin\left( \frac{n\pi x}{L} \right)

u_n(x, t) = B_n \sin\left( \frac{n\pi c t}{L} \right) \sin\left( \frac{n\pi x}{L} \right), \qquad n = 1, 2, 3, \ldots

The solution is then,

u(x, t) = \sum_{n=1}^{\infty} \left[ A_n \cos\left( \frac{n\pi c t}{L} \right) \sin\left( \frac{n\pi x}{L} \right) + B_n \sin\left( \frac{n\pi c t}{L} \right) \sin\left( \frac{n\pi x}{L} \right) \right]

Now, in order to apply the second initial condition, we’ll need to differentiate this with respect to 푡, so

\frac{\partial u}{\partial t} = \sum_{n=1}^{\infty} \left[ -\frac{n\pi c}{L} A_n \sin\left( \frac{n\pi c t}{L} \right) \sin\left( \frac{n\pi x}{L} \right) + \frac{n\pi c}{L} B_n \cos\left( \frac{n\pi c t}{L} \right) \sin\left( \frac{n\pi x}{L} \right) \right]

If we now apply the initial conditions, we get,

u(x, 0) = f(x) = \sum_{n=1}^{\infty} \left[ A_n \cos(0) \sin\left( \frac{n\pi x}{L} \right) + B_n \sin(0) \sin\left( \frac{n\pi x}{L} \right) \right] = \sum_{n=1}^{\infty} A_n \sin\left( \frac{n\pi x}{L} \right)

\frac{\partial u}{\partial t}(x, 0) = g(x) = \sum_{n=1}^{\infty} \frac{n\pi c}{L} B_n \sin\left( \frac{n\pi x}{L} \right)

Both of these are Fourier sine series: the first is for 푓(푥) on 0 ≤ 푥 ≤ 퐿, while the second is for 푔(푥) on 0 ≤ 푥 ≤ 퐿 with a slightly messy coefficient. Using the formula for Fourier sine series coefficients, we get

A_n = \frac{2}{L} \int_0^L f(x) \sin\left( \frac{n\pi x}{L} \right) dx, \qquad n = 1, 2, 3, \ldots

\frac{n\pi c}{L} B_n = \frac{2}{L} \int_0^L g(x) \sin\left( \frac{n\pi x}{L} \right) dx, \qquad n = 1, 2, 3, \ldots


Upon solving, we get:

A_n = \frac{2}{L} \int_0^L f(x) \sin\left( \frac{n\pi x}{L} \right) dx, \qquad n = 1, 2, 3, \ldots

B_n = \frac{2}{n\pi c} \int_0^L g(x) \sin\left( \frac{n\pi x}{L} \right) dx, \qquad n = 1, 2, 3, \ldots
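A truncated version of this solution is straightforward to evaluate numerically. The sketch below, assuming numpy is available, computes A_n and B_n by quadrature for an illustrative plucked-string initial condition f(x) = x(1 − x) released from rest (g = 0); these choices are for demonstration only:

import numpy as np

def wave_series(f, g, L=1.0, c=1.0, n_terms=50):
    xq = np.linspace(0.0, L, 2001)
    n = np.arange(1, n_terms + 1)
    S = np.sin(np.outer(n, xq) * np.pi / L)
    A = 2.0 / L * np.trapz(f(xq) * S, xq, axis=1)
    B = 2.0 / (n * np.pi * c) * np.trapz(g(xq) * S, xq, axis=1)

    def u(x, t):
        w = n * np.pi * c / L
        Sx = np.sin(np.outer(x, n) * np.pi / L)
        return Sx @ (A * np.cos(w * t) + B * np.sin(w * t))
    return u

u = wave_series(lambda x: x * (1.0 - x), lambda x: np.zeros_like(x))
print(u(np.array([0.5]), 0.0))   # approximately f(0.5) = 0.25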
