
Mathematical Methods in Engineering and Science [http://home.iitk.ac.in/~dasgupta/MathCourse]

Bhaskar Dasgupta [email protected]

An Applied Mathematics course for graduate and senior undergraduate students and also for rising researchers.

Textbook: Dasgupta B., App. Math. Meth. (Pearson Education, 2006, 2007). http://home.iitk.ac.in/~dasgupta/MathBook

Contents I

Preliminary Background

Matrices and Linear Transformations

Operational Fundamentals of Linear Algebra

Systems of Linear Equations

Gauss Elimination Family of Methods

Special Systems and Special Methods

Numerical Aspects in Linear Systems

Contents II

Eigenvalues and Eigenvectors

Diagonalization and Similarity Transformations

Jacobi and Givens Rotation Methods

Householder Transformation and Tridiagonal Matrices

QR Decomposition Method

Eigenvalue Problem of General Matrices

Singular Value Decomposition

Vector Spaces: Fundamental Concepts*

Contents III

Topics in Multivariate Calculus

Vector Analysis: Curves and Surfaces

Scalar and Vector Fields

Polynomial Equations

Solution of Nonlinear Equations and Systems

Optimization: Introduction

Multivariate Optimization

Methods of Nonlinear Optimization*

Contents IV

Constrained Optimization

Linear and Quadratic Programming Problems*

Interpolation and Approximation

Basic Methods of Numerical Integration

Advanced Topics in Numerical Integration*

Numerical Solution of Ordinary Differential Equations

ODE Solutions: Advanced Issues

Existence and Uniqueness Theory

Contents V

First Order Ordinary Differential Equations

Second Order Linear Homogeneous ODE’s

Second Order Linear Non-Homogeneous ODE’s

Higher Order Linear ODE’s

Laplace Transforms

ODE Systems

Stability of Dynamic Systems

Series Solutions and Special Functions

Contents VI

Sturm-Liouville Theory

Fourier Series and Integrals

Fourier Transforms

Minimax Approximation*

Partial Differential Equations

Analytic Functions

Integrals in the Complex Plane

Singularities of Complex Functions

Contents VII

Variational Calculus*

Epilogue

Selected References

Preliminary Background

Outline: Theme of the Course; Course Contents; Sources for More Detailed Study; Logistic Strategy; Expected Background

Theme of the Course

To develop a firm mathematical background necessary for graduate studies and research
◮ a fast-paced recapitulation of UG mathematics
◮ extension with supplementary advanced ideas for a mature and forward orientation
◮ exposure and highlighting of interconnections

To pre-empt the needs of future challenges
◮ trade-off between the sufficient and the reasonable
◮ target the mid-spectrum majority of students

Notable beneficiaries (at two ends)
◮ would-be researchers in analytical/computational areas
◮ students who have so far been somewhat afraid of mathematics

Course Contents

◮ Applied linear algebra
◮ Multivariate calculus and vector calculus
◮ Numerical methods
◮ Differential equations
◮ Complex analysis

Sources for More Detailed Study

If you have the time, need and interest, then you may consult
◮ individual books on individual topics;
◮ another “umbrella” volume, like Kreyszig, McQuarrie, O’Neil or Wylie and Barrett;
◮ a good book on numerical analysis or scientific computing, like Acton, Heath, Hildebrand, Krishnamurthy and Sen, Press et al, Stoer and Bulirsch;
◮ friends, in joint-study groups.

Logistic Strategy

◮ Study in the given sequence, to the extent possible.
◮ Do not merely read mathematics: use lots of pen and paper. Read “mathematics books” and do mathematics.
◮ Exercises are a must.
◮ Use as many methods as you can think of, certainly including the one which is recommended.
◮ Consult the Appendix after you work out the solution. Follow the comments, interpretations and suggested extensions.
◮ Think. Get excited. Discuss. Bore everybody in your known circles.
◮ Not enough time to attempt all? Want a selection?
◮ Program implementation is needed in algorithmic exercises.
◮ Master a programming environment.
◮ Use mathematical/numerical libraries/software. Take a MATLAB tutorial session?

Tutorial Plan

Chapter  Selection        Tutorial  |  Chapter  Selection          Tutorial
2        2,3              3         |  26       1,2,4,6            4
3        2,4,5,6          4,5       |  27       1,2,3,4            3,4
4        1,2,4,5,7        4,5       |  28       2,5,6              6
5        1,4,5            4         |  29       1,2,5,6            6
6        1,2,4,7          4         |  30       1,2,3,4,5          4
7        1,2,3,4          2         |  31       1,2                1(d)
8        1,2,3,4,6        4         |  32       1,3,5,7            7
9        1,2,4            4         |  33       1,2,3,7,8          8
10       2,3,4            4         |  34       1,3,5,6            5
11       2,4,5            5         |  35       1,3,4              3
12       1,3              3         |  36       1,2,4              4
13       1,2              1         |  37       1                  1(c)
14       2,4,5,6,7        4         |  38       1,2,3,4,5          5
15       6,7              7         |  39       2,3,4,5            4
16       2,3,4,8          8         |  40       1,2,4,5            4
17       1,2,3,6          6         |  41       1,3,6,8            8
18       1,2,3,6,7        3         |  42       1,3,6              6
19       1,3,4,6          6         |  43       2,3,4              3
20       1,2,3            2         |  44       1,2,4,7,9,10       7,10
21       1,2,5,7,8        7         |  45       1,2,3,4,7,9        4,9
22       1,2,3,4,5,6      3,4       |  46       1,2,5,7            7
23       1,2,3            3         |  47       1,2,3,5,8,9,10     9,10
24       1,2,3,4,5,6      1         |  48       1,2,4,5            5
25       1,2,3,4,5        5         |

Expected Background

◮ moderate background of undergraduate mathematics
◮ firm understanding of school mathematics and undergraduate calculus

Take the preliminary test. [p 3, App. Math. Meth.]

Grade yourself sincerely. [p 4, App. Math. Meth.]

Prerequisite Problem Sets* [p 4–8, App. Math. Meth.]

Points to note

◮ Put in effort, keep pace.
◮ Stress concepts as well as problem-solving.
◮ Follow methods diligently.
◮ Ensure background skills.

Necessary Exercises: Prerequisite problem sets

Matrices and Linear Transformations

Outline: Matrices; Geometry and Algebra; Linear Transformations; Matrix Terminology

Matrices

Question: What is a “matrix”?
Answers:
◮ a rectangular array of numbers/elements?
◮ a mapping $f : M \times N \to F$, where $M = \{1, 2, 3, \cdots, m\}$, $N = \{1, 2, 3, \cdots, n\}$ and $F$ is the set of real or complex numbers?

Question: What does a matrix do?
Explore: With an $m \times n$ matrix $A$,
$$\begin{aligned} y_1 &= a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n \\ y_2 &= a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n \\ &\ \ \vdots \\ y_m &= a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n \end{aligned} \qquad \text{or} \qquad Ax = y.$$

Consider these definitions:
◮ $y = f(x)$
◮ $y = f(\mathbf{x}) = f(x_1, x_2, \cdots, x_n)$
◮ $y_k = f_k(\mathbf{x}) = f_k(x_1, x_2, \cdots, x_n),\ k = 1, 2, \cdots, m$
◮ $\mathbf{y} = \mathbf{f}(\mathbf{x})$
◮ $y = Ax$
Further Answer: A matrix is the definition of a linear vector function of a vector variable.
Anything deeper?

Caution: Matrices do not define vector functions whose components are of the form
$$y_k = a_{k0} + a_{k1}x_1 + a_{k2}x_2 + \cdots + a_{kn}x_n.$$

Geometry and Algebra

Let vector $\mathbf{x} = [x_1\ x_2\ x_3]^T$ denote a point $(x_1, x_2, x_3)$ in 3-dimensional space in frame of reference $OX_1X_2X_3$.
Example: With $m = 2$ and $n = 3$,

$$y_1 = a_{11}x_1 + a_{12}x_2 + a_{13}x_3, \qquad y_2 = a_{21}x_1 + a_{22}x_2 + a_{23}x_3.$$

Plot $y_1$ and $y_2$ in the $OY_1Y_2$ plane.

$$A : \mathbb{R}^3 \to \mathbb{R}^2$$

[Figure: Linear transformation: schematic illustration. A point $\mathbf{x}$ in the domain $OX_1X_2X_3$ maps to $\mathbf{y}$ in the co-domain $OY_1Y_2$.]

What is matrix $A$ doing?

Operating on point $\mathbf{x}$ in $\mathbb{R}^3$, matrix $A$ transforms it to $\mathbf{y}$ in $\mathbb{R}^2$.

Point y is the image of point x under the mapping defined by matrix A.

Note domain $\mathbb{R}^3$ and co-domain $\mathbb{R}^2$ with reference to the figure, and verify that $A : \mathbb{R}^3 \to \mathbb{R}^2$ fulfils the requirements of a mapping, by definition.

A matrix gives a definition of a linear transformation from one vector space to another.

Linear Transformations

Operate $A$ on a large number of points $\mathbf{x}_i \in \mathbb{R}^3$. Obtain corresponding images $\mathbf{y}_i \in \mathbb{R}^2$.
The linear transformation represented by $A$ implies the totality of these correspondences.

We decide to use a different frame of reference $OX_1'X_2'X_3'$ for $\mathbb{R}^3$. [And, possibly $OY_1'Y_2'$ for $\mathbb{R}^2$ at the same time.]

Coordinates change, i.e. $\mathbf{x}_i$ changes to $\mathbf{x}_i'$ (and possibly $\mathbf{y}_i$ to $\mathbf{y}_i'$). Now, we need a different matrix, say $A'$, to get back the correspondence as $\mathbf{y}' = A'\mathbf{x}'$.

A matrix: just one description.

Question: How to get the new matrix $A'$?

Matrix Terminology

◮ ...
◮ Matrix product
◮ Transpose
◮ Conjugate transpose
◮ Symmetric and skew-symmetric matrices
◮ Hermitian and skew-Hermitian matrices
◮ Determinant of a square matrix
◮ Inverse of a square matrix
◮ Adjoint of a square matrix
◮ ...

Points to note

◮ A matrix defines a linear transformation from one vector space to another.
◮ Matrix representation of a linear transformation depends on the selected bases (or frames of reference) of the source and target spaces.
Important: Revise matrix algebra basics as necessary tools.

Necessary Exercises: 2, 3

Operational Fundamentals of Linear Algebra

Outline: Range and Null Space: Rank and Nullity; Basis; Change of Basis; Elementary Transformations

Range and Null Space: Rank and Nullity

Consider $A \in \mathbb{R}^{m \times n}$ as a mapping
$$A : \mathbb{R}^n \to \mathbb{R}^m, \quad Ax = y, \quad x \in \mathbb{R}^n,\ y \in \mathbb{R}^m.$$

Observations
1. Every $x \in \mathbb{R}^n$ has an image $y \in \mathbb{R}^m$, but every $y \in \mathbb{R}^m$ need not have a pre-image in $\mathbb{R}^n$. Range (or range space) as subset/subspace of the co-domain: containing images of all $x \in \mathbb{R}^n$.
2. Image of $x \in \mathbb{R}^n$ in $\mathbb{R}^m$ is unique, but the pre-image of $y \in \mathbb{R}^m$ need not be. It may be non-existent, unique or infinitely many. Null space as subset/subspace of the domain: containing pre-images of only $\mathbf{0} \in \mathbb{R}^m$.

[Figure: Range and null space: schematic representation. Null($A$) inside the domain $\mathbb{R}^n$ maps to $\mathbf{0}$; Range($A$) sits inside the co-domain $\mathbb{R}^m$.]

Question: What is the dimension of a vector space?

Linear dependence and independence: Vectors $x_1, x_2, \cdots, x_r$ in a vector space are called linearly independent if
$$k_1x_1 + k_2x_2 + \cdots + k_rx_r = \mathbf{0} \ \Rightarrow\ k_1 = k_2 = \cdots = k_r = 0.$$

$$\begin{aligned} \text{Range}(A) &= \{y : y = Ax,\ x \in \mathbb{R}^n\} \\ \text{Null}(A) &= \{x : x \in \mathbb{R}^n,\ Ax = \mathbf{0}\} \\ \text{Rank}(A) &= \dim \text{Range}(A) \\ \text{Nullity}(A) &= \dim \text{Null}(A) \end{aligned}$$

Basis

Take a set of vectors $v_1, v_2, \cdots, v_r$ in a vector space.
Question: Given a vector $v$ in the vector space, can we describe it as
$$v = k_1v_1 + k_2v_2 + \cdots + k_rv_r = Vk,$$
where $V = [v_1\ v_2\ \cdots\ v_r]$ and $k = [k_1\ k_2\ \cdots\ k_r]^T$?
Answer: Not necessarily.

Span, denoted as $<v_1, v_2, \cdots, v_r>$: the subspace described/generated by a set of vectors.

Basis: A basis of a vector space is composed of an ordered minimal set of vectors spanning the entire space. The basis for an $n$-dimensional space will have exactly $n$ members, all linearly independent.

Orthogonal basis: $\{v_1, v_2, \cdots, v_n\}$ with
$$v_j^Tv_k = 0 \quad \forall\ j \neq k.$$
Orthonormal basis:

$$v_j^Tv_k = \delta_{jk} = \begin{cases} 0 & \text{if } j \neq k \\ 1 & \text{if } j = k \end{cases}$$
Members of an orthonormal basis form an orthogonal matrix.
Properties of an orthogonal matrix:
$$V^{-1} = V^T \ \text{ or } \ VV^T = I, \qquad \det V = +1 \text{ or } -1.$$
Natural basis:
$$e_1 = \begin{bmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix},\quad e_2 = \begin{bmatrix} 0 \\ 1 \\ \vdots \\ 0 \end{bmatrix},\quad \cdots,\quad e_n = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 1 \end{bmatrix}.$$

Change of Basis

Suppose $x$ represents a vector (point) in $\mathbb{R}^n$ in some basis.
Question: If we change over to a new basis $\{c_1, c_2, \cdots, c_n\}$, how does the representation of a vector change?

$$x = \bar{x}_1c_1 + \bar{x}_2c_2 + \cdots + \bar{x}_nc_n = [c_1\ c_2\ \cdots\ c_n]\begin{bmatrix} \bar{x}_1 \\ \bar{x}_2 \\ \vdots \\ \bar{x}_n \end{bmatrix}.$$
With $C = [c_1\ c_2\ \cdots\ c_n]$,
new to old coordinates: $C\bar{x} = x$, and
old to new coordinates: $\bar{x} = C^{-1}x$.
Note: Matrix $C$ is invertible. How?
Special case with $C$ orthogonal: orthogonal coordinate transformation.

Question: And, how does basis change affect the representation of a linear transformation?
Consider the mapping $A : \mathbb{R}^n \to \mathbb{R}^m$, $Ax = y$.
Change the basis of the domain through $P \in \mathbb{R}^{n \times n}$ and that of the co-domain through $Q \in \mathbb{R}^{m \times m}$.
New and old vector representations are related as

$$P\bar{x} = x \quad \text{and} \quad Q\bar{y} = y.$$

Then, $Ax = y \ \Rightarrow\ \bar{A}\bar{x} = \bar{y}$, with
$$\bar{A} = Q^{-1}AP.$$
Special case: $m = n$ and $P = Q$ gives a similarity transformation

$$\bar{A} = P^{-1}AP.$$
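As a quick numerical check of these relations, here is a minimal NumPy sketch (the random $P$, $Q$ and all variable names are illustrative assumptions, not from the text): it verifies that $\bar{A} = Q^{-1}AP$ describes the same mapping in the new bases.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 2, 3
A = rng.standard_normal((m, n))    # a mapping R^3 -> R^2 in the old bases
P = rng.standard_normal((n, n))    # columns: new basis of the domain
Q = rng.standard_normal((m, m))    # columns: new basis of the co-domain

A_bar = np.linalg.inv(Q) @ A @ P   # representation in the new bases

x = rng.standard_normal(n)         # a point, in old coordinates
x_bar = np.linalg.solve(P, x)      # the same point, in new coordinates
y_bar = A_bar @ x_bar
# Q y_bar recovers y = A x: same transformation, two descriptions
assert np.allclose(Q @ y_bar, A @ x)
```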

Elementary Transformations

Observation: Certain reorganizations of equations in a system have no effect on the solution(s).

Elementary Row Transformations:
1. interchange of two rows,
2. scaling of a row, and
3. addition of a scalar multiple of a row to another.

Elementary Column Transformations: Similar operations with columns, equivalent to a corresponding shuffling of the variables (unknowns).

Equivalence of matrices: An elementary transformation defines an equivalence relation between two matrices.
Reduction to normal form:
$$A_N = \begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix}$$
Rank invariance: Elementary transformations do not alter the rank of a matrix.
Elementary transformation as matrix multiplication: an elementary row transformation on a matrix is equivalent to a pre-multiplication with an elementary matrix, obtained through the same row transformation on the identity matrix (of appropriate size).

Similarly, an elementary column transformation is equivalent to post-multiplication with the corresponding elementary matrix.

Points to note

◮ Concepts of range and null space of a linear transformation.
◮ Effects of change of basis on representations of vectors and linear transformations.
◮ Elementary transformations as tools to modify (simplify) systems of (simultaneous) linear equations.

Necessary Exercises: 2, 4, 5, 6

Systems of Linear Equations

Outline: Nature of Solutions; Basic Idea of Solution Methodology; Homogeneous Systems; Pivoting; Partitioning and Block Operations

Nature of Solutions

$$Ax = b$$
Coefficient matrix: $A$; augmented matrix: $[A\,|\,b]$.
Existence of solutions or consistency:

$$Ax = b \text{ has a solution } \Leftrightarrow\ b \in \text{Range}(A) \ \Leftrightarrow\ \text{Rank}(A) = \text{Rank}([A\,|\,b])$$

Uniqueness of solutions:
$$\text{Rank}(A) = \text{Rank}([A\,|\,b]) = n$$
$\Leftrightarrow$ Solution of $Ax = b$ is unique.
$\Leftrightarrow$ $Ax = \mathbf{0}$ has only the trivial (zero) solution.
Infinite solutions: For $\text{Rank}(A) = \text{Rank}([A\,|\,b]) = k < n$, solution

$$x = \bar{x} + x_N, \quad \text{with } A\bar{x} = b \text{ and } x_N \in \text{Null}(A).$$

Basic Idea of Solution Methodology

To diagnose the non-existence of a solution, to determine the unique solution, or to describe infinite solutions: decouple the equations using elementary transformations.
For solving $Ax = b$, apply suitable elementary row transformations on both sides, leading to

$$R_qR_{q-1}\cdots R_2R_1Ax = R_qR_{q-1}\cdots R_2R_1b, \quad \text{or } [RA]x = Rb;$$

such that matrix $[RA]$ is greatly simplified. In the best case, with complete reduction, $RA = I_n$, and components of $x$ can be read off from $Rb$.

For inverting matrix $A$, treat $AA^{-1} = I_n$ similarly.

Homogeneous Systems

To solve $Ax = \mathbf{0}$ or to describe Null($A$), apply a series of elementary row transformations on $A$ to reduce it to $\tilde{A}$, the row-reduced echelon form or RREF.
Features of RREF:
1. The first non-zero entry in any row is a ‘1’, the leading ‘1’.
2. In the same column as the leading ‘1’, other entries are zero.
3. Non-zero entries in a lower row appear later.

Variables corresponding to columns having leading ‘1’s are expressed in terms of the remaining variables.

Solution of $Ax = \mathbf{0}$:
$$x = [z_1\ z_2\ \cdots\ z_{n-k}]\begin{bmatrix} u_1 \\ u_2 \\ \vdots \\ u_{n-k} \end{bmatrix}$$
Basis of Null($A$): $\{z_1, z_2, \cdots, z_{n-k}\}$

Pivoting

Attempt: To get ‘1’ at the diagonal (or leading) position, with ‘0’ elsewhere.
Key step: division by the diagonal (or leading) entry. Consider
$$\bar{A} = \begin{bmatrix} I_k & \cdots & \cdots & \cdots \\ \vdots & \delta & \cdots & \cdots \\ \vdots & \text{big} & \ddots & \vdots \\ \vdots & \vdots & \text{BIG} & \ddots \end{bmatrix}$$
Cannot divide by zero. Should not divide by $\delta$.
◮ partial pivoting: row interchange to get ‘big’ in place of $\delta$
◮ complete pivoting: row and column interchanges to get ‘BIG’ in place of $\delta$

Complete pivoting does not give a huge advantage over partial pivoting, but requires maintenance of the variable permutation for later unscrambling.

Partitioning and Block Operations

Equation $Ax = y$ can be written as

$$\begin{bmatrix} A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & A_{23} \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} y_1 \\ y_2 \end{bmatrix},$$
with $x_1$, $x_2$ etc. being themselves vectors (or matrices).
◮ For a valid partitioning, block sizes should be consistent.
◮ Elementary transformations can be applied over blocks.
◮ Block operations can be computationally economical at times.
◮ Conceptually, different blocks of contributions/equations can be assembled for mathematical modelling of complicated coupled systems.

Points to note

◮ Solution(s) of $Ax = b$ may be non-existent, unique or infinitely many.
◮ Complete solution can be described by composing a particular solution with the null space of $A$.
◮ Null space basis can be obtained conveniently from the row-reduced echelon form of $A$.
◮ For a strategy of solution, pivoting is an important step.

Necessary Exercises: 1, 2, 4, 5, 7

Gauss Elimination Family of Methods

Outline: Gauss-Jordan Elimination; Gaussian Elimination with Back-Substitution; LU Decomposition

Gauss-Jordan Elimination

Task: Solve $Ax = b_1$, $Ax = b_2$ and $Ax = b_3$; find $A^{-1}$ and evaluate $A^{-1}B$, where $A \in \mathbb{R}^{n \times n}$ and $B \in \mathbb{R}^{n \times p}$.
Assemble $C = [A\ b_1\ b_2\ b_3\ I_n\ B] \in \mathbb{R}^{n \times (2n+3+p)}$ and follow the algorithm. Collect solutions from the result

$$C \longrightarrow \tilde{C} = [I_n\ A^{-1}b_1\ A^{-1}b_2\ A^{-1}b_3\ A^{-1}\ A^{-1}B].$$
Remarks:
◮ Premature termination: matrix $A$ singular — decision?
◮ If you use complete pivoting, unscramble the permutation.

◮ Identity matrix in both $C$ and $\tilde{C}$? Store $A^{-1}$ ‘in place’.
◮ For evaluating $A^{-1}b$, do not develop $A^{-1}$.
◮ Gauss-Jordan elimination an overkill? Want something cheaper?

Gauss-Jordan Algorithm

◮ $\Delta = 1$
◮ For $k = 1, 2, 3, \cdots, (n-1)$
  1. Pivot: identify $l$ such that $|c_{lk}| = \max |c_{jk}|$ for $k \leq j \leq n$.
     If $c_{lk} = 0$, then $\Delta = 0$ and exit. Else, interchange row $k$ and row $l$.
  2. $\Delta \leftarrow c_{kk}\Delta$; divide row $k$ by $c_{kk}$.
  3. Subtract $c_{jk}$ times row $k$ from row $j$, $\forall j \neq k$.
◮ $\Delta \leftarrow c_{nn}\Delta$. If $c_{nn} = 0$, then exit. Else, divide row $n$ by $c_{nn}$.

In case of non-singular $A$, termination occurs by default. This outline is for partial pivoting.
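A minimal Python/NumPy sketch of this outline follows (partial pivoting; the function name gauss_jordan and the determinant bookkeeping delta are mine; the sign flip on a row interchange is a standard determinant fact the slide leaves implicit):

```python
import numpy as np

def gauss_jordan(C):
    """Reduce C = [A | b's | I | ...] in place; return (result, det(A))."""
    C = C.astype(float).copy()
    n = C.shape[0]
    delta = 1.0
    for k in range(n):
        l = k + np.argmax(np.abs(C[k:, k]))   # pivot: max |c_jk| over j >= k
        if C[l, k] == 0.0:
            return C, 0.0                     # premature termination: A singular
        if l != k:
            C[[k, l]] = C[[l, k]]             # row interchange ...
            delta = -delta                    # ... flips the determinant's sign
        delta *= C[k, k]
        C[k] /= C[k, k]                       # leading '1' in row k
        for j in range(n):
            if j != k:
                C[j] -= C[j, k] * C[k]        # '0' elsewhere in column k
    return C, delta

A = np.array([[2., 1.], [1., 3.]])
R, det_A = gauss_jordan(np.hstack([A, np.eye(2)]))
print(R[:, 2:], det_A)                        # A^{-1} and det(A) = 5
```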

Gaussian Elimination with Back-Substitution

Gaussian elimination: $Ax = b \longrightarrow \tilde{A}x = \tilde{b}$, or
$$\begin{bmatrix} a'_{11} & a'_{12} & \cdots & a'_{1n} \\ & a'_{22} & \cdots & a'_{2n} \\ & & \ddots & \vdots \\ & & & a'_{nn} \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} b'_1 \\ b'_2 \\ \vdots \\ b'_n \end{bmatrix}.$$
Back-substitutions:
$$x_n = b'_n/a'_{nn}, \qquad x_i = \frac{1}{a'_{ii}}\left(b'_i - \sum_{j=i+1}^{n} a'_{ij}x_j\right) \ \text{ for } i = n-1, n-2, \cdots, 2, 1.$$
Remarks
◮ Computational cost half compared to G-J elimination.
◮ Like G-J elimination, prior knowledge of RHS needed.

Anatomy of the Gaussian elimination: The process of Gaussian elimination (with no pivoting) leads to

$$U = R_qR_{q-1}\cdots R_2R_1A = RA.$$
The steps, given by
$$j\text{-th row} \ \leftarrow\ j\text{-th row} - \frac{a_{jk}}{a_{kk}} \times k\text{-th row}, \quad j = k+1, k+2, \cdots, n$$
for $k = 1, 2, 3, \cdots, (n-1)$, involve elementary matrices
$$R_k\big|_{k=1} = \begin{bmatrix} 1 & 0 & 0 & \cdots & 0 \\ -\frac{a_{21}}{a_{11}} & 1 & 0 & \cdots & 0 \\ -\frac{a_{31}}{a_{11}} & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ -\frac{a_{n1}}{a_{11}} & 0 & 0 & \cdots & 1 \end{bmatrix} \quad \text{etc.}$$
With $L = R^{-1}$, $\ A = LU$.

LU Decomposition

A square matrix with non-zero leading minors is LU-decomposable. No reference to a right-hand-side (RHS) vector!
To solve $Ax = b$, denote $y = Ux$ and split as

$$Ax = b \ \Rightarrow\ LUx = b \ \Rightarrow\ Ly = b \text{ and } Ux = y.$$
Forward substitutions:

$$y_i = \frac{1}{l_{ii}}\left(b_i - \sum_{j=1}^{i-1} l_{ij}y_j\right) \ \text{ for } i = 1, 2, 3, \cdots, n;$$
Back-substitutions:

$$x_i = \frac{1}{u_{ii}}\left(y_i - \sum_{j=i+1}^{n} u_{ij}x_j\right) \ \text{ for } i = n, n-1, n-2, \cdots, 1.$$

Question: How to LU-decompose a given matrix?

$$L = \begin{bmatrix} l_{11} & 0 & 0 & \cdots & 0 \\ l_{21} & l_{22} & 0 & \cdots & 0 \\ l_{31} & l_{32} & l_{33} & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ l_{n1} & l_{n2} & l_{n3} & \cdots & l_{nn} \end{bmatrix} \quad \text{and} \quad U = \begin{bmatrix} u_{11} & u_{12} & u_{13} & \cdots & u_{1n} \\ 0 & u_{22} & u_{23} & \cdots & u_{2n} \\ 0 & 0 & u_{33} & \cdots & u_{3n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & u_{nn} \end{bmatrix}$$
Elements of the product give

$$\sum_{k=1}^{i} l_{ik}u_{kj} = a_{ij} \ \text{ for } i \leq j,$$

$$\text{and} \quad \sum_{k=1}^{j} l_{ik}u_{kj} = a_{ij} \ \text{ for } i > j.$$
$n^2$ equations in $n^2 + n$ unknowns: choice of $n$ unknowns

Doolittle’s algorithm

◮ Choose $l_{ii} = 1$
◮ For $j = 1, 2, 3, \cdots, n$
  1. $u_{ij} = a_{ij} - \sum_{k=1}^{i-1} l_{ik}u_{kj}$ for $1 \leq i \leq j$
  2. $l_{ij} = \frac{1}{u_{jj}}\left(a_{ij} - \sum_{k=1}^{j-1} l_{ik}u_{kj}\right)$ for $i > j$
Evaluation proceeds in column order of the matrix (for storage):

$$A^* = \begin{bmatrix} u_{11} & u_{12} & u_{13} & \cdots & u_{1n} \\ l_{21} & u_{22} & u_{23} & \cdots & u_{2n} \\ l_{31} & l_{32} & u_{33} & \cdots & u_{3n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ l_{n1} & l_{n2} & l_{n3} & \cdots & u_{nn} \end{bmatrix}$$
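A sketch of Doolittle's algorithm with the two triangular substitutions (no pivoting, so non-zero leading minors are assumed; function names are mine):

```python
import numpy as np

def doolittle(A):
    """Return L and U packed into one matrix; the unit diagonal of L is implied."""
    n = A.shape[0]
    LU = np.zeros_like(A, dtype=float)
    for j in range(n):                        # column order, as in the text
        for i in range(j + 1):                # u_ij for i <= j
            LU[i, j] = A[i, j] - LU[i, :i] @ LU[:i, j]
        for i in range(j + 1, n):             # l_ij for i > j
            LU[i, j] = (A[i, j] - LU[i, :j] @ LU[:j, j]) / LU[j, j]
    return LU

def lu_solve(LU, b):
    n = len(b)
    y = np.zeros(n)
    for i in range(n):                        # forward substitution (l_ii = 1)
        y[i] = b[i] - LU[i, :i] @ y[:i]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):            # back-substitution
        x[i] = (y[i] - LU[i, i + 1:] @ x[i + 1:]) / LU[i, i]
    return x

A = np.array([[3., 1., 2.], [6., 3., 4.], [3., 1., 5.]])
print(lu_solve(doolittle(A), np.array([6., 13., 9.])))   # -> [1. 1. 1.]
```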

Question: What about matrices which are not LU-decomposable?
Question: What about pivoting?
Consider the non-singular matrix
$$\begin{bmatrix} 0 & 1 & 2 \\ 3 & 1 & 2 \\ 2 & 1 & 3 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ l_{21} & 1 & 0 \\ l_{31} & l_{32} & 1 \end{bmatrix}\begin{bmatrix} u_{11} = 0 & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \end{bmatrix} \ \Rightarrow\ l_{21} = ?$$
LU-decompose a permutation of its rows:

$$\begin{bmatrix} 0 & 1 & 2 \\ 3 & 1 & 2 \\ 2 & 1 & 3 \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} 3 & 1 & 2 \\ 0 & 1 & 2 \\ 2 & 1 & 3 \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ \frac{2}{3} & \frac{1}{3} & 1 \end{bmatrix}\begin{bmatrix} 3 & 1 & 2 \\ 0 & 1 & 2 \\ 0 & 0 & 1 \end{bmatrix}.$$
In this PLU decomposition, permutation $P$ is recorded in a vector.

Points to note

For invertible coefficient matrices, use
◮ Gauss-Jordan elimination for a large number of RHS vectors available all together, and also for matrix inversion,
◮ Gaussian elimination with back-substitution for a small number of RHS vectors available together,
◮ LU decomposition method to develop and maintain factors to be used as and when RHS vectors are available.
Pivoting is almost necessary (without further special structure).

Necessary Exercises: 1, 4, 5

Special Systems and Special Methods

Outline: Quadratic Forms, Symmetry and Positive Definiteness; Cholesky Decomposition; Sparse Systems*

Quadratic Forms, Symmetry and Positive Definiteness

Quadratic form
$$q(x) = x^TAx = \sum_{i=1}^{n}\sum_{j=1}^{n} a_{ij}x_ix_j$$
defined with respect to a symmetric matrix.
Quadratic form $q(x)$, equivalently matrix $A$, is called positive definite (p.d.) when
$$x^TAx > 0 \quad \forall\ x \neq \mathbf{0}$$
and positive semi-definite (p.s.d.) when $x^TAx \geq 0 \ \forall\ x \neq \mathbf{0}$.
Sylvester’s criteria:

$$a_{11} \geq 0, \quad \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} \geq 0, \quad \cdots, \quad \det A \geq 0;$$

i.e. all leading minors non-negative, for p.s.d. (For p.d., the leading minors are strictly positive; a complete p.s.d. test, in fact, requires all principal minors, not only the leading ones, to be non-negative.)
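A direct check of the leading minors is a one-liner in NumPy (a sketch; as noted above, this decides positive definiteness, while a complete semi-definiteness test needs all principal minors):

```python
import numpy as np

def leading_minors(A):
    """Determinants of the leading k-by-k sub-matrices, k = 1, ..., n."""
    return [np.linalg.det(A[:k, :k]) for k in range(1, A.shape[0] + 1)]

A = np.array([[2., -1., 0.], [-1., 2., -1.], [0., -1., 2.]])
print(leading_minors(A))    # [2.0, 3.0, 4.0]: all positive, so A is p.d.
```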

Cholesky Decomposition

If $A \in \mathbb{R}^{n \times n}$ is symmetric and positive definite, then there exists a non-singular lower triangular matrix $L \in \mathbb{R}^{n \times n}$ such that
$$A = LL^T.$$

Algorithm
For $i = 1, 2, 3, \cdots, n$
◮ $L_{ii} = \sqrt{a_{ii} - \sum_{k=1}^{i-1} L_{ik}^2}$
◮ $L_{ji} = \frac{1}{L_{ii}}\left(a_{ji} - \sum_{k=1}^{i-1} L_{jk}L_{ik}\right)$ for $i < j \leq n$

For solving $Ax = b$:
Forward substitutions: $Ly = b$
Back-substitutions: $L^Tx = y$
Remarks
◮ Test of positive definiteness.
◮ Stable algorithm: no pivoting necessary!
◮ Economy of space and time.
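A minimal sketch of the algorithm (the exception doubles as the positive-definiteness test mentioned in the remarks; all names are mine):

```python
import numpy as np

def cholesky(A):
    """Return lower-triangular L with A = L L^T; fail if A is not p.d."""
    n = A.shape[0]
    L = np.zeros_like(A, dtype=float)
    for i in range(n):
        s = A[i, i] - L[i, :i] @ L[i, :i]     # a_ii minus accumulated squares
        if s <= 0.0:
            raise ValueError("matrix is not positive definite")
        L[i, i] = np.sqrt(s)
        for j in range(i + 1, n):             # fill column i below the diagonal
            L[j, i] = (A[j, i] - L[j, :i] @ L[i, :i]) / L[i, i]
    return L

A = np.array([[4., 2.], [2., 3.]])
L = cholesky(A)
y = np.linalg.solve(L, np.array([8., 7.]))    # forward: L y = b
x = np.linalg.solve(L.T, y)                   # backward: L^T x = y
print(x)                                      # -> [1.25 1.5]
```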

Sparse Systems*

◮ What is a sparse matrix?
◮ Bandedness and bandwidth
◮ Efficient storage and processing
◮ Updates
◮ Sherman-Morrison formula

$$(A + uv^T)^{-1} = A^{-1} - \frac{(A^{-1}u)(v^TA^{-1})}{1 + v^TA^{-1}u}$$

◮ Woodbury formula
◮ Conjugate gradient method
◮ efficiently implemented matrix-vector products
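A quick numerical verification of the Sherman-Morrison formula above (a sketch; the economy lies in reusing a known $A^{-1}$, or factors of $A$, when $A$ receives a rank-one update $uv^T$):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n)) + n * np.eye(n)   # comfortably invertible test matrix
u, v = rng.standard_normal(n), rng.standard_normal(n)

Ainv = np.linalg.inv(A)
# rank-one correction of the known inverse, no fresh inversion needed
correction = np.outer(Ainv @ u, v @ Ainv) / (1.0 + v @ Ainv @ u)
assert np.allclose(Ainv - correction, np.linalg.inv(A + np.outer(u, v)))
```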

Points to note

◮ Concepts and criteria of positive definiteness and positive semi-definiteness
◮ Cholesky decomposition method in symmetric positive definite systems
◮ Nature of sparsity and its exploitation

Necessary Exercises: 1, 2, 4, 7

Numerical Aspects in Linear Systems

Outline: Norms and Condition Numbers; Ill-conditioning and Sensitivity; Rectangular Systems; Singularity-Robust Solutions; Iterative Methods

Norms and Condition Numbers

Norm of a vector: a measure of size
◮ Euclidean norm or 2-norm:
$$\|x\| = \|x\|_2 = \left(x_1^2 + x_2^2 + \cdots + x_n^2\right)^{\frac{1}{2}} = \sqrt{x^Tx}$$
◮ The p-norm:

$$\|x\|_p = \left[|x_1|^p + |x_2|^p + \cdots + |x_n|^p\right]^{\frac{1}{p}}$$

◮ The 1-norm: $\|x\|_1 = |x_1| + |x_2| + \cdots + |x_n|$
◮ The $\infty$-norm:
$$\|x\|_\infty = \lim_{p \to \infty}\left[|x_1|^p + |x_2|^p + \cdots + |x_n|^p\right]^{\frac{1}{p}} = \max_j |x_j|$$

◮ Weighted norm: $\|x\|_w = \sqrt{x^TWx}$, where weight matrix $W$ is symmetric and positive definite.

Norm of a matrix: magnitude or scale of the transformation.

Matrix norm (induced by a vector norm) is given by the largest magnification it can produce on a vector:
$$\|A\| = \max_{x \neq \mathbf{0}} \frac{\|Ax\|}{\|x\|} = \max_{\|x\|=1} \|Ax\|$$

Direct consequence: $\|Ax\| \leq \|A\|\,\|x\|$
Index of closeness to singularity: Condition number

$$\kappa(A) = \|A\|\,\|A^{-1}\|, \qquad 1 \leq \kappa(A) \leq \infty$$

** Isotropic, well-conditioned, ill-conditioned and singular matrices

Ill-conditioning and Sensitivity

$$\begin{aligned} 0.9999x_1 - 1.0001x_2 &= 1 \\ x_1 - x_2 &= 1 + \epsilon \end{aligned}$$
Solution: $x_1 = \frac{10001\epsilon + 1}{2}$, $\ x_2 = \frac{9999\epsilon - 1}{2}$
◮ sensitive to small changes in the RHS
◮ insensitive to error in a guess
See the geometric illustration further below.
For the system $Ax = b$, the solution is $x = A^{-1}b$, and
$$\delta x = A^{-1}\delta b - A^{-1}\,\delta A\, x$$
If the matrix $A$ is exactly known, then

$$\frac{\|\delta x\|}{\|x\|} \leq \|A\|\,\|A^{-1}\|\,\frac{\|\delta b\|}{\|b\|} = \kappa(A)\frac{\|\delta b\|}{\|b\|}$$
If the RHS is known exactly, then

$$\frac{\|\delta x\|}{\|x\|} \leq \|A\|\,\|A^{-1}\|\,\frac{\|\delta A\|}{\|A\|} = \kappa(A)\frac{\|\delta A\|}{\|A\|}$$
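The 0.9999/1.0001 system above makes a convenient numerical experiment; a sketch computing $\kappa(A)$ and watching the solution drift with $\epsilon$:

```python
import numpy as np

A = np.array([[0.9999, -1.0001],
              [1.0,    -1.0   ]])
print(np.linalg.cond(A))               # roughly 2e4: ill-conditioned

for eps in [0.0, 1e-4, 1e-3]:
    b = np.array([1.0, 1.0 + eps])
    print(eps, np.linalg.solve(A, b))  # x1 = (10001*eps + 1)/2, x2 = (9999*eps - 1)/2
```

A change of order $10^{-3}$ in one entry of $b$ moves the solution by several units, consistent with the bound above.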

[Figure: Ill-conditioning: a geometric perspective. Panels: (a) reference system, (b) parallel shift, (c) guess validation, (d) singularity.]

Rectangular Systems

Consider $Ax = b$ with $A \in \mathbb{R}^{m \times n}$ and $\text{Rank}(A) = n < m$.
$$A^TAx = A^Tb \ \Rightarrow\ x = (A^TA)^{-1}A^Tb$$
Square of error norm:
$$U(x) = \frac{1}{2}\|Ax - b\|^2 = \frac{1}{2}(Ax - b)^T(Ax - b) = \frac{1}{2}x^TA^TAx - x^TA^Tb + \frac{1}{2}b^Tb$$
Least square error solution:
$$\frac{\partial U}{\partial x} = A^TAx - A^Tb = \mathbf{0}$$

Pseudoinverse or Moore-Penrose inverse or left-inverse

$$A^\# = (A^TA)^{-1}A^T$$

Consider $Ax = b$ with $A \in \mathbb{R}^{m \times n}$ and $\text{Rank}(A) = m < n$.
Look for $\lambda \in \mathbb{R}^m$ that satisfies $A^T\lambda = x$ and
$$AA^T\lambda = b.$$
Solution:
$$x = A^T\lambda = A^T(AA^T)^{-1}b$$
Consider the problem
$$\text{minimize } U(x) = \tfrac{1}{2}x^Tx \quad \text{subject to } Ax = b.$$
Extremum of the Lagrangian $\mathcal{L}(x, \lambda) = \frac{1}{2}x^Tx - \lambda^T(Ax - b)$ is given by
$$\frac{\partial \mathcal{L}}{\partial x} = \mathbf{0}, \ \frac{\partial \mathcal{L}}{\partial \lambda} = \mathbf{0} \ \Rightarrow\ x = A^T\lambda, \ Ax = b.$$
Solution $x = A^T(AA^T)^{-1}b$ gives the foot of the perpendicular on the solution ‘plane’, and the pseudoinverse
$$A^\# = A^T(AA^T)^{-1}$$
here is a right-inverse!

Singularity-Robust Solutions

Ill-posed problems: Tikhonov regularization
◮ recipe for any linear system ($m > n$, $m = n$ or $m < n$), with any condition!
$Ax = b$ may have conflict: form $A^TAx = A^Tb$.
$A^TA$ may be ill-conditioned: rig the system as

$$(A^TA + \nu^2 I_n)x = A^Tb$$

Coefficient matrix: symmetric and positive definite!
The idea: immunize the system, paying a small price.
Issues:
◮ The choice of $\nu$?
◮ When $m < n$, computational advantage by

$$(AA^T + \nu^2 I_m)\lambda = b, \qquad x = A^T\lambda$$
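A sketch of the recipe, covering both branches ($\nu$ is the user's knob; its choice is the open issue flagged above, and the default value below is arbitrary):

```python
import numpy as np

def tikhonov_solve(A, b, nu=1e-6):
    """Singularity-robust solution of Ax = b for any shape or condition of A."""
    m, n = A.shape
    if m < n:    # cheaper route via the smaller m-by-m system
        lam = np.linalg.solve(A @ A.T + nu**2 * np.eye(m), b)
        return A.T @ lam
    return np.linalg.solve(A.T @ A + nu**2 * np.eye(n), A.T @ b)

A = np.array([[1., 1.], [1., 1.0001], [1., 0.9999]])   # tall, nearly rank-deficient
print(tikhonov_solve(A, np.array([2., 2., 2.])))        # close to [1, 1]
```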

Iterative Methods

Jacobi’s iteration method:
$$x_i^{(k+1)} = \frac{1}{a_{ii}}\left(b_i - \sum_{j=1,\ j \neq i}^{n} a_{ij}x_j^{(k)}\right) \ \text{ for } i = 1, 2, 3, \cdots, n.$$

Gauss-Seidel method:
$$x_i^{(k+1)} = \frac{1}{a_{ii}}\left(b_i - \sum_{j=1}^{i-1} a_{ij}x_j^{(k+1)} - \sum_{j=i+1}^{n} a_{ij}x_j^{(k)}\right) \ \text{ for } i = 1, 2, 3, \cdots, n.$$
The category of relaxation methods: diagonal dominance and availability of good initial approximations.
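A sketch of the Gauss-Seidel sweep on a diagonally dominant system (for Jacobi, use only the previous iterate on the right-hand side instead of the freshly updated entries; tolerance and iteration cap below are arbitrary choices):

```python
import numpy as np

def gauss_seidel(A, b, tol=1e-10, itmax=200):
    """Sweep i = 1..n, always using the freshest available values."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(itmax):
        x_old = x.copy()
        for i in range(n):
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
        if np.linalg.norm(x - x_old) < tol:
            break
    return x

A = np.array([[4., 1., 0.], [1., 4., 1.], [0., 1., 4.]])   # diagonally dominant
print(gauss_seidel(A, np.array([5., 6., 5.])))             # -> [1. 1. 1.]
```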

Points to note

◮ Solutions are unreliable when the coefficient matrix is ill-conditioned.
◮ Finding the pseudoinverse of a full-rank matrix is ‘easy’.
◮ Tikhonov regularization provides singularity-robust solutions.
◮ Iterative methods may have an edge in certain situations!

Necessary Exercises: 1, 2, 3, 4

Eigenvalues and Eigenvectors

Outline: Eigenvalue Problem; Generalized Eigenvalue Problem; Some Basic Theoretical Results; Power Method

Eigenvalue Problem

In the mapping $A : \mathbb{R}^n \to \mathbb{R}^n$, special vectors of matrix $A \in \mathbb{R}^{n \times n}$
◮ mapped to scalar multiples, i.e. undergo pure scaling:

$$Av = \lambda v$$
Eigenvector ($v$) and eigenvalue ($\lambda$): eigenpair ($\lambda$, $v$)
Algebraic eigenvalue problem:

$$(\lambda I - A)v = \mathbf{0}$$
For a non-trivial (non-zero) solution $v$,

$$\det(\lambda I - A) = 0$$
Characteristic equation; characteristic polynomial: $n$ roots
◮ $n$ eigenvalues — for each, find eigenvector(s)
Multiplicity of an eigenvalue: algebraic and geometric
Multiplicity mismatch: diagonalizable and defective matrices

Generalized Eigenvalue Problem

1-dof mass-spring system: $m\ddot{x} + kx = 0$

Natural frequency of vibration: $\omega_n = \sqrt{\frac{k}{m}}$
Free vibration of an n-dof system: $M\ddot{x} + Kx = \mathbf{0}$. Natural frequencies and corresponding modes?
Assuming a vibration mode $x = \Phi\sin(\omega t + \alpha)$,

$$(-\omega^2 M\Phi + K\Phi)\sin(\omega t + \alpha) = \mathbf{0} \ \Rightarrow\ K\Phi = \omega^2 M\Phi$$
Reduce as $(M^{-1}K)\Phi = \omega^2\Phi$? Why is it not a good idea?
$K$ symmetric, $M$ symmetric and positive definite!!

With $M = LL^T$, $\ \tilde{\Phi} = L^T\Phi$ and $\tilde{K} = L^{-1}KL^{-T}$,

$$\tilde{K}\tilde{\Phi} = \omega^2\tilde{\Phi}$$
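A sketch of this symmetric reduction on an assumed 2-dof example (the values of M and K are illustrative only):

```python
import numpy as np

M = np.array([[2., 0.], [0., 1.]])        # mass matrix: symmetric p.d.
K = np.array([[6., -2.], [-2., 4.]])      # stiffness matrix: symmetric

L = np.linalg.cholesky(M)                 # M = L L^T
Linv = np.linalg.inv(L)
K_t = Linv @ K @ Linv.T                   # K~ = L^{-1} K L^{-T}: symmetric
w2, Phi_t = np.linalg.eigh(K_t)           # standard problem K~ Phi~ = w^2 Phi~

Phi = np.linalg.solve(L.T, Phi_t)         # recover modes from Phi~ = L^T Phi
print(np.sqrt(w2))                        # natural frequencies
assert np.allclose(K @ Phi, M @ Phi * w2) # check: K Phi = w^2 M Phi, columnwise
```

The symmetric $\tilde{K}$ lets a stable symmetric eigensolver be used, which is exactly the point of avoiding the unsymmetric $M^{-1}K$.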

Some Basic Theoretical Results

Eigenvalues of transpose: Eigenvalues of $A^T$ are the same as those of $A$.
Caution: Eigenvectors of $A$ and $A^T$ need not be the same.

Diagonal and block diagonal matrices: Eigenvalues of a diagonal matrix are its diagonal entries. Corresponding eigenvectors: natural basis members ($e_1$, $e_2$ etc.).

Eigenvalues of a block diagonal matrix: those of diagonal blocks. Eigenvectors: coordinate extensions of individual eigenvectors. With (λ2, v2) as eigenpair of block A2,

$$A\tilde{v}_2 = \begin{bmatrix} A_1 & 0 & 0 \\ 0 & A_2 & 0 \\ 0 & 0 & A_3 \end{bmatrix}\begin{bmatrix} \mathbf{0} \\ v_2 \\ \mathbf{0} \end{bmatrix} = \begin{bmatrix} \mathbf{0} \\ A_2v_2 \\ \mathbf{0} \end{bmatrix} = \lambda_2\begin{bmatrix} \mathbf{0} \\ v_2 \\ \mathbf{0} \end{bmatrix}$$

Triangular and block triangular matrices: Eigenvalues of a triangular matrix are its diagonal entries. Eigenvalues of a block triangular matrix are the collection of eigenvalues of its diagonal blocks.
Take
$$H = \begin{bmatrix} A & B \\ 0 & C \end{bmatrix}, \quad A \in \mathbb{R}^{r \times r} \text{ and } C \in \mathbb{R}^{s \times s}.$$
If $Av = \lambda v$, then

$$H\begin{bmatrix} v \\ \mathbf{0} \end{bmatrix} = \begin{bmatrix} A & B \\ 0 & C \end{bmatrix}\begin{bmatrix} v \\ \mathbf{0} \end{bmatrix} = \begin{bmatrix} Av \\ \mathbf{0} \end{bmatrix} = \begin{bmatrix} \lambda v \\ \mathbf{0} \end{bmatrix} = \lambda\begin{bmatrix} v \\ \mathbf{0} \end{bmatrix}$$
If $\mu$ is an eigenvalue of $C$, then it is also an eigenvalue of $C^T$, and

$$C^Tw = \mu w \ \Rightarrow\ H^T\begin{bmatrix} \mathbf{0} \\ w \end{bmatrix} = \begin{bmatrix} A^T & 0 \\ B^T & C^T \end{bmatrix}\begin{bmatrix} \mathbf{0} \\ w \end{bmatrix} = \mu\begin{bmatrix} \mathbf{0} \\ w \end{bmatrix}$$

Shift theorem: Eigenvectors of $A + \mu I$ are the same as those of $A$. Eigenvalues: shifted by $\mu$.

Deflation: For a symmetric matrix $A$, with mutually orthogonal eigenvectors, having $(\lambda_j, v_j)$ as an eigenpair,

$$B = A - \lambda_j\frac{v_jv_j^T}{v_j^Tv_j}$$

has the same eigenstructure as $A$, except that the eigenvalue corresponding to $v_j$ is zero.
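A sketch verifying the deflation statement on a small symmetric matrix:

```python
import numpy as np

A = np.array([[2., 1.], [1., 2.]])          # eigenpairs: (1, [1,-1]), (3, [1,1])
lam, V = np.linalg.eigh(A)
v = V[:, 1]                                  # eigenvector of lambda = 3

B = A - lam[1] * np.outer(v, v) / (v @ v)    # deflate the eigenvalue 3
print(np.linalg.eigvalsh(B))                 # -> [0. 1.]: 3 replaced by 0, 1 kept
```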

Eigenspace: If $v_1, v_2, \cdots, v_k$ are eigenvectors of $A$ corresponding to the same eigenvalue $\lambda$, then the eigenspace is $<v_1, v_2, \cdots, v_k>$.

Similarity transformation: $B = S^{-1}AS$ is the same transformation expressed in a new basis.

$$\det(\lambda I - B) = \det S^{-1}\det(\lambda I - A)\det S = \det(\lambda I - A)$$
Same characteristic polynomial!
Eigenvalues are the property of a linear transformation, not of the basis.
An eigenvector $v$ of $A$ transforms to $S^{-1}v$, as the corresponding eigenvector of $B$.

Power Method

Consider matrix $A$ with

$$|\lambda_1| > |\lambda_2| \geq |\lambda_3| \geq \cdots \geq |\lambda_{n-1}| > |\lambda_n|$$

and a full set of $n$ eigenvectors $v_1, v_2, \cdots, v_n$.
For vector $x = \alpha_1v_1 + \alpha_2v_2 + \cdots + \alpha_nv_n$,
$$A^px = \lambda_1^p\left[\alpha_1v_1 + \alpha_2\left(\frac{\lambda_2}{\lambda_1}\right)^pv_2 + \alpha_3\left(\frac{\lambda_3}{\lambda_1}\right)^pv_3 + \cdots + \alpha_n\left(\frac{\lambda_n}{\lambda_1}\right)^pv_n\right]$$

As $p \to \infty$, $A^px \to \lambda_1^p\alpha_1v_1$, and
$$\lambda_1 = \lim_{p \to \infty} \frac{(A^px)_r}{(A^{p-1}x)_r}, \quad r = 1, 2, 3, \cdots, n.$$

At convergence, the $n$ ratios will be the same.
Question: How to find the least magnitude eigenvalue?
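A sketch of the iteration (normalized at every step to keep the iterate finite; a Rayleigh-quotient estimate replaces the slide's componentwise ratio, an implementation convenience). For the least-magnitude eigenvalue, one standard answer is to apply the same iteration to $A^{-1}$, or to exploit the shift theorem.

```python
import numpy as np

def power_method(A, p_max=1000, tol=1e-12):
    """Estimate the dominant eigenpair by repeated multiplication A^p x."""
    x = np.ones(A.shape[0])
    lam = 0.0
    for _ in range(p_max):
        y = A @ x
        lam_new = (x @ y) / (x @ x)        # Rayleigh-quotient estimate of lambda_1
        x = y / np.linalg.norm(y)          # rescale: avoids overflow/underflow
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return lam, x

A = np.array([[4., 1.], [2., 3.]])         # eigenvalues 5 and 2
print(power_method(A)[0])                  # -> 5.0
```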

Points to note

◮ Meaning and context of the algebraic eigenvalue problem
◮ Fundamental deductions and vital relationships
◮ Power method as an inexpensive procedure to determine extremal magnitude eigenvalues

Necessary Exercises: 1, 2, 3, 4, 6

Diagonalization and Similarity Transformations

Outline: Diagonalizability; Canonical Forms; Symmetric Matrices; Similarity Transformations

Diagonalizability

Consider $A \in \mathbb{R}^{n \times n}$, having $n$ eigenvectors $v_1, v_2, \cdots, v_n$, with corresponding eigenvalues $\lambda_1, \lambda_2, \cdots, \lambda_n$.

$$AS = A[v_1\ v_2\ \cdots\ v_n] = [\lambda_1v_1\ \lambda_2v_2\ \cdots\ \lambda_nv_n] = [v_1\ v_2\ \cdots\ v_n]\begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{bmatrix} = S\Lambda$$
$$\Rightarrow\ A = S\Lambda S^{-1} \quad \text{and} \quad S^{-1}AS = \Lambda$$
Diagonalization: The process of changing the basis of a linear transformation so that its new matrix representation is diagonal, i.e. so that it is decoupled among its coordinates.

Diagonalizability: A matrix having a complete set of $n$ linearly independent eigenvectors is diagonalizable.
Existence of a complete set of eigenvectors: under what conditions does $A$ possess a complete set of $n$ linearly independent eigenvectors?

◮ All distinct eigenvalues implies diagonalizability.
◮ But diagonalizability does not imply distinct eigenvalues!
◮ However, a lack of diagonalizability certainly implies a multiplicity mismatch.

Canonical Forms

Jordan canonical form (JCF)
Diagonal (canonical) form
Triangular (canonical) form
Other convenient forms: tridiagonal form, Hessenberg form

Jordan canonical form (JCF): composed of Jordan blocks
$$J = \begin{bmatrix} J_1 & & & \\ & J_2 & & \\ & & \ddots & \\ & & & J_k \end{bmatrix}, \qquad J_r = \begin{bmatrix} \lambda & 1 & & \\ & \lambda & \ddots & \\ & & \ddots & 1 \\ & & & \lambda \end{bmatrix}$$
The key equation $AS = SJ$ in extended form gives
$$A[\cdots\ S_r\ \cdots] = [\cdots\ S_r\ \cdots]\begin{bmatrix} \ddots & & \\ & J_r & \\ & & \ddots \end{bmatrix},$$
where Jordan block $J_r$ is associated with the subspace of

$$S_r = [v\ w_2\ w_3\ \cdots]$$

Equating blocks as $AS_r = S_rJ_r$ gives
$$[Av\ Aw_2\ Aw_3\ \cdots] = [v\ w_2\ w_3\ \cdots]\begin{bmatrix} \lambda & 1 & & \\ & \lambda & 1 & \\ & & \lambda & \ddots \\ & & & \ddots \end{bmatrix}.$$
Columnwise equality leads to

$$Av = \lambda v, \quad Aw_2 = v + \lambda w_2, \quad Aw_3 = w_2 + \lambda w_3, \quad \cdots$$

Generalized eigenvectors $w_2$, $w_3$ etc.:

$$(A - \lambda I)v = \mathbf{0},$$
$$(A - \lambda I)w_2 = v \ \text{ and } \ (A - \lambda I)^2w_2 = \mathbf{0},$$
$$(A - \lambda I)w_3 = w_2 \ \text{ and } \ (A - \lambda I)^3w_3 = \mathbf{0}, \quad \cdots$$

Diagonal form
◮ Special case of the Jordan form, with each Jordan block of $1 \times 1$ size
◮ Matrix is diagonalizable
◮ Similarity $S$ is composed of $n$ linearly independent eigenvectors as columns
◮ None of the eigenvectors admits any generalized eigenvector
◮ Equal geometric and algebraic multiplicities for every eigenvalue

◮ For real eigenvalues, always possible to accomplish with orthogonal similarity transformation ◮ Always possible to accomplish with unitary similarity transformation, with complex arithmetic ◮ Determination of eigenvalues

Note: The case of complex eigenvalues: $2 \times 2$ real diagonal block
$$\begin{bmatrix} \alpha & \beta \\ -\beta & \alpha \end{bmatrix} \sim \begin{bmatrix} \alpha + i\beta & 0 \\ 0 & \alpha - i\beta \end{bmatrix}$$

Forms that can be obtained with a pre-determined number of arithmetic operations (without iteration):
Tridiagonal form: non-zero entries only in the (leading) diagonal, sub-diagonal and super-diagonal
◮ useful for symmetric matrices
Hessenberg form: a slight generalization of a triangular matrix

$$H_u = \begin{bmatrix} * & * & * & \cdots & * & * \\ * & * & * & \cdots & * & * \\ & * & * & \cdots & * & * \\ & & \ddots & \ddots & \vdots & \vdots \\ & & & \ddots & * & * \\ & & & & * & * \end{bmatrix}$$
Note: Tridiagonal and Hessenberg forms do not fall in the category of canonical forms.

Symmetric Matrices

A real symmetric matrix has all real eigenvalues and is diagonalizable through an orthogonal similarity transformation.

Eigenvalues must be real. A complete set of eigenvectors exists. Eigenvectors corresponding to distinct eigenvalues are necessarily orthogonal. Corresponding to repeated eigenvalues, orthogonal eigenvectors are available.
In all cases of a symmetric matrix, we can form an orthogonal matrix $V$, such that $V^TAV = \Lambda$ is a real diagonal matrix.

Further, $A = V\Lambda V^T$. Similar results hold for complex Hermitian matrices.

Proposition: Eigenvalues of a real symmetric matrix must be real.

Take $A \in \mathbb{R}^{n \times n}$ such that $A = A^T$, with eigenvalue $\lambda = h + ik$.
Since $\lambda I - A$ is singular, so is
$$B = (\lambda I - A)(\bar{\lambda}I - A) = (hI - A + ikI)(hI - A - ikI) = (hI - A)^2 + k^2I.$$
For some $x \neq \mathbf{0}$, $Bx = \mathbf{0}$, and
$$x^TBx = 0 \ \Rightarrow\ x^T(hI - A)^T(hI - A)x + k^2x^Tx = 0.$$
Thus, $\|(hI - A)x\|^2 + \|kx\|^2 = 0 \ \Rightarrow\ k = 0$ and $\lambda = h$.

Proposition: A symmetric matrix possesses a complete set of eigenvectors.
Consider a repeated real eigenvalue $\lambda$ of $A$ and examine its Jordan block(s). Suppose $Av = \lambda v$. The first generalized eigenvector $w$ satisfies $(A - \lambda I)w = v$, giving
$$v^T(A - \lambda I)w = v^Tv \ \Rightarrow\ v^TA^Tw - \lambda v^Tw = v^Tv \ \Rightarrow\ (Av)^Tw - \lambda v^Tw = \|v\|^2 \ \Rightarrow\ \|v\|^2 = 0,$$
which is absurd. An eigenvector will not admit a generalized eigenvector. All Jordan blocks will be of $1 \times 1$ size.

Proposition: Eigenvectors of a symmetric matrix corresponding to distinct eigenvalues are necessarily orthogonal.
Take two eigenpairs $(\lambda_1, v_1)$ and $(\lambda_2, v_2)$, with $\lambda_1 \neq \lambda_2$.
$$v_1^TAv_2 = v_1^T(\lambda_2v_2) = \lambda_2v_1^Tv_2$$
$$v_1^TAv_2 = v_1^TA^Tv_2 = (Av_1)^Tv_2 = (\lambda_1v_1)^Tv_2 = \lambda_1v_1^Tv_2$$

From the two expressions, $(\lambda_1 - \lambda_2)v_1^Tv_2 = 0 \ \Rightarrow\ v_1^Tv_2 = 0$.

Proposition: Corresponding to a repeated eigenvalue of a symmetric matrix, an appropriate number of orthogonal eigenvectors can be selected.

If $\lambda_1 = \lambda_2$, then the entire subspace $<v_1, v_2>$ is an eigenspace. Select any two mutually orthogonal eigenvectors for the basis.

Facilities with the ‘omnipresent’ symmetric matrices:
◮ Expression
$$A = V\Lambda V^T = [v_1\ v_2\ \cdots\ v_n]\begin{bmatrix} \lambda_1 & & & \\ & \lambda_2 & & \\ & & \ddots & \\ & & & \lambda_n \end{bmatrix}\begin{bmatrix} v_1^T \\ v_2^T \\ \vdots \\ v_n^T \end{bmatrix} = \lambda_1v_1v_1^T + \lambda_2v_2v_2^T + \cdots + \lambda_nv_nv_n^T = \sum_{i=1}^{n} \lambda_iv_iv_i^T$$
◮ Reconstruction from a sum of rank-one components
◮ Efficient storage with only large eigenvalues and corresponding eigenvectors
◮ Deflation technique
◮ Stable and effective methods: easier to solve the eigenvalue problem

Similarity Transformations

[Figure: Eigenvalue problem: forms and steps. General matrix: Hessenberg form, then triangular form; symmetric matrix: tridiagonal form, then diagonal form.]

How to find suitable similarity transformations?
1. rotation
2. reflection
3. matrix decomposition or factorization
4. elementary transformation

Points to note

◮ Generally possible reduction: Jordan canonical form
◮ Condition of diagonalizability and the diagonal form
◮ Possible with orthogonal similarity transformations: triangular form
◮ Useful non-canonical forms: tridiagonal and Hessenberg
◮ Orthogonal diagonalization of symmetric matrices
Caution: Each step in this context is to be effected through similarity transformations.

Necessary Exercises: 1, 2, 4

Jacobi and Givens Rotation Methods (for symmetric matrices)

Outline: Plane Rotations; Jacobi Rotation Method; Givens Rotation Method

Plane Rotations

[Figure: Rotation of axes and change of basis. Point $P(x, y)$; axes $OXY$ rotated by angle $\phi$ to $OX'Y'$, with construction points $K$, $L$, $M$, $N$.]

$$x = OL + LM = OL + KN = x'\cos\phi + y'\sin\phi$$
$$y = PN - MN = PN - LK = y'\cos\phi - x'\sin\phi$$

Orthogonal change of basis:

$$r = \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} \cos\phi & \sin\phi \\ -\sin\phi & \cos\phi \end{bmatrix}\begin{bmatrix} x' \\ y' \end{bmatrix} = \Re\, r'$$

Mapping of position vectors with
$$\Re^{-1} = \Re^T = \begin{bmatrix} \cos\phi & -\sin\phi \\ \sin\phi & \cos\phi \end{bmatrix}$$

In three-dimensional (ambient) space,
$$\Re_{xy} = \begin{bmatrix} \cos\phi & \sin\phi & 0 \\ -\sin\phi & \cos\phi & 0 \\ 0 & 0 & 1 \end{bmatrix}, \qquad \Re_{xz} = \begin{bmatrix} \cos\phi & 0 & \sin\phi \\ 0 & 1 & 0 \\ -\sin\phi & 0 & \cos\phi \end{bmatrix} \quad \text{etc.}$$

Generalizing to the n-dimensional Euclidean space ($\mathbb{R}^n$), the plane rotation $P_{pq}$ equals the identity matrix except for four entries:
$$(P_{pq})_{pp} = (P_{pq})_{qq} = c = \cos\phi, \qquad (P_{pq})_{pq} = s = \sin\phi, \qquad (P_{pq})_{qp} = -s.$$
Matrix $A$ is transformed as
$$A' = P_{pq}^{-1}AP_{pq} = P_{pq}^TAP_{pq},$$
only the p-th and q-th rows and columns being affected.

Jacobi Rotation Method

$$\begin{aligned} a'_{pr} = a'_{rp} &= c\,a_{rp} - s\,a_{rq} \quad \text{for } p \neq r \neq q, \\ a'_{qr} = a'_{rq} &= c\,a_{rq} + s\,a_{rp} \quad \text{for } p \neq r \neq q, \\ a'_{pp} &= c^2a_{pp} + s^2a_{qq} - 2sc\,a_{pq}, \\ a'_{qq} &= s^2a_{pp} + c^2a_{qq} + 2sc\,a_{pq}, \text{ and} \\ a'_{pq} = a'_{qp} &= (c^2 - s^2)a_{pq} + sc(a_{pp} - a_{qq}). \end{aligned}$$
In a Jacobi rotation,

$$a'_{pq} = 0 \ \Rightarrow\ \frac{c^2 - s^2}{2sc} = \frac{a_{qq} - a_{pp}}{2a_{pq}} = k \ \text{(say)}.$$

The left side is $\cot 2\phi$: solve this equation for $\phi$.

Jacobi rotation transformations $P_{12}, P_{13}, \cdots, P_{1n}$; $P_{23}, \cdots, P_{2n}$; $\cdots$; $P_{n-1,n}$ complete a full sweep.
Note: The resulting matrix is far from diagonal!

Sum of squares of off-diagonal terms before the transformation:
$$S = \sum_{r \neq s} |a_{rs}|^2 = 2\left[\sum_{p \neq r \neq q} (a_{rp}^2 + a_{rq}^2) + a_{pq}^2\right]$$
and that afterwards:

$$S' = 2\left[\sum_{p \neq r \neq q} (a'^2_{rp} + a'^2_{rq}) + a'^2_{pq}\right] = 2\sum_{p \neq r \neq q} (a_{rp}^2 + a_{rq}^2)$$
differ by
$$\Delta S = S' - S = -2a_{pq}^2 \leq 0; \quad \text{and } S \to 0.$$
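A sketch of repeated Jacobi sweeps, stopping once the off-diagonal sum $S$ is negligible (the angle solves the $\cot 2\phi$ equation above; computing it via atan2, and forming the full $P_{pq}$, are my choices for safety and clarity rather than the book's prescription or an economical implementation):

```python
import numpy as np

def jacobi_eigen(A, tol=1e-12):
    """Diagonalize a symmetric matrix by Jacobi rotation sweeps."""
    A = A.astype(float).copy()
    n = A.shape[0]
    V = np.eye(n)                          # accumulates P(1) P(2) ... (the basis)
    while np.sum(A**2) - np.sum(np.diag(A)**2) > tol:   # off-diagonal S
        for p in range(n - 1):             # one full sweep
            for q in range(p + 1, n):
                if A[p, q] == 0.0:
                    continue
                phi = 0.5 * np.arctan2(2.0 * A[p, q], A[q, q] - A[p, p])
                c, s = np.cos(phi), np.sin(phi)
                P = np.eye(n)
                P[p, p] = P[q, q] = c
                P[p, q], P[q, p] = s, -s
                A = P.T @ A @ P            # annihilates a_pq
                V = V @ P
    return np.diag(A), V

A = np.array([[2., 1., 0.], [1., 3., 1.], [0., 1., 2.]])
lam, V = jacobi_eigen(A)
print(np.sort(lam))                        # -> [1. 2. 4.]
```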

Givens Rotation Method

While applying the rotation $P_{pq}$, demand $a'_{rq} = 0$ with $r = p - 1$: $\ \tan\phi = -\frac{a_{rq}}{a_{rp}}$ (Givens rotation).
◮ Once $a_{p-1,q}$ is annihilated, it is never updated again!
Sweep $P_{23}, P_{24}, \cdots, P_{2n}$; $P_{34}, \cdots, P_{3n}$; $\cdots$; $P_{n-1,n}$ to annihilate $a_{13}, a_{14}, \cdots, a_{1n}$; $a_{24}, \cdots, a_{2n}$; $\cdots$; $a_{n-2,n}$.
Result: a symmetric tridiagonal matrix.

How do eigenvectors transform through Jacobi/Givens rotation steps?
$$\tilde{A} = \cdots P^{(2)T}P^{(1)T}AP^{(1)}P^{(2)} \cdots$$
The product matrix $P^{(1)}P^{(2)} \cdots$ gives the basis.
To record it, initialize $V$ by identity and keep multiplying new rotation matrices on the right side.

Contrast between Jacobi and Givens rotation methods
◮ What happens to intermediate zeros?
◮ What do we get after a complete sweep?
◮ How many sweeps are to be applied?
◮ What is the intended final form of the matrix?
◮ How is the size of the matrix relevant in the choice of the method?

Fast forward ...
◮ The Householder method accomplishes ‘tridiagonalization’ more efficiently than the Givens rotation method.
◮ But, with a half-processed matrix, there come situations in which the Givens rotation method turns out to be more efficient!

Points to note

Rotation transformations on symmetric matrices:
◮ Plane rotations provide an orthogonal change of basis that can be used for diagonalization of matrices.
◮ For small matrices (say $4 \leq n \leq 8$), Jacobi rotation sweeps are competitive enough for diagonalization up to a reasonable tolerance.
◮ For large matrices, one sweep of Givens rotations can be applied to get a symmetric tridiagonal matrix, for efficient further processing.

Necessary Exercises: 2, 3, 4

Householder Transformation and Tridiagonal Matrices

Outline: Householder Reflection Transformation; Householder Method; Eigenvalues of Symmetric Tridiagonal Matrices

Householder Reflection Transformation

Figure: Vectors in Householder reflection (vectors u, v, u - v, the unit vector w, and the plane of reflection through O)

Consider u, v \in R^k with \|u\| = \|v\|, and let w = \frac{u - v}{\|u - v\|}.
Householder reflection matrix

H_k = I_k - 2 w w^T

is symmetric and orthogonal. For any vector x orthogonal to w,

H_k x = (I_k - 2 w w^T) x = x \quad \text{and} \quad H_k w = (I_k - 2 w w^T) w = -w.

Hence, H_k y = H_k (y_w + y_\perp) = -y_w + y_\perp, \quad H_k u = v \quad \text{and} \quad H_k v = u.
Mathematical Methods in Engineering and Science  Householder Transformation and Tridiagonal Matrices  104, Householder Method
Consider an n \times n symmetric matrix A.
Let u = [a_{21} \; a_{31} \; \cdots \; a_{n1}]^T \in R^{n-1} and v = \|u\| e_1 \in R^{n-1}.
Construct

P_1 = \begin{bmatrix} 1 & 0 \\ 0 & H_{n-1} \end{bmatrix}

and operate as

A^{(1)} = P_1 A P_1 = \begin{bmatrix} 1 & 0 \\ 0 & H_{n-1} \end{bmatrix} \begin{bmatrix} a_{11} & u^T \\ u & A_1 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & H_{n-1} \end{bmatrix} = \begin{bmatrix} a_{11} & v^T \\ v & H_{n-1} A_1 H_{n-1} \end{bmatrix}.

Reorganizing and re-naming,

A^{(1)} = \begin{bmatrix} d_1 & e_2 & 0 \\ e_2 & d_2 & u_2^T \\ 0 & u_2 & A_2 \end{bmatrix}.
Mathematical Methods in Engineering and Science  Householder Transformation and Tridiagonal Matrices  105, Householder Method
Next, with v_2 = \|u_2\| e_1, we form

P_2 = \begin{bmatrix} I_2 & 0 \\ 0 & H_{n-2} \end{bmatrix}

and operate as A^{(2)} = P_2 A^{(1)} P_2. After j steps,

A^{(j)} = \begin{bmatrix} d_1 & e_2 & & & \\ e_2 & d_2 & \ddots & & \\ & \ddots & \ddots & e_{j+1} & \\ & & e_{j+1} & d_{j+1} & u_{j+1}^T \\ & & & u_{j+1} & A_{j+1} \end{bmatrix}

By n-2 such steps, with P = P_1 P_2 P_3 \cdots P_{n-2},

A^{(n-2)} = P^T A P

is symmetric tridiagonal.
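A minimal sketch of this reduction (Python with NumPy; householder_tridiag is our own illustrative name, and for brevity it forms each reflector explicitly rather than exploiting symmetry):

import numpy as np

def householder_tridiag(A):
    """Reduce a symmetric matrix to tridiagonal form by Householder reflections."""
    A = A.astype(float).copy()
    n = A.shape[0]
    for j in range(n - 2):
        u = A[j + 1:, j].copy()
        v = np.zeros_like(u)
        v[0] = np.linalg.norm(u)
        if np.allclose(u, v):
            continue                          # column already in desired form
        w = (u - v) / np.linalg.norm(u - v)
        H = np.eye(n - j - 1) - 2.0 * np.outer(w, w)
        P = np.eye(n)
        P[j + 1:, j + 1:] = H                 # P = diag(I_{j+1}, H)
        A = P @ A @ P                         # P is symmetric and orthogonal
    return A

A = np.array([[4., 1., 2., 2.], [1., 2., 0., 1.], [2., 0., 3., 2.], [2., 1., 2., 1.]])
print(np.round(householder_tridiag(A), 6))    # symmetric tridiagonal result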

Mathematical Methods in Engineering and Science  Householder Transformation and Tridiagonal Matrices  106, Eigenvalues of Symmetric Tridiagonal Matrices

T = \begin{bmatrix} d_1 & e_2 & & & \\ e_2 & d_2 & \ddots & & \\ & \ddots & \ddots & e_{n-1} & \\ & & e_{n-1} & d_{n-1} & e_n \\ & & & e_n & d_n \end{bmatrix}

Characteristic polynomial

p(\lambda) = \det \begin{bmatrix} \lambda - d_1 & -e_2 & & & \\ -e_2 & \lambda - d_2 & \ddots & & \\ & \ddots & \ddots & -e_{n-1} & \\ & & -e_{n-1} & \lambda - d_{n-1} & -e_n \\ & & & -e_n & \lambda - d_n \end{bmatrix}

Mathematical Methods in Engineering and Science Householder Transformation and Tridiagonal Matrices 107, Householder Reflection Transformation Eigenvalues of Symmetric TridiagonalHouseholder Matrices Method Eigenvalues of Symmetric Tridiagonal Matrices

Characteristic polynomial of the leading k \times k sub-matrix: p_k(\lambda)

p_0(\lambda) = 1,
p_1(\lambda) = \lambda - d_1,
p_2(\lambda) = (\lambda - d_2)(\lambda - d_1) - e_2^2,
\cdots \cdots \cdots
p_{k+1}(\lambda) = (\lambda - d_{k+1}) \, p_k(\lambda) - e_{k+1}^2 \, p_{k-1}(\lambda).

P(\lambda) = \{ p_0(\lambda), p_1(\lambda), \cdots, p_n(\lambda) \}
◮ a Sturmian sequence if e_j \ne 0 \; \forall j

Question: What if e_j = 0 for some j?!
Answer: That is good news. Split the matrix.
Mathematical Methods in Engineering and Science  Householder Transformation and Tridiagonal Matrices  108, Eigenvalues of Symmetric Tridiagonal Matrices
Sturmian sequence property of P(\lambda) with e_j \ne 0:
Interlacing property: Roots of p_{k+1}(\lambda) interlace the roots of p_k(\lambda). That is, if the roots of p_{k+1}(\lambda) are \lambda_1 > \lambda_2 > \cdots > \lambda_{k+1} and those of p_k(\lambda) are \mu_1 > \mu_2 > \cdots > \mu_k, then

\lambda_1 > \mu_1 > \lambda_2 > \mu_2 > \cdots > \lambda_k > \mu_k > \lambda_{k+1}.

This property leads to a convenient procedure.
Proof: p_1(\lambda) has a single root, d_1. Now, p_2(d_1) = -e_2^2 < 0.
Since p_2(\pm\infty) = \infty > 0, the roots t_1 and t_2 of p_2(\lambda) are separated as \infty > t_1 > d_1 > t_2 > -\infty.
The statement is true for k = 1.
Mathematical Methods in Engineering and Science  Householder Transformation and Tridiagonal Matrices  109, Eigenvalues of Symmetric Tridiagonal Matrices
Next, we assume that the statement is true for k = i.
Roots of p_i(\lambda): \alpha_1 > \alpha_2 > \cdots > \alpha_i
Roots of p_{i+1}(\lambda): \beta_1 > \beta_2 > \cdots > \beta_i > \beta_{i+1}
Roots of p_{i+2}(\lambda): \gamma_1 > \gamma_2 > \cdots > \gamma_{i+1} > \gamma_{i+2}
Assumption: \beta_1 > \alpha_1 > \beta_2 > \alpha_2 > \cdots > \beta_i > \alpha_i > \beta_{i+1}

Figure: Interlacing of roots of characteristic polynomials. (a) Roots of p_i(\lambda) and p_{i+1}(\lambda); (b) Sign of p_i and p_{i+2}.

To show: \gamma_1 > \beta_1 > \gamma_2 > \beta_2 > \cdots > \gamma_{i+1} > \beta_{i+1} > \gamma_{i+2}
Mathematical Methods in Engineering and Science  Householder Transformation and Tridiagonal Matrices  110, Eigenvalues of Symmetric Tridiagonal Matrices

Since \beta_1 > \alpha_1, p_i(\beta_1) is of the same sign as p_i(\infty), i.e. positive.
Therefore, p_{i+2}(\beta_1) = -e_{i+2}^2 \, p_i(\beta_1) is negative.
But p_{i+2}(\infty) is clearly positive. Hence, \gamma_1 \in (\beta_1, \infty).
Similarly, \gamma_{i+2} \in (-\infty, \beta_{i+1}).
Question: Where are the rest of the i roots of p_{i+2}(\lambda)?

p_{i+2}(\beta_j) = (\beta_j - d_{i+2}) \, p_{i+1}(\beta_j) - e_{i+2}^2 \, p_i(\beta_j) = -e_{i+2}^2 \, p_i(\beta_j)
p_{i+2}(\beta_{j+1}) = -e_{i+2}^2 \, p_i(\beta_{j+1})

That is, p_i and p_{i+2} are of opposite signs at each \beta. Refer to the figure.

Over [βi+1, β1], pi+2(λ) changes sign over each sub-interval [βj+1, βj ], along with pi (λ), to maintain opposite signs at each β.

Conclusion: p_{i+2}(\lambda) has exactly one root in (\beta_{j+1}, \beta_j).
Mathematical Methods in Engineering and Science  Householder Transformation and Tridiagonal Matrices  111, Eigenvalues of Symmetric Tridiagonal Matrices

Examine the sequence P(w) = \{ p_0(w), p_1(w), p_2(w), \cdots, p_n(w) \}.
If p_k(w) and p_{k+1}(w) have opposite signs, then p_{k+1}(\lambda) has one root more than p_k(\lambda) in the interval (w, \infty).
Number of roots of p_n(\lambda) above w = number of sign changes in the sequence P(w).

Consequence: Number of roots of p_n(\lambda) in (a, b) = difference between the numbers of sign changes in P(a) and P(b).
Bisection method: Examine the sequence at (a+b)/2.
Separate the roots, bracket each of them and then squeeze the interval!

Any way to start with an interval to include all eigenvalues?

|\lambda_i| \le \lambda_{bnd} = \max_{1 \le j \le n} \{ |e_j| + |d_j| + |e_{j+1}| \}
Mathematical Methods in Engineering and Science  Householder Transformation and Tridiagonal Matrices  112, Eigenvalues of Symmetric Tridiagonal Matrices

Algorithm ◮ Identify the interval [a, b] of interest.

◮ For a degenerate case (some ej = 0), split the given matrix. ◮ For each of the non-degenerate matrices, ◮ by repeated use of bisection and study of the sequence P(λ), bracket individual eigenvalues within small sub-intervals, and ◮ by further use of the bisection method (or a substitute) within each such sub-interval, determine the individual eigenvalues to the desired accuracy.
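A minimal sketch of the sign-change count and the bisection search (Python; sturm_sign_changes and kth_eigenvalue are our own illustrative names, and the unscaled recurrence is kept simple, so a scaled version would be needed for large matrices):

import numpy as np

def sturm_sign_changes(d, e, w):
    # Sign changes in {p_0(w), ..., p_n(w)} = number of eigenvalues above w.
    # A zero value is counted with negative sign (a standard convention).
    sign = lambda v: -1.0 if v <= 0.0 else 1.0
    count = 0
    p_prev, p = 1.0, w - d[0]
    s_prev, s = 1.0, sign(p)
    count += (s != s_prev)
    for k in range(1, len(d)):
        p_prev, p = p, (w - d[k]) * p - e[k - 1]**2 * p_prev
        s_prev, s = s, sign(p)
        count += (s != s_prev)
    return count

def kth_eigenvalue(d, e, k, tol=1e-12):
    # k-th largest eigenvalue by bisection, starting from the bound
    # |lambda| <= max_j (|e_j| + |d_j| + |e_{j+1}|) quoted above.
    n = len(d)
    bnd = max(abs(d[j]) + (abs(e[j - 1]) if j > 0 else 0.0)
              + (abs(e[j]) if j < n - 1 else 0.0) for j in range(n))
    a, b = -bnd, bnd
    while b - a > tol:
        m = 0.5 * (a + b)
        if sturm_sign_changes(d, e, m) >= k:
            a = m              # at least k eigenvalues lie above m
        else:
            b = m
    return 0.5 * (a + b)

d, e = [2.0, 2.0, 2.0], [1.0, 1.0]     # eigenvalues: 2 + sqrt(2), 2, 2 - sqrt(2)
print([round(kth_eigenvalue(d, e, k), 6) for k in (1, 2, 3)])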

Note: The algorithm is based on the Sturmian sequence property.
Mathematical Methods in Engineering and Science  Householder Transformation and Tridiagonal Matrices  113, Points to note

◮ A Householder matrix is symmetric and orthogonal. It effects a reflection transformation. ◮ A sequence of Householder transformations can be used to convert a symmetric matrix into a symmetric tridiagonal form. ◮ Eigenvalues of the leading square sub-matrices of a symmetric tridiagonal matrix exhibit a useful interlacing structure. ◮ This property can be used to separate and bracket eigenvalues. ◮ Method of bisection is useful in the separation as well as subsequent determination of the eigenvalues.

Necessary Exercises: 2,4,5 Mathematical Methods in Engineering and Science QR Decomposition Method 114, QR Decomposition Outline QR Iterations Conceptual Basis of QR Method* QR Algorithm with Shift*

QR Decomposition Method  QR Decomposition  QR Iterations  Conceptual Basis of QR Method*  QR Algorithm with Shift*
Mathematical Methods in Engineering and Science  QR Decomposition Method  115, QR Decomposition
Decomposition (or factorization) A = QR into two factors, orthogonal Q and upper-triangular R:
(a) It always exists.
(b) Performing this decomposition is pretty straightforward.
(c) It has a number of properties useful in the solution of the eigenvalue problem.

[a_1 \; \cdots \; a_n] = [q_1 \; \cdots \; q_n] \begin{bmatrix} r_{11} & \cdots & r_{1n} \\ & \ddots & \vdots \\ & & r_{nn} \end{bmatrix}

A simple method based on Gram-Schmidt orthogonalization:
Considering the columnwise equality a_j = \sum_{i=1}^{j} r_{ij} q_i, for j = 1, 2, 3, \cdots, n;

r_{ij} = q_i^T a_j \;\; \forall i < j, \qquad a'_j = a_j - \sum_{i=1}^{j-1} r_{ij} q_i, \qquad r_{jj} = \|a'_j\|;

q_j = \begin{cases} a'_j / r_{jj}, & \text{if } r_{jj} \ne 0; \\ \text{any vector satisfying } q_i^T q_j = \delta_{ij} \text{ for } 1 \le i \le j, & \text{if } r_{jj} = 0. \end{cases}
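A minimal sketch of this Gram-Schmidt construction (Python with NumPy; a full-column-rank A is assumed, so the r_jj = 0 branch is not exercised):

import numpy as np

def gram_schmidt_qr(A):
    """QR decomposition of a full-column-rank matrix by classical Gram-Schmidt."""
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for j in range(n):
        a_prime = A[:, j].astype(float).copy()
        for i in range(j):
            R[i, j] = Q[:, i] @ A[:, j]        # r_ij = q_i^T a_j
            a_prime -= R[i, j] * Q[:, i]
        R[j, j] = np.linalg.norm(a_prime)
        Q[:, j] = a_prime / R[j, j]            # assumes r_jj != 0 (full rank)
    return Q, R

A = np.array([[1., 1., 0.], [1., 0., 1.], [0., 1., 1.]])
Q, R = gram_schmidt_qr(A)
print(np.allclose(Q @ R, A), np.allclose(Q.T @ Q, np.eye(3)))   # True True

In floating point the modified Gram-Schmidt variant (or the Householder route below) is preferred for better orthogonality of Q.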

Mathematical Methods in Engineering and Science  QR Decomposition Method  116, QR Decomposition
Practical method: one-sided Householder transformations, starting with

u_0 = a_1, \quad v_0 = \|u_0\| e_1 \in R^n, \quad w_0 = \frac{u_0 - v_0}{\|u_0 - v_0\|} \quad \text{and} \quad P_0 = H_n = I_n - 2 w_0 w_0^T.

P_{n-2} P_{n-3} \cdots P_2 P_1 P_0 A = P_{n-2} P_{n-3} \cdots P_2 P_1 \begin{bmatrix} \|a_1\| & ** \\ 0 & A_0 \end{bmatrix} = P_{n-2} P_{n-3} \cdots P_2 \begin{bmatrix} r_{11} & * & ** \\ 0 & r_{22} & ** \\ 0 & 0 & A_1 \end{bmatrix} = \cdots = R

With

Q = (P_{n-2} P_{n-3} \cdots P_2 P_1 P_0)^T = P_0 P_1 P_2 \cdots P_{n-3} P_{n-2},

we have Q^T A = R \;\Rightarrow\; A = QR.
Mathematical Methods in Engineering and Science  QR Decomposition Method  117, QR Decomposition

Alternative method, useful for tridiagonal and Hessenberg matrices: one-sided plane rotations
◮ rotations P_{12}, P_{23} etc. to annihilate a_{21}, a_{32} etc. in that sequence: Givens rotation matrices!

Application in solution of a linear system: Q and R factors of a matrix A come handy in the solution of Ax = b

QRx = b \;\Rightarrow\; Rx = Q^T b,

which needs only a sequence of back-substitutions.
Mathematical Methods in Engineering and Science  QR Decomposition Method  118, QR Iterations
Multiplying Q and R factors in reverse,

A' = RQ = Q^T A Q,

an orthogonal similarity transformation.

1. If A is symmetric, then so is A′.

2. If A is in upper Hessenberg form, then so is A′.

3. If A is symmetric tridiagonal, then so is A'.
Complexity of a QR iteration: O(n) for a symmetric tridiagonal matrix, O(n^2) for an upper Hessenberg matrix and O(n^3) for the general case.
Algorithm: Set A_1 = A and for k = 1, 2, 3, \cdots,
◮ decompose A_k = Q_k R_k,
◮ reassemble A_{k+1} = R_k Q_k.
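A minimal sketch of this iteration (Python with NumPy, using numpy.linalg.qr for the decomposition; a fixed iteration count stands in for a proper convergence test):

import numpy as np

def qr_iterations(A, steps=200):
    """Repeated QR iterations: A_{k+1} = R_k Q_k = Q_k^T A_k Q_k."""
    Ak = A.astype(float).copy()
    for _ in range(steps):
        Q, R = np.linalg.qr(Ak)     # decompose A_k = Q_k R_k
        Ak = R @ Q                  # reassemble A_{k+1} = R_k Q_k
    return Ak

A = np.array([[4., 1., 2.], [1., 3., 0.], [2., 0., 1.]])
print(np.round(qr_iterations(A), 4))   # diagonal entries approach the eigenvalues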

As k \to \infty, A_k approaches the quasi-upper-triangular form.
Mathematical Methods in Engineering and Science  QR Decomposition Method  119, QR Iterations
Quasi-upper-triangular form: the leading diagonal carries \lambda_1, \lambda_2, \cdots, \lambda_r (with arbitrary entries above the diagonal), followed by diagonal blocks B_k and trailing 2 \times 2 blocks of the form

\begin{bmatrix} \alpha & \omega \\ -\omega & \beta \end{bmatrix},

with |\lambda_1| > |\lambda_2| > \cdots.
◮ Diagonal blocks B_k correspond to eigenspaces of equal/close (magnitude) eigenvalues.
◮ 2 \times 2 diagonal blocks often correspond to pairs of complex eigenvalues (for non-symmetric matrices).
◮ For symmetric matrices, the quasi-upper-triangular form reduces to quasi-diagonal form.
Mathematical Methods in Engineering and Science  QR Decomposition Method  120, Conceptual Basis of QR Method*
The QR decomposition algorithm operates on the basis of the relative magnitudes of eigenvalues and segregates subspaces.

With k \to \infty,

A^k \, \text{Range}\{e_1\} = \text{Range}\{q_1\} \to \text{Range}\{v_1\}

and (a_1)_k \to Q_k^T A q_1 = \lambda_1 Q_k^T q_1 = \lambda_1 e_1. Further,

A^k \, \text{Range}\{e_1, e_2\} = \text{Range}\{q_1, q_2\} \to \text{Range}\{v_1, v_2\}

and \; (a_2)_k \to Q_k^T A q_2 = \begin{bmatrix} (\lambda_1 - \lambda_2)\alpha_1 \\ \lambda_2 \\ 0 \end{bmatrix}.

And, so on ...
Mathematical Methods in Engineering and Science  QR Decomposition Method  121, QR Algorithm with Shift*
For |\lambda_i| < |\lambda_j|, entry a_{ij} decays through the iterations as (\lambda_i/\lambda_j)^k. With shift,

\bar{A}_k = A_k - \mu_k I; \qquad \bar{A}_k = Q_k R_k, \quad \bar{A}_{k+1} = R_k Q_k;

A_{k+1} = \bar{A}_{k+1} + \mu_k I.

Resulting transformation is

A_{k+1} = R_k Q_k + \mu_k I = Q_k^T \bar{A}_k Q_k + \mu_k I = Q_k^T (A_k - \mu_k I) Q_k + \mu_k I = Q_k^T A_k Q_k.

For the iteration,

\text{convergence ratio} = \frac{\lambda_i - \mu_k}{\lambda_j - \mu_k}.

Question: How to find a suitable value for µk ? Mathematical Methods in Engineering and Science QR Decomposition Method 122, QR Decomposition Points to note QR Iterations Conceptual Basis of QR Method* QR Algorithm with Shift* ◮ QR decomposition can be effected on any square matrix. ◮ Practical methods of QR decomposition use Householder transformations or Givens rotations. ◮ A QR iteration effects a similarity transformation on a matrix, preserving symmetry, Hessenberg structure and also a symmetric tridiagonal form. ◮ A sequence of QR iterations converge to an almost upper-triangular form. ◮ Operations on symmetric tridiagonal and Hessenberg forms are computationally efficient. ◮ QR iterations tend to order subspaces according to the relative magnitudes of eigenvalues. ◮ Eigenvalue shifting is useful as an expediting strategy.

Necessary Exercises: 1,3 Mathematical Methods in Engineering and Science Eigenvalue Problem of General Matrices 123, Introductory Remarks Outline Reduction to Hessenberg Form* QR Algorithm on Hessenberg Matrices* Inverse Iteration Recommendation

Eigenvalue Problem of General Matrices Introductory Remarks Reduction to Hessenberg Form* QR Algorithm on Hessenberg Matrices* Inverse Iteration Recommendation Mathematical Methods in Engineering and Science Eigenvalue Problem of General Matrices 124, Introductory Remarks Introductory Remarks Reduction to Hessenberg Form* QR Algorithm on Hessenberg Matrices* Inverse Iteration Recommendation ◮ A general (non-symmetric) matrix may not be diagonalizable. We attempt to triangularize it. ◮ With real arithmetic, 2 2 diagonal blocks are inevitable — × signifying complex pair of eigenvalues. ◮ Higher computational complexity, slow convergence and lack of numerical stability.

A non-symmetric matrix is usually unbalanced and is prone to higher round-off errors.

Balancing as a pre-processing step: multiplication of a row and division of the corresponding column with the same number, ensuring similarity.

Note: A balanced matrix may get unbalanced again through similarity transformations that are not orthogonal!
Mathematical Methods in Engineering and Science  Eigenvalue Problem of General Matrices  125, Reduction to Hessenberg Form*
Methods to find appropriate similarity transformations:
1. a full sweep of Givens rotations,
2. a sequence of n-2 steps of Householder transformations, and
3. a cycle of coordinated Gaussian elimination.

Method based on Gaussian elimination or elementary transformations: The pre-multiplying matrix corresponding to the elementary row transformation and the post-multiplying matrix corresponding to the matching column transformation must be inverses of each other.

Two kinds of steps:
◮ Pivoting
◮ Elimination
Mathematical Methods in Engineering and Science  Eigenvalue Problem of General Matrices  126, Reduction to Hessenberg Form*
Pivoting step: \bar{A} = P_{rs} A P_{rs} = P_{rs}^{-1} A P_{rs}.
◮ Permutation P_{rs}: interchange of the r-th and s-th columns.
◮ P_{rs}^{-1} = P_{rs}: interchange of the r-th and s-th rows.
◮ Pivot locations: a_{21}, a_{32}, \cdots, a_{n-1,n-2}.
Elimination step: \bar{A} = G_r^{-1} A G_r with elimination matrix

G_r = \begin{bmatrix} I_r & 0 & 0 \\ 0 & 1 & 0 \\ 0 & k & I_{n-r-1} \end{bmatrix} \quad \text{and} \quad G_r^{-1} = \begin{bmatrix} I_r & 0 & 0 \\ 0 & 1 & 0 \\ 0 & -k & I_{n-r-1} \end{bmatrix}.

◮ G_r^{-1}: Row (r+1+i) \leftarrow Row (r+1+i) - k_i \times Row (r+1), for i = 1, 2, 3, \cdots, n-r-1
◮ G_r: Column (r+1) \leftarrow Column (r+1) + \sum_{i=1}^{n-r-1} [k_i \times Column (r+1+i)]
Mathematical Methods in Engineering and Science  Eigenvalue Problem of General Matrices  127, QR Algorithm on Hessenberg Matrices*
QR iterations: O(n^2) operations for the upper Hessenberg form.
Whenever a sub-diagonal zero appears, the matrix is split into two smaller upper Hessenberg blocks, and they are processed separately, thereby reducing the cost drastically.

Particular cases:
◮ a_{n,n-1} \to 0: Accept a_{nn} = \lambda_n as an eigenvalue, continue with the leading (n-1) \times (n-1) sub-matrix.
◮ a_{n-1,n-2} \to 0: Separately find the eigenvalues \lambda_{n-1} and \lambda_n of the trailing 2 \times 2 block \begin{bmatrix} a_{n-1,n-1} & a_{n-1,n} \\ a_{n,n-1} & a_{n,n} \end{bmatrix}, continue with the leading (n-2) \times (n-2) sub-matrix.
Shift strategy: Double QR steps.
Mathematical Methods in Engineering and Science  Eigenvalue Problem of General Matrices  128, Inverse Iteration
Assumption: Matrix A has a complete set of eigenvectors.

(λi )0: a good estimate of an eigenvalue λi of A.

Purpose: To find λi precisely and also to find vi .

Step: Select a random vector y_0 (with \|y_0\| = 1) and solve

[A - (\lambda_i)_0 I] \, y = y_0.

Result: y is a good estimate of v_i, and

(\lambda_i)_1 = (\lambda_i)_0 + \frac{1}{y_0^T y}

is an improvement in the estimate of the eigenvalue.

How to establish the result and work out an algorithm?
Mathematical Methods in Engineering and Science  Eigenvalue Problem of General Matrices  129, Inverse Iteration
With y_0 = \sum_{j=1}^n \alpha_j v_j and y = \sum_{j=1}^n \beta_j v_j, the equation [A - (\lambda_i)_0 I] y = y_0 gives

\sum_{j=1}^n \beta_j [A - (\lambda_i)_0 I] v_j = \sum_{j=1}^n \alpha_j v_j
\;\Rightarrow\; \beta_j [\lambda_j - (\lambda_i)_0] = \alpha_j \;\Rightarrow\; \beta_j = \frac{\alpha_j}{\lambda_j - (\lambda_i)_0}.

\beta_i is typically large, and eigenvector v_i dominates y.

A v_i = \lambda_i v_i gives [A - (\lambda_i)_0 I] v_i = [\lambda_i - (\lambda_i)_0] v_i. Hence,

[\lambda_i - (\lambda_i)_0] \, y \approx [A - (\lambda_i)_0 I] \, y = y_0.

Inner product with y0 gives

[\lambda_i - (\lambda_i)_0] \, y_0^T y \approx 1 \;\Rightarrow\; \lambda_i \approx (\lambda_i)_0 + \frac{1}{y_0^T y}.
Mathematical Methods in Engineering and Science  Eigenvalue Problem of General Matrices  130, Inverse Iteration
Algorithm:

Start with estimate (\lambda_i)_0 and a normalized guess y_0. For k = 0, 1, 2, \cdots
◮ Solve [A - (\lambda_i)_k I] \, y = y_k.
◮ Normalize y_{k+1} = y / \|y\|.
◮ Improve (\lambda_i)_{k+1} = (\lambda_i)_k + 1/(y_k^T y).
◮ If \|y_{k+1} - y_k\| < \epsilon, terminate.
Important issues
◮ Update the eigenvalue once in a while, not at every iteration.
◮ Use some acceptable small number as an artificial pivot.
◮ The method may not converge for a matrix without a complete set of eigenvectors, or for one having complex eigenvalues.
◮ Repeated eigenvalues may inhibit the process.
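A minimal sketch of inverse iteration (Python with NumPy; for simplicity it re-solves with numpy.linalg.solve each pass and updates the eigenvalue estimate at every iteration, although the algorithm above advises updating only occasionally):

import numpy as np

def inverse_iteration(A, lam0, tol=1e-10, max_iter=100):
    """Refine an eigenvalue estimate lam0 of A and find the eigenvector."""
    n = A.shape[0]
    rng = np.random.default_rng(0)
    yk = rng.standard_normal(n)
    yk /= np.linalg.norm(yk)
    lam = lam0
    for _ in range(max_iter):
        y = np.linalg.solve(A - lam * np.eye(n), yk)   # [A - lam I] y = y_k
        lam = lam + 1.0 / (yk @ y)                     # eigenvalue improvement
        y_next = y / np.linalg.norm(y)
        # accept convergence up to an overall sign flip of the eigenvector
        if min(np.linalg.norm(y_next - yk), np.linalg.norm(y_next + yk)) < tol:
            yk = y_next
            break
        yk = y_next
    return lam, yk

A = np.array([[4., 1., 0.], [1., 3., 1.], [0., 1., 2.]])
lam, v = inverse_iteration(A, 4.5)
print(lam, np.linalg.norm(A @ v - lam * v))            # residual near zero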

Mathematical Methods in Engineering and Science  Eigenvalue Problem of General Matrices  131, Recommendation
Table: Eigenvalue problem: summary of methods

◮ General, small (up to 4): characteristic polynomial from the definition; polynomial root finding (eigenvalues); solution of linear systems (eigenvectors).
◮ Symmetric, intermediate (say, 4-12): Jacobi sweeps or selective Jacobi rotations; alternatively, tridiagonalization (Givens rotation or Householder method), Sturm sequence property with bracketing and bisection (rough eigenvalues), and inverse iteration (eigenvalue improvement and eigenvectors).
◮ Symmetric, large: tridiagonalization (usually Householder method), then QR decomposition iterations.
◮ Non-symmetric, intermediate or large: balancing, and then reduction to Hessenberg form (above methods or Gaussian elimination); QR decomposition iterations (eigenvalues); inverse iteration (eigenvectors).
◮ General, very large (selective requirement): power method, shift and deflation.

Necessary Exercises: 1,2 Mathematical Methods in Engineering and Science Singular Value Decomposition 133, SVD Theorem and Construction Outline Properties of SVD Pseudoinverse and Solution of Linear Systems Optimality of Pseudoinverse Solution SVD Algorithm

Singular Value Decomposition  SVD Theorem and Construction  Properties of SVD  Pseudoinverse and Solution of Linear Systems  Optimality of Pseudoinverse Solution  SVD Algorithm
Mathematical Methods in Engineering and Science  Singular Value Decomposition  134, SVD Theorem and Construction
Eigenvalue problem: A = U \Lambda V^{-1} with U = V.
Do not ask for similarity. Focus on the form of the decomposition.
A guaranteed decomposition, with orthogonal U, V and non-negative diagonal entries in the middle factor, is obtained by allowing U \ne V:

A = U \Sigma V^T \quad \text{such that} \quad U^T A V = \Sigma

SVD Theorem: For any real matrix A \in R^{m \times n}, there exist orthogonal matrices U \in R^{m \times m} and V \in R^{n \times n} such that

U^T A V = \Sigma \in R^{m \times n}

is a diagonal matrix, with diagonal entries \sigma_1, \sigma_2, \cdots \ge 0, obtained by appending the square diagonal matrix diag(\sigma_1, \sigma_2, \cdots, \sigma_p) with (m-p) zero rows or (n-p) zero columns, where p = \min(m, n).

Singular values: \sigma_1, \sigma_2, \cdots, \sigma_p. A similar result holds for complex matrices.
Mathematical Methods in Engineering and Science  Singular Value Decomposition  135, SVD Theorem and Construction
Question: How to construct U, V and \Sigma?
For A \in R^{m \times n},

A^T A = (V \Sigma^T U^T)(U \Sigma V^T) = V \Sigma^T \Sigma V^T = V \Lambda V^T,

where \Lambda = \Sigma^T \Sigma is an n \times n diagonal matrix, and \Sigma carries diag(\sigma_1, \sigma_2, \cdots, \sigma_p), appended with zero rows or zero columns as needed.
Determine V and \Lambda. Work out \Sigma and we have

A = U \Sigma V^T \;\Leftrightarrow\; AV = U \Sigma.

This provides a proof as well!
Mathematical Methods in Engineering and Science  Singular Value Decomposition  136, SVD Theorem and Construction
From AV = U \Sigma, determine the columns of U.

1. Column A v_k = \sigma_k u_k, with \sigma_k \ne 0: determine column u_k.
   The columns so developed are bound to be mutually orthonormal: verify u_i^T u_j = \frac{1}{\sigma_i \sigma_j} (A v_i)^T (A v_j) = \delta_{ij}.
2. Column A v_k = \sigma_k u_k, with \sigma_k = 0: u_k is left indeterminate (free).

3. In the case of m < n, identically zero columns A v_k = 0 for k > m: no corresponding columns of U to determine.
4. In the case of m > n, there will be (m - n) columns of U left indeterminate.
Extend the columns of U to an orthonormal basis.
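A minimal sketch of this construction for the full-column-rank case with m >= n (Python with NumPy; svd_via_eig is our own illustrative name, building V from the eigenvectors of A^T A, the leading columns of U as A v_k / sigma_k, and the remaining columns by Gram-Schmidt against unit vectors):

import numpy as np

def svd_via_eig(A):
    """Construct A = U Sigma V^T from the eigen-decomposition of A^T A."""
    m, n = A.shape
    lam, V = np.linalg.eigh(A.T @ A)        # A^T A = V Lambda V^T
    order = np.argsort(lam)[::-1]           # decreasing eigenvalues
    lam, V = lam[order], V[:, order]
    sigma = np.sqrt(np.maximum(lam, 0.0))
    U = (A @ V) / sigma                     # u_k = A v_k / sigma_k (sigma_k != 0)
    if m > n:                               # extend U to an orthonormal basis
        extra = []
        for e in np.eye(m):
            w = e - U @ (U.T @ e)           # remove components along u_1..u_n
            for q in extra:
                w = w - (q @ w) * q
            nrm = np.linalg.norm(w)
            if nrm > 1e-10:
                extra.append(w / nrm)
            if len(extra) == m - n:
                break
        U = np.column_stack([U] + extra)
    Sigma = np.zeros((m, n))
    Sigma[:n, :n] = np.diag(sigma)
    return U, Sigma, V

A = np.array([[3., 1.], [1., 3.], [1., 1.]])
U, S, V = svd_via_eig(A)
print(np.allclose(U @ S @ V.T, A))          # True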

All three factors in the decomposition are constructed, as desired. Mathematical Methods in Engineering and Science Singular Value Decomposition 137, SVD Theorem and Construction Properties of SVD Properties of SVD Pseudoinverse and Solution of Linear Systems Optimality of Pseudoinverse Solution For a given matrix, the SVD is unique up toSVD Algorithm (a) the same permutations of columns of U, columns of V and diagonal elements of Σ; (b) the same orthonormal linear combinations among columns of U and columns of V, corresponding to equal singular values; and (c) arbitrary orthonormal linear combinations among columns of U or columns of V, corresponding to zero or non-existent singular values. Ordering of the singular values:

\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_r > 0, \quad \text{and} \quad \sigma_{r+1} = \sigma_{r+2} = \cdots = \sigma_p = 0.

Rank(A) = Rank(\Sigma) = r
The rank of a matrix is the same as the number of its non-zero singular values.
Mathematical Methods in Engineering and Science  Singular Value Decomposition  138, Properties of SVD

Ax = U \Sigma V^T x = U \Sigma y = [u_1 \cdots u_r \; u_{r+1} \cdots u_m] \begin{bmatrix} \sigma_1 y_1 \\ \vdots \\ \sigma_r y_r \\ 0 \end{bmatrix} = \sigma_1 y_1 u_1 + \sigma_2 y_2 u_2 + \cdots + \sigma_r y_r u_r

has non-zero components along only the first r columns of U.
U gives an orthonormal basis for the co-domain such that

Range(A) = \; < u_1, u_2, \cdots, u_r >.

With V^T x = y, v_k^T x = y_k, and

x = y_1 v_1 + y_2 v_2 + \cdots + y_r v_r + y_{r+1} v_{r+1} + \cdots + y_n v_n.

V gives an orthonormal basis for the domain such that

Null(A) = \; < v_{r+1}, v_{r+2}, \cdots, v_n >.
Mathematical Methods in Engineering and Science  Singular Value Decomposition  139, Properties of SVD
In basis V, v = c_1 v_1 + c_2 v_2 + \cdots + c_n v_n = Vc, and the norm is given by

\|A\|^2 = \max_v \frac{\|Av\|^2}{\|v\|^2} = \max_v \frac{v^T A^T A v}{v^T v} = \max_c \frac{c^T V^T A^T A V c}{c^T V^T V c} = \max_c \frac{c^T \Sigma^T \Sigma c}{c^T c} = \max_c \frac{\sum_k \sigma_k^2 c_k^2}{\sum_k c_k^2}.

\|A\|^2 = \max_c \frac{\sum_k \sigma_k^2 c_k^2}{\sum_k c_k^2} = \sigma_{max}^2 \;\Rightarrow\; \|A\| = \sigma_{max} = \sigma_1

For a non-singular square matrix,

A^{-1} = (U \Sigma V^T)^{-1} = V \Sigma^{-1} U^T = V \, diag\!\left( \frac{1}{\sigma_1}, \frac{1}{\sigma_2}, \cdots, \frac{1}{\sigma_n} \right) U^T.

Then \|A^{-1}\| = 1/\sigma_{min}, and the condition number is

\kappa(A) = \|A\| \, \|A^{-1}\| = \frac{\sigma_{max}}{\sigma_{min}}.
Mathematical Methods in Engineering and Science  Singular Value Decomposition  140, Properties of SVD

Arranging singular values in decreasing order, with Rank(A)= r,

U = [U_r \; \bar{U}] \quad \text{and} \quad V = [V_r \; \bar{V}],

A = U \Sigma V^T = [U_r \; \bar{U}] \begin{bmatrix} \Sigma_r & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} V_r^T \\ \bar{V}^T \end{bmatrix}, \quad \text{or} \quad A = U_r \Sigma_r V_r^T = \sum_{k=1}^{r} \sigma_k u_k v_k^T.

Efficient storage and reconstruction!
Mathematical Methods in Engineering and Science  Singular Value Decomposition  141, Pseudoinverse and Solution of Linear Systems
Generalized inverse: G is called a generalized inverse or g-inverse of A if, for b \in Range(A), Gb is a solution of Ax = b.
The Moore-Penrose inverse or the pseudoinverse:

A^\# = (U \Sigma V^T)^\# = (V^T)^\# \Sigma^\# U^\# = V \Sigma^\# U^T

With \Sigma = \begin{bmatrix} \Sigma_r & 0 \\ 0 & 0 \end{bmatrix}, \quad \Sigma^\# = \begin{bmatrix} \Sigma_r^{-1} & 0 \\ 0 & 0 \end{bmatrix}; \quad \text{that is,} \quad \Sigma^\# \text{ carries } diag(\rho_1, \rho_2, \cdots, \rho_p), \text{ padded with zeros,}

where \rho_k = \begin{cases} 1/\sigma_k, & \text{for } \sigma_k \ne 0 \text{ (or for } |\sigma_k| > \epsilon\text{)}; \\ 0, & \text{for } \sigma_k = 0 \text{ (or for } |\sigma_k| \le \epsilon\text{)}. \end{cases}
Mathematical Methods in Engineering and Science  Singular Value Decomposition  142, Pseudoinverse and Solution of Linear Systems
Inverse-like facets and beyond
◮ (A^\#)^\# = A.
◮ If A is invertible, then A^\# = A^{-1}.
◮ A^\# b gives the correct unique solution.
◮ If Ax = b is an under-determined consistent system, then A^\# b selects the solution x^* with the minimum norm.
◮ If the system is inconsistent, then A^\# b minimizes the least square error \|Ax - b\|.
◮ If the minimizer of \|Ax - b\| is not unique, then it picks up the minimizer with the minimum norm \|x\| among such minimizers.
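A minimal sketch of this pseudoinverse solution (Python with NumPy, computing the SVD with numpy.linalg.svd and zeroing the reciprocals of negligible singular values exactly as in the rho_k rule above):

import numpy as np

def pinv_solve(A, b, eps=1e-12):
    """Minimum-norm least-squares solution x* = A# b via the SVD."""
    U, sigma, Vt = np.linalg.svd(A, full_matrices=False)
    safe = np.where(sigma > 0, sigma, 1.0)          # avoid division by zero
    rho = np.where(sigma > eps * sigma[0], 1.0 / safe, 0.0)
    return Vt.T @ (rho * (U.T @ b))                 # x* = V Sigma# U^T b

# inconsistent, rank-deficient system: minimum-norm least-squares solution
A = np.array([[1., 2.], [2., 4.], [0., 0.]])
b = np.array([1., 1., 1.])
x = pinv_solve(A, b)
print(x, np.allclose(x, np.linalg.pinv(A) @ b))     # agrees with np.linalg.pinv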

Contrast with Tikhonov regularization: the pseudoinverse solution is suited for diagnosis (of redundancy and inconsistency), while Tikhonov's solution offers continuity of the solution over variable A and computational efficiency.
Mathematical Methods in Engineering and Science  Singular Value Decomposition  143, Optimality of Pseudoinverse Solution
Pseudoinverse solution of Ax = b:

x^* = V \Sigma^\# U^T b = \sum_{k=1}^{r} \rho_k v_k u_k^T b = \sum_{k=1}^{r} (u_k^T b / \sigma_k) v_k

Minimize

E(x) = \frac{1}{2} (Ax - b)^T (Ax - b) = \frac{1}{2} x^T A^T A x - x^T A^T b + \frac{1}{2} b^T b

Condition of vanishing gradient:

\frac{\partial E}{\partial x} = 0 \;\Rightarrow\; A^T A x = A^T b
\;\Rightarrow\; V (\Sigma^T \Sigma) V^T x = V \Sigma^T U^T b
\;\Rightarrow\; (\Sigma^T \Sigma) V^T x = \Sigma^T U^T b
\;\Rightarrow\; \sigma_k^2 \, v_k^T x = \sigma_k \, u_k^T b
\;\Rightarrow\; v_k^T x = u_k^T b / \sigma_k \quad \text{for } k = 1, 2, 3, \cdots, r.
Mathematical Methods in Engineering and Science  Singular Value Decomposition  144, Optimality of Pseudoinverse Solution

With \bar{V} = [v_{r+1} \; v_{r+2} \; \cdots \; v_n], then

x = \sum_{k=1}^{r} (u_k^T b / \sigma_k) v_k + \bar{V} y = x^* + \bar{V} y.

How to minimize \|x\|^2 subject to E(x) being minimum?
Minimize E_1(y) = \|x^* + \bar{V} y\|^2.

Since x^* and \bar{V} y are mutually orthogonal,

E_1(y) = \|x^* + \bar{V} y\|^2 = \|x^*\|^2 + \|\bar{V} y\|^2

is minimum when \bar{V} y = 0, i.e. y = 0.
Mathematical Methods in Engineering and Science  Singular Value Decomposition  145, Optimality of Pseudoinverse Solution
Anatomy of the optimization through SVD:
Using basis V for the domain and U for the co-domain, the variables are transformed as V^T x = y and U^T b = c. Then,

Ax = b \;\Rightarrow\; U \Sigma V^T x = b \;\Rightarrow\; \Sigma V^T x = U^T b \;\Rightarrow\; \Sigma y = c.

A completely decoupled system!
Usable components: y_k = c_k / \sigma_k for k = 1, 2, 3, \cdots, r.
For k > r,
◮ completely redundant information (c_k = 0), or
◮ purely unresolvable conflict (c_k \ne 0).
SVD extracts this pure redundancy/inconsistency. Setting \rho_k = 0 for k > r rejects it wholesale!
At the same time, \|y\| is minimized, and hence \|x\| too.
Mathematical Methods in Engineering and Science  Singular Value Decomposition  146, Points to note

◮ SVD provides a complete orthogonal decomposition of the domain and co-domain of a linear transformation, separating out functionally distinct subspaces.
◮ It offers a complete diagnosis of the pathologies of systems of linear equations.
◮ Pseudoinverse solutions of linear systems satisfy meaningful optimality requirements in several contexts.
◮ With the existence of SVD guaranteed, many important results can be established in a straightforward manner.

Necessary Exercises: 2,4,5,6,7 Mathematical Methods in Engineering and Science Vector Spaces: Fundamental Concepts* 147, Group Outline Field Vector Space Linear Transformation Isomorphism Inner Product Space Function Space

Vector Spaces: Fundamental Concepts*  Group  Field  Vector Space  Linear Transformation  Isomorphism  Inner Product Space  Function Space
Mathematical Methods in Engineering and Science  Vector Spaces: Fundamental Concepts*  148, Group
A set G and a binary operation, say '+', fulfilling
Closure: a + b \in G \;\; \forall a, b \in G
Associativity: a + (b + c) = (a + b) + c \;\; \forall a, b, c \in G
Existence of identity: \exists 0 \in G such that \forall a \in G, \; a + 0 = a = 0 + a
Existence of inverse: \forall a \in G, \; \exists (-a) \in G such that a + (-a) = 0 = (-a) + a
Examples: (Z, +), (R, +), (Q - \{0\}, \cdot), 2 \times 5 real matrices, rotations etc.

◮ Commutative group. Examples: (Z, +), (R, +), (Q - \{0\}, \cdot), (F, +).

◮ Subgroup
Mathematical Methods in Engineering and Science  Vector Spaces: Fundamental Concepts*  149, Field
A set F and two binary operations, say '+' and '\cdot', satisfying
Group property for addition: (F, +) is a commutative group. (Denote the identity element of this group as '0'.)
Group property for multiplication: (F - \{0\}, \cdot) is a commutative group. (Denote the identity element of this group as '1'.)
Distributivity: a \cdot (b + c) = a \cdot b + a \cdot c \;\; \forall a, b, c \in F.

Concept of field: abstraction of a number system

Examples: (Q, +, ), (R, +, ), (C, +, ) etc. · · ·

◮ Subfield
Mathematical Methods in Engineering and Science  Vector Spaces: Fundamental Concepts*  150, Vector Space
A vector space is defined by
◮ a field F of 'scalars',
◮ a commutative group V of 'vectors', and
◮ a binary operation between F and V, that may be called 'scalar multiplication', such that \forall \alpha, \beta \in F, \; \forall a, b \in V, the following conditions hold.
Closure: \alpha a \in V.
Identity: 1a = a.
Associativity: (\alpha \beta) a = \alpha (\beta a).
Scalar distributivity: \alpha (a + b) = \alpha a + \alpha b.
Vector distributivity: (\alpha + \beta) a = \alpha a + \beta a.

Examples: R^n, C^n, m \times n real matrices etc.
Field \leftrightarrow Number system; Vector space \leftrightarrow Space
Mathematical Methods in Engineering and Science  Vector Spaces: Fundamental Concepts*  151, Vector Space
Suppose V is a vector space. Take a vector \xi_1 \ne 0 in it.
Then, the vectors linearly dependent on \xi_1 are \alpha_1 \xi_1 \in V \;\; \forall \alpha_1 \in F.
Question: Are the elements of V exhausted?
If not, then take \xi_2 \in V, linearly independent from \xi_1.
Then, \alpha_1 \xi_1 + \alpha_2 \xi_2 \in V \;\; \forall \alpha_1, \alpha_2 \in F.
Question: Are the elements of V exhausted now?
\cdots \cdots \cdots
Question: Will this process ever end?
Suppose it does: finite dimensional vector space.
Mathematical Methods in Engineering and Science  Vector Spaces: Fundamental Concepts*  152, Vector Space
Finite dimensional vector space:
Suppose the above process ends after n choices of linearly independent vectors.

\chi = \alpha_1 \xi_1 + \alpha_2 \xi_2 + \cdots + \alpha_n \xi_n

Then, ◮ n: dimension of the vector space

◮ ordered set \xi_1, \xi_2, \cdots, \xi_n: a basis
◮ \alpha_1, \alpha_2, \cdots, \alpha_n \in F: coordinates of \chi in that basis
R^n, R^m etc.: vector spaces over the field of real numbers

◮ Subspace
Mathematical Methods in Engineering and Science  Vector Spaces: Fundamental Concepts*  153, Linear Transformation
A mapping T : V \to W satisfying

T(\alpha a + \beta b) = \alpha T(a) + \beta T(b) \;\; \forall \alpha, \beta \in F \text{ and } \forall a, b \in V,

where V and W are vector spaces over the field F.

Question: How to describe the linear transformation T?

◮ For V, basis \xi_1, \xi_2, \cdots, \xi_n
◮ For W, basis \eta_1, \eta_2, \cdots, \eta_m
\xi_1 \in V gets mapped to T(\xi_1) \in W.

T(\xi_1) = a_{11} \eta_1 + a_{21} \eta_2 + \cdots + a_{m1} \eta_m

Similarly, enumerate T(\xi_j) = \sum_{i=1}^{m} a_{ij} \eta_i.

Matrix A = [a_1 \; a_2 \; \cdots \; a_n] codes this description!
Mathematical Methods in Engineering and Science  Vector Spaces: Fundamental Concepts*  154, Linear Transformation
A general element \chi of V can be expressed as

\chi = x_1 \xi_1 + x_2 \xi_2 + \cdots + x_n \xi_n

Coordinates in a column: x = [x_1 \; x_2 \; \cdots \; x_n]^T
Mapping:

T(\chi) = x_1 T(\xi_1) + x_2 T(\xi_2) + \cdots + x_n T(\xi_n),

with coordinates Ax, as we know!

Summary: ◮ basis vectors of V get mapped to vectors in W whose coordinates are listed in columns of A, and ◮ a vector of V, having its coordinates in x, gets mapped to a vector in W whose coordinates are obtained from Ax. Mathematical Methods in Engineering and Science Vector Spaces: Fundamental Concepts* 155, Group Linear Transformation Field Vector Space Linear Transformation Isomorphism Inner Product Space Understanding: Function Space ◮ Vector χ is an actual object in the set V and the column x Rn is merely a list of its coordinates. ∈ ◮ T : V W is the linear transformation and the matrix A → simply stores coefficients needed to describe it. ◮ By changing bases of V and W, the same vector χ and the same linear transformation are now expressed by different x and A, respectively.

Matrix representation emerges as the natural description of a linear transformation between two vector spaces.

Exercise: Set of all T : V W form a vector space of their own!! → Analyze and describe that vector space. Mathematical Methods in Engineering and Science Vector Spaces: Fundamental Concepts* 156, Group Isomorphism Field Vector Space Linear Transformation Consider T : V W that establishes a one-to-oneIsomorphism correspondence. → Inner Product Space ◮ Linear transformation T defines a one-oneFunction onto Space mapping, which is invertible. ◮ dim V = dim W ◮ Inverse linear transformation T 1 : W V − → ◮ T defines (is) an isomorphism. ◮ Vector spaces V and W are isomorphic to each other. ◮ Isomorphism is an equivalence relation. V and W are equivalent! If we need to perform some operations on vectors in one vector space, we may as well 1. transform the vectors to another vector space through an isomorphism, 2. conduct the required operations there, and 3. map the results back to the original space through the inverse. Mathematical Methods in Engineering and Science Vector Spaces: Fundamental Concepts* 157, Group Isomorphism Field Vector Space Linear Transformation Isomorphism Consider vector spaces V and W over the sameInner Product field SpaceF and of the Function Space same dimension n.

Question: Can we define an isomorphism between them?

Answer: Of course. As many as we want!

The underlying field and the dimension together completely specify a vector space, up to an isomorphism.

◮ All n-dimensional vector spaces over the field F are isomorphic to one another.
◮ In particular, they are all isomorphic to F^n.
◮ The representation (columns) can be considered as the objects (vectors) themselves.
Mathematical Methods in Engineering and Science  Vector Spaces: Fundamental Concepts*  158, Inner Product Space
Inner product (a, b) in a real or complex vector space: a scalar function p : V \times V \to F satisfying
Closure: \forall a, b \in V, \; (a, b) \in F
Associativity: (\alpha a, b) = \alpha (a, b)
Distributivity: (a + b, c) = (a, c) + (b, c)
Conjugate commutativity: (b, a) = \overline{(a, b)}
Positive definiteness: (a, a) \ge 0; and (a, a) = 0 iff a = 0
Note: The property of conjugate commutativity forces (a, a) to be real.

Examples: a^T b and a^T W b in real spaces, a^* b in complex spaces, etc.

Inner product space: a vector space possessing an inner product ◮ Euclidean space: over R ◮ Unitary space: over C Mathematical Methods in Engineering and Science Vector Spaces: Fundamental Concepts* 159, Group Inner Product Space Field Vector Space Linear Transformation Isomorphism Inner Product Space Inner products bring in ideas of angle and lengthFunction Space in the geometry of vector spaces.

Orthogonality: (a, b) = 0

Norm: : V R, such that a = (a, a) k · k → k k Associativity: αa = α a k k | | k k p Positive definiteness: a > 0 for a = 0 and 0 = 0 k k 6 k k Triangle inequality: a + b a + b k k ≤ k k k k Cauchy-Schwarz inequality: (a, b) a b | | ≤ k k k k A distance function or metric: d : V V R such that V × → d (a, b)= a b V k − k Mathematical Methods in Engineering and Science Vector Spaces: Fundamental Concepts* 160, Group Function Space Field Vector Space Linear Transformation Isomorphism Inner Product Space Suppose we decide to represent a continuousFunction function Space f : [a, b] R by the listing → T vf = [f (x ) f (x ) f (x ) f (xN )] 1 2 3 · · ·

with a = x_1 < x_2 < x_3 < \cdots < x_N = b.
Note: The 'true' representation will require N to be infinite!

Here, vf is a real column vector. Do such vectors form a vector space?

Correspondingly, does the set of continuous functions F over [a, b] form a vector space?

infinite dimensional vector space
Mathematical Methods in Engineering and Science  Vector Spaces: Fundamental Concepts*  161, Function Space
Vector space of continuous functions:
First, (F, +) is a commutative group.
Next, with \alpha, \beta \in R, \; \forall x \in [a, b],
◮ if f(x) \in R, then \alpha f(x) \in R
◮ 1 \cdot f(x) = f(x)
◮ (\alpha \beta) f(x) = \alpha [\beta f(x)]
◮ \alpha [f_1(x) + f_2(x)] = \alpha f_1(x) + \alpha f_2(x)
◮ (\alpha + \beta) f(x) = \alpha f(x) + \beta f(x)

◮ Thus, F forms a vector space over R.
◮ Every function in this space is an (infinite dimensional) vector.
◮ Listing of values is just an obvious basis.
Mathematical Methods in Engineering and Science  Vector Spaces: Fundamental Concepts*  162, Function Space
Linear dependence of (non-zero) functions f_1 and f_2:
◮ f_2(x) = k f_1(x) for all x in the domain
◮ k_1 f_1(x) + k_2 f_2(x) = 0 \; \forall x, with k_1 and k_2 not both zero.
Linear independence: k_1 f_1(x) + k_2 f_2(x) = 0 \; \forall x \;\Rightarrow\; k_1 = k_2 = 0
In general,

◮ Functions f_1, f_2, f_3, \cdots, f_n \in F are linearly dependent if \exists k_1, k_2, k_3, \cdots, k_n, not all zero, such that

k_1 f_1(x) + k_2 f_2(x) + k_3 f_3(x) + \cdots + k_n f_n(x) = 0 \;\; \forall x \in [a, b].

◮ k_1 f_1(x) + k_2 f_2(x) + k_3 f_3(x) + \cdots + k_n f_n(x) = 0 \;\; \forall x \in [a, b] \;\Rightarrow\; k_1, k_2, k_3, \cdots, k_n = 0 means that the functions f_1, f_2, f_3, \cdots, f_n are linearly independent.

Example: the functions 1, x, x^2, x^3, \cdots are a set of linearly independent functions.
Incidentally, this set is a commonly used basis.
Mathematical Methods in Engineering and Science  Vector Spaces: Fundamental Concepts*  163, Function Space
Inner product: For functions f(x) and g(x) in F, the usual inner product between the corresponding vectors:

(v_f, v_g) = v_f^T v_g = f(x_1) g(x_1) + f(x_2) g(x_2) + f(x_3) g(x_3) + \cdots

Weighted inner product: (v_f, v_g) = v_f^T W v_g = \sum_i w_i f(x_i) g(x_i)

For the functions,

(f, g) = \int_a^b w(x) f(x) g(x) \, dx
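A minimal numerical sketch of such an inner product (Python with NumPy; the uniform weight w(x) = 1 and a simple trapezoidal rule are our own illustrative choices):

import numpy as np

def inner_product(f, g, a, b, w=lambda x: np.ones_like(x), n=20001):
    # (f, g) = integral over [a, b] of w(x) f(x) g(x) dx, trapezoidal rule
    x = np.linspace(a, b, n)
    y = w(x) * f(x) * g(x)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * (x[1] - x[0])))

print(inner_product(np.sin, np.cos, 0.0, 2 * np.pi))   # ~ 0: orthogonal
print(inner_product(np.sin, np.sin, 0.0, 2 * np.pi))   # ~ pi: ||sin||^2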

◮ Orthogonality: (f, g) = \int_a^b w(x) f(x) g(x) \, dx = 0
◮ Norm: \|f\| = \sqrt{ \int_a^b w(x) [f(x)]^2 \, dx }
◮ Orthonormal basis: (f_j, f_k) = \int_a^b w(x) f_j(x) f_k(x) \, dx = \delta_{jk} \;\; \forall j, k
Mathematical Methods in Engineering and Science  Vector Spaces: Fundamental Concepts*  164, Points to note
◮ Matrix algebra provides a natural description for vector spaces and linear transformations.
◮ Through isomorphisms, R^n can represent all n-dimensional real vector spaces.
◮ Through the definition of an inner product, a vector space incorporates key geometric features of physical space.
◮ Continuous functions over an interval constitute an infinite dimensional vector space, complete with the usual notions.

Necessary Exercises: 6,7 Mathematical Methods in Engineering and Science Topics in Multivariate Calculus 165, Derivatives in Multi-Dimensional Spaces Outline Taylor’s Series Chain Rule and Change of Variables Numerical Differentiation An Introduction to Tensors*

Topics in Multivariate Calculus  Derivatives in Multi-Dimensional Spaces  Taylor's Series  Chain Rule and Change of Variables  Numerical Differentiation  An Introduction to Tensors*
Mathematical Methods in Engineering and Science  Topics in Multivariate Calculus  166, Derivatives in Multi-Dimensional Spaces
Gradient

\nabla f(x) \equiv \frac{\partial f}{\partial x}(x) = \left[ \frac{\partial f}{\partial x_1} \;\; \frac{\partial f}{\partial x_2} \;\; \cdots \;\; \frac{\partial f}{\partial x_n} \right]^T

Up to the first order, \delta f \approx [\nabla f(x)]^T \delta x.

Directional derivative

\frac{\partial f}{\partial d} = \lim_{\alpha \to 0} \frac{f(x + \alpha d) - f(x)}{\alpha}

Relationships:

\frac{\partial f}{\partial e_j} = \frac{\partial f}{\partial x_j}, \quad \frac{\partial f}{\partial d} = d^T \nabla f(x) \quad \text{and} \quad \frac{\partial f}{\partial \hat{g}} = \|\nabla f(x)\|

Among all unit vectors, taken as directions,
◮ the rate of change of a function in a direction is the same as the component of its gradient along that direction, and
◮ the rate of change along the direction of the gradient is the greatest and is equal to the magnitude of the gradient.
Mathematical Methods in Engineering and Science  Topics in Multivariate Calculus  167, Derivatives in Multi-Dimensional Spaces
Hessian

H(x) = \frac{\partial^2 f}{\partial x^2} = \begin{bmatrix} \frac{\partial^2 f}{\partial x_1^2} & \frac{\partial^2 f}{\partial x_2 \partial x_1} & \cdots & \frac{\partial^2 f}{\partial x_n \partial x_1} \\ \frac{\partial^2 f}{\partial x_1 \partial x_2} & \frac{\partial^2 f}{\partial x_2^2} & \cdots & \frac{\partial^2 f}{\partial x_n \partial x_2} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial^2 f}{\partial x_1 \partial x_n} & \frac{\partial^2 f}{\partial x_2 \partial x_n} & \cdots & \frac{\partial^2 f}{\partial x_n^2} \end{bmatrix}

Meaning: \nabla f(x + \delta x) - \nabla f(x) \approx \left[ \frac{\partial^2 f}{\partial x^2}(x) \right] \delta x

For a vector function h(x), Jacobian

J(x) = \frac{\partial h}{\partial x}(x) = \left[ \frac{\partial h}{\partial x_1} \;\; \frac{\partial h}{\partial x_2} \;\; \cdots \;\; \frac{\partial h}{\partial x_n} \right]

Underlying notion: \delta h \approx [J(x)] \, \delta x
Mathematical Methods in Engineering and Science  Topics in Multivariate Calculus  168, Taylor's Series
Taylor's formula in the remainder form:

f(x + \delta x) = f(x) + f'(x) \delta x + \frac{1}{2!} f''(x) \delta x^2 + \cdots + \frac{1}{(n-1)!} f^{(n-1)}(x) \delta x^{n-1} + \frac{1}{n!} f^{(n)}(x_c) \delta x^n,

where x_c = x + t \delta x with 0 \le t \le 1.
Mean value theorem: existence of x_c
Taylor's series:

f(x + \delta x) = f(x) + f'(x) \delta x + \frac{1}{2!} f''(x) \delta x^2 + \cdots

For a multivariate function,

f(x + \delta x) = f(x) + [\delta x^T \nabla] f(x) + \frac{1}{2!} [\delta x^T \nabla]^2 f(x) + \cdots + \frac{1}{(n-1)!} [\delta x^T \nabla]^{n-1} f(x) + \frac{1}{n!} [\delta x^T \nabla]^n f(x + t \delta x)

f(x + \delta x) \approx f(x) + [\nabla f(x)]^T \delta x + \frac{1}{2} \delta x^T \left[ \frac{\partial^2 f}{\partial x^2}(x) \right] \delta x
Mathematical Methods in Engineering and Science  Topics in Multivariate Calculus  169, Chain Rule and Change of Variables
For f(x), the total differential:

df = [\nabla f(x)]^T dx = \frac{\partial f}{\partial x_1} dx_1 + \frac{\partial f}{\partial x_2} dx_2 + \cdots + \frac{\partial f}{\partial x_n} dx_n

Ordinary derivative or total derivative:

\frac{df}{dt} = [\nabla f(x)]^T \frac{dx}{dt}

For f(t, x(t)), total derivative: \frac{df}{dt} = \frac{\partial f}{\partial t} + [\nabla f(x)]^T \frac{dx}{dt}

For f(v, x(v)) = f(v_1, v_2, \cdots, v_m, x_1(v), x_2(v), \cdots, x_n(v)),

\frac{\partial f}{\partial v_i}(v, x(v)) = \left. \frac{\partial f}{\partial v_i} \right|_x + \frac{\partial f}{\partial x}(v, x) \frac{\partial x}{\partial v_i} = \left. \frac{\partial f}{\partial v_i} \right|_x + [\nabla_x f(v, x)]^T \frac{\partial x}{\partial v_i}

\Rightarrow \nabla f(v, x(v)) = \nabla_v f(v, x) + \left[ \frac{\partial x}{\partial v}(v) \right]^T \nabla_x f(v, x)
Mathematical Methods in Engineering and Science  Topics in Multivariate Calculus  170, Chain Rule and Change of Variables
Let x \in R^{m+n} and h(x) \in R^m. Partition x \in R^{m+n} into z \in R^n and w \in R^m.
The system of equations h(x) = 0 means h(z, w) = 0.
Question: Can we work out the function w = w(z)? Solution of m equations in m unknowns?
Question: If we have one valid pair (z, w), then is it possible to develop w = w(z) in the local neighbourhood?
Answer: Yes, if the Jacobian \partial h / \partial w is non-singular.
Implicit function theorem

\frac{\partial h}{\partial z} + \frac{\partial h}{\partial w} \frac{\partial w}{\partial z} = 0 \;\Rightarrow\; \frac{\partial w}{\partial z} = -\left[ \frac{\partial h}{\partial w} \right]^{-1} \frac{\partial h}{\partial z}

Up to first order, w = w_1 + \left[ \frac{\partial w}{\partial z} \right] (z - z_1).

I = \iiint_A f(x, y, z) \, dx \, dy \, dz,

I = \iiint_{\bar{A}} f(x(u, v, w), y(u, v, w), z(u, v, w)) \, |J(u, v, w)| \, du \, dv \, dw,

where the Jacobian determinant is |J(u, v, w)| = \left| \frac{\partial(x, y, z)}{\partial(u, v, w)} \right|.
For the differential

P_1(x) dx_1 + P_2(x) dx_2 + \cdots + P_n(x) dx_n,

we ask: does there exist a function f(x),
◮ of which this is the differential;
◮ or equivalently, the gradient of which is P(x)?
Perfect or exact differential: can be integrated to find f.
Mathematical Methods in Engineering and Science  Topics in Multivariate Calculus  172, Chain Rule and Change of Variables
Differentiation under the integral sign
How to differentiate \phi(x) = \phi(x, u(x), v(x)) = \int_{u(x)}^{v(x)} f(x, t) \, dt?
In the expression

\phi'(x) = \frac{\partial \phi}{\partial x} + \frac{\partial \phi}{\partial u} \frac{du}{dx} + \frac{\partial \phi}{\partial v} \frac{dv}{dx},

we have \frac{\partial \phi}{\partial x} = \int_u^v \frac{\partial f}{\partial x}(x, t) \, dt.
Now, considering a function F(x, t) such that f(x, t) = \frac{\partial F(x, t)}{\partial t},

\phi(x) = \int_u^v \frac{\partial F}{\partial t}(x, t) \, dt = F(x, v) - F(x, u) \equiv \phi(x, u, v).

Using \frac{\partial \phi}{\partial v} = f(x, v) and \frac{\partial \phi}{\partial u} = -f(x, u),

\phi'(x) = \int_{u(x)}^{v(x)} \frac{\partial f}{\partial x}(x, t) \, dt + f(x, v) \frac{dv}{dx} - f(x, u) \frac{du}{dx}.

Leibnitz rule
Mathematical Methods in Engineering and Science  Topics in Multivariate Calculus  173, Numerical Differentiation
Forward difference formula

f'(x) = \frac{f(x + \delta x) - f(x)}{\delta x} + O(\delta x)

Central difference formulae

f'(x) = \frac{f(x + \delta x) - f(x - \delta x)}{2 \delta x} + O(\delta x^2)

f''(x) = \frac{f(x + \delta x) - 2 f(x) + f(x - \delta x)}{\delta x^2} + O(\delta x^2)

For the gradient \nabla f(x) and the Hessian,

\frac{\partial f}{\partial x_i}(x) = \frac{1}{2\delta} [f(x + \delta e_i) - f(x - \delta e_i)],

\frac{\partial^2 f}{\partial x_i^2}(x) = \frac{f(x + \delta e_i) - 2 f(x) + f(x - \delta e_i)}{\delta^2}, \quad \text{and}

\frac{\partial^2 f}{\partial x_i \partial x_j}(x) = \frac{f(x + \delta e_i + \delta e_j) - f(x + \delta e_i - \delta e_j) - f(x - \delta e_i + \delta e_j) + f(x - \delta e_i - \delta e_j)}{4 \delta^2}
Mathematical Methods in Engineering and Science  Topics in Multivariate Calculus  174, An Introduction to Tensors*
◮ Indicial notation and summation convention
◮ Kronecker delta and Levi-Civita symbol
◮ Rotation of reference axes
◮ Tensors of order zero, or scalars
◮ Contravariant and covariant tensors of order one, or vectors
◮ Cartesian tensors
◮ Cartesian tensors of order two
◮ Higher order tensors
◮ Elementary tensor operations
◮ Symmetric tensors
◮ Tensor fields
◮ \cdots \cdots \cdots
Mathematical Methods in Engineering and Science  Topics in Multivariate Calculus  175, Points to note

◮ Gradient, Hessian, Jacobian and the Taylor’s series ◮ Partial and total gradients ◮ Implicit functions ◮ Leibnitz rule ◮ Numerical derivatives
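Following up on the central-difference formulas above, a minimal sketch of numerical gradient and Hessian computation (Python with NumPy; num_grad and num_hessian are our own illustrative names):

import numpy as np

def num_grad(f, x, d=1e-5):
    """Central-difference gradient: (f(x + d e_i) - f(x - d e_i)) / (2 d)."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x); e[i] = d
        g[i] = (f(x + e) - f(x - e)) / (2 * d)
    return g

def num_hessian(f, x, d=1e-4):
    """Central-difference Hessian, four-point formula for mixed partials."""
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        ei = np.zeros(n); ei[i] = d
        H[i, i] = (f(x + ei) - 2 * f(x) + f(x - ei)) / d**2
        for j in range(i + 1, n):
            ej = np.zeros(n); ej[j] = d
            H[i, j] = H[j, i] = (f(x + ei + ej) - f(x + ei - ej)
                                 - f(x - ei + ej) + f(x - ei - ej)) / (4 * d**2)
    return H

f = lambda x: x[0]**2 * x[1] + np.sin(x[1])
x0 = np.array([1.0, 2.0])
print(num_grad(f, x0))       # ~ [2 x1 x2, x1^2 + cos(x2)] = [4, 1 + cos 2]
print(num_hessian(f, x0))    # ~ [[2 x2, 2 x1], [2 x1, -sin(x2)]]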

Necessary Exercises: 2,3,4,8 Mathematical Methods in Engineering and Science Vector Analysis: Curves and Surfaces 176, Recapitulation of Basic Notions Outline Curves in Space Surfaces*

Vector Analysis: Curves and Surfaces Recapitulation of Basic Notions Curves in Space Surfaces* Mathematical Methods in Engineering and Science Vector Analysis: Curves and Surfaces 177, Recapitulation of Basic Notions Recapitulation of Basic Notions Curves in Space Surfaces* Dot and cross products: their implications Scalar and vector triple products Differentiation rules Interface with matrix algebra:

a \cdot x = a^T x,
(a \cdot x) \, b = (b a^T) x, \quad \text{and}
a \times x = \begin{cases} a_\perp^T x, & \text{for 2-d vectors} \\ \tilde{a} x, & \text{for 3-d vectors} \end{cases}

where

a_\perp = \begin{bmatrix} -a_y \\ a_x \end{bmatrix} \quad \text{and} \quad \tilde{a} = \begin{bmatrix} 0 & -a_z & a_y \\ a_z & 0 & -a_x \\ -a_y & a_x & 0 \end{bmatrix}
Mathematical Methods in Engineering and Science  Vector Analysis: Curves and Surfaces  178, Curves in Space
Explicit equation: y = y(x) and z = z(x)
Implicit equation: F(x, y, z) = 0 = G(x, y, z)
Parametric equation:

r(t) = x(t) \, i + y(t) \, j + z(t) \, k \equiv [x(t) \; y(t) \; z(t)]^T

◮ Tangent vector: r'(t)
◮ Speed: \|r'\|
◮ Unit tangent: u(t) = r'/\|r'\|
◮ Length of the curve: l = \int_a^b \|dr\| = \int_a^b \sqrt{r' \cdot r'} \, dt

Arc length function

s(t) = \int_a^t \sqrt{r'(\tau) \cdot r'(\tau)} \, d\tau

with ds = \|dr\| = \sqrt{dx^2 + dy^2 + dz^2} and \frac{ds}{dt} = \|r'\|.

Curve r(t) is regular if r'(t) \ne 0 \; \forall t.
◮ Reparametrization with respect to a parameter t^*, some strictly increasing function of t

Observations
◮ Arc length s(t) is obviously a monotonically increasing function.
◮ For a regular curve, ds/dt \ne 0.
◮ Then, s(t) has an inverse function.
◮ The inverse t(s) reparametrizes the curve as r(t(s)).

For a unit speed curve r(s), \|r'(s)\| = 1 and the unit tangent is

u(s) = r'(s).
Mathematical Methods in Engineering and Science  Vector Analysis: Curves and Surfaces  180, Curves in Space
Curvature: The rate at which the direction changes with arc length:

\kappa(s) = \|u'(s)\| = \|r''(s)\|

Unit principal normal:

p = \frac{1}{\kappa} u'(s)

With general parametrization,

r''(t) = \frac{d\|r'\|}{dt} u(t) + \|r'\| \frac{du}{dt} = \frac{d\|r'\|}{dt} u(t) + \kappa(t) \|r'\|^2 p(t)

◮ Osculating plane
◮ Centre of curvature C, with AC = \rho = 1/\kappa
◮ Radius of curvature \rho

Figure: Tangent and normal to a curve
Mathematical Methods in Engineering and Science  Vector Analysis: Curves and Surfaces  181, Curves in Space
Binormal: b = u \times p
Serret-Frenet frame: Right-handed triad \{u, p, b\}
◮ Osculating, rectifying and normal planes

Torsion: Twisting out of the osculating plane ◮ rate of change of b with respect to arc length s

b' = u' \times p + u \times p' = \kappa(s) \, p \times p + u \times p' = u \times p'

What is p'?

Taking p′ = σu + τb,

b' = u \times (\sigma u + \tau b) = -\tau p.

Torsion of the curve

\tau(s) = -p(s) \cdot b'(s)
Mathematical Methods in Engineering and Science  Vector Analysis: Curves and Surfaces  182, Curves in Space

We have u' and b'. What is p'? From p = b \times u,

p' = b' \times u + b \times u' = -\tau \, p \times u + b \times \kappa p = -\kappa u + \tau b.

Serret-Frenet formulae

u' = \kappa p,
p' = -\kappa u + \tau b,
b' = -\tau p

The intrinsic representation of a curve is complete with \kappa(s) and \tau(s).
The arc-length parametrization of a curve is completely determined by its curvature \kappa(s) and torsion \tau(s) functions, except for a rigid body motion.
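A minimal numerical sketch of curvature and torsion from a general parametrization, using the standard formulas kappa = ||r' x r''|| / ||r'||^3 and tau = (r' x r'') . r''' / ||r' x r''||^2, which are well-known consequences of the Serret-Frenet relations though not written out in the slides (Python with NumPy; the circular helix is our illustrative test case):

import numpy as np

def curvature_torsion(r1, r2, r3):
    """Curvature and torsion from the first three derivatives of r(t)."""
    c = np.cross(r1, r2)
    kappa = np.linalg.norm(c) / np.linalg.norm(r1)**3
    tau = (c @ r3) / (c @ c)
    return kappa, tau

# Helix r(t) = (a cos t, a sin t, b t): kappa = a/(a^2+b^2), tau = b/(a^2+b^2)
a, b, t = 2.0, 1.0, 0.7
r1 = np.array([-a*np.sin(t),  a*np.cos(t), b])    # r'(t)
r2 = np.array([-a*np.cos(t), -a*np.sin(t), 0.])   # r''(t)
r3 = np.array([ a*np.sin(t), -a*np.cos(t), 0.])   # r'''(t)
print(curvature_torsion(r1, r2, r3))              # (0.4, 0.2), independent of t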

Mathematical Methods in Engineering and Science  Vector Analysis: Curves and Surfaces  183, Surfaces*
Parametric surface equation:

r(u, v) = x(u, v) \, i + y(u, v) \, j + z(u, v) \, k \equiv [x(u, v) \; y(u, v) \; z(u, v)]^T

Tangent vectors r_u and r_v define a tangent plane T.

N = r_u \times r_v is normal to the surface, and the unit normal is

n = \frac{N}{\|N\|} = \frac{r_u \times r_v}{\|r_u \times r_v\|}.

Question: How does n vary over the surface?

Information on local geometry: curvature tensor ◮ Normal and principal curvatures ◮ Local shape: convex, concave, saddle, cylindrical, planar Mathematical Methods in Engineering and Science Vector Analysis: Curves and Surfaces 184, Recapitulation of Basic Notions Points to note Curves in Space Surfaces*

◮ Parametric equation is the general and most convenient representation of curves and surfaces. ◮ Arc length is the natural parameter and the Serret-Frenet frame offers the natural frame of reference. ◮ Curvature and torsion are the only inherent properties of a curve. ◮ The local shape of a surface patch can be understood through an analysis of its curvature tensor.

Necessary Exercises: 1,2,3,6 Mathematical Methods in Engineering and Science Scalar and Vector Fields 185, Differential Operations on Field Functions Outline Integral Operations on Field Functions Integral Theorems Closure

Scalar and Vector Fields  Differential Operations on Field Functions  Integral Operations on Field Functions  Integral Theorems  Closure
Mathematical Methods in Engineering and Science  Scalar and Vector Fields  186, Differential Operations on Field Functions
Scalar point function or scalar field \phi(x, y, z): R^3 \to R
Vector point function or vector field V(x, y, z): R^3 \to R^3
The del or nabla (\nabla) operator

\nabla \equiv i \frac{\partial}{\partial x} + j \frac{\partial}{\partial y} + k \frac{\partial}{\partial z}

◮ \nabla is a vector,
◮ it signifies a differentiation, and
◮ it operates from the left side.
Laplacian operator:

\nabla^2 \equiv \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2} = \nabla \cdot \nabla

Laplace's equation:

\frac{\partial^2 \phi}{\partial x^2} + \frac{\partial^2 \phi}{\partial y^2} + \frac{\partial^2 \phi}{\partial z^2} = 0

Solution of \nabla^2 \phi = 0: harmonic function
Mathematical Methods in Engineering and Science  Scalar and Vector Fields  187, Differential Operations on Field Functions
Gradient

grad \, \phi \equiv \nabla \phi = \frac{\partial \phi}{\partial x} i + \frac{\partial \phi}{\partial y} j + \frac{\partial \phi}{\partial z} k

is orthogonal to the level surfaces.
Flow fields: -\nabla \phi gives the velocity vector.
Divergence

For V(x, y, z) \equiv V_x(x, y, z) \, i + V_y(x, y, z) \, j + V_z(x, y, z) \, k,

div \, V \equiv \nabla \cdot V = \frac{\partial V_x}{\partial x} + \frac{\partial V_y}{\partial y} + \frac{\partial V_z}{\partial z}

Divergence of \rho V: flow rate of mass per unit volume out of the control volume.
A similar relation holds between field and flux in electromagnetics.
Mathematical Methods in Engineering and Science  Scalar and Vector Fields  188, Differential Operations on Field Functions
Curl

curl \, V \equiv \nabla \times V = \begin{vmatrix} i & j & k \\ \frac{\partial}{\partial x} & \frac{\partial}{\partial y} & \frac{\partial}{\partial z} \\ V_x & V_y & V_z \end{vmatrix}

= \left( \frac{\partial V_z}{\partial y} - \frac{\partial V_y}{\partial z} \right) i + \left( \frac{\partial V_x}{\partial z} - \frac{\partial V_z}{\partial x} \right) j + \left( \frac{\partial V_y}{\partial x} - \frac{\partial V_x}{\partial y} \right) k

If V = \omega \times r represents the velocity field, then the angular velocity is

\omega = \frac{1}{2} \, curl \, V.

Curl represents rotationality.

Connections between electric and magnetic fields!
Mathematical Methods in Engineering and Science  Scalar and Vector Fields  189, Differential Operations on Field Functions
Composite operations
The operator \nabla is linear:

\nabla (\phi + \psi) = \nabla \phi + \nabla \psi,
\nabla \cdot (V + W) = \nabla \cdot V + \nabla \cdot W, \quad \text{and}
\nabla \times (V + W) = \nabla \times V + \nabla \times W.

Considering the products \phi \psi, \phi V, V \cdot W and V \times W:

\nabla (\phi \psi) = \psi \nabla \phi + \phi \nabla \psi
\nabla \cdot (\phi V) = \nabla \phi \cdot V + \phi \, \nabla \cdot V
\nabla \times (\phi V) = \nabla \phi \times V + \phi \, \nabla \times V
\nabla (V \cdot W) = (W \cdot \nabla) V + (V \cdot \nabla) W + W \times (\nabla \times V) + V \times (\nabla \times W)
\nabla \cdot (V \times W) = W \cdot (\nabla \times V) - V \cdot (\nabla \times W)
\nabla \times (V \times W) = (W \cdot \nabla) V - W (\nabla \cdot V) - (V \cdot \nabla) W + V (\nabla \cdot W)

Note: the expression V \cdot \nabla \equiv V_x \frac{\partial}{\partial x} + V_y \frac{\partial}{\partial y} + V_z \frac{\partial}{\partial z} is an operator!
Mathematical Methods in Engineering and Science  Scalar and Vector Fields  190, Differential Operations on Field Functions
Second order differential operators

div grad φ ( φ) ≡ ∇ · ∇ curl grad φ ( φ) ≡ ∇× ∇ div curl V ( V) ≡ ∇ · ∇× curl curl V ( V) ≡ ∇× ∇× grad div V ( V) ≡ ∇ ∇ · Important identities:

div grad φ ( φ)= 2φ ≡ ∇ · ∇ ∇ curl grad φ ( φ)= 0 ≡ ∇× ∇ div curl V ( V) = 0 ≡ ∇ · ∇× curl curl V ( V) ≡ ∇× ∇× = ( V) 2V = grad div V 2V ∇ ∇ · −∇ −∇ Mathematical Methods in Engineering and Science Scalar and Vector Fields 191, Differential Operations on Field Functions Integral Operations on Field FunctionsIntegral Operations on Field Functions Integral Theorems Line integral along curve C: Closure
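The two vanishing identities are easy to check symbolically; here is a minimal sketch using sympy.vector, where the sample fields φ and V are assumed purely for illustration.

    from sympy.vector import CoordSys3D, gradient, divergence, curl

    N = CoordSys3D('N')
    x, y, z = N.x, N.y, N.z

    phi = x**2 * y * z                       # assumed sample scalar field
    V = x*y*N.i + y*z*N.j + z*x*N.k          # assumed sample vector field

    print(curl(gradient(phi)))               # zero vector: curl grad = 0
    print(divergence(curl(V)))               # 0: div curl = 0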

Mathematical Methods in Engineering and Science Scalar and Vector Fields 191, Integral Operations on Field Functions
Line integral along curve C:
    I = ∫_C V·dr = ∫_C (Vx dx + Vy dy + Vz dz)
For a parametrized curve r(t), t ∈ [a, b],
    I = ∫_C V·dr = ∫_a^b V·(dr/dt) dt.
For simple (non-intersecting) paths contained in a simply connected region, equivalent statements:
◮ Vx dx + Vy dy + Vz dz is an exact differential.
◮ V = ∇φ for some φ(r).
◮ ∫_C V·dr is independent of path.
◮ Circulation ∮_C V·dr = 0 around any closed path.
◮ curl V = 0.
◮ Field V is conservative.
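A sketch of evaluating such a line integral numerically through the parametrization, for an assumed conservative test field V = (y, x, 0) = ∇(xy) along an assumed helical path; since V is a gradient, the result depends only on the values of φ = xy at the endpoints.

    import numpy as np
    from scipy.integrate import quad

    def V(p):                                  # V = (y, x, 0) = grad(x*y)
        x, y, z = p
        return np.array([y, x, 0.0])

    r = lambda t: np.array([np.cos(t), np.sin(t), t])        # assumed helix
    drdt = lambda t: np.array([-np.sin(t), np.cos(t), 1.0])

    I, _ = quad(lambda t: V(r(t)) @ drdt(t), 0.0, np.pi/2)
    print(I)   # ~0: phi = x*y vanishes at both endpoints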

Mathematical Methods in Engineering and Science Scalar and Vector Fields 192, Integral Operations on Field Functions
Surface integral over an orientable surface S:
    J = ∬_S V·dS = ∬_S V·n dS
For r(u, w), dS = ‖r_u × r_w‖ du dw, and
    J = ∬_S V·n dS = ∬_R V·(r_u × r_w) du dw.
Volume integrals of point functions over a region T:
    M = ∭_T φ dv   and   F = ∭_T V dv
Mathematical Methods in Engineering and Science Scalar and Vector Fields 193, Integral Theorems
Green's theorem in the plane
R: closed bounded region in the xy-plane
C: boundary, a piecewise smooth closed curve
F1(x, y) and F2(x, y): first order continuous functions
    ∮_C (F1 dx + F2 dy) = ∬_R (∂F2/∂x − ∂F1/∂y) dx dy
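The theorem can be checked numerically; a minimal sketch with an assumed field F1 = −y, F2 = x on the unit disk, where both sides should come out as 2π.

    import numpy as np
    from scipy.integrate import quad, dblquad

    # Boundary term: C is the unit circle, parametrized by t in [0, 2*pi].
    def boundary(t):
        x, y = np.cos(t), np.sin(t)
        dx, dy = -np.sin(t), np.cos(t)
        return (-y) * dx + x * dy              # F1 dx + F2 dy

    lhs, _ = quad(boundary, 0.0, 2*np.pi)

    # Area term: dF2/dx - dF1/dy = 2, integrated over the unit disk.
    rhs, _ = dblquad(lambda y, x: 2.0, -1.0, 1.0,
                     lambda x: -np.sqrt(1 - x**2),
                     lambda x: np.sqrt(1 - x**2))
    print(lhs, rhs)    # both approximately 6.2832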

Figure: Regions for proof of Green's theorem in the plane: (a) simple domain, (b) general domain
Mathematical Methods in Engineering and Science Scalar and Vector Fields 194, Integral Theorems
Proof:

    ∬_R ∂F1/∂y dx dy = ∫_a^b ∫_{y1(x)}^{y2(x)} ∂F1/∂y dy dx
                     = ∫_a^b [F1{x, y2(x)} − F1{x, y1(x)}] dx
                     = −∫_b^a F1{x, y2(x)} dx − ∫_a^b F1{x, y1(x)} dx
                     = −∮_C F1(x, y) dx
    ∬_R ∂F2/∂x dx dy = ∫_c^d ∫_{x1(y)}^{x2(y)} ∂F2/∂x dx dy = ∮_C F2(x, y) dy
Difference: ∮_C (F1 dx + F2 dy) = ∬_R (∂F2/∂x − ∂F1/∂y) dx dy
In alternative form, ∮_C F·dr = ∬_R (curl F)·k dx dy.
Mathematical Methods in Engineering and Science Scalar and Vector Fields 195, Integral Theorems
Gauss's divergence theorem
T: a closed bounded region
S: boundary, a piecewise smooth closed orientable surface
F(x, y, z): a first order continuous vector function
    ∭_T div F dv = ∬_S F·n dS
Interpretation of the definition extended to finite domains.
    ∭_T (∂Fx/∂x + ∂Fy/∂y + ∂Fz/∂z) dx dy dz = ∬_S (Fx nx + Fy ny + Fz nz) dS
To show: ∭_T ∂Fz/∂z dx dy dz = ∬_S Fz nz dS
First consider a region, the boundary of which is intersected at most twice by any line parallel to a coordinate axis.
Lower and upper segments of S: z = z1(x, y) and z = z2(x, y).
    ∭_T ∂Fz/∂z dx dy dz = ∬_R [∫_{z1}^{z2} ∂Fz/∂z dz] dx dy
                        = ∬_R [Fz{x, y, z2(x, y)} − Fz{x, y, z1(x, y)}] dx dy
R: projection of T on the xy-plane
Projection of area element of the upper segment: nz dS = dx dy
Projection of area element of the lower segment: nz dS = −dx dy
Thus, ∭_T ∂Fz/∂z dx dy dz = ∬_S Fz nz dS. The sum of three such components leads to the result.

Extension to arbitrary regions by a suitable subdivision of domain! Mathematical Methods in Engineering and Science Scalar and Vector Fields 197, Differential Operations on Field Functions Integral Theorems Integral Operations on Field Functions Integral Theorems Closure

Green's identities (theorem)
Region T and boundary S: as required in the premises of Gauss's theorem
φ(x, y, z) and ψ(x, y, z): second order continuous scalar functions
    ∬_S φ ∇ψ·n dS = ∭_T (φ ∇²ψ + ∇φ·∇ψ) dv
    ∬_S (φ ∇ψ − ψ ∇φ)·n dS = ∭_T (φ ∇²ψ − ψ ∇²φ) dv
Direct consequences of Gauss's theorem.
To establish, apply Gauss's divergence theorem on φ∇ψ, and then on ψ∇φ as well.
Mathematical Methods in Engineering and Science Scalar and Vector Fields 198, Integral Theorems
Stokes's theorem
S: a piecewise smooth surface
C: boundary, a piecewise smooth simple closed curve
F(x, y, z): first order continuous vector function
    ∮_C F·dr = ∬_S curl F·n dS
n: unit normal given by the right hand clasp rule on C
For F(x, y, z) = Fx(x, y, z)i,
    ∮_C Fx dx = ∬_S (∂Fx/∂z j − ∂Fx/∂y k)·n dS = ∬_S (∂Fx/∂z ny − ∂Fx/∂y nz) dS.
First, consider a surface S intersected at most once by any line parallel to a coordinate axis.
Mathematical Methods in Engineering and Science Scalar and Vector Fields 199, Integral Theorems
Represent S as z = z(x, y) ≡ f(x, y).
Unit normal n = [nx ny nz]ᵀ is proportional to [∂f/∂x  ∂f/∂y  −1]ᵀ, so ny = −(∂z/∂y) nz.
    ∬_S (∂Fx/∂z ny − ∂Fx/∂y nz) dS = −∬_S (∂Fx/∂y + ∂Fx/∂z ∂z/∂y) nz dS
Over the projection R of S on the xy-plane, φ(x, y) = Fx(x, y, z(x, y)).
    LHS = −∬_R ∂φ/∂y dx dy = ∮_{C′} φ(x, y) dx = ∮_C Fx dx
Similar results for Fy(x, y, z)j and Fz(x, y, z)k.
Mathematical Methods in Engineering and Science Scalar and Vector Fields 200, Closure
Points to note
◮ The 'del' operator ∇
◮ Gradient, divergence and curl
◮ Composite and second order operators
◮ Line, surface and volume integrals
◮ Green's, Gauss's and Stokes's theorems
◮ Applications in physics (and engineering)

Necessary Exercises: 1,2,3,6,7 Mathematical Methods in Engineering and Science Polynomial Equations 201, Basic Principles Outline Analytical Solution General Polynomial Equations Two Simultaneous Equations Elimination Methods* Advanced Techniques*

Polynomial Equations: Basic Principles, Analytical Solution, General Polynomial Equations, Two Simultaneous Equations, Elimination Methods*, Advanced Techniques*
Mathematical Methods in Engineering and Science Polynomial Equations 202, Basic Principles
Fundamental theorem of algebra
    p(x) = a0 x^n + a1 x^{n−1} + a2 x^{n−2} + ··· + a_{n−1} x + a_n
has exactly n roots x1, x2, ···, x_n; with
    p(x) = a0 (x − x1)(x − x2)(x − x3) ··· (x − x_n).
In general, roots are complex.
Multiplicity: A root of p(x) with multiplicity k satisfies
    p(x) = p′(x) = p′′(x) = ··· = p^(k−1)(x) = 0.
◮ Descartes' rule of signs
◮ Bracketing and separation
◮ Synthetic division and deflation
    p(x) = f(x) q(x) + r(x)
Mathematical Methods in Engineering and Science Polynomial Equations 203, Analytical Solution
Quadratic equation
    ax² + bx + c = 0  ⇒  x = [−b ± √(b² − 4ac)]/(2a)
Method of completing the square:

    x² + (b/a)x + (b/2a)² = (b/2a)² − c/a  ⇒  (x + b/2a)² = (b² − 4ac)/(4a²)
Cubic equations (Cardano):
    x³ + ax² + bx + c = 0
Completing the cube? Substituting y = x + k,
    y³ + (a − 3k)y² + (b − 2ak + 3k²)y + (c − bk + ak² − k³) = 0.
Choose the shift k = a/3.
Mathematical Methods in Engineering and Science Polynomial Equations 204, Analytical Solution
    y³ + py + q = 0
Assuming y = u + v, we have y³ = u³ + v³ + 3uv(u + v).
    uv = −p/3
    u³ + v³ = −q
and hence (u³ − v³)² = q² + 4p³/27.
Solution:
    u³, v³ = −q/2 ± √(q²/4 + p³/27) = A, B (say).
    u = A1, A1ω, A1ω²,  and  v = B1, B1ω, B1ω²
    y1 = A1 + B1, y2 = A1ω + B1ω² and y3 = A1ω² + B1ω.
At least one of the solutions is real!
Mathematical Methods in Engineering and Science Polynomial Equations 205, Analytical Solution
Quartic equations (Ferrari)

    x⁴ + ax³ + bx² + cx + d = 0  ⇒  (x² + (a/2)x)² = (a²/4 − b)x² − cx − d
For a perfect square,
    (x² + (a/2)x + y/2)² = (a²/4 − b + y)x² + (ay/2 − c)x + (y²/4 − d)
Under what condition will the new RHS be a perfect square?
    (ay/2 − c)² − 4(a²/4 − b + y)(y²/4 − d) = 0
Resolvent of a quartic:
    y³ − by² + (ac − 4d)y + (4bd − a²d − c²) = 0
Mathematical Methods in Engineering and Science Polynomial Equations 206, Analytical Solution
Procedure
◮ Frame the cubic resolvent.
◮ Solve this cubic equation.
◮ Pick up one solution as y.
◮ Insert this y to form (x² + (a/2)x + y/2)² = (ex + f)².
◮ Split it into two quadratic equations as x² + (a/2)x + y/2 = ±(ex + f).
◮ Solve each of the two quadratic equations to obtain a total of four solutions of the original quartic equation.
Mathematical Methods in Engineering and Science Polynomial Equations 207, General Polynomial Equations
Analytical solution of the general quintic equation?
Galois: group theory: A general quintic, or higher degree, equation is not solvable by radicals.

General polynomial equations: iterative algorithms
◮ Methods for nonlinear equations
◮ Methods specific to polynomial equations
Solution through the eigenvalue problem: the roots of a polynomial are the same as the eigenvalues of its companion matrix.

Companion matrix:
    [ 0  0  ···  0  −a_n     ]
    [ 1  0  ···  0  −a_{n−1} ]
    [ .  .  ···  .   .       ]
    [ 0  0  ···  0  −a_2     ]
    [ 0  0  ···  1  −a_1     ]
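This route needs nothing beyond an eigenvalue solver; a minimal sketch follows, where the cubic is an assumed example with roots 1, 2, 3.

    import numpy as np

    # p(x) = x^3 - 6x^2 + 11x - 6, i.e. a1, a2, a3 = -6, 11, -6 (monic, a0 = 1)
    a = [-6.0, 11.0, -6.0]
    n = len(a)
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)          # subdiagonal of ones
    C[:, -1] = -np.array(a[::-1])       # last column: -a_n, ..., -a_1

    print(np.linalg.eigvals(C))         # approximately [3., 2., 1.]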

Mathematical Methods in Engineering and Science Polynomial Equations 208, General Polynomial Equations
Bairstow's method to separate out factors of small degree. Attempt to separate real linear factors?
Real quadratic factors
Synthetic division with a guess factor x² + q1 x + q2:
    remainder r1 x + r2
r = [r1 r2]ᵀ is a vector function of q = [q1 q2]ᵀ.
Iterate over (q1, q2) to make (r1, r2) zero.
Newton-Raphson (Jacobian based) iteration: see exercise.
Mathematical Methods in Engineering and Science Polynomial Equations 209, Two Simultaneous Equations
    p1 x² + q1 xy + r1 y² + u1 x + v1 y + w1 = 0
    p2 x² + q2 xy + r2 y² + u2 x + v2 y + w2 = 0
Rearranging,
    a1 x² + b1 x + c1 = 0
    a2 x² + b2 x + c2 = 0
Cramer's rule:
    x²/(b1 c2 − b2 c1) = −x/(a1 c2 − a2 c1) = 1/(a1 b2 − a2 b1)
    ⇒  x = −(b1 c2 − b2 c1)/(a1 c2 − a2 c1) = −(a1 c2 − a2 c1)/(a1 b2 − a2 b1)
Consistency condition:
    (a1 b2 − a2 b1)(b1 c2 − b2 c1) − (a1 c2 − a2 c1)² = 0
A 4th degree equation in y.
Mathematical Methods in Engineering and Science Polynomial Equations 210, Elimination Methods*
The method operates similarly even if the degrees of the original equations in y are higher.

What about the degree of the eliminant equation?

Two equations in x and y of degrees n1 and n2: the x-eliminant is an equation of degree n1 n2 in y.
Maximum number of solutions: Bezout number = n1 n2
Note: Deficient systems may have fewer solutions.
Classical methods of elimination
◮ Sylvester's dialytic method
◮ Bezout's method
Mathematical Methods in Engineering and Science Polynomial Equations 211, Advanced Techniques*
Three or more independent equations in as many unknowns?
◮ Cascaded elimination? Objections!
◮ Exploitation of special structures through clever heuristics (mechanisms kinematics literature)
◮ Gröbner basis representation (algebraic geometry)
◮ Continuation or homotopy method by Morgan
For solving the system f(x) = 0, identify another structurally similar system g(x) = 0 with known solutions and construct the parametrized system
    h(x) = t f(x) + (1 − t) g(x) = 0 for t ∈ [0, 1].
Track each solution from t = 0 to t = 1.
Mathematical Methods in Engineering and Science Polynomial Equations 212, Points to note
◮ Roots of cubic and quartic polynomials by the methods of Cardano and Ferrari
◮ For higher degree polynomials,
    ◮ Bairstow's method: a clever implementation of the Newton-Raphson method for polynomials
    ◮ Eigenvalue problem of a companion matrix
◮ Reduction of a system of polynomial equations in two unknowns by elimination

Necessary Exercises: 1,3,4,6 Mathematical Methods in Engineering and Science Solution of Nonlinear Equations and Systems 213, Methods for Nonlinear Equations Outline Systems of Nonlinear Equations Closure

Solution of Nonlinear Equations and Systems Methods for Nonlinear Equations Systems of Nonlinear Equations Closure Mathematical Methods in Engineering and Science Solution of Nonlinear Equations and Systems 214, Methods for Nonlinear Equations Methods for Nonlinear Equations Systems of Nonlinear Equations Closure

Algebraic and transcendental equations in the form

f (x) = 0

Practical problem: to find one real root (zero) of f (x)

Examples of f (x): x³ − 2x + 5, x³ ln x − sin x + 2, etc.
If f (x) is continuous, then
Bracketing: f (x0)f (x1) < 0 ⇒ there must be a root of f (x) between x0 and x1.
Bisection: Check the sign of f ((x0 + x1)/2). Replace either x0 or x1 with (x0 + x1)/2.
Mathematical Methods in Engineering and Science Solution of Nonlinear Equations and Systems 215, Methods for Nonlinear Equations
Fixed point iteration
Rearrange f (x) = 0 in the form x = g(x).
Example: For f (x) = tan x − x³ − 2, possible rearrangements:
    g1(x) = tan⁻¹(x³ + 2)
    g2(x) = (tan x − 2)^(1/3)
    g3(x) = (tan x − 2)/x²
Iteration: x_{k+1} = g(x_k)
Figure: Fixed point iteration
If x∗ is the unique solution in interval J and |g′(x)| ≤ h < 1 in J, then iteration from any x0 ∈ J converges to x∗.
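A minimal sketch of the scheme, using the rearrangement g1 above; the starting point and tolerance are assumed.

    import math

    def fixed_point(g, x0, tol=1e-10, max_iter=100):
        """Iterate x_{k+1} = g(x_k) until successive iterates agree."""
        x = x0
        for _ in range(max_iter):
            x_new = g(x)
            if abs(x_new - x) < tol:
                return x_new
            x = x_new
        return x

    # g1(x) = arctan(x^3 + 2) for f(x) = tan x - x^3 - 2
    root = fixed_point(lambda x: math.atan(x**3 + 2), x0=1.0)
    print(root, math.tan(root) - root**3 - 2)    # residual near zero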

Mathematical Methods in Engineering and Science Solution of Nonlinear Equations and Systems 216, Methods for Nonlinear Equations
Newton-Raphson method
First order Taylor series: f (x + δx) ≈ f (x) + f ′(x)δx
From f (x_k + δx) = 0, δx = −f (x_k)/f ′(x_k)
Iteration: x_{k+1} = x_k − f (x_k)/f ′(x_k)
Convergence criterion: |f (x)f ′′(x)| < |f ′(x)|²
Geometrically: draw the tangent to f (x) and take its x-intercept.
Figure: Newton-Raphson method
Merit: quadratic speed of convergence: |x_{k+1} − x∗| = c |x_k − x∗|²
Demerit: If the starting point is not appropriate, haphazard wandering, oscillations or outright divergence!
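A sketch of the iteration; the example function is the cubic quoted earlier, and the starting point is assumed.

    def newton(f, fprime, x0, tol=1e-12, max_iter=50):
        """Newton-Raphson: x_{k+1} = x_k - f(x_k)/f'(x_k)."""
        x = x0
        for _ in range(max_iter):
            dx = -f(x) / fprime(x)
            x += dx
            if abs(dx) < tol:
                break
        return x

    # f(x) = x^3 - 2x + 5 has a real root near x = -2.0946
    print(newton(lambda x: x**3 - 2*x + 5, lambda x: 3*x**2 - 2, x0=-2.0))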

Mathematical Methods in Engineering and Science Solution of Nonlinear Equations and Systems 217, Methods for Nonlinear Equations
Secant method and method of false position
In the Newton-Raphson formula,
    f ′(x) ≈ [f (x_k) − f (x_{k−1})]/(x_k − x_{k−1})
    ⇒  x_{k+1} = x_k − f (x_k)(x_k − x_{k−1})/[f (x_k) − f (x_{k−1})]
Draw the chord or secant to f (x) through (x_{k−1}, f (x_{k−1})) and (x_k, f (x_k)); take its x-intercept.
Figure: Method of false position

Special case: Maintain a bracket over the root at every iteration. The method of false position or regula falsi

Convergence is guaranteed! Mathematical Methods in Engineering and Science Solution of Nonlinear Equations and Systems 218, Methods for Nonlinear Equations Methods for Nonlinear Equations Systems of Nonlinear Equations Closure Quadratic interpolation method or Muller method

Evaluate f (x) at three points and model y = a + bx + cx². Set y = 0 and solve for x.
Inverse quadratic interpolation
Evaluate f (x) at three points and model x = a + by + cy². Set y = 0 to get x = a.
Figure: Interpolation schemes

Van Wijngaarden-Dekker Brent method
◮ maintains the bracket,
◮ uses inverse quadratic interpolation, and
◮ accepts the outcome if within bounds, else takes a bisection step.
Opportunistic manoeuvring between a fast method and a safe one!
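Brent's method is what most libraries use as the default bracketed root finder; scipy exposes it as scipy.optimize.brentq, which needs only a bracketing interval. A quick usage sketch on the cubic from above (the bracket is assumed):

    from scipy.optimize import brentq

    # f(x) = x^3 - 2x + 5 changes sign on [-3, 0]
    root = brentq(lambda x: x**3 - 2*x + 5, -3.0, 0.0)
    print(root)    # approximately -2.0946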

Mathematical Methods in Engineering and Science Solution of Nonlinear Equations and Systems 219, Systems of Nonlinear Equations
    f1(x1, x2, ···, x_n) = 0,
    f2(x1, x2, ···, x_n) = 0,
    ··· ··· ··· ···
    f_n(x1, x2, ···, x_n) = 0;
in short, f(x) = 0.
◮ Number of variables and number of equations?
◮ No bracketing!
◮ Fixed point iteration schemes x = g(x)?
Newton's method for systems of equations
    f(x + δx) = f(x) + [∂f/∂x (x)] δx + ··· ≈ f(x) + J(x) δx
    ⇒  x_{k+1} = x_k − [J(x_k)]⁻¹ f(x_k)
with the usual merits and demerits!
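A minimal sketch for a 2×2 system; the system, Jacobian and starting point are assumptions, and in practice one solves J δx = −f rather than inverting the Jacobian.

    import numpy as np

    def newton_system(f, jac, x0, tol=1e-12, max_iter=50):
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            dx = np.linalg.solve(jac(x), -f(x))   # J dx = -f
            x += dx
            if np.linalg.norm(dx) < tol:
                break
        return x

    # assumed example: x^2 + y^2 = 4, xy = 1
    f = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0]*v[1] - 1.0])
    jac = lambda v: np.array([[2*v[0], 2*v[1]], [v[1], v[0]]])
    print(newton_system(f, jac, [2.0, 0.5]))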

Mathematical Methods in Engineering and Science Solution of Nonlinear Equations and Systems 220, Closure
Modified Newton's method
    x_{k+1} = x_k − α_k [J(x_k)]⁻¹ f(x_k)

Broyden’s secant method Jacobian is not evaluated at every iteration, but gets developed through updates.

Optimization-based formulation

Global minimum of the function

    ‖f(x)‖² = f1² + f2² + ··· + f_n²

Levenberg-Marquardt method Mathematical Methods in Engineering and Science Solution of Nonlinear Equations and Systems 221, Methods for Nonlinear Equations Points to note Systems of Nonlinear Equations Closure

◮ Iteration schemes for solving f (x) = 0
◮ Newton (or Newton-Raphson) iteration for a system of equations:
      x_{k+1} = x_k − [J(x_k)]⁻¹ f(x_k)
◮ Optimization formulation of a multi-dimensional root finding problem

Necessary Exercises: 1,2,3 Mathematical Methods in Engineering and Science Optimization: Introduction 222, The Methodology of Optimization Outline Single-Variable Optimization Conceptual Background of Multivariate Optimization

Optimization: Introduction The Methodology of Optimization Single-Variable Optimization Conceptual Background of Multivariate Optimization Mathematical Methods in Engineering and Science Optimization: Introduction 223, The Methodology of Optimization The Methodology of Optimization Single-Variable Optimization Conceptual Background of Multivariate Optimization

◮ Parameters and variables
◮ The statement of the optimization problem:
      Minimize f (x) subject to g(x) ≤ 0, h(x) = 0.
◮ Optimization methods
◮ Sensitivity analysis
◮ Optimization problems: unconstrained and constrained
◮ Optimization problems: linear and nonlinear
◮ Single-variable and multi-variable problems
Mathematical Methods in Engineering and Science Optimization: Introduction 224, Single-Variable Optimization
For a function f (x), a point x∗ is defined as a relative (local) minimum if ∃ ǫ such that f (x) ≥ f (x∗) ∀ x ∈ [x∗ − ǫ, x∗ + ǫ].

Figure: Schematic of optima of a univariate function

Optimality criteria

First order necessary condition: If x∗ is a local minimum or maximum point and if f ′(x∗) exists, then f ′(x∗) = 0.
Second order necessary condition: If x∗ is a local minimum point and f ′′(x∗) exists, then f ′′(x∗) ≥ 0.
Second order sufficient condition: If f ′(x∗) = 0 and f ′′(x∗) > 0, then x∗ is a local minimum point.
Mathematical Methods in Engineering and Science Optimization: Introduction 225, Single-Variable Optimization
Higher order analysis: From Taylor's series,
    Δf = f (x∗ + δx) − f (x∗)
       = f ′(x∗)δx + (1/2!) f ′′(x∗)δx² + (1/3!) f ′′′(x∗)δx³ + (1/4!) f ⁽ⁱᵛ⁾(x∗)δx⁴ + ···
For an extremum to occur at point x∗, the lowest order derivative with non-zero value should be of even order.
If f ′(x∗) = 0, then
◮ x∗ is a stationary point, a candidate for an extremum.
◮ Evaluate higher order derivatives till one of them is found to be non-zero.
◮ If its order is odd, then x∗ is an inflection point.
◮ If its order is even, then x∗ is a local minimum or maximum, as the derivative value is positive or negative, respectively.
Mathematical Methods in Engineering and Science Optimization: Introduction 226, Single-Variable Optimization
Iterative methods of line search
Methods based on gradient root finding
◮ Newton's method
      x_{k+1} = x_k − f ′(x_k)/f ′′(x_k)
◮ Secant method
      x_{k+1} = x_k − f ′(x_k)(x_k − x_{k−1})/[f ′(x_k) − f ′(x_{k−1})]
◮ Method of cubic estimation: point of vanishing gradient of the cubic fit with f (x_{k−1}), f (x_k), f ′(x_{k−1}) and f ′(x_k)
◮ Method of quadratic estimation: point of vanishing gradient of the quadratic fit through three points
Disadvantage: treating all stationary points alike!
Mathematical Methods in Engineering and Science Optimization: Introduction 227, Single-Variable Optimization
Bracketing: x1 < x2 < x3 with f (x1) ≥ f (x2) ≤ f (x3)
Exhaustive search method or its variants
Direct optimization algorithms
◮ Fibonacci search uses a pre-defined number N of function evaluations, and the Fibonacci sequence

    F0 = 1, F1 = 1, F2 = 2, ···, F_j = F_{j−2} + F_{j−1}, ···
to tighten a bracket with an economized number of function evaluations.
◮ Golden section search uses a constant ratio of interval reduction,
      τ = (√5 − 1)/2 ≈ 0.618,
the golden section ratio, determined as the limiting case of N → ∞; the actual number of steps is decided by the accuracy desired.
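A minimal sketch of the golden section search; the test function, interval and tolerance are assumed.

    import math

    def golden_section(f, a, b, tol=1e-8):
        """Minimize a unimodal f on [a, b] by golden section search."""
        tau = (math.sqrt(5) - 1) / 2            # ~0.618
        x1, x2 = b - tau * (b - a), a + tau * (b - a)
        f1, f2 = f(x1), f(x2)
        while b - a > tol:
            if f1 > f2:                          # minimum lies in [x1, b]
                a, x1, f1 = x1, x2, f2
                x2 = a + tau * (b - a)
                f2 = f(x2)
            else:                                # minimum lies in [a, x2]
                b, x2, f2 = x2, x1, f1
                x1 = b - tau * (b - a)
                f1 = f(x1)
        return (a + b) / 2

    print(golden_section(lambda x: (x - 2)**2, 0.0, 5.0))   # ~2.0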

Mathematical Methods in Engineering and Science Optimization: Introduction 228, Conceptual Background of Multivariate Optimization
Unconstrained minimization problem
x∗ is called a local minimum of f (x) if ∃ δ such that f (x) ≥ f (x∗) for all x satisfying ‖x − x∗‖ < δ.
Optimality criteria
From Taylor's series,
    f (x) − f (x∗) = [g(x∗)]ᵀ δx + (1/2) δxᵀ [H(x∗)] δx + ···.
For x∗ to be a local minimum,
    necessary condition: g(x∗) = 0 and H(x∗) is positive semi-definite,
    sufficient condition: g(x∗) = 0 and H(x∗) is positive definite.
An indefinite Hessian characterizes a saddle point.
Mathematical Methods in Engineering and Science Optimization: Introduction 229, Conceptual Background of Multivariate Optimization
Convexity
Set S ⊆ Rⁿ is a convex set if
    ∀ x1, x2 ∈ S and α ∈ (0, 1), αx1 + (1 − α)x2 ∈ S.
Function f (x) over a convex set S: a convex function if
    ∀ x1, x2 ∈ S and α ∈ (0, 1), f (αx1 + (1 − α)x2) ≤ αf (x1) + (1 − α)f (x2).
Chord approximation is an overestimate at intermediate points!

Figure: A convex domain; Figure: A convex function
Mathematical Methods in Engineering and Science Optimization: Introduction 230, Conceptual Background of Multivariate Optimization

First order characterization of convexity

From f (αx1 + (1 − α)x2) ≤ αf (x1) + (1 − α)f (x2),
    f (x1) − f (x2) ≥ [f (x2 + α(x1 − x2)) − f (x2)]/α.
As α → 0, f (x1) ≥ f (x2) + [∇f (x2)]ᵀ(x1 − x2).
Tangent approximation is an underestimate at intermediate points!
Second order characterization: the Hessian is positive semi-definite.
Convex programming problem: convex function over a convex set.
A local minimum is also a global minimum, and all minima are connected in a convex set.
Note: Convexity is a stronger condition than unimodality!
Mathematical Methods in Engineering and Science Optimization: Introduction 231, Conceptual Background of Multivariate Optimization
Quadratic function
    q(x) = (1/2) xᵀAx + bᵀx + c
Gradient ∇q(x) = Ax + b; the Hessian, A, is constant.
◮ If A is positive definite, then the unique solution of Ax = −b is the only minimum point.
◮ If A is positive semi-definite and −b ∈ Range(A), then the entire subspace of solutions of Ax = −b consists of global minima.
◮ If A is positive semi-definite but −b ∉ Range(A), then the function is unbounded!

Note: A quadratic problem (with positive definite Hessian) acts as a benchmark for optimization algorithms. Mathematical Methods in Engineering and Science Optimization: Introduction 232, The Methodology of Optimization Conceptual Background of MultivariateSingle-Variable Optimization Optimization Conceptual Background of Multivariate Optimization Optimization Algorithms From the current point, move to another point, hopefully better.

Which way to go? How far to go? Which decision is first?

Strategies and versions of algorithms:
Trust Region: Develop a local quadratic model
    f (x_k + δx) = f (x_k) + [g(x_k)]ᵀ δx + (1/2) δxᵀ F_k δx,
and minimize it in a small trust region around x_k. (Define the trust region with dummy boundaries.)
Line search: Identify a descent direction d_k and minimize the function along it through the univariate function
    φ(α) = f (x_k + αd_k).
◮ Exact or accurate line search
◮ Inexact or inaccurate line search
◮ Armijo, Goldstein and Wolfe conditions
Mathematical Methods in Engineering and Science Optimization: Introduction 233, Conceptual Background of Multivariate Optimization
Convergence of algorithms: notions of guarantee and speed
Global convergence: the ability of an algorithm to approach and converge to an optimal solution for an arbitrary problem, starting from an arbitrary point
◮ Practically, a sequence (or even a subsequence) of monotonically decreasing errors is enough.
Local convergence: the rate/speed of approach, measured by p, where
    β = lim_{k→∞} ‖x_{k+1} − x∗‖/‖x_k − x∗‖^p < ∞
◮ Linear, quadratic and superlinear rates of convergence for p = 1, 2 and intermediate.
◮ Comparison among algorithms with linear rates of convergence is by the convergence ratio β.
Mathematical Methods in Engineering and Science Optimization: Introduction 234, Points to note

◮ Theory and methods of single-variable optimization ◮ Optimality criteria in multivariate optimization ◮ Convexity in optimization ◮ The quadratic function ◮ Trust region ◮ Line search ◮ Global and local convergence

Necessary Exercises: 1,2,5,7,8 Mathematical Methods in Engineering and Science Multivariate Optimization 235, Direct Methods Outline Steepest Descent (Cauchy) Method Newton’s Method Hybrid (Levenberg-Marquardt) Method Least Square Problems

Multivariate Optimization Direct Methods Steepest Descent (Cauchy) Method Newton’s Method Hybrid (Levenberg-Marquardt) Method Least Square Problems Mathematical Methods in Engineering and Science Multivariate Optimization 236, Direct Methods Direct Methods Steepest Descent (Cauchy) Method Newton’s Method Hybrid (Levenberg-Marquardt) Method Least Square Problems Direct search methods using only function values ◮ Cyclic coordinate search ◮ Rosenbrock’s method ◮ Hooke-Jeeves pattern search ◮ Box’s complex method ◮ Nelder and Mead’s simplex search ◮ Powell’s conjugate directions method Useful for functions, for which derivative either does not exist at all points in the domain or is computationally costly to evaluate.

Note: When derivatives are easily available, gradient-based algorithms appear as mainstream methods. Mathematical Methods in Engineering and Science Multivariate Optimization 237, Direct Methods Direct Methods Steepest Descent (Cauchy) Method Newton’s Method Hybrid (Levenberg-Marquardt) Method Nelder and Mead’s simplex method Least Square Problems Simplex in n-dimensional space: polytope formed by n + 1 vertices Nelder and Mead’s method iterates over simplices that are non-degenerate (i.e. enclosing non-zero hypervolume). First, n + 1 suitable points are selected for the starting simplex.

Among vertices of the current simplex, identify the worst point xw , the best point xb and the second worst point xs .

Need to replace xw with a good point.

Centre of gravity of the face not containing xw :

    x_c = (1/n) Σ_{i=1, i≠w}^{n+1} x_i
Reflect x_w with respect to x_c as x_r = 2x_c − x_w. Consider the options.
Mathematical Methods in Engineering and Science Multivariate Optimization 238, Direct Methods
Default: x_new = x_r. Revision possibilities:

Figure: Nelder and Mead's simplex method (expansion, default, positive contraction and negative contraction cases, depending on f (x_r) relative to f (x_b), f (x_s) and f (x_w))

1. For f (x_r) < f (x_b), expansion: x_new = x_c + α(x_c − x_w), α > 1.
2. For f (x_r) ≥ f (x_w), negative contraction: x_new = x_c − β(x_c − x_w), 0 < β < 1.
3. For f (x_s) < f (x_r) < f (x_w), positive contraction: x_new = x_c + β(x_c − x_w), with 0 < β < 1.
Replace x_w with x_new. Continue with the new simplex.
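The method is available off the shelf; a usage sketch with scipy, where the Rosenbrock test function and starting point are assumptions for illustration.

    from scipy.optimize import minimize

    rosen = lambda x: (1 - x[0])**2 + 100*(x[1] - x[0]**2)**2
    res = minimize(rosen, x0=[-1.2, 1.0], method='Nelder-Mead',
                   options={'xatol': 1e-8, 'fatol': 1e-8})
    print(res.x)    # approximately [1., 1.]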

Mathematical Methods in Engineering and Science Multivariate Optimization 239, Steepest Descent (Cauchy) Method
From a point x_k, a move through α units in direction d_k:
    f (x_k + αd_k) = f (x_k) + α[g(x_k)]ᵀ d_k + O(α²)
Descent direction d_k: for α > 0, [g(x_k)]ᵀ d_k < 0
Direction of steepest descent: d_k = −g_k [or d_k = −g_k/‖g_k‖]
Minimize φ(α) = f (x_k + αd_k). Exact line search:
    φ′(α_k) = [g(x_k + α_k d_k)]ᵀ d_k = 0
The search direction is tangential to the contour surface at (x_k + α_k d_k).
Note: the next direction d_{k+1} = −g(x_{k+1}) is orthogonal to d_k.
Mathematical Methods in Engineering and Science Multivariate Optimization 240, Steepest Descent (Cauchy) Method
Steepest descent algorithm

1. Select a starting point x0, set k = 0 and several parameters: tolerance ǫG on gradient, absolute tolerance ǫA on reduction in function value, relative tolerance ǫR on reduction in function value and maximum number of iterations M.

2. If ‖g_k‖ ≤ ǫ_G, STOP. Else d_k = −g_k/‖g_k‖.
3. Line search: Obtain α_k by minimizing φ(α) = f (x_k + αd_k), α > 0. Update x_{k+1} = x_k + α_k d_k.
4. If |f (x_{k+1}) − f (x_k)| ≤ ǫ_A + ǫ_R |f (x_k)|, STOP. Else k ← k + 1.
5. If k > M, STOP. Else go to step 2.
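A compact sketch of the algorithm; a crude backtracking loop stands in for the exact line search of step 3, and the test function is an assumed ill-conditioned quadratic.

    import numpy as np

    def steepest_descent(f, grad, x0, eps_g=1e-6, eps_a=1e-12,
                         eps_r=1e-9, max_iter=500):
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            g = grad(x)
            if np.linalg.norm(g) <= eps_g:
                break
            d = -g / np.linalg.norm(g)
            alpha, fx = 1.0, f(x)
            # backtracking on phi(alpha) = f(x + alpha d), Armijo-like test
            while f(x + alpha * d) > fx - 1e-4 * alpha * np.linalg.norm(g):
                alpha *= 0.5
                if alpha < 1e-14:
                    break
            x = x + alpha * d
            if abs(f(x) - fx) <= eps_a + eps_r * abs(fx):
                break
        return x

    A = np.diag([1.0, 10.0])
    print(steepest_descent(lambda x: x @ A @ x, lambda x: 2 * A @ x,
                           [3.0, 1.0]))    # approaches [0, 0], slowly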

Very good global convergence.

But, why so many "STOPs"?
Mathematical Methods in Engineering and Science Multivariate Optimization 241, Steepest Descent (Cauchy) Method
Analysis on a quadratic function
For minimizing q(x) = (1/2) xᵀAx + bᵀx, the error function:
    E(x) = (1/2) (x − x∗)ᵀ A (x − x∗)
Convergence ratio:
    E(x_{k+1})/E(x_k) ≤ [(κ(A) − 1)/(κ(A) + 1)]²
Local convergence is poor.

Importance of steepest descent method ◮ conceptual understanding ◮ initial iterations in a completely new problem ◮ spacer steps in other sophisticated methods

Re-scaling of the problem through change of variables?
Mathematical Methods in Engineering and Science Multivariate Optimization 242, Newton's Method
Second order approximation of a function:
    f (x) ≈ f (x_k) + [g(x_k)]ᵀ(x − x_k) + (1/2)(x − x_k)ᵀ H(x_k)(x − x_k)
Vanishing of the gradient,
    g(x) ≈ g(x_k) + H(x_k)(x − x_k),
gives the iteration formula
    x_{k+1} = x_k − [H(x_k)]⁻¹ g(x_k).
Excellent local convergence property!
    ‖x_{k+1} − x∗‖ ≤ β ‖x_k − x∗‖²
Caution: does not have global convergence.
If H(x_k) is positive definite, then d_k = −[H(x_k)]⁻¹ g(x_k) is a descent direction.
Mathematical Methods in Engineering and Science Multivariate Optimization 243, Newton's Method
Modified Newton's method

◮ Replace the Hessian by Fk = H(xk )+ γI . ◮ Replace full Newton’s step by a line search.

Algorithm

1. Select x0, tolerance ǫ and δ > 0. Set k = 0.

2. Evaluate g_k = g(x_k) and H(x_k). Choose γ, form F_k = H(x_k) + γI, and solve F_k d_k = −g_k for d_k.
3. Line search: obtain α_k to minimize φ(α) = f (x_k + αd_k). Update x_{k+1} = x_k + α_k d_k.
4. Check convergence: If |f (x_{k+1}) − f (x_k)| < ǫ, STOP. Else, k ← k + 1 and go to step 2.

    x_{k+1} = x_k − α_k [M_k] g_k
◮ identity matrix in place of M_k: steepest descent step
◮ M_k = F_k⁻¹: step of modified Newton's method
◮ M_k = [H(x_k)]⁻¹ and α_k = 1: pure Newton's step
In M_k = [H(x_k) + λ_k I]⁻¹, tune the parameter λ_k over iterations.
◮ Initial value of λ: large enough to favour the steepest descent trend
◮ Improvement in an iteration: λ reduced by a factor
◮ Increase in function value: step rejected and λ increased
Opportunism systematized!
Note: Cost of evaluating the Hessian remains a bottleneck. Useful for problems where Hessian estimates come cheap!
Mathematical Methods in Engineering and Science Multivariate Optimization 245, Least Square Problems
Linear least square problem:

    y(θ) = x1 φ1(θ) + x2 φ2(θ) + ··· + x_n φ_n(θ)
For measured values y(θ_i) = y_i,
    e_i = Σ_{k=1}^{n} x_k φ_k(θ_i) − y_i = [Φ(θ_i)]ᵀ x − y_i.
Error vector: e = Ax − y
Least square fit: Minimize E = (1/2) Σ_i e_i² = (1/2) eᵀe
Pseudoinverse solution and its variants
Mathematical Methods in Engineering and Science Multivariate Optimization 246, Least Square Problems
Nonlinear least square problem
For a model function in the form
    y(θ) = f (θ, x) = f (θ, x1, x2, ···, x_n),
the square error function is
    E(x) = (1/2) eᵀe = (1/2) Σ_i e_i² = (1/2) Σ_i [f (θ_i, x) − y_i]²
Gradient: g(x) = ∇E(x) = Σ_i [f (θ_i, x) − y_i] ∇f (θ_i, x) = Jᵀe
Hessian: H(x) = ∂²E/∂x² = JᵀJ + Σ_i e_i ∂²f (θ_i, x)/∂x² ≈ JᵀJ
Combining a modified form, λ diag(JᵀJ) δx = −g(x), of the steepest descent formula with Newton's formula,
Levenberg-Marquardt step:
    [JᵀJ + λ diag(JᵀJ)] δx = −g(x)

Levenberg-Marquardt algorithm

1. Select x0, evaluate E(x0). Select tolerance ǫ, initial λ and its update factor. Set k = 0.
2. Evaluate g_k and H̄_k = JᵀJ + λ diag(JᵀJ). Solve H̄_k δx = −g_k. Evaluate E(x_k + δx).
3. If |E(x_k + δx) − E(x_k)| < ǫ, STOP.
4. If E(x_k + δx) < E(x_k), then decrease λ, update x_{k+1} = x_k + δx, k ← k + 1. Else increase λ.
5. Go to step 2.
Professional procedure for nonlinear least square problems and also for solving systems of nonlinear equations in the form h(x) = 0.
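In practice one rarely codes this loop by hand; scipy wraps a Levenberg-Marquardt implementation in scipy.optimize.least_squares. A sketch with assumed synthetic data from y = a·exp(bθ):

    import numpy as np
    from scipy.optimize import least_squares

    theta = np.linspace(0, 1, 20)
    y = 2.0 * np.exp(1.5 * theta)            # assumed noise-free data

    residual = lambda x: x[0] * np.exp(x[1] * theta) - y
    res = least_squares(residual, x0=[1.0, 1.0], method='lm')
    print(res.x)    # approximately [2.0, 1.5]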

Mathematical Methods in Engineering and Science Multivariate Optimization 248, Points to note
◮ Simplex method of Nelder and Mead
◮ Steepest descent method with its global convergence
◮ Newton's method for fast local convergence
◮ Levenberg-Marquardt method for equation solving and least squares

Necessary Exercises: 1,2,3,4,5,6 Mathematical Methods in Engineering and Science Methods of Nonlinear Optimization* 249, Conjugate Direction Methods Outline Quasi-Newton Methods Closure

Methods of Nonlinear Optimization* Conjugate Direction Methods Quasi-Newton Methods Closure Mathematical Methods in Engineering and Science Methods of Nonlinear Optimization* 250, Conjugate Direction Methods Conjugate Direction Methods Quasi-Newton Methods Closure Conjugacy of directions:

Two vectors d1 and d2 are mutually conjugate with respect to a symmetric matrix A if d1ᵀAd2 = 0.
Linear independence of conjugate directions:
    Conjugate directions with respect to a positive definite matrix are linearly independent.
Expanding subspace property:
    In Rⁿ, with conjugate vectors {d0, d1, ···, d_{n−1}} with respect to symmetric positive definite A, for any x0 ∈ Rⁿ, the sequence {x0, x1, x2, ···, x_n} generated as
        x_{k+1} = x_k + α_k d_k, with α_k = −g_kᵀd_k/(d_kᵀAd_k),
    where g_k = Ax_k + b, has the property that x_k minimizes q(x) = (1/2)xᵀAx + bᵀx on the line x_{k−1} + αd_{k−1}, as well as on the linear variety x0 + B_k, where B_k is the span of d0, d1, ···, d_{k−1}.
Mathematical Methods in Engineering and Science Methods of Nonlinear Optimization* 251, Conjugate Direction Methods

Question: How to find a set of n conjugate directions?

Gram-Schmidt procedure is a poor option!

Conjugate gradient method

Starting from d0 = −g0,
    d_{k+1} = −g_{k+1} + β_k d_k
Imposing the condition of conjugacy of d_{k+1} with d_k,
    β_k = g_{k+1}ᵀ A d_k/(d_kᵀ A d_k) = g_{k+1}ᵀ(g_{k+1} − g_k)/(α_k d_kᵀ A d_k)
The resulting d_{k+1} is conjugate to all the earlier directions, for a quadratic problem.
Mathematical Methods in Engineering and Science Methods of Nonlinear Optimization* 252, Conjugate Direction Methods
Using k in place of k + 1 in the formula for d_{k+1},
    d_k = −g_k + β_{k−1} d_{k−1}
    ⇒  g_kᵀd_k = −g_kᵀg_k  and  α_k = g_kᵀg_k/(d_kᵀAd_k)
Polak-Ribiere formula:
    β_k = g_{k+1}ᵀ(g_{k+1} − g_k)/(g_kᵀg_k)
No need to know A! Further,
    g_{k+1}ᵀd_k = 0  ⇒  g_{k+1}ᵀg_k = β_{k−1}(g_k + α_k A d_k)ᵀ d_{k−1} = 0.
Fletcher-Reeves formula:
    β_k = g_{k+1}ᵀg_{k+1}/(g_kᵀg_k)
Mathematical Methods in Engineering and Science Methods of Nonlinear Optimization* 253, Conjugate Direction Methods
Extension to general (non-quadratic) functions
◮ Varying Hessian A: determine the step size by line search.
◮ After n steps, the minimum is not attained. But g_kᵀd_k = −g_kᵀg_k implies guaranteed descent. Globally convergent, with superlinear rate of convergence.
◮ What to do after n steps? Restart or continue?
Algorithm
1. Select x0 and tolerances ǫ_G, ǫ_D. Evaluate g0 = ∇f (x0).
2. Set k = 0 and d_k = −g_k.
3. Line search: find α_k; update x_{k+1} = x_k + α_k d_k.
4. Evaluate g_{k+1} = ∇f (x_{k+1}). If ‖g_{k+1}‖ ≤ ǫ_G, STOP.
5. Find β_k = g_{k+1}ᵀ(g_{k+1} − g_k)/(g_kᵀg_k) (Polak-Ribiere) or β_k = g_{k+1}ᵀg_{k+1}/(g_kᵀg_k) (Fletcher-Reeves). Obtain d_{k+1} = −g_{k+1} + β_k d_k.
6. If 1 − |d_kᵀd_{k+1}|/(‖d_k‖ ‖d_{k+1}‖) < ǫ_D, reset g0 = g_{k+1} and go to step 2. Else, k ← k + 1 and go to step 3.
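For the quadratic benchmark the loop terminates in at most n steps; a minimal sketch of linear CG follows, where the matrix A and vector b are assumed example data.

    import numpy as np

    def conjugate_gradient(A, b, x0, tol=1e-10):
        """Minimize q(x) = 0.5 x^T A x + b^T x for s.p.d. A (linear CG)."""
        x = np.asarray(x0, dtype=float)
        g = A @ x + b                       # gradient
        d = -g
        for _ in range(len(b)):
            Ad = A @ d
            alpha = (g @ g) / (d @ Ad)
            x = x + alpha * d
            g_new = g + alpha * Ad
            if np.linalg.norm(g_new) < tol:
                break
            beta = (g_new @ g_new) / (g @ g)   # Fletcher-Reeves
            d = -g_new + beta * d
            g = g_new
        return x

    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([-1.0, -2.0])
    print(conjugate_gradient(A, b, [0.0, 0.0]))   # solves A x = -b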

Mathematical Methods in Engineering and Science Methods of Nonlinear Optimization* 254, Conjugate Direction Methods
Powell's conjugate direction method
For q(x) = (1/2)xᵀAx + bᵀx, suppose
    x1 = x_A + α1 d such that dᵀg1 = 0, and
    x2 = x_B + α2 d such that dᵀg2 = 0.
Then, dᵀA(x2 − x1) = dᵀ(g2 − g1) = 0.
Parallel subspace property:
    In Rⁿ, consider two parallel linear varieties S1 = v1 + B_k and S2 = v2 + B_k, with B_k = {d1, d2, ···, d_k}, k < n. If x1 and x2 minimize q(x) = (1/2)xᵀAx + bᵀx on S1 and S2, respectively, then x2 − x1 is conjugate to d1, d2, ···, d_k.
The assumptions imply g1, g2 ⊥ B_k and hence
    (g2 − g1) ⊥ B_k  ⇒  d_iᵀA(x2 − x1) = d_iᵀ(g2 − g1) = 0 for i = 1, 2, ···, k.
Mathematical Methods in Engineering and Science Methods of Nonlinear Optimization* 255, Conjugate Direction Methods
Algorithm
1. Select x0, ǫ and a set of n linearly independent (preferably normalized) directions d1, d2, ···, d_n; possibly d_i = e_i.
2. Line search along d_n and update x1 = x0 + αd_n; set k = 1.
3. Line searches along d1, d2, ···, d_n in sequence to obtain z = x_k + Σ_{j=1}^{n} α_j d_j.
4. New conjugate direction d = z − x_k. If ‖d‖ < ǫ, STOP.
5. Reassign directions d_j ← d_{j+1} for j = 1, 2, ···, (n − 1) and d_n = d/‖d‖. (Old d1 gets discarded at this step.)
6. Line search and update x_{k+1} = z + αd_n; set k ← k + 1 and go to step 3.
Mathematical Methods in Engineering and Science Methods of Nonlinear Optimization* 256, Conjugate Direction Methods
◮ x0-x1 and b-z1: x1-z1 is conjugate to b-z1.
◮ b-z1-x2 and c-d-z2: c-d, d-z2 and x2-z2 are mutually conjugate.


Figure: Schematic of Powell’s conjugate direction method

Performance of Powell's method approaches that of the conjugate gradient method!
Mathematical Methods in Engineering and Science Methods of Nonlinear Optimization* 257, Quasi-Newton Methods
Variable metric methods attempt to construct the inverse Hessian B_k.
    p_k = x_{k+1} − x_k and q_k = g_{k+1} − g_k  ⇒  q_k ≈ H p_k
With n such steps, B = PQ⁻¹: update and construct B_k ≈ H⁻¹.
Rank one correction: B_{k+1} = B_k + a_k z_k z_kᵀ?
Rank two correction: B_{k+1} = B_k + a_k z_k z_kᵀ + b_k w_k w_kᵀ
Davidon-Fletcher-Powell (DFP) method
Select x0, tolerance ǫ and B0 = I_n. For k = 0, 1, 2, ···,
◮ d_k = −B_k g_k.
◮ Line search for α_k; update p_k = α_k d_k, x_{k+1} = x_k + p_k, q_k = g_{k+1} − g_k.
◮ If ‖p_k‖ < ǫ or ‖q_k‖ < ǫ, STOP.
◮ DFP rank two correction:
      B_{k+1} = B_k + p_k p_kᵀ/(p_kᵀq_k) − B_k q_k q_kᵀ B_k/(q_kᵀ B_k q_k).
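The update is a one-liner; the sketch below transcribes it and checks, on an assumed diagonal quadratic with exact line searches, that B recovers H⁻¹ after n steps.

    import numpy as np

    def dfp_update(B, p, q):
        """DFP rank-two update of the inverse Hessian estimate B."""
        Bq = B @ q
        return B + np.outer(p, p) / (p @ q) - np.outer(Bq, Bq) / (q @ Bq)

    H = np.array([[2.0, 0.0], [0.0, 10.0]])     # assumed Hessian
    B, x = np.eye(2), np.array([1.0, 1.0])
    g = H @ x                                    # gradient of 0.5 x^T H x
    for _ in range(2):
        d = -B @ g
        alpha = -(g @ d) / (d @ H @ d)           # exact line search
        p = alpha * d
        x = x + p
        g_new = H @ x
        B, g = dfp_update(B, p, g_new - g), g_new
    print(B @ H)     # approximately the 2x2 identity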

Mathematical Methods in Engineering and Science Methods of Nonlinear Optimization* 258, Quasi-Newton Methods
Properties of DFP iterations:
1. If B_k is symmetric and positive definite, then so is B_{k+1}.
2. For a quadratic function with positive definite Hessian H,
       p_iᵀHp_j = 0 for 0 ≤ i < j ≤ k, and B_{k+1}Hp_i = p_i for 0 ≤ i ≤ k.
Implications:
1. Positive definiteness of the inverse Hessian estimate is never lost.
2. Successive search directions are conjugate directions.

3. With B0 = I, the algorithm is a conjugate gradient method. 4. For a quadratic problem, the inverse Hessian gets completely constructed after n steps.

Variants: Broyden-Fletcher-Goldfarb-Shanno (BFGS) method and the Broyden family of methods Mathematical Methods in Engineering and Science Methods of Nonlinear Optimization* 259, Conjugate Direction Methods Closure Quasi-Newton Methods Closure

Table 23.1: Summary of performance of optimization methods

                      Cauchy         Newton      Levenberg-Marquardt  DFP/BFGS           FR/PR            Powell
                      (Steepest                  (Hybrid, Deflected   (Quasi-Newton,     (Conjugate       (Direction
                      Descent)                   Gradient)            Variable Metric)   Gradient)        Set)
For quadratic problems:
Convergence steps     N (indefinite) 1           N (unknown)          n                  n                n²
Evaluations           Nf, Ng         2f, 2g, 1H  Nf, Ng, NH           (n+1)f, (n+1)g     (n+1)f, (n+1)g   n²f
Equivalent function
evaluations           N(2n+1)        2n²+2n+1    N(2n²+1)             2n²+3n+1           2n²+3n+1         n²
Line searches         N              0           N or 0               n                  n                n²
Storage               Vector         Matrix      Matrix               Matrix             Vector           Matrix
Performance in
general problems      Slow           Risky       Costly               Flexible           Good             Okay
Practically good for  Unknown        Good        NL eqn. systems,     Bad                Large            Small
                      start-up       functions   NL least squares     functions          problems         problems
Mathematical Methods in Engineering and Science Methods of Nonlinear Optimization* 260, Points to note

◮ Conjugate directions and the expanding subspace property ◮ Conjugate gradient method ◮ Powell-Smith direction set method ◮ The quasi-Newton concept in professional optimization

Necessary Exercises: 1,2,3 Mathematical Methods in Engineering and Science Constrained Optimization 261, Constraints Outline Optimality Criteria Sensitivity Duality* Structure of Methods: An Overview*

Constrained Optimization Constraints Optimality Criteria Sensitivity Duality* Structure of Methods: An Overview* Mathematical Methods in Engineering and Science Constrained Optimization 262, Constraints Constraints Optimality Criteria Sensitivity Duality* Constrained optimization problem: Structure of Methods: An Overview*

Minimize f (x)
subject to g_i(x) ≤ 0 for i = 1, 2, ···, l, or g(x) ≤ 0;
and h_j(x) = 0 for j = 1, 2, ···, m, or h(x) = 0.
Conceptually, "minimize f (x), x ∈ Ω".
Equality constraints reduce the domain to a surface or a manifold, possessing a tangent plane at every point.
Gradient of the vector function h(x):
    ∇h(x) ≡ [∇h1(x) ∇h2(x) ··· ∇h_m(x)],
with rows ∂hᵀ/∂x1, ∂hᵀ/∂x2, ···, ∂hᵀ/∂x_n; related to the usual Jacobian as J_h(x) = ∂h/∂x = [∇h(x)]ᵀ.
Mathematical Methods in Engineering and Science Constrained Optimization 263, Constraints
Constraint qualification
∇h1(x), ∇h2(x) etc. are linearly independent, i.e. ∇h(x) is full-rank.

If a feasible point x0, with h(x0)= 0, satisfies the constraint qualification condition, we call it a regular point.

At a regular feasible point x0, tangent plane

    M = {y : [∇h(x0)]ᵀ y = 0}
gives the collection of feasible directions.

Equality constraints reduce the dimension of the problem.

Variable elimination?
Mathematical Methods in Engineering and Science Constrained Optimization 264, Constraints
Active inequality constraints g_i(x0) = 0: included among the h_j(x0) for the tangent plane.
Cone of feasible directions:
    [∇h(x0)]ᵀ d = 0 and [∇g_i(x0)]ᵀ d ≤ 0 for i ∈ I,
where I is the set of indices of active inequality constraints.
Handling inequality constraints:
◮ Active set strategy maintains a list of active constraints, keeps checking at every step for a change of scenario and updates the list by inclusions and exclusions.
◮ Slack variable strategy replaces all the inequality constraints by equality constraints as g_i(x) + x_{n+i} = 0, with the inclusion of non-negative slack variables (x_{n+i}).
Mathematical Methods in Engineering and Science Constrained Optimization 265, Optimality Criteria
Suppose x∗ is a regular point with
◮ active inequality constraints: g⁽ᵃ⁾(x) ≤ 0
◮ inactive constraints: g⁽ⁱ⁾(x) ≤ 0
Columns of ∇h(x∗) and ∇g⁽ᵃ⁾(x∗): basis for the orthogonal complement of the tangent plane
Basis of the tangent plane: D = [d1 d2 ··· d_k]
Then, [D ∇h(x∗) ∇g⁽ᵃ⁾(x∗)]: basis of Rⁿ
Now, −∇f (x∗) is a vector in Rⁿ:
    −∇f (x∗) = [D ∇h(x∗) ∇g⁽ᵃ⁾(x∗)] [z; λ; µ⁽ᵃ⁾]
with unique z, λ and µ⁽ᵃ⁾ for a given ∇f (x∗).
What can you say if x∗ is a solution to the NLP problem?
Mathematical Methods in Engineering and Science Constrained Optimization 266, Optimality Criteria
Components of ∇f (x∗) in the tangent plane must be zero.
    z = 0  ⇒  −∇f (x∗) = [∇h(x∗)]λ + [∇g⁽ᵃ⁾(x∗)]µ⁽ᵃ⁾
For inactive constraints, insisting on µ⁽ⁱ⁾ = 0,
    −∇f (x∗) = [∇h(x∗)]λ + [∇g⁽ᵃ⁾(x∗) ∇g⁽ⁱ⁾(x∗)] [µ⁽ᵃ⁾; µ⁽ⁱ⁾],
or ∇f (x∗) + [∇h(x∗)]λ + [∇g(x∗)]µ = 0,
where g(x) = [g⁽ᵃ⁾(x); g⁽ⁱ⁾(x)] and µ = [µ⁽ᵃ⁾; µ⁽ⁱ⁾].
Notice: g⁽ᵃ⁾(x∗) = 0 and µ⁽ⁱ⁾ = 0 ⇒ µ_i g_i(x∗) = 0 ∀i, or µᵀg(x∗) = 0.
Now, components in g(x) are free to appear in any order.
Mathematical Methods in Engineering and Science Constrained Optimization 267, Optimality Criteria
Finally, what about the feasible directions in the cone?
Answer: The negative gradient −∇f (x∗) can have no component towards decreasing g_i⁽ᵃ⁾(x), i.e. µ_i⁽ᵃ⁾ ≥ 0 ∀i.
Combining it with µ⁽ⁱ⁾ = 0: µ_i ≥ 0.
First order necessary conditions or Karush-Kuhn-Tucker (KKT) conditions:
If x∗ is a regular point of the constraints and a solution to the NLP problem, then there exist Lagrange multiplier vectors, λ and µ, such that
    Optimality: ∇f (x∗) + [∇h(x∗)]λ + [∇g(x∗)]µ = 0, µ ≥ 0;
    Feasibility: h(x∗) = 0, g(x∗) ≤ 0;
    Complementarity: µᵀg(x∗) = 0.
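The conditions can be solved directly on small problems; a sketch with sympy for an assumed example, minimize x² + y² subject to g(x, y) = 1 − x − y ≤ 0, whose KKT point is x = y = 1/2 with µ = 1.

    import sympy as sp

    x, y, mu = sp.symbols('x y mu', real=True)
    f = x**2 + y**2
    g = 1 - x - y                        # inequality constraint g <= 0

    L = f + mu * g                        # Lagrangian (no equalities here)
    eqs = [sp.diff(L, x), sp.diff(L, y), mu * g]   # stationarity + complementarity
    sols = sp.solve(eqs, [x, y, mu], dict=True)
    # keep only solutions with mu >= 0 and g <= 0 (feasible KKT points)
    print([s for s in sols if s[mu] >= 0 and g.subs(s) <= 0])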

Convex programming problem: Convex objective function f (x) and convex domain (convex gi (x) and linear hj (x)): KKT conditions are sufficient as well! Mathematical Methods in Engineering and Science Constrained Optimization 268, Constraints Optimality Criteria Optimality Criteria Sensitivity Duality* Lagrangian function: Structure of Methods: An Overview*

    L(x, λ, µ) = f (x) + λᵀh(x) + µᵀg(x)
Necessary conditions for a stationary point of the Lagrangian:
    ∇_x L = 0, ∇_λ L = 0
Second order conditions
Consider a curve z(t) in the tangent plane with z(0) = x∗.
    d²/dt² f (z(t))|₀ = d/dt [∇f (z(t))ᵀ ż(t)]|₀ = ż(0)ᵀ H(x∗) ż(0) + [∇f (x∗)]ᵀ z̈(0) ≥ 0
Similarly, from h_j(z(t)) = 0,
    ż(0)ᵀ H_{hj}(x∗) ż(0) + [∇h_j(x∗)]ᵀ z̈(0) = 0.
Mathematical Methods in Engineering and Science Constrained Optimization 269, Optimality Criteria
Including contributions from all active constraints,
    d²/dt² f (z(t))|₀ = ż(0)ᵀ H_L(x∗) ż(0) + [∇_x L(x∗, λ, µ)]ᵀ z̈(0) ≥ 0,
where H_L(x) = ∂²L/∂x² = H(x) + Σ_j λ_j H_{hj}(x) + Σ_i µ_i H_{gi}(x).
The first order necessary condition makes the second term vanish!
Second order necessary condition: The Hessian matrix of the Lagrangian function is positive semi-definite on the tangent plane M.
Sufficient condition: ∇_x L = 0 and H_L(x) positive definite on M.
Restriction of the mapping H_L(x∗) : Rⁿ → Rⁿ on subspace M?
Mathematical Methods in Engineering and Science Constrained Optimization 270, Optimality Criteria
Take y ∈ M, operate H_L(x∗) on it, project the image back to M.
Restricted mapping L_M : M → M
Question: Matrix representation for L_M of size (n − m) × (n − m)?
Select a local orthonormal basis D ∈ R^{n×(n−m)} for M.
For arbitrary z ∈ R^{n−m}, map y = Dz ∈ Rⁿ as H_L y = H_L Dz.
Its component along d_i: d_iᵀ H_L Dz. Hence, the projection back on M:
    L_M z = Dᵀ H_L D z.
The (n − m) × (n − m) matrix L_M = Dᵀ H_L D: the restriction!

Second order necessary/sufficient condition: LM p.s.d./p.d. Mathematical Methods in Engineering and Science Constrained Optimization 271, Constraints Sensitivity Optimality Criteria Sensitivity Duality* Structure of Methods: An Overview* Suppose original objective and constraint functions as f (x, p), g(x, p) and h(x, p)

By choosing parameters (p), we arrive at x∗. Call it x∗(p).

Question: How does f (x∗(p), p) depend on p? Total gradients

    ∇̄_p f (x∗(p), p) = ∇_p x∗(p) ∇_x f (x∗, p) + ∇_p f (x∗, p),
    ∇̄_p h(x∗(p), p) = ∇_p x∗(p) ∇_x h(x∗, p) + ∇_p h(x∗, p) = 0,
and similarly for g(x∗(p), p).
In view of ∇_x L = 0, from the KKT conditions,
    ∇̄_p f (x∗(p), p) = ∇_p f (x∗, p) + [∇_p h(x∗, p)]λ + [∇_p g(x∗, p)]µ
Mathematical Methods in Engineering and Science Constrained Optimization 272, Sensitivity
Sensitivity to constraints
In particular, in a revised problem, with h(x) = c and g(x) ≤ d, using p = c,
    ∇_p f (x∗, p) = 0, ∇_p h(x∗, p) = −I and ∇_p g(x∗, p) = 0.
    ∇̄_c f (x∗(p), p) = −λ
Similarly, using p = d, we get ∇̄_d f (x∗(p), p) = −µ.
Lagrange multipliers λ and µ signify the costs of pulling the minimum point in order to satisfy the constraints!
◮ Equality constraint: both sides infeasible; the sign of λ_j identifies one side or the other of the hypersurface.
◮ Inequality constraint: one side is feasible, no cost of pulling from that side, so µ_i ≥ 0.

with Lagrange multiplier (vector) λ∗.

    ∇f (x∗) + [∇h(x∗)]λ∗ = 0
If H_L(x∗) is positive definite (assumption of local duality), then x∗ is also a local minimum of
    f̄(x) = f (x) + λ∗ᵀ h(x).
If we vary λ around λ∗, the minimizer of
    L(x, λ) = f (x) + λᵀh(x)
varies continuously with λ.
Mathematical Methods in Engineering and Science Constrained Optimization 274, Duality*
In the neighbourhood of λ∗, define the dual function
    Φ(λ) = min_x L(x, λ) = min_x [f (x) + λᵀh(x)].

For a pair x, λ , the dual solution is feasible if and only { } if the primal solution is optimal.

Define x(λ) as the local minimizer of L(x, λ).

    Φ(λ) = L(x(λ), λ) = f (x(λ)) + λᵀh(x(λ))
First derivative:
    ∇Φ(λ) = ∇_λ x(λ) ∇_x L(x(λ), λ) + h(x(λ)) = h(x(λ))
For a pair {x, λ}, the dual solution is optimal if and only if the primal solution is feasible.
Mathematical Methods in Engineering and Science Constrained Optimization 275, Duality*
Hessian of the dual function:
    H_Φ(λ) = ∇_λ x(λ) ∇_x h(x(λ))
Differentiating ∇_x L(x(λ), λ) = 0, we have
    ∇_λ x(λ) H_L(x(λ), λ) + [∇_x h(x(λ))]ᵀ = 0.
Solving for ∇_λ x(λ) and substituting,
    H_Φ(λ) = −[∇_x h(x(λ))]ᵀ [H_L(x(λ), λ)]⁻¹ ∇_x h(x(λ)),
which is negative definite!
At λ∗, x(λ∗) = x∗, ∇Φ(λ∗) = h(x∗) = 0, H_Φ(λ∗) is negative definite and the dual function is maximized.
    Φ(λ∗) = L(x∗, λ∗) = f (x∗)
Mathematical Methods in Engineering and Science Constrained Optimization 276, Duality*
Consolidation (including all constraints)
◮ Assuming local convexity, the dual function:
      Φ(λ, µ) = min_x L(x, λ, µ) = min_x [f (x) + λᵀh(x) + µᵀg(x)].
◮ Constraints on the dual: ∇_x L(x, λ, µ) = 0, optimality of the primal.
◮ Corresponding to inequality constraints of the primal problem, non-negative variables µ in the dual problem.
◮ First order necessary conditions for the dual optimality: equivalent to the feasibility of the primal problem.
◮ The dual function is concave globally!
◮ Under suitable conditions, Φ(λ∗) = L(x∗, λ∗) = f (x∗).
◮ The Lagrangian L(x, λ, µ) has a saddle point in the combined space of primal and dual variables: positive curvature along x directions and negative curvature along λ and µ directions.
Mathematical Methods in Engineering and Science Constrained Optimization 277, Structure of Methods: An Overview*
For a problem of n variables, with m active constraints: nature and dimension of working spaces.
Penalty methods (Rⁿ): Minimize the penalized function
    q(c, x) = f (x) + cP(x).
Example: P(x) = (1/2)‖h(x)‖² + (1/2)‖max(0, g(x))‖².
Primal methods (R^{n−m}): Work only in the feasible domain, restricting steps to the tangent plane. Example: Gradient projection method.
Dual methods (Rᵐ): Transform the problem to the space of Lagrange multipliers and maximize the dual. Example: Augmented Lagrangian method.
Lagrange methods (R^{m+n}): Solve the equations appearing in the KKT conditions directly. Example: Sequential quadratic programming.

◮ Constraint qualification ◮ KKT conditions ◮ Second order conditions ◮ Basic ideas for solution strategy

Necessary Exercises: 1,2,3,4,5,6 Mathematical Methods in Engineering and Science Linear and Quadratic Programming Problems* 279, Linear Programming Outline Quadratic Programming

Linear and Quadratic Programming Problems*

Linear Programming
Quadratic Programming

Mathematical Methods in Engineering and Science Linear and Quadratic Programming Problems* 280, Linear Programming

Standard form of an LP problem:

Minimize f(x) = cᵀx, subject to Ax = b, x ≥ 0; with b ≥ 0.

Preprocessing to cast a problem to the standard form:
◮ Maximization: minimize the negative function.
◮ Variables of unrestricted sign: use two variables.
◮ Inequality constraints: use slack/surplus variables.
◮ Negative RHS: multiply with −1.

Geometry of an LP problem:
◮ Infinite domain: does a minimum exist?
◮ Finite convex polytope: existence guaranteed
◮ Operating with vertices sufficient as a strategy
◮ Extension with slack/surplus variables: original solution space a subspace in the extended space, with x ≥ 0 marking the domain
◮ Essence of the non-negativity condition of variables

Mathematical Methods in Engineering and Science Linear and Quadratic Programming Problems* 281, Linear Programming

The simplex method

Suppose x ∈ Rᴺ, b ∈ Rᴹ and A ∈ Rᴹˣᴺ full-rank, with M < N.

I_M x_B + A′ x_NB = b′

Basic and non-basic variables: x_B ∈ Rᴹ and x_NB ∈ Rᴺ⁻ᴹ
Basic feasible solution: x_B = b′ ≥ 0 and x_NB = 0

At every iteration,
◮ selection of a non-basic variable to enter the basis
  ◮ edge of travel selected based on maximum rate of descent
  ◮ no qualifier: current vertex is optimal
◮ selection of a basic variable to leave the basis
  ◮ based on the first constraint becoming active along the edge
  ◮ no constraint ahead: function is unbounded
◮ elementary row operations: new basic feasible solution

Two-phase method: inclusion of a pre-processing phase with artificial variables to develop a basic feasible solution.

Mathematical Methods in Engineering and Science Linear and Quadratic Programming Problems* 282, Linear Programming

General perspective

LP problem:
Minimize f(x, y) = c₁ᵀx + c₂ᵀy; subject to A₁₁x + A₁₂y = b₁, A₂₁x + A₂₂y ≤ b₂, y ≥ 0.

Lagrangian:
L(x, y, λ, µ, ν) = c₁ᵀx + c₂ᵀy + λᵀ(A₁₁x + A₁₂y − b₁) + µᵀ(A₂₁x + A₂₂y − b₂) − νᵀy

Optimality conditions:
c₁ + A₁₁ᵀλ + A₂₁ᵀµ = 0 and ν = c₂ + A₁₂ᵀλ + A₂₂ᵀµ ≥ 0

Substituting back, optimal function value: f* = −λ*ᵀb₁ − µ*ᵀb₂

Sensitivity to the constraints: ∂f/∂b₁ = −λ and ∂f/∂b₂ = −µ

Dual problem: maximize Φ(λ, µ) = −b₁ᵀλ − b₂ᵀµ;
subject to A₁₁ᵀλ + A₂₁ᵀµ = −c₁, A₁₂ᵀλ + A₂₂ᵀµ ≥ −c₂, µ ≥ 0.

Notice the symmetry between the primal and dual problems.
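As a quick illustration (not part of the original slides), a standard-form LP can be handed to an off-the-shelf solver; here SciPy's HiGHS backend stands in for the simplex machinery, and the problem data are made up for the sketch:

```python
# A minimal sketch (assumed example data): minimize c^T x
# subject to A x = b, x >= 0, in the standard form above.
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 2.0, 0.0])        # cost vector
A = np.array([[1.0, 1.0, 1.0]])      # equality constraint matrix
b = np.array([10.0])                 # RHS, with b >= 0

# bounds=(0, None) enforces the non-negativity condition x >= 0
res = linprog(c, A_eq=A, b_eq=b, bounds=(0, None), method="highs")
print(res.x, res.fun)                # optimal vertex and optimal value
```

The optimum lands on a vertex of the feasible polytope, exactly as the geometric argument above predicts.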

Mathematical Methods in Engineering and Science Linear and Quadratic Programming Problems* 283, Quadratic Programming

A quadratic objective function and linear constraints define a QP problem.
Equations from the KKT conditions: linear! Lagrange methods are the natural choice!

With equality constraints only,

Minimize f(x) = ½ xᵀQx + cᵀx, subject to Ax = b.

First order necessary conditions:

[ Q  Aᵀ ] [ x* ]   [ −c ]
[ A  0  ] [ λ  ] = [  b ]

Solution of this linear system yields the complete result!
Caution: this coefficient matrix is indefinite.
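A minimal sketch (with made-up problem data) of that one-shot linear solve; because the KKT matrix is indefinite, a general LU-based solver is used rather than Cholesky:

```python
# Solve the KKT system [Q A^T; A 0][x; lam] = [-c; b]
# for an equality-constrained QP.
import numpy as np

Q = np.array([[2.0, 0.0], [0.0, 2.0]])   # positive definite Hessian
c = np.array([-2.0, -5.0])
A = np.array([[1.0, 1.0]])               # single equality constraint
b = np.array([1.0])

n, m = Q.shape[0], A.shape[0]
K = np.block([[Q, A.T], [A, np.zeros((m, m))]])
rhs = np.concatenate([-c, b])

sol = np.linalg.solve(K, rhs)            # indefinite system: LU, not Cholesky
x_star, lam = sol[:n], sol[n:]
print(x_star, lam)                       # minimizer and multiplier together
```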

Mathematical Methods in Engineering and Science Linear and Quadratic Programming Problems* 284, Quadratic Programming

Active set method

Minimize f(x) = ½ xᵀQx + cᵀx; subject to A₁x = b₁, A₂x ≤ b₂.

Start the iterative process from a feasible point.
◮ Construct the active set of constraints as Ax = b.
◮ From the current point x_k, with x = x_k + d_k,

f(x) = ½ (x_k + d_k)ᵀ Q (x_k + d_k) + cᵀ(x_k + d_k)
     = ½ d_kᵀ Q d_k + (c + Qx_k)ᵀ d_k + f(x_k).

◮ Since g_k ≡ ∇f(x_k) = c + Qx_k, the subsidiary quadratic program:
  minimize ½ d_kᵀ Q d_k + g_kᵀ d_k subject to A d_k = 0.

◮ Examining the solution d_k and the Lagrange multipliers, decide to terminate, proceed, or revise the active set.

Mathematical Methods in Engineering and Science Linear and Quadratic Programming Problems* 285, Quadratic Programming

Linear complementary problem (LCP)

Slack variable strategy with inequality constraints:

Minimize ½ xᵀQx + cᵀx, subject to Ax ≤ b, x ≥ 0.

KKT conditions: with x, y, µ, ν ≥ 0,

Qx + c + Aᵀµ − ν = 0,
Ax + y = b,
xᵀν = µᵀy = 0.

Denoting

z = [x; µ], w = [ν; y], q = [c; b] and M = [Q Aᵀ; −A 0],

w − Mz = q, wᵀz = 0.

Find mutually complementary non-negative w and z.

Mathematical Methods in Engineering and Science Linear and Quadratic Programming Problems* 286, Quadratic Programming

If q ≥ 0, then w = q, z = 0 is a solution!

Lemke's method: artificial variable z₀ with e = [1 1 1 ⋯ 1]ᵀ:

Iw − Mz − e z₀ = q

With z₀ = max(−qᵢ), w = q + e z₀ ≥ 0 and z = 0: a basic feasible solution.

◮ Evolution of the basis is similar to the simplex method.
◮ Out of a pair of w and z variables, only one can be in any basis.
◮ At every step, one variable is driven out of the basis and its partner called in.
◮ The step driving out z₀ flags termination.

Handling of equality constraints? Very clumsy!!

Mathematical Methods in Engineering and Science Linear and Quadratic Programming Problems* 287, Points to note

◮ Fundamental issues and general perspective of the linear programming problem
◮ The simplex method
◮ Quadratic programming
◮ The active set method
◮ Lemke's method via the linear complementary problem

Necessary Exercises: 1,2,3,4,5 Mathematical Methods in Engineering and Science Interpolation and Approximation 288, Polynomial Interpolation Outline Piecewise Polynomial Interpolation Interpolation of Multivariate Functions A Note on Approximation of Functions Modelling of Curves and Surfaces*

Interpolation and Approximation

Polynomial Interpolation
Piecewise Polynomial Interpolation
Interpolation of Multivariate Functions
A Note on Approximation of Functions
Modelling of Curves and Surfaces*

Mathematical Methods in Engineering and Science Interpolation and Approximation 289, Polynomial Interpolation

Problem: to develop an analytical representation of a function from information at discrete data points.

Purpose:
◮ Evaluation at arbitrary points
◮ Differentiation and/or integration
◮ Drawing conclusions regarding trends or nature

Interpolation: one of the ways of function representation
◮ sampled data are exactly satisfied

Polynomial: a convenient class of basis functions.

For yᵢ = f(xᵢ), i = 0, 1, 2, …, n, with x₀ < x₁ < x₂ < ⋯ < xₙ,

p(x) = a₀ + a₁x + a₂x² + ⋯ + aₙxⁿ.

Find the coefficients such that p(xᵢ) = f(xᵢ) for i = 0, 1, 2, …, n.

Values of p(x) for x ∈ [x₀, xₙ] interpolate the n + 1 values of f(x); an outside estimate is extrapolation.

Mathematical Methods in Engineering and Science Interpolation and Approximation 290, Polynomial Interpolation

To determine p(x), solve the linear system

[ 1  x₀  x₀²  ⋯  x₀ⁿ ] [ a₀ ]   [ f(x₀) ]
[ 1  x₁  x₁²  ⋯  x₁ⁿ ] [ a₁ ]   [ f(x₁) ]
[ 1  x₂  x₂²  ⋯  x₂ⁿ ] [ a₂ ] = [ f(x₂) ]
[ ⋮   ⋮    ⋮        ⋮  ] [ ⋮  ]   [   ⋮   ]
[ 1  xₙ  xₙ²  ⋯  xₙⁿ ] [ aₙ ]   [ f(xₙ) ]

invertible, but typically ill-conditioned! Invertibility means existence and uniqueness of the polynomial p(x).

Two polynomials p₁(x) and p₂(x) matching the function f(x) at x₀, x₁, x₂, …, xₙ imply an n-th degree polynomial Δp(x) = p₁(x) − p₂(x) with n + 1 roots! Δp ≡ 0 ⇒ p₁(x) = p₂(x): p(x) is unique.

Mathematical Methods in Engineering and Science Interpolation and Approximation 291, Polynomial Interpolation

Lagrange interpolation

Basis functions:

L_k(x) = ∏_{j=0, j≠k}ⁿ (x − xⱼ) / ∏_{j=0, j≠k}ⁿ (x_k − xⱼ)
       = [(x − x₀)(x − x₁) ⋯ (x − x_{k−1})(x − x_{k+1}) ⋯ (x − xₙ)] / [(x_k − x₀)(x_k − x₁) ⋯ (x_k − x_{k−1})(x_k − x_{k+1}) ⋯ (x_k − xₙ)]

Interpolating polynomial:

p(x) = α₀L₀(x) + α₁L₁(x) + α₂L₂(x) + ⋯ + αₙLₙ(x)

At the data points, L_k(xᵢ) = δᵢₖ: the coefficient matrix is the identity and αᵢ = f(xᵢ). Lagrange interpolation formula:

p(x) = Σ_{k=0}ⁿ f(x_k) L_k(x) = L₀(x)f(x₀) + L₁(x)f(x₁) + ⋯ + Lₙ(x)f(xₙ)

Existence of p(x) is a trivial consequence!

Mathematical Methods in Engineering and Science Interpolation and Approximation 292, Polynomial Interpolation

Two interpolation formulae:
◮ one costly to determine, but easy to process
◮ the other trivial to determine, costly to process

Newton interpolation for an intermediate trade-off:

p(x) = c₀ + c₁(x − x₀) + c₂(x − x₀)(x − x₁) + ⋯ + cₙ ∏_{i=0}^{n−1} (x − xᵢ)

Hermite interpolation uses derivatives as well as function values.

Data: f(xᵢ), f′(xᵢ), …, f^(nᵢ−1)(xᵢ) at x = xᵢ, for i = 0, 1, …, m:
◮ at (m + 1) points, a total of n + 1 = Σ_{i=0}^{m} nᵢ conditions

Limitations of single-polynomial interpolation: with a large number of data points, the polynomial degree is high.
◮ Computational cost and numerical imprecision
◮ Lack of representative nature due to oscillations
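A minimal sketch of the Lagrange formula above, as a hypothetical helper (the function name and sample data are made up):

```python
# Evaluate the Lagrange interpolant p(x) = sum_k f(x_k) L_k(x).
import numpy as np

def lagrange_eval(xs, fs, x):
    """Evaluate the Lagrange interpolant through (xs[k], fs[k]) at x."""
    p = 0.0
    n = len(xs)
    for k in range(n):
        Lk = 1.0
        for j in range(n):          # basis function L_k: product over j != k
            if j != k:
                Lk *= (x - xs[j]) / (xs[k] - xs[j])
        p += fs[k] * Lk
    return p

xs = np.array([0.0, 1.0, 2.0])
fs = xs**2                          # sample f(x) = x^2; interpolant is exact
print(lagrange_eval(xs, fs, 1.5))   # 2.25
```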

Mathematical Methods in Engineering and Science Interpolation and Approximation 293, Piecewise Polynomial Interpolation

Piecewise linear interpolation

f(x) = f(x_{i−1}) + [f(xᵢ) − f(x_{i−1})] / (xᵢ − x_{i−1}) · (x − x_{i−1}) for x ∈ [x_{i−1}, xᵢ]

Handy for many uses with dense data. But, not differentiable.

Piecewise cubic interpolation

With function values and derivatives at (n + 1) points, n cubic Hermite segments. Data for the j-th segment:

f(x_{j−1}) = f_{j−1}, f(xⱼ) = fⱼ, f′(x_{j−1}) = f′_{j−1} and f′(xⱼ) = f′ⱼ

Interpolating polynomial:

pⱼ(x) = a₀ + a₁x + a₂x² + a₃x³

Coefficients a₀, a₁, a₂, a₃: linear combinations of f_{j−1}, fⱼ, f′_{j−1}, f′ⱼ.

Composite function C¹ continuous at knot points.

Mathematical Methods in Engineering and Science Interpolation and Approximation 294, Piecewise Polynomial Interpolation

General formulation through normalization of intervals:

x = x_{j−1} + t(xⱼ − x_{j−1}), t ∈ [0, 1]

With g(t) = f(x(t)), g′(t) = (xⱼ − x_{j−1}) f′(x(t));

g₀ = f_{j−1}, g₁ = fⱼ, g′₀ = (xⱼ − x_{j−1}) f′_{j−1} and g′₁ = (xⱼ − x_{j−1}) f′ⱼ.

Cubic polynomial for the j-th segment:

qⱼ(t) = α₀ + α₁t + α₂t² + α₃t³

Modular expression, with T = [1 t t² t³]ᵀ:

qⱼ(t) = [α₀ α₁ α₂ α₃] T = [g₀ g₁ g′₀ g′₁] W T = Gⱼ W T

Packaging data, interpolation type and variable terms separately!

Question: How to supply derivatives? And, why?

Mathematical Methods in Engineering and Science Interpolation and Approximation 295, Piecewise Polynomial Interpolation

Spline interpolation

Spline: a drafting tool to draw a smooth curve through key points.

Data: fᵢ = f(xᵢ), for x₀ < x₁ < x₂ < ⋯ < xₙ.

If kⱼ = f′(xⱼ), then pⱼ(x) can be determined in terms of f_{j−1}, fⱼ, k_{j−1}, kⱼ, and p_{j+1}(x) in terms of fⱼ, f_{j+1}, kⱼ, k_{j+1}.

Then p″ⱼ(xⱼ) = p″_{j+1}(xⱼ): a linear equation in k_{j−1}, kⱼ and k_{j+1}.

From the n − 1 interior knot points, n − 1 linear equations in the derivative values k₀, k₁, …, kₙ.

Prescribing k₀ and kₙ: a diagonally dominant tridiagonal system!

A spline is a smooth interpolation, with C² continuity.
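A minimal sketch using SciPy's ready-made cubic spline (the slides derive the tridiagonal system by hand; this shows only the end result, with made-up sample data):

```python
import numpy as np
from scipy.interpolate import CubicSpline

xs = np.linspace(0.0, np.pi, 6)     # knot points x_0 < ... < x_n
fs = np.sin(xs)                     # sampled values f_i

# 'clamped' ends prescribe the slopes k_0 and k_n (here sin' = cos)
cs = CubicSpline(xs, fs, bc_type=((1, np.cos(xs[0])), (1, np.cos(xs[-1]))))

print(cs(1.0), np.sin(1.0))         # spline value vs. exact value
```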

Mathematical Methods in Engineering and Science Interpolation and Approximation 296, Interpolation of Multivariate Functions

Piecewise bilinear interpolation

Data: f(x, y) over a dense rectangular grid

x = x₀, x₁, x₂, …, x_m and y = y₀, y₁, y₂, …, yₙ

Rectangular domain: {(x, y) : x₀ ≤ x ≤ x_m, y₀ ≤ y ≤ yₙ}

For x_{i−1} ≤ x ≤ xᵢ and y_{j−1} ≤ y ≤ yⱼ,

f(x, y) = a₀,₀ + a₁,₀ x + a₀,₁ y + a₁,₁ xy = [1 x] [a₀,₀ a₀,₁; a₁,₀ a₁,₁] [1 y]ᵀ

With data at the four corner points, the coefficient matrix is determined from

[1 x_{i−1}; 1 xᵢ] [a₀,₀ a₀,₁; a₁,₀ a₁,₁] [1 1; y_{j−1} yⱼ] = [f_{i−1,j−1} f_{i−1,j}; f_{i,j−1} f_{i,j}].

Approximation only C⁰ continuous.

Mathematical Methods in Engineering and Science Interpolation and Approximation 297, Interpolation of Multivariate Functions

Alternative local formula through reparametrization

With u = (x − x_{i−1})/(xᵢ − x_{i−1}) and v = (y − y_{j−1})/(yⱼ − y_{j−1}), denoting

f_{i−1,j−1} = g₀,₀, f_{i,j−1} = g₁,₀, f_{i−1,j} = g₀,₁ and f_{i,j} = g₁,₁;

bilinear interpolation:

g(u, v) = [1 u] [α₀,₀ α₀,₁; α₁,₀ α₁,₁] [1 v]ᵀ for u, v ∈ [0, 1].

Values at the four corner points fix the coefficient matrix as

[α₀,₀ α₀,₁; α₁,₀ α₁,₁] = [1 0; −1 1] [g₀,₀ g₀,₁; g₁,₀ g₁,₁] [1 −1; 0 1].

Concisely, g(u, v) = Uᵀ Wᵀ G_{i,j} W V, in which

U = [1 u]ᵀ, V = [1 v]ᵀ, W = [1 −1; 0 1], G_{i,j} = [f_{i−1,j−1} f_{i−1,j}; f_{i,j−1} f_{i,j}].

Mathematical Methods in Engineering and Science Interpolation and Approximation 298, Interpolation of Multivariate Functions

Piecewise bicubic interpolation

Data: f, ∂f/∂x, ∂f/∂y and ∂²f/∂x∂y over grid points.

With normalizing parameters u and v,

∂g/∂u = (xᵢ − x_{i−1}) ∂f/∂x, ∂g/∂v = (yⱼ − y_{j−1}) ∂f/∂y, and
∂²g/∂u∂v = (xᵢ − x_{i−1})(yⱼ − y_{j−1}) ∂²f/∂x∂y

In {(x, y) : x_{i−1} ≤ x ≤ xᵢ, y_{j−1} ≤ y ≤ yⱼ} or {(u, v) : u, v ∈ [0, 1]},

g(u, v) = Uᵀ Wᵀ G_{i,j} W V, with U = [1 u u² u³]ᵀ, V = [1 v v² v³]ᵀ, and

G_{i,j} = [ g(0,0)    g(0,1)    g_v(0,0)    g_v(0,1)  ]
          [ g(1,0)    g(1,1)    g_v(1,0)    g_v(1,1)  ]
          [ g_u(0,0)  g_u(0,1)  g_uv(0,0)   g_uv(0,1) ]
          [ g_u(1,0)  g_u(1,1)  g_uv(1,0)   g_uv(1,1) ].

Mathematical Methods in Engineering and Science Interpolation and Approximation 299, A Note on Approximation of Functions

A common strategy of function approximation is to
◮ express a function as a linear combination of a set of basis functions (which?), and
◮ determine coefficients based on some criteria (what?).

Criteria:
◮ Interpolatory approximation: exact agreement with sampled data
◮ Least square approximation: minimization of a sum (or integral) of square errors over sampled data
◮ Minimax approximation: limiting the largest deviation

Basis functions: polynomials, sinusoids, orthogonal eigenfunctions or field-specific heuristic choice Mathematical Methods in Engineering and Science Interpolation and Approximation 300, Polynomial Interpolation Points to note Piecewise Polynomial Interpolation Interpolation of Multivariate Functions A Note on Approximation of Functions Modelling of Curves and Surfaces*

◮ Lagrange, Newton and Hermite interpolations ◮ Piecewise polynomial functions and splines ◮ Bilinear and bicubic interpolation of bivariate functions Direct extension to vector functions: curves and surfaces!

Necessary Exercises: 1,2,4,6 Mathematical Methods in Engineering and Science Basic Methods of Numerical Integration 301, Newton-Cotes Integration Formulae Outline Richardson Extrapolation and Romberg Integration Further Issues

Basic Methods of Numerical Integration Newton-Cotes Integration Formulae Richardson Extrapolation and Romberg Integration Further Issues Mathematical Methods in Engineering and Science Basic Methods of Numerical Integration 302, Newton-Cotes Integration Formulae Newton-Cotes Integration Formulae Richardson Extrapolation and Romberg Integration Further Issues

J = ∫ₐᵇ f(x) dx

Divide [a, b] into n sub-intervals with

a = x₀ < x₁ < x₂ < ⋯ < x_{n−1} < xₙ = b, where xᵢ − x_{i−1} = h = (b − a)/n.

J̄ = Σᵢ₌₁ⁿ h f(xᵢ*) = h[f(x₁*) + f(x₂*) + ⋯ + f(xₙ*)]

Taking xᵢ* ∈ [x_{i−1}, xᵢ] as x_{i−1} and xᵢ, we get summations J₁ and J₂.

As n → ∞ (i.e. h → 0), if J₁ and J₂ approach the same limit, then the function f(x) is integrable over the interval [a, b].

A rectangular rule or a one-point rule

Question: which point to take as xᵢ*?

Mathematical Methods in Engineering and Science Basic Methods of Numerical Integration 303, Newton-Cotes Integration Formulae

Mid-point rule

Selecting xᵢ* as x̄ᵢ = (x_{i−1} + xᵢ)/2,

∫_{x_{i−1}}^{xᵢ} f(x)dx ≈ h f(x̄ᵢ) and ∫ₐᵇ f(x)dx ≈ h Σᵢ₌₁ⁿ f(x̄ᵢ).

Error analysis: from Taylor's series of f(x) about x̄ᵢ,

∫_{x_{i−1}}^{xᵢ} f(x)dx = ∫_{x_{i−1}}^{xᵢ} [f(x̄ᵢ) + f′(x̄ᵢ)(x − x̄ᵢ) + f″(x̄ᵢ)(x − x̄ᵢ)²/2 + ⋯] dx
                      = h f(x̄ᵢ) + (h³/24) f″(x̄ᵢ) + (h⁵/1920) f⁗(x̄ᵢ) + ⋯,

third order accurate! Over the entire domain [a, b],

∫ₐᵇ f(x)dx ≈ h Σᵢ₌₁ⁿ f(x̄ᵢ) + (h³/24) Σᵢ₌₁ⁿ f″(x̄ᵢ) = h Σᵢ₌₁ⁿ f(x̄ᵢ) + (h²/24)(b − a) f″(ξ),

for ξ ∈ [a, b] (from the mean value theorem): second order accurate.

Mathematical Methods in Engineering and Science Basic Methods of Numerical Integration 304, Newton-Cotes Integration Formulae

Trapezoidal rule

Approximating the function f(x) with a linear interpolation,

∫_{x_{i−1}}^{xᵢ} f(x)dx ≈ (h/2)[f(x_{i−1}) + f(xᵢ)]

and

∫ₐᵇ f(x)dx ≈ h [ (1/2)f(x₀) + Σᵢ₌₁ⁿ⁻¹ f(xᵢ) + (1/2)f(xₙ) ].

Taylor series expansions about the mid-point:

f(x_{i−1}) = f(x̄ᵢ) − (h/2)f′(x̄ᵢ) + (h²/8)f″(x̄ᵢ) − (h³/48)f‴(x̄ᵢ) + (h⁴/384)f⁗(x̄ᵢ) − ⋯
f(xᵢ)     = f(x̄ᵢ) + (h/2)f′(x̄ᵢ) + (h²/8)f″(x̄ᵢ) + (h³/48)f‴(x̄ᵢ) + (h⁴/384)f⁗(x̄ᵢ) + ⋯

⇒ (h/2)[f(x_{i−1}) + f(xᵢ)] = h f(x̄ᵢ) + (h³/8) f″(x̄ᵢ) + (h⁵/384) f⁗(x̄ᵢ) + ⋯

Recall ∫_{x_{i−1}}^{xᵢ} f(x)dx = h f(x̄ᵢ) + (h³/24) f″(x̄ᵢ) + (h⁵/1920) f⁗(x̄ᵢ) + ⋯.

∫_{x_{i−1}}^{xᵢ} f(x)dx = (h/2)[f(x_{i−1}) + f(xᵢ)] − (h³/12) f″(x̄ᵢ) − (h⁵/480) f⁗(x̄ᵢ) + ⋯

Over an extended domain,

∫ₐᵇ f(x)dx = h [ (1/2){f(x₀) + f(xₙ)} + Σᵢ₌₁ⁿ⁻¹ f(xᵢ) ] − (h²/12)(b − a) f″(ξ) + ⋯.

The same order of accuracy as the mid-point rule! Different sources of merit:
◮ Mid-point rule: use of the mid-point leads to symmetric error-cancellation.
◮ Trapezoidal rule: use of end-points allows double utilization of boundary points in adjacent intervals.

How to use both the merits?

Mathematical Methods in Engineering and Science Basic Methods of Numerical Integration 306, Newton-Cotes Integration Formulae

Simpson's rules

Divide [a, b] into an even number (n = 2m) of intervals. Fit a quadratic polynomial over a panel of two intervals. For this panel of length 2h, two estimates:

M(f) = 2h f(xᵢ) and T(f) = h[f(x_{i−1}) + f(x_{i+1})]

J = M(f) + (h³/3) f″(xᵢ) + (h⁵/60) f⁗(xᵢ) + ⋯
J = T(f) − (2h³/3) f″(xᵢ) − (h⁵/15) f⁗(xᵢ) + ⋯

Simpson's one-third rule (with error estimate):

∫_{x_{i−1}}^{x_{i+1}} f(x)dx = (h/3)[f(x_{i−1}) + 4f(xᵢ) + f(x_{i+1})] − (h⁵/90) f⁗(xᵢ)

Fifth (not fourth) order accurate! A four-point rule: Simpson's three-eighth rule.
Still higher order rules NOT advisable!

Mathematical Methods in Engineering and Science Basic Methods of Numerical Integration 307, Richardson Extrapolation and Romberg Integration

To determine quantity F:
◮ using a step size h, estimate F(h)
◮ error terms: hᵖ, hᵠ, hʳ etc. (p < q < r)
◮ F = lim_{δ→0} F(δ)?
◮ plot F(h), F(αh), F(α²h) (with α < 1) and extrapolate?

F(h)   = F + c hᵖ + O(hᵠ)
F(αh)  = F + c (αh)ᵖ + O(hᵠ)
F(α²h) = F + c (α²h)ᵖ + O(hᵠ)

Eliminate c and determine (better estimates of) F:

F₁(h)  = [F(αh) − αᵖ F(h)] / (1 − αᵖ)   = F + c₁ hᵠ + O(hʳ)
F₁(αh) = [F(α²h) − αᵖ F(αh)] / (1 − αᵖ) = F + c₁ (αh)ᵠ + O(hʳ)

Still better estimate: F₂(h) = [F₁(αh) − αᵠ F₁(h)] / (1 − αᵠ) = F + O(hʳ)

Richardson extrapolation.

Mathematical Methods in Engineering and Science Basic Methods of Numerical Integration 308, Richardson Extrapolation and Romberg Integration

Trapezoidal rule for J = ∫ₐᵇ f(x)dx: p = 2, q = 4, r = 6 etc.

T(f) = J + ch² + dh⁴ + eh⁶ + ⋯

With α = 1/2, half the sum is available for successive levels.

Romberg integration
◮ Trapezoidal rule with h = H: find J₁₁.
◮ With h = H/2, find J₁₂.

J₂₂ = [J₁₂ − (1/2)² J₁₁] / [1 − (1/2)²] = (4J₁₂ − J₁₁)/3.

◮ If |J₂₂ − J₁₂| is within tolerance, STOP. Accept J ≈ J₂₂.
◮ With h = H/4, find J₁₃.

J₂₃ = (4J₁₃ − J₁₂)/3 and J₃₃ = [J₂₃ − (1/2)⁴ J₂₂] / [1 − (1/2)⁴] = (16J₂₃ − J₂₂)/15.

◮ If |J₃₃ − J₂₃| is within tolerance, STOP with J ≈ J₃₃.
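A minimal sketch of this tableau as a hypothetical helper (function name and test integrand are made up), reusing each level's trapezoidal sum as described above:

```python
# Romberg integration: trapezoidal estimates with h = H, H/2, H/4, ...
# refined column by column via Richardson extrapolation.
import numpy as np

def romberg(f, a, b, levels=5):
    R = np.zeros((levels, levels))
    h = b - a
    R[0, 0] = 0.5 * h * (f(a) + f(b))          # trapezoidal, one interval
    for i in range(1, levels):
        h /= 2.0
        mids = a + h * np.arange(1, 2**i, 2)   # only the new mid-points
        R[i, 0] = 0.5 * R[i - 1, 0] + h * np.sum(f(mids))
        for j in range(1, i + 1):              # extrapolation sweep
            R[i, j] = (4**j * R[i, j - 1] - R[i - 1, j - 1]) / (4**j - 1)
    return R[levels - 1, levels - 1]

print(romberg(np.sin, 0.0, np.pi))             # exact value: 2
```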

Mathematical Methods in Engineering and Science Basic Methods of Numerical Integration 309, Further Issues

Featured functions: adaptive quadrature
◮ With prescribed tolerance ǫ, assign the quota ǫᵢ = ǫ(xᵢ − x_{i−1})/(b − a) of error to every interval [x_{i−1}, xᵢ].
◮ For each interval, find two estimates of the integral and estimate the error.
◮ If the error estimate is not within quota, then subdivide.

Function as tabulated data ◮ Only trapezoidal rule applicable? ◮ Fit a spline over data points and integrate the segments?

Improper integral: Newton-Cotes closed formulae not applicable! ◮ Open Newton-Cotes formulae ◮ Gaussian quadrature Mathematical Methods in Engineering and Science Basic Methods of Numerical Integration 310, Newton-Cotes Integration Formulae Points to note Richardson Extrapolation and Romberg Integration Further Issues

◮ Definition of an integral and integrability ◮ Closed Newton-Cotes formulae and their error estimates ◮ Richardson extrapolation as a general technique ◮ Romberg integration ◮ Adaptive quadrature

Necessary Exercises: 1,2,3,4 Mathematical Methods in Engineering and Science Advanced Topics in Numerical Integration* 311, Gaussian Quadrature Outline Multiple Integrals

Advanced Topics in Numerical Integration* Gaussian Quadrature Multiple Integrals Mathematical Methods in Engineering and Science Advanced Topics in Numerical Integration* 312, Gaussian Quadrature Gaussian Quadrature Multiple Integrals n A typical quadrature formula: a weighted sum i=0 wi fi ◮ fi : function value at i-th sampled point P ◮ wi : corresponding weight Newton-Cotes formulae:

◮ Abscissas (xᵢ's) of sampling prescribed
◮ Coefficients or weight values determined to eliminate dominant error terms

Gaussian quadrature rules:
◮ no prescription of quadrature points
◮ only the 'number' of quadrature points prescribed
◮ locations as well as weights contribute to the accuracy criteria
◮ with n integration points, 2n degrees of freedom
◮ can be made exact for polynomials of degree up to 2n − 1
◮ best locations: interior points
◮ open quadrature rules: can handle integrable singularities

∫₋₁¹ f(x)dx = w₁ f(x₁) + w₂ f(x₂)

Four variables: insist that it is exact for 1, x, x² and x³.

w₁ + w₂       = ∫₋₁¹ dx    = 2,
w₁x₁ + w₂x₂   = ∫₋₁¹ x dx  = 0,
w₁x₁² + w₂x₂² = ∫₋₁¹ x² dx = 2/3,
w₁x₁³ + w₂x₂³ = ∫₋₁¹ x³ dx = 0.

x₁ = −x₂, w₁ = w₂ ⇒ w₁ = w₂ = 1, x₁ = −1/√3, x₂ = 1/√3

Mathematical Methods in Engineering and Science Advanced Topics in Numerical Integration* 314, Gaussian Quadrature

Two-point Gauss-Legendre quadrature formula:

∫₋₁¹ f(x)dx = f(−1/√3) + f(1/√3)

Exact for any cubic polynomial: parallels Simpson's rule!

Three-point quadrature rule along similar lines:

∫₋₁¹ f(x)dx = (5/9) f(−√(3/5)) + (8/9) f(0) + (5/9) f(√(3/5))

A large number of formulae: consult mathematical handbooks.

For the domain of integration [a, b],

x = (a + b)/2 + [(b − a)/2] t and dx = [(b − a)/2] dt

With scaling and relocation,

∫ₐᵇ f(x)dx = [(b − a)/2] ∫₋₁¹ f[x(t)] dt
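A quick numeric check (not from the slides): NumPy ships the Gauss-Legendre nodes and weights, so the scaling above can be exercised directly; the helper name and test integrand are made up.

```python
# n-point Gauss-Legendre quadrature on [a, b], via the nodes/weights
# on [-1, 1] and the relocation x = (a+b)/2 + (b-a)/2 * t.
import numpy as np

def gauss_legendre(f, a, b, n):
    t, w = np.polynomial.legendre.leggauss(n)   # nodes, weights on [-1, 1]
    x = 0.5 * (a + b) + 0.5 * (b - a) * t       # relocated sample points
    return 0.5 * (b - a) * np.sum(w * f(x))

# 3 points integrate any quintic exactly; here a smooth non-polynomial:
print(gauss_legendre(np.exp, 0.0, 1.0, 3))      # ~ e - 1 = 1.71828...
```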

Mathematical Methods in Engineering and Science Advanced Topics in Numerical Integration* 315, Gaussian Quadrature

General framework for an n-point formula

f(x): a polynomial of degree 2n − 1
p(x): the Lagrange polynomial through the n quadrature points
f(x) − p(x): a (2n − 1)-degree polynomial having n of its roots at the quadrature points

Then, with φ(x) = (x − x₁)(x − x₂) ⋯ (x − xₙ),

f(x) − p(x) = φ(x) q(x).

Quotient polynomial: q(x) = Σ_{i=0}^{n−1} αᵢ xⁱ

Direct integration:

∫₋₁¹ f(x)dx = ∫₋₁¹ p(x)dx + ∫₋₁¹ φ(x) [Σ_{i=0}^{n−1} αᵢ xⁱ] dx

How to make the second term vanish?

Choose quadrature points x₁, x₂, …, xₙ so that φ(x) is orthogonal to all polynomials of degree less than n: the Legendre polynomial.

Gauss-Legendre quadrature

1. Choose Pₙ(x), the Legendre polynomial of degree n, as φ(x).

2. Take its roots x₁, x₂, …, xₙ as the quadrature points.
3. Fit the Lagrange polynomial of f(x), using these n points:

p(x) = L₁(x)f(x₁) + L₂(x)f(x₂) + ⋯ + Lₙ(x)f(xₙ)

4. ∫₋₁¹ f(x)dx = ∫₋₁¹ p(x)dx = Σ_{j=1}ⁿ f(xⱼ) ∫₋₁¹ Lⱼ(x)dx

Weight values: wⱼ = ∫₋₁¹ Lⱼ(x)dx, for j = 1, 2, …, n

Weight functions in Gaussian quadrature What is so great about exact integration of polynomials? Demand something else: generalization Exact integration of polynomials times function W (x)

Given weight function W (x) and number (n) of quadrature points,

work out the locations (xj ’s) of the n points and the corresponding weights (wj ’s), so that integral

∫ₐᵇ W(x) f(x) dx = Σ_{j=1}ⁿ wⱼ f(xⱼ)

is exact for an arbitrary polynomial f(x) of degree up to (2n − 1).

A family of orthogonal polynomials with increasing degree: quadrature points: roots of n-th member of the family.

For different kinds of functions and different domains, ◮ Gauss-Chebyshev quadrature ◮ Gauss-Laguerre quadrature ◮ Gauss-Hermite quadrature ◮ ··· ··· ··· Several singular functions and infinite domains can be handled.

A very special case: For W (x) = 1, Gauss-Legendre quadrature! Mathematical Methods in Engineering and Science Advanced Topics in Numerical Integration* 319, Gaussian Quadrature Multiple Integrals Multiple Integrals

S = ∫ₐᵇ ∫_{g₁(x)}^{g₂(x)} f(x, y) dy dx

⇒ F(x) = ∫_{g₁(x)}^{g₂(x)} f(x, y) dy and S = ∫ₐᵇ F(x) dx

with complete flexibility of individual quadrature methods.

Double integral on a rectangular domain. Two-dimensional version of Simpson's one-third rule:

∫₋₁¹ ∫₋₁¹ f(x, y) dx dy = w₀ f(0, 0)
  + w₁ [f(−1, 0) + f(1, 0) + f(0, −1) + f(0, 1)]
  + w₂ [f(−1, −1) + f(−1, 1) + f(1, −1) + f(1, 1)]

Exact for bicubic functions: w0 = 16/9, w1 = 4/9 and w2 = 1/9. Mathematical Methods in Engineering and Science Advanced Topics in Numerical Integration* 320, Gaussian Quadrature Multiple Integrals Multiple Integrals

Monte Carlo integration

I = ∫_Ω f(x) dV

Requirements:
◮ a simple volume V enclosing the domain Ω
◮ a point classification scheme

Generating random points in V,

F(x) = f(x) if x ∈ Ω, and 0 otherwise.

I ≈ (V/N) Σ_{i=1}ᴺ F(xᵢ)

The estimate of I (usually) improves with increasing N.
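A minimal sketch with a made-up example: the unit disc as Ω, enclosed in the square V = [−1, 1] × [−1, 1] of area 4, and f ≡ 1 so the estimate approximates the disc's area:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000
pts = rng.uniform(-1.0, 1.0, size=(N, 2))      # random points in V

inside = np.sum(pts**2, axis=1) <= 1.0         # point classification
f_vals = np.where(inside, 1.0, 0.0)            # F(x) = f(x) on Omega, else 0

I = 4.0 * f_vals.mean()                        # (V/N) * sum F(x_i)
print(I)                                       # ~ pi (area of the disc)
```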

Mathematical Methods in Engineering and Science Advanced Topics in Numerical Integration* 321, Points to note

◮ Basic strategy of Gauss-Legendre quadrature
◮ Formulation of a double integral from the fundamental principle
◮ Monte Carlo integration

Necessary Exercises: 2,5,6 Mathematical Methods in Engineering and Science Numerical Solution of Ordinary Differential Equations 322, Single-Step Methods Outline Practical Implementation of Single-Step Methods Systems of ODE’s Multi-Step Methods*

Numerical Solution of Ordinary Differential Equations Single-Step Methods Practical Implementation of Single-Step Methods Systems of ODE’s Multi-Step Methods* Mathematical Methods in Engineering and Science Numerical Solution of Ordinary Differential Equations 323, Single-Step Methods Single-Step Methods Practical Implementation of Single-Step Methods Systems of ODE’s Multi-Step Methods*

Initial value problem (IVP) of a first order ODE:

dy/dx = f(x, y), y(x₀) = y₀

To determine: y(x) for x ∈ [a, b], with x₀ = a.

Numerical solution: start from the point (x₀, y₀).
◮ y₁ = y(x₁) = y(x₀ + h) = ?
◮ Found (x₁, y₁). Repeat up to x = b.

Information at how many points is used at every step?
◮ Single-step method: only the current value
◮ Multi-step method: history of several recent steps

Mathematical Methods in Engineering and Science Numerical Solution of Ordinary Differential Equations 324, Single-Step Methods

Euler's method

◮ At (xₙ, yₙ), evaluate the slope dy/dx = f(xₙ, yₙ).
◮ For a small step h,

y_{n+1} = yₙ + h f(xₙ, yₙ)

Repetition of such steps constructs y(x).

First order truncated Taylor's series ⇒ expected error per step: O(h²). Accumulation over steps ⇒ total error: O(h).

Euler's method is a first order method.

Question: Total error = sum of errors over the steps?
Answer: No, in general.

Mathematical Methods in Engineering and Science Numerical Solution of Ordinary Differential Equations 325, Single-Step Methods

Initial slope for the entire step: is it a good idea?

Figure: Euler's method
Figure: Improved Euler's method

Improved Euler’s method or Heun’s method

ȳ_{n+1} = yₙ + h f(xₙ, yₙ)
y_{n+1} = yₙ + (h/2) [f(xₙ, yₙ) + f(x_{n+1}, ȳ_{n+1})]

The order of Heun's method is two.
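A minimal sketch (made-up test problem) comparing a plain Euler step with a Heun step on y′ = −y, y(0) = 1, whose exact solution is e⁻ˣ:

```python
import numpy as np

def f(x, y):
    return -y

def euler_step(x, y, h):
    return y + h * f(x, y)                       # slope at the start only

def heun_step(x, y, h):
    y_pred = y + h * f(x, y)                     # Euler predictor
    return y + 0.5 * h * (f(x, y) + f(x + h, y_pred))

h, x, y_e, y_h = 0.1, 0.0, 1.0, 1.0
for _ in range(10):                              # integrate to x = 1
    y_e, y_h = euler_step(x, y_e, h), heun_step(x, y_h, h)
    x += h

print(y_e, y_h, np.exp(-1.0))   # Heun lands much closer to exp(-1)
```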

Mathematical Methods in Engineering and Science Numerical Solution of Ordinary Differential Equations 326, Single-Step Methods

Runge-Kutta methods

Second order method:

k₁ = h f(xₙ, yₙ), k₂ = h f(xₙ + αh, yₙ + βk₁)
k = w₁k₁ + w₂k₂, and x_{n+1} = xₙ + h, y_{n+1} = yₙ + k

Force agreement up to the second order.

y_{n+1} = yₙ + w₁ h f(xₙ, yₙ) + w₂ h [f(xₙ, yₙ) + αh f_x(xₙ, yₙ) + βk₁ f_y(xₙ, yₙ) + ⋯]
        = yₙ + (w₁ + w₂) h f(xₙ, yₙ) + h² w₂ [α f_x(xₙ, yₙ) + β f(xₙ, yₙ) f_y(xₙ, yₙ)] + ⋯

From Taylor's series, using y′ = f(x, y) and y″ = f_x + f f_y,

y(x_{n+1}) = yₙ + h f(xₙ, yₙ) + (h²/2) [f_x(xₙ, yₙ) + f(xₙ, yₙ) f_y(xₙ, yₙ)] + ⋯

w₁ + w₂ = 1, αw₂ = βw₂ = 1/2 ⇒ α = β = 1/(2w₂), w₁ = 1 − w₂

Mathematical Methods in Engineering and Science Numerical Solution of Ordinary Differential Equations 327, Single-Step Methods

With a continuum of choices for w₂: a family of second order Runge-Kutta (RK2) formulae.

Popular form of RK2: with choice w2 = 1,

k₁ = h f(xₙ, yₙ), k₂ = h f(xₙ + h/2, yₙ + k₁/2)
x_{n+1} = xₙ + h, y_{n+1} = yₙ + k₂

Fourth order Runge-Kutta method (RK4):

k₁ = h f(xₙ, yₙ)
k₂ = h f(xₙ + h/2, yₙ + k₁/2)
k₃ = h f(xₙ + h/2, yₙ + k₂/2)
k₄ = h f(xₙ + h, yₙ + k₃)

k = (1/6)(k₁ + 2k₂ + 2k₃ + k₄)

x_{n+1} = xₙ + h, y_{n+1} = yₙ + k

Mathematical Methods in Engineering and Science Numerical Solution of Ordinary Differential Equations 328, Practical Implementation of Single-Step Methods

Question: How to decide whether the error is within tolerance?

Additional estimates:
◮ a handle to monitor the error
◮ further efficient algorithms

Runge-Kutta method with adaptive step size

In an interval [xₙ, xₙ + h],

y⁽¹⁾_{n+1} = y_{n+1} + ch⁵ + higher order terms

Over two steps of size h/2,

y⁽²⁾_{n+1} = y_{n+1} + 2c(h/2)⁵ + higher order terms

Difference of the two estimates:

Δ = y⁽¹⁾_{n+1} − y⁽²⁾_{n+1} ≈ (15/16) ch⁵

Best available value: y*_{n+1} = y⁽²⁾_{n+1} − Δ/15 = [16 y⁽²⁾_{n+1} − y⁽¹⁾_{n+1}]/15

Mathematical Methods in Engineering and Science Numerical Solution of Ordinary Differential Equations 329, Practical Implementation of Single-Step Methods

Evaluation of a step:
Δ > ǫ: step size is too large for accuracy; subdivide the interval.
Δ ≪ ǫ: step size is inefficient!

Start with a large step size. Keep subdividing intervals whenever Δ > ǫ. Fast marching over smooth segments and small steps in zones featuring rapid changes in y(x).

Runge-Kutta-Fehlberg method

With six function values, an RK4 formula embedded in an RK5 formula:
◮ two independent estimates and an error estimate!

RKF45 in professional implementations.
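A minimal sketch of the step-doubling idea from the previous slide (not RKF45 itself); the helper names and the test problem are made up:

```python
# One adaptive RK4 step: a full step vs. two half steps give two
# estimates; their difference drives subdivision, and the final value
# is the extrapolated y* = y2 + (y2 - y1)/15 from the slide above.
import numpy as np

def rk4_step(f, x, y, h):
    k1 = h * f(x, y)
    k2 = h * f(x + h / 2, y + k1 / 2)
    k3 = h * f(x + h / 2, y + k2 / 2)
    k4 = h * f(x + h, y + k3)
    return y + (k1 + 2 * k2 + 2 * k3 + k4) / 6

def adaptive_step(f, x, y, h, eps=1e-8):
    y1 = rk4_step(f, x, y, h)                      # one full step
    ym = rk4_step(f, x, y, h / 2)                  # two half steps
    y2 = rk4_step(f, x + h / 2, ym, h / 2)
    if np.max(np.abs(y1 - y2)) > eps:              # error indicator Delta
        return adaptive_step(f, x, y, h / 2, eps)  # subdivide
    return x + h, y2 + (y2 - y1) / 15              # best available value

f = lambda x, y: -2.0 * x * y                      # y' = -2xy => exp(-x^2)
x, y = adaptive_step(f, 0.0, np.array([1.0]), 0.5)
print(x, y, np.exp(-x**2))
```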

Mathematical Methods in Engineering and Science Numerical Solution of Ordinary Differential Equations 330, Systems of ODE's

Methods for a single first order ODE are directly applicable to a first order vector ODE. A typical IVP with an ODE system:

dy/dx = f(x, y), y(x₀) = y₀

An n-th order ODE: convert into a system of first order ODE's. Defining the state vector z(x) = [y(x) y′(x) ⋯ y⁽ⁿ⁻¹⁾(x)]ᵀ, work out dz/dx to form the state space equation. Initial condition: z(x₀) = [y(x₀) y′(x₀) ⋯ y⁽ⁿ⁻¹⁾(x₀)]ᵀ

A system of higher order ODE's with highest order derivatives of orders n₁, n₂, n₃, …, n_k:
◮ cast into the state space form with a state vector of dimension n = n₁ + n₂ + n₃ + ⋯ + n_k

Mathematical Methods in Engineering and Science Numerical Solution of Ordinary Differential Equations 331, Systems of ODE's

The state space formulation is directly applicable when the highest order derivatives can be solved explicitly. The resulting form of the ODE's: a normal system of ODE's.

Example:

y (d²x/dt²) − 3 (dy/dt)(dx/dt)² + 2x √(dx/dt) + 4 (d²y/dt²) = 0
e^{xy} (d³y/dt³) − y + 2x (d²y/dt² + 1)^{3/2} = e^{−t}

State vector: z(t) = [x dx/dt y dy/dt d²y/dt²]ᵀ

With the three trivial derivatives z₁′(t) = z₂, z₃′(t) = z₄ and z₄′(t) = z₅, and the other two obtained from the given ODE's, we get the state space equations as dz/dt = f(t, z).

y_{n+1} = yₙ + h [c₀ f(x_{n+1}, y_{n+1}) + c₁ f(xₙ, yₙ) + c₂ f(x_{n−1}, y_{n−1}) + c₃ f(x_{n−2}, y_{n−2}) + ⋯]

Determine the coefficients by demanding exactness for the leading polynomial terms.

Explicit methods: c₀ = 0; evaluation easy, but involves extrapolation.
Implicit methods: c₀ ≠ 0; difficult to evaluate, but better stability.

Predictor-corrector methods. Example: Adams-Bashforth-Moulton method.

◮ Euler’s and Runge-Kutta methods ◮ Step size adaptation ◮ State space formulation of dynamic systems

Necessary Exercises: 1,2,5,6 Mathematical Methods in Engineering and Science ODE Solutions: Advanced Issues 334, Stability Analysis Outline Implicit Methods Stiff Differential Equations Boundary Value Problems

ODE Solutions: Advanced Issues

Stability Analysis
Implicit Methods
Stiff Differential Equations
Boundary Value Problems

Mathematical Methods in Engineering and Science ODE Solutions: Advanced Issues 335, Stability Analysis

Adaptive RK4 is an extremely successful method. But its scope has a limitation: the focus of explicit methods (such as RK) is accuracy and efficiency, and the issue of stability is handled indirectly.

Stability of explicit methods

For the ODE system y′ = f(x, y), Euler's method gives

y_{n+1} = yₙ + f(xₙ, yₙ)h + O(h²).

Taylor's series of the actual solution:

y(x_{n+1}) = y(xₙ) + f(xₙ, y(xₙ))h + O(h²)

Discrepancy or error:

Δ_{n+1} = y_{n+1} − y(x_{n+1})
        = [yₙ − y(xₙ)] + [f(xₙ, yₙ) − f(xₙ, y(xₙ))]h + O(h²)
        = Δₙ + [∂f/∂y (xₙ, ȳₙ)] Δₙ h + O(h²) ≈ (I + hJ)Δₙ

Mathematical Methods in Engineering and Science ODE Solutions: Advanced Issues 336, Stability Analysis

Euler's step magnifies the error by a factor (I + hJ). Using J loosely as the representative Jacobian,

Δ_{n+1} ≈ (I + hJ)ⁿ Δ₁.

For stability, Δ_{n+1} → 0 as n → ∞.

Eigenvalues of (I + hJ) must fall within the unit circle |z| = 1. By the shift theorem, eigenvalues of hJ must fall inside the unit circle with centre at z = −1.

|1 + hλ| < 1 ⇒ h < −2 Re(λ)/|λ|²

Note: the same result holds for the single ODE w′ = λw, with complex λ.

For the second order Runge-Kutta method,

Δ_{n+1} = [1 + hλ + h²λ²/2] Δₙ

Region of stability in the plane of z = hλ: |1 + z + z²/2| < 1

Figure: Stability regions of explicit methods (Euler, RK2, RK4) in the plane of hλ.

Question: What do these stability regions mean with reference to the system eigenvalues?
Question: How does the step size adaptation of RK4 operate on a system with eigenvalues in the left half of the complex plane?

Step size adaptation tackles instability by its symptom!

Mathematical Methods in Engineering and Science ODE Solutions: Advanced Issues 338, Implicit Methods

Backward Euler's method

y_{n+1} = yₙ + f(x_{n+1}, y_{n+1})h

Solve it? Is it worth solving?

Δ_{n+1} ≈ y_{n+1} − y(x_{n+1})
        = [yₙ − y(xₙ)] + h[f(x_{n+1}, y_{n+1}) − f(x_{n+1}, y(x_{n+1}))]
        = Δₙ + hJ(x_{n+1}, ȳ_{n+1}) Δ_{n+1}

Notice the flip in the form of this equation.

Δ_{n+1} ≈ (I − hJ)⁻¹ Δₙ

Stability: eigenvalues of (I − hJ) outside the unit circle |z| = 1

|hλ − 1| > 1 ⇒ h > 2 Re(λ)/|λ|²

Absolute stability for a stable ODE, i.e. one with Re(λ) < 0.

Mathematical Methods in Engineering and Science ODE Solutions: Advanced Issues 339, Implicit Methods

Figure: Stability region of backward Euler's method in the plane of hλ.

How to solve g(y_{n+1}) = yₙ + h f(x_{n+1}, y_{n+1}) − y_{n+1} = 0 for y_{n+1}?

Typical Newton's iteration:

y_{n+1}^{(k+1)} = y_{n+1}^{(k)} + (I − hJ)⁻¹ [yₙ − y_{n+1}^{(k)} + h f(x_{n+1}, y_{n+1}^{(k)})]

Semi-implicit Euler's method for a local solution:

y_{n+1} = yₙ + h (I − hJ)⁻¹ f(x_{n+1}, yₙ)

Mathematical Methods in Engineering and Science ODE Solutions: Advanced Issues 340, Stiff Differential Equations

Example: IVP of a mass-spring-damper system:

ẍ + cẋ + kx = 0, x(0) = 0, ẋ(0) = 1

(a) c = 3, k = 2: x = e⁻ᵗ − e⁻²ᵗ
(b) c = 49, k = 600: x = e⁻²⁴ᵗ − e⁻²⁵ᵗ

Figure: Solutions of a mass-spring-damper system, ordinary situations: (a) c = 3, k = 2; (b) c = 49, k = 600.

Mathematical Methods in Engineering and Science ODE Solutions: Advanced Issues 341, Stiff Differential Equations

(c) c = 302, k = 600: x = (e⁻²ᵗ − e⁻³⁰⁰ᵗ)/298

Figure: Solutions of a mass-spring-damper system, stiff situation: (c) with RK4, (d) with implicit Euler.

To solve stiff ODE systems, use an implicit method, preferably with an explicit Jacobian.
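A minimal sketch of backward Euler on the stiff case (c) above, written as z′ = Az with z = [x, ẋ]; for a linear system the implicit step reduces to one linear solve, z_{n+1} = (I − hA)⁻¹ zₙ. The step size here is illustrative, chosen far beyond the explicit stability limit h < 2/300:

```python
import numpy as np

A = np.array([[0.0, 1.0], [-600.0, -302.0]])   # companion matrix of case (c)
h = 0.05                                       # explicit Euler would blow up
z = np.array([0.0, 1.0])                       # x(0) = 0, x'(0) = 1

M = np.eye(2) - h * A
for n in range(20):                            # integrate to t = 1
    z = np.linalg.solve(M, z)                  # implicit step: linear solve

t = 20 * h
print(z[0], (np.exp(-2*t) - np.exp(-300*t)) / 298.0)   # vs. exact solution
```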

Mathematical Methods in Engineering and Science ODE Solutions: Advanced Issues 342, Boundary Value Problems

A paradigm shift from the initial value problems:
◮ A ball is thrown with a particular velocity. What trajectory does the ball follow?
◮ How to throw a ball such that it hits a particular window of a neighbouring house after 15 seconds?

Two-point BVP in ODE's: boundary conditions at two values of the independent variable.

Methods of solution:
◮ Shooting method
◮ Finite difference (relaxation) method
◮ Finite element method

Mathematical Methods in Engineering and Science ODE Solutions: Advanced Issues 343, Boundary Value Problems

The shooting method follows the strategy of adjusting trials to hit a target. Consider the 2-point BVP

y′ = f(x, y), g1(y(a)) = 0, g2(y(b)) = 0,

where g₁ ∈ Rⁿ¹, g₂ ∈ Rⁿ² and n₁ + n₂ = n.

◮ Parametrize the initial state: y(a) = h(p) with p ∈ Rⁿ².
◮ Guess the n₂ values of p to define the IVP

y′ = f(x, y), y(a)= h(p).

◮ Solve this IVP over [a, b] and evaluate y(b).
◮ Define the error vector E(p) = g₂(y(b)).

Mathematical Methods in Engineering and Science ODE Solutions: Advanced Issues 344, Boundary Value Problems

Objective: to solve E(p) = 0

From the current vector p, n₂ perturbations as p + eᵢδ: Jacobian ∂E/∂p

Each Newton’s step: solution of n2 + 1 initial value problems!

◮ Computational cost ◮ Convergence not guaranteed (initial guess important)

Merits of the shooting method:
◮ Very few parameters to start
◮ In many cases, it is found quite efficient.
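A minimal sketch of the shooting loop on a made-up BVP, y″ = −y, y(0) = 0, y(π/2) = 1, with the unknown initial slope p = y′(0) as the shooting parameter; SciPy's solve_ivp and brentq stand in for the IVP solver and the root-finder:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def terminal_error(p):
    # Solve the IVP with guessed slope p, then measure the miss at x = b.
    sol = solve_ivp(lambda x, z: [z[1], -z[0]],
                    (0.0, np.pi / 2), [0.0, p], rtol=1e-9)
    return sol.y[0, -1] - 1.0          # E(p) = g2(y(b))

p_star = brentq(terminal_error, 0.0, 2.0)   # drive E(p) to zero
print(p_star)                               # exact answer: y'(0) = 1
```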

Mathematical Methods in Engineering and Science ODE Solutions: Advanced Issues 345, Boundary Value Problems

The finite difference (relaxation) method adopts a global perspective.

1. Discretize the domain [a, b]: grid of points a = x₀ < x₁ < x₂ < ⋯ < x_{N−1} < x_N = b. Function values y(xᵢ): n(N + 1) unknowns.
2. Replace the ODE over intervals by finite difference equations. Considering mid-points, a typical (vector) FDE:

yᵢ − y_{i−1} − h f((xᵢ + x_{i−1})/2, (yᵢ + y_{i−1})/2) = 0, for i = 1, 2, 3, …, N

nN (scalar) equations.

3. Assemble additional n equations from the boundary conditions.
4. Starting from a guess solution over the grid, solve this system. (A sparse Jacobian is an advantage.) Iterative schemes for the solution of systems of linear equations.

Mathematical Methods in Engineering and Science ODE Solutions: Advanced Issues 346, Points to note

◮ Numerical stability of ODE solution methods ◮ Computational cost versus better stability of implicit methods ◮ Multiscale responses leading to stiffness: failure of explicit methods ◮ Implicit methods for stiff systems ◮ Shooting method for two-point boundary value problems ◮ Relaxation method for boundary value problems

Necessary Exercises: 1,2,3,4,5 Mathematical Methods in Engineering and Science Existence and Uniqueness Theory 347, Well-Posedness of Initial Value Problems Outline Uniqueness Theorems Extension to ODE Systems Closure

Existence and Uniqueness Theory Well-Posedness of Initial Value Problems Uniqueness Theorems Extension to ODE Systems Closure Mathematical Methods in Engineering and Science Existence and Uniqueness Theory 348, Well-Posedness of Initial Value Problems Well-Posedness of Initial Value ProblemsUniqueness Theorems Extension to ODE Systems Pierre Simon de Laplace (1749-1827):Closure ”We may regard the present state of the universe as the effect of its past and the cause of its future. An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes.” Mathematical Methods in Engineering and Science Existence and Uniqueness Theory 349, Well-Posedness of Initial Value Problems Well-Posedness of Initial Value ProblemsUniqueness Theorems Extension to ODE Systems Initial value problem Closure

y′ = f(x, y), y(x₀) = y₀

From (x, y), the trajectory develops according to y ′ = f (x, y). The new point: (x + δx, y + f (x, y)δx) The slope now: f (x + δx, y + f (x, y)δx)

Question: was the old direction of approach valid? With δx → 0, the directions are appropriate if

lim_{x→x̄} f(x, y) = f(x̄, y(x̄)),

i.e. if f(x, y) is continuous.

If f(x, y) = ∞, then y′ = ∞ and the trajectory is vertical: for the same value of x, several values of y! Then y(x) is not a function, unless f(x, y) ≠ ∞, i.e. f(x, y) is bounded.

Mathematical Methods in Engineering and Science Existence and Uniqueness Theory 350, Well-Posedness of Initial Value Problems

Peano's theorem: If f(x, y) is continuous and bounded in a rectangle R = {(x, y) : |x − x₀| < h, |y − y₀| < k}, with |f(x, y)| ≤ M < ∞, then the IVP y′ = f(x, y), y(x₀) = y₀ has a solution y(x) defined in a neighbourhood of x₀.

Figure: Regions containing the trajectories: (a) Mh ≤ k, (b) Mh ≥ k.

Guaranteed neighbourhood: [x₀ − δ, x₀ + δ], where δ = min(h, k/M) > 0.

Mathematical Methods in Engineering and Science Existence and Uniqueness Theory 351, Well-Posedness of Initial Value Problems

Example: y′ = (y − 1)/x, y(0) = 1

The function f(x, y) = (y − 1)/x is undefined at (0, 1). The premises of the existence theorem are not satisfied. But the premises are sufficient, not necessary! Result inconclusive. The IVP has solutions y(x) = 1 + cx for all values of c: the solution is not unique.

Example: y′ = √|y|, y(0) = 0. The existence theorem guarantees a solution. But there are two solutions: y(x) = 0 and y(x) = sgn(x) x²/4.

Mathematical Methods in Engineering and Science Existence and Uniqueness Theory 352, Well-Posedness of Initial Value Problems

Physical system to mathematical model:
◮ mathematical solution
◮ interpretation about the physical system

Meanings of non-uniqueness of a solution ◮ Mathematical model admits of extraneous solution(s)? ◮ Physical system itself can exhibit alternative behaviours?

Indeterminacy of the solution ◮ Mathematical model of the system is not complete. The initial value problem is not well-posed.

After existence, next important question:

Uniqueness of a solution Mathematical Methods in Engineering and Science Existence and Uniqueness Theory 353, Well-Posedness of Initial Value Problems Well-Posedness of Initial Value ProblemsUniqueness Theorems Extension to ODE Systems Closure Continuous dependence on initial condition

Suppose that the IVP y′ = f(x, y), y(x₀) = y₀ has a
◮ unique solution: y₁(x).

Applying a small perturbation to the initial condition, the new IVP y′ = f(x, y), y(x₀) = y₀ + ǫ has a
◮ unique solution: y₂(x)

Question: by how much does y₂(x) differ from y₁(x) for x > x₀?

A large difference: the solution is sensitive to the initial condition
◮ a practically unreliable solution

Well-posed IVP: An initial value problem is said to be well-posed if a solution to it exists, the solution is unique, and it depends continuously on the initial conditions.

Mathematical Methods in Engineering and Science Existence and Uniqueness Theory 354, Uniqueness Theorems

Lipschitz condition:

|f(x, y) − f(x, z)| ≤ L |y − z|

L: a finite positive constant (the Lipschitz constant)

Theorem: If f(x, y) is a continuous function satisfying a Lipschitz condition on a strip S = {(x, y) : a < x < b, −∞ < y < ∞}, then for any point (x₀, y₀) ∈ S, the initial value problem y′ = f(x, y), y(x₀) = y₀ is well-posed.

Assume y₁(x) and y₂(x): solutions of the ODE y′ = f(x, y) with initial conditions y(x₀) = (y₁)₀ and y(x₀) = (y₂)₀.

Consider E(x) = [y₁(x) − y₂(x)]².

E′(x) = 2(y₁ − y₂)(y₁′ − y₂′) = 2(y₁ − y₂)[f(x, y₁) − f(x, y₂)]

Applying the Lipschitz condition,

|E′(x)| ≤ 2L(y₁ − y₂)² = 2L E(x).

We need to consider the case of E′(x) ≥ 0 only.

E′(x)/E(x) ≤ 2L ⇒ ∫_{x₀}^{x} [E′(x)/E(x)] dx ≤ 2L(x − x₀)

Integrating, E(x) ≤ E(x₀) e^{2L(x−x₀)}. Hence,

|y₁(x) − y₂(x)| ≤ e^{L(x−x₀)} |(y₁)₀ − (y₂)₀|.

Since x ∈ [a, b], e^{L(x−x₀)} is finite.

|(y₁)₀ − (y₂)₀| = ǫ ⇒ |y₁(x) − y₂(x)| ≤ e^{L(x−x₀)} ǫ

continuous dependence of the solution on the initial condition

In particular, (y₁)₀ = (y₂)₀ = y₀ ⇒ y₁(x) = y₂(x) ∀ x ∈ [a, b]. The initial value problem is well-posed.

Mathematical Methods in Engineering and Science Existence and Uniqueness Theory 356, Uniqueness Theorems

A weaker theorem (hypotheses are stronger):

Picard's theorem: If f(x, y) and ∂f/∂y are continuous and bounded on a rectangle R = {(x, y) : a < x < b, c < y < d}, then for every (x₀, y₀) ∈ R, the IVP y′ = f(x, y), y(x₀) = y₀ has a unique solution in some neighbourhood |x − x₀| ≤ h.

From the mean value theorem,

f(x, y₁) − f(x, y₂) = [∂f/∂y (x, ξ)] (y₁ − y₂).

With Lipschitz constant L = sup |∂f/∂y|,

Lipschitz condition is satisfied ‘lavishly’!

Note: all these theorems give only sufficient conditions!

Hypotheses of Picard's theorem ⇒ Lipschitz condition ⇒ Well-posedness ⇒ Existence and uniqueness

Mathematical Methods in Engineering and Science Existence and Uniqueness Theory 357, Extension to ODE Systems

For the ODE system

dy/dx = f(x, y), y(x₀) = y₀

◮ Lipschitz condition:

‖f(x, y) − f(x, z)‖ ≤ L ‖y − z‖

◮ Scalar function E(x) generalized as

E(x) = ‖y₁(x) − y₂(x)‖² = (y₁ − y₂)ᵀ(y₁ − y₂)

◮ The partial derivative ∂f/∂y is replaced by the Jacobian A = ∂f/∂y
◮ Boundedness is inferred from the boundedness of its norm

With these generalizations, the formulations work as usual.

Mathematical Methods in Engineering and Science Existence and Uniqueness Theory 358, Extension to ODE Systems

IVP of a linear first order ODE system:

y′ = A(x)y + g(x), y(x₀) = y₀

Rate function: f(x, y)= A(x)y + g(x) Continuity and boundedness of the coefficient functions in A(x) and g(x) are sufficient for well-posedness.

An n-th order linear ordinary differential equation

y⁽ⁿ⁾ + P₁(x)y⁽ⁿ⁻¹⁾ + P₂(x)y⁽ⁿ⁻²⁾ + ⋯ + P_{n−1}(x)y′ + Pₙ(x)y = R(x)

State vector: z = [y y′ y″ ⋯ y⁽ⁿ⁻¹⁾]ᵀ

With z₁′ = z₂, z₂′ = z₃, …, z_{n−1}′ = zₙ and zₙ′ from the ODE,
◮ state space equation in the form z′ = A(x)z + g(x)

Continuity and boundedness of P₁(x), P₂(x), …, Pₙ(x) and R(x) guarantee well-posedness.

A sizeable segment of current research: ill-posed problems ◮ Dynamics of some nonlinear systems ◮ Chaos: sensitive dependence on initial conditions

For boundary value problems, there are no general criteria for existence and uniqueness.

Note: taking a clue from the shooting method, a BVP in ODE's can be visualized as a complicated root-finding problem!

Multiple solutions or non-existence of solution is no surprise. Mathematical Methods in Engineering and Science Existence and Uniqueness Theory 360, Well-Posedness of Initial Value Problems Points to note Uniqueness Theorems Extension to ODE Systems Closure

◮ For a solution of initial value problems, questions of existence, uniqueness and continuous dependence on initial condition are of crucial importance. ◮ These issues pertain to aspects of practical relevance regarding a physical system and its dynamic simulation ◮ Lipschitz condition is the tightest (available) criterion for deciding these questions regarding well-posedness

Necessary Exercises: 1,2

Mathematical Methods in Engineering and Science First Order Ordinary Differential Equations 361, Outline

First Order Ordinary Differential Equations

Formation of Differential Equations and Their Solutions
Separation of Variables
ODE's with Rational Slope Functions
Some Special ODE's
Exact Differential Equations and Reduction to the Exact Form
First Order Linear (Leibnitz) ODE and Associated Forms
Orthogonal Trajectories
Modelling and Simulation

Mathematical Methods in Engineering and Science First Order Ordinary Differential Equations 362, Formation of Differential Equations and Their Solutions

A differential equation represents a class of functions.

Example: y(x) = c xᵏ

With dy/dx = ckx^{k−1} and d²y/dx² = ck(k−1)x^{k−2},

xy (d²y/dx²) = x (dy/dx)² − y (dy/dx)

A compact 'intrinsic' description.

Important terms:
◮ Order and degree of differential equations
◮ Homogeneous and non-homogeneous ODE's

Solution of a differential equation:
◮ general, particular and singular solutions

Mathematical Methods in Engineering and Science First Order Ordinary Differential Equations 363, Separation of Variables

ODE form with separable variables:

y′ = f(x, y) = φ(x)/ψ(y), or ψ(y) dy = φ(x) dx

Solution as quadrature:

∫ψ(y) dy = ∫φ(x) dx + c.
Separation of variables through substitution
Example: y′ = g(αx + βy + γ)
Substitute v = αx + βy + γ to arrive at
dv/dx = α + β g(v) ⇒ x = ∫ dv/(α + β g(v)) + c
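As a quick cross-check of this quadrature, here is a minimal sketch with SymPy; the library and the example y′ = x/y are our assumptions, not part of the course text:

```python
# Separable ODE y' = x/y: here psi(y) = y and phi(x) = x, so the
# quadrature gives y^2/2 = x^2/2 + c. SymPy should agree.
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')
sol = sp.dsolve(sp.Eq(y(x).diff(x), x / y(x)))
print(sol)   # [Eq(y(x), -sqrt(C1 + x**2)), Eq(y(x), sqrt(C1 + x**2))]
```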

Mathematical Methods in Engineering and Science First Order Ordinary Differential Equations 364, ODE's with Rational Slope Functions

y′ = f1(x, y)/f2(x, y)
If f1 and f2 are homogeneous functions of the n-th degree, then the substitution y = ux separates the variables x and u.
dy/dx = φ1(y/x)/φ2(y/x) ⇒ u + x du/dx = φ1(u)/φ2(u) ⇒ dx/x = φ2(u) du / [φ1(u) − u φ2(u)]
For y′ = (a1x + b1y + c1)/(a2x + b2y + c2), the coordinate shift
x = X + h, y = Y + k ⇒ y′ = dy/dx = dY/dX
produces
dY/dX = [a1X + b1Y + (a1h + b1k + c1)] / [a2X + b2Y + (a2h + b2k + c2)].
Choose h and k such that
a1h + b1k + c1 = 0 = a2h + b2k + c2.

If the system is inconsistent, then substitute u = a2x + b2y.

Mathematical Methods in Engineering and Science First Order Ordinary Differential Equations 365, Some Special ODE's

Clairaut's equation
y = x y′ + f(y′)

Substitute p = y′ and differentiate:
p = p + x dp/dx + f′(p) dp/dx ⇒ [x + f′(p)] dp/dx = 0

dp/dx = 0 means y′ = p = m (constant)
◮ family of straight lines y = mx + f(m) as the general solution

Singular solution:

x = −f′(p) and y = f(p) − p f′(p)
The singular solution is the envelope of the family of straight lines that constitutes the general solution.
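A small symbolic check of the envelope claim, sketched for the hypothetical case f(y′) = (y′)², where the parametric singular solution reduces to y = −x²/4 (SymPy assumed available):

```python
# For y = x y' + (y')^2, the singular solution x = -f'(p), y = f(p) - p f'(p)
# gives x = -2p, y = -p^2, i.e. y = -x^2/4. Verify it satisfies the ODE.
import sympy as sp

x = sp.symbols('x')
y_sing = -x**2 / 4
yp = sp.diff(y_sing, x)                       # y' = -x/2
print(sp.simplify(x*yp + yp**2 - y_sing))     # 0
```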

Mathematical Methods in Engineering and Science First Order Ordinary Differential Equations 366, Some Special ODE's

Second order ODE's with the function not appearing explicitly:
f(x, y′, y′′) = 0
Substitute y′ = p and solve f(x, p, p′) = 0 for p(x).
Second order ODE's with the independent variable not appearing explicitly:
f(y, y′, y′′) = 0

Use y′ = p and
y′′ = dp/dx = (dp/dy)(dy/dx) = p dp/dy ⇒ f(y, p, p dp/dy) = 0.

Solve for p(y). The resulting equation is solved through a quadrature as
dy/dx = p(y) ⇒ x = x0 + ∫ dy/p(y).

Mathematical Methods in Engineering and Science First Order Ordinary Differential Equations 367, Exact Differential Equations and Reduction to the Exact Form

M dx + N dy: an exact differential if
M = ∂φ/∂x and N = ∂φ/∂y, or ∂M/∂y = ∂N/∂x
M(x, y) dx + N(x, y) dy = 0 is an exact ODE if ∂M/∂y = ∂N/∂x.
With M(x, y) = ∂φ/∂x and N(x, y) = ∂φ/∂y,
(∂φ/∂x) dx + (∂φ/∂y) dy = 0 ⇒ dφ = 0.
Solution: φ(x, y) = c
Working rule:

φ1(x, y) = ∫M(x, y) dx + g1(y) and φ2(x, y) = ∫N(x, y) dy + g2(x)
Determine g1(y) and g2(x) from φ1(x, y) = φ2(x, y) = φ(x, y).
If ∂M/∂y ≠ ∂N/∂x, but ∂(FM)/∂y = ∂(FN)/∂x?
F: integrating factor

Mathematical Methods in Engineering and Science First Order Ordinary Differential Equations 368, First Order Linear (Leibnitz) ODE and Associated Forms

General first order linear ODE:
dy/dx + P(x)y = Q(x)
Leibnitz equation
For an integrating factor F(x),
F(x) dy/dx + F(x)P(x)y = d/dx [F(x)y] ⇒ dF/dx = F(x)P(x).
Separating variables, ∫dF/F = ∫P(x) dx ⇒ ln F = ∫P(x) dx.
Integrating factor: F(x) = e^{∫P(x)dx}

y e^{∫P(x)dx} = ∫ Q(x) e^{∫P(x)dx} dx + C
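The integrating-factor recipe transcribes directly; the example y′ + y/x = x is ours, and SymPy is assumed:

```python
# Leibnitz ODE y' + P y = Q with P = 1/x, Q = x: F = e^{int P dx} = x,
# and the solution formula above gives y = (int Q F dx + C)/F = x^2/3 + C/x.
import sympy as sp

x, C = sp.symbols('x C')
P, Q = 1/x, x
F = sp.exp(sp.integrate(P, x))                # integrating factor
y = (sp.integrate(Q*F, x) + C) / F
print(sp.simplify(y.diff(x) + P*y - Q))       # 0: the formula solves the ODE
```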

Mathematical Methods in Engineering and Science First Order Ordinary Differential Equations 369, First Order Linear (Leibnitz) ODE and Associated Forms

Bernoulli's equation
dy/dx + P(x)y = Q(x)y^k
Substitution z = y^{1−k}, dz/dx = (1 − k)y^{−k} dy/dx gives
dz/dx + (1 − k)P(x)z = (1 − k)Q(x),
in the Leibnitz form.
Riccati equation
y′ = a(x) + b(x)y + c(x)y²
If one solution y1(x) is known, then propose y(x) = y1(x) + z(x).
y1′(x) + z′(x) = a(x) + b(x)[y1(x) + z(x)] + c(x)[y1(x) + z(x)]²
Since y1′(x) = a(x) + b(x)y1(x) + c(x)[y1(x)]²,
z′(x) = [b(x) + 2c(x)y1(x)]z(x) + c(x)[z(x)]²,
in the form of Bernoulli's equation.

Mathematical Methods in Engineering and Science First Order Ordinary Differential Equations 370, Orthogonal Trajectories

In the xy-plane, the one-parameter equation φ(x, y, c) = 0 represents a family of curves.
Differential equation of the family of curves: dy/dx = f1(x, y)

Slope of curves orthogonal to φ(x, y, c) = 0:

dy/dx = −1/f1(x, y)

Solving this ODE gives another family of curves ψ(x, y, k) = 0: the orthogonal trajectories.

If φ(x, y, c) = 0 represents the potential lines (contours), then ψ(x, y, k) = 0 will represent the streamlines!

Mathematical Methods in Engineering and Science First Order Ordinary Differential Equations 371, Points to note

Necessary Exercises: 1, 3, 5, 7

Mathematical Methods in Engineering and Science Second Order Linear Homogeneous ODE's 372, Outline

Second Order Linear Homogeneous ODE's:
Introduction
Homogeneous Equations with Constant Coefficients
Euler-Cauchy Equation
Theory of the Homogeneous Equations
Basis for Solutions

Mathematical Methods in Engineering and Science Second Order Linear Homogeneous ODE's 373, Introduction

Second order ODE: f(x, y, y′, y′′) = 0
Special case of a linear (non-homogeneous) ODE:
y′′ + P(x)y′ + Q(x)y = R(x)
Non-homogeneous linear ODE with constant coefficients:
y′′ + ay′ + by = R(x)
For R(x) = 0, the linear homogeneous differential equation:
y′′ + P(x)y′ + Q(x)y = 0
and the linear homogeneous ODE with constant coefficients:
y′′ + ay′ + by = 0

Mathematical Methods in Engineering and Science Second Order Linear Homogeneous ODE's 374, Homogeneous Equations with Constant Coefficients

y′′ + ay′ + by = 0
Assume y = e^{λx} ⇒ y′ = λ e^{λx} and y′′ = λ² e^{λx}.
Substitution: (λ² + aλ + b) e^{λx} = 0
Auxiliary equation: λ² + aλ + b = 0

Solve for λ1 and λ2. Solutions: e^{λ1 x} and e^{λ2 x}
Three cases:
◮ Real and distinct (a² > 4b): λ1 ≠ λ2

y(x) = c1 y1(x) + c2 y2(x) = c1 e^{λ1 x} + c2 e^{λ2 x}

Mathematical Methods in Engineering and Science Second Order Linear Homogeneous ODE's 375, Homogeneous Equations with Constant Coefficients

◮ Real and equal (a² = 4b): λ1 = λ2 = λ = −a/2
Only solution in hand: y1 = e^{λx}. Method to develop another solution?
◮ Verify that y2 = x e^{λx} is another solution.
y(x) = c1 y1(x) + c2 y2(x) = (c1 + c2 x) e^{λx}

◮ Complex conjugate (a² < 4b): λ_{1,2} = −a/2 ± iω
y(x) = c1 e^{(−a/2 + iω)x} + c2 e^{(−a/2 − iω)x}
     = e^{−ax/2} [c1(cos ωx + i sin ωx) + c2(cos ωx − i sin ωx)]
     = e^{−ax/2} [A cos ωx + B sin ωx],

with A = c1 + c2, B = i(c1 − c2).
◮ A third form: y(x) = C e^{−ax/2} cos(ωx − α)
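The three cases can be mechanized; a plain sketch (our own helper, not from the text) that classifies the auxiliary roots and prints the matching solution form:

```python
# Classify the roots of lambda^2 + a*lambda + b = 0 and report the
# general solution of y'' + a y' + b y = 0 accordingly.
def homogeneous_solution(a, b):
    D = a*a - 4*b
    if D > 0:                                  # real and distinct
        l1, l2 = (-a + D**0.5)/2, (-a - D**0.5)/2
        return f"c1*exp({l1}*x) + c2*exp({l2}*x)"
    if D == 0:                                 # real and equal
        return f"(c1 + c2*x)*exp({-a/2}*x)"
    w = (-D)**0.5 / 2                          # complex conjugate pair
    return f"exp({-a/2}*x)*(A*cos({w}*x) + B*sin({w}*x))"

print(homogeneous_solution(2, 2))   # y'' + 2y' + 2y = 0: decaying oscillation
```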

Mathematical Methods in Engineering and Science Second Order Linear Homogeneous ODE's 376, Euler-Cauchy Equation

x² y′′ + a x y′ + b y = 0
Substituting y = x^k, the auxiliary (or indicial) equation: k² + (a − 1)k + b = 0
1. Roots real and distinct [(a − 1)² > 4b]: k1 ≠ k2
   y(x) = c1 x^{k1} + c2 x^{k2}.
2. Roots real and equal [(a − 1)² = 4b]: k1 = k2 = k = −(a − 1)/2
   y(x) = (c1 + c2 ln x) x^k.
3. Roots complex conjugate [(a − 1)² < 4b]: k_{1,2} = −(a − 1)/2 ± iν
   y(x) = x^{−(a−1)/2} [A cos(ν ln x) + B sin(ν ln x)] = C x^{−(a−1)/2} cos(ν ln x − α).
Alternative approach: the substitution
x = e^t ⇒ t = ln x, dx/dt = e^t = x and dt/dx = 1/x, etc.

Mathematical Methods in Engineering and Science Second Order Linear Homogeneous ODE's 377, Theory of the Homogeneous Equations

y ′′ + P(x)y ′ + Q(x)y = 0 Well-posedness of its IVP: The initial value problem of the ODE, with arbitrary initial conditions y(x0)= Y0, y ′(x0)= Y1, has a unique solution, as long as P(x) and Q(x) are continuous in the interval under question.

At least two linearly independent solutions:
◮ y1(x): IVP with initial conditions y(x0) = 1, y′(x0) = 0
◮ y2(x): IVP with initial conditions y(x0) = 0, y′(x0) = 1
c1 y1(x) + c2 y2(x) = 0 ⇒ c1 = c2 = 0
At most two linearly independent solutions?

Mathematical Methods in Engineering and Science Second Order Linear Homogeneous ODE's 378, Theory of the Homogeneous Equations

Wronskian of two solutions y1(x) and y2(x):

W(y1, y2) = det [ y1 y2 ; y1′ y2′ ] = y1 y2′ − y2 y1′

Solutions y1 and y2 are linearly dependent if and only if ∃ x0 such that W[y1(x0), y2(x0)] = 0.
◮ W[y1(x0), y2(x0)] = 0 ⇒ W[y1(x), y2(x)] = 0 ∀ x.
◮ W[y1(x1), y2(x1)] ≠ 0 ⇒ W[y1(x), y2(x)] ≠ 0 ∀ x, and y1(x) and y2(x) are linearly independent solutions.
Complete solution:

If y1(x) and y2(x) are two linearly independent solutions, then the general solution is

y(x)= c1y1(x)+ c2y2(x).

And, the general solution is the complete solution. No third linearly independent solution. No singular solution.

Mathematical Methods in Engineering and Science Second Order Linear Homogeneous ODE's 379, Theory of the Homogeneous Equations

If y1(x) and y2(x) are linearly dependent, then y2 = k y1.

W(y1, y2) = y1 y2′ − y2 y1′ = y1 (k y1′) − (k y1) y1′ = 0
In particular, W[y1(x0), y2(x0)] = 0.
Conversely, if there is a value x0 where W[y1(x0), y2(x0)] = 0,

then for
[ y1(x0) y2(x0) ; y1′(x0) y2′(x0) ] [ c1 ; c2 ] = 0,
the coefficient matrix is singular.
Choose a non-zero [ c1 ; c2 ] and frame y(x) = c1 y1 + c2 y2, satisfying the IVP
y′′ + Py′ + Qy = 0, y(x0) = 0, y′(x0) = 0.
Therefore, y(x) = 0 ⇒ y1 and y2 are linearly dependent.

Mathematical Methods in Engineering and Science Second Order Linear Homogeneous ODE's 380, Theory of the Homogeneous Equations

Pick a candidate solution Y(x), choose a point x0, evaluate the functions y1, y2, Y and their derivatives at that point, and frame

[ y1(x0) y2(x0) ; y1′(x0) y2′(x0) ] [ C1 ; C2 ] = [ Y(x0) ; Y′(x0) ]
and ask for the solution [ C1 ; C2 ].
Unique solution for C1, C2. Hence, the particular solution

y∗(x) = C1 y1(x) + C2 y2(x)

is the “unique” solution of the IVP

y ′′ + Py ′ + Qy = 0, y(x0)= Y (x0), y ′(x0)= Y ′(x0).

But that is the candidate function Y(x)! Hence, Y(x) = y∗(x).

Mathematical Methods in Engineering and Science Second Order Linear Homogeneous ODE's 381, Basis for Solutions

For completely describing the solutions, we need two linearly independent solutions.
No guaranteed procedure to identify two basis members!

If one solution y1(x) is available, then to find another? Reduction of order Assume the second solution as

y2(x)= u(x)y1(x)

and determine u(x) such that y2(x) satisfies the ODE.

u′′y1 + 2u′y1′ + uy1′′ + P(u′y1 + uy1′ )+ Quy1 = 0

⇒ u′′y1 + 2u′y1′ + Pu′y1 + u(y1′′ + Py1′ + Qy1) = 0.

Since y1′′ + Py1′ + Qy1 = 0, we have y1u′′ + (2y1′ + Py1)u′ = 0.

Mathematical Methods in Engineering and Science Second Order Linear Homogeneous ODE's 382, Basis for Solutions

Denoting u′ = U, U′ + (2y1′/y1 + P)U = 0.
Rearrangement and integration of the reduced equation:

dU/U + 2 dy1/y1 + P dx = 0 ⇒ U y1² e^{∫P dx} = C = 1 (choose).

Then,
u′ = U = (1/y1²) e^{−∫P dx}.
Integrating,
u(x) = ∫ (1/y1²) e^{−∫P dx} dx,
and
y2(x) = y1(x) ∫ (1/y1²) e^{−∫P dx} dx.
Note: The factor u(x) is never constant!
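A symbolic spot-check of the reduction-of-order formula on a hypothetical example, y′′ − 2y′ + y = 0 with y1 = e^x, assuming SymPy:

```python
# With P = -2 and y1 = e^x, the formula should return the known
# second solution y2 = x e^x.
import sympy as sp

x = sp.symbols('x')
P = -2
y1 = sp.exp(x)
u = sp.integrate(sp.exp(-sp.integrate(P, x)) / y1**2, x)
print(sp.simplify(u * y1))          # x*exp(x)
```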

Mathematical Methods in Engineering and Science Second Order Linear Homogeneous ODE's 383, Basis for Solutions

Function space perspective:
The operator D means differentiation; it operates on an infinite-dimensional function space as a linear transformation.
◮ It maps all constant functions to zero.
◮ It has a one-dimensional null space.
The second derivative D² is an operator that has a two-dimensional null space, c1 + c2 x, with basis {1, x}.
Examples of composite operators:
◮ (D + a) has the null space c e^{−ax}.
◮ (xD + a) has the null space c x^{−a}.
A second order linear operator D² + P(x)D + Q(x) possesses a two-dimensional null space.
◮ Solution of [D² + P(x)D + Q(x)]y = 0: description of the null space, or a basis for it.
◮ Analogous to the solution of Ax = 0, i.e. development of a basis for Null(A).

Mathematical Methods in Engineering and Science Second Order Linear Homogeneous ODE's 384, Points to note

◮ Second order linear homogeneous ODE's
◮ Wronskian and related results
◮ Solution basis
◮ Reduction of order
◮ Null space of a differential operator

Necessary Exercises: 1, 2, 3, 7, 8

Mathematical Methods in Engineering and Science Second Order Linear Non-Homogeneous ODE's 385, Outline

Second Order Linear Non-Homogeneous ODE's:
Linear ODE's and Their Solutions
Method of Undetermined Coefficients
Method of Variation of Parameters
Closure

Mathematical Methods in Engineering and Science Second Order Linear Non-Homogeneous ODE's 386, Linear ODE's and Their Solutions

The Complete Analogy

Table: Linear systems and mappings: algebraic and differential

In ordinary vector space                | In infinite-dimensional function space
Ax = b                                  | y′′ + Py′ + Qy = R
The system is consistent.               | P(x), Q(x), R(x) are continuous.
A solution x∗                           | A solution y_p(x)
Alternative solution: x̄                 | Alternative solution: ȳ(x)
x̄ − x∗ satisfies Ax = 0,                | ȳ(x) − y_p(x) satisfies y′′ + Py′ + Qy = 0,
  is in the null space of A.            |   is in the null space of D² + P(x)D + Q(x).
Complete solution:                      | Complete solution:
  x = x∗ + Σ_i c_i (x0)_i               |   y_p(x) + Σ_i c_i y_i(x)
Methodology:                            | Methodology:
  Find the null space of A,             |   Find the null space of D² + P(x)D + Q(x),
  i.e. basis members (x0)_i.            |   i.e. basis members y_i(x).
  Find x∗ and compose.                  |   Find y_p(x) and compose.

Mathematical Methods in Engineering and Science Second Order Linear Non-Homogeneous ODE's 387, Linear ODE's and Their Solutions

Procedure to solve y′′ + P(x)y′ + Q(x)y = R(x):
1. First, solve the corresponding homogeneous equation, obtain a basis with two solutions and construct

yh(x)= c1y1(x)+ c2y2(x).

2. Next, find one particular solution yp(x) of the NHE and compose the complete solution

y(x)= yh(x)+ yp(x)= c1y1(x)+ c2y2(x)+ yp(x). 3. If some initial or boundary conditions are known, they can be imposed now to determine c1 and c2. Caution: If y1 and y2 are two solutions of the NHE, then do not expect c1y1 + c2y2 to satisfy the equation. Implication of linearity or superposition:

With zero initial conditions, if y1 and y2 are responses due to inputs R1(x) and R2(x), respectively, then the response due to input c1R1 + c2R2 is c1y1 + c2y2.

Mathematical Methods in Engineering and Science Second Order Linear Non-Homogeneous ODE's 388, Method of Undetermined Coefficients

y′′ + ay′ + by = R(x)
◮ What kind of function to propose as y_p(x) if R(x) = x^n?
◮ And what if R(x) = e^{λx}?
◮ If R(x) = x^n + e^{λx}, i.e. in the form k1R1(x) + k2R2(x)?
The principle of superposition (linearity)

Table: Candidate solutions for linear non-homogeneous ODE’s

RHS function R(x)                              | Candidate solution y_p(x)
p_n(x)                                         | q_n(x)
e^{λx}                                         | k e^{λx}
cos ωx or sin ωx                               | k1 cos ωx + k2 sin ωx
e^{λx} cos ωx or e^{λx} sin ωx                 | k1 e^{λx} cos ωx + k2 e^{λx} sin ωx
p_n(x) e^{λx}                                  | q_n(x) e^{λx}
p_n(x) cos ωx or p_n(x) sin ωx                 | q_n(x) cos ωx + r_n(x) sin ωx
p_n(x) e^{λx} cos ωx or p_n(x) e^{λx} sin ωx   | q_n(x) e^{λx} cos ωx + r_n(x) e^{λx} sin ωx

Mathematical Methods in Engineering and Science Second Order Linear Non-Homogeneous ODE's 389, Method of Undetermined Coefficients

Example:
(a) y′′ − 6y′ + 5y = e^{3x}
(b) y′′ − 5y′ + 6y = e^{3x}
(c) y′′ − 6y′ + 9y = e^{3x}
In each case, the first official proposal: y_p = k e^{3x}
(a) y(x) = c1 e^x + c2 e^{5x} − e^{3x}/4
(b) y(x) = c1 e^{2x} + c2 e^{3x} + x e^{3x}
(c) y(x) = c1 e^{3x} + c2 x e^{3x} + (1/2) x² e^{3x}
Modification rule
◮ If the candidate function (k e^{λx}, k1 cos ωx + k2 sin ωx or k1 e^{λx} cos ωx + k2 e^{λx} sin ωx) is a solution of the corresponding HE, with λ, ±iω or λ ± iω (respectively) satisfying the auxiliary equation, then modify it by multiplying with x.
◮ In the case of λ being a double root, i.e. both e^{λx} and x e^{λx} being solutions of the HE, choose y_p = k x² e^{λx}.
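Checking the modification rule on case (b) with SymPy (assumed available): 3 is a simple root of λ² − 5λ + 6, so the candidate k e^{3x} must be bumped to k x e^{3x}:

```python
# y'' - 5y' + 6y = e^{3x}: SymPy's dsolve should show the x e^{3x}
# particular term predicted by the modification rule.
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')
ode = sp.Eq(y(x).diff(x, 2) - 5*y(x).diff(x) + 6*y(x), sp.exp(3*x))
print(sp.dsolve(ode))   # e.g. y(x) = C1*exp(2*x) + (C2 + x)*exp(3*x)
```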

Mathematical Methods in Engineering and Science Second Order Linear Non-Homogeneous ODE's 390, Method of Variation of Parameters

Solution of the HE:
y_h(x) = c1 y1(x) + c2 y2(x),

in which c1 and c2 are constant ‘parameters’.

For solution of the NHE, how about ‘variable parameters’? Propose yp(x)= u1(x)y1(x)+ u2(x)y2(x)

and force yp(x) to satisfy the ODE.

A single second order ODE in u1(x) and u2(x). We need one more condition to fix them.

Mathematical Methods in Engineering and Science Second Order Linear Non-Homogeneous ODE's 391, Method of Variation of Parameters

From y_p = u1y1 + u2y2,

yp′ = u1′ y1 + u1y1′ + u2′ y2 + u2y2′ .

Condition u1′ y1 + u2′ y2 = 0 gives

yp′ = u1y1′ + u2y2′ . Differentiating,

yp′′ = u1′ y1′ + u2′ y2′ + u1y1′′ + u2y2′′. Substitution into the ODE:

u1′ y1′ +u2′ y2′ +u1y1′′+u2y2′′+P(x)(u1y1′ +u2y2′ )+Q(x)(u1y1+u2y2)= R(x) Rearranging,

u1′ y1′ +u2′ y2′ +u1(y1′′+P(x)y1′ +Q(x)y1)+u2(y2′′+P(x)y2′ +Q(x)y2)= R(x).

As y1 and y2 satisfy the associated HE,
u1′ y1′ + u2′ y2′ = R(x)

Mathematical Methods in Engineering and Science Second Order Linear Non-Homogeneous ODE's 392, Method of Variation of Parameters

[ y1 y2 ; y1′ y2′ ] [ u1′ ; u2′ ] = [ 0 ; R ]
Since the Wronskian is non-zero, this system has a unique solution:

u1′ = −y2R/W and u2′ = y1R/W.
Direct quadrature:

u1(x) = −∫ y2(x)R(x) / W[y1(x), y2(x)] dx and u2(x) = ∫ y1(x)R(x) / W[y1(x), y2(x)] dx
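These quadratures transcribe directly into a few lines of SymPy (our sketch; the classic test case y′′ + y = sec x is one that undetermined coefficients cannot handle):

```python
# Variation of parameters for y'' + P y' + Q y = R, given a basis
# {y1, y2} of the homogeneous equation.
import sympy as sp

def variation_of_parameters(y1, y2, R, x):
    W = sp.simplify(y1*y2.diff(x) - y2*y1.diff(x))   # Wronskian
    u1 = sp.integrate(-y2*R/W, x)
    u2 = sp.integrate(y1*R/W, x)
    return sp.simplify(u1*y1 + u2*y2)

x = sp.symbols('x')
yp = variation_of_parameters(sp.cos(x), sp.sin(x), sp.sec(x), x)
print(yp)   # cos(x)*log(cos(x)) + x*sin(x)
```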

In contrast to the method of undetermined coefficients, variation of parameters is general: it is applicable for all continuous functions P(x), Q(x) and R(x).

Mathematical Methods in Engineering and Science Second Order Linear Non-Homogeneous ODE's 393, Points to note

◮ Function space perspective of linear ODE’s ◮ Method of undetermined coefficients ◮ Method of variation of parameters

Necessary Exercises: 1, 3, 5, 6

Mathematical Methods in Engineering and Science Higher Order Linear ODE's 394, Outline

Higher Order Linear ODE's:
Theory of Linear ODE's
Homogeneous Equations with Constant Coefficients
Non-Homogeneous Equations
Euler-Cauchy Equation of Higher Order

Mathematical Methods in Engineering and Science Higher Order Linear ODE's 395, Theory of Linear ODE's

y^{(n)} + P1(x)y^{(n−1)} + P2(x)y^{(n−2)} + ··· + P_{n−1}(x)y′ + P_n(x)y = R(x)
General solution: y(x) = y_h(x) + y_p(x), where
◮ y_p(x): a particular solution
◮ y_h(x): general solution of the corresponding HE
y^{(n)} + P1(x)y^{(n−1)} + P2(x)y^{(n−2)} + ··· + P_{n−1}(x)y′ + P_n(x)y = 0
For the HE, suppose we have n solutions y1(x), y2(x), ···, y_n(x).
Assemble the state vectors in the matrix

Y(x) = [ y1 y2 ··· y_n ; y1′ y2′ ··· y_n′ ; y1′′ y2′′ ··· y_n′′ ; ··· ; y1^{(n−1)} y2^{(n−1)} ··· y_n^{(n−1)} ]
Wronskian:
W(y1, y2, ···, y_n) = det[Y(x)]

Mathematical Methods in Engineering and Science Higher Order Linear ODE's 396, Theory of Linear ODE's

◮ If solutions y1(x), y2(x), ···, y_n(x) of the HE are linearly dependent, then for a non-zero k ∈ R^n,
Σ_{i=1}^n k_i y_i(x) = 0 ⇒ Σ_{i=1}^n k_i y_i^{(j)}(x) = 0 for j = 1, 2, 3, ···, (n−1)
⇒ [Y(x)]k = 0 ⇒ [Y(x)] is singular ⇒ W[y1(x), y2(x), ···, y_n(x)] = 0.
◮ If the Wronskian is zero at x = x0, then Y(x0) is singular and a non-zero k ∈ Null[Y(x0)] gives Σ_{i=1}^n k_i y_i(x) = 0, implying y1(x), y2(x), ···, y_n(x) to be linearly dependent.
◮ Zero Wronskian at some x = x0 implies zero Wronskian everywhere. Non-zero Wronskian at some x = x1 ensures non-zero Wronskian everywhere and the corresponding solutions as linearly independent.
◮ With n linearly independent solutions y1(x), y2(x), ···, y_n(x) of the HE, we have its general solution y_h(x) = Σ_{i=1}^n c_i y_i(x), acting as the complementary function for the NHE.

y^{(n)} + a1 y^{(n−1)} + a2 y^{(n−2)} + ··· + a_{n−1} y′ + a_n y = 0
With the trial solution y = e^{λx}, the auxiliary equation:

λ^n + a1 λ^{n−1} + a2 λ^{n−2} + ··· + a_{n−1} λ + a_n = 0

Construction of the basis:
1. For every simple real root λ = γ, e^{γx} is a solution.
2. For every simple pair of complex roots λ = µ ± iω, e^{µx} cos ωx and e^{µx} sin ωx are linearly independent solutions.
3. For every real root λ = γ of multiplicity r, e^{γx}, x e^{γx}, x² e^{γx}, ···, x^{r−1} e^{γx} are all linearly independent solutions.
4. For every complex pair of roots λ = µ ± iω of multiplicity r, e^{µx} cos ωx, e^{µx} sin ωx, x e^{µx} cos ωx, x e^{µx} sin ωx, ···, x^{r−1} e^{µx} cos ωx, x^{r−1} e^{µx} sin ωx are the required solutions.

Mathematical Methods in Engineering and Science Higher Order Linear ODE's 398, Non-Homogeneous Equations

Method of undetermined coefficients:
y^{(n)} + a1 y^{(n−1)} + a2 y^{(n−2)} + ··· + a_{n−1} y′ + a_n y = R(x)
Extension of the second order case.
Method of variation of parameters:
y_p(x) = Σ_{i=1}^n u_i(x) y_i(x)
Imposed conditions and the resulting derivatives:
Σ_{i=1}^n u_i′(x) y_i(x) = 0 ⇒ y_p′(x) = Σ_{i=1}^n u_i(x) y_i′(x)
Σ_{i=1}^n u_i′(x) y_i′(x) = 0 ⇒ y_p′′(x) = Σ_{i=1}^n u_i(x) y_i′′(x)
··· ··· ···
Σ_{i=1}^n u_i′(x) y_i^{(n−2)}(x) = 0 ⇒ y_p^{(n−1)}(x) = Σ_{i=1}^n u_i(x) y_i^{(n−1)}(x)
Finally, y_p^{(n)}(x) = Σ_{i=1}^n u_i′(x) y_i^{(n−1)}(x) + Σ_{i=1}^n u_i(x) y_i^{(n)}(x)
⇒ Σ_{i=1}^n u_i′(x) y_i^{(n−1)}(x) + Σ_{i=1}^n u_i(x) [ y_i^{(n)} + P1 y_i^{(n−1)} + ··· + P_n y_i ] = R(x)

Mathematical Methods in Engineering and Science Higher Order Linear ODE's 399, Non-Homogeneous Equations

Since each y_i(x) is a solution of the HE,
Σ_{i=1}^n u_i′(x) y_i^{(n−1)}(x) = R(x).
Assembling all the conditions on u′(x) together,

[Y(x)] u′(x) = e_n R(x).
Since Y^{-1} = adj Y / det(Y),
u′(x) = [adj Y(x)] e_n R(x) / det[Y(x)] = (R(x)/W(x)) [last column of adj Y(x)].
Using cofactors of elements from the last row only,

u_i′(x) = (W_i(x)/W(x)) R(x),

with Wi (x) = Wronskian evaluated with en in place of i-th column.

u_i(x) = ∫ W_i(x)R(x)/W(x) dx

Mathematical Methods in Engineering and Science Higher Order Linear ODE's 400, Points to note

◮ Wronskian for a higher order ODE ◮ General theory of linear ODE's ◮ Variation of parameters for the n-th order ODE

Necessary Exercises: 1, 3, 4

Mathematical Methods in Engineering and Science Laplace Transforms 401, Outline

Laplace Transforms:
Introduction
Basic Properties and Results
Application to Differential Equations
Handling Discontinuities
Convolution
Advanced Issues

Mathematical Methods in Engineering and Science Laplace Transforms 402, Introduction

Classical perspective:
◮ Entire differential equation is known in advance.
◮ Go for a complete solution first.
◮ Afterwards, use the initial (or other) conditions.

A practical situation ◮ You have a plant ◮ intrinsic dynamic model as well as the starting conditions. ◮ You may drive the plant with different kinds of inputs on different occasions.

Implication ◮ Left-hand side of the ODE and the initial conditions are known a priori. ◮ Right-hand side, R(x), changes from task to task. Mathematical Methods in Engineering and Science Laplace Transforms 403, Introduction Introduction Basic Properties and Results Application to Differential Equations Handling Discontinuities Another question: What if R(x) is not continuous?Convolution Advanced Issues ◮ When power is switched on or off, what happens? ◮ If there is a sudden voltage fluctuation, what happens to the equipment connected to the power line? Or, does “anything” happen in the immediate future? “Something” certainly happens. The IVP has a solution! Laplace transforms provide a tool to find the solution, in spite of the discontinuity of R(x).

Integral transform:

T[f(t)](s) = ∫_a^b K(s, t) f(t) dt
s: frequency variable
K(s, t): kernel of the transform
Note: T[f(t)] is a function of s, not t.

Mathematical Methods in Engineering and Science Laplace Transforms 404, Introduction

With the kernel function K(s, t) = e^{−st}, and limits a = 0, b = ∞:
Laplace transform

F(s) = L{f(t)} = ∫_0^∞ e^{−st} f(t) dt = lim_{b→∞} ∫_0^b e^{−st} f(t) dt
When this integral exists, f(t) has its Laplace transform.

Sufficient condition: ◮ f (t) is piecewise continuous, and ◮ it is of exponential order, i.e. f (t) < Mect for some (finite) | | M and c.

Inverse Laplace transform:

f(t) = L^{-1}{F(s)}

Mathematical Methods in Engineering and Science Laplace Transforms 405, Basic Properties and Results

Linearity:
L{a f(t) + b g(t)} = a L{f(t)} + b L{g(t)}
First shifting property, or the frequency shifting rule: L{e^{at} f(t)} = F(s − a)
Laplace transforms of some elementary functions:
L(1) = ∫_0^∞ e^{−st} dt = [−e^{−st}/s]_0^∞ = 1/s,
L(t) = ∫_0^∞ e^{−st} t dt = [−t e^{−st}/s]_0^∞ + (1/s) ∫_0^∞ e^{−st} dt = 1/s²,
L(t^n) = n!/s^{n+1} (for positive integer n),
L(t^a) = Γ(a+1)/s^{a+1} (for a ∈ R⁺),
and L(e^{at}) = 1/(s − a).

Mathematical Methods in Engineering and Science Laplace Transforms 406, Basic Properties and Results

L(cos ωt) = s/(s² + ω²), L(sin ωt) = ω/(s² + ω²);
L(cosh at) = s/(s² − a²), L(sinh at) = a/(s² − a²);
L(e^{µt} cos ωt) = (s − µ)/((s − µ)² + ω²), L(e^{µt} sin ωt) = ω/((s − µ)² + ω²).
Laplace transform of a derivative:

L{f′(t)} = ∫_0^∞ e^{−st} f′(t) dt = [e^{−st} f(t)]_0^∞ + s ∫_0^∞ e^{−st} f(t) dt = s L{f(t)} − f(0)
Using this process recursively,
L{f^{(n)}(t)} = s^n L{f(t)} − s^{n−1} f(0) − s^{n−2} f′(0) − ··· − f^{(n−1)}(0).
For the integral g(t) = ∫_0^t f(t)dt, g(0) = 0, and
L{g′(t)} = s L{g(t)} − g(0) = s L{g(t)} ⇒ L{g(t)} = (1/s) L{f(t)}.

Mathematical Methods in Engineering and Science Laplace Transforms 407, Application to Differential Equations

Example: initial value problem of a linear constant coefficient ODE

y′′ + ay′ + by = r(t), y(0) = K0, y′(0) = K1
Laplace transforms of both sides of the ODE:
s²Y(s) − s y(0) − y′(0) + a[sY(s) − y(0)] + bY(s) = R(s)
⇒ (s² + as + b)Y(s) = (s + a)K0 + K1 + R(s)
A differential equation in y(t) has been converted to an algebraic equation in Y(s).

Transfer function: the ratio of the Laplace transform of the output function y(t) to that of the input function r(t), with zero initial conditions:
Q(s) = Y(s)/R(s) = 1/(s² + as + b) (in this case)

Y(s) = [(s + a)K0 + K1] Q(s) + Q(s)R(s)
Solution of the given IVP: y(t) = L^{-1}{Y(s)}
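The whole algebraic route, sketched on the hypothetical IVP y′′ + 3y′ + 2y = e^{−3t}, y(0) = 1, y′(0) = 0 (SymPy assumed):

```python
# Form Y(s) = [(s + a)K0 + K1]Q(s) + Q(s)R(s) and invert it.
import sympy as sp

s, t = sp.symbols('s t', positive=True)
a, b, K0, K1 = 3, 2, 1, 0
R = sp.laplace_transform(sp.exp(-3*t), t, s, noconds=True)   # 1/(s + 3)
Y = ((s + a)*K0 + K1 + R) / (s**2 + a*s + b)
print(sp.inverse_laplace_transform(Y, s, t))
# a combination of e^{-t}, e^{-2t}, e^{-3t}; check y(0) = 1, y'(0) = 0
```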

Mathematical Methods in Engineering and Science Laplace Transforms 408, Handling Discontinuities

Unit step function:
u(t − a) = 0 if t < a; 1 if t > a
Its Laplace transform:
L{u(t − a)} = ∫_0^∞ e^{−st} u(t − a) dt = ∫_0^a 0 dt + ∫_a^∞ e^{−st} dt = e^{−as}/s
For an input f(t) with a time delay,
f(t − a)u(t − a) = 0 if t < a; f(t − a) if t > a
has its Laplace transform as
L{f(t − a)u(t − a)} = ∫_a^∞ e^{−st} f(t − a) dt = ∫_0^∞ e^{−s(a+τ)} f(τ) dτ = e^{−as} L{f(t)}.
Second shifting property, or the time shifting rule

Mathematical Methods in Engineering and Science Laplace Transforms 409, Handling Discontinuities

Define
f_k(t − a) = 1/k if a ≤ t ≤ a + k; 0 otherwise
           = (1/k) u(t − a) − (1/k) u(t − a − k)

[Figure: Step and impulse functions: (a) unit step function u(t − a), (b) composition (1/k)u(t − a) − (1/k)u(t − a − k), (c) function f_k(t − a), (d) Dirac's δ-function δ(t − a)]

Note that its integral
I_k = ∫_0^∞ f_k(t − a) dt = ∫_a^{a+k} (1/k) dt = 1
does not depend on k.

Mathematical Methods in Engineering and Science Laplace Transforms 410, Handling Discontinuities

In the limit,

δ(t − a) = lim_{k→0} f_k(t − a)
or, δ(t − a) = ∞ if t = a, 0 otherwise; and ∫_0^∞ δ(t − a) dt = 1.
Unit impulse function or Dirac's delta function

L{δ(t − a)} = lim_{k→0} (1/k) [L{u(t − a)} − L{u(t − a − k)}] = lim_{k→0} (e^{−as} − e^{−(a+k)s})/(ks) = e^{−as}

Through step and impulse functions, the Laplace transform method can handle IVP's with discontinuous inputs.

Mathematical Methods in Engineering and Science Laplace Transforms 411, Convolution

A generalized product of two functions:
h(t) = f(t) ∗ g(t) = ∫_0^t f(τ) g(t − τ) dτ
Laplace transform of the convolution:
H(s) = ∫_0^∞ e^{−st} [∫_0^t f(τ) g(t − τ) dτ] dt = ∫_0^∞ f(τ) [∫_τ^∞ e^{−st} g(t − τ) dt] dτ

[Figure: Region of integration for L{h(t)}: (a) original order, (b) changed order]

Mathematical Methods in Engineering and Science Laplace Transforms 412, Convolution

Through the substitution t′ = t − τ,

H(s) = ∫_0^∞ f(τ) ∫_0^∞ e^{−s(t′+τ)} g(t′) dt′ dτ = ∫_0^∞ f(τ) e^{−sτ} [∫_0^∞ e^{−st′} g(t′) dt′] dτ
H(s) = F(s)G(s)
Convolution theorem: the Laplace transform of the convolution integral of two functions is given by the product of the Laplace transforms of the two functions.
Utilities:
◮ To invert Q(s)R(s), one can convolute y(t) = q(t) ∗ r(t).
◮ In solving some integral equations.
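A numerical spot-check of H(s) = F(s)G(s), with f(t) = e^{−t}, g(t) = sin t at the single point s = 2; the example and the SciPy usage are our assumptions:

```python
# L{f*g}(2) by brute-force double quadrature vs. F(2)G(2) = (1/3)(1/5).
import numpy as np
from scipy.integrate import quad

f = lambda t: np.exp(-t)
g = lambda t: np.sin(t)
conv = lambda t: quad(lambda tau: f(tau)*g(t - tau), 0, t)[0]

s = 2.0
H = quad(lambda t: np.exp(-s*t)*conv(t), 0, 40)[0]
print(H, (1/(s + 1))*(1/(s**2 + 1)))   # both approx 1/15 = 0.0667
```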

Mathematical Methods in Engineering and Science Laplace Transforms 413, Points to note

◮ A paradigm shift in the solution of IVP's
◮ Handling discontinuous input functions
◮ Extension to ODE systems
◮ The idea of integral transforms

Necessary Exercises: 1, 2, 4

Mathematical Methods in Engineering and Science ODE Systems 414, Outline

ODE Systems:
Fundamental Ideas
Linear Homogeneous Systems with Constant Coefficients
Linear Non-Homogeneous Systems
Nonlinear Systems

Mathematical Methods in Engineering and Science ODE Systems 415, Fundamental Ideas

y′ = f(t, y) Solution: a vector function y = h(t)

Autonomous system: y′ = f(y) ◮ Points in y-space where f(y) = 0: equilibrium points or critical points

System of linear ODE’s:

y′ = A(t)y + g(t)

◮ autonomous systems if A and g are constant ◮ homogeneous systems if g(t) = 0 ◮ homogeneous constant coefficient systems if A is constant and g(t) = 0 Mathematical Methods in Engineering and Science ODE Systems 416, Fundamental Ideas Fundamental Ideas Linear Homogeneous Systems with Constant Coefficients Linear Non-Homogeneous Systems Nonlinear Systems For a homogeneous system,

y′ = A(t)y

◮ Wronskian: W(y1, y2, y3, ···, y_n) = |y1 y2 y3 ··· y_n|
If the Wronskian is non-zero, then

◮ Fundamental matrix: Y(t) = [y1 y2 y3 ··· y_n],
giving a basis.

General solution:
y(t) = Σ_{i=1}^n c_i y_i(t) = [Y(t)] c

Mathematical Methods in Engineering and Science ODE Systems 417, Linear Homogeneous Systems with Constant Coefficients

y′ = Ay
Non-degenerate case: matrix A non-singular
◮ Origin (y = 0) is the unique equilibrium point.
Attempt y = x e^{λt} ⇒ y′ = λ x e^{λt}.
Substitution: A x e^{λt} = λ x e^{λt} ⇒ Ax = λx
If A is diagonalizable,
◮ n linearly independent solutions y_i = x_i e^{λ_i t} corresponding to the n eigenpairs
If A is not diagonalizable?
All the x_i e^{λ_i t} together will not complete the basis. Try y = x t e^{µt}?
Substitution leads to

x e^{µt} + µ x t e^{µt} = A x t e^{µt} ⇒ x e^{µt} = 0 ⇒ x = 0. Absurd!

Mathematical Methods in Engineering and Science ODE Systems 418, Linear Homogeneous Systems with Constant Coefficients

Try a linearly independent solution in the form y = x t e^{µt} + u e^{µt}.

Linear independence here has two implications: in function space AND in ordinary vector space!
Substitution: x e^{µt} + µ x t e^{µt} + µ u e^{µt} = A x t e^{µt} + A u e^{µt} ⇒ (A − µI)u = x
Solve for u, the generalized eigenvector of A.
For Jordan blocks of larger sizes,
y1 = x e^{µt}, y2 = x t e^{µt} + u1 e^{µt}, y3 = (1/2) x t² e^{µt} + u1 t e^{µt} + u2 e^{µt}, etc.

Jordan canonical form (JCF) of A provides a set of basis functions to describe the complete solution of the ODE system.
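For constant A the whole basis is packaged in the matrix exponential, y(t) = e^{At} y(0); a SciPy sketch on the hypothetical deficient matrix [[1, 1], [0, 1]], whose lone eigenvector forces exactly the t e^{t} terms above:

```python
# e^{At} for this Jordan block equals e^t [[1, t], [0, 1]], so
# y(0.5) should be [e^{0.5}(1 + 0.5), e^{0.5}] = [2.473, 1.649].
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 1.0], [0.0, 1.0]])
y0 = np.array([1.0, 1.0])
print(expm(A*0.5) @ y0)
```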

Mathematical Methods in Engineering and Science ODE Systems 419, Linear Non-Homogeneous Systems

y′ = Ay + g(t)
Complementary function:
y_h(t) = Σ_{i=1}^n c_i y_i(t) = [Y(t)]c
Complete solution:

y(t)= yh(t)+ yp(t)

We need to develop one particular solution yp.

Method of undetermined coefficients Based on g(t), select candidate function Gk (t) and propose

y_p = Σ_k u_k G_k(t),
with the vector coefficients u_k to be determined by substitution.

Mathematical Methods in Engineering and Science ODE Systems 420, Linear Non-Homogeneous Systems

Method of diagonalization:
If A is a diagonalizable constant matrix, with X^{-1}AX = D, change variables to z = X^{-1}y, such that y = Xz.

Xz′ = AXz + g(t) ⇒ z′ = X^{-1}AXz + X^{-1}g(t) = Dz + h(t) (say).

Decoupled scalar Leibnitz equations

z_k′ = d_k z_k + h_k(t), k = 1, 2, 3, ···, n;
leading to the individual solutions

z_k(t) = c_k e^{d_k t} + e^{d_k t} ∫ e^{−d_k t} h_k(t) dt.
After assembling z(t), we reconstruct y = Xz.
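A numeric sketch of the decoupling for a constant forcing (the matrix and input are hypothetical); for constant h_k the steady part of each scalar equation is −h_k/d_k, so X z_p must equal −A⁻¹g:

```python
# Decouple y' = Ay + g via eigen-decomposition and recover the
# particular solution; compare with -inv(A) @ g.
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
g = np.array([0.0, 1.0])
lam, X = np.linalg.eig(A)           # columns of X: eigenvectors
h = np.linalg.solve(X, g)           # h = X^{-1} g
z_p = -h / lam                      # steady state of z_k' = d_k z_k + h_k
print(X @ z_p, -np.linalg.solve(A, g))   # both [0.5, 0]
```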

Mathematical Methods in Engineering and Science ODE Systems 421, Linear Non-Homogeneous Systems

Method of variation of parameters:
If we can supply a basis Y(t) of the complementary function y_h(t), then we propose y_p(t) = [Y(t)]u(t).
Substitution leads to
Y′u + Yu′ = AYu + g.
Since Y′ = AY,
Yu′ = g, or u′ = [Y]^{-1}g.
Complete solution:

y(t) = y_h + y_p = [Y]c + [Y] ∫ [Y]^{-1} g dt
This method is completely general.

Mathematical Methods in Engineering and Science ODE Systems 422, Points to note

◮ Theory of ODE’s in terms of vector functions ◮ Methods to find ◮ complementary functions in the case of constant coefficients ◮ particular solutions for all cases

Necessary Exercises: 1

Mathematical Methods in Engineering and Science Stability of Dynamic Systems 423, Outline

Stability of Dynamic Systems:
Second Order Linear Systems
Nonlinear Dynamic Systems
Lyapunov Stability Analysis

Mathematical Methods in Engineering and Science Stability of Dynamic Systems 424, Second Order Linear Systems

A system of two first order linear differential equations:

y1′ = a11y1 + a12y2

y2′ = a21y1 + a22y2

or, y′ = Ay

Phase: a pair of values of y1 and y2 Phase plane: plane of y1 and y2 Trajectory: a curve showing the evolution of the system for a particular initial value problem Phase portrait: all trajectories together showing the complete picture of the behaviour of the dynamic system

Allowing only isolated equilibrium points,
◮ matrix A is non-singular: the origin is the only equilibrium point.
Eigenvalues of A: λ² − (a11 + a22)λ + (a11a22 − a12a21) = 0

Mathematical Methods in Engineering and Science Stability of Dynamic Systems 425, Second Order Linear Systems

Characteristic equation:

λ² − pλ + q = 0,
with p = a11 + a22 = λ1 + λ2 and q = a11a22 − a12a21 = λ1λ2
Discriminant D = p² − 4q and
λ_{1,2} = p/2 ± √((p/2)² − q) = (p ± √D)/2.
Solution (for diagonalizable A):

y = c1 x1 e^{λ1 t} + c2 x2 e^{λ2 t}

Solution for deficient A:

λt λt y = c1x1e + c2(tx1 + u)e λt λt λt y′ = c λx e + c (x + λu)e + λtc x e ⇒ 1 1 2 1 2 1 Mathematical Methods in Engineering and Science Stability of Dynamic Systems 426, Second Order Linear Systems Second Order Linear Systems Nonlinear Dynamic Systems Lyapunov Stability Analysis

[Figure: Neighbourhood of critical points: (a) saddle point, (b) centre, (c) spiral, (d) improper node, (e) proper node, (f) degenerate node]

Mathematical Methods in Engineering and Science Stability of Dynamic Systems 427, Second Order Linear Systems

Table: Critical points of linear systems

Type        Sub-type     Eigenvalues                          Position in p-q chart           Stability
Saddle pt                real, opposite signs                 q < 0                           unstable
Centre                   pure imaginary                       q > 0, p = 0                    stable
Spiral                   complex, both components non-zero    q > 0, p ≠ 0, D = p² − 4q < 0   stable if p < 0,
Node                     real, same sign                      q > 0, p ≠ 0, D ≥ 0             unstable if p > 0
  improper               unequal in magnitude                 D > 0
  proper                 equal, diagonalizable                D = 0
  degenerate             equal, deficient                     D = 0
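The table reduces to a few comparisons on p, q and D; a plain helper of our own:

```python
# Classify the (isolated) critical point at the origin of y' = Ay.
import numpy as np

def classify(A):
    p, q = np.trace(A), np.linalg.det(A)
    D = p*p - 4*q
    if q < 0:
        return "saddle point (unstable)"
    if p == 0:
        return "centre (stable)"
    kind = "spiral" if D < 0 else "node"
    return f"{kind} ({'stable' if p < 0 else 'unstable'})"

print(classify(np.array([[0.0, 1.0], [-1.0, -1.0]])))   # spiral (stable)
```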

[Figure: Zones of critical points in the p-q chart: saddle points for q < 0; centres on p = 0, q > 0; stable (p < 0) and unstable (p > 0) spirals and nodes for q > 0, with spirals and nodes separated by the parabola q = p²/4]

Mathematical Methods in Engineering and Science Stability of Dynamic Systems 428, Nonlinear Dynamic Systems

Phase plane analysis ◮ Determine all the critical points. ◮ Linearize the ODE system around each of them as

y′ = J(y0)(y − y0).
◮ With z = y − y0, analyze each neighbourhood from z′ = Jz.
◮ Assemble the outcomes of the local phase plane analyses.
'Features' of a dynamic system are typically captured by its critical points and their neighbourhoods.
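A numeric sketch of this local linearization for a hypothetical damped pendulum, y1′ = y2, y2′ = −sin y1 − 0.5 y2, at the critical point (0, 0):

```python
# Eigenvalues of the Jacobian at the origin decide the local picture.
import numpy as np

def jacobian(y):
    return np.array([[0.0, 1.0],
                     [-np.cos(y[0]), -0.5]])

print(np.linalg.eigvals(jacobian(np.zeros(2))))  # approx -0.25 +- 0.968j: stable spiral
```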

Limit cycles ◮ isolated closed trajectories (only in nonlinear systems)

Systems with arbitrary dimension of state space?

Mathematical Methods in Engineering and Science Stability of Dynamic Systems 429, Lyapunov Stability Analysis

Important terms

Stability: If y0 is a critical point of the dynamic system y′ = f(y) and for every ε > 0, ∃ δ > 0 such that
||y(t0) − y0|| < δ ⇒ ||y(t) − y0|| < ε ∀ t > t0,

then y0 is a stable critical point.
If, further, y(t) → y0 as t → ∞, then y0 is said to be asymptotically stable.
Positive definite function: A function V(y), with V(0) = 0, is called positive definite if

V(y) > 0 ∀ y ≠ 0.
Lyapunov function: A positive definite function V(y), having continuous ∂V/∂y_i, with a negative semi-definite rate of change
V′ = [∇V(y)]^T f(y).

Lyapunov’s stability criteria:

Theorem: For a system y′ = f(y) with the origin as a critical point, if there exists a Lyapunov function V(y), then the system is stable at the origin, i.e. the origin is a stable critical point. Further, if V′(y) is negative definite, then it is asymptotically stable.
A generalization of the notion of total energy: negativity of its rate corresponds to trajectories tending to decrease this 'energy'.
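A hedged numeric illustration (the system and V are our choices): for y1′ = −y1 + y2, y2′ = −y1 − y2³ with V = y1² + y2², one finds V′ = −2y1² − 2y2⁴, negative away from the origin:

```python
# Sample V' = (grad V)^T f over random states; all values should be <= 0.
import numpy as np

def f(y):
    return np.array([-y[0] + y[1], -y[0] - y[1]**3])

def V_dot(y):
    return (2*y) @ f(y)          # grad V = 2y for V = y1^2 + y2^2

rng = np.random.default_rng(0)
print(max(V_dot(y) for y in rng.uniform(-2, 2, size=(1000, 2))))  # < 0
```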

Note: Lyapunov’s method becomes particularly important when a linearized model allows no analysis or when its results are suspect.

Caution: It is a one-way criterion only!

Mathematical Methods in Engineering and Science Stability of Dynamic Systems 431, Points to note

◮ Analysis of second order systems ◮ Classification of critical points ◮ Nonlinear systems and local linearization ◮ Phase plane analysis Examples in physics, engineering, economics, biological and social systems

◮ Lyapunov’s method of stability analysis

Necessary Exercises: 1, 2, 3, 4, 5

Mathematical Methods in Engineering and Science Series Solutions and Special Functions 432, Outline

Series Solutions and Special Functions:
Power Series Method
Frobenius' Method
Special Functions Defined as Integrals
Special Functions Arising as Solutions of ODE's

Mathematical Methods in Engineering and Science Series Solutions and Special Functions 433, Power Series Method

Methods to solve an ODE in terms of elementary functions:
◮ restricted in scope
Theory allows study of the properties of solutions!
When elementary methods fail,
◮ gain knowledge about solutions through properties, and
◮ for actual evaluation develop infinite series.
Power series:

y(x) = Σ_{n=0}^∞ a_n x^n = a0 + a1 x + a2 x² + a3 x³ + a4 x⁴ + a5 x⁵ + ···
or in powers of (x − x0).
A simple exercise: Try developing power series solutions in the above form and study their properties for the differential equations

y′′ + y = 0 and 4x² y′′ = y.

Mathematical Methods in Engineering and Science Series Solutions and Special Functions 434, Power Series Method

y ′′ + P(x)y ′ + Q(x)y = 0

If P(x) and Q(x) are analytic at a point x = x0, i.e. if they possess convergent series expansions in powers of (x − x0) with some radius of convergence R, then the solution is analytic at x0, and a power series solution
y(x) = a0 + a1(x − x0) + a2(x − x0)² + a3(x − x0)³ + ···
is convergent at least for |x − x0| < R.
For x0 = 0 (without loss of generality), suppose

P(x) = Σ_{n=0}^∞ p_n x^n = p0 + p1 x + p2 x² + p3 x³ + ···,
Q(x) = Σ_{n=0}^∞ q_n x^n = q0 + q1 x + q2 x² + q3 x³ + ···,
and assume y(x) = Σ_{n=0}^∞ a_n x^n.

Mathematical Methods in Engineering and Science Series Solutions and Special Functions 435, Power Series Method

Differentiation of y(x) = Σ_{n=0}^∞ a_n x^n as

y′(x) = Σ_{n=0}^∞ (n+1) a_{n+1} x^n and y′′(x) = Σ_{n=0}^∞ (n+2)(n+1) a_{n+2} x^n
leads to
P(x)y′ = [Σ_{n=0}^∞ p_n x^n][Σ_{n=0}^∞ (n+1) a_{n+1} x^n] = Σ_{n=0}^∞ [Σ_{k=0}^n p_{n−k} (k+1) a_{k+1}] x^n
Q(x)y = [Σ_{n=0}^∞ q_n x^n][Σ_{n=0}^∞ a_n x^n] = Σ_{n=0}^∞ [Σ_{k=0}^n q_{n−k} a_k] x^n
⇒ Σ_{n=0}^∞ { (n+2)(n+1) a_{n+2} + Σ_{k=0}^n [ p_{n−k} (k+1) a_{k+1} + q_{n−k} a_k ] } x^n = 0
Recursion formula:
a_{n+2} = −[1/((n+2)(n+1))] Σ_{k=0}^n [ (k+1) p_{n−k} a_{k+1} + q_{n−k} a_k ]
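The recursion runs mechanically; a small sketch for y′′ + y = 0 (all p_n = 0, q_0 = 1, other q_n = 0) with a0 = 1, a1 = 0, which should reproduce the cosine series:

```python
# Exact-arithmetic run of the recursion formula above.
from fractions import Fraction

N = 8
p = [Fraction(0)] * N                        # P(x) = 0
q = [Fraction(1)] + [Fraction(0)] * (N - 1)  # Q(x) = 1
a = [Fraction(1), Fraction(0)] + [Fraction(0)] * N

for n in range(N):
    s = sum((k + 1)*p[n - k]*a[k + 1] + q[n - k]*a[k] for k in range(n + 1))
    a[n + 2] = -s / ((n + 2)*(n + 1))

print([str(c) for c in a[:8]])  # ['1','0','-1/2','0','1/24','0','-1/720','0']
```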

Mathematical Methods in Engineering and Science Series Solutions and Special Functions 436, Frobenius' Method

For the ODE y′′ + P(x)y′ + Q(x)y = 0, a point x = x0 is

ordinary point if P(x) and Q(x) are analytic at x = x0: power series solution is analytic

singular point if either of the two is non-analytic (singular) at x = x0
◮ regular singularity: (x − x0)P(x) and (x − x0)²Q(x) are analytic at the point
◮ irregular singularity (otherwise)

The case of regular singularity

For x0 = 0, with P(x) = b(x)/x and Q(x) = c(x)/x²,

x² y′′ + x b(x) y′ + c(x) y = 0

in which b(x) and c(x) are analytic at the origin.

Mathematical Methods in Engineering and Science Series Solutions and Special Functions 437, Frobenius' Method

Working steps:
1. Assume the solution in the form y(x) = x^r Σ_{n=0}^∞ a_n x^n.
2. Differentiate to get the series expansions for y′(x) and y′′(x).
3. Substitute these series for y(x), y′(x) and y′′(x) into the given ODE and collect the coefficients of x^r, x^{r+1}, x^{r+2}, etc.
4. Equate the coefficient of x^r to zero to obtain an equation in the index r, called the indicial equation:

r(r − 1) + b0 r + c0 = 0;

allowing a0 to remain arbitrary.

5. For each solution r, equate the other coefficients to zero to obtain a1, a2, a3, etc. in terms of a0.
Note: The need is to develop two solutions.

Mathematical Methods in Engineering and Science Series Solutions and Special Functions 438, Special Functions Defined as Integrals

Gamma function: Γ(n) = ∫_0^∞ e^{−x} x^{n−1} dx, convergent for n > 0.
The recurrence relation Γ(1) = 1, Γ(n+1) = nΓ(n) allows extension of the definition to the entire real line, except for zero and the negative integers.
Γ(n+1) = n! for non-negative integers. (A generalization of the factorial function.)
Beta function: B(m, n) = ∫_0^1 x^{m−1}(1 − x)^{n−1} dx = 2 ∫_0^{π/2} sin^{2m−1}θ cos^{2n−1}θ dθ; m, n > 0.
B(m, n) = B(n, m); B(m, n) = Γ(m)Γ(n)/Γ(m+n)
Error function: erf(x) = (2/√π) ∫_0^x e^{−t²} dt. (Area under the normal or Gaussian distribution)
Sine integral function: Si(x) = ∫_0^x (sin t)/t dt.

Mathematical Methods in Engineering and Science Series Solutions and Special Functions 439, Special Functions Arising as Solutions of ODE's

In the study of some important problems in physics, certain variable-coefficient ODE's appear recurrently, defying analytical solution!
Series solutions ⇒ properties and connections ⇒ further problems ⇒ further solutions ⇒ ···

Table: Special functions of mathematical physics

Name of the ODE          | Form of the ODE                             | Resulting functions
Legendre's equation      | (1 − x²)y′′ − 2xy′ + k(k+1)y = 0            | Legendre functions, Legendre polynomials
Airy's equation          | y′′ ± k²xy = 0                              | Airy functions
Chebyshev's equation     | (1 − x²)y′′ − xy′ + k²y = 0                 | Chebyshev polynomials
Hermite's equation       | y′′ − 2xy′ + 2ky = 0                        | Hermite functions, Hermite polynomials
Bessel's equation        | x²y′′ + xy′ + (x² − k²)y = 0                | Bessel, Neumann and Hankel functions
Gauss's hypergeometric   | x(1 − x)y′′ + [c − (a+b+1)x]y′ − aby = 0    | Hypergeometric function
  equation               |                                             |
Laguerre's equation      | xy′′ + (1 − x)y′ + ky = 0                   | Laguerre polynomials

Mathematical Methods in Engineering and Science Series Solutions and Special Functions 440, Special Functions Arising as Solutions of ODE's

Legendre's equation

(1 − x²)y′′ − 2xy′ + k(k+1)y = 0
P(x) = −2x/(1 − x²) and Q(x) = k(k+1)/(1 − x²) are analytic at x = 0 with radius of convergence R = 1.
x = 0 is an ordinary point, and a power series solution y(x) = Σ_{n=0}^∞ a_n x^n is convergent at least for |x| < 1.
Applying the power series method:
a2 = −[k(k+1)/2!] a0,
a3 = −[(k+2)(k−1)/3!] a1,
and a_{n+2} = −[(k−n)(k+n+1)/((n+2)(n+1))] a_n for n ≥ 2.

Solution: y(x) = a0 y1(x) + a1 y2(x)

Mathematical Methods in Engineering and Science Series Solutions and Special Functions 441, Special Functions Arising as Solutions of ODE's

Legendre functions

y1(x) = 1 − [k(k+1)/2!] x² + [k(k−2)(k+1)(k+3)/4!] x⁴ − ···
y2(x) = x − [(k−1)(k+2)/3!] x³ + [(k−1)(k−3)(k+2)(k+4)/5!] x⁵ − ···

Special significance: non-negative integral values of k.
For each k = 0, 1, 2, 3, ···, one of the series terminates at the term containing x^k.
Polynomial solution: valid for the entire real line!
Recurrence relation in reverse:
a_{k−2} = −[k(k−1)/(2(2k−1))] a_k

Mathematical Methods in Engineering and Science Series Solutions and Special Functions 442, Special Functions Arising as Solutions of ODE's

Legendre polynomial
Choosing a_k = (2k−1)(2k−3)···3·1 / k!,
P_k(x) = [(2k−1)(2k−3)···3·1 / k!] [ x^k − (k(k−1)/(2(2k−1))) x^{k−2} + (k(k−1)(k−2)(k−3)/(2·4·(2k−1)(2k−3))) x^{k−4} − ··· ]
This choice of a_k ensures P_k(1) = 1 and implies P_k(−1) = (−1)^k.
Initial Legendre polynomials:

P0(x) = 1,

P1(x) = x,
P2(x) = (3x² − 1)/2,
P3(x) = (5x³ − 3x)/2,
P4(x) = (35x⁴ − 30x² + 3)/8, etc.
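These can be cross-checked against NumPy's Legendre module (assumed available); leg2poly returns power-series coefficients in increasing degree:

```python
# P_k in the monomial basis; k = 2 should give [-0.5, 0, 1.5].
import numpy as np
from numpy.polynomial import legendre

for k in range(5):
    c = np.zeros(k + 1)
    c[k] = 1.0                     # select P_k in the Legendre basis
    print(k, legendre.leg2poly(c))
```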


Mathematical Methods in Engineering and Science Series Solutions and Special Functions 443, Special Functions Arising as Solutions of ODE's

[Figure: Legendre polynomials P0(x) to P5(x) plotted over −1 ≤ x ≤ 1]

All roots of a Legendre polynomial are real and they lie in [−1, 1].
Orthogonality?

Mathematical Methods in Engineering and Science Series Solutions and Special Functions 444, Special Functions Arising as Solutions of ODE's

Bessel's equation

x² y′′ + x y′ + (x² − k²) y = 0
x = 0 is a regular singular point. Frobenius' method: carrying out the early steps,
(r² − k²) a0 x^r + [(r+1)² − k²] a1 x^{r+1} + Σ_{n=2}^∞ [ a_{n−2} + {r² − k² + n(n+2r)} a_n ] x^{r+n} = 0
Indicial equation: r² − k² = 0 ⇒ r = ±k
With r = k, (r+1)² − k² ≠ 0 ⇒ a1 = 0 and
a_n = −a_{n−2}/(n(n+2r)) for n ≥ 2.
Odd coefficients are zero and
a2 = −a0/(2(2k+2)), a4 = a0/(2·4(2k+2)(2k+4)), etc.

Mathematical Methods in Engineering and Science Series Solutions and Special Functions 445, Special Functions Arising as Solutions of ODE's

Bessel functions:
Selecting a0 = 1/(2^k Γ(k+1)) and using n = 2m,

a_m = (−1)^m / (2^{k+2m} m! Γ(k+m+1)).

Bessel function of the first kind of order k:

J_k(x) = Σ_{m=0}^∞ (−1)^m x^{k+2m} / (2^{k+2m} m! Γ(k+m+1)) = Σ_{m=0}^∞ (−1)^m (x/2)^{k+2m} / (m! Γ(k+m+1))
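Partial sums of this series converge quickly; a sketch comparing them with SciPy's built-in Bessel function (scipy.special.jv assumed available), at the hypothetical point k = 1, x = 2:

```python
# Truncated series vs. scipy.special.jv; both approx 0.5767 for J_1(2).
from math import gamma
from scipy.special import jv

def J_series(k, x, terms=20):
    return sum((-1)**m * (x/2)**(k + 2*m) / (gamma(m + 1)*gamma(k + m + 1))
               for m in range(terms))

print(J_series(1, 2.0), jv(1, 2.0))
```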

When k is not an integer, J_{−k}(x) completes the basis.
For integer k, J_{−k}(x) = (−1)^k J_k(x): linearly dependent!
Reduction of order can be used to find another solution:
the Bessel function of the second kind, or Neumann function.

Mathematical Methods in Engineering and Science Series Solutions and Special Functions 446, Points to note

◮ Solution in power series ◮ Ordinary points and singularities ◮ Definition of special functions ◮ Legendre polynomials ◮ Bessel functions

Necessary Exercises: 2, 3, 4, 5

Mathematical Methods in Engineering and Science Sturm-Liouville Theory 447, Outline

Sturm-Liouville Theory:
Preliminary Ideas
Sturm-Liouville Problems
Eigenfunction Expansions

Mathematical Methods in Engineering and Science Sturm-Liouville Theory 448, Preliminary Ideas

A simple boundary value problem:

$y'' + 2y = 0$, $y(0) = 0$, $y(\pi) = 0$

General solution of the ODE:

$y(x) = a \sin(\sqrt{2}\,x) + b \cos(\sqrt{2}\,x)$

Condition $y(0) = 0 \Rightarrow b = 0$. Hence, $y(x) = a\sin(\sqrt{2}\,x)$. Then, $y(\pi) = 0 \Rightarrow a = 0$. The only solution is $y(x) = 0$. Now, consider the BVP

$y'' + 4y = 0$, $y(0) = 0$, $y(\pi) = 0$.

The same steps give $y(x) = a\sin(2x)$, with arbitrary value of $a$. Infinite number of non-trivial solutions!

Mathematical Methods in Engineering and Science Sturm-Liouville Theory 449, Preliminary Ideas

Boundary value problems as eigenvalue problems: Explore the possible solutions of the BVP

$y'' + ky = 0$, $y(0) = 0$, $y(\pi) = 0$.

◮ With $k \leq 0$, no hope for a non-trivial solution. Consider $k = \nu^2 > 0$.
◮ Solutions: $y = a\sin(\nu x)$, only for specific values of $\nu$ (or $k$): $\nu = \pm 1, \pm 2, \pm 3, \cdots$; i.e. $k = 1, 4, 9, \cdots$.

Question:
◮ For what values of $k$ (eigenvalues) does the given BVP possess non-trivial solutions, and
◮ what are the corresponding solutions (eigenfunctions), up to arbitrary scalar multiples?

Analogous to the algebraic eigenvalue problem $Av = \lambda v$! Analogue of $A$: self-adjoint differential operator.

Mathematical Methods in Engineering and Science Sturm-Liouville Theory 450, Preliminary Ideas

Consider the ODE $y'' + P(x)y' + Q(x)y = 0$. Question: Is it possible to find functions $F(x)$ and $G(x)$ such that

$F(x)y'' + F(x)P(x)y' + F(x)Q(x)y$

gets reduced to the derivative of $F(x)y' + G(x)y$? Comparing with

$\frac{d}{dx}[F(x)y' + G(x)y] = F(x)y'' + [F'(x) + G(x)]y' + G'(x)y,$

$F'(x) + G(x) = F(x)P(x)$ and $G'(x) = F(x)Q(x)$. Elimination of $G(x)$:

$F''(x) - P(x)F'(x) + [Q(x) - P'(x)]F(x) = 0$

This is the adjoint of the original ODE.

Mathematical Methods in Engineering and Science Sturm-Liouville Theory 451, Preliminary Ideas

The adjoint ODE
◮ The adjoint of the ODE $y'' + P(x)y' + Q(x)y = 0$ is

$F'' + P_1 F' + Q_1 F = 0$, where $P_1 = -P$ and $Q_1 = Q - P'$.
◮ Then, the adjoint of $F'' + P_1 F' + Q_1 F = 0$ is

$\phi'' + P_2\phi' + Q_2\phi = 0$, where $P_2 = -P_1 = P$ and $Q_2 = Q_1 - P_1' = Q - P' - (-P') = Q$.

The adjoint of the adjoint of a second order linear homogeneous equation is the original equation itself.
◮ When is an ODE its own adjoint?
◮ $y'' + P(x)y' + Q(x)y = 0$ is self-adjoint only in the trivial case of $P(x) = 0$.
◮ What about $F(x)y'' + F(x)P(x)y' + F(x)Q(x)y = 0$?

Mathematical Methods in Engineering and Science Sturm-Liouville Theory 452, Preliminary Ideas

Second order self-adjoint ODE

Question: What is the adjoint of $Fy'' + FPy' + FQy = 0$? Rephrased question: What is the ODE that $\phi(x)$ has to satisfy if

$\phi F y'' + \phi F P y' + \phi F Q y = \frac{d}{dx}\left[\phi F y' + \xi(x)\, y\right]?$

Comparing terms, $\frac{d}{dx}(\phi F) + \xi(x) = \phi F P$ and $\xi'(x) = \phi F Q$.

Eliminating $\xi(x)$, we have $\frac{d^2}{dx^2}(\phi F) + \phi F Q = \frac{d}{dx}(\phi F P)$.

$F\phi'' + 2F'\phi' + F''\phi + FQ\phi = FP\phi' + (FP)'\phi$

$\Rightarrow F\phi'' + (2F' - FP)\phi' + \left[F'' - (FP)' + FQ\right]\phi = 0$

This is the same as the original ODE when $F'(x) = F(x)P(x)$.

Mathematical Methods in Engineering and Science Sturm-Liouville Theory 453, Preliminary Ideas

Casting a given ODE into the self-adjoint form:

Equation $y'' + P(x)y' + Q(x)y = 0$ is converted to the self-adjoint form through multiplication by $F(x) = e^{\int P(x)\,dx}$.

General form of self-adjoint equations:

$\frac{d}{dx}[F(x)y'] + R(x)y = 0$

Working rules:
◮ To determine whether a given ODE is in the self-adjoint form, check whether the coefficient of $y'$ is the derivative of the coefficient of $y''$.
◮ To convert an ODE into the self-adjoint form, first obtain the equation in normal form by dividing with the coefficient of $y''$. If the coefficient of $y'$ now is $P(x)$, then multiply the resulting equation with $e^{\int P\,dx}$ (see the sketch below).
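A sketch of the second working rule applied to Legendre's equation, using sympy for the integration; the choice of example and of sympy are assumptions of convenience, not part of the original development:

```python
import sympy as sp

x, k = sp.symbols('x k')

# Legendre's equation divided through by (1 - x**2), i.e. its normal form:
P = -2*x / (1 - x**2)               # coefficient of y'
Q = k*(k + 1) / (1 - x**2)          # coefficient of y

# Working rule: multiply the normal form by F(x) = exp( integral of P dx ).
F = sp.simplify(sp.exp(sp.integrate(P, x)))
print(F)   # proportional to 1 - x**2; any constant multiple of F works

# With F = 1 - x**2 the equation becomes d/dx[(1 - x**2) y'] + k(k+1) y = 0,
# matching the self-adjoint (Sturm-Liouville) form quoted below for Legendre.
```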

Mathematical Methods in Engineering and Science Sturm-Liouville Theory 454, Sturm-Liouville Problems

Sturm-Liouville equation

$[r(x)y']' + [q(x) + \lambda p(x)]\, y = 0,$

where $p$, $q$, $r$ and $r'$ are continuous on $[a, b]$, with $p(x) > 0$ on $[a, b]$ and $r(x) > 0$ on $(a, b)$. With different boundary conditions,

Regular S-L problem: $a_1 y(a) + a_2 y'(a) = 0$ and $b_1 y(b) + b_2 y'(b) = 0$, vectors $[a_1\ a_2]^T$ and $[b_1\ b_2]^T$ being non-zero.

Periodic S-L problem: With $r(a) = r(b)$, $y(a) = y(b)$ and $y'(a) = y'(b)$.

Singular S-L problem: If $r(a) = 0$, no boundary condition is needed at $x = a$. If $r(b) = 0$, no boundary condition is needed at $x = b$. (We just look for bounded solutions over $[a, b]$.)

Mathematical Methods in Engineering and Science Sturm-Liouville Theory 455, Sturm-Liouville Problems

Orthogonality of eigenfunctions

Theorem: If $y_m(x)$ and $y_n(x)$ are eigenfunctions (solutions) of a Sturm-Liouville problem corresponding to distinct eigenvalues $\lambda_m$ and $\lambda_n$ respectively, then

$(y_m, y_n) \equiv \int_a^b p(x)\, y_m(x)\, y_n(x)\, dx = 0,$

i.e. they are orthogonal with respect to the weight function $p(x)$.

From the hypothesis,

$(r y_m')' + (q + \lambda_m p) y_m = 0 \Rightarrow -(q + \lambda_m p)\, y_m y_n = (r y_m')'\, y_n$
$(r y_n')' + (q + \lambda_n p) y_n = 0 \Rightarrow -(q + \lambda_n p)\, y_m y_n = (r y_n')'\, y_m$

$(\lambda_m - \lambda_n)\, p\, y_m y_n = (r y_n')' y_m + (r y_n') y_m' - (r y_m') y_n' - (r y_m')' y_n = \left[r(y_m y_n' - y_n y_m')\right]'.$

Mathematical Methods in Engineering and Science Sturm-Liouville Theory 456, Sturm-Liouville Problems

Integrating both sides,

$(\lambda_m - \lambda_n)\int_a^b p(x) y_m(x) y_n(x)\, dx = r(b)[y_m(b)y_n'(b) - y_n(b)y_m'(b)] - r(a)[y_m(a)y_n'(a) - y_n(a)y_m'(a)].$

◮ In a regular S-L problem, from the boundary condition at $x = a$, the homogeneous system $\begin{bmatrix} y_m(a) & y_m'(a) \\ y_n(a) & y_n'(a) \end{bmatrix}\begin{bmatrix} a_1 \\ a_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$ has non-trivial solutions. Therefore, $y_m(a)y_n'(a) - y_n(a)y_m'(a) = 0$. Similarly, $y_m(b)y_n'(b) - y_n(b)y_m'(b) = 0$.
◮ In a singular S-L problem, zero value of $r(x)$ at a boundary makes the corresponding term vanish even without a BC.
◮ In a periodic S-L problem, the two terms cancel out together.

Since $\lambda_m \neq \lambda_n$, in all cases,

$\int_a^b p(x)\, y_m(x)\, y_n(x)\, dx = 0.$

Mathematical Methods in Engineering and Science Sturm-Liouville Theory 457, Sturm-Liouville Problems

Example: Legendre polynomials over $[-1, 1]$. Legendre's equation

$\frac{d}{dx}\left[(1 - x^2)\, y'\right] + k(k+1)\, y = 0$

is self-adjoint and defines a singular Sturm-Liouville problem over $[-1, 1]$ with $p(x) = 1$, $q(x) = 0$, $r(x) = 1 - x^2$ and $\lambda = k(k+1)$.

$(m - n)(m + n + 1)\int_{-1}^{1} P_m(x) P_n(x)\, dx = \left[(1 - x^2)(P_m P_n' - P_n P_m')\right]_{-1}^{1} = 0$

From orthogonal decompositions

$1 = P_0(x)$, $x = P_1(x)$,
$x^2 = \frac{2}{3} P_2(x) + \frac{1}{3} P_0(x)$,
$x^3 = \frac{2}{5} P_3(x) + \frac{3}{5} P_1(x)$,
$x^4 = \frac{8}{35} P_4(x) + \frac{4}{7} P_2(x) + \frac{1}{5} P_0(x)$ etc;

$P_k(x)$ is orthogonal to all polynomials of degree less than $k$.
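These orthogonality relations and decomposition coefficients are easy to verify by quadrature. A minimal numpy sketch; the norm value $\|P_n\|^2 = 2/(2n+1)$ is a standard fact assumed here, not derived on these slides:

```python
import numpy as np
from numpy.polynomial import legendre as L

# Gauss-Legendre nodes/weights: exact for the polynomial integrands below.
xq, wq = L.leggauss(12)

def P(n, x):
    c = np.zeros(n + 1)
    c[n] = 1.0
    return L.legval(x, c)

# Orthogonality with weight p(x) = 1 on [-1, 1]:
for m, n in [(2, 3), (1, 4), (0, 5)]:
    print(m, n, np.sum(wq * P(m, xq) * P(n, xq)))            # each ~ 0

# Decomposition of x^4: coefficients (x^4, P_n)/||P_n||^2, ||P_n||^2 = 2/(2n+1)
for n in (4, 2, 0):
    print(n, np.sum(wq * xq**4 * P(n, xq)) * (2*n + 1) / 2)  # 8/35, 4/7, 1/5
```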

Mathematical Methods in Engineering and Science Sturm-Liouville Theory 458, Sturm-Liouville Problems

Real eigenvalues: Eigenvalues of a Sturm-Liouville problem are real.

Let eigenvalue $\lambda = \mu + i\nu$ and eigenfunction $y(x) = u(x) + iv(x)$. Substitution leads to

$[r(u' + iv')]' + [q + (\mu + i\nu)p](u + iv) = 0.$

Separation of real and imaginary parts:

$[ru']' + (q + \mu p)u - \nu p v = 0 \Rightarrow \nu p v^2 = [ru']' v + (q + \mu p)uv$
$[rv']' + (q + \mu p)v + \nu p u = 0 \Rightarrow \nu p u^2 = -[rv']' u - (q + \mu p)uv$

Adding together,

$\nu p (u^2 + v^2) = [ru']' v + [ru'] v' - [rv'] u' - [rv']' u = -\left[r(uv' - vu')\right]'$

Integration and application of boundary conditions leads to

$\nu \int_a^b p(x)\left[u^2(x) + v^2(x)\right] dx = 0 \Rightarrow \nu = 0$ and $\lambda = \mu$.

Mathematical Methods in Engineering and Science Sturm-Liouville Theory 459, Eigenfunction Expansions

Eigenfunctions of Sturm-Liouville problems: convenient and powerful instruments to represent and manipulate fairly general classes of functions.

$\{y_0, y_1, y_2, y_3, \cdots\}$: a family of continuous functions over $[a, b]$, mutually orthogonal with respect to $p(x)$. Representation of a function $f(x)$ on $[a, b]$:

$f(x) = \sum_{m=0}^{\infty} a_m y_m(x) = a_0 y_0(x) + a_1 y_1(x) + a_2 y_2(x) + a_3 y_3(x) + \cdots$

Generalized Fourier series: analogous to the representation of a vector as a linear combination of a set of mutually orthogonal vectors.

Question: How to determine the coefficients $a_n$?

Mathematical Methods in Engineering and Science Sturm-Liouville Theory 460, Eigenfunction Expansions

Inner product:

$(f, y_n) = \int_a^b p(x) f(x) y_n(x)\, dx = \int_a^b \sum_{m=0}^{\infty}\left[a_m p(x) y_m(x) y_n(x)\right] dx = \sum_{m=0}^{\infty} a_m (y_m, y_n) = a_n \|y_n\|^2,$

where

$\|y_n\| = \sqrt{(y_n, y_n)} = \sqrt{\int_a^b p(x)\, y_n^2(x)\, dx}.$

Fourier coefficients: $a_n = \frac{(f, y_n)}{\|y_n\|^2}$

Normalized eigenfunctions:

$\phi_m(x) = \frac{y_m(x)}{\|y_m(x)\|}$

Generalized Fourier series (in orthonormal basis):

$f(x) = \sum_{m=0}^{\infty} c_m \phi_m(x) = c_0\phi_0(x) + c_1\phi_1(x) + c_2\phi_2(x) + c_3\phi_3(x) + \cdots$

Mathematical Methods in Engineering and Science Sturm-Liouville Theory 461, Eigenfunction Expansions

In terms of a finite number of members of the family $\{\phi_k(x)\}$,

$\Phi_N(x) = \sum_{m=0}^{N} \alpha_m \phi_m(x) = \alpha_0\phi_0(x) + \alpha_1\phi_1(x) + \alpha_2\phi_2(x) + \cdots + \alpha_N\phi_N(x).$

Error:

$E = \|f - \Phi_N\|^2 = \int_a^b p(x)\left[f(x) - \sum_{m=0}^{N} \alpha_m\phi_m(x)\right]^2 dx$

Error is minimized when

$\frac{\partial E}{\partial \alpha_n} = \int_a^b 2 p(x)\left[f(x) - \sum_{m=0}^{N} \alpha_m\phi_m(x)\right]\left[-\phi_n(x)\right] dx = 0$

$\Rightarrow \int_a^b \alpha_n p(x)\phi_n^2(x)\, dx = \int_a^b p(x) f(x)\phi_n(x)\, dx,$

i.e. $\alpha_n = c_n$: best approximation in the mean, or least square approximation.
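A numerical illustration of this least-square property, with the orthonormalized Legendre family on $[-1, 1]$ as the basis and $f(x) = e^x$ as a sample function; both choices are assumptions made purely for illustration:

```python
import numpy as np
from numpy.polynomial import legendre as L

# Mean-square error E(alpha) over [-1, 1] (weight p = 1) for f(x) = exp(x).
xq, wq = L.leggauss(60)
f = np.exp(xq)

def phi(n, x):
    c = np.zeros(n + 1)
    c[n] = 1.0
    return L.legval(x, c) * np.sqrt((2*n + 1) / 2)   # normalized: ||phi_n|| = 1

N = 3
c = np.array([np.sum(wq * f * phi(n, xq)) for n in range(N + 1)])

def error(alpha):
    approx = sum(a * phi(n, xq) for n, a in enumerate(alpha))
    return np.sum(wq * (f - approx)**2)

print(error(c))                        # minimal error, at alpha_n = c_n
print(error(c + [0.05, 0, 0, 0]))      # any perturbation increases the error
```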

Mathematical Methods in Engineering and Science Sturm-Liouville Theory 462, Eigenfunction Expansions

Using the Fourier coefficients, the error

$E = (f, f) - 2\sum_{n=0}^{N} c_n (f, \phi_n) + \sum_{n=0}^{N} c_n^2 (\phi_n, \phi_n) = \|f\|^2 - 2\sum_{n=0}^{N} c_n^2 + \sum_{n=0}^{N} c_n^2$

$\Rightarrow E = \|f\|^2 - \sum_{n=0}^{N} c_n^2 \geq 0.$

Bessel's inequality:

$\sum_{n=0}^{N} c_n^2 \leq \|f\|^2 = \int_a^b p(x) f^2(x)\, dx$

Partial sum:

$s_k(x) = \sum_{m=0}^{k} c_m \phi_m(x)$

Question: Does the sequence $\{s_k\}$ converge? Answer: The bound in Bessel's inequality ensures convergence.

Mathematical Methods in Engineering and Science Sturm-Liouville Theory 463, Eigenfunction Expansions

Question: Does it converge to $f$?

$\lim_{k\to\infty}\int_a^b p(x)\left[s_k(x) - f(x)\right]^2 dx = 0?$

Answer: Depends on the basis used. Convergence in the mean or mean-square convergence:

An orthonormal set of functions $\{\phi_k(x)\}$ on an interval $a \leq x \leq b$ is said to be complete in a class of functions, or to form a basis for it, if the corresponding generalized Fourier series for a function converges in the mean to the function, for every function belonging to that class.

Parseval's identity: $\sum_{n=0}^{\infty} c_n^2 = \|f\|^2$

Eigenfunction expansion: generalized Fourier series in terms of eigenfunctions of a Sturm-Liouville problem
◮ convergent for continuous functions with piecewise continuous derivatives, i.e. they form a basis for this class.

Mathematical Methods in Engineering and Science Sturm-Liouville Theory 464, Points to note

◮ Eigenvalue problems in ODE's
◮ Self-adjoint differential operators
◮ Sturm-Liouville problems
◮ Orthogonal eigenfunctions
◮ Eigenfunction expansions

Necessary Exercises: 1, 2, 4, 5

Mathematical Methods in Engineering and Science Fourier Series and Integrals 465, Outline

Fourier Series and Integrals: Basic Theory of Fourier Series, Extensions in Application, Fourier Integrals

Mathematical Methods in Engineering and Science Fourier Series and Integrals 466, Basic Theory of Fourier Series

With $q(x) = 0$ and $p(x) = r(x) = 1$, periodic S-L problem:

$y'' + \lambda y = 0$, $y(-L) = y(L)$, $y'(-L) = y'(L)$

Eigenfunctions $1, \cos\frac{\pi x}{L}, \sin\frac{\pi x}{L}, \cos\frac{2\pi x}{L}, \sin\frac{2\pi x}{L}, \cdots$ constitute an orthogonal basis for representing functions. For a periodic function $f(x)$ of period $2L$, we propose

$f(x) = a_0 + \sum_{n=1}^{\infty}\left(a_n \cos\frac{n\pi x}{L} + b_n \sin\frac{n\pi x}{L}\right)$

and determine the Fourier coefficients from Euler formulae

$a_0 = \frac{1}{2L}\int_{-L}^{L} f(x)\, dx,$
$a_m = \frac{1}{L}\int_{-L}^{L} f(x)\cos\frac{m\pi x}{L}\, dx$ and $b_m = \frac{1}{L}\int_{-L}^{L} f(x)\sin\frac{m\pi x}{L}\, dx.$

Question: Does the series converge?

Mathematical Methods in Engineering and Science Fourier Series and Integrals 467, Basic Theory of Fourier Series

Dirichlet's conditions: If $f(x)$ and its derivative are piecewise continuous on $[-L, L]$ and are periodic with a period $2L$, then the Fourier series converges to the mean $\frac{f(x+) + f(x-)}{2}$ of one-sided limits, at all points.

Note: The interval of integration can be $[x_0, x_0 + 2L]$ for any $x_0$.

◮ It is valid to integrate the Fourier series term by term.
◮ The Fourier series uniformly converges to $f(x)$ over an interval on which $f(x)$ is continuous. At a jump discontinuity, convergence to $\frac{f(x+) + f(x-)}{2}$ is not uniform: the mismatch peak shifts with inclusion of more terms (Gibbs' phenomenon; see the sketch below).
◮ Term-by-term differentiation of the Fourier series at a point requires $f(x)$ to be smooth at that point.
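A sketch of Gibbs' phenomenon for the square wave $f(x) = \mathrm{sgn}(x)$ on $[-\pi, \pi]$; the example is an assumption made for illustration, and its odd-harmonic coefficients $4/(n\pi)$ are standard:

```python
import numpy as np

# Partial sums of the Fourier series of the square wave sgn(x) on [-pi, pi]:
# s_N(x) = (4/pi) * sum over odd n <= N of sin(n x)/n.
x = np.linspace(1e-3, np.pi / 2, 2000)

for N in (9, 49, 199):
    sN = sum(4 / (np.pi * n) * np.sin(n * x) for n in range(1, N + 1, 2))
    print(N, sN.max())   # ~1.179 each time: the overshoot does not shrink,
                         # it only moves closer to the jump (Gibbs' phenomenon)
```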

Mathematical Methods in Engineering and Science Fourier Series and Integrals 468, Basic Theory of Fourier Series

Multiplying the Fourier series with $f(x)$,

$f^2(x) = a_0 f(x) + \sum_{n=1}^{\infty}\left[a_n f(x)\cos\frac{n\pi x}{L} + b_n f(x)\sin\frac{n\pi x}{L}\right]$

$\Rightarrow$ Parseval's identity:

$a_0^2 + \frac{1}{2}\sum_{n=1}^{\infty}(a_n^2 + b_n^2) = \frac{1}{2L}\int_{-L}^{L} f^2(x)\, dx$

The Fourier series representation is complete.
◮ A periodic function $f(x)$ is composed of its mean value and several sinusoidal components, or harmonics.
◮ Fourier coefficients are corresponding amplitudes.
◮ Parseval's identity is simply a statement on energy balance!

Bessel's inequality:

$a_0^2 + \frac{1}{2}\sum_{n=1}^{N}(a_n^2 + b_n^2) \leq \frac{1}{2L}\|f(x)\|^2$

Mathematical Methods in Engineering and Science Fourier Series and Integrals 469, Extensions in Application

Original spirit of Fourier series:
◮ representation of periodic functions over $(-\infty, \infty)$.

Question: What about a function $f(x)$ defined only on $[-L, L]$?
Answer: Extend the function as $F(x) = f(x)$ for $-L \leq x \leq L$, and $F(x + 2L) = F(x)$. The Fourier series of $F(x)$ acts as the Fourier series representation of $f(x)$ in its own domain.

In Euler formulae, notice that $b_m = 0$ for an even function. The Fourier series of an even function is a Fourier cosine series

$f(x) = a_0 + \sum_{n=1}^{\infty} a_n \cos\frac{n\pi x}{L},$

where $a_0 = \frac{1}{L}\int_0^L f(x)\, dx$ and $a_n = \frac{2}{L}\int_0^L f(x)\cos\frac{n\pi x}{L}\, dx$. Similarly, for an odd function, Fourier sine series.

Mathematical Methods in Engineering and Science Fourier Series and Integrals 470, Extensions in Application

Over $[0, L]$, sometimes we need a series of sine terms only, or cosine terms only!

[Plots: (a) function $f(x)$ over $(0, L)$; (b) its even periodic extension $f_c(x)$; (c) its odd periodic extension $f_s(x)$]

Figure: Periodic extensions for cosine and sine series

Mathematical Methods in Engineering and Science Fourier Series and Integrals 471, Extensions in Application

Half-range expansions
◮ For Fourier cosine series of a function $f(x)$ over $[0, L]$, even periodic extension: $f_c(x) = f(x)$ for $0 \leq x \leq L$, $f_c(x) = f(-x)$ for $-L \leq x < 0$, and $f_c(x + 2L) = f_c(x)$.
◮ For Fourier sine series of a function $f(x)$ over $[0, L]$, odd periodic extension: $f_s(x) = f(x)$ for $0 \leq x \leq L$, $f_s(x) = -f(-x)$ for $-L \leq x < 0$, and $f_s(x + 2L) = f_s(x)$.

To develop the Fourier series of a function which is available as a set of tabulated values or a black-box library routine, the integrals in the Euler formulae are evaluated numerically.

Important: Fourier series representation is richer and more powerful compared to interpolatory or least square approximation in many contexts.

Mathematical Methods in Engineering and Science Fourier Series and Integrals 472, Fourier Integrals

Question: How to apply the idea of Fourier series to a non-periodic function over an infinite domain?
Answer: Magnify a single period to an infinite length.

Fourier series of function $f_L(x)$ of period $2L$:

$f_L(x) = a_0 + \sum_{n=1}^{\infty}(a_n \cos p_n x + b_n \sin p_n x),$

where $p_n = \frac{n\pi}{L}$ is the frequency of the $n$-th harmonic. Inserting the expressions for the Fourier coefficients,

$f_L(x) = \frac{1}{2L}\int_{-L}^{L} f_L(x)\, dx + \frac{1}{\pi}\sum_{n=1}^{\infty}\left[\cos p_n x \int_{-L}^{L} f_L(v)\cos p_n v\, dv + \sin p_n x \int_{-L}^{L} f_L(v)\sin p_n v\, dv\right]\Delta p,$

where $\Delta p = p_{n+1} - p_n = \frac{\pi}{L}$.

Mathematical Methods in Engineering and Science Fourier Series and Integrals 473, Fourier Integrals

In the limit (if it exists), as $L \to \infty$, $\Delta p \to 0$,

$f(x) = \frac{1}{\pi}\int_0^{\infty}\left[\cos px \int_{-\infty}^{\infty} f(v)\cos pv\, dv + \sin px \int_{-\infty}^{\infty} f(v)\sin pv\, dv\right] dp$

Fourier integral of $f(x)$:

$f(x) = \int_0^{\infty}\left[A(p)\cos px + B(p)\sin px\right] dp,$

where the amplitude functions

$A(p) = \frac{1}{\pi}\int_{-\infty}^{\infty} f(v)\cos pv\, dv$ and $B(p) = \frac{1}{\pi}\int_{-\infty}^{\infty} f(v)\sin pv\, dv$

are defined for a continuous frequency variable $p$. In phase angle form,

$f(x) = \frac{1}{\pi}\int_0^{\infty}\int_{-\infty}^{\infty} f(v)\cos p(x - v)\, dv\, dp.$

Mathematical Methods in Engineering and Science Fourier Series and Integrals 474, Fourier Integrals

Using $\cos\theta = \frac{e^{i\theta} + e^{-i\theta}}{2}$ in the phase angle form,

$f(x) = \frac{1}{2\pi}\int_0^{\infty}\int_{-\infty}^{\infty} f(v)\left[e^{ip(x-v)} + e^{-ip(x-v)}\right] dv\, dp.$

With substitution $p = -q$,

$\int_0^{\infty}\int_{-\infty}^{\infty} f(v)\, e^{-ip(x-v)}\, dv\, dp = \int_{-\infty}^{0}\int_{-\infty}^{\infty} f(v)\, e^{iq(x-v)}\, dv\, dq.$

Complex form of Fourier integral:

$f(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f(v)\, e^{ip(x-v)}\, dv\, dp = \int_{-\infty}^{\infty} C(p)\, e^{ipx}\, dp,$

in which the complex Fourier integral coefficient is

$C(p) = \frac{1}{2\pi}\int_{-\infty}^{\infty} f(v)\, e^{-ipv}\, dv.$

Mathematical Methods in Engineering and Science Fourier Series and Integrals 475, Points to note

◮ Fourier series arising out of a Sturm-Liouville problem
◮ A versatile tool for function representation
◮ Fourier integral as the limiting case of Fourier series

Necessary Exercises: 1, 3, 6, 8

Mathematical Methods in Engineering and Science Fourier Transforms 476, Outline

Fourier Transforms: Definition and Fundamental Properties, Important Results on Fourier Transforms, Discrete Fourier Transform

Mathematical Methods in Engineering and Science Fourier Transforms 477, Definition and Fundamental Properties

Complex form of the Fourier integral:

$f(t) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\left[\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} f(v)\, e^{-iwv}\, dv\right] e^{iwt}\, dw$

Composition of an infinite number of functions in the form $\frac{e^{iwt}}{\sqrt{2\pi}}$, over a continuous distribution of frequency $w$. Fourier transform: amplitude of a frequency component:

$\mathcal{F}(f) \equiv \hat{f}(w) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} f(t)\, e^{-iwt}\, dt$

Function of the frequency variable. Inverse Fourier transform:

$\mathcal{F}^{-1}(\hat{f}) \equiv f(t) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} \hat{f}(w)\, e^{iwt}\, dw$

recovers the original function.

Mathematical Methods in Engineering and Science Fourier Transforms 478, Definition and Fundamental Properties

Example: Fourier transform of $f(t) = 1$? Let us find out the inverse Fourier transform of $\hat{f}(w) = k\delta(w)$.

$f(t) = \mathcal{F}^{-1}(\hat{f}) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} k\delta(w)\, e^{iwt}\, dw = \frac{k}{\sqrt{2\pi}}$

$\Rightarrow \mathcal{F}(1) = \sqrt{2\pi}\,\delta(w)$

Linearity of Fourier transforms: $\mathcal{F}\{\alpha f_1(t) + \beta f_2(t)\} = \alpha\hat{f}_1(w) + \beta\hat{f}_2(w)$

Scaling:

$\mathcal{F}\{f(at)\} = \frac{1}{|a|}\,\hat{f}\!\left(\frac{w}{a}\right)$ and $\mathcal{F}^{-1}\left\{\hat{f}\!\left(\frac{w}{a}\right)\right\} = |a|\, f(at)$

Shifting rules:

$\mathcal{F}\{f(t - t_0)\} = e^{-iwt_0}\,\mathcal{F}\{f(t)\}$
$\mathcal{F}^{-1}\{\hat{f}(w - w_0)\} = e^{iw_0 t}\,\mathcal{F}^{-1}\{\hat{f}(w)\}$

Mathematical Methods in Engineering and Science Fourier Transforms 479, Important Results on Fourier Transforms

Fourier transform of the derivative of a function:

If $f(t)$ is continuous in every interval and $f'(t)$ is piecewise continuous, $\int_{-\infty}^{\infty}|f(t)|\, dt$ converges and $f(t)$ approaches zero as $t \to \pm\infty$, then

$\mathcal{F}\{f'(t)\} = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} f'(t)\, e^{-iwt}\, dt = \frac{1}{\sqrt{2\pi}}\left[f(t)\, e^{-iwt}\right]_{-\infty}^{\infty} - \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} (-iw) f(t)\, e^{-iwt}\, dt = iw\,\hat{f}(w).$

Alternatively, differentiating the inverse Fourier transform,

$\frac{d}{dt}[f(t)] = \frac{d}{dt}\left[\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\hat{f}(w)\, e^{iwt}\, dw\right] = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\frac{\partial}{\partial t}\left[\hat{f}(w)\, e^{iwt}\right] dw = \mathcal{F}^{-1}\{iw\,\hat{f}(w)\}.$

Mathematical Methods in Engineering and Science Fourier Transforms 480, Important Results on Fourier Transforms

Under appropriate premises,

$\mathcal{F}\{f''(t)\} = (iw)^2\hat{f}(w) = -w^2\hat{f}(w).$

In general, $\mathcal{F}\{f^{(n)}(t)\} = (iw)^n\hat{f}(w)$.

Fourier transform of an integral: If $f(t)$ is piecewise continuous on every interval, $\int_{-\infty}^{\infty}|f(t)|\, dt$ converges and $\hat{f}(0) = 0$, then

$\mathcal{F}\left\{\int_{-\infty}^{t} f(\tau)\, d\tau\right\} = \frac{1}{iw}\,\hat{f}(w).$

Derivative of a Fourier transform (with respect to the frequency variable):

$\mathcal{F}\{t^n f(t)\} = i^n\,\frac{d^n}{dw^n}\hat{f}(w),$

if $f(t)$ is piecewise continuous and $\int_{-\infty}^{\infty}|t^n f(t)|\, dt$ converges.

Mathematical Methods in Engineering and Science Fourier Transforms 481, Important Results on Fourier Transforms

Convolution of two functions:

$h(t) = f(t) * g(t) = \int_{-\infty}^{\infty} f(\tau)\, g(t - \tau)\, d\tau$

$\hat{h}(w) = \mathcal{F}\{h(t)\} = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f(\tau)\, g(t - \tau)\, e^{-iwt}\, d\tau\, dt = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} f(\tau)\, e^{-iw\tau}\left[\int_{-\infty}^{\infty} g(t - \tau)\, e^{-iw(t-\tau)}\, dt\right] d\tau = \int_{-\infty}^{\infty} f(\tau)\, e^{-iw\tau}\left[\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} g(t')\, e^{-iwt'}\, dt'\right] d\tau$

Convolution theorem for Fourier transforms:

$\hat{h}(w) = \sqrt{2\pi}\,\hat{f}(w)\,\hat{g}(w)$
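The discrete analogue of this theorem is easy to verify; note that numpy's unnormalized DFT convention absorbs the $\sqrt{2\pi}$ factor, and the random test data are, of course, an assumption:

```python
import numpy as np

# Discrete analogue of the convolution theorem: for circular convolution,
# DFT(f conv g) = DFT(f) * DFT(g) in numpy's unnormalized convention.
rng = np.random.default_rng(0)
f = rng.standard_normal(64)
g = rng.standard_normal(64)

# Circular convolution h[k] = sum_j f[j] * g[(k - j) mod N]
h = np.array([np.sum(f * np.roll(g[::-1], k + 1)) for k in range(64)])

print(np.allclose(np.fft.fft(h), np.fft.fft(f) * np.fft.fft(g)))  # True
```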

Mathematical Methods in Engineering and Science Fourier Transforms 482, Important Results on Fourier Transforms

Conjugate of the Fourier transform:

$\hat{f}^*(w) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} f^*(t)\, e^{iwt}\, dt$

Inner product of $\hat{f}(w)$ and $\hat{g}(w)$:

$\int_{-\infty}^{\infty}\hat{f}^*(w)\hat{g}(w)\, dw = \int_{-\infty}^{\infty}\left[\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} f^*(t)\, e^{iwt}\, dt\right]\hat{g}(w)\, dw = \int_{-\infty}^{\infty} f^*(t)\left[\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\hat{g}(w)\, e^{iwt}\, dw\right] dt = \int_{-\infty}^{\infty} f^*(t)\, g(t)\, dt.$

Parseval's identity: For $g(t) = f(t)$ in the above,

$\int_{-\infty}^{\infty}\|\hat{f}(w)\|^2\, dw = \int_{-\infty}^{\infty}\|f(t)\|^2\, dt,$

equating the total energy content of the frequency spectrum of a wave or a signal to the total energy flow over time.

Mathematical Methods in Engineering and Science Fourier Transforms 483, Discrete Fourier Transform

Consider a signal $f(t)$ from actual measurement or sampling. We want to analyze its amplitude spectrum (versus frequency). For the FT, how to evaluate the integral over $(-\infty, \infty)$?

Windowing: Sample the signal $f(t)$ over a finite interval. A window function:

$g(t) = 1$ for $a \leq t \leq b$, $0$ otherwise.

Actual processing takes place on the windowed function $f(t)g(t)$.

Next question: Do we need to evaluate the amplitude for all $w \in (-\infty, \infty)$? Most useful signals are particularly rich only in their own characteristic frequency bands.

Decide on an expected frequency band, say $[-w_c, w_c]$.

Mathematical Methods in Engineering and Science Fourier Transforms 484, Discrete Fourier Transform

Time step for sampling? With $N$ samplings over $[a, b)$,

$w_c\,\Delta \leq \pi,$

data being collected at $t = a, a + \Delta, a + 2\Delta, \cdots, a + (N-1)\Delta$, with $N\Delta = b - a$. (Nyquist critical frequency.)

Note the duality.
◮ Decision of sampling rate $\Delta$ determines the band of frequency content that can be accommodated.
◮ Decision of the interval $[a, b)$ dictates how finely the frequency spectrum can be developed.

Shannon's sampling theorem: A band-limited signal can be reconstructed from a finite number of samples.

Mathematical Methods in Engineering and Science Fourier Transforms 485, Discrete Fourier Transform

With discrete data at $t_k = k\Delta$ for $k = 0, 1, 2, 3, \cdots, N-1$,

$\hat{f}(w_j) = \frac{\Delta}{\sqrt{2\pi}}\left[m_j^k\right]\left[f(t_k)\right],$

where $m_j^k = e^{-i w_j k\Delta}$ and $[m_j^k]$ is an $N \times N$ matrix. A similar discrete version of the inverse Fourier transform. Reconstruction: a trigonometric interpolation of sampled data.
◮ Structure of Fourier and inverse Fourier transforms reduces the problem from a system of linear equations [$O(N^3)$ operations] to a matrix-vector multiplication [$O(N^2)$ operations].
◮ Structure of matrix $[m_j^k]$, with patterns of redundancies, opens up a trick to reduce it further to $O(N \log N)$ operations.

Cooley-Tukey algorithm: fast Fourier transform (FFT)
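A sketch contrasting the explicit $N \times N$ matrix form with the FFT; prefactors such as $\Delta/\sqrt{2\pi}$ are dropped here to match numpy's convention, an assumption of convenience:

```python
import numpy as np

# The N x N DFT matrix [m_j^k] with m_j^k = exp(-i * w_j * t_k): applying it
# costs O(N^2); the FFT exploits its redundancy pattern to get O(N log N).
N = 8
k = np.arange(N)
M = np.exp(-2j * np.pi * np.outer(k, k) / N)   # w_j * t_k = 2*pi*j*k/N

f = np.random.default_rng(1).standard_normal(N)
print(np.allclose(M @ f, np.fft.fft(f)))       # True: the same transform
```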

Mathematical Methods in Engineering and Science Fourier Transforms 486, Discrete Fourier Transform

The DFT representation is reliable only if the incoming signal is really band-limited in the interval $[-w_c, w_c]$. Frequencies beyond $[-w_c, w_c]$ distort the spectrum near $w = \pm w_c$ by folding back: aliasing. Detection: a posteriori.

Bandpass filtering: If we expect a signal having components only in certain frequency bands and want to get rid of unwanted noise frequencies, then for every band $[w_1, w_2]$ of our interest, we define a window function $\hat{\phi}(w)$ with intervals $[-w_2, -w_1]$ and $[w_1, w_2]$. The windowed Fourier transform $\hat{\phi}(w)\hat{f}(w)$ filters out frequency components outside this band. For recovery, convolve the raw signal $f(t)$ with the IFT $\phi(t)$ of $\hat{\phi}(w)$.

Mathematical Methods in Engineering and Science Fourier Transforms 487, Points to note

◮ Fourier transform as amplitude function in Fourier integral
◮ Basic operational tools in Fourier and inverse Fourier transforms
◮ Conceptual notions of discrete Fourier transform (DFT)

Necessary Exercises: 1, 3, 6

Mathematical Methods in Engineering and Science Minimax Approximation* 488, Outline

Minimax Approximation*: Approximation with Chebyshev polynomials, Minimax Polynomial Approximation

Mathematical Methods in Engineering and Science Minimax Approximation* 489, Approximation with Chebyshev polynomials

Chebyshev polynomials: Polynomial solutions of the singular Sturm-Liouville problem

$(1 - x^2)y'' - xy' + n^2 y = 0$ or $\left[\sqrt{1 - x^2}\; y'\right]' + \frac{n^2}{\sqrt{1 - x^2}}\; y = 0$

over $-1 \leq x \leq 1$, with $T_n(1) = 1$ for all $n$.

Closed-form expressions:

$T_n(x) = \cos(n\cos^{-1} x),$

or,

$T_0(x) = 1$, $T_1(x) = x$, $T_2(x) = 2x^2 - 1$, $T_3(x) = 4x^3 - 3x$, $\cdots$;

$T_{k+1}(x) = 2x\, T_k(x) - T_{k-1}(x).$

Mathematical Methods in Engineering and Science Minimax Approximation* 490, Approximation with Chebyshev polynomials

Immediate observations
◮ Coefficients in a Chebyshev polynomial are integers. In particular, the leading coefficient of $T_n(x)$ is $2^{n-1}$.
◮ For even $n$, $T_n(x)$ is an even function, while for odd $n$ it is an odd function.
◮ $T_n(1) = 1$, $T_n(-1) = (-1)^n$ and $|T_n(x)| \leq 1$ for $-1 \leq x \leq 1$.
◮ Zeros of a Chebyshev polynomial $T_n(x)$ are real and lie inside the interval $[-1, 1]$ at locations $x = \cos\frac{(2k-1)\pi}{2n}$ for $k = 1, 2, 3, \cdots, n$. These locations are also called Chebyshev accuracy points. Further, zeros of $T_n(x)$ are interlaced by those of $T_{n+1}(x)$.
◮ Extrema of $T_n(x)$ are of magnitude equal to unity, alternate in sign and occur at $x = \cos\frac{k\pi}{n}$ for $k = 0, 1, 2, 3, \cdots, n$.
◮ Orthogonality and norms:

$\int_{-1}^{1}\frac{T_m(x)\, T_n(x)}{\sqrt{1 - x^2}}\, dx = \begin{cases} 0 & \text{if } m \neq n, \\ \frac{\pi}{2} & \text{if } m = n \neq 0, \\ \pi & \text{if } m = n = 0. \end{cases}$

Mathematical Methods in Engineering and Science Minimax Approximation* 491, Approximation with Chebyshev polynomials


Figure: Extrema and zeros of $T_3(x)$. Figure: Contrast between $P_8(x)$ and $T_8(x)$.
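A quick numerical confirmation of the closed forms and the orthogonality; the sample degrees 3 and 8 are assumptions chosen for illustration:

```python
import numpy as np

# Check that the two closed forms agree: T_n(x) = cos(n * arccos(x)) equals
# the polynomial generated by the recurrence T_{k+1} = 2 x T_k - T_{k-1}.
x = np.linspace(-1, 1, 7)

def cheb(n, x):
    t_prev, t = np.ones_like(x), x.copy()
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_prev, t = t, 2 * x * t - t_prev
    return t

for n in (3, 8):
    print(np.allclose(cheb(n, x), np.cos(n * np.arccos(x))))   # True

# Orthogonality with weight 1/sqrt(1 - x^2), via the substitution x = cos(t):
t = np.linspace(0, np.pi, 20001)
print(np.trapz(np.cos(3 * t) * np.cos(8 * t), t))   # ~ 0
```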

Being cosines and polynomials at the same time, Chebyshev polynomials possess a wide variety of interesting properties!

Most striking property: equal-ripple oscillations, leading to the minimax property.

Mathematical Methods in Engineering and Science Minimax Approximation* 492, Approximation with Chebyshev polynomials

Minimax property

Theorem: Among all polynomials $p_n(x)$ of degree $n > 0$ with the leading coefficient equal to unity, $2^{1-n} T_n(x)$ deviates least from zero in $[-1, 1]$. That is,

$\max_{-1\leq x\leq 1} |p_n(x)| \geq \max_{-1\leq x\leq 1} \left|2^{1-n} T_n(x)\right| = 2^{1-n}.$

If there exists a monic polynomial $p_n(x)$ of degree $n$ such that $\max_{-1\leq x\leq 1} |p_n(x)| < 2^{1-n}$, then at the $(n + 1)$ locations of alternating extrema of $2^{1-n} T_n(x)$, the polynomial

$q_n(x) = 2^{1-n} T_n(x) - p_n(x)$

will have the same sign as $2^{1-n} T_n(x)$. With alternating signs at $(n + 1)$ locations in sequence, $q_n(x)$ will have $n$ intervening zeros, even though it is a polynomial of degree at most $(n - 1)$: CONTRADICTION!
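A numerical illustration of the theorem for $n = 4$; the competitor $x^4$ is an arbitrary choice of monic polynomial, assumed for illustration:

```python
import numpy as np

# Minimax property for n = 4: the monic Chebyshev polynomial 2**(1-n) T_n
# deviates from zero by exactly 2**(1-n) = 0.125 on [-1, 1], while another
# monic polynomial of the same degree (here x**4) deviates by more.
x = np.linspace(-1, 1, 100001)

T4 = 8 * x**4 - 8 * x**2 + 1          # T_4(x), leading coefficient 2**3
print(np.max(np.abs(T4 / 8)))          # 0.125 = 2**(1-4)
print(np.max(np.abs(x**4)))            # 1.0  : a much larger deviation
```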

Mathematical Methods in Engineering and Science Minimax Approximation* 493, Approximation with Chebyshev polynomials

Chebyshev series

$f(x) = a_0 T_0(x) + a_1 T_1(x) + a_2 T_2(x) + a_3 T_3(x) + \cdots$

with coefficients

$a_0 = \frac{1}{\pi}\int_{-1}^{1}\frac{f(x)\, T_0(x)}{\sqrt{1 - x^2}}\, dx$ and $a_n = \frac{2}{\pi}\int_{-1}^{1}\frac{f(x)\, T_n(x)}{\sqrt{1 - x^2}}\, dx$ for $n = 1, 2, 3, \cdots$

A truncated series $\sum_{k=0}^{n} a_k T_k(x)$: Chebyshev economization

The leading error term $a_{n+1} T_{n+1}(x)$ deviates least from zero over $[-1, 1]$ and is qualitatively similar to the error function.

Question: How to develop a Chebyshev series approximation? Find out so many Chebyshev polynomials and evaluate coefficients?

Mathematical Methods in Engineering and Science Minimax Approximation* 494, Approximation with Chebyshev polynomials

For approximating $f(t)$ over $[a, b]$, scale the variable as $t = \frac{a+b}{2} + \frac{b-a}{2}\, x$, with $x \in [-1, 1]$.

Remark: The economized series $\sum_{k=0}^{n} a_k T_k(x)$ gives minimax deviation of the leading error term $a_{n+1} T_{n+1}(x)$. Assuming $a_{n+1} T_{n+1}(x)$ to be the error, at the zeros of $T_{n+1}(x)$ the error will be 'officially' zero, i.e.

$\sum_{k=0}^{n} a_k T_k(x_j) = f(t(x_j)),$

where $x_0, x_1, x_2, \cdots, x_n$ are the roots of $T_{n+1}(x)$.

Recall: Values of an $n$-th degree polynomial at $n + 1$ points uniquely fix the entire polynomial. Interpolation of these $n + 1$ values leads to the same polynomial: Chebyshev-Lagrange approximation.
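A sketch of Chebyshev-Lagrange approximation for $f(x) = e^x$ on $[-1, 1]$; the function and degree are assumptions made for illustration:

```python
import numpy as np

# Interpolate f at the n+1 roots of T_{n+1} (the Chebyshev accuracy points).
n = 7
j = np.arange(n + 1)
nodes = np.cos((2 * j + 1) * np.pi / (2 * (n + 1)))   # zeros of T_{n+1}

f = np.exp
p = np.polynomial.Polynomial.fit(nodes, f(nodes), n)  # degree-n interpolant

x = np.linspace(-1, 1, 10001)
print(np.max(np.abs(f(x) - p(x))))   # small, near-equi-ripple error
```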

Mathematical Methods in Engineering and Science Minimax Approximation* 495, Minimax Polynomial Approximation

Situations in which minimax approximation is desirable:
◮ Develop the approximation once and keep it for use in future.

Requirement: uniform quality control over the entire domain. Minimax approximation: deviation limited by the constant amplitude of ripple.

Chebyshev's minimax theorem. Theorem: Of all polynomials of degree up to $n$, $p(x)$ is the minimax polynomial approximation of $f(x)$, i.e. it minimizes $\max |f(x) - p(x)|$, if and only if there are $n + 2$ points $x_i$ such that

$a \leq x_1 < x_2 < x_3 < \cdots < x_{n+2} \leq b,$

where the difference $f(x) - p(x)$ takes its extreme values of the same magnitude and alternating signs.

Mathematical Methods in Engineering and Science Minimax Approximation* 496, Minimax Polynomial Approximation

Utilize any gap to reduce the deviation at the other extrema with values at the bound.


Figure: Schematic of an approximation that is not minimax

Construction of the minimax polynomial: Remez algorithm

Note: In the light of this theorem and algorithm, examine how $T_{n+1}(x)$ is qualitatively similar to the complete error function!

Mathematical Methods in Engineering and Science Minimax Approximation* 497, Points to note

◮ Unique features of Chebyshev polynomials
◮ The equal-ripple and minimax properties
◮ Chebyshev series and Chebyshev-Lagrange approximation
◮ Fundamental ideas of general minimax approximation

Necessary Exercises: 2, 3, 4

Mathematical Methods in Engineering and Science Partial Differential Equations 498, Outline

Partial Differential Equations: Introduction, Hyperbolic Equations, Parabolic Equations, Elliptic Equations, Two-Dimensional Wave Equation

Mathematical Methods in Engineering and Science Partial Differential Equations 499, Introduction

Quasi-linear second order PDE's:

$a\frac{\partial^2 u}{\partial x^2} + 2b\frac{\partial^2 u}{\partial x\partial y} + c\frac{\partial^2 u}{\partial y^2} = F(x, y, u, u_x, u_y)$

hyperbolic if $b^2 - ac > 0$, modelling phenomena which evolve in time perpetually and do not approach a steady state;
parabolic if $b^2 - ac = 0$, modelling phenomena which evolve in time in a transient manner, approaching steady state;
elliptic if $b^2 - ac < 0$, modelling steady-state configurations, without evolution in time.

If $F(x, y, u, u_x, u_y) = 0$: second order linear homogeneous differential equation. Principle of superposition: a linear combination of different solutions is also a solution. Solutions are often in the form of infinite series.
◮ Solution techniques in PDE's typically attack the boundary value problem directly.

Mathematical Methods in Engineering and Science Partial Differential Equations 500, Introduction

Initial and boundary conditions: Time and space variables are qualitatively different.
◮ Conditions in time: typically initial conditions. For second order PDE's, $u$ and $u_t$ over the entire space domain: Cauchy conditions.
◮ Time is a single variable and is decoupled from the space variables.
◮ Conditions in space: typically boundary conditions. For $u(t, x, y)$, boundary conditions over the entire curve in the $x$-$y$ plane that encloses the domain. For second order PDE's,
◮ Dirichlet condition: value of the function
◮ Neumann condition: derivative normal to the boundary
◮ Mixed (Robin) condition

Dirichlet, Neumann and Cauchy problems

Mathematical Methods in Engineering and Science Partial Differential Equations 501, Introduction

Method of separation of variables: For $u(x, y)$, propose a solution in the form

$u(x, y) = X(x)\, Y(y)$

and substitute

$u_x = X'Y$, $u_y = XY'$, $u_{xx} = X''Y$, $u_{xy} = X'Y'$, $u_{yy} = XY''$

to cast the equation into the form

$\phi(x, X, X', X'') = \psi(y, Y, Y', Y'').$

If the manoeuvre succeeds then, $x$ and $y$ being independent variables, it implies

$\phi(x, X, X', X'') = \psi(y, Y, Y', Y'') = k.$

The nature of the separation constant $k$ is decided based on the context; the resulting ODE's are solved in consistency with the boundary conditions and assembled to construct $u(x, y)$.

Mathematical Methods in Engineering and Science Partial Differential Equations 502, Hyperbolic Equations

Transverse vibrations of a string


Figure: Transverse vibration of a stretched string

Small deflection and slope: $\cos\theta \approx 1$, $\sin\theta \approx \theta \approx \tan\theta$. Horizontal (longitudinal) forces on $PQ$ balance. From Newton's second law, for vertical (transverse) deflection $u(x, t)$:

$T\sin(\theta + \delta\theta) - T\sin\theta = \rho\,\delta x\,\frac{\partial^2 u}{\partial t^2}$

Mathematical Methods in Engineering and Science Partial Differential Equations 503, Hyperbolic Equations

Under the assumptions, denoting $c^2 = \frac{T}{\rho}$,

$\frac{\partial^2 u}{\partial t^2}\,\delta x = c^2\left[\left.\frac{\partial u}{\partial x}\right|_Q - \left.\frac{\partial u}{\partial x}\right|_P\right].$

In the limit, as $\delta x \to 0$, the PDE of transverse vibration:

$\frac{\partial^2 u}{\partial t^2} = c^2\frac{\partial^2 u}{\partial x^2}$

the one-dimensional wave equation. Boundary conditions (in this case): $u(0, t) = u(L, t) = 0$. Initial configuration and initial velocity:

$u(x, 0) = f(x)$ and $u_t(x, 0) = g(x)$

Cauchy problem: Determine $u(x, t)$ for $0 \leq x \leq L$, $t \geq 0$.

Mathematical Methods in Engineering and Science Partial Differential Equations 504, Hyperbolic Equations

Solution by separation of variables

$u_{tt} = c^2 u_{xx}$, $u(0, t) = u(L, t) = 0$, $u(x, 0) = f(x)$, $u_t(x, 0) = g(x)$

Assuming $u(x, t) = X(x)\, T(t)$,

and substituting $u_{tt} = XT''$ and $u_{xx} = X''T$, variables are separated as

$\frac{T''}{c^2 T} = \frac{X''}{X} = -p^2.$

The PDE splits into two ODE's:

$X'' + p^2 X = 0$ and $T'' + c^2 p^2 T = 0.$

Eigenvalues of the BVP $X'' + p^2 X = 0$, $X(0) = X(L) = 0$ are $p = \frac{n\pi}{L}$, with eigenfunctions

$X_n(x) = \sin px = \sin\frac{n\pi x}{L}$ for $n = 1, 2, 3, \cdots.$

Second ODE: $T'' + \lambda_n^2 T = 0$, with $\lambda_n = \frac{cn\pi}{L}$.

Mathematical Methods in Engineering and Science Partial Differential Equations 505, Hyperbolic Equations

Corresponding solution:

$T_n(t) = A_n\cos\lambda_n t + B_n\sin\lambda_n t$

Then, for $n = 1, 2, 3, \cdots$,

$u_n(x, t) = X_n(x)\, T_n(t) = (A_n\cos\lambda_n t + B_n\sin\lambda_n t)\sin\frac{n\pi x}{L}$

satisfies the PDE and the boundary conditions. Since the PDE and the BC's are homogeneous, by superposition,

$u(x, t) = \sum_{n=1}^{\infty}\left[A_n\cos\lambda_n t + B_n\sin\lambda_n t\right]\sin\frac{n\pi x}{L}.$

Question: How to determine coefficients $A_n$ and $B_n$? Answer: By imposing the initial conditions.

Mathematical Methods in Engineering and Science Partial Differential Equations 506, Hyperbolic Equations

Initial conditions: Fourier sine series of $f(x)$ and $g(x)$

$u(x, 0) = f(x) = \sum_{n=1}^{\infty} A_n\sin\frac{n\pi x}{L}$
$u_t(x, 0) = g(x) = \sum_{n=1}^{\infty}\lambda_n B_n\sin\frac{n\pi x}{L}$

Hence, the coefficients:

$A_n = \frac{2}{L}\int_0^L f(x)\sin\frac{n\pi x}{L}\, dx$ and $B_n = \frac{2}{cn\pi}\int_0^L g(x)\sin\frac{n\pi x}{L}\, dx$

(see the sketch below). Related problems:
◮ Different boundary conditions: other kinds of series
◮ Long wire: infinite domain, continuous frequencies and solution from Fourier integrals. Alternative: reduce the problem using Fourier transforms.
◮ General wave equation in 3-d: $u_{tt} = c^2\nabla^2 u$
◮ Membrane equation: $u_{tt} = c^2(u_{xx} + u_{yy})$
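A sketch evaluating this series solution for a plucked string, with triangular $f(x)$ and $g(x) = 0$ so that $B_n = 0$; the specific profile is an assumption made for illustration:

```python
import numpy as np

# Series solution of u_tt = c^2 u_xx, u(0,t) = u(L,t) = 0, plucked string.
L_len, c, nmax = 1.0, 1.0, 50
xs = np.linspace(0, L_len, 2001)
f = np.where(xs < 0.5, xs, 1.0 - xs)            # triangular initial shape

n = np.arange(1, nmax + 1)
A = np.array([2 / L_len * np.trapz(f * np.sin(k * np.pi * xs / L_len), xs)
              for k in n])

def u(x, t):
    lam = c * n * np.pi / L_len
    return np.sum(A * np.cos(lam * t) * np.sin(n * np.pi * x / L_len))

print(u(0.5, 0.0))    # ~ 0.5 : recovers the initial peak deflection
print(u(0.5, 1.0))    # ~ -0.5: the inverted shape after half a period
```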

Mathematical Methods in Engineering and Science Partial Differential Equations 507, Hyperbolic Equations

D'Alembert's solution of the wave equation: method of characteristics. Canonical form:

By coordinate transformation from $(x, y)$ to $(\xi, \eta)$, with $U(\xi, \eta) = u[x(\xi, \eta), y(\xi, \eta)]$,

hyperbolic equation: $U_{\xi\eta} = \Phi$

parabolic equation: $U_{\xi\xi} = \Phi$

elliptic equation: $U_{\xi\xi} + U_{\eta\eta} = \Phi$

in which $\Phi(\xi, \eta, U, U_\xi, U_\eta)$ is free from second derivatives.

For a hyperbolic equation, the entire domain becomes a network of $\xi$-$\eta$ coordinate curves, known as characteristic curves, along which decoupled solutions can be tracked!

Mathematical Methods in Engineering and Science Partial Differential Equations 508, Hyperbolic Equations

For a hyperbolic equation in the form

$a\frac{\partial^2 u}{\partial x^2} + 2b\frac{\partial^2 u}{\partial x\partial y} + c\frac{\partial^2 u}{\partial y^2} = F(x, y, u, u_x, u_y),$

roots of $am^2 + 2bm + c$ are

$m_{1,2} = \frac{-b \pm \sqrt{b^2 - ac}}{a},$

real and distinct. Coordinate transformation

$\xi = y + m_1 x$, $\eta = y + m_2 x$

leads to $U_{\xi\eta} = \Phi(\xi, \eta, U, U_\xi, U_\eta)$. For the BVP

$u_{tt} = c^2 u_{xx}$, $u(0, t) = u(L, t) = 0$, $u(x, 0) = f(x)$, $u_t(x, 0) = g(x)$,

the canonical coordinate transformation:

$\xi = x - ct$, $\eta = x + ct$, with $x = \frac{1}{2}(\xi + \eta)$, $t = \frac{1}{2c}(\eta - \xi)$.

Mathematical Methods in Engineering and Science Partial Differential Equations 509, Hyperbolic Equations

Substitution of derivatives

$u_x = U_\xi\xi_x + U_\eta\eta_x = U_\xi + U_\eta \Rightarrow u_{xx} = U_{\xi\xi} + 2U_{\xi\eta} + U_{\eta\eta}$
$u_t = U_\xi\xi_t + U_\eta\eta_t = -cU_\xi + cU_\eta \Rightarrow u_{tt} = c^2 U_{\xi\xi} - 2c^2 U_{\xi\eta} + c^2 U_{\eta\eta}$

into the PDE $u_{tt} = c^2 u_{xx}$ gives

$c^2(U_{\xi\xi} - 2U_{\xi\eta} + U_{\eta\eta}) = c^2(U_{\xi\xi} + 2U_{\xi\eta} + U_{\eta\eta}).$

Canonical form: $U_{\xi\eta} = 0$. Integration:

$U_\xi = \int U_{\xi\eta}\, d\eta + \psi(\xi) = \psi(\xi)$
$\Rightarrow U(\xi, \eta) = \int\psi(\xi)\, d\xi + f_2(\eta) = f_1(\xi) + f_2(\eta)$

D'Alembert's solution: $u(x, t) = f_1(x - ct) + f_2(x + ct)$

Mathematical Methods in Engineering and Science Partial Differential Equations 510, Hyperbolic Equations

Physical insight from D'Alembert's solution: $f_1(x - ct)$ is a progressive wave in the forward direction with speed $c$. Reflection at a boundary: in a manner depending upon the boundary condition.

Reflected wave $f_2(x + ct)$: another progressive wave, this one in the backward direction with speed $c$. Superposition of the two waves: complete solution (response).

Note: Components of the earlier solution, with $\lambda_n = \frac{cn\pi}{L}$:

$\cos\lambda_n t\,\sin\frac{n\pi x}{L} = \frac{1}{2}\left[\sin\frac{n\pi}{L}(x - ct) + \sin\frac{n\pi}{L}(x + ct)\right]$
$\sin\lambda_n t\,\sin\frac{n\pi x}{L} = \frac{1}{2}\left[\cos\frac{n\pi}{L}(x - ct) - \cos\frac{n\pi}{L}(x + ct)\right]$

Mathematical Methods in Engineering and Science Partial Differential Equations 511, Parabolic Equations

Heat conduction equation or diffusion equation:

$\frac{\partial u}{\partial t} = c^2\nabla^2 u$

One-dimensional heat (diffusion) equation:

$u_t = c^2 u_{xx}$

Heat conduction in a finite bar: For a thin bar of length $L$ with end-points at zero temperature,

$u_t = c^2 u_{xx}$, $u(0, t) = u(L, t) = 0$, $u(x, 0) = f(x).$

The assumption $u(x, t) = X(x)\, T(t)$ leads to

$XT' = c^2 X''T \Rightarrow \frac{T'}{c^2 T} = \frac{X''}{X} = -p^2,$

giving rise to two ODE's as

$X'' + p^2 X = 0$ and $T' + c^2 p^2 T = 0.$

Mathematical Methods in Engineering and Science Partial Differential Equations 512, Parabolic Equations

The BVP in the space coordinate, $X'' + p^2 X = 0$, $X(0) = X(L) = 0$, has solutions

$X_n(x) = \sin\frac{n\pi x}{L}.$

With $\lambda_n = \frac{cn\pi}{L}$, the ODE in $T(t)$ has the corresponding solutions

$T_n(t) = A_n\, e^{-\lambda_n^2 t}.$

By superposition,

$u(x, t) = \sum_{n=1}^{\infty} A_n\sin\frac{n\pi x}{L}\; e^{-\lambda_n^2 t},$

coefficients being determined from the initial condition as

$u(x, 0) = f(x) = \sum_{n=1}^{\infty} A_n\sin\frac{n\pi x}{L},$

a Fourier sine series. As $t \to \infty$, $u(x, t) \to 0$ (steady state).
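A sketch of the decay of this series solution; the initial profile $f(x) = x(1 - x)$ is an assumption made for illustration:

```python
import numpy as np

# Series solution of u_t = c^2 u_xx with u(0,t) = u(L,t) = 0: each mode
# sin(n*pi*x/L) decays like exp(-lambda_n^2 t), lambda_n = c*n*pi/L.
L_len, c, nmax = 1.0, 1.0, 100
xs = np.linspace(0, L_len, 2001)
f = xs * (1 - xs)                               # sample initial profile

n = np.arange(1, nmax + 1)
A = np.array([2 / L_len * np.trapz(f * np.sin(k * np.pi * xs / L_len), xs)
              for k in n])
lam2 = (c * n * np.pi / L_len)**2

for t in (0.0, 0.01, 0.1, 1.0):
    u_mid = np.sum(A * np.exp(-lam2 * t) * np.sin(n * np.pi * 0.5 / L_len))
    print(t, u_mid)   # decays monotonically toward the steady state u = 0
```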

Mathematical Methods in Engineering and Science Partial Differential Equations 513, Parabolic Equations

Non-homogeneous boundary conditions:

$u_t = c^2 u_{xx}$, $u(0, t) = u_1$, $u(L, t) = u_2$, $u(x, 0) = f(x).$

For $u_1 \neq u_2$, with $u(x, t) = X(x)\, T(t)$, the BC's do not separate! Assume $u(x, t) = U(x, t) + u_{ss}(x)$, where the component $u_{ss}(x)$, the steady-state temperature (distribution), does not enter the differential equation:

$u_{ss}''(x) = 0$, $u_{ss}(0) = u_1$, $u_{ss}(L) = u_2 \Rightarrow u_{ss}(x) = u_1 + \frac{u_2 - u_1}{L}\, x$

Substituting into the BVP,

$U_t = c^2 U_{xx}$, $U(0, t) = U(L, t) = 0$, $U(x, 0) = f(x) - u_{ss}(x).$

Final solution:

$u(x, t) = \sum_{n=1}^{\infty} B_n\sin\frac{n\pi x}{L}\; e^{-\lambda_n^2 t} + u_{ss}(x),$

$B_n$ being the coefficients of the Fourier sine series of $f(x) - u_{ss}(x)$.

Mathematical Methods in Engineering and Science Partial Differential Equations 514, Parabolic Equations

Heat conduction in an infinite wire

$u_t = c^2 u_{xx}$, $u(x, 0) = f(x)$

In place of $\frac{n\pi}{L}$, we now have a continuous frequency $p$. Solution as superposition of all frequencies:

$u(x, t) = \int_0^{\infty} u_p(x, t)\, dp = \int_0^{\infty}\left[A(p)\cos px + B(p)\sin px\right] e^{-c^2 p^2 t}\, dp$

Initial condition

$u(x, 0) = f(x) = \int_0^{\infty}\left[A(p)\cos px + B(p)\sin px\right] dp$

gives the Fourier integral of $f(x)$ and the amplitude functions

$A(p) = \frac{1}{\pi}\int_{-\infty}^{\infty} f(v)\cos pv\, dv$ and $B(p) = \frac{1}{\pi}\int_{-\infty}^{\infty} f(v)\sin pv\, dv.$

Mathematical Methods in Engineering and Science Partial Differential Equations 515, Parabolic Equations

Solution using Fourier transforms

$u_t = c^2 u_{xx}$, $u(x, 0) = f(x)$

Using the derivative formula of Fourier transforms,

$\mathcal{F}(u_t) = c^2(iw)^2\mathcal{F}(u) \Rightarrow \frac{\partial\hat{u}}{\partial t} = -c^2 w^2\hat{u},$

since variables $x$ and $t$ are independent. Initial value problem in $\hat{u}(w, t)$:

$\frac{\partial\hat{u}}{\partial t} = -c^2 w^2\hat{u}$, $\hat{u}(0) = \hat{f}(w)$

Solution: $\hat{u}(w, t) = \hat{f}(w)\, e^{-c^2 w^2 t}$

The inverse Fourier transform gives the solution of the original problem as

$u(x, t) = \mathcal{F}^{-1}\{\hat{u}(w, t)\} = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\hat{f}(w)\, e^{-c^2 w^2 t}\, e^{iwx}\, dw$
$\Rightarrow u(x, t) = \frac{1}{\pi}\int_{-\infty}^{\infty} f(v)\int_0^{\infty}\cos(wx - wv)\, e^{-c^2 w^2 t}\, dw\, dv.$

Mathematical Methods in Engineering and Science Partial Differential Equations 516, Elliptic Equations

Heat flow in a plate: two-dimensional heat equation

$\frac{\partial u}{\partial t} = c^2\left(\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2}\right)$

Steady-state temperature distribution:

$\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0$

Laplace's equation. Steady-state heat flow in a rectangular plate:

$u_{xx} + u_{yy} = 0$, $u(0, y) = u(a, y) = u(x, 0) = 0$, $u(x, b) = f(x);$

a Dirichlet problem over the domain $0 \leq x \leq a$, $0 \leq y \leq b$. The proposal $u(x, y) = X(x)\, Y(y)$ leads to

$X''Y + XY'' = 0 \Rightarrow \frac{X''}{X} = -\frac{Y''}{Y} = -p^2.$

Separated ODE's:

$X'' + p^2 X = 0$ and $Y'' - p^2 Y = 0$

Mathematical Methods in Engineering and Science Partial Differential Equations 517, Elliptic Equations

From the BVP $X'' + p^2 X = 0$, $X(0) = X(a) = 0$: $X_n(x) = \sin\frac{n\pi x}{a}$. Corresponding solution of $Y'' - p^2 Y = 0$:

$Y_n(y) = A_n\cosh\frac{n\pi y}{a} + B_n\sinh\frac{n\pi y}{a}$

Condition $Y(0) = 0 \Rightarrow A_n = 0$, and

$u_n(x, y) = B_n\sin\frac{n\pi x}{a}\sinh\frac{n\pi y}{a}$

The complete solution:

$u(x, y) = \sum_{n=1}^{\infty} B_n\sin\frac{n\pi x}{a}\sinh\frac{n\pi y}{a}$

The last boundary condition $u(x, b) = f(x)$ fixes the coefficients from the Fourier sine series of $f(x)$.

Note: In the example, BC's on three sides were homogeneous. How did it help? What if there are more non-homogeneous BC's?

Mathematical Methods in Engineering and Science Partial Differential Equations 518, Elliptic Equations

Steady-state heat flow with internal heat generation:

$\nabla^2 u = \phi(x, y)$

Poisson's equation. Separation of variables impossible! Consider the function $u(x, y)$ as

$u(x, y) = u_h(x, y) + u_p(x, y)$

Sequence of steps

◮ one particular solution $u_p(x, y)$ that may or may not satisfy some or all of the boundary conditions
◮ solution of the corresponding homogeneous equation, namely $u_{xx} + u_{yy} = 0$, for $u_h(x, y)$
◮ such that $u = u_h + u_p$ satisfies all the boundary conditions

Mathematical Methods in Engineering and Science Partial Differential Equations 519, Two-Dimensional Wave Equation

Transverse vibration of a rectangular membrane:

$\frac{\partial^2 u}{\partial t^2} = c^2\left(\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2}\right)$

A Cauchy problem of the membrane:

$u_{tt} = c^2(u_{xx} + u_{yy})$; $u(x, y, 0) = f(x, y)$, $u_t(x, y, 0) = g(x, y)$; $u(0, y, t) = u(a, y, t) = u(x, 0, t) = u(x, b, t) = 0.$

Separate the time variable from the space variables:

$u(x, y, t) = F(x, y)\, T(t) \Rightarrow \frac{F_{xx} + F_{yy}}{F} = \frac{T''}{c^2 T} = -\lambda^2$

Helmholtz equation:

$F_{xx} + F_{yy} + \lambda^2 F = 0$

Mathematical Methods in Engineering and Science Partial Differential Equations 520, Two-Dimensional Wave Equation

Assuming $F(x, y) = X(x)\, Y(y)$,

$\frac{X''}{X} = -\frac{Y'' + \lambda^2 Y}{Y} = -\mu^2$
$\Rightarrow X'' + \mu^2 X = 0$ and $Y'' + \nu^2 Y = 0,$

such that $\lambda = \sqrt{\mu^2 + \nu^2}$. With BC's $X(0) = X(a) = 0$ and $Y(0) = Y(b) = 0$,

$X_m(x) = \sin\frac{m\pi x}{a}$ and $Y_n(y) = \sin\frac{n\pi y}{b}.$

Corresponding values of $\lambda$ are

$\lambda_{mn} = \sqrt{\left(\frac{m\pi}{a}\right)^2 + \left(\frac{n\pi}{b}\right)^2}$

with solutions of $T'' + c^2\lambda^2 T = 0$ as

$T_{mn}(t) = A_{mn}\cos c\lambda_{mn}t + B_{mn}\sin c\lambda_{mn}t.$

Mathematical Methods in Engineering and Science Partial Differential Equations 521, Two-Dimensional Wave Equation

Composing $X_m(x)$, $Y_n(y)$ and $T_{mn}(t)$ and superposing,

$u(x, y, t) = \sum_{m=1}^{\infty}\sum_{n=1}^{\infty}\left[A_{mn}\cos c\lambda_{mn}t + B_{mn}\sin c\lambda_{mn}t\right]\sin\frac{m\pi x}{a}\sin\frac{n\pi y}{b},$

coefficients being determined from the double Fourier series

$f(x, y) = \sum_{m=1}^{\infty}\sum_{n=1}^{\infty} A_{mn}\sin\frac{m\pi x}{a}\sin\frac{n\pi y}{b}$
and $g(x, y) = \sum_{m=1}^{\infty}\sum_{n=1}^{\infty} c\lambda_{mn}B_{mn}\sin\frac{m\pi x}{a}\sin\frac{n\pi y}{b}.$

BVP’s modelled in polar coordinates For domains of circular symmetry, important in many practical systems, the BVP is conveniently modelled in polar coordinates, the separation of variables quite often producing ◮ Bessel’s equation, in cylindrical coordinates, and ◮ Legendre’s equation, in spherical coordinates Mathematical Methods in Engineering and Science Partial Differential Equations 522, Introduction Points to note Hyperbolic Equations Parabolic Equations Elliptic Equations Two-Dimensional Wave Equation

◮ PDE’s in physically relevant contexts ◮ Initial and boundary conditions ◮ Separation of variables ◮ Examples of boundary value problems with hyperbolic, parabolic and elliptic equations ◮ Modelling, solution and interpretation ◮ Cascaded application of separation of variables for problems with more than two independent variables

Necessary Exercises: 1, 2, 4, 7, 9, 10

Mathematical Methods in Engineering and Science Analytic Functions 523, Outline

Analytic Functions: Analyticity of Complex Functions, Conformal Mapping, Potential Theory

Mathematical Methods in Engineering and Science Analytic Functions 524, Analyticity of Complex Functions

A function $f$ of a complex variable $z$ gives a rule to associate a unique complex number $w = u + iv$ to every $z = x + iy$ in a set.

Limit: If $f(z)$ is defined in a neighbourhood of $z_0$ (except possibly at $z_0$ itself) and $\exists\, l \in \mathbb{C}$ such that $\forall\,\epsilon > 0$, $\exists\,\delta > 0$ such that

$0 < |z - z_0| < \delta \Rightarrow |f(z) - l| < \epsilon,$

then $l = \lim_{z\to z_0} f(z)$.

Crucial difference from real functions: $z$ can approach $z_0$ in all possible manners in the complex plane. The definition of the limit is more restrictive.

Continuity: $\lim_{z\to z_0} f(z) = f(z_0)$. Continuity in a domain $D$: continuity at every point in $D$.

Mathematical Methods in Engineering and Science Analytic Functions 525, Analyticity of Complex Functions

Derivative of a complex function:

$f'(z_0) = \lim_{z\to z_0}\frac{f(z) - f(z_0)}{z - z_0} = \lim_{\delta z\to 0}\frac{f(z_0 + \delta z) - f(z_0)}{\delta z}$

When this limit exists, the function $f(z)$ is said to be differentiable. Extremely restrictive definition!

Analytic function: A function $f(z)$ is called analytic in a domain $D$ if it is defined and differentiable at all points in $D$.

Points to be settled later:
◮ The derivative of an analytic function is also analytic.
◮ An analytic function possesses derivatives of all orders.

A great qualitative difference between functions of a real variable and those of a complex variable!

Mathematical Methods in Engineering and Science Analytic Functions 526, Analyticity of Complex Functions

Cauchy-Riemann conditions: If $f(z) = u(x, y) + iv(x, y)$ is analytic then

$f'(z) = \lim_{\delta x,\delta y\to 0}\frac{\delta u + i\delta v}{\delta x + i\delta y}$

along all paths of approach for $\delta z = \delta x + i\delta y \to 0$, or $\delta x, \delta y \to 0$.


Figure: Paths approaching $z_0$. Figure: Paths in C-R equations.

Two expressions for the derivative:

$f'(z) = \frac{\partial u}{\partial x} + i\frac{\partial v}{\partial x} = \frac{\partial v}{\partial y} - i\frac{\partial u}{\partial y}$

Mathematical Methods in Engineering and Science Analytic Functions 527, Analyticity of Complex Functions

Cauchy-Riemann equations or conditions

$\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y}$ and $\frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x}$

are necessary for analyticity.

Question: Do the C-R conditions imply analyticity? Consider $u(x, y)$ and $v(x, y)$ having continuous first order partial derivatives that satisfy the Cauchy-Riemann conditions. By the mean value theorem,

$\delta u = u(x + \delta x, y + \delta y) - u(x, y) = \delta x\,\frac{\partial u}{\partial x}(x_1, y_1) + \delta y\,\frac{\partial u}{\partial y}(x_1, y_1)$

with $x_1 = x + \xi\delta x$, $y_1 = y + \xi\delta y$ for some $\xi \in [0, 1]$; and

$\delta v = v(x + \delta x, y + \delta y) - v(x, y) = \delta x\,\frac{\partial v}{\partial x}(x_2, y_2) + \delta y\,\frac{\partial v}{\partial y}(x_2, y_2)$

with $x_2 = x + \eta\delta x$, $y_2 = y + \eta\delta y$ for some $\eta \in [0, 1]$. Then,

$\delta f = \left[\delta x\,\frac{\partial u}{\partial x}(x_1, y_1) + i\delta y\,\frac{\partial v}{\partial y}(x_2, y_2)\right] + i\left[\delta x\,\frac{\partial v}{\partial x}(x_2, y_2) - i\delta y\,\frac{\partial u}{\partial y}(x_1, y_1)\right]$

Mathematical Methods in Engineering and Science Analytic Functions 528, Analyticity of Complex Functions

Using the C-R conditions $\frac{\partial v}{\partial y} = \frac{\partial u}{\partial x}$ and $\frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x}$,

$\delta f = (\delta x + i\delta y)\frac{\partial u}{\partial x}(x_1, y_1) + i\delta y\left[\frac{\partial u}{\partial x}(x_2, y_2) - \frac{\partial u}{\partial x}(x_1, y_1)\right] + i(\delta x + i\delta y)\frac{\partial v}{\partial x}(x_1, y_1) + i\delta x\left[\frac{\partial v}{\partial x}(x_2, y_2) - \frac{\partial v}{\partial x}(x_1, y_1)\right]$

$\Rightarrow \frac{\delta f}{\delta z} = \frac{\partial u}{\partial x}(x_1, y_1) + i\frac{\partial v}{\partial x}(x_1, y_1) + i\frac{\delta x}{\delta z}\left[\frac{\partial v}{\partial x}(x_2, y_2) - \frac{\partial v}{\partial x}(x_1, y_1)\right] + i\frac{\delta y}{\delta z}\left[\frac{\partial u}{\partial x}(x_2, y_2) - \frac{\partial u}{\partial x}(x_1, y_1)\right]$

Since $\left|\frac{\delta x}{\delta z}\right|, \left|\frac{\delta y}{\delta z}\right| \leq 1$, as $\delta z \to 0$ the limit exists and

$f'(z) = \frac{\partial u}{\partial x} + i\frac{\partial v}{\partial x} = -i\frac{\partial u}{\partial y} + \frac{\partial v}{\partial y}.$

Cauchy-Riemann conditions are necessary and sufficient for the function $w = f(z) = u(x, y) + iv(x, y)$ to be analytic.

Mathematical Methods in Engineering and Science Analytic Functions 529, Analyticity of Complex Functions

Harmonic function: Differentiating the C-R equations $\frac{\partial v}{\partial y} = \frac{\partial u}{\partial x}$ and $\frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x}$,

$\frac{\partial^2 u}{\partial x^2} = \frac{\partial^2 v}{\partial x\partial y},\quad \frac{\partial^2 u}{\partial y^2} = -\frac{\partial^2 v}{\partial y\partial x},\quad \frac{\partial^2 v}{\partial y^2} = \frac{\partial^2 u}{\partial y\partial x},\quad \frac{\partial^2 v}{\partial x^2} = -\frac{\partial^2 u}{\partial x\partial y}$

$\Rightarrow \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0 = \frac{\partial^2 v}{\partial x^2} + \frac{\partial^2 v}{\partial y^2}.$

The real and imaginary components of an analytic function are harmonic functions. Conjugate harmonic function of $u(x, y)$: $v(x, y)$. Families of curves $u(x, y) = c$ and $v(x, y) = k$ are mutually orthogonal, except possibly at points where $f'(z) = 0$.

Question: If $u(x, y)$ is given, then how to develop the complete analytic function $w = f(z) = u(x, y) + iv(x, y)$?
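One systematic answer: integrate the Cauchy-Riemann conditions. A sympy sketch for $u = x^2 - y^2$; the example is an assumption chosen for illustration, and it recovers $f(z) = z^2$:

```python
import sympy as sp

# Given a harmonic u(x, y), construct the conjugate harmonic v(x, y) from
# the Cauchy-Riemann conditions: v_y = u_x and v_x = -u_y.
x, y = sp.symbols('x y')
u = x**2 - y**2                      # harmonic: u_xx + u_yy = 0

v = sp.integrate(sp.diff(u, x), y)   # integrate v_y = u_x with respect to y

# The 'constant' of integration g(x) is fixed by the other condition v_x = -u_y:
gp = sp.symbols('gp')                # unknown g'(x)
sol = sp.solve(sp.Eq(sp.diff(v, x) + gp, -sp.diff(u, y)), gp)[0]
v = v + sp.integrate(sol, x)

print(sp.simplify(v))                              # 2*x*y
print(sp.simplify(sp.diff(u, x) - sp.diff(v, y)))  # 0: C-R satisfied
```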

Mathematical Methods in Engineering and Science Analytic Functions 530, Conformal Mapping

Function: mapping of elements in the domain to their images in the range. Depiction of a complex variable requires a plane with two axes. Mapping of a complex function $w = f(z)$ is shown in two planes. Example: mapping of a rectangle under the transformation $w = e^z$.

[Plots: (a) the $z$-plane, (b) the $w$-plane]

Figure: Mapping corresponding to function $w = e^z$

Mathematical Methods in Engineering and Science Analytic Functions 531, Conformal Mapping

Conformal mapping: a mapping that preserves the angle between any two directions in magnitude and sense. Verify: $w = e^z$ defines a conformal mapping. Through relative orientations of curves at the points of intersection, the 'local' shape of a figure is preserved.

Take a curve $z(t)$, $z(0) = z_0$, and its image $w(t) = f[z(t)]$, $w_0 = f(z_0)$. For analytic $f(z)$, $\dot{w}(0) = f'(z_0)\,\dot{z}(0)$, implying

$|\dot{w}(0)| = |f'(z_0)|\,|\dot{z}(0)|$ and $\arg\dot{w}(0) = \arg f'(z_0) + \arg\dot{z}(0).$

For several curves through $z_0$,

the image curves pass through $w_0$ and all of them turn by the same angle $\arg f'(z_0)$.

Cautions
◮ $f'(z)$ varies from point to point. Different scaling and turning effects take place at different points: the 'global' shape changes.
◮ For $f'(z) = 0$, the argument is undefined and conformality is lost.

Mathematical Methods in Engineering and Science Analytic Functions 532, Conformal Mapping

An analytic function defines a conformal mapping except at its critical points, where its derivative vanishes. Except at critical points, an analytic function is invertible. We can establish an inverse of any conformal mapping.

Examples
◮ Linear function $w = az + b$ (for $a \neq 0$)
◮ Linear fractional transformation $w = \frac{az + b}{cz + d}$, $ad - bc \neq 0$
◮ Other elementary functions like $z^n$, $e^z$ etc.

Special significance of conformal mappings: A harmonic function $\phi(u, v)$ in the $w$-plane is also a harmonic function, in the form $\phi(x, y)$ in the $z$-plane, as long as the two planes are related through a conformal mapping.

Mathematical Methods in Engineering and Science Analytic Functions 533, Potential Theory

Riemann mapping theorem: Let $D$ be a simply connected domain in the $z$-plane bounded by a closed curve $C$. Then there exists a conformal mapping that gives a one-to-one correspondence between $D$ and the unit disc $|w| < 1$ as well as between $C$ and the unit circle $|w| = 1$, bounding the unit disc.

Application to boundary value problems
◮ First, establish a conformal mapping between the given domain and a domain of simple geometry.
◮ Next, solve the BVP in this simple domain.
◮ Finally, using the inverse of the conformal mapping, construct the solution for the given domain.

Example: Dirichlet problem with Poisson's integral formula

f(re^{iθ}) = (1/2π) ∫₀^{2π} (R² − r²) f(Re^{iφ}) / [R² − 2Rr cos(θ − φ) + r²] dφ

Mathematical Methods in Engineering and Science Analytic Functions 534, Potential Theory

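A quick numerical check of Poisson's integral formula stated above. This is a sketch assuming NumPy; the boundary data u(R, φ) = R² cos 2φ comes from the harmonic function Re(z²), an arbitrary test choice:

    import numpy as np

    R, r, theta = 1.0, 0.4, 0.7            # disc radius and an interior point
    phi = np.linspace(0.0, 2.0 * np.pi, 2001)
    u_boundary = R**2 * np.cos(2.0 * phi)  # u = Re(z^2) on |z| = R

    kernel = (R**2 - r**2) / (R**2 - 2.0 * R * r * np.cos(theta - phi) + r**2)
    u_interior = np.trapz(kernel * u_boundary, phi) / (2.0 * np.pi)

    print(u_interior, r**2 * np.cos(2.0 * theta))   # both ~ 0.02720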
Two-dimensional potential flow
◮ Velocity potential φ(x, y) gives velocity components Vx = ∂φ/∂x and Vy = ∂φ/∂y.
◮ A streamline is a curve in the flow field, the tangent to which at any point is along the local velocity vector.
◮ Stream function ψ(x, y) remains constant along a streamline.
◮ ψ(x, y) is the conjugate harmonic function of φ(x, y).
◮ Complex potential function Φ(z) = φ(x, y) + iψ(x, y) defines the flow (a classical example is sketched below).

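As flagged in the last bullet, a classical instance is the complex potential Φ(z) = U(z + a²/z) for uniform flow of speed U past a circular cylinder of radius a. A sketch assuming NumPy; the sample point is arbitrary:

    import numpy as np

    U, a = 1.0, 1.0
    Phi = lambda z: U * (z + a**2 / z)       # complex potential, phi + i*psi
    dPhi = lambda z: U * (1.0 - a**2 / z**2) # dPhi/dz = Vx - i*Vy

    z = 2.0 + 1.0j                           # a point in the flow field
    print('phi, psi:', Phi(z).real, Phi(z).imag)
    print('Vx, Vy  :', dPhi(z).real, -dPhi(z).imag)

    # The cylinder surface |z| = a is a streamline: psi = Im(Phi) = 0 there.
    zs = a * np.exp(1j * np.linspace(0.1, np.pi - 0.1, 5))
    print(np.round(Phi(zs).imag, 12))        # all ~ 0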
If a flow field encounters a solid boundary of a complicated shape, transform the boundary conformally to a simple boundary to facilitate the study of the flow pattern. Mathematical Methods in Engineering and Science Analytic Functions 535, Analyticity of Complex Functions Points to note Conformal Mapping Potential Theory

◮ Analytic functions and Cauchy-Riemann conditions ◮ Conformality of analytic functions ◮ Applications in solving BVP’s and flow description

Necessary Exercises: 1,2,3,4,7,9 Mathematical Methods in Engineering and Science Integrals in the Complex Plane 536, Line Integral Outline Cauchy’s Integral Theorem Cauchy’s Integral Formula

Integrals in the Complex Plane Line Integral Cauchy’s Integral Theorem Cauchy’s Integral Formula Mathematical Methods in Engineering and Science Integrals in the Complex Plane 537, Line Integral Line Integral Cauchy’s Integral Theorem Cauchy’s Integral Formula For w = f (z)= u(x, y)+ iv(x, y), over a smooth curve C,

For w = f(z) = u(x, y) + iv(x, y), over a smooth curve C,

∫_C f(z) dz = ∫_C (u + iv)(dx + i dy) = ∫_C (u dx − v dy) + i ∫_C (v dx + u dy).

Extension to piecewise smooth curves is obvious. With parametrization, for z = z(t), a ≤ t ≤ b, with ż(t) ≠ 0,

∫_C f(z) dz = ∫_a^b f[z(t)] ż(t) dt.

Over a simple closed curve, contour integral: ∮_C f(z) dz

Example: ∮_C z^n dz for integer n, around the circle z = ρe^{iθ}:

∮_C z^n dz = iρ^{n+1} ∫₀^{2π} e^{i(n+1)θ} dθ = 0 for n ≠ −1, and 2πi for n = −1.

The M-L inequality: If C is a curve of finite length L and |f(z)| < M on C, then

|∫_C f(z) dz| ≤ ∫_C |f(z)| |dz| < M ∫_C |dz| = ML.

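The z^n example invites a direct numerical check via the parametrization integral ∫_a^b f[z(t)] ż(t) dt. A sketch assuming NumPy; the radius ρ = 2 is arbitrary:

    import numpy as np

    rho = 2.0
    t = np.linspace(0.0, 2.0 * np.pi, 4001)
    z = rho * np.exp(1j * t)               # circle z = rho * e^{i*theta}
    zdot = 1j * z                          # dz/dt

    for n in (-2, -1, 0, 1):
        val = np.trapz(z**n * zdot, t)
        print(n, np.round(val, 10))        # ~0 for all n except 2*pi*i at n = -1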
Mathematical Methods in Engineering and Science Integrals in the Complex Plane 538, Cauchy's Integral Theorem
◮ C is a simple closed curve in a simply connected domain D.
◮ Function f(z) = u + iv is analytic in D.

Contour integral ∮_C f(z) dz = ?

If f′(z) is continuous, then by Green's theorem in the plane,

∮_C f(z) dz = ∬_R (−∂v/∂x − ∂u/∂y) dx dy + i ∬_R (∂u/∂x − ∂v/∂y) dx dy,

where R is the region enclosed by C.

From the C-R conditions, ∮_C f(z) dz = 0.

Proof by Goursat: without the hypothesis of continuity of f′(z).

Cauchy-Goursat theorem: If f(z) is analytic in a simply connected domain D, then

∮_C f(z) dz = 0 for every simple closed curve C in D.

Importance of Goursat's contribution:
◮ continuity of f′(z) appears as a consequence!

Mathematical Methods in Engineering and Science Integrals in the Complex Plane 539, Cauchy's Integral Theorem
Principle of path independence
Two points z1 and z2 on the closed curve C ◮ two open paths C1 and C2 from z1 to z2. Cauchy's theorem on C, comprising C1 in the forward direction and C2 in the reverse direction:

∫_{C1} f(z) dz − ∫_{C2} f(z) dz = 0 ⇒ ∫_{C1} f(z) dz = ∫_{C2} f(z) dz = ∫_{z1}^{z2} f(z) dz

For an analytic function f(z) in a simply connected domain D, ∫_{z1}^{z2} f(z) dz is independent of the path and depends only on the end-points, as long as the path is completely contained in D. Consequence: definition of the function

F(z) = ∫_{z0}^{z} f(ξ) dξ

What does the formulation suggest?

Mathematical Methods in Engineering and Science Integrals in the Complex Plane 540, Cauchy's Integral Theorem
Indefinite integral
Question: Is F(z) analytic? Is F′(z) = f(z)?
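Before the analytical answer below, the path independence just established can be observed numerically. A sketch assuming NumPy; the integrand f(z) = z² and the two paths are arbitrary choices:

    import numpy as np

    f = lambda z: z**2
    t = np.linspace(0.0, 1.0, 2001)

    def path_integral(z_of_t):
        z = z_of_t(t)
        return np.trapz(f(z) * np.gradient(z, t), t)

    I1 = path_integral(lambda s: (1 + 1j) * s)        # straight segment 0 -> 1+i
    I2 = path_integral(lambda s: s + 1j * s**2)       # parabolic arc, same end-points
    print(I1, I2, (1 + 1j)**3 / 3)                    # all three agree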

(F(z + δz) − F(z))/δz − f(z) = (1/δz) [∫_{z0}^{z+δz} f(ξ) dξ − ∫_{z0}^{z} f(ξ) dξ] − f(z) = (1/δz) ∫_{z}^{z+δz} [f(ξ) − f(z)] dξ

f is continuous ⇒ ∀ǫ, ∃δ such that |ξ − z| < δ ⇒ |f(ξ) − f(z)| < ǫ

Choosing |δz| < δ,

|(F(z + δz) − F(z))/δz − f(z)| < (ǫ/|δz|) |∫_{z}^{z+δz} dξ| = ǫ.

If f(z) is analytic in a simply connected domain D, then there exists an analytic function F(z) in D such that

F′(z) = f(z) and ∫_{z1}^{z2} f(z) dz = F(z2) − F(z1).

Mathematical Methods in Engineering and Science Integrals in the Complex Plane 541, Cauchy's Integral Theorem
Principle of deformation of paths

f(z) analytic everywhere other than isolated points s1, s2, s3:

∫_{C1} f(z) dz = ∫_{C2} f(z) dz = ∫_{C3} f(z) dz

Not so for path C∗.

Figure: Path deformation [open paths C1, C2, C3 and C∗ from z1 to z2 in domain D, with isolated singular points s1, s2, s3]

The line integral remains unaltered through a continuous deformation of the path of integration with fixed end-points, as long as the sweep of the deformation includes no point where the integrand is non-analytic.

Mathematical Methods in Engineering and Science Integrals in the Complex Plane 542, Cauchy's Integral Theorem
Cauchy's theorem in multiply connected domain

Figure: Contour for multiply connected domain [outer boundary C and inner boundaries C1, C2, C3, connected by cuts L1, L2, L3]

∮_C f(z) dz − ∮_{C1} f(z) dz − ∮_{C2} f(z) dz − ∮_{C3} f(z) dz = 0.

If f(z) is analytic in a region bounded by the contour C as the outer boundary and non-overlapping contours C1, C2, C3, ..., Cn as inner boundaries, then

∮_C f(z) dz = Σ_{i=1}^{n} ∮_{Ci} f(z) dz.

Mathematical Methods in Engineering and Science Integrals in the Complex Plane 543, Cauchy's Integral Formula

f(z): analytic function in a simply connected domain D. For z0 ∈ D and a simple closed curve C in D,

∮_C [f(z)/(z − z0)] dz = 2πi f(z0).

Consider C as a circle with centre at z0 and radius ρ, with no loss of generality (why?).

∮_C [f(z)/(z − z0)] dz = f(z0) ∮_C dz/(z − z0) + ∮_C [(f(z) − f(z0))/(z − z0)] dz

From continuity of f(z), for any ǫ, ∃δ such that

|z − z0| < δ ⇒ |f(z) − f(z0)| < ǫ and |f(z) − f(z0)|/|z − z0| < ǫ/ρ,

with ρ < δ. From M-L inequality, the second integral vanishes. Mathematical Methods in Engineering and Science Integrals in the Complex Plane 544, Line Integral Cauchy’s Integral Formula Cauchy’s Integral Theorem Cauchy’s Integral Formula Direct applications ◮ Evaluation of contour integral: ◮ If g(z) is analytic on the contour and in the enclosed region,

the Cauchy’s theorem implies C g(z)dz = 0. ◮ If the contour encloses a singularity at z0, then Cauchy’s formula supplies a non-zero contributionH to the integral, if f (z)= g(z)(z z ) is analytic. − 0 ◮ Evaluation of function at a point: If finding the integral on the left-hand-side is relatively simple, then we use it to evaluate f (z0). Significant in the solution of boundary value problems! Example: Poisson’s integral formula 1 2π (R2 r 2)u(R, φ) u(r,θ)= − dφ 2π R2 2Rr cos(θ φ)+ r 2 Z0 − − for the Dirichlet problem over a circular disc. Mathematical Methods in Engineering and Science Integrals in the Complex Plane 545, Line Integral Cauchy’s Integral Formula Cauchy’s Integral Theorem Cauchy’s Integral Formula Poisson’s integral formula iθ iφ Taking z0 = re and z = Re (with r < R) in Cauchy’s formula, 2π f (Reiφ) 2πif (reiθ)= (iReiφ)dφ. Reiφ reiθ Z0 − How to get rid of imaginary quantities from the expression? R2 Develop a complement. With r in place of r, 2π iφ 2π iφ f (Re ) iφ f (Re ) iθ 0= (iRe )dφ = (ire− )dφ. i R2 i iθ iφ 0 Re φ e θ 0 re− Re− Z − r Z − Subtracting, 2π Reiφ re iθ 2πif (reiθ) = i f (Reiφ) + − dφ Reiφ reiθ Re iφ re iθ Z0  − − − −  2π (R2 r 2)f (Reiφ) = i − dφ (Reiφ reiθ)(Re iφ re iθ) Z0 − − − − 1 2π (R2 r 2)f (Reiφ) f (reiθ)= − dφ. ⇒ 2π R2 2Rr cos(θ φ)+ r 2 Z0 − − Mathematical Methods in Engineering and Science Integrals in the Complex Plane 546, Line Integral Cauchy’s Integral Formula Cauchy’s Integral Theorem Cauchy’s Integral Formula Cauchy’s integral formula evaluates contour integral of g(z),

if the contour encloses a point z0 where g(z) is non-analytic but g(z)(z − z0) is analytic. What if g(z)(z − z0) is also non-analytic, but g(z)(z − z0)² is analytic?

f(z0) = (1/2πi) ∮_C f(z)/(z − z0) dz,
f′(z0) = (1/2πi) ∮_C f(z)/(z − z0)² dz,
f″(z0) = (2!/2πi) ∮_C f(z)/(z − z0)³ dz,
... ... ...
f^(n)(z0) = (n!/2πi) ∮_C f(z)/(z − z0)^{n+1} dz.

The formal expressions can be established through differentiation under the integral sign.

Mathematical Methods in Engineering and Science Integrals in the Complex Plane 547, Cauchy's Integral Formula
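A numerical aside before the formal argument below: the derivative formulas can be checked on the unit circle. A sketch assuming NumPy; f(z) = e^z at z0 = 0 is an arbitrary test case, where every derivative equals 1:

    import numpy as np
    from math import factorial

    t = np.linspace(0.0, 2.0 * np.pi, 4001)
    z = np.exp(1j * t)                     # unit circle about z0 = 0
    zdot = 1j * z

    for n in range(4):
        integral = np.trapz(np.exp(z) / z**(n + 1) * zdot, t)
        print(n, np.round(factorial(n) / (2j * np.pi) * integral, 10))  # ~1 each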

(f(z0 + δz) − f(z0))/δz = (1/2πi δz) ∮_C f(z) [1/(z − z0 − δz) − 1/(z − z0)] dz
= (1/2πi) ∮_C f(z) dz / [(z − z0 − δz)(z − z0)]
= (1/2πi) ∮_C f(z) dz/(z − z0)² + (1/2πi) ∮_C f(z) [1/((z − z0 − δz)(z − z0)) − 1/(z − z0)²] dz
= (1/2πi) ∮_C f(z) dz/(z − z0)² + (δz/2πi) ∮_C f(z) dz / [(z − z0 − δz)(z − z0)²]

If |f(z)| < M on C, L is the path length and d = min |z − z0|,

|δz ∮_C f(z) dz / [(z − z0 − δz)(z − z0)²]| ≤ |δz| ML / [d²(d − |δz|)] → 0 as δz → 0.

An analytic function possesses derivatives of all orders at every point in its domain. Analyticity implies much more than mere differentiability! Mathematical Methods in Engineering and Science Integrals in the Complex Plane 548, Line Integral Points to note Cauchy’s Integral Theorem Cauchy’s Integral Formula

◮ Concept of line integral in complex plane ◮ Cauchy’s integral theorem ◮ Consequences of analyticity ◮ Cauchy’s integral formula ◮ Derivatives of arbitrary order for analytic functions

Necessary Exercises: 1,2,5,7 Mathematical Methods in Engineering and Science Singularities of Complex Functions 549, Series Representations of Complex Functions Outline Zeros and Singularities Residues Evaluation of Real Integrals

Singularities of Complex Functions Series Representations of Complex Functions Zeros and Singularities Residues Evaluation of Real Integrals Mathematical Methods in Engineering and Science Singularities of Complex Functions 550, Series Representations of Complex Functions Series Representations of Complex FunctionsZeros and Singularities Residues Evaluation of Real Integrals Taylor’s series of function f (z), analytic in a neighbourhood of z0:

f(z) = Σ_{n=0}^{∞} a_n (z − z0)^n = a_0 + a_1(z − z0) + a_2(z − z0)² + a_3(z − z0)³ + ⋯,

with coefficients

a_n = (1/n!) f^(n)(z0) = (1/2πi) ∮_C f(w)/(w − z0)^{n+1} dw,

where C is a circle with centre at z0. Form of the series and coefficients: similar to real functions.

The series representation is convergent within a disc |z − z0| < R, where the radius of convergence R is the distance of the nearest singularity from z0.

Note: No valid power series representation around z0, i.e. in powers of (z − z0), if f(z) is not analytic at z0.

Question: In that case, what about a series representation that includes negative powers of (z − z0) as well?

Mathematical Methods in Engineering and Science Singularities of Complex Functions 551, Series Representations of Complex Functions
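A computer-algebra aside on this question. A sketch assuming SymPy; the second expansion, with its negative powers, anticipates Laurent's series developed next:

    import sympy as sp

    z = sp.symbols('z')
    print(sp.series(sp.exp(z), z, 0, 4))         # Taylor: 1 + z + z**2/2 + z**3/6 + O(z**4)
    print(sp.series(sp.exp(z) / z**3, z, 0, 2))  # z**-3 + z**-2 + 1/(2*z) + 1/6 + ...
                                                 # negative powers of z appear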

Laurent's series: If f(z) is analytic on circles C1 (outer) and C2 (inner) with centre at z0, and in the annulus in between, then

f(z) = Σ_{n=−∞}^{∞} a_n (z − z0)^n = Σ_{m=0}^{∞} b_m (z − z0)^m + Σ_{m=1}^{∞} c_m/(z − z0)^m;

with coefficients

a_n = (1/2πi) ∮_C f(w)/(w − z0)^{n+1} dw;
or, b_m = (1/2πi) ∮_C f(w)/(w − z0)^{m+1} dw, c_m = (1/2πi) ∮_C f(w) (w − z0)^{m−1} dw;

the contour C lying in the annulus and enclosing C2.

Validity of this series representation: in the annular region obtained by growing C1 and shrinking C2 till f(z) ceases to be analytic. Observation: If f(z) is analytic inside C2 as well, then c_m = 0 and Laurent's series reduces to Taylor's series.

Mathematical Methods in Engineering and Science Singularities of Complex Functions 552, Series Representations of Complex Functions
Proof of Laurent's series
Cauchy's integral formula for any point z in the annulus:

f(z) = (1/2πi) ∮_{C1} f(w) dw/(w − z) − (1/2πi) ∮_{C2} f(w) dw/(w − z).

Organization of the series:

1/(w − z) = 1/[(w − z0)(1 − (z − z0)/(w − z0))] for w on C1,
1/(w − z) = −1/[(z − z0)(1 − (w − z0)/(z − z0))] for w on C2.

Figure: The annulus [between circles C1 (outer) and C2 (inner) centred at z0, containing the point z]

Using the expression for the sum of a geometric series,

1 + q + q² + ⋯ + q^{n−1} = (1 − q^n)/(1 − q) ⇒ 1/(1 − q) = 1 + q + q² + ⋯ + q^{n−1} + q^n/(1 − q).

We use q = (z − z0)/(w − z0) for the integral over C1 and q = (w − z0)/(z − z0) over C2.

Mathematical Methods in Engineering and Science Singularities of Complex Functions 553, Series Representations of Complex Functions
Proof of Laurent's series (contd)
Using q = (z − z0)/(w − z0),

1/(w − z) = 1/(w − z0) + (z − z0)/(w − z0)² + ⋯ + (z − z0)^{n−1}/(w − z0)^n + [(z − z0)/(w − z0)]^n · 1/(w − z)

⇒ (1/2πi) ∮_{C1} f(w) dw/(w − z) = a_0 + a_1(z − z0) + ⋯ + a_{n−1}(z − z0)^{n−1} + T_n,

with coefficients as required and

T_n = (1/2πi) ∮_{C1} [(z − z0)/(w − z0)]^n [f(w)/(w − z)] dw.

Similarly, with q = (w − z0)/(z − z0),

−(1/2πi) ∮_{C2} f(w) dw/(w − z) = a_{−1}(z − z0)^{−1} + ⋯ + a_{−n}(z − z0)^{−n} + T_{−n},

with appropriate coefficients and the remainder term

T_{−n} = (1/2πi) ∮_{C2} [(w − z0)/(z − z0)]^n [f(w)/(z − w)] dw.

f(z) = Σ_{k=−n}^{n−1} a_k (z − z0)^k + T_n + T_{−n},

where T_n = (1/2πi) ∮_{C1} [(z − z0)/(w − z0)]^n [f(w)/(w − z)] dw
and T_{−n} = (1/2πi) ∮_{C2} [(w − z0)/(z − z0)]^n [f(w)/(z − w)] dw.

◮ f(w) is bounded
◮ |(z − z0)/(w − z0)| < 1 over C1 and |(w − z0)/(z − z0)| < 1 over C2

Use the M-L inequality to show that

remainder terms T_n and T_{−n} approach zero as n → ∞.

Remark: For actually developing Taylor's or Laurent's series of a function, algebraic manipulations of known facts are employed quite often, rather than evaluating so many contour integrals!

Mathematical Methods in Engineering and Science Singularities of Complex Functions 555, Zeros and Singularities

Zeros of an analytic function: points where the function vanishes

If, at a point z0, a function f(z) vanishes along with the first m − 1 of its derivatives, but f^(m)(z0) ≠ 0, then z0 is a zero of f(z) of order m, giving the Taylor's series as

f(z) = (z − z0)^m g(z).

An isolated zero has a neighbourhood containing no other zero. For an analytic function, not identically zero, every point has a neighbourhood free of zeros of the function, except possibly for that point itself. In particular, zeros of such an analytic function are always isolated. Implication: If f(z) has a zero in every neighbourhood around z0, then it cannot be analytic at z0, unless it is the zero function [i.e. f(z) = 0 everywhere].

Mathematical Methods in Engineering and Science Singularities of Complex Functions 556, Zeros and Singularities

Entire function: A function which is analytic everywhere. Examples: z^n (for positive integer n), e^z, sin z etc. The Taylor's series of an entire function has an infinite radius of convergence.

Singularities: points where a function ceases to be analytic

Removable singularity: If f(z) is not defined at z0, but has a limit. Example: f(z) = (e^z − 1)/z at z = 0.
Pole: If f(z) has a Laurent's series around z0, with a finite number of terms with negative powers. If a_n = 0 for n < −m, but a_{−m} ≠ 0, then z0 is a pole of order m, lim_{z→z0} (z − z0)^m f(z) being a non-zero finite number. A simple pole: a pole of order one.
Essential singularity: A singularity which is neither a removable singularity nor a pole. If the function has a Laurent's series, then it has infinite terms with negative powers. Example: f(z) = e^{1/z} at z = 0.

Mathematical Methods in Engineering and Science Singularities of Complex Functions 557, Zeros and Singularities

Zeros and poles: complementary to each other
◮ Poles are necessarily isolated singularities.
◮ A zero of f(z) of order m is a pole of 1/f(z) of the same order, and vice versa.
◮ If f(z) has a zero of order m at z0 where g(z) has a pole of the same order, then f(z)g(z) is either analytic at z0 or has a removable singularity there.
◮ Argument theorem: If f(z) is analytic inside and on a simple closed curve C except for a finite number of poles inside, and f(z) ≠ 0 on C, then

(1/2πi) ∮_C [f′(z)/f(z)] dz = N − P,

where N and P are the total numbers of zeros and poles inside C respectively, counting multiplicities (orders).

Mathematical Methods in Engineering and Science Singularities of Complex Functions 558, Residues

Term-by-term integration of Laurent's series: ∮_C f(z) dz = 2πi a_{−1}

Residue: Res_{z0} f(z) = a_{−1} = (1/2πi) ∮_C f(z) dz

If f(z) has a pole (of order m) at z0, then

(z − z0)^m f(z) = Σ_{n=−m}^{∞} a_n (z − z0)^{m+n}

is analytic at z0, and

d^{m−1}/dz^{m−1} [(z − z0)^m f(z)] = Σ_{n=−1}^{∞} a_n [(m + n)!/(n + 1)!] (z − z0)^{n+1}

⇒ Res_{z0} f(z) = a_{−1} = lim_{z→z0} [1/(m − 1)!] d^{m−1}/dz^{m−1} [(z − z0)^m f(z)].

Residue theorem: If f(z) is analytic inside and on a simple closed curve C, with singularities at z1, z2, z3, ..., zk inside C, then

∮_C f(z) dz = 2πi Σ_{i=1}^{k} Res_{zi} f(z).

Mathematical Methods in Engineering and Science Singularities of Complex Functions 559, Evaluation of Real Integrals

General strategy
◮ Identify the required integral as a contour integral of a complex function, or a part thereof.
◮ If the domain of integration is infinite, then extend the contour infinitely, without enclosing new singularities.

Example: I = ∫₀^{2π} φ(cos θ, sin θ) dθ

With z = e^{iθ} and dz = iz dθ,

I = ∮_C φ((z + 1/z)/2, (z − 1/z)/(2i)) dz/(iz) = ∮_C f(z) dz,

where C is the unit circle centred at the origin. Denoting the poles falling inside the unit circle C as pj,

I = 2πi Σ_j Res_{pj} f(z).

Mathematical Methods in Engineering and Science Singularities of Complex Functions 560, Evaluation of Real Integrals

Example: For real rational function f(x),

I = ∫_{−∞}^{∞} f(x) dx,

the denominator of f(x) being of degree at least two higher than the numerator. Consider a contour C enclosing the semi-circular region |z| ≤ R, y ≥ 0, large enough to enclose all singularities above the x-axis.

∮_C f(z) dz = ∫_{−R}^{R} f(x) dx + ∫_S f(z) dz

For finite M, |f(z)| < M/R² on C, so

|∫_S f(z) dz| < (M/R²) πR = πM/R.

Figure: The contour [the segment [−R, R] of the x-axis closed by the semi-circle S: |z| = R, y ≥ 0, enclosing the poles pj]

I = ∫_{−∞}^{∞} f(x) dx = 2πi Σ_j Res_{pj} f(z), as R → ∞.

Mathematical Methods in Engineering and Science Singularities of Complex Functions 561, Evaluation of Real Integrals
Example: Fourier integral coefficients

A(s) = ∫_{−∞}^{∞} f(x) cos sx dx and B(s) = ∫_{−∞}^{∞} f(x) sin sx dx

Consider I = A(s) + iB(s) = ∫_{−∞}^{∞} f(x) e^{isx} dx.

Similar to the previous case,

∮_C f(z) e^{isz} dz = ∫_{−R}^{R} f(x) e^{isx} dx + ∫_S f(z) e^{isz} dz.

As |e^{isz}| = |e^{isx}| |e^{−sy}| = |e^{−sy}| ≤ 1 for y ≥ 0, we have

|∫_S f(z) e^{isz} dz| < (M/R²) πR = πM/R,

which yields, as R → ∞,

I = 2πi Σ_j Res_{pj} [f(z) e^{isz}].

Mathematical Methods in Engineering and Science Singularities of Complex Functions 562, Points to note

◮ Taylor’s series and Laurent’s series ◮ Zeros and poles of analytic functions ◮ Residue theorem ◮ Evaluation of real integrals through contour integration of suitable complex functions

Necessary Exercises: 1,2,3,5,8,9,10 Mathematical Methods in Engineering and Science Variational Calculus* 563, Introduction Outline Euler’s Equation Direct Methods

Variational Calculus* Introduction Euler’s Equation Direct Methods Mathematical Methods in Engineering and Science Variational Calculus* 564, Introduction Introduction Euler’s Equation Direct Methods

Consider a particle moving on a smooth surface z = ψ(q1, q2).

With position r = [q1(t)  q2(t)  ψ(q1(t), q2(t))]^T on the surface and δr = [δq1  δq2  (∇ψ)^T δq]^T in the tangent plane, the length of the path from qi = q(ti) to qf = q(tf) is

l = ∫ ‖δr‖ = ∫_{ti}^{tf} ‖ṙ‖ dt = ∫_{ti}^{tf} [q̇1² + q̇2² + ((∇ψ)^T q̇)²]^{1/2} dt.

For the shortest path or geodesic, minimize the path length l.

Question: What are the variables of the problem?

Answer: The entire curve or function q(t).

Variational problem: Optimization of a function of functions, i.e. a functional. Mathematical Methods in Engineering and Science Variational Calculus* 565, Introduction Introduction Euler’s Equation Direct Methods Functionals and their extremization Suppose that a candidate curve is represented as a sequence of points qj = q(tj ) at time instants

ti = t0 < t1 < t2 < t3 < ⋯ < t_{N−1} < tN = tf.

Geodesic problem: a multivariate optimization problem with the 2(N − 1) variables in {qj, 1 ≤ j ≤ N − 1}. With N → ∞, we obtain the actual function.

First order necessary condition: the functional is stationary with respect to arbitrary small variations in {qj}. [Equivalent to vanishing of the gradient]

This gives equations for the stationary points. Here, these equations are differential equations!

Mathematical Methods in Engineering and Science Variational Calculus* 566, Introduction
Examples of variational problems
Geodesic path: Minimize l = ∫_a^b ‖r′(t)‖ dt
Minimal surface of revolution: Minimize S = ∫ 2πy ds = 2π ∫_a^b y √(1 + y′²) dx
The brachistochrone problem: To find the curve along which the descent is fastest. Minimize T = ∫ ds/v = ∫_a^b √[(1 + y′²)/(2gy)] dx
Fermat's principle: Light takes the fastest path. Minimize T = ∫_{u1}^{u2} [√(x′² + y′² + z′²)/c(x, y, z)] du
Isoperimetric problem: Largest area in the plane enclosed by a closed curve of given perimeter. By extension, extremize a functional under one or more equality constraints.
Hamilton's principle of least action: Evolution of a dynamic system through the minimization of the action

s = ∫_{t1}^{t2} L dt = ∫_{t1}^{t2} (K − P) dt

Mathematical Methods in Engineering and Science Variational Calculus* 567, Euler's Equation

Find out a function y(x) that will make the functional

I[y(x)] = ∫_{x1}^{x2} f[x, y(x), y′(x)] dx

stationary, with boundary conditions y(x1) = y1 and y(x2) = y2. Consider variation δy(x) with δy(x1) = δy(x2) = 0 and consistent variation δy′(x).

δI = ∫_{x1}^{x2} [(∂f/∂y) δy + (∂f/∂y′) δy′] dx

Integration of the second term by parts:

∫_{x1}^{x2} (∂f/∂y′) δy′ dx = ∫_{x1}^{x2} (∂f/∂y′) [d(δy)/dx] dx = [(∂f/∂y′) δy]_{x1}^{x2} − ∫_{x1}^{x2} [d/dx(∂f/∂y′)] δy dx

With δy(x1) = δy(x2) = 0, the first term vanishes identically, and

δI = ∫_{x1}^{x2} [∂f/∂y − d/dx(∂f/∂y′)] δy dx.

Mathematical Methods in Engineering and Science Variational Calculus* 568, Euler's Equation

For δI to vanish for arbitrary δy(x),

d/dx(∂f/∂y′) − ∂f/∂y = 0.

Functions involving higher order derivatives

I[y(x)] = ∫_{x1}^{x2} f(x, y, y′, y″, ..., y^(n)) dx

with prescribed boundary values for y, y′, y″, ..., y^(n−1)

δI = ∫_{x1}^{x2} [(∂f/∂y) δy + (∂f/∂y′) δy′ + (∂f/∂y″) δy″ + ⋯ + (∂f/∂y^(n)) δy^(n)] dx

Working rule: Starting from the last term, integrate one term at a time by parts, using consistency of variations and BC's.

Euler's equation:

∂f/∂y − d/dx(∂f/∂y′) + d²/dx²(∂f/∂y″) − ⋯ + (−1)^n (d^n/dx^n)(∂f/∂y^(n)) = 0,

an ODE of order 2n, in general.

Mathematical Methods in Engineering and Science Variational Calculus* 569, Euler's Equation
Functionals of a vector function

I[r(t)] = ∫_{t1}^{t2} f(t, r, ṙ) dt

In terms of the partial gradients ∂f/∂r and ∂f/∂ṙ,

δI = ∫_{t1}^{t2} [(∂f/∂r)^T δr + (∂f/∂ṙ)^T δṙ] dt
= ∫_{t1}^{t2} (∂f/∂r)^T δr dt + [(∂f/∂ṙ)^T δr]_{t1}^{t2} − ∫_{t1}^{t2} [d/dt(∂f/∂ṙ)]^T δr dt
= ∫_{t1}^{t2} [∂f/∂r − d/dt(∂f/∂ṙ)]^T δr dt.

Euler's equation: a system of second order ODE's

d/dt(∂f/∂ṙ) − ∂f/∂r = 0, or d/dt(∂f/∂ṙi) − ∂f/∂ri = 0 for each i.

Mathematical Methods in Engineering and Science Variational Calculus* 570, Euler's Equation
Functionals of functions of several variables

I[u(x, y)] = ∬_D f(x, y, u, u_x, u_y) dx dy

Euler's equation: ∂/∂x(∂f/∂u_x) + ∂/∂y(∂f/∂u_y) − ∂f/∂u = 0

Moving boundaries
Revision of the basic case: allowing non-zero δy(x1), δy(x2). At an end-point, (∂f/∂y′) δy has to vanish for arbitrary δy(x): so ∂f/∂y′ vanishes at the boundary.

Euler boundary condition or natural boundary condition

Equality constraints and isoperimetric problems
Minimize I = ∫_{x1}^{x2} f(x, y, y′) dx subject to J = ∫_{x1}^{x2} g(x, y, y′) dx = J0. In another level of generalization, constraint φ(x, y, y′) = 0. Operate with f*(x, y, y′, λ) = f(x, y, y′) + λ(x)g(x, y, y′).

Mathematical Methods in Engineering and Science Variational Calculus* 571, Direct Methods
Finite difference method
With given boundary values y(a) and y(b),

I[y(x)] = ∫_a^b f[x, y(x), y′(x)] dx

◮ Represent y(x) by its values over x_i = a + ih with i = 0, 1, 2, ..., N, where b − a = Nh.
◮ Approximate the functional by

I[y(x)] ≈ φ(y1, y2, y3, ..., y_{N−1}) = Σ_{i=1}^{N} f(x̄_i, ȳ_i, ȳ′_i) h,

where x̄_i = (x_i + x_{i−1})/2, ȳ_i = (y_i + y_{i−1})/2 and ȳ′_i = (y_i − y_{i−1})/h.

◮ Minimize φ(y1, y2, y3, ..., y_{N−1}) with respect to the y_i; for example, by solving ∂φ/∂y_i = 0 for all i.

Exercise: Show that ∂φ/∂y_i = 0 is equivalent to Euler's equation.

Mathematical Methods in Engineering and Science Variational Calculus* 572, Direct Methods
Rayleigh-Ritz method
In terms of a set of basis functions, express the solution as

y(x) = Σ_{i=1}^{N} α_i w_i(x).

Represent the functional I[y(x)] as a multivariate function φ(α).

Optimize φ(α) to determine the α_i's. Note: As N → ∞, the numerical solution approaches exactitude. For a particular tolerance, one can truncate appropriately. Observation: With these direct methods, there is no need to reduce the variational (optimization) problem to Euler's equation! Question: Is it possible to reformulate a BVP as a variational problem and then use a direct method?

Mathematical Methods in Engineering and Science Variational Calculus* 573, Direct Methods
The inverse problem
From

I[y(x)] ≈ φ(α) = ∫_a^b f(x, Σ_{i=1}^{N} α_i w_i(x), Σ_{i=1}^{N} α_i w′_i(x)) dx,

∂φ/∂α_i = ∫_a^b [(∂f/∂y)(x, Σ α_i w_i, Σ α_i w′_i) w_i(x) + (∂f/∂y′)(x, Σ α_i w_i, Σ α_i w′_i) w′_i(x)] dx.

Integrating the second term by parts and using w_i(a) = w_i(b) = 0,

∂φ/∂α_i = ∫_a^b R[Σ_{i=1}^{N} α_i w_i] w_i(x) dx,

where R[y] ≡ ∂f/∂y − d/dx(∂f/∂y′) = 0 is the Euler's equation of the variational problem.

Def.: R[z(x)]: residual of the differential equation R[y] = 0 operated over the function z(x)

The residual of the Euler's equation of a variational problem, operated upon the solution obtained by the Rayleigh-Ritz method, is orthogonal to the basis functions w_i(x).

Mathematical Methods in Engineering and Science Variational Calculus* 574, Direct Methods
Galerkin method
Question: What if we cannot find a ‘corresponding’ variational problem for the differential equation? Answer: Work with the residual directly and demand

∫_a^b R[z(x)] w_i(x) dx = 0.

Freedom to choose two different families of functions as basis functions ψ_j(x) and trial functions w_i(x):

∫_a^b R[Σ_j α_j ψ_j(x)] w_i(x) dx = 0

A singular case of the Galerkin method: delta functions, at discrete points, as trial functions. Satisfaction of the differential equation exactly at the chosen points, known as collocation points: the Collocation method (a small Galerkin sketch follows below).

Mathematical Methods in Engineering and Science Variational Calculus* 575, Direct Methods
Finite element methods
◮ discretization of the domain into elements of simple geometry
◮ basis functions of low order polynomials with local scope
◮ design of basis functions so as to achieve enough order of continuity or smoothness across element boundaries
◮ piecewise continuous/smooth basis functions for the entire domain, with a built-in sparse structure
◮ some weighted residual method to frame the algebraic equations
◮ solution gives coefficients which are actually the nodal values
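To make the Galerkin recipe concrete, a minimal sketch assuming SymPy. The test problem y″ + y + x = 0, y(0) = y(1) = 0 (exact solution sin x / sin 1 − x) and the polynomial basis are arbitrary choices; trial and basis functions are taken to be the same family:

    import sympy as sp

    x = sp.symbols('x')
    n = 3
    w = [x * (1 - x) * x**i for i in range(n)]   # each basis function vanishes at 0 and 1

    a = sp.symbols('a0:%d' % n)                  # coefficients alpha_i
    y = sum(ai * wi for ai, wi in zip(a, w))
    R = sp.diff(y, x, 2) + y + x                 # residual of y'' + y + x = 0

    # Galerkin conditions: residual orthogonal to every basis function.
    eqs = [sp.integrate(R * wi, (x, 0, 1)) for wi in w]
    sol = sp.solve(eqs, a)
    y_approx = y.subs(sol)

    y_exact = sp.sin(x) / sp.sin(1) - x
    pt = sp.Rational(1, 2)
    print(sp.N(y_approx.subs(x, pt)), sp.N(y_exact.subs(x, pt)))  # ~0.0697 both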

Suitability of finite element analysis in software environments ◮ effectiveness and efficiency ◮ neatness and modularity Mathematical Methods in Engineering and Science Variational Calculus* 576, Introduction Points to note Euler’s Equation Direct Methods

◮ Optimization with respect to a function ◮ Concept of a functional ◮ Euler’s equation ◮ Rayleigh-Ritz and Galerkin methods ◮ Optimization and equation-solving in the infinite-dimensional function space: practical methods and connections
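As a small capstone for the first three points: Euler's equation for a given integrand can be generated symbolically. A sketch assuming SymPy's euler_equations helper; the integrand √(1 + y′²) (shortest path in the plane) is an arbitrary test case, and the exact printed form of the result may vary across SymPy versions:

    import sympy as sp
    from sympy.calculus.euler import euler_equations

    x = sp.symbols('x')
    y = sp.Function('y')

    f = sp.sqrt(1 + sp.diff(y(x), x)**2)   # arc-length integrand
    print(euler_equations(f, [y(x)], [x]))
    # Expect the equivalent of y'' / (1 + y'^2)**(3/2) = 0, i.e. y'' = 0:
    # the extremals are straight lines.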

Necessary Exercises: 1,2,4,5 Mathematical Methods in Engineering and Science Epilogue 577, Outline

Epilogue Mathematical Methods in Engineering and Science Epilogue 578, Epilogue

Source for further information: http://home.iitk.ac.in/˜ dasgupta/MathBook

Destination for feedback: [email protected]

Some general courses in immediate continuation
◮ Advanced Mathematical Methods
◮ Scientific Computing
◮ Advanced Numerical Analysis
◮ Optimization
◮ Advanced Differential Equations
◮ Partial Differential Equations
◮ Finite Element Methods

Mathematical Methods in Engineering and Science Epilogue 579, Epilogue

Some specialized courses in immediate continuation
◮ Linear Algebra and Matrix Theory
◮ Approximation Theory
◮ Variational Calculus and Optimal Control
◮ Advanced Mathematical Physics
◮ Geometric Modelling
◮ Computational Geometry
◮ Computer Graphics
◮ Signal Processing
◮ Image Processing

Mathematical Methods in Engineering and Science Selected References 580, Outline

Selected References Mathematical Methods in Engineering and Science Selected References 581, Selected References I

F. S. Acton. Numerical Methods that usually Work. The Mathematical Association of America (1990).

C. M. Bender and S. A. Orszag. Advanced Mathematical Methods for Scientists and Engineers. Springer-Verlag (1999).

G. Birkhoff and G.-C. Rota. Ordinary Differential Equations. John Wiley and Sons (1989).

G. H. Golub and C. F. Van Loan. Matrix Computations. The Johns Hopkins University Press (1983).

Mathematical Methods in Engineering and Science Selected References 582, Selected References II

M. T. Heath. Scientific Computing. Tata McGraw-Hill Co. Ltd (2000).

E. Kreyszig. Advanced Engineering Mathematics. John Wiley and Sons (2002).

E. V. Krishnamurthy and S. K. Sen. Numerical Algorithms. Affiliated East-West Press Pvt Ltd (1986).

D. G. Luenberger. Linear and Nonlinear Programming. Addison-Wesley (1984).

P. V. O'Neil. Advanced Engineering Mathematics. Thomson Books (2004).

Mathematical Methods in Engineering and Science Selected References 583, Selected References III

W. H. Press, S. A. Teukolsky, W. T. Vetterling and B. P. Flannery. Numerical Recipes. Cambridge University Press (1998).

G. F. Simmons. Differential Equations with Applications and Historical Notes. Tata McGraw-Hill Co. Ltd (1991).

J. Stoer and R. Bulirsch. Introduction to Numerical Analysis. Springer-Verlag (1993).

C. R. Wylie and L. C. Barrett. Advanced Engineering Mathematics. Tata McGraw-Hill Co. Ltd (2003).