Sreenivasa Institute of Technology and Management Studies

Engineering Mathematics-I

I B.TECH. I SEMESTER

Course Code: 18SAH114    Regulation: R18

Unit-1

Matrices

Introduction

Matrix algebra has at least two advantages:
• It reduces complicated systems of equations to simple expressions
• It is adaptable to systematic methods of mathematical treatment and well suited to computers

Definition: A matrix is a set or group of numbers arranged in a square or rectangular array enclosed by two brackets

[ 4  2 ]    [ a  b ]
[ 3  0 ]    [ c  d ]    [ 1  -1 ]

Properties:
• A specified number of rows and a specified number of columns
• Two numbers (rows × columns) describe the dimensions or size of the matrix

Examples: a 3×3 matrix, a 2×4 matrix, a 1×2 matrix.

A matrix is denoted by a bold capital letter and the elements within the matrix are denoted by lower case letters e.g. matrix [A] with elements aij

         [ a11  a12  ...  a1n ]
A(m×n) = [ a21  a22  ...  a2n ]
         [ ...              ... ]
         [ am1  am2  ...  amn ]

i goes from 1 to m; j goes from 1 to n.

TYPES OF MATRICES

1. Column matrix or vector: The number of rows may be any integer but the number of columns is always 1

[ 1 ]    [ a11 ]
[ 4 ]    [ a21 ]
[ 2 ]    [ ... ]
         [ am1 ]

TYPES OF MATRICES

2. Row matrix or vector Any number of columns but only one row

1 1 6 0 3 5 2

a11 a12 a13  a1n  Matrices - Introduction

TYPES OF MATRICES

3. Rectangular matrix Contains more than one element and number of rows is not equal to the number of columns

1 1    1 1 1 0 0 3 7    7  7 2 0 3 3 0   7 6  m  n Matrices - Introduction

TYPES OF MATRICES 4. Square matrix The number of rows is equal to the number of columns (a square matrix A has an order of m)

m x m 1 1 1 1 1   9 9 0 3 0   6 6 1

The principal or main diagonal of a square matrix is composed of all elements aij for which i=j

Matrices - Introduction

TYPES OF MATRICES

5. Diagonal matrix
A square matrix where all the elements are zero except those on the main diagonal, e.g.

[ 3  0  0 ]    [ 3  0  0  0 ]
[ 0  1  0 ]    [ 0  2  0  0 ]
[ 0  0  1 ]    [ 0  0  5  0 ]
               [ 0  0  0  9 ]

i.e. aij = 0 for all i ≠ j

aij ≠ 0 for some or all i = j

TYPES OF MATRICES

6. Unit or Identity matrix - I A diagonal matrix with ones on the main diagonal

1 0 0 0 0 1 0 0 1 0     aij 0  0 0 1 0     0 1 0 aij     0 0 0 1 i.e. aij =0 for all i = j aij = 1 for some or all i = j Matrices - Introduction

TYPES OF MATRICES

7. Null (zero) matrix - 0 All elements in the matrix are zero

0 0 0 0 0     0 0 0 0 0 0 0

For all i,j aij  0 Matrices - Introduction

TYPES OF MATRICES

8. Triangular matrix A square matrix whose elements above or below the main diagonal are all zero

1 0 0 1 8 9 2 1 0     0 1 6   5 2 3 0 0 3 Matrices - Introduction

TYPES OF MATRICES

8a. Upper triangular matrix

A square matrix whose elements below the main diagonal are all zero

[ a11 a12 a13 ]    [ 1  8  7 ]    [ 1  7  4  4 ]
[  0  a22 a23 ]    [ 0  1  8 ]    [ 0  1  7  4 ]
[  0   0  a33 ]    [ 0  0  3 ]    [ 0  0  7  8 ]
                                  [ 0  0  0  3 ]

i.e. aij = 0 for all i > j

Matrices - Introduction

TYPES OF MATRICES

8b. Lower triangular matrix

A square matrix whose elements above the main diagonal are all zero

[ a11  0   0  ]    [ 1  0  0 ]
[ a21 a22  0  ]    [ 2  1  0 ]
[ a31 a32 a33 ]    [ 5  2  3 ]

i.e. aij = 0 for all i < j

TYPES OF MATRICES

9. Scalar matrix
A diagonal matrix whose main diagonal elements are all equal to the same scalar. A scalar is defined as a single number or constant.

[ a  0  0 ]    [ 1  0  0 ]    [ 6  0  0  0 ]
[ 0  a  0 ]    [ 0  1  0 ]    [ 0  6  0  0 ]
[ 0  0  a ]    [ 0  0  1 ]    [ 0  0  6  0 ]
                              [ 0  0  0  6 ]

i.e. aij = 0 for all i ≠ j; aij = a for all i = j

Matrices

Matrix Operations Matrices - Operations

EQUALITY OF MATRICES
Two matrices are said to be equal only when all corresponding elements are equal. Therefore their sizes or dimensions must be equal as well.

    [ 1  0  0 ]        [ 1  0  0 ]
A = [ 2  1  0 ]    B = [ 2  1  0 ]    so A = B
    [ 5  2  3 ]        [ 5  2  3 ]

Some properties of equality:
• If A = B, then B = A for all A and B
• If A = B and B = C, then A = C for all A, B and C

    [ 1  0  0 ]        [ b11  b12  b13 ]
A = [ 2  1  0 ]    B = [ b21  b22  b23 ]
    [ 5  2  3 ]        [ b31  b32  b33 ]

If A = B then aij = bij for all i and j.

ADDITION AND SUBTRACTION OF MATRICES

The sum or difference of two matrices, A and B of the same size yields a matrix C of the same size

cij = aij ± bij

Matrices of different sizes cannot be added or subtracted Matrices - Operations

Commutative Law: A + B = B + A

Associative Law: A + (B + C) = (A + B) + C = A + B + C

7 3 1  1 5 6  8 8 5         2 5 6   4  2 3  2 7 9

A B C 2x3 2x3 2x3 Matrices - Operations

A + 0 = 0 + A = A

A + (-A) = 0 (where -A is the matrix composed of -aij as elements)

[ 6  4  2 ]   [ 1  2  0 ]   [ 5  2   2 ]
[ 3  2  7 ] - [ 1  0  8 ] = [ 2  2  -1 ]

SCALAR MULTIPLICATION OF MATRICES

Matrices can be multiplied by a scalar (constant or single element) Let k be a scalar quantity; then kA = Ak

Ex. If k = 4 and

    [ 3  -1 ]
A = [ 2   1 ]
    [ 2  -3 ]
    [ 4   1 ]

then

    [ 3  -1 ]   [ 12   -4 ]
4 × [ 2   1 ] = [  8    4 ]
    [ 2  -3 ]   [  8  -12 ]
    [ 4   1 ]   [ 16    4 ]

Properties:
• k(A + B) = kA + kB
• (k + g)A = kA + gA
• k(AB) = (kA)B = A(kB)
• k(gA) = (kg)A

Matrices - Operations

MULTIPLICATION OF MATRICES

The product of two matrices is another matrix. Two matrices A and B must be conformable for multiplication to be possible, i.e. the number of columns of A must equal the number of rows of B.

Example: A × B = C
        (1×3) (3×1) (1×1)

B x A = Not possible! (2x1) (4x2)

A x B = Not possible! (6x2) (6x3)

Example A x B = C (2x3) (3x2) (2x2) Matrices - Operations

[ a11 a12 a13 ] [ b11 b12 ]   [ c11 c12 ]
[ a21 a22 a23 ] [ b21 b22 ] = [ c21 c22 ]
                [ b31 b32 ]

(a11·b11) + (a12·b21) + (a13·b31) = c11

(a11·b12) + (a12·b22) + (a13·b32) = c12

(a21·b11) + (a22·b21) + (a23·b31) = c21

(a21·b12) + (a22·b22) + (a23·b32) = c22

Successive multiplication of row i of A with column j of B – row by column multiplication Matrices - Operations

4 8 1 2 3  (14)  (26)  (35) (18)  (22)  (33)   6 2    4 2 7 (44)  (26)  (75) (48)  (22)  (73) 5 3 31 21    63 57

Remember also: IA = A

1 0 31 21     0 1 63 57 Matrices - Operations

Assuming that matrices A, B and C are conformable for the operations indicated, the following are true: 1. AI = IA = A 2. A(BC) = (AB)C = ABC - (associative law) 3. A(B+C) = AB + AC - (first distributive law) 4. (A+B)C = AC + BC - (second distributive law) Caution! 1. AB not generally equal to BA, BA may not be conformable 2. If AB = 0, neither A nor B necessarily = 0 3. If AB = AC, B not necessarily = C Matrices - Operations

AB not generally equal to BA, BA may not be conformable

    [ 1  2 ]        [ 3  4 ]
T = [ 5  0 ]    S = [ 0  2 ]

     [ 1  2 ] [ 3  4 ]   [  3   8 ]
TS = [ 5  0 ] [ 0  2 ] = [ 15  20 ]

     [ 3  4 ] [ 1  2 ]   [ 23   6 ]
ST = [ 0  2 ] [ 5  0 ] = [ 10   0 ]

If AB = 0, neither A nor B necessarily = 0

1 1 2 3  0 0       0 0 2 3 0 0 Matrices - Operations

TRANSPOSE OF A MATRIX

If :

3 2 4 7 A2 A    2x3 5 3 1

Then transpose of A, denoted AT is:

2 5 T AT  A3  4 3 2   7 1 T aij  a ji For all i and j Matrices - Operations

To transpose: Interchange rows and columns The dimensions of AT are the reverse of the dimensions of A

3 2 4 7 A2 A    2 x 3 5 3 1 2 5 T T 2   A  A  4 3 3 x 2 3   7 1 Matrices - Operations

Properties of transposed matrices: 1. (A+B)T = AT + BT 2. (AB)T = BT AT 3. (kA)T = kAT 4. (AT)T = A Matrices - Operations

1. (A+B)T = AT + BT

                                   T
( [ 7   3  -1 ]   [ 1   5   6 ] )     [ 8  -2 ]
( [ 2  -5   6 ] + [-4  -2   3 ] )  =  [ 8  -7 ]
                                      [ 5   9 ]

[ 7   2 ]   [ 1  -4 ]   [ 8  -2 ]
[ 3  -5 ] + [ 5  -2 ] = [ 8  -7 ]
[-1   6 ]   [ 6   3 ]   [ 5   9 ]

2. (AB)^T = B^T A^T

SYMMETRIC MATRICES A Square matrix is symmetric if it is equal to its transpose: A = AT

a b A    b d

T a b A    b d Matrices - Operations

When the original matrix is square, transposition does not affect the elements of the main diagonal

a b A    c d

T a c A    b d

The identity matrix, I, a diagonal matrix D, and a scalar matrix, K, are equal to their transpose since the diagonal is unaffected. Matrices - Operations

INVERSE OF A MATRIX

Consider a scalar k. The inverse is the reciprocal, i.e. the division of 1 by the scalar.
Example: k = 7, so the inverse of k is k^-1 = 1/k = 1/7.
Division of matrices is not defined, since there may be AB = AC with B ≠ C. Instead, matrix inversion is used.
The inverse of a square matrix A, if it exists, is the unique matrix A^-1 such that:
AA^-1 = A^-1 A = I

Matrices - Operations

Example:

2 3 1 A2 A    2 1

1  1 1 A     2 3  Because:  1 13 1 1 0        2 3 2 1 0 1 3 1 1 1 1 0       2 1 2 3  0 1 Matrices - Operations

Properties of the inverse:

(AB)1  B1 A1 (A1)1  A (AT )1  (A1)T 1 (kA)1  A1 k A square matrix that has an inverse is called a nonsingular matrix A matrix that does not have an inverse is called a singular matrix Square matrices have inverses except when the determinant is zero When the determinant of a matrix is zero the matrix is singular Matrices - Operations

DETERMINANT OF A MATRIX

To compute the inverse of a matrix, the determinant is required Each square matrix A has a unit scalar value called the determinant of A, denoted by det A or |A|

       [ 1  2 ]
If A = [ 6  5 ]

then |A| = | 1  2 |
           | 6  5 |

If A = [a11] is a single element (1×1), then the determinant is defined as the value of the element:

|A| = det A = a11

If A is (n × n), its determinant may be defined in terms of determinants of order (n-1) or less.

MINORS If A is an n x n matrix and one row and one column are deleted, the resulting matrix is an (n-1) x (n-1) submatrix of A. The determinant of such a submatrix is called a minor of A and is designated by mij , where i and j correspond to the deleted row and column, respectively. mij is the minor of the element aij in A.

Matrices - Operations

eg. a11 a12 a13  A  a a a   21 22 23  a31 a32 a33 

Each element in A has a minor Delete first row and column from A .

The determinant of the remaining 2 x 2 submatrix is the minor of a11

a22 a23 m11  a32 a33 Matrices - Operations

Therefore the minor of a12 is:

      | a21  a23 |
m12 = | a31  a33 |

And the minor of a13 is:

      | a21  a22 |
m13 = | a31  a32 |

COFACTORS

The cofactor Cij of an element aij is defined as:

i j Cij  (1) mij

When the sum of a row number i and column j is even, cij = mij and when i+j is odd, cij =-mij

11 c11(i 1, j 1)  (1) m11  m11 12 c12 (i 1, j  2)  (1) m12  m12 13 c13 (i 1, j  3)  (1) m13  m13 Matrices - Operations

DETERMINANTS CONTINUED

The determinant of an n x n matrix A can now be defined as

|A| = det A = a11·c11 + a12·c12 + ... + a1n·c1n

The determinant of A is therefore the sum of the products of the elements of the first row of A and their corresponding cofactors. (It is possible to define |A| in terms of any other row or column but for simplicity, the first row only is used) Matrices - Operations

Therefore the 2 x 2 matrix :

a11 a12  A    a21 a22  Has cofactors :

c11  m11  a22  a22

And: c12  m12   a21  a21

And the determinant of A is:

A  a11c11  a12c12  a11a22  a12a21 Matrices - Operations

Example 1: 3 1 A    1 2 A  (3)(2)  (1)(1)  5 Matrices - Operations

For a 3 x 3 matrix:

    [ a11  a12  a13 ]
A = [ a21  a22  a23 ]
    [ a31  a32  a33 ]

The cofactors of the first row are:

      | a22  a23 |
c11 = | a32  a33 |  =  a22·a33 - a23·a32

        | a21  a23 |
c12 = - | a31  a33 |  =  -(a21·a33 - a23·a31)

      | a21  a22 |
c13 = | a31  a32 |  =  a21·a32 - a22·a31

The determinant of a matrix A is:

A  a11c11  a12c12  a11a22  a12a21

Which by substituting for the cofactors in this case is:

A  a11(a22a33  a23a32 )  a12 (a21a33  a23a31)  a13 (a21a32  a22a31) Matrices - Operations

Example 2:

    [ 1  0   1 ]
A = [ 0  2   3 ]
    [ 1  0  -1 ]

|A| = a11(a22·a33 - a23·a32) - a12(a21·a33 - a23·a31) + a13(a21·a32 - a22·a31)

|A| = (1)(-2 - 0) - (0)(0 - 3) + (1)(0 - 2) = -4

ADJOINT MATRICES

A cofactor matrix C of a matrix A is the square matrix of the same order as A in which each element aij is replaced by its cofactor cij.

Example:

       [  1  2 ]
If A = [ -3  4 ]

                                  [  4  3 ]
the cofactor matrix C of A is C = [ -2  1 ]

The adjoint matrix of A, denoted by adj A, is the transpose of its cofactor matrix:

adj A = C^T

It can be shown that: A(adj A) = (adjA) A = |A| I

Example:

    [  1  2 ]
A = [ -3  4 ]      |A| = (1)(4) - (2)(-3) = 10

              [ 4  -2 ]
adj A = C^T = [ 3   1 ]

 1 24  2 10 0  A(adjA)        10I 3 43 1   0 10

4  2 1 2 10 0  (adjA)A        10I 3 1 3 4  0 10 Matrices - Operations

USING THE ADJOINT MATRIX IN MATRIX INVERSION

Since

AA-1 = A-1 A = I

and A(adj A) = (adjA) A = |A| I

then A^-1 = adj A / |A|

Example:

    [  1  2 ]
A = [ -3  4 ]

        1  [ 4  -2 ]   [ 0.4  -0.2 ]
A^-1 = --- [ 3   1 ] = [ 0.3   0.1 ]
       10

To check AA^-1 = A^-1 A = I:

         [  1  2 ] [ 0.4  -0.2 ]   [ 1  0 ]
AA^-1 =  [ -3  4 ] [ 0.3   0.1 ] = [ 0  1 ] = I

         [ 0.4  -0.2 ] [  1  2 ]   [ 1  0 ]
A^-1 A = [ 0.3   0.1 ] [ -3  4 ] = [ 0  1 ] = I

Example 2 3 1 1    A  2 1 0  1 2 1 The determinant of A is

|A| = (3)(-1-0)-(-1)(-2-0)+(1)(4-1) = -2

The elements of the cofactor matrix are

c11  (1), c12  (2), c13  (3),

c21  (1), c22  (4), c23  (7),

c31  (1), c32  (2), c33  (5), Matrices - Operations

The cofactor matrix is therefore

1 2 3    C   1  4  7 1 2 5 

so 1 1 1 T   adjA C   2  4 2   3  7 5  and 1 1 1  0.5  0.5 0.5  adjA 1 A1    2  4 2   1.0 2.0 1.0 A  2      3  7 5  1.5 3.5  2.5 Matrices - Operations

The result can be checked using

AA-1 = A-1 A = I

The determinant of a matrix must be nonzero for the inverse to exist; otherwise there is no solution.
Nonsingular matrices have nonzero determinants.
Singular matrices have zero determinants.

Matrix Inversion

Simple 2 x 2 case

Let and a b 1 w x A    A    c d  y z

Since it is known that A A-1 = I then a bw x 1 0       c d y z 0 1 Simple 2 x 2 case

Multiplying gives:

aw + by = 1        ax + bz = 0
cw + dy = 0        cx + dz = 1

It can simply be shown that |A| = ad - bc.

thus 1 aw y  b  cw y  d 1 aw  cw  b d d d w   da bc A Simple 2 x 2 case

 ax z  b 1 cx z  d  ax 1 cx  b d b b x     da bc A Simple 2 x 2 case

1 by w  a  dy w  c 1 by  dy  a c c c y     ad  cb A Simple 2 x 2 case

 bz x  a 1 dz x  c  bz 1 dz  a c a a z   ad  bc A Simple 2 x 2 case

So that for a 2 x 2 matrix the inverse can be constructed in a simple fashion as

       [ w  x ]    1   [  d  -b ]
A^-1 = [ y  z ] = ---  [ -c   a ]
                  |A|

• Exchange the elements of the main diagonal
• Change the sign of the elements off the main diagonal
• Divide the resulting matrix by the determinant

Example 2 3 A    4 1

1 1  1  3  0.1 0.3  A        10  4 2   0.4  0.2

Check inverse A-1 A=I

1  1 32 3 1 0         I 10  4 2 4 1 0 1 Matrices and Linear Equations

Linear Equations Linear Equations

Linear equations are common and important for survey problems Matrices can be used to express these linear equations and aid in the computation of unknown values Example n equations in n unknowns, the aij are numerical coefficients, the bi are constants and the xj are unknowns

a11·x1 + a12·x2 + ... + a1n·xn = b1

a21·x1 + a22·x2 + ... + a2n·xn = b2
  ...
an1·x1 + an2·x2 + ... + ann·xn = bn

The equations may be expressed in the form AX = B where

    [ a11  a12  ...  a1n ]       [ x1 ]            [ b1 ]
A = [ a21  a22  ...  a2n ],  X = [ x2 ],  and  B = [ b2 ]
    [ ...              ... ]     [ ... ]           [ ... ]
    [ an1  an2  ...  ann ]       [ xn ]            [ bn ]

       n × n                     n × 1             n × 1

Number of unknowns = number of equations = n Linear Equations

If the determinant is nonzero, the equation can be solved to produce n numerical values for x that satisfy all the simultaneous equations. To solve, premultiply both sides of the equation by A^-1, which exists because |A| ≠ 0.

A-1 AX = A-1 B

Now since A-1 A = I

We get X = A-1 B

So if the inverse of the coefficient matrix is found, the unknowns, X would be determined Linear Equations

Example

3x1 - x2 + x3 = 2

2x1 + x2 = 1

x1 + 2x2 - x3 = 3

The equations can be expressed as

[ 3  -1   1 ] [ x1 ]   [ 2 ]
[ 2   1   0 ] [ x2 ] = [ 1 ]
[ 1   2  -1 ] [ x3 ]   [ 3 ]

When A-1 is computed the equation becomes

 0.5  0.5 0.5 2  2  1      X  A B  1.0 2.0 1.01  3 1.5 3.5  2.53  7 

Therefore

x1  2,

x2  3,

x3  7 Linear Equations

The values for the unknowns should be checked by substitution back into the initial equations

With x1 = 2, x2 = -3, x3 = -7:

3(2) - (-3) + (-7) = 2
2(2) + (-3) = 1
(2) + 2(-3) - (-7) = 3

Eigenvalues & eigenvectors, diagonalisation & the Cayley-Hamilton theorem

Before you start:

 You need to be able to find the determinant and the inverse of a 2x2 matrix and 3x3 matrix.

 You need to be familiar with the work on matrices on FP1, in particular you should know what is meant by an invariant line.

Solving simultaneous equations using matrices, eigenvalues & eigenvectors, Cayley-Hamilton theorem

When you have finished, you should:

 Understand the meaning of eigenvalue and eigenvector, and be able to find these for 2 x 2 or 3 x 3 matrices whenever this is possible.

 Understand the term characteristic equation of a 2 x 2 or 3 x 3 matrix.

 Be able to form the matrix of eigenvectors and use this to reduce a matrix to diagonal form.

 Be able to use the diagonal form to find powers of a 2 x 2 or 3 x 3 matrix.

 Understand that every 2 x 2 or 3 x 3 matrix satisfies its own characteristic equation, and be able to use this.

Recap: FP1 invariant lines

A transformation represented by a 2 × 2 matrix maps lines to lines

The next concept we will consider is which lines map onto themselves under a transformation (represented by a matrix).

Eigenvalues and Eigenvectors of a 2×2 matrix

Eigenvalues and Eigenvectors of 3×3 matrices

Diagonalisation

This is useful for finding powers of matrices
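As a numerical sketch of why this is useful (the matrix below is my own example, not from the notes): if A = PDP^-1 with D diagonal, then A^k = P D^k P^-1, and D^k only requires raising the diagonal entries to the kth power.

```python
import numpy as np

A = np.array([[4., 1.],
              [2., 3.]])                    # eigenvalues 5 and 2

evals, P = np.linalg.eig(A)                 # columns of P are eigenvectors
D = np.diag(evals)

assert np.allclose(A, P @ D @ np.linalg.inv(P))       # A = P D P^-1

# A^5 via the diagonal form: cheap, since D^5 is just evals**5 on the diagonal
A5 = P @ np.diag(evals ** 5) @ np.linalg.inv(P)
assert np.allclose(A5, np.linalg.matrix_power(A, 5))
```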

Example

 http://www.meiresources.org/test/fp2m1q.pdf

The Cayley-Hamilton Theorem

Unit-II

Differential Calculus and Its Applications

Taylor and Maclaurin Series

In this section, we will learn: How to find the Taylor and Maclaurin Series of a function and to multiply and divide a power series. Taylor and Maclaurin Series

We start by supposing that f is any function that can be represented by a power series

f(x) = c0 + c1(x - a) + c2(x - a)^2 + c3(x - a)^3 + c4(x - a)^4 + ...    |x - a| < R

Let’s try to determine what the coefficients cn must be in terms of f.

To begin, notice that if we put x = a in Equation 1, then all terms after the first one are 0 and we get

f (a) = c0 Taylor and Maclaurin Series

We can differentiate the series in Equation 1 term by term:

2 3 . . . f (x) = c1 + 2c2(x – a) + 3c3(x – a) + 4c4(x – a) + | x – a | < R

and substitution of x = a in Equation 2 gives

f (a) = c1

Now we differentiate both sides of Equation 2 and obtain

2 . . . f (x) = 2c2 + 2  3c3(x – a) + 3  4c4(x – a) + | x – a | < R

Again we put x = a in Equation 3. The result is

f (a) = 2c2 Taylor and Maclaurin Series

Let’s apply the procedure one more time. Differentiation of the series in Equation 3 gives

f'''(x) = 2·3c3 + 2·3·4c4(x - a) + 3·4·5c5(x - a)^2 + ...    |x - a| < R

and substitution of x = a in Equation 4 gives

f'''(a) = 2·3c3 = 3!c3

By now you can see the pattern. If we continue to differentiate and substitute x = a, we obtain

f^(n)(a) = 2·3·4···n·cn = n!cn

Solving this equation for the nth coefficient cn, we get

cn = f^(n)(a)/n!

This formula remains valid even for n = 0 if we adopt the conventions that 0! = 1 and f^(0) = f. Thus we have proved the following theorem.

Substituting this formula for cn back into the series, we see that if f has a power series expansion at a, then it must be of the following form:

f(x) = Σ (n=0 to ∞) [f^(n)(a)/n!] (x - a)^n
     = f(a) + [f'(a)/1!](x - a) + [f''(a)/2!](x - a)^2 + ...

The series in Equation 6 is called the Taylor series of the function f at a (or about a, or centered at a).

For the special case a = 0 the Taylor series becomes

f(x) = Σ (n=0 to ∞) [f^(n)(0)/n!] x^n = f(0) + [f'(0)/1!]x + [f''(0)/2!]x^2 + ...

This case arises frequently enough that it is given the special name Maclaurin series.

Example 1

Find the Maclaurin series of the function f(x) = e^x and its radius of convergence.

Solution:
If f(x) = e^x, then f^(n)(x) = e^x, so f^(n)(0) = e^0 = 1 for all n. Therefore the Taylor series for f at 0 (that is, the Maclaurin series) is

Σ (n=0 to ∞) x^n/n! = 1 + x/1! + x^2/2! + x^3/3! + ...

To find the radius of convergence we let an = x^n/n!. Then

|a(n+1)/an| = |x|/(n + 1) → 0 < 1

so, by the Ratio Test, the series converges for all x and the radius of convergence is R = ∞.
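The convergence for every x can be seen numerically by comparing partial sums of Σ x^n/n! with `math.exp` (a quick sketch; the sample points are arbitrary):

```python
import math

def exp_series(x, terms=30):
    """Partial sum of the Maclaurin series for e^x."""
    return sum(x ** n / math.factorial(n) for n in range(terms))

for x in (-2.0, 0.5, 3.0):
    assert abs(exp_series(x) - math.exp(x)) < 1e-9

print(exp_series(1.0))    # approximates e
```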

Taylor and Maclaurin Series

The conclusion we can draw from Theorem 5 and Example 1 is that if e^x has a power series expansion at 0, then

e^x = Σ (n=0 to ∞) x^n/n!

So how can we determine whether e^x does have a power series representation?

Let’s investigate the more general question: Under what circumstances is a function equal to the sum of its Taylor series? In other words, if f has derivatives of all orders, when is it true that

Taylor and Maclaurin Series

As with any convergent series, this means that f(x) is the limit of the sequence of partial sums. In the case of the Taylor series, the partial sums are

Tn(x) = Σ (i=0 to n) [f^(i)(a)/i!] (x - a)^i

Notice that Tn is a polynomial of degree n, called the nth-degree Taylor polynomial of f at a.

Example 1 – Forming a Power Series

Use the function f(x) = sin x to form the Maclaurin series

and determine the interval of convergence.

Solution: Successive differentiation of f(x) yields

f(x) = sin x          f(0) = sin 0 = 0
f'(x) = cos x         f'(0) = cos 0 = 1
f''(x) = -sin x       f''(0) = -sin 0 = 0
f^(3)(x) = -cos x     f^(3)(0) = -cos 0 = -1
f^(4)(x) = sin x      f^(4)(0) = sin 0 = 0
f^(5)(x) = cos x      f^(5)(0) = cos 0 = 1
and so on.

The pattern repeats after the third derivative.

Example 1 – Solution cont’d

So, the power series is as follows:

sin x = x - x^3/3! + x^5/5! - x^7/7! + ... = Σ (n=0 to ∞) (-1)^n x^(2n+1)/(2n+1)!

By the Ratio Test, you can conclude that this series converges for all x. Taylor Series and Maclaurin Series

You cannot conclude that the power series converges to sin x for all x.

You can simply conclude that the power series converges to some function, but you are not sure what function it is.

This is a subtle, but important, point in dealing with Taylor or Maclaurin series.

To persuade yourself that the series might converge to a function other than f, remember that the derivatives are being evaluated at a single point.

Example 2

Find the Maclaurin series for f(x) = (1 + x)^k, where k is any real number.

Solution:
Arranging our work in columns, we have

f(x) = (1 + x)^k                                    f(0) = 1
f'(x) = k(1 + x)^(k-1)                              f'(0) = k
f''(x) = k(k - 1)(1 + x)^(k-2)                      f''(0) = k(k - 1)
f'''(x) = k(k - 1)(k - 2)(1 + x)^(k-3)              f'''(0) = k(k - 1)(k - 2)
...
f^(n)(x) = k(k - 1)···(k - n + 1)(1 + x)^(k-n)      f^(n)(0) = k(k - 1)···(k - n + 1)

Therefore the Maclaurin series of f(x) = (1 + x)^k is

Σ (n=0 to ∞) [k(k - 1)···(k - n + 1)/n!] x^n

This series is called the binomial series. PARTIAL DERIVATIVES

Maximum and Minimum Values

In this section, we will learn how to: Use partial derivatives to locate maxima and minima of functions of two variables. MAXIMUM & MINIMUM VALUES Look at the hills and valleys in the graph of f shown here. ABSOLUTE MAXIMUM There are two points (a, b) where f has a local maximum—that is, where f(a, b) is larger than nearby values of f(x, y).

. The larger of these two values is the absolute maximum. ABSOLUTE MINIMUM Likewise, f has two local minima—where f(a, b) is smaller than nearby values.

. The smaller of these two values is the absolute minimum. LOCAL MAX. & LOCAL MAX. VAL. Definition 1 A function of two variables has a local maximum at (a, b) if f(x, y) ≤ f(a, b) when (x, y) is near (a, b). This means that f(x, y) ≤ f(a, b) for all points (x, y) in some disk with center (a, b).

. The number f(a, b) is called a local maximum value. LOCAL MIN. & LOCAL MIN. VALUE Definition 1 If f(x, y) ≥ f(a, b) when (x, y) is near (a, b), then f has a local minimum at (a, b). f(a, b) is a local minimum value. ABSOLUTE MAXIMUM & MINIMUM

If the inequalities in Definition 1 hold for all points (x, y) in the domain of f, then f has an absolute maximum (or absolute minimum) at (a, b) CRITICAL POINT A point (a, b) is called a critical point (or stationary point) of f if either:

. fx(a, b) = 0 and fy(a, b) = 0

. One of these partial derivatives does not exist. LOCAL MINIMUM Example 1 Let f(x, y) = x2 + y2 – 2x – 6y + 14

Then, fx(x, y) = 2x – 2

fy(x, y) = 2y – 6

. These partial derivatives are equal to 0 when x = 1 and y = 3. . So, the only critical point is (1, 3). LOCAL MINIMUM Example 1 By completing the square, we find:

f(x, y) = 4 + (x – 1)2 + (y – 3)2

. Since (x – 1)2 ≥ 0 and (y – 3)2 ≥ 0, we have f(x, y) ≥ 4 for all values of x and y.

. So, f(1, 3) = 4 is a local minimum.

. In fact, it is the absolute minimum of f. LOCAL MINIMUM Example 1 This can be confirmed geometrically from the graph of f, which is the elliptic paraboloid with vertex (1, 3, 4). EXTREME VALUES Example 2 Find the extreme values of f(x, y) = y2 – x2

. Since fx = –2x and fy = –2y, the only critical point is (0, 0). EXTREME VALUES Example 2 Notice that, for points on the x-axis, we have y = 0.

. So, f(x, y) = –x2 < 0 (if x ≠ 0).

For points on the y-axis, we have x = 0.

. So, f(x, y) = y2 > 0 (if y ≠ 0). EXTREME VALUES Example 2 Thus, every disk with center (0, 0) contains points where f takes positive values as well as points where f takes negative values.

. So, f(0, 0) = 0 can’t be an extreme value for f.

. Hence, f has no extreme value. SECOND DERIVATIVES TEST Example 3 Find the local maximum and minimum values and saddle points of

f(x, y) = x4 + y4 – 4xy + 1 SECOND DERIVATIVES TEST Example 3 We first locate the critical points:

fx = 4x^3 - 4y
fy = 4y^3 - 4x

Setting these partial derivatives equal to 0, we obtain:

x^3 - y = 0        y^3 - x = 0

To solve these equations, we substitute y = x^3 from the first equation into the second one. This gives:

0 = x^9 - x = x(x^8 - 1) = x(x^4 - 1)(x^4 + 1) = x(x^2 - 1)(x^2 + 1)(x^4 + 1)

. The three critical points are:

(0, 0), (1, 1), (–1, –1) SECOND DERIVATIVES TEST Example 3 Next, we calculate the second partial derivatives and D(x, y):

fxx = 12x^2      fxy = -4      fyy = 12y^2

D(x, y) = fxx·fyy - (fxy)^2 = 144x^2 y^2 - 16

. That is, f has no local maximum or minimum at (0, 0). SECOND DERIVATIVES TEST Example 3

As D(1, 1) = 128 > 0 and fxx(1, 1) = 12 > 0, we see from case a of the test that f(1, 1) = – 1 is a local minimum.

Similarly, we have D(–1, –1) = 128 > 0 and fxx(–1, –1) = 12 > 0.

. So f(–1, –1) = –1 is also a local minimum. SECOND DERIVATIVES TEST Example 3 The graph of f is shown here. Finding Relative Maxima & Minima
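Example 3 can be double-checked numerically (a sketch; the formulas are exactly those derived above):

```python
# f(x, y) = x^4 + y^4 - 4xy + 1: critical points and the D test
def grad(x, y):
    return (4 * x ** 3 - 4 * y, 4 * y ** 3 - 4 * x)

def D(x, y):
    fxx, fyy, fxy = 12 * x ** 2, 12 * y ** 2, -4
    return fxx * fyy - fxy ** 2

for pt in [(0, 0), (1, 1), (-1, -1)]:
    assert grad(*pt) == (0, 0)     # all three are critical points

assert D(0, 0) == -16              # D < 0: saddle point at the origin
assert D(1, 1) == 128              # D > 0 and fxx > 0: local minimum
assert D(-1, -1) == 128            # same conclusion at (-1, -1)
```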

EXAMPLE: A monopolist manufactures and sells two competing products, call them I and II, that cost $30 and $20 per unit, respectively, to produce. The revenue from marketing x units of product I and y units of product II is

R(x, y) = 98x + 112y - 0.04xy - 0.1x^2 - 0.2y^2

Find the values of x and y that maximize the monopolist's profits.

SOLUTION We first use the first-derivative test.

∂R/∂x = 98 - 0.04y - 0.2x

∂R/∂y = 112 - 0.04x - 0.4y

Finding Relative Maxima & Minima

CONTINUED Now we set both partial derivatives equal to 0 and then solve each for y.

98 - 0.04y - 0.2x = 0        112 - 0.04x - 0.4y = 0

y = -5x + 2450               y = -0.1x + 280

Now we may set the equations equal to each other and solve for x:

-5x + 2450 = -0.1x + 280
-4.9x = -2170
x ≈ 443

CONTINUED: We now determine the corresponding value of y by replacing x with 443 in the equation y = -0.1x + 280:

y = -0.1(443) + 280 ≈ 236

So we now know that revenue is maximized at the point (443, 236). Let's verify this using the second-derivative test. To do so, we must first calculate

D(x, y) = (∂²R/∂x²)(∂²R/∂y²) - (∂²R/∂x∂y)²

CONTINUED 2R   R       98 0.04y  0.2x  0.2 x2 x  x  x

2R   R       1120.04x 0.4y  0.4 2     y y  y  y

2R   R       1120.04x 0.4y  0.04 xy x  y  x 2 2R 2R  2R  D x, y       0.2  0.4   0.04 2  0.0784   2 2        x y  xy 

Since D(x, y) > 0 and ∂²R/∂x² < 0, we know, by the second-derivative test, that R(x, y) has a relative maximum at (443, 236).

Unit-III

Ordinary Differential equations of first order and first degree and its applications and special functions Exact Equations & Integrating Factors

Consider a first-order ODE of the form

M(x, y) + N(x, y)y' = 0

Suppose there is a function ψ such that

ψx(x, y) = M(x, y),    ψy(x, y) = N(x, y)

and such that ψ(x, y) = c defines y = φ(x) implicitly. Then

M(x, y) + N(x, y)y' = ∂ψ/∂x + (∂ψ/∂y)(dy/dx) = d/dx ψ(x, φ(x))

and hence the original ODE becomes

d/dx ψ(x, φ(x)) = 0

Thus ψ(x, y) = c defines a solution implicitly. In this case, the ODE is said to be exact.

Theorem 2.6.1

Suppose an ODE can be written in the form

M(x, y) + N(x, y)y' = 0        (1)

where the functions M, N, My and Nx are all continuous in the rectangular region R: (x, y) ∈ (α, β) × (γ, δ). Then Eq. (1) is exact if and only if

My(x, y) = Nx(x, y) for all (x, y) in R        (2)

That is, there exists a function ψ satisfying

ψx(x, y) = M(x, y),    ψy(x, y) = N(x, y)        (3)

if and only if M and N satisfy Equation (2).

Example 1: Exact Equation

Consider the following differential equation:

dy/dx = (x - 4y)/(4x + y)    ⟺    (x - 4y) - (4x + y)y' = 0

Then M(x, y) = x - 4y, N(x, y) = -(4x + y), and hence

My(x, y) = -4 = Nx(x, y)    ⟹    the ODE is exact

From Theorem 2.6.1, ψx(x, y) = x - 4y and ψy(x, y) = -(4x + y). Thus

ψ(x, y) = ∫ψx(x, y) dx = ∫(x - 4y) dx = x²/2 - 4xy + C(y)

and ψy = -4x + C'(y) = -4x - y gives C(y) = -y²/2, so the solutions satisfy

x² - 8xy - y² = c

Example 1: Direction Field and Solution Curves

A graph of the direction field for this differential equation, along with several solution curves, is given below.

Example 1: Explicit Solution and Graphs

Our solution is defined implicitly by the equation above. In this case, we can solve the equation explicitly for y:

y² + 8xy - x² + c = 0    ⟹    y = -4x ± √(17x² - c)

Solution curves for several values of c are given below.

Example 2: Exact Equation (1 of 3)

Consider the following differential equation:

(y cos x + 2x e^y) + (sin x + x² e^y - 1)y' = 0

Then M(x, y) = y cos x + 2x e^y, N(x, y) = sin x + x² e^y - 1, and hence

My(x, y) = cos x + 2x e^y = Nx(x, y)    ⟹    the ODE is exact

From Theorem 2.6.1,

ψx(x, y) = M = y cos x + 2x e^y,    ψy(x, y) = N = sin x + x² e^y - 1

Thus

ψ(x, y) = ∫ψx(x, y) dx = ∫(y cos x + 2x e^y) dx = y sin x + x² e^y + C(y)

Example 2: Solution (2 of 3)

It follows that

ψy(x, y) = sin x + x² e^y + C'(y) = sin x + x² e^y - 1    ⟹    C'(y) = -1    ⟹    C(y) = -y + k

Thus

ψ(x, y) = y sin x + x² e^y - y + k

By Theorem 2.6.1, the solution is given implicitly by

y sin x + x² e^y - y = c

Example 2: Direction Field and Solution Curves (3 of 3)

A graph of the direction field for this differential equation, along with several solution curves, is given below.
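The exactness criterion My = Nx for Example 2 can be verified numerically with central differences (a stdlib-only sketch; the sample points are arbitrary):

```python
import math

def M(x, y):
    return y * math.cos(x) + 2 * x * math.exp(y)

def N(x, y):
    return math.sin(x) + x ** 2 * math.exp(y) - 1

h = 1e-6
for (x, y) in [(0.3, -0.5), (1.2, 0.7), (-2.0, 0.1)]:
    My = (M(x, y + h) - M(x, y - h)) / (2 * h)   # approximates dM/dy
    Nx = (N(x + h, y) - N(x - h, y)) / (2 * h)   # approximates dN/dx
    assert abs(My - Nx) < 1e-4                   # exactness: My = Nx
```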

Our differential equation and solutions are given by

(y cos x + 2xe^y) + (sin x + x²e^y − 1)y′ = 0,  y sin x + x²e^y − y = c

A graph of the direction field for this differential equation, along with several solution curves, is given below.

Orthogonal Trajectories

An orthogonal trajectory of a family of curves is a curve that intersects each curve of the family orthogonally, that is, at right angles (see Figure 1).

Figure 1 Orthogonal Trajectories

For instance, each member of the family y = mx of straight lines through the origin is an orthogonal trajectory of the family x2 + y2 = r2 of concentric circles with center the origin (see Figure 2). We say that the two families are orthogonal trajectories of each other.

Figure 2 Example 1

Find the orthogonal trajectories of the family of curves x = ky², where k is an arbitrary constant.

Solution: The curves x = ky2 form a family of parabolas whose axis of symmetry is the x-axis.

The first step is to find a single differential equation that is satisfied by all members of the family.

Example 1 – Solution cont’d

If we differentiate x = ky², we get

1 = 2ky y′

This differential equation depends on k, but we need an equation that is valid for all values of k simultaneously.

To eliminate k we note that, from the equation of the given general parabola x = ky², we have k = x/y² and so the differential equation can be written as

y′ = 1/(2ky) = 1/(2(x/y²)y)

or

y′ = y/(2x) Example 1 – Solution cont’d

This means that the slope of the tangent line at any point (x, y) on one of the parabolas is y′ = y/(2x).

On an orthogonal trajectory the slope of the tangent line must be the negative reciprocal of this slope.

Therefore the orthogonal trajectories must satisfy the differential equation

dy/dx = −2x/y

This differential equation is separable, and we solve it as follows:

∫y dy = −∫2x dx  ⇒  y²/2 = −x² + C  ⇒  x² + y²/2 = C   (4)

Example 1 – Solution cont’d

where C is an arbitrary positive constant.

Thus the orthogonal trajectories are the family of ellipses given by Equation 4 and sketched in Figure 3. Figure 3 Differential Equations
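Orthogonality can be confirmed directly: the two slope formulas derived above multiply to −1 at every point where x and y are nonzero. A quick sympy check (illustrative; names are ours):

```python
import sympy as sp

# Slopes from the example: on a parabola x = k y^2 the slope is y/(2x);
# on an orthogonal trajectory (an ellipse x^2 + y^2/2 = C) it is -2x/y.
x, y = sp.symbols('x y', nonzero=True)
slope_parabola = y / (2*x)
slope_trajectory = -2*x / y

# Product of the slopes is -1, so the curves meet at right angles.
assert sp.simplify(slope_parabola * slope_trajectory) == -1
```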

Analytically, you have learned to solve only two types of differential equations—those of the forms y' = f(x) and y'' = f(x).

In this section, you will learn how to solve a more general type of differential equation.

The strategy is to rewrite the equation so that each variable occurs on only one side of the equation. This strategy is called separation of variables. Example 1 – Solving a Differential Equation

So, the general solution is given by y2 – 2x2 = C. Example 2 – Newton's Law of Cooling

Let y represent the temperature (in ºF) of an object in a room whose temperature is kept at a constant 60º. If the object cools from 100º to 90º in 10 minutes, how much longer will it take for its temperature to decrease to 80º?

Solution: From Newton's Law of Cooling, you know that the rate of change in y is proportional to the difference between y and 60.

This can be written as y' = k(y – 60), 80 ≤ y ≤ 100. Example 2 – Solution cont'd

To solve this differential equation, use separation of variables, as follows.

dy/(y − 60) = k dt  ⇒  ∫dy/(y − 60) = ∫k dt  ⇒  ln|y − 60| = kt + C₁

Example 2 – Solution cont'd

Because y > 60, |y – 60| = y – 60, and you can omit the absolute value signs.

Using exponential notation, you have

y − 60 = e^(kt + C₁) = Ce^(kt), that is, y = 60 + Ce^(kt)

Using y = 100 when t = 0, you obtain 100 = 60 + Ce^(k·0) = 60 + C, which implies that C = 40.

Because y = 90 when t = 10,

90 = 60 + 40e^(10k)  ⇒  30 = 40e^(10k)  ⇒  k = (1/10) ln(3/4) ≈ −0.02877 Example 2 – Solution cont'd

So, the model is y = 60 + 40e^(−0.02877t) (cooling model), and finally, when y = 80, you obtain

20 = 40e^(−0.02877t)  ⇒  t = ln(1/2)/(−0.02877) ≈ 24.09 minutes
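The numbers in Example 2 can be reproduced with a few lines of Python (an illustrative sketch; the variable names are ours):

```python
import math

# Newton's law of cooling: y(t) = 60 + C e^{kt}
T_room, T0 = 60.0, 100.0
C = T0 - T_room                 # 40, from y(0) = 100
k = math.log(30 / 40) / 10      # from y(10) = 90; about -0.02877

def y(t):
    """Temperature of the object after t minutes."""
    return T_room + C * math.exp(k * t)

# Time at which y = 80, from 20 = 40 e^{kt}
t80 = math.log(20 / 40) / k
print(round(k, 5))              # about -0.02877
print(round(t80 - 10, 2))       # about 14.09 additional minutes after t = 10
```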

Figure 1

So, it will require about 14.09 more minutes for the object to cool to a temperature of 80º (see Figure 1). Unit-IV & V Laplace Transforms

• Introduction: Definition of the Laplace Transform

Many practical engineering problems involve mechanical or electrical systems acted upon by discontinuous or impulsive forcing terms. For such problems the methods described in Chapter 3 are difficult to apply. In this chapter we use the Laplace transform to convert a problem for an unknown function f into a simpler problem for F, solve for F, and then recover f from its transform F. Given a known function K(s,t), an integral transform of a function f is a relation of the form

F(s) = ∫_α^β K(s,t) f(t) dt,  −∞ ≤ α < β ≤ ∞

The Laplace Transform

Let f be a function defined for t ≥ 0 that satisfies certain conditions to be named later. The Laplace Transform of f is defined as

 st Lf (t) F(s)  e f (t)dt 0

Thus the kernel function is K(s,t) = e^(−st). Since solutions of linear differential equations with constant coefficients are based on the exponential function, the Laplace transform is particularly useful for such equations. Note that the Laplace Transform is defined by an improper integral, and thus must be checked for convergence. On the next few slides, we review examples of improper integrals and piecewise continuous functions. Table 1 Some Functions ƒ(t) and Their Laplace Transforms L(ƒ)

Example 1

Consider the following improper integral.

∫₀^∞ e^(−st) dt

We can evaluate this integral as follows:

∫₀^∞ e^(−st) dt = lim(b→∞) ∫₀^b e^(−st) dt = lim(b→∞) [−e^(−st)/s]₀^b = lim(b→∞) (1/s)(1 − e^(−sb))

Note that if s = 0, then e^(−st) = 1. Thus the following two cases hold:

∫₀^∞ e^(−st) dt = 1/s, if s > 0; and
∫₀^∞ e^(−st) dt diverges, if s ≤ 0. Example 2

Consider the following improper integral.

∫₀^∞ st cos t dt

We can evaluate this integral using integration by parts:

∫₀^∞ st cos t dt = lim(b→∞) ∫₀^b st cos t dt
 = lim(b→∞) ( [st sin t]₀^b − ∫₀^b s sin t dt )
 = lim(b→∞) ( [st sin t]₀^b + [s cos t]₀^b )
 = lim(b→∞) ( sb sin b + s cos b − s )

Since this limit diverges, so does the original integral. Piecewise Continuous Functions

A function f is piecewise continuous on an interval [a, b] if this interval can be partitioned by a finite number of points a = t₀ < t₁ < … < tₙ = b such that

(1) f is continuous on each (t_k, t_(k+1))

(2) lim(t→t_k⁺) f(t) < ∞, k = 0, …, n − 1

(3) lim(t→t_k⁻) f(t) < ∞, k = 1, …, n

In other words, f is piecewise continuous on [a, b] if it is continuous there except for a finite number of jump discontinuities. Example 4

Consider the following piecewise-defined function f.

f(t) = t² + 1, 0 ≤ t ≤ 1
f(t) = (2 − t)⁻¹, 1 < t ≤ 2
f(t) = 4, 2 < t ≤ 3

From this definition of f, and from the graph of f below, we see that f is not piecewise continuous on [0, 3]. Theorem 6.1.2

Suppose that f is a function for which the following hold:

(1) f is piecewise continuous on [0, b] for all b > 0.
(2) |f(t)| ≤ Ke^(at) when t ≥ M, for constants a, K, M, with K, M > 0.

Then the Laplace Transform of f,

L{f(t)} = F(s) = ∫₀^∞ e^(−st) f(t) dt,

exists (is finite) for s > a.

Note: A function f that satisfies the conditions specified above is said to have exponential order as t → ∞. Example 5

Let f(t) = 1 for t ≥ 0. Then the Laplace transform F(s) of f is:

L{1} = ∫₀^∞ e^(−st) dt = lim(b→∞) ∫₀^b e^(−st) dt = lim(b→∞) [−e^(−st)/s]₀^b = 1/s, s > 0 Example 6

Let f(t) = e^(at) for t ≥ 0. Then the Laplace transform F(s) of f is:

L{e^(at)} = ∫₀^∞ e^(−st) e^(at) dt = lim(b→∞) ∫₀^b e^(−(s−a)t) dt = lim(b→∞) [−e^(−(s−a)t)/(s − a)]₀^b = 1/(s − a), s > a Example 7

Let f(t) = sin(at) for t ≥ 0. Using integration by parts twice, the Laplace transform F(s) of f is found as follows:

F(s) = L{sin(at)} = ∫₀^∞ e^(−st) sin(at) dt = lim(b→∞) ∫₀^b e^(−st) sin(at) dt
 = lim(b→∞) ( [−e^(−st) cos(at)/a]₀^b − (s/a) ∫₀^b e^(−st) cos(at) dt )
 = 1/a − (s/a) lim(b→∞) ( [e^(−st) sin(at)/a]₀^b + (s/a) ∫₀^b e^(−st) sin(at) dt )
 = 1/a − (s²/a²) F(s)

Hence F(s) = a/(s² + a²), s > 0. Linearity of the Laplace Transform

Suppose f and g are functions whose Laplace transforms exist for s > a₁ and s > a₂, respectively.

Then, for s greater than the maximum of a₁ and a₂, the Laplace transform of c₁f(t) + c₂g(t) exists. That is,

L{c₁f(t) + c₂g(t)} = ∫₀^∞ e^(−st) [c₁f(t) + c₂g(t)] dt is finite,

with

L{c₁f(t) + c₂g(t)} = c₁ ∫₀^∞ e^(−st) f(t) dt + c₂ ∫₀^∞ e^(−st) g(t) dt = c₁L{f(t)} + c₂L{g(t)} Example 8

Let f(t) = 5e^(−2t) − 3 sin(4t) for t ≥ 0. Then by linearity of the Laplace transform, and using results of previous examples, the Laplace transform F(s) of f is:

F(s) = L{f(t)} = L{5e^(−2t) − 3 sin(4t)} = 5L{e^(−2t)} − 3L{sin(4t)} = 5/(s + 2) − 12/(s² + 16), s > 0

First Shifting Theorem, s-Shifting

If f(t) has the transform F(s) (where s > k for some k), then e^(at)f(t) has the transform F(s − a) (where s − a > k). In formulas,

L{e^(at)f(t)} = F(s − a)

or, if we take the inverse on both sides,

e^(at)f(t) = L⁻¹{F(s − a)} Transforms of Derivatives and Integrals. ODEs
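Both linearity (Example 8) and the s-shifting rule can be checked symbolically. An illustrative sympy sketch (the particular shifted function, e^(2t)·t, is our own choice of example):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

# Example 8 by linearity: L{5e^{-2t} - 3 sin 4t} = 5/(s+2) - 12/(s^2+16)
F = sp.laplace_transform(5*sp.exp(-2*t) - 3*sp.sin(4*t), t, s, noconds=True)
assert sp.simplify(F - (5/(s + 2) - sp.Integer(12)/(s**2 + 16))) == 0

# s-shifting: since L{t} = 1/s^2, we expect L{e^{2t} t} = 1/(s-2)^2
G = sp.laplace_transform(sp.exp(2*t)*t, t, s, noconds=True)
assert sp.simplify(G - 1/(s - 2)**2) == 0
```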

The Laplace transform is a method of solving ODEs and initial value problems. The crucial idea is that operations of calculus on functions are replaced by operations of algebra on transforms. Roughly, differentiation of f(t) will correspond to multiplication of L(f) by s (see Theorems 1 and 2) and integration of f(t) to division of L(f) by s. To solve ODEs, we must first consider the Laplace transform of derivatives. You have encountered such an idea in your study of logarithms: under the application of the natural logarithm, a product of numbers becomes a sum of their logarithms, and a quotient of numbers becomes the difference of their logarithms.

Theorem 1 Laplace Transform of Derivatives

The transforms of the first and second derivatives of f(t) satisfy

(1) L(f′) = sL(f) − f(0)
(2) L(f″) = s²L(f) − sf(0) − f′(0).

Formula (1) holds if f(t) is continuous for all t ≥ 0 and satisfies the growth restriction (2) in Sec. 6.1 and f′(t) is piecewise continuous on every finite interval on the semi-axis t ≥ 0. Similarly, (2) holds if f and f′ are continuous for all t ≥ 0 and satisfy the growth restriction and f″ is piecewise continuous on every finite interval on the semi-axis t ≥ 0. EXAMPLE 1

Initial Value Problem: The Basic Laplace Steps

Solve y″ − y = t, y(0) = 1, y′(0) = 1

Solution. Step 1. From (2) and Table 6.1 we get the subsidiary equation [with Y = L(y)]

s²Y − sy(0) − y′(0) − Y = 1/s², thus (s² − 1)Y = s + 1 + 1/s². EXAMPLE 1 (continued)

Initial Value Problem: The Basic Laplace Steps

Solution (continued) Step 2. The transfer function is Q = 1/(s² − 1), and (7) becomes

Y = (s + 1)Q + Q/s² = (s + 1)/(s² − 1) + 1/(s²(s² − 1)).

Simplification of the first fraction and an expansion of the last fraction gives

Y = 1/(s − 1) + 1/(s² − 1) − 1/s². EXAMPLE 1 (continued)

Initial Value Problem: The Basic Laplace Steps

Solution (continued) Step 3. From this expression for Y and Table 6.1 we obtain the solution

y(t) = L⁻¹(Y) = L⁻¹{1/(s − 1)} + L⁻¹{1/(s² − 1)} − L⁻¹{1/s²} = e^t + sinh t − t. EXAMPLE 1 (continued)

Initial Value Problem: The Basic Laplace Steps

Solution (continued) Step 3 (continued). The diagram in Fig. 1 summarizes our approach.

Fig. 1. Steps of the Laplace transform method

Advantages of the Laplace Method

1. Solving a nonhomogeneous ODE does not require first solving the homogeneous ODE. See Example 4.
2. Initial values are automatically taken care of. See Examples 4 and 5.
3. Complicated inputs r(t) (right sides of linear ODEs) can be handled very efficiently, as we show in the next sections.

Unit Step Function (Heaviside Function). Second Shifting Theorem (t-Shifting)

We shall introduce two auxiliary functions, the unit step function or Heaviside function u(t − a) (following) and Dirac’s delta δ(t − a) (in Sec. 6.4). These functions are suitable for solving ODEs with complicated right sides of considerable engineering interest, such as single waves, inputs (driving forces) that are discontinuous or act for some time only, periodic inputs more general than just cosine and sine, or impulsive forces acting for an instant (hammerblows, for example). Unit Step Function (Heaviside Function) u(t − a)

The unit step function or Heaviside function u(t − a) is 0 for t < a, has a jump of size 1 at t = a (where we can leave it undefined), and is 1 for t > a, in a formula:

(1) u(t − a) = 0 if t < a, 1 if t > a   (a ≥ 0). Unit Step Function (Heaviside Function) u(t − a)

(continued) Figure 2 shows the special case u(t), which has its jump at zero, and Fig. 3 the general case u(t − a) for an arbitrary positive a.

Fig. 2. Unit step function u(t) Fig. 3. Unit step function u(t − a)

Theorem 1 Second Shifting Theorem; Time Shifting

If f(t) has the transform F(s), then the “shifted function”

(3) f̃(t) = f(t − a)u(t − a) = { 0 if t < a; f(t − a) if t > a }

has the transform e^(−as)F(s). That is, if L{f(t)} = F(s), then

(4) L{f(t − a)u(t − a)} = e^(−as)F(s).

Or, if we take the inverse on both sides, we can write

(4*) f(t − a)u(t − a) = L⁻¹{e^(−as)F(s)}. EXAMPLE 1

Application of Theorem 1. Use of Unit Step Functions

Write the following function using unit step functions and find its transform.

f(t) = 2 if 0 < t < 1;  (1/2)t² if 1 < t < (1/2)π;  cos t if t > (1/2)π   (Fig. 4)


Fig. 4. ƒ(t) in Example 1
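The t-shifting rule of Theorem 1 can be verified for a concrete case. For f(t) = t we have F(s) = 1/s², so the shifted function (t − a)u(t − a) should have transform e^(−as)/s². Since the shifted function vanishes below t = a, its transform reduces to an integral from a to ∞; a sympy sketch (our own choice of test function):

```python
import sympy as sp

t, s, a = sp.symbols('t s a', positive=True)

# L{(t-a)u(t-a)}: the Heaviside factor cuts the integral down to [a, oo)
F_shifted = sp.integrate(sp.exp(-s*t)*(t - a), (t, a, sp.oo))

# Theorem 1 predicts e^{-as} * (1/s^2)
assert sp.simplify(F_shifted - sp.exp(-a*s)/s**2) == 0
```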

Short Impulses. Dirac’s Delta Function. Partial Fractions

An airplane making a “hard” landing, a mechanical system being hit by a hammerblow, a ship being hit by a single high wave, a tennis ball being hit by a racket, and many other similar examples appear in everyday life. They are phenomena of an impulsive nature where actions of forces—mechanical, electrical, etc.—are applied over short intervals of time. We can model such phenomena and problems by “Dirac’s delta function,” and solve them very effectively by the Laplace transform. To model situations of that type, we consider the function

1/k if a t  a  k (1) f k () t a  (Fig. 132) 0 otherwise (and later its limit as k → 0). This function represents, for instance, a force of magnitude 1/k acting from t = a to t = a + k, where k is positive and small. In mechanics, the integral of a force acting over a time interval a ≤ t ≤ a + k is called the impulse of the force; similarly for electromotive forces E(t) acting on circuits. Since the blue rectangle in

Fig. 4 has area 1, the impulse of f_k in (1) is

(2) I_k = ∫₀^∞ f_k(t − a) dt = ∫_a^(a+k) (1/k) dt = 1.

Fig. 4. The function ƒk(t − a) in (1)

To find out what will happen if k becomes smaller and smaller, we take the limit of f_k as k → 0 (k > 0). This limit is denoted by δ(t − a), that is,

δ(t − a) = lim(k→0) f_k(t − a).

δ(t − a) is called the Dirac delta function or the unit impulse function. δ(t − a) is not a function in the ordinary sense as used in calculus, but a so-called generalized function. To see this, we note that the impulse I_k of f_k is 1, so that from (1) and (2) by taking the limit as k → 0 we obtain

(3) δ(t − a) = ∞ if t = a, 0 otherwise, and ∫₀^∞ δ(t − a) dt = 1.
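The unit-impulse property in (2) and (3) can be checked symbolically; sympy even provides a built-in `DiracDelta`. An illustrative sketch (the concrete value a = 2 is our own choice):

```python
import sympy as sp

t, a, k = sp.symbols('t a k', positive=True)

# The impulse of f_k(t - a) = 1/k on [a, a+k] equals 1 for every k > 0
I_k = sp.integrate(1/k, (t, a, a + k))
assert sp.simplify(I_k - 1) == 0

# sympy's DiracDelta has the same unit integral (take a = 2 for concreteness)
assert sp.integrate(sp.DiracDelta(t - 2), (t, 0, sp.oo)) == 1
```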