INTRODUCTION

Computation consists of some basic mathematical operations, mainly addition, subtraction, multiplication and division. All of these operations can be performed with a certain efficiency, and with the help of modern computer technology they can be performed within a very short time. A modern computer typically performs one basic mathematical operation in about one nanosecond, i.e. one billionth of a second (10^-9 second). In the modern digital age, if the solution of a problem is not obtained within a bounded time, the solution itself loses its significance. So the challenge is to solve any problem in the shortest time with acceptable accuracy. Since computer memory is nowadays available in sufficient quantity at low cost, the optimization of time and accuracy is of the utmost importance for solving any problem.

If the required number of basic arithmetic operations is bounded by a polynomial in the size of the problem, then the algorithm is referred to as a good algorithm. The required number of basic arithmetic operations may also be bounded by an exponential function or even a factorial function. The growth rates of polynomial P(x), exponential E(x) and factorial F(x) functions are related as follows:

P(x) << E(x) << F(x) (1)

This is why a polynomial bound is considered a good one.
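To illustrate relation (1) numerically, the following short Python sketch (purely illustrative; the sample sizes are arbitrary) prints a typical polynomial, exponential and factorial bound for a few problem sizes.

import math

# Compare polynomial, exponential and factorial growth to see P(x) << E(x) << F(x).
for n in (5, 10, 20, 30):
    poly = n ** 3                 # a typical polynomial bound
    expo = 2 ** n                 # an exponential bound
    fact = math.factorial(n)      # a factorial bound
    print(f"n={n:2d}  n^3={poly:<10d}  2^n={expo:<12d}  n!={fact}")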

Here we briefly discuss the developments and some techniques that have already been used to solve certain mathematical problems in relation to computation.


MATRIX-CHAIN MULTIPLICATION: DYNAMIC PROGRAMMING

Suppose we want to evaluate the chain multiplication of four matrices A1, A2, A3 and A4.

This multiplication can be done in five distinct ways, which are shown using parenthesization as follows:

1. (A1(A2 (A3 A4)))

2. (A1((A2 A3) A4))

3. ((A1 A2) (A3 A4))

4. ((A1(A2 A3))A4)

5. (((A1 A2) A3)A4)

These different parenthesizations yield different evaluation costs in terms of the number of scalar multiplications, and one may be ten or more times costlier than another. So proper parenthesization is a very important task for minimizing the total computational cost. Using the dynamic programming approach, Cormen et al. provide the solution in their books [34, 35].

The existing dynamic programming approach [34, 35] for the optimal solution of matrix-chain multiplication requires the computation of the recursive equation

m[i, j] = min over i ≤ k < j of ( m[i, k] + m[k+1, j] + Pi-1 Pk Pj )

where m[i, j]: minimum cost of computing the matrix-chain Ai Ai+1 . . . Aj

m[i, k]: minimum cost of computing sub matrix-chain AiAi+1 . . . Ak

m[k+1, j]: minimum cost of computing sub matrix-chain Ak+1 . . . Aj and

Pi-1 Pk Pj: cost of multiplying the two resultant matrices (Ai Ai+1 . . . Ak) and (Ak+1 . . . Aj), where the matrix Ai is of order Pi-1 × Pi (i = 1, 2, . . . , j). This particular Pk can be any of Pi-1, Pi, . . . , Pj. Obviously, m[i, j] = 0 for i = j.
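For concreteness, the following Python sketch fills the table m[i, j] by the recurrence quoted above, in the manner of Cormen et al. [34, 35]; the function and variable names and the example dimensions are illustrative only.

def matrix_chain_order(dims):
    # dims = [P0, P1, ..., Pn]; matrix Ai has order P(i-1) x P(i).
    n = len(dims) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]   # m[i][j]: minimum number of scalar multiplications
    s = [[0] * (n + 1) for _ in range(n + 1)]   # s[i][j]: optimal split point k
    for length in range(2, n + 1):              # length of the sub-chain
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = float("inf")
            for k in range(i, j):               # m[i,j] = min over k of m[i,k] + m[k+1,j] + P(i-1)*Pk*Pj
                cost = m[i][k] + m[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                if cost < m[i][j]:
                    m[i][j], s[i][j] = cost, k
    return m[1][n], s

# Example: A1 (10 x 30), A2 (30 x 5), A3 (5 x 60); the optimal cost is 4500.
cost, _ = matrix_chain_order([10, 30, 5, 60])
print(cost)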

NEW TECHNIQUES FOR COMPUTING DETERMINANT OF MATRICES

The roles of matrices are immensely important in all branches of science, engineering, social science and management. Matrix representations of systems of linear equations are used to solve such systems. The inversion of matrices is necessary for this purpose, and in turn it is necessary to find the determinant of a matrix for its inversion. Matrices and their various applications are available in the book of Hill [62]. Several direct and indirect methods are available by which the determinant value of a matrix can be evaluated. Some of the direct methods are the basket-weave method, the pivotal condensation method (Chio's method) and the expansion method. The indirect methods include the Gauss elimination method, the LU decomposition method, the QR decomposition method and the Cholesky decomposition method. In solving physical problems, different types of matrices may arise, such as tridiagonal, pentadiagonal, Toeplitz, Hessenberg, circulant, Jordan and banded matrices. Researchers have long been devoted to finding the determinant values of the different types of matrices that arise from physical problems.

The concept of recurrence relations has been used for computing the determinant of a tridiagonal matrix. For computing the determinant of a tridiagonal matrix, a two-term recurrence was used by El-Mikkawy [99] by imposing certain conditions on the three-term recurrence. Later, Salkuyeh [126] showed that a two-term recurrence is also applicable to a block-tridiagonal matrix. After that, Sogabe [131] established the same two-term recurrence for computing the determinant value of a general n × n matrix and showed that the relation is a generalization of the DETGTRI algorithm developed by El-Mikkawy [100].
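Since the cited recurrences are not reproduced here, the following Python sketch shows the classical three-term recurrence from which the two-term variants mentioned above are derived; a, b and c denote the main, super- and sub-diagonals, and the names are illustrative only.

def tridiagonal_det(a, b, c):
    # D_k = a_k * D_{k-1} - b_{k-1} * c_{k-1} * D_{k-2},  with D_0 = 1, D_1 = a_1.
    n = len(a)
    d_prev2, d_prev1 = 1.0, a[0]
    for k in range(1, n):
        d_prev2, d_prev1 = d_prev1, a[k] * d_prev1 - b[k - 1] * c[k - 1] * d_prev2
    return d_prev1

# Example: determinant of [[2,1,0],[1,2,1],[0,1,2]] is 4.
print(tridiagonal_det([2, 2, 2], [1, 1], [1, 1]))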

Another type of special matrix is the pentadiagonal matrix, which arises in the numerical solution of ordinary and partial differential equations, interpolation problems, spline problems and boundary value problems. The determinant of a pentadiagonal matrix is used to test for the existence of a unique solution of partial differential equations and to compute the inverse of symmetric pentadiagonal Toeplitz matrices. Methods to compute the determinant value of a pentadiagonal matrix with complexity O(n) can be found in the articles of Cinkir [32], Hadj and Elouafi [57] and Sogabe [133]. In this regard it is widely known that Sweet's algorithm [135] and Evans' algorithm [45] are fast numerical algorithms for evaluating the determinant of a pentadiagonal matrix of order n, requiring 24n-59 and 22n-50 operations respectively. A more efficient algorithm was given by Sogabe [132] for computing the determinant value of a pentadiagonal matrix of order n; it requires 14n-28 operations, much less than Sweet's algorithm [135] and Evans' algorithm [45]. A possible procedure based on earlier results was discussed by Marrero and Tomeo [97] for computing both the determinant and the inverse of any non-singular pentadiagonal matrix.

A Toeplitz matrix is another special matrix, occurring in the solution of second- and fourth-order differential equations with various boundary conditions. For various applications, finding the determinant value of a Toeplitz matrix is necessary. A number of fast algorithms for computing the determinant of tridiagonal and pentadiagonal Toeplitz matrices have been developed by Cinkir [32], Kilic and Mikkawy [78], Lv et al. [93] and Mcnally [98]. Later, the algorithm given by Cinkir [32] was generalized by the same author in Cinkir [33] for computing the determinant of Toeplitz matrices. Recently, Elouafi [44] concentrated on an explicit formula for computing the determinant of pentadiagonal and heptadiagonal symmetric Toeplitz matrices.

Another important matrix is the Hessenberg matrix [53, 58]. Chen [24] presented a recursive algorithm to compute the inverse and the determinant of a Hessenberg matrix.

The circulant matrix [36] arises in many areas such as cryptography, Fourier transforms, operator theory, digital image processing and numerical analysis. Using Gauss elimination, the computation of the determinant of a matrix of order n requires about 2n^3/3 arithmetic operations, and for matrices of large dimension the computation is not feasible at all.

Therefore, the determinants of some special matrices such as banded matrices [92, 93] or circulant matrices with special entries [6, 20, 25, 129] have been considered.

A survey of the complexity of computing the sign or the value of the determinant of an integer matrix was presented by Kaltofen and Villard [74] and Pan and Yu [119]. Recently, Ferrer et al. [46] derived an explicit formula for computing the determinant of any general matrix from the given Jordan matrix. Rezaifar and Rezaee [123] developed a recursion technique to evaluate the determinant of a general matrix as follows:

|M| = (1 / |M_11,nn|) × det [ [ |M_11|, |M_1n| ], [ |M_n1|, |M_nn| ] ]

where M is an n×n matrix, M_ij is the matrix obtained by eliminating the ith row and the jth column of M, M_ii,jj is the matrix obtained by eliminating the ith row, ith column, jth row and jth column of M, and |M_ij| in the above expression denotes the value of the determinant of the matrix M_ij.
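As a minimal sketch of the recursion just quoted (assuming floating-point entries; the helper names are illustrative), the following Python function applies the formula recursively down to 2×2 blocks. It breaks down whenever |M_11,nn| = 0, which is precisely the limitation that the generalization developed later in Chapter 1 addresses.

import numpy as np

def det_rr(M):
    M = np.asarray(M, dtype=float)
    n = M.shape[0]
    if n == 1:
        return M[0, 0]
    if n == 2:
        return M[0, 0] * M[1, 1] - M[0, 1] * M[1, 0]

    def minor(A, i, j):
        # delete row i and column j
        return np.delete(np.delete(A, i, axis=0), j, axis=1)

    m11 = det_rr(minor(M, 0, 0))                          # delete first row, first column
    m1n = det_rr(minor(M, 0, n - 1))                      # delete first row, last column
    mn1 = det_rr(minor(M, n - 1, 0))                      # delete last row, first column
    mnn = det_rr(minor(M, n - 1, n - 1))                  # delete last row, last column
    inner = det_rr(minor(minor(M, n - 1, n - 1), 0, 0))   # delete rows/columns 1 and n
    return (m11 * mnn - m1n * mn1) / inner

A = np.array([[4.0, 3, 2], [1, 5, 7], [2, 2, 9]])
print(det_rr(A), np.linalg.det(A))  # the two values agree when the inner minor is nonsingular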


CONSTRUCTION OF A MINIMUM SPANNING TREE

The minimum spanning tree (MST) problem is a classical and well-known problem in combinatorial optimization, concerned with finding a spanning tree of an undirected, connected graph such that the sum of the weights of the selected edges is minimum. The classical method for finding the minimum spanning tree (MST) consists of selecting the particular spanning tree among all the spanning trees of the given undirected, connected graph. A classical result to determine the number of spanning trees was found by Kirchhoff [80]. Explicit formulas can be found in the work of Bogdanowicz [18] and Lovasz and Plummer [91] for determining the number of spanning trees of a special family of graphs, namely n-fans. Other explicit formulas for the number of spanning trees of other special families of graphs can be found in the work of Cayley [22] and Wang and Yang [146].

The minimum spanning tree (MST) has direct applications in the design of computer, telecommunication, transport and electrical-circuit networks, and it occurs in approximate solutions of the travelling salesman problem, the maximum flow problem and the matching problem. The various applications of the MST are available in Ahuja et al. [2], Graham and Hell [55] and Kumar and Jani [88]. The MST problem is generally solved by a greedy method that consists of choosing appropriate small edges and excluding large ones at each stage, so that every edge included in the MST does not form a cycle and every edge excluded from the MST does not disconnect the graph. Tarjan [137] describes a greedy method that selects one edge of minimum weight and colours it blue, and among the uncoloured edges selects one of maximum weight and colours it red. Bazlamacci [13] categorises the existing MST algorithms into five categories, namely classical algorithms (Boruvka [19], Kruskal [85], Prim [122]), algorithms using the concept of special heaps for the first time, which have a faster running time than the classical algorithms (Cheriton and Tarjan [26], Yao [154]), algorithms using F-heaps for the first time (Fredman and Tarjan [47], Gabow et al. [48]) and algorithms using randomization (Karger [75], Karger et al. [76]). Several methods have been developed to construct the minimum spanning tree using different approaches. One such method was developed by Hassan [60] to construct a minimum spanning tree in a network by using the idea of a distance (cost) matrix. For the configuration of transmission and distribution networks, the minimum-expense spanning tree, in which vertices as well as edges may have weights, is widely used. Ning and Longshu [107] presented a polynomial time algorithm for the spanning tree with minimum total expenses in such a network.
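As a concrete instance of the classical greedy approach cited above, the following Python sketch implements Kruskal's algorithm [85] with a simple union-find structure; the function names and the small example graph are illustrative only.

def kruskal_mst(n, edges):
    """n: number of vertices labelled 0..n-1; edges: list of (weight, u, v)."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):           # consider edges in non-decreasing weight order
        ru, rv = find(u), find(v)
        if ru != rv:                        # the edge joins two different components, so no cycle
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
    return total, mst

# Example: a 4-vertex graph; the MST weight is 6 (edges of weight 1, 2 and 3).
print(kruskal_mst(4, [(1, 0, 1), (2, 1, 2), (3, 2, 3), (4, 0, 3), (5, 0, 2)]))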

Earlier, different techniques to evaluate the determinant value of a square matrix were discussed. The distance matrix of a connected graph G is defined as the n × n matrix whose (i, j)-th entry is equal to the length of the shortest path from the ith vertex to the jth vertex. For a tree T with n vertices, the determinant of the distance matrix was presented by Graham and Pollack [54] and is given by (-1)^(n-1) (n-1) 2^(n-2). If T is a weighted tree, an extension of this result was obtained by Bapat et al. [10]. Bapat [11] presented a formula for the determinant of a distance matrix, which contains the classical Graham and Pollack formula [54] as a special case, by considering each edge of a weighted tree to bear a square matrix as its weight.
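The Graham-Pollack formula quoted above can be checked numerically; the short Python sketch below (illustrative only) uses the path on n vertices as the tree, for which the shortest-path distance between vertices i and j is simply |i - j|.

import numpy as np

def path_distance_matrix(n):
    # Distance matrix of the path v1 - v2 - ... - vn: the (i, j)-th entry is |i - j|.
    idx = np.arange(n)
    return np.abs(idx[:, None] - idx[None, :]).astype(float)

for n in range(2, 7):
    det = np.linalg.det(path_distance_matrix(n))
    formula = (-1) ** (n - 1) * (n - 1) * 2 ** (n - 2)
    print(n, round(det), formula)     # the two columns agree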

So far the discussion of the MST problem has been limited to fixed edge weights. But we often face situations in which the cables in a communication network are damaged or corrupted, causing the blocking of the network. So, for realistic situations, it is essential to analyse the structure of minimum spanning trees in a given network when the edge weights vary dynamically. Hutson and Shier [67] presented an algorithm for finding the minimum spanning tree in networks with varying edge weights. In this regard a concept has been developed in which the varying weight of an edge is considered as a random variable, and the probability distribution of the weight of the MST of such stochastic networks is investigated. In this context, Kulkarni [87] presented an approach in which the edge weights are exponentially distributed random variables. Later, Bailey [7] extended Kulkarni's method [87] to minimization problems on matroids.

ASSIGNMENT PROBLEM: HUNGARIAN METHOD

The assignment problem is a standard problem of assigning a number of jobs to an equal number of persons on a one-to-one basis. The objective of the assignment problem is to assign the jobs to the persons (one job to one person exclusively) at the least total cost or the maximum total profit. It falls under the category of linear programming problems, and due to its special mathematical structure some particular algorithms have been developed to solve such problems very efficiently compared to the general methods for linear programming. The most well-known, popular and widely used algorithm to solve this problem is the Hungarian method (HM) [86]. This method requires finding the minimum number of lines needed to cover all the zeros in the reduced cost matrix. The equality of this minimum number of lines and the order of the cost matrix is the prerequisite for the existence of the optimal solution. Finding the minimum number of lines is a very laborious task, and Lotfi [90] developed a levelling algorithm for this task. Besides the Hungarian method, the assignment problem can be solved by the simplex method of linear programming, and for this purpose the simplex method was modified by Balinski [9], Goldfarb [52], Hung [66], Ji et al. [69] and Paparrizos [120]. Other than the simplex method, many more relevant methods for solving the assignment problem have been developed by Balinski [8], Bertsekas [14], Hung and Rom [65] and Orlin and Ahuja [115]. Many researchers, such as Bertsekas and Tseng [15], Jonker and Volgenant [71], Silver [130] and Wright [149], have considered the Hungarian algorithm and modified it to reduce the execution time. Some researchers, such as Spivey and Powell [134], Tettey et al. [138] and Toroslu and Ucoluk [140], have considered the dynamic Hungarian algorithm for the assignment problem in which the costs are not fixed but variable. Apart from the linear assignment problem, Hahn et al. [59] present a branch-and-bound algorithm for solving the quadratic assignment problem (QAP) based on a dual procedure (DP) similar to the Hungarian method for solving the linear assignment problem.
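For illustration only, a small assignment instance can be solved with SciPy's linear_sum_assignment routine, which implements a modified Jonker-Volgenant type shortest-augmenting-path algorithm (the line of work cited as [71] above) rather than the classical line-covering steps of the Hungarian method; the cost matrix below is an arbitrary example.

import numpy as np
from scipy.optimize import linear_sum_assignment

cost = np.array([[9, 2, 7],    # cost[i][j]: cost of assigning job j to person i
                 [6, 4, 3],
                 [5, 8, 1]])
rows, cols = linear_sum_assignment(cost)               # optimal one-to-one assignment
print(list(zip(rows, cols)), cost[rows, cols].sum())   # assigned pairs and minimum total cost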

Here we briefly discuss some numerical techniques that have already been used to solve the nonlinear equation f(x) = 0.

SOLUTION OF NONLINEAR EQUATIONS

The solution of any physical problem may be classified into two categories: the analytical solution and the numerical solution. With the invention of the modern computer, people use computers to solve physical problems numerically after converting the physical problem into an equivalent mathematical form. With the help of modern technology, the numerical solution of mathematical and physical problems has become an easy task. The numerical solution of mathematical problems may be broadly classified into two types: direct or exact methods, in which the solution is obtained through a finite number of arithmetic operations, and iterative methods, in which a sequence of successive approximations to the solution is generated which converges to the solution. In a direct method the total number of arithmetic operations, and in an iterative method the total number of iterations, is very important and gives an inverse measure of the efficiency of the method.


One of the oldest and most basic problems in numerical analysis is the solution of the nonlinear equation f(x) = 0 for a given function f. There are many existing basic iterative methods by which the equation f(x) = 0 can be solved, namely the bisection method, the Regula-falsi method, the secant method, Newton's method [21], etc. Two important factors which measure the success of any iterative method for the solution of a nonlinear equation are the order of convergence and the efficiency index. Let x_0 and x_n be the initial and the nth approximation to the root x* of f(x) = 0, and let e_n = x_n - x* be the error in the nth approximation. If there exists a number p ≥ 1 and a constant c ≠ 0 such that

lim (n → ∞) |e_{n+1}| / |e_n|^p = c,

then p is called the order of convergence and c the rate of convergence of the iterative method.

Out of order of convergence and rate of convergence, we first consider the order of convergence.

If one iterative method has a higher order of convergence than another, then the method with the higher order of convergence is better, irrespective of the rate of convergence. For the convergence of the successive approximations in the linearly convergent case (p = 1) it is essential that |c| < 1. For example, the bisection method and the Regula-falsi method are linearly convergent (p = 1), while Newton's method is quadratically convergent (p = 2). The efficiency index of an iterative method is defined as p^(1/w), where p is the order of convergence of the method and w is the number of function evaluations per iteration required by the method.
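The order p can also be estimated numerically from three consecutive errors, using the standard estimate p ≈ ln(|e_{n+1}|/|e_n|) / ln(|e_n|/|e_{n-1}|). The following Python sketch (an illustration, not part of the thesis methods) applies it to Newton's method for f(x) = x^2 - 2, whose root is sqrt(2).

import math

root = math.sqrt(2.0)
x = 3.0
errors = []
for _ in range(6):
    errors.append(abs(x - root))
    x = x - (x * x - 2.0) / (2.0 * x)      # Newton step for f(x) = x^2 - 2

for e0, e1, e2 in zip(errors, errors[1:], errors[2:]):
    if e2 > 0:
        print(math.log(e2 / e1) / math.log(e1 / e0))   # estimates approach p = 2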

Newton’s Method and its Development

In the last decades many researchers have considered nonlinear equations and tried to find their solutions, keeping in mind that the order of convergence and the efficiency index should be increased in order to minimize the computational time. The efficiency index increases when the number of function evaluations per iteration is small, so the main aims of researchers are to minimize the total number of function evaluations per iteration and to increase the order of convergence. In the context of nonlinear equations, Newton's method is considered a fundamental method. Newton's method, given by

x_{n+1} = x_n - f(x_n) / f'(x_n),   (2)

is the most popular method for solving nonlinear equations and is still the main attraction for researchers. But the main disadvantages of Newton's method (2) are that the initial approximation x_0 must be chosen sufficiently close to the root x*, and that the derivative of the function must be evaluated. In order to overcome these disadvantages many authors, such as Homeier [63], Ozban [117], Thukral [139], Ujevic [143, 144], Ujevic et al. [145] and Weerakoon and Fernando [147], have considered Newton's method and modified it, approaching it from different angles to obtain globally convergent algorithms and also to increase the order of convergence of Newton's method. The historical development of Newton's method can be found in the work of Yamamoto [153]. Steffensen's method [70] and the method given by Anonymous [4] are two variations of Newton's method in which the derivative of the function is not required, and which have the same order of convergence as Newton's method. Sharma and Goyal [128] have also constructed a family of fourth-order derivative-free iterative methods in which, unlike the other fourth- or higher-order methods developed by Chun [29], King [79] and Neta [105], it is not necessary to find the derivative of the function of any order. Newton's method is basically based on the Taylor series expansion of a continuous function; it has at least second-order convergence for a simple root and per iteration it requires one function evaluation and one evaluation of the first derivative. Assuming that the derivative evaluation has the same cost as the function evaluation, and considering the definition of the efficiency index p^(1/w), where p is the order of convergence of the method and w is the number of function evaluations per iteration required by the method, Newton's method has the efficiency index 2^(1/2) = 1.414.
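A minimal Python sketch of Newton's method (2) is given below; f and fprime are the function and its first derivative, x0 the initial guess (which must be close enough to the root), and the example equation is illustrative only.

def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x = x - fx / fprime(x)             # x_{n+1} = x_n - f(x_n) / f'(x_n)
    return x

import math
# Example: root of f(x) = cos(x) - x near x0 = 1 (approximately 0.739085).
print(newton(lambda x: math.cos(x) - x, lambda x: -math.sin(x) - 1.0, 1.0))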

Effect of Weight Function

Researchers then asked whether it is possible to increase the order of convergence of Newton's method at the expense of an additional function evaluation at another point. In this regard Ostrowski [116] presented a two-step method, an improvement of Newton's method (2), and introduced the idea of a weight function to increase the order of convergence of Newton's method (2) at the expense of an additional function evaluation. The method is given by

y_n = x_n - f(x_n) / f'(x_n),

x_{n+1} = y_n - [ f(x_n) / ( f(x_n) - 2 f(y_n) ) ] · f(y_n) / f'(x_n).   (3)

The function

f(x_n) / ( f(x_n) - 2 f(y_n) ) = 1 / (1 - 2 μ_n),  where  μ_n = f(y_n) / f(x_n),

is called the weight function. Writing μ_n = t, the weight function can be represented as

f(x_n) / ( f(x_n) - 2 f(y_n) ) = 1 / (1 - 2t) = H(t), say.

The order of convergence is increased by at least two compared with Newton's method (2), and the efficiency index is 4^(1/3) = 1.587, which is better than the efficiency index of Newton's method. By changing the functional form of H(t), many authors have provided many formulas for the solution of nonlinear equations.
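A short Python sketch of Ostrowski's two-step method (3) follows; per iteration it uses two function values and one derivative value, and the test equation is only an example.

def ostrowski(f, fprime, x0, tol=1e-14, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        y = x - fx / fprime(x)                             # Newton predictor
        fy = f(y)
        x = y - (fx / (fx - 2.0 * fy)) * fy / fprime(x)    # weighted corrector from (3)
    return x

import math
print(ostrowski(lambda x: x**3 - 2*x - 5, lambda x: 3*x**2 - 2, 2.0))   # root near 2.0946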

Remarkable Progress

Here we point out some of the remarkable progress in the solution of nonlinear equations, in chronological order:

1973: Ostrowski, A. M., Solution of Equations in Euclidean and Banach Spaces, third ed., Academic Press, New York.

2003: Homeier, H. H. H., A modified Newton method for root finding with cubic convergence, Journal of Computational and Applied Mathematics, 157, 227-230.

2005: Sharma, J. R., A composite third order Newton-Steffensen method for solving nonlinear equations, Applied Mathematics and Computation, 169, 242-246.

2006: Basto, M., Semiao, V. and Calheiros, F. L., A new iterative method to compute nonlinear equations, Applied Mathematics and Computation, 173, 468-483.

2007: Chun, Changbum and Ham, Yoonmee, Some sixth-order variants of Ostrowski root-finding methods, Applied Mathematics and Computation, 193, 389-394.

2007: Kou, Jisheng, Li, Yitian and Wang, Xiuhua, Some variants of Ostrowski's method with seventh-order convergence, Journal of Computational and Applied Mathematics, 209, 153-159.

2007: Noor, M. A., Noor, K. I. and Hassun, M., Third-order iterative methods free from second derivatives for nonlinear equations, Applied Mathematics and Computation, 190, 1551-1556.

2007: Kou, J., Li, Y. and Wang, X., Fourth-order iterative methods free from second derivative, Applied Mathematics and Computation, 184, 880-885.

2007: Kou, J., Li, Y. and Wang, X., A family of fourth-order iterative methods for solving nonlinear equations, Applied Mathematics and Computation, 188, 1031-1036.

2007: Chen, J. and Li, W., An improved exponential regula falsi method with quadratic convergence of both diameter and point for solving nonlinear equations, Applied Numerical Mathematics, 57, 80-88.

2008: Chun, C., A simply constructed third-order modifications of Newton's method, Journal of Computational and Applied Mathematics, 219, 81-89.

2008: Kou, J., Some variants of Cauchy's method with accelerated fourth-order convergence, Journal of Computational and Applied Mathematics, 213, 71-78.

2009: Bi, Weihong, Ren, Hongmin and Wu, Qingbiao, Three-step iterative methods with eighth-order convergence for solving nonlinear equations, Journal of Computational and Applied Mathematics, 225, 105-112.

2009: Hosseini, M. M., A note on one-step iteration methods for solving nonlinear equations, World Applied Sciences Journal, 7, 90-95.

2009: Maheshwari, A. K., A fourth-order iterative method for solving nonlinear equations, Applied Mathematics and Computation, 211, 383-391.

2010: Liu, Liping and Wang, Xia, Eighth-order methods with high efficiency index for solving nonlinear equations, Applied Mathematics and Computation, 215, 3449-3454.


Regula-falsi Method and its Modifications

It is known that the classical Regula-falsi method computes a simple root of a nonlinear equation f(x) = 0 by repeated linear interpolation between the current bracketing points of the root. Extensive analysis of the Regula-falsi method can be found in the books of Kelly [77] and Traub [141]. In the last decades many authors have considered the Regula-falsi method and modified it to reduce the number of iterations as well as the cost of computation [37]. Naghipoor et al. [103] modified the classical Regula-falsi method into a predictor-corrector method by considering the classical Regula-falsi method as the predictor and the modified algorithm as the corrector. In this regard, it should be mentioned that Noor et al. [108, 109, 110] have also considered predictor-corrector methods with Regula-falsi or other classical methods, or their improvements, as predictor or corrector. Chen and Li [23] have also modified the classical

Regula-falsi method and employed an exponential iterative method, given by

x_{n+1} = x_n exp( - f^2(x_n) / { x_n [ p f^2(x_n) + f(x_n + f(x_n)) - f(x_n - f(x_n)) ] } ),  p ∈ R,   (4)

such that both the sequence of diameters {b_n - a_n} and the sequence of iterates {x_n - x*} produced by the new method (4) converge quadratically to zero.
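For reference, the classical Regula-falsi method described at the start of this subsection (not Chen and Li's exponential variant (4)) can be sketched in Python as follows; the bracket [a, b] must satisfy f(a) f(b) < 0, and the example equation is arbitrary.

def regula_falsi(f, a, b, tol=1e-10, max_iter=200):
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    c = a
    for _ in range(max_iter):
        c = (a * fb - b * fa) / (fb - fa)    # x-intercept of the chord through (a, fa), (b, fb)
        fc = f(c)
        if abs(fc) < tol:
            break
        if fa * fc < 0:                      # root lies in [a, c]
            b, fb = c, fc
        else:                                # root lies in [c, b]
            a, fa = c, fc
    return c

print(regula_falsi(lambda x: x**3 - 2*x - 5, 2.0, 3.0))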

Recent Progress

Recently, several authors have worked in the area of the numerical solution of a nonlinear equation of the form f(x) = 0. Chen and Li [23], Ujevic [143, 144], Ujevic et al. [145], Basto et al. [12], Noor et al. [109-113], Wu [150] and many other authors have provided new ideas towards the solution of nonlinear equations as well as of linear systems. Some researchers, such as Golbabai [51], He [61], Javidi [68] and Wei et al. [148], have also concentrated on finding iterative solutions of nonlinear algebraic equations.

Here we point out some of the recent progress in the solution of the nonlinear equation of the form f(x) = 0.

One-Step Methods

Hosseini [64] has developed one-step third-order and fourth-order methods by considering the first three and four terms of the Taylor series expansion, respectively. The third-order method is given by

x_{n+1} = x_n - 2 f(x_n) f'(x_n) / ( 2 f'(x_n)^2 - f(x_n) f''(x_n) ),   n = 0, 1, 2, 3, ...,   (5)

and the fourth-order method is obtained analogously from the four-term expansion, with the correction involving f, f', f'' and f''' evaluated at x = x_n.

In this regard it has to be mentioned that Homeier [63] has given a one-step third-order iterative formula defined by

x_{n+1} = x_n - f(x_n) / f'( x_n - f(x_n) / (2 f'(x_n)) ).   (6)

The iterative formulas discussed above (4-6) are known as one-step methods.
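A Python sketch of the third-order one-step iteration in (5) is given below; it needs f, f' and f'', which are supplied here as explicit callables, and the test equation is only an example.

def one_step_third_order(f, f1, f2, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        d1, d2 = f1(x), f2(x)
        x = x - 2.0 * fx * d1 / (2.0 * d1 * d1 - fx * d2)   # the third-order step in (5)
    return x

import math
# Example: f(x) = exp(x) - 2 with f'(x) = f''(x) = exp(x); the root is ln 2.
print(one_step_third_order(lambda x: math.exp(x) - 2, math.exp, math.exp, 1.0), math.log(2))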


Two-Step Methods

Ostrowski [116] introduced the idea of a weight function to increase the order of convergence of Newton's method (2) at the expense of an additional function evaluation; the resulting two-step method is given by (3). Like the Ostrowski method [116], many more modified methods have been proposed to improve the local order of convergence.

The other important two-step methods are as follows:

Sharma [127] defined an iterative formula given by

y_n = x_n - f(x_n) / f'(x_n),

x_{n+1} = x_n - f^2(x_n) / [ f'(x_n) ( f(x_n) - f(y_n) ) ].

Sharma and Goyal [128] defined another family of derivative-free fourth-order two-step methods in which the derivative is replaced by the divided-difference approximation

g(x_n) = [ f(x_n + m f(x_n)) - f(x_n) ] / ( m f(x_n) ),   m = ±1,

the predictor step is x̄_n = x_n - f(x_n) / g(x_n), and the corrector step, for n = 0, 1, 2, 3, ..., combines f(x_n) and f(x̄_n) through two parameter functions p(x_n) and q(x_n).

Noor et al. [112] defined the iterative formula

x_{n+1} = x_n - [ ( 3 f'(x_n) - f'(y_n) ) / ( 2 f'(x_n) ) ] · f(x_n) / f'(x_n),  where

y_n = x_n - f(x_n) / f'(x_n).

Chun [31] defined an iterative formula given by

x_{n+1} = x_n - [ ( f(x_n) + 2 f(y_n) ) / ( f(x_n) + f(y_n) ) ] · f(x_n) / f'(x_n),

and another iterative relation defined as

x_{n+1} = x_n - (3/2) · f(x_n) / f'(x_n) + (1/2) · f(x_n) / ( 2 f'(x_n) - f'(y_n) ),  where

y_n = x_n - f(x_n) / f'(x_n).

Kou et al. [81] defined a fourth-order method in which

y_n = x_n - (2/3) · f(x_n) / f'(x_n)

and the new iterate x_{n+1} is obtained by multiplying the Newton correction f(x_n) / f'(x_n) by a rational weight formed from f'(x_n) and f'(y_n).

In another paper Kou et al. [82] defined an iterative formula as

x_{n+1} = y_n - [ ( f(x_n) - (1/2) f(y_n) ) / ( f(x_n) - (5/2) f(y_n) ) ] · f(y_n) / f'(x_n),  where

y_n = x_n - f(x_n) / f'(x_n).

Maheshwari [94] defined an iterative formula as

x_{n+1} = x_n - (1 / f'(x_n)) · [ f^2(x_n) / ( f(x_n) - f(y_n) ) + f^2(y_n) / f(x_n) ],  where

y_n = x_n - f(x_n) / f'(x_n).

Family of Iterative Methods

There are many families of iterative methods available for solving nonlinear equations. One of the best known is the Chebyshev-Halley family, given by

x_{n+1} = x_n - [ 1 + (1/2) · T_f(x_n) / ( 1 - λ T_f(x_n) ) ] · f(x_n) / f'(x_n),   T_f(x) = f(x) f''(x) / f'(x)^2.

Here λ is a real parameter. This family has third-order convergence. Particular cases are derived in the book of Petkov [121] by assigning suitable values of λ in the above Chebyshev-Halley family: Chebyshev's method (CM) for λ = 0, Halley's method (HM) for λ = 1/2, the super-Halley method (SHM) for λ = 1 and Newton's method for λ = ±∞.

Later, Nedzhibov et al. [104] derived two new families corresponding to the Chebyshev-Halley family which do not require the second derivative of the function f.

Three-Step Methods

To improve the local order of convergence and the efficiency index, many more three-step methods have been proposed. Some of them are as follows:

Chun and Ham [28] developed a family of variants of Ostrowski's method [116] with sixth-order convergence by the weight function method, given by

y_n = x_n - f(x_n) / f'(x_n),

z_n = y_n - [ f(x_n) / ( f(x_n) - 2 f(y_n) ) ] · f(y_n) / f'(x_n),

x_{n+1} = z_n - H(μ_n) · f(z_n) / f'(x_n),

where μ_n = f(y_n) / f(x_n) and H(t) represents a real-valued function with H(0) = 1, H'(0) = 2 and |H''(0)| < ∞.
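To make the role of the weight function concrete, the following Python sketch runs the Chun-Ham three-step scheme above with one admissible choice, H(t) = 1 + 2t (it satisfies H(0) = 1 and H'(0) = 2); this particular H and the test equation are illustrative assumptions, not choices taken from [28].

def chun_ham_variant(f, fprime, x0, tol=1e-14, max_iter=30):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        d = fprime(x)
        y = x - fx / d                                 # Newton predictor
        fy = f(y)
        z = y - (fx / (fx - 2.0 * fy)) * fy / d        # Ostrowski step
        fz = f(z)
        H = 1.0 + 2.0 * fy / fx                        # weight H(mu_n) with mu_n = f(y_n)/f(x_n)
        x = z - H * fz / d                             # corrected third step
    return x

import math
print(chun_ham_variant(lambda x: x**3 - 2*x - 5, lambda x: 3*x**2 - 2, 2.0))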

Kou et al. [83] presented a family of variants of Ostrowski's method [116] given by

y_n = x_n - f(x_n) / f'(x_n),

z_n = y_n - [ f(x_n) / ( f(x_n) - 2 f(y_n) ) ] · f(y_n) / f'(x_n),

x_{n+1} = z_n - [ ( ( f(x_n) - f(y_n) ) / ( f(x_n) - 2 f(y_n) ) )^2 + f(z_n) / ( f(y_n) - γ f(z_n) ) ] · f(z_n) / f'(x_n),

where γ is a constant.

Bi et al. [17] presented a family of iterative methods with eighth-order convergence, given by

y_n = x_n - f(x_n) / f'(x_n),

z_n = y_n - [ ( 2 f(x_n) - f(y_n) ) / ( 2 f(x_n) - 5 f(y_n) ) ] · f(y_n) / f'(x_n),

x_{n+1} = z_n - H(μ_n) · f(z_n) / ( f[z_n, y_n] + f[z_n, x_n, x_n] (z_n - y_n) ),

where μ_n = f(z_n) / f(x_n) and H(t) represents a real-valued function with H(0) = 1, H'(0) = 2 and |H''(0)| < ∞.


Wang and Liu [151] developed families of sixth-order methods given by

y_n = x_n - (2/3) · f(x_n) / f'(x_n),

z_n = x_n - [ ( 9 f'(x_n) - 5 f'(y_n) ) / ( 10 f'(x_n) - 6 f'(y_n) ) ] · f(x_n) / f'(y_n),

x_{n+1} = z_n - f(z_n) / [ (3/2) W_f(x_n) f'(y_n) + ( 1 - (3/2) W_f(x_n) ) f'(x_n) ],

where W_f(x_n) = [ ( a f'(x_n) + b f'(y_n) ) / ( c f'(x_n) + d f'(y_n) ) ] · f'(x_n) / f'(y_n), with a + b = c + d, c + d ≠ 0 and a, b, c, d ∈ R constants, and another new family, obtained by the method of undetermined coefficients, given by

x_{n+1} = z_n - f(z_n) ( a f'(x_n) + b f'(y_n) ) / ( c f'(x_n)^2 + d f'(x_n) f'(y_n) + e f'(y_n)^2 ),

where a = (1/2)(5c + 3d + e), b = -(1/2)(3c + d - e), c + d + e ≠ 0 and c, d, e ∈ R are constants.

Recently, Liu and Wang [89] developed an eighth-order method given by

y_n = x_n - f(x_n) / f'(x_n),

z_n = y_n - [ f(x_n) / ( f(x_n) - 2 f(y_n) ) ] · f(y_n) / f'(x_n),

x_{n+1} = z_n - [ ( ( f(x_n) - f(y_n) ) / ( f(x_n) - 2 f(y_n) ) )^2 + f(z_n) / ( f(y_n) - γ f(z_n) ) + G(μ_n) ] · f(z_n) / f'(x_n),

where γ is a constant, μ_n = f(z_n) / f(x_n) and G(t) represents a real-valued function such that G(0) = 0 and G'(0) = 4.

All the above methods are iterative methods of different orders of convergence.

The advanced ideas of existing methods have been used to undertake complicated numerical and mathematical problems, and new methods and procedures have been developed that are faster and more accurate than existing ones.

The thesis presented here consists of finding the zeros of nonlinear equations (f(x) = 0) and the computational aspects of some mathematical problems, namely problems of matrix-chain multiplication, computation of determinants of matrices, construction of minimum spanning trees of simple connected graphs and optimization of assignment problems.

The first chapter deals with computation of matrix-chain multiplication and computation of determinant of matrices.

In the second chapter, finding the zeros of nonlinear equations, f(x) = 0, is considered.

The third chapter deals with the construction of minimum spanning tree of a simple connected graph.


Finally, in the fourth chapter, the Hungarian method for solving the assignment problem is modified to reduce the operational/computational cost.

The summary of the thesis is presented below chapter wise.

In the first topic of Chapter 1, matrix-chain multiplication is considered. The minimum and maximum of the dimensions of the given matrices are used as key parameters to yield the optimal parenthesization. Depending upon the number of minima and maxima of the dimensions of the given matrices, different cases arise. Mainly three cases, namely one minimum, two minima and three minima of dimensions, with different numbers of maxima of dimensions of the given matrices, cover all the aspects of optimal parenthesization. Basically, a virtual partition of the matrix-chain is made along the minima of the dimensions of the given matrices, and the parenthesization is done in such a way that the minima of the dimensions are used the maximum number of times and the maxima of the dimensions are used the minimum number of times to get the optimal parenthesization, which drastically reduces (by more than 90%) the total number of scalar multiplications compared with the dynamic programming approach given in the book of Cormen et al. [34, 35].

In the second topic of this chapter, the determinant value of an n×n matrix M is calculated by generalizing the recursion technique developed by Rezaifar and Rezaee [123]. By eliminating the ith row and ith column and the jth row and jth column (i, j can be any valid values for the given determinant), instead of eliminating the first row and first column and the last row and last column as done in [123], five determinants of lower order are formed to compute the determinant value of the given matrix M as

|M| = (1 / |M_ii,jj|) × det [ [ |M_ii|, |M_ij| ], [ |M_ji|, |M_jj| ] ],   i ≠ j and |M_ii,jj| ≠ 0,

where |M_ij| denotes the value of the determinant of the matrix M_ij.

The advantages of this generalization over the recursion technique developed by Rezaifar and Rezaee [123] are that division by zero can be avoided by a suitable choice of rows and columns, and that most of the non-zero elements can be segregated, which reduces the number of arithmetic operations compared with the method of [123]. In comparison with the expansion method with respect to arithmetic operations, if Tn denotes the total number of scalar multiplications required to evaluate an n×n determinant, then for the expansion method Tn = n Tn-1 + n, while for the method of [123] and our generalized method Tn = 4 Tn-1 + Tn-2 + 3.
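The two operation counts quoted above can be tabulated directly; the following Python sketch does so, with illustrative base values (T1 = 0, T2 = 2 scalar multiplications for a 2×2 determinant) that are assumptions made here for the comparison.

def expansion_count(n, t1=0):
    # Tn = n * Tn-1 + n for cofactor (expansion) evaluation.
    t = t1
    for k in range(2, n + 1):
        t = k * t + k
    return t

def recursion_count(n, t1=0, t2=2):
    # Tn = 4 * Tn-1 + Tn-2 + 3 for the recursion technique.
    if n == 1:
        return t1
    prev2, prev1 = t1, t2
    for _ in range(3, n + 1):
        prev2, prev1 = prev1, 4 * prev1 + prev2 + 3
    return prev1

for n in range(2, 11):
    print(n, expansion_count(n), recursion_count(n))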

In Chapter 2, finding the zeros of the nonlinear equation f(x) = 0 is considered. By observing the ratio of the two real parameters f(a0) and f(b0), a new iterative formula has been established for solving a nonlinear equation of the form f(x) = 0 that has at least one root in [a0, b0]. In this work a new approach to the subject based on the ratio of the two parameters f(a0) and f(b0) is given, in which linear convergence is assured and the order of convergence of the method is increased by taking more terms in the series obtained by the Taylor expansion.

Secondly, another new fourth-order iterative method has been developed for solving nonlinear equations of the form f(x) = 0 by introducing a new weight. As mentioned earlier, Ostrowski [116] presented a two-step iterative method which is an improvement of Newton's method and introduced the idea of a weight function to increase the order of convergence of Newton's method at the expense of an additional function evaluation. Here a new weight function is introduced to establish a two-step fourth-order iterative method that requires two function evaluations and one evaluation of the first derivative per iteration, giving the efficiency index 1.587. The performance of the method is equal to or better than that of other contemporary two-step fourth-order methods, as supported by a number of numerical examples.

Thirdly, the concept of probability is used to solve the nonlinear equation f(x) = 0. The initial interval containing a real root of the nonlinear equation is divided into a finite number of subintervals of equal length. The main idea of this work is to assign a probability to each subinterval and to choose the particular subinterval that has the maximum probability, since that subinterval has the best chance of containing the root. A new approach to the subject using the concept of probability is thus introduced.

In Chapter 3, a new technique to construct the minimum spanning tree (MST) of a weighted graph is presented. The weighted graph is represented by a matrix, called the weight matrix, of n^2 elements whose (i, j)-th element is the weight of the edge joining vertex vi to vertex vj. The algorithm works in two phases, namely the Marking Pass (MP) and the MST construction pass. The Marking Pass (MP) marks the edges eij of minimum weight, either column-wise or row-wise, in the upper triangular matrix; these edges will be part of the minimum spanning tree. If a marked edge forms a cycle with the previously marked edges then its entry is set to zero, otherwise it remains marked. Since the algorithm manipulates only the n(n-1)/2 elements in the upper triangular matrix, the best and worst case complexities are respectively O(m) and O(m^2), where m = n-1.

In the last chapter (Chapter 4), the linear assignment problem is considered. It falls under the category of optimization problems (linear programming problems), and many algorithms have been developed to solve the problem very effectively. The widely used algorithm to solve this problem is the Hungarian method (HM) [86], which requires finding the minimum number of lines needed to cover all the zeros in the reduced cost matrix; the equality of this minimum number of lines and the order of the cost matrix is the prerequisite for the existence of the optimal solution. In this work, by relaxing this prerequisite condition, the Hungarian method is modified to reduce the total number of lines required to be drawn to get the optimal solution when the order of the matrix and the number of lines drawn in the first reduced cost matrix differ by one. The modification is also supported by several numerical examples, which show that it requires fewer arithmetic operations and fewer lines than the Hungarian method.

After the above introduction, we now present the thesis chapter wise.
