
INTRODUCTION

Computation consists of basic mathematical operations, chiefly addition, subtraction, multiplication and division. All of these operations can be performed with a certain efficiency, and with modern computer technology they can be performed in a very short time: a typical computer carries out one such operation in about one nanosecond, i.e. one billionth of a second (10^-9 second). In the modern digital age, a solution that is not obtained within a bounded time loses its significance, so the challenge is to solve any problem in the shortest time with acceptable accuracy. Since computer memory is now available in abundance at low cost, optimization of time and accuracy is of the utmost importance when solving any problem.

If the required number of basic arithmetic operations is bounded by a polynomial in the size of the problem, the algorithm is referred to as a good algorithm. The required number of operations may instead be bounded by an exponential function, or even by a factorial function. The growth rates of a polynomial P(x), an exponential E(x) and a factorial F(x) satisfy

P(x) << E(x) << F(x)   (1)

which is why a polynomial bound is the desirable one. Here we briefly discuss the developments and some techniques that have been used to solve some mathematical problems related to computation.

MATRIX-CHAIN MULTIPLICATION: DYNAMIC PROGRAMMING

Suppose we want to evaluate the chain product of four matrices A1, A2, A3 and A4. This product can be computed in five distinct ways, shown by full parenthesization as follows:
1. (A1(A2(A3A4)))
2. (A1((A2A3)A4))
3. ((A1A2)(A3A4))
4. ((A1(A2A3))A4)
5. (((A1A2)A3)A4)

These different parenthesizations yield different evaluation costs in terms of the number of scalar multiplications, and one may be ten or more times costlier than another. Proper parenthesization is therefore an important task for minimizing the total computational cost. Using the dynamic programming approach, Cormen et al. provide a solution in their books [34, 35]. The existing dynamic programming approach [34, 35] for the optimal matrix-chain multiplication requires computing the recursive equation

m[i, j] = min { m[i, k] + m[k+1, j] + P_{i-1} P_k P_j },  i <= k < j

where
m[i, j]: minimum cost of computing Ai Ai+1 ... Aj,
m[i, k]: minimum cost of computing the sub-chain Ai Ai+1 ... Ak,
m[k+1, j]: minimum cost of computing the sub-chain Ak+1 ... Aj, and
P_{i-1} P_k P_j: cost of multiplying the two resultant matrices (Ai Ai+1 ... Ak) and (Ak+1 ... Aj),

where the matrix Ai is of order P_{i-1} × P_i (i = 1, 2, ..., j). Since k ranges over i <= k < j, this P_k can be any of P_i, ..., P_{j-1}. Obviously, m[i, j] = 0 for i = j.

NEW TECHNIQUES FOR COMPUTING DETERMINANT OF MATRICES

Matrices play an immensely important role in all branches of science, engineering, social science and management. Matrix representations of systems of linear equations are used to solve such systems; this requires the inversion of matrices, which in turn requires computing their determinants. Matrices and their various applications are covered in the book of Hill [62]. Several direct and non-direct methods are available for evaluating the determinant of a matrix. Some of the direct methods are the basket-weave method, the pivotal condensation method (Chio's method) and the expansion method. The non-direct methods include the Gauss elimination method, the LU decomposition method, the QR decomposition method and the Cholesky decomposition method. Different types of matrices may arise when solving physical problems.
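The matrix-chain recurrence m[i, j] discussed earlier can be sketched as a short bottom-up dynamic program. This is a minimal illustration of the standard technique; the variable names are ours, not taken from the cited texts.

```python
# Bottom-up evaluation of the matrix-chain recurrence
# m[i][j] = min over i <= k < j of m[i][k] + m[k+1][j] + p[i-1]*p[k]*p[j],
# where matrix A_i has dimensions p[i-1] x p[i].

def matrix_chain_cost(p):
    """Return the minimum number of scalar multiplications needed to
    evaluate A_1 A_2 ... A_n, where A_i is p[i-1] x p[i]."""
    n = len(p) - 1                      # number of matrices in the chain
    # m[i][j]: minimum cost of A_i ... A_j (1-indexed; m[i][i] = 0)
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):      # length of the sub-chain
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = min(m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                          for k in range(i, j))
    return m[1][n]

# Example: A1 is 10x100, A2 is 100x5, A3 is 5x50.
# The order ((A1 A2) A3) costs 10*100*5 + 10*5*50 = 7500 multiplications,
# while (A1 (A2 A3)) would cost 75000.
print(matrix_chain_cost([10, 100, 5, 50]))  # -> 7500
```

A naive recursive evaluation of the same recurrence would recompute sub-chains exponentially often; the table m makes the whole computation O(n^3).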
Some of these types are the tridiagonal matrix, the pentadiagonal matrix, the Toeplitz matrix, the Hessenberg matrix, the Jordan matrix and the banded circulant matrix. Researchers have long been devoted to finding the determinants of the different types of matrices that arise from physical problems. Recurrence relations were used for computing the determinant of a tridiagonal matrix: a two-term recurrence was obtained by El-Mikkawy [99] by imposing certain conditions on a three-term recurrence. Later, Salkuyeh [126] showed that a two-term recurrence is also applicable to a block-tridiagonal matrix. Sogabe [131] then established the same two-term recurrence for computing the determinant of a general n × n matrix and showed that the relation is a generalization of the DETGTRI algorithm developed by El-Mikkawy [100].

Another type of special matrix is the pentadiagonal matrix, which arises in the numerical solution of ordinary and partial differential equations, interpolation problems, spline problems and boundary value problems. The determinant of a pentadiagonal matrix is used to test for the existence of a unique solution of partial differential equations and to compute the inverse of symmetric pentadiagonal Toeplitz matrices. Methods with complexity O(n) for computing the determinant of a pentadiagonal matrix can be found in the articles of Cinkir [32], Hadj and Elouafi [57] and Sogabe [133]. In this regard it is widely known that Sweet's algorithm [135] and Evans' algorithm [45] are fast numerical algorithms for evaluating the determinant of a pentadiagonal matrix of order n, requiring 24n−59 and 22n−50 operations respectively.
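The recurrence idea mentioned above for tridiagonal determinants can be illustrated with the classical three-term recurrence f_i = a_i f_{i-1} − b_{i-1} c_{i-1} f_{i-2}. This is a textbook sketch of the general approach, not the DETGTRI code or the two-term variants from the cited papers.

```python
# O(n) determinant of a tridiagonal matrix via the three-term recurrence
# f_i = a_i * f_{i-1} - b_{i-1} * c_{i-1} * f_{i-2},
# where a is the main diagonal, b the superdiagonal, c the subdiagonal.

def tridiagonal_det(a, b, c):
    """a: main diagonal (length n); b: superdiagonal and c: subdiagonal
    (each length n-1). Returns the determinant in O(n) operations."""
    f_prev, f = 1, a[0]                 # f_0 = 1, f_1 = a_1
    for i in range(1, len(a)):
        f_prev, f = f, a[i] * f - b[i - 1] * c[i - 1] * f_prev
    return f

# Example: the 3x3 matrix [[2,1,0],[1,2,1],[0,1,2]] has determinant 4.
print(tridiagonal_det([2, 2, 2], [1, 1], [1, 1]))  # -> 4
```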
A more efficient algorithm was given by Sogabe [132] for computing the determinant of a pentadiagonal matrix of order n; it requires 14n−28 operations, much less than Sweet's algorithm [135] and Evans' algorithm [45]. A specific procedure based on some earlier results was discussed by Marrero and Tomeo [97] for computing both the determinant and the inverse of any non-singular pentadiagonal matrix.

The Toeplitz matrix is another special matrix, occurring in the solution of second- and fourth-order differential equations with various boundary conditions, and many applications require its determinant. A number of fast algorithms for computing the determinant of tridiagonal and pentadiagonal Toeplitz matrices have been developed by Cinkir [32], Kilic and El-Mikkawy [78], Lv et al. [93] and McNally [98]. The algorithm given by Cinkir [32] was later generalized by the same author [33] for computing the determinant of Toeplitz matrices. Recently, Elouafi [44] presented an explicit formula for computing the determinant of pentadiagonal and heptadiagonal symmetric Toeplitz matrices.

Another important matrix in numerical analysis is the Hessenberg matrix [53, 58]. Chen [24] presented a recursive algorithm to compute the inverse and the determinant of a Hessenberg matrix. The circulant matrix [36] arises in many areas such as cryptography, Fourier transforms, operator theory, digital image processing, numerical analysis and graph theory. Using Gauss elimination, computing the determinant of a matrix of order n requires about 2n^3/3 arithmetic operations, which is not feasible for matrices of large dimension. Therefore, the determinants of special matrices such as banded matrices [92, 93] or circulant matrices with special entries [6, 20, 25, 129] have been considered.
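For comparison with the specialized O(n) methods above, the general Gauss-elimination route mentioned here can be sketched as follows: reduce the matrix to upper-triangular form with partial pivoting, then multiply the diagonal entries, flipping the sign for each row swap. This is an illustrative sketch only.

```python
# Determinant via Gauss elimination with partial pivoting.
# Roughly 2n^3/3 arithmetic operations for an n x n matrix.

def gauss_det(a):
    """Determinant of a square matrix given as a list of row lists."""
    a = [row[:] for row in a]           # work on a copy
    n = len(a)
    det = 1.0
    for col in range(n):
        # Partial pivoting: pick the row with the largest entry in col.
        pivot = max(range(col, n), key=lambda r: abs(a[r][col]))
        if a[pivot][col] == 0.0:
            return 0.0                  # singular matrix
        if pivot != col:
            a[col], a[pivot] = a[pivot], a[col]
            det = -det                  # a row swap flips the sign
        det *= a[col][col]
        for r in range(col + 1, n):     # eliminate entries below the pivot
            factor = a[r][col] / a[col][col]
            for c in range(col, n):
                a[r][c] -= factor * a[col][c]
    return det

print(gauss_det([[1, 2], [3, 4]]))  # prints approximately -2.0
```

The cubic cost of this general routine is exactly why the linear-time recurrences for tridiagonal and pentadiagonal matrices discussed above are worth developing.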
A survey of the complexity of computing the sign or the value of the determinant of an integer matrix was presented by Kaltofen and Villard [74] and by Pan and Yu [119]. Recently, Ferrer et al. [46] derived an explicit formula for computing the determinant of a general matrix from its Jordan matrix. Rezaifar and Rezaee [123] developed a recursion technique to evaluate the determinant of a general matrix as follows:

|M| = (1 / |M_{11,nn}|) × det [ |M_11|  |M_1n| ]
                              [ |M_n1|  |M_nn| ]
    = ( |M_11|·|M_nn| − |M_1n|·|M_n1| ) / |M_{11,nn}|

where M is an n × n matrix, M_ij is the matrix obtained by eliminating the ith row and the jth column of M, M_{ii,jj} is the matrix obtained by eliminating the ith and jth rows together with the ith and jth columns of M, and |M_ij| denotes the value of the determinant of M_ij.

CONSTRUCTION OF A MINIMUM SPANNING TREE

The minimum spanning tree (MST) problem is a classical and well-known problem in combinatorial optimization: find a spanning tree of an undirected, connected graph such that the sum of the weights of the selected edges is minimum. The classical method for finding the MST consists of selecting that particular spanning tree among all the spanning trees of the given undirected, connected graph. A classical result of Kirchhoff [80] determines the number of spanning trees. Explicit formulas for the number of spanning trees of a special family of graphs, the n-fans, can be found in the work of Bogdanowicz [18] and of Lovasz and Plummer [91]; other explicit formulas for other special families of graphs appear in the work of Cayley [22] and of Wang and Yang [146]. The MST has direct applications in the design of computer, telecommunication, transport and electrical-circuit networks, and it occurs in approximate solutions of the travelling salesman problem, the maximum-flow problem and the matching problem.
Various applications of the MST are given in Ahuja et al. [2], Graham and Hell [55] and Kumar and Jani [88]. The MST problem is generally solved by a greedy method: at each stage an appropriately small edge is chosen and the large ones are excluded, in such a way that every edge included in the MST does not form a cycle and every edge excluded from the MST does not disconnect the graph.
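The greedy strategy just described corresponds to Kruskal's algorithm. Here is a minimal sketch that sorts the edges by weight and uses a union-find (disjoint-set) structure to detect cycles; the example graph and its weights are made up for illustration.

```python
# Kruskal's greedy MST construction: try edges from cheapest to most
# expensive, accepting an edge only if it joins two different components
# (i.e. does not form a cycle), tracked with a union-find structure.

def kruskal_mst(n, edges):
    """n: number of vertices (0..n-1); edges: list of (weight, u, v).
    Returns (total_weight, chosen_edges)."""
    parent = list(range(n))

    def find(x):                        # root of x's component
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    total, chosen = 0, []
    for w, u, v in sorted(edges):       # greedily try cheapest edges first
        ru, rv = find(u), find(v)
        if ru != rv:                    # no cycle: accept the edge
            parent[ru] = rv
            total += w
            chosen.append((u, v, w))
    return total, chosen

# Hypothetical graph on 4 vertices: (weight, u, v) triples.
edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
total, tree = kruskal_mst(4, edges)
print(total)  # -> 6, using edges 0-1 (w=1), 1-3 (w=2), 1-2 (w=3)
```

Sorting dominates the cost, giving O(E log E) overall; the union-find test is what guarantees that no accepted edge ever closes a cycle.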