Available online at www.worldscientificnews.com

WSN 147 (2020) 1-34 EISSN 2392-2192

Further Results on Gbemi’s Method: The Extended Sarrus’ Rule for the Computation of the Determinants of n × n (n > 3) Matrices

M. G. Sobamowo
Department of Mechanical Engineering, University of Lagos, Lagos, Nigeria
Department of Mathematics, University of Lagos, Lagos, Nigeria
E-mail address: [email protected]

ABSTRACT
Over the years, the generally accepted fact has been that the Sarrus’ rule, which was developed by a French mathematician, P. F. Sarrus, in 1833, is limited to finding the determinant of 3 × 3 matrices. However, in my previous work [1], “On the Extension of Sarrus’ Rule to n × n (n > 3) Matrices: Development of New Method for the Computation of the Determinant of 4 × 4 Matrices”, which was published in International Journal of Engineering Mathematics, vol. 2016, 14 pages, the possibility of extending the Sarrus’ rule to find the determinant of 4 × 4 matrices was demonstrated using the newly established Gbemi’s method. The simplicity, accuracy and ease of application, as well as the comparatively low computational time and cost of the proposed Gbemi’s method, were pointed out. In this further study, additional nine methods of extending the Sarrus’ rule to evaluate the determinant of 4 × 4 matrices are established. The study further establishes the effectiveness, consistency for handy calculations, high accuracy and relatively low computational time of the new method. Therefore, with the aid of the method extended to general n × n matrices, it could be stated that the method will greatly reduce the computational and running time of most software that is largely based on matrices. Consequently, this will greatly reduce the computational cost.

Keywords: Further results, Determinant, Extended Sarrus’ rule, Gbemi’s method

( Received 16 June 2020; Accepted 07 July 2020; Date of Publication 08 July 2020 )

1. INTRODUCTION

Indisputably, the determinant of a matrix has been a very powerful tool that helps in establishing the characteristics of matrices, the inversion of matrices and the solution of systems of algebraic equations. Its importance and wide areas of application are well established in various engineering and applied science problems. Therefore, it has become a mathematical area of increasing interest and significance. Consequently, various direct and indirect methods have been proposed and established for finding the determinants of matrices. These methods include the Butterfly method, Sarrus’ rule, Triangle’s rule, the Gaussian elimination procedure, permutation expansion or expansion by the elements of any row or column, the Laplace decomposition method, the pivotal or Chio’s condensation method, the Cholesky decomposition method, Rezaifar and Rezaee’s method, Dutta and Pal’s method, Dodgson’s condensation method, Hajrizaj’s method, the LU decomposition method, the QR decomposition method, Salihu and Gjonbalaj’s method, etc. [2-23]. In the pool of these methods, the Sarrus’ rule, which was developed by a French mathematician, P. F. Sarrus, in 1833, is the simplest, easiest, fastest and most straightforward method. However, its gross limitation is that it does not work for matrices larger than 3 × 3 [2]. Over the years, this has been the generally accepted fact. Consequently, the state-of-the-art methods for finding the determinants of 4 × 4 and larger matrices are predominantly based on non-Sarrus rules. However, in my previous study [1], the possibility of extending the Sarrus’ rule to find the determinant of 4 × 4 matrices was demonstrated using the newly established Gbemi’s method. In this further study, additional nine methods of extending the Sarrus’ rule to evaluate the determinant of 4 × 4 matrices are established. This further establishes the effectiveness, consistency for handy calculations, high accuracy and relatively low computational time of the Gbemi’s method.

2. DEFINITION OF DETERMINANTS AND THE EXISTING METHODS OF COMPUTATION

The definition of the determinant and the existing methods are presented in this section.

2. 1. Definition of Determinants

The determinant of an n × n matrix A = [a_ij] is a real number, a function of the elements of the matrix that yields a single value which characterises the matrix: it determines whether the associated system has a unique solution and whether the matrix is singular or not. The determinant of an n-th order matrix is a sum of n! different terms of the form $\varepsilon_{j_1 j_2 \cdots j_n}\, a_{1j_1} a_{2j_2} \cdots a_{nj_n}$, formed from the elements of the matrix A. Let A be an n × n matrix,

$$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix} \qquad (1)$$


Then the determinant of A is

$$D = \det A = |A| = \begin{vmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{vmatrix} = \sum_{S_n} \varepsilon_{j_1 j_2 \cdots j_n}\, a_{1j_1} a_{2j_2} \cdots a_{nj_n} \qquad (2)$$

where

$$\varepsilon_{j_1 j_2 \cdots j_n} = \begin{cases} +1, & \text{if } j_1, j_2, \ldots, j_n \text{ is an even permutation} \\ -1, & \text{if } j_1, j_2, \ldots, j_n \text{ is an odd permutation} \end{cases}$$
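For instance, for n = 2 the only permutations of the column indices are (1, 2), which is even, and (2, 1), which is odd, so the definition reduces to the familiar $\det A = a_{11}a_{22} - a_{12}a_{21}$.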

The determinant of matrix A could also be written in Laplace co-factor form as

$$\det(A) = |A| = \sum_{i=1}^{n} (-1)^{i+j}\, a_{ij}\, \det(A_{ij}) \qquad \text{(3a)}$$

$$\det(A) = |A| = \sum_{j=1}^{n} (-1)^{i+j}\, a_{ij}\, \det(A_{ij}) \qquad \text{(3b)}$$

where $A_{ij}$ is the (n − 1) × (n − 1) sub-matrix obtained by deleting the i-th row and the j-th column of A; Eq. (3a) is the expansion along the j-th column, and Eq. (3b) is the expansion along the i-th row.

2. 2. Existing Methods of Computation of Determinants

The determinants of matrices can be evaluated using different methods, as presented in the literature. These methods include the basket-weave method, the Butterfly method, Sarrus’ method, Triangle’s rule, the Gaussian elimination procedure, permutation expansion or Laplace expansion by the elements of any row or column, the row reduction method, the column reduction method, the pivotal or Chio’s condensation method, Dodgson’s condensation method, the LU decomposition method, the QR decomposition method, the Cholesky decomposition method, Hajrizaj’s method, Salihu and Gjonbalaj’s method, Rezaifar and Rezaee’s method, Dutta and Pal’s method, etc. The Butterfly method and the Sarrus’ rule have been shown to be the simplest methods for finding the determinants of 2 × 2 and 3 × 3 matrices, respectively.

2. 2. 1. The Sarrus’ Method

The Sarrus’ rule (basket-weave method) is an alternative way to evaluate the determinant of a 3 × 3 matrix. However, the method is limited to 3 × 3 matrices. The procedure of application is demonstrated as follows. A 3 × 3 matrix is written as

$$\begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}$$

In order to apply the method, we construct a 3 × 5 array by writing down the entries of the 3 × 3 matrix and then repeating the first two columns. We calculate the products along the six diagonal lines of this array. The determinant is equal to the sum of the products along the three down-going (upper-left to lower-right) diagonals minus the sum of the products along the three up-going (lower-left to upper-right) diagonals.


Example: Evaluate the determinant of

$$A = \begin{vmatrix} 1 & 2 & 3 \\ 2 & 1 & 4 \\ 3 & 2 & 5 \end{vmatrix}$$

det (A) = (5 + 24 + 12) – (9 + 8 + 20) = (41) – (37) = 4
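A small MATLAB sketch of the basket-weave computation for this worked example is given below; the script and the check against the built-in det are illustrative only and are not part of the original paper.

A = [1 2 3; 2 1 4; 3 2 5];
B = [A, A(:,1:2)];               % 3 x 5 array: repeat the first two columns
down = 0; up = 0;
for j = 1:3
    down = down + B(1,j)*B(2,j+1)*B(3,j+2);   % down-going diagonals: 5 + 24 + 12
    up   = up   + B(3,j)*B(2,j+1)*B(1,j+2);   % up-going diagonals:   9 + 8 + 20
end
detA = down - up;                % 41 - 37 = 4, in agreement with det(A)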

Multiplying the numbers on the same line, adding the products from the down-going lines and subtracting the products from the up-going lines is the approach that led to the name, the basket-weave method. The basket-weave method does not work on matrices larger than 3 × 3 [2]. The use of the Laplace co-factor expansion, either along a row or a column, is a common method for the computation of the determinants of 3 × 3, 4 × 4 and 5 × 5 matrices. The evaluation of the determinant of an n × n matrix using the definition involves the summation of n! terms, each term being a product of n factors. As n increases, this computation becomes too cumbersome.

2. 3. The Development of the New Methods for the Computation of Determinants

Consider a 4 × 4 matrix whose determinant is required, given as shown below

$$A = \begin{vmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ a_{31} & a_{32} & a_{33} & a_{34} \\ a_{41} & a_{42} & a_{43} & a_{44} \end{vmatrix} \qquad (4)$$

Following the definition given in Section 2.1, the conventional Laplace co-factor expansion method is carried out as follows. Expanding along the first row, we have

$$\det(A) = a_{11}\begin{vmatrix} a_{22} & a_{23} & a_{24} \\ a_{32} & a_{33} & a_{34} \\ a_{42} & a_{43} & a_{44} \end{vmatrix} - a_{12}\begin{vmatrix} a_{21} & a_{23} & a_{24} \\ a_{31} & a_{33} & a_{34} \\ a_{41} & a_{43} & a_{44} \end{vmatrix} + a_{13}\begin{vmatrix} a_{21} & a_{22} & a_{24} \\ a_{31} & a_{32} & a_{34} \\ a_{41} & a_{42} & a_{44} \end{vmatrix} - a_{14}\begin{vmatrix} a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \\ a_{41} & a_{42} & a_{43} \end{vmatrix} \qquad (5)$$


Again, expanding each of the 3 × 3 determinants along its first row, we have

$$\begin{aligned} \det(A) = {} & a_{11}\left[a_{22}\begin{vmatrix} a_{33} & a_{34} \\ a_{43} & a_{44}\end{vmatrix} - a_{23}\begin{vmatrix} a_{32} & a_{34} \\ a_{42} & a_{44}\end{vmatrix} + a_{24}\begin{vmatrix} a_{32} & a_{33} \\ a_{42} & a_{43}\end{vmatrix}\right] - a_{12}\left[a_{21}\begin{vmatrix} a_{33} & a_{34} \\ a_{43} & a_{44}\end{vmatrix} - a_{23}\begin{vmatrix} a_{31} & a_{34} \\ a_{41} & a_{44}\end{vmatrix} + a_{24}\begin{vmatrix} a_{31} & a_{33} \\ a_{41} & a_{43}\end{vmatrix}\right] \\ & + a_{13}\left[a_{21}\begin{vmatrix} a_{32} & a_{34} \\ a_{42} & a_{44}\end{vmatrix} - a_{22}\begin{vmatrix} a_{31} & a_{34} \\ a_{41} & a_{44}\end{vmatrix} + a_{24}\begin{vmatrix} a_{31} & a_{32} \\ a_{41} & a_{42}\end{vmatrix}\right] - a_{14}\left[a_{21}\begin{vmatrix} a_{32} & a_{33} \\ a_{42} & a_{43}\end{vmatrix} - a_{22}\begin{vmatrix} a_{31} & a_{33} \\ a_{41} & a_{43}\end{vmatrix} + a_{23}\begin{vmatrix} a_{31} & a_{32} \\ a_{41} & a_{42}\end{vmatrix}\right] \end{aligned} \qquad (6)$$

Now, we have

$$\begin{aligned}
\det(A) = {} & [\,a_{11}a_{22}a_{33}a_{44} - a_{12}a_{23}a_{34}a_{41} + a_{13}a_{24}a_{31}a_{42} - a_{14}a_{21}a_{32}a_{43}\,] \\
& - [\,a_{13}a_{22}a_{31}a_{44} - a_{12}a_{21}a_{34}a_{43} + a_{11}a_{24}a_{33}a_{42} - a_{14}a_{23}a_{32}a_{41}\,] \\
& + [\,a_{11}a_{23}a_{34}a_{42} - a_{13}a_{24}a_{32}a_{41} + a_{14}a_{22}a_{31}a_{43} - a_{12}a_{21}a_{33}a_{44}\,] \\
& - [\,a_{14}a_{23}a_{31}a_{42} - a_{13}a_{21}a_{32}a_{44} + a_{11}a_{22}a_{34}a_{43} - a_{12}a_{24}a_{33}a_{41}\,] \\
& + [\,a_{11}a_{24}a_{32}a_{43} - a_{14}a_{22}a_{33}a_{41} + a_{12}a_{23}a_{31}a_{44} - a_{13}a_{21}a_{34}a_{42}\,] \\
& - [\,a_{12}a_{24}a_{31}a_{43} - a_{14}a_{21}a_{33}a_{42} + a_{11}a_{23}a_{32}a_{44} - a_{13}a_{22}a_{34}a_{41}\,]
\end{aligned} \qquad (7)$$

So, it shows that 4! different terms are needed to compute the determinant of a fourth-order matrix. In order to generate these 4! = 24 terms, we have the following three different column arrangements of the 4 × 4 matrix, as shown below

C1 C2 C3 C4        C1 C3 C4 C2        C1 C4 C2 C3        (C1 C2 C3 C4 — repeated, hence cancelled)        (8)

In the arrangements, C1 represents the first column, C2 the second column, C3 the third column and C4 the fourth column, as given in the original 4 × 4 matrix. We can see from the arrangements that the first arrangement (C1 C2 C3 C4) of the 4 × 4 matrix remains the same as given in the matrix A. To get the second arrangement (C1 C3 C4 C2), remove the second column of the first arrangement and transfer it to the last column of the given 4 × 4 matrix A. To get the third arrangement (C1 C4 C2 C3), remove the second column of the second arrangement and transfer it to the last column of the second 4 × 4 matrix.


This forms the third 4 × 4 matrix. After the third step, we need not go further and repeat the routine of removing and transferring the second column, because doing so would simply reproduce the first arrangement; this is how we know when to stop the procedure, and it is why the last arrangement in Eq. (8) is cancelled. From the above, the given 4 × 4 matrix is the first matrix of the procedure,

$$A_{fp} = \begin{vmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ a_{31} & a_{32} & a_{33} & a_{34} \\ a_{41} & a_{42} & a_{43} & a_{44} \end{vmatrix}$$

The second 4 × 4 matrix is given as

$$A_{sp} = \begin{vmatrix} a_{11} & a_{13} & a_{14} & a_{12} \\ a_{21} & a_{23} & a_{24} & a_{22} \\ a_{31} & a_{33} & a_{34} & a_{32} \\ a_{41} & a_{43} & a_{44} & a_{42} \end{vmatrix}$$

The third 4 × 4 matrix is given as

$$A_{tp} = \begin{vmatrix} a_{11} & a_{14} & a_{12} & a_{13} \\ a_{21} & a_{24} & a_{22} & a_{23} \\ a_{31} & a_{34} & a_{32} & a_{33} \\ a_{41} & a_{44} & a_{42} & a_{43} \end{vmatrix}$$

From the above, 10 new methods based on the Sarrus’ rule were developed for the computation of the determinant of the 4 × 4 matrix.
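For a numeric 4 × 4 matrix A, the three column arrangements can be formed in MATLAB as sketched below; the variable names Afp, Asp and Atp simply mirror the notation of the text, and the permutation indices are read off Eq. (8).

Afp = A;                   % arrangement C1 C2 C3 C4 (the given matrix)
Asp = A(:, [1 3 4 2]);     % arrangement C1 C3 C4 C2: second column moved to the end
Atp = A(:, [1 4 2 3]);     % arrangement C1 C4 C2 C3: second column of Asp moved to the end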

2. 3. 1. New Method 1

In the new scheme 1, the steps to find det(A) after the arrangements are as follows.

1. In the first sub-matrix A_fp, rewrite the 1st, 2nd and 3rd columns on the right-hand side of the matrix A_fp (as columns 5, 6 and 7). To the resulting 4 × 7 augmented array, assign a “+” sign to the leading element of each odd-numbered column and a “−” sign to the leading element of each even-numbered column. That is

$$A_{argfp} = \begin{array}{ccccccc} + & - & + & - & + & - & + \\ a_{11} & a_{12} & a_{13} & a_{14} & a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} & a_{24} & a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} & a_{34} & a_{31} & a_{32} & a_{33} \\ a_{41} & a_{42} & a_{43} & a_{44} & a_{41} & a_{42} & a_{43} \end{array}$$

This is the first part of the computation of the determinant of the given 4 × 4 matrix.


2. In the second sub-matrix A_sp, rewrite the 1st, 2nd and 3rd columns on the right-hand side of the matrix A_sp (as columns 5, 6 and 7). As in the first step, assign a “+” sign to the leading element of each odd-numbered column of the augmented array and a “−” sign to the leading element of each even-numbered column, and then apply the Sarrus’ rule.

$$A_{argsp} = \begin{array}{ccccccc} + & - & + & - & + & - & + \\ a_{11} & a_{13} & a_{14} & a_{12} & a_{11} & a_{13} & a_{14} \\ a_{21} & a_{23} & a_{24} & a_{22} & a_{21} & a_{23} & a_{24} \\ a_{31} & a_{33} & a_{34} & a_{32} & a_{31} & a_{33} & a_{34} \\ a_{41} & a_{43} & a_{44} & a_{42} & a_{41} & a_{43} & a_{44} \end{array}$$

This is the second part of the computation of the determinant of the given 4 × 4 matrix.

3. In the third sub-matrix A_tp, rewrite its 1st, 2nd and 3rd columns on the right-hand side of the matrix A_tp (as columns 5, 6 and 7). Again, assign a “+” sign to the leading element of each odd-numbered column of the augmented array and a “−” sign to the leading element of each even-numbered column, and then apply the Sarrus’ rule.

$$A_{argtp} = \begin{array}{ccccccc} + & - & + & - & + & - & + \\ a_{11} & a_{14} & a_{12} & a_{13} & a_{11} & a_{14} & a_{12} \\ a_{21} & a_{24} & a_{22} & a_{23} & a_{21} & a_{24} & a_{22} \\ a_{31} & a_{34} & a_{32} & a_{33} & a_{31} & a_{34} & a_{32} \\ a_{41} & a_{44} & a_{42} & a_{43} & a_{41} & a_{44} & a_{42} \end{array}$$

This is the third part of the computation of the determinant of the given 4 × 4 matrix.

4. For each of the augmented arrays A_argfp, A_argsp and A_argtp, apply the Sarrus’ rule: add the products along the four full diagonals that extend from the upper left to the lower right and subtract the products along the four full diagonals that extend from the lower left to the upper right, each product carrying the sign assigned to the column in which its diagonal meets the top row. After the Sarrus’ rule has been applied to the augmented arrays A_argfp, A_argsp and A_argtp, the addition of the three results is the determinant of A.


Therefore, we obtain Eq. (9), which is identical term by term to Eq. (7):

$$\begin{aligned}
\det(A) = {} & [\,a_{11}a_{22}a_{33}a_{44} - a_{12}a_{23}a_{34}a_{41} + a_{13}a_{24}a_{31}a_{42} - a_{14}a_{21}a_{32}a_{43}\,] \\
& - [\,a_{13}a_{22}a_{31}a_{44} - a_{12}a_{21}a_{34}a_{43} + a_{11}a_{24}a_{33}a_{42} - a_{14}a_{23}a_{32}a_{41}\,] \\
& + [\,a_{11}a_{23}a_{34}a_{42} - a_{13}a_{24}a_{32}a_{41} + a_{14}a_{22}a_{31}a_{43} - a_{12}a_{21}a_{33}a_{44}\,] \\
& - [\,a_{14}a_{23}a_{31}a_{42} - a_{13}a_{21}a_{32}a_{44} + a_{11}a_{22}a_{34}a_{43} - a_{12}a_{24}a_{33}a_{41}\,] \\
& + [\,a_{11}a_{24}a_{32}a_{43} - a_{14}a_{22}a_{33}a_{41} + a_{12}a_{23}a_{31}a_{44} - a_{13}a_{21}a_{34}a_{42}\,] \\
& - [\,a_{12}a_{24}a_{31}a_{43} - a_{14}a_{21}a_{33}a_{42} + a_{11}a_{23}a_{32}a_{44} - a_{13}a_{22}a_{34}a_{41}\,]
\end{aligned} \qquad (9)$$
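A minimal MATLAB sketch of scheme 1 is given below. It is one possible reading of the steps above (build each 4 × 7 augmented array, give every full diagonal the sign of the column in which it meets the top row, add the down-going products, subtract the up-going products, and sum the three partial results). The function name sarrus4 is chosen purely for illustration and is not the author's published routine.

function d = sarrus4(A)
% Determinant of a 4 x 4 matrix by the extended Sarrus' rule (scheme 1 above).
arrangements = [1 2 3 4; 1 3 4 2; 1 4 2 3];    % C1C2C3C4, C1C3C4C2, C1C4C2C3
d = 0;
for k = 1:3
    B = A(:, arrangements(k,:));
    B = [B, B(:,1:3)];                         % 4 x 7 augmented array
    for j = 1:4
        down = 1; up = 1;
        for r = 1:4
            down = down*B(r, j+r-1);           % diagonal running down to the right
            up   = up*B(5-r, j+r-1);           % diagonal running up to the right
        end
        d = d + (-1)^(j+1)*down - (-1)^(j+4)*up;   % each diagonal takes the sign of the column met in the top row
    end
end
end

Applied to the matrices of Examples 1 and 2 below, this sketch returns 0 and -2 respectively, in agreement with the results obtained there.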

2. 3. 2. New Method 2

In the new scheme 2, the algorithm remains the same; the difference lies in the manner in which the augmented arrays are constructed from the sub-matrices A_fp, A_sp and A_tp. Instead of rewriting three columns on the right-hand side of each matrix (as columns 5, 6 and 7), three columns are rewritten on its left-hand side (as columns 0, −1 and −2) to form the required 4 × 7 augmented array. For scheme 2, the augmented arrays are therefore given as

$$A_{fp} = \begin{vmatrix} a_{12} & a_{13} & a_{14} & a_{11} & a_{12} & a_{13} & a_{14} \\ a_{22} & a_{23} & a_{24} & a_{21} & a_{22} & a_{23} & a_{24} \\ a_{32} & a_{33} & a_{34} & a_{31} & a_{32} & a_{33} & a_{34} \\ a_{42} & a_{43} & a_{44} & a_{41} & a_{42} & a_{43} & a_{44} \end{vmatrix}$$

$$A_{sp} = \begin{vmatrix} a_{13} & a_{14} & a_{12} & a_{11} & a_{13} & a_{14} & a_{12} \\ a_{23} & a_{24} & a_{22} & a_{21} & a_{23} & a_{24} & a_{22} \\ a_{33} & a_{34} & a_{32} & a_{31} & a_{33} & a_{34} & a_{32} \\ a_{43} & a_{44} & a_{42} & a_{41} & a_{43} & a_{44} & a_{42} \end{vmatrix}$$

$$A_{tp} = \begin{vmatrix} a_{14} & a_{12} & a_{13} & a_{11} & a_{14} & a_{12} & a_{13} \\ a_{24} & a_{22} & a_{23} & a_{21} & a_{24} & a_{22} & a_{23} \\ a_{34} & a_{32} & a_{33} & a_{31} & a_{34} & a_{32} & a_{33} \\ a_{44} & a_{42} & a_{43} & a_{41} & a_{44} & a_{42} & a_{43} \end{vmatrix}$$


As done previously, the Sarrus’ rule is applied to each of the three augmented arrays and the three results are added. Again, we arrive at exactly the expansion of Eq. (7), i.e. Eq. (9).


2. 3. 3. New Method 3

In method 3, the sub-matrices A_fp, A_sp and A_tp are constructed via a different approach. For the given fourth-order matrix A of Eq. (4), whose determinant is to be found, the three parts are obtained as follows.

For the first sub-matrix, the first part is given as

$$S(A_{fp}) = [a_{11}a_{22}a_{33}a_{44} - a_{12}a_{23}a_{34}a_{41} + a_{13}a_{24}a_{31}a_{42} - a_{14}a_{21}a_{32}a_{43}] - [a_{13}a_{22}a_{31}a_{44} - a_{12}a_{21}a_{34}a_{43} + a_{11}a_{24}a_{33}a_{42} - a_{14}a_{23}a_{32}a_{41}]$$

The second part is given as,

$$S(A_{sp}) = [a_{11}a_{23}a_{34}a_{42} - a_{13}a_{24}a_{32}a_{41} + a_{14}a_{22}a_{31}a_{43} - a_{12}a_{21}a_{33}a_{44}] - [a_{14}a_{23}a_{31}a_{42} - a_{13}a_{21}a_{32}a_{44} + a_{11}a_{22}a_{34}a_{43} - a_{12}a_{24}a_{33}a_{41}]$$

and the third part is given as

$$S(A_{tp}) = [a_{11}a_{24}a_{32}a_{43} - a_{14}a_{22}a_{33}a_{41} + a_{12}a_{23}a_{31}a_{44} - a_{13}a_{21}a_{34}a_{42}] - [a_{12}a_{24}a_{31}a_{43} - a_{14}a_{21}a_{33}a_{42} + a_{11}a_{23}a_{32}a_{44} - a_{13}a_{22}a_{34}a_{41}]$$

As before

$$\det(A) = S(A_{fp}) + S(A_{sp}) + S(A_{tp}),$$

and we arrive again at the expansion of Eq. (7).

2. 3. 4. New Method 4

In the new scheme 4, a different approach is used in the construction of the sub-matrices A_fp, A_sp and A_tp. Consider again the matrix A of Eq. (4), whose determinant is to be found.

For scheme 4, the first part is given as,

$$S(A_{fp}) = [a_{11}a_{22}a_{33}a_{44} - a_{12}a_{23}a_{34}a_{41} + a_{13}a_{24}a_{31}a_{42} - a_{14}a_{21}a_{32}a_{43}] - [a_{13}a_{22}a_{31}a_{44} - a_{12}a_{21}a_{34}a_{43} + a_{11}a_{24}a_{33}a_{42} - a_{14}a_{23}a_{32}a_{41}]$$

Also, the second part is given as


$$S(A_{sp}) = [a_{11}a_{23}a_{34}a_{42} - a_{13}a_{24}a_{32}a_{41} + a_{14}a_{22}a_{31}a_{43} - a_{12}a_{21}a_{33}a_{44}] - [a_{14}a_{23}a_{31}a_{42} - a_{13}a_{21}a_{32}a_{44} + a_{11}a_{22}a_{34}a_{43} - a_{12}a_{24}a_{33}a_{41}]$$

and the third part is given as

$$S(A_{tp}) = [a_{11}a_{24}a_{32}a_{43} - a_{14}a_{22}a_{33}a_{41} + a_{12}a_{23}a_{31}a_{44} - a_{13}a_{21}a_{34}a_{42}] - [a_{12}a_{24}a_{31}a_{43} - a_{14}a_{21}a_{33}a_{42} + a_{11}a_{23}a_{32}a_{44} - a_{13}a_{22}a_{34}a_{41}]$$

As before,

$$\det(A) = S(A_{fp}) + S(A_{sp}) + S(A_{tp}),$$

which gives the expansion of Eq. (7).


2. 3. 5. New Method 5

In the new scheme 5, the sub-matrices A_fp, A_sp and A_tp are constructed as follows, given the matrix A of Eq. (4) whose determinant is to be found.

The first part is given as,

$$S(A_{fp}) = [a_{11}a_{22}a_{33}a_{44} - a_{12}a_{23}a_{34}a_{41} + a_{13}a_{24}a_{31}a_{42} - a_{14}a_{21}a_{32}a_{43}] - [a_{13}a_{22}a_{31}a_{44} - a_{12}a_{21}a_{34}a_{43} + a_{11}a_{24}a_{33}a_{42} - a_{14}a_{23}a_{32}a_{41}]$$

The second part is given as,

$$S(A_{sp}) = [a_{11}a_{23}a_{34}a_{42} - a_{13}a_{24}a_{32}a_{41} + a_{14}a_{22}a_{31}a_{43} - a_{12}a_{21}a_{33}a_{44}] - [a_{14}a_{23}a_{31}a_{42} - a_{13}a_{21}a_{32}a_{44} + a_{11}a_{22}a_{34}a_{43} - a_{12}a_{24}a_{33}a_{41}]$$

and the third part is given as


$$S(A_{tp}) = [a_{11}a_{24}a_{32}a_{43} - a_{14}a_{22}a_{33}a_{41} + a_{12}a_{23}a_{31}a_{44} - a_{13}a_{21}a_{34}a_{42}] - [a_{12}a_{24}a_{31}a_{43} - a_{14}a_{21}a_{33}a_{42} + a_{11}a_{23}a_{32}a_{44} - a_{13}a_{22}a_{34}a_{41}]$$

where

$$\det(A) = S(A_{fp}) + S(A_{sp}) + S(A_{tp}).$$

Again, we arrive at the expansion of Eq. (7).

2. 3. 6. New Method 6

In the new scheme 6, the sub-matrices A_fp, A_sp and A_tp are constructed in a similar way as in scheme 5, but with some differences in the arrangement of the elements. Again, consider the matrix A of Eq. (4), whose determinant is to be found.

The first part is given as,


$$S(A_{fp}) = [a_{11}a_{22}a_{33}a_{44} - a_{12}a_{23}a_{34}a_{41} + a_{13}a_{24}a_{31}a_{42} - a_{14}a_{21}a_{32}a_{43}] - [a_{13}a_{22}a_{31}a_{44} - a_{12}a_{21}a_{34}a_{43} + a_{11}a_{24}a_{33}a_{42} - a_{14}a_{23}a_{32}a_{41}]$$

Also, the second part is given as,

$$S(A_{sp}) = [a_{11}a_{23}a_{34}a_{42} - a_{13}a_{24}a_{32}a_{41} + a_{14}a_{22}a_{31}a_{43} - a_{12}a_{21}a_{33}a_{44}] - [a_{14}a_{23}a_{31}a_{42} - a_{13}a_{21}a_{32}a_{44} + a_{11}a_{22}a_{34}a_{43} - a_{12}a_{24}a_{33}a_{41}]$$

and the third part is given as

$$S(A_{tp}) = [a_{11}a_{24}a_{32}a_{43} - a_{14}a_{22}a_{33}a_{41} + a_{12}a_{23}a_{31}a_{44} - a_{13}a_{21}a_{34}a_{42}] - [a_{12}a_{24}a_{31}a_{43} - a_{14}a_{21}a_{33}a_{42} + a_{11}a_{23}a_{32}a_{44} - a_{13}a_{22}a_{34}a_{41}]$$

Then,

$$\det(A) = S(A_{fp}) + S(A_{sp}) + S(A_{tp}).$$

As before, we again arrive at the expansion of Eq. (7).

2. 3. 7. New Method 7

In the new scheme 7, the sub-matrices A_fp, A_sp and A_tp are constructed in a similar way as in scheme 6, but with some differences in the arrangement of the elements. For the matrix A of Eq. (4), whose determinant is to be found:

The first part is given as,

$$S(A_{fp}) = [a_{11}a_{22}a_{33}a_{44} - a_{12}a_{23}a_{34}a_{41} + a_{13}a_{24}a_{31}a_{42} - a_{14}a_{21}a_{32}a_{43}] - [a_{13}a_{22}a_{31}a_{44} - a_{12}a_{21}a_{34}a_{43} + a_{11}a_{24}a_{33}a_{42} - a_{14}a_{23}a_{32}a_{41}]$$

The second part is given as


$$S(A_{sp}) = [a_{11}a_{23}a_{34}a_{42} - a_{13}a_{24}a_{32}a_{41} + a_{14}a_{22}a_{31}a_{43} - a_{12}a_{21}a_{33}a_{44}] - [a_{14}a_{23}a_{31}a_{42} - a_{13}a_{21}a_{32}a_{44} + a_{11}a_{22}a_{34}a_{43} - a_{12}a_{24}a_{33}a_{41}]$$

and the third part is given as

$$S(A_{tp}) = [a_{11}a_{24}a_{32}a_{43} - a_{14}a_{22}a_{33}a_{41} + a_{12}a_{23}a_{31}a_{44} - a_{13}a_{21}a_{34}a_{42}] - [a_{12}a_{24}a_{31}a_{43} - a_{14}a_{21}a_{33}a_{42} + a_{11}a_{23}a_{32}a_{44} - a_{13}a_{22}a_{34}a_{41}]$$

Then,

$$\det(A) = S(A_{fp}) + S(A_{sp}) + S(A_{tp}),$$

and we arrive at the expansion of Eq. (7).

2. 3. 8. New Method 8

In the new scheme 8, the sub-matrices A_fp, A_sp and A_tp are constructed in a similar way as in scheme 6, but with some differences in the arrangement of the elements. For the matrix A of Eq. (4), whose determinant is to be found:

The first part is given as,


$$S(A_{fp}) = [a_{11}a_{22}a_{33}a_{44} - a_{12}a_{23}a_{34}a_{41} + a_{13}a_{24}a_{31}a_{42} - a_{14}a_{21}a_{32}a_{43}] - [a_{13}a_{22}a_{31}a_{44} - a_{12}a_{21}a_{34}a_{43} + a_{11}a_{24}a_{33}a_{42} - a_{14}a_{23}a_{32}a_{41}]$$

The second part is given as,

$$S(A_{sp}) = [a_{11}a_{23}a_{34}a_{42} - a_{13}a_{24}a_{32}a_{41} + a_{14}a_{22}a_{31}a_{43} - a_{12}a_{21}a_{33}a_{44}] - [a_{14}a_{23}a_{31}a_{42} - a_{13}a_{21}a_{32}a_{44} + a_{11}a_{22}a_{34}a_{43} - a_{12}a_{24}a_{33}a_{41}]$$

and the third part is given as

$$S(A_{tp}) = [a_{11}a_{24}a_{32}a_{43} - a_{14}a_{22}a_{33}a_{41} + a_{12}a_{23}a_{31}a_{44} - a_{13}a_{21}a_{34}a_{42}] - [a_{12}a_{24}a_{31}a_{43} - a_{14}a_{21}a_{33}a_{42} + a_{11}a_{23}a_{32}a_{44} - a_{13}a_{22}a_{34}a_{41}]$$

Recall that

$$\det(A) = S(A_{fp}) + S(A_{sp}) + S(A_{tp}).$$

We arrive again at the expansion of Eq. (7).

2. 3. 9. New Method 9

In the new scheme 9, the sub-matrices A_fp, A_sp and A_tp are constructed in a similar way as in scheme 6, but with some differences in the arrangement of the elements. Consider the matrix A of Eq. (4), whose determinant is to be found.

The first part is given as,

$$S(A_{fp}) = [a_{11}a_{22}a_{33}a_{44} - a_{12}a_{23}a_{34}a_{41} + a_{13}a_{24}a_{31}a_{42} - a_{14}a_{21}a_{32}a_{43}] - [a_{13}a_{22}a_{31}a_{44} - a_{12}a_{21}a_{34}a_{43} + a_{11}a_{24}a_{33}a_{42} - a_{14}a_{23}a_{32}a_{41}]$$

The second part is given as,


$$S(A_{sp}) = [a_{11}a_{23}a_{34}a_{42} - a_{13}a_{24}a_{32}a_{41} + a_{14}a_{22}a_{31}a_{43} - a_{12}a_{21}a_{33}a_{44}] - [a_{14}a_{23}a_{31}a_{42} - a_{13}a_{21}a_{32}a_{44} + a_{11}a_{22}a_{34}a_{43} - a_{12}a_{24}a_{33}a_{41}]$$

and the third part is given as

$$S(A_{tp}) = [a_{11}a_{24}a_{32}a_{43} - a_{14}a_{22}a_{33}a_{41} + a_{12}a_{23}a_{31}a_{44} - a_{13}a_{21}a_{34}a_{42}] - [a_{12}a_{24}a_{31}a_{43} - a_{14}a_{21}a_{33}a_{42} + a_{11}a_{23}a_{32}a_{44} - a_{13}a_{22}a_{34}a_{41}]$$

As before,

$$\det(A) = S(A_{fp}) + S(A_{sp}) + S(A_{tp}),$$

and we arrive again at the expansion of Eq. (7).

2. 3. 10. New Method 10

In the new scheme 10, the sub-matrices A_fp, A_sp and A_tp are constructed in a similar way as in scheme 6, but with some differences in the arrangement of the elements. Consider the matrix A of Eq. (4), whose determinant is to be found.

The first part is given as,

$$S(A_{fp}) = [a_{11}a_{22}a_{33}a_{44} - a_{12}a_{23}a_{34}a_{41} + a_{13}a_{24}a_{31}a_{42} - a_{14}a_{21}a_{32}a_{43}] - [a_{13}a_{22}a_{31}a_{44} - a_{12}a_{21}a_{34}a_{43} + a_{11}a_{24}a_{33}a_{42} - a_{14}a_{23}a_{32}a_{41}]$$

The second part is given as,

$$S(A_{sp}) = [a_{11}a_{23}a_{34}a_{42} - a_{13}a_{24}a_{32}a_{41} + a_{14}a_{22}a_{31}a_{43} - a_{12}a_{21}a_{33}a_{44}] - [a_{14}a_{23}a_{31}a_{42} - a_{13}a_{21}a_{32}a_{44} + a_{11}a_{22}a_{34}a_{43} - a_{12}a_{24}a_{33}a_{41}]$$

The third part is given as,

$$S(A_{tp}) = [a_{11}a_{24}a_{32}a_{43} - a_{14}a_{22}a_{33}a_{41} + a_{12}a_{23}a_{31}a_{44} - a_{13}a_{21}a_{34}a_{42}] - [a_{12}a_{24}a_{31}a_{43} - a_{14}a_{21}a_{33}a_{42} + a_{11}a_{23}a_{32}a_{44} - a_{13}a_{22}a_{34}a_{41}]$$

Then

$$\det(A) = S(A_{fp}) + S(A_{sp}) + S(A_{tp}),$$

and we arrive again at the expansion of Eq. (7).

3. NUMERICAL EXAMPLES

In order to investigate the workability, correctness and efficiency of the new methods, two numerical examples are presented in this section. Each example is first solved with other known and common methods, such as the expansion-of-cofactors method and the pivotal condensation method, and then with the new methods (Gbemi’s methods).


Example 1

$$A = \begin{vmatrix} 1 & 2 & -3 & 4 \\ 2 & -2 & 5 & -6 \\ -1 & 3 & -4 & 6 \\ 6 & 5 & -3 & 6 \end{vmatrix}$$

Using Laplace Expansion of Co-factor method

$$A = \begin{bmatrix} 1 & 2 & -3 & 4 \\ 2 & -2 & 5 & -6 \\ -1 & 3 & -4 & 6 \\ 6 & 5 & -3 & 6 \end{bmatrix}$$

$$\det(A) = 1\begin{vmatrix} -2 & 5 & -6 \\ 3 & -4 & 6 \\ 5 & -3 & 6 \end{vmatrix} - 2\begin{vmatrix} 2 & 5 & -6 \\ -1 & -4 & 6 \\ 6 & -3 & 6 \end{vmatrix} + (-3)\begin{vmatrix} 2 & -2 & -6 \\ -1 & 3 & 6 \\ 6 & 5 & 6 \end{vmatrix} - 4\begin{vmatrix} 2 & -2 & 5 \\ -1 & 3 & -4 \\ 6 & 5 & -3 \end{vmatrix}$$

$$\begin{aligned} \det(A) = {} & 1\left[-2\begin{vmatrix} -4 & 6 \\ -3 & 6 \end{vmatrix} - 5\begin{vmatrix} 3 & 6 \\ 5 & 6 \end{vmatrix} + (-6)\begin{vmatrix} 3 & -4 \\ 5 & -3 \end{vmatrix}\right] - 2\left[2\begin{vmatrix} -4 & 6 \\ -3 & 6 \end{vmatrix} - 5\begin{vmatrix} -1 & 6 \\ 6 & 6 \end{vmatrix} + (-6)\begin{vmatrix} -1 & -4 \\ 6 & -3 \end{vmatrix}\right] \\ & + (-3)\left[2\begin{vmatrix} 3 & 6 \\ 5 & 6 \end{vmatrix} - (-2)\begin{vmatrix} -1 & 6 \\ 6 & 6 \end{vmatrix} + (-6)\begin{vmatrix} -1 & 3 \\ 6 & 5 \end{vmatrix}\right] - 4\left[2\begin{vmatrix} 3 & -4 \\ 5 & -3 \end{vmatrix} - (-2)\begin{vmatrix} -1 & -4 \\ 6 & -3 \end{vmatrix} + 5\begin{vmatrix} -1 & 3 \\ 6 & 5 \end{vmatrix}\right] \end{aligned}$$

$$\begin{aligned} \det(A) = {} & 1\,(-2(-24 + 18) - 5(18 - 30) - 6(-9 + 20)) - 2\,(2(-24 + 18) - 5(-6 - 36) - 6(3 + 24)) \\ & - 3\,(2(18 - 30) + 2(-6 - 36) - 6(-5 - 18)) - 4\,(2(-9 + 20) + 2(3 + 24) + 5(-5 - 18)) \end{aligned}$$

$$\det(A) = 6 + (-72) + (-90) + 156 = 0$$

Using Chio’s Pivotal Condensation Method

$$A = \begin{bmatrix} 1 & 2 & -3 & 4 \\ 2 & -2 & 5 & -6 \\ -1 & 3 & -4 & 6 \\ 6 & 5 & -3 & 6 \end{bmatrix}$$

Initialise D = 1 and reduce A to row-echelon form:

$$\begin{bmatrix} 1 & 2 & -3 & 4 \\ 0 & -6 & 11 & -14 \\ -1 & 3 & -4 & 6 \\ 6 & 5 & -3 & 6 \end{bmatrix}$$ Adding −2 times the first row to the second row: D remains 1.

$$\begin{bmatrix} 1 & 2 & -3 & 4 \\ 0 & -6 & 11 & -14 \\ 0 & 5 & -7 & 10 \\ 6 & 5 & -3 & 6 \end{bmatrix}$$ Adding 1 times the first row to the third row: D remains 1.

$$\begin{bmatrix} 1 & 2 & -3 & 4 \\ 0 & -6 & 11 & -14 \\ 0 & 5 & -7 & 10 \\ 0 & -7 & 15 & -18 \end{bmatrix}$$ Adding −6 times the first row to the fourth row: D remains 1.

$$\begin{bmatrix} 1 & 2 & -3 & 4 \\ 0 & 1 & -11/6 & 14/6 \\ 0 & 5 & -7 & 10 \\ 0 & -7 & 15 & -18 \end{bmatrix}$$ Multiplying the second row by −1/6: D ← D·(−6) = 1·(−6) = −6.

$$\begin{bmatrix} 1 & 2 & -3 & 4 \\ 0 & 1 & -11/6 & 14/6 \\ 0 & 0 & 13/6 & -5/3 \\ 0 & -7 & 15 & -18 \end{bmatrix}$$ Adding −5 times the second row to the third row: D remains −6.

$$\begin{bmatrix} 1 & 2 & -3 & 4 \\ 0 & 1 & -11/6 & 14/6 \\ 0 & 0 & 13/6 & -5/3 \\ 0 & 0 & 13/6 & -5/3 \end{bmatrix}$$ Adding 7 times the second row to the fourth row: D remains −6.

Multiplying the third row by 6/13: D ← D·(13/6) = −6·(13/6) = −13.

$$\begin{bmatrix} 1 & 2 & -3 & 4 \\ 0 & 1 & -11/6 & 14/6 \\ 0 & 0 & 1 & -10/13 \\ 0 & 0 & 13/6 & -5/3 \end{bmatrix}$$

Adding −13/6 times the third row to the fourth row: D remains −13.

$$\begin{bmatrix} 1 & 2 & -3 & 4 \\ 0 & 1 & -11/6 & 14/6 \\ 0 & 0 & 1 & -10/13 \\ 0 & 0 & 0 & 0 \end{bmatrix}$$

The matrix is now in row-echelon form with diagonal elements 1, 1, 1 and 0. Thus, det A = −13·(1)(1)(1)(0) = 0.
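The bookkeeping above can be mirrored in a few lines of MATLAB. The following sketch is illustrative only; it scales each pivot row first and compensates in D, rather than following the exact order of the hand computation, but it reproduces the same value.

A = [1 2 -3 4; 2 -2 5 -6; -1 3 -4 6; 6 5 -3 6];
D = 1;                      % running factor, so that det(A) = D * det(current matrix)
U = A;  n = 4;
for k = 1:n
    p = U(k,k);
    if p ~= 0
        U(k,:) = U(k,:)/p;  % scale the pivot row to 1 ...
        D = D*p;            % ... and compensate in D
        for r = k+1:n
            U(r,:) = U(r,:) - U(r,k)*U(k,:);   % eliminate below the pivot (det unchanged)
        end
    end
end
detA = D*prod(diag(U));     % -13*(1)(1)(1)(0) = 0 for this example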

Using the Gbemi’s Method (The first method)

$$A = \begin{bmatrix} 1 & 2 & -3 & 4 \\ 2 & -2 & 5 & -6 \\ -1 & 3 & -4 & 6 \\ 6 & 5 & -3 & 6 \end{bmatrix}$$

$$A_{fp} = \begin{vmatrix} 1 & 2 & -3 & 4 & 1 & 2 & -3 \\ 2 & -2 & 5 & -6 & 2 & -2 & 5 \\ -1 & 3 & -4 & 6 & -1 & 3 & -4 \\ 6 & 5 & -3 & 6 & 6 & 5 & -3 \end{vmatrix}$$

$$A_{sp} = \begin{vmatrix} 1 & -3 & 4 & 2 & 1 & -3 & 4 \\ 2 & 5 & -6 & -2 & 2 & 5 & -6 \\ -1 & -4 & 6 & 3 & -1 & -4 & 6 \\ 6 & -3 & 6 & 5 & 6 & -3 & 6 \end{vmatrix}$$

$$A_{tp} = \begin{vmatrix} 1 & 4 & 2 & -3 & 1 & 4 & 2 \\ 2 & -6 & -2 & 5 & 2 & -6 & -2 \\ -1 & 6 & 3 & -4 & -1 & 6 & 3 \\ 6 & 6 & 5 & -3 & 6 & 6 & 5 \end{vmatrix}$$

Hence

$$S(A_{fp}) = (48 - 360 - 90 + 72) - (-36 + 72 + 120 - 360) = -330 - (-204) = -126$$

Applying the Sarrus’ rule to the second part A_sp,

$$S(A_{sp}) = (150 - 324 - 24 + 96) - (-100 + 108 + 36 - 288) = -102 - (-244) = 142$$

Applying the Sarrus’ rule to the third part A_tp,

$$S(A_{tp}) = (54 - 192 - 60 + 180) - (-36 + 160 + 90 - 216) = -18 - (-2) = -16$$


Therefore,

$$\det(A) = S(A_{fp}) + S(A_{sp}) + S(A_{tp}) = -126 + 142 + (-16) = 0$$

NB: The above is the application of the first scheme presented in the previous section. Applying the other nine remaining schemes to the problem, the same result is arrived at.
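As a quick check (illustrative only), the sarrus4 sketch given after scheme 1 reproduces this result:

A = [1 2 -3 4; 2 -2 5 -6; -1 3 -4 6; 6 5 -3 6];
sarrus4(A)                  % returns 0, the same value as det(A)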

Example 2

Using the Laplace Expansion of Co-factors method

$$A = \begin{vmatrix} 2 & 2 & 3 & 3 \\ 2 & 3 & 3 & 2 \\ 5 & 3 & 7 & 9 \\ 3 & 2 & 4 & 7 \end{vmatrix}$$

$$\det(A) = 2\begin{vmatrix} 3 & 3 & 2 \\ 3 & 7 & 9 \\ 2 & 4 & 7 \end{vmatrix} - 2\begin{vmatrix} 2 & 3 & 2 \\ 5 & 7 & 9 \\ 3 & 4 & 7 \end{vmatrix} + 3\begin{vmatrix} 2 & 3 & 2 \\ 5 & 3 & 9 \\ 3 & 2 & 7 \end{vmatrix} - 3\begin{vmatrix} 2 & 3 & 3 \\ 5 & 3 & 7 \\ 3 & 2 & 4 \end{vmatrix}$$

$$\begin{aligned} \det(A) = {} & 2\,(3(49 - 36) - 3(21 - 18) + 2(12 - 14)) - 2\,(2(49 - 36) - 3(35 - 27) + 2(20 - 21)) \\ & + 3\,(2(21 - 18) - 3(35 - 27) + 2(10 - 9)) - 3\,(2(12 - 14) - 3(20 - 21) + 3(10 - 9)) = -2 \end{aligned}$$

Using Gbemi’s method

$$A = \begin{vmatrix} 2 & 2 & 3 & 3 \\ 2 & 3 & 3 & 2 \\ 5 & 3 & 7 & 9 \\ 3 & 2 & 4 & 7 \end{vmatrix}$$

$$A_{fp} = \begin{vmatrix} 2 & 2 & 3 & 3 & 2 & 2 & 3 \\ 2 & 3 & 3 & 2 & 2 & 3 & 3 \\ 5 & 3 & 7 & 9 & 5 & 3 & 7 \\ 3 & 2 & 4 & 7 & 3 & 2 & 4 \end{vmatrix}$$

$$A_{sp} = \begin{vmatrix} 2 & 3 & 3 & 2 & 2 & 3 & 3 \\ 2 & 3 & 2 & 3 & 2 & 3 & 2 \\ 5 & 7 & 9 & 3 & 5 & 7 & 9 \\ 3 & 4 & 7 & 2 & 3 & 4 & 7 \end{vmatrix}$$

$$A_{tp} = \begin{vmatrix} 2 & 3 & 2 & 3 & 2 & 3 & 2 \\ 2 & 2 & 3 & 3 & 2 & 2 & 3 \\ 5 & 9 & 3 & 7 & 5 & 9 & 3 \\ 3 & 7 & 2 & 4 & 3 & 7 & 2 \end{vmatrix}$$


Hence

$$S(A_{fp}) = (294 - 162 + 60 - 72) - (315 - 144 + 56 - 81) = 120 - 146 = -26$$

$$S(A_{sp}) = (108 - 54 + 180 - 196) - (90 - 126 + 216 - 84) = 38 - 96 = -58$$

$$S(A_{tp}) = (48 - 189 + 210 - 108) - (80 - 84 + 126 - 243) = -39 - (-121) = 82$$

Therefore,

$$\det(A) = S(A_{fp}) + S(A_{sp}) + S(A_{tp}) = -26 - 58 + 82 = -2$$

Hence

$$\det(A) = -2$$

It should be noted that only the first scheme was applied in this example. If any of the 10 schemes developed in this work is used, the same result is still arrived at.

4. EFFICIENCY OF THE NEW METHOD

4. 1. Asymptotic analysis

The efficiency of the Gbemi’s method was determined through an asymptotic (Big-O) analysis. The advantage of asymptotic analysis is that it is independent of the computer specifications. It is used here to compare the existing methods with the new method. The conventional method in most texts and in the literature is the Laplace expansion method, which evaluates the determinant as a weighted sum of the determinants of its sub-matrices. It is well established in the literature that the run time of the Laplace expansion method for finding the determinant is O(n!).


4. 1. 1. Run time of the new method

The new method evaluates the determinant of a 4 × 4 matrix as an extension of the Sarrus’ rule. For every diagonal, there are n entries that are visited; thus, the running time is O(n²). This can also be verified from the MATLAB program: there are two nested for loops, which means an O(n²) algorithm.

4. 1. 2. Run time of the other variations of the new method

Analyzing the other variations of the new method, the run time is likewise O(n²).

5. PROGRAMMING

This section presents the evaluation of the new approach (called the G-Method) and its suitability for programming (i.e. as a sub-routine for larger applications). The program is written in MATLAB. The MATLAB codes for the Rezaifar and Rezaee method and for the Laplace expansion method are also presented, as shown below.

function answer = GMethod(A,part)
%% Developed by Sobamowo M. Gbeminiyi ************************************
%% GMethod stands for Gbeminiyi's method.
n = length(A);
sum1 = 0; sum2 = 0;
for i = 1:1:n
    sum3 = 1; sum4 = 1;
    for j = 1:1:n
        % walk the i-th down-going and up-going diagonals (columns wrap around)
        sum3 = sum3*A(j, non_zero(mod(i+j-1,4),4));
        sum4 = sum4*A(n - j + 1, non_zero(mod(i+j-1,4),4));
    end
    sum1 = sum1 + ((-1)^(i+1))*sum3;
    sum2 = sum2 + ((-1)^(i))*sum4;
end
answer = (sum1 - sum2);
if part <= 2
    % move the second column to the end and process the next column arrangement
    answer = answer + GMethod([A(:,1), A(:,3:n), A(:,2)], part+1);
end
end

function k = non_zero(k,n)
% Helper used above; a minimal definition is assumed here:
% map a zero value of mod back to column n.
if k == 0
    k = n;
end
end

function answer = RMethod(m)
%% Developed by Omid Rezaifar. ****************************************
%% RMethod stands for Rezaifar Method.
%% This code was extracted from Omid Rezaifar et al. (2006).
n = length(m);
if n == 1
    answer = m;
elseif n == 2
    answer = m(1,1)*m(2,2) - m(1,2)*m(2,1);
else
    m11 = m(2:n, 2:n);          % delete first row and first column
    m1n = m(2:n, 1:n-1);        % delete first row and last column
    mn1 = m(1:n-1, 2:n);        % delete last row and first column
    mnn = m(1:n-1, 1:n-1);      % delete last row and last column
    m11nn = m11(1:n-2, 1:n-2);  % delete first and last rows and columns
    answer = RMethod(m11)*RMethod(mnn) - RMethod(m1n)*RMethod(mn1);
    answer = answer / RMethod(m11nn);
end
end

function [answer] = ExpansionMethod(A)
%% Laplace Expansion Method ******************************************
n = length(A);
if n == 1
    answer = A;
elseif n == 2
    answer = A(1,1)*A(2,2) - A(1,2)*A(2,1);
else
    answer = 0;
    for i = 1:1:n
        % expand along the first row, deleting row 1 and column i
        answer = answer + ((-1)^(1+i)*A(1,i))*ExpansionMethod([A(2:n,1:i-1), A(2:n,i+1:n)]);
    end
end
end
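As a quick usage check (an illustrative call, not part of the original listings), the three routines can be run on the test matrix used in the execution tests below; all three should return the same value as MATLAB's built-in det:

A = [10 1 3 -7; 5 4 1 12; 0 2 10 1; 4 3 20 11];
d1 = GMethod(A,1);          % Gbemi's method (starting from the first arrangement)
d2 = RMethod(A);            % Rezaifar and Rezaee's method
d3 = ExpansionMethod(A);    % Laplace expansion
% d1, d2 and d3 should all equal det(A)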

>> ExecutionTest1

*******************************************************************
Matrix Executed

10  1  3  -7
 5  4  1  12
 0  2 10   1
 4  3 20  11

*******************************************************************

******** Average Time per Execution For Laplace Expansion Method *********
Number of Executions = 1000
Total Time for Execution = 0.453
Average Time per Execution = 0.000453
*****************************************************************

>> ExecutionTest2

*******************************************************************
Matrix Executed

10  1  3  -7
 5  4  1  12
 0  2 10   1
 4  3 20  11

*******************************************************************

******** Average Time per Execution For Gbeminiyi Method (G-Method) ***********
Number of Executions = 1000
Total Time for Execution = 0.218
Average Time per Execution = 0.000218
*****************************************************************

>> ExecutionTest3

*******************************************************************
Matrix Executed

10  1  3  -7
 5  4  1  12
 0  2 10   1
 4  3 20  11

*******************************************************************

******** Average Time per Execution For Rezaifar Method *********
Number of Executions = 1000
Total Time for Execution = 0.359
Average Time per Execution = 0.000359
*****************************************************************
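A minimal sketch of how such timings can be reproduced is given below; the loop count and the tic/toc timing are illustrative and are not the author's original test script.

A = [10 1 3 -7; 5 4 1 12; 0 2 10 1; 4 3 20 11];
N = 1000;                   % number of executions
tic;
for k = 1:N
    GMethod(A,1);           % replace with RMethod(A) or ExpansionMethod(A) to compare
end
totalTime = toc;
averageTime = totalTime/N;  % average time per execution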

6. COMPARISON WITH EXISTING METHODS

The O(n²) running time is far better than the O(n!) running time. This means that the Gbemi’s method is more efficient than the existing Laplace expansion method and other existing methods for the computation of the determinant of a 4 × 4 matrix. This fact was also illustrated with the execution times of the MATLAB codes run on an Intel® Core™2 Duo CPU 2.00 GHz system with 4.00 GB of RAM. The codes for the Laplace expansion method, the Rezaifar and Rezaee method and the G-Method were run with a test matrix. In order to see the difference in execution time more reliably, the algorithms have to be run many times; therefore, the codes were run 1,000 and 10,000 times on the same matrix, and the average execution time per run was calculated. The results are shown in Tables 1 and 2.


Table 1. Comparison of computational time among different methods for 1,000 executions

Method                            | Number of executions | Total time for executions | Average time per execution
Laplace expansion method          | 1,000                | 0.453 s                   | 0.000453 s
Rezaifar and Rezaee's method [2]  | 1,000                | 0.359 s                   | 0.000359 s
Gbemi's method                    | 1,000                | 0.218 s                   | 0.000218 s

Table 2. Comparison of computational time among different methods for 10,000 executions

Method                            | Number of executions | Total time for executions | Average time per execution
Laplace expansion method          | 10,000               | 4.197 s                   | 0.0004197 s
Rezaifar and Rezaee's method [2]  | 10,000               | 3.496 s                   | 0.0003496 s
Gbemi's method                    | 10,000               | 1.766 s                   | 0.0001766 s

As presented in Tables 1 and 2, the Gbemi’s method saves much time, and its running speed is faster than those of the Laplace expansion and the Rezaifar and Rezaee methods. Although the recursive loops in the Rezaifar and Rezaee’s method make it convenient to use in programming, if a division by zero appears during the computation of the determinant of a matrix, the method fails to evaluate the value of the determinant unless rows are interchanged, and as a result the determinant is altered [2]. This shortcoming or limitation of the Rezaifar’s method was also pointed out by Dutta and Pal [22]. The newly developed method (Gbemi’s method), however, overcomes this limitation of the Rezaifar and Rezaee’s method [2]. For the optimized MATLAB built-in method, for 1,000 executions the total time for execution is 0.015 s, while the average time per execution is 0.000015 s. It has also been pointed out that, in machine programs that require an algorithm to find the determinant of matrices, Gaussian elimination or the Gauss-Jordan method is commonly used; this method is based on a linear and unilateral approach to finding the determinant [2]. It is hoped that, if the newly developed algorithm is optimized, it will run faster than the MATLAB built-in method.

7. CONCLUSION AND FUTURE WORKS

In this work, further results comprising additional nine methods of extending the Sarrus’ rule to evaluate the determinant of 4 × 4 matrices were presented. The results further established the effectiveness, consistency for handy calculations, high accuracy and relatively low computational time of the Gbemi’s method. The applications of the new method to the computation of the determinants of larger matrices, such as 5 × 5, 6 × 6 and all other n × n (n > 6) matrices, will be presented in a future study. It could be stated that, with the aid of the method extended to n × n matrices and its application in most software, there will be a great reduction in the computational and running time of such software. This, in consequence, will reduce the computational cost.

Acknowledgement

The author shows sincere appreciation and acknowledgement to the University of Lagos, Nigeria, for the material support and the provision of a conducive environment for this research work.

Dr Sobamowo M. Gbeminiyi was born in Lagos in 1978. He obtained an OND and an HND from The Polytechnic, Ibadan, in 1998 and 2002, respectively. He also obtained B.Sc., M.Sc. and Ph.D. degrees in 2006, 2009 and 2013, respectively, in the Department of Mechanical Engineering, University of Lagos, Nigeria. Although he began his lecturing career at Lagos State University in 2009, he presently lectures in the Department of Mechanical Engineering, University of Lagos. Dr Sobamowo has published over 180 research papers in various prestigious international journals. He is the author of the textbook “Student’s Companion for Excellent Performance in Final Year Project”, and also a co-author of a textbook on Fluid Mechanics and Hydraulic Machines. Dr Sobamowo is a reviewer for many international and local journals. He is a co-editor of various international journals. He has received many international invitations as a speaker on his research topics at many international conferences and workshops. He is the founder and principal researcher of the Herzer Modelling and Simulation Research Group and also the Renewable Energy-For-Cold Chain Research Group, University of Lagos, Nigeria. He is an inventor of the software “GEM” (General Empirical Modeler). His research interests include energy systems modelling, simulation and design, renewable energy systems analysis and design, flow and heat transfer, and thermal fluidic-induced vibration in energy systems. He has supervised and is still supervising B.Sc., M.Sc. and Ph.D. students in these research areas. He is a member of the Nigerian Institution of Mechanical Engineers, the Nigeria Society of Engineers, and the Council for the Regulation of Engineering in Nigeria. His areas of specialization include Energy Systems Modelling, Simulations and Design.

References

[1] M. G. Sobamowo. On the extension of Sarrus’ rule to n×n (n > 3) matrices: Development of new method for the computation of the determinant of 4×4 matrix. International Journal of Engineering Mathematics (2016) 1-14. https://doi.org/10.1155/2016/9382739
[2] O. Rezaifar and M. Rezaee. A new approach for finding the determinant of matrices. Applied Mathematics and Computation 188 (2007) 1445-1454.
[3] A. A. M. Ahmed and K. L. Bondar. Modern Method to Compute the Determinants of Matrices of Order 3. Journal of Informatics and Mathematical Sciences 6(2) (2014) 55-60.


[4] C. Dubbs and D. Siege. Computing determinants. The College Mathematics Journal 18 (1987) 48-50.
[5] W. M. Gentleman and S. C. Johnson. The evaluation of determinants by expansion by minors and the general problem of substitution. Mathematics of Computation 28(126) (1974) 543-548.
[6] A. Assen and J. Venkateswara Rao. A Study on the Computation of the Determinants of a 3×3 Matrix. International Journal of Science and Research 3(6) (2014) 912-921.
[7] C. L. Dodgson. Condensation of Determinants, Being a New and Brief Method for Computing their Arithmetic Values. Proc. Roy. Soc. Ser. A. 15 (1866-1887) 150-155.
[8] M. A. El-Mikkawy. Fast Algorithm for Evaluating nth Order Tri-diagonal Determinants. J. Comput. Appl. Math. 166 (2014) 581-584.
[9] Q. Gjonbalaj and A. Salihu. Computing the Determinants by Reducing the Orders by Four. Applied Mathematics E-Notes 10 (2010) 151-158.
[10] D. Hajrizaj. New method to compute the determinant of 3×3 matrix. International Journal of Algebra 3(5) (2009) 211-219.
[11] Ilse C. F. Ipsen and Dean J. Lee. Determinant Approximations. Numer. Linear Algebra Appl. (2005) 1-15. https://arxiv.org/abs/1105.0437
[12] L. G. Molinari. Determinants of Block Tridiagonal Matrices. Linear Algebra and its Applications 429 (2008) 2221-2226.
[13] V. Y. Pan. Computing the determinant and the characteristic polynomial of a matrix via solving linear systems of equations. Information Processing Letters 28(2) (1998) 71-75.
[14] C. M. Radić. A Generalization of the Determinant of a Square Matrix and Some of Its Applications in Geometry. Serbo-Croatian Matematika 20 (1991) 19-36.
[15] R. Adrian and E. Torrence. “Shutting up like a telescope”: Lewis Carroll's “Curious” Condensation Method for Evaluating Determinants. College Mathematics Journal 38(2) (2007) 85-95.
[16] H. Teimoori, M. Bayat, A. Amiri and E. Sarijloo. A New Parallel Algorithm for Evaluating the Determinant of a Matrix of Order n. Euro Combinatory (2005) 123-134.
[17] X. B. Chen. A fast algorithm for computing the determinants of banded circulant matrices. Applied Mathematics and Computation 229 (2014) 201-207.
[18] D. Bozkurt and Tin-Yau Tam. Determinants and inverses of circulant matrices with Jacobsthal and Jacobsthal-Lucas numbers. Appl. Math. Comput. 219 (2012) 544-551.
[19] X. G. Lv, T. Z. Huang and J. Le. A fast numerical algorithm for the determinant of a pentadiagonal matrix. Appl. Math. Comput. 196 (2008) 835-841.
[20] X. G. Lv, T. Z. Huang and J. Le. A note on computing the inverse and the determinant of a pentadiagonal matrix. Appl. Math. Comput. 206 (2008) 327-331.
[21] V. Pan. Complexity of computation with matrices and polynomials. SIAM Rev. 34 (1992) 225-262.


[22] J. Dutta and S. C. Pal. Generalization of a New Technique for Finding the Determinant of Matrices. Journal of Computer and Mathematical Sciences 2(2) (2011) 266-273.
[23] S. Q. Shen, J. M. Cen and Y. Hao. On the determinants and inverses of circulant matrices with Fibonacci and Lucas numbers. Appl. Math. Comput. 217 (2011) 9790-9797.
