Futo Journal Series (FUTOJNLS)
e-ISSN: 2476-8456; p-ISSN: 2467-8325
Volume 5, Issue 1, pp. 189-209
www.futojnls.org

Research Paper, July 2019

Comparing the AKS Algorithm for Deterministic Primality Testing Using MATLAB Updated with Maple and C++

1Chidozie, K. K., 1*Obi, M. C. and 2Nwamba, I.
1Department of Mathematics, Federal University of Technology, Owerri, Imo State, Nigeria
2ICT Department, Imo State University, Owerri, Imo State, Nigeria
*Corresponding Author's Email: [email protected]; [email protected]

Abstract
This work compares the use of MATLAB updated with Maple and of C++ to implement the AKS algorithm, the first deterministic polynomial-time primality testing algorithm. Testing whether the number 9965468763136528274628451 is prime took about 210 minutes with the AKS algorithm. Such running times are a serious drawback in cryptography, where large numbers are routinely tested for primality. The comparison shows that implementing the AKS algorithm in MATLAB updated with Maple is faster and more efficient than using C++.

1. Introduction
The study of prime numbers dates back to 300 BC, when the ancient Greek mathematician Euclid used proof by contradiction to show that there are infinitely many prime numbers. He also proved the fundamental theorem of arithmetic. Around 200 BC, another Greek mathematician, Eratosthenes, introduced an algorithm for primality testing called the Sieve of Eratosthenes: given the integers from 2 to n, mark all the even numbers except 2; then take the next unmarked number, in this case 3, and mark all multiples of three except 3 itself; continue in this way until no new numbers get marked. All the numbers remaining unmarked are prime. The test, however, is inefficient: it takes Ω(√n) steps to determine whether n is prime.

Afterwards, little happened in the study of prime numbers until the 17th century, when Pierre de Fermat put great effort into it. One of his greatest achievements is Fermat's Little Theorem: for any given prime number p and any number a such that gcd(a, p) = 1, a^(p-1) ≡ 1 (mod p). Although this condition can be computed efficiently, it cannot be used as a primality test on its own, since there are composite numbers, called the Carmichael numbers, which satisfy Fermat's property for all values of a coprime to them (a simple C++ sketch of the resulting Fermat test is given below). Though the direct application of Fermat's Little Theorem was discarded, it laid the foundation for many different algorithms that are primarily based on this theorem.

The theory of primality testing for restricted families of numbers had an earlier start. The first and most famous "modern" algorithm is the Lucas-Lehmer test (1876). It is an algorithm that runs in Õ((log n)^2) time (here Õ denotes O up to logarithmic factors) to determine whether a Mersenne number (a number of the form 2^p - 1, p prime) is prime or composite. Proth (1878) enlarged the family of numbers for which a primality test running in Õ((log n)^2) time exists. The Proth test applies to all numbers n ≡ 1 (mod 2^((log n)/2)) (by log we always mean log to the base 2), provided an integer a is given for which the Jacobi symbol (a/n) = -1. Usually such an integer a can easily be found using the quadratic reciprocity law; thus the Proth test becomes deterministic for a large proportion of, though not all, the numbers n satisfying n ≡ 1 (mod 2^((log n)/2)) (the case n = 2^p - 1 corresponds to the Lucas-Lehmer test).
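To make the Fermat test just described concrete, the following is a minimal C++ sketch; it is our own illustration, not the implementation benchmarked in this paper, and it assumes a compiler such as GCC or Clang that provides unsigned __int128. It checks a^(n-1) ≡ 1 (mod n) for a few fixed bases a using fast modular exponentiation; as the last line of main shows, a Carmichael number such as 561 still fools it.

```cpp
#include <cstdint>
#include <initializer_list>
#include <iostream>

// Modular exponentiation: computes (base^exp) mod m by repeated squaring.
// unsigned __int128 keeps intermediate products from overflowing for m < 2^64.
static uint64_t pow_mod(uint64_t base, uint64_t exp, uint64_t m) {
    uint64_t result = 1 % m;
    base %= m;
    while (exp > 0) {
        if (exp & 1)
            result = (unsigned __int128)result * base % m;
        base = (unsigned __int128)base * base % m;
        exp >>= 1;
    }
    return result;
}

// Fermat test with a few fixed bases (chosen coprime to the examples below).
// Returns false only if n is certainly composite; true means "probably prime".
static bool fermat_probably_prime(uint64_t n) {
    if (n < 4) return n == 2 || n == 3;
    for (uint64_t a : {2ULL, 5ULL, 7ULL, 13ULL}) {
        if (pow_mod(a, n - 1, n) != 1)
            return false;   // Fermat witness found: n is composite
    }
    return true;            // probably prime -- or a Carmichael number
}

int main() {
    std::cout << fermat_probably_prime(97)  << "\n";  // 1: 97 is prime
    std::cout << fermat_probably_prime(91)  << "\n";  // 0: 91 = 7 * 13
    std::cout << fermat_probably_prime(561) << "\n";  // 1: Carmichael number 561 = 3*11*17 fools the test
    return 0;
}
```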
Later, the Lucas-Lehmer test was also extended to all numbers n ≡ -1 (mod 2^((log n)/2)) for which a suitable integer a satisfying appropriate Jacobi symbol conditions is given. Hugh Williams (1970) extended these tests to numbers n such that a sufficiently large power of a prime p divides n - 1 or n + 1, provided there is a prime q such that n is not a p-th power modulo q, and gave many concrete implementations and tables of primes.

Mathematicians interested in prime numbers face two main challenges: to find the next largest prime number relative to the current one, and to provide a fast method, or improve a currently available one, for determining whether a given number is prime or composite. This is due both to the intrinsic properties of prime numbers and to the need created by their widespread applications. The second problem is particularly important because the given number may be of any magnitude. Currently, the largest known prime number is 2^24036583 - 1, which was found by Findley (2004). Methods such as the Sieve of Eratosthenes are therefore not efficient for large numbers, since their time complexity is exponential in the length of the number; it is not feasible to test an integer of large magnitude with these methods, even on the latest mainframe computers. A more efficient primality testing algorithm is urgently needed because of the important role prime numbers play in modern cryptosystems. As a result, there has been a dramatic increase in the development of efficient primality testing algorithms since the 1970s, most of them based on Fermat's Little Theorem.

Pratt (1975) showed that a short certificate of primality always exists, and hence that the primes are in nondeterministic polynomial time; but while his method is constructive, it requires the factorisation of large integers and thus does not run in polynomial time. Miller (1976) used a property based on Fermat's Little Theorem to obtain a deterministic polynomial-time algorithm for primality testing assuming the Extended Riemann Hypothesis (ERH). Independently, Solovay and Strassen (1977) obtained a different randomized polynomial-time algorithm using the property that for a prime n, the Jacobi symbol satisfies (a/n) ≡ a^((n-1)/2) (mod n) for every a. Their algorithm can also be made deterministic under the ERH. Since then, a number of randomized polynomial-time algorithms have been proposed for primality testing, based on many different properties.

Solovay and Strassen (1977) thus used Euler's criterion to develop a primality test: their idea was to randomly choose integers a less than n and check that each satisfies the above congruence. Rabin (1980) modified Miller's test to provide an unconditional but randomized polynomial-time algorithm. The Miller-Rabin algorithm relies on a set of equalities that hold for prime values and checks whether they hold for the number being tested for primality (a C++ sketch of this test is given below). Whenever the algorithm declares a number composite, that answer is always correct; when it declares a number prime, the probability of error is less than 1/4 per iteration.

Adleman, Pomerance and Rumely (1983) achieved a major breakthrough by giving a deterministic algorithm for primality testing that runs in (log n)^(O(log log log n)) time (all previous deterministic algorithms required exponential time). Their algorithm was, in a sense, a generalization of Miller's idea and used higher reciprocity laws. It was simplified by Cohen and Lenstra (1984), and implemented by Cohen and Lenstra (1987).
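The following is a minimal, self-contained C++ sketch of the Miller-Rabin idea described above; it is our own illustration rather than code from this paper, again assuming a compiler with unsigned __int128. It writes n - 1 = 2^s · d with d odd and, for each base a, checks the chain of conditions that must hold when n is prime. The fixed bases 2, 3, 5, 7 and 11 are an arbitrary illustrative choice; in the randomized algorithm the bases are drawn at random, and each round independently has error probability below 1/4. Note that, unlike the Fermat sketch, this strong test catches the Carmichael number 561.

```cpp
#include <cstdint>
#include <initializer_list>
#include <iostream>

// Same fast modular exponentiation helper as in the Fermat sketch above.
static uint64_t pow_mod(uint64_t base, uint64_t exp, uint64_t m) {
    uint64_t result = 1 % m;
    base %= m;
    while (exp > 0) {
        if (exp & 1) result = (unsigned __int128)result * base % m;
        base = (unsigned __int128)base * base % m;
        exp >>= 1;
    }
    return result;
}

// One Miller-Rabin round: writes n - 1 = 2^s * d (d odd) and checks that
// a^d ≡ 1 or a^(2^r * d) ≡ -1 (mod n) for some r < s, as holds for prime n.
static bool miller_rabin_pass(uint64_t n, uint64_t a) {
    uint64_t d = n - 1;
    int s = 0;
    while ((d & 1) == 0) { d >>= 1; ++s; }
    uint64_t x = pow_mod(a, d, n);
    if (x == 1 || x == n - 1) return true;
    for (int r = 1; r < s; ++r) {
        x = (unsigned __int128)x * x % n;   // advance to a^(2^r * d) mod n
        if (x == n - 1) return true;
    }
    return false;   // a is a witness: n is certainly composite
}

// Tests n against a handful of fixed small bases.
static bool is_probably_prime(uint64_t n) {
    if (n < 2) return false;
    for (uint64_t a : {2ULL, 3ULL, 5ULL, 7ULL, 11ULL}) {
        if (n == a)     return true;
        if (n % a == 0) return false;
        if (!miller_rabin_pass(n, a)) return false;
    }
    return true;   // no base found a witness: n is declared (probably) prime
}

int main() {
    std::cout << is_probably_prime(561) << "\n";           // 0: Carmichael number is caught
    std::cout << is_probably_prime(1000000007ULL) << "\n"; // 1: a well-known prime
    return 0;
}
```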
Goldwasser and Kilian (1986) proposed a randomized algorithm based on elliptic curves, running in expected polynomial time on almost all inputs (on all inputs under a widely believed hypothesis), that produces a certificate of primality (until then, all randomized algorithms had produced certificates of compositeness only). Based on their idea, a similar algorithm was developed by Atkin (1986). Later, Atkin and Morain (1990, 1993) devised a method called Elliptic Curve Primality Proving (ECPP): given any prime n of length k, the algorithm outputs a certificate of correctness of size polynomial in k which can be verified in deterministic polynomial time. The heuristic running time of this algorithm is O(k^(6+ε)) for some ε > 0, and it was subsequently optimized by Bosma and van der Hulst (1990).

A major breakthrough was achieved by Agrawal, Kayal and Saxena (2002), who proposed an algorithm (popularly known as the AKS algorithm) in their paper "PRIMES is in P". The AKS algorithm determines whether an input number n is prime or not in time Õ((log n)^12). The algorithm, based on a slight modification of Fermat's Little Theorem, namely the polynomial congruence (X + a)^n ≡ X^n + a (mod n), which holds if and only if n is prime when gcd(a, n) = 1, was the first deterministic polynomial-time algorithm for primality testing.

Soon after the publication of the AKS paper, Lenstra Jr. (2003) modified the AKS algorithm using an observation made by Pomerance, who noted that the auxiliary prime used in the analysis need not be the largest prime divisor of r - 1; the major difference is the construction of the useful prime r. Also, Berrizbeitia (2003) presented a deterministic primality test with running time Õ((log n)^4) for integers n for which a sufficiently large power of 2 divides n - 1, and for integers satisfying an analogous condition on n + 1. Cheng (2003) states that a further reduction in time complexity is achieved by first generalizing Berrizbeitia's algorithm to one which applies to a higher density of easily-proved primes; for a general prime, one round of ECPP is deployed to reduce its primality proof to the proof of a randomly chosen easily-proved prime. The resulting algorithm has a running time of Õ((log n)^4). Also in the same year, Bernstein (2003) generalized Berrizbeitia's improvement and presented an algorithm that, for a given integer n, finds and verifies a proof of the primality of n in random time (log n)^(4+o(1)).

In the later version of the AKS paper (2004), the original algorithm was improved through various modifications suggested by others. That version of the algorithm, which is the one presented in this paper, eliminates the while-loop and uses Lenstra's idea for constructing r; the time complexity is thereby reduced to Õ((log n)^10.5). Lenstra (2005) proposed a further variant of AKS with a still smaller running time; however, both of these improvements are mainly of theoretical interest. Lenstra Jr. (2011) improved the running time once again: he and Pomerance were able to bring it down to Õ((log n)^6) by generating the rings using Gaussian periods instead of roots of unity. Their proof relies on recently developed results in both analytic and additive number theory.

This paper presents the AKS algorithm together with the basic principles involved, provides its implementation in MATLAB updated with Maple and in C++, and compares the running times. Since Maple functions assisted with the exponentiation and modular reduction inside MATLAB, there was no need to carry out a separate test in Maple.

2. Some basic definitions
2.1. Congruence
If two numbers b and c have the property that their difference b - c is integrally divisible by a number m (i.e., (b - c)/m is an integer), then b and c are said to be congruent modulo m, written b ≡ c (mod m). For example, 38 ≡ 14 (mod 12), since 38 - 14 = 24 is divisible by 12. A small C++ illustration of this definition is given below.
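As a small illustration of this definition (our own sketch, not part of the original paper), the following C++ function tests b ≡ c (mod m) directly from the definition; std::abs guards against the sign behaviour of C++'s % operator when the difference is negative.

```cpp
#include <cstdlib>
#include <iostream>

// Tests b ≡ c (mod m) straight from the definition: m divides b - c.
static bool congruent(long long b, long long c, long long m) {
    return std::abs(b - c) % m == 0;
}

int main() {
    std::cout << congruent(38, 14, 12) << "\n";  // 1: 38 - 14 = 24 = 2 * 12
    std::cout << congruent(38, 15, 12) << "\n";  // 0: 23 is not divisible by 12
    return 0;
}
```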