A Survey of Modern Integer Factorization Algorithms


Peter L. Montgomery
Las Colindas Road, San Rafael, CA, USA
e-mail: pmontgom@cwi.nl

Every positive integer is expressible as a product of prime numbers in a unique way. Although it is easy to prove that this factorization exists, it is believed very hard to factor an arbitrary integer. We survey the best known algorithms for this problem and give some factorizations found at CWI.

Introduction

An integer n is said to be a prime number (or simply prime) if the only divisors of n are 1 and n. There are infinitely many prime numbers, the first four being 2, 3, 5 and 7. If n > 1 and n is not prime, then n is said to be composite. The integer 1 is neither prime nor composite. The Fundamental Theorem of Arithmetic states that every positive integer can be expressed as a finite (perhaps empty) product of prime numbers, and that this factorization is unique except for the ordering of the factors. Table 1 has some sample factorizations.

Table 1. Sample factorizations.

The existence of this factorization is an easy consequence of the definition of prime number and the well-ordering principle. The uniqueness proof is only slightly harder. However, this existence proof gives no clue about how to efficiently find the factors of a large given integer. No polynomial-time algorithm for solving this problem is known.

Factoring large integers has fascinated mathematicians for centuries. Gauss wrote: "The problem of distinguishing prime numbers from composites, and of resolving composite numbers into their prime factors, is one of the most important and useful in all of arithmetic. The dignity of science seems to demand that every aid to the solution of such an elegant and celebrated problem be zealously cultivated."

Some books are devoted to tabulating the factors of numbers of special form. The most referenced such table is the Cunningham table, which lists known factors of b^n ± 1 for small bases b and small n. Brent et al. extend these tables to further bases. Some of these factorizations appear elsewhere, together with some factors of a^n ± b^n. Brillhart et al. give factors of the Fibonacci numbers and related Lucas numbers. The RSA Challenge is building a table of factorizations of partition numbers.

Factorization was once primarily of academic interest. It gained in practical importance after the introduction of the RSA public-key cryptosystem. The cryptographic strength of RSA depends upon the difficulty of factoring large numbers.

Let N be a large composite integer. Until the 1970s, the best algorithms for factoring N took time O(N^c) for some constant c > 0. One such algorithm is trial division, which tries to divide N by all primes up to √N. This changed when Morrison and Brillhart introduced the continued fraction method, whose running time is exp(O(√(log N log log N))), which is N^{o(1)}.

Modern algorithms for factoring N fall into two major categories. Algorithms in the first category find small prime factors quickly. These include trial division, Pollard Rho, Pollard P−1, and the elliptic curve method. Algorithms in the second category factor a number regardless of the sizes of its prime factors, but cost much more when applied to larger integers. These algorithms include the continued fraction, quadratic sieve, and number field sieve methods.

In practice, algorithms in both categories are important. Given a large integer with no clue about the sizes of its prime factors, one typically tries algorithms in the first category until the cofactor (i.e., the quotient after dividing the original number by its known prime factors) is sufficiently small.
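To make the first category concrete, here is a minimal trial-division sketch (illustrative code, not taken from the survey): it strips the small prime factors from N and returns the remaining cofactor.

```python
def trial_division(N, bound=10**6):
    """Divide out all prime factors p <= bound; return (factors, cofactor)."""
    factors = []
    d = 2
    while d <= bound and d * d <= N:
        while N % d == 0:        # d is prime whenever it divides here,
            factors.append(d)    # because all smaller factors are already gone
            N //= d
        d += 1 if d == 2 else 2  # after 2, try odd candidates only
    return factors, N            # cofactor is 1, prime, or still composite

print(trial_division(2**32 + 1))   # ([641], 6700417), since 2^32 + 1 = 641 * 6700417
```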
Then one tries an algorithm in the second category if the cofactor is not itself prime. (An algorithm is said to be polynomial-time if its worst-case execution time is bounded by a polynomial function of the length of the input. If one wants to factor N, whose length is O(log N), then an algorithm running in time polynomial in log N is polynomial-time, whereas an O(N^ε) algorithm, for fixed ε > 0, is not.) If one is unable to find a sufficiently small cofactor or a prime cofactor using methods in the first category, the factorization remains incomplete.

The interpretation of "sufficiently small" has changed considerably as technology has progressed. In the 1960s, John Brillhart and John Selfridge predicted that factoring numbers beyond a modest size would be hard. In the 1970s, Richard Guy predicted that few numbers over 80 digits would be factored. At the time of writing, the cutoff is well over 100 digits. We illustrate some algorithms using a number N selected using CWI's street address.

Williams and Shallit give a computational history of factoring and primality testing in the period before the era of electronic computers. Richard Guy gives a good survey of the factorization methods known in the mid-1970s. Brillhart et al. give a chronology of developments in factorization by the Cunningham project, covering both hardware and software (especially the methods used). Robert Silverman gives a more recent exposition. Several textbooks cover factorization.

Below we review some elementary number theory and some fundamental algorithms which will be needed later.

Notations

The symbols Q, R and Z denote the sets of rational numbers, real numbers and integers, respectively. If x and y are integers, then x | y (read "x divides y") means that y is a multiple of x; that is, x | y if and only if there exists k in Z such that y = kx. The greatest common divisor (GCD) of two integers x and y is denoted gcd(x, y). The GCD is always positive unless x = y = 0. If gcd(x, y) = 1, then x and y are said to be coprime: they have no common divisors except ±1.

Review of Elementary Number Theory

This section reviews some elementary and analytic number theory. Proofs can be found in many number theory textbooks.

Congruence classes and modular arithmetic

Fix n > 0. Two integers x and y are said to be congruent modulo n if x − y is divisible by n. This is written x ≡ y (mod n). For fixed n, the relation ≡ is an equivalence relation (reflexive, symmetric, transitive). The equivalence classes are called congruence classes. A set with exactly one representative from each congruence class is called a complete residue system. There are exactly n congruence classes; the canonical complete residue system is {0, 1, ..., n−1}. We omit the modulus n, writing simply x ≡ y, when the modulus is clear from the context.

The relation ≡ is preserved under addition, subtraction and multiplication. If x1, x2, y1, y2 and n are integers such that x1 ≡ y1 (mod n) and x2 ≡ y2 (mod n), then

    x1 + x2 ≡ y1 + y2 (mod n),
    x1 − x2 ≡ y1 − y2 (mod n),
    x1 x2 ≡ y1 y2 (mod n).

These are easily proved using the definition of ≡. For example, x1 x2 − y1 y2 = x2 (x1 − y1) + y1 (x2 − y2) is a sum of two multiples of n.

These rules say that it is meaningful to add, subtract or multiply two congruence classes, since the class of the result does not depend upon the selection of the representatives. The congruence classes modulo n form a commutative ring under these operations. This ring, denoted by Z/nZ, is called the integers modulo n. When n is prime, division by nonzero elements is possible, and this ring is a field, often written GF(n).

A corollary is that if f is a polynomial in k variables with integer coefficients, and if x_i ≡ y_i (mod n) for 1 ≤ i ≤ k, then f(x_1, x_2, ..., x_k) ≡ f(y_1, y_2, ..., y_k) (mod n).
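As a small illustration of these notions (again code added for illustration, not taken from the survey), the sketch below computes the GCD with Euclid's algorithm, tests coprimality, and checks that sums, differences and products of congruence classes do not depend on the chosen representatives.

```python
def gcd(x, y):
    """Euclid's algorithm: the GCD is the last nonzero remainder."""
    while y:
        x, y = y, x % y
    return abs(x)

def coprime(x, y):
    return gcd(x, y) == 1

# If x1 ≡ y1 and x2 ≡ y2 (mod n), then sum, difference and product of the
# congruence classes are independent of the chosen representatives.
n = 35
x1, y1 = 12, 12 + 7 * n      # same class mod n
x2, y2 = 29, 29 - 3 * n
assert (x1 + x2) % n == (y1 + y2) % n
assert (x1 - x2) % n == (y1 - y2) % n
assert (x1 * x2) % n == (y1 * y2) % n

print(gcd(91, 35), coprime(8, 15))   # 7 True
```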
Occasionally we write r1 ≡ r2 (mod n) where r1 and r2 are rational numbers rather than integers. The notation a1/b1 ≡ a2/b2 (mod n) means that the numerator of a1/b1 − a2/b2 is divisible by n and that its denominator is coprime to n. That is, a1 b2 ≡ a2 b1 (mod n) and gcd(b1 b2, n) = 1.

Properties of prime numbers

Let p be a prime number. The following properties are stated without proof.

If x, y ∈ Z and p | xy, then p | x or p | y. Equivalently, if xy ≡ 0 (mod p), then x ≡ 0 (mod p) or y ≡ 0 (mod p).

Unique factorization. As previously mentioned, every positive integer can be written as a (perhaps empty) product of primes, and this representation is unique except for ordering. See Table 1 for some examples.

The polynomials (X + Y)^p and X^p + Y^p are congruent modulo p. For example, every term in (X + Y)^5 − X^5 − Y^5 = 5X^4 Y + 10X^3 Y^2 + 10X^2 Y^3 + 5X Y^4 is divisible by 5. This can be proved using the binomial theorem.

Fermat's little theorem. If a ∈ Z, then a^p ≡ a (mod p). In particular, if p does not divide a, then a^(p−1) ≡ 1 (mod p). One proof uses the last property and induction on a.

Chinese remainder theorem. Let n1 and n2 be coprime positive integers. If r1 and r2 are arbitrary integers, then the two congruences x ≡ r1 (mod n1) and x ≡ r2 (mod n2) may be replaced by a single congruence x ≡ r (mod n1 n2), where r is chosen to satisfy r ≡ r1 (mod n1) and r ≡ r2 (mod n2). This is known as the Chinese Remainder Theorem. To confirm the equivalence, note first that such an r exists since n1 and n2 are coprime. If x ≡ r1 (mod n1) and x ≡ r2 (mod n2), say x = r1 + k1 n1 and x = r2 + k2 n2 for integers k1 and k2, then x − r is divisible by both n1 and n2, and hence by n1 n2, which shows that x ≡ r (mod n1 n2). Conversely, if x ≡ r (mod n1 n2), say x = r + k n1 n2, then x ≡ r ≡ r1 (mod n1) and x ≡ r ≡ r2 (mod n2).

A generalization allows more than two moduli. If gcd(n_i, n_j) = 1 for i ≠ j (i.e., if the integers n_1, ..., n_k are pairwise coprime), and if r_i ∈ Z for all i, then the system of congruences

    x ≡ r_i (mod n_i),   1 ≤ i ≤ k,

is equivalent to a single congruence x ≡ r (mod n_1 n_2 ··· n_k) when r is suitably chosen. There are efficient ways to find this r given the {n_i} and {r_i}. When we know an upper bound on an integer x, we can determine x uniquely if we know it modulo enough primes. More precisely, if we know that |x| ≤ B and if we choose n_1 n_2 ··· n_k > 2B, then the system above determines x uniquely.

Smooth numbers
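A positive integer is called B-smooth when all of its prime factors are at most B; this is the notion that drives the sieve-based methods discussed in the publications below. A minimal trial-division check of the property (an illustrative sketch, not text or code from the survey):

```python
def is_smooth(n, B):
    """Return True if every prime factor of n is <= B (for n >= 1)."""
    for p in range(2, B + 1):
        while n % p == 0:
            n //= p
    return n == 1

print(is_smooth(2**10 * 3**5 * 7, 7))   # True: prime factors are 2, 3, 7
print(is_smooth(2 * 101, 100))          # False: 101 > 100
```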
Recommended publications
  • Primality Testing and Integer Factorisation
Primality Testing and Integer Factorisation
Richard P. Brent, FAA
Computer Sciences Laboratory, Australian National University, Canberra, ACT 2601

Abstract. The problem of finding the prime factors of large composite numbers has always been of mathematical interest. With the advent of public key cryptosystems it is also of practical importance, because the security of some of these cryptosystems, such as the Rivest-Shamir-Adleman (RSA) system, depends on the difficulty of factoring the public keys. In recent years the best known integer factorisation algorithms have improved greatly, to the point where it is now easy to factor a 60-decimal digit number, and possible to factor numbers larger than 120 decimal digits, given the availability of enough computing power. We describe several recent algorithms for primality testing and factorisation, give examples of their use and outline some applications.

1. Introduction

It has been known since Euclid's time (though first clearly stated and proved by Gauss in 1801) that any natural number N has a unique prime power decomposition

    N = p_1^{α_1} p_2^{α_2} ··· p_k^{α_k}    (1.1)

(p_1 < p_2 < ··· < p_k rational primes, α_j > 0). The prime powers p_j^{α_j} are called components of N, and we write p_j^{α_j} || N. To compute the prime power decomposition we need:

1. An algorithm to test if an integer N is prime.
2. An algorithm to find a nontrivial factor f of a composite integer N.

Given these there is a simple recursive algorithm to compute (1.1): if N is prime then stop, otherwise 1. find a nontrivial factor f of N; 2.
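The recursive decomposition outlined above is easy to phrase in code; the sketch below is an assumed illustration that uses naive trial division as the "find a nontrivial factor" step and sympy's isprime (an assumed dependency) as the primality test.

```python
from sympy import isprime

def nontrivial_factor(N):
    """Stand-in for step 1: find some nontrivial factor of composite N by trial division."""
    d = 2
    while d * d <= N:
        if N % d == 0:
            return d
        d += 1
    raise ValueError("N is prime")

def decompose(N):
    """Recursion from the abstract: split N until every remaining part is prime."""
    if N == 1:
        return []
    if isprime(N):            # if N is prime then stop
        return [N]
    f = nontrivial_factor(N)  # otherwise find a nontrivial factor ...
    return sorted(decompose(f) + decompose(N // f))   # ... and recurse on both parts

print(decompose(1100))   # [2, 2, 5, 5, 11]
```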
  • Computing Prime Divisors in an Interval
MATHEMATICS OF COMPUTATION, Volume 84, Number 291, January 2015, Pages 339–354
S 0025-5718(2014)02840-8. Article electronically published on May 28, 2014

COMPUTING PRIME DIVISORS IN AN INTERVAL
MINKYU KIM AND JUNG HEE CHEON

Abstract. We address the problem of finding a nontrivial divisor of a composite integer when it has a prime divisor in an interval. We show that this problem can be solved in time of the square root of the interval length with a similar amount of storage, by presenting two algorithms; one is probabilistic and the other is its derandomized version.

1. Introduction

Integer factorization is one of the most challenging problems in computational number theory. It has been studied for centuries, and it has been intensively investigated after introducing the RSA cryptosystem [18]. The difficulty of integer factorization has been examined not only in pure factoring situations but also in various modified situations. One such approach is to find a nontrivial divisor of a composite integer when it has prime divisors of special form. These include Pollard's p − 1 algorithm [15], Williams' p + 1 algorithm [20], and others. On the other hand, owing to the importance and the usage of integer factorization in cryptography, one needs to examine this problem when some partial information about divisors is revealed. This side information might be exposed through the procedures of generating prime numbers or through some attacks against crypto devices or protocols. This paper deals with integer factorization given the approximation of divisors, and it is motivated by the above mentioned research.
  • Fast Generation of RSA Keys Using Smooth Integers
Fast Generation of RSA Keys using Smooth Integers
Vassil Dimitrov, Luigi Vigneri and Vidal Attias

Abstract—Primality generation is the cornerstone of several essential cryptographic systems. The problem has been a subject of deep investigations, but there is still a substantial room for improvements. Typically, the algorithms used have two parts – trial divisions aimed at eliminating numbers with small prime factors and primality tests based on an easy-to-compute statement that is valid for primes and invalid for composites. In this paper, we will showcase a technique that will eliminate the first phase of the primality testing algorithms. The computational simulations show a reduction of the primality generation time by about 30% in the case of 1024-bit RSA key pairs. This can be particularly beneficial in the case of decentralized environments for shared RSA keys as the initial trial division part of the key generation algorithms can be avoided at no cost. This also significantly reduces the communication complexity. Another essential contribution of the paper is the introduction of a new one-way function that is computationally simpler than the existing ones used in public-key cryptography. This function can be used to create new random number generators, and it also could be potentially used for designing entirely new public-key encryption systems.

Index Terms—Multiple-base Representations, Public-Key Cryptography, Primality Testing, Computational Number Theory, RSA

1 INTRODUCTION

Additive number theory is a fascinating area of mathematics. In it one can find problems with [...]

1.1 Fast generation of prime numbers

The generation of prime numbers is a cornerstone of cryptographic systems such as the RSA cryptosystem.
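A rough sketch of the two-phase generation the abstract refers to (an illustration of the general idea, not the authors' technique; function name, bit size and prime list are chosen here for the example, and sympy's isprime stands in for the full primality test):

```python
import random
from sympy import isprime

SMALL_PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]

def random_prime(bits):
    """Two-phase generation: cheap trial-division screen, then a full primality test."""
    while True:
        cand = random.getrandbits(bits) | (1 << (bits - 1)) | 1   # force top bit and oddness
        if any(cand % p == 0 and cand != p for p in SMALL_PRIMES):
            continue            # phase 1: rejected by a small prime factor
        if isprime(cand):       # phase 2: full primality test on the survivor
            return cand

print(random_prime(64))
```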
  • Sieving for Twin Smooth Integers with Solutions to the Prouhet-Tarry-Escott Problem
Sieving for twin smooth integers with solutions to the Prouhet-Tarry-Escott problem
Craig Costello 1, Michael Meyer 2,3, and Michael Naehrig 1
1 Microsoft Research, Redmond, WA, USA, fcraigco,[email protected]
2 University of Applied Sciences Wiesbaden, Germany
3 University of Würzburg, Germany, [email protected]

Abstract. We give a sieving algorithm for finding pairs of consecutive smooth numbers that utilizes solutions to the Prouhet-Tarry-Escott (PTE) problem. Any such solution induces two degree-n polynomials, a(x) and b(x), that differ by a constant integer C and completely split into linear factors in Z[x]. It follows that for any ℓ ∈ Z such that a(ℓ) ≡ b(ℓ) ≡ 0 mod C, the two integers a(ℓ)/C and b(ℓ)/C differ by 1 and necessarily contain n factors of roughly the same size. For a fixed smoothness bound B, restricting the search to pairs of integers that are parameterized in this way increases the probability that they are B-smooth. Our algorithm combines a simple sieve with parametrizations given by a collection of solutions to the PTE problem.

The motivation for finding large twin smooth integers lies in their application to compact isogeny-based post-quantum protocols. The recent key exchange scheme B-SIDH and the recent digital signature scheme SQISign both require large primes that lie between two smooth integers; finding such a prime can be seen as a special case of finding twin smooth integers under the additional stipulation that their sum is a prime p. When searching for cryptographic parameters with 2^240 ≤ p < 2^256, an implementation of our sieve found primes p where p + 1 and p − 1 are 2^15-smooth; the smoothest prior parameters had a similar sized prime for which p − 1 and p + 1 were 2^19-smooth.
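To make the parameterization concrete, the sketch below (illustrative only; the polynomial pair is a toy choice, not one of the paper's PTE-derived pairs) scans values ℓ with a(ℓ) ≡ b(ℓ) ≡ 0 (mod C) and reports when both a(ℓ)/C and b(ℓ)/C are B-smooth.

```python
def is_smooth(n, B):
    for p in range(2, B + 1):
        while n % p == 0:
            n //= p
    return n == 1

# Toy split polynomials differing by C = 1 (not a PTE-derived pair):
# a(x) = x(x+2) and b(x) = (x+1)^2, so a(l)/C and b(l)/C are consecutive integers.
def a(x): return x * (x + 2)
def b(x): return (x + 1) ** 2
C, B = 1, 50

for l in range(2, 10**4):
    if a(l) % C == 0 and b(l) % C == 0:       # trivially true when C = 1
        m, n = a(l) // C, b(l) // C
        if is_smooth(m, B) and is_smooth(n, B):
            print(l, m, n)                    # a twin B-smooth pair
            break
```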
  • Integer Factorization and Computing Discrete Logarithms in Maple
Integer Factorization and Computing Discrete Logarithms in Maple
Aaron Bradford*, Michael Monagan*, Colin Percival*
[email protected], [email protected], [email protected]
Department of Mathematics, Simon Fraser University, Burnaby, B.C., V5A 1S6, Canada.

1 Introduction

As part of our MITACS research project at Simon Fraser University, we have investigated algorithms for integer factorization and computing discrete logarithms. We have implemented a quadratic sieve algorithm for integer factorization in Maple to replace Maple's implementation of the Morrison-Brillhart continued fraction algorithm which was done by Gaston Gonnet in the early 1980's. We have also implemented an index calculus algorithm for discrete logarithms in GF(q) to replace Maple's implementation of Shanks' baby-step giant-step algorithm, also done by Gaston Gonnet in the early 1980's. In this paper we describe the algorithms and our optimizations made to them. We give some details of our Maple implementations and present some initial timings.

Since Maple is an interpreted language, see [7], there is room for improvement of both implementations by coding critical parts of the algorithms in C. For example, one of the bottle-necks of the index calculus algorithm is finding integers which are B-smooth. Let B be a set of primes. A positive integer y is said to be B-smooth if its prime divisors are all in B. Typically B might be the first 200 primes and y might be a 50 bit integer.

* This work was supported by the MITACS NCE of Canada.

2 Integer Factorization

Starting from some very simple instructions, "make integer factorization faster in Maple", we have implemented the Quadratic Sieve factoring algorithm in a combination of Maple and C (which is accessed via Maple's capabilities for external linking).
  • Integer Factorization with a Neuromorphic Sieve
Integer Factorization with a Neuromorphic Sieve
John V. Monaco and Manuel M. Vindiola
U.S. Army Research Laboratory, Aberdeen Proving Ground, MD 21005
Email: [email protected], [email protected]

Abstract—The bound to factor large integers is dominated by the computational effort to discover numbers that are smooth, typically performed by sieving a polynomial sequence. On a von Neumann architecture, sieving has log-log amortized time complexity to check each value for smoothness. This work presents a neuromorphic sieve that achieves a constant time check for smoothness by exploiting two characteristic properties of neuromorphic architectures: constant time synaptic integration and massively parallel computation. The approach is validated by modifying msieve, one of the fastest publicly available integer factorization implementations, to use the IBM Neurosynaptic System (NS1e) as a coprocessor for the sieving stage.

I. INTRODUCTION

A number is said to be smooth if it is an integer composed [...] to represent n, the computational effort to factor n by trial division grows exponentially.

Dixon's factorization method attempts to construct a congruence of squares, x² ≡ y² mod n. If such a congruence is found, and x ≢ ±y mod n, then gcd(x − y, n) must be a nontrivial factor of n. A class of subexponential factoring algorithms, including the quadratic sieve, build on Dixon's method by specifying how to construct the congruence of squares through a linear combination of smooth numbers [5]. Given smoothness bound B, a number is B-smooth if it does not contain any prime factors greater than B. Additionally, let v = (e_1, e_2, ..., e_{π(B)}) be the exponents vector of a smooth number s, where s = ∏_{1≤i≤π(B)} p_i^{e_i}, p_i is the ith prime, and π(B) is the number of primes not greater than B.
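The gcd step behind Dixon's method fits in a few lines; a tiny worked example (the numbers are chosen here purely for illustration) showing how a congruence of squares yields a nontrivial factor:

```python
from math import gcd

n = 91                      # 7 * 13, a toy modulus
x, y = 10, 3                # 10^2 = 100 ≡ 9 = 3^2 (mod 91)
assert (x * x - y * y) % n == 0
assert x % n not in (y % n, (-y) % n)   # x is not ±y mod n

factor = gcd(x - y, n)
print(factor, n // factor)  # 7 13, a nontrivial split of n
```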
  • Sieving by Large Integers and Covering Systems of Congruences
JOURNAL OF THE AMERICAN MATHEMATICAL SOCIETY, Volume 20, Number 2, April 2007, Pages 495–517
S 0894-0347(06)00549-2. Article electronically published on September 19, 2006

SIEVING BY LARGE INTEGERS AND COVERING SYSTEMS OF CONGRUENCES
MICHAEL FILASETA, KEVIN FORD, SERGEI KONYAGIN, CARL POMERANCE, AND GANG YU

1. Introduction

Notice that every integer n satisfies at least one of the congruences n ≡ 0 (mod 2), n ≡ 0 (mod 3), n ≡ 1 (mod 4), n ≡ 1 (mod 6), n ≡ 11 (mod 12). A finite set of congruences, where each integer satisfies at least one of them, is called a covering system. A famous problem of Erdős from 1950 [4] is to determine whether for every N there is a covering system with distinct moduli greater than N. In other words, can the minimum modulus in a covering system with distinct moduli be arbitrarily large? In regards to this problem, Erdős writes in [6], "This is perhaps my favourite problem."

It is easy to see that in a covering system, the reciprocal sum of the moduli is at least 1. Examples with distinct moduli are known with least modulus 2, 3, and 4, where this reciprocal sum can be arbitrarily close to 1; see [10], §F13. Erdős and Selfridge [5] conjectured that this fails for all large enough choices of the least modulus. In fact, they made the following much stronger conjecture.

Conjecture 1. For any number B, there is a number N_B, such that in a covering system with distinct moduli greater than N_B, the sum of reciprocals of these moduli is greater than B.

A version of Conjecture 1 also appears in [7].
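The five congruences quoted at the start of this abstract really do form a covering system; a short illustrative check only needs to test one full period, the lcm of the moduli.

```python
from functools import reduce
from math import lcm

# (residue, modulus) pairs quoted in the introduction above
system = [(0, 2), (0, 3), (1, 4), (1, 6), (11, 12)]

period = reduce(lcm, (m for _, m in system))       # lcm(2, 3, 4, 6, 12) = 12
covers = all(any(n % m == r for r, m in system) for n in range(period))
print(period, covers)                              # 12 True
```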
  • Factoring Integers with a Brain-Inspired Computer John V
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS—I: REGULAR PAPERS

Factoring Integers with a Brain-Inspired Computer
John V. Monaco and Manuel M. Vindiola

Abstract—The bound to factor large integers is dominated by the computational effort to discover numbers that are B-smooth, i.e., integers whose largest prime factor does not exceed B. Smooth numbers are traditionally discovered by sieving a polynomial sequence, whereby the logarithmic sum of prime factors of each polynomial value is compared to a threshold. On a von Neumann architecture, this requires a large block of memory for the sieving interval and frequent memory updates, resulting in O(ln ln B) amortized time complexity to check each value for smoothness. This work presents a neuromorphic sieve that achieves a constant-time check for smoothness by reversing the roles of space and time from the von Neumann architecture and exploiting two characteristic properties of brain-inspired computation: massive parallelism and constant time synaptic integration. The effects on sieving performance of two common neuromorphic architectural constraints are examined: limited synaptic weight resolution, which forces the factor base to be [...]

• Constant-time synaptic integration: a single neuron in the brain may receive electrical potential inputs along synaptic connections from thousands of other neurons. The incoming potentials are continuously and instantaneously integrated to compute the neuron's membrane potential. Like the brain, neuromorphic architectures aim to perform synaptic integration in constant time, typically by leveraging physical properties of the underlying device.

Unlike the traditional CPU-based sieve, the factor base is represented in space (as spiking neurons) and the sieving interval in time (as successive time steps). Sieving is performed by a population of leaky integrate-and-fire (LIF) neurons whose dynamics are simple enough to be implemented on a range of current and future architectures.
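For contrast with the neuromorphic version, the conventional log-sum sieve described in the abstract can be sketched as follows; this is a simplified illustration (fixed factor base, a `slack` tolerance standing in for rounding error), not the paper's implementation.

```python
import math

def sieve_smooth(start, length, factor_base, slack=1.0):
    """Classic log-sum sieve: for each prime power q = p^k in range, add log p
    at every position divisible by q; positions whose accumulated sum comes
    within `slack` of log(value) are reported as probable smooth values."""
    logs = [0.0] * length
    for p in factor_base:
        q = p
        while q < start + length:
            first = (-start) % q                 # first index divisible by q
            for i in range(first, length, q):
                logs[i] += math.log(p)
            q *= p
    return [start + i for i in range(length)
            if logs[i] >= math.log(start + i) - slack]

fb = [2, 3, 5, 7, 11, 13]
print(sieve_smooth(1000, 60, fb))   # includes 1000, 1001, 1008, 1024, ... (13-smooth)
```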
  • On Distribution of Semiprime Numbers
ISSN 1066-369X, Russian Mathematics (Iz. VUZ), 2014, Vol. 58, No. 8, pp. 43–48. © Allerton Press, Inc., 2014.
Original Russian Text © Sh.T. Ishmukhametov, F.F. Sharifullina, 2014, published in Izvestiya Vysshikh Uchebnykh Zavedenii. Matematika, 2014, No. 8, pp. 53–59.

On Distribution of Semiprime Numbers
Sh. T. Ishmukhametov* and F. F. Sharifullina**
Kazan (Volga Region) Federal University, ul. Kremlyovskaya 18, Kazan, 420008 Russia
Received January 31, 2013

Abstract—A semiprime is a natural number which is the product of two (possibly equal) prime numbers. Let y be a natural number and g(y) be the probability for a number y to be semiprime. In this paper we derive an asymptotic formula to count g(y) for large y and evaluate its correctness for different y. We also introduce strongly semiprimes, i.e., numbers each of which is a product of two primes of large dimension, and investigate distribution of strongly semiprimes.

DOI: 10.3103/S1066369X14080052
Keywords: semiprime integer, strongly semiprime, distribution of semiprimes, factorization of integers, the RSA ciphering method.

By smoothness of a natural number n we mean possibility of its representation as a product of a large number of prime factors. A B-smooth number is a number all prime divisors of which are bounded from above by B. The concept of smoothness plays an important role in number theory and cryptography. Possibility of using the concept in cryptography is based on the fact that the procedure of decomposition of an integer into prime divisors (factorization) is a laborious computational process requiring significant calculating resources [1, 2].
  • Cracking RSA with Various Factoring Algorithms Brian Holt
Cracking RSA with Various Factoring Algorithms
Brian Holt

1 Abstract

For centuries, factoring products of large prime numbers has been recognized as a computationally difficult task by mathematicians. The modern encryption scheme RSA (short for Rivest, Shamir, and Adleman) uses products of large primes for secure communication protocols. In this paper, we analyze and compare four factorization algorithms which use elementary number theory to assess the safety of various RSA moduli.

2 Introduction

The origins of prime factorization can be traced back to around 300 B.C., when Euclid's theorem and the unique factorization theorem were proved in his work Elements [3]. For nearly two millennia, the simple trial division algorithm was used to factor numbers into primes. More efficient means were studied by mathematicians during the seventeenth and eighteenth centuries. In the 1640s, Pierre de Fermat devised what is known as Fermat's factorization method. Over one century later, Leonhard Euler improved upon Fermat's factorization method for numbers that take specific forms. Around this time, Adrien-Marie Legendre and Carl Gauss also contributed crucial ideas used in modern factorization schemes [3].

Prime factorization's potential for secure communication was not recognized until the twentieth century. However, economist William Jevons anticipated the "one-way" or "difficult-to-reverse" property of products of large primes in 1874. This crucial concept is the basis of modern public key cryptography. Decrypting information without access to the private key must be computationally complex to prevent malicious attacks. In his book The Principles of Science: A Treatise on Logic and Scientific Method, Jevons challenged the reader to factor a ten digit semiprime (now called Jevons' Number) [4].
  • Primality Testing and Integer Factorization in Public-Key Cryptography
Song Y. Yan | 371 pages | 29 Nov 2010 | Springer-Verlag New York Inc. | 9781441945860 | English | New York, NY, United States

Note that modular powers a^k mod p can be computed in O(log k) multiplications by expressing k in binary and repeatedly squaring; this square and multiply algorithm is equivalent to the Python one-liner pow(x, k, p).
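Written out, that square-and-multiply loop looks like this (a standard textbook sketch, shown here only for illustration and mirroring the built-in pow):

```python
def power_mod(x, k, p):
    """Square-and-multiply: compute x**k % p in O(log k) multiplications."""
    result = 1
    base = x % p
    while k > 0:
        if k & 1:                      # current binary digit of k is 1
            result = (result * base) % p
        base = (base * base) % p       # square for the next binary digit
        k >>= 1
    return result

assert power_mod(3, 20, 7) == pow(3, 20, 7)    # agrees with the built-in
print(power_mod(2, 10, 1000))                  # 24, since 2^10 = 1024
```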
  • Primality Testing and Sub-Exponential Factorization
Primality Testing and Sub-Exponential Factorization
David Emerson
Advisor: Howard Straubing
Boston College Computer Science Senior Thesis, May 2009

Abstract

This paper discusses the problems of primality testing and large number factorization. The first section is dedicated to a discussion of primality testing algorithms and their importance in real world applications. Over the course of the discussion the structure of the primality algorithms are developed rigorously and demonstrated with examples. This section culminates in the presentation and proof of the modern deterministic polynomial-time Agrawal-Kayal-Saxena algorithm for deciding whether a given n is prime. The second section is dedicated to the process of factorization of large composite numbers. While primality and factorization are mathematically tied in principle they are very different computationally. This fact is explored and current high powered factorization methods and the mathematical structures on which they are built are examined.

1 Introduction

Factorization and primality testing are important concepts in mathematics. From a purely academic motivation it is an intriguing question to ask how we are to determine whether a number is prime or not. The next logical question to ask is, if the number is composite, can we calculate its factors. The two questions are invariably related. If we can factor a number into its pieces then it is obviously not prime; if we can't, then we know that it is prime. The definition of primality is very much derived from factorability. As we progress through the known and developed primality tests and factorization algorithms it will begin to become clear that while primality and factorization are intertwined they occupy two very different levels of computational difficulty.
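In practice, primality testing usually starts with a fast probabilistic test rather than the deterministic AKS algorithm the thesis builds toward; below is a compact Miller–Rabin sketch (the standard algorithm, shown as illustrative code only).

```python
import random

def miller_rabin(n, rounds=20):
    """Probabilistic primality test: True means probable prime; composites
    are rejected with overwhelming probability after `rounds` random bases."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:            # write n - 1 = 2^s * d with d odd
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False         # a witnesses that n is composite
    return True

print(miller_rabin(2**61 - 1), miller_rabin(2**67 - 1))   # True False
```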