Some Integer Factorization Algorithms Using Elliptic Curves
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
-
Primality Testing and Integer Factorisation
Primality Testing and Integer Factorisation. Richard P. Brent, FAA, Computer Sciences Laboratory, Australian National University, Canberra, ACT 2601.

Abstract. The problem of finding the prime factors of large composite numbers has always been of mathematical interest. With the advent of public key cryptosystems it is also of practical importance, because the security of some of these cryptosystems, such as the Rivest-Shamir-Adleman (RSA) system, depends on the difficulty of factoring the public keys. In recent years the best known integer factorisation algorithms have improved greatly, to the point where it is now easy to factor a 60-decimal digit number, and possible to factor numbers larger than 120 decimal digits, given the availability of enough computing power. We describe several recent algorithms for primality testing and factorisation, give examples of their use and outline some applications.

1. Introduction. It has been known since Euclid's time (though first clearly stated and proved by Gauss in 1801) that any natural number N has a unique prime power decomposition N = p_1^{α_1} · p_2^{α_2} ··· p_k^{α_k} (1.1), where p_1 < p_2 < ··· < p_k are rational primes and α_j > 0. The prime powers p_j^{α_j} are called components of N, and we write p_j^{α_j} ∥ N. To compute the prime power decomposition we need: 1. an algorithm to test if an integer N is prime; 2. an algorithm to find a nontrivial factor f of a composite integer N. Given these there is a simple recursive algorithm to compute (1.1): if N is prime then stop, otherwise 1. find a nontrivial factor f of N; 2.
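The recursive decomposition described in this abstract can be illustrated with a short sketch; this is plain trial division, not the algorithms the paper surveys, and the function name and toy example are our own:

```python
def prime_power_decomposition(N):
    """Return the prime power decomposition of N as a dict {p: alpha},
    using trial division; a toy stand-in for the primality-testing and
    factoring algorithms the paper surveys."""
    factors = {}
    d = 2
    while d * d <= N:
        while N % d == 0:            # d is prime here: all smaller factors are gone
            factors[d] = factors.get(d, 0) + 1
            N //= d
        d += 1
    if N > 1:                        # leftover cofactor is prime
        factors[N] = factors.get(N, 0) + 1
    return factors

# Example: 720 = 2^4 * 3^2 * 5
print(prime_power_decomposition(720))   # {2: 4, 3: 2, 5: 1}
```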
X9.80–2005 Prime Number Generation, Primality Testing, and Primality Certificates
American National Standard for Financial Services X9.80–2005, Prime Number Generation, Primality Testing, and Primality Certificates. Accredited Standards Committee X9, Incorporated, Financial Industry Standards. Date Approved: August 15, 2005. American National Standards Institute.

American National Standards, Technical Reports and Guides developed through the Accredited Standards Committee X9, Inc., are copyrighted. Copying these documents for personal or commercial use outside X9 membership agreements is prohibited without express written permission of the Accredited Standards Committee X9, Inc. For additional information please contact ASC X9, Inc., P.O. Box 4035, Annapolis, Maryland 21403.

ANS X9.80–2005, Foreword. Approval of an American National Standard requires verification by ANSI that the requirements for due process, consensus, and other criteria for approval have been met by the standards developer. Consensus is established when, in the judgment of the ANSI Board of Standards Review, substantial agreement has been reached by directly and materially affected interests. Substantial agreement means much more than a simple majority, but not necessarily unanimity. Consensus requires that all views and objections be considered, and that a concerted effort be made toward their resolution. The use of American National Standards is completely voluntary; their existence does not in any respect preclude anyone, whether he has approved the standards or not, from manufacturing, marketing, purchasing, or using products, processes, or procedures not conforming to the standards.
Computing Prime Divisors in an Interval
MATHEMATICS OF COMPUTATION, Volume 84, Number 291, January 2015, Pages 339–354. S 0025-5718(2014)02840-8. Article electronically published on May 28, 2014. COMPUTING PRIME DIVISORS IN AN INTERVAL. MINKYU KIM AND JUNG HEE CHEON.

Abstract. We address the problem of finding a nontrivial divisor of a composite integer when it has a prime divisor in an interval. We show that this problem can be solved in time of the square root of the interval length with a similar amount of storage, by presenting two algorithms; one is probabilistic and the other is its derandomized version.

1. Introduction. Integer factorization is one of the most challenging problems in computational number theory. It has been studied for centuries, and it has been intensively investigated after introducing the RSA cryptosystem [18]. The difficulty of integer factorization has been examined not only in pure factoring situations but also in various modified situations. One such approach is to find a nontrivial divisor of a composite integer when it has prime divisors of special form. These include Pollard's p − 1 algorithm [15], Williams' p + 1 algorithm [20], and others. On the other hand, owing to the importance and the usage of integer factorization in cryptography, one needs to examine this problem when some partial information about divisors is revealed. This side information might be exposed through the procedures of generating prime numbers or through some attacks against crypto devices or protocols. This paper deals with integer factorization given the approximation of divisors, and it is motivated by the above mentioned research.
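Pollard's p − 1 algorithm, cited here as an example of exploiting prime divisors of special form, can be sketched as follows; this is stage 1 only, and the bound B and toy modulus are illustrative assumptions, not taken from the paper:

```python
from math import gcd

def pollard_p_minus_1(n, B):
    """Stage-1 Pollard p-1: succeeds when some (but not all) prime factors p of n
    satisfy p - 1 | B!, i.e. p - 1 is built only from small prime powers."""
    a = 2
    for k in range(2, B + 1):
        a = pow(a, k, n)              # a = 2^(2*3*...*B) mod n when the loop ends
    g = gcd(a - 1, n)
    if 1 < g < n:
        return g
    return None                        # real implementations retry with another base or adjust B

# Toy example: 1403 = 23 * 61, and 61 - 1 = 60 = 2^2 * 3 * 5 divides 720 = 6!,
# while 23 - 1 = 22 = 2 * 11 does not, so stage 1 splits off 61.
print(pollard_p_minus_1(1403, B=6))   # 61
```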
An Analysis of Primality Testing and Its Use in Cryptographic Applications
An Analysis of Primality Testing and Its Use in Cryptographic Applications. Jake Massimo. Thesis submitted to the University of London for the degree of Doctor of Philosophy. Information Security Group, Department of Information Security, Royal Holloway, University of London, 2020.

Declaration. These doctoral studies were conducted under the supervision of Prof. Kenneth G. Paterson. The work presented in this thesis is the result of original research carried out by myself, in collaboration with others, whilst enrolled in the Department of Mathematics as a candidate for the degree of Doctor of Philosophy. This work has not been submitted for any other degree or award in any other university or educational establishment. Jake Massimo, April 2020.

Abstract. Due to their fundamental utility within cryptography, prime numbers must be easy to both recognise and generate. For this, we depend upon primality testing. Whether used as a tool to validate prime parameters or as part of the algorithm used to generate random prime numbers, primality tests are found near universally within a cryptographer's tool-kit. In this thesis, we study in depth primality tests and their use in cryptographic applications. We first provide a systematic analysis of the implementation landscape of primality testing within cryptographic libraries and mathematical software. We then demonstrate how these tests perform under adversarial conditions, where the numbers being tested are not generated randomly, but instead by a possibly malicious party. We show that many of the libraries studied provide primality tests that are not prepared for testing on adversarial input, and therefore can declare composite numbers as being prime with a high probability.
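A minimal sketch of the Miller-Rabin probable-prime test that such libraries typically build on is shown below; the thesis's point is that fixed or too few bases can be fooled by adversarial composites, so this sketch uses random bases (the round count and example inputs are our own choices):

```python
import random

def is_probable_prime(n, rounds=40):
    """Miller-Rabin with random bases; a composite input is declared prime
    with probability at most 4**(-rounds)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    # write n - 1 = 2^s * d with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False               # a witnesses that n is composite
    return True

print(is_probable_prime(2**89 - 1))    # True: 2^89 - 1 is a Mersenne prime
print(is_probable_prime(3215031751))   # False: strong pseudoprime only to bases 2, 3, 5, 7
```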
Secure Evaluation of Polynomial Using Privacy Ring Homomorphisms
Alexander Rostovtsev, Alexey Bogdanov and Mikhail Mikhaylov, St. Petersburg State Polytechnic University, [email protected]. Secure evaluation of polynomial using privacy ring homomorphisms.

A method for secure evaluation of a polynomial y = F(x_1, …, x_k) over some rings on an untrusted computer is proposed. Two models of the untrusted computer are considered: passive and active. In the passive model the untrusted computer correctly computes the polynomial F and tries to learn the secret input (x_1, …, x_k) and output y. In the active model the untrusted computer tries to learn the input and output, and tries to change the correct output y so that this change cannot be detected. Secure computation is achieved by using a one-time privacy ring homomorphism Z/n → Z/n[z]/(f(z)), n = pq, generated by the trusted computer. In the case of the active model a secret check point v = F(u_1, …, u_k) is used. The trusted computer generates a polynomial f(z) = (z − t)(z + t), t ∈ Z/n, and inputs X_i(z) ∈ Z/n[z]/(f(z)) such that X_i(t) ≡ x_i (mod n) for the passive model, and f(z) = (z − t_1)(z − t_2)(z − t_3), t_i ∈ Z/n, and inputs X_i(z) ∈ Z/n[z]/(f(z)) such that X_i(t_1) ≡ x_i (mod n) and X_i(t_2) ≡ u_i (mod n) for the active model. The untrusted computer computes the function Y(z) = F(X_1(z), …, X_k(z)) in the ring Z/n[z]/(f(z)). For the passive model the trusted computer determines the secret output y ≡ Y(t) (mod n). For the active model the trusted computer checks that Y(t_2) ≡ v (mod n), then determines the correct output y ≡ Y(t_1) (mod n).
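A minimal sketch of the passive-model mechanics for the quadratic case f(z) = (z − t)(z + t): elements of Z/n[z]/(f(z)) are stored as pairs (c0, c1) meaning c0 + c1·z, and multiplication reduces z^2 to t^2. The example polynomial F, the toy primes, and all names are illustrative assumptions, and no security claim is made:

```python
import random

# Trusted computer's parameters (toy sizes; a real n = p*q would be large)
n = 10007 * 10009           # hypothetical modulus n = p*q
t = random.randrange(2, n)  # secret root of f(z) = (z - t)(z + t) = z^2 - t^2
t2 = (t * t) % n            # only z^2 ≡ t^2 (mod f) is needed for ring arithmetic

def add(a, b):              # addition in Z/n[z]/(z^2 - t^2)
    return ((a[0] + b[0]) % n, (a[1] + b[1]) % n)

def mul(a, b):              # multiplication, reducing z^2 to t^2
    c0 = (a[0] * b[0] + a[1] * b[1] * t2) % n
    c1 = (a[0] * b[1] + a[1] * b[0]) % n
    return (c0, c1)

def encode(x):              # X(z) = x + r*(z - t), so X(t) ≡ x (mod n)
    r = random.randrange(n)
    return ((x - r * t) % n, r)

def decode(X):              # trusted computer evaluates at the secret point z = t
    return (X[0] + X[1] * t) % n

# "Untrusted" evaluation of an example polynomial F(x1, x2) = x1*x2 + 3*x1
x1, x2 = 123, 456
X1, X2 = encode(x1), encode(x2)
Y = add(mul(X1, X2), mul((3, 0), X1))
assert decode(Y) == (x1 * x2 + 3 * x1) % n
```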
Generalized Kakeya Sets for Polynomial Evaluation and Faster Computation of Fermionants
Generalized Kakeya sets for polynomial evaluation and faster computation of fermionants. Andreas Björklund (Department of Computer Science, Lund University, Lund, Sweden), Petteri Kaski (Department of Computer Science, Aalto University, Helsinki, Finland), and Ryan Williams (Department of Electrical Engineering and Computer Science & CSAIL, MIT, Cambridge, MA, USA).

Abstract. We present two new data structures for computing values of an n-variate polynomial P of degree at most d over a finite field of q elements. Assuming that d divides q − 1, our first data structure relies on (d + 1)^{n+2} tabulated values of P to produce the value of P at any of the q^n points using O(nqd^2) arithmetic operations in the finite field. Assuming that s divides d and d/s divides q − 1, our second data structure assumes that P satisfies a degree-separability condition and relies on (d/s + 1)^{n+s} tabulated values to produce the value of P at any point using O(nq^s sq) arithmetic operations. Our data structures are based on generalizing upper-bound constructions due to Mockenhaupt and Tao (2004), Saraf and Sudan (2008), and Dvir (2009) for Kakeya sets in finite vector spaces from linear to higher-degree polynomial curves. As an application we show that the new data structures enable a faster algorithm for computing integer-valued fermionants, a family of self-reducible polynomial functions introduced by Chandrasekharan and Wiese (2011) that captures numerous fundamental algebraic and combinatorial invariants such as the determinant, the permanent, the number of Hamiltonian cycles in a directed multigraph, as well as certain partition functions of strongly correlated electron systems in statistical physics.
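The line-restriction principle that these constructions generalize (from lines to higher-degree curves) can be illustrated in a few lines: restricting an n-variate polynomial of total degree at most d to a line gives a univariate polynomial of degree at most d in the line parameter, so d + 1 tabulated values on the line determine P everywhere on it. This sketch is not the paper's data structure; the field size, polynomial, and line below are arbitrary choices:

```python
q = 101                       # a prime field GF(q), chosen for the example
d = 3                         # total degree bound

def P(x, y):                  # an example polynomial of total degree 3
    return (2*x**3 + 5*x*y**2 + 7*y + 1) % q

def lagrange_eval(points, s):
    """Evaluate at s the unique degree-<=d polynomial through the given
    (parameter, value) pairs, working over GF(q) with Fermat inverses."""
    total = 0
    for i, (si, vi) in enumerate(points):
        num, den = 1, 1
        for j, (sj, _) in enumerate(points):
            if i != j:
                num = num * (s - sj) % q
                den = den * (si - sj) % q
        total = (total + vi * num * pow(den, q - 2, q)) % q
    return total

# Tabulate P on d+1 points of the line {a + s*b : s in GF(q)} ...
a, b = (4, 9), (10, 3)
table = [(s, P((a[0] + s*b[0]) % q, (a[1] + s*b[1]) % q)) for s in range(d + 1)]

# ... and recover P at any other point of the same line without evaluating P there.
s0 = 77
x0, y0 = (a[0] + s0*b[0]) % q, (a[1] + s0*b[1]) % q
assert lagrange_eval(table, s0) == P(x0, y0)
```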
Factoring Polynomials Over Finite Fields
Factoring Polynomials over Finite Fields (more precisely: factoring and testing irreducibility of sparse polynomials over small finite fields). Richard P. Brent, MSI, ANU; joint work with Paul Zimmermann, INRIA, Nancy. 27 August 2009.

Outline. Introduction: polynomials over finite fields; irreducible and primitive polynomials; Mersenne primes. Part 1, Testing irreducibility: irreducibility criteria; modular composition; three algorithms; comparison of the algorithms; the "best" algorithm; some computational results. Part 2, Factoring polynomials: distinct degree factorization; avoiding GCDs, blocking; another level of blocking; average-case complexity; new primitive trinomials.

Polynomials over finite fields. We consider univariate polynomials P(x) over a finite field F. The algorithms apply, with minor changes, for any small positive characteristic, but since time is limited we assume that the characteristic is two, and F = Z/2Z = GF(2). P(x) is irreducible if it has no nontrivial factors. If P(x) is irreducible of degree r, then [Gauss] x^{2^r} = x mod P(x). Thus P(x) divides the polynomial P_r(x) = x^{2^r} − x. In fact, P_r(x) is the product of all irreducible polynomials of degree d, where d runs over the divisors of r.

Counting irreducible polynomials. Let N(d) be the number of irreducible polynomials of degree d. Thus ∑_{d|r} d·N(d) = deg(P_r) = 2^r. By Möbius inversion we see that r·N(r) = ∑_{d|r} µ(d)·2^{r/d}. Thus, the number of irreducible polynomials of degree r is N(r) = 2^r/r + O(2^{r/2}/r). Since there are 2^r polynomials of degree r, the probability that a randomly selected polynomial is irreducible is ∼ 1/r → 0 as r → +∞.
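The Gauss criterion quoted above is the basis of irreducibility testing. Below is a minimal sketch of Rabin's irreducibility test over GF(2), with polynomials packed into Python integers (bit i holds the coefficient of x^i); the talk's actual algorithms use modular composition and blocking, which this sketch does not attempt:

```python
def gf2_mulmod(a, b, m):
    """Multiply a*b in GF(2)[x] modulo m; a and b must already be reduced mod m."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a.bit_length() == m.bit_length():
            a ^= m
    return r

def gf2_mod(a, m):
    while a.bit_length() >= m.bit_length():
        a ^= m << (a.bit_length() - m.bit_length())
    return a

def gf2_gcd(a, b):
    while b:
        a, b = b, gf2_mod(a, b)
    return a

def prime_divisors(r):
    out, p = [], 2
    while p * p <= r:
        if r % p == 0:
            out.append(p)
            while r % p == 0:
                r //= p
        p += 1
    if r > 1:
        out.append(r)
    return out

def is_irreducible_gf2(P):
    """Rabin's test: P of degree r is irreducible over GF(2) iff
    x^(2^r) ≡ x (mod P) and gcd(x^(2^(r/p)) - x, P) = 1 for each prime p | r."""
    r = P.bit_length() - 1
    if r < 2:
        return r == 1                     # x and x + 1 are irreducible
    x = 0b10
    t, powers = x, {}
    for i in range(1, r + 1):
        t = gf2_mulmod(t, t, P)           # t = x^(2^i) mod P
        powers[i] = t
    if powers[r] != x:
        return False
    for p in prime_divisors(r):
        if gf2_gcd(powers[r // p] ^ x, P) != 1:
            return False
    return True

print(is_irreducible_gf2((1 << 7) | 0b11))   # x^7 + x + 1: True (a primitive trinomial)
print(is_irreducible_gf2((1 << 8) | 0b11))   # x^8 + x + 1: False (divisible by x^2 + x + 1)
```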
Integer Factorization and Computing Discrete Logarithms in Maple
Integer Factorization and Computing Discrete Logarithms in Maple. Aaron Bradford, Michael Monagan, Colin Percival. [email protected], [email protected], [email protected]. Department of Mathematics, Simon Fraser University, Burnaby, B.C., V5A 1S6, Canada.

1. Introduction. As part of our MITACS research project at Simon Fraser University, we have investigated algorithms for integer factorization and computing discrete logarithms. We have implemented a quadratic sieve algorithm for integer factorization in Maple to replace Maple's implementation of the Morrison-Brillhart continued fraction algorithm, which was done by Gaston Gonnet in the early 1980's. We have also implemented an index calculus algorithm for discrete logarithms in GF(q) to replace Maple's implementation of Shanks' baby-step giant-step algorithm, also done by Gaston Gonnet in the early 1980's. In this paper we describe the algorithms and the optimizations we made to them. We give some details of our Maple implementations and present some initial timings. Since Maple is an interpreted language, see [7], there is room for improvement of both implementations by coding critical parts of the algorithms in C. For example, one of the bottlenecks of the index calculus algorithm is finding integers which are B-smooth. Let B be a set of primes. A positive integer y is said to be B-smooth if its prime divisors are all in B. Typically B might be the first 200 primes and y might be a 50 bit integer. This work was supported by the MITACS NCE of Canada.

2. Integer Factorization. Starting from some very simple instructions ("make integer factorization faster in Maple"), we have implemented the Quadratic Sieve factoring algorithm in a combination of Maple and C (which is accessed via Maple's capabilities for external linking).
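The B-smoothness bottleneck mentioned above amounts to trial division over a factor base; a reference sketch (in Python, not the Maple/C implementation described) might look like this:

```python
def smooth_exponent_vector(y, factor_base):
    """Trial-divide y over the factor base; return its exponent vector if y is
    B-smooth (factors completely over the base), otherwise None."""
    exponents = [0] * len(factor_base)
    for i, p in enumerate(factor_base):
        while y % p == 0:
            y //= p
            exponents[i] += 1
        if y == 1:
            break
    return exponents if y == 1 else None

base = [2, 3, 5, 7, 11, 13]
print(smooth_exponent_vector(2**5 * 3 * 13**2, base))  # [5, 1, 0, 0, 0, 2]
print(smooth_exponent_vector(2 * 17, base))            # None: 17 is not in the base
```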
The Fast Fourier Transform and Applications to Multiplication
The Fast Fourier Transform and Applications to Multiplication. Analysis of Algorithms, prepared by John Reif, Ph.D. Topics and readings: the Fast Fourier Transform (reading selection: CLR, Chapter 30); advanced material: using the FFT to solve other multipoint evaluation problems, and applications to multiplication.

Nth roots of unity. Assume a commutative ring (R, +, ·, 0, 1). ω is a principal nth root of unity if ω^k ≠ 1 for k = 1, …, n−1, ω^n = 1, and ∑_{j=0}^{n−1} ω^{jp} = 0 for 1 ≤ p < n. Example: ω = e^{2πi/n} for the complex numbers; for instance ω = e^{2πi/8} is a principal 8th root of unity.

Fourier matrix. M_n(ω) is the n×n matrix with rows (1, 1, …, 1), (1, ω, …, ω^{n−1}), (1, ω^2, …, ω^{2(n−1)}), …, (1, ω^{n−1}, …, ω^{(n−1)(n−1)}), so M(ω)_{ij} = ω^{ij} for 0 ≤ i, j < n.

Discrete Fourier Transform. Input: a column n-vector a = (a_0, …, a_{n−1})^T. Output: the n-vector given by the product of the Fourier matrix and the input vector, DFT_n(a) = M(ω) × a = (f_0, …, f_{n−1})^T, where f_i = ∑_{k=0}^{n−1} a_k ω^{ik}.

Inverse Fourier Transform. DFT_n^{−1}(a) = M(ω)^{−1} × a. Theorem: (M(ω)^{−1})_{ij} = (1/n) ω^{−ij}. Proof: we must show M(ω)·M(ω)^{−1} = I. Indeed (1/n) ∑_{k=0}^{n−1} ω^{ik} ω^{−kj} = (1/n) ∑_{k=0}^{n−1} ω^{k(i−j)}, which equals 0 if i − j ≠ 0 and 1 if i − j = 0, using the identity ∑_{k=0}^{n−1} ω^{kp} = 0 for 1 ≤ p < n.

The Fourier transform is polynomial evaluation at the roots of unity. Input: a column n-vector a = (a_0, …, a_{n−1})^T. Output: the n-vector (f_0, …, f_{n−1})^T = DFT_n(a) of values of the polynomial f(x) = ∑_j a_j x^j at the n roots of unity, f_i = f(ω^i).
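A minimal recursive radix-2 FFT matching the convention above (the forward transform evaluates at the powers of ω = e^{2πi/n}, the inverse uses ω^{−1} and a final 1/n factor), together with its use for polynomial multiplication via pointwise products; input lengths are padded to a power of two:

```python
import cmath

def fft(a, invert=False):
    """Recursive radix-2 DFT; len(a) must be a power of two.
    Forward: f_i = sum_k a_k * w^{ik} with w = e^{2*pi*i/n}; inverse uses w^{-1}."""
    n = len(a)
    if n == 1:
        return list(a)
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    sign = -1 if invert else 1
    out = [0] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n)
        out[k] = even[k] + w * odd[k]
        out[k + n // 2] = even[k] - w * odd[k]
    return out

def multiply(p, q):
    """Multiply two polynomials (coefficient lists) via the convolution theorem."""
    n = 1
    while n < len(p) + len(q) - 1:
        n *= 2
    fp = fft(p + [0] * (n - len(p)))
    fq = fft(q + [0] * (n - len(q)))
    point = [x * y for x, y in zip(fp, fq)]
    coeffs = fft(point, invert=True)            # still needs the 1/n factor
    return [round((c / n).real) for c in coeffs[:len(p) + len(q) - 1]]

# (1 + 2x)(3 + 4x + 5x^2) = 3 + 10x + 13x^2 + 10x^3
print(multiply([1, 2], [3, 4, 5]))
```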
Integer Factorization with a Neuromorphic Sieve
Integer Factorization with a Neuromorphic Sieve. John V. Monaco and Manuel M. Vindiola, U.S. Army Research Laboratory, Aberdeen Proving Ground, MD 21005. Email: [email protected], [email protected].

Abstract. The bound to factor large integers is dominated by the computational effort to discover numbers that are smooth, typically performed by sieving a polynomial sequence. On a von Neumann architecture, sieving has log-log amortized time complexity to check each value for smoothness. This work presents a neuromorphic sieve that achieves a constant time check for smoothness by exploiting two characteristic properties of neuromorphic architectures: constant time synaptic integration and massively parallel computation. The approach is validated by modifying msieve, one of the fastest publicly available integer factorization implementations, to use the IBM Neurosynaptic System (NS1e) as a coprocessor for the sieving stage.

I. INTRODUCTION. A number is said to be smooth if it is an integer composed […] to represent n, the computational effort to factor n by trial division grows exponentially. Dixon's factorization method attempts to construct a congruence of squares, x^2 ≡ y^2 mod n. If such a congruence is found, and x ≢ ±y mod n, then gcd(x − y, n) must be a nontrivial factor of n. A class of subexponential factoring algorithms, including the quadratic sieve, build on Dixon's method by specifying how to construct the congruence of squares through a linear combination of smooth numbers [5]. Given smoothness bound B, a number is B-smooth if it does not contain any prime factors greater than B. Additionally, let v = (e_1, e_2, …, e_{π(B)}) be the exponents vector of a smooth number s, where s = ∏_{1≤i≤π(B)} p_i^{v_i}, p_i is the ith prime, and π(B) is the number of primes not greater than B.
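The classical (von Neumann) sieving stage that the neuromorphic approach replaces can be sketched as follows: values of Q(x) = (x + ⌈√n⌉)^2 − n are trial-divided by factor-base primes only at positions hitting the roots of Q modulo each prime, and positions whose residual reaches 1 are B-smooth. The modulus, factor base, and interval below are illustrative choices, not the paper's parameters:

```python
from math import isqrt

def sieve_smooth(n, factor_base, interval):
    """Sieve Q(x) = (x + m)^2 - n, m = ceil(sqrt(n)), over [0, interval);
    return the (x, Q(x)) pairs whose value factors completely over the base."""
    m = isqrt(n)
    if m * m < n:
        m += 1
    residual = [(x + m) ** 2 - n for x in range(interval)]
    for p in factor_base:
        # roots of (r + m)^2 ≡ n (mod p), found by brute force since p is small
        for r in range(p):
            if ((r + m) ** 2 - n) % p == 0:
                for x in range(r, interval, p):
                    while residual[x] > 0 and residual[x] % p == 0:
                        residual[x] //= p
    return [(x, (x + m) ** 2 - n) for x in range(interval) if residual[x] == 1]

n = 87463
base = [2, 3, 13, 17, 19, 29]
for x, q in sieve_smooth(n, base, 200):
    print(x, q)        # every printed q is B-smooth over the chosen base
```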
Cracking RSA with Various Factoring Algorithms Brian Holt
Cracking RSA with Various Factoring Algorithms. Brian Holt.

1. Abstract. For centuries, factoring products of large prime numbers has been recognized as a computationally difficult task by mathematicians. The modern encryption scheme RSA (short for Rivest, Shamir, and Adleman) uses products of large primes for secure communication protocols. In this paper, we analyze and compare four factorization algorithms which use elementary number theory to assess the safety of various RSA moduli.

2. Introduction. The origins of prime factorization can be traced back to around 300 B.C. when Euclid's theorem and the unique factorization theorem were proved in his work Elements [3]. For nearly two millennia, the simple trial division algorithm was used to factor numbers into primes. More efficient means were studied by mathematicians during the seventeenth and eighteenth centuries. In the 1640s, Pierre de Fermat devised what is known as Fermat's factorization method. Over one century later, Leonhard Euler improved upon Fermat's factorization method for numbers that take specific forms. Around this time, Adrien-Marie Legendre and Carl Gauss also contributed crucial ideas used in modern factorization schemes [3]. Prime factorization's potential for secure communication was not recognized until the twentieth century. However, economist William Jevons anticipated the "one-way" or "difficult-to-reverse" property of products of large primes in 1874. This crucial concept is the basis of modern public key cryptography. Decrypting information without access to the private key must be computationally complex to prevent malicious attacks. In his book The Principles of Science: A Treatise on Logic and Scientific Method, Jevons challenged the reader to factor a ten digit semiprime (now called Jevons' Number) [4].
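Fermat's factorization method, mentioned above, can be sketched in a few lines; it is effective when the two factors of an odd semiprime are close to √n (the example semiprime here is a standard toy, not Jevons' number):

```python
from math import isqrt

def fermat_factor(n):
    """Fermat's method: try a = ceil(sqrt(n)), a+1, ... until a^2 - n is a
    perfect square b^2; then n = (a - b)(a + b). Assumes n is odd."""
    a = isqrt(n)
    if a * a < n:
        a += 1
    while True:
        b2 = a * a - n
        b = isqrt(b2)
        if b * b == b2:
            return a - b, a + b
        a += 1

# 8051 = 90^2 - 7^2 = (90 - 7)(90 + 7) = 83 * 97
print(fermat_factor(8051))   # (83, 97)
```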
Sequences of Numbers Obtained by Digit and Iterative Digit Sums of Sophie Germain Primes and Its Variants
Global Journal of Pure and Applied Mathematics. ISSN 0973-1768, Volume 12, Number 2 (2016), pp. 1473-1480. © Research India Publications, http://www.ripublication.com. Sequences of numbers obtained by digit and iterative digit sums of Sophie Germain primes and its variants. Sheila A. Bishop and Hilary I. Okagbue (Department of Mathematics, Covenant University, Canaanland, Ota, Nigeria), Muminu O. Adamu (Department of Mathematics, University of Lagos, Akoka, Lagos, Nigeria) and Funminiyi A. Olajide (Department of Computer and Information Sciences, Covenant University, Canaanland, Ota, Nigeria).

Abstract. Sequences were generated by the digit and iterative digit sums of Sophie Germain and Safe primes and their variants. The results of the digit and iterative digit sum of Sophie Germain and Safe primes were almost the same. The same applied to the square and cube of the respective primes. Also, the results of the digit and iterative digit sum of primes that are not Sophie Germain are the same as those of primes that are not Safe. The results of the digit and iterative digit sum of primes that are either Sophie Germain or Safe are like the combination of the results of the respective primes when considered separately. Keywords: Sophie Germain prime, Safe prime, digit sum, iterative digit sum.

Introduction. In number theory, many types of primes exist; one such type is the Sophie Germain prime. A prime number p is said to be a Sophie Germain prime if 2p + 1 is also prime. A prime number q = 2p + 1 is known as a safe prime. Sophie Germain primes were named after the famous French mathematician, physicist and philosopher Sophie Germain (1776-1831).
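The computations described can be reproduced with a short sketch; the trial-division primality test and the range bound are our own simplifications, adequate only for small ranges:

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def digit_sum(n):
    return sum(int(c) for c in str(n))

def iterated_digit_sum(n):
    """Repeat the digit sum until a single digit remains (the digital root)."""
    while n >= 10:
        n = digit_sum(n)
    return n

sophie_germain = [p for p in range(2, 200) if is_prime(p) and is_prime(2 * p + 1)]
safe = [2 * p + 1 for p in sophie_germain]

print(sophie_germain)                                  # 2, 3, 5, 11, 23, 29, 41, 53, ...
print([digit_sum(p) for p in sophie_germain])
print([iterated_digit_sum(p) for p in sophie_germain])
print([iterated_digit_sum(q) for q in safe])
```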