Primality Testing∗
Recommended publications
Primality Testing and Integer Factorisation
Primality Testing and Integer Factorisation. Richard P. Brent, FAA. Computer Sciences Laboratory, Australian National University, Canberra, ACT 2601.

Abstract. The problem of finding the prime factors of large composite numbers has always been of mathematical interest. With the advent of public key cryptosystems it is also of practical importance, because the security of some of these cryptosystems, such as the Rivest-Shamir-Adleman (RSA) system, depends on the difficulty of factoring the public keys. In recent years the best known integer factorisation algorithms have improved greatly, to the point where it is now easy to factor a 60-decimal digit number, and possible to factor numbers larger than 120 decimal digits, given the availability of enough computing power. We describe several recent algorithms for primality testing and factorisation, give examples of their use and outline some applications.

1. Introduction. It has been known since Euclid's time (though first clearly stated and proved by Gauss in 1801) that any natural number N has a unique prime power decomposition

N = p_1^{α_1} p_2^{α_2} ··· p_k^{α_k}   (1.1)

(p_1 < p_2 < ··· < p_k rational primes, α_j > 0). The prime powers p_j^{α_j} are called the components of N, and we write p_j^{α_j} ‖ N. To compute the prime power decomposition we need:

1. An algorithm to test whether an integer N is prime.
2. An algorithm to find a nontrivial factor f of a composite integer N.

Given these there is a simple recursive algorithm to compute (1.1): if N is prime then stop; otherwise
1. find a nontrivial factor f of N;
2. apply the algorithm recursively to f and N/f.
A small sketch of this scheme is given below.
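A minimal Python sketch of this recursive scheme, using trial division both as the primality test and as the factor-finding step. It is adequate only for small N and is not one of the advanced algorithms Brent's paper describes; the function names are illustrative.

```python
def smallest_factor(n):
    """Return the smallest divisor of n that is >= 2 (n itself if n is prime)."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

def prime_power_decomposition(n):
    """Return the decomposition (1.1) of n as a list of (prime, exponent) pairs."""
    components = []
    while n > 1:
        p = smallest_factor(n)      # p is prime, being the smallest divisor > 1
        alpha = 0
        while n % p == 0:           # strip off the full component p^alpha
            n //= p
            alpha += 1
        components.append((p, alpha))
    return components

# Example: 360 = 2^3 * 3^2 * 5
print(prime_power_decomposition(360))   # [(2, 3), (3, 2), (5, 1)]
```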
An Analysis of Primality Testing and Its Use in Cryptographic Applications
An Analysis of Primality Testing and Its Use in Cryptographic Applications. Jake Massimo. Thesis submitted to the University of London for the degree of Doctor of Philosophy. Information Security Group, Department of Information Security, Royal Holloway, University of London, 2020.

Declaration. These doctoral studies were conducted under the supervision of Prof. Kenneth G. Paterson. The work presented in this thesis is the result of original research carried out by myself, in collaboration with others, whilst enrolled in the Department of Mathematics as a candidate for the degree of Doctor of Philosophy. This work has not been submitted for any other degree or award in any other university or educational establishment. Jake Massimo, April 2020.

Abstract. Due to their fundamental utility within cryptography, prime numbers must be easy to both recognise and generate. For this, we depend upon primality testing. Whether used as a tool to validate prime parameters or as part of the algorithm used to generate random prime numbers, primality tests are found near universally within a cryptographer's tool-kit. In this thesis, we study in depth primality tests and their use in cryptographic applications. We first provide a systematic analysis of the implementation landscape of primality testing within cryptographic libraries and mathematical software. We then demonstrate how these tests perform under adversarial conditions, where the numbers being tested are not generated randomly, but instead by a possibly malicious party. We show that many of the libraries studied provide primality tests that are not prepared for testing on adversarial input, and therefore can declare composite numbers as being prime with a high probability.
Primes and Primality Testing
Primes and Primality Testing: A Technological/Historical Perspective. Jennifer Ellis, Department of Mathematics and Computer Science.

What is a prime number? A number p greater than one is prime if and only if the only divisors of p are 1 and p. Examples: 2, 3, 5, and 7. A few larger examples: 71887, 524287, 65537, 2^127 − 1.

Primality Testing: Origins. Eratosthenes (276–194 B.C.) developed the "sieve" method. Nicknamed Beta – "second place" in many different academic disciplines – he also made contributions to geometry and to the approximation of the Earth's circumference. [Image: www-history.mcs.st-andrews.ac.uk/PictDisplay/Eratosthenes.html]

Sieve of Eratosthenes. [Slide: a grid of the integers 2 through 100, to be sieved.]

We only need to "sieve" the multiples of numbers less than 10. Why? (10)(10) = 100 and (p)(q) ≤ 100: consider pq where p > 10; then for pq ≤ 100, q must be less than 10. By sieving all the multiples of numbers less than 10 (here, multiples of q), we have removed all composite numbers less than 100.
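A short Python sketch of the sieve, not taken from the original slides, illustrating the point above: crossing out multiples of the numbers up to the square root of the limit suffices.

```python
def sieve_of_eratosthenes(limit):
    """Return the list of primes <= limit by crossing out multiples."""
    is_prime = [True] * (limit + 1)
    is_prime[0:2] = [False, False]           # 0 and 1 are not prime
    p = 2
    while p * p <= limit:                     # only need to sieve with p <= sqrt(limit)
        if is_prime[p]:
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False
        p += 1
    return [n for n in range(2, limit + 1) if is_prime[n]]

print(sieve_of_eratosthenes(100))   # the 25 primes 2, 3, 5, ..., 97
```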
Primality Testing for Beginners
STUDENT MATHEMATICAL LIBRARY, Volume 70. Primality Testing for Beginners. Lasse Rempe-Gillen, Rebecca Waldecker. American Mathematical Society, Providence, Rhode Island. http://dx.doi.org/10.1090/stml/070

Editorial Board: Satyan L. Devadoss, John Stillwell, Gerald B. Folland (Chair), Serge Tabachnikov.

The cover illustration is a variant of the Sieve of Eratosthenes (Section 1.5), showing the integers from 1 to 2704 colored by the number of their prime factors, including repeats. The illustration was created using MATLAB. The back cover shows a phase plot of the Riemann zeta function (see Appendix A), which appears courtesy of Elias Wegert (www.visual.wegert.com).

2010 Mathematics Subject Classification: Primary 11-01, 11-02, 11Axx, 11Y11, 11Y16. For additional information and updates on this book, visit www.ams.org/bookpages/stml-70

Library of Congress Cataloging-in-Publication Data: Rempe-Gillen, Lasse, 1978– author. [Primzahltests für Einsteiger. English] Primality testing for beginners / Lasse Rempe-Gillen, Rebecca Waldecker. pages cm. — (Student mathematical library; volume 70). Translation of: Primzahltests für Einsteiger: Zahlentheorie – Algorithmik – Kryptographie. Includes bibliographical references and index. ISBN 978-0-8218-9883-3 (alk. paper). 1. Number theory. I. Waldecker, Rebecca, 1979– author. II. Title. QA241.R45813 2014 512.72—dc23 2013032423

Copying and reprinting. Individual readers of this publication, and nonprofit libraries acting for them, are permitted to make fair use of the material, such as to copy a chapter for use in teaching or research. Permission is granted to quote brief passages from this publication in reviews, provided the customary acknowledgment of the source is given.
Miller–Rabin Test
Lecture 31: Miller–Rabin Test.

Recall. In the previous lecture we considered an efficient randomized algorithm to generate prime numbers that need n bits in their binary representation. This algorithm sampled a random element in the range {2^{n−1}, 2^{n−1} + 1, ..., 2^n − 1} and tested whether it is a prime number or not. By the Prime Number Theorem, we are extremely likely to hit a prime number. So, all that remains is an algorithm to test whether the random sample we have chosen is a prime number or not.

Primality Testing. Given an n-bit number N as input, we have to ascertain whether N is a prime number or not in time polynomial in n. Only in 2002 did Agrawal–Kayal–Saxena construct a deterministic polynomial-time algorithm for primality testing. That is, the algorithm will always run in time polynomial in n: for any input N (that has n bits in its binary representation), if N is a prime number, the AKS primality testing algorithm will return 1; otherwise (if the number N is composite), the AKS primality testing algorithm will return 0. In practice, this algorithm is not used for primality testing because it turns out to be too slow. In practice, we use a randomized algorithm, namely the Miller–Rabin test, which successfully distinguishes primes from composites with very high probability. In this lecture, we will study a basic version of this Miller–Rabin primality test.

Assurance of Miller–Rabin Test. Miller–Rabin outputs 1 to indicate that it has classified the input N as a prime.
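A compact Python sketch of the procedure these notes describe: sample a random element of {2^{n−1}, ..., 2^n − 1} and keep trying until a Miller–Rabin style test accepts. This is an illustrative reconstruction, not the lecture's own pseudocode; the number of rounds and the sampling details are arbitrary choices.

```python
import random

def miller_rabin(N, rounds=20):
    """Return 1 if N is classified as (probably) prime, 0 if N is composite."""
    if N < 4:
        return 1 if N in (2, 3) else 0
    if N % 2 == 0:
        return 0
    # Write N - 1 = 2^s * d with d odd.
    d, s = N - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, N - 1)
        x = pow(a, d, N)
        if x in (1, N - 1):
            continue
        for _ in range(s - 1):
            x = (x * x) % N
            if x == N - 1:
                break
        else:
            return 0          # a witnesses that N is composite
    return 1                  # N passed every round: almost certainly prime

def random_nbit_prime(n):
    """Sample uniformly from {2^(n-1), ..., 2^n - 1} until a probable prime appears."""
    while True:
        candidate = random.randrange(2 ** (n - 1), 2 ** n)
        if miller_rabin(candidate):
            return candidate

print(random_nbit_prime(64))   # e.g. a random 64-bit probable prime
```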
15-251: Great Theoretical Ideas in Computer Science Recitation 14 Solutions PRIMES ∈ NP
15-251: Great Theoretical Ideas In Computer Science. Recitation 14 Solutions. PRIMES ∈ NP.

The set PRIMES of all primes is in co-NP, because if n is composite and k | n (with 1 < k < n), we can verify this in polynomial time. In fact, the AKS primality test means that PRIMES is in P. We'll just prove PRIMES ∈ NP.

(a) We know n is prime iff φ(n) = n − 1. Additionally, if n is prime then Z_n^* is cyclic with order n − 1. Given a generator a of Z_n^* and a (guaranteed correct) prime factorization of n − 1, can we verify n is prime? If a has order n − 1, then we are done. So we must verify that a has order dividing n − 1:

a^{n−1} ≡ 1 (mod n)

and doesn't have smaller order, i.e. for all primes p | n − 1:

a^{(n−1)/p} ≢ 1 (mod n)

a is smaller than n, and there are at most log(n) factors, each smaller than n, so our certificate is polynomial in the size of n's representation. Similarly, each arithmetic computation is polynomial in log(n), and there are O(log(n)) of them.

(b) What else needs to be verified to use this as a primality certificate? Do we need to add more information? We also need to verify that our factorization (p_1, α_1), (p_2, α_2), ..., (p_k, α_k) of n − 1 is correct, which amounts to two conditions:

1. p_1^{α_1} p_2^{α_2} ··· p_k^{α_k} = n − 1, which can be verified by multiplying and comparing, polynomial in log(n).
2. Each p_i is itself prime, which can be certified by recursively attaching a certificate of the same form for each p_i.
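A small Python sketch of the verification in part (a): checking that a claimed generator a and a claimed factorization of n − 1 certify n prime. It is written for illustration only and trusts the primality of the listed factors rather than recursing, which is exactly the refinement part (b) calls for.

```python
def check_certificate(n, a, factorization):
    """Verify the one-level certificate: factorization is a list of
    (prime, exponent) pairs claimed to multiply to n - 1, and a is a
    claimed generator of Z_n^*."""
    # 1. The claimed factorization must actually multiply out to n - 1.
    product = 1
    for p, alpha in factorization:
        product *= p ** alpha
    if product != n - 1:
        return False
    # 2. a must have order dividing n - 1 ...
    if pow(a, n - 1, n) != 1:
        return False
    # 3. ... and no smaller order: a^((n-1)/p) != 1 for every prime p | n - 1.
    for p, _ in factorization:
        if pow(a, (n - 1) // p, n) == 1:
            return False
    return True

# Example: 3 generates Z_7^*, and 7 - 1 = 6 = 2 * 3.
print(check_certificate(7, 3, [(2, 1), (3, 1)]))   # True
```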
Pseudoprimes and Classical Primality Tests
Pseudoprimes and Classical Primality Tests. Sebastian Pancratz. 2 March 2009.

1 Introduction. This brief introduction concerns various notions of pseudoprimes and their relation to classical composite tests, that is, algorithms which can prove numbers composite, but might not identify all composites as such and hence cannot prove numbers prime. In the talk I hope to also present one of the classical algorithms to prove a number n prime. In contrast to modern algorithms, these are often not polynomial-time algorithms, depend heavily on known factorisations of numbers such as n − 1, or are restricted to certain classes of candidates n.

One of the simplest composite tests is based on Fermat's Little Theorem, stating that for any prime p and any integer a coprime to p we have a^{p−1} ≡ 1 (mod p). Hence we can prove an integer n > 1 composite by finding an integer b coprime to n with the property that b^{n−1} ≢ 1 (mod n).

Definition 1. Let n > 1 be an odd integer and b > 1 with (b, n) = 1. We call n a pseudo-prime to base b if b^{n−1} ≡ 1 (mod n), that is, if the pair n, b satisfies the conclusion of Fermat's Little Theorem.

Definition 2. If n is a positive odd composite integer which is a pseudo-prime to all bases b, we call n a Carmichael number.

Using this criterion, we can easily check that e.g. 561 = 3 × 11 × 17 is a Carmichael number. It has been shown that there are infinitely many Carmichael numbers [1].
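A brief Python illustration of the Fermat composite test and of why Carmichael numbers defeat it; this is an informal sketch, not part of Pancratz's note. An ordinary composite such as 15 is quickly exposed, while 561 is a pseudo-prime to every coprime base.

```python
from math import gcd

def fermat_test(n, b):
    """Return True if n is a pseudo-prime to base b, i.e. b^(n-1) ≡ 1 (mod n)."""
    return pow(b, n - 1, n) == 1

# 15 is composite and fails the Fermat test already at base 2.
print(fermat_test(15, 2))    # False: 2^14 ≡ 4 (mod 15)

# 561 = 3 * 11 * 17 is a Carmichael number: it passes for every coprime base.
print(all(fermat_test(561, b) for b in range(2, 561) if gcd(b, 561) == 1))   # True
```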
CS265/CME309: Randomized Algorithms and Probabilistic Analysis Lecture #1: Computational Models, and the Schwartz-Zippel Randomized Polynomial Identity Test
CS265/CME309: Randomized Algorithms and Probabilistic Analysis. Lecture #1: Computational Models, and the Schwartz-Zippel Randomized Polynomial Identity Test. Gregory Valiant, updated by Mary Wootters. September 7, 2020. [©2019, Gregory Valiant. Not to be sold, published, or distributed without the authors' consent.]

1 Introduction. Welcome to CS265/CME309!! This course will revolve around two intertwined themes:

• Analyzing random structures, including a variety of models of natural randomized processes, such as random walks and random graphs/networks.
• Designing random structures and algorithms that solve problems we care about.

By the end of this course, you should be able to think precisely about randomness, and also appreciate that it can be a powerful tool. As we will see, there are a number of basic problems for which extremely simple randomized algorithms outperform (both in theory and in practice) the best deterministic algorithms that we currently know.

2 Computational Model. During this course, we will discuss algorithms at a high level of abstraction. Nonetheless, it's helpful to begin with a (somewhat) formal model of randomized computation just to make sure we're all on the same page.

Definition 2.1. A randomized algorithm is an algorithm that can be computed by a Turing machine (or random access machine) which has access to an infinite string of uniformly random bits. Equivalently, it is an algorithm that can be performed by a Turing machine that has a special instruction "flip-a-coin", which returns the outcome of an independent flip of a fair coin. We will never describe algorithms at the level of Turing machine instructions, though if you are ever uncertain whether or not an algorithm you have in mind is "allowed", you can return to this definition.
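As a concrete instance of an algorithm in this model, here is a rough Python sketch of the lecture's title example, randomized polynomial identity testing in the Schwartz–Zippel style: to decide whether two polynomial expressions agree identically, evaluate both at a random point and compare, the random draw playing the role of the "flip-a-coin" instruction. This sketch is illustrative only and is not taken from the notes; the sampling set and the number of trials are arbitrary choices.

```python
import random

def probably_identical(f, g, degree_bound, trials=20):
    """Randomized identity test: f and g are callables implementing univariate
    polynomials of degree at most degree_bound. If f == g as polynomials the
    test always accepts; if not, each trial rejects with probability at least
    1 - degree_bound/|S|, since a nonzero polynomial of degree d has at most
    d roots."""
    S = range(0, 100 * degree_bound)          # sampling set much larger than the degree
    for _ in range(trials):
        r = random.choice(S)
        if f(r) != g(r):
            return False                       # found a witness: definitely not identical
    return True                                # probably identical

# (x + 1)^2 versus x^2 + 2x + 1: identical, so the test always accepts.
print(probably_identical(lambda x: (x + 1) ** 2, lambda x: x * x + 2 * x + 1, 2))   # True
# (x + 1)^2 versus x^2 + 1: different, so the test rejects with overwhelming probability.
print(probably_identical(lambda x: (x + 1) ** 2, lambda x: x * x + 1, 2))           # False (w.h.p.)
```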
Primality Testing and Sub-Exponential Factorization
Primality Testing and Sub-Exponential Factorization. David Emerson. Advisor: Howard Straubing. Boston College Computer Science Senior Thesis, May 2009.

Abstract. This paper discusses the problems of primality testing and large number factorization. The first section is dedicated to a discussion of primality testing algorithms and their importance in real world applications. Over the course of the discussion the structure of the primality algorithms is developed rigorously and demonstrated with examples. This section culminates in the presentation and proof of the modern deterministic polynomial-time Agrawal-Kayal-Saxena algorithm for deciding whether a given n is prime. The second section is dedicated to the process of factorization of large composite numbers. While primality and factorization are mathematically tied in principle, they are very different computationally. This fact is explored, and current high-powered factorization methods and the mathematical structures on which they are built are examined.

1 Introduction. Factorization and primality testing are important concepts in mathematics. From a purely academic motivation it is an intriguing question to ask how we are to determine whether a number is prime or not. The next logical question to ask is: if the number is composite, can we calculate its factors? The two questions are invariably related. If we can factor a number into its pieces then it is obviously not prime; if we can't, then we know that it is prime. The definition of primality is very much derived from factorability. As we progress through the known and developed primality tests and factorization algorithms it will begin to become clear that while primality and factorization are intertwined, they occupy two very different levels of computational difficulty.
Primality Testing
Syracuse University SURFACE. Electrical Engineering and Computer Science – Technical Reports, College of Engineering and Computer Science, 6-1992.

Primality Testing. Per Brinch Hansen, Syracuse University, School of Computer and Information Science, [email protected]

Recommended Citation: Hansen, Per Brinch, "Primality Testing" (1992). Electrical Engineering and Computer Science – Technical Reports. 169. https://surface.syr.edu/eecs_techreports/169

SU-CIS-92-13. Primality Testing. Per Brinch Hansen. June 1992. School of Computer and Information Science, Syracuse University, Suite 4-116, Center for Science and Technology, Syracuse, NY 13244-4100.

This tutorial describes the Miller-Rabin method for testing the primality of large integers. The method is illustrated by a Pascal algorithm. The performance of the algorithm was measured on a Computing Surface.

Categories and Subject Descriptors: G.3 [Probability and Statistics]: Probabilistic algorithms (Monte Carlo). General Terms: Algorithms. Additional Key Words and Phrases: Primality testing.

CONTENTS: Introduction; 1. Fermat's Theorem; 2. The Fermat Test; 3. Quadratic Remainders; 4. The Miller-Rabin Test; 5. A Probabilistic Algorithm; 6. Complexity; 7. Experiments; 8. Summary; Acknowledgements; References.

Copyright © 1992 Per Brinch Hansen.

INTRODUCTION. This tutorial describes a probabilistic method for testing the primality of large integers.
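The contents list above points to the key refinement over the plain Fermat test: quadratic remainders. Modulo a prime, 1 has only the square roots ±1, so exhibiting a nontrivial square root of 1 proves compositeness. A tiny brute-force Python check of this fact, purely illustrative and unrelated to the report's Pascal code (it is only feasible for small n):

```python
def square_roots_of_one(n):
    """Return all x in [1, n-1] with x^2 ≡ 1 (mod n), by brute force."""
    return [x for x in range(1, n) if (x * x) % n == 1]

print(square_roots_of_one(13))    # [1, 12]: a prime has only the trivial roots 1 and -1
print(square_roots_of_one(561))   # includes nontrivial roots such as 67 and 188,
                                  # which is how the Miller-Rabin test exposes 561
```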
Prime Numbers and Discrete Logarithms
Lecture 11: Prime Numbers And Discrete Logarithms. Lecture Notes on "Computer and Network Security" by Avi Kak ([email protected]). February 25, 2021, 12:20 Noon. ©2021 Avinash Kak, Purdue University.

Goals:
• Primality Testing
• Fermat's Little Theorem
• The Totient of a Number
• The Miller-Rabin Probabilistic Algorithm for Testing for Primality
• Python and Perl Implementations for the Miller-Rabin Primality Test
• The AKS Deterministic Algorithm for Testing for Primality
• Chinese Remainder Theorem for Modular Arithmetic with Large Composite Moduli
• Discrete Logarithms

CONTENTS:
11.1 Prime Numbers
11.2 Fermat's Little Theorem
11.3 Euler's Totient Function
11.4 Euler's Theorem
11.5 Miller-Rabin Algorithm for Primality Testing
11.5.1 Miller-Rabin Algorithm is Based on an Intuitive Decomposition of an Even Number into Odd and Even Parts
11.5.2 Miller-Rabin Algorithm Uses the Fact that x² = 1 Has No Non-Trivial Roots in Z_p
11.5.3 Miller-Rabin Algorithm: Two Special Conditions That Must Be Satisfied By a Prime
11.5.4 Consequences of the Success and Failure of One or Both Conditions
11.5.5 Python and Perl Implementations of the Miller-Rabin Algorithm
11.5.6 Miller-Rabin Algorithm: Liars and Witnesses
11.5.7 Computational Complexity of the Miller-Rabin Algorithm
11.6 The Agrawal-Kayal-Saxena (AKS) Algorithm for Primality Testing
11.6.1 Generalization of Fermat's Little Theorem to Polynomial Rings Over Finite Fields
11.6.2 The AKS Algorithm: The Computational Steps
11.6.3 Computational Complexity of the AKS Algorithm
11.7 The Chinese Remainder Theorem
11.7.1 A Demonstration of the Usefulness of CRT
11.8 Discrete Logarithms
11.9 Homework Problems

11.1 Prime Numbers. Prime numbers are extremely important to computer security.
Lecture 1: January 21 Instructor: Alistair Sinclair
CS271 Randomness & Computation, Spring 2020. Lecture 1: January 21. Instructor: Alistair Sinclair.

Disclaimer: These notes have not been subjected to the usual scrutiny accorded to formal publications. They may be distributed outside this class only with the permission of the Instructor.

1.1 Introduction. There are two main flavors of research in randomness and computation. The first looks at randomness as a computational resource, like space and time. The second looks at the use of randomness to design algorithms. This course focuses on the second area; for the first area, see other graduate classes such as CS278. The starting point for the course is the observation that, by giving a computer access to a string of random bits, one can frequently obtain algorithms that are far simpler and more elegant than their deterministic counterparts. Sometimes these algorithms can be "derandomized," although the resulting deterministic algorithms are often rather obscure. In certain restricted models of computation, one can even prove that randomized algorithms are more efficient than the best possible deterministic algorithm. In this course we will look at both randomized algorithms and random structures (random graphs, random boolean formulas, etc.). Random structures are of interest both as a means of understanding the behavior of (randomized or deterministic) algorithms on "typical" inputs, and as a mathematically clean framework in which to investigate various probabilistic techniques that are also of use in analyzing randomized algorithms.

1.1.1 Model of Computation. We first need to formalize the model of computation we will use. Our machine extends a standard model (e.g., a Turing Machine or random-access machine) with an additional input consisting of a stream of perfectly random bits (fair coin flips).