Hardware Implementation of the Baillie-PSW Primality Test
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
Fast Tabulation of Challenge Pseudoprimes, Andrew Shallue and Jonathan Webster
Fast tabulation of challenge pseudoprimes, Andrew Shallue and Jonathan Webster. The Open Book Series 2 (2019), Proceedings of the Thirteenth Algorithmic Number Theory Symposium (ANTS XIII), dx.doi.org/10.2140/obs.2019.2.411. We provide a new algorithm for tabulating composite numbers which are pseudoprimes to both a Fermat test and a Lucas test. Our algorithm is optimized for parameter choices that minimize the occurrence of pseudoprimes, and for pseudoprimes with a fixed number of prime factors. Using this, we have confirmed that there are no PSW-challenge pseudoprimes with two or three prime factors up to 2^80. In the case where one is tabulating challenge pseudoprimes with a fixed number of prime factors, we prove our algorithm gives an unconditional asymptotic improvement over previous methods. 1. Introduction. Pomerance, Selfridge, and Wagstaff famously offered $620 for a composite n that satisfies (1) 2^(n−1) ≡ 1 (mod n), so n is a base-2 Fermat pseudoprime; (2) (5 | n) = −1, so n is not a square modulo 5; and (3) F(n+1) ≡ 0 (mod n), so n is a Fibonacci pseudoprime; or to prove that no such n exists. We call composites that satisfy these conditions PSW-challenge pseudoprimes. In [PSW80] they credit R. Baillie with the discovery that combining a Fermat test with a Lucas test (with a certain specific parameter choice) makes for an especially effective primality test [BW80].
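The three conditions above are easy to check computationally. Below is a minimal Python sketch, assuming textbook formulations of the Jacobi symbol and fast-doubling Fibonacci arithmetic; the helper names are mine, not the paper's.

```python
# Hypothetical sketch: check the three PSW-challenge conditions for odd n.
def jacobi(a, n):
    """Jacobi symbol (a | n) for odd n > 0."""
    a %= n
    result = 1
    while a:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def fibonacci_mod(k, n):
    """F_k mod n via the fast-doubling recurrence."""
    def fib_pair(k):
        if k == 0:
            return (0, 1)
        a, b = fib_pair(k // 2)
        c = (a * ((2 * b - a) % n)) % n
        d = (a * a + b * b) % n
        return (d, (c + d) % n) if k & 1 else (c, d)
    return fib_pair(k)[0]

def meets_psw_challenge_conditions(n):
    """True iff odd n satisfies conditions (1)-(3); a composite n that does
    would be a PSW-challenge pseudoprime."""
    cond1 = pow(2, n - 1, n) == 1          # base-2 Fermat condition
    cond2 = jacobi(5, n) == -1             # 5 is a non-residue
    cond3 = fibonacci_mod(n + 1, n) == 0   # Fibonacci condition
    return cond1 and cond2 and cond3
```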
Primality Testing and Integer Factorisation
Primality Testing and Integer Factorisation, Richard P. Brent, FAA, Computer Sciences Laboratory, Australian National University, Canberra, ACT 2601. Abstract: The problem of finding the prime factors of large composite numbers has always been of mathematical interest. With the advent of public key cryptosystems it is also of practical importance, because the security of some of these cryptosystems, such as the Rivest-Shamir-Adleman (RSA) system, depends on the difficulty of factoring the public keys. In recent years the best known integer factorisation algorithms have improved greatly, to the point where it is now easy to factor a 60-decimal digit number, and possible to factor numbers larger than 120 decimal digits, given the availability of enough computing power. We describe several recent algorithms for primality testing and factorisation, give examples of their use and outline some applications. 1. Introduction. It has been known since Euclid's time (though first clearly stated and proved by Gauss in 1801) that any natural number N has a unique prime power decomposition N = p1^α1 p2^α2 ··· pk^αk (1.1), where p1 < p2 < ··· < pk are rational primes and αj > 0. The prime powers pj^αj are called components of N, and we write pj^αj ‖ N. To compute the prime power decomposition we need:
1. An algorithm to test if an integer N is prime.
2. An algorithm to find a nontrivial factor f of a composite integer N.
Given these there is a simple recursive algorithm to compute (1.1): if N is prime then stop, otherwise 1. find a nontrivial factor f of N; 2. [...]
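A minimal sketch of the recursive decomposition just described, with naive trial division standing in for the primality-testing and factor-finding subroutines that the survey actually discusses:

```python
# Sketch only: trial division is a placeholder for the far more sophisticated
# algorithms the survey covers.
from collections import Counter

def find_factor(n):
    """Return a nontrivial factor of composite n, or None if n is prime."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return None

def decompose(n):
    """Prime power decomposition of n > 1 as a dict {p: alpha}."""
    factors = Counter()
    stack = [n]
    while stack:
        m = stack.pop()
        f = find_factor(m)
        if f is None:              # m is prime: record one copy of it
            factors[m] += 1
        else:                      # otherwise recurse on both parts
            stack.extend((f, m // f))
    return dict(factors)

# Example: decompose(1176) == {2: 3, 3: 1, 7: 2}, i.e. 1176 = 2^3 * 3 * 7^2
```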
An Analysis of Primality Testing and Its Use in Cryptographic Applications
An Analysis of Primality Testing and Its Use in Cryptographic Applications, Jake Massimo. Thesis submitted to the University of London for the degree of Doctor of Philosophy, Information Security Group, Department of Information Security, Royal Holloway, University of London, 2020. Declaration: These doctoral studies were conducted under the supervision of Prof. Kenneth G. Paterson. The work presented in this thesis is the result of original research carried out by myself, in collaboration with others, whilst enrolled in the Department of Mathematics as a candidate for the degree of Doctor of Philosophy. This work has not been submitted for any other degree or award in any other university or educational establishment. Jake Massimo, April 2020. Abstract: Due to their fundamental utility within cryptography, prime numbers must be easy to both recognise and generate. For this, we depend upon primality testing. Whether used as a tool to validate prime parameters or as part of the algorithm used to generate random prime numbers, primality tests are found near universally within a cryptographer's tool-kit. In this thesis, we study in depth primality tests and their use in cryptographic applications. We first provide a systematic analysis of the implementation landscape of primality testing within cryptographic libraries and mathematical software. We then demonstrate how these tests perform under adversarial conditions, where the numbers being tested are not generated randomly, but instead by a possibly malicious party. We show that many of the libraries studied provide primality tests that are not prepared for testing on adversarial input, and therefore can declare composite numbers as being prime with a high probability.
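As a concrete illustration of the adversarial-input concern (my own example, not one taken from the thesis): a Fermat-style check with a fixed base accepts well-known composite pseudoprimes.

```python
# Sketch: a fixed-base Fermat check wrongly accepts known pseudoprimes.
def fermat_probably_prime(n, base=2):
    """Single fixed-base Fermat test; composite inputs may still pass."""
    return pow(base, n - 1, n) == 1

print(fermat_probably_prime(341))   # True, yet 341 = 11 * 31 is composite
print(fermat_probably_prime(561))   # True for every base coprime to 561 (a Carmichael number)
```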
Computation of 2700 Billion Decimal Digits of Pi Using a Desktop Computer
Computation of 2700 billion decimal digits of Pi using a Desktop Computer, Fabrice Bellard, Feb 11, 2010 (4th revision). This article describes some of the methods used to get the world record of the computation of the digits of π using an inexpensive desktop computer. 1 Notations. We assume that numbers are represented in base B with B = 2^64. A digit in base B is called a limb. M(n) is the time needed to multiply n-limb numbers. We assume that M(Cn) is approximately C·M(n), which means M(n) is mostly linear, which is the case when handling very large numbers with the Schönhage-Strassen multiplication [5]. log(n) means the natural logarithm of n. log2(n) is log(n)/log(2). SI and binary prefixes are used (i.e. 1 TB = 10^12 bytes, 1 GiB = 2^30 bytes). 2 Evaluation of the Chudnovsky series. 2.1 Introduction. The digits of π were computed using the Chudnovsky series [10]: 1/π = 12 · Σ_{n=0..∞} (−1)^n (6n)! (A + Bn) / ((n!)^3 (3n)! C^(3n+3/2)), with A = 13591409, B = 545140134, C = 640320. It was evaluated with the binary splitting algorithm. The asymptotic running time is O(M(n) log(n)^2) for an n-limb result. It is worse than the asymptotic running time of the Arithmetic-Geometric Mean algorithms, which is O(M(n) log(n)), but it has better locality and many improvements can reduce its constant factor. Let S be defined as S(n1, n2) = Σ_{n=n1+1..n2} an · Π_{k=n1+1..n} (pk/qk). We define the auxiliary integers P(n1, n2) = Π_{k=n1+1..n2} pk, Q(n1, n2) = Π_{k=n1+1..n2} qk, and T(n1, n2) = S(n1, n2) · Q(n1, n2).
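A compact sketch of binary splitting applied to the Chudnovsky series above, in the standard textbook form rather than Bellard's optimized implementation; the constants follow the series, the helper names are mine.

```python
# Sketch, assuming the usual term recurrence p(n) = (6n-5)(2n-1)(6n-1) and
# q(n) = n^3 * C^3 / 24 for the Chudnovsky series.
from math import isqrt

A, B, C = 13591409, 545140134, 640320
C3_OVER_24 = C**3 // 24

def bs(a, b):
    """Return (P, Q, T) for terms a .. b-1 of the series."""
    if b - a == 1:
        if a == 0:
            Pab = Qab = 1
        else:
            Pab = (6*a - 5) * (2*a - 1) * (6*a - 1)
            Qab = a * a * a * C3_OVER_24
        Tab = Pab * (A + B * a)
        if a & 1:
            Tab = -Tab                  # the (-1)^n sign of term a
        return Pab, Qab, Tab
    m = (a + b) // 2
    Pam, Qam, Tam = bs(a, m)
    Pmb, Qmb, Tmb = bs(m, b)
    # Combine halves: P and Q multiply, T merges the two partial sums.
    return Pam * Pmb, Qam * Qmb, Qmb * Tam + Pam * Tmb

def chudnovsky_pi(digits):
    """pi to `digits` decimal digits, returned as an integer scaled by 10**digits."""
    terms = digits // 14 + 2            # roughly 14 digits per term
    _, Q, T = bs(0, terms)
    scale = 10**digits
    return (Q * 426880 * isqrt(10005 * scale * scale)) // T

print(chudnovsky_pi(50))                # 314159265358979323846...
```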
CS 125 Course Notes 1, 1.1 Algorithms: Arithmetic
CS 125 Course Notes 1, Fall 2016. Welcome to CS 125, a course on algorithms and computational complexity. First, what do these terms mean? An algorithm is a recipe or a well-defined procedure for performing a calculation, or in general, for transforming some input into a desired output. In this course we will ask a number of basic questions about algorithms:
• Does the algorithm halt?
• Is it correct? That is, does the algorithm's output always satisfy the input-to-output specification that we desire?
• Is it efficient? Efficiency could be measured in more than one way. For example, what is the running time of the algorithm? What is its memory consumption?
Meanwhile, computational complexity theory focuses on classification of problems according to the computational resources they require (time, memory, randomness, parallelism, etc.) in various computational models. Computational complexity theory asks questions such as:
• Is the class of problems that can be solved time-efficiently with a deterministic algorithm exactly the same as the class that can be solved time-efficiently with a randomized algorithm?
• For a given class of problems, is there a "complete" problem for the class such that solving that one problem efficiently implies solving all problems in the class efficiently?
• Can every problem with a time-efficient algorithmic solution also be solved with extremely little additional memory (beyond the memory required to store the problem input)?
1.1 Algorithms: arithmetic. Some algorithms very familiar to us all are those for adding and multiplying integers. We all know the grade school algorithm for addition from kindergarten: write the two numbers on top of each other, then add digits right to left [...]
[Figure 1.1: Grade school multiplication, worked example 178 × 213 = 37914.]
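For concreteness, a short sketch (mine, not from the notes) of the grade-school multiplication illustrated by Figure 1.1: multiply by one digit of the second factor at a time, shift, then add the partial products.

```python
# Sketch of grade-school multiplication as done by hand.
def grade_school_multiply(x, y):
    """Multiply non-negative integers digit by digit."""
    x_digits = [int(d) for d in str(x)][::-1]     # least significant digit first
    y_digits = [int(d) for d in str(y)][::-1]
    total = 0
    for shift, yd in enumerate(y_digits):
        row, carry = 0, 0
        for place, xd in enumerate(x_digits):
            prod = xd * yd + carry
            row += (prod % 10) * 10**place
            carry = prod // 10
        row += carry * 10**len(x_digits)
        total += row * 10**shift                  # shift the partial product
    return total

assert grade_school_multiply(178, 213) == 178 * 213 == 37914
```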
Factoring Composites Testing Primes, Amin Witno
WON Series in Discrete Mathematics and Modern Algebra, Volume 3: Factoring Composites Testing Primes, Amin Witno. Preface: These notes were used for the lectures in Math 472 (Computational Number Theory) at Philadelphia University, Jordan. The module was aborted in 2012, and since then this last edition has been preserved and updated only for minor corrections. These outline notes are more like a revision aid; no student is expected to fully benefit from them unless they have regularly attended the lectures. 1 The RSA Cryptosystem. Sensitive messages, when transferred over the internet, need to be encrypted, i.e., changed into a secret code in such a way that only the intended receiver who has the secret key is able to read it. It is common that alphabetical characters are converted to their numerical ASCII equivalents before they are encrypted, hence the coded message will look like integer strings. The RSA algorithm is an encryption-decryption process which is widely employed today. In practice, the encryption key can be made public, and doing so will not risk the security of the system. This feature is a characteristic of the so-called public-key cryptosystem. Ali selects two distinct primes p and q which are very large, over a hundred digits each. He computes n = pq and ϕ = (p − 1)(q − 1), and determines a rather small number e which will serve as the encryption key, making sure that e has no common factor with ϕ. He then chooses another integer d < n satisfying de % ϕ = 1, that is, de ≡ 1 (mod ϕ); this d is his decryption key. When all is ready, Ali gives Beth the pair (n, e) and keeps the rest secret.
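A toy-sized sketch of the setup just described, using tiny primes purely for illustration (real RSA moduli use primes of hundreds of digits); the variable names follow the excerpt's Ali/Beth story.

```python
# Sketch of RSA key setup, encryption and decryption with toy numbers.
from math import gcd

p, q = 61, 53                  # Ali's secret primes (toy sizes)
n = p * q                      # 3233, made public
phi = (p - 1) * (q - 1)        # 3120, kept secret

e = 17                         # public encryption key; gcd(e, phi) must be 1
assert gcd(e, phi) == 1
d = pow(e, -1, phi)            # decryption key: d*e % phi == 1  (requires Python 3.8+)

message = 1234                 # numeric message, must be less than n
ciphertext = pow(message, e, n)
recovered = pow(ciphertext, d, n)
assert recovered == message
```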
Primality Test
Primality test A primality test is an algorithm for determining whether (6k + i) for some integer k and for i = −1, 0, 1, 2, 3, or 4; an input number is prime. Amongst other fields of 2 divides (6k + 0), (6k + 2), (6k + 4); and 3 divides (6k mathematics, it is used for cryptography. Unlike integer + 3). So a more efficient method is to test if n is divisible factorization, primality tests do not generally give prime by 2 or 3, then to check through all the numbers of form p factors, only stating whether the input number is prime 6k ± 1 ≤ n . This is 3 times as fast as testing all m. or not. Factorization is thought to be a computationally Generalising further, it can be seen that all primes are of difficult problem, whereas primality testing is compara- the form c#k + i for i < c# where i represents the numbers tively easy (its running time is polynomial in the size of that are coprime to c# and where c and k are integers. the input). Some primality tests prove that a number is For example, let c = 6. Then c# = 2 · 3 · 5 = 30. All prime, while others like Miller–Rabin prove that a num- integers are of the form 30k + i for i = 0, 1, 2,...,29 and k ber is composite. Therefore the latter might be called an integer. However, 2 divides 0, 2, 4,...,28 and 3 divides compositeness tests instead of primality tests. 0, 3, 6,...,27 and 5 divides 0, 5, 10,...,25. -
Fast Generation of RSA Keys Using Smooth Integers
Fast Generation of RSA Keys using Smooth Integers, Vassil Dimitrov, Luigi Vigneri and Vidal Attias. Abstract: Primality generation is the cornerstone of several essential cryptographic systems. The problem has been a subject of deep investigations, but there is still substantial room for improvements. Typically, the algorithms used have two parts: trial divisions aimed at eliminating numbers with small prime factors, and primality tests based on an easy-to-compute statement that is valid for primes and invalid for composites. In this paper, we will showcase a technique that will eliminate the first phase of the primality testing algorithms. The computational simulations show a reduction of the primality generation time by about 30% in the case of 1024-bit RSA key pairs. This can be particularly beneficial in the case of decentralized environments for shared RSA keys, as the initial trial division part of the key generation algorithms can be avoided at no cost. This also significantly reduces the communication complexity. Another essential contribution of the paper is the introduction of a new one-way function that is computationally simpler than the existing ones used in public-key cryptography. This function can be used to create new random number generators, and it could also potentially be used for designing entirely new public-key encryption systems. Index Terms: Multiple-base Representations, Public-Key Cryptography, Primality Testing, Computational Number Theory, RSA.
1 Introduction. Additive number theory is a fascinating area of mathematics. In it one can find problems with [...]
1.1 Fast generation of prime numbers. The generation of prime numbers is a cornerstone of cryptographic systems such as the RSA cryptosystem.
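For context, a sketch of the conventional two-phase prime generation the abstract refers to: small-prime trial division followed by a probabilistic test. This shows the baseline being improved upon, not the smooth-integer technique proposed in the paper; all names are mine.

```python
# Sketch of the standard two-phase candidate filter for RSA prime generation.
import random

SMALL_PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53]

def miller_rabin(n, rounds=20):
    """Probabilistic primality test with random bases (odd n > 3)."""
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x == 1 or x == n - 1:
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                   # a witnesses that n is composite
    return True

def random_probable_prime(bits=1024):
    while True:
        n = random.getrandbits(bits) | (1 << (bits - 1)) | 1   # odd, full bit length
        if any(n % p == 0 for p in SMALL_PRIMES):              # phase 1: trial division
            continue
        if miller_rabin(n):                                    # phase 2: probabilistic test
            return n
```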
Primes and Primality Testing
Primes and Primality Testing: A Technological/Historical Perspective. Jennifer Ellis, Department of Mathematics and Computer Science. What is a prime number? A number p greater than one is prime if and only if the only divisors of p are 1 and p. Examples: 2, 3, 5, and 7. A few larger examples: 71887, 524287, 65537, and 2^127 − 1. Primality Testing: Origins. Eratosthenes (276-194 B.C.) developed the "sieve" method. He was nicknamed Beta ("second place") in many different academic disciplines, and also made contributions to geometry and the approximation of the Earth's circumference (image: www-history.mcs.st-andrews.ac.uk/PictDisplay/Eratosthenes.html). Sieve of Eratosthenes: [slide shows the grid of integers 2 through 100 being sieved]. We only need to "sieve" the multiples of numbers less than 10. Why? Because (10)(10) = 100: if pq ≤ 100 with p > 10, then q must be less than 10. By sieving all the multiples of numbers less than 10 (here, multiples of q), we have removed all composite numbers less than 100.
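A short sketch of the sieve the slides illustrate on the integers 2 through 100; crossing out multiples of numbers up to √limit leaves exactly the primes.

```python
# Sketch of the Sieve of Eratosthenes.
def sieve_of_eratosthenes(limit):
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    p = 2
    while p * p <= limit:              # only need p < sqrt(limit), as the slides argue
        if is_prime[p]:
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False
        p += 1
    return [n for n, prime in enumerate(is_prime) if prime]

print(sieve_of_eratosthenes(100))      # [2, 3, 5, 7, ..., 97], the 25 primes up to 100
```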
Efficient Regular Modular Exponentiation Using Multiplicative Half-Size Splitting
Efficient regular modular exponentiation using multiplicative half-size splitting, Christophe Negre and Thomas Plantard. J Cryptogr Eng (2017) 7:245–253, DOI 10.1007/s13389-016-0134-5, Short Communication. Received: 14 August 2015 / Accepted: 23 June 2016 / Published online: 13 July 2016. © Springer-Verlag Berlin Heidelberg 2016. Abstract: In this paper, we consider efficient RSA modular exponentiations x^K mod N which are regular and constant time. We first review the multiplicative splitting of an integer x modulo N into two half-size integers. We then take advantage of this splitting to modify the square-and-multiply exponentiation as a regular sequence of squarings always followed by a multiplication by a half-size integer. The proposed method requires around 16% fewer word operations compared to Montgomery-ladder, square-always and square-and-multiply-always exponentiations. These theoretical results are validated by our implementation results, which show an improvement by more than 12% compared to approaches which are both regular and constant time.
[...] x^K mod N, where N = pq with p and q prime. The private data are the two prime factors of N and the private exponent K used to decrypt or sign a message. In order to insure a sufficient security level, N and K are chosen large enough to render the factorization of N infeasible: they are typically 2048-bit integers. The basic approach to efficiently perform the modular exponentiation is the square-and-multiply algorithm, which scans the bits ki of the exponent K and performs a sequence of squarings followed by a multiplication when ki is equal to one. When the cryptographic computations are performed on an embedded device, an adversary can monitor power consumption [...]
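To make the contrast concrete, here is a sketch (mine, not the paper's half-size-splitting method) of the basic square-and-multiply loop next to the Montgomery ladder it is compared against; the ladder performs the same operation pattern for every key bit, which is what makes it regular.

```python
# Sketch: irregular square-and-multiply versus the regular Montgomery ladder.
def square_and_multiply(x, K, N):
    """Left-to-right binary exponentiation: multiply only when the bit is 1."""
    result = 1
    for bit in bin(K)[2:]:
        result = (result * result) % N        # always square
        if bit == '1':
            result = (result * x) % N         # conditional multiply (irregular)
    return result

def montgomery_ladder(x, K, N):
    """One squaring and one multiplication per bit, regardless of its value."""
    r0, r1 = 1, x % N
    for bit in bin(K)[2:]:
        if bit == '1':
            r0, r1 = (r0 * r1) % N, (r1 * r1) % N
        else:
            r1, r0 = (r0 * r1) % N, (r0 * r0) % N
    return r0

x, K, N = 7, 65537, 3233
assert square_and_multiply(x, K, N) == montgomery_ladder(x, K, N) == pow(x, K, N)
```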
q-Pseudoprimality: A Natural Generalization of Strong Pseudoprimality (arXiv:1412.5226v1 [math.NT], 16 Dec 2014)
q-Pseudoprimality: A Natural Generalization of Strong Pseudoprimality. John H. Castillo, Gilberto García-Pulgarín, and Juan Miguel Velásquez-Soto. Abstract: In this work we present a natural generalization of strong pseudoprime to base b, which we have called q-pseudoprime to base b. It allows us to present another way to define a Midy's number to base b (overpseudoprime to base b). Besides, we count the bases b such that N is a q-probable prime to base b and those such that N is a Midy's number to base b. Furthermore, we prove that, as with strong pseudoprimes to base b, there is no concept analogous to Carmichael numbers for q-probable primes to base b. 1. Introduction. Recently, Grau et al. [7] gave a generalization of Pocklington's Theorem (also known as Proth's Theorem) and of the Miller-Rabin primality test, taking as reference some works of Berrizbeitia [1, 2], where an extension of the concept of strong pseudoprime, called ω-primes, is presented. As Grau et al. note, it is correct, but its application is limited because m-th primitive roots of unity are needed; see [7, 12]. In [7], it is defined when an integer N is a p-strong probable prime to base a, for p a prime divisor of N − 1 and gcd(a, N) = 1. In a reading of that paper, we discovered that if a number N is a p-strong probable prime to base 2 for each prime divisor p of N − 1, it is actually a Midy's number, or an overpseudoprime number, to base 2.
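For reference, the classical notion being generalized, a strong probable prime to base b, can be checked and its passing bases counted as follows. This is a sketch of the classical strong test only, not of the paper's q-version; the helper names are mine.

```python
# Sketch: the classical strong-probable-prime condition and a strong-liar count.
from math import gcd

def is_strong_probable_prime(N, b):
    """Is odd N > 2 a strong probable prime to base b (Miller-Rabin condition)?"""
    d, s = N - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    x = pow(b, d, N)
    if x == 1 or x == N - 1:
        return True
    for _ in range(s - 1):
        x = (x * x) % N
        if x == N - 1:
            return True
    return False

def strong_liar_bases(N):
    """Bases 2 <= b <= N-2, coprime to N, for which composite N passes the strong test."""
    return [b for b in range(2, N - 1)
            if gcd(b, N) == 1 and is_strong_probable_prime(N, b)]

print(strong_liar_bases(65))    # [8, 18, 47, 57]: the strong liars of 65 in that range
```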
A Scalable System-On-A-Chip Architecture for Prime Number Validation
A Scalable System-on-a-Chip Architecture for Prime Number Validation. Ray C.C. Cheung and Ashley Brown, Department of Computing, Imperial College London, United Kingdom. Abstract: This paper presents a scalable SoC architecture for prime number validation which targets reconfigurable hardware. The primality test is crucial for security systems, especially for most public-key schemes. The Rabin-Miller Strong Pseudoprime Test has been mapped into hardware, which makes use of a circuit for computing Montgomery modular exponentiation to further speed up the validation and to reduce the hardware cost. A design generator has been developed to generate a variety of scalable and non-scalable Montgomery multipliers based on user-defined parameters. The performance and resource usage of our designs, implemented in Xilinx reconfigurable devices, have been explored using the embedded PowerPC processor and the soft MicroBlaze processor.
This paper presents a scalable SoC architecture for prime number validation which targets reconfigurable hardware such as FPGAs. In particular, users are allowed to select predefined scalable or non-scalable modular operators for their designs [4]. Our main contributions include:
(1) Parallel designs for Montgomery modular arithmetic operations (Section 3).
(2) A scalable design method for mapping the Rabin-Miller Strong Pseudoprime Test into hardware (Section 4).
(3) An architecture of RAM-based Radix-2 Scalable Montgomery multiplier (Section 4).
(4) A design generator for producing hardware prime number validators based on user-specified parameters (Section 5).
(5) Implementation of the proposed hardware architectures in FPGAs, with an evaluation of its effectiveness compared with different size and speed tradeoffs [...]
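The Montgomery modular exponentiation at the heart of the design described in the abstract has a simple software analogue. The sketch below is mine, using big-integer arithmetic rather than the paper's radix-2 scalable datapath; it shows the reduction step (REDC) and the square-and-multiply loop that such hardware implements.

```python
# Sketch of Montgomery reduction and Montgomery-form exponentiation.
def montgomery_exp(x, K, N):
    """Compute x**K mod N for odd N using Montgomery arithmetic."""
    k = N.bit_length()
    R = 1 << k                          # R = 2^k > N; gcd(R, N) = 1 since N is odd
    N_prime = (-pow(N, -1, R)) % R      # N' with N * N' = -1 (mod R); Python 3.8+

    def redc(T):
        """Return T * R^-1 mod N, for 0 <= T < R*N."""
        m = ((T & (R - 1)) * N_prime) & (R - 1)
        t = (T + m * N) >> k
        return t - N if t >= N else t

    def mont_mul(a, b):
        return redc(a * b)              # (aR)(bR)R^-1 = abR mod N

    x_bar = (x * R) % N                 # Montgomery form of x
    acc = R % N                         # Montgomery form of 1
    for bit in bin(K)[2:]:
        acc = mont_mul(acc, acc)        # square for every exponent bit
        if bit == '1':
            acc = mont_mul(acc, x_bar)  # multiply when the bit is 1
    return redc(acc)                    # convert back out of Montgomery form

assert montgomery_exp(7, 560, 561) == pow(7, 560, 561)
```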