Parametrized Hardware Architectures for the Lucas Primality Test


Parametrized Hardware Architectures for the Lucas Primality Test

Adrien Le Masle, Wayne Luk
Department of Computing, Imperial College London, UK
{al1108,wl}@[email protected]

Csaba Andras Moritz
BlueRISC, Inc, Amherst, MA, USA
[email protected]

Abstract—We present our parametric hardware architecture of the NIST approved Lucas probabilistic primality test. To our knowledge, our work is the first hardware architecture for the Lucas test. Our main contributions are a hardware architecture for calculating the Jacobi symbol based on the binary Jacobi algorithm, a pipelined modular add-shift module for calculating the Lucas sequences, a dependencies analysis and a scheduling of the Lucas sequences computation. Our architecture implemented on a Virtex-5 FPGA is 30% slower but 3 times more energy efficient than the software version running on an Intel Xeon W3505. Our fastest 45 nm ASIC implementation is 3.6 times faster and 400 times more energy efficient than the optimised software implementation in comparable technology. The performance scaling of our architecture is much better than linear in area. Different speed/area/energy trade-offs are available through parametrization. The cell count and the power consumption of our ASIC implementations make them suitable for integration into an embedded system whereas our FPGA implementation would more likely benefit server applications.

I. INTRODUCTION

Many public-key cryptographic algorithms require large prime numbers in order to generate a key pair. This is for example the case of the Diffie-Hellman key exchange protocol or the most popular Rivest, Shamir and Adleman (RSA) encryption scheme. The common method for finding a large prime number consists in testing randomly generated numbers until a prime is determined. The AKS primality test [1] determines if a number is prime in polynomial time. This test is deterministic. However, due to the complexity of implementing this algorithm in hardware as well as in software, probabilistic tests are still in use.

A common primality tester first tries to divide the number under test by a few first primes, then performs a number of Miller-Rabin probabilistic tests [2]. The error probability of such a primality test depends on the bitwidth of the number under test and on the number of Miller-Rabin tests performed. The American National Institute of Standards and Technology (NIST) gives recommended values for different algorithms and different key sizes [3]. However, even with well-chosen parameters there is still a small probability that a composite number passes the primality test. Ultimately, a Lucas probabilistic test can be performed to ensure that the number is prime. In fact, as of today no composite number passing an appropriate number of Rabin-Miller tests plus a Lucas test is known.

FPGAs and ASICs are relevant platforms for cryptographic algorithm implementations. First, the structure of FPGAs makes them particularly fit for pipelined applications, which is the case for most of the basic cryptographic operations. Second, FPGAs and ASICs can be used to embed security into low power environments keeping very good performance. Finally, a pure hardware implementation of a cryptographic algorithm is inherently less vulnerable than its software counterparts, which are usually run in a multi-tasking operating system. For instance, software-based attacks such as cache-attacks [4] do not apply to hardware.

This paper presents the first hardware architecture for the NIST approved Lucas probabilistic primality test. Our main contributions include:
• A hardware architecture for calculating the Jacobi symbol based on the binary Jacobi algorithm
• A pipelined modular add-shift module for calculating the Lucas sequences
• A dependencies analysis and a scheduling of the Lucas sequences computation
• An implementation of the proposed design on a Xilinx Virtex-5 FPGA and in TSMC 65 nm and 45 nm ASIC processes
• A comparison of the performance of our hardware implementations with an optimised software implementation in terms of speed and energy efficiency

Our architecture implemented on a Virtex-5 FPGA is 30% slower but 3 times more energy efficient than the software version running on an Intel Xeon W3505. Our design is scalable and the performance scaling of our architecture is much better than linear in area. Hence we expect the energy efficiency of our FPGA designs would improve with advances in technology and area available. Our 65 nm ASIC implementation is more than 2 times faster and 40 times more energy efficient than the FPGA implementation. Our fastest 45 nm ASIC implementation is 3.6 times faster and 400 times more energy efficient than the optimised software implementation.

The support of BlueRISC, Alpha Data, Xilinx, UK EPSRC and the HiPEAC NoE is gratefully acknowledged. The research leading to these results has received funding from the European Union Seventh Framework Programme under grant agreement numbers 248976 and 257906.

The rest of the paper is organised as follows. Section II explains the background relevant to our work. In section III, we present the challenges we face when designing the Lucas primality tester and how we solve them. In section IV, we compare our software, FPGA and ASIC implementations. Finally, section V concludes the paper.

II. BACKGROUND

The Lucas primality test is a probabilistic test based on the properties of the Lucas sequences. We first introduce the Lucas sequences and the Lucas theorem. Then we describe the Lucas primality test algorithm.

A. Lucas Sequences

The Lucas sequences U(a, b) and V(a, b) of the pair (a, b) are the sequences:

  U(a, b) = (U_0(a, b), U_1(a, b), U_2(a, b), ...)    (1)
  V(a, b) = (V_0(a, b), V_1(a, b), V_2(a, b), ...)    (2)

such that for each k ≥ 0:

  U_k(a, b) = (α^k − β^k) / (α − β)    (3)
  V_k(a, b) = α^k + β^k    (4)

where α and β are the two roots of the quadratic equation x^2 − ax + b = 0, with a, b chosen such that a, b ≠ 0 and the discriminant D = a^2 − 4b ≠ 0 [5].

Let us define the Jacobi symbol (a/n). For an integer a and p prime, we have:

  (a/p) = 0   if a = 0 mod p
  (a/p) = 1   if a ≠ 0 mod p and x^2 = a mod p is soluble for some integer x
  (a/p) = −1  if a ≠ 0 mod p and x^2 = a mod p is not soluble

For an integer n with prime decomposition n = p_1^α1 · p_2^α2 · ... · p_k^αk, we have:

  (a/n) = (a/p_1)^α1 · (a/p_2)^α2 · ... · (a/p_k)^αk

The Lucas theorem is defined as follows [5].

Lucas Theorem: Let a, b, D, and U_k be as above. If n is prime, gcd(b, n) = 1 and (D/n) = −1, then n divides U_{n+1}.

The Lucas test is based on the contrapositive of the Lucas theorem stating that if, under the previous assumptions, an odd positive integer n does not divide U_{n+1}, then n is composite.

B. Lucas Test Algorithms

A simple algorithm performing a Lucas test is given in Alg. 1. Note that a Lucas test alone is not enough to determine whether a number is prime with a low probability of error. It should be combined with another probabilistic primality test such as the Miller-Rabin test.

Algorithm 1: Simple Lucas test
  Input: n odd integer
  Output: composite if n is composite, probably prime if n is probably prime
  1: Choose a, b ≠ 0 such that D = a^2 − 4b ≠ 0, gcd(b, n) = 1 and (D/n) = −1
  2: Compute U_{n+1}(a, b)
  3: if U_{n+1}(a, b) mod n = 0 then
  4:   return probably prime
  5: else
  6:   return composite

Consider line 1 of Alg. 1. Different methods to choose a, b and D are given in [6], [7]. In particular, it is shown that D should not be a square modulo n; in that case, the Lucas test would be an ordinary primality test. That is in fact ensured by the condition (D/n) = −1. A method proposed by Selfridge [7] consists in choosing D as the first element in the sequence {5, −7, 9, −11, 13, ...} such that (D/n) = −1, a = 1 and b = (1 − D)/4. If n is square, it can be shown that (D/n) > −1 for any D. Hence, so that the algorithm terminates, we have to check if n is a perfect square either before looking for a correct D or after a few trials for D.

Consider line 2. Several methods can be used to compute U_{n+1}. A simple method is to use equation 3 directly. However, this method is not efficient as it requires two exponentiations and possibly one division. Instead we can use a recurrence relation. For example, we can easily show that:

  U_k(a, b) = a·U_{k−1} − b·U_{k−2}    (5)
  V_k(a, b) = a·V_{k−1} − b·V_{k−2}    (6)

In [8], a more complex recurrence relation is used to come up with an efficient method to compute the Lucas sequences:

  U_{i+j}(a, b) = U_i·V_j − b^j·U_{i−j}    (7)
  V_{i+j}(a, b) = V_i·V_j − b^j·V_{i−j}    (8)

The American National Institute of Standards and Technology (NIST) approved Lucas probabilistic primality test is given in Alg. 2. The NIST algorithm first checks if n is a perfect square. If not, it uses the algorithm described by Selfridge to find a correct value for D. This test is sometimes called the Lucas-Selfridge probabilistic primality test. Lines 4-5 rely on the fact that if (D/n) = 0, a factor of n exists [6] and we can therefore stop the test. Lines 6 to 18 compute U_{n+1}(a, b), with a = 1 and b = (1 − D)/4, using a modified version of the method presented in [8]. After r iterations,
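As a concrete reference for the algorithms above, the Selfridge parameter search (Alg. 1, line 1) and a recurrence-based computation of U_{n+1} (line 2) can be sketched in software. This is our own illustrative Python sketch, not the paper's hardware design; the helper names are ours, and it assumes an odd input n > 13 (smaller inputs would be handled by trial division first):

```python
import math

def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0, via quadratic reciprocity."""
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:            # factor out 2s: (2/n) = -1 iff n = 3, 5 (mod 8)
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a                  # reciprocity: flip sign iff both = 3 (mod 4)
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def selfridge_d(n):
    """First D in 5, -7, 9, -11, ... with (D/n) = -1; None if (D/n) = 0."""
    d = 5
    while True:
        j = jacobi(d, n)
        if j == -1:
            return d
        if j == 0:
            return None              # gcd(|D|, n) > 1: a factor of n exists
        d = -(d + 2) if d > 0 else -(d - 2)

def lucas_u(d, n):
    """U_{n+1}(1, (1-D)/4) mod n, computed by standard doubling identities."""
    p, q = 1, (1 - d) // 4
    u, v, qk = 1, p, q % n           # U_1, V_1, Q^1
    for bit in bin(n + 1)[3:]:       # left-to-right over the bits of n+1
        # k -> 2k: U_2k = U_k V_k, V_2k = V_k^2 - 2 Q^k
        u, v, qk = (u * v) % n, (v * v - 2 * qk) % n, (qk * qk) % n
        if bit == '1':               # 2k -> 2k+1; halving mod n uses that n is odd
            u2, v2 = u * p + v, d * u + p * v
            u = ((u2 + n if u2 % 2 else u2) // 2) % n
            v = ((v2 + n if v2 % 2 else v2) // 2) % n
            qk = (qk * q) % n
    return u

def lucas_test(n):
    """Simple Lucas-Selfridge test in the style of Alg. 1; assumes odd n > 13."""
    if math.isqrt(n) ** 2 == n:
        return "composite"           # perfect squares never give (D/n) = -1
    d = selfridge_d(n)
    if d is None:
        return "composite"           # Jacobi symbol 0 revealed a factor
    return "probably prime" if lucas_u(d, n) == 0 else "composite"
```

Composites such as 323 = 17 · 19 still pass this test, which is why it is combined with Miller-Rabin rounds in practice, as the text notes.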
Recommended publications
  • An Analysis of Primality Testing and Its Use in Cryptographic Applications
    An Analysis of Primality Testing and Its Use in Cryptographic Applications Jake Massimo Thesis submitted to the University of London for the degree of Doctor of Philosophy Information Security Group Department of Information Security Royal Holloway, University of London 2020 Declaration These doctoral studies were conducted under the supervision of Prof. Kenneth G. Paterson. The work presented in this thesis is the result of original research carried out by myself, in collaboration with others, whilst enrolled in the Department of Mathematics as a candidate for the degree of Doctor of Philosophy. This work has not been submitted for any other degree or award in any other university or educational establishment. Jake Massimo April, 2020 Abstract Due to their fundamental utility within cryptography, prime numbers must be easy to both recognise and generate. For this, we depend upon primality testing. Whether used as a tool to validate prime parameters or as part of the algorithm used to generate random prime numbers, primality tests are found near universally within a cryptographer's tool-kit. In this thesis, we study in depth primality tests and their use in cryptographic applications. We first provide a systematic analysis of the implementation landscape of primality testing within cryptographic libraries and mathematical software. We then demonstrate how these tests perform under adversarial conditions, where the numbers being tested are not generated randomly, but instead by a possibly malicious party. We show that many of the libraries studied provide primality tests that are not prepared for testing on adversarial input, and therefore can declare composite numbers as being prime with a high probability.
  • FACTORING COMPOSITES TESTING PRIMES Amin Witno
    WON Series in Discrete Mathematics and Modern Algebra Volume 3 FACTORING COMPOSITES TESTING PRIMES Amin Witno Preface These notes were used for the lectures in Math 472 (Computational Number Theory) at Philadelphia University, Jordan. The module was aborted in 2012, and since then this last edition has been preserved and updated only for minor corrections. These are outline notes, more like a revision aid; no student is expected to fully benefit from them unless they have regularly attended the lectures. 1 The RSA Cryptosystem Sensitive messages, when transferred over the internet, need to be encrypted, i.e., changed into a secret code in such a way that only the intended receiver who has the secret key is able to read it. It is common that alphabetical characters are converted to their numerical ASCII equivalents before they are encrypted, hence the coded message will look like integer strings. The RSA algorithm is an encryption-decryption process which is widely employed today. In practice, the encryption key can be made public, and doing so will not risk the security of the system. This feature is a characteristic of the so-called public-key cryptosystem. Ali selects two distinct primes p and q which are very large, over a hundred digits each. He computes n = pq, ϕ = (p − 1)(q − 1), and determines a rather small number e which will serve as the encryption key, making sure that e has no common factor with ϕ. He then chooses another integer d < n satisfying de mod ϕ = 1; this d is his decryption key. When all is ready, Ali gives to Beth the pair (n, e) and keeps the rest secret.
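The key setup described in the abstract above can be made concrete with toy-sized numbers (our own illustration; real RSA moduli use primes hundreds of digits long):

```python
# Toy RSA key setup with small primes (illustrative only).
p, q = 61, 53
n = p * q                  # 3233, the public modulus
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public encryption key, gcd(e, phi) = 1
d = pow(e, -1, phi)        # decryption key: d*e = 1 (mod phi)

m = 1234                   # a message, encoded as an integer < n
c = pow(m, e, n)           # encrypt: c = m^e mod n
assert pow(c, d, n) == m   # decrypt with m = c^d mod n recovers the message
```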
  • Primality Test
    Primality test A primality test is an algorithm for determining whether an input number is prime. Amongst other fields of mathematics, it is used for cryptography. Unlike integer factorization, primality tests do not generally give prime factors, only stating whether the input number is prime or not. Factorization is thought to be a computationally difficult problem, whereas primality testing is comparatively easy (its running time is polynomial in the size of the input). Some primality tests prove that a number is prime, while others like Miller–Rabin prove that a number is composite. Therefore the latter might be called compositeness tests instead of primality tests. Every integer can be expressed as (6k + i) for some integer k and for i = −1, 0, 1, 2, 3, or 4; 2 divides (6k + 0), (6k + 2), (6k + 4); and 3 divides (6k + 3). So a more efficient method is to test if n is divisible by 2 or 3, then to check through all the numbers of form 6k ± 1 ≤ √n. This is 3 times as fast as testing all m. Generalising further, it can be seen that all primes are of the form c#k + i for i < c#, where i represents the numbers that are coprime to c# and where c and k are integers. For example, let c = 6. Then c# = 2 · 3 · 5 = 30. All integers are of the form 30k + i for i = 0, 1, 2, ..., 29 and k an integer. However, 2 divides 0, 2, 4, ..., 28 and 3 divides 0, 3, 6, ..., 27 and 5 divides 0, 5, 10, ..., 25.
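The 6k ± 1 optimisation described above amounts to the following trial-division routine (a minimal sketch of ours, not code from the article):

```python
def is_prime_trial(n):
    """Trial division: test 2 and 3, then only candidates of form 6k +/- 1."""
    if n < 2:
        return False
    if n < 4:
        return True                     # 2 and 3 are prime
    if n % 2 == 0 or n % 3 == 0:
        return False
    f = 5                               # f and f+2 are 6k-1 and 6k+1
    while f * f <= n:                   # only divisors up to sqrt(n) matter
        if n % f == 0 or n % (f + 2) == 0:
            return False
        f += 6
    return True
```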
  • Elementary Number Theory
    Elementary Number Theory Peter Hackman HHH Productions November 5, 2007 © P Hackman, 2007. Contents: Preface; A Divisibility, Unique Factorization (A.I The gcd and Bézout; A.II Two Divisibility Theorems; A.III Unique Factorization; A.IV Residue Classes, Congruences; A.V Order, Little Fermat, Euler; A.VI A Brief Account of RSA); B Congruences, The CRT (B.I The Chinese Remainder Theorem; B.II Euler's Phi Function Revisited; *B.III General CRT; B.IV Application to Algebraic Congruences; B.V Linear Congruences; B.VI Congruences Modulo a Prime; B.VII Modulo a Prime Power); C Primitive Roots (C.I False Cases Excluded; C.II Primitive Roots Modulo a Prime; C.III Binomial Congruences; C.IV Prime Powers; C.V The Carmichael Exponent; *C.VI Pseudorandom Sequences; C.VII Discrete Logarithms; *C.VIII Computing Discrete Logarithms); D Quadratic Reciprocity (D.I The Legendre Symbol; D.II The Jacobi Symbol; D.III A Cryptographic Application; D.IV Gauß' Lemma; D.V The "Rectangle Proof"; D.VI Gerstenhaber's Proof; *D.VII Zolotareff's Proof); E Some Diophantine Problems (E.I Primes as Sums of Squares; E.II Composite Numbers; E.III Another Diophantine Problem; E.IV Modular Square Roots; E.V Applications); F Multiplicative Functions (F.I Definitions and Examples; F.II The Dirichlet Product ...)
  • Primality Testing : a Review
    Primality Testing : A Review Debdas Paul April 26, 2012 Abstract Primality testing was one of the greatest computational challenges for the theoretical computer scientists and mathematicians until 2004 when Agrawal, Kayal and Saxena proved that the problem belongs to complexity class P. This famous algorithm is named after the above three authors and is now called the AKS Primality Test, having run time complexity Õ(log^10.5 (n)). Further improvement has been done by Carl Pomerance and H. W. Lenstra Jr. by showing a variant of the AKS primality test has running time Õ(log^7.5 (n)). In this review, we discuss the gradual improvements in primality testing methods which lead to the breakthrough. We also discuss further improvements by Pomerance and Lenstra. 1 Introduction "The problem of distinguishing prime numbers from composite numbers and of resolving the latter into their prime factors is known to be one of the most important and useful in arithmetic. It has engaged the industry and wisdom of ancient and modern geometers to such an extent that it would be superfluous to discuss the problem at length. Further, the dignity of the science itself seems to require that every possible means be explored for the solution of a problem so elegant and so celebrated." - Johann Carl Friedrich Gauss, 1781 A number is a mathematical object which is used to count and measure any quantifiable object. The numbers which are developed to quantify natural objects are called Natural Numbers, for example 1, 2, 3, 4, ... Further, we can partition the set of natural numbers into three disjoint sets: {1}, the set of prime numbers, and the set of composite numbers.
  • Average Case Error Estimates of the Strong Lucas Probable Prime Test
    Average Case Error Estimates of the Strong Lucas Probable Prime Test Master Thesis Semira Einsele Monday 27th July, 2020 Advisors: Prof. Dr. Kenneth Paterson, Prof. Dr. Özlem Imamoglu Department of Mathematics, ETH Zürich Abstract Generating and testing large prime numbers is crucial in many public-key cryptography algorithms. A common choice of a probabilistic primality test is the strong Lucas probable prime test, which is based on the Lucas sequences. Throughout this work, we estimate bounds for the average error behaviour of this test. To do so, let us consider a procedure that draws k-bit odd integers independently from the uniform distribution, subjects each number to t independent iterations of the strong Lucas probable prime test with randomly chosen bases, and outputs the first number that passes all t tests. Let q_{k,t} denote the probability that this procedure returns a composite number. We show that q_{k,1} < log(k) k^2 4^(2.3−√k) for k ≥ 2. We see that slightly modifying the procedure, using trial division by the first l odd primes, gives remarkable improvements in this error analysis. Let q_{k,l,t} denote the probability that the now modified procedure returns a composite number. We show that q_{k,128,1} < k^2 4^(1.87727−√k) for k ≥ 2. We also give general bounds for both q_{k,t} and q_{k,l,t} when t ≥ 2, k ≥ 21 and l ∈ N. In addition, we treat the numbers that add the most to our probability estimate differently in the analysis, and give improved bounds for large t.
  • A Performant, Misuse-Resistant API for Primality Testing
    Research Collection Conference Paper A Performant, Misuse-Resistant API for Primality Testing Author(s): Massimo, Jake; Paterson, Kenneth G. Publication Date: 2020-10 Permanent Link: https://doi.org/10.3929/ethz-b-000409319 Originally published in: http://doi.org/10.1145/3372297.3417264 Rights / License: In Copyright - Non-Commercial Use Permitted A Performant, Misuse-Resistant API for Primality Testing Jake Massimo Kenneth G. Paterson [email protected] [email protected] Royal Holloway, University of London Department of Computer Science, ETH Zurich ABSTRACT Primality testing is a basic cryptographic task. But developers today are faced with complex APIs for primality testing, along with documentation that fails to clearly state the reliability of the tests being performed. This leads to the APIs being incorrectly used in practice, with potentially disastrous consequences. In an effort to overcome this, we present a primality test having a simplest-possible API: the test accepts a number to be tested and returns a Boolean indicating whether the input was composite or probably prime. [...] parameter sets (see also Bleichenbacher [9] for an earlier example involving the GNU Crypto library). The main underlying issue identified in [3] is that, while all libraries examined performed well on random inputs, some failed miserably on maliciously crafted ones in their default settings. Meanwhile code documentation was generally poor and did not distinguish clearly between the different use cases. And developers were faced with complex APIs requiring them to understand the
  • Fast Primality Testing for Integers That Fit Into a Machine Word⋆
    Fast Primality Testing for Integers That Fit into a Machine Word Michal Forišek and Jakub Jančina Comenius University, Bratislava, Slovakia, [email protected], [email protected] Abstract. For large integers, the most efficient primality tests are probabilistic. However, for integers with a small fixed number of bits the best tests in practice are deterministic. Currently the best known tests of this type involve 3 rounds of the Miller-Rabin test for 32-bit integers and 7 rounds for 64-bit integers. Our main result in this paper: For 32-bit integers we reduce this to a single computation of a simple hash function and a single round of Miller-Rabin. Similarly, for 64-bit integers we can reduce the number of rounds to two (but with a significantly large precomputed table) or three. Up to our knowledge, our implementations are the fastest one-shot deterministic primality tests for 32-bit and 64-bit integers to date. We also provide empirical evidence that our algorithms are fast in practice and that the data segment is roughly as small as possible for an algorithm of this type. 1 Overview In this section we give an overview of relevant topics related to primality testing. 1.1 Large Primes, Deterministic Tests Since the seminal result by Agrawal, Kayal, and Saxena in 2002, we know that primality testing is in P. Their original paper [1] gives an algorithm that runs in Õ(log^12 n) when testing whether n is a prime. A more recent paper by Pomerance and Lenstra [10] improves this to Õ(log^6 n).
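The fixed-base deterministic approach discussed above can be illustrated as follows. This is our own Python sketch, unrelated to the paper's hash-based construction; it uses the known 3-base set {2, 7, 61}, which is valid for all 32-bit integers:

```python
def miller_rabin(n, a):
    """One round of Miller-Rabin: is odd n a strong probable prime to base a?"""
    if n % a == 0:
        return n == a
    d, s = n - 1, 0
    while d % 2 == 0:                # write n - 1 = 2^s * d with d odd
        d //= 2
        s += 1
    x = pow(a, d, n)
    if x == 1 or x == n - 1:
        return True
    for _ in range(s - 1):           # square repeatedly, looking for -1
        x = x * x % n
        if x == n - 1:
            return True
    return False

def is_prime_u32(n):
    """Deterministic for n < 2**32 via the 3-base set {2, 7, 61}."""
    if n < 2:
        return False
    if n < 4:
        return True
    if n % 2 == 0:
        return False
    return all(miller_rabin(n, a) for a in (2, 7, 61))
```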
  • Primality Testing: Then and Now
    Seventy-five years of Mathematics of Computation ICERM, November 1-3, 2018 Primality testing: then and now Carl Pomerance Dartmouth College, Emeritus University of Georgia, Emeritus In 1801, Carl Friedrich Gauss wrote: "The problem of distinguishing prime numbers from composite numbers, and of resolving the latter into their prime factors, is known to be one of the most important and useful in arithmetic. It has engaged the industry and wisdom of ancient and modern geometers to such an extent that it would be superfluous to discuss the problem at length. Nevertheless we must confess that all methods that have been proposed thus far are either restricted to very special cases or are so laborious and difficult that even for numbers that do not exceed the limits of tables constructed by estimable men, they try the patience of even the practiced calculator. And these methods do not apply at all to larger numbers... Further, the dignity of science itself seems to require that every possible means be explored for the solution of a problem so elegant and so celebrated." Two elementary theorems: Wilson: If p is prime, then (p − 1)! ≡ −1 (mod p). Fermat: If p is prime and p ∤ a, then a^(p−1) ≡ 1 (mod p). How efficient are these as primality criteria? It would seem neither, since they both involve gigantic numbers when p is large. For Fermat, the repeated squaring algorithm is quite efficient: a^k mod n = (a^(k/2) mod n)^2 mod n, if k is even; a^k mod n = a · (a^((k−1)/2) mod n)^2 mod n, if k is odd.
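The repeated squaring recurrence quoted above translates directly into code (our own sketch):

```python
def power_mod(a, k, n):
    """a^k mod n by repeated squaring, following the even/odd recurrence."""
    if k == 0:
        return 1 % n
    if k % 2 == 0:
        half = power_mod(a, k // 2, n)       # a^(k/2) mod n
        return (half * half) % n
    half = power_mod(a, (k - 1) // 2, n)     # a^((k-1)/2) mod n
    return (a * half * half) % n

def fermat_test(n, a):
    """Fermat's criterion: a^(n-1) = 1 (mod n) holds whenever n is prime, p does not divide a."""
    return power_mod(a, n - 1, n) == 1
```

Note that the converse fails: composites such as 341 = 11 · 31 satisfy Fermat's criterion to base 2, which is why stronger tests like Miller-Rabin are used.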
  • JMM 2016 Student Poster Session Abstract Book
    Abstracts for the MAA Undergraduate Poster Session Seattle, WA January 8, 2016 Organized by Joyati Debnath Winona State University CELEBRATING A CENTURY OF ADVANCING MATHEMATICS Organized by the MAA Committee on Undergraduate Student Activities and Chapters and CUPM Subcommittee on Research by Undergraduates Dear Students, Advisors, Judges and Colleagues, If you look around today you will see about 341 posters and 551 presenters, record numbers, once again. It is so rewarding to see this session, which offers such a great opportunity for interaction between students and professional mathematicians, continue to grow. The judges you see here today are professional mathematicians from institutions around the world. They are advisors, colleagues, new Ph.D.s, and administrators. We have acknowledged many of them in this booklet; however, many judges here volunteered on site. Their support is vital to the success of the session and we thank them. We are supported financially by the National Science Foundation, Tudor Investments, and Two Sigma. We are also helped by the members of the Committee on Undergraduate Student Activities and Chapters (CUSAC) in some way or other. They are: Jiehua Zhu; Pamela A. Richardson; Jennifer Schaefer; Lisa Marano; Dora C. Ahmadi; Andy Niedermaier; Benjamin Galluzzo; Eve Torrence; Gerard A. Venema; Jennifer Bergner; Jim Walsh; Kristina Cole Garrett; May Mei; Dr. Richard Neal; TJ Hitchman; William J. Higgins; and Zsuzsanna Szaniszlo. There are many details of the poster session, beginning with putting out the advertisement in FOCUS, ensuring students have travel money, making sure online submission works properly, and organizing poster boards and tables in the room we are in today, that are attributed to Gerard Venema (MAA Associate Secretary), Linda Braddy (MAA), and Penny Pina (AMS).
  • A Survey of Primality Tests
    A SURVEY OF PRIMALITY TESTS STEFAN LANCE Abstract. In this paper, we show how modular arithmetic and Euler's totient function are applied to elementary number theory. In particular, we use only arithmetic methods to prove many facts that are fundamental to the study of prime numbers. These proofs lead us to examine the theorems governing several simple primality tests. Contents: 1. Introduction; 2. Modular Arithmetic and The Fundamental Theorem of Arithmetic; 3. Applications of The Fundamental Theorem of Arithmetic; 4. Euler's Totient Function; 5. Reduced Residue Systems, Euler's Totient Theorem, and Order; 6. Primality Tests; Acknowledgments; References. 1. Introduction The goal of this paper is to provide proofs of several theorems and lemmas essential to introductory number theory and to expose some basic primality tests. The paper will therefore establish a basis for further exploring number theory. We first review properties of modular arithmetic and the fundamental theorem of arithmetic and use them to explore elementary properties of prime numbers. This naturally requires the investigation of Euler's totient (or phi) function and related topics, such as reduced residue systems and order. We finish by using these results to prove the theorems behind some basic primality tests. This paper relies upon and provides detailed explanations of arithmetic rather than analytic methods, so we assume relatively little. Remark 1.1. For the sake of reducing redundancy, we do not consider negative integers in the following proofs. Adjusting these proofs to apply to the negative integers is trivial and not significant for the results we prove.
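Euler's totient function, central to the survey above, can be computed from the factorisation of n using φ(n) = n · ∏ (1 − 1/p) over the distinct prime divisors p of n (our own sketch, not code from the paper):

```python
def euler_phi(n):
    """Euler's totient: count of 1 <= k <= n with gcd(k, n) = 1."""
    result, p, m = n, 2, n
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:        # strip the full power of p from m
                m //= p
            result -= result // p    # multiply result by (1 - 1/p)
        p += 1
    if m > 1:                        # one prime factor > sqrt(n) may remain
        result -= result // m
    return result
```

Euler's totient theorem then states that a^φ(n) ≡ 1 (mod n) whenever gcd(a, n) = 1, e.g. 3^φ(10) = 3^4 = 81 ≡ 1 (mod 10).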
  • Prime and Prejudice: Primality Testing Under Adversarial Conditions
    Prime and Prejudice: Primality Testing Under Adversarial Conditions Martin R. Albrecht1, Jake Massimo1, Kenneth G. Paterson1, and Juraj Somorovsky2 1 Royal Holloway, University of London 2 Ruhr University Bochum, Germany [email protected], [email protected], [email protected], [email protected] Abstract. This work provides a systematic analysis of primality testing under adversarial conditions, where the numbers being tested for primality are not generated randomly, but instead provided by a possibly malicious party. Such a situation can arise in secure messaging protocols where a server supplies Diffie-Hellman parameters to the peers, or in a secure communications protocol like TLS where a developer can insert such a number to be able to later passively spy on client-server data. We study a broad range of cryptographic libraries and assess their performance in this adversarial setting. As examples of our findings, we are able to construct 2048-bit composites that are declared prime with probability 1/16 by OpenSSL's primality testing in its default configuration; the advertised performance is 2^−80. We can also construct 1024-bit composites that always pass the primality testing routine in GNU GMP when configured with the recommended minimum number of rounds. And, for a number of libraries (Cryptlib, LibTomCrypt, JavaScript Big Number, WolfSSL), we can construct composites that always pass the supplied primality tests. We explore the implications of these security failures in applications, focusing on the construction of malicious Diffie-Hellman parameters. We show that, unless careful primality testing is performed, an adversary can supply parameters (p, q, g) which on the surface look secure, but where the discrete logarithm problem in the subgroup of order q generated by g is easy.