Università degli Studi di Trento
Dipartimento di Matematica
Degree course in Mathematics

AKS, the proof of "PRIMES is in P"
Correctness and time complexity analysis, along with a Java implementation

Supervisor: Prof.ssa Alessandra Bernardi
Candidate: Stefano Pellegrini (Student ID: 174856)
Academic year 2016/2017

Abstract

We present the algorithm described in the paper "PRIMES is in P" by M. Agrawal, N. Kayal and N. Saxena. The first section provides the historical context in which the algorithm arose, while the second introduces the reader to the notation and preliminary results used. Sections 3 and 4 contain the correctness and time complexity analyses. Finally, the Java implementation occupies Section 5.

Contents

1 Historical introduction on primality testing ..... 4
  1.1 Some historically relevant primality tests ..... 5
2 Notation and preliminary results ..... 7
3 The algorithm and its correctness ..... 10
4 Time complexity analysis ..... 15
5 Java implementation ..... 16

Acknowledgements

I would like to thank some people without whom, for various reasons, this work would not have been possible. First and foremost I am grateful to Prof. Alessandra Bernardi for her endless patience and continuous encouragement. I would also like to thank Prof. Alberto Montresor for his precious help with the Java implementation, and all my family for always supporting me.

"The world of numbers is full of surprises, and discovering them is one of the pleasures of the study of mathematics."
Philip J. Davis, Il mondo dei grandi numeri, 1961

1 Historical introduction on primality testing

Prime numbers have always played a key role throughout the history of arithmetic, since the works of the great ancient Greek mathematicians. At first sight, though, there is nothing special about a number having exactly two natural divisors; there is no evident reason why 421 should be more significant than 403. Why are primes considered so important, then? Why have they absorbed so much time and energy from academics such as Euler, Gauss and Fermat? And the interest has not waned in recent years: the Electronic Frontier Foundation offers a prize of 150 thousand dollars to whoever finds a prime larger than the largest currently known (2^77,232,917 - 1, a number with 23,249,425 digits).

Definition 1.1. A natural number n > 1 is called "prime" if for every pair of naturals a, b such that n divides a · b, then n divides either a or b.

Definition 1.2. A natural number is called "irreducible" if it has exactly two distinct divisors: 1 and itself.

Theorem 1.3. A natural number is prime if and only if it is irreducible.

This is a classical result which follows from the fact that N is a unique factorisation domain (a unique factorisation domain is a commutative ring in which the zero-product property and the fundamental theorem of arithmetic for irreducibles (1.5) hold). Since the two notions are equivalent, we shall use "prime" and "irreducible" interchangeably.

Most of the relevance of the set of primes is given by the following two properties:

Theorem 1.4. There is no greatest prime number.

The fact that the primes are infinite was proven by Euclid in his "Elements", composed around 300 B.C., with an elegant proof by contradiction which we briefly present:

Proof. Suppose that the set of all primes is finite and let it be P = {p_1, ..., p_n}. Consider

    p := p_1 · ... · p_n + 1 = (∏_{i=1}^n p_i) + 1.

Then p ∉ P, and p is not divisible by any of the p_i ∈ P, since each p_i divides p - 1 = p_1 · ... · p_n.
Hence p is either itself prime or divisible by a prime not in P; in both cases we have found another prime number, showing a contradiction in the starting hypothesis.

Theorem 1.5 (Fundamental theorem of arithmetic). Each natural number greater than 1 can be written as a product of primes, and this factorisation is unique up to the order of the factors.

The first complete proof of this theorem was given by Carl Friedrich Gauss in the Disquisitiones Arithmeticae [1], published in 1801.

Theorem 1.5 leads prime numbers to be seen as the multiplicative "building blocks" of all naturals, in the sense that every number, except 0 and 1, can be built by multiplying primes; the factorisation is somehow the signature of each integer. What may be surprising is that this set of "building blocks" is infinite, as stated by Theorem 1.4, whereas, for example, the whole set of naturals can be constructed additively using only the unit (1, 2 = 1 + 1, 3 = 1 + 1 + 1, ...).

One may now wonder whether the prime numbers form a logical sequence, in the sense that, given the first n of them, the (n+1)-th can easily be found. Mathematicians have searched for an answer for centuries without success, and even though several achievements have been reached, finding prime numbers is still a hard task; surprisingly, this turns out to be the main reason for their contemporary practical use. In fact, most of the information security systems that almost everyone uses nowadays, such as internet passwords, online payments and ATMs, are based on cryptographic systems that rely on prime numbers. As an example, if Alice chooses two big primes and sends their product to Bob, he can send her a message that requires knowing the two starting numbers to be decrypted. Eve, who wants to read their conversation, has to factorise that product; if it is large enough, this work would take some million years, causing her to fail in her evil attempt.

Luckily or not, despite the great effort spent by the mathematical community, no deterministic structure has been found in the succession of primes, which means that the only way to produce a prime number is to choose a random one and test whether it is prime (quite surprisingly, this is the main reason why primes are currently used in cryptography: if the distribution of primes were ever found, most digital security systems would quickly become useless). Primality tests are therefore a central issue in number theory, and since a simple - but slow - test follows straightforwardly from the definition, the efficiency of a test is a feature of basic importance.

1.1 Some historically relevant primality tests

The Sieve of Eratosthenes, invented in the 3rd century B.C. by the famous Greek mathematician who also measured the circumference of the Earth quite precisely, is not a proper primality test. It can produce all the prime numbers up to a given integer by cutting out, at each step, all the proper multiples of the smallest "not sieved" number, which is surely prime. For example, to find the primes up to 20, we take 2 as the first prime number and sieve out all of its proper multiples. The smallest number not yet considered is then 3, which is our next prime, and we cut out all of its multiples. The next one is 5, and we continue this way. This algorithm can be implemented quite easily and allowed the production of the first prime tables; however, it is not really efficient when dealing with really large numbers.
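To make the procedure just described concrete, the following is a minimal Java sketch of the sieve. It is only illustrative and is not the implementation discussed in Section 5; the class and method names are chosen here purely for the example.

```java
import java.util.ArrayList;
import java.util.List;

public class SieveExample {

    // Returns all primes up to n by repeatedly taking the smallest number
    // not yet sieved out (which is necessarily prime) and cutting out all
    // of its proper multiples, exactly as described above.
    static List<Integer> primesUpTo(int n) {
        boolean[] sievedOut = new boolean[n + 1];
        List<Integer> primes = new ArrayList<>();
        for (int p = 2; p <= n; p++) {
            if (!sievedOut[p]) {
                primes.add(p);
                for (long m = 2L * p; m <= n; m += p) {
                    sievedOut[(int) m] = true; // proper multiple of p
                }
            }
        }
        return primes;
    }

    public static void main(String[] args) {
        // Reproduces the example in the text: primes up to 20.
        System.out.println(primesUpTo(20)); // [2, 3, 5, 7, 11, 13, 17, 19]
    }
}
```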
The increasing need for efficiency led to the development of a series of tests based on several number-theoretic properties, such as the so-called Fermat little theorem. Some of them are probabilistic, in the sense that they cannot certify primality but only bound the probability that the tested number is composite. The greatest advantage of such tests is indeed their time complexity, which makes them the most used in practice, but since they cannot give a certain answer they have little theoretical value. The simplest of this kind is the Fermat primality test, which checks whether p divides a^(p-1) - 1; if this is false we are certain that p is composite, otherwise nothing can be said. Repeating this operation for n values of a bounds the probability of p being composite by 1/2^n; for example, if we verify that p divides a^(p-1) - 1 for 10 distinct values of a, then p is composite with probability less than 0.001. Several more sophisticated variants of this test, such as Miller-Rabin [2] and Solovay-Strassen [3], improve its reliability.

There is also a class of primality tests for which currently neither a proof nor a counterexample is known, and which are hence called heuristic; an example is the Baillie-PSW test [4]. No composite number below 2^64 passes this test and no larger counterexamples are known, although they are conjectured to be infinitely many.

As for deterministic tests, the ones that give a certain output, several results have been achieved as well. Some of them focus only on numbers of a particular form, obtaining significant gains in efficiency, which makes them the most suitable for searching for really big primes. Pepin's test [5], for instance, works on Fermat numbers, namely naturals of the form F_n = 2^(2^n) + 1. In this case it suffices to verify whether F_n divides 3^((F_n - 1)/2) + 1 to decide the primality of the n-th Fermat number F_n. Although this computation can be carried out very rapidly, Fermat numbers grow so quickly that only a few of them can be tested in a reasonable amount of time.

The test that provided most of the greatest known primes, such as the 2^77,232,917 - 1 mentioned above, was developed in the 1870s by the French mathematician Édouard Lucas and optimised by Derrick Henry Lehmer in 1930 [6].
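Going back to the Fermat test described above, a minimal Java sketch using BigInteger modular exponentiation may help fix ideas. It is only an illustration of the idea under the stated assumptions (random bases, no special handling of Carmichael numbers) and is unrelated to the implementation of Section 5; the class and method names are invented for this example.

```java
import java.math.BigInteger;
import java.util.Random;

public class FermatTestExample {

    private static final BigInteger TWO = BigInteger.valueOf(2);

    // Fermat probable-prime test: pick `rounds` random bases a and check
    // whether p divides a^(p-1) - 1, i.e. a^(p-1) ≡ 1 (mod p).
    // A single failure proves p composite; passing every round only makes
    // compositeness unlikely (heuristically, probability about (1/2)^rounds).
    static boolean fermatProbablePrime(BigInteger p, int rounds) {
        if (p.compareTo(TWO) < 0) return false;                  // 0 and 1 are not prime
        if (p.compareTo(BigInteger.valueOf(4)) < 0) return true; // 2 and 3 are prime
        Random rnd = new Random();
        for (int i = 0; i < rounds; i++) {
            BigInteger a;
            do { // random base a with 2 <= a <= p - 2
                a = new BigInteger(p.bitLength(), rnd);
            } while (a.compareTo(TWO) < 0 || a.compareTo(p.subtract(TWO)) > 0);
            if (!a.modPow(p.subtract(BigInteger.ONE), p).equals(BigInteger.ONE)) {
                return false; // certainly composite
            }
        }
        return true; // probably prime (Carmichael numbers can still fool the test)
    }

    public static void main(String[] args) {
        System.out.println(fermatProbablePrime(BigInteger.valueOf(421), 10)); // true: 421 is prime
        System.out.println(fermatProbablePrime(BigInteger.valueOf(403), 10)); // almost surely false: 403 = 13 * 31
    }
}
```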