On the Amount of Sieving in Factorization Methods


On the Amount of Sieving in Factorization Methods

Thesis (Proefschrift) submitted for the degree of Doctor at the Universiteit Leiden, on the authority of Rector Magnificus Prof. mr. P.F. van der Heijden, by decision of the Doctorate Board, to be defended on Wednesday 20 January 2010 at 13:45 by Willemina Hendrika Ekkelkamp, born in Almelo in 1978.

Doctorate committee:
Promotores: Prof. dr. R. Tijdeman; Prof. dr. A.K. Lenstra (EPFL, Lausanne, Switzerland)
Copromotor: Dr. ir. H.J.J. te Riele (CWI)
Other members: Prof. dr. R.J.F. Cramer (CWI / Universiteit Leiden); Prof. dr. H.W. Lenstra; Prof. dr. P. Stevenhagen; Dr. P. Zimmermann (INRIA, Nancy, France)

The research in this thesis has been carried out at the Centrum Wiskunde & Informatica (CWI) and at the Universiteit Leiden. It has been financially supported by the Netherlands Organisation for Scientific Research (NWO), project 613.000.314.

Ekkelkamp, W.H., 1978 – On the Amount of Sieving in Factorization Methods
ISBN:
Thomas Stieltjes Institute for Mathematics
© W.H. Ekkelkamp, Den Ham 2009
The illustration on the cover is a free interpretation of line and lattice sieving in combination with the number of relations found.

Contents

1 Introduction
  1.1 Factoring large numbers
  1.2 Multiple polynomial quadratic sieve
  1.3 Number field sieve
  1.4 Smooth and semismooth numbers
  1.5 Simulating the sieving
  1.6 Conclusion
2 Smooth and Semismooth Numbers
  2.1 Introduction
  2.2 Basic tools
  2.3 Type 1 expansions
    2.3.1 Smooth numbers
    2.3.2 1-semismooth numbers
    2.3.3 2-semismooth numbers
    2.3.4 k-semismooth numbers
  2.4 Type 2 expansions
    2.4.1 Smooth numbers
    2.4.2 k-semismooth numbers
  2.5 Proof of Theorem 6
  2.6 Proof of Theorem 7
3 From Theory to Practice
  3.1 Introduction
  3.2 Computing the approximating Ψ-function of Corollary 5
    3.2.1 Main term of Ψ2(x, x^β1, x^β2, x^α)
    3.2.2 Second order term of Ψ2(x, x^β1, x^β2, x^α)
  3.3 Experimenting with MPQS
    3.3.1 Equal upper bounds
    3.3.2 Different upper bounds
    3.3.3 Optimal bounds
  3.4 Experimenting with NFS
4 Predicting the Sieving Effort for the Number Field Sieve
  4.1 Introduction
  4.2 Simulating relations
    4.2.1 Case I
    4.2.2 Case II
    4.2.3 Special primes
  4.3 The stop criterion
    4.3.1 Duplicates
  4.4 Experiments
    4.4.1 Case I, line sieving
    4.4.2 Case I, lattice sieving
    4.4.3 Case II, line sieving
    4.4.4 Case II, lattice sieving
    4.4.5 Comparing line and lattice sieving
  4.5 Which case to choose
  4.6 Additional experiments
    4.6.1 Size of the sample sieve test
    4.6.2 Expected sieving area and sieving time
    4.6.3 Growth behavior of useful relations
    4.6.4 Oversquareness and matrix size
5 Conclusions and Suggestions for Future Research
  5.1 Smooth and semismooth numbers
  5.2 Simulating the sieving
Bibliography
Samenvatting (Dutch summary)
Acknowledgements
Curriculum Vitae

Chapter 1

Introduction

1.1 Factoring large numbers

In 1977 Rivest, Shamir, and Adleman introduced the RSA public-key cryptosystem [44].
The security of this cryptosystem is closely related to the difficulty of factoring large integers. It has not been proved that breaking RSA is equivalent to factoring, but it is getting close: see, for instance, the work of Aggarwal and Maurer [1]. Nowadays RSA is widely used, so factoring large integers is not merely an academic exercise, but has important practical implications.

The searches for large primes and for the prime factors of composite numbers have a long history. Euclid (±300 B.C.) was one of the first persons to write about the compositeness of numbers, followed by Eratosthenes (276–194 B.C.), who came up with an algorithm that finds all primes up to a certain bound, the so-called Sieve of Eratosthenes. For an excellent overview of the history of factorization methods, we refer to the thesis of Elkenbracht-Huizing ([18], Ch. 2).

Over the years, people came up with faster factorization methods for the case in which the factors to be found are large. Based on ideas of Kraitchik, and of Morrison and Brillhart, Schroeppel introduced the linear sieve (1976/1977). With a small modification by Pomerance this became the quadratic sieve, albeit a very basic version. Improvements are due to Davis and Holdridge, and Montgomery (cf. [15], [39], [45]). Factoring algorithms based on sieving are much faster than earlier algorithms, as the expensive divisions are replaced by additions. The expected running time of the quadratic sieve, based on heuristic assumptions, is

L(N) = exp((1 + o(1)) (log N)^(1/2) (log log N)^(1/2)), as N → ∞,

where N is the number to be factored and the logarithms are natural [14]. An improvement of the quadratic sieve is the multiple polynomial quadratic sieve. The same idea of sieving is used in the number field sieve, based on ideas of Pollard (pp. 4–10 of [27]) and further developed by H.W. Lenstra, among others. The heuristic expected running time of the number field sieve is given by (cf.
[27])

L(N) = exp(((64/9)^(1/3) + o(1)) (log N)^(1/3) (log log N)^(2/3)), as N → ∞.

Asymptotically, the number field sieve is the fastest known general method for factoring large integers; in practice it is the fastest for numbers of about 90 or more decimal digits. We will go into the details of the multiple polynomial quadratic sieve and the number field sieve in the next two sections, as we use them for experiments in this thesis. Sections 1.4, 1.5, and 1.6 contain an overview of the thesis.

1.2 Multiple polynomial quadratic sieve

The multiple polynomial quadratic sieve (MPQS) is based on the quadratic sieve, so we start with a short description of the basic quadratic sieve (QS). Take N as the (odd) composite number (not a perfect power) to be factored. Then QS searches for integers x and y that satisfy the congruence

x^2 ≡ y^2 (mod N).

If this holds, and x ≢ y (mod N) and x ≢ −y (mod N), then gcd(x − y, N) is a proper factor of N.

The algorithm starts with the polynomial f(t) = t^2 − N, t ∈ Z. For every prime factor p of f(t) the congruence t^2 ≡ N (mod p) must hold. For odd primes p this can be expressed as the condition (N/p) = 1 (Legendre symbol). A number is called F-smooth if all its prime factors are below some given bound F. These numbers are collected in the sieving step of the algorithm. The factor base FB is defined as the set consisting of −1 (for practical reasons), 2, and all the odd primes p up to a bound F for which (N/p) = 1. Now we evaluate f(t) for all integer values t ∈ [√N − M, √N + M], where M is subexponential in N, and keep only those for which |f(t)| is F-smooth. We can write this as

t^2 ≡ ∏_{p ∈ FB} p^(e_p(t)) (mod N),   (1.1)

where e_p(t) ≥ 0, and we call expression (1.1) a relation. The left-hand side is a square. Hence the product of such expressions is also a square.
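As a toy illustration of collecting relations of the form (1.1): the sketch below checks smoothness by plain trial division over the factor base rather than by sieving, and the values N = 2041, F = 30, and M = 20 are arbitrary small example choices, not parameters from the thesis.

```python
from math import isqrt

def legendre(n, p):
    """Euler's criterion: returns 1 iff n is a nonzero square mod the odd prime p."""
    return pow(n % p, (p - 1) // 2, p)

def factor_base(N, F):
    """-1 (for signs), 2, and the odd primes p <= F with (N/p) = 1."""
    fb = [-1, 2]
    sieve = [True] * (F + 1)
    for p in range(3, F + 1, 2):
        if sieve[p]:
            for m in range(p * p, F + 1, p):
                sieve[m] = False
            if legendre(N, p) == 1:
                fb.append(p)
    return fb

def relations(N, F, M):
    """Collect F-smooth values of f(t) = t^2 - N for t near sqrt(N)."""
    fb = factor_base(N, F)
    rels = []
    r = isqrt(N)
    for t in range(r - M, r + M + 1):
        v = t * t - N
        exps = {}
        if v < 0:
            exps[-1] = 1
            v = -v
        if v == 0:
            continue
        for p in fb[1:]:            # trial-divide by 2 and the odd primes
            while v % p == 0:
                exps[p] = exps.get(p, 0) + 1
                v //= p
        if v == 1:                  # fully factored over FB: a relation
            rels.append((t, exps))
    return rels

rels = relations(N=2041, F=30, M=20)
print(len(rels), rels[0])
```

Each returned pair (t, exps) satisfies t^2 ≡ ∏ p^(e_p) (mod N), exactly as in (1.1); a real implementation replaces the inner trial-division loop by the sieving procedure described in the text.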
If we generate relations until we have more distinct relations than primes in the factor base, there is a non-empty subset of this set of relations such that multiplication of the relations in this subset gives a square on the right-hand side. To find a correct combination of relations, we use linear algebra to find dependencies in a matrix with the exponents e_p(t) mod 2 (one relation per row). Possible algorithms for finding such a dependency include Gaussian elimination, block Lanczos, and block Wiedemann; the running time of Gaussian elimination is O(n^3) and the running time of block Lanczos and block Wiedemann is O(nW), where n is the dimension of the matrix and W the weight of the matrix ([12], [13], [33], [49]).

Once we have a dependency of k relations for t = t_1, t_2, ..., t_k, we set

x = t_1 t_2 ⋯ t_k mod N

and

y = ∏_{p ∈ FB} p^((e_p(t_1) + ⋯ + e_p(t_k))/2) mod N.

Then x^2 ≡ y^2 (mod N). Now we compute gcd(x − y, N) and hopefully we have found a proper factor. If not, we continue with the next dependency. As the probability of finding a factor is at least 1/2 for each dependency (as N has at least two distinct prime factors), we are in practice almost certain to have found the prime factors after having computed a small number of gcd's as above.

To make clear where the term 'sieving' from the title comes from, we take a closer look at the polynomial f(t). If we know that 2 is a divisor of f(3), then 2 is a divisor of f(3 + 2k) for all k ∈ Z; more generally, once we have located a t-value for which a given prime p divides f(t), we know a residue class modulo p of t-values for which p divides f(t). This can be exploited as follows. Initialize an array a with log(|f(t)|) for all integers t ∈ [√N − M, √N + M], so we have a[i] = log(|f(i)|) for i ∈ [√N − M, √N + M], for some appropriate M.
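The array-based idea just described can be sketched as follows. This is a simplified illustration, not the thesis's implementation: N = 2041, the prime list, M = 20, and the threshold `slack` are hypothetical example values, the roots of t^2 ≡ N (mod p) are found by brute force, and prime powers are not sieved separately but absorbed into the slack.

```python
from math import isqrt, log

def sieve_smooth_candidates(N, primes, M, slack=1.0):
    """Log-sieve f(t) = t^2 - N on [sqrt(N)-M, sqrt(N)+M].

    For each prime p, log(p) is subtracted from every array entry whose
    t lies in a residue class with p | f(t). Entries whose accumulated
    logs nearly exhaust log|f(t)| are reported as smoothness candidates,
    to be confirmed afterwards by trial division.
    """
    r = isqrt(N)
    lo, hi = r - M, r + M
    a = [log(abs(t * t - N)) if t * t != N else 0.0
         for t in range(lo, hi + 1)]
    for p in primes:
        # brute-force roots of t^2 = N (mod p); real code solves this directly
        roots = [t for t in range(p) if (t * t - N) % p == 0]
        for t0 in roots:
            start = lo + ((t0 - lo) % p)   # first t >= lo with t = t0 (mod p)
            for t in range(start, hi + 1, p):
                a[t - lo] -= log(p)
    return [lo + i for i, v in enumerate(a) if v < slack]

cands = sieve_smooth_candidates(2041, [2, 3, 5, 7], 20, slack=1.5)
print(cands)
```

The point of the technique is visible in the inner loop: per prime, the sieve touches only every p-th entry and performs additions, whereas trial division would divide every entry by every prime.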