A Promenade Through the New Cryptography of Bilinear Pairings

Total Pages: 16

File Type: PDF, Size: 1020 KB

A Promenade Through the New Cryptography of Bilinear Pairings

Invited paper appearing in the Proceedings of the IEEE Information Theory Workshop 2006 (ITW 2006), Punta del Este, Uruguay, March 2006. © IEEE. Available online at http://www.cs.stanford.edu/~xb/itw06/.

Xavier Boyen, Voltage Inc., Arastradero Road, Palo Alto, California. Email: [email protected]

Abstract—This paper gives an introductory account of the origin, nature, and uses of bilinear pairings, arguably the newest and hottest toy in a cryptographer's toolbox. A handful of cryptosystems built on pairings are briefly surveyed, including a couple of realizations of the famously elusive identity-based encryption primitive.

I. INTRODUCTION

It can be said that much of contemporary cryptography can be traced to Shannon's legacy of "secrecy systems", an information theoretic foundation. To escape from the one-time pad, however, it has become necessary to appeal to a variety of computational complexity notions, e.g., to leverage short secrets in order to protect long messages. Modern cryptography is, in essence, a computational game of cat and mouse between honest secret holders and resource-bounded adversaries.

Traditionally, cryptographic scheme development has evolved along two separate paths that differ in their complexity theoretic meanderings. On the one hand, the delicate art of "symbolic obfuscation" has roots in the early ciphers of Antiquity; from it most of today's secret-key systems and hash functions have been crafted, owing to its unsurpassed performance. On the other hand, the comparatively recent advent of algebraic methods has started to open, a few decades ago, a Pandora's box of cryptosystems with astonishing features, encompassing virtually all of today's public-key cryptography.

Many public-key systems have been proposed in the past three decades, based on sundry algebras and a fair share of complexity theoretic assumptions. Some are based on NP-hard problems or coding theoretic tricks (e.g., the McEliece cryptosystem); others involve exotic branches of group theory such as braid groups, or lattice reduction problems that blur the boundary between the discrete and the continuous. The most prolific and successful approaches, however, all rely on the same handful of complexity assumptions, which are rooted either in the hardness of integer factorization (e.g., RSA) or in that of taking discrete logarithms (e.g., Diffie-Hellman).

Assumptions rooted in Factoring and the Discrete Logarithm problem are interesting in that they tend to have complementary properties. Factoring, via the RSA and Strong-RSA assumptions, offers a realization of the tremendously useful primitive of trapdoor permutation. Discrete Logarithm, in the guises of the Computational and Decisional Diffie-Hellman problems (CDH and DDH), offers us the option to work with either a computational or a decisional complexity assumption, the latter being useful to build public-key encryption systems with formal proofs of security; by contrast, Factoring and RSA-like problems are essentially computational, and are thus inherently more suited for signature and authentication schemes—although we note that the Random Oracle heuristic blurs that distinction, and exceptions abound.

The common situation before the appearance of bilinear pairings was that CDH and DDH were concomitantly hard in all algebraic groups of cryptographic interest. Bilinear maps—or pairings, as they are often called—first appeared in cryptographic constructions around the turn of the millennium [1], [2], although they had been used cryptanalytically a few years prior [3]. Pairings have vastly expanded the world of Discrete Logarithm assumptions. The simplest example of this is that they provide algebraic groups in which the DDH problem is easy even though CDH is still believed to be hard, a state of affairs abstractly referred to as the Gap Diffie-Hellman (GapDH) assumption. GapDH is perhaps the simplest non-trivial assumption to be made in bilinear groups, although there are many others, as we shall see.

We start in §II with a simple definition of pairings. In §III we give a brief overview of a concrete number theoretic implementation of pairings, and in §IV discuss a few useful complexity theoretic abstractions. We then turn our attention to the good uses that cryptographers have made of pairings, with a keen interest in §V for realizations of the identity-based encryption primitive, and in §VI for signature schemes with novel properties. We conclude in §VII with long-standing open problems.
II. BILINEAR GROUPS

Before delving into the actual realization of bilinear groups and maps, it is helpful to understand why they are so desirable in cryptography.

For illustration, consider a cyclic group G of (finite) size or order n, such as the set Z_n of integer residues modulo n. If we use the multiplication symbol '·' to denote the group operation (which in Z_n is arithmetic addition modulo n), then we know that every element h ∈ G can be expressed as an integral power of some fixed element g ∈ G called a generator of the group, i.e.,

    ∃a ∈ Z : g · g ⋯ g (a times) = g^a = g^(a mod n) = h.

In this context, the CDH problem is the task of calculating g^(ab) given only g, g^a, g^b, and an implicit description of Z_n. The DDH problem is to decide whether or not h = g^(ab) in a given quadruple of group elements ⟨g, g^a, g^b, h⟩. For G the set of integers under addition modulo n, both problems are of course easy to solve. On the contrary, if we take G to be the multiplicative subgroup of order p in the set of integer residues Z_q, such that p and q = 1 + 2p are prime numbers, then suddenly both CDH and DDH are believed to be hard problems.

Now, abstractly, a bilinear pairing is an efficiently computable map e : G × Ĝ → G_T, where for simplicity G and Ĝ are two cyclic groups of equal order p that are respectively generated by some g ∈ G and ĝ ∈ Ĝ. (We could even have G = Ĝ and g = ĝ, though in general we do not.) The range of e is a multiplicative group of order p, denoted G_T, and generated by e[g, ĝ]. The pairing must be non-trivially bilinear, meaning that the equality e[g^a, ĝ^b] = e[g, ĝ]^c holds if and only if the integer exponents satisfy ab = c (mod p).

With this definition, it is easy to see that the pairing is a powerful tool that lets us compute something similar to a Diffie-Hellman operation, with the important caveat that the process takes us from G and Ĝ to the different group G_T, from which we generally cannot get back. Nevertheless, this is sufficient to give us a direct test for the DDH problem (for G = Ĝ) or a dual-group version of DDH (when G ≠ Ĝ). Indeed, given an instance ⟨g, g^a, ĝ^b, ĥ⟩ ∈ G^2 × Ĝ^2, it is easy to determine whether ĥ = ĝ^(ab) by verifying the equality e[g^a, ĝ^b] = e[g, ĥ].
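To make the algebra concrete, the toy sketch below (purely illustrative, and in no way a real or secure pairing implementation) represents every group element by its discrete logarithm, so that the pairing of g^a and g^b is simply the exponent product ab mod p in G_T. The prime and the sample exponents are arbitrary values chosen for the example; the point is only to exhibit the bilinearity law and the resulting one-line DDH test in the symmetric case G = Ĝ:

    # Toy exponent-space model of a symmetric bilinear pairing, for illustration
    # only: group elements are stored as their discrete logarithms, so g^a is the
    # integer a mod p.  A real pairing over a pairing-friendly curve would be used
    # in practice; none of the values below are cryptographically sized.

    p = 1_000_003          # illustrative prime group order

    def pairing(log_u: int, log_v: int) -> int:
        """e : G x G -> G_T; in the exponent model, e[g^a, g^b] = e[g, g]^(a*b)."""
        return (log_u * log_v) % p

    def ddh_test(g_a: int, g_b: int, h: int) -> bool:
        """Decide whether h = g^(a*b) from <g, g^a, g^b, h> using two pairings:
        accept iff e[g^a, g^b] == e[g, h], with g stored as the exponent 1."""
        return pairing(g_a, g_b) == pairing(1, h)

    a, b = 123_456, 789_012
    assert ddh_test(a, b, a * b % p)            # a genuine Diffie-Hellman quadruple
    assert not ddh_test(a, b, (a * b + 1) % p)  # any other fourth element is rejected

Note that the test only ever moves forward into G_T, mirroring the caveat above that one generally cannot map back out of G_T, and that recovering g^(ab) itself (the CDH problem) is exactly what is still assumed to be hard.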
Hasse’s subgroup of order p in the set of integer residues Zq, theorem states that the group size, #E(Fq), is always roughly the same as the field size, # = q; more such that p and q = 1 + 2 p are prime numbers, then √ Fq suddenly both CDH and DDH are believed to be hard precisely, |#E(Fq) − (q + 1)| ≤ 2 q. With a suitable problems. choice of field and curve, the group size #E(Fq) can Now, abstractly, a bilinear pairing is an efficiently be made to contain a large prime factor p, in which case ˆ computable map e : G × G → GT , where for simplicity the group E(Fq) will have a cyclic subgroup of prime G and Gˆ are two cyclic groups of equal order p that order p, which can be our candidate for G. are respectively generated by some g ∈ G and gˆ ∈ Gˆ . To proceed, we consider the same curve equation E as ˆ (We could even have G = G and g =g ˆ, though in above, but on a larger field Fqk which for k > 1 is called general we do not.) The range of e is a multiplicative an algebraic extension of the ground field Fq. Elements group of order p, denoted GT , and generated by e[g, gˆ]. of Fqk can be represented, e.g., as polynomials of degree The pairing must be non-trivially bilinear, meaning that (up to) k with coefficients in Fq. What this entails is a b c the equality e[g , gˆ ] = e[g, gˆ] holds if and only if the that the elements of Fqk are compatible with those of integer exponents satisfy a b = c (mod p). Fq, so that the product of, say, a ∈ Fq and x ∈ Fqk is a With this definition, it is easy to see that the pairing is well-defined element of Fqk . Thus, and by analogy with a powerful tool that lets us compute something similar what we did earlier, we can define E(Fqk ) as the set of to a Diffie-Hellman operation, with the important caveat points (x, y) ∈ Fqk × Fqk that satisfy the curve equation ˆ that the process takes us from G and G to the different in Fqk . Under the point addition rule, E(Fqk ) forms a group GT from which we generally cannot get back. group, albeit a much larger one than E(Fq). The point of Nevertheless, this is sufficient to give us a direct test for this discussion is that as k is increased, there is a special ˆ the DDH problem (for G = G) or a dual-group version value of k > 1 for which E(Fqk ) contains a subgroup of DDH (when G 6= Gˆ ). Indeed, given an instance of order p. This subgroup will be our candidate for Gˆ , a b ˆ 2 ˆ 2 hg, g , gˆ , hi ∈ G × G , it is easy to determine whether and the smallest such k its embedding degree in E(Fq). hˆ =g ˆa b by verifying the equality e[ga, gb] = e[g, hˆ].
Recommended publications
  • Overview of the McEliece Cryptosystem and Its Security
    DOI: 10.2478/tmmp-2014-0025. Tatra Mt. Math. Publ. 60 (2014), 57–83. OVERVIEW OF THE MCELIECE CRYPTOSYSTEM AND ITS SECURITY. Marek Repka — Pavol Zajac. ABSTRACT. The McEliece cryptosystem (MECS) is one of the oldest public key cryptosystems, and the oldest PKC that is conjectured to be post-quantum secure. In this paper we survey the current state of the implementation issues and security of MECS, and its variants. In the first part we focus on the general decoding problem, structural attacks, and the selection of parameters in general. We summarize the details of MECS based on irreducible binary Goppa codes, and review some of the implementation challenges for this system. Furthermore, we survey various proposals that use alternative codes for MECS, and point out some attacks on modified systems. Finally, we review notable existing implementations on low-resource platforms, and conclude with the topic of side channels in the implementations of MECS. 1. Introduction R. J. McEliece proposed in 1978 [37] a new public key cryptosystem based on the theory of algebraic codes, now called the McEliece Cryptosystem (MECS). Unlike RSA, it was not adopted by implementers, mainly due to its large public key sizes. The interest of researchers in MECS increased with the advent of quantum computing. Unlike systems based on the integer factorisation problem and the discrete logarithm problem, MECS security is based on the general decoding problem, which is NP-hard and should also resist attackers with access to a quantum computer. In this article we provide an overview of MECS, in its original form, and its alternatives.
  • Overview of Post-Quantum Public-Key Cryptosystems for Key Exchange
    Overview of post-quantum public-key cryptosystems for key exchange. Annabell Kuldmaa. Supervised by Ahto Truu. December 15, 2015. Abstract In this report we review four post-quantum cryptosystems: the ring learning with errors key exchange, the supersingular isogeny key exchange, the NTRU and the McEliece cryptosystems. For each protocol, we introduce the underlying mathematical assumption, give an overview of the protocol and present some implementation results. We compare the implementation results on the 128-bit security level with elliptic curve Diffie-Hellman and RSA. 1 Introduction The aim of post-quantum cryptography is to introduce cryptosystems which are not known to be broken using quantum computers. Most of today's public-key cryptosystems, including the Diffie-Hellman key exchange protocol, rely on mathematical problems that are hard for classical computers, but can be solved on quantum computers using Shor's algorithm. In this report we consider replacements for the Diffie-Hellman key exchange and introduce several quantum-resistant public-key cryptosystems. In Section 2 the ring learning with errors key exchange is presented, which was introduced by Peikert in 2014 [1]. We continue in Section 3 with the supersingular isogeny Diffie-Hellman key exchange presented by De Feo, Jao, and Plut in 2011 [2]. In Section 5 we consider the NTRU encryption scheme first described by Hoffstein, Pipher, and Silverman in 1996 [3]. We conclude in Section 6 with the McEliece cryptosystem introduced by McEliece in 1978 [4]. As NTRU and the McEliece cryptosystem are not originally designed for key exchange, we also briefly explain in Section 4 how we can construct key exchange from any asymmetric encryption scheme.
  • Comprehensive Efficient Implementations of ECC on C54xx Family of Low-Cost Digital Signal Processors
    Comprehensive Efficient Implementations of ECC on C54xx Family of Low-cost Digital Signal Processors. Muhammad Yasir Malik, [email protected]. Abstract. Resource constraints in smart devices demand an efficient cryptosystem that allows for low power and memory consumption. This has led to the popularity of the comparatively efficient elliptic curve cryptography (ECC). Prior to this paper, much of ECC was implemented on reconfigurable hardware, i.e. FPGAs, which are costly and unfavorable as low-cost solutions. We present comprehensive yet efficient implementations of ECC on the fixed-point TMS54xx series of digital signal processors (DSP). 160-bit prime field ECC is implemented over a wide range of coordinate choices. This paper also implements the windowed recoding technique to provide better execution times. Stalls in the programming are minimized by utilization of loop unrolling and by avoiding data dependence. Complete scalar multiplication is achieved within 50 msec in coordinate implementations, which is further reduced to 25 msec for the windowed-recoding method. These are the best known results for a fixed-point low power digital signal processor to date. Keywords: Elliptic curve cryptosystem, efficient implementation, digital signal processor (DSP), low power. 1 Introduction No other information security entity has been more extensively studied, researched and applied than asymmetric cryptography, thanks to its ability to be cryptographically strong over long spans of time. Asymmetric or public key cryptosystems (PKC) have their foundations in hard mathematical problems, which ensure their provable security at the expense of implementation costs. So-called "large numbers" provide the basis of asymmetric cryptography, which makes their implementation expensive in terms of computation and memory requirements.
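    The timings above refer to complete scalar multiplications, the core ECC operation k·P. As a reminder of what that operation involves, here is a minimal double-and-add sketch over a toy prime-field curve with parameters chosen purely for illustration; it is not the paper's fixed-point DSP implementation, which additionally uses windowed recoding and alternative coordinate systems:

    # Double-and-add scalar multiplication in affine coordinates over a toy curve.
    # Illustrative only: a real 160-bit implementation avoids per-step inversions
    # by working in projective-style coordinates.

    Q = 11                    # toy field F_11
    A, B = 1, 6               # curve E : y^2 = x^3 + x + 6
    INF = None                # the point at infinity (neutral element)

    def ec_add(P1, P2):
        """Add two points of E(F_Q); handles doubling and the neutral element."""
        if P1 is INF:
            return P2
        if P2 is INF:
            return P1
        (x1, y1), (x2, y2) = P1, P2
        if x1 == x2 and (y1 + y2) % Q == 0:
            return INF                                        # P + (-P) = O
        if P1 == P2:
            lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, Q) % Q  # tangent slope
        else:
            lam = (y2 - y1) * pow(x2 - x1, -1, Q) % Q         # chord slope
        x3 = (lam * lam - x1 - x2) % Q
        return (x3, (lam * (x1 - x3) - y1) % Q)

    def scalar_mul(k, P):
        """Compute k*P with one doubling per bit of k, plus an add per set bit."""
        R = INF
        for bit in bin(k)[2:]:
            R = ec_add(R, R)
            if bit == "1":
                R = ec_add(R, P)
        return R

    print(scalar_mul(7, (2, 4)))   # (2, 4) lies on the toy curve; prints (7, 9)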
  • Public-Key Encryption Schemes with Bounded CCA Security and Optimal Ciphertext Length Based on the CDH and HDH Assumptions
    Public-Key Encryption Schemes with Bounded CCA Security and Optimal Ciphertext Length Based on the CDH and HDH Assumptions. Mayana Pereira (1), Rafael Dowsley (1), Goichiro Hanaoka (2), and Anderson C. A. Nascimento (1). (1) Department of Electrical Engineering, University of Brasília, Campus Darcy Ribeiro, 70910-900, Brasília, DF, Brazil. Email: [email protected], [email protected], [email protected]. (2) National Institute of Advanced Industrial Science and Technology (AIST), 1-18-13 Sotokanda, Chiyoda-ku, 101-0021, Tokyo, Japan. E-mail: [email protected]. Abstract. In [5] Cramer et al. proposed a public-key encryption scheme secure against adversaries with a bounded number of decryption queries based on the decisional Diffie-Hellman problem. In this paper, we show that the same result can be obtained based on weaker computational assumptions, namely: the computational Diffie-Hellman and the hashed Diffie-Hellman assumptions. Keywords: Public-key encryption, bounded chosen ciphertext security, computational Diffie-Hellman assumption, hashed Diffie-Hellman assumption. 1 Introduction The highest level of security for public-key cryptosystems is indistinguishability against adaptive chosen ciphertext attack (IND-CCA2), proposed by Rackoff and Simon [28] in 1991. The development of cryptosystems with such a feature can be viewed as a complex task. Several public-key encryption (PKE) schemes have been proposed with either practical or theoretical purposes. It is possible to obtain IND-CCA2 secure cryptosystems based on many assumptions such as: decisional Diffie-Hellman [7, 8, 26], computational Diffie-Hellman [6, 20], factoring [22], McEliece [13, 12], quadratic residuosity [8] and learning with errors [26, 25].
  • On the Equivalence of McEliece's and Niederreiter's Public-Key Cryptosystems
    Singapore Management University, Institutional Knowledge at Singapore Management University. Research Collection School Of Information Systems, School of Information Systems, 1-1994. On the equivalence of McEliece's and Niederreiter's public-key cryptosystems. Y. X. LI; Robert H. DENG, Singapore Management University, [email protected]; X. M. WANG. DOI: https://doi.org/10.1109/18.272496. Follow this and additional works at: https://ink.library.smu.edu.sg/sis_research. Part of the Information Security Commons. Citation: LI, Y. X.; DENG, Robert H.; and WANG, X. M. On the equivalence of McEliece's and Niederreiter's public-key cryptosystems. (1994). IEEE Transactions on Information Theory, 40(1), 271-273. Research Collection School Of Information Systems. Available at: https://ink.library.smu.edu.sg/sis_research/94. This Journal Article is brought to you for free and open access by the School of Information Systems at Institutional Knowledge at Singapore Management University. It has been accepted for inclusion in Research Collection School Of Information Systems by an authorized administrator of Institutional Knowledge at Singapore Management University. For more information, please email [email protected]. … of Niederreiter's cryptosystem, by Niederreiter [3] and by Brickell and Odlyzko [4]. Furthermore, we employ the best known attack, the Lee-Brickell attack [5], to cryptanalyze the two systems. Some new optimal parameter values and work factors are obtained.
  • A CCA2 Secure Public Key Encryption Scheme Based on the McEliece Assumptions in the Standard Model
    A CCA2 Secure Public Key Encryption Scheme Based on the McEliece Assumptions in the Standard Model. Rafael Dowsley (1), Jörn Müller-Quade (2), Anderson C. A. Nascimento (1). (1) Department of Electrical Engineering, University of Brasilia, Campus Universitário Darcy Ribeiro, Brasilia, CEP: 70910-900, Brazil. Email: [email protected], [email protected]. (2) Universität Karlsruhe, Institut für Algorithmen und Kognitive Systeme, Am Fasanengarten 5, 76128 Karlsruhe, Germany. E-mail: [email protected]. We show that a recently proposed construction by Rosen and Segev can be used for obtaining the first public key encryption scheme based on the McEliece assumptions which is secure against adaptive chosen ciphertext attacks in the standard model. 1 Introduction Indistinguishability of messages under adaptive chosen ciphertext attacks is the strongest known notion of security for public key encryption schemes (PKE). Many computational assumptions have been used in the literature for obtaining cryptosystems meeting such strong security requirements. Given one-way trapdoor permutations, we know how to obtain CCA2 security from any semantically secure public key cryptosystem [14, 20, 12]. Efficient constructions are also known based on number-theoretic assumptions [6] or on identity based encryption schemes [3]. Obtaining a CCA2 secure cryptosystem (even an inefficient one) based on the McEliece assumptions in the standard model has been an open problem in this area for quite a while. Recently, Rosen and Segev proposed an elegant and simple new computational assumption for obtaining CCA2 secure PKEs: correlated products [19]. They provided constructions of correlated products based on the existence of certain lossy trapdoor functions [16], which in turn can be based on the decisional Diffie-Hellman problem and on Paillier's decisional residuosity problem [16].
  • Coding Theory-Based Cryptography: McEliece Cryptosystems in Sage
    College of Saint Benedict and Saint John's University. DigitalCommons@CSB/SJU. Honors Theses, 1963-2015. Honors Program. 2013. Coding Theory-Based Cryptography: McEliece Cryptosystems in Sage. Christopher Roering, College of Saint Benedict/Saint John's University. Follow this and additional works at: https://digitalcommons.csbsju.edu/honors_theses. Part of the Computer Sciences Commons, and the Mathematics Commons. Recommended Citation: Roering, Christopher, "Coding Theory-Based Cryptography: McEliece Cryptosystems in Sage" (2013). Honors Theses, 1963-2015. 17. https://digitalcommons.csbsju.edu/honors_theses/17. This Thesis is brought to you for free and open access by DigitalCommons@CSB/SJU. It has been accepted for inclusion in Honors Theses, 1963-2015 by an authorized administrator of DigitalCommons@CSB/SJU. For more information, please contact [email protected]. Coding Theory-Based Cryptography: McEliece Cryptosystems in Sage. An Honors Thesis, College of St. Benedict/St. John's University, in partial fulfillment of the requirements for Distinction in the Departments of Mathematics and Computer Science, by Christopher Roering, May 2013. PROJECT TITLE: Coding Theory-Based Cryptography: McEliece Cryptosystems in Sage. Approved by: Dr. Sunil Chetty, Assistant Professor of Mathematics, Project Advisor; Dr. Lynn Ziegler, Professor of Computer Science, Project Advisor; Dr. Bret Benesh, Professor of Mathematics, Department Reader; Dr. Robert Campbell, Assistant Professor of Mathematics, Department Reader; Dr. Robert Hesse, Chair, Department of Mathematics, Associate Professor; Dr. Jim Schnepf, Chair, Department of Computer Science, Associate Professor; Dr. Tony Cunningham, Director, Honors Thesis Program. Abstract Unlike RSA encryption, McEliece cryptosystems are considered secure in the presence of quantum computers. McEliece cryptosystems leverage error-correcting codes as a mechanism for encryption.
  • Codes, Cryptography, and the McEliece Cryptosystem
    Codes, Cryptography, and the McEliece Cryptosystem. Bethany L. Matsick. A Senior Thesis submitted in partial fulfillment of the requirements for graduation in the Honors Program, Liberty University, Spring 2020. Acceptance of Senior Honors Thesis: This Senior Honors Thesis is accepted in partial fulfillment of the requirements for graduation from the Honors Program of Liberty University. Ethan Smith, Ph.D., Thesis Chair; Scott Long, Ph.D., Committee Member; David E. Schweitzer, Ph.D., Assistant Honors Director. Abstract Over the past several decades, technology has continued to develop at an incredible rate, and the importance of properly securing information has increased significantly. While a variety of encryption schemes currently exist for this purpose, a number of them rely on problems, such as integer factorization, that are not resistant to quantum algorithms. With the reality of quantum computers approaching, it is critical that a quantum-resistant method of protecting information is found. After developing the proper background, we evaluate the potential of the McEliece cryptosystem for use in the post-quantum era by examining families of algebraic geometry codes that allow for increased security. Finally, we develop a family of twisted Hermitian codes that meets the criteria set forth for security. Introduction In 1978, Robert McEliece introduced a public key cryptosystem based on the difficult problem of decoding a random linear code. Due to its large key size, the McEliece cryptosystem has yet to see widespread use.
  • ECRYPT II
    ECRYPT II, ICT-2007-216676. ECRYPT II: European Network of Excellence in Cryptology II. Network of Excellence, Information and Communication Technologies. D.MAYA.5: Jointly Executed Research Activities on Design and Analysis of Primitives and Protocols. Due date of deliverable: N.A. Actual submission date: N.A. Start date of project: 1 August 2008. Duration: 4 years. Lead contractor: Katholieke Universiteit Leuven (KUL). Revision 1.1. Project co-funded by the European Commission within the 7th Framework Programme. Dissemination Level: PU Public (X); PP Restricted to other programme participants (including the Commission services); RE Restricted to a group specified by the consortium (including the Commission services); CO Confidential, only for members of the consortium (including the Commission services). Jointly Executed Research Activities on Design and Analysis of Primitives and Protocols. Editor: Nigel Smart (BRIS). Revision 1.1. The work described in this report has in part been supported by the Commission of the European Communities through the ICT program under contract ICT-2007-216676. The information in this document is provided as is, and no warranty is given or implied that the information is fit for any particular purpose. The user thereof uses the information at its sole risk and liability. Contents: 1 Introduction; 2 Work on Standardized or Deployed Schemes; 2.1 TLS/SSL; 2.2 SSH; 2.3 EMV; 2.4 RSA PKCS#1 v1.5 Encryption; 2.5 RSA-FDH Signature Scheme; 2.6 Timestamps in ISO-9798; 2.7 Machine Readable Travel Documents (e-Passports); 2.8 Direct Anonymous Attestation …
  • The McEliece Cryptosystem and Its Variants
    Quantum resistant code-based cryptosystems: the McEliece cryptosystem and its variants. Jon-Lark Kim, Sogang University, South Korea. March 1, 2019. At Crypto seminar, Univ. of California-Irvine. Abstract: The McEliece cryptosystem is the first code-based public key cryptosystem, proposed by Robert McEliece in 1978, a few years after the appearance of RSA. The original McEliece cryptosystem uses binary Goppa codes, which are a subclass of algebraic geometry codes, and it is still unbroken under quantum attack. In this talk, we introduce basic facts about coding theory and discuss various code-based public key cryptosystems including our new cryptosystem McNie, which is a combination of the McEliece cryptosystem and the Niederreiter cryptosystem. 1. Introduction to Coding Theory Shannon (Ph.D. in Math) tried to understand both coding and cryptography from an information theory point of view. In particular, the concept of information entropy is due to him. Shannon's communication channel is given below. The main idea is to add redundant bits to the message so that whenever there is an error in the coded message (i.e. the codeword) we can find the error position in the codeword and correct it. Shannon's work on information theory was motivated by Richard Hamming (Ph.D. in Math), a colleague at Bell Labs. Hamming found the first nontrivial one-error-correcting codes. Shannon showed that there exist codes which can correct any number of errors while keeping the information rate below the channel capacity. From now on we give some basic concepts from coding theory. A linear [n,k,d] code with d = n − k + 1 is called an MDS code.
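    To make the "add redundant bits, locate the error, correct it" idea concrete, here is a small illustrative sketch of Hamming's classic [7,4] single-error-correcting code; it is not material from the talk itself. Four message bits become a seven-bit codeword, and three recomputed parity checks (the syndrome) give the position of any single flipped bit:

    # [7,4] Hamming code: positions 1..7 hold p0 p1 m0 p2 m1 m2 m3, with the
    # parity bits at positions 1, 2 and 4.  The syndrome of a received word is
    # the 1-based position of a single bit error (0 means no error detected).

    def hamming_encode(m):
        m0, m1, m2, m3 = m
        p0 = m0 ^ m1 ^ m3          # covers positions 1, 3, 5, 7
        p1 = m0 ^ m2 ^ m3          # covers positions 2, 3, 6, 7
        p2 = m1 ^ m2 ^ m3          # covers positions 4, 5, 6, 7
        return [p0, p1, m0, p2, m1, m2, m3]

    def hamming_correct(r):
        s0 = r[0] ^ r[2] ^ r[4] ^ r[6]
        s1 = r[1] ^ r[2] ^ r[5] ^ r[6]
        s2 = r[3] ^ r[4] ^ r[5] ^ r[6]
        pos = s0 + 2 * s1 + 4 * s2           # syndrome read as an integer
        if pos:
            r = r.copy()
            r[pos - 1] ^= 1                  # flip the offending bit back
        return r

    sent = hamming_encode([1, 0, 1, 1])
    received = sent.copy()
    received[5] ^= 1                          # the channel flips one bit
    assert hamming_correct(received) == sent  # the error is located and repaired

    Here n = 7, k = 4 and the minimum distance is 3, so any single error is corrected at an information rate of 4/7; note that this code is not MDS in the sense defined above.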
  • The Factoring Dead: Preparing for the Cryptopocalypse
    THE FACTORING DEAD: PREPARING FOR THE CRYPTOPOCALYPSE. Javed Samuel — javed[at]isecpartners[dot]com. iSEC Partners, Inc, 123 Mission Street, Suite 1020, San Francisco, CA 94105. https://www.isecpartners.com. March 20, 2014. Abstract This paper will explain the latest breakthroughs in the academic cryptography community and look ahead at what practical issues could arise for popular cryptosystems. Specifically, we will focus on the recent major developments in discrete mathematics and their potential ability to undermine our trust in the most basic asymmetric primitives, including RSA. We will explain the basic theories behind RSA and the state of the art in large-number factoring, and how several recent papers may point the way to massive improvements in this area. The paper will then switch to the practical aspects of the doomsday scenario, and will answer the question "What happens the day after RSA is broken?" We will point out the many obvious and hidden uses of RSA and related algorithms and outline how software engineers and security teams can operate in a post-RSA world. We will also discuss the results of our survey of popular products and software, and point out the ways in which individuals can prepare for the "zombie cryptopocalypse". 1 INTRODUCTION Over the past few years, there have been numerous attacks on the current SSL infrastructure. These have ranged from BEAST [97], CRIME [88], Lucky 13 [2][86], RC4 bias attacks [1][91] and BREACH [42]. These attacks all show the fragility of the current SSL architecture, as vulnerabilities have been found in a variety of features ranging from compression, timing and padding [90].
  • Performance Analysis of TLS for Quantum Robust Cryptography on a Constrained Device
    Performance Analysis of TLS for Quantum Robust Cryptography on a Constrained Device. Jon Barton, William J Buchanan, Will Abramson, Nikolaos Pitropakis. Blockpass ID Lab, School of Computing, Edinburgh Napier University, Edinburgh, UK. Abstract—Advances in quantum computing make Shor's algorithm for factorising numbers ever more tractable. This threatens the security of any cryptographic system which often relies on the difficulty of factorisation. It also threatens methods based on discrete logarithms, such as with the Diffie-Hellman key exchange method. For a cryptographic system to remain secure against a quantum adversary, we need to build methods based on a hard mathematical problem which are not susceptible to Shor's algorithm, and which create Post Quantum Cryptography (PQC). While high-powered computing devices may be able to run these new methods, we need to investigate how well these methods run on limited powered devices. This paper outlines an evaluation framework for PQC within constrained devices, and contributes to the area by providing benchmarks of the front- …

    …ing codes, and involves matrix-vector multiplication. An example of a linear code is Hamming code. • Multivariate polynomial cryptography. This classification involves the difficulty of solving systems of multivariate polynomials over finite fields. Unfortunately, many of the methods that have been proposed have already been broken [2]. • Hash-based signatures. This classification involves creating digital signatures using hashing methods. The drawback is that a signer needs to keep a track of all of the messages that have been signed, and that there is a limit to the number of signatures that can be produced.