Lattice Based Cryptography


Indian Institute of Technology Kanpur
CS682A Quantum Computing Course Project Report

Advisor: Prof. Rajat Mittal
Author: Nikhil Vanjani (14429)

IIT Kanpur, November 15, 2017

Contents

1 Abstract
2 Background
  2.1 Cryptography
  2.2 Post Quantum Cryptography
3 Lattices in Computer Science
  3.1 Introduction
  3.2 LLL Algorithm
  3.3 Dual Lattices
  3.4 Fourier Transform
4 On Lattices, Learning With Errors, Random Linear Codes, and Cryptography
  4.1 Preliminaries
  4.2 Some Results that will be useful in proving the Main Theorem
  4.3 Main Theorem
  4.4 Public Key Cryptosystem

1 Abstract

This project studies three main contributions of Regev's paper [1]. The first is the reduction from worst-case lattice problems to the Learning With Errors (LWE) problem introduced by Regev. The second is an explanation of why this reduction needs to be quantum, and its implication for the relation between two worst-case lattice problems: Bounded Distance Decoding (a variant of CVP) and the Discrete Gaussian Sampling (DGS) problem. The third is the proposal of a first-of-its-kind classical cryptosystem whose security is based on quantum hardness assumptions.

2 Background

2.1 Cryptography

Cryptography is the study of techniques used for secure communication in the presence of third-party adversaries.
The central problems that cryptography tries to solve are:

• Confidentiality: If Alice wants to communicate something confidential to Bob, any third-party adversary Eve who can tap the communication should not be able to understand what is being communicated.

• Integrity: If Bob is receiving some message from Alice, he needs to be sure that the message hasn't been tampered with while being transmitted to him.

• Authentication: If Alice is communicating something to Bob, Bob needs to be sure that it is indeed Alice who is sending the message and not someone else.

• Non-repudiation: If Alice communicates something to Bob, at a later point she should not be able to deny authorship of what she communicated.

Modern-day cryptographic protocols are based on mathematical theory. When we say that some standard protocol is secure, we mean that the protocol rests on computational hardness assumptions: breaching the security of the protocol would be equivalent to solving the computationally hard problem it is based on, which is believed to be infeasible because, by assumption, such problems cannot be solved efficiently in polynomial time. Today's most popular public key algorithms are based on the Integer Factorization Problem [2], the Discrete Logarithm Problem [3], and the Elliptic Curve Discrete Logarithm Problem [4]. In 1994, Peter Shor formulated Shor's algorithm [5], which can solve all three of these problems in polynomial time on a quantum computer. In 2001, IBM demonstrated an implementation of Shor's algorithm to factor 15. An intuitive question which follows from Shor's algorithm is: could quantum computers solve NP-complete problems? Lance Fortnow [6] explains that this is unlikely to happen, and the majority of researchers believe likewise for now.

Unlike public key cryptography, secret key cryptography is considered to be secure against quantum computers.
Though Grover's algorithm [7] reduces the running time quadratically, this can be countered by doubling the key size [8].

2.2 Post Quantum Cryptography

After Shor's breakthrough algorithm, people started exploring and building algorithms that would be resistant to attacks by quantum computers. Such algorithms come under the purview of Post Quantum Cryptography (PQC).

Note: Post quantum cryptography and quantum cryptography are often confused with each other, but they are not the same. Quantum cryptography explores using quantum mechanical properties to achieve confidentiality, integrity, authentication and non-repudiation.

More recently, advances in PQC have been made mainly through the following three approaches:

• Lattice Based Cryptography: This approach is based on lattice-based constructions. In 1996, Ajtai [9] introduced the first lattice-based cryptographic protocol, based on the Short Integer Solutions lattice problem. More recent work revolves around Regev's [1] lattice-based public key encryption scheme based on the Learning With Errors problem.

• Code Based Cryptography: This approach is based on error-correcting codes. The most popular algorithm following this approach is McEliece's algorithm, which is based on random Goppa codes.

• Hash Based Cryptography: Hash-based digital signatures were introduced by Merkle in the 1970s through the Merkle Signature Scheme [10]. Research in this approach revived when it was realized that it is resistant to attacks by quantum computers.

Note: Proofs of Lemmas/Theorems/Claims marked with # in subsequent sections are provided in the appendix.

3 Lattices in Computer Science

3.1 Introduction

In this subsection, we define lattices, their span, the fundamental parallelepiped and its properties, and the determinant of a lattice. Then we proceed to define the successive minima and find some upper bounds on them, namely Blichfeldt's Theorem and Minkowski's theorems.
Lastly, the popular computational problems on lattices, the Shortest Vector Problem and the Closest Vector Problem, are defined along with their approximation variants.

Definition 3.1.1 (Lattice) Given $n$ linearly independent vectors $b_1, b_2, \ldots, b_n \in \mathbb{R}^m$, the lattice generated by them is defined as
$$\mathcal{L}(B) = \mathcal{L}(b_1, b_2, \ldots, b_n) = \Big\{ \sum_i x_i b_i \,\Big|\, x_i \in \mathbb{Z} \Big\} = \{ Bx \mid x \in \mathbb{Z}^n \}.$$
We refer to $b_1, b_2, \ldots, b_n$ as a basis of the lattice.

Figure 1: A lattice in $\mathbb{R}^2$ [11]

Definition 3.1.2 (Span) The span of a lattice $\mathcal{L}(B)$ is the linear space spanned by its vectors,
$$\operatorname{span}(\mathcal{L}(B)) = \operatorname{span}(B) = \{ By \mid y \in \mathbb{R}^n \}.$$

Definition 3.1.3 (Fundamental Parallelepiped) For any lattice basis $B$ we define
$$\mathcal{P}(B) = \{ Bx \mid x \in \mathbb{R}^n,\ \forall i : 0 \le x_i < 1 \}.$$

Figure 2: A lattice in $\mathbb{R}^2$ [11]

# Lemma 3.1.4 Let $\Lambda$ be a lattice of rank $n$ and let $b_1, b_2, \ldots, b_n \in \Lambda$ be $n$ independent lattice vectors. Then $b_1, b_2, \ldots, b_n$ form a basis of $\Lambda$ if and only if $\mathcal{P}(b_1, b_2, \ldots, b_n) \cap \Lambda = \{0\}$.

Lemma 3.1.5 Two bases $B_1, B_2 \in \mathbb{R}^{m \times n}$ are equivalent iff $B_2 = B_1 U$ for some unimodular matrix $U$.

Definition 3.1.6 (Determinant) For a rank-$n$ lattice $\Lambda$, its determinant, denoted $\det(\Lambda)$, is defined as the $n$-dimensional volume of $\mathcal{P}(B)$. Mathematically, $\det(\Lambda) := \sqrt{\det(B^T B)}$. When $\Lambda$ is full rank, $\det(\Lambda) = |\det(B)|$.

Definition 3.1.7 (Successive Minima) Let $\Lambda$ be a lattice of rank $n$. For $i \in \{1, 2, \ldots, n\}$ we define the $i$-th successive minimum as
$$\lambda_i(\Lambda) = \inf \{ r \mid \dim(\operatorname{span}(\Lambda \cap B(0, r))) \ge i \}$$
where $B(0, r) = \{ x \in \mathbb{R}^m \mid \|x\| \le r \}$ is the closed ball of radius $r$ around $0$.

Figure 3: Some lattice bases [11]

Figure 4: Successive minima: $\lambda_1(\Lambda) = 1$, $\lambda_2(\Lambda) = 2.3$ [11]

Blichfeldt's Theorem: For any full-rank lattice $\Lambda \subseteq \mathbb{R}^n$ and set $S \subseteq \mathbb{R}^n$ with $\operatorname{vol}(S) > \det(\Lambda)$, there exist two non-equal points $z_1, z_2 \in S$ such that $z_1 - z_2 \in \Lambda$.

Figure 5: Blichfeldt's Theorem [11]

Minkowski's Convex Body Theorem: Let $\Lambda$ be a full-rank lattice of rank $n$. Then for any centrally symmetric convex set $S$, if $\operatorname{vol}(S) > 2^n \det(\Lambda)$ then $S$ contains a non-zero lattice point.

Figure 6: Intuitive proof of Minkowski's Convex Body Theorem: let $\hat{S} = \frac{1}{2} S$; then $\hat{S}$ satisfies Blichfeldt's Theorem, and lastly $z_1 - z_2 \in S$ because $S$ is centrally symmetric [11]

Minkowski's First Theorem: For any full-rank lattice $\Lambda$ of rank $n$,
$$\lambda_1(\Lambda) \le \sqrt{n} \, (\det(\Lambda))^{1/n}.$$

Proof:

• Claim: The volume of an $n$-dimensional ball of radius $r$ satisfies $\operatorname{vol}(B(0, r)) \ge \left( \frac{2r}{\sqrt{n}} \right)^n$.

• By definition, the open ball $B(0, \lambda_1(\Lambda))$ contains no nonzero lattice points. By Minkowski's Convex Body Theorem and the claim,
$$\left( \frac{2 \lambda_1(\Lambda)}{\sqrt{n}} \right)^n \le \operatorname{vol}(B(0, \lambda_1(\Lambda))) \le 2^n \det(\Lambda),$$
and we obtain the bound on $\lambda_1(\Lambda)$ by rearranging.

Minkowski's Second Theorem: For any full-rank lattice $\Lambda$ of rank $n$,
$$\Big( \prod_{i=1}^{n} \lambda_i(\Lambda) \Big)^{1/n} \le \sqrt{n} \, (\det(\Lambda))^{1/n}.$$

Computational Problems: Minkowski's first theorem implies that any lattice $\Lambda$ of rank $n$ contains a nonzero vector of length at most $\sqrt{n} (\det(\Lambda))^{1/n}$. Its proof, however, is non-constructive: it does not give us an algorithm to find such a lattice vector. In fact, there is no known efficient algorithm that finds such short vectors. The computational problems presented below are conjectured to be hard.

Shortest Vector Problem (SVP): We are given a lattice and are supposed to find a shortest nonzero lattice point.

• Search SVP: Given a lattice basis $B \in \mathbb{Z}^{m \times n}$, find $v \in \mathcal{L}(B)$ such that $\|v\| = \lambda_1(\mathcal{L}(B))$.

• Optimization SVP: Given a lattice basis $B \in \mathbb{Z}^{m \times n}$, find $\lambda_1(\mathcal{L}(B))$.

• Decisional SVP: Given a lattice basis $B \in \mathbb{Z}^{m \times n}$ and a rational $r \in \mathbb{Q}$, determine whether $\lambda_1(\mathcal{L}(B)) \le r$ or not.

Approximation variants of SVP: Here, instead of finding the shortest vector, we are interested in an approximation of it. The approximation factor is given by some parameter $\gamma \ge 1$.

• Search SVP$_\gamma$: Given a lattice basis $B \in \mathbb{Z}^{m \times n}$, find $v \in \mathcal{L}(B)$ such that $v \ne 0$ and $\|v\| \le \gamma \lambda_1(\mathcal{L}(B))$.
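As a small illustration of the definitions above (a minimal sketch, not part of the report: the basis $B$ is a hand-picked hypothetical example of a full-rank lattice in $\mathbb{Z}^2$), the following Python snippet computes $\det(\Lambda) = |\det(B)|$, finds $\lambda_1(\Lambda)$ by brute-force enumeration over a small coefficient box, and checks Minkowski's first bound $\lambda_1(\Lambda) \le \sqrt{n}\,(\det(\Lambda))^{1/n}$:

```python
import itertools
import math

# Hand-picked basis of a full-rank lattice in Z^2 (rows are b1, b2).
B = [[2, 0],
     [1, 2]]
n = 2

# Determinant of a full-rank lattice: det(L) = |det(B)|.
det_L = abs(B[0][0] * B[1][1] - B[0][1] * B[1][0])

def lattice_vector(x):
    # Map an integer coefficient vector x to the lattice point x1*b1 + x2*b2.
    return (x[0] * B[0][0] + x[1] * B[1][0],
            x[0] * B[0][1] + x[1] * B[1][1])

# Brute-force search for lambda_1 over a small coefficient box. This is
# only feasible for tiny examples; computing lambda_1 in general is the
# (conjectured hard) Shortest Vector Problem.
lambda1 = min(math.hypot(*lattice_vector(x))
              for x in itertools.product(range(-3, 4), repeat=2)
              if x != (0, 0))

# Minkowski's first theorem: lambda_1 <= sqrt(n) * det(L)^(1/n).
minkowski_bound = math.sqrt(n) * det_L ** (1 / n)

print(det_L)                       # 4
print(lambda1)                     # 2.0, attained by b1 = (2, 0)
print(lambda1 <= minkowski_bound)  # True: the bound is 2*sqrt(2) ~ 2.83
```

Replacing B by B·U for any unimodular U generates the same lattice (Lemma 3.1.5), so det_L and lambda1 are unchanged, even though the individual basis vectors may become much longer.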