Techniques for Solving Shortest Vector Problem

Total Pages: 16

File Type: pdf, Size: 1020 KB

Techniques for Solving Shortest Vector Problem
(IJACSA) International Journal of Advanced Computer Science and Applications, Vol. 12, No. 5, 2021

V. Dinesh Reddy(1), P. Ravi(2), Ashu Abdul(3), Mahesh Kumar Morampudi(4), Sriramulu Bojjagani(5)
School of Engineering and Sciences, Department of CSE, SRM University, Amaravati, AP, India (1,3,4,5); Assistant Professor, CSE, Anurag University, Hyderabad, Telangana, India (2)

Abstract—Lattice-based cryptosystems are regarded as secure, and are believed to remain secure even against quantum computers. Lattice-based cryptography relies on problems like the Shortest Vector Problem, an instance of the lattice problems used as a basis for secure cryptographic schemes. For more than 30 years now, the Shortest Vector Problem has been at the heart of a thriving research field, and finding a new efficient algorithm has turned out to be out of reach. The problem has a great many applications in optimization, communication theory, cryptography, etc. This paper introduces the Shortest Vector Problem and related problems such as the Closest Vector Problem. We present average-case and worst-case hardness results for the Shortest Vector Problem. Further, this work explores efficient algorithms for solving the Shortest Vector Problem and presents their efficiency. More precisely, this paper presents four algorithms: the Lenstra-Lenstra-Lovász (LLL) algorithm, the Block Korkine-Zolotarev (BKZ) algorithm, a Metropolis algorithm, and a convex relaxation of SVP. Experimental results on various lattices show that the Metropolis algorithm performs better than the other algorithms across varying lattice sizes.

Keywords—Lattice; SVP; CVP; post-quantum cryptography

I. INTRODUCTION

A lattice is an abstract structure defined as the set of all integer linear combinations of some independent vectors in R^n. Over the last two centuries, mathematicians have explored the fascinating combinatorial structure of lattices, and lattices have also been studied from an asymptotic algorithmic viewpoint for at least three decades. Most fundamental problems on lattices are not believed to be efficiently solvable. In addition, hardness results suggest that such problems cannot be solved by polynomial-time algorithms unless the polynomial-time hierarchy collapses. Cryptographic constructions based on lattices hold a clear promise for cryptography with strong security proofs, as was demonstrated by Ajtai [1], who came up with a construction of cryptographic primitives based on the worst-case hardness of certain lattice problems.

Two important and very closely related hard computational problems used by cryptographers are the Shortest Vector Problem (SVP) and the Closest Vector Problem (CVP). In the former, given a lattice specified by some basis, we are asked to find a short (in length) nonzero vector in the lattice. CVP is an inhomogeneous variant of SVP, in which, given a lattice (specified by some basis) and a target vector x, one has to find the lattice vector closest to x. The hardness of these problems is partially due to the fact that multiple bases can generate the same lattice.

This work presents the best hardness results known for SVP and compares different algorithms for solving SVP with respect to their efficiency and the quality of the solution.

II. PRELIMINARIES

A. Definitions and Related Problems

Let (b_1, ..., b_n) be a basis of R^n and N a norm on R^n. Let L denote ⊕_{i=1}^n Z b_i; L is called a lattice. Set

    λ(L) = min_{v ∈ L\{0}} N(v)

We are now able to define SVP. For L a lattice, the exact form of the Shortest Vector Problem, denoted SVP(L), is as follows.

    SVP(L): find v ∈ L such that N(v) = λ(L)

As is often the case, we will actually be interested in algorithms solving an approximation of SVP. For γ ≥ 1, the approximate form of SVP(L), denoted SVP_γ(L), is

    SVP_γ(L): find v ∈ L\{0} such that N(v) ≤ γ λ(L)

Now, a closely related problem is the Closest Vector Problem CVP(L, x), defined for a lattice L ⊂ R^n and a target x ∈ R^n, with dist(x, L) := min_{w ∈ L} N(w − x), as

    CVP(L, x): find v ∈ L such that N(v − x) = dist(x, L)

in its exact form. As above, we can define an approximate problem CVP_γ:

    CVP_γ(L, x): find v ∈ L such that N(v − x) ≤ γ dist(x, L)

In fact, an oracle for CVP_γ solves SVP_γ. Indeed, for i = 1, ..., n set L^i = Z b_1 ⊕ ... ⊕ Z(2b_i) ⊕ ... ⊕ Z b_n and let x_i be a solution to CVP_γ(L^i, b_i); then one of the vectors in {x_i − b_i}_{i=1,...,n} is a solution to SVP_γ(L). For γ = 1 we have the following claim.

Claim 1. Let x_i be a solution to CVP(L^i, b_i). Then {x_i − b_i}_{i=1,...,n} contains a solution to SVP(L).

Proof. Let x = Σ_i n_i b_i be any solution to SVP(L). As N(x/2) < N(x), we get that x/2 ∉ L, so there is i ∈ {1, ..., n} such that n_i is odd. Then x + b_i ∈ L^i, and hence N(x_i − b_i) ≤ N((x + b_i) − b_i) = N(x). Therefore, the set {x_i − b_i}_{i=1,...,n} contains a nonzero lattice element of norm less than or equal to N(x).

When N is the l2 norm on R^n, CVP has another formulation that allows us to use convex optimization techniques. To this end, let B be the matrix given by the basis (b_1, ..., b_n) and c a vector in R^n. Then CVP(L, c) is equivalent to

    minimize x^T B^T B x − 2 c^T B x
    subject to x ∈ Z^n

This is readily seen by expanding out the quantity ||Bx − c||^2. Following (ref), we will apply a convex relaxation to this problem to get an efficient randomized algorithm solving CVP_γ.

Finally, a last related problem is the Hermite Shortest Vector Problem, denoted HSVP. Again, let B ∈ R^{n×n} be the matrix given by (b_1, ..., b_n); for γ ≥ 1, HSVP_γ(L) is the following problem.

    HSVP_γ(L): find v ∈ L such that N(v) ≤ γ |det(B)|^{1/n}

According to a theorem of Minkowski's, HSVP and SVP are in fact closely related. The LLL algorithm described in Section III-A actually solves HSVP_γ.

B. Hardness: Average and Worst Case

An interesting feature of lattice problems is their hardness. M. Ajtai in his seminal papers [1], [2] showed in particular that worst-case hardness is related to average-case hardness, and that SVP is NP-hard under randomized reductions. This makes lattice problems particularly interesting as a basis for cryptosystems. Examples of such systems are the system of Ajtai and Dwork [3], that of Goldreich, Goldwasser and Halevi [4], the NTRU cryptosystem [5], and the first fully homomorphic encryption scheme by Gentry [6]. In this section we state a number of hardness results without proofs. For a more precise treatment see also [7].

The main worst-case to average-case hardness result is stated as follows. For any X ∈ (Z/qZ)^{n×m}, set Λ(X) := {y ∈ Z^m | Xy ≡ 0 (mod q)}, and let Λ_{n,m,q} be the space of such lattices. We view Λ_{n,m,q} as a probability space by choosing X according to Ω_{n,m,q} and taking Λ(X). For any X, Λ(X) is indeed a lattice, as q(Z^m) ⊂ Λ(X). Finally, for c ≥ 1, a lattice L endowed with a norm N is said to have a c-unique shortest vector v if the following holds:

    {w ∈ L | N(w) ≤ c λ(L)} = {v, −v}

Theorem 1. Suppose there is a randomized polynomial-time algorithm A such that for all n, A returns a vector of Λ(X) of length ≤ n with probability 1/n^{O(1)}, for Λ(X) chosen at random in Λ_{n,m,q}, where m = αn log(n) and q = n^β for appropriate α, β. Then there exist a randomized polynomial-time algorithm B and constants c_1, c_2, c_3 such that for all n and every basis (b_1, ..., b_n) of R^n with L := ⊕ Z b_i, B performs the following with high probability [3]:

1) finds a basis {β_1, ..., β_n} of L such that
       max_i ||β_i|| ≤ n^{c_1} min { max_i ||c_i|| : (c_i) a basis of L };
2) finds an estimate λ̄ of λ(L) such that
       λ(L)/n^{c_2} ≤ λ̄ ≤ λ(L);
3) if, moreover, L has an n^{c_3}-unique shortest vector, finds this vector.

This result is called a worst-case to average-case hardness result because it links the lack of a polynomial-time randomized algorithm over 'all' lattices to the lack of such an algorithm for a small class of lattices, namely Λ_{n,m,q}. We also note that similar results exist for related problems such as CVP. In the following, we list further hardness results.

Theorem 2. SVP is NP-hard under the l∞ norm. Moreover, CVP is NP-hard under the lp norm for all p ≥ 1 [8].

Theorem 3. For all γ, CVP_γ is NP-hard under any lp norm [9].

Building on results by Ajtai [2], Micciancio proved the following.

Theorem 4. For all ε > 0, SVP_{√2−ε} is NP-hard under randomized polynomial-time reductions [10].

Finally:

Theorem 5. There is a c > 0 such that CVP is NP-hard to approximate within a factor n^{c/log(log(n))}, where n is the dimension of the lattice [11].

Many other results related to hardness exist. We only mention some of the most fundamental ones.

III. SOLVING SVP

This section presents methods for solving SVP and related results. Even though many hardness results have been found, we can still solve approximate SVP within some reasonable factor depending on the input size.
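As an illustration of these definitions and of Claim 1, the following self-contained sketch (a toy example of our own, not from the paper; the basis and the coefficient search radius R are arbitrary choices) finds a shortest vector of a small lattice both by direct search and via the CVP-to-SVP reduction over the sublattices L^i:

```python
import itertools
import numpy as np

def lattice_points(B, R):
    """Yield every vector c @ B with integer coefficients c in [-R, R]^n."""
    n = B.shape[0]
    for c in itertools.product(range(-R, R + 1), repeat=n):
        yield np.array(c) @ B

def svp_bruteforce(B, R=3):
    """Shortest nonzero vector (l2 norm) among small-coefficient combinations."""
    best = None
    for v in lattice_points(B, R):
        if np.any(v) and (best is None or v @ v < best @ best):
            best = v
    return best

def cvp_bruteforce(B, t, R=3):
    """Closest lattice vector to the target t, again by direct search."""
    return min(lattice_points(B, R), key=lambda v: (v - t) @ (v - t))

def svp_via_cvp(B, R=3):
    """Claim 1: solving CVP on the sublattices L^i (b_i doubled) yields a
    shortest vector of L among the candidates x_i - b_i."""
    n = B.shape[0]
    cands = []
    for i in range(n):
        Bi = B.copy()
        Bi[i] *= 2                 # basis of L^i = Z b_1 + ... + Z(2 b_i) + ... + Z b_n
        xi = cvp_bruteforce(Bi, B[i], R)
        cands.append(xi - B[i])    # x_i - b_i lies in L and is nonzero
    return min(cands, key=lambda v: v @ v)

B = np.array([[5, 1], [2, 4]])
u, w = svp_bruteforce(B), svp_via_cvp(B)
print(u @ u, w @ w)                # both achieve lambda(L)^2 = 18
```

The direct search only works because the example basis is mild enough that a shortest vector has tiny coefficients; for a skew basis in higher dimension this enumeration is exactly the exponential blow-up the paper's algorithms are designed to avoid.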
Recommended publications
  • arXiv:cs/0601091v2
    Communication Over MIMO Broadcast Channels Using Lattice-Basis Reduction. Mahmoud Taherzadeh, Amin Mobasher, and Amir K. Khandani, Coding & Signal Transmission Laboratory, Department of Electrical & Computer Engineering, University of Waterloo, Waterloo, Ontario, Canada, N2L 3G1. Abstract: A simple scheme for communication over MIMO broadcast channels is introduced which adopts the lattice reduction technique to improve the naive channel inversion method. Lattice basis reduction helps us to reduce the average transmitted energy by modifying the region which includes the constellation points. Simulation results show that the proposed scheme performs well and, compared to more complex methods (such as the perturbation method [1]), has a negligible loss. Moreover, the proposed method is extended to the case of different rates for different users. The asymptotic behavior (SNR → ∞) of the symbol error rate of the proposed method and the perturbation technique, and also the outage probability for the case of fixed-rate users, is analyzed. It is shown that the proposed method, based on LLL lattice reduction, achieves the optimum asymptotic slope of symbol error rate (called the precoding diversity). Also, the outage probability for the case of fixed sum-rate is analyzed. I. INTRODUCTION In recent years, communication over multiple-antenna fading channels has attracted the attention of many researchers. Initially, the main interest was in point-to-point Multiple-Input Multiple-Output (MIMO) communications [2]–[6]. In [2] and [3], the authors have shown that the capacity of a MIMO point-to-point channel increases linearly with the minimum number of the transmit and receive antennas. (arXiv:cs/0601091v2 [cs.IT] 18 Jun 2007)
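The lattice-basis reduction step this scheme builds on is LLL. As a reference point, here is a minimal textbook LLL sketch (our own illustration; it recomputes the Gram-Schmidt data after every change for clarity, which is correct but far slower than production implementations):

```python
import numpy as np

def gram_schmidt(B):
    """Orthogonalize the rows of B; return B* and the mu coefficients."""
    n = B.shape[0]
    Bs = B.astype(float).copy()
    mu = np.zeros((n, n))
    for i in range(n):
        for j in range(i):
            mu[i, j] = (B[i] @ Bs[j]) / (Bs[j] @ Bs[j])
            Bs[i] = Bs[i] - mu[i, j] * Bs[j]
    return Bs, mu

def lll_reduce(B, delta=0.75):
    """Textbook LLL reduction of the rows of an integer matrix B."""
    B = B.astype(float).copy()
    n = B.shape[0]
    Bs, mu = gram_schmidt(B)
    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):          # size reduction
            q = int(round(mu[k, j]))
            if q != 0:
                B[k] -= q * B[j]
                Bs, mu = gram_schmidt(B)
        if Bs[k] @ Bs[k] >= (delta - mu[k, k - 1] ** 2) * (Bs[k - 1] @ Bs[k - 1]):
            k += 1                              # Lovász condition holds
        else:
            B[[k - 1, k]] = B[[k, k - 1]]       # swap and step back
            Bs, mu = gram_schmidt(B)
            k = max(k - 1, 1)
    return B

B = np.array([[1, 1, 1], [-1, 0, 2], [3, 5, 6]])
R = lll_reduce(B)
print(R[0], abs(np.linalg.det(R)))  # short first vector; |det| preserved (= 3)
```

All operations (integer row subtractions and swaps) are unimodular, so the lattice, and hence |det|, is unchanged while the first row shrinks toward a provably short vector.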
  • Lectures on the NTRU Encryption Algorithm and Digital Signature Scheme: Grenoble, June 2002
    Lectures on the NTRU encryption algorithm and digital signature scheme: Grenoble, June 2002. J. Pipher, Brown University, Providence RI 02912. 1 Lecture 1. 1.1 Integer lattices. Lattices have been studied by cryptographers for quite some time, both in the field of cryptanalysis (see for example [16–18]) and as a source of hard problems on which to build encryption schemes (see [1, 8, 9]). In this lecture, we describe the NTRU encryption algorithm and the lattice problems on which it is based. We begin with some definitions and a brief overview of lattices. If a_1, a_2, ..., a_n are n independent vectors in R^m, n ≤ m, then the integer lattice with these vectors as basis is the set L = {Σ_{i=1}^n x_i a_i : x_i ∈ Z}. A lattice is often represented as a matrix A whose rows are the basis vectors a_1, ..., a_n. The elements of the lattice are simply the vectors of the form v^T A, which denotes the usual matrix multiplication. We will specialize for now to the situation when the rank of the lattice and the dimension are the same (n = m). The determinant of a lattice, det(L), is the volume of the fundamental parallelepiped spanned by the basis vectors. By the Gram-Schmidt process, one can obtain an orthogonal basis for the vector space generated by L, and det(L) will just be the product of the lengths of these orthogonal vectors. Note that these vectors are not a basis for L as a lattice, since L will not usually possess an orthogonal basis.
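The determinant-as-product-of-Gram-Schmidt-lengths fact above is easy to check numerically; a small sketch (the example basis is our own):

```python
import numpy as np

def lattice_determinant(A):
    """det(L) as the product of the Gram-Schmidt vector lengths of the rows of A."""
    n = A.shape[0]
    Bs = A.astype(float).copy()
    for i in range(n):
        for j in range(i):
            # subtract the projection of row i onto the j-th orthogonal vector
            Bs[i] -= (A[i] @ Bs[j]) / (Bs[j] @ Bs[j]) * Bs[j]
    return float(np.prod([np.linalg.norm(b) for b in Bs]))

A = np.array([[4.0, 1.0], [2.0, 3.0]])
print(lattice_determinant(A))        # ≈ 10.0
print(abs(np.linalg.det(A)))         # ≈ 10.0, the same volume
```

As the lecture notes say, the orthogonalized vectors themselves are generally not lattice vectors; only the volume they measure is an invariant of the lattice.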
  • Solving Hard Lattice Problems and the Security of Lattice-Based Cryptosystems
    Solving Hard Lattice Problems and the Security of Lattice-Based Cryptosystems. Thijs Laarhoven, Joop van de Pol, Benne de Weger. September 10, 2012. Abstract: This paper is a tutorial introduction to the present state of the art in the field of security of lattice-based cryptosystems. After a short introduction to lattices, we describe the main hard problems in lattice theory that cryptosystems base their security on, and we present the main methods of attacking these hard problems, based on lattice basis reduction. We show how to find shortest vectors in lattices, which can be used to improve basis reduction algorithms. Finally, we give a framework for assessing the security of cryptosystems based on these hard problems. 1 Introduction. Lattice-based cryptography is a quickly expanding field. The last two decades have seen exciting developments, both in using lattices in cryptanalysis and in building public-key cryptosystems on hard lattice problems. In the latter category we could mention the systems of Ajtai and Dwork [AD], Goldreich, Goldwasser and Halevi [GGH2], the NTRU cryptosystem [HPS], and systems built on Small Integer Solutions [MR1] problems and Learning With Errors [R1] problems. In the last few years these developments culminated in the advent of the first fully homomorphic encryption scheme by Gentry [Ge]. A major issue in this field is to base key-length recommendations on a firm understanding of the hardness of the underlying lattice problems. The general feeling among the experts is that the present understanding is not yet on the same level as the understanding of, e.g., factoring or discrete logarithms.
  • On Ideal Lattices and Learning with Errors Over Rings∗
    On Ideal Lattices and Learning with Errors Over Rings. Vadim Lyubashevsky, Chris Peikert, Oded Regev. June 25, 2013. Abstract: The "learning with errors" (LWE) problem is to distinguish random linear equations, which have been perturbed by a small amount of noise, from truly uniform ones. The problem has been shown to be as hard as worst-case lattice problems, and in recent years it has served as the foundation for a plethora of cryptographic applications. Unfortunately, these applications are rather inefficient due to an inherent quadratic overhead in the use of LWE. A main open question was whether LWE and its applications could be made truly efficient by exploiting extra algebraic structure, as was done for lattice-based hash functions (and related primitives). We resolve this question in the affirmative by introducing an algebraic variant of LWE called ring-LWE, and proving that it too enjoys very strong hardness guarantees. Specifically, we show that the ring-LWE distribution is pseudorandom, assuming that worst-case problems on ideal lattices are hard for polynomial-time quantum algorithms. Applications include the first truly practical lattice-based public-key cryptosystem with an efficient security reduction; moreover, many of the other applications of LWE can be made much more efficient through the use of ring-LWE. 1 Introduction. Over the last decade, lattices have emerged as a very attractive foundation for cryptography. The appeal of lattice-based primitives stems from the fact that their security can often be based on worst-case hardness assumptions, and that they appear to remain secure even against quantum computers.
  • On Lattices, Learning with Errors, Random Linear Codes, and Cryptography
    On Lattices, Learning with Errors, Random Linear Codes, and Cryptography. Oded Regev. May 2, 2009. Abstract: Our main result is a reduction from worst-case lattice problems such as GAPSVP and SIVP to a certain learning problem. This learning problem is a natural extension of the 'learning from parity with error' problem to higher moduli. It can also be viewed as the problem of decoding from a random linear code. This, we believe, gives a strong indication that these problems are hard. Our reduction, however, is quantum. Hence, an efficient solution to the learning problem implies a quantum algorithm for GAPSVP and SIVP. A main open question is whether this reduction can be made classical (i.e., non-quantum). We also present a (classical) public-key cryptosystem whose security is based on the hardness of the learning problem. By the main result, its security is also based on the worst-case quantum hardness of GAPSVP and SIVP. The new cryptosystem is much more efficient than previous lattice-based cryptosystems: the public key is of size Õ(n²) and encrypting a message increases its size by a factor of Õ(n) (in previous cryptosystems these values are Õ(n⁴) and Õ(n²), respectively). In fact, under the assumption that all parties share a random bit string of length Õ(n²), the size of the public key can be reduced to Õ(n). 1 Introduction. Main theorem. For an integer n ≥ 1 and a real number ε ≥ 0, consider the 'learning from parity with error' problem, defined as follows: the goal is to find an unknown s ∈ Z_2^n given a list of 'equations with errors' ⟨s, a_1⟩ ≈_ε b_1 (mod 2), ⟨s, a_2⟩ ≈_ε b_2 (mod 2), ...
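The 'equations with errors' above are easy to generate, and for tiny n the secret can even be recovered by exhaustive search over all 2^n candidates, which makes it clear that the hardness only kicks in at scale. A minimal sketch (all names and parameters are our own toy choices, far below cryptographic sizes):

```python
import random
from itertools import product

def parity_samples(s, m, eps, rng):
    """m noisy equations <s, a_i> = b_i (mod 2); each b_i flipped with prob. eps."""
    out = []
    for _ in range(m):
        a = [rng.randrange(2) for _ in s]
        b = sum(si * ai for si, ai in zip(s, a)) % 2
        if rng.random() < eps:
            b ^= 1                 # the "error" in learning from parity with error
        out.append((a, b))
    return out

def disagreements(cand, samples):
    """How many equations the candidate secret fails to satisfy."""
    return sum((sum(c * a for c, a in zip(cand, ai)) % 2) != bi
               for ai, bi in samples)

def best_secret(samples, n):
    """Brute force over all 2^n candidates: feasible only for tiny n."""
    return min(product(range(2), repeat=n),
               key=lambda c: disagreements(c, samples))

rng = random.Random(0)
s = [1, 0, 1, 1]
samples = parity_samples(s, 40, 0.125, rng)
guess = best_secret(samples, len(s))
print(guess, disagreements(guess, samples))
```

With noise present the maximum-agreement candidate is very likely (though not certain) to be the true secret for these sample counts; the point of Regev's reduction is that nothing remotely this cheap exists once n is large.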
  • Lattice Reduction for Modular Knapsack⋆
    Lattice Reduction for Modular Knapsack. Thomas Plantard, Willy Susilo, and Zhenfei Zhang, Centre for Computer and Information Security Research, School of Computer Science & Software Engineering (SCSSE), University of Wollongong, Australia. {thomaspl, wsusilo, [email protected]}. Abstract: In this paper, we present a new methodology to adapt any kind of lattice reduction algorithm to deal with the modular knapsack problem. In general, the modular knapsack problem can be solved using a lattice reduction algorithm when its density is low. The complexity of lattice reduction algorithms to solve those problems is upper-bounded as a function of the lattice dimension and the maximum norm of the input basis. In the case of a low-density modular knapsack-type basis, the weight of the maximum norm comes mainly from its first column. Therefore, by distributing the weight into multiple columns, we are able to reduce the maximum norm of the input basis. Consequently, the upper bound of the time complexity is reduced. To show the advantage of our methodology, we apply our idea to the floating-point LLL (L²) algorithm. We bring the complexity from O(d^{3+ε}β² + d^{4+ε}β) to O(d^{2+ε}β² + d^{4+ε}β) for ε < 1 for the low-density knapsack problem, assuming a uniform distribution, where d is the dimension of the lattice and β is the bit length of the maximum norm of the knapsack-type basis. We also provide some techniques for dealing with a principal ideal lattice basis, which can be seen as a special case of a low-density modular knapsack-type basis. Keywords: Lattice Theory, Lattice Reduction, Knapsack Problem, LLL, Recursive Reduction, Ideal Lattice.
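The knapsack-type basis referred to above can be written down directly. Below is a minimal sketch of the standard construction (the weights a and modulus q are our own toy values): lattice vectors whose first coordinate is 0 encode solutions x of Σ x_i a_i ≡ 0 (mod q), and the heavy first column is exactly the one the paper's weight-distribution trick targets.

```python
def knapsack_basis(a, q):
    """Rows span the vectors (c0*q - sum(x_i * a_i), x_1, ..., x_d); the first
    entry is 0 exactly when sum(x_i * a_i) ≡ 0 (mod q)."""
    d = len(a)
    rows = [[q] + [0] * d]              # the modulus row
    for i, ai in enumerate(a):
        row = [-ai] + [0] * d           # -a_i in the (heavy) first column ...
        row[1 + i] = 1                  # ... and a unit entry recording x_i
        rows.append(row)
    return rows

def combine(coeffs, rows):
    """Integer combination of the basis rows."""
    return [sum(c * r[j] for c, r in zip(coeffs, rows))
            for j in range(len(rows[0]))]

a, q = [3, 5, 7], 15                    # 3 + 5 + 7 = 15 ≡ 0 (mod 15)
B = knapsack_basis(a, q)
v = combine([1, 1, 1, 1], B)            # c0 = 1, x = (1, 1, 1)
print(v)                                # [0, 1, 1, 1]: short vector encoding the relation
```

In a real instance the a_i are β-bit numbers, so the first column dominates the norm of every row, which is why redistributing that weight lowers the reduction cost bound.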
  • Making NTRU As Secure As Worst-Case Problems Over Ideal Lattices
    Making NTRU as Secure as Worst-Case Problems over Ideal Lattices. Damien Stehlé (CNRS, Laboratoire LIP (U. Lyon, CNRS, ENS Lyon, INRIA, UCBL), 46 Allée d'Italie, 69364 Lyon Cedex 07, France; [email protected] – http://perso.ens-lyon.fr/damien.stehle) and Ron Steinfeld (Centre for Advanced Computing - Algorithms and Cryptography, Department of Computing, Macquarie University, NSW 2109, Australia; [email protected] – http://web.science.mq.edu.au/~rons). Abstract: NTRUEncrypt, proposed in 1996 by Hoffstein, Pipher and Silverman, is the fastest known lattice-based encryption scheme. Its moderate key sizes, excellent asymptotic performance and conjectured resistance to quantum computers could make it a desirable alternative to factorisation and discrete-log based encryption schemes. However, since its introduction, doubts have regularly arisen about its security. In the present work, we show how to modify NTRUEncrypt to make it provably secure in the standard model, under the assumed quantum hardness of standard worst-case lattice problems, restricted to a family of lattices related to some cyclotomic fields. Our main contribution is to show that if the secret key polynomials are selected by rejection from discrete Gaussians, then the public key, which is their ratio, is statistically indistinguishable from uniform over its domain. The security then follows from the already proven hardness of the R-LWE problem. Keywords: Lattice-based cryptography, NTRU, provable security. 1 Introduction. NTRUEncrypt, devised by Hoffstein, Pipher and Silverman, was first presented at the Crypto'96 rump session [14]. Although its description relies on arithmetic over the polynomial ring Z_q[x]/(x^n − 1) for n prime and q a small integer, it was quickly observed that breaking it could be expressed as a problem over Euclidean lattices [6].
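The convolution ring Z_q[x]/(x^n − 1) underlying NTRUEncrypt has a very direct implementation as cyclic convolution of coefficient lists; a minimal sketch (toy parameters of our own, far below real NTRU sizes):

```python
def ring_mul(f, g, n, q):
    """Multiply f and g in Z_q[x]/(x^n - 1): exponents wrap because x^n = 1."""
    h = [0] * n
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            h[(i + j) % n] = (h[(i + j) % n] + fi * gj) % q
    return h

# In this ring, multiplying by x is a cyclic shift of the coefficient vector.
print(ring_mul([1, 2, 0], [0, 1, 0], 3, 7))   # [0, 1, 2]   (1 + 2x) * x = x + 2x^2
print(ring_mul([0, 0, 1], [0, 1, 0], 3, 7))   # [1, 0, 0]   x^2 * x = x^3 = 1
```

It is exactly this circulant structure that lets an NTRU public key be viewed as a lattice basis, connecting the scheme to the lattice problems discussed above.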
  • Lower Bounds on Lattice Sieving and Information Set Decoding
    Lower bounds on lattice sieving and information set decoding. Elena Kirshanova (Immanuel Kant Baltic Federal University, Kaliningrad, Russia; [email protected]) and Thijs Laarhoven (Eindhoven University of Technology, Eindhoven, The Netherlands; [email protected]). April 22, 2021. Abstract: In two of the main areas of post-quantum cryptography, based on lattices and codes, nearest neighbor techniques have been used to speed up state-of-the-art cryptanalytic algorithms, and to obtain the lowest asymptotic cost estimates to date [May–Ozerov, Eurocrypt'15; Becker–Ducas–Gama–Laarhoven, SODA'16]. These upper bounds are useful for assessing the security of cryptosystems against known attacks, but to guarantee long-term security one would like to have closely matching lower bounds, showing that improvements on the algorithmic side will not drastically reduce the security in the future. As existing lower bounds from the nearest neighbor literature do not apply to the nearest neighbor problems appearing in this context, one might wonder whether further speedups to these cryptanalytic algorithms can still be found by only improving the nearest neighbor subroutines. We derive new lower bounds on the costs of solving the nearest neighbor search problems appearing in these cryptanalytic settings. For the Euclidean metric we show that for random data sets on the sphere, the locality-sensitive filtering approach of [Becker–Ducas–Gama–Laarhoven, SODA 2016] using spherical caps is optimal, and hence within a broad class of lattice sieving algorithms covering almost all approaches to date, their asymptotic time complexity of 2^{0.292d+o(d)} is optimal. Similar conditional optimality results apply to lattice sieving variants, such as the 2^{0.265d+o(d)} complexity for quantum sieving [Laarhoven, PhD thesis 2016] and previously derived complexity estimates for tuple sieving [Herold–Kirshanova–Laarhoven, PKC 2018].
  • Lattice Reduction in Two Dimensions: Analyses Under Realistic Probabilistic Models
    Lattice reduction in two dimensions: analyses under realistic probabilistic models. Brigitte Vallée and Antonio Vera (CNRS UMR 6072, GREYC, Université de Caen, F-14032 Caen, France). HAL Id: hal-00211495, https://hal.archives-ouvertes.fr/hal-00211495, preprint submitted on 21 Jan 2008. The Gaussian algorithm for lattice reduction in dimension 2 is precisely analysed under a class of realistic probabilistic models, which are of interest when applying the Gauss algorithm "inside" the LLL algorithm. The proofs deal with the underlying dynamical systems and transfer operators. All the main parameters are studied: execution parameters, which describe the behaviour of the algorithm itself, as well as output parameters, which describe the geometry of reduced bases. Keywords: Lattice Reduction, Gauss' algorithm, LLL algorithm, Euclid's algorithm, probabilistic analysis of algorithms, Dynamical Systems, Dynamical analysis of Algorithms. 1 Introduction. The lattice reduction problem consists in finding a short basis of a lattice of Euclidean space given an initially skew basis.
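The Gaussian (Lagrange) reduction algorithm analysed in this paper is short enough to state in full; a minimal sketch for the Euclidean norm (the example basis is our own), which alternates a nearest-integer reduction step with a swap, exactly as Euclid's algorithm alternates division and exchange:

```python
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def gauss_reduce(u, v):
    """Lagrange-Gauss reduction in dimension 2: returns a basis whose first
    vector is a shortest nonzero lattice vector (l2 norm)."""
    u, v = list(u), list(v)
    if dot(u, u) > dot(v, v):
        u, v = v, u                              # keep u the shorter vector
    while True:
        m = round(dot(u, v) / dot(u, u))         # nearest-integer projection coefficient
        v = [vi - m * ui for vi, ui in zip(v, u)]
        if dot(u, u) <= dot(v, v):
            return u, v
        u, v = v, u                              # swap and continue

u, v = gauss_reduce([1, 1], [3, 2])
print(u, v)                                      # a reduced basis of Z^2
```

Each swap strictly decreases the shorter vector's length, so the loop terminates; the number of iterations is precisely the quantity this paper's dynamical analysis controls.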
  • Algorithms for the Approximate Common Divisor Problem
    Algorithms for the Approximate Common Divisor Problem. Steven D. Galbraith and Shishay W. Gebregiyorgis (Mathematics Department, University of Auckland, New Zealand; [email protected], [email protected]) and Sean Murphy (Royal Holloway, University of London, UK; [email protected]). Abstract: The security of several homomorphic encryption schemes depends on the hardness of the Approximate Common Divisor (ACD) problem. In this paper we review and compare existing algorithms to solve the ACD problem using lattices. In particular we consider the simultaneous Diophantine approximation method, the orthogonal lattice method, and a method based on multivariate polynomials and Coppersmith's algorithm that was studied in detail by Cohn and Heninger. One of our main goals is to compare the multivariate polynomial approach with other methods. We find that the multivariate polynomial approach is not better than the orthogonal lattice algorithm for practical cryptanalysis. Another contribution is to consider a sample-amplification technique for ACD samples, and to consider a preprocessing algorithm similar to the Blum-Kalai-Wasserman (BKW) algorithm for learning parity with noise. We explain why, unlike in other settings, the BKW algorithm does not give an improvement over the lattice algorithms. This is the full version of a paper published at ANTS-XII in 2016. Keywords: Approximate common divisors, lattice attacks, orthogonal lattice, Coppersmith's method. 1 Introduction. The approximate common divisor problem (ACD) was first studied by Howgrave-Graham [HG01]. Further interest in this problem was provided by the homomorphic encryption scheme of van Dijk, Gentry, Halevi and Vaikuntanathan [DGHV10] and its variants [CMNT11, CNT12, CS15].
  • A Hybrid Lattice-Reduction and Meet-In-The-Middle Attack Against NTRU
    A hybrid lattice-reduction and meet-in-the-middle attack against NTRU. Nick Howgrave-Graham, [email protected], NTRU Cryptosystems, Inc. Abstract: To date the NTRUEncrypt security parameters have been based on the existence of two types of attack: a meet-in-the-middle attack due to Odlyzko, and a conservative extrapolation of the running times of the best (known) lattice reduction schemes to recover the private key. We show that there is in fact a continuum of more efficient attacks between these two attacks. We show that by combining lattice reduction and a meet-in-the-middle strategy, one can reduce the number of loops in attacking the NTRUEncrypt private key from 2^{84.2} to 2^{60.3} for the k = 80 parameter set. In practice the attack is still expensive (dependent on one's choice of cost metric), although there are certain space/time tradeoffs that can be applied. Asymptotically our attack remains exponential in the security parameter k, but it dictates that NTRUEncrypt parameters must be chosen so that the meet-in-the-middle attack has complexity 2^k even after an initial lattice basis reduction of complexity 2^k. 1 Introduction. It is well known that the closest vector problem (CVP) can be solved efficiently in the case that the given point in space is very close to a lattice vector [7, 18]. If this CVP algorithm takes time t, and a set S has the property that it includes at least one point v_0 ∈ S which is very close to a lattice vector, then clearly v_0 can be found in time O(|S| t) by exhaustively enumerating the set S.
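A simple instance of the "easy CVP when the target is very close to the lattice" subroutine mentioned here is Babai-style rounding; a minimal numpy sketch (the basis and target are our own toy values, not taken from [7, 18]):

```python
import numpy as np

def babai_round(B, t):
    """Approximate CVP by rounding: express t in the (row) basis B, then round
    the real coordinates to integers. Reliable only when B is fairly orthogonal
    and t is very close to a lattice vector."""
    coords = np.linalg.solve(B.T, t)     # real solution of coords @ B = t
    return np.rint(coords) @ B

B = np.array([[2.0, 1.0], [1.0, 3.0]])
t = np.array([4.1, 6.9])                 # a point near the lattice vector (4, 7)
print(babai_round(B, t))                 # [4. 7.]
```

The hybrid attack's enumeration of the candidate set S amounts to calling such a cheap routine |S| times, which is where the O(|S| t) cost in the introduction comes from.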
  • Algorithms for the Densest Sub-Lattice Problem
    Algorithms for the Densest Sub-Lattice Problem. Daniel Dadush (Georgia Tech, Atlanta, GA, USA; [email protected]) and Daniele Micciancio (University of California, San Diego, 9500 Gilman Dr., Mail Code 0404, La Jolla, CA 92093, USA; [email protected]). December 24, 2012. Abstract: We give algorithms for computing the densest k-dimensional sublattice of an arbitrary lattice, and related problems. This is an important problem in the algorithmic geometry of numbers that includes as special cases Rankin's problem (which corresponds to the densest sublattice problem with respect to the Euclidean norm, and has applications to the design of lattice reduction algorithms), and the shortest vector problem for arbitrary norms (which corresponds to setting k = 1) and its dual (k = n − 1). Our algorithm works for any norm, has running time k^{O(k·n)} and uses 2^n poly(n) space. In particular, the algorithm runs in single exponential time 2^{O(n)} for any constant k = O(1). This work was supported in part by NSF grant CNS-1117936. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation. 1 Introduction. A lattice is a discrete subgroup of R^n. Computational problems on lattices play an important role in many areas of computer science, mathematics and engineering, including cryptography, cryptanalysis, combinatorial optimization, communication theory and algebraic number theory. The most famous problem on lattices is the shortest vector problem (SVP), which asks to find the shortest nonzero vector in an input lattice.