New hardness results for total search problems and non-interactive lattice-based protocols by Aikaterini Sotiraki Diploma, National Technical University of Athens (2013) S.M., Massachusetts Institute of Technology (2016) Submitted to the Department of Electrical Engineering and Computer Science in partial fulfillment of the requirements for the degree of Doctor of Philosophy at the MASSACHUSETTS INSTITUTE OF TECHNOLOGY September 2020 © Massachusetts Institute of Technology 2020. All rights reserved.
Author...... Department of Electrical Engineering and Computer Science August 13, 2020
Certified by ...... Vinod Vaikuntanathan Associate Professor of Electrical Engineering and Computer Science Thesis Supervisor
Accepted by ...... Leslie A. Kolodziejski Professor of Electrical Engineering and Computer Science Chair, Department Committee on Graduate Students
Submitted to the Department of Electrical Engineering and Computer Science on August 13, 2020, in partial fulfillment of the requirements for the degree of Doctor of Philosophy
Abstract We investigate new hardness results for total search problems, namely search problems that always have a solution, and non-interactive lattice protocols.
1. A question of particular importance, raised together with the definition of the class of total search problems, is to identify complete problems that do not explicitly have a Turing machine or a circuit as part of their input. We provide natural complete problems for various TFNP subclasses. We first consider the classes PPP and PWPP, where our complete problems are inspired by lattice theory and lead towards answering important questions in cryptography. Additionally, we identify complete problems for classes that generalize the class PPA, called PPAq. In this case, our results have connections to number theory, and in particular to the Chevalley-Warning theorem.
2. The Learning With Errors (LWE) problem, introduced by Regev, has had a profound impact on cryptography. We consider non-interactive protocols that are secure under the LWE assumption, such as non-interactive zero-knowledge and non-interactive key-exchange. We show new strategies and new barriers towards constructing such protocols.
Thesis Supervisor: Vinod Vaikuntanathan Title: Associate Professor of Electrical Engineering and Computer Science
Acknowledgments
I have been extremely fortunate to spend the last few years at MIT. I am particularly grateful to my advisor Vinod Vaikuntanathan. His enthusiasm for research and his unlimited knowledge have provided me with motivation to continue even in the hardest moments of my PhD. Even short meetings with Vinod were enough to give me new ideas and directions when I most needed them. A decisive point during my PhD was definitely my internship at IDC, Herzliya with Alon Rosen. This experience gave me the opportunity to interact with amazing people and find new collaborators. I hope that this summer laid the foundations for many more collaborations yet to come. Another important point of my PhD was my internship at Microsoft Research, where I was fortunate to work on applied cryptography. I am still amazed that in these three months I was able to learn so many new things unrelated to my previous research and contribute to projects. I am certain that this would have been impossible without the help of the group at Microsoft Research, and especially my mentor Esha Ghosh. I would like to thank all of my collaborators: Manolis, Giorgos, Ron, Adam, Esha, Hao, Pritish, Siyao, Alon, Aris, Alexandros, and Mika. It is beyond any doubt that this thesis would not exist without them. I really enjoyed meeting and learning from each one of them and I hope that I will have the opportunity to continue in the future. The theory group at MIT is a unique place, where I was very fortunate to develop academically, but also meet new friends. I would like to thank all of the students and faculty of the group for providing such a warm and welcoming environment. I am not going to attempt to mention the other students by name in fear of forgetting someone. But I am very grateful to each one of them for all the discussions, the retreats, the theory teas, etc. that we spent together. Especially, I want to thank Ron Rivest and Yael Kalai for serving on my committee.
Ron was also my Master's thesis supervisor and I am grateful to him for introducing me to research. Yael was always a person that I could ask for advice and it is amazing how much I have learned from Yael even without directly working with her. I want to also thank Nancy Lynch for giving me the opportunity to be a TA for her class and for providing me with guidance throughout the years. I want to thank the wonderful administrative assistants of the theory group, Debbie, Joanne and Rebecca. My life at MIT was enjoyable thanks to my many friends. I really hope that even after MIT these friendships will last. I want to especially thank Mădălina, Artemisa, Chara and Lydia P, Christos, Vasso and Dafne, Christina and Konstantinos, Giorgos and Will, Ilias, Konstantina, Maryam, Lydia Z and Marinos, Sophie, and most of all my partner Manolis for all the great moments that we had. From having daily tea breaks to sharing important life moments, my memories of MIT are always tied to them. Finally, I cannot thank enough my parents and my extended family for all the love and support that they offer every single day. It is without doubt that the most challenging part of my PhD was being away from them.
Contents
1 Introduction
1.1 Total Search Problems
1.2 Non-interactive lattice-based protocols
1.3 Notation

2 Total Search Problems
2.1 Complete problems
2.2 TFNP and Cryptography
2.3 Reductions
2.4 Set Description Using Circuits

3 Lattices
3.1 Computational Lattice Problems
3.2 Cryptographic assumptions

4 Pigeonhole Principle Arguments
4.1 Overview of the Results
4.2 BLICHFELDT is PPP-complete
4.3 cSIS is PPP-complete
4.4 Towards universal collision-resistant hashing
4.5 Natural Complete problem for PMPP
4.6 Lattice problems in PPP and PWPP

5 Modulo q Arguments
5.1 Overview
5.2 The class PPAq
5.3 Characterization via Primes
5.4 A Natural Complete Problem
5.5 Complete Problems via Small Depth Arithmetic Circuits
5.6 Applications of Chevalley-Warning
5.7 Structural Properties of PPAq

6 Non-interactive zero-knowledge
6.1 Overview
6.2 Basic Definitions
6.3 From POCS to NIZKs
6.4 Instantiating with LWE

7 Non-interactive key-exchange
7.1 Overview
7.2 Basic Definitions
7.3 (Information Theoretic) Impossibility of Amplification with Multiple Samples
7.4 (Computational) Impossibility of Noise-Ignorant Key Reconciliation Functions
7.5 Connections to other cryptographic primitives

8 Summary and Open Problems

A Missing proofs of Chapter 4
A.1 Proof of Claim 4.3.4

B Missing proofs of Chapter 5
B.1 Reductions Between Complete Problems
B.2 Completeness of Succinct Bipartite
B.3 Equivalence with PMODp
B.4 Proof of Theorem 5.5.1

C Missing proofs of Chapter 7
C.1 A self-contained proof of Theorem 7.3.1
List of Figures

2-1 The landscape of TFNP subclasses
4-1 Problems in PPP
4-2 A simple example of the construction of Lemma 4.3.3
5-1 The PPAq subclasses of TFNP
5-2 Total search problems related to PPAq
5-3 Illustration of the proof of PPApk ⊆ PPAp
5-4 Illustration of the proof of PPAD ⊆ PPAq
5-5 Illustration of the proof of Theorem 5.7.3
7-1 LWE-based key-exchange through reconciliation
List of Tables

4.1 Value and auxiliary variables of graph 풢(i)
4.2 Equations of non-input nodes
4.3 Illustration of the matrix G(i)
4.4 Illustration of the matrix A
A.1 Values of specific expressions (mod 4)
Chapter 1
Introduction
The main task of cryptography is to allow computation and communication in adversarial settings. Most cryptographic applications are impossible in the information-theoretic setting as formalized by Shannon [156], and hence require computational assumptions. Traditional cryptographic assumptions include the hardness of factoring and of finding discrete logarithms. More recently, starting with the breakthrough works of Ajtai [4] and Regev [142], new assumptions based on the hardness of computational lattice problems have been introduced. These problems have various features that traditional assumptions lack. One such characteristic is the nonexistence of known efficient quantum attacks against these assumptions. Another unique feature is that their average-case hardness is based on the worst-case hardness of lattice problems. Even though lattice-based assumptions enjoy these unique features and have been instrumental in the construction of many advanced cryptographic primitives, we still have limited evidence for their hardness, as with any other cryptographic assumption. The contribution of this thesis related to lattice-based assumptions is two-fold. On the one hand, we increase the understanding of the hardness of lattice-based assumptions through their connection with complexity theory, and in particular the complexity of total search problems. On the other hand, we investigate what type of protocols can be constructed based on these assumptions. In particular, we are interested in non-interactive protocols that have been known from traditional assumptions, such as factoring. All the work in this thesis is based on prior publications [158, 91, 146, 96].
1.1 Total Search Problems
The fundamental task of computational complexity theory is to classify computational problems according to their inherent computational difficulty. This leads to the definition of various complexity classes, such as NP, which contains the decision problems with efficiently verifiable proofs of the “yes” instances. The search analog of the class NP, called FNP, contains the search problems whose decision version is in NP. Similarly, the class FP is the search analog of P. The seminal works of [106, 134] consider search problems in FNP that are total, i.e., their decision version is always affirmative and thus a solution must always exist. Even though the class FNP seems inadequate to capture the intrinsic complexity of total problems, as was first shown in [106], there is evidence for the hardness of total search problems, e.g. in [97]. Hence, Megiddo and Papadimitriou [122] defined the class TFNP that contains the total search problems of FNP, and Papadimitriou [134] proposed the following classification rule for problems in TFNP:
Total search problems should be classified in terms of the profound mathematical principles that are invoked to establish their totality.
Following the above principle, Johnson, Papadimitriou and Yannakakis [106] defined the class PLS. A few years later, Papadimitriou [134] defined the complexity classes PPA, PPAD, PPADS and PPP. Recently, the classes CLS and PWPP were defined in [54] and [104], respectively. Finding complete problems for the above classes enhances our understanding of the underlying mathematical principles. Additionally, completeness results reveal equivalences between total search problems that seem impossible to discover without invoking the definition of these classes. A question of particular importance, raised together with the definition of the TFNP subclasses in [106, 134], is to identify complete problems that do not explicitly have a Turing machine or a circuit as part of their input. Following the terminology of many TFNP papers, including [94, 70, 71], such problems are called natural problems. Natural complete problems are known for PLS [152] and PPAD [54]. Recently, Filos-Ratsikas and Goldberg [70, 71] identified natural complete problems for the class PPA. Finally, the theory of total search problems has found connections beyond its original scope to areas like communication complexity and circuit lower bounds [90] and the Sum-of-Squares hierarchy [112].
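To make the notion of a total search problem concrete, consider PIGEON, the problem that defines PPP: given a circuit C mapping {0,1}^n to {0,1}^n, find either an input that maps to 0^n or two distinct inputs that collide. The pigeonhole principle guarantees a solution. The sketch below (our illustration, not from the thesis; the toy "circuit" is a hypothetical Python function) solves PIGEON by exhaustive search, which of course takes time exponential in n:

```python
from itertools import product

def solve_pigeon(C, n):
    """Exhaustively solve PIGEON for C: {0,1}^n -> {0,1}^n.

    Returns ("preimage", x) with C(x) = 0^n, or ("collision", x, y)
    with x != y and C(x) = C(y). Totality: if no input maps to 0^n,
    then 2^n inputs land among 2^n - 1 values, so a collision exists."""
    seen = {}
    for x in product((0, 1), repeat=n):
        y = C(x)
        if all(b == 0 for b in y):
            return ("preimage", x)
        if y in seen:
            return ("collision", seen[y], x)
        seen[y] = x
    raise AssertionError("impossible by the pigeonhole principle")

# Toy "circuit": view a 3-bit string as an integer and map it to (value + 1) mod 7.
def C(x):
    v = (x[0] * 4 + x[1] * 2 + x[2] + 1) % 7
    return ((v >> 2) & 1, (v >> 1) & 1, v & 1)

print(solve_pigeon(C, 3))  # → ('preimage', (1, 1, 0))
```

At cryptographic sizes no such exhaustive search is feasible, which is exactly why the hardness of PPP problems is interesting.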
Our Contribution
We provide natural complete problems for various TFNP subclasses. We first consider the classes PPP and PWPP, where our complete problems are inspired by lattice theory and lead towards answering important questions in cryptography. Additionally, we identify complete problems for classes that generalize the class PPA, called PPAq. In this case, our results have connections to number theory, and in particular to the Chevalley-Warning theorem.
Our main result related to the class PPP is that the cSIS problem, a constrained version of the Short Integer Solution (SIS) problem, is PPP-complete [158]. We also identify a version of cSIS, denoted weak-cSIS, that is PWPP-complete. These are the first natural complete problems for the classes PPP and PWPP. Additionally, we show that the computational problem BLICHFELDT, associated with Blichfeldt's theorem, is PPP-complete. Blichfeldt's theorem is a generalization of Minkowski's famous theorem. Even though BLICHFELDT is not natural, since it requires a circuit as part of its input, establishing its complexity has consequences for the complexity of other lattice problems. We now summarize the applications of our main result to cryptography and lattice theory.
Universal Collision-Resistant Hash Function. The class PPP has intrinsic connections to cryptography, as was first pointed out by Papadimitriou [134]. This connection was further investigated through the definition of PWPP by Jeřábek [104]. Building on this inherent connection of PWPP with the cryptographic primitive of collision-resistant hash functions, we construct a natural hash function family ℋcSIS with the following properties:
- Worst-Case Universality. No efficient algorithm can find a collision in every function of the family ℋcSIS, unless worst-case collision-resistant hash functions do not exist. Moreover, if an (average-case hard) collision-resistant hash function family exists, then there exists an efficiently sampleable distribution 풟 over ℋcSIS, such that (풟, ℋcSIS) is an (average-case hard) collision-resistant hash function family.
- Average-Case Hardness. No efficient algorithm can find a collision in a function chosen uniformly at random from ℋcSIS, unless we can efficiently find short lattice vectors in any (worst-case) lattice.
Worst-case universality is reminiscent of the notion of worst-case one-way functions from the assumption that P ≠ NP [155]. The worst-case one-way function is defined via a Turing machine, as opposed to our hash function family ℋcSIS, which is natural and hence does not involve any circuit or Turing machine in its definition. Levin [116] initiated the idea of universal constructions of cryptographic primitives and constructed the first universal one-way function. Since then, many works have constructed other universal primitives [115, 113]. In fact, the same ideas allow us to construct universal collision-resistant hash function families. However, this hash function invokes an explicit description of a Turing machine in its input, and hence it fails to be natural. In contrast, our candidate construction is natural, simple, and of practical interest.
Extension to multi-collision hash functions. The classes k-PMPP, which were recently introduced by [111], capture the principle of finding multi-collisions in a very compressing function. These classes are related to the cryptographic primitive of multi-collision hash functions, which has been a useful cryptographic tool [111, 26, 110, 23]. We define an extension of the cSIS problem, called k-cSIS, and we show that this problem is complete for the class k-PMPP. Our completeness result suggests the first candidate natural universal multi-collision hash function family.
Complexity of Lattice Problems in PPP. The use of lattice problems for cryptography started with the breakthrough work of Ajtai [4] and has been a prolific area of research since then. This wide use of search (approximation) lattice problems motivates the study of their complexity. In fact, Aharonov and Regev showed that the lattice problems that have served as the foundation for numerous cryptographic constructions in the past two decades belong to NP ∩ coNP [2]. We expand this research front by showing that numerous approximation lattice problems are reducible to BLICHFELDT and cSIS, and hence they belong to subclasses of TFNP. This follows by combining results and techniques from lattice theory with our complexity results for BLICHFELDT and cSIS. These results create a new path towards a better understanding of the complexity of lattice problems.
In terms of the PPAq classes, we provide a systematic study of these complexity classes and identify the first natural complete problems when q is a prime [91]. Our complete problems are explicit versions of the Chevalley-Warning theorem. Following the terminology in [21], by explicit we mean that the system of polynomials, which is the input of the computational problems we define, is given as a sum of monic monomials. As a consequence of the PPAq-completeness of our natural problem for q prime, we show that restricting the input circuits in the definition of PPAq to just constant-depth arithmetic formulas does not change the power of the class.
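Recall that the Chevalley-Warning theorem states that if the total degrees of polynomials over a finite field F_p sum to less than the number of variables, then the number of their common zeros is divisible by p; in particular, if one zero exists, another must too. A small brute-force check (our illustration; the polynomial is a hypothetical example, not one from the thesis) makes the divisibility visible:

```python
from itertools import product

def common_zeros(polys, p, n):
    """Count common zeros in F_p^n of polynomials given as evaluation
    functions (a toy stand-in for the explicit sum-of-monic-monomials
    representation used in the thesis)."""
    return sum(
        1
        for x in product(range(p), repeat=n)
        if all(f(x) % p == 0 for f in polys)
    )

p, n = 3, 3
# A single polynomial of total degree 2 < n = 3 variables, so
# Chevalley-Warning applies: the number of zeros is divisible by 3.
f = lambda x: x[0] * x[1] + 2 * x[2]
count = common_zeros([f], p, n)
print(count, count % p)  # → 9 0
```

The explicit search problem behind our completeness result asks, given such a system together with one zero, to find another one, and the theorem guarantees that this problem is total.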
We also illustrate the importance of the complexity classes PPAq by showing that many important search problems, whose computational complexity is not well-understood, belong to PPAq. We complement the study of PPAq with connections to other important and well-studied classes, like PPAD. Below we give a more precise overview of these applications.
Structural properties of PPAq. We characterize the class PPAq for general q in terms of the classes PPAp for prime p. Additionally, we sketch how existing results already paint a near-complete picture of the power of PPAq relative to other TFNP subclasses (via inclusions and oracle separations). We also show that PPAq is closed under Turing reductions.
Connection to lattice problems. We show a connection between PPAq and the Short Integer Solution (SIS) problem from the theory of lattices. This connection implies that SIS with constant modulus q belongs to PPAq ∩ PPP, but it also provides a polynomial-time algorithm for solving SIS when the modulus q is constant and has only 2 and 3 as prime factors.
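For concreteness, in the SIS problem one is given a matrix A ∈ Z_q^{n×m} and asked for a short nonzero integer vector x with Ax ≡ 0 (mod q); when m is large enough relative to n log q, a pigeonhole argument guarantees that a ternary solution exists. The brute-force sketch below (our toy illustration with tiny, hypothetical parameters) finds one:

```python
from itertools import product

def solve_sis(A, q):
    """Find a nonzero x in {-1, 0, 1}^m with A x = 0 (mod q) by
    exhaustive search over ternary vectors. A is an n x m integer
    matrix; such an x exists whenever 3^m > q^n (pigeonhole)."""
    n, m = len(A), len(A[0])
    for x in product((-1, 0, 1), repeat=m):
        if any(x) and all(
            sum(A[i][j] * x[j] for j in range(m)) % q == 0 for i in range(n)
        ):
            return x
    return None

# Tiny hypothetical instance: n = 2, m = 5, q = 3, so 3^5 > 3^2
# and a ternary solution is guaranteed to exist.
A = [[1, 2, 0, 1, 2],
     [2, 1, 1, 0, 1]]
x = solve_sis(A, 3)
print(x)
```

The exhaustive search takes 3^m steps; the point of the completeness results above is to pin down exactly which totality principle makes this problem solvable in principle, not to make it efficient.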
1.2 Non-interactive lattice-based protocols
The learning with errors (LWE) problem, introduced by Regev [142], has had a profound impact on cryptography. The goal in LWE is to find a solution to a set of noisy linear equations modulo a large integer q, where the noise is typically drawn from a discrete Gaussian distribution. The assumption that LWE cannot be broken in polynomial time can be based on the worst-case hardness of lattice problems [142, 135] and has drawn immense interest in recent years. Immediately following its introduction, LWE was shown to imply the existence of many important cryptographic primitives such as public-key encryption [142], circular secure encryption [11], oblivious transfer [139], chosen ciphertext security [140, 135], etc. Even more remarkably, in recent years LWE has been used to achieve schemes and protocols above and beyond what was previously known from other assumptions. Notable examples include fully homomorphic encryption [36], predicate encryption and certain types of functional encryption (see, e.g., [1, 87, 92]), and even obfuscation of certain expressive classes of computations [164, 93]. Despite this amazing list of applications, there are still unexplored directions related to LWE-based constructions. We consider questions related to the primitives of non-interactive key-exchange (NIKE) and general purpose non-interactive zero-knowledge (NIZK) proof systems for NP. A NIZK proof system for a language L ∈ NP, as introduced by Blum et al. [30], is a protocol between a probabilistic polynomial-time prover P and verifier V in the Common Random String (CRS) model. The prover, given an instance x ∈ L, a witness w, and the random string r, produces a proof string π which it sends to the verifier. Based only on x, the random string r and the proof π, the verifier can decide whether x ∈ L. Furthermore, the protocol is zero-knowledge: the proof π reveals nothing to the verifier beyond the fact that x ∈ L. Non-interactive zero-knowledge proof systems have been used extensively in cryptography, with applications ranging from chosen ciphertext security and non-malleability [130, 60, 150], multi-party computation with a small number of rounds (see, e.g., [128]), and low-round witness-indistinguishability [61], to various types of signatures (e.g. [19, 22]) and beyond.
A NIKE is a protocol between two probabilistic polynomial-time parties P1 and P2. The parties share some public parameters and simultaneously exchange a single message. Then, based on the exchanged messages and the public parameters, they agree on a common secret key. Furthermore, no adversary that observes the interaction can gain any information about this key. The paradigmatic example of a NIKE scheme is the Diffie-Hellman protocol [57], which lies at the foundation of public-key cryptography. Non-interactive key exchange has applications in practice, but is also a useful primitive in theoretical results (e.g. [59], [34]). Additionally, NIKE is black-box separated from one-way functions [103].
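The Diffie-Hellman protocol mentioned above already exhibits the NIKE structure: shared public parameters, a single message per party, and a common key derived locally with no further interaction. A minimal sketch with toy, insecure parameters (chosen only for illustration):

```python
import random

# Public parameters: a prime p and a generator g of a subgroup of Z_p^*.
# These toy values are far too small to be secure.
p, g = 467, 2

def keygen():
    """Each party samples a secret exponent and publishes g^secret mod p."""
    secret = random.randrange(2, p - 1)
    return secret, pow(g, secret, p)

random.seed(1)
s1, msg1 = keygen()  # party P1's single message
s2, msg2 = keygen()  # party P2's single message

# Each party combines its own secret with the other's message;
# both arrive at g^(s1*s2) mod p without further interaction.
k1 = pow(msg2, s1, p)
k2 = pow(msg1, s2, p)
print(k1 == k2)  # → True
```

The question studied in Chapter 7 is whether a comparably simple one-shot structure can be achieved from (ring) LWE with a polynomial modulus, where the exchanged messages are noisy and the two locally computed values only approximately agree.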
Both primitives can be instantiated from a variety of cryptographic assumptions. General purpose NIZK proof systems (i.e., NIZK proof systems for all of NP) are known based on number-theoretic assumptions (e.g., the hardness of factoring integers [68], or the decisional linear assumption or symmetric external Diffie-Hellman assumption over bilinear groups [95]), or from indistinguishability obfuscation [151, 27]. Even though the canonical NIKE protocol is based on the Diffie-Hellman assumption [57], there are various security notions for NIKE that are instantiated from different assumptions (e.g., the twin Diffie-Hellman assumption [44], the hardness of factoring in the Random Oracle Model, the hardness of a variant of the Diffie-Hellman problem [73], or indistinguishability obfuscation [32]). We remark that these assumptions can be broken by a quantum computer [157] or are not yet well understood. Very recently, Peikert and Shiehian [137] (building on [42]) constructed general purpose statistical NIZK arguments in the common random string model and general purpose NIZK proofs in the common reference string model under LWE. While the question of constructing NIZKs from LWE is by now mostly resolved, one important variant (to which our techniques may be applicable) remains open: constructing NIZK proofs in the common random string model (i.e., with an unstructured CRS) based on LWE.
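To fix intuition for the LWE problem described above: an instance consists of samples (a_i, ⟨a_i, s⟩ + e_i mod q) for a secret s and small noise e_i, and search LWE asks to recover s. The toy sketch below (our illustration; the hypothetical parameters are far too small to be hard) generates such samples and recovers the secret by exhaustive search, which becomes infeasible at cryptographic sizes:

```python
import random
from itertools import product

random.seed(0)
n, m, q = 2, 8, 13  # toy dimensions; real instances are much larger

s = [random.randrange(q) for _ in range(n)]                # secret
A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
e = [random.choice((-1, 0, 1)) for _ in range(m)]          # small noise
b = [(sum(A[i][j] * s[j] for j in range(n)) + e[i]) % q for i in range(m)]

def centered(v, q):
    """Representative of v mod q in the range (-q/2, q/2]."""
    v %= q
    return v - q if v > q // 2 else v

# Search LWE by brute force: keep every candidate secret whose
# residuals are all within the noise bound. Exponential in n*log(q).
candidates = [
    t for t in product(range(q), repeat=n)
    if all(
        abs(centered(b[i] - sum(A[i][j] * t[j] for j in range(n)), q)) <= 1
        for i in range(m)
    )
]
print(tuple(s) in candidates)  # → True
```

Without the noise vector e the system would be solvable by Gaussian elimination; the small noise is exactly what makes the problem (conjecturally) hard for large n.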
Our Contribution
Our main result for non-interactive zero-knowledge proof systems is a completeness theorem reducing the question of constructing a NIZK proof system for all of NP from LWE to that of constructing a NIZK proof system for one particular computational problem [146]. Specifically, we consider a decisional variant of the bounded distance decoding (BDD) problem. In the BDD problem, the input is a lattice basis and a target vector which is very close to the lattice. The problem is to find the nearby lattice point. This is very similar to the closest vector problem CVP, except that here the vector is guaranteed to be within the λ1 radius of the lattice, where λ1 denotes the length of the shortest non-zero lattice vector (more specifically, the problem is parameterized by α ≥ 1 and the guarantee is that the point is at distance λ1/α from the lattice). BDD can also be viewed as a worst-case variant of LWE and is known to be (up to polynomial factors) equivalent to the shortest-vector problem (more precisely, GapSVP) [120]. We consider a decisional variant of BDD, which we denote by dBDD. The dBDDα,γ problem is a promise problem, parameterized by α ≥ 1 and γ ≥ 1, where the input is a basis B of a lattice ℒ and a point t. The goal is to distinguish pairs (ℒ, t) in which the point t has distance at most λ1(ℒ)/α from the lattice ℒ from pairs in which t has distance at least γ · λ1(ℒ)/α from ℒ.
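In two dimensions the quantities in this definition can be computed by brute force, which makes the promise concrete. The sketch below (our illustration, with a hypothetical diagonal basis) computes λ1(ℒ) and dist(t, ℒ) by enumerating lattice points with small coefficients, and checks which side of the dBDD promise the instance falls on:

```python
from itertools import product
from math import hypot

def enumerate_points(B, bound):
    """Lattice points c1*b1 + c2*b2 with |ci| <= bound (a range large
    enough for the tiny bases used here)."""
    for c1, c2 in product(range(-bound, bound + 1), repeat=2):
        yield (c1 * B[0][0] + c2 * B[1][0], c1 * B[0][1] + c2 * B[1][1])

def lambda1(B):
    """Length of a shortest nonzero vector of the lattice spanned by B's rows."""
    return min(hypot(*v) for v in enumerate_points(B, 5) if v != (0, 0))

def dist(t, B):
    """Distance from the target t to the lattice spanned by B's rows."""
    return min(hypot(t[0] - v[0], t[1] - v[1]) for v in enumerate_points(B, 5))

B = [(2.0, 0.0), (0.0, 3.0)]   # hypothetical basis; lambda1 = 2
t = (0.5, 0.0)
alpha, gamma = 2.0, 2.0
d, l1 = dist(t, B), lambda1(B)
# dBDD promise: YES if d <= l1/alpha, NO if d >= gamma * l1/alpha.
print(l1, d, d <= l1 / alpha)  # → 2.0 0.5 True
```

In high dimensions both λ1 and the distance to the lattice are hard to compute, which is what gives the promise problem its cryptographic significance.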
Our main result states that if LWE holds and dBDDα,γ has a NIZK proof system (where α and γ depend on the LWE parameters), then every language in NP has a NIZK proof system. Since dBDD is a special case of the well-studied GapCVP problem, a NIZK for GapCVP would likewise suffice for obtaining NIZKs for all of NP based on LWE.
Relation to [138]. Our theorem (almost) confirms a conjecture of Peikert and Vaikuntanathan [138]. More specifically, [138] conjectured that a NIZK proof system for a specific computational problem related to lattices would imply a NIZK proof system for every NP language. The problem that Peikert and Vaikuntanathan consider is GapSVP, whereas the problem that we consider is the closely related dBDD problem. While BDD is known to be no harder than GapSVP [120] (and the same can be shown for dBDD), these results are shown by Cook reductions, and so a NIZK for one problem does not necessarily yield a NIZK for the other. In particular, we do not know how to extend our main theorem to hold with respect to GapSVP.
Subsequent works. Subsequent to our work, Canetti et al. [42] constructed general purpose NIZK arguments in the common random string model and general purpose NIZK proofs in the common reference string model under the LWE assumption with an additional circular security assumption (similar to the one used in fully-homomorphic encryption schemes). Later, Peikert and Shiehian [137] removed the extra circular security assumption. Both of these papers construct NIZK arguments in the common random string model and NIZK proofs in the common reference string model. One question that remains open, and to which our techniques might be applicable, is constructing NIZK proofs in the common random string model (i.e., with an unstructured CRS).
Regarding non-interactive key-exchange, we explore the possibility of attaining (ring) LWE-based protocols with modulus polynomial in the security parameter [96]. We focus on the setting where the two parties only send one or a few (ring) LWE samples to each other. The main motivation for studying this setting is that it is perhaps the simplest setting which captures natural non-interactive variants of current LWE-based interactive key exchange protocols. Therefore, impossibility results give a theoretical justification for the interactive structure of current protocols. In this setting, a NIKE is simply characterized by two efficiently computable key reconciliation functions, such that
- their outputs agree with overwhelming probability in the security parameter, and
- their outputs are pseudorandom, even when conditioned on the transcript of the protocol.
We show impossibility results for various natural choices of reconciliation functions, as summarized below.
Impossibility of agreement amplification by repetition. In many existing LWE-based key exchange protocols [58, 136], the first round is sufficient for approximate key agreement. Namely, after a single round the two parties share a common value with some probability. Then, the protocols use interaction to achieve agreement with overwhelming probability in the security parameter, as required for key agreement. A natural idea for amplifying the agreement probability is to run multiple copies of the first round of the existing protocols in parallel and combine them in order to amplify the agreement probability. We show that for any number of parallel repetitions, any reconciliation functions, and any non-trivial LWE noise distribution, the parties disagree with probability that is noticeable in the LWE modulus. This implies that such reconciliation functions cannot exist. In fact, this impossibility is information theoretic and holds even for computationally inefficient reconciliation functions. Our results naturally extend to the case of ring LWE (RLWE).
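The failure mode behind this kind of result can be seen in a toy simulation (ours, with hypothetical parameters; secrets are drawn from the noise distribution, as in reconciliation-based protocols). Each party holds an approximation of the shared value s1ᵀAs2 mod q, the two approximations differing by a small cross-noise term, and reconciles by rounding to the nearest multiple of q/2 to extract a bit. Whenever the shared value lands near a rounding boundary, an event of probability roughly proportional to (noise magnitude)/q, the parties disagree:

```python
import random

random.seed(0)
n, q, trials = 4, 97, 2000  # toy parameters; q is small (polynomial-size)

def round_bit(v, q):
    """Reconcile by rounding v mod q to the nearest multiple of q/2."""
    return round(2 * (v % q) / q) % 2

disagreements = 0
for _ in range(trials):
    A = [[random.randrange(q) for _ in range(n)] for _ in range(n)]
    s1 = [random.choice((-1, 0, 1)) for _ in range(n)]  # small secrets
    s2 = [random.choice((-1, 0, 1)) for _ in range(n)]
    e1 = [random.choice((-1, 0, 1)) for _ in range(n)]  # small noise
    e2 = [random.choice((-1, 0, 1)) for _ in range(n)]
    # One exchanged sample per party: u1 = s1^T A + e1^T, u2 = A s2 + e2.
    u1 = [(sum(s1[i] * A[i][j] for i in range(n)) + e1[j]) % q for j in range(n)]
    u2 = [(sum(A[i][j] * s2[j] for j in range(n)) + e2[i]) % q for i in range(n)]
    # Each party's view of the (approximately) shared value s1^T A s2:
    # the views differ by the small cross term s1^T e2 - e1^T s2.
    k1 = sum(s1[i] * u2[i] for i in range(n)) % q
    k2 = sum(u1[j] * s2[j] for j in range(n)) % q
    disagreements += round_bit(k1, q) != round_bit(k2, q)

print(disagreements / trials)  # a noticeable fraction, not a negligible one
```

Our theorem shows that this is not an artifact of the rounding rule: no choice of reconciliation functions, applied to any number of parallel repetitions, can push the disagreement probability below a quantity noticeable in q.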
Impossibility of noise-ignorant reconciliation functions. We show that the reconciliation functions have to depend on the LWE noises of the respective parties. This impossibility excludes more general reconciliation functions than in the previous case. However, in contrast to the previous result, which holds unconditionally, this result assumes the hardness of the LWE problem. Our theorem extends to the case of RLWE and to the case where the parties exchange up to a polynomial number of LWE samples. The above two results rule out the most natural choices of key reconciliation functions, unconditionally or under the LWE assumption. However, we show that the existence of efficient reconciliation functions which depend on all of their inputs cannot be ruled out (at least as long as the existence of iO is a possibility). In particular, we show that there exists an instantiation of the NIKE protocol in our framework that is based on indistinguishability obfuscation (iO) and puncturable PRFs [32].
1.3 Notation
We use the following notation throughout this thesis.
General Notation. We use [m] to denote the set {1, . . . , m}. Let N = {0, 1, 2, . . . } and Z+ = {1, 2, 3, . . . }. We use small bold letters x to refer to vectors and capital bold letters A to refer to matrices. For a matrix A, we denote by a_i^T its i-th row and by ai,j its (i, j)-th element. Let In denote the n-dimensional identity matrix. We denote by Ei,j the matrix that is all zeros except that its (i, j)-th entry ei,j = 1. A function negl(k) is negligible if negl(k) < 1/kc for every constant c > 0 and all sufficiently large k. All logarithms log(·) are in base 2. We denote by x‖y the concatenation of vectors or matrices. For example, if x ∈ Zn and y ∈ Z, then x‖y is a vector in Zn+1. Similarly, if X ∈ Zn×m and y ∈ Zn, then X‖y is a matrix in Zn×(m+1).
Probability distributions. For a distribution µ, we use x ← µ to denote that x is sampled from the distribution µ, and for a set S we use x ← S to denote that x is sampled uniformly at random from the set S. We use X ≈_c Y, X ≈_s Y and X ≡ Y to denote that the distributions X and Y are computationally indistinguishable, statistically close and identically distributed, respectively (where in the case of computational indistinguishability we actually refer to ensembles of distributions parameterized by a security parameter).