Practical Group Signatures without Random Oracles

Giuseppe Ateniese∗ Jan Camenisch† Susan Hohenberger† Breno de Medeiros‡

Abstract

We provide a construction for a group signature scheme that is provably secure in a universally composable framework, within the standard model with trusted parameters. Our proposed scheme is fairly simple and its efficiency falls within small factors of the most efficient group signature schemes with provable security in any model (including random oracles). Security of our constructions requires new cryptographic assumptions, namely the Strong LRSW, EDH, and Strong SXDH assumptions. Evidence for any assumption we introduce is provided by proving hardness in the generic group model.
Our second contribution is the first definition of security for group signatures based on the simulatability of real protocol executions in an ideal setting that captures the basic properties of unforgeability, anonymity, unlinkability, and exculpability for group signature schemes.

1 Introduction

Group signature schemes, introduced by Chaum and van Heyst [23], allow a member of a user group to sign anonymously on behalf of the group, where a user's anonymity may be revoked by a designated group manager in case of disputes. The motivation of this paper is twofold.
Our first motivation is to build a practical group signature scheme provably secure under standard assumptions, in particular without resorting to the random oracle model. Prior to this work, there was no group signature scheme known to achieve this (with the exception of the recent scheme by Boyen-Waters [16] which, however, fails to meet all required properties; we will later discuss their scheme in more detail). In this paper, we present a full group signature scheme provably secure under new number-theoretic assumptions. Now, one might say that we trade the "assumption" that the Fiat-Shamir heuristic works with proof-of-knowledge protocols for discrete logarithms (e.g., the Schnorr signature scheme) for other possibly false assumptions. However, while one will probably never be able to prove that the Fiat-Shamir heuristic is reasonable for some cases (on the contrary, many cases are known for which it is unreasonable [33]), our new assumptions are algebraic and hence naturally much easier to analyze. Indeed, as a first step towards their justification, we prove that they hold in the generic group model [40, 46]. We hasten to point out that one can of course prove assumptions hard in the generic group model that are not hard against an algebraically unrestricted adversary, and that the generic group model has some of the same faults as random oracles [27]. Rather, a proof in the generic model should be considered a sanity check for a complexity assumption.
In light of the quest for a group signature scheme provably secure in the standard model, one can view both schemes, the Boyen-Waters one and ours, as first steps in this direction.

∗Dept. of Computer Science; The Johns Hopkins University; 3400 N. Charles Street; Baltimore, MD 21218, USA.
†IBM Research; Zurich Research Laboratory; Säumerstrasse 4, CH-8803 Rüschlikon, Switzerland.
‡Dept. of Computer Science; Florida State University; 105-D James Love Bldg; Tallahassee, FL 32306-4530, USA.

The Boyen-Waters scheme uses somewhat less complicated assumptions but sacrifices some desirable properties, while our scheme captures these properties but makes more involved assumptions.
The second motivation of this paper is the definition of security in the reactive security or universally composable model [8, 42, 22, 43]. Let us expand here. The security of early schemes was defined in terms of whether they satisfied a number of independent properties, and lacked a comprehensive view of adversarial behavior. Indeed, as the set of features initially considered was insufficient, some proposed schemes were subsequently broken. An important realization was the requirement that membership certificates be the equivalent of group manager signatures, in order to protect against attacks by arbitrary group member coalitions [5]. Later, Bellare, Micciancio, and Warinschi [9] (BMW) introduced a security formalism based on adversarial games that combined the several requirements of previous works into fewer ones (namely, traceability and anonymity). However, the BMW formalization relies on the existence of a (trusted) key-issuing entity that generates all keys in the system and distributes them to the group manager and group members. As argued by Kiayias and Yung [36], BMW models a weaker primitive than the group signatures proposed by Chaum and van Heyst, since assuming a "tamper-proof" key setup conflicts with the goals of many practical schemes. The BMW model has been extensively adapted (via incorporating exculpability, changing the proof model to the random oracle setting and/or to dynamic groups) to allow for security proofs of recently proposed group signature schemes, e.g., works by Boneh et al. [13] and Camenisch and Lysyanskaya [19]. The first formal model to apply to the case of dynamic groups was introduced by Kiayias et al. [36, 35]. The formalization works in the random oracle model, not the standard model, but it captures closely the security requirements of practical group signatures, and it can be readily applied to formally prove the security of practical schemes. Indeed, [36] includes a security proof of a variant of the Ateniese et al. scheme [4]. More recently, Bellare, Shi, and Zhang [11] have proposed a standard-model formalism for dynamic groups, and constructed theoretical schemes based on black-box zero-knowledge primitives.
Simultaneously with this evolution of understanding in group signatures was the development of the universally composable (UC)/reactive framework [8, 42, 22, 43]. The UC/reactive framework enables proofs of security of protocols that are valid under concurrent execution and in composition with other arbitrary protocols. It has been shown that this framework is more powerful than a property-based definitional approach, as it captures all properties at the same time. Indeed, examples are known of schemes that satisfy a property-based definition (i.e., each property individually) but not a UC/reactive framework definition that requires the fulfillment of all the properties at the same time [22]. To date, it remained an open problem to introduce a UC/reactive security model for group signatures. We introduce this definition in Section 2. It was carefully constructed to incorporate the original vision of Chaum and van Heyst and its subsequent developments. Our definition implies many of the guarantees of prior property-based definitions (e.g., [9, 11, 36, 35]).
Two properties that we do not require are: (1) membership revocation, and (2) anonymity even after exposure of a user's secret key (forward anonymity), as in BMW [9]. Regarding point (2), we mean that we provide anonymity only to honest users, i.e., users whose secret keys are not known to the adversary, which we believe is sufficient in practice. We emphasize that our model does not require any trusted key-issuing entity and that group member secrets are known only to group members.
Note that our results are not in contradiction with the work of Datta et al. [26] on the impossibility of realizing an ideal functionality for group signatures with perfect anonymity (i.e., the probability of guessing the identity of one of two signers is exactly 1/2) and perfect traceability (i.e., the probability of a dishonest group manager violating exculpability is exactly zero). Datta et al. admit that they only consider a strong form of group signature, and indeed, their formulation of group signatures has a single entity generating all

signing keys. They also require that the scheme remain secure even when all public parameters are chosen by a potentially malicious group manager, i.e., they consider UC security in the plain model. In contrast, our definition (and scheme) does not have a key-issuing entity and allows some global parameters to be given to the group manager, e.g., bilinear map parameters. This is sometimes referred to as a UC model with trusted parameters. We now summarize the contributions of our paper.

1.1 Our Contribution and Comparison with Previous Work

Strong security model: We provide the first definition of security for group signature schemes as an ideal functionality, and provide a construction of a practical and provably secure scheme within the new framework. Static-membership versions of our group signature schemes are secure in the context of composition and concurrent executions. The dynamic-membership version of our group signature scheme is similarly secure if the join protocol is restricted to sequential execution.

Better Efficiency: Our construction is the first in the standard model to provide constant time and space efficiency with respect to the security parameter. Concurrently, and independently from us, Boyen and Waters [16] proposed a scheme provably secure in the standard model (under a different definition of security). However, in their scheme the bit length of the signatures and the complexity of the signature and verification algorithms are O((log n) · k), where n is the number of group members and k is the security parameter. In our scheme, all of these complexities will be O((log n) + k), where the (log n) term is there to guarantee that there are enough unique secret keys for each user in the algebraic group. In practice, log n is smaller than k, so our scheme is an O(k) scheme (i.e., constant in the security parameter).

CCA-anonymity: Our scheme achieves CCA anonymity, i.e., anonymity holds even when the adversary continually has access to an open oracle (which, when queried on a signature, returns the signer's identity). In contrast, the Boyen-Waters scheme achieves only CPA anonymity, where the adversary is not allowed access to the oracle after receiving the challenge signature. Just as CCA security is the de facto requirement for encryption schemes, we believe this is a vital property for group signatures.

Strong exculpability: We achieve exculpability, an important property originally proposed by Chaum and van Heyst. In our scheme, a user's secret key is chosen by the user, and we prove that group managers cannot sign on behalf of honest users. The Boyen-Waters scheme lacks (strong) exculpability of users: in their scheme, a trusted key-issuing entity generates and distributes users' secret keys. Thus, this entity can sign messages on behalf of the user, and holding users accountable for any misbehavior is difficult.

Forward security: Our scheme does not achieve forward anonymity as defined in the full anonymity property of BMW [9], where members remain untraceable even if their secret keys are exposed, but only ordinary anonymity. The Boyen and Waters scheme does achieve forward anonymity.

General setting for bilinear mappings: We use curves with isomorphism-free paired groups, arguably the most efficient, secure, and versatile setting for pairing-based cryptography (e.g., see Galbraith et al. [31]). In contrast, the Boyen-Waters scheme is restricted to symmetric bilinear mapping settings (supersingular curves). Even more significantly, their scheme uses elliptic curve groups of composite order, and they require that this order be hard to factor, implying it must be large (1024 bits or larger). Consequently, the representation of every elliptic curve point is similarly large, which heavily impacts the performance of cryptographic operations in these curves, as well as their bandwidth requirements.

1.2 Overview of the Construction

We now provide intuition for understanding our construction, described in detail in Section 5. Our group signature scheme has the standard protocols: Setup, Join, GroupSign, GroupVerify, and Open.
Let S = (Gen, Sign, Verify) be an efficient signature scheme secure in the standard model. Let the group manager have keypair (GPK, GSK) and a user have keypair (pk, sk), generated according to Gen. Consider the following scheme. During the Join protocol, the group manager gives the user a certificate on her public key: Sign_GSK(pk). To sign message m as a group member, the user prepends her certificate and public key to her signature: (Sign_GSK(pk), pk, Sign_sk(m)). This scheme is clearly unforgeable, but not anonymous. In order to achieve anonymity, the certificate and user's public key must be made unlinkably randomizable, while the signature element must be instantiated using a key-private signature scheme. We actually have the manager and user employ different signature schemes, S1 and S2.
S1 is based on the pairing-based signature scheme of Camenisch and Lysyanskaya [19] (CL); specifically, it uses the extension of this scheme (CL+) by Ateniese et al. [3]. Consider a bilinear map (pairing) e : G1 × G2 → GT, defined on groups of prime order p, with generators g, g̃, ĝ, respectively. Select random s, t ∈ Zp and set sk = (s, t) and pk = (g̃^s, g̃^t). To sign a message m ∈ Z*_p, choose a random a ∈ G1 and output the tuple (a, a^t, a^{s+stm}, a^m, a^{mt}). Verify signature (A, B, C, D, E) by checking that: (1) e(A, g̃^t) = e(B, g̃), (2) e(D, g̃^t) = e(E, g̃), and (3) e(AE, g̃^s) = e(C, g̃).
The user's certificate is nothing more than a blind CL+ signature from the group manager on the user's secret key sk, i.e., the group manager signs sk without learning its value, a necessary condition for our scheme to provide exculpability. Fortunately, the CL+ signature inherits an efficient blind-signature protocol from the original CL scheme. In the above, we instantiate the elements (Sign_GSK(pk), pk) with CL+_GSK(sk) = (a, a^t, a^{s+st(sk)}, a^{sk}, a^{t(sk)}), which is the group manager's signature on sk and which can be thought of as including an obfuscated version of the user's public key (a, a^{sk}). As observed in [3], these signatures can be unlinkably re-randomized by choosing a random r ∈ Zp and computing (a^r, a^{tr}, a^{(s+st(sk))r}, a^{(sk)r}, a^{t(sk)r}), assuming DDH is hard in G1. The user may therefore release a random-looking copy of her certificate with each group signature.
We implement S2 with a new signature scheme (secure in the standard model) which is based on the weak signatures of Boneh and Boyen [12] (BB). By weak, we mean the Boneh-Boyen signature scheme proven secure against weak message attack, where the adversary must submit all challenge messages in advance of learning the public key. The scheme works in a similar pairing setting as the CL+ signature. Select a random sk ∈ Z*_p and a random generator g ∈ G1 and output pk = (g, g^{sk}). To sign a message m ∈ {0, 1}^{log |p|}, output A = g̃^{1/(sk+m)}. Verify by checking that e(g^m pk, A) = e(g, g̃).
As a thought experiment, consider our group signature structure using weak BB signatures to implement S2. The construction is (CL+_GSK(sk); BB_sk(m)) = (a, a^t, a^{s+st(sk)}, a^{sk}, a^{t(sk)}; g̃^{1/(sk+m)}) = (A, B, C, D, E, F), verifiable by checking the CL+ signature first and then testing if e(DA^m, F) = e(A, g̃).
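For intuition, the final check in this thought experiment succeeds on an honestly generated tuple because the exponents cancel under bilinearity:

e(DA^m, F) = e(a^{sk} · a^m, g̃^{1/(sk+m)}) = e(a, g̃)^{(sk+m) · 1/(sk+m)} = e(a, g̃) = e(A, g̃).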
Unfortunately, as the BB signatures are deterministic, it will be obvious when the same user signs the same message a second time. This violates our privacy definition, so we modify this basic scheme to provide more privacy and enable longer messages.
In their paper [12], Boneh and Boyen present one method for adapting the weak scheme to longer messages. In this paper, we present another method, which we denote BB+, that is more suited to our purposes. To sign a message m ∈ Z*_p, select a random v ∈ Zp and output the tuple (g^v, g̃^{1/(sk+v)}, g̃^{1/(v+m)}). Verify signature triple (A, B, C) by checking that e(A pk, B) = e(g, g̃) and e(A g^m, C) = e(g, g̃). We arrive at the construction (CL+_GSK(sk); BB+_sk(m)), or more exactly (a, a^t, a^{s+st(sk)}, a^{sk}, a^{t(sk)}; a^v, g̃^{1/(sk+v)}, g̃^{1/(v+m)}) for message m ∈ Z*_p, where a ∈ G1 and v ∈ Zp are randomly chosen for each new signature.
At this point, we have described the entire construction, except for how the Open algorithm works. The

simplest method is for the user to give the group manager a tracing value g̃^{sk} during the Join protocol. Later, the group manager can open a signature (a, a^t, a^{s+st(sk)}, a^{sk}, a^{t(sk)}; a^v, g̃^{1/(sk+v)}, g̃^{1/(v+m)}) = (A, B, C, D, E; F, G, H) by testing if e(A, g̃^{sk}) = e(D, g̃) for each user. Obviously, this Open algorithm runs linearly in the number of group members. We improve on this result in Section 6, providing two alternative algorithms. The first has O(√n · k) complexity, for membership groups of size n and security parameter k, under the same cryptographic assumptions as the basic scheme. The second reduces to O((log n) + k) under an additional assumption.

Remark 1.1 (Security under concurrent executions.) The join protocol is the only protocol in our scheme that requires sequential composition for security, because it involves a zero-knowledge proof of knowledge. In Appendix B, we discuss some techniques for securely achieving limited forms of concurrency.

Remark 1.2 (Revocation.) Finding an elegant revocation mechanism is a pervasive problem among group signature schemes. To revoke a user in our scheme, the group manager could publish that user's tracing information obtained during the join protocol. We explore more efficient alternatives in the full version.

Length of Signatures. The signatures produced by this scheme are short. In the following comparisons we are going to use the NIST-suggested equivalence of 80-bit symmetric security with RSA-1024 and 128-bit symmetric security with RSA-3072.1 For our schemes, we estimate from our generic-model security reductions that the generic security of our assumptions is 1/3 of the group order's bit length, so the group order must be three times the equivalent symmetric key size, i.e., 240 bits for 80-bit symmetric security and 384 bits for 128-bit symmetric security. This 3:1 ratio should also be used to compare schemes based on the q-Strong Diffie-Hellman assumption, due to recent results by Cheon [24]. Assuming that the bitlength of elements in G1 is 241 (an extra bit is needed to indicate the y-coordinate among two options), and that the curves implemented in the MIRACL library are used [44], the basic scheme achieves roughly the same level of security as a 1024-bit RSA signature [13]. For these curves, the bitlengths of elements in G2 are roughly three times that of G1 (more precisely, 721 bits), and our scheme would take approximately 2888 bits to represent a group signature, comprised of six elements in G1 and two elements in G2. If the newer curves of embedding degree 12 are used [7], one could employ 385-bit groups (for 128-bit generic security) to achieve RSA-3072 security equivalence. These new curves have better ratios, with log |G2|/ log |G1| = 2. In this case, our signatures would take 3848 bits to be represented, about 25% larger than a plain RSA signature with the same security level. As mentioned before, this efficiency is incomparable with that of Boyen and Waters [16], whose signatures (1) grow logarithmically with the number of system members and (2) require elliptic curve group orders over 1000 bits long. Our scheme can be compared with the most efficient short group signatures secure in the random oracle setting. For instance, the scheme by Boneh, Boyen, and Shacham [13], which achieves only CPA-anonymity, would require about 2163 bits for the RSA-1024 security level (or about 1442 in the MNT setting).2 A shorter scheme by Boneh and Shacham [15] requires 1682 bits to achieve RSA-1024-comparable security.3
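For concreteness, the 2888-bit figure at the 80-bit security level follows directly from the element counts stated above:

6 · 241 + 2 · 721 = 1446 + 1442 = 2888 bits.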

2 Group Signature Security Definition

Notation: if P is a protocol between parties A and B, then P(A(x), B(y)) denotes that A's input is x and B's input is y.

1 http://www.nsa.gov/ia/industry/crypto_elliptic_curve.cfm
2 Their paper provides the value 1533 bits instead of 2163, but this does not take into account the above-mentioned results about the concrete security of the q-Strong Diffie-Hellman assumption [24].
3 Again, this value is larger than the one provided by the authors to account for the concrete security of q-Strong Diffie-Hellman.

A group signature scheme consists of the usual types of players: a group manager GM and a user U_i. These players can execute the algorithms GroupSetup, UserKeyGen, Join, GroupSign, GroupVerify, Open, and VerifyOpen. We now specify the input-output specifications for these algorithms and provide some informal intuition for what they do. Let params be global parameters generated during a setup phase; ideally, params is empty.

– The GroupSetup(1^k, params) algorithm is a key generation algorithm for the group manager GM. It takes as input the security parameter 1^k and outputs the key pair (pk_GM, sk_GM). (Assume that sk_GM contains the params, so we do not have to give params explicitly to the group manager again.)
– The UserKeyGen(1^k, params) algorithm is a key generation algorithm for a group member U, which outputs (pk_U, sk_U). (Assume that sk_U contains the params, so we do not have to give params explicitly to the user again.)
– In the Join(U(pk_GM, sk_U), GM(pk_U, sk_GM)) protocol, the user U joins the signatory group managed by GM. The user's output is a personalized group signing credential C_U, or an error message. GM's output is some information T_U which will allow the group manager to revoke the anonymity of any signatures produced by U. The group manager maintains a database D for this revocation information, to which it adds the record (pk_U, T_U).
– The GroupSign(sk_U, C_U, m) algorithm allows group members to sign messages. It takes as input the user's secret key sk_U, the user's signing credential C_U, and an arbitrary string m. The output is a group signature σ.
– The GroupVerify(pk_GM, m, σ) algorithm allows anyone to publicly verify that σ is a signature on message m generated by some member of the group associated with group public key pk_GM.
– The Open(sk_GM, D, m, σ) algorithm allows the group manager, with sk_GM and database D, to identify the group member U who was responsible for creating the signature σ on message m. The output is a member identity pk_U or an error message.
– In the VerifyOpen(GM(sk_GM, D, m, σ, pk), V(pk_GM, m, σ, pk)) protocol, GM convinces a verifier that the user with public key pk was responsible for creating the signature σ on message m. The verifier outputs either 1 (accept) or 0 (reject).
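To fix ideas, the input-output behavior above can be summarized as a minimal interface sketch. This is illustrative only: the function names and type aliases are ours, the interactive protocols (Join, VerifyOpen) are reduced to placeholders, and no concrete key or signature formats are implied.

```python
from typing import Any, Dict, Optional, Tuple

# Illustrative aliases; concrete formats are fixed by the scheme in Section 5.
PublicKey = Any
SecretKey = Any
Credential = Any
Signature = Any

def group_setup(k: int, params: Any) -> Tuple[PublicKey, SecretKey]:
    """GroupSetup(1^k, params): key generation for GM; sk_GM is assumed to contain params."""
    raise NotImplementedError

def user_keygen(k: int, params: Any) -> Tuple[PublicKey, SecretKey]:
    """UserKeyGen(1^k, params): key generation for a group member."""
    raise NotImplementedError

def group_sign(sk_u: SecretKey, cred: Credential, m: str) -> Signature:
    """GroupSign(sk_U, C_U, m): produce a group signature on an arbitrary string m."""
    raise NotImplementedError

def group_verify(pk_gm: PublicKey, m: str, sigma: Signature) -> bool:
    """GroupVerify(pk_GM, m, sigma): publicly check sigma against the group public key."""
    raise NotImplementedError

def open_signature(sk_gm: SecretKey, D: Dict[Any, Any], m: str, sigma: Signature) -> Optional[PublicKey]:
    """Open(sk_GM, D, m, sigma): identify the signer using the revocation database D."""
    raise NotImplementedError

# Join and VerifyOpen are interactive two-party protocols between a user (resp. a verifier)
# and GM; they do not reduce to single function calls and are therefore omitted here.
```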

In addition to supporting the above algorithms, a group signature scheme must also be correct and secure. Correctness is fairly straightforward. Informally, if an honest user runs Join with an honest group manager, then neither will output an error message. If an honest user runs GroupSign, then the output will be accepted by an honest verifier running GroupVerify. If a signature passes GroupVerify and an honest manager runs Open, then the result will be accepted by an honest verifier running VerifyOpen.

2.1 The Group Signature Ideal Functionality, Fgs

Our security model uses the ideal/real world model, as in multiparty computation [20, 21, 22] and reactive systems [42, 43], to capture the security properties of group signatures in a single definition. In the real world, there are a number of parties who together execute some cryptographic protocol. A number of these parties may be corrupted by the adversary A (all corrupted parties are combined into this single adversary). Each party receives its input from and reports its output to the environment Z. The environment Z and the adversary A may interact arbitrarily. In the ideal world, we have the same parties. As before, each party receives its input and reports its output to the environment. However, instead of running a cryptographic protocol, the parties provide their inputs to and receive their outputs from a trusted party T. The specification of how T behaves is formalized as an ideal functionality.

6 We say that a cryptographic protocol securely implements an ideal functionality if for every real-world adversary A and every environment Z, there exists a simulator S, which controls the same parties in the ideal world as A does in the real world, such that Z cannot distinguish whether it is interacting in the real world with A or in the ideal world with S. Group Signature Ideal Functionality. We now describe Fgs. In addition to the environment Z, we have two types of players: a group manager GM and users Ui. We work in the non-adaptive setting.

– Non-adaptive Setup: Each user Ui tells the functionality Fgs whether or not it is corrupted. Optionally, in this stage the global parameters are broadcast to all parties.
– GroupSetup: Upon receiving (GM, "group setup") from GM, send the tuple (GM, "group setup") to S.
– UserKeyGen: Similarly, upon receiving (Ui, "keygen") from Ui, inform S.
– Join: Upon receiving (Ui, "enroll") from Ui, ask the group manager GM if Ui may join the group. The GM responds with resi ∈ {0, 1}. Record the pair (Ui, resi) in database D and return resi to Ui. Additionally, if the group manager is corrupted, then register a special user corrupt-GM.
– GroupSign: Upon receiving (Ui, "sign", m), where m is an arbitrary string, check that Ui is a valid member of the group by checking that the entry for Ui in D has resi = 1. If not, deny the command. Otherwise, tell the simulator S that GroupSign has been invoked on message m. If the GM is corrupted, also tell the simulator the identity Ui. Ask S for a signature index id. Record the entry (Ui, m, id) in database L and return the value id to Ui.
– GroupVerify: Upon receiving (Ui, "verify", m, id) from Ui (or GM), search database L for an entry containing message m, and if one exists, return 1. Otherwise, return 0.
– Open: This ideal operation combines both the Open and VerifyOpen cryptographic protocols. Upon receiving (Ui, "open", m, id) from Ui, search database L for an entry (Uj, m, id) for any Uj. Ask GM if it will allow Fgs to open id for user Ui. If GM agrees and Uj ≠ corrupt-GM, then output the identity Uj. Otherwise, output ⊥.

Let us provide some intuition for understanding this model. Informally, the properties that we capture are unforgeability, anonymity, and exculpability. This definition is general enough to capture unforgeability under adaptive chosen-message attack [34] without requiring schemes to be strongly unforgeable [2]. In a strongly unforgeable scheme, a new signature on a previously signed message is considered a forgery, while in the standard notion, a forgery must be on a new message. The definition also captures the important exculpability property (i.e., even a rogue group manager cannot frame an honest user). Indeed, the environment Z may instruct a user to sign any messages of its choosing and may interact freely with the adversary A. Our model, however, enforces that unless an honest user Ui requested a signature on m (i.e., sent ("sign", m) to Fgs), then for all values of id, the Open command on (Ui, m, id) will return ⊥. Furthermore, there is a strong anonymity guarantee for a user: unless the group manager is corrupted, the users remain anonymous. When the group manager is honest, the simulator must create signatures for A knowing only the message contents, but not the identity of the honest user. Finally, the definition ensures that, whenever the group manager is honest, he will be able to open all group signatures. During the Open command, Fgs only asks S for permission to execute the opening if the group manager is corrupted. Thus, if a user honestly runs the verification algorithm and accepts a signature as valid, then this user may be confident that an honest GM will later be able to open it, reveal the identity of the original signer, and prove this to the user.
The above definition does not include membership revocation. However, it is not difficult to extend Fgs to address revocation, and we plan to do so in the full version of the paper.
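To illustrate the bookkeeping performed by Fgs (the databases D and L and the checks in GroupSign, GroupVerify, and Open), the following sketch mirrors the description above. It is only an illustration under simplifying assumptions: the interaction with GM and with the simulator S is abstracted into callback functions, and party identifiers are plain Python values rather than UC session identities.

```python
class IdealGroupSignature:
    """Simplified sketch of Fgs; not a faithful UC formalization."""

    def __init__(self, ask_gm_to_enroll, ask_simulator_for_index, gm_corrupted=False):
        self.D = {}    # enrollment database: user -> res in {0, 1}
        self.L = []    # signature database: (user, message, index) records
        self.ask_gm_to_enroll = ask_gm_to_enroll              # models asking GM during Join
        self.ask_simulator_for_index = ask_simulator_for_index  # models asking S during GroupSign
        self.gm_corrupted = gm_corrupted

    def join(self, user):
        res = self.ask_gm_to_enroll(user)                     # GM answers res in {0, 1}
        self.D[user] = res
        return res

    def group_sign(self, user, m):
        if self.D.get(user) != 1:
            return None                                       # deny: user is not enrolled
        # S learns the message, and the identity only if GM is corrupted.
        leak = user if self.gm_corrupted else None
        sig_id = self.ask_simulator_for_index(m, leak)
        self.L.append((user, m, sig_id))
        return sig_id

    def group_verify(self, m, sig_id):
        # As in the description above, verification only checks that m was ever signed.
        return any(msg == m for (_, msg, _) in self.L)

    def open(self, m, sig_id, gm_agrees=True):
        for signer, msg, sid in self.L:
            if msg == m and sid == sig_id and gm_agrees and signer != "corrupt-GM":
                return signer
        return None
```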

It is not hard to see that our definition implies many of the guarantees of the prior property-based definitions (e.g., [9, 11, 36, 35]). Two properties that we do not require are: (1) membership revocation, and (2) anonymity even after exposure of a user's secret key (forward anonymity), as in BMW [9]. While of course both of these properties could be added easily to the model, our scheme does not satisfy them. Note, however, that our definition (and scheme) does provide anonymity to (honest) users, i.e., users whose secret keys are not known to the adversary. Finally, notice that our definition implies CCA anonymity, i.e., a property-based anonymity definition where the adversary is allowed to query the open oracle even after having been presented the challenge signature. This is in contrast to the BMW model (and to the Boyen-Waters group signature scheme), which provides only CPA anonymity, i.e., where the adversary is no longer allowed access to the open oracle after seeing the challenge signature.

3 Preliminaries and Complexity Assumptions

Notation: The notation G = ⟨g⟩ means that g generates the group G.
Pairings: Let BilinearSetup be an algorithm that, on input the security parameter 1^k, outputs the parameters for a pairing as γ = (p, G1, G2, GT, g, g̃), where G1 = ⟨g⟩ and G2 = ⟨g̃⟩. We follow the notation of Boneh, Lynn, and Shacham [14]. Let G1, G2, and GT all be (multiplicative) groups of prime order p = Θ(2^k), where each element of each group has a unique binary representation. Furthermore, let e : G1 × G2 → GT be an efficient pairing, i.e., a mapping with the following properties: (Bilinearity) for all g ∈ G1, g̃ ∈ G2, and a, b ∈ Zp, e(g^a, g̃^b) = e(g, g̃)^{ab}; and (Non-degeneracy) if g is a generator of G1 and g̃ is a generator of G2, then e(g, g̃) generates GT.
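As an informal sanity check of the bilinearity property, one can exercise it on any concrete pairing implementation. The sketch below assumes the py_ecc Python package and its bn128 interface (a Barreto-Naehrig curve in additive notation, used purely for illustration); it is not part of the scheme.

```python
# Check e(g^a, g~^b) == e(g, g~)^(ab) on a concrete curve.
from py_ecc.bn128 import G1, G2, multiply, pairing, curve_order
import random

a = random.randrange(1, curve_order)
b = random.randrange(1, curve_order)

# py_ecc's pairing takes its G2 argument first: pairing(Q, P) computes e(P, Q) in our notation.
lhs = pairing(multiply(G2, b), multiply(G1, a))   # e(g^a, g~^b)
rhs = pairing(G2, G1) ** ((a * b) % curve_order)  # e(g, g~)^(ab)
assert lhs == rhs
```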

3.1 Complexity Assumptions

The security of our construction in Section §5 is based on the following assumptions about pairing groups. In Appendix §D, we provide generic group proofs for EDH and Strong SXDH. A generic group proof for Strong LRSW was previously given in [3].
Two criticisms could be made of these assumptions. The first could be that they are closely related to specific security properties of the scheme. With regards to this, we point out that even if we were to assume a specific property, such as unforgeability (which we don't do), security in our model would not follow. Indeed, it is non-trivial to show that our assumptions imply that our scheme realizes our ideal functionality. We also point out that the generic group proofs of these assumptions are highly non-trivial and required new techniques, which may be useful elsewhere. A second criticism could be that the assumptions are interactive and thus not black-box falsifiable [39]. However, we believe that our provided generic-model hardness proofs show that these assumptions are reasonable: violating them would result in the design of elliptic curve algorithms with better than generic efficiency, a major cryptographic breakthrough with likely wider ramifications. In addition, our proofs provide estimates for the key sizes required for particular security levels, making our security assumptions indeed very concrete: the resulting Ω(p^{1/3}) generic security of our interactive assumptions (for elliptic curve subgroups of order p) puts them on a similar footing with related falsifiable assumptions, such as the q-Strong Diffie-Hellman assumption [24].
Let BilinearSetup(1^k) → (p, G1, G2, GT, g, g̃), where G1 = ⟨g⟩ and G2 = ⟨g̃⟩, be public.

Assumption 1 (Symmetric External Diffie-Hellman (SXDH) [6, 3, 30]) The Decisional Diffie-Hellman (DDH) problem is hard in both G1 and G2. This implies that there do not exist efficiently computable isomorphisms ψ : G1 → G2 or ψ′ : G2 → G1.

Note that SXDH also subsumes a traditional Co-Gap assumption, i.e., the Co-CDH problem [14] is hard in the pairing groups: given (g, g̃, g^x, g̃^y), it is hard to compute g̃^{xy} or g^{xy}. Good candidates for pairing groups where SXDH is hard are certain MNT curve implementations where no efficient isomorphisms between G1 and G2 are known [6, 3, 47, 32, 30]. The asymmetric version of this assumption, simply called XDH, only requires that DDH be hard in G1 [29, 45, 38, 13, 6]. The LRSW assumption is a discrete-logarithm assumption introduced in 1999 by Lysyanskaya et al. [37] and used in many subsequent works. Recently, a stronger form of the LRSW assumption, which implies the SXDH assumption, called Strong LRSW, was introduced by Ateniese et al. [3].

Assumption 2 (Strong LRSW [3]) Let X, Y ∈ G2 with X = g̃^x and Y = g̃^y. Let O_{X,Y}(·) be an oracle that takes as input a value m ∈ Z*_p and outputs an LRSW-tuple (a, a^x, a^{y+yxm}) for a random a ∈ G1. Then for all probabilistic polynomial-time adversaries A^{(·)} and all m ∈ Z*_p,

Pr[ x ←R Zp, y ←R Zp, X = g̃^x, Y = g̃^y, (a1, a2, a3, a4, a5) ← A^{O_{X,Y}}(g, g̃, X, Y) :
    m ∉ Q ∧ a1 ∈ G1 ∧ a2 = a1^x ∧ a3 = a1^{y+yxm} ∧ a4 = a1^m ∧ a5 = a1^{mx} ] < 1/poly(k),
where Q is the set of queries A makes to O_{X,Y}(·).

The q-Strong Diffie-Hellman (q-SDH) assumption, as introduced by Boneh and Boyen [12], states that for all probabilistic polynomial-time adversaries A and all c ∈ Z*_p:

Pr[ x ←R Zp : A(g, g̃, g^x, . . . , g^{(x^q)}, g̃^x, . . . , g̃^{(x^q)}) = (c, g̃^{1/(x+c)}) ] < 1/poly(k).

We make an interactive version of this assumption. As we mentioned in the introduction, our efficiency analysis takes into account Cheon’s recent results [24] on q-SDH.

Assumption 3 (Extended Diffie-Hellman (EDH)) Let x ∈ Z*_p. Let oracle O_x(·) take input c_i ∈ Z*_p and produce output (g^{v_i}, g̃^{1/(x+v_i)}, g̃^{1/(v_i+c_i)}) for a random v_i ∈ Z*_p. For all probabilistic polynomial-time adversaries A, all v, c ∈ Z*_p, and all a ∈ G1 such that a ≠ 1,

Pr[ x ←R Zp : A^{O_x}(g, g^x, g̃, g̃^x) = (c, a, a^x, a^v, g̃^{1/(x+v)}, g̃^{1/(v+c)}) ∧ c ∉ Q ] < 1/poly(k),
where Q is the set of queries A makes to oracle O_x(·).

The assumptions discussed so far underlie the unforgeability of our group signature scheme. Its anonymity is based on a single assumption: that SXDH holds even when the adversary is given oracle access to additional information about the DDH instance.

Assumption 4 (Strong SXDH) Let g ∈ G1, g̃ ∈ G2, and x ∈ Zp. Let O_x(·) be an oracle that takes as input m ∈ Z*_p and outputs (g^v, g̃^{1/(x+v)}, g̃^{1/(v+m)}) for a random v ∈ Z*_p. Let Q_y(·) be an oracle that takes the same input type and outputs (g^r, g^{ry}, g^{rv}, g̃^{1/(y+v)}, g̃^{1/(v+m)}) for random r, v ∈ Z*_p. Then for all probabilistic polynomial-time adversaries A^{(·)}, and for randomly chosen g ∈ G1, g̃ ∈ G2, and x, y ∈ Zp,

|Pr[A^{O_x, Q_x}(g, g^x, g̃) = 1] − Pr[A^{O_x, Q_y}(g, g^x, g̃) = 1]| < 1/poly(k).

In Theorem D.3, we show that Strong SXDH also has the same complexity as q-SDH for generic groups.

9 4 Key Building Blocks: CL+ and BB+ Signatures

As mentioned in Section §1, our group signature scheme is built out of two standard signature schemes secure without random oracles. We review the important details now.

4.1 Camenisch-Lysyanskaya Signatures

Recall the basic Pedersen commitment scheme [41], in which the public parameters are a group G of prime order p and two generators g and h of G. To commit to the value m ∈ Zp, pick a random r ∈ Zp and set C = PedCom(m; r) = g^m h^r.
The Camenisch-Lysyanskaya (CL) signature scheme is secure without random oracles under the LRSW assumption [19]. CL signatures are also useful because they support an efficient two-party protocol for obtaining a CL signature on the value (message) committed to in a Pedersen commitment. The common inputs are C = PedCom(m; r) and the verification key pk of the signer. The signer additionally knows the corresponding signing key sk, while the receiver additionally knows m and r. As a result of this protocol, the receiver obtains the signature σ_sk(m), while the signer does not learn anything about m. For our current purposes, it will not matter how this protocol actually works. Fortunately, a recent extension of the CL signatures by Ateniese et al. [3], denoted CL+, inherits this protocol.

CL+ Signatures. Let the security parameter be 1^k. The global parameters are the description of a pairing, params = (p, G1, G2, GT, g, g̃), where G1 = ⟨g⟩ and G2 = ⟨g̃⟩, obtained by running BilinearSetup(1^k). Keypairs are of the form pk = (g̃^s, g̃^t) and sk = (s, t) ∈ Z_p^2.

– Signing: Choose random a ∈ G1 and output (a, a^t, a^{s+stm}, a^m, a^{mt}) as the signature on hidden message m ∈ Z*_p.
– Verification: On input a purported signature (A, B, C, D, E), accept that σ authenticates the message hidden as log_A(D) if and only if: (1) e(B, g̃) = e(A, g̃^t), (2) e(D, g̃^t) = e(E, g̃), and (3) e(C, g̃) = e(A, g̃^s) e(E, g̃^s).
– Re-Randomization: On input a signature (A, B, C, D, E), choose a random r ∈ Z*_p and output (A^r, B^r, C^r, D^r, E^r).

CL+ signatures are secure assuming SXDH and Strong LRSW. As previously observed in [3], when CL+ signatures are set in pairing groups where SXDH is hard, this re-randomization is unlinkable. We formally argue this second point in Lemma A.2.
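For intuition, an honestly generated tuple (A, B, C, D, E) = (a, a^t, a^{s+stm}, a^m, a^{mt}) passes the three verification equations by bilinearity:

e(B, g̃) = e(a^t, g̃) = e(a, g̃^t) = e(A, g̃^t),
e(D, g̃^t) = e(a^m, g̃^t) = e(a^{mt}, g̃) = e(E, g̃),
e(C, g̃) = e(a, g̃)^{s+stm} = e(a, g̃^s) · e(a^{mt}, g̃^s) = e(A, g̃^s) e(E, g̃^s).

Raising every component to the same exponent r simply replaces a by a^r in these identities, which is why a re-randomized signature still verifies under the same public key.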

4.2 Boneh-Boyen Signatures

Recall the weak Boneh-Boyen (BB) signature scheme [12]. Let the security parameter be 1^k. The global parameters are the description of a pairing, params = (p, G1, G2, GT), obtained by running BilinearSetup(1^k) (here, we ignore the generators output by BilinearSetup). Keypairs are of the form pk = (g, g^{sk}, g̃) and sk ∈ Z*_p, for random generators g ∈ G1 and g̃ ∈ G2. To sign a message m ∈ Z*_p, output the signature g̃^{1/(sk+m)}. To verify a signature σ, accept if and only if e(g^{sk} g^m, σ) = e(g, g̃). Note that in this work we reverse the roles of G1 and G2 from the original description in [12]. As in other implementations of pairing-based schemes where distortion maps are not available, one chooses the role of each pairing group to maximize the efficiency of one's protocol.
This scheme was proven unforgeable only against weak chosen-message attack under the q-SDH assumption [12], where the adversary must submit all of his signature queries in advance of the public key

generation. Boneh and Boyen gave one method of modifying this weak scheme into an adaptively secure one [12]. We provide a second method, which is more suited to our purposes.

BB+ Signatures. The intuition here is that to issue a signature on a message m, a weak BB signature under sk is issued on a one-time signing key v, and then another weak BB signature under v is issued on the message m. The additional randomness v allows us to prove adaptive security.

– Key generation: Same as before. (Although now the same bases may be used for all keys.)
– Signing: On input a secret key sk and a message m ∈ Z*_p, select a random r ∈ Z*_p, and output the signature (g^r, g̃^{1/(sk+r)}, g̃^{1/(r+m)}).
– Verification: On input a public key (g, g^{sk}, g̃), a message m, and a purported signature (A, B, C), accept if and only if: (1) e(g^{sk} A, B) = e(g, g̃) and (2) e(A g^m, C) = e(g, g̃).
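The following is a minimal sketch of BB+ signing and verification in additive notation, assuming the py_ecc package and its bn128 interface (used purely for illustration; the choice of curve, message encoding, and error handling are omitted). It is meant to make the verification equations concrete, not to serve as a production implementation.

```python
# BB+ sketch: pk = (g, g^sk, g~), Sign(sk, m) = (g^r, g~^{1/(sk+r)}, g~^{1/(r+m)}).
from py_ecc.bn128 import G1, G2, add, multiply, pairing, curve_order
import random

def keygen():
    sk = random.randrange(1, curve_order)
    return (G1, multiply(G1, sk), G2), sk          # pk = (g, g^sk, g~)

def sign(sk, m):
    r = random.randrange(1, curve_order)
    inv1 = pow(sk + r, -1, curve_order)            # 1/(sk+r) mod p
    inv2 = pow(r + m, -1, curve_order)             # 1/(r+m)  mod p
    return (multiply(G1, r), multiply(G2, inv1), multiply(G2, inv2))

def verify(pk, m, sig):
    g, g_sk, g_tilde = pk
    A, B, C = sig
    base = pairing(g_tilde, g)                     # e(g, g~)
    ok1 = pairing(B, add(g_sk, A)) == base         # e(g^sk * A, B) = e(g, g~)
    ok2 = pairing(C, add(A, multiply(g, m))) == base  # e(A * g^m, C) = e(g, g~)
    return ok1 and ok2

pk, sk = keygen()
assert verify(pk, 42, sign(sk, 42))
```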

Lemma 4.1 The BB+ signature scheme is existentially unforgeable under adaptive chosen-message attack under the EDH assumption.

5 Our Basic Group Signature Construction

Notation: BB+ and CL+, respectively, denote our Section §4 modifications of the Boneh-Boyen [12] and Camenisch-Lysyanskaya [19] signature schemes. When we write A = Sign^{CL+}_{GSK}(m; a), we mean that A is a CL+ signature under key GSK on message m using base a; that is, A = (a, a^t, a^{s+stm}, a^m, a^{mt}) for GSK = (s, t). Similarly, when we write A = Sign^{BB+}_{sk}(m; g, g̃), we mean that A is a BB+ signature under key sk on message m using bases (g, g̃); that is, A = (g^v, g̃^{1/(sk+v)}, g̃^{1/(v+m)}) for some v ∈ Z*_p.
Let BilinearSetup(1^k) → params = (p, G1, G2, GT, g, g̃), where G1 = ⟨g⟩ and G2 = ⟨g̃⟩.

GroupSetup(1^k, params): The group manager establishes the public parameters for the Pedersen commitment scheme [41] and adds those to params. Then, the group manager executes Gen^{CL+}(1^k, params) to obtain GPK = (params, S = g̃^s, T = g̃^t) and GSK = (s, t).

UserKeyGen(1^k, params): Each user U selects random sk ∈ Z*_p and random h ∈ G1, and outputs a public key pk = (h, e(h, g̃)^{sk}).

Join(U_i(GPK, sk_i), GM(pk_i, GSK)): In this interactive protocol, the user's inputs are her secret key sk_i and the public key GPK of the group manager. Likewise, the group manager receives as input GSK and pk_i. They interact as follows:

1. U_i submits her public key pk_i = (p1, p2) and tracing information Q_i = g̃^{sk_i} to GM. If e(p1, Q_i) ≠ p2 or sk_i was already in D, GM aborts. Else, GM enters Q_i in database D.
2. The user sends a commitment A = PedCom(sk_i) to GM. The user and GM run the CL protocol (see Section 4.1) for obtaining GM's signature on the committed value contained in commitment A. GM picks a random r ∈ Z*_p and sets f1 = g^r. Then, GM computes Sign^{CL+}_{GSK}(sk_i; f1) = (f2, f3) and sends all three values to the user. If the CL signature (f1, f2, f3) does not verify for message sk_i, the user aborts.
3. The user provides a zero-knowledge proof that the committed value sk_i contained in commitment A is consistent with the public key pk_i, and a zero-knowledge proof of knowledge of sk_i, using any proof technique that is extractable. (For more on such proofs, see Appendix B.)
4. The group manager provides an extractable zero-knowledge proof of GSK = (s, t).
5. Next, the user locally computes the values f4 = f1^{sk_i} and f5 = f2^{sk_i}.

6. At the end of this protocol, the user obtains the following membership certificate:

C_i = (f1, . . . , f5) = (a, a^t, a^{s+st(sk_i)}, a^{sk_i}, a^{(sk_i)t}).
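For completeness: with f1 = a = g^r, f2 = a^t, and f3 = a^{s+st(sk_i)} obtained from GM in step 2, the user's local computation in step 5 gives f4 = f1^{sk_i} = a^{sk_i} and f5 = f2^{sk_i} = a^{t(sk_i)}, so C_i is exactly a CL+ signature on sk_i under GSK with base a, even though GM never saw sk_i.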

GroupSign(sk_i, C_i, m): A user with secret key sk_i and membership certificate C_i = (f1, . . . , f5) may sign a message m ∈ Z*_p as follows.
1. Re-randomize C_i using a random r ∈ Zp, i.e., compute (a1, . . . , a5) = (f1^r, . . . , f5^r).
2. Compute Sign^{BB+}_{sk_i}(m; a1, g̃) = (a6, a7, a8).
3. Output the signature (a1, . . . , a8), which has the form (b, b^t, b^{s+st(sk_i)}, b^{sk_i}, b^{(sk_i)t}, b^v, g̃^{1/(sk_i+v)}, g̃^{1/(v+m)}).

GroupVerify(GPK, m, σ): To verify that σ = (a1, . . . , a8) is a group signature on m, do:
1. Check that (a1, a2, a3, a4, a5) is a valid CL+ signature for public key GPK where the hidden message is the exponent of a4 (base a1). Specifically, verify that: (1) e(a1, T) = e(a2, g̃), (2) e(a4, T) = e(a5, g̃), and (3) e(a1 a5, S) = e(a3, g̃).
2. Check that (a6, a7, a8) is a valid BB+ signature for public key (a1, a4, g̃) on message m. Specifically, verify that: (1) e(a4 a6, a7) = e(a1, g̃) and (2) e(a6 a1^m, a8) = e(a1, g̃).
3. If both checks pass, accept; otherwise, reject.
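For an honestly generated signature, all five equations hold by bilinearity. Writing b = a^r and sk = sk_i, we have, for example,

e(a1, T) = e(b, g̃^t) = e(b^t, g̃) = e(a2, g̃),
e(a1 a5, S) = e(b^{1+t·sk}, g̃^s) = e(b^{s+st·sk}, g̃) = e(a3, g̃),
e(a4 a6, a7) = e(b^{sk+v}, g̃^{1/(sk+v)}) = e(b, g̃) = e(a1, g̃),
e(a6 a1^m, a8) = e(b^{v+m}, g̃^{1/(v+m)}) = e(b, g̃) = e(a1, g̃),

and the remaining check e(a4, T) = e(a5, g̃) follows in the same way as the first.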

Open(GSK, m, σ): On input any valid signature σ = (a1, . . . , a8) and the tracing database D, GM may run the following algorithm to identify the signer. For each entry Q_i ∈ D, the group manager checks whether e(a4, g̃) = e(a1, Q_i). If a match is found, then GM outputs U_i as the identity of the original signer.
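The linear scan above can be written directly against a pairing interface. The sketch below reuses the py_ecc bn128 interface from the earlier examples (illustrative only), and assumes D is a list of pairs (user identity, Q_i) with Q_i = g̃^{sk_i} stored as a G2 point.

```python
from py_ecc.bn128 import G2, pairing

def open_signature(D, sigma):
    """Naive O(n) Open: sigma = (a1, ..., a8); only a1 and a4 are used."""
    a1, a4 = sigma[0], sigma[3]
    target = pairing(G2, a4)              # e(a4, g~)
    for user_id, Q_i in D:
        if pairing(Q_i, a1) == target:    # does e(a1, Q_i) = e(a4, g~) ?
            return user_id                # holds iff a4 = a1^{sk_i} and Q_i = g~^{sk_i}
    return None                           # no registered member matches
```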

VerifyOpen(GM(GSK, m, σ, pk_i, Q_i), V(GPK, m, σ, pk_i)): First, GM checks that σ is a valid group signature; that is, GroupVerify(GPK, σ, m) = 1. Next, GM checks that U_i is responsible for creating σ; that is, using tracing information Q_i = g̃^{sk_i} from database D and pk_i = (p1, p2), it tests that e(p1, Q_i) = p2. If both of these conditions hold, then GM proceeds to convince a verifier that U_i was responsible for σ. We call this step anonymity revocation. Here, the GM provides a zero-knowledge proof of knowledge of a value α ∈ G2 (i.e., the tracing information for U_i) such that e(p1, α) = p2 and e(a1, α) = e(a4, g̃) [1].
The revocation described above revokes the anonymity of a particular signature; however, the GM could instead revoke the anonymity of all signatures belonging to user U_i by publishing the tracing information Q_i. Then anyone can verify that the user with public key pk = (p1, p2) must be responsible by checking that: (1) e(p1, Q_i) = p2, and (2) e(a1, Q_i) = e(a4, g̃).
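Both tests are straightforward to justify. With a1 = b and a4 = b^{sk_i}, and Q_i = g̃^{sk_i},

e(a1, Q_i) = e(b, g̃^{sk_i}) = e(b^{sk_i}, g̃) = e(a4, g̃),

and with pk_i = (p1, p2) = (h, e(h, g̃)^{sk_i}),

e(p1, Q_i) = e(h, g̃^{sk_i}) = e(h, g̃)^{sk_i} = p2,

so the tracing value Q_i is tied both to the signature being opened and to U_i's registered public key.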

Theorem 5.1 In the plain model, the above group signature scheme realizes Fgs from Section §2 under the Strong LRSW, the EDH, and the Strong SXDH assumptions. Proof of Theorem 5.1 appears in Appendix A.

6 Opening Signatures in Sublinear Time

The basic Open algorithm described in Section §5 takes time O(n · k) for a signing group of n members and security parameter k. Practically, this precludes this scheme from being used for many applications with large groups. We provide several options to remedy this situation in Appendix C. First, we present an Open algorithm with complexity O(√n · k), which can be extended to one with complexity O((log n) · k) at the cost of group signatures becoming of size O((log n) · k). This improvement requires no additional assumptions, but does add two elements in G1 (resp. log n elements) to the signature length. Next, we present an O((log n) + k) Open algorithm. This increases the basic signature by three elements in G1 and requires a slightly different anonymity assumption.

12 References

[1] B. Adida, S. Hohenberger, and R. L. Rivest. Ad-Hoc-Group Signatures from Hijacked Keypairs, 2005. At http://theory.lcs.mit.edu/∼rivest/publications.
[2] J. H. An, Y. Dodis, and T. Rabin. On the security of joint signature and encryption. In EUROCRYPT, volume 2332 of LNCS, pages 83–107, 2002.
[3] G. Ateniese, J. Camenisch, and B. de Medeiros. Untraceable RFID tags via insubvertible encryption. In ACM CCS, pages 92–101, 2005.
[4] G. Ateniese, J. Camenisch, M. Joye, and G. Tsudik. A practical and provably secure coalition-resistant group signature scheme. In CRYPTO, volume 1880 of LNCS, pages 255–270, 2000.
[5] G. Ateniese and G. Tsudik. Some open issues and new directions in group signatures. In Financial Cryptography, volume 1648 of LNCS, pages 196–211, 1999.
[6] L. Ballard, M. Green, B. de Medeiros, and F. Monrose. Correlation-resistant storage. Technical Report TR-SP-BGMM-050705, Johns Hopkins University, CS Dept, 2005. http://spar.isi.jhu.edu/∼mgreen/correlation.pdf.
[7] P. S. L. M. Barreto and M. Naehrig. Pairing-friendly elliptic curves of prime order, 2005. Cryptology ePrint Archive: 2005/133.
[8] D. Beaver. Secure multi-party protocols and zero-knowledge proof systems tolerating a faulty minority. Journal of Cryptology, 4:75–122, 1991.
[9] M. Bellare, D. Micciancio, and B. Warinschi. Foundations of group signatures: Formal definition, simplified requirements and a construction based on general assumptions. In EUROCRYPT, volume 2656 of LNCS, pages 614–629, 2003.
[10] M. Bellare and A. Palacio. GQ and Schnorr Identification Schemes: Proofs of Security against Impersonation under Active and Concurrent Attacks. In CRYPTO, volume 2442 of LNCS, pages 162–177, 2002.
[11] M. Bellare, H. Shi, and C. Zhang. Foundations of group signatures: The case of dynamic groups. In CT-RSA, volume 3376 of LNCS, pages 136–153, 2005.
[12] D. Boneh and X. Boyen. Short signatures without random oracles. In EUROCRYPT 2004, volume 3027 of LNCS, pages 56–73, 2004.
[13] D. Boneh, X. Boyen, and H. Shacham. Short group signatures. In CRYPTO, volume 3152 of LNCS, pages 41–55, 2004.
[14] D. Boneh, B. Lynn, and H. Shacham. Short signatures from the Weil pairing. In ASIACRYPT, volume 2248 of LNCS, pages 514–532, 2001.
[15] D. Boneh and H. Shacham. Group signatures with verifier-local revocation. In Proc. of the ACM Conf. on Computer and Communications Security (ACM CCS 2004), pages 168–177. ACM Press, 2004.
[16] X. Boyen and B. Waters. Compact Group Signatures Without Random Oracles. In EUROCRYPT '06, volume 4004 of LNCS, pages 427–444, 2006.
[17] E. Brickell, J. Camenisch, and L. Chen. Direct anonymous attestation. In ACM CCS, pages 132–145, 2004.
[18] J. Camenisch and I. Damgård. Verifiable encryption, group encryption, and their applications to group signatures and signature sharing schemes. In ASIACRYPT, volume 1976 of LNCS, pages 331–345, 2000.
[19] J. Camenisch and A. Lysyanskaya. Signature Schemes and Anonymous Credentials from Bilinear Maps. In CRYPTO, volume 3152 of LNCS, pages 56–72, 2004.
[20] R. Canetti. Studies in Secure Multiparty Computation and Applications. PhD thesis, Weizmann Institute of Science, Rehovot 76100, Israel, June 1995.
[21] R. Canetti. Security and composition of multi-party cryptographic protocols. Journal of Cryptology, 13(1):143–202, 2000.
[22] R. Canetti. Universally composable security: A new paradigm for cryptographic protocols. In FOCS, pages 136–145, 2001.
[23] D. Chaum and E. van Heyst. Group Signatures. In EUROCRYPT, volume 547 of LNCS, pages 257–265, 1991.
[24] J. H. Cheon. Security Analysis of the Strong Diffie-Hellman Problem. In EUROCRYPT '06, volume 4004 of LNCS, pages 1–11, 2006.
[25] I. Damgård. Efficient concurrent zero-knowledge in the auxiliary string model. In EUROCRYPT, volume 1807 of LNCS, pages 418–430, 2000.

[26] A. Datta, A. Derek, J. C. Mitchell, A. Ramanathan, and A. Scedrov. Games and the impossibility of realizable ideal functionality. In TCC, volume 3876 of LNCS, pages 360–379, 2006.
[27] A. W. Dent. Adapting the weaknesses of the random oracle model to the generic group model. In ASIACRYPT '02, volume 2501 of LNCS, pages 100–109, 2002.
[28] M. Fischlin. Communication-Efficient Non-Interactive Proofs of Knowledge with Online Extractors. In CRYPTO, pages 152–168, 2005.
[29] S. D. Galbraith. Supersingular curves in cryptography. In ASIACRYPT, volume 2248 of LNCS, pages 495–513, 2001.
[30] S. D. Galbraith. Personal communication, August 2005.
[31] S. D. Galbraith, K. G. Paterson, and N. P. Smart. Pairings for cryptographers. Technical Report 2006/165, International Association for Cryptologic Research, 2006.
[32] S. D. Galbraith and V. Rotger. Easy decision Diffie-Hellman groups. Journal of Computation and Mathematics, 7:201–218, 2004.
[33] S. Goldwasser and Y. T. Kalai. On the (In)security of the Fiat-Shamir Paradigm. In FOCS, pages 102–115, 2003.
[34] S. Goldwasser, S. Micali, and R. Rivest. A digital signature scheme secure against adaptive chosen-message attacks. SIAM J. of Computing, 17(2):281–308, 1988.
[35] A. Kiayias, Y. Tsiounis, and M. Yung. Traceable signatures. In EUROCRYPT, volume 3027 of LNCS, pages 571–589, 2004.
[36] A. Kiayias and M. Yung. Group signatures: Provable security, efficient constructions and anonymity from trapdoor-holders, 2004. Cryptology ePrint Archive: 2004/076.
[37] A. Lysyanskaya, R. L. Rivest, A. Sahai, and S. Wolf. Pseudonym systems. In SAC, volume 1758 of LNCS, pages 184–199, 1999.
[38] N. McCullagh and P. S. L. M. Barreto. A new two-party identity-based authenticated key agreement. In CT-RSA, volume 3376 of LNCS, pages 262–274, 2004.
[39] M. Naor. Cryptographic assumptions and challenges. In Proc. Adv. in Cryptology (CRYPTO 2003), volume 2729 of LNCS, pages 96–109. Springer, 2003.
[40] V. I. Nechaev. Complexity of a determinate algorithm for the discrete logarithm. Mathematical Notes, 55:165–172, 1994.
[41] T. P. Pedersen. Non-interactive and information-theoretic secure verifiable secret sharing. In CRYPTO, volume 576 of LNCS, pages 129–140, 1991.
[42] B. Pfitzmann and M. Waidner. Composition and integrity preservation of secure reactive systems. In ACM CCS, pages 245–254, 2000.
[43] B. Pfitzmann and M. Waidner. A model for asynchronous reactive systems and its application to secure message transmission. In IEEE S&P, pages 184–200, 2001.
[44] M. Scott. MIRACL library. Indigo Software. http://indigo.ie/∼mscott.
[45] M. Scott. Authenticated ID-based key exchange and remote log-in with simple token and PIN number, 2002. Cryptology ePrint Archive: 2002/164.
[46] V. Shoup. Lower bounds for discrete logarithms and related problems. In EUROCRYPT, LNCS, pages 256–266, 1997. Update: http://www.shoup.net/papers/.
[47] E. R. Verheul. Evidence that XTR is more secure than supersingular elliptic curve cryptosystems. In EUROCRYPT, volume 2045 of LNCS, pages 195–210, 2001.

A Security Proof of Basic Construction

We now prove Theorem 5.1 on the security of our basic construction.

Proof. Our goal is to show that for every adversary A and environment Z, there exists a simulator S such that Z cannot distinguish whether it is interacting in the real world with A or the ideal world with S. The proof is structured in two parts. First, for arbitrary fixed A and Z, we describe a simulator S. Then, we argue that S satisfies our goal.

Recall that the simulator interacts with the ideal functionality Fgs on behalf of all corrupted parties in the ideal world, and also simulates the real-world adversary A towards the environment. S is given black-box access to A. In our description, S will use A to simulate conversations with Z. Specifically, S will directly forward all messages from A to Z and from Z to A. The simulator will be responsible for handling several different operations within the group signature system. The operations are triggered either by messages from Fgs to any of the corrupted parties in the ideal system (and thus these messages are sent to S) or when A wants to send any messages to honest parties. In our description, S will simulate the (real-world) honest parties towards A. Finally, we assume that when a signature is created, it becomes public information. Likewise, whenever a signature is opened, the corresponding identity is announced to all. (However, each user may still require individual proof from the GM that this identity is correct.)
Notation: The simulator S may need to behave differently depending on which parties are corrupted. There are two parties of interest: the group manager and a user. We adopt previous notation [17] for this: a capital letter denotes that the corresponding party is not corrupted and a small letter denotes that it is. For example, by "Case (Gu)" we refer to the case where the group manager is honest, but the user is corrupted. We will refer to a user as Ui and a user's public key as pk_i. We assume throughout that a party in possession of one of these two identifiers is also in possession of the other.
We now describe how the simulator S behaves. Intuitively, when the group manager is corrupt, S will sign messages for whatever user Fgs tells it. When the group manager is honest, however, S will be asked to sign messages on behalf of unknown users and might later be asked to open them. In this case, S will sign all messages using the same secret key, which we denote sk*. Then, whenever S is told to open this signature to a particular user later revealed by Fgs, it will fake the corresponding proof.

Non-Adaptive Setup: Each party that S corrupts reports to Fgs that it is corrupted. The global parameters BilinearSetup(1^k) → (p, G1, G2, GT, g, g̃) = params, where G1 = ⟨g⟩ and G2 = ⟨g̃⟩, are broadcast to all parties.

Simulation of the Real World's Setup: The group manager has an associated key pair (GPK, GSK). Regardless of the honesty of the group manager, S sets up any public parameters needed later for the registration of a user's key in Join (e.g., the hash function used in the Fischlin transformation).
Case (g): If the group manager is corrupted, then S receives the group key GPK from A.
Case (G): If the group manager is honest, then S runs the GroupSetup algorithm to generate a group public key GPK, which it then gives to A. Note that in this case S knows the corresponding secrets and the relation of the Pedersen commitment bases. (Although an ideal group manager exists outside of S, the simulator will internally act as a real-world manager toward A.)

Simulation of Honest Parties’ Setup: Each party must have an associated key pair (pk i, sk i).

Case (u): If user Ui is corrupted, then S receives the user’s public key pk i from A.

Case (U): If Ui is honest, then S runs the UserKeyGen algorithm to generate a public key pk_i, which it then gives to A. (Although ideal honest parties exist outside of S, the simulator will internally create real-world public keys for them.)

Simulation of the Join Protocol: In this operation, a user asks the group manager, via Fgs, if it can join the group and receives an answer bit.
Case (gu): If both the group manager and the user are corrupt, then S does nothing.

Case (Gu): The group manager is honest, but the user is corrupt. Then, A will start by sending S the public key pk = (p1, p2) and tracing information Q associated with some corrupt user Ui. S will verify the tracing information by checking that e(p1, Q) = p2. If this check does not pass, S returns an error message to the corrupt user and ends the Join protocol. Otherwise, S stores the pair (pk, Q) in a database D. Now S, acting as the honest group manager with knowledge of GSK, executes the remainder of the real-world Join protocol with A, exiting with an error message when necessary according to the protocol. If S does not output an error message, then S submits (Ui, "enroll") to Fgs.
Case (gU): The group manager is corrupt, but the user is honest. S will be triggered in this case by Fgs asking if some honest user Ui may enroll. S will internally simulate a real-world version of Ui towards A, using the key pair S generated for Ui during the user setup phase. If A stops before the end of the protocol, S returns the answer "no" to Fgs. If the CL+ signature obtained by S during step 2 of the Join protocol verifies, then S records this certificate and returns "yes" to Fgs. Otherwise, it returns "no".
Case (GU): If both the group manager and the user are honest, then S does nothing.

Simulation of the GroupSign Operation: Let id be a counter initialized to zero. In this operation, a user anonymously obtains a signature on a message via Fgs. When an honest member of the group requests to sign a message m, Fgs will forward (“sign”, m) to S. When A outputs a real-world signature, S will be responsible for translating it into the ideal world. Here, we denote by sk ∗ the special signing key that S uses to sign all messages, for all honest parties when the group manager is honest.

Case (u): The user is corrupt. When A outputs a valid signature σ = (a1, . . . , a8) on message m, S tests if it is a (partial) re-randomization of any previous signature; that is, for all signatures (b1, . . . , b8) on message m in L, test if a7 = b7 (this corresponds to the value g̃^{1/(sk+v)}). If any match is found, S takes no further action.

However, when no match is found, S must register the signature with Fgs. To do so, S must first discover the signer of σ. For every registered user, S uses the tracing information in database D to check if e(a1, Qi) = e(a4, g̃). Suppose a match is found for some Qi = g̃^{sk_i}.

1. If sk_i = sk∗, then the simulation has failed. S aborts and outputs "Failure 2".
2. If sk_i ≠ sk∗ and Ui is honest, the simulation has failed. S aborts and outputs "Failure 3".

3. If Ui is corrupted, then S records (Ui, σ, m, id) in L, and sends (Ui, “sign”, m, id) on behalf of corrupt Ui to Fgs. S increments the counter id. If no match for any registered user was found, then:

– Subcase (gu): The group manager being corrupt, S records (corrupt-GM, σ, m, id) in L, chooses an identity Ui at random among the corrupt users, and sends (Ui, “sign”, m, id) on behalf of corrupt-GM to Fgs. S increments the counter id. – Subcase (Gu): If the group manager is honest, the simulation has failed. S aborts and outputs “Failure 4”.

Case (gU): The group manager is corrupt, but the user is honest. Since the group manager is corrupt, Fgs additionally tells S the identity Ui of the honest user requesting a signature. Then S generates a real-world group signature σ for the simulated Ui, using that user's certificate (obtained during Join) and that user's secret key (which S created during the user setup phase). S records this entry (Ui, σ, m, id) in an internal database L. Finally, S provides A with the real-world signature (σ, m), and returns the "ideal signature" id to Fgs and increments id. Case (GU): Both the group manager and the user are honest. As stated above, S is triggered by a request ("sign", m) from Fgs. This time the ideal-world identity of the honest user is not known to S. However, S still needs to provide A with some group signature, thus it proceeds as follows. S generates a real-world group signature σ using the secret key GSK of the group manager (which S created during the group setup phase) and the secret key sk∗ of the first honest group member that it simulates towards A (which S also created during the user setup phase). Since all signatures are considered public information, S must forward the values (σ, m) to A. As before, S records the entry (?, σ, m, id) in an internal database L. Finally, S returns the "ideal signature" id to Fgs and increments id.

Simulation of the GroupVerify Operation: The simulator does not take any action on this operation. A can itself verify all real-world signatures within its view. Furthermore, Fgs verifies signatures for honest users without informing S.

Simulation of the Open Operation: The simulator is triggered on this operation in a variety of ways. There are two parties to consider: the group manager and the user requesting the opening (i.e., the verifier). On the request ("open", σ, m) from a real-world corrupted user, S first runs its ideal-world GroupSign simulation (described above) on the pair (σ, m) received from A. Case (gu): Both the group manager and the verifier are corrupted. S does nothing.

Case (gU): The group manager is corrupted, but the verifier is honest. Fgs asks S (as the corrupted group manager) if it may open the ideal-world tuple (Ui, m, id). (Recall that if Ui = corrupt-GM, then Fgs refuses to open the signature.) S searches its database L for an entry (Uj, σ, m, id), where σ is a real-world signature on m for some user Uj. Since the id’s are unique, only one such entry will exist. Next, S, acting as an honest, real-world verifier toward A, engages A in the VerifyOpen protocol with common input (Ui, m, σ). If S, as an honest verifier, does not accept this proof, then S tells Fgs to refuse to open this signature. If S accepts this verification from A and Ui = Uj, then S tells Fgs to open the signature. Finally, if S accepts this verification and yet Ui 6= Uj, then our simulation has failed. S aborts and outputs “Failure 1”. Case (Gu): The group manager is honest, but the verifier is corrupted. Since the verifier is corrupted, it may ask about the openings of any signatures it likes. (For example, it may re-randomize a valid signature, etc.) Suppose A, acting as a corrupt verifier, requests an opening on (m, σ). The first thing that S does is to check if σ is a valid group signature (according to the real-world verification algorithm) on m under the group public key GPK . If it is not, then S returns an error message, ⊥, to A. Otherwise, S proceeds. Now, S must figure out which user, if any, is responsible for σ. First, S uses its tracing database to test if any registered user is responsible for σ. Specifically for σ = (a1, . . . , a8) and tracing information Qj, S checks if e(a1,Qj) = e(a4, g˜).

If σ opens to some registered user Uj, then there are three cases.

1. Uj is corrupted. This is not considered a forgery. S honestly runs the real-world VerifyOpen protocol with A on common inputs (Uj, m, σ). This transaction can be completely simulated by S without involving Fgs.
2. Uj is honest, and sk_j = sk∗. Here, S needs to further differentiate if σ is a forgery or merely a re-randomization of a previous signature. (Observe that the first part of our signatures may be re-randomized.) To do this, S searches database L and compiles a list of all entries (?, σi, m, id_i) containing message m. Next, S checks whether σ = (a1, . . . , a8) is derived from any σi = (b_{1,i}, . . . , b_{8,i}) by checking if a7 = b_{7,i}.
(a) If S finds a match for some entry i, then it sends the request ("open", m, id_i) to Fgs. Suppose Fgs returns the identity Ux. Now, S must prove this opening to A. S did not know who the ideal-world signer was at the time it created σ under sk∗ (recall that our simulator creates all signatures using sk∗), thus it must now fake a real-world VerifyOpen opening towards A. That is, S must open σ = (a1, . . . , a8) to user Ux with pk_x = (hx, e(hx, g̃)^{sk_x}). S simulates the interactive VerifyOpen proof as follows [1]. Let pk_x = (p1, p2). Recall that this is a proof of knowledge of a value α ∈ G2 such that e(p1, α) = p2 and e(a1, α) = e(a4, g̃). (A toy sketch of this message flow is given just after the simulator description.)
i. A selects a random challenge c ∈ Zp and sends C = PedCom(c) to S.
ii. S selects a random r ∈ Zp and sends (t1, t2) = (e(p1, g̃)^r, e(a1, g̃)^r) to A.
iii. A sends c along with the opening of commitment C.
iv. S verifies that C opens to c and, if so, sends s = (g̃^{sk_x})^c · g̃^r to A.
v. A accepts if and only if: (1) e(p1, s) = (p2)^c · t1 and (2) e(a1, s) = e(a4, g̃)^c · t2.
(b) If S does not find a match for any entry i, then A has succeeded in a forgery against user Uj with sk_j = sk∗. The simulation fails. S aborts and outputs "Failure 2".
3. Uj is honest, and sk_j ≠ sk∗. S immediately knows σ is a forgery, because S signs for all honest users with the key sk∗. The simulation fails. S aborts and outputs "Failure 3".

If σ does not open to any registered user, then A has succeeded in creating a valid group signature for a non-registered user. That is, for all tracing information Qi known to S and letting σ = (a1, . . . , a8), we have e(a1, Qi) ≠ e(a4, g̃). In this case, S aborts and outputs "Failure 4". Case (GU): Both the group manager and the verifier are honest. S does nothing. S will not even know that this transaction has taken place. This ends our description of simulator S. It remains to show that S works; that is, under the Strong LRSW, the EDH, and the Strong SXDH assumptions, the simulator will not abort, except with negligible probability, and that the environment will not be able to distinguish between the real and ideal worlds.
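To make the VerifyOpen message flow in step 2(a) above concrete, the following toy Python sketch replays steps i–v. It is only an illustration under heavy simplifications: group elements are represented by their discrete logarithms modulo a toy prime (so the pairing becomes multiplication of exponents), and PedCom is replaced by a plain hash-based stand-in; none of these choices are part of the actual scheme.

```python
# Toy replay of the VerifyOpen proof of knowledge (steps i-v above).
# Elements of G1/G2 are represented by their discrete logs mod p, so
# e(g^a, g~^b) is modeled as a*b mod p; PedCom is a hash-based stand-in.
# Illustrative only -- completely insecure.
import secrets

p = 2**61 - 1                                  # toy prime order

def e(a, b):                                   # pairing on exponents
    return (a * b) % p

def pedcom(value):                             # stand-in commitment
    nonce = secrets.randbelow(p)
    return (value, nonce), hash((value, nonce))

# User key pk_x = (p1, p2) with p2 = e(p1, g~)^{sk_x}; signature parts a1, a4 = a1^{sk_x}.
sk_x = secrets.randbelow(p - 1) + 1
p1 = secrets.randbelow(p - 1) + 1
p2 = e(p1, sk_x)
a1 = secrets.randbelow(p - 1) + 1
a4 = (a1 * sk_x) % p

# i.   A commits to a random challenge c.
c = secrets.randbelow(p)
opening, C = pedcom(c)
# ii.  S sends t1 = e(p1, g~)^r and t2 = e(a1, g~)^r.
r = secrets.randbelow(p)
t1, t2 = e(p1, r), e(a1, r)
# iii./iv.  A opens C; S checks the opening and answers s = (g~^{sk_x})^c * g~^r.
assert hash(opening) == C
s = (sk_x * c + r) % p
# v.   A accepts iff e(p1, s) = p2^c * t1 and e(a1, s) = e(a4, g~)^c * t2.
assert e(p1, s) == (p2 * c + t1) % p
assert e(a1, s) == (e(a4, 1) * c + t2) % p
print("VerifyOpen transcript accepted")
```

In the real protocol, the committed-challenge structure is precisely what later allows the simulator to rewind A and fake the openings, as discussed in the proof of Claim A.1.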

Claim A.1 Conditioned on the fact that S never aborts, Z cannot distinguish between the real world and the ideal world under the Strong LRSW, the EDH, and the Strong SXDH assumptions.

Proof. To see this, let us explore each operation. First, we observe that in GroupSetup and UserKeyGen, the simulator S performs all key generation operations as the respective players in the real world would do. The simulator never deviates from the actions of any honest player during Join and it need not take any action during GroupVerify. In the real world, anyone may verify a signature autonomously. The remaining operations to consider are GroupSign and VerifyOpen. Let us begin with GroupSign. In this operation, S only needs to take action when it must translate an honest party's ideal-world signature into a real-world signature, or a corrupted party's real-world signature into an ideal-world one. When the user is corrupted, S submits "sign" requests for A whenever it outputs a new signature. There is nothing here for A to observe. When the user is honest, however, then S must generate real-world signatures towards A. When the group manager is corrupted, then Fgs tells S which user is signing the message, and thus S may perfectly generate a real-world signature for A. S is only forced to deviate in case (GU), when it must simulate both the honest group manager and the honest signer towards A. The problem is that S does not know which user is requesting a signature on some message m; thus S always signs with the same honest user key sk∗. By Lemma A.2, we know that neither A nor Z can distinguish between this homogeneous, ideal-world distribution of signatures and the heterogeneous, real-world distribution. Now, it remains to consider VerifyOpen. In this operation, S only takes action when one of the two parties is corrupted. In the case (gU), S behaves exactly as an honest verifier would towards A; that is, S finds the (single) σ associated with id, and acts as an honest verifier towards A. We will later argue that it does not abort, due to Failure 1, in this step. The case (Gu), however, is more complicated. Suppose S is being asked by A to open (m, σ). If σ opens to a corrupted user or does not open to any registered user, then S behaves exactly as an honest GM would. However, what happens when σ opens to an honest user? We will later argue that S is not forced to abort due to Failures 2, 3, or 4. Even conditioned on this fact, S will almost always be forced to deviate since it signed using key sk∗ for all honest users and now must open the signatures to whichever honest party Fgs dictates. Suppose S is told to open σ = (a1, . . . , a8) to some honest user Ui, where sk_i ≠ sk∗; then S must fake the VerifyOpen protocol toward A. S succeeds in doing this, in the usual way, by requiring A to commit to his challenge and then resetting A after seeing the challenge. That is, after seeing c ∈ Zp, S chooses a random value s ∈ G2 and computes t1 = e(p1, s)/p2^c and t2 = e(a1, s)/e(a4, g̃)^c, where pk_i = (p1, p2). Now S rewinds A to right after it sent a commitment to c, then sends (t1, t2), receives c with a valid opening, and returns the response s. Indeed, S only fails in this step in the unlikely event that A is able to break the binding property of the Pedersen commitments (i.e., CDH in G1). □ This concludes our proof of Claim A.1. It remains to show that, except with negligible probability, S will not abort. Recall that S may abort under the following conditions:

• Failure 1: A breaks exculpability. We argue that it is not possible for a dishonest group manager to falsely open a signature; i.e., A is not able to successfully complete the VerifyOpen protocol with S on common input (Ui, m, σ) where Ui is not the real signer. Here, the simulation fails, because Fgs will only open signatures honestly.

We now argue that, for a given VerifyOpen instance (Ui, m, σ), an adversary that can cause Failure 1 with probability ε can be used to break the Co-CDH assumption with probability ≥ (ε − 1/p)^2. (Recall from Section 3.1 that Co-CDH is implied by the Strong SXDH assumption.) On Co-CDH input (g, g̃, g^x, g̃^y), the goal is to compute g̃^{xy}, and the simulator proceeds as follows.

1. Step 1: S initiates the VerifyOpen protocol with A on input (Ui, m, σ), setting pk_i = (g^z, e(g^{zx}, g̃^y)), for random z ∈ Zp, and computing σ as a valid signature on m for the user with sk∗.
2. Step 2: S commits to all zeros, as C = PedCom(0^{|p|}).

3. Step 3: After receiving (t1, t2) from A, S uses its knowledge of the relation of the Pedersen public parameters to fake the openings as follows:

– selects a random challenge c1 ∈ Zp, opens C to c1, and obtains A's response s1;
– rewinds A, selects a different random challenge c2 ∈ Zp, opens C to c2, and obtains A's response s2.
4. Step 4: S computes and outputs (s1/s2)^{1/(c1−c2)} (which hopefully corresponds to g̃^{xy}).

In Step 1, the adversary cannot tell that it was given a signature under sk∗ instead of sk_i due to Lemma A.2. The fake openings in Step 3 are perfectly indistinguishable from an honest opening due to the perfect hiding property of Pedersen commitments. If both ((t1, t2), c1, s1) and ((t1, t2), c2, s2) are valid transcripts, then S outputs g̃^{xy} in Step 4 with probability ≥ (ε − 1/p)^2. Our bound of (ε − 1/p)^2 comes from the well-known Reset Lemma [10], where the advantage of A was given as ε and the size of the challenge set is p.
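To see why Step 4 yields the Co-CDH solution, note that pk_i was set up so that the unique α ∈ G2 with e(p1, α) = p2 is α = g̃^{xy}, since e(g^z, g̃^{xy}) = e(g^{zx}, g̃^y). A short calculation, assuming both transcripts verify (and c1 ≠ c2 by construction), makes the extraction explicit:

e(p1, s1) = p2^{c1} · t1 and e(p1, s2) = p2^{c2} · t1  ⟹  e(p1, s1/s2) = p2^{c1−c2} = e(p1, g̃^{xy(c1−c2)}),

and since α ↦ e(p1, α) is injective on G2 for p1 ≠ 1, it follows that s1/s2 = g̃^{xy(c1−c2)} and thus (s1/s2)^{1/(c1−c2)} = g̃^{xy}.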

• Failure 2: A creates a forgery against the honest user with sk∗. A produces a signature σ = (a1, . . . , a8) and message m s.t. GroupVerify(GPK, σ, m) = 1, σ opens to U∗ (i.e., e(a1, Q∗) = e(a4, g̃)), and yet S never gave A this user's signature on m. This scenario occurs with only negligible probability under the EDH assumption, regardless of whether the group manager is corrupted. Recall that EDH takes as input (g, g^x, g̃, g̃^x) together with access to an oracle Ox(·) that takes input c ∈ Z_p^* and produces output (g^v, g̃^{1/(x+v)}, g̃^{1/(v+c)}) for a random v ∈ Z_p^*. The goal is to produce a tuple (c, a, a^x, a^v, g̃^{1/(x+v)}, g̃^{1/(v+c)}) for any a ∈ G1 and any v, c ∈ Z_p^* such that c was not queried to the oracle. Let τ be the number of honest users in the system. When A succeeds with probability ε, then B solves the EDH problem with probability ε/τ. B proceeds as follows:

1. Setup: B must establish the global parameters and key generation.
(a) Output (g, g̃) as the public parameters for the group signature scheme, and GPK = (S̃, T̃) = (g̃^s, g̃^t) on behalf of the group manager. If GM is corrupt, GPK is given to S by A. Set up all remaining keys and parameters as S would normally do.
(b) Guess which of the τ honest users A will attack. Give this user U∗ the public key pk∗ = (g^r, e(g^r, g̃^x)), for random r ∈ Zp. (Logically this assigns the user's secret key as sk∗ = x.)
(c) Obtain group certificates for all honest users; B fakes the proof of knowledge of sk∗ using any of the techniques discussed in Section 5 (Join). Finally, B submits the correct tracing information, Q∗ = g̃^{sk∗} = g̃^x, for this user.
(d) If the group manager is corrupt, extract the group key GSK = (s, t) during the proof of knowledge.
2. Signing: When A requests a signature from a user not associated with sk∗ = x, sign as normal. Now, when A asks for a group signature on m ∈ Z_p^* from the honest user associated with secret key sk∗ = x, do:

(a) Query oracle Ox(m) to get output (f1, f2, f3).
(b) Select a random r ∈ Zp. Use GSK = (s, t) to output the group signature on m as
(g^r, g^{tr}, g^{sr}(g^x)^{str}, (g^x)^r, (g^x)^{tr}, f1^r, f2, f3).
(A quick check that this tuple has the form of a real signature is given after this argument.)

3. Opening: B honestly executes the VerifyOpen protocol with A.
4. Output: Suppose A produces a valid signature σ′ = (a1, . . . , a8) for a new message m′ ∈ Z_p^* for the user with key sk∗ = x. Then B outputs (m′, a1, a4, a6, a7, a8) to solve the EDH problem.

It is easy to observe that B perfectly simulates the group signature world for A. B has probability 1/τ of choosing which honest user A will forge against. Thus, when A succeeds with probability ε, then B solves the EDH problem with probability ε/τ.
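For concreteness, writing a = g^r and recalling that the oracle's output is (f1, f2, f3) = (g^v, g̃^{1/(x+v)}, g̃^{1/(v+m)}) for a fresh random v, the signature produced in step 2(b) has exactly the real-world form for the user with sk = x:

(g^r, g^{tr}, g^{sr}(g^x)^{str}, (g^x)^r, (g^x)^{tr}, f1^r, f2, f3) = (a, a^t, a^{s+st(x)}, a^x, a^{t(x)}, a^v, g̃^{1/(x+v)}, g̃^{1/(v+m)}).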

• Failure 3: A creates a forgery against a user with sk_j ≠ sk∗. Proof that this failure occurs with only negligible probability follows directly from that of Failure 2. Indeed, A has strictly less information at its disposal; that is, A never sees real signatures under key sk_j.
• Failure 4: A creates a valid signature for a non-registered user. In this case, A produces a signature-message pair (σ, m) such that GroupVerify(GPK, σ, m) = 1 and yet it cannot be opened by S to any registered user. We now argue that this is not possible under the Strong LRSW assumption, except with negligible probability. Suppose we are given (g, g̃, g̃^s, g̃^t) as the Strong LRSW input. Instead of running GroupSetup, let the public parameters g, g̃ ∈ params and the public key GPK = (S̃, T̃) = (g̃^s, g̃^t). During the UserKeyGen operation, for any honest user, S queries the Strong LRSW oracle O_{S̃,T̃} on a random sk_i ∈ Zp to obtain a membership certificate (a, a^t, a^{s+st(sk_i)}), for any a ∈ G1. (This tuple is, in fact, a CL signature on sk_i [19].) S now uses sk_i as the secret key for this honest user. When S is asked to execute Join with an honest user, S simply finds the corresponding CL signature and uses it to output the certificate (a, a^t, a^{s+st(sk_i)}, a^{sk_i}, a^{t(sk_i)}). When S is asked to execute Join with a corrupted user, S extracts the user's secret key sk_j using any of the techniques discussed in Section 5 (Join), queries the Strong LRSW oracle on input sk_j, and uses the oracle's output to create a valid certificate for this corrupt user. Now, the adversary can sign any message for a corrupt user, and S can honestly respond to any GroupSign call for an honest user. Suppose that Failure 4 has occurred during VerifyOpen, meaning that A output a signature σ = (a1, . . . , a8) such that the following relations hold:

e(a1, T˜) = e(a2, g˜), e(a4, T˜) = e(a5, g˜), e(a1a5, S˜) = e(a3, g˜)

and yet S did not query O_{S̃,T̃} on the corresponding secret key; that is, for all sk_i known to S, we have a1^{sk_i} ≠ a4. Then, S may output (a1, a2, a3, a4, a5) to break the Strong LRSW assumption. Combining Claim A.1 with the above arguments that S will not abort, except with negligible probability, concludes our main proof. □ We end by proving a Lemma used in the above proof. Intuitively, this Lemma captures the anonymity of our signatures. In the below, the values u1, . . . , uτ may be thought of as the secret keys of τ different honest users.

Lemma A.2 (Anonymity of Signatures) Suppose we have the group signature parameters from Section 5; that is, security parameter 1^k, params, and GPK. Suppose u1, . . . , uτ are random elements of Zp. Let O_{u1,...,uτ}(·, ·) be an oracle that takes as input a message m ∈ Z_p^* and an index 1 ≤ i ≤ τ, and outputs a group signature (a1, . . . , a8) on m with user secret key ui. Then, under the Strong SXDH assumption, for all probabilistic polynomial-time adversaries A, the following value is negligible in k:

| Pr[A^{O_{u1,u2,...,uτ}}(params, GPK, {pk_i}_{i∈[1,τ]}) = 1] − Pr[A^{O_{u1,u1,...,u1}}(params, GPK, {pk_i}_{i∈[1,τ]}) = 1] |.

Proof. First, if A can distinguish between oracles O_{u1,u2,...,uτ} and O_{u1,u1,...,u1}, then we can create an adversary B that can distinguish between oracles O_{u1,u2} and O_{u1,u1}. Next, we show that adversary B can be used to break the Strong SXDH assumption. Overall, if A succeeds with probability ε, then we can break Strong SXDH with probability ≥ ε/τ.

Stage One. First, we make the simple hybrid argument that, given A, which can distinguish the signatures of τ distinct honest users from those of a single user, we can create an adversary B that can distinguish the signatures of only 2 distinct users from those of a single user. Indeed, by the hybrid argument, we know that if A distinguishes with probability ε, then for some 1 ≤ ℓ ≤ τ, A can distinguish with probability ≥ ε/τ between the oracle instantiated with ℓ copies of u1 followed by τ − ℓ different values and the oracle instantiated with ℓ + 1 copies of u1 followed by τ − ℓ − 1 different values. The obvious reduction follows; that is, the two oracles of B will be applied to this hybrid point for A. B will then return whatever answer A does.
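In symbols, letting H_ℓ denote the oracle instantiated with ℓ copies of u1 followed by τ − ℓ distinct values (so H_1 = O_{u1,u2,...,uτ} and H_τ = O_{u1,u1,...,u1}), the triangle inequality gives

ε ≤ | Pr[A^{H_1} = 1] − Pr[A^{H_τ} = 1] | ≤ Σ_{ℓ=1}^{τ−1} | Pr[A^{H_ℓ} = 1] − Pr[A^{H_{ℓ+1}} = 1] |,

so some index ℓ contributes a gap of at least ε/(τ − 1) ≥ ε/τ.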

Stage Two. Now, we show that B can be used to create another adversary C that breaks Strong SXDH. On Strong SXDH input (g, g^x, g̃), the adversary C proceeds as follows:

1. Generate the group public key GPK as (S̃, T̃) = (g̃^s, g̃^t) for random s, t ∈ Zp. Give GPK to B; store GSK = (s, t). (Remember, anonymity only makes sense when the group manager is honest, so the adversary does not get to set these keys.)

2. Query Qy on a random input; disregard all output except (h, h^y) for some h ∈ G1.
3. Generate the two user keys as pk_1 = (g, e(g^x, g̃)) for user U1 and pk_2 = (h, e(h^y, g̃)) for user U2. Give pk_1, pk_2 to B. (This first key could be re-randomized away from the public parameters by choosing a random r ∈ Zp and setting pk_1 = (g^r, e(g^x, g̃)^r). This has no effect on the remainder, and for clarity we omit it.)

4. When B requests a signature for index i ∈ {1, 2} on m ∈ Z_p^*, if i = 1, use Ox(·) to do:

(a) Query Ox(m) to obtain the output (g^v, g̃^{1/(x+v)}, g̃^{1/(v+m)}), where v ∈ Z_p^* is a fresh random value chosen by the oracle. Denote this output as (f6, . . . , f8).
(b) Using GSK = (s, t), compute the remaining parts of the group signature: f2 = g^t, f3 = g^s(g^x)^{st}, f4 = g^x, and f5 = (g^x)^t.
(c) Select a random r ∈ Z_p^*, and return the signature (g^r, f2^r, f3^r, f4^r, f5^r, f6^r, f7, f8).

If i = 2, use oracle Qy(·) to do:

(a) Query Qy(m) to obtain the output (a, a^y, a^v, g̃^{1/(y+v)}, g̃^{1/(v+m)}), where a ∈ G1 and v ∈ Z_p^* are fresh random values chosen by the oracle. Denote this output as (f1, f4, f6, . . . , f8).
(b) Using GSK = (s, t), compute the remaining parts of the group signature: f2 = a^t, f3 = a^s(a^y)^{st}, and f5 = (a^y)^t.
(c) Return the signature (f1, . . . , f8).

5. Eventually, B will attempt to distinguish whether he’s been talking to oracle Ox,x or oracle Ox,y. If B says that he’s been talking to oracle Ox,x, then output 1 corresponding to “x = y”. Otherwise, output 0 corresponding to “x 6= y”.

It is easy to see that the stage 2 simulation is perfect; the output is always correct and perfectly distributed. Indeed, C and B will succeed with identical probabilities. This concludes our proof. We find that if any A can break the anonymity of our signatures with probability ε, then A can be used to break Strong SXDH with probability at least ε/τ, where τ is the number of honest users in the system. □

B Towards a Concurrent Join Protocol

In Section 5, we specified that the group manager runs the Join protocol sequentially with the different users. The reason for this is technical: to prove security we require that the users' secret keys sk_i are extractable. To this end we require the users to commit to their secret key and then prove knowledge of it. If one uses the standard proof of knowledge protocol for the latter, extracting the users' secret keys requires rewinding of the users. It is well known that if these proofs of knowledge are run concurrently with many users, then extracting all the secret keys can take time exponential in the number of users. There are, however, alternatives which allow for concurrent execution of these proofs and thus also of the Join protocol. First of all, one could require the group manager to run the protocol concurrently only with a limited number of users, i.e., by defining time intervals within which the group manager runs the protocol concurrently with a logarithmic number of users and enforcing a time-out if a protocol does not finish within this time interval. This solution would not give a group signature scheme that can be concurrently composed with other schemes. Solutions that would allow for concurrent executions come from applying one of the various transformations of a standard proof of knowledge protocol (or Σ-protocol) into one that can be executed concurrently.

1. Common random reference string. Assuming that the parties have a common random reference string available, one can interpret this as the key for an encryption scheme such that the corresponding secret key is not known to any party. Alternatively, one could have a (distributed) trusted third party generate such a public key (cf. [25]). Then, the users would be required to verifiably encrypt their secret key sk_i under this reference public key (e.g., using the techniques of Camenisch and Damgård [18]). For extraction of the secret keys in the security proof, the reference string would need to be patched such that the simulator knows the reference decryption key and thus can extract the users' secret keys by simple decryption. (A toy sketch of this extraction idea appears after this list.)

2. Non-concurrent setup phase. When having a common random reference string or a trusted third party is impractical, each user can instead generate her own public key and then prove knowledge of the corresponding secret key in a setup phase where non-concurrent execution can be guaranteed (e.g., because the user's part is run by an isolated smart card). Then, during the Join protocol, the user would verifiably encrypt her secret key sk_i under her own public key pk_i.
3. Assuming random oracles for Join only. A third alternative that comes to mind, in the random oracle model, is to apply Fischlin's results [28]. Fischlin recently presented a transformation for turning any standard proof of knowledge (or Σ-protocol) into a non-interactive proof in the random oracle model that supports an online extractor (i.e., no rewinding).

The parameters required for any of these options (e.g., the hash function for option 3) are assumed to be global information outside the control of the group manager.
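As a small illustration of the common-reference-string option, the sketch below shows only the extraction step: the user encrypts sk_i to a reference key, and the simulator, who knows the reference decryption key in the security proof, recovers sk_i by decrypting. The verifiable-encryption proof that the ciphertext really contains sk_i (e.g., via [18]) is omitted, and all parameters are toy values chosen for illustration.

```python
# Toy sketch of straight-line extraction via encryption to a reference key.
# The real construction uses verifiable encryption; here we only show why a
# simulator that knows the reference decryption key can extract sk_i.
import secrets

q = 2**127 - 1            # toy prime modulus
g = 5                     # toy base

# Reference string: an encryption key whose secret is unknown to all real-world
# parties, but programmed by the simulator in the security proof.
crs_sk = secrets.randbelow(q - 2) + 1
crs_pk = pow(g, crs_sk, q)

def user_join_ciphertext(sk_i):
    """User side of Join: (unverified) ElGamal encryption of sk_i under crs_pk."""
    k = secrets.randbelow(q - 2) + 1
    return pow(g, k, q), (sk_i * pow(crs_pk, k, q)) % q

def simulator_extract(c1, c2):
    """Simulator side: decrypt with crs_sk -- no rewinding of the user needed."""
    return (c2 * pow(c1, -crs_sk, q)) % q

sk_i = secrets.randbelow(q - 2) + 1
assert simulator_extract(*user_join_ciphertext(sk_i)) == sk_i
```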

C Open Algorithm with O(√n · k) Complexity from Trees.

As before, let (p, G1, G2, GT, g, g̃), where G1 = ⟨g⟩ and G2 = ⟨g̃⟩, be the global parameters for the bilinear groups. The intuition here is that group members are logically divided into a 2-level tree; then, to revoke the anonymity of a signature, the group manager first locates the correct branch and then the correct leaf (user) within that branch. For a balanced tree, this results in a search time of 2k√n. Now we present the details. During the Join protocol, the group manager secretly assigns each user to one of √n logical branches. Each branch is associated with a unique identity ID ∈ Z_p^*. Now, the group manager and the user run a protocol such that at the end the user obtains a CL+ signature on the pair of messages (sk, ID) without learning its branch identity ID, and the group manager does not learn the user's secret key sk. Following Camenisch and Lysyanskaya [19], this CL+ signature would be of the following form for GPK = (g̃^s, g̃^t, g̃^z, g̃^{tz}), GSK = (s, t, z), and some a ∈ G1:

(a1, . . . , a7) = (a, a^t, a^{s+st(sk)+stz(ID)}, a^{sk}, a^{t(sk)}, a^{ID}, a^{tz(ID)})

This CL+ signature would be used as the user's certificate. Let the user submit tracing information Q_j = g̃^{sk_j} during the Join protocol as before. Then, to open a group signature, the group manager now does: for each branch identity ID_i, check if e(a6, g̃) = e(a1, g̃)^{ID_i}; then, for each member of the matching branch, check if e(a4, g̃) = e(a1, Q_j). Under the DDH assumption in G1, a user's branch identity remains hidden from everyone except the group manager, so full anonymity is preserved. By the Strong LRSW assumption, a user cannot change which branch he is associated with, and thus the group manager will be able to find him (i.e., open the signature).
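The following toy sketch illustrates only the search logic of the two-level open (branch first, then member). As before, group elements are represented by their discrete logarithms modulo a toy prime, so each pairing check becomes a multiplication; the branch table and key material are made up for the example.

```python
# Toy sketch of the two-level open: find the branch with
# e(a6, g~) = e(a1, g~)^{ID_i}, then the member with e(a4, g~) = e(a1, Q_j).
import secrets

p = 2**61 - 1
def e(u, v):                      # pairing on exponent representation
    return (u * v) % p

# Group-manager state: branch IDs, each holding the members' tracing values
# Q_j = g~^{sk_j} (here simply the exponents sk_j).  Sizes are illustrative.
branches = {secrets.randbelow(p - 1) + 1: [secrets.randbelow(p - 1) + 1 for _ in range(4)]
            for _ in range(4)}

def open_signature(a1, a4, a6):
    for branch_id, members in branches.items():        # about sqrt(n) branch checks
        if e(a6, 1) == (e(a1, 1) * branch_id) % p:      # e(a1, g~)^{ID_i}
            for Q_j in members:                         # about sqrt(n) member checks
                if e(a4, 1) == e(a1, Q_j):
                    return branch_id, Q_j
    return None

# A well-formed signature for one member: a4 = a1^{sk_j} and a6 = a1^{ID}.
branch_id, members = next(iter(branches.items()))
sk_j = members[2]
a1 = secrets.randbelow(p - 1) + 1
a4, a6 = (a1 * sk_j) % p, (a1 * branch_id) % p
assert open_signature(a1, a4, a6) == (branch_id, sk_j)
```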

Theorem C.1 In the plain model, the above extension to the Section 5 scheme realizes Fgs from Section 2 under the Strong LRSW, the EDH, and the Strong SXDH assumptions.

In practice, one can achieve a "constant time" open algorithm by having fewer branches per node but more levels. Assume we want to be able to handle 2^40 members. Then we could have 2^10 branches per node and a tree depth of 4. This would result in a scheme where signatures would have an additional 8 elements (i.e., this would double the length of the signature); the group manager would need to do at most 3072 exponentiations (to walk through the first three levels) and 1024 pairings (to find the group member) to open a signature. Furthermore, the 3072 exponentiations could be considerably sped up by giving all branches of the same node the same (but random) ID except for the last 10 bits. Given that opening signatures is an exceptional event, we believe such a scheme would be practical.
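The arithmetic behind these numbers is simply:

(2^{10})^4 = 2^{40} members,   3 · 2^{10} = 3072 exponentiations to walk the first three levels,   2^{10} = 1024 pairings at the final level.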

Open Algorithm with O((log n) + k) Complexity from Encryption. The intuition here is to have the signer include an encryption of her identity under the group manager's encryption key as part of every signature. The trick is to do this in such a way that the correctness of the encryption is publicly verifiable, and yet the anonymity of the signer is preserved. Let (eGPK, eGSK) be Elgamal encryption keys generated by the group manager, where eGSK ∈ Zp and eGPK = g̃^{eGSK}. Then, in addition to a regular signature from Section 5, a user would add a version of Elgamal encryption of her identity as the last three items:

(Sign^{CL+}_{GSK}(sk; a), Sign^{BB+}_{sk}(m; a, g̃), Enc^{Elgamal+}_{eGPK}(sk; a, g̃)) = (a, a^t, a^{s+st(sk)}, a^{sk}, a^{t(sk)}, a^v, g̃^{1/(sk+v)}, g̃^{1/(v+m)}, a^c, g̃^{sk+c}, (eGPK)^c).

To verify the signature σ = (a1, . . . , a11), in addition to the usual CL+ and BB+ checks, a verifier must be sure that the encryption is correctly formed by checking that: (1) e(a1, a10) = e(a4, g̃)·e(a9, g̃) and (2) e(a9, eGPK) = e(a1, a11). Now, to the key point: the group manager may, at any time, open the signature by simply decrypting the last portion as a10/(a11)^{1/eGSK} = g̃^{sk}, which reveals the user's identity. Recall that the group manager obtains this same tracing information from the user during the Join protocol.
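For a well-formed signature, a9 = a^c, a10 = g̃^{sk+c}, and a11 = g̃^{eGSK·c}, so both public checks and the opening follow directly from bilinearity:

e(a1, a10) = e(a, g̃^{sk})·e(a, g̃^{c}) = e(a4, g̃)·e(a9, g̃),   e(a9, eGPK) = e(a^c, g̃^{eGSK}) = e(a1, g̃^{eGSK·c}) = e(a1, a11),

and a10/(a11)^{1/eGSK} = g̃^{sk+c}/g̃^{c} = g̃^{sk}.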

Theorem C.2 In the plain model, the above extension to the Section 5 scheme realizes Fgs from Section 2 under the Strong LRSW, the EDH, and (an extension of) the Strong SXDH assumptions.

The extension of the Strong SXDH assumption mentioned above requires changes to the oracles O and Q from Definition 4 in Section 3. Specifically, we change the oracles as follows: Select a value eGPK ∈ G2 at random and give it as input to the adversary. Let O′x(·) be an oracle that takes as input m ∈ Z_p^* and outputs (g^r, g^{rx}, g^{rv}, g̃^{1/(x+v)}, g̃^{1/(v+m)}, g^{rc}, g̃^{x+c}, eGPK^c) for random r, v, c ∈ Z_p^*. Then, we say that on input (g, g^x, g̃, eGPK), the adversary cannot distinguish access to the oracles (O′x(·), O′y(·)) from (O′x(·), O′x(·)). The proof of Theorem D.3, that Strong SXDH is hard in generic groups, can be modified to cover this extended version as well.

D Generic Security of the New Assumptions

To provide more confidence in our scheme, we prove lower bounds on the complexity of our assumptions for generic groups [40, 46]. Let us begin by recalling the basics. We follow the notation and general outline of Boneh and Boyen [12]. In the generic group model, elements of the bilinear groups G1, G2, and GT are encoded as unique random strings. Thus, the adversary cannot directly test any property other than equality. Oracles are assumed to perform operations between group elements, such as performing the group operations in G1, G2, and GT. The opaque encoding of the elements of G1 is defined as the function ξ1 : Zp → {0, 1}^∗, which maps all a ∈ Zp to the string representation ξ1(a) of g^a ∈ G1. Likewise, we have ξ2 : Zp → {0, 1}^∗ for G2 and ξT : Zp → {0, 1}^∗ for GT. The adversary A communicates with the oracles using the ξ-representations of the group elements only. We achieve the same asymptotic complexity bound for EDH as was shown for q-SDH.
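As a concrete (if simplified) picture of this setup, the sketch below implements a generic-group oracle: every group element is handed to the adversary only as a fresh random string, and all operations go through the oracles. We track discrete logarithms internally for brevity; the proofs below instead track the formal polynomials F_{1,i}, F_{2,i}, F_{T,i}. Everything here is illustrative and not part of the schemes.

```python
# Minimal generic-group oracle: elements are exposed only as random handles
# (the xi-encodings); the adversary must use the oracles for group operations
# and pairings.  Internally we store discrete logs; the security proofs replace
# these by formal polynomials in the secret variables.
import secrets

p = 2**61 - 1

class GenericGroup:
    def __init__(self):
        self.log_of = {}       # handle -> discrete log
        self.handle_of = {}    # discrete log -> handle (unique encoding)

    def encode(self, a):
        a %= p
        if a not in self.handle_of:
            h = secrets.token_hex(16)          # fresh random string xi(a)
            self.handle_of[a] = h
            self.log_of[h] = a
        return self.handle_of[a]

    def op(self, h1, h2):                      # group-action oracle
        return self.encode(self.log_of[h1] + self.log_of[h2])

G1, G2, GT = GenericGroup(), GenericGroup(), GenericGroup()

def pairing(h1, h2):                           # pairing oracle e : G1 x G2 -> GT
    return GT.encode(G1.log_of[h1] * G2.log_of[h2])

# The EDH game of Theorem D.1 starts the adversary on xi1(1), xi1(x), xi2(1), xi2(x).
x = secrets.randbelow(p - 1) + 1
g1, gx, h1, hx = G1.encode(1), G1.encode(x), G2.encode(1), G2.encode(x)
assert pairing(gx, h1) == pairing(g1, hx)      # e(g^x, g~) = e(g, g~^x)
```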

Theorem D.1 (EDH is Hard in Generic Groups) Let A be an algorithm that solves the EDH problem in the generic group model, making a total of qG queries to the oracles computing the group action in G1, G2, GT, the oracle computing the bilinear pairing e, and the oracle Ox(·). If x ∈ Z_p^* and ξ1, ξ2, ξT are chosen at random, then the probability ε that A^{Ox}(p, ξ1(1), ξ1(x), ξ2(1), ξ2(x)) outputs (c, ξ1(r), ξ1(r · x), ξ1(r · v), ξ2(1/(x+v)), ξ2(1/(v+c))) with c ∈ Z_p^* not previously queried to Ox, is bounded by

ε ≤ (qG + 4)^2 (8q + 8)/p = O(qG^3/p).

Proof. Consider an algorithm B that interacts with A in the following game. B maintains three lists of pairs L1 = {(F1,i, ξ1,i) : i = 0, . . . , τ1 − 1}, L2 = {(F2,i, ξ2,i) : i = 0, . . . , τ2 − 1}, LT = {(FT,i, ξT,i) : i = 0, . . . , τT − 1}, such that, at step τ in the game, we have τ1 + τ2 + τT = τ + 4. The only twist between our setup and that of Boneh and Boyen is that we will let the F1,i, F2,i and FT,i's be rational functions (i.e., fractions whose numerators and denominators are polynomials); and all polynomials are multivariate polynomials in Zp[x, . . . ], where additional variables will be dynamically added. The ξ1,i, ξ2,i, and ξT,i are set to unique random strings in {0, 1}^∗. Of course, we start the EDH game at step τ = 0 with τ1 = 2, τ2 = 2, and τT = 0. These correspond to the polynomials F1,0 = F2,0 = 1 and F1,1 = F2,1 = x, and the random strings ξ1,0, ξ1,1, ξ2,0, ξ2,1. B begins the game with A by providing it with the 4 strings ξ1,0, ξ1,1, ξ2,0, ξ2,1. Now, we describe the oracles A may query.

Group action: A inputs two group elements ξ1,i and ξ1,j, where 0 ≤ i, j < τ1, and a request to multiply/divide. B sets F1,τ1 ← F1,i ± F1,j. If F1,τ1 = F1,u for some u ∈ {0, . . . , τ1 − 1}, then B sets ξ1,τ1 = ξ1,u; otherwise, it sets ξ1,τ1 to a random string in {0, 1}^∗ \ {ξ1,0, . . . , ξ1,τ1−1}. Finally, B returns ξ1,τ1 to A, adds (F1,τ1, ξ1,τ1) to L1, and increments τ1. Group actions for G2 and GT are handled the same way.

Pairing: A inputs two group elements ξ1,i and ξ2,j, where 0 ≤ i < τ1 and 0 ≤ j < τ2. B sets FT,τT ← F1,i · F2,j. If FT,τT = FT,u for some u ∈ {0, . . . , τT − 1}, then B sets ξT,τT = ξT,u; otherwise, it sets ξT,τT to a random string in {0, 1}^∗ \ {ξT,0, . . . , ξT,τT−1}. Finally, B returns ξT,τT to A, adds (FT,τT, ξT,τT) to LT, and increments τT.

Oracle Ox(·): Let τv be a counter initialized to 1. A inputs c ∈ Z_p^*, after which B chooses a new variable vτv and sets F1,τ1 ← vτv. If F1,τ1 = F1,u for some u ∈ {0, . . . , τ1 − 1}, then B sets ξ1,τ1 = ξ1,u; otherwise, it sets ξ1,τ1 to a random string in {0, 1}^∗ \ {ξ1,0, . . . , ξ1,τ1−1}. B sends ξ1,τ1 to A, adding (F1,τ1, ξ1,τ1) to L1. Next, B sets F2,τ2 ← 1/(x + vτv) and F2,τ2+1 ← 1/(vτv + c). For j ∈ {0, 1}, if F2,τ2+j = F2,u for some u ∈ {0, . . . , τ2 − 1 + j}, then B sets ξ2,τ2+j = ξ2,u; otherwise, it sets ξ2,τ2+j to a random string in {0, 1}^∗ \ {ξ2,0, . . . , ξ2,τ2−1+j}. B sends (ξ2,τ2, ξ2,τ2+1) to A, adding (F2,τ2, ξ2,τ2) and (F2,τ2+1, ξ2,τ2+1) to L2. Finally, B adds one to τ1, two to τ2, and one to τv.

We assume SXDH holds in (G1, G2, GT) and therefore no isomorphism oracles exist. Eventually A stops and outputs a tuple of elements (c, ξ1,a, ξ1,b, ξ1,k, ξ2,d, ξ2,f), where 0 ≤ a, b, k < τ1 and 0 ≤ d, f < τ2. To later test the correctness of A's output within the framework of this game, B computes the polynomials:
FT,∗ = (F1,k/F1,a + x) · F2,d − 1.   (1)
FT,◦ = (F1,k/F1,a + c) · F2,f − 1.   (2)

Intuitively, this corresponds to the equalities "e(h^x h^v, g̃^{1/(x+v)}) = e(h, g̃) = e(h^v h^c, g̃^{1/(v+c)})", where h denotes the element of G1 represented by ξ1,a, h^v denotes the element of G1 represented by ξ1,k, g̃^{1/(x+v)} denotes the element of G2 represented by ξ2,d, and g̃^{1/(v+c)} denotes the element of G2 represented by ξ2,f. Analysis of A's Output. For A's response to always be correct, it must be the case that FT,∗(x) = FT,◦(x) = 0 for any value of x. We now argue that it is impossible for A to achieve this. Each output polynomial must be some linear combination of polynomials corresponding to elements available to A in the respective groups:

F1,a = a0 + a1 x + Σ_{i=1}^{q} a_{2,i} v_i   (3)
F1,b = b0 + b1 x + Σ_{i=1}^{q} b_{2,i} v_i   (4)
F1,k = k0 + k1 x + Σ_{i=1}^{q} k_{2,i} v_i   (5)
F2,d = d0 + d1 x + Σ_{i=1}^{q} d_{2,i}/(x + v_i) + Σ_{i=1}^{q} d_{3,i}/(v_i + c_i)   (6)
F2,f = f0 + f1 x + Σ_{i=1}^{q} f_{2,i}/(x + v_i) + Σ_{i=1}^{q} f_{3,i}/(v_i + c_i)   (7)
where q is the number of queries to oracle Ox. Now, we know, by definition, that F1,b/F1,a = x; thus one can verify, using equations (3) and (4), that a1 = a_{2,i} = 0 and F1,a = a0 is a constant. Notation: For readability of our later analysis, we denote the following values, where the constants are y0 = k0/a0, y1 = k1/a0 + 1, and y_{2,i} = k_{2,i}/a0:
Y = (F1,k/a0 + x) = y0 + y1 x + Σ_{i=1}^{q} y_{2,i} v_i   (8)
Z = (F1,k/a0 + c) = c + y0 + (y1 − 1) x + Σ_{i=1}^{q} y_{2,i} v_i   (9)
We also give names to the following frequently-used products:
P = Π_{j=1}^{q} (x + v_j)  and  P_{i≠j} = Π_{j≠i} (x + v_j)   (10)
Q = Π_{j=1}^{q} (v_j + c_j)  and  Q_{i≠j} = Π_{j≠i} (v_j + c_j)   (11)

Using our above notation, consider the polynomials FT,∗ and FT,◦ from equations (1) and (2) when both sides are multiplied by PQ and we substitute in equations (6), (7), (8), and (9). For some constants di and fi, the new polynomials are:

PQ · FT,∗ = 0 = d0 Y PQ + d1 x Y PQ + Σ_{i=1}^{q} d_{2,i} Y P_{i≠j} Q + Σ_{i=1}^{q} d_{3,i} Y P Q_{i≠j} − PQ   (13)
PQ · FT,◦ = 0 = f0 Z PQ + f1 x Z PQ + Σ_{i=1}^{q} f_{2,i} Z P_{i≠j} Q + Σ_{i=1}^{q} f_{3,i} Z P Q_{i≠j} − PQ   (14)
Now, we inspect equations (13) and (14). We consider two cases.

Case 1: y1 = 0. Then we have Z = c + y0 − x + Σ_{i=1}^{q} y_{2,i} v_i. Now, we inspect the terms of equation (14). We deduce that f1 = 0, because it is the only term containing x^{q+2} Π_{i=1}^{q} v_i. Then, f0 = 0, because it is the only term containing x^{q+1} Π_{i=1}^{q} v_i. Next, each f_{3,i} = 0, because they have unique terms x^{q+1} Π_{i≠j} v_j. We are left with 0 = Σ_{i=1}^{q} f_{2,i} Z P_{i≠j} Q − PQ. We divide by Q, and the result is 0 = Σ_{i=1}^{q} f_{2,i} Z P_{i≠j} − P. It follows then that at least one f_{2,i} must be non-zero for the equation to be solvable; denote the first non-zero f_{2,i} as f_{2,β}.

Now, suppose some constant y_{2,i} ≠ 0 in Z, meaning that Z contains a v_i term. If β ≠ i, then y_{2,i} Z P_{i≠j} contains a unique term v_i^2 Π_{i≠j≠α} v_j that cannot be canceled. Thus, y_{2,i} = 0, and furthermore f_{2,i} = 0, for all i ≠ β. Now we are left with the equation 0 = f_{2,β}(c + y0 − x + y_{2,β} v_β) P_{β≠j} − P; we divide out P_{β≠j} and observe that f_{2,β} = −1, to get 0 = −(c + y0 − x + y_{2,β} v_β) − (x + v_β).

Now, since c, y0, y_{2,β} are all constants and v_β is a variable, we conclude that y_{2,β} = 1 and c = −y0. That means that Y = y0 + v_β, where y0 ≠ 0. Plugging back into equation (13), we have d1 = 0 due to its unique x^{q+1} term; then it must be the case that d0 = 0 due to x^{q} v_β Π_{i=1}^{q} v_i. Next, it must be that d_{3,i} = 0 for all i ≠ β due to the unique terms x^{q} v_β^2 Π_{j≠i≠β} v_j. Next, we see that the terms corresponding to d_{3,β} Y P Q_{β≠j} = d_{3,β}(y0 + v_β) P Q_{β≠j} and PQ are the only two left with an x^{q} term; thus, d_{3,β} ≠ 0. Further, cancelling the term x^{q} Π_{i=1}^{q} v_i from PQ requires that d_{3,β} = 1. Thus, we find that to cancel all related x^{q} terms, it must be that c = c_β. Since c, which represents the message corresponding to A's signature, is an old value, this is not a valid forgery.

Case 2: y1 ≠ 0. Now Y contains an x term, and we inspect equation (13). We deduce that d1 = 0, because it is the only term containing x^{q+2} Π_{i=1}^{q} v_i. Then, d0 = 0, because it is the only term containing x^{q+1} Π_{i=1}^{q} v_i. Next, each d_{3,i} = 0, because they have unique terms x^{q+1} Π_{i≠j} v_j. We are left with 0 = Σ_{i=1}^{q} d_{2,i} Y P_{i≠j} Q − PQ. We divide by Q, and the result is 0 = Σ_{i=1}^{q} d_{2,i} Y P_{i≠j} − P. To satisfy this equation, for some d_{2,i}, we must have d_{2,i} ≠ 0. We denote this value d_{2,β}. From this point, we proceed in a fashion similar to Case 1. By inspecting the above equation, we find that y0 = 0, and for all i ≠ β, y_{2,i} = 0, since otherwise there exist unique terms, e.g., v_β Π_{β≠j} v_j. Furthermore, d_{2,β} y1 = 1, since the x^{q} term of P has coefficient 1. And, d_{2,β} y_{2,β} = 1, since the Π_{i=1}^{q} v_i term of P has coefficient 1. So, y1 = y_{2,β} = 1/d_{2,β} and we plug into Z as Z = c + (1/d_{2,β} − 1)x + v_β/d_{2,β}. From equation (14), we have that f1 = 0 due to x^{q+1} Π_{i=1}^{q} v_i, and f0 = 0 due to v_β x^{q} Π_{i=1}^{q} v_i. For all i ≠ β, f_{3,i} = 0, since otherwise unique terms v_β^2 x^{q} Π_{i≠j≠β} v_j appear. Given that all f_{3,i} = 0 except for f_{3,β}, it follows that all f_{2,i} = 0 except for f_{2,β}, due to unique terms containing v_i^3 for i ≠ β. Thus, we now have the equation 0 = f_{2,β} Z P_{β≠j} Q + f_{3,β} Z P Q_{β≠j} − PQ. We divide by P_{β≠j} Q_{β≠j} to obtain 0 = f_{2,β} Z (v_β + c_β) + f_{3,β} Z (x + v_β) − (x + v_β)(v_β + c_β).

Now, suppose d_{2,β} = 1 and thus Z = (c + v_β). Then we know that f_{3,β} = 1 to cancel the term x v_β; this forces f_{2,β} = 0 because the v_β^2 term is already canceled in whole by the f_{3,β} component. Thus, it is immediate that c = c_β, which is not a valid forgery.

On the other hand, suppose d_{2,β} ≠ 1 and thus Z contains an x term. Then, we know that f_{3,β} = 0, because its x^2 term would be unique. This forces c = 0 because otherwise the constant term f_{2,β} c c_β would be unique. However, c must be an element in Z_p^*, and thus this is also not a valid forgery. Thus, we conclude that A's success depends solely on his luck when the variables are instantiated. Analysis of B's Simulation. At this point B chooses a random x∗ ∈ Z_p^*. B now tests (in equations 15, 16, 17) if its simulation was perfect; that is, if the instantiation of x by x∗ does not create any equality relation among the polynomials that was not revealed by the random strings provided to A. B also tests (in equations 18, 19) whether or not A's output was correct. Thus, A's overall success is bounded by the probability that any of the following holds:

F1,i(x∗) − F1,j(x∗) = 0, for some i, j such that F1,i ≠ F1,j,   (15)
F2,i(x∗) − F2,j(x∗) = 0, for some i, j such that F2,i ≠ F2,j,   (16)
FT,i(x∗) − FT,j(x∗) = 0, for some i, j such that FT,i ≠ FT,j,   (17)
FT,∗(x∗) = 0,   (18)
FT,◦(x∗) = 0.   (19)

We observe that FT,∗ and FT,◦ are non-trivial polynomials of degree at most 2q + 2. Each polynomial F1,i and F2,i has degree at most 1 and 2q + 1, respectively. For fixed i and j, the first case occurs with probability ≤ 1/p; the second occurs with probability ≤ (2q + 1)/p; and the third occurs with probability ≤ (2q + 2)/p. (We already take into account multiplying out the denominators of any rational polynomials.) Finally, the fourth and fifth cases happen with probability ≤ (2q + 2)/p. Now, summing over all (i, j) pairs in each case, we bound A's overall success probability by ε ≤ (τ1 choose 2)(1/p) + (τ2 choose 2)((2q + 1)/p) + (τT choose 2)((2q + 2)/p) + 2(2q + 2)/p. Since τ1 + τ2 + τT ≤ qG + 4, we end with ε ≤ (qG + 4)^2 (8q + 8)/p = O(qG^3/p). □ The following corollary is immediate.

Corollary D.2 Any adversary that breaks the EDH assumption with constant probability ε > 0 in generic groups of order p such that q < o(p^{1/3}) requires Ω(√(εp/q)) generic group operations.

We now turn our attention from a computational to a decisional problem. Recall from Section 2 that the Strong SXDH assumption involves an oracle Ox(·) that takes as input a value m ∈ Z_p^* and returns (g^v, g̃^{1/(x+v)}, g̃^{1/(v+m)}) for v ∈ Z_p^* randomly chosen by the oracle, and an oracle Qy(·) that takes the same type of input and returns (a, a^y, a^v, g̃^{1/(y+v)}, g̃^{1/(v+m)}), for a ∈ G1 and v ∈ Z_p^* chosen randomly by the oracle. These random values are freshly chosen at each invocation of the oracle.

Theorem D.3 (Strong SXDH is Hard in Generic Groups) Let x ∈ Z_p^*, b ∈ {0, 1}, and ξ1, ξ2, ξT be chosen at random. Also, if b = 1, set y = x, but if b = 0, then set y to be a value selected randomly from Z_p^* \ {x}. Let A be an algorithm that solves the Strong SXDH problem in the generic group model, making a total of qG queries to the oracles computing the group action in G1, G2, GT, the oracle computing the bilinear pairing e, and the two oracles Ox(·) and Qy(·) as described above. Then the probability ε that A(p, ξ1(1), ξ1(x), ξ2(1)) = b is bounded by

ε ≤ 1/2 + (qG + 3)^2 (3qG)/p = 1/2 + O(qG^3/p).

Proof. B maintains the lists L1, L2, and LT as in the proof of Theorem D.1. (Consider the bit b as not yet set.) At step τ in the game, we now have τ1 + τ2 + τT = τ + 3, where at τ = 0, we set τ1 = 2, τ2 = 1, and τT = 0. These correspond to the polynomials F1,0 = F2,0 = 1 and F1,1 = x. B also selects unique, random strings ξ1,0, ξ1,1, and ξ2,0. B begins the game with A by providing it with the strings ξ1,0, ξ1,1, and ξ2,0. A may, at any time, make the group action or pairing queries as in the proof of Theorem D.1. A may additionally query the following two oracles. Let τv = 1 and τw = 1 be counters.

Oracle Ox(·): A inputs m ∈ Z_p^*, after which B chooses a new variable vτv and sets F1,τ1 ← vτv. If F1,τ1 = F1,u for some u ∈ {0, . . . , τ1 − 1}, then B sets ξ1,τ1 = ξ1,u; otherwise, it sets ξ1,τ1 to a random string in {0, 1}^∗ \ {ξ1,0, . . . , ξ1,τ1−1}. B sends ξ1,τ1 to A, adding (F1,τ1, ξ1,τ1) to L1. Next, B sets F2,τ2 ← 1/(x + vτv) and F2,τ2+1 ← 1/(vτv + m). For j ∈ {0, 1}, if F2,τ2+j = F2,u for some u ∈ {0, . . . , τ2 − 1 + j}, then B sets ξ2,τ2+j = ξ2,u; otherwise, it sets ξ2,τ2+j to a random string in {0, 1}^∗ \ {ξ2,0, . . . , ξ2,τ2−1+j}. B sends (ξ2,τ2, ξ2,τ2+1) to A, adding (F2,τ2, ξ2,τ2) and (F2,τ2+1, ξ2,τ2+1) to L2. Finally, B adds one to τ1, two to τ2, and one to τv.

Oracle Qy(·): B responds similarly, except that it chooses new variables rτw and wτw, and sets F1,τ1 ← rτw, F1,τ1+1 ← rτw · y, F1,τ1+2 ← rτw · wτw, F2,τ2 ← 1/(y + wτw), and F2,τ2+1 ← 1/(wτw + m). At the end, B adds three to τ1, two to τ2, and one to τw.

Eventually A stops and outputs a guess b′ ∈ {0, 1}.

Analysis of A's Output. First, we argue that, provided B's simulation is perfect, the bit b′ is independent of b; that is, A cannot produce an element whose corresponding polynomial is identically zero when x = y (b = 1) and non-zero otherwise (b = 0). We show this for each group G1, G2, and GT. Showing this for GT is the hardest case. Here, we sum over all expressions containing i or j.

Group G1: The polynomials corresponding to elements in G1 that the adversary can compute as a linear combination of elements in its view are:

F1,a = a0 + a1 · x + a2,i · vi + a3,j · rj + a4,j · rj · y + a5,j · rj · wj (20)

where i = 1 to τv and j = 1 to τw. For F1,a = 0, both a1 and a4,j must be zero whether y is replaced by x or not; otherwise those terms cannot be canceled. The remaining polynomial does not contain the variables x or y.

Group G2: The polynomials corresponding to elements in G2 that the adversary can compute as a linear combination of elements in its view are:

F2,b = b0 + b_{1,i}/(x + v_i) + b_{2,i}/(v_i + m_i) + b_{3,j}/(y + w_j) + b_{4,j}/(w_j + m_j)   (21)

where i = 1 to τv, j = 1 to τw, and each m_i, m_j ∈ Z_p^* was chosen by the adversary. Suppose F2,b = 0. We multiply out the denominators in equation (21) to obtain:

F′2,b = b0(x + v_i)(v_i + m_i)(y + w_j)(w_j + m_j) + b_{1,i}(v_i + m_i)(y + w_j)(w_j + m_j) + b_{2,i}(x + v_i)(y + w_j)(w_j + m_j) + b_{3,j}(x + v_i)(v_i + m_i)(w_j + m_j) + b_{4,j}(x + v_i)(v_i + m_i)(y + w_j)   (22)

Now, for F′2,b = 0, regardless of whether we substitute x for y, we see that b0 = 0, otherwise the term x v_i y w_j (or x^2 v_i w_j) cannot be canceled. Similarly, b_{1,i} = 0 because of the unique summand x v_i y (or x^2 v_i), which makes b_{2,i} = 0 because of the summand v_i^2 w_j. Then, b_{3,j} = 0 because of the summand x y w_j (or x^2 w_j), which makes b_{4,j} = 0 because of the summand v_i w_j^2. We are left with the constant zero.

Group GT: The polynomials corresponding to elements in GT that the adversary can compute as a linear combination of elements in its view are:
FT,c = Σ F1,a · F2,b.   (23)
Now, a simple expansion of FT,c has thirty terms. Suppose we clear the denominators in FT,c = 0 by multiplying out by (x + v_i)(v_i + m_i)(y + w_j)(w_j + m_j); then we have
F′T,c = Σ F1,a · F′2,b.   (24)
Now, each of the terms in F1,a is unique and F′2,b contains the following unique summands (x v_i y w_j, x v_i y, v_i^2 w_j, x y w_j, v_i w_j^2). (Here, the summands v_i^2 w_j and v_i w_j^2 are actually not unique, but since they also do not contain x or y, it will not matter.) Multiplying these key components out and dropping the subscripts for clarity, we obtain:
F″T,c = c0(vwxy) + c1(vxy) + c2(v^2 w) + c3(wxy) + c4(vw^2)
+ c5(vwx^2 y) + c6(vx^2 y) + c7(v^2 wx) + c8(wx^2 y) + c9(vw^2 x)
+ c10(v^2 wxy) + c11(v^2 xy) + c12(v^3 w) + c13(vwxy) + c14(v^2 w^2)
+ c15(vwxyz) + c16(vxyz) + c17(v^2 wz) + c18(wxyz) + c19(vw^2 z)
+ c20(vwxy^2 z) + c21(vxy^2 z) + c22(v^2 wyz) + c23(wxy^2 z) + c24(vw^2 yz)
+ c25(vw^2 xyz) + c26(vwxyz) + c27(v^2 w^2 z) + c28(w^2 xyz) + c29(vw^3 z)   (25)
Now, we are only interested in differences in the polynomial F″T,c when y is replaced by x or not. For clarity, we drop all terms containing neither x nor y, resulting in c2 = c4 = c12 = c14 = c17 = c19 = c27 = c29 = 0. We substitute x = y to obtain
F‴T,c = c0(vwx^2) + c1(vx^2) + c3(wx^2)
+ c5(vwx^3) + c6(vx^3) + c7(v^2 wx) + c8(wx^3) + c9(vw^2 x)
+ c10(v^2 wx^2) + c11(v^2 x^2) + c13(vwx^2)
+ c15(vwx^2 z) + c16(vx^2 z) + c18(wx^2 z)
+ c20(vwx^3 z) + c21(vx^3 z) + c22(v^2 wxz) + c23(wx^3 z) + c24(vw^2 xz)
+ c25(vw^2 x^2 z) + c26(vwx^2 z) + c28(w^2 x^2 z)   (26)
We want to know if there are any two terms that are symbolically equal when x = y and not otherwise. Scanning the above, we see that the only non-unique terms are in positions 0 and 13, and in positions 15 and 26. Looking back to equation (25), we see that both positions 0 and 13 correspond to the term vwxy, and that both positions 15 and 26 correspond to the term vwxyz. Obviously, these terms will be the same regardless of the substitution of x for y. Since all other terms are unique, we conclude that the adversary's only chance of distinguishing comes from a lucky instantiation of these variables.

Analysis of B's Simulation. At this point B chooses random values x∗, y∗, {v∗_d}_{d∈[1,τv]}, {w∗_d}_{d∈[1,τw]}, {r∗_d}_{d∈[1,τw]} ∈ Z_p^*. B's simulation is perfect, and therefore reveals nothing to A about b, provided that none of the following non-trivial equality relations hold:

F1,i(x∗, y∗, {v∗_d}, {w∗_d}, {r∗_d}) − F1,j(x∗, y∗, {v∗_d}, {w∗_d}, {r∗_d}) = 0, for some i, j such that F1,i ≠ F1,j,   (27)
F1,i(x∗, x∗, {v∗_d}, {w∗_d}, {r∗_d}) − F1,j(x∗, x∗, {v∗_d}, {w∗_d}, {r∗_d}) = 0, for some i, j such that F1,i ≠ F1,j,   (28)
F2,i(x∗, y∗, {v∗_d}, {w∗_d}, {r∗_d}) − F2,j(x∗, y∗, {v∗_d}, {w∗_d}, {r∗_d}) = 0, for some i, j such that F2,i ≠ F2,j,   (29)
F2,i(x∗, x∗, {v∗_d}, {w∗_d}, {r∗_d}) − F2,j(x∗, x∗, {v∗_d}, {w∗_d}, {r∗_d}) = 0, for some i, j such that F2,i ≠ F2,j,   (30)
FT,i(x∗, y∗, {v∗_d}, {w∗_d}, {r∗_d}) − FT,j(x∗, y∗, {v∗_d}, {w∗_d}, {r∗_d}) = 0, for some i, j such that FT,i ≠ FT,j,   (31)
FT,i(x∗, x∗, {v∗_d}, {w∗_d}, {r∗_d}) − FT,j(x∗, x∗, {v∗_d}, {w∗_d}, {r∗_d}) = 0, for some i, j such that FT,i ≠ FT,j.   (32)

For fixed i and j, the probability of the first and second cases occurring is no more than 2/p, which results from the maximum degree of equation (20). For fixed i and j, the probability of the third and fourth cases occurring is no more than τ2/p, which results from the maximum degree of equation (22). Finally, for the fifth and sixth cases, the probability is at most 2τ2/p, which results from the maximum degree of equation (24). Therefore, by summing over all (i, j) pairs in each case, we bound A's overall success probability by ε ≤ 2(τ1 choose 2)(2/p) + 2(τ2 choose 2)(τ2/p) + 2(τT choose 2)(2τ2/p). Since τ1 + τ2 + τT ≤ qG + 3, we end with ε ≤ (qG + 3)^2 (2 + qG + 2qG)/p = O(qG^3/p). □ The following corollary is immediate. Here γ = ε − 1/2; that is, γ is the adversary's advantage beyond guessing.

Corollary D.4 Any adversary that breaks the Strong SXDH assumption with constant probability γ > 0 in generic groups of order p requires Ω((γp)^{1/3}) generic group operations.
