Masaryk University, Faculty of Informatics

Master's Thesis

Unconditional Security in Classical Cryptography

Petr Štika, 2010

Declaration

I declare that this thesis is my own original work, which I have written independently. All sources, references and literature that I used or drew upon while preparing it are properly cited in the thesis, with a complete reference to the respective source.

Brno 2010

I would like to thank RNDr. Jan Bouda, Ph.D. for his expert supervision of this thesis and for providing valuable advice during its preparation.

Abstract

This thesis focuses on classical cryptography, its primitives and their security, especially with respect to unconditional security. In recent years, the use of cryptography has increased dramatically, accompanied by growing attention from across the field of computer science. Despite the invention of quantum computing and the introduction of quantum cryptography, classical cryptography is still relevant. The work explores various cryptographic primitives and studies and presents results about their respective security. This thesis concludes that unconditional security in classical cryptography is possible and satisfactory. The opening chapter provides an overview of cryptography and its history; the next chapter gives the cryptographic basics, so that we are finally able to focus on the topic of this work, the unconditional security of specific cryptographic primitives. Quantum cryptography and the possibility of limiting the adversary's abilities in order to obtain the desired security properties are also mentioned. The final chapter presents the results of this work: remarks and conclusions.

Keywords: classical cryptography, private key cryptography, public key cryptography, unconditional security, cryptographic primitives, anonymous transfer, authentication, bit commitment, blind signatures, Byzantine agreement, encryption, coin tossing, digital signatures, digital pseudosignatures, key exchange, oblivious transfer, secret sharing

Contents

1 Introduction
 1.1 Background of this Work
2 The Basics of Cryptography
 2.1 Private Key Cryptography
 2.2 The Probability Theory
 2.3 Entropy
 2.4 Unconditional Security
  2.4.1 Using Quantum Cryptography
  2.4.2 Limiting the Adversary
 2.5 Public Key Cryptography
 2.6 Conclusion
3 Cryptographic Primitives
 3.1 Encryption
  3.1.1 The Shift Cipher
  3.1.2 One-time Pad
 3.2 Authentication
  3.2.1 Wegman-Carter Authentication
 3.3 Anonymous Transfer
  3.3.1 DC-net
  3.3.2 DC-net Protocol with Waidner-Pfitzmann's Improvements
 3.4 Bit Commitment
  3.4.1 Security of Bit Commitment Schemes
  3.4.2 Examples of Bit Commitment Protocols
 3.5 Digital Signatures
  3.5.1 Pseudosignatures
  3.5.2 Blind Signatures
 3.6 Byzantine Agreement
  3.6.1 Byzantine Agreement Problem
  3.6.2 Byzantine Agreement Protocol with Pseudosignatures
 3.7 Coin Tossing
  3.7.1 Blum's Coin Flipping Protocol
  3.7.2 Perfectly Secure Coin Tossing Using Bit Commitment
 3.8 Key Exchange
  3.8.1 Key Predistribution Protocol
  3.8.2 Key Distribution and Agreement Protocols
 3.9 Oblivious Transfer
  3.9.1 1-out-of-2 Oblivious Transfer Based on Blind Signatures
  3.9.2 Oblivious Transfer with Trusted Initializer
 3.10 Secret Sharing
4 Conclusion
5 Bibliography

1 Introduction

In recent years, the use of cryptography has increased dramatically. Every time a person buys something with a credit card, uses online banking, or sends a password to access his e-mail, he uses cryptography. With this growth, it is not surprising that the interest of academics and professionals has grown accordingly. Classical cryptography can be traced back to the Ancient Greeks, but it started to be widely used during the Second World War. New developments in the field have concentrated on the potential use of quantum computers for the purposes of cryptography.

Because of the invention of quantum computing and the introduction of quantum cryptography, it may appear that classical cryptography is obsolete. This thesis will show that this is not the case. Cryptographic primitives of classical cryptography can provide unconditional security in situations where quantum cryptography cannot achieve it. There are existing primitives which are unconditionally secure, and where there are not, ways to make the present ones unconditionally secure are investigated, as will be shown later.

As the author has already indicated, the scope of this work is the unconditional security of primitives in classical cryptography. The author is of the opinion that there is a lack of studies exploring in depth the use of classical cryptography, and that cryptography's desired property, unconditional security, is poorly researched in comparison to the enormous current research in the field of quantum computing. This thesis concludes that unconditional security in classical cryptography is possible and satisfactory. The main contribution of this thesis to the field is mapping and summarizing current research results on unconditional security in classical cryptography, as this topic is, in our opinion, not completely and exhaustively studied and presented.

The text is divided into four chapters. After a brief introduction to the subject, the second chapter defines the fundamental elements of cryptography. The third part describes cryptographic primitives, while the fourth chapter summarizes the results. The analysis is to determine whether cryptographic primitives can be unconditionally secure or not, and if not, what "workaround" may be presented.

1.1 Background of this Work

Cryptography – in Greek this word means "hidden writing" – is an old discipline, primarily used for the secure exchange of information between parties. The main objective is to prevent a potential adversary from understanding this communication; historically, cryptography has been used especially by the military and in political circles.

After the Second World War, when cryptographers and cryptography had shown their value, there arose a need to lay down a theoretical background for this promising science. This was mainly achieved by Claude E. Shannon in his works. Shannon used terms from information theory to measure the security of known ciphers, which allowed him to prove the unconditional security of Vernam's cipher. In the 1970s, during the computer revolution, symmetric ciphers like DES, public key cryptography and others were introduced.

The main goals of modern cryptography are confidentiality, data integrity, entity authentication, data origin authentication, non-repudiation, and anonymity [1]. It provides encryption, key distribution, digital signatures, one-way functions and other useful primitives, schemes and protocols. Cryptography has become pervasive far outside its origins and finds application in modern life: industry, banking, private communications, telecommunications, networking, authentication, electronic voting, or signing contracts – all these areas widely use cryptography.

Classical cryptography aims to build solid foundations for cryptographic schemes which meet security requirements no matter which attack the adversary uses against the system. Even in the era of quantum computing, classical cryptography can still provide and maintain very useful and unbreakable primitives, and protocols based on them, to fulfill everyday needs without any special quantum device, based only on mathematics. Despite the power of quantum cryptography and quantum computers, there are still problems which cannot be solved in an unconditionally secure way; classical cryptography can provide solutions in these situations. This work aims to map and study classical cryptographic primitives and their (in)ability to be executed in an unconditionally secure manner. Another possibility is to weaken the assumptions and thereby reach the desired level of security, which will be mentioned in the corresponding sections about particular primitives and protocols.

The following paragraphs define the key terms used throughout this work.

• Classical cryptography uses mathematical methods for transforming an intelligible message into one that is unintelligible, and then re-transforming that message back to its original form. Quantum cryptography uses nature's laws to achieve the same objective.

• Cryptographic primitives are basic cryptographic algorithms and protocols used for building security systems. There are many of them: anonymous transfer, bit commitment, blind signatures, Byzantine agreement, encryption, coin tossing, digital signatures, digital pseudosignatures, key exchange, authentication, oblivious transfer, secret sharing and more. According to the level of security they provide, they can be divided into the following categories: unconditionally secure, complexity-theoretically secure, provably secure, computationally secure and ad-hoc secure.

• Unconditional security – a cryptographic system is unconditionally secure if it is secure even in the case where an adversary has unlimited resources for solving a problem, e.g. breaking a cipher. The adversary does not have sufficient information to do so, because he is unable to gain any meaningful data from a ciphertext.

• Complexity-theoretic security – an appropriate model of computation is defined and adversaries are modelled as having polynomial computational power. A proof of security relative to the model is then constructed. An objective is to design a cryptographic method based on the weakest assumptions possible while anticipating a powerful adversary. Asymptotic analysis, and usually also worst-case analysis, is used, so care must be exercised to determine when proofs have practical significance. In contrast, polynomial attacks which are feasible under the model might, in practice, still be computationally infeasible [1].

• Provable security – breaking such a cryptographic system is as difficult as solving some supposedly difficult problem, e.g. discrete logarithm computation, discrete square root computation, or factorization of very large integers.

• Computational security – a cryptographic system is computationally secure if breaking it by means of the best known attacks is infeasible for the adversary, because the effort required exceeds the adversary's resources. It is important to take into account the fast progress of research and computers.

• Ad-hoc security – a cryptographic system has this kind of security if it is not worth trying to break into it, because the value of the protected data is low compared to the cost of the work needed to do so, or because an attack cannot be carried out in a sufficiently short time [1].

2 The Basics of Cryptography

This chapter provides a broad overview of the mathematical methods used in cryptography which will be necessary for the further analysis of the security of cryptographic primitives. The most prominent and productive author in this field has been Claude E. Shannon, who is known as the father of information theory. He famously founded this theory with a single landmark paper published in 1948. But he is also credited with inventing both digital computing and digital circuit design theory in 1937, when, as a 21-year-old master's student at MIT, he wrote a thesis demonstrating that electrical applications of Boolean algebra could construct and resolve any logical, numerical relationship [2]. Amongst others, he published A Mathematical Theory of Communication and Communication Theory of Secrecy Systems. The first can be considered the groundwork of information theory, and the second the groundwork of modern cryptography. In Communication Theory of Secrecy Systems he also proved that all theoretically unbreakable ciphers must have the same requirements as the one-time pad [2]. The proof will be discussed in detail later in this work.

This chapter lays down the theoretical basis for the further analysis of unconditional (or any other kind of) security. It consists of several main parts. The first part defines private key cryptography, in other words a cryptosystem. The following parts explore probability theory and subsequently the principles of entropy. The fourth part explores unconditional security, while the last part analyses public key cryptography. In addition, the chapter contains a brief introduction to quantum cryptography and how its methods can be used to create and distribute a key in an unconditionally secure way. By way of conclusion, technologies which indicate how to gain better results in the area of security by limiting an adversary's abilities will be explored. According to recent developments, practical and provably secure cryptosystems become possible when some small modifications are made to Shannon's model [3].

2.1 Private Key Cryptography

One of the main goals of cryptography is to enable two parties to exchange data over an often insecure channel while making it unintelligible to a possible adversary. The communicating parties are usually named Alice and Bob, while Oscar is the adversary. The message Alice wants to send to Bob is called a plaintext. They both agree on some key, which has to be kept secret. Alice encrypts the plaintext with the previously chosen key; the plaintext after encryption is called a ciphertext. The ciphertext is then sent to Bob. Bob can now decrypt the ciphertext back to the plaintext, because he knows the key. Since the adversary does not know the key, even if he is able to intercept the ciphertext, he is not able to convert it back to the plaintext. In formal terms this process is

called a private key cryptosystem, formally defined by Stinson [4] as:

Definition 2.1
A cryptosystem is a five-tuple (P, C, K, E, D), where the following conditions are satisfied:
1. P is a finite set of possible plaintexts
2. C is a finite set of possible ciphertexts
3. K, the keyspace, is a finite set of possible keys
4. For each K ∈ K, there is an encryption rule eK ∈ E and a corresponding decryption rule dK ∈ D. Each eK : P → C and dK : C → P are functions such that dK(eK(x)) = x for every plaintext x ∈ P.

The main property is property 4. It says that if a plaintext x is encrypted using eK, and the resulting ciphertext is subsequently decrypted using dK, then the original plaintext x results. Alice and Bob will employ the following protocol to use a specific cryptosystem. First, they choose a random key K ∈ K. This is done when they are in the same place and are not being observed by Oscar, or, alternatively, when they do have access to a secure channel, in which case they can be in different places. At a later time, suppose Alice wants to communicate a message to Bob over an insecure channel. We suppose that this message is a string x = x1x2...xn for some integer n ≥ 1, where each plaintext symbol xi ∈ P, 1 ≤ i ≤ n. Each xi is encrypted using the encryption rule eK specified by the predetermined key K. Hence, Alice computes yi = eK(xi),

1 ≤ i ≤ n, and the resulting ciphertext string y = y1y2...yn is sent over the channel. When Bob receives y1y2...yn, he decrypts it using the decryption function dK, obtaining the original plaintext string, x1x2...xn [4].
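To make Definition 2.1 and the protocol just described concrete, here is a minimal Python sketch; the tiny cryptosystem over ℤ3, the key choice and the plaintext string are illustrative assumptions, not an example taken from [4].

```python
import random

# A toy instance of Definition 2.1 (illustrative only): P = C = K = Z_3,
# with e_K(x) = x + K mod 3 and d_K(y) = y - K mod 3.
P = C = K = [0, 1, 2]
e = lambda k, x: (x + k) % 3
d = lambda k, y: (y - k) % 3

# Property 4: d_K(e_K(x)) = x for every key K and plaintext x.
assert all(d(k, e(k, x)) == x for k in K for x in P)

# The protocol: Alice and Bob agree on a random key over a secure channel,
# then Alice encrypts a plaintext string symbol by symbol.
key = random.choice(K)                        # chosen while unobserved by Oscar
plaintext = [0, 2, 1, 1, 0]                   # x = x_1 x_2 ... x_n
ciphertext = [e(key, x) for x in plaintext]   # sent over the insecure channel
assert [d(key, y) for y in ciphertext] == plaintext   # Bob recovers x
```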

2.2 The Probability Theory

Probability theory is the branch of mathematics concerned with analysis of random phenomena. The central objects of probability theory are random variables, stochastic processes, and events: mathematical abstractions of non-deterministic events or measured quantities that may either be single occurrences or evolve over time in an apparently random fashion. Although an individual coin toss or the roll of a die is a random event, if repeated many times the sequence of random events will exhibit certain statistical patterns, which can be studied and predicted. Two representative mathematical results describing such patterns are the law of large numbers and the central limit theorem. As a mathematical foundation for statistics, probability theory is essential to many human

activities that involve quantitative analysis of large sets of data. Methods of probability theory also apply to descriptions of complex systems given only partial knowledge of their state, as in statistical mechanics. A great discovery of twentieth century physics was the probabilistic nature of physical phenomena at atomic scales, described in quantum mechanics [40]. It is also necessary in order to study unconditional security. Most definitions in this chapter are quoted from [4] for this purpose:

Definition 2.2 Suppose X and Y are random variables. We denote the probability that X takes on the value x by p(x), and the probability that Y takes on the value y by p(y). The joint probability p(x, y) is the probability that X takes on the value x and Y takes on the value y. The conditional probability p(x|y) denotes the probability that X takes on the value x given that Y takes on the value y. The random variables X and Y are said to be independent if p(x, y) = p(x)p(y) for all possible values x of X and y of Y.

Joint probability can be related to conditional probability by the formula

p(x, y) = p(x|y) p(y).

Interchanging x and y, we have that

p(x, y) = p(y|x) p(x).

From these two expressions, we immediately obtain the following result, which is known as Bayes' Theorem [4]:

Theorem 2.1 (Bayes' Theorem)
If p(y) > 0, then

p(x|y) = p(x) p(y|x) / p(y).

Corollary 2.2
X and Y are independent variables if and only if p(x|y) = p(x) for all x, y.

Let us suppose that there is a probability distribution on the plaintext space, P. We denote the a priori probability that plaintext x occurs by pP(x). We also assume that the key K is chosen using some fixed probability distribution. Denote the probability that key K is chosen by pK(K). We can make the assumption that the key K and the plaintext x are independent events, as the key is chosen before Alice knows what the plaintext will be. The two probability distributions

on P and K induce a probability distribution on C. The probability pC(y) that y is the ciphertext that is transmitted can be computed as follows. For a key K ∈ K, define

C(K) = { eK(x) : x ∈ P }.

C(K) represents the set of possible ciphertexts if K is the key. Then, for every y ∈ C, we have that

pC(y) = Σ_{K : y ∈ C(K)} pK(K) pP(dK(y)).

For any y ∈ C and x ∈ P, we can compute the conditional probability pC(y|x) (i.e., the probability that y is the ciphertext, given that x is the plaintext) to be

pC(y|x) = Σ_{K : x = dK(y)} pK(K).

It is now possible to compute the conditional probability pP(x|y) (i.e., the probability that x is the plaintext, given that y is the ciphertext) using Bayes' Theorem. The following formula is obtained [4]:

pP(x|y) = pP(x) Σ_{K : x = dK(y)} pK(K) / Σ_{K : y ∈ C(K)} pK(K) pP(dK(y)).
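The formulas above can also be checked numerically. The following Python sketch is purely illustrative (the toy cryptosystem over ℤ3 and the assumed plaintext distribution are not taken from [4]); it induces pC from pP and pK and then evaluates pP(x|y) via Bayes' Theorem, anticipating the notion of perfect secrecy discussed in Section 2.4.

```python
from fractions import Fraction as F

# Toy cryptosystem (illustrative): P = C = K = Z_3, e_K(x) = x + K mod 3, d_K(y) = y - K mod 3.
P = C = Ks = [0, 1, 2]
e = lambda k, x: (x + k) % 3
d = lambda k, y: (y - k) % 3

pP = {0: F(1, 2), 1: F(1, 3), 2: F(1, 6)}      # assumed a priori plaintext distribution
pK = {k: F(1, 3) for k in Ks}                  # keys chosen uniformly

# p_C(y) = sum over keys K with y in C(K) of p_K(K) * p_P(d_K(y))
pC = {y: sum(pK[k] * pP[d(k, y)] for k in Ks if y in {e(k, x) for x in P}) for y in C}

# p_C(y|x) = sum over keys K with x = d_K(y) of p_K(K)
pC_given = {(y, x): sum(pK[k] for k in Ks if x == d(k, y)) for y in C for x in P}

# p_P(x|y) via Bayes' Theorem
pP_given = {(x, y): pP[x] * pC_given[(y, x)] / pC[y] for x in P for y in C}

print(pC)        # here every ciphertext has probability 1/3
print(pP_given)  # and p_P(x|y) = p_P(x): this toy system has perfect secrecy
```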

2.3 Entropy

Entropy, both as a term and as a concept, as used in this work, was introduced in Claude Shannon's well-known paper A Mathematical Theory of Communication, where he defines the entropy of a set of probabilities [5]. In fact, entropy is a measure of the amount of information, for example in a transmitted message. Detailed information can be found in [4, 5]. In this work we will just cite the necessary definitions and theorems (without proofs, for brevity) from [4]:

Definition 2.3 Suppose X is a random variable which takes on a finite set of values according to a probability distribution p(X). Then, the entropy of this probability distribution is defined to be the quantity

H(X) = −Σ_{i=1}^{n} pi log2 pi

If the possible values of X are xi, 1 ≤ i ≤ n, then we have

H(X) = −Σ_{i=1}^{n} p(X = xi) log2 p(X = xi)

We can now think about the entropy of the cryptosystem and its components. We can compute H(K), H(P), H(C) as the entropies of the key, plaintext and ciphertext according to their probability distributions.

Definition 2.4
Suppose X and Y are two random variables. Then for any fixed value y of Y, we get a (conditional) probability distribution p(X|y). Clearly,

H(X|y) = −Σ_x p(x|y) log2 p(x|y).

We define the conditional entropy H(X|Y) to be the weighted average (with respect to the probabilities p(y)) of the entropies H(X|y) over all possible values y. It is computed to be

H(X|Y) = −Σ_y Σ_x p(y) p(x|y) log2 p(x|y).

The conditional entropy measures the average amount of information about X that is not revealed by Y.
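As a small illustration of Definitions 2.3 and 2.4, the following Python sketch computes H(X) and H(X|Y). It works with the joint distribution p(x, y) = p(y) p(x|y), which agrees with the weighted-average formula above; the example distributions are assumptions chosen for demonstration only.

```python
from math import log2

def entropy(dist):
    """H(X) = -sum_x p(x) log2 p(x); dist maps values to probabilities."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

def conditional_entropy(joint):
    """H(X|Y) = -sum_y sum_x p(x, y) log2 p(x|y); joint maps (x, y) to p(x, y)."""
    pY = {}
    for (x, y), p in joint.items():
        pY[y] = pY.get(y, 0) + p
    return -sum(p * log2(p / pY[y]) for (x, y), p in joint.items() if p > 0)

# Example: X is a fair coin, Y is a noisy copy of X (flipped with probability 1/4).
print(entropy({0: 0.5, 1: 0.5}))    # 1.0 bit of uncertainty about X
print(conditional_entropy({(0, 0): 0.375, (0, 1): 0.125,
                           (1, 0): 0.125, (1, 1): 0.375}))   # about 0.811 bits remain after seeing Y
```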

2.4 Unconditional Security

A cryptosystem is unconditionally secure if no limitation is placed on the computational resources an adversary may use to try to crack the system. The resources could be infinite and the adversary would still not be able to break the cryptosystem. Unconditional security for encryption systems is called perfect secrecy. For perfect secrecy, the uncertainty in the plaintext, after observing the ciphertext, must be equal to the a priori uncertainty about the plaintext – observation of the ciphertext provides no information whatsoever to an adversary. A necessary condition for a symmetric-key encryption scheme to be unconditionally secure is that the key be at least as long as the message. The one-time pad is an example of an unconditionally secure encryption algorithm [1] and will be discussed later. Stinson defines perfect secrecy in the following way [4]:

Definition 2.5

A cryptosystem has perfect secrecy if pP(x|y) = pP(x) for all x ∈ P, y ∈ C. That is, the a posteriori probability that the plaintext is x, given that the ciphertext y is observed, is identical to the a priori probability that the plaintext is x.

As we can see, using Bayes' Theorem, the condition that pP(x|y) = pP(x) for all x ∈ P, y ∈ C is equivalent to the condition that pC(y|x) = pC(y) for all x ∈ P, y ∈ C. We may assume that pC(y) > 0 for all y ∈ C (if pC(y) = 0, then ciphertext y is never used and can be omitted from C). Fix any x ∈ P. For each y ∈ C, we have pC(y|x) = pC(y) > 0. Hence, for each y ∈ C, there must be at least one key K such that eK(x) = y. It follows that |K| ≥ |C|. In any cryptosystem, we must have |C| ≥ |P| since each encryption rule is an injection. In the boundary case |K| = |C| = |P|, we can give a nice characterization of when perfect secrecy can be obtained [4].

Theorem 2.3
Suppose (P, C, K, E, D) is a cryptosystem where |K| = |C| = |P|. Then the cryptosystem provides perfect secrecy if and only if every key is used with equal probability 1/|K|, and for every x ∈ P and every y ∈ C, there is a unique key K such that eK(x) = y.

Proof
Suppose the given cryptosystem provides perfect secrecy. As observed above, for each x ∈ P and y ∈ C there must be at least one key K such that eK(x) = y. So we have the inequalities:

|C| = |{eK(x): K ∈ K}| ≤ |K|. But we are assuming that |C| = |K|. Hence, it must be the case that

|{eK(x): K ∈ K}| = |K|.

That is, there do not exist two distinct keys K1 and K2 such that eK1(x) = eK2(x) = y. Hence, we have shown that for any x ∈ P and y ∈ C, there is exactly one key K such that eK(x) = y [4].

2.4.1 Using Quantum Cryptography

Quantum cryptography is the use of quantum systems to do cryptographic tasks. The most famous example (but by no means the only one) is quantum key distribution (QKD) which uses quantum mechanics to guarantee secure communication. It enables two parties to produce a shared random bit string known only to them, which can be used as a key to encrypt and decrypt messages. An important and unique property of quantum cryptography is the ability of the two communicating users to detect the presence of any third party trying to gain knowledge of the key. This results from a fundamental aspect of quantum mechanics: the process of measuring a quantum system inevitably disturbs the system. A third party trying to eavesdrop on the key must in some way measure it, thus introducing detectable anomalies. By using quantum superpositions or quantum entanglement and transmitting information in quantum states, a

communication system can be implemented which detects eavesdropping. If the level of eavesdropping is below a certain threshold, a key can be produced that is guaranteed to be secure (i.e., the eavesdropper has no information about it); otherwise no secure key is possible and communication is aborted. The security of quantum cryptography relies on the foundations of quantum mechanics, in contrast to traditional cryptography, which relies on the computational difficulty of certain mathematical functions and cannot provide any indication of eavesdropping or guarantee of key security. Quantum cryptography is only used to produce and distribute a key, not to transmit any message data. This key can then be used with any chosen encryption algorithm to encrypt (and decrypt) a message, which can then be transmitted over a standard communication channel. The algorithm most commonly associated with QKD is the one-time pad, as it is provably secure when used with a secret, random key [41].

2.4.2 Limiting the Adversary

While Shannon's theorem means that unconditional security is possible between parties only when they share a key of length at least equal to the entropy of the message to be transmitted, this condition is quite impractical [6]. If we do not want to use quantum cryptography, we can think about an adversary and his realistic chances, computational capabilities, resources, etc. If we take some considerations into account, like noise in communication channels, the limited memory of an adversary, etc., we can get more optimistic results [6]. While these assumptions are quite rational, they are still just assumptions and cannot establish any proof of unconditional security. This thesis does not aim to explore quantum cryptography and its alternatives in detail. These two brief sections were included in order to complete the summary of possible means of achieving unconditional security. This study is solely about unconditional security, without limitations, in terms of classical cryptography.

2.5 Public Key Cryptography

A new approach to cryptography was introduced in 1976, when Whitfield Diffie and Martin E. Hellman published a paper called New Directions in Cryptography [19]. The paper demonstrated a new way to encrypt and decrypt messages by using two different keys. This was a radical change in comparison with private key cryptography, which uses the same key for both actions and requires that key to be kept secret. This approach employs two different keys: one

is public and one is a secret key. The public key is used to encrypt a message and the secret key is used to decrypt it. The scheme works as follows: one-way functions are used – from an input it should be very easy to compute the output, but the opposite direction should be almost impossible. For example, the multiplication of very large (prime) numbers is very easy, but the factorization of the result is not. Here is a list of some established public key cryptosystems:

• RSA – The security of RSA is based on the difficulty of factoring large integers.
• Merkle-Hellman Knapsack – This and related systems are based on the difficulty of the subset sum problem; however, all of the various knapsack systems have been shown to be insecure (with the exception of the Chor-Rivest Cryptosystem mentioned below).
• McEliece – The McEliece Cryptosystem is based on algebraic coding theory and is still regarded as being secure. It is based on the problem of decoding a linear code.
• ElGamal – The ElGamal Cryptosystem is based on the difficulty of the discrete logarithm problem for finite fields.
• Chor-Rivest – This is also referred to as a "knapsack" type system, but it is still regarded as being secure.
• Elliptic Curve – The Elliptic Curve Cryptosystems are modifications of other systems (such as the ElGamal Cryptosystem, for example) that work in the domain of elliptic curves rather than finite fields. The Elliptic Curve Cryptosystems appear to remain secure for smaller keys than other public-key cryptosystems [4].

2.6 Conclusion

In this chapter the author has introduced the mathematical background needed for further work: the definition of a private key cryptosystem, entropy, and the resulting definition of unconditional security. One further observation can be made here: a public key cryptosystem can never provide unconditional security. This is because an opponent, on observing a ciphertext y, can encrypt each possible plaintext in turn using the public key until he finds the unique x such that y = eK(x). This x is the decryption of y. Consequently, it is possible to study only the computational security of public key systems [4]. Therefore, public key cryptosystems and applications of their principles will not be considered further in this work.

3 Cryptographic Primitives

In previous chapters, the introduction to cryptography and the definition of unconditional security were presented, so we are now able to present cryptographic primitives and the outcomes of research into their (unconditional) security.

3.1 Encryption

Encryption is the oldest cryptographic discipline, and until modern times cryptography and encryption were effectively synonymous. Encryption is the process of converting a plaintext into a ciphertext, which to a potential adversary is an unreadable piece of nonsense and can be read/decrypted only by someone who has the key. While a public key cryptosystem cannot be unconditionally secure, as we stated above in Chapter 2.5, in the following text we present an unconditionally secure private key cryptosystem.

3.1.1 The Shift Cipher

Definition 3.1 (Shift Cipher)
Let P = C = K = ℤ26. For 0 ≤ K ≤ 25, define

eK(x) = x + K mod 26

and

dK(y) = y – K mod 26

(x, y ∈ ℤ26) [4].

The Shift Cipher provides perfect secrecy. This seems quite obvious intuitively. For, if we are given any ciphertext element y ∈ ℤ26, then any plaintext element x ∈ ℤ26 is a possible decryption of y, depending on the value of the key. Theorem 3.1 below gives the formal statement and proof using probability distributions [4]; a small implementation sketch is given first.
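The following Python sketch of the Shift Cipher is an illustration, not code from [4]. As Theorem 3.1 below makes precise, perfect secrecy requires a fresh uniformly random key for every single plaintext character; the sketch applies one key to a whole word only for brevity, which corresponds to the classical, breakable use of the cipher.

```python
import random

ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def shift_encrypt(key, plaintext):
    """e_K(x) = x + K mod 26, applied letter by letter with a single key (classical usage)."""
    return "".join(ALPHABET[(ALPHABET.index(c) + key) % 26] for c in plaintext)

def shift_decrypt(key, ciphertext):
    """d_K(y) = y - K mod 26."""
    return "".join(ALPHABET[(ALPHABET.index(c) - key) % 26] for c in ciphertext)

key = random.randrange(26)                    # a uniformly random key from Z_26
c = shift_encrypt(key, "unconditional")
assert shift_decrypt(key, c) == "unconditional"
```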

Theorem 3.1
Suppose the 26 keys in the Shift Cipher are used with equal probability 1/26. Then for any plaintext probability distribution, the Shift Cipher has perfect secrecy.

Proof
Recall that P = C = K = ℤ26, and for 0 ≤ K ≤ 25, the encryption rule eK is eK(x) = x + K mod 26 (x ∈ ℤ26). First, we compute the distribution pC. Let y ∈ ℤ26; then

pC(y) = Σ_{K ∈ ℤ26} pK(K) pP(dK(y)) = Σ_{K ∈ ℤ26} (1/26) pP(y – K) = (1/26) Σ_{K ∈ ℤ26} pP(y – K).

Now, for fixed y, the values y – K mod 26 comprise a permutation of ℤ26, and pP is a probability distribution. Hence we have that

Σ_{K ∈ ℤ26} pP(y – K) = Σ_{y ∈ ℤ26} pP(y) = 1.

Consequently, pC(y) = 1/26 for any y ∈ ℤ26. Next, we have that

pC(y|x) = pK(y – x mod 26) = 1/26

for every x, y, since for every x, y the unique key K such that eK(x) = y is K = y – x mod 26. Now, using Bayes' Theorem, it is trivial to compute

pP(x|y) = pP(x) pC(y|x) / pC(y) = pP(x) (1/26) / (1/26) = pP(x),

so we have perfect secrecy. So, the Shift Cipher is "unbreakable" provided that a new random key is used to encrypt every plaintext character [4].

3.1.2 One-time Pad

In 1917 Gilbert Vernam invented a cipher which is named after him. It was intended to be used for telegraph encryption. Vernam's cipher combined a stream of plaintext characters with a keystream using the XOR function. Some time after that, U.S. Army captain Joseph Mauborgne proposed the use of a random, non-repeated key, which improved security. This is also known as the One-time Pad, and this cipher provides perfect secrecy, as Shannon showed in his work [1, 5].

Definition 3.2 (One-time Pad)
Let n ≥ 1 be an integer, and let P = C = K = (ℤ2)^n. For K ∈ (ℤ2)^n, define eK(x) to be the vector sum modulo 2 of K and x (equivalently, the exclusive-or of the two associated bitstrings).

So if x = (x1, …, xn) and K = (K1, …, Kn), then

eK(x) = (x1 + K1, …, xn + Kn) mod 2, and if y = (y1, …, yn), then

dK (y) = (y1 + K1, …, yn + Kn) mod 2 [4].
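A minimal Python sketch of the One-time Pad follows; it is an illustration working on byte strings rather than bit vectors, and the example message is an assumption. Encryption and decryption are the same XOR operation, and the key must be uniformly random, as long as the message, and used only once.

```python
import secrets

def otp_encrypt(key, plaintext):
    """e_K(x): bitwise XOR of key and plaintext (both byte strings of equal length)."""
    assert len(key) == len(plaintext)
    return bytes(k ^ x for k, x in zip(key, plaintext))

# Decryption is the same XOR operation: d_K(y) = y XOR K.
otp_decrypt = otp_encrypt

message = b"attack at dawn"
key = secrets.token_bytes(len(message))   # fresh, uniformly random, used only once
ciphertext = otp_encrypt(key, message)
assert otp_decrypt(key, ciphertext) == message
```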

Theorem 3.2 The One-time Pad provides perfect secrecy. Proof

We can see this using Theorem 2.3. Denote n = |K|. Let P = {xi : 1 ≤ i ≤ n} and fix a y ∈ C. We can name the keys K1, K2, ..., Kn in such a way that eKi(xi) = y, 1 ≤ i ≤ n. Using Bayes' theorem, we have

pP(xi|y) = pC(y|xi) pP(xi) / pC(y) = pK(Ki) pP(xi) / pC(y).

Consider the perfect secrecy condition pP(xi|y) = pP(xi). From this, it follows that pK(Ki) = pC(y), for 1 ≤ i ≤ n. This says that the keys are used with equal probability (namely, pC(y)). But since the number of keys is |K|, we must have that pK(K) = 1/|K| for every K ∈ K. Conversely, suppose the two hypothesized conditions are satisfied. Then the cryptosystem is easily seen to provide perfect secrecy for any plaintext probability distribution [4]. The system is also attractive because of the ease of encryption and decryption – the XOR

function of a plaintext / ciphertext with the key is used for encryption and decryption alike. There is one big disadvantage – the fact that |K| ≥ |P| means that the amount of key that must be communicated securely is at least as large as the amount of plaintext. For example, in the case of the One-time Pad, we require n bits of key to encrypt n bits of plaintext. This would not be a major problem if the same key could be used to encrypt different messages; however, the security of unconditionally secure cryptosystems depends on the fact that each key is used for only one encryption. This is the reason for the term "one-time" in the One-time Pad [4]. Hence, a new key needs to be generated and communicated over a secure channel for every message that is going to be sent. This creates severe key management problems, which has limited the use of the One-time Pad in commercial applications. However, the One-time Pad has seen application in military and diplomatic contexts, where unconditional security may be of great importance [4]. This disadvantage can be eliminated, for example, by quantum key distribution.

3.2 Authentication

One of the objectives that cryptography seeks to provide is authentication. It is a service used to confirm the identity of the respective parties communicating together (entity authentication), or to confirm that messages sent by the communicating parties over an insecure channel were not maliciously altered (data origin authentication). In this subchapter we focus on data origin authentication, or message authentication, procedures which are provably unconditionally secure. For these purposes, hash functions are used, namely the class of them called message authentication codes. These codes have two inputs, a message and a secret key, and produce a fixed-length result which is infeasible for an adversary to produce without knowing the secret key [1].

Definition 3.3 (Authentication scheme)
An authentication scheme is a five-tuple (a, v, M, K, B), where M is the set of possible messages, K is the set of possible keys and B is the set of possible authenticated messages. There are two functions, the authenticating function a : M × K → B and the verifying function v : B × K → M × {accept, reject}. When Alice wants to send a message m ∈ M to Bob, she encodes it using the authenticating function to obtain b ∈ B, b = a(m, k), where k ∈ K is some secret key shared by Alice and Bob. When Bob receives b, he applies the function v(b, k) to decide whether the message is authentic or not [10].

While the One-time Pad is a well-known unconditionally secure way to encrypt a message, it seems that awareness of unconditionally secure message authentication is not so common. In 1979 [7] and 1981 [8], J. Lawrence Carter and Mark N. Wegman published works in which they presented new hash functions and their use in authentication. They showed that the application of their class of functions in an unconditionally secure authentication technique allows the receiver to be sure that a received message was not forged and is genuine, and that even an adversary with infinite resources could not forge or modify a message without detection [8]. Wegman and Carter achieved this by combining one-time pads with hash functions. Their work was further refined by Hugo Krawczyk [9] and other researchers [1].

3.2.1 Wegman-Carter Authentication

For this subchapter, we will work mainly with Wegman and Carter's works [7, 8]. Let us start with some notation for the definitions, which will be used further on: if S is a set, |S| will denote the

number of elements in S. If x and y are bit strings, x ⊕ y is the exclusive-or between x and y. All hash functions map a set A into a set B. We will assume |A| > |B|. A is sometimes called the set of possible keys, and B the set of indices. If f is a hash function and x, y ∈ A, we define:

δf(x, y) = 1 if x ≠ y and f(x) = f(y), and δf(x, y) = 0 otherwise.

If δf(x, y) = 1, then we say that x and y collide under f. If f, x or y is replaced in δf(x, y) by a set, we sum over all the elements in the set. Thus if H is a collection of hash functions, x ∈ A and

S ⊂ A, then δH(x, S) means Σ_{f ∈ H} Σ_{y ∈ S} δf(x, y) [7].

Known universal hash classes contain a quite large number of hash functions. We will present a set of functions which is almost strongly universal2 and for which the number of bits needed to specify a randomly chosen function is much smaller than for a typical hash function – only log(n) bits are required, as opposed to O(n) bits [8]. This is the ideal feature which makes such hashing useful for authentication. Moreover, it is only necessary for the sender and the receiver to share a secret key whose length is on the order of the logarithm of the length of the message [8]. It can be proved that such a system is unconditionally secure, as we will see later.

To be universal2, a set of functions from A to B must only satisfy a requirement on the probability that a randomly chosen function will map two points of A to the same value. For a set of functions to be strongly universaln, a randomly chosen function must, with equal probability, map any n distinct points of A to any n values in B; in other words, any n points must be distributed randomly throughout B by the functions. [8]. To be more formal:

Definition 3.4 (Universal sets of hash functions)

Let H be a set of functions from A to B. We say that H is universal2 if for all x, y in A,

δH(x, y) ≤ |Η|/|Β|. That is, H is universal2 if no pair of distinct keys collide under more than (1/|B|)th of the functions [7].

Definition 3.5 (Strongly universal sets of hash functions)
Suppose H is a set of hash functions, each element of H being a function from A to B. H is strongly universaln if, given any n distinct elements a1,...,an of A and any n (not necessarily distinct) elements b1,...,bn of B, exactly |H|/|B|^n functions take a1 to b1, a2 to b2, etc. A set of hash functions is strongly universalω if it is strongly universaln for all values of n [8]. (Strongly universaln sets of functions can be created using polynomials over finite fields; for more details please see [8].)
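As an illustration of Definition 3.5, and of the remark that such sets can be built from polynomials over finite fields, the following Python sketch constructs the affine family h_{a,b}(x) = (a·x + b) mod p over a small prime field and checks the strongly universal2 property by brute force; the choice p = 5 is an assumption made only to keep the check small.

```python
from itertools import product

p = 5                                             # a small prime; here A = B = Z_p (toy choice)
H = [(a, b) for a in range(p) for b in range(p)]  # h_{a,b}(x) = (a*x + b) mod p

def h(f, x):
    a, b = f
    return (a * x + b) % p

# Strongly universal_2: for any distinct a1, a2 and any b1, b2,
# exactly |H| / |B|^2 functions take a1 -> b1 and a2 -> b2.
expected = len(H) // p**2                         # = 25 / 25 = 1
for a1, a2 in product(range(p), repeat=2):
    if a1 == a2:
        continue
    for b1, b2 in product(range(p), repeat=2):
        count = sum(1 for f in H if h(f, a1) == b1 and h(f, a2) == b2)
        assert count == expected
```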

Consider the following scenario: we want to send a message over an insecure channel and we want the receiver to be able to verify the identity of the sender. In the following text we will denote the sender by A and the receiver by B; A may also be referred to as "he" and B as "she". Our objective could be accomplished, for example, by digital signatures, but this solution, based on public key cryptography, is not unconditionally secure, as we mentioned in Chapter 2.5. The same holds for encryption schemes used for signing messages [8]. Wegman and Carter proposed in their works the use of authentication tags. They also proved that this solution provides unconditional security: a carefully constructed authentication tag system has the property that it is provably impossible for a forger to have more than an arbitrarily small chance of creating a message which the receiver will accept as genuine [8].

Definition 3.6 (Authentication tag system)
An authentication tag system is formalized as follows: there is a set M of possible messages and a set T of authentication tags. There is also a publicly known set of functions F, where each function in F maps M into T. To use the system, A and B agree on a secret key which specifies one of the functions f in F. When A transmits a message m in M, he also sends the authentication tag f(m). B checks that f applied to the message she received is indeed the tag she received. If so, she has a certain assurance that the received message is not a forgery. It must be impossible to find the function from the message and its tag: knowing the value of f on one message must give no information about the value of f on any other message [8].

To make this more precise, we say that an authentication system is unbreakable with certainty p if, after a function f is randomly chosen and after the forgers are given any message m and the corresponding tag f(m), the forgers cannot find a different message m' for which they have better than a probability p of guessing the correct tag. This has to hold for any m, even one chosen by the forgers [8].

Definition 3.7 (Authentication system unbreakable with certainty p)
To create an authentication system which is unbreakable with certainty p, we can simply choose T to have at least 1/p elements, and let F be a strongly universal2 class of hash functions from M to T. If we let F' be the subset of F consisting of the functions which map m to f(m), we see that the only information the forgers have available is that the secret function is one of the functions in F'. However, the definition of strongly universal2 implies that for any m' distinct from m, the proportion of functions in F' which map m' to any particular tag t' is 1/|T|. Since |T| ≥ 1/p, any choice the forger makes has no more than a probability p of being correct [8].
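A minimal sketch of the construction in Definition 3.7 follows, reusing an affine strongly universal2 family over a prime field; the prime q, the encoding of messages as elements of ℤq and the example message are illustrative assumptions rather than the exact construction from [8].

```python
import secrets

q = 1009                         # prime; tags come from T = Z_q, so |T| = q >= 1/p for p = 1/1009
# Secret key: a random function f = h_{a,b}(m) = (a*m + b) mod q from a strongly
# universal_2 family, shared by sender A and receiver B in advance.
a, b = secrets.randbelow(q), secrets.randbelow(q)

def tag(m):
    return (a * m + b) % q       # messages are assumed to be encoded as elements of Z_q

# A transmits (m, tag(m)); B recomputes the tag and compares.
m = 42
t = tag(m)
assert tag(m) == t               # B accepts the genuine message
# A forger who sees (m, t) and submits any m' != m guesses the correct tag
# with probability at most 1/q, no matter how much computing power he has.
```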

Definition 3.8 (A small, almost strongly universal2 class)
We will construct a set of hash functions H' from some large space A' to a space B'; A' stands for the set of messages and B' for the set of possible tags. Let a' be the length of the messages and b' the length of the tags. Let s = b' + log2 log2(a'). Let H be some strongly universal2 class of functions which map bit strings of length 2s to ones of length s. For this purpose, the multiplicative scheme from Wegman and Carter's previous work [7] can be used. Each member of H' will be constructed from a sequence of length log2(a') – log2(b') of members of H.

Suppose f1, f2, ... is some such sequence. We will specify how to apply the associated member f' of H' to a message. The message is broken into substrings of length 2s (if necessary, the last string is padded with blanks), so the message is broken into a'/(2s) substrings. f1 is applied to all substrings and the resulting substrings are concatenated; by concatenating the resulting substrings we obtain a string whose length is roughly half the original string's length. This process is repeated using f2, f3, ... until only one substring of length s is left. The tag (i.e. the result of the hash function f') is the low-order b' bits of this substring. The key needed to specify f' is the concatenation of the keys needed to specify f1, f2, .... The multiplicative scheme suggested above has a key roughly twice the size of its input. If this class is used for H, the size of the key for H' will be 4s·log2(a'). Thus, the key is roughly four times the length of the tags times the log of the length of the message. By comparison, the multiplicative scheme by itself would have a key whose length was twice that of the message [8].

Theorem 3.3
Given any two distinct messages m1 and m2 and two tag values t1 and t2, the number of functions which take m1 to t1 is 1/|B'| times the total number of functions. However, fewer than 2/|B'| of these functions will also take m2 to t2.

Proof (sketch)
Each time we halve the length of the messages, there is a small (1/2^s) chance that the two resulting strings are now identical. Since we iterate the halving process log2(a') – log2(b') times, the chance that the two strings are identical at the next-to-last step is at most log2(a')/2^s, which is equal to 1/2^b'. Now the fact that the function that does the last reduction is chosen from a strongly universal2 class can be used to show that m1 will be taken into any tag with equal probability, and, as long as the penultimate strings were different, m2 will also be taken into any string with probability equal to 1/|B'|. Thus, if t1 ≠ t2, then less than 1/|B'| of the functions will take m2 to t2, and otherwise, less than 2/|B'| will [8].

In terms of the authentication scheme, the theorem states that after the enemy knows one message-tag pair, he can do no better than to find another message-tag pair which has probability 2/|B'| of being correct. Thus this scheme is unbreakable with certainty 2/|B'|, and this certainty can be made smaller than any predetermined value [8].

While the above scheme does not allow the authentication of multiple messages, i.e. the tagging of more than one message with the same function, for security reasons, Wegman and Carter also suggested in their work [8] a scheme which is able to do so:

Definition 3.9 (Authentication scheme for multiple messages)

Let F be a strongly universal2 set of functions from M to B, where B is the set of bit strings of length k. Each message in M must contain a message number between 1 and n. The secret key shared by the sender and receiver now consists of two parts. The first part specifies a function f in F. The second part of the key is a sequence (b1, …, bn) of elements of B. The sender must be certain never to send two messages with the same message number. To create the authentication tag ti for the message mi, the sender first calculates f(mi) and then exclusive-ors this result with bi. Since each message contains a message number, the receiver can duplicate this process to verify that the tag is correct [8]. In Theorem 3.4 below we show that this scheme is unbreakable with certainty 1/2^k.
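The following Python sketch illustrates Definition 3.9 under simplifying assumptions: tags live in ℤq for a small prime q, and the one-time pads bi are combined by addition modulo q instead of by exclusive-or on k-bit strings, which plays the same role in this toy version; all parameters are chosen for demonstration only.

```python
import secrets

q = 1009                                   # prime; tags are elements of Z_q (toy parameters)
n = 5                                      # number of messages that may be authenticated
a, b = secrets.randbelow(q), secrets.randbelow(q)   # key part 1: f = h_{a,b} from a strongly universal_2 family
pads = [secrets.randbelow(q) for _ in range(n)]     # key part 2: the one-time pad sequence (b_1, ..., b_n)

def tag(i, m):
    """Tag for the i-th message (message numbers start at 1); each number must be used only once."""
    # Wegman and Carter combine f(m_i) with b_i by exclusive-or on bit strings;
    # here the pad is added modulo q, which is the analogous masking step in this toy version.
    return ((a * m + b) + pads[i - 1]) % q

m1, m2 = 7, 123
t1, t2 = tag(1, m1), tag(2, m2)
assert tag(1, m1) == t1 and tag(2, m2) == t2        # receiver recomputes the tags and accepts
```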

Theorem 3.4

Suppose some key (f, (b1, …, bn)) has been chosen randomly from the set of keys. Let m1, …, mn be any n messages, with the restriction that the message numbers must all be different. Suppose a forger knows only the set F and the set of messages and their corresponding tags ti = f(mi) ⊕ bi. Then there is no new message for which the forger has a better than 1/2^k chance of correctly guessing the tag.

Proof
Suppose the forger wishes to guess the tag of the new message m. Without loss of generality, we assume m has the message number 1.

For each t in B, define St = {(g, b)| g ∈ F, b ∈ B, g(m1) ⊕ b = t1 and g(m) ⊕ b = t}.

St is the set of partial keys which are consistent with the fact that m1 has the tag t1, and which give the bogus message m the tag t. Since F is strongly universal2, each of the St's has the same size, and there is exactly one way to extend each partial key in St to a complete key which also assigns tag ti to message mi for i = 2, …, n (namely, let bi = g(mi) ⊕ ti). Thus, of all keys which are consistent with the information which the forger has available, as many will assign to m any one tag as any other tag. So the forger's probability of guessing the correct tag for m is 1/2^k [8].

Consider the following scenario. Let OPT(n) be the smallest key size which is necessary to prevent a forger from having more than a specified probability of success at forging at least one out of n messages. Suppose a function has been selected from a set F. The forger chooses a

message m1 and tries to guess the correct tag. He is then told the correct tag t1. Now the forger selects a second message m2, tries to guess the tag, and is then told the correct tag t2. This process is repeated n times. If we wish, we may require the forger to choose each message from a restricted subset of the set of all messages, or we may even have a fixed sequence of messages – these variations do not affect the following theorem [8].

Theorem 3.5

In the above scenario, if the forger's probability of success on his i-th guess is ≤ pi, then F must contain at least 1/(p1 p2 ⋯ pn) functions.
Proof

Let F0 = F and Fk = {f ∈ F | f(mi) = ti for i = 1, …, k}. The forger might use the following strategy in his guessing: after choosing the i-th message, he enumerates the set Fi–1, randomly chooses a member f of it, and guesses the tag f(mi). Since this has a ≤ pi chance of success, it must be the case that |{f ∈ Fi–1 | f(mi) = ti}| ≤ pi |Fi–1|. The set on the left-hand side is Fi, so we have

|Fi–1| ≥ (1/pi)|Fi|. This is true for each i, so we have |F0| ≥ (1/p1)(1/p2)⋯(1/pn)|Fn|. The theorem follows since F0 = F and |Fn| ≥ 1.

Corollary 3.6

When p1 = p2 = … = pn = p, it requires at least n(–log2(p)) bits to specify a randomly chosen member of F for any scheme which is unbreakable with certainty p and able to send n messages. Note that the scheme we presented above requires n(–log2(p)) + k bits, where k is the number of bits needed to specify an element of a strongly universal2 class of hash functions [8]. Wegman and Carter's authentication schemes were studied in more depth and extended, for example, by Krawczyk in [9], by Shoup in [11], etc. All newer works focused on improvements in computational efficiency, especially in the size of the key, and in memory and CPU requirements.

Shoup showed [11] that it is not necessary to use universal2 classes of hash functions; in his work he defines the ɛ-AXU (almost exclusive-or universal) class:

Definition 3.10 (ɛ-AXU class)
We say that H is an ɛ-AXU family of hash functions if it satisfies the following property for suitably small ɛ ≥ 2^–l: for any pair of inputs x1 ≠ x2 and for any l-bit string z, for a random h ∈ H, the probability that h(x1) ⊕ h(x2) = z is no more than ɛ.

These classes allow the very efficient implementation of unconditionally secure message authentication, which is, for instance, used in the SECOQC (Secure Communication based on

Quantum Cryptography) project [10].
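As an illustration of Definition 3.10 (and not of the constructions actually used in the SECOQC project), the following Python sketch treats multiplication by the key in GF(2^8) as a toy ɛ-AXU family with ɛ = 2^–8 and single-byte inputs, and verifies the bound for one assumed pair of inputs by exhausting all 256 keys.

```python
def gf256_mul(a, b, poly=0x11B):
    """Multiplication in GF(2^8) modulo the AES polynomial x^8 + x^4 + x^3 + x + 1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= poly
        b >>= 1
    return r

# The toy family H = { h_k(x) = k * x in GF(2^8) : k in GF(2^8) } is epsilon-AXU with
# epsilon = 2^-8: for x1 != x2, h_k(x1) XOR h_k(x2) = k * (x1 XOR x2), uniform over random k.
x1, x2, z = 0x53, 0xCA, 0x1F          # one assumed pair of distinct inputs and a target value z
count = sum(1 for k in range(256)
            if gf256_mul(k, x1) ^ gf256_mul(k, x2) == z)
print(count / 256)                    # exactly 1/256 = 2**-8
```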

3.3 Anonymous Transfer

In the preceding chapters we have shown that it is possible to encrypt information unconditionally and to send information in such a way that any alteration of it will be detected; now we will describe a technique for remaining anonymous, i.e. untraceable, when sending or receiving such information. Anonymity is defined as the state of not being identifiable within a set of subjects, the anonymity set [12]. Untraceability means that it is impossible to track, or track down, for example the originator or the recipient of a message. A protocol is fault-tolerant if honest participants cannot be permanently prevented from broadcasting, i.e. they always have a chance to send messages [16]. David Chaum, in his work The Dining Cryptographers Problem: Unconditional Sender and Recipient Untraceability [13], presented a protocol which guarantees the above in an unconditional manner. His solution is known as the DC-net. Let us describe it in more detail.

3.3.1 DC-net

Chaum introduces his work with a scene in which three cryptographers are having dinner in a restaurant. The bill is going to be paid anonymously – either one of the cryptographers pays, or the U.S. National Security Agency (NSA) pays for it. While the cryptographers respect each other's anonymity in this scenario, they want to know whether the NSA is paying or not. The solution is to follow the protocol below: Each cryptographer flips an unbiased coin behind his menu, between him and the cryptographer on his right, so that only the two of them can see the outcome. Each cryptographer then states aloud whether the two coins he can see – the one he flipped and the one his left-hand neighbor flipped – fell on the same side or on different sides. If one of the cryptographers is the payer, he states the opposite of what he sees. An odd number of differences uttered at the table indicates that a cryptographer is paying; an even number indicates that the NSA is paying (assuming that the dinner was paid for only once). Yet if a cryptographer is paying, neither of the other two learns anything from the utterances about which cryptographer it is [13]. A minimal simulation of one round of this protocol is sketched below; Picture 1 illustrates it [14].
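The following Python sketch is an illustrative simulation of one round of the protocol just described; the coin numbering and the choice of payer are assumptions made for the demonstration.

```python
import random

def dining_cryptographers(payer=None):
    """One round of Chaum's three-party protocol; payer is 0, 1, 2 or None (the NSA pays)."""
    coins = [random.randint(0, 1) for _ in range(3)]   # coin i is shared by cryptographers i and (i + 1) % 3
    announcements = []
    for i in range(3):
        seen_equal = coins[i] == coins[(i - 1) % 3]    # compares the two coins cryptographer i can see
        if i == payer:
            seen_equal = not seen_equal                # the payer states the opposite of what he sees
        announcements.append(seen_equal)
    # An odd number of "different" announcements means a cryptographer paid.
    return announcements.count(False) % 2 == 1

# The parity reveals only whether some cryptographer paid, never which one.
assert dining_cryptographers(payer=None) is False
assert all(dining_cryptographers(payer=random.randrange(3)) for _ in range(100))
```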

Picture 1: Dining cryptographers problem illustration

As Chaum states in his work, such a protocol is unconditionally secure if carried out faithfully. Consider the dilemma of a cryptographer who is not the payer and wishes to find out which cryptographer is. (If the NSA pays, there is no anonymity problem.) There are two cases. In case (1) the two coins he sees are the same, one of the other cryptographers said "different", and the other one said "same". If the hidden outcome was the same as the two outcomes he sees, the cryptographer who said "different" is the payer; if the outcome was different, the one who said "same" is the payer. But since the hidden coin is fair, both possibilities are equally likely. In case (2) the coins he sees are different; if both other cryptographers said "different", then the payer is closest to the coin that is the same as the hidden coin; if both said "same", then the payer is closest to the coin that differs from the hidden coin. Thus, in each subcase, a nonpaying cryptographer learns nothing about which of the other two is paying [13].

This protocol has one big disadvantage – any user who wants to disrupt the system can do so by preventing the others from sending messages, for example by sending invalid messages or just by dropping out of the protocol [15]. Chaum in [13] suggests something like a trap system to prevent and catch this kind of malicious behaviour: before sending a message, a reservation protocol is performed, in which users reserve message slots (a series of rounds in which each message is transferred). At this time, each player commits to a declaration of "trap" or "non-trap" for her reserved slot. To jam the DC-net, a dishonest player must transmit a message in a slot she has not reserved. But if she tries to transmit a message in a slot that is a "trap", then the attack may be detected during a decommitment phase [15].

To fight this drawback, many contributions were introduced by various researchers; in our opinion the most important were made by Birgit Pfitzmann and Michael Waidner, who suggested several improvements of the DC-net in their subsequent works, concluding with the protocol proposed in Unconditionally Untraceable and Fault-tolerant Broadcast and Secret Ballot Election [16], in which they utilized their other achievement, presented in Unconditional Byzantine Agreement for any Number of Faulty Processors [17].

3.3.2 DC-net Protocol with Waidner-Pfitzmann's Improvements

Chaum's original protocol [13] suffers from being easily jammed by dishonest participants. Waidner and Pfitzmann's way to clear away this disturbance is as follows [16]: they utilize three protocols in succession, the DC-net protocol, pseudosignatures and a Byzantine agreement protocol. There is also a reliable broadcast assumption, which means that if a sender broadcasts a message, then all honest participants agree on a message v, and if the sender is honest, v is the message the sender meant to send [16].

3.3.2.1 DC-net with Collision Resolution, Pseudosignatures and Byzantine Agreement

The basic service is the DC-net protocol with collision resolution, which was presented by Jurjen Bos and Bert den Boer in [18], where they suggest some improvements to Chaum's original protocol. It works as follows:

DC-net protocol with collision resolution
Let G be an arbitrary finite Abelian group. For each execution of the DC-protocol, called a

DC-round, each pair of participants {Pi, Pj} needs a common secret key, Kij ∈ G, and each participant Pi has a message Mi ∈ G. Each participant Pi computes and publishes her local sum

Oi := Mi + (K1,i + … + Ki–1,i) – (Ki+1,i + … + Kn,i), and computes the global sum S := O1 + … + On from all published local sums. Obviously, S =

M1 + … + Mn. If exactly one participant has chosen Mi ≠ 0, this message has been sent successfully. Otherwise, a collision has occurred. These collisions are resolved by an untraceability-preserving multi-access protocol [16]; to resolve them, Waidner and Pfitzmann use the multi-access protocol from [18], which is described next:

Let G be a finite field. Each participant Pi chooses a message Ri ∈ G. If no participant disturbs the protocol, then in n DC-rounds the multiset R of all Ri's is computed. It is possible to execute all n DC-rounds simultaneously. This protocol is used as a reservation technique: Ri is randomly chosen from G and is called a reservation message. G must be sufficiently large so that, almost certainly, all honest participants choose different reservation messages. Let ci be the number of occurrences of Ri in R, and let NotColl be the set {Ri ∈ R | ci = 1}. Then the arithmetical order of all Ri ∈ NotColl defines an order of all participants Pi with ci = 1. This protocol offers unconditional untraceability – the ensemble of the local sums contains no information about the Mi's except the global sum, thus sending is untraceable. Since all local sums are published, each participant receives the same messages, thus receiving is untraceable, too. Unfortunately, the DC-protocol offers no fault tolerance, as we have already mentioned: any attacker can easily and untraceably disturb the whole DC-protocol by publishing wrong local sums. The trap protocol by Chaum [13] should guarantee cryptographic fault tolerance, but it needs some refinements and modifications before being secure; these will be described later. The Waidner-Pfitzmann broadcast protocol discussed here is also based on this trap idea [16].
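To make the DC-round computation above concrete, here is a minimal sketch with the additive group ℤp standing in for G. The group choice, key setup and variable names are illustrative assumptions, not taken from [16] or [18].

```python
# Minimal sketch of one DC-round over the additive group Z_p (an Abelian group).
import random

p = 2**13 - 1  # small prime; the group G is Z_p with addition modulo p

def shared_keys(n):
    """Every unordered pair {P_i, P_j} holds one common secret key K[i][j] = K[j][i]."""
    K = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            K[i][j] = K[j][i] = random.randrange(p)
    return K

def local_sum(i, message, K, n):
    """O_i := M_i + (keys shared with lower-indexed parties)
             - (keys shared with higher-indexed parties), all modulo p."""
    s = message
    for j in range(n):
        if j < i:
            s += K[j][i]
        elif j > i:
            s -= K[i][j]
    return s % p

n = 4
K = shared_keys(n)
messages = [0, 0, 42, 0]                 # exactly one non-zero message, so no collision
local = [local_sum(i, messages[i], K, n) for i in range(n)]
global_sum = sum(local) % p              # all keys cancel: S = M_1 + ... + M_n
assert global_sum == 42
```

Each published local sum is individually uniformly random; only their sum carries information, which is exactly the property that makes sending untraceable.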

Pseudosignatures

Pseudosignatures will be described in more detail in one of the following chapters, therefore we mention only the basics needed for understanding the broadcast protocol. Pseudosignatures are an authentication technique which is unconditionally correct and unforgeable [16]. They are also λ-transferable for a fixed parameter λ ∈ ℕ: a message pseudosigned by participant S can be transferred λ times, e.g., via S = Pi0 → Pi1 → … → Piλ, so that for each j < λ, if Pij accepts the message, then it knows that Pij+1 will accept the message, too, almost certainly. If Piλ forwards the message, it is not guaranteed that the (λ+1)-st recipient accepts the message, too. Thus λ must be chosen carefully, depending on the specific protocol [16]. A pseudosignature needs an initialization phase: for a parameter ∆ ∈ ℕ (which determines the error probability), let m := (λ–1)∆ + 1. There are m subphases in which each participant sends a randomly chosen key of an authentication code untraceably, using the DC-protocol and a variant of the multi-access protocol from above – instead of reservation messages, keys are sent. Thus the signer receives all the keys, but does not know which keys come from which participants. Nobody except the signer should receive these keys; therefore, while the keys are sent, the signer publishes randomly chosen values instead of her correct local sums. Note that there is no vicious circle, although fault tolerance is needed for the DC-protocol during this initialization: if the signer detects a disturbance, she can always cause an investigation and repeat the whole procedure until no disturbance has occurred, since the keys are not sensitive in themselves [16]. To pseudosign a message v, the signer uses all received keys to authenticate v. The set of all these authentications forms the pseudosignature ψ. If an honest participant Pi receives a pseudosignature ψ on a message v, she determines the number ai of her authentications contained in ψ. According to ai she assigns a level of acceptance: if ai ≥ m – (h–1)∆, then Pi h-accepts (ψ, v), h = 1, …, λ. Pi rejects ψ if ai = 0. If ∆ is appropriately chosen, it is guaranteed that if an honest participant k-accepts ψ for any k > 1, then each other honest participant will (k–1)-accept ψ, almost certainly [16].
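As a small illustration of the acceptance rule just described, the following sketch computes, for given ai, m, ∆ and λ, the smallest level h at which a participant h-accepts a pseudosignature; the parameter values are illustrative assumptions.

```python
# Sketch of the acceptance rule: P_i h-accepts (psi, v) whenever a_i >= m - (h-1)*delta;
# the function returns the smallest (strongest) such h, or None for a_i = 0 (reject).
def acceptance_level(a_i, m, delta, lam):
    if a_i == 0:
        return None
    for h in range(1, lam + 1):
        if a_i >= m - (h - 1) * delta:
            return h
    return None

lam, delta = 4, 5
m = (lam - 1) * delta + 1                     # m = (lambda - 1) * delta + 1 = 16
print(acceptance_level(16, m, delta, lam))    # 1 – full acceptance
print(acceptance_level(7, m, delta, lam))     # 3 – weaker level of acceptance
```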

Byzantine agreement

Like pseudosignatures, Byzantine agreement has its own chapter later in this work, so we again briefly mention only the basis needed for this protocol. Pseudosignatures are needed as a subprotocol of an unconditional Byzantine agreement protocol: a Byzantine protocol based on pseudosignatures with λ = n provides the unconditional Byzantine agreement protocol [17]. It needs an initialization phase during which a certain number of pseudosignatures are initialized. The discussed broadcast protocol needs a Byzantine agreement protocol to ensure two requirements. The first one is implementing a reliable broadcast. Even if there is a physically reliable broadcast network, the Byzantine agreement protocol is needed for the second issue, finding agreement on the acceptability of a pseudosignature. For this, it is sufficient to know that, if a participant X sends a pseudosignature ψ to each other participant, the Byzantine agreement protocol is used to guarantee that
• if X is honest, all honest participants will accept ψ, and
• even if X is dishonest, all honest participants will agree on whether to accept ψ or not.
The unconditional unforgeability of pseudosignatures guarantees that a forged ψ is never accepted by honest participants [16].

3.3.2.2 Untraceable Broadcast

The main problem with traps is that before a DC-round, it must be unpredictable for other participants (in particular, attackers) whether the round will be a trap, whereas after a DC-round it must be unambiguously decidable whether this round was a trap, so that traps, and only traps, can be investigated. The first property is necessary for fault tolerance, the second for untraceability; thus the broadcast protocol must satisfy them both unconditionally [16]. The Waidner-Pfitzmann untraceable broadcast protocol, inspired by and derived from Chaum's DC-net protocol and his trap system [13], is performed in several phases, as described in [16]:

1. Reservation phase – during this phase, each participant tries to send a reservation message using the reservation technique mentioned previously. From NotColl, one reservation message, i.e., one participant, X, is randomly chosen by one participant, called the current manager. If NotColl = ∅, the first phase must be repeated (this will happen at most n times).

2. Trap-proof setup phase – in the second phase, a pseudosignature with X as the signer is initialized. This is possible because the undisturbed initialization protocol hides the signer's identity unconditionally: X publishes randomly chosen local sums, which are indistinguishable from the correct local sums of all other participants. If an initialization protocol is disturbed, an investigation is needed and the first and the second phase must be repeated. This can happen at most n²/2 times, after which all attackers are disqualified.

3. Non-anonymous palaver phase – here, each participant who detected a disturbance in the first or the second phase can non-anonymously initiate the investigation of all previous DC-rounds by publishing the message “investigation”, and the attacker will be punished. If no disturbance is detected, the complainer is disqualified. This prevents attackers from always declaring the first two phases as disturbed. As said, after an investigation, the protocol restarts with the first phase.

4. Sending phase – this phase starts when no participant complains; X sends either a real message or a trap. A trap must be the message 0. The probability of sending a trap is fixed and the same for all participants. If a trap is disturbed, X recognizes this, and:

5. Trap investigation phase – in the last phase, she non-anonymously sends her pseudosignature ψ on the message “disturbance” to all other participants, who agree on whether to accept ψ using the unconditional Byzantine agreement protocol. If ψ is accepted, the trap is investigated. Each participant who has sent anything ≠ 0 is disqualified, and if no disturbance is found, the complaining X is disqualified [16].

As Waidner and Pfitzmann state in [16], the presented protocol guarantees unconditional untraceability with an exponentially small error probability. All the rules under which an honest participant initiates an investigation ensure that the investigation will be successful, and the fourth phase is investigated only if X decides to publish her pseudosignature. Then, the sending phase is as untraceable as the original DC-protocol.

3.4 Bit Commitment

A commitment scheme is used in cryptography by parties to commit to a value which remains hidden and can be unveiled later. The purpose is to bind a party (the sender) to a value, so that the party cannot change it later, for example to gain an advantage. The scheme has two parts:
• committing – a value is chosen by the sender; the value cannot be changed later.
• revealing – the value is revealed to the receiver, who until the reveal has no clue what it might be.
Let the value be a bit b = 0 or 1, and let Alice encrypt b in some way. The encrypted form of b is called a blob and the encryption method is called a bit commitment scheme. A bit commitment scheme is a function f: {0, 1} × X → Y, where X and Y are finite sets. An encryption of b is any value f(b, x), x ∈ X. The two properties that a bit commitment scheme should satisfy are as follows:
1. For a bit b = 0 or 1, Bob cannot determine the value of b from the blob f(b, x) – we call this the hiding property.
2. Alice can later “open” the blob, by revealing the value of x used to encrypt b, to convince Bob that b was the value encrypted – we call this the binding property [4].

3.4.1 Security of Bit Commitment Schemes

The security of bit commitment schemes can be defined with respect to the above-mentioned properties – the hiding and binding ones. Let Commit(x, open) be an algorithm used to commit to a value x, with open being the randomness used for computing a commitment c, and let CheckReveal(c, x, open) be an algorithm used for verifying the computed commitment [23].

Perfect Binding

For any x ≠ x' the set of commitments to x is disjoint from the set of commitments to x'. This means that there do not exist open and open' such that Commit(x, open) = Commit(x', open').

Computational Binding

Let open be chosen from a set of size 2^k, i.e., it can be represented as a k-bit string, and let Commitk be the corresponding commitment scheme. As the size of k determines the security of the commitment scheme, it is called the security parameter. Then for all non-uniform probabilistic polynomial time algorithms that output x, x' and open, open' of increasing length k, the probability that x ≠ x' and Commitk(x, open) = Commitk(x', open') is a negligible function in k. This is a form of asymptotic analysis. It is also possible to state the same requirement using concrete security: a commitment scheme Commit is (t, ε)-secure if for all algorithms that run in time t and output x, x', open, open', the probability that x ≠ x' and Commit(x, open) = Commit(x', open') is at most ε.

Perfect, Statistical and Computational Hiding

Let Uk be the uniform distribution over the 2^k opening values for security parameter k. A commitment scheme is perfectly, statistically or computationally hiding if for all x ≠ x' the probability ensembles {Commitk(x, Uk)}k∈ℕ and {Commitk(x', Uk)}k∈ℕ are equal, statistically close or computationally indistinguishable, respectively [23].

Perfect Hiding and Binding

A bit commitment protocol cannot have both of these properties in their perfect form at the same time. Let us assume perfect binding. Then for an honest run of the commit protocol, it is only possible to convince the receiver to accept a reveal of either 0 or 1 (not both). An unbounded receiver could iterate through all possibilities for the randomness associated with either input and determine which input generated the commitment [24].

3.4.2 Examples of Bit Commitment Protocols

In this subsection we present two protocols, one with the perfect hiding property and one with the perfect binding property. The hiding property means that it is infeasible for Bob to compute b given y, and the binding property means that Alice cannot “change her mind” after she has committed to b [4].

3.4.2.1 Perfectly Hiding Bit Commitment Protocol

As an example of such a protocol, we mention one from [4] which uses the Goldwasser-Micali probabilistic cryptosystem for bit commitment. In this system, n = pq, where p and q are primes, and m ∈ Q̃R(n), the set of pseudo-squares modulo n. The integers n and m are public; the factorization n = pq is known only to Alice. In this bit commitment scheme we have X = Y = ℤn* and

f(b, x) = m^b · x² mod n.

Alice encrypts a value b by choosing a random x and computing y = f(b, x); the value y comprises the blob. Later, when Alice wants to open y, she reveals the values b and x. Then Bob can verify that y ≡ m^b · x² (mod n). A blob is an encryption of 0 or of 1, and it reveals no information about the value of b provided that the quadratic residues problem is infeasible.

This scheme is perfectly secure for the receiver, because there are no x1, x2 coprime to n such that m·x1² ≡ x2² (mod n). For the sender, the scheme is only computationally secure; its security is based on the non-existence of an efficient algorithm for the factorization of natural numbers [4, 25].
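The commitment function above is simple enough to sketch directly. In the following toy example m is chosen as a pseudo-square, as in the Goldwasser-Micali cryptosystem; the prime sizes and helper names are illustrative assumptions only.

```python
# Sketch of the blob computation f(b, x) = m^b * x^2 mod n.
import random

p, q = 499, 547                    # toy primes known only to Alice; real ones must be large
n = p * q

def is_qr(a, prime):
    """Euler's criterion: a is a quadratic residue modulo the prime."""
    return pow(a, (prime - 1) // 2, prime) == 1

def pick_pseudosquare():
    """m that is a non-residue modulo both p and q (Jacobi symbol +1 modulo n)."""
    while True:
        m = random.randrange(2, n)
        if not is_qr(m % p, p) and not is_qr(m % q, q):
            return m

m = pick_pseudosquare()            # n and m are public

def commit(b, x):
    return (pow(m, b, n) * pow(x, 2, n)) % n    # the blob y

def open_check(y, b, x):
    return y == commit(b, x)                    # Bob's check: y = m^b * x^2 mod n

b, x = 1, random.randrange(2, n)
y = commit(b, x)
assert open_check(y, b, x)
```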

3.4.2.2 Perfectly Binding Bit Commitment Protocol

The following protocol is based on the discrete logarithm problem [4]: if p ≡ 3 (mod 4) is a prime such that the discrete logarithm problem in ℤp* is infeasible, then the second least significant bit of a discrete logarithm is secure. In fact, it has been proved for primes p ≡ 3 (mod 4) that any Monte Carlo algorithm for the second bit problem having error probability 1/2 – ε with ε > 0 can be used to solve the discrete logarithm problem in ℤp*. This much stronger result is the basis for the bit commitment scheme.

This bit commitment scheme has X = {1, …, p – 1} and Y = ℤp*. The second least significant bit of an integer x, denoted by SLB(x), is defined as follows:

SLB(x) = 0 if x ≡ 0, 1 (mod 4)
SLB(x) = 1 if x ≡ 2, 3 (mod 4).

The bit commitment scheme f is defined by

f(b, x) = α^x mod p if SLB(x) = b
f(b, x) = α^(p–x) mod p if SLB(x) ≠ b,

where α is a primitive element of ℤp*. A bit b is encrypted by choosing a random element having second least significant bit b, and raising α to that power modulo p. The scheme is binding (i.e. perfectly secure for the sender), and by the remarks made above, it is hiding provided that the discrete logarithm problem in ℤp* is infeasible (i.e. computationally secure for the receiver) [4, 25].
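A similar sketch for the discrete-logarithm scheme follows; the toy prime and the choice of α are illustrative assumptions.

```python
# Sketch of f(b, x) = alpha^x mod p if SLB(x) = b, and alpha^(p - x) mod p otherwise.
import random

p = 467                     # toy prime with p ≡ 3 (mod 4); a real p must be large
alpha = 2                   # assumed primitive element modulo p

def slb(x):
    """Second least significant bit: 0 for x ≡ 0, 1 (mod 4), else 1."""
    return 0 if x % 4 in (0, 1) else 1

def commit(b, x):
    exponent = x if slb(x) == b else p - x
    return pow(alpha, exponent, p)

def open_check(y, b, x):
    return y == commit(b, x)

# Alice commits to b by choosing a random x with SLB(x) = b.
b = 0
x = random.choice([v for v in range(1, p) if slb(v) == b])
y = commit(b, x)
assert open_check(y, b, x)
```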

3.4.2.3 Unconditionally Secure Bit Commitment Protocol with Trusted Initializer

The impossibility of constructing an unconditionally secure two-party bit commitment scheme has incited many researchers to look for a workaround. Some of them explored models based on noisy channels; Ronald L. Rivest, in his work Unconditionally Secure Commitment and Oblivious Transfer Schemes Using Private Channels and a Trusted Initializer, introduced a third party – a trusted initializer – whose participation allows the adapted bit commitment protocol to provide unconditional security. While a two-party protocol cannot achieve the desired unconditional security, introducing a third party, used during the initialization phase only, can ensure it. Private channels between each pair of communicating parties are also assumed [26]. Let us describe the proposal in more detail.

Rivest's Unconditionally Secure Bit Commitment Protocol

In [26], the following scheme is presented: all computations are performed modulo p for some fixed, suitably large, globally known prime number p. We assume that Alice's secret value x0 satisfies 0 ≤ x0 < p. The communication pattern is very simple: during setup the trusted initializer (TI) sends some different information to Alice and to Bob. During commit Alice sends one number to Bob. During reveal Alice sends three numbers to Bob. Each phase is thus minimal: just one pass.

1. For the setup phase, TI randomly chooses two numbers a ∈R ℤp* and b ∈R ℤp*. These numbers define a line y = ax + b (mod p). TI sends the values a and b privately to Alice. TI also picks another value x1 uniformly at random from ℤp and computes the value y1 = ax1 + b (mod p). TI privately sends Bob the pair (x1, y1); this is a point on the line.

2. For the commit phase, Alice computes the value y0 = ax0 + b (mod p) and privately sends the value y0 to Bob.

3. For the reveal phase, Alice privately sends Bob her secret value x0 and also the pair (a, b). Bob checks that (x0, y0) and (x1, y1) satisfy the above equations. If so, he accepts x0; otherwise he rejects [26].

The proposed commitment scheme is, according to Rivest's results:

• unconditionally hiding – obvious, since all Bob learns during setup and commit is x1, y1 and y0. There is no way to infer x0 from this information. More precisely, every value in ℤp is equally likely to be x0 given what he has seen. Even unlimited computing power does not help Bob [26].

• unconditionally binding – after commit, Alice knows a, b, x0 and y0, but not Bob's values x1 and y1. Suppose Alice then changes her mind and wishes to reveal some value x'0 that is different from x0. For Bob to accept, she needs to find values x'0, a' and b' such that y0 = a'x'0 + b' and y1 = a'x1 + b'. The new line y = a'x + b' must be different from the old line y = ax + b, otherwise nothing has changed and she reveals x0. Either this new line does not intersect the old line at all, in which case Bob rejects because (x1, y1) should be on the new line, or else the new and old lines intersect at a point (x2, y2). Alice only succeeds at cheating if (x2, y2) = (x1, y1); however, the chance that (x1, y1) = (x2, y2) is precisely 1/p, so Alice's chances of cheating are at most 1/p [26].

• TI never learns the value of x0 – obvious, since TI only participates in the setup phase [26].

Rivest's work was further elaborated by C. Blundo, B. Masucci, D. R. Stinson and R. Wei in Constructions and Bounds for Unconditionally Secure Non-Interactive Commitment Schemes, where they presented a formal mathematical model for unconditionally secure non-interactive commitment schemes with a trusted initializer; they also showed that such schemes cannot be perfectly binding: there is necessarily a small probability that Alice can cheat Bob by committing to one value but later revealing a different value [27]. Other successors were Alexandre Pinto, André Souto, Armando Matos and Luís Antunes with a paper called Commitment and Authentication Systems. It presents a relation between unconditionally secure commitment schemes and unconditionally secure authentication schemes and shows that an unconditionally secure commitment scheme with a trusted initializer can be built from a composition of an unconditionally secure authentication code and an unconditionally secure cipher system [28]. A small sketch of Rivest's protocol follows.
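The sketch below simulates all three roles (TI, Alice, Bob) in one process, following the three phases above; the prime and variable names are illustrative assumptions, not taken from [26].

```python
# Sketch of Rivest's trusted-initializer commitment: the TI picks a secret line,
# Alice gets the line, Bob gets one random point on it.
import random

p = 2**61 - 1                                    # fixed, globally known prime

def setup():
    a, b = random.randrange(1, p), random.randrange(1, p)   # the line y = a*x + b mod p
    x1 = random.randrange(p)
    return (a, b), (x1, (a * x1 + b) % p)        # Alice's key, Bob's point

def commit(alice_key, x0):
    a, b = alice_key
    return (a * x0 + b) % p                      # Alice sends y0 to Bob

def reveal(alice_key, x0):
    a, b = alice_key
    return x0, a, b                              # Alice discloses x0 and the line

def check(bob_point, y0, x0, a, b):
    """Bob accepts iff both his point and (x0, y0) lie on the revealed line."""
    x1, y1 = bob_point
    return y0 == (a * x0 + b) % p and y1 == (a * x1 + b) % p

alice_key, bob_point = setup()
x0 = 123456                                      # Alice's secret value, 0 <= x0 < p
y0 = commit(alice_key, x0)
assert check(bob_point, y0, *reveal(alice_key, x0))
```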

3.5 Digital Signatures

Diffie and Hellman, besides introducing the new concept of public key cryptography in [19], also presented the concept of digital signatures, a primitive used for authentication; for authorization – the conveyance, to another entity, of official sanction to do or be something; and for non-repudiation – preventing the denial of previous commitments or actions [1]. Digital signatures allow information to be signed and the signature to be verified. A party is able to verify signatures, but is not able to gain from them any knowledge about how to generate signatures.

Definition 3.11 (Digital signature scheme) Let M be the set of messages which can be signed. S is a set of elements called signatures, possibly binary strings of a fixed length. SA is a transformation from the message set M to the signature set S, and is called a signing transformation for entity A. The transformation SA is kept secret by A, and will be used to create signatures for messages from M. VA is a transformation from the set M × S to the set {true, false}. VA is called a verification transformation for A’s signatures, is publicly known, and is used by other entities to verify signatures created by A. The transformations SA and VA provide a digital signature scheme for A [1].

Signing Procedure

Entity A (the signer) creates a signature for a message m ∈ M by doing the following:
1. Compute s = SA(m).
2. Transmit the pair (m, s). s is called the signature for message m.

Verification Procedure

To verify that a signature s on a message m was created by A, an entity B (the verifier) performs the following steps:
1. Obtain the verification function VA of A.
2. Compute u = VA(m, s).
3. Accept the signature as having been created by A if u = true, and reject the signature if u = false [1].

The properties which the signing and verification transformations must satisfy are:
• s is a valid signature of A on message m if and only if VA(m, s) = true.
• It is computationally infeasible for any entity other than A to find, for any m ∈ M, an s ∈ S such that VA(m, s) = true [1].

As we have already mentioned in Chapter 2.5, public key cryptography and its derivations, such as digital signature systems, cannot provide unconditional security, but very interesting protocols have been suggested which behave in a similar way to digital signatures and do provide unconditional security.

3.5.1 Pseudosignatures

David Chaum and Sandra Roijakkers in their work Unconditionally-Secure Digital Signatures focused on digital signatures (based on public key cryptography) and the possibility of forging them by someone with unlimited (or large enough) computational power. They presented a mechanism that removes this disadvantage, at the price that the signatures are unconditionally secure only within a fixed finite set of participants [20]. In the following text we present their protocol. They start their work by setting out the assumptions and objectives of the proposed protocol:

3.5.1.1 Assumptions and Objectives

Let the “world” consist of a finite set P of n participants (P1, P2, ..., Pn). Let us assume the following means of communication between participants:
• an authenticated broadcast channel – this enables each participant to send the same message to all other participants, identifies the sender, and is completely reliable. In particular, if any participant receives a message via the broadcast channel, all other participants will receive the same message at the same time.
• a private, authenticated channel between each pair of participants – such a channel cannot be read or tampered with by other participants, and each of the communicants is absolutely sure of the identity of the other.
We want a participant S to be able to send a bit b to a participant R such that the following conditions hold:
1. only R receives b.
2. R can prevent S from convincing other participants that he sent b ⊕ 1.
3. R can convince any participant that he got b from S.
4. R cannot convince any participant that he got b ⊕ 1 from S.
The aim is to obtain the four conditions using a protocol that is polynomial time in a security parameter m, but with an error probability that is exponentially small in m, and we do not require any limitations on the computing power available to each participant [20]. As the protocol used for establishing the public key for this scheme, Chaum and Roijakkers utilize Chaum's protocol for anonymous transfer which we have described in Chapter 3.3, because of the observation that, since the signer does not know who sent what, he will be unable (except with very small probability) to give a signature that one participant will accept but that will not similarly be accepted by any other participant [20].

3.5.1.2 Basic Protocol

If we want some participant S ∈ P to send a random bit b, with his signature attached to it, to some participant R ∈ P, all participants have initially to agree on a security parameter m such that (1/2)·m·0.65^m, which upper-bounds the error probability (for more details please see the computations in [20]), is sufficiently small, and they also have to agree on a prime p, 2^m < p < 2^(m+1).

1. Preparation phase – each of the n–1 participants different from S sends untraceably m pairs of random numbers to S. Round μ of the preparation phase (1 ≤ μ ≤ m) looks as follows [20]:

1. The first step: the participants start with a subprotocol, called the reservation phase, to determine the order in which they have to send their messages. We have mentioned this protocol in the section dedicated to anonymous transfer. If it is not successful, this can be due to a collision or to disruption. In the first case the participants just start again with the reservation phase; the second case results in disagreement about a key or in detection of a disrupting participant. After a successful reservation, the only thing each participant other than S knows is when he is allowed to send.
2. The second step: each participant (≠ S) sends S untraceably and in the defined order a pair of numbers (N0, N1) chosen uniformly from ℤp × ℤp, and their product C := N0 ∙ N1 mod p. A disrupter can modify the pair by adding (modulo p) some non-zero pair to it, but since S only accepts pairs (N'0, N'1) for which the received C' equals the product modulo p of N'0 and N'1, the probability that S accepts a modified pair is smaller than 2^–m. If S does not accept a pair, this round is opened, there is a disagreement about a key or detection of a disrupting participant, and the participants start again with the first step [20].

2. Signing phase – S has obtained the (n–1) × m matrix A: Aij = (N'0ij, N'1ij, C'ij) for 1 ≤ i ≤ n–1, 1 ≤ j ≤ m. S only knows that each participant has sent him one entry of each column, while the participants distinct from S also know which entries of each column are theirs. S sends his bit b to R by sending him b and the (n–1) × m matrix Ab: Abij = N'bij. R accepts this bit b if all the Nb he sent to S are correctly contained in this matrix [20].

R can convince another participant P that he got b from S by sending him Ab. P accepts b from R (i.e. he is convinced that R accepted b from S) if he sees at least half of his Nb correctly in this matrix. If the protocol required P to see all of his Nb correctly, it would be rather easy for a disruptive S to convince R, while R could not convince anyone else [20].

3.5.1.3 Extended Protocol

In the discussed work [20], in addition to the basic protocol described above, an extended version is presented which allows the first receiver to convince a second receiver, and in which each participant who receives the signature later on knows a priori whether he can convince the next receiver [20]. This is achieved by adding a key K to the information being sent between participants; K is also chosen from ℤp, like N0 and N1. So participants send a tuple (N0, N1, K, C), where C equals N0 ∙ N1 ∙ K mod p, and S accepts only triples (N'0, N'1, K') for which the received C' equals their product modulo p. K is a key that determines a hash function HK from a universal class of hash functions (we described these functions in Chapter 3.2). Given K, each participant knows HK. A more detailed description can be found in [20].

3.5.2 Blind Signatures

This concept was also introduced by David Chaum, in Blind Signatures for Untraceable Payments [21]. In this work he proposed how to sign a message without knowing its content, a kind of signature which is very useful in electronic voting or payment systems. We will take a closer look at the proposed protocol, its functions and properties, and then we will mention an unconditionally secure solution.

3.5.2.1 Chaum's Blind Signature System

Chaum defined his blind signature scheme as follows [21]:

Functions

A blind signature cryptosystem comprises three functions:
1. A signing function s' known only to the signer, with a corresponding publicly known inverse s, such that s(s'(x)) = x and s gives no clue about s'.
2. A commuting function c and its inverse c', both known only to the provider, such that c'(s'(c(x))) = s'(x) and c(x) and s' give no clue about x.
3. A redundancy checking predicate r, that checks for sufficient redundancy to make the search for a valid signature impractical [21].

Protocol

The utilization of the preceding functions is as follows:
1. The provider chooses x at random such that r(x), forms c(x), and supplies c(x) to the signer.
2. The signer signs c(x) by applying s' and returns the signed matter s'(c(x)) to the provider.
3. The provider strips the signed matter by applying c', yielding c'(s'(c(x))) = s'(x).
4. Anyone can check that the stripped matter s'(x) was formed by the signer, by applying the signer's public key s and checking that r(s(s'(x))) [21].
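A common instantiation of these functions, going back to Chaum's proposal, uses RSA: s' is the private exponent and c multiplies by k^e for a random blinding factor k. The following sketch uses toy parameters; the key size and helper names are illustrative assumptions.

```python
# RSA-based sketch of the blind signature protocol: blind, sign, unblind, verify.
import math, random

p, q = 1009, 1013                       # toy primes; real RSA moduli must be much larger
n = p * q
phi = (p - 1) * (q - 1)
e = 65537                               # public exponent s (coprime to phi for these primes)
d = pow(e, -1, phi)                     # signer's private exponent s'

def blind(x):
    """Provider: c(x) = x * k^e mod n for a random blinding factor k coprime to n."""
    while True:
        k = random.randrange(2, n)
        if math.gcd(k, n) == 1:
            return (x * pow(k, e, n)) % n, k

def sign_blinded(c_x):
    """Signer: applies s' without learning x."""
    return pow(c_x, d, n)

def unblind(signed, k):
    """Provider: c'(s'(c(x))) = s'(x), i.e. divide out the blinding factor."""
    return (signed * pow(k, -1, n)) % n

def verify(x, sig):
    """Anyone: check s(s'(x)) = x with the public key."""
    return pow(sig, e, n) == x

x = 4242                                # message, assumed to satisfy the redundancy check r
c_x, k = blind(x)
sig = unblind(sign_blinded(c_x), k)
assert verify(x, sig)
```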

Properties

The desired security properties for a blind signature scheme are:
1. Digital signature – anyone can check that a stripped signature s'(x) was formed using the signer's private key s'.
2. Blind signature – the signer knows nothing about the correspondence between the elements of the set of stripped signed matter s'(xi) and the elements of the set of unstripped signed matter s'(c(xi)).
3. Conservation of signatures – the provider can create at most one stripped signature for each thing signed by the signer (i.e. even with s'(c(x1)) … s'(c(xn)) and the choice of c, c', and xi, it is impractical to produce s'(y) such that r(y) and y ≠ xi) [21].

3.5.2.2 Unconditionally Secure Blind Signatures

The concept of blind signatures was further studied by various researchers; one example of an unconditionally secure solution was published by Yuki Hara et al. in Unconditionally Secure Blind Signatures [22]. Hara et al. in [22] introduce a model of unconditionally secure blind signatures (USBS), propose security notions, formalize them and finally give a construction method for USBS that is provably secure in the stated unconditional security setting.

Model

The model of an unconditionally secure blind signature scheme assumes the existence of n + 2 participants:
• a signer S,
• n users U1, U2, …, Un,
• a trusted authority TA.
The trusted authority's role is to handle the participants' secret keys; the scheme proceeds as follows:
1. At first, TA produces a secret key for each participant.
2. After successful distribution, TA deletes the distributed keys from its memory.
3. A user generates a blinded message for a message with the key given by TA and sends the blinded message to the signer.
4. The signer generates a signature for the blinded message with his key, and sends it back to the user.
5. The user can now produce a signature for the original message from the received signature created by the signer by using his key, and then the user verifies the validity of the signature by using his key. The signer can also verify the validity of the signature by using his key. In case of a dispute between the signer and a user, an arbiter (an honest user) can resolve the dispute by using his key [22].

Security Notions

In this subsection we describe the three newly defined security notions for blind signature schemes:
• Unconditional unforgeability: it is difficult for malicious colluders not including the signer S to create a signature that has not been legally generated by S but will be accepted as valid by a user Uj or by the signer S [22].
• Unconditional undeniability: it is difficult for malicious colluders including the signer S to generate an invalid signed message (m, σ) that will be accepted as valid by the target user Uj. Note that the purpose of the colluders is to deny having sent (m, σ) [22].
• Unconditional blindness: it is difficult for malicious colluders to succeed in the following attack: they observe (m, σ) and then try to guess the user who requested it [22].

Construction Method

A construction method for an unconditionally secure blind signature scheme is presented in [22]. Polynomials over finite fields are used for the construction of a scheme which is provably secure with respect to the defined security notions. The main idea of the construction is to combine instances of an unconditionally secure blind signature scheme in a one-time model. In the model of a one-time unconditionally secure blind signature scheme, there are four kinds of secret keys: a blinding key, an unblinding key, a signing key and verification keys. The user who has the pair of blinding and unblinding keys can request the signer to generate a signature only once, and other users, who have only the verification key, can check the validity of the message-signature pair [22]. In the presented construction method, the first step provides a set of blinding, unblinding, signing and verification keys for the one-time model of the unconditionally secure blind signature scheme. Pairs of blinding and unblinding keys are given to each user, and signing keys to the signer, respectively. After that the user requests the signer to generate a signature only once with the provided keys. The relation among the keys has to be preserved, because the signer cannot generate a valid signature unless he uses the signing key corresponding to the blinding key of the user; therefore each key should be indexed, and the indexes have, of course, to be kept secret. The details of the construction can be found in [22]. We mention here just a digest:
1. Key Generation Algorithm: for a security parameter, the algorithm outputs a matching key for each participant.
2. Blinding Algorithm: blinds a message m.
3. Signing Algorithm: for a blinded message m, the algorithm outputs a signature for it.
4. Unblinding Algorithm: produces a signature for the original message m.
5. Verification Algorithm by User: for a message-signature pair (m, σ), outputs valid or invalid according to the result of the check by a user.
6. Verification Algorithm by Signer: for a message-signature pair (m, σ), outputs valid or invalid according to the result of the check by the signer [22].
A more detailed description of this scheme can be found in [22]. There are also definitions, theorems and their proofs elaborated and presented, but this exceeds the scope of our work.

3.6 Byzantine Agreement

The topic of this chapter is reaching an agreement among parties when some of the participants may behave maliciously. The problem is often presented as generals trying to reach consensus on an attack while traitors are present: how can the generals cooperate to succeed in the attack, just by exchanging messages, when the traitors may lie? This problem is called Byzantine agreement, after Leslie Lamport, Marshall Pease and Robert Shostak's paper The Byzantine Generals Problem [30], in which they continued the research presented in Reaching Agreement in the Presence of Faults, where they had introduced the problem of reaching common agreement in the presence of failures [29].

3.6.1 Byzantine Agreement Problem

Let n be the number of processors and t be an upper bound on the number of faulty processors. Let P = {p1, …, pn} be the set of processors, communicating by exchanging messages. It is not known which processors will fail and which will stay correct. Failures are categorized as:
• crash failure – the processor no longer operates; other processors will not receive messages from it.
• omission failure – a faulty processor fails to send and receive some messages.
• Byzantine failure – a faulty processor behaves arbitrarily [31].
In the beginning, each processor pi has an externally provided input value vi from some set V, |V| ≥ 2. Every correct processor pi is required to decide on an output value di ∈ V such that the following holds:
• termination – eventually, pi decides; the algorithm cannot run forever,
• validity – if the input of all the processors is v, then the correct processors decide v,
• agreement – all the correct processors decide on the same value [31].

Lamport et al. in [29] showed that Byzantine agreement without authentication can be achieved iff t < n/3; the authenticated version of Byzantine agreement, which employs a kind of authentication scheme, permits t < n faulty processors. The authenticated Byzantine agreement protocol enables processors to authenticate their messages and to verify them after receiving them. Until the introduction of pseudosignatures, this type of Byzantine agreement was based on digital signatures, which rely on computational assumptions like the discrete logarithm problem, the factorization of natural numbers, etc.; for more details please see the previous subchapter 3.5. An unconditionally secure solution of the Byzantine agreement problem was presented by Birgit Pfitzmann and Michael Waidner in Unconditional Byzantine Agreement for any Number of Faulty Processors [17].

3.6.2 Byzantine Agreement Protocol with Pseudosignatures

Here we present the Waidner-Pfitzmann version of the Byzantine agreement protocol, which is unconditionally secure for any number of faulty processors, provided that a reliable broadcast is available in the precomputation phase and that each pair of processors can communicate securely during the protocol [17]. The Byzantine agreement protocol described below utilizes pseudosignatures in the same way as in subchapter 3.3.2.1, i.e. the λ-transferable ones. The basic authenticated protocol for Byzantine agreement was taken from Authenticated Algorithms for Byzantine Agreement by Danny Dolev and H. Raymond Strong [33]; Waidner and Pfitzmann replaced its digital signatures with pseudosignatures.

Assumptions

Let P = {p1, …, pn} be the set of processors, T be the transmitter for a correct agreement, σ ∈ ℕ be a security parameter such that an error rate of 2^–σ is acceptable, and t < n be the upper bound on the number of faulty processors [17]. Each execution of the protocol needs a precomputation phase, in which pseudosignatures are initialized. The transmitter T acts as the pseudosigner for one of them, each other processor for two of them. The pseudosignatures of each processor Pi ≠ T are denoted as its A-signature and B-signature. Pseudosignatures are used in triples (i, α, ψ), where i is the index of the pseudosigner Pi, α ∈ {A, B} is the type of the pseudosignature and ψ is a pseudosignature. A message M consists of a value v and a set of up to t+1 triples (i, α, ψ). If M is the first message Pi relays, Pi pseudosigns v using its A-signature, otherwise using its B-signature. Pi adds the resulting triple (i, α, ψ) to M and forwards M (with Pi's pseudosignature included) to each processor whose pseudosignature is not yet contained in M [17].

There are two sets in the protocol: the set ACCi contains all the values which Pi has accepted from the transmitter. At the end, if ACCi contains exactly one value v, Pi accepts this value as its final value; otherwise, Pi knows that the transmitter was faulty. The set OLDi ensures that a good processor Pi reacts to at most two protocol messages from each processor Px.

The Protocol

• A good transmitter T starts with a value v, which is to be distributed [17]: T relays v, i.e., T pseudosigns v with its A-signature and forwards it to all the other processors. Each Pi ≠ T initializes two empty sets OLDi and ACCi.
• For k = 1, ..., t+1 and each processor Pi ≠ T:
1. Pi forms the set Ni,k of new protocol messages: for each protocol message M that Pi received in round k–1, let Px be the sender of M. If M fulfills the first and second condition of k-consistency (for the definition please see [33] and [17], where the definition is slightly changed to achieve smaller messages) and contains a triple (x, α, ψ) with (x, α) ∉ OLDi, then Pi adds M to Ni,k and (x, α) to OLDi. Otherwise Pi ignores M. (In this case, Pi knows that Px is faulty, and there is no need to consider messages from faulty processors.)
2. Pi forms the set Vi,k of all k-consistent messages from Ni,k. Let Vi,k be lexicographically ordered. Pi adds all the values which are included in a message in Vi,k to ACCi.
3. If k ≤ t and Vi,k ≠ ∅, and if Pi has not relayed any message before, then Pi relays the first two messages in Vi,k which contain different values, or just the first one if all messages contain the same value.
4. If k ≤ t and Vi,k ≠ ∅, and if Pi has relayed exactly one message, which contained a value v', then Pi relays the first message in Vi,k which contains a value v'' ≠ v', if there is one.
• The final value of T is its own local input v. If ACCi contains exactly one value v, processor Pi takes this v as its final value; otherwise, Pi decides on “faulty transmitter”.

The protocol for Byzantine agreement presented here is shown in [17] to be secure thanks to the precomputation phase and to using pseudosignatures for authentication as an unconditionally secure alternative to digital signatures, whose security relies on computational assumptions [17].

3.7 Coin Tossing

Coin tossing or coin flipping is very often used to resolve disputes between parties, for example to decide which team starts a football match with the ball. If Alice and Bob want to resolve an argument by flipping a coin and they are physically in the same place, there is no problem: Alice announces her choice, Bob flips a coin, and after the coin falls, Alice and Bob can see the result of their argument. A problem arises when Alice and Bob want to agree on a result while tossing the coin by telephone. There are some requirements for such a protocol:
• the result should be from the set {0, 1, reject},
• if Alice and Bob don't cheat, the result has to be 0 or 1, and both outputs are possible with probability 1/2,
• if anyone cheats, the protocol has to end with reject [25].

3.7.1 Blum's Coin Flipping Protocol

A coin tossing protocol for tossing a coin by telephone was published by Manuel Blum in Coin Flipping by Telephone: A Protocol for Solving Impossible Problems [37].

Assumptions

Blum's ideal version assumes the existence of a completely secure one-way function, i.e. an efficiently computable function of some set into itself whose inverse cannot be computed efficiently except on a negligible fraction of values. A completely secure one-way function has the additional property that, from knowledge of f(x), one cannot have more than a 50-50 chance of efficiently guessing whether x has some non-trivial property, e.g., is even (lsb = 0) or odd (lsb = 1) [37].

Protocol

To flip coins, Alice and Bob should agree on a completely secure 1-1 one-way function f. Alice then selects an integer x unknown to Bob, computes f(x) and sends f(x) to Bob. Bob tells Alice whether he thinks x is even or odd (this is where Bob flips a coin to Alice). At this point, Alice can tell if he guessed right or wrong. To convince Bob, she sends him x [37]. Such completely secure one-way functions, however, may not only be hard to discover, they may not even exist. In [37] it is shown how a normally secure one-way function can be used to flip coins in a completely secure fashion. Blum's one-way function is 2-1, i.e., it maps exactly two elements from its domain to each element of its range. A simple property, say evenness or oddness, will distinguish the two elements x, y that map to the same element f(x) = f(y). If Alice selects x and sends f(x) to Bob, he absolutely cannot determine whether she selected the x or the y, x ≠ y, such that f(x) = f(y). He guesses whether she picked the even or the odd number. His guess, as told to Alice, constitutes his coin flip to her. Since f is one-way, Alice cannot cheat, i.e., cannot tell him y if in fact she selected x [37]. A coin-flipping protocol with the right properties satisfies:
1. If either participant following the protocol does not catch the other cheating, he or she can be sure that the coins each have an exactly 50-50 independent chance to come up heads.
2. If either participant catches the other cheating, he or she can prove it to a judge – if all messages are sent signed.
3. After Bob flips coins to Alice, she knows which coins came up heads and which tails. He should have absolutely no idea how they came up.
4. After the sequence of coin flips, Alice should be able to prove to Bob which coins came up heads and which tails [37].

A coin-flipping protocol with these properties can be used by Alice to generate and use a random number x without revealing it to her opponent, Bob. Even if it is to her advantage to select a particular non-random x, she will not be able to do so. Bob forces Alice to choose x at random by flipping coins to her. The resulting sequence of bits is completely unknown to Bob and, provided Bob follows the protocol, completely random. Alice uses the sequence as the required random number x. Later, she proves to Bob that he flipped the sequence x to her, thus assuring him it was chosen at random [37].

3.7.2 Perfectly Secure Coin Tossing Using Bit Commitment

It is possible to toss a coin in a perfectly secure way by using a bit commitment scheme. We have described bit commitment in subsection 3.4. Let S be a bit commitment scheme. The protocol is as follows:

1. Alice and Bob execute S's commit phase on a random bit bA.
2. Bob tries to guess the value of the bit bA and sends Alice a random bit bB.
3. Alice and Bob execute S's reveal phase on bA.
4. The result of this coin tossing protocol is bA ⊕ bB.

By the definition of a bit commitment protocol, Bob is not able to gain any information about bA before the second step of the protocol's execution, so he has to guess it. Moreover, Alice is not able to change her choice according to the value of bB. If the execution of the protocol finishes correctly, its output is a random bit, which means that the protocol is perfectly secure [25].
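A minimal sketch of this protocol follows. As a stand-in for the bit commitment scheme S it uses a salted hash commitment, which is only computationally hiding and binding; the four protocol steps themselves are unchanged, and all names are illustrative assumptions.

```python
# Coin tossing from bit commitment: commit(b_A), Bob's guess b_B, reveal, XOR.
import hashlib, secrets

def commit(bit):
    salt = secrets.token_bytes(16)
    blob = hashlib.sha256(salt + bytes([bit])).hexdigest()
    return blob, salt                       # blob goes to Bob, salt stays with Alice

def check(blob, bit, salt):
    return blob == hashlib.sha256(salt + bytes([bit])).hexdigest()

b_A = secrets.randbelow(2)                  # 1. Alice commits to a random bit b_A
blob, salt = commit(b_A)
b_B = secrets.randbelow(2)                  # 2. Bob, seeing only the blob, sends b_B
assert check(blob, b_A, salt)               # 3. Alice opens the commitment, Bob verifies
coin = b_A ^ b_B                            # 4. the resulting coin is b_A XOR b_B
print("coin:", coin)
```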

3.8 Key Exchange

There are basically two kinds of key protocols: key distribution (or transport) protocols and key agreement protocols. After the execution of either type of protocol, the parties involved in the communication share a common secret key. While the first type of protocol only distributes a previously known key, the second one sets the key up during the communication between the parties. There is also a third option – a key predistribution protocol – which utilizes a trusted authority (TA): for every pair of users {U, V}, the TA chooses a random key KU,V = KV,U and transmits it “off-band” to U and V over a secure channel. The transmission of keys does not take place over the network, because the network is not secure. This approach is unconditionally secure, but it requires a secure channel between the TA and every user in the network [4].

3.8.1 Key Predistribution Protocol

This approach has some disadvantages: the TA generates (n choose 2) = n(n–1)/2 keys and gives each key to a unique pair of users in a network of n users. A secure channel between the TA and each user is also required to transmit these keys. If n is large, this solution is quite impractical, because of the amount of data being transmitted and also the amount of data each user has to store securely [4]. Below is a protocol by Blom, which tries to mitigate these disadvantages [4].

3.8.1.1 Blom Key Predistribution Scheme

Assumptions

Let there be a network of n users, with keys chosen from a finite field ℤp, where p ≥ n is prime. Let k be an integer, 1 ≤ k ≤ n – 2. The value k is the largest size of coalition against which the scheme will remain secure. In the Blom scheme, the TA will transmit k + 1 elements of ℤp to each user over a secure channel. Each pair of users, U and V, will be able to compute a key KU,V = KV,U. The security condition is as follows: any set of at most k users disjoint from {U, V} must be unable to determine any information about KU,V [4]. Let us first present the special case of Blom's scheme where k = 1. Here, the TA will transmit two elements of ℤp to each user over a secure channel, and any individual user W will be unable to determine any information about KU,V if W ≠ U, V. Blom's scheme is presented below [4].

The Protocol

1. A prime number p is made public, and for each user U, an element rU ∈ ℤp is made public. The elements rU must be distinct.
2. The TA chooses three random elements a, b, c ∈ ℤp and forms the polynomial
f(x, y) = a + b(x + y) + cxy mod p.
3. For each user U the TA computes the polynomial
gU(x) = f(x, rU) mod p
and transmits gU(x) to U over a secure channel.
4. If U and V want to communicate, then they use the common key
KU,V = KV,U = f(rU, rV) = a + b(rU + rV) + c·rU·rV mod p,
where U computes KU,V as f(rU, rV) = gU(rV) and V computes KV,U as f(rU, rV) = gV(rU).
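A minimal sketch of the k = 1 case follows; the toy prime and the public elements rU are illustrative assumptions.

```python
# Blom key predistribution with k = 1: f(x, y) = a + b*(x + y) + c*x*y mod p.
import random

p = 997                                     # public prime, p >= number of users
r = {"U": 12, "V": 34, "W": 56}             # distinct public elements r_U

a, b, c = (random.randrange(p) for _ in range(3))   # TA's secret coefficients

def g(user):
    """TA computes g_U(x) = f(x, r_U) = (a + b*r_U) + (b + c*r_U)*x mod p
    and sends its two coefficients to the user over a secure channel."""
    rU = r[user]
    return ((a + b * rU) % p, (b + c * rU) % p)

def key(own_poly, other_user):
    """A user evaluates its own polynomial at the other user's public element."""
    const, lin = own_poly
    return (const + lin * r[other_user]) % p

gU, gV = g("U"), g("V")
assert key(gU, "V") == key(gV, "U")         # K_{U,V} = K_{V,U}
```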

Theorem 3.7 (The Blom scheme) The Blom Scheme with k = 1 is unconditionally secure against any individual user.

Proof

Let us suppose that user W wants to try to compute the key

KU,V = a + b(rU + rV) + c·rU·rV mod p.

The values rU and rV are public, but a, b and c are unknown. W does know the values

aW = a + b·rW mod p and bW = b + c·rW mod p,

since these are the coefficients of the polynomial gW(x) that was sent to W by the TA. What we will do is show that the information known by W is consistent with any possible value l ∈ ℤp of the key KU,V. Hence, W cannot rule out any values for KU,V. Consider the following matrix equation (in ℤp):

( 1  rU + rV  rU·rV ) ( a )   ( l  )
( 1  rW       0     ) ( b ) = ( aW )
( 0  1        rW    ) ( c )   ( bW )

The first equation represents the hypothesis that KU,V = l; the second and third equations contain the information that W knows about a, b and c from gW(x). The determinant of the coefficient matrix is

rW² + rU·rV – (rU + rV)·rW = (rW – rU)(rW – rV),

where all arithmetic is done in ℤp. Since rW ≠ rU and rW ≠ rV, it follows that the coefficient matrix has non-zero determinant, and hence the matrix equation has a unique solution for a, b, c. In other words, any possible value l of KU,V is consistent with the information known to W. On the other hand, a coalition of two users, say {W, X}, will be able to determine any key KU,V where {W, X} ∩ {U, V} = ∅. W and X know that

aW = a + b·rW
bW = b + c·rW
aX = a + b·rX
bX = b + c·rX.

Thus they have four equations in three unknowns, and they can easily compute a unique solution for a, b and c. Once they know a, b and c, they can form the polynomial f(x, y) and compute any key they wish. It is straightforward to generalize the scheme to remain secure against coalitions of size k. The only thing that changes is step 2. The TA will use a polynomial f(x, y) having the form

f(x, y) = Σi=0..k Σj=0..k ai,j x^i y^j mod p,

where ai,j ∈ ℤp, 0 ≤ i ≤ k, 0 ≤ j ≤ k, and ai,j = aj,i for all i, j. The remainder of the protocol is unchanged [4].

3.8.2 Key Distribution and Agreement Protocols

An interesting protocol to generate and distribute a secret key was presented by Bowen Alpern and Fred B. Schneider in Key Exchange Using “Keyless Cryptography”. It utilizes keyless cryptography, in which information is hidden by keeping the originator of a message secret. It allows parties to agree on the value of a key while preventing passive wiretappers from working out its value [38]. There are also key agreement protocols based, for example, on Shamir's secret sharing scheme (this will be discussed later) which provide the possibility to detect and identify incorrect behaviour of the parties [25].

3.8.2.1 Keyless Key Exchange

For parties A and B to agree on a secret key kAB, A and B each choose their own key. Let the bit string kA1kA2 … kA2n be kA, the key chosen by A, and kB1kB2 … kB2n be kB, the key chosen by B. A then constructs 2n messages, one for each bit in kA, where each message has the form

Concerning kAB: bit i is kA[i]

and B also constructs 2n messages, one for each bit in kB, where each message has the form

Concerning kAB: bit i is kB[i].

Each of these messages is then made available to the other users in a way that leaves its contents readable, but makes it impossible to determine its origin. By reading these messages, user A can determine kB: A knows the messages it constructed, thus the other value for each bit position can be attributed to B. The same holds for B. Consider a passive adversary E who can read the contents of all messages. This allows E to deduce values only for those positions of kA and kB that are the same. On average, E will learn n bits in this manner. However, A and B can determine which values E will deduce, so A can delete the bits known to E from kA to form k'A (and the same holds for B). Then

kAB = k'A = complement(k'B).

This procedure can be repeated to obtain a larger key, if necessary [38]. In [38] the reader can also find outlines for implementing this protocol; in each of them an adversary is present who has the ability to read messages but has no way to determine their originators.
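A minimal sketch of this exchange follows. The anonymous channel is simulated by publishing, per bit position, only the unordered pair of claimed values; the key length and names are illustrative assumptions.

```python
# Keyless key exchange: per-position anonymous claims, then discard positions
# where both parties claimed the same value (those leak to an eavesdropper).
import random

two_n = 16
kA = [random.randrange(2) for _ in range(two_n)]
kB = [random.randrange(2) for _ in range(two_n)]

# The "bulletin board": for each position i only the unordered pair of claims is visible.
board = [sorted((kA[i], kB[i])) for i in range(two_n)]

# A knows which claim was its own, so it can attribute the other claim to B.
recovered_kB = [b if a == kA[i] else a for i, (a, b) in enumerate(board)]
assert recovered_kB == kB

# Positions with equal claims are readable by the eavesdropper E and are dropped.
keep = [i for i in range(two_n) if kA[i] != kB[i]]
kA_prime = [kA[i] for i in keep]
kB_prime = [kB[i] for i in keep]
assert all(x == 1 - y for x, y in zip(kA_prime, kB_prime))   # k'_A = complement(k'_B)
k_AB = kA_prime                                              # the agreed key
```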

3.9 Oblivious Transfer

This primitive is very interesting because it can serve to realize any two-party computation, so its unconditionally secure realization would be of great importance, as stated in Joe Kilian's paper Founding Cryptography on Oblivious Transfer [39]. Unfortunately, this seems to be impossible, since unconditional security is unattainable in two-party schemes; the same situation holds for the bit commitment protocol. There are several mutually equivalent versions of oblivious transfer protocols: Rabin's, 1-out-of-2, 1-out-of-n and k-out-of-n oblivious transfer [25].

Let Alice and Bob not trust each other. Alice has two messages m0 and m1 and Bob has a bit b. Alice wants Bob to receive exactly one of the messages, and Bob wants to receive the message mb in such a way that Alice learns nothing about his bit, while she can be sure he received only one of the messages. This protocol is called the 1-out-of-2 oblivious transfer protocol.

3.9.1 1-out-of-2 Oblivious Transfer Based on Blind Signatures

This protocol is based on Chaum's blind signature system, which we discussed in subchapter 3.5.2.1. Let Alice have two messages m0 and m1, both of the same length. The protocol is executed as follows:
1. Alice chooses a public key (n, e) and a secret key d:
• she randomly picks two large primes p and q of the same length,
• computes n = pq and φ(n) = (p – 1)(q – 1),
• randomly chooses d coprime to φ(n) such that 1 < d < φ(n),
• computes e such that 1 < e < φ(n) and ed ≡ 1 (mod φ(n)),
• the public key is (n, e), the private key is d.
2. Alice negotiates with Bob a suitable hash function h, randomly chooses u0 < n, u1 < n coprime to n and sends them to Bob.
3. Alice computes u0^d mod n and u1^d mod n.
4. Bob randomly chooses b < n coprime to n and sends to Alice vσ = uσ·b^e mod n, where σ ∈ {0, 1} is the index of the message Bob wants to receive.
5. Alice receives vσ, randomly chooses r < n coprime to n and computes yσ = vσ^d·r mod n.
6. Alice computes the keys k0 = h(u0^d·r mod n, 0) and k1 = h(u1^d·r mod n, 0), encrypts the transmitted messages as c0 = k0 ⊕ m0 and c1 = k1 ⊕ m1, and sends them along with yσ to Bob.
7. Bob computes yσ·b^(–1) mod n = uσ^d·r mod n. Now Bob is able to compute the key kσ = h(uσ^d·r mod n, 0) and decrypt mσ = kσ ⊕ cσ.
8. Alice can send d to Bob to assure correctness, but by doing so she enables Bob to decrypt both of her messages.

The presented protocol is unconditionally secure from the receiver's point of view: because b is chosen at random, the sender cannot gain any information from vσ about which message the receiver chose. From the sender's point of view, the protocol is only computationally secure; its security is based on the non-existence of an efficient algorithm for the factorization of natural numbers [25].

3.9.2 Oblivious Transfer with Trusted Initializer

In his work [26], besides the protocol for bit commitment with a trusted initializer (for more details please see subchapter 3.4.2.3), Rivest also presented a protocol for oblivious transfer which utilizes a trusted initializer. The assumptions are the same as in the previous protocol; the protocol proceeds as follows:

1. TI privately gives Alice two random k-bit strings r0, r1. TI flips a bit d and privately gives Bob d and rd. TI is now done and can go home.
2. Bob determines somehow a bit b for which he wants to obtain mb. He privately sends Alice the bit e = b ⊕ d.
3. Alice privately sends Bob the values f0 = m0 ⊕ re and f1 = m1 ⊕ r1–e. Bob now computes mb = fb ⊕ rd.

Bob obtains the value of mb, where b is his choice. Alice learns nothing about b and Bob learns nothing about m1–b. The scheme generalizes easily to 1-out-of-n using n random strings r1, r2, …, rn [26].
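A minimal sketch of the three steps above follows; the pad length, messages and names are illustrative assumptions, and all roles run in one process.

```python
# Oblivious transfer with a trusted initializer: TI hands out pads, Bob's choice
# stays hidden from Alice, and Bob learns only the chosen message.
import secrets

k = 16                                              # pad length in bytes

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Setup: TI gives Alice (r0, r1), gives Bob (d, r_d), and leaves.
r0, r1 = secrets.token_bytes(k), secrets.token_bytes(k)
d = secrets.randbelow(2)
r_d = (r0, r1)[d]

m0, m1 = b"message zero....", b"message one....."  # Alice's k-byte messages
b = 1                                               # Bob's choice bit

e = b ^ d                                           # step 2: Bob sends e = b XOR d
f0 = xor(m0, (r0, r1)[e])                           # step 3: Alice sends f0 and f1
f1 = xor(m1, (r0, r1)[1 - e])
recovered = xor((f0, f1)[b], r_d)                   # Bob computes m_b = f_b XOR r_d
assert recovered == (m0, m1)[b]
```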

3.10 Secret Sharing

Another cryptographic primitive we describe in this subsection is secret sharing. The problem was introduced (among others) by Adi Shamir in How to Share a Secret [34]. The secret sharing problem is defined as dividing information D into n pieces so that D can be easily reconstructed from any k pieces, but even complete knowledge of k – 1 pieces reveals absolutely no information about D [34].

The goal is to divide D into n parts (D1, …, Dn) in the following way:
• knowledge of any k or more pieces Di makes D easily computable;
• knowledge of any k – 1 or fewer pieces Di leaves D completely undetermined.
This is called a (k, n) threshold scheme [34]. The basic Shamir scheme can be described as follows: a dealer and n participants are present. The dealer divides the information into n parts and gives each participant one part so that any k parts can be put together to recover the secret, but any k – 1 parts reveal no information about the secret. A secret sharing scheme is perfect if any group of at most k – 1 participants (insiders) has no advantage in guessing the secret over the outsiders [35]. Shamir's secret sharing scheme is an interpolating scheme based on polynomial interpolation. A polynomial of degree k – 1 over the finite field GF(q),

F(x) = a0 + a1x + ... + ak–1x^(k–1),

is constructed such that the coefficient a0 is the secret and all other coefficients are random elements of the field. Each of the n shares is a point (xi, yi) on the curve defined by the polynomial, where xi is not equal to 0. Given any k shares, the polynomial is uniquely determined and hence the secret a0 can be computed. However, given k – 1 or fewer shares, the secret can be any element of the field. Therefore, Shamir's scheme is a perfect secret sharing scheme [35]. There are also other non-interactive, unconditionally secure secret sharing schemes, for example the one by Torben Pryds Pedersen published in Non-Interactive and Information-Theoretic Secure Verifiable Secret Sharing [36].
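A minimal sketch of the (k, n) threshold scheme follows; the field size, secret and parameters are illustrative assumptions.

```python
# Shamir secret sharing over GF(q): shares are points on a random polynomial
# of degree k-1 whose constant term is the secret; any k shares interpolate it back.
import random

q = 2_147_483_647                                   # prime field GF(q)

def share(secret, k, n):
    coeffs = [secret] + [random.randrange(q) for _ in range(k - 1)]
    def F(x):
        return sum(c * pow(x, i, q) for i, c in enumerate(coeffs)) % q
    return [(x, F(x)) for x in range(1, n + 1)]     # x_i != 0

def reconstruct(points):
    """Lagrange interpolation evaluated at x = 0 recovers a_0."""
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % q
                den = den * (xi - xj) % q
        secret = (secret + yi * num * pow(den, -1, q)) % q
    return secret

shares = share(424242, k=3, n=5)
assert reconstruct(shares[:3]) == 424242            # any 3 of the 5 shares suffice
assert reconstruct(random.sample(shares, 3)) == 424242
```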

4 Conclusion

In this work we have presented various cryptographic primitives and examined their security, focusing especially on unconditional security: whether an unconditionally secure solution exists or not. Our results are summarized in Table 1:

Cryptographic primitive | Unconditionally secure | Comments
Encryption | Yes | One-time pad
Authentication | Yes | Wegman-Carter message authentication
Anonymous transfer | Yes | Dining cryptographers problem
Bit commitment | No | Yes under certain conditions, e.g. with a trusted initializer
Digital signature | No | Yes under certain conditions, e.g. pseudosignatures are unconditionally secure
Byzantine agreement | No | Yes under certain conditions, e.g. Waidner-Pfitzmann's solution with precomputation and pseudosignatures
Coin tossing | No | Yes under certain conditions, e.g. coin tossing using bit commitment
Key exchange | Yes | Blom key predistribution scheme
Oblivious transfer | No | Yes under certain conditions, e.g. with a trusted initializer
Secret sharing | Yes | Shamir's scheme

Table 1: Results on the security of cryptographic primitives

As can be seen, almost every primitive can be made unconditionally secure under certain conditions. However, unconditional security without additional assumptions is only achieved by the encryption, message authentication, anonymous transfer, key exchange and secret sharing primitives. There are also special “subprimitives”, such as digital pseudosignatures, which provide unconditional security with some limitations. Details can be found in the respective chapters of this work. Because of limited space, some topics are only sketched and some are omitted entirely. In my opinion, the ideas of researchers such as Chaum, Waidner and Pfitzmann deserve to be studied in more detail.

5 Bibliography

[1] Alfred J. Menezes, Paul C. van Oorschot and Scott A. Vanstone: Handbook of Applied Cryptography, CRC Press, 1996, book available online: http://www.cacr.math.uwaterloo.ca/hac/ (spring 2010)

[2] From Wikipedia, the free encyclopedia: Claude Shannon, document available online: http://en.wikipedia.org/wiki/Claude_Shannon (spring 2010)

[3] Christian Cachin: Entropy Measures and Unconditional Security in Cryptography, 1997, document available online: http://www.zurich.ibm.com/~cca/papers/diss.ps.gz (spring 2010)

[4] Douglas Stinson: Cryptography: Theory and Practice, CRC Press, CRC Press LLC, 1995, book available online: http://www.kumanov.com/docs/prog/Cryptography_Theory_and_Practice/ewtoc.html (spring 2010)

[5] Claude E. Shannon: A Mathematical Theory of Communication, document available online: http://cm.bell-labs.com/cm/ms/what/shannonday/shannon1948.pdf (spring 2010)

[6] Stefan Wolf: Unconditional Security in Cryptography, document available online: http://www.daimi.au.dk/~ivan/wolf.ps (spring 2010)

[7] J. Lawrence Carter and Mark N. Wegman: Universal Classes of Hash Functions, Journal of Computer and System Sciences Volume 18, Issue 2, April 1979

[8] Mark N. Wegman, J. Lawrence Carter: New Hash Functions and Their Use in Authentication and Set Equality, Journal of Computer and System Sciences Volume 22, Issue 3, June 1981

[9] Hugo Krawczyk: LFSR-based Hashing and Authentication, document available online: http://dsns.csie.nctu.edu.tw/research/crypto/HTML/PDF/C94/129.PDF (spring 2010)

[10] Jan Bouda, Oliver Maurhart, Thomas Themel, Stephan Jank, Philipp Pluch and Rajagopal Nagarajan: Encryption and Authentication in SECOQC, document available online: http://www.fi.muni.cz/~xbouda1/teaching/IV055/reading/message_authentization/SECOQC_encryption_authentication.pdf (February 2010)

[11] Victor Shoup: On Fast and Provably Secure Message Authentication Based on Universal Hashing, document available online: http://www.shoup.net/papers/macs.pdf (spring 2010)

[12] Claudia Díaz, Joris Claessens, Stefaan Seys, and Bart Preneel: Information Theory and Anonymity, document available online: http://homes.esat.kuleuven.be/~cdiaz/papers/itanon.pdf (spring 2010)

[13] David Chaum: The Dining Cryptographers Problem: Unconditional Sender and Recipient Untraceability, document available online: http://www.ece.cmu.edu/~adrian/731-sp04/readings/dcnets.html (spring 2010)

[14] From Wikipedia, the free encyclopedia: Dining cryptographers problem, document available online: http://upload.wikimedia.org/wikipedia/commons/0/02/Dinning_cryptographers.png (spring 2010)

[15] Philippe Golle, Ari Juels: Dining cryptographers revisited, document available online: http://crypto.stanford.edu/~pgolle/papers/nim.pdf (spring 2010)

[16] Birgit Pfitzmann, Michael Waidner: Unconditionally Untraceable and Fault-tolerant Broadcast and Secret Ballot Election, document available online: www.semper.org/sirene/publ/PfWa5_92DC1_1IB.ps.gz (spring 2010)

[17] Birgit Pfitzmann, Michael Waidner: Unconditional Byzantine Agreement for any Number of Faulty Processors, document available online: www.semper.org/sirene/publ/PfWa_92BA1-1.ps.gz (spring 2010)

[18] Jurjen Bos, Bert den Boer: Detection of Disrupters in the DC Protocol, document available online: http://www.springerlink.com/content/pktk2f70qaftj2q7/fulltext.pdf (spring 2010)

[19] Whitfield Diffie, Martin E. Hellman: New Directions in Cryptography, document available online: http://www.cs.rutgers.edu/~tdnguyen/classes/cs671/presentations/Arvind-NEWDIRS.pdf (spring 2010)

[20] David Chaum, Sandra Roijakkers: Unconditionally-Secure Digital Signatures, document available online: http://www.springerlink.com/content/y90r6dgad60fn4yl/fulltext.pdf (spring 2010)

[21] David Chaum: Blind signatures for untraceable payments, document available online: http://seas.gwu.edu/~poorvi/Classes/CS381_2005/ChaumBlindSignatures.pdf (spring 2010)

[22] Yuki Hara, Takenobu Seito, Junji Shikata, Tsutomu Matsumoto: Unconditionally Secure Blind Signatures, document available online: http://www.springerlink.com/content/d8l721843p3w3401/fulltext.pdf (spring 2010)

[23] From Wikipedia, the free encyclopedia: Commitment scheme, document available online: http://en.wikipedia.org/wiki/Commitment_scheme (spring 2010)

[24] Sara Krehbiel: Introduction to Cryptography (Lecture 20: Bit Commitment), document available online: http://userweb.cs.utexas.edu/~sarak/cs388h/Lecture20Nov9.pdf (spring 2010)

[25] Ivan Fialík: Aplikace kryptografických primitiv, document available online: http://is.muni.cz/th/60488/fi_m/dp.pdf (spring 2010)

[26] Ronald L. Rivest: Unconditionally Secure Commitment and Oblivious Transfer Schemes Using Private Channels and a Trusted Initializer, document available online: http://people.csail.mit.edu/rivest/Rivest-commitment.pdf (spring 2010)

[27] C. Blundo, B. Masucci, D. R. Stinson, R. Wei: Constructions and Bounds for Unconditionally Secure Non-Interactive Commitment Schemes, document available online: http://www.springerlink.com/content/mvv8qfmw1187puvb/fulltext.pdf (spring 2010)

[28] Alexandre Pinto, André Souto, Armando Matos, Luís Antunes: Commitment and Authentication Systems, document available online: http://www.springerlink.com/content/d52w577r2m84316g/fulltext.pdf (spring 2010)

[29] Leslie Lamport, Marshall Pease and Robert Shostak: Reaching Agreement in the Presence of Faults, document available online: http://research.microsoft.com/en-us/um/people/lamport/pubs/reaching.pdf (spring 2010)

[30] Leslie Lamport, Marshall Pease and Robert Shostak: The Byzantine Generals Problem, document available online: http://research.microsoft.com/en-us/um/people/lamport/pubs/byz.pdf (spring 2010)

[31] Michael Okun: Byzantine Agreement, document available online: http://www.weizmann.ac.il/neurobiology/labs/lampl/mush/entry401.ps (spring 2010)

[32] Bernd Altmann, Matthias Fitzi and Ueli Maurer: Byzantine Agreement Secure against General Adversaries in the Dual Failure Model, document available online: http://www.springerlink.com/content/8ypyh47fdd9cf26d/fulltext.pdf (spring 2010)

[33] Danny Dolev, H. Raymond Strong: Authenticated Algorithms for Byzantine Agreement, document available online: https://www.cs.huji.ac.il/~dolev/pubs/authenticated.pdf (spring 2010)

[34] Adi Shamir: How to Share a Secret, document available online: http://www.cs.tau.ac.il/~bchor/Shamir.html (spring 2010)

[35] Kryptographie FAQ 3.0, document available online: http://www.iks-jena.de/mitarb/lutz/security/cryptfaq/ (spring 2010)

[36] Torben Pryds Pedersen: Non-Interactive and Information-Theoretic Secure Verifiable Secret Sharing, document available online: http://www.cs.huji.ac.il/~ns/Papers/pederson91.pdf (spring 2010)

[37] Manuel Blum: Coin flipping by telephone. A protocol for solving impossible problems, document available online: http://www.cs.cmu.edu/~mblum/research/pdf/coin/ (spring 2010)

[38] Bowen Alpern, Fred B. Schneider: Key exchange using “Keyless cryptography”, document available online: http://ecommons.library.cornell.edu/bitstream/1813/6353/1/82-513.pdf (spring 2010)

[39] Joe Kilian: Founding Cryptography on Oblivious Transfer, document available online: http://delivery.acm.org/10.1145/70000/62215/p20-kilian.pdf?key1=62215&key2=5130564721&coll=GUIDE&dl=GUIDE&CFID=91369696&CFTOKEN=34547416 (spring 2010)

[40] From Wikipedia, the free encyclopedia: Probability theory, document available online: http://en.wikipedia.org/wiki/Probability_theory (spring 2010)

[41] From Wikipedia, the free encyclopedia: Quantum cryptography, document available online: http://en.wikipedia.org/wiki/Quantum_cryptography (spring 2010)
