Building Secure Block Ciphers on Generic Attacks Assumptions

Jacques Patarin¹ and Yannick Seurin¹,²

¹ University of Versailles, France
² Orange Labs, Issy-les-Moulineaux, France
[email protected], [email protected]

Abstract. Up to now, the design of block ciphers has been mainly driven by heuristic arguments, and little theory is known to constitute a good guideline for the development of their architecture. Trying to remedy this situation, we introduce a new type of design for symmetric cryptographic primitives with high self-similarity. Our design strategy enables us to give a reductionist security proof for the primitive based on plausible assumptions regarding the complexity of the best distinguishing attacks on random Feistel schemes or other ideal constructions. Under these assumptions, the cryptographic primitives we obtain are perfectly secure against any adversary with computational resources below a given bound. By contrast, other provably secure symmetric primitives, such as C [3] and KFC [4], designed using information-theoretic results, are only proved to resist a limited (though significant) range of attacks. Our construction strategy leads to a large expanded key size, though one still usable in practice (around 1 MB).

Keywords: block ciphers, Feistel schemes, generic attacks, provable security.

1 Introduction

Provable Security. Building provably secure but still efficient block ciphers is certainly the most desired but also the most challenging goal of symmetric cryptography. In the area of asymmetric cryptography, "provable security" means that one is mathematically able to reduce the security of a primitive to a well studied and presumably difficult problem such as integer factorisation or discrete logarithm (see [17] for an overview, but also a critical look at "provable security" in public key cryptography). The situation in symmetric cryptography is quite different: the security of the most widely deployed primitives often relies on heuristic arguments of one of the three following types:

– lack of known attacks whose complexity is less than that of "brute-force" attacks, or less than the desired security level (typically 2^80 operations nowadays);
– provable security against some classes of attacks, typically differential and linear cryptanalysis when dealing with block ciphers (AES, for example, possesses such security arguments);
– provable security when some components of the primitive are replaced by "ideal" ones. Arguments of this kind apply, for example, to all Feistel ciphers such as DES, for which the celebrated result of Luby and Rackoff [19] shows that when the internal functions are pseudorandom, the cipher is secure in the sense that it is a pseudorandom permutation. This, however, does not yield any security proof for the real primitive; it only ensures that the general structure of the algorithm does not present intrinsic weaknesses (a minimal code sketch of this structure follows below).

Provable security in symmetric cryptography, in the reductionist sense discussed above for asymmetric cryptography, is rather rare. The most notable examples include some number-theoretic hash functions like VSH [10] and the stream cipher QUAD [6], whose security relies on the difficulty of solving systems of multivariate quadratic equations.
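As a concrete illustration of the Feistel structure underlying the Luby–Rackoff result, here is a minimal sketch. The toy parameters and the ad hoc stand-in round functions are assumptions made for exposition, not taken from the paper. It shows the property the construction relies on: any choice of round functions, even non-invertible ones, yields an efficiently invertible permutation on 2n-bit blocks.

```python
# A minimal sketch of the balanced Feistel structure behind the Luby-Rackoff
# result. Toy parameters and the ad hoc round functions below are illustrative
# assumptions, not taken from the paper.
import secrets

n = 16                     # half-block size in bits (illustrative)
MASK = (1 << n) - 1

def feistel_encrypt(rounds, L, R):
    """r-round balanced Feistel network; `rounds` is a list of round
    functions f_i mapping n-bit values to n-bit values."""
    for f in rounds:
        L, R = R, L ^ f(R)
    return L, R

def feistel_decrypt(rounds, L, R):
    """Inverse: replay the round functions in reverse order."""
    for f in reversed(rounds):
        L, R = R ^ f(L), L
    return L, R

# Four "random-looking" stand-ins for the uniformly random and independent
# round functions of a random Feistel scheme (demonstration only).
keys = [secrets.randbits(n) for _ in range(4)]
rounds = [lambda x, k=k: (x * 0x9E37 ^ k) & MASK for k in keys]

L, R = 0x1234, 0x5678
S, T = feistel_encrypt(rounds, L, R)
assert feistel_decrypt(rounds, S, T) == (L, R)  # a permutation for ANY round functions
```

Luby and Rackoff's theorem states that three such rounds with pseudorandom round functions already give a pseudorandom permutation (four rounds give a super-pseudorandom one); the question addressed by the present paper is what happens, computationally, far beyond that.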
However, there is, to the best of our knowledge, no block cipher with a security reduction to some hard problem proposed so far. More worryingly, no difficult problem has even been identified as suitable for such a design goal. We will see that the problem of distinguishing a Feistel scheme from a random permutation could be a potential candidate.

The Proposal. We propose to build a block cipher whose security can be reduced to some simple and well studied problem. The hard problem we propose is not number-theoretic, as it is for most schemes of asymmetric cryptography. We will use the problem of distinguishing a random Feistel scheme from a random permutation. The rationale for such a choice is that Feistel schemes have been extensively studied in the cryptographic literature since the introduction of DES. Though most of this literature is primarily concerned with the information-theoretic properties of these schemes, some authors have studied the so-called "generic attacks" on them. The term generic attack, introduced by Kilian and Rogaway in [16], denotes any attack performed on Feistel schemes instantiated with uniformly random and independent functions in each round (which we will call a "random Feistel scheme" in the following), and hence not making use of the underlying structure of the function generator of a real cipher such as DES. Though we will primarily use Feistel schemes, any well studied structure with similar properties could be used.

We propose to go beyond the intrinsic limitations of information-theoretic designs. For Feistel schemes, information theory is "stuck" at five rounds, in the sense that increasing the number of rounds beyond five does not increase the number of queries needed by a computationally unbounded adversary to distinguish the Feistel scheme from a random permutation. Indeed, whatever the number r ≥ 5 of rounds used in a random Feistel scheme from 2n bits to 2n bits, there is always an oracle adversary making Θ(2^n) queries that distinguishes it from a random permutation with high probability. However, the computational complexity of this distinguisher can be extremely high. Taking the problem the opposite way, we will make the hypothesis (and give arguments supporting it) that the best generic attacks described against Feistel schemes cannot be improved, and design a permutation generator such that any distinguishing attack against it would imply an improvement of the generic attacks against random Feistel schemes.

To achieve this goal, we will start from a Feistel scheme with r1 rounds using random and independent functions at each round, and evaluate its security according to the best generic attacks. Then, rather than using these independent random functions directly as the key, we will instantiate each of them with independent Feistel schemes with r2 rounds, and again estimate the security of the overall construction with respect to the best generic attacks. We will keep on applying this recursive structure until the total size of the key (constituted of the random functions used at the innermost level of the construction) becomes practical. We name this design strategy the "Russian Dolls" construction (a code sketch is given below). As we will see, the complexity of the best known distinguisher increases exponentially with the number of rounds of the Feistel scheme, so that using a reasonable number of rounds will be sufficient for a good level of security.
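The following is a loose sketch of the Russian Dolls recursion. The parameters (64-bit blocks, 6 rounds at every level, 8-bit innermost tables) and the way an inner Feistel permutation stands in for a round function are hypothetical choices for exposition, not the paper's concrete instantiation.

```python
# A loose sketch of the "Russian Dolls" recursion. The parameters are
# hypothetical; the paper tunes the number of rounds r1, r2, ... at each
# level according to the best known generic attacks.
import random

LEAF_BITS = 8      # innermost half-size: random tables are stored explicitly
ROUNDS = 6         # rounds per level (hypothetical)

def make_round_function(bits):
    """A bits-to-bits round function: a uniformly random table at the
    innermost level, an independent smaller Feistel scheme above it."""
    if bits <= LEAF_BITS:
        table = [random.randrange(1 << bits) for _ in range(1 << bits)]
        return lambda x: table[x]
    return make_feistel(bits)      # recurse into a Feistel on `bits`-bit blocks

def make_feistel(block_bits):
    """A Feistel permutation on block_bits-bit inputs whose round functions
    act on block_bits//2 bits."""
    half = block_bits // 2
    mask = (1 << half) - 1
    fs = [make_round_function(half) for _ in range(ROUNDS)]
    def permute(x):
        L, R = x >> half, x & mask
        for f in fs:
            L, R = R, L ^ f(R)
        return (L << half) | R
    return permute

cipher = make_feistel(64)          # 64-bit block; the key is the innermost tables
c = cipher(0x0123456789ABCDEF)
```

With these toy parameters the expanded key is only a few tens of kilobytes of innermost tables; the paper's actual parameters lead to an expanded key of around 1 MB.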
Note that in the information-theoretic setting, the innermost Feistel schemes would be potentially weak, as they have very small block size. However, any attack on the resulting block cipher would imply a better generic attack on random Feistel schemes at some level of the construction.

Related Work. There have been a number of "provably secure" block cipher proposals. We review the most prominent of them.

BEAR and LION were proposed by Anderson and Biham [2]. They are constructed from an ideal stream cipher and an ideal hash function, and the authors proved that attacking the block cipher would imply an attack on one of the underlying components. Later, Pat Morin [22] identified some weaknesses in BEAR and LION and proposed AARDVARK, which is based on the same design strategy.

Zheng, Matsumoto and Imai [36] presented block ciphers built on so-called Generalized Type-2 transformations (which are kinds of generalized Feistel constructions). They analysed their constructions in the information-theoretic setting and gave evidence supporting the security of their primitives, but no formal security proof.

Baignères and Finiasz built on Vaudenay's decorrelation theory [35] to propose two block ciphers, C [3] and KFC [4], provably secure against a wide range of attacks. This is the logical continuation of the work initiated with the NUT family [35] (COCONUT, PEANUT) and the AES proposal DFC [13]. Again, their security proof relies on information-theoretic arguments. In particular, KFC is based on a 3-round Feistel scheme using round functions with a very low decorrelation bias, and is proved resistant against "d-limited" adversaries making fewer than d = 8 or d = 70 queries, depending on the parameters. The security proof also handles so-called "iterated attacks" of order d/2, where the adversary repeats independent non-adaptive d/2-limited attacks. However, we note that since the Feistel scheme of KFC has only 3 rounds, it is vulnerable to a distinguishing attack making only 3 chosen plaintext-ciphertext queries (see Section 4.2; a sketch of this classical attack closes this section).

Granboulan and Pornin [14] proposed an efficient way of generating perfectly random permutations (i.e., statistically very close to the uniform distribution, even for an attacker holding the entire codebook) using a pseudorandom number generator; however, their construction is only practical for small plaintext domains (typically less than 30-bit blocks).

The prior proposal closest to our work was made by Blaze [7] but never published. He proposed the block cipher TURTLE and the stream cipher HAZE. TURTLE is simply the Russian Dolls construction where 4-round Feistel schemes are used at each stage, and HAZE is based on TURTLE in counter mode. Yet the security arguments proposed by Blaze are quite different from ours. He claims that retrieving the secret functions of an r-round Feistel scheme, r ≥ 3, is NP-complete by reducing this problem to Numerical Matching with Target Sums (NMTS) [11]. However, keeping the number of rounds constant as the block size decreases implies a dramatic loss of security.

Organization. Our paper is organized as follows. First we give our notations and some standard security definitions.
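To make the KFC remark above concrete, here is a minimal sketch of the classical 3-query distinguisher against a 3-round Feistel scheme (the well known attack showing that three Luby-Rackoff rounds are not super-pseudorandom): two chosen plaintexts with equal right halves, then one chosen ciphertext. The toy half-block size n = 8 is an assumption for the demo, not a parameter from the paper.

```python
# A minimal sketch of the classical 3-query distinguisher against a 3-round
# Feistel scheme (two chosen plaintexts, one chosen ciphertext). Toy n = 8.
import random

n = 8
N = 1 << n

class Feistel3:
    """3-round Feistel with uniformly random, independent round functions."""
    def __init__(self):
        self.f1, self.f2, self.f3 = ([random.randrange(N) for _ in range(N)]
                                     for _ in range(3))
    def enc(self, L, R):
        A = L ^ self.f1[R]          # round 1
        S = R ^ self.f2[A]          # round 2
        T = A ^ self.f3[S]          # round 3
        return S, T
    def dec(self, S, T):
        A = T ^ self.f3[S]
        R = S ^ self.f2[A]
        return A ^ self.f1[R], R

def looks_like_3round_feistel(enc, dec):
    L1, L2, R = 0, 1, 0                       # any L1 != L2 works
    S1, _ = enc(L1, R)
    S2, T2 = enc(L2, R)
    _, R3 = dec(S2, T2 ^ L1 ^ L2)             # the one chosen-ciphertext query
    # For a 3-round Feistel, R3 = S1 ^ S2 ^ R holds with probability 1;
    # for a random permutation, only with probability about 2^-n.
    return R3 == (S1 ^ S2 ^ R)

# Sanity check against a truly random permutation on 2n-bit blocks.
perm = list(range(N * N)); random.shuffle(perm)
inv = [0] * (N * N)
for x, y in enumerate(perm):
    inv[y] = x
rp_enc = lambda L, R: divmod(perm[L * N + R], N)
rp_dec = lambda S, T: divmod(inv[S * N + T], N)

f = Feistel3()
print(looks_like_3round_feistel(f.enc, f.dec))    # always True
print(looks_like_3round_feistel(rp_enc, rp_dec))  # False, except w.p. ~2^-n
```

The relation tested follows by unrolling the three rounds: decrypting (S2, T2 ⊕ L1 ⊕ L2) resets the middle value to that of the first query, so the recovered right half is forced to S1 ⊕ S2 ⊕ R.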