
Analysis of SHA-3 Candidates CubeHash and Keccak

ALEXANDER ROS and CARL SIGURJONSSON

Bachelor of Science Thesis Stockholm, Sweden 2010


Bachelor's Thesis in Computer Science (15 ECTS credits)
School of Computer Science and Communication, Royal Institute of Technology, 2010
Supervisor at CSC: Cristian Bogdan
Examiner: Mads Dam

URL: www.csc.kth.se/utbildning/kandidatexjobb/datateknik/2010/ros_alexander_OCH_sigurjonsson_carl_K10067.pdf

Royal Institute of Technology School of Computer Science and Communication

KTH CSC 100 44 Stockholm

URL: www.kth.se/csc

Abstract

In 2005 the popular hash algorithm standard SHA-1 was revealed to be vulnerable to attacks much faster than brute force. Although a serious weakness in theory, due to the large amount of computation required to mount such an attack it did not pose any immediate threat in practice. In a few years, however, given technological advancements and improved cryptanalysis of these algorithms, this is likely no longer to be the case; hence the need for a more modern hash standard. In an effort to realize this, NIST announced in 2007 a public competition to develop the best possible replacement, to be named SHA-3, by 2012. In this report we analyze two promising candidates, namely CubeHash and Keccak. A series of tests are conducted measuring their efficiency with regards to performance.

Referat

Analysis of the SHA-3 Candidates CubeHash and Keccak

In 2005 the popular hash standard SHA-1 was shown to contain weaknesses against attacks faster than brute force. In theory this has serious consequences, but due to the massive number of computations involved it has as yet no practical application. Over the years, however, computers will advance to the point where this may pose a problem. For this reason the demand for a more modern hash standard grew, and in 2007 NIST announced that a competition had begun to find the best replacement, expected to be finished by 2012. In this report we analyse two promising candidates, CubeHash and Keccak, by performing a series of tests focused on performance.

Contents

1 Introduction
  1.1 Brief description
  1.2 New hash standard
  1.3 Purpose of the report

2 Background
  2.1 Hash Functions
  2.2 Constructions
    2.2.1 Merkle-Damgård
    2.2.2 HAIFA
    2.2.3 Cryptographic sponges

3 Secure Hash Algorithm
  3.1 Overview of SHA-3 candidates

4 Keccak Algorithm
  4.1 Padding
  4.2 Overview
  4.3 Parameters
  4.4 Keccak-f

5 CubeHash Algorithm
  5.1 Parameters
  5.2 Overview
  5.3 Transformation

6 Tests
  6.1 Results

7 Discussion
  7.1 The tests
  7.2 Conclusion
  7.3 References

Chapter 1

Introduction

Hash functions have come to play a major role in modern cryptography, being widely deployed in various security protocols and applications. The most common uses include digital signatures, message authentication codes (MAC) and password protection schemes. Hash functions are also useful for fast lookups when storing large data sets, but as they are then used in a different context to cryptographic hash functions they will not be covered in this report.

1.1 Brief description

A cryptographic hash function takes a message of arbitrary length as input and produces a value of fixed length as output. This value is usually called a message digest or simply a hash. Much like the signature on a document proves its authenticity, the hash assures the integrity of the original message. Since altering the message even slightly results in a different hash, the alteration becomes clear to anyone comparing the hashes (see the short example after the list below). Generally, for a hash function to be considered secure it should resist the following three types of attacks:

• Collision - Find any two messages that produce the same hash.

• 1st preimage - Find the message given its hash.

• 2nd preimage - Given a message, find another message that has the same hash.
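To make the integrity property concrete, here is a quick illustration using Python's standard hashlib module, with SHA-256 standing in for a generic hash function:

```python
import hashlib

# Changing a single character of the message yields a completely
# different, unpredictable digest:
print(hashlib.sha256(b"message").hexdigest())
print(hashlib.sha256(b"massage").hexdigest())
```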

1.2 New hash standard

Currently the most used hash algorithm is SHA-1 (Secure Hash Algorithm version 1), which was thought secure until a team of Chinese cryptographers in 2005 discovered a weakness that makes it significantly easier to produce collisions [13]. It was recommended that SHA-2 be used instead, and although no known attacks exist for SHA-2, given its structural similarities with SHA-1 it is not an ideal long-term replacement. In 2007 the National Institute of Standards and Technology (NIST) proposed an open competition for the development of SHA-3, the next hash standard.

1.3 Purpose of the report

In this report we study two of the currently fourteen candidates in the SHA-3 competition. Our main focus is evaluating their performance, with the previous standards (SHA-1, SHA-2) as a point of reference. We begin with a brief look at traditional construction models used to design hash functions, as well as some newer ones common among the SHA-3 candidates. In chapters 4 and 5 we analyse the hash algorithms of our chosen candidates and discuss their features. This is followed by the main benchmarking tests. Finally we present the results and our conclusions based on them.

Chapter 2

Background

2.1 Hash Functions

Hash functions process arbitrary messages and produce fixed-length outputs; ideally each possible message should map to a distinct hash, in which case the function is said to be collision free. In reality, however, hash functions are bound to have collisions given the infinite domain and finite range. The probability of a collision occurring by chance depends on the hash size, but in most cases it is negligibly small. Furthermore, hash functions should be strictly one-way, i.e. it should not be possible to find the original message from its hash, nor, given a message, to find another message that produces the same hash. These are the main properties required of a hash function, more formally defined as follows:

Collision resistance Function H is considered collision resistant if it is infeasible to find two distinct messages m and m′ such that H(m) = H(m′).

Preimage resistance Function H is considered preimage resistant if, given a hash h, it is infeasible to find a message m such that H(m) = h.

2nd preimage resistance Function H is considered 2nd preimage resistant if, given a message m, it is infeasible to find an m′ such that m ≠ m′ and H(m) = H(m′).

The generic "brute force" collision attack has a complexity of 2^(n/2), where n is the bit length of the hash. This is also known as a birthday attack due to the mathematical similarities to the birthday problem [5]. Preimage attacks pose a greater practical threat than collision attacks and require O(2^n) computations for an n-bit hash. A common model used in cryptography is the random oracle model, a theoretical black box that generates a completely random output, yet if called again with the same input returns the same result. Although impossible to implement in practice, random oracles are of great help when designing secure hash functions and are used in many proofs.
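To give the 2^(n/2) figure some intuition, the probability of a collision among k random n-bit hashes can be estimated with the standard birthday-bound approximation 1 - exp(-k^2 / 2^(n+1)); the helper below is our own illustration, not part of any cited source:

```python
import math

def birthday_collision_probability(k: int, n: int) -> float:
    """Approximate probability that k random n-bit hashes contain a
    collision, via the birthday bound 1 - exp(-k^2 / 2^(n+1))."""
    return 1.0 - math.exp(-(k * k) / float(2 ** (n + 1)))

# For a 160-bit hash such as SHA-1, about 2^80 evaluations already
# give roughly 39% collision odds:
print(birthday_collision_probability(2 ** 80, 160))
```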


2.2 Constructions

Since hash functions have an infinite domain, processing arbitrarily long data inputs, they are generally constructed to divide the data into fixed-size blocks. The blocks are then either chained through a series of compression function calls or incorporated into an inner state, an internal data structure used to keep track of the intermediate hash data. In this section we describe some of the constructions common among the SHA-3 candidates.

2.2.1 Merkle-Damgård

Merkle-Damgård (MD) is the construction model that the majority of commonly used hash functions, like MD5, SHA-1 and SHA-2, are based on. It is named after Merkle and Damgård, who proved that if the compression function is collision resistant then so is the hash function built on it [20].

Figure 2.1. Merkle-Damgård construction

The figure above illustrates how the message is broken up into n fixed-size blocks of b bits and passed to the underlying compression function f. The compression function expects two input values of size b and outputs a value of size b. An initialization vector (IV), predefined and set depending on the implementation, is first passed to f along with the first message block. The resulting value is then chained to the next iteration of f along with the subsequent message block. This procedure is repeated until all blocks have been processed. Padding is applied in the last block by appending a single 1 followed by enough 0s to fill the block size b. The length of the original message must be included, either in the padding space (i.e. replacing zeros) or in an extra block should it not fit. The finalisation step varies with implementation, but most times it is designed to achieve a good avalanche effect, i.e. a mixing of the hash bits so that even near-identical input messages generate greatly differing hash outputs.
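The following is a minimal sketch of the iteration just described, assuming a toy compression function (faked here with SHA-256; a real design would use a dedicated f) and SHA-2-style length padding:

```python
import hashlib
import struct

BLOCK_SIZE = 64   # b = 512-bit message blocks
STATE_SIZE = 32   # 256-bit chaining value

def toy_compress(chaining: bytes, block: bytes) -> bytes:
    """Stand-in compression function f: not a real design, just something
    with the right shape (chaining value + block in, chaining value out)."""
    return hashlib.sha256(chaining + block).digest()

def md_pad(message: bytes) -> bytes:
    """Append a single 1 bit (0x80), zeros, and finally the 64-bit message
    length; the length field is the Merkle-Damgård strengthening."""
    bit_length = len(message) * 8
    padded = message + b"\x80"
    while (len(padded) + 8) % BLOCK_SIZE:
        padded += b"\x00"
    return padded + struct.pack(">Q", bit_length)

def md_hash(message: bytes, iv: bytes = b"\x00" * STATE_SIZE) -> bytes:
    """Chain the compression function over the padded message blocks."""
    h = iv
    padded = md_pad(message)
    for i in range(0, len(padded), BLOCK_SIZE):
        h = toy_compress(h, padded[i:i + BLOCK_SIZE])
    return h
```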


Merkle-Damgård strengthening

The length padding procedure is known as Merkle-Damgård strengthening and is important: without it the collision resistance proof mentioned above is invalid. To illustrate this point, say we have an MD-based hash function H with a compression function f, and we are given two messages m0 and m1 such that f(iv, m0) = iv, with the effect that H(m1) = H(m0 || m1). In both instances of H, the function f is passed identical input when m1 is processed, so there is no collision under f but clearly one under H; thus the collision resistance of H cannot be reduced to that of f. MD strengthening is also an important security measure against some attacks, for instance the long message attack.

Weaknesses

The long message attack is a second preimage attack based on a long message m = m1 || m2 || ... || ml. We try to construct a message m* such that H(m*) = H(m). A random message m0 is generated until its hash matches any of the l chaining values. If the matching value is the one entering message block x, we create the new message as m* = m0 || mx || mx+1 || ... || ml. Since the message lengths are expected to differ, MD strengthening will ensure different hashes for the last block, thus preventing the collision. However, by exploiting fixed points that collide with any of the above chaining values, i.e. finding a pair (h, mi) such that h = f(h, mi), the message can simply be extended until it is of the same length, thereby defeating MD strengthening. Of course this demands that fixed points are easily found in the compression function. It was later discovered that this attack can be mounted using a different technique that needs no fixed points, does not depend on the security of the compression function, and requires much less than 2^n work [21].

2.2.2 HAIFA

In light of the studies revealing flaws in the MD construction [13], Biham and Dunkelman developed the HAsh Iterative FrAmework (HAIFA) to improve on the Merkle-Damgård construction and expand it with new functionality. Most notably it increases the resistance to second preimage attacks and adds support for variable-sized digests [7]. The difference from MD lies in the parameters passed to the compression function. As explained, the Merkle-Damgård compression function is passed the chaining value and combines it with the next message block, i.e. h_i = f(h_(i-1), M_i). HAIFA adds two new parameters: the number of bits hashed so far and a salt value. The number of bits hashed was introduced as a means of preventing fixed points from being easily exploited. Since the compression function takes the form f(h, M, #bits, salt), even if a fixed point is found such that h = f(h, M_i, #bits, salt) it is highly unlikely that it could be concatenated multiple times, because #bits changes with every call. The salt is used for the purpose of randomized hashing, which provides properties that, among other things, essentially make the function immune to precomputation attacks [7].
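As a sketch, a HAIFA-style compression call could look as follows; only the parameter list f(h, M, #bits, salt) comes from the framework, the internals are our toy stand-in:

```python
import hashlib
import struct

def haifa_compress(chaining: bytes, block: bytes,
                   bits_hashed: int, salt: bytes) -> bytes:
    """Toy compression with HAIFA's parameter list f(h, M, #bits, salt).
    The running bit count defeats fixed-point tricks: even if
    h = f(h, M, ...) holds for some block, #bits differs at every call."""
    counter = struct.pack(">Q", bits_hashed)
    return hashlib.sha256(chaining + block + counter + salt).digest()
```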


2.2.3 Cryptographic sponges

The sponge construction was developed by Bertoni, Daemen, Peeters and Van Assche, Joan Daemen being the most recognized of them, having co-designed the Advanced Encryption Standard (AES). The peculiar name derives from the fact that the sponge absorbs data in what is known as the absorption stage and then extracts arbitrary-length data in the later squeeze stage. What makes this construction unique is the absence of an iterated application of a compression function; instead the sponge construction relies on a permutation or transformation function [10].

Overview

Before entering the absorb phase the elements of the state are set to zero and the message is padded, which means extending the message by a single 1 followed by zeros until its length is a multiple of the block size. After the message has been padded it is split into blocks that are in turn merged into the state, which is then updated using the underlying function f; this is known as the absorb phase. Finally the squeeze phase builds the output string and works in a similar fashion to absorb, but instead of merging blocks into the state, it appends output from the state to an initially empty string, applying f in between, until no further output is requested.

Figure 2.2. Sponge construction (source: www.noekeon.org)
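A minimal sketch of the two phases, with r and c measured in bytes for simplicity and the fixed-width permutation f passed in as a parameter:

```python
def sponge_hash(message: bytes, output_len: int, r: int, c: int, f) -> bytes:
    """Sponge construction sketch: f permutes an (r + c)-byte state."""
    state = bytearray(r + c)

    # Padding: a single 1 bit, then zeros up to a multiple of the block size r.
    padded = bytearray(message) + b"\x80"
    while len(padded) % r:
        padded += b"\x00"

    # Absorb: XOR each r-byte block into the outer part of the state,
    # applying f between blocks.
    for i in range(0, len(padded), r):
        for j in range(r):
            state[j] ^= padded[i + j]
        state = f(state)

    # Squeeze: read r bytes of output at a time, applying f in between.
    out = bytearray()
    while len(out) < output_len:
        out += state[:r]
        state = f(state)
    return bytes(out[:output_len])
```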

Features

In contrast to other constructions the sponge relies on the tunable parameters c and r, which have a direct impact on the algorithm's performance. Choosing c as two times the desired output size makes the sponge construction as strong as a random oracle and provides a preimage resistance of 2^n, in accordance with the flat sponge claim [8], which states the following resistance levels:


• Collision resistance: if the output length is n ≤ c_claim, the collision resistance level is 2^(n/2). If the output length is n ≥ c_claim, the resistance level is 2^(c_claim/2).

• (Second) preimage resistance: if the output length is n ≤ c_claim/2, the (second) preimage resistance level is 2^n. If the output length is n ≥ c_claim/2, the resistance level is 2^(c_claim/2).

The r parameter affects the speed of the algorithm by determining the size of the blocks into which the padded data is split. This makes the sponge applicable in a variety of situations where speed or security might be of importance. Another desirable feature of the sponge construction is the ability to produce arbitrary-length output, meaning one algorithm is sufficient to meet most hashing needs.
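The two resistance claims collapse to simple minimums, which the small helper below (our own formulation, not from [8]) makes explicit:

```python
def flat_sponge_levels(n: int, c_claim: int) -> dict:
    """Security levels, as log2 of the attack cost, implied by the flat
    sponge claim for an n-bit output and claimed capacity c_claim."""
    return {
        "collision": min(n, c_claim) // 2,
        "preimage": min(n, c_claim // 2),
    }

# Choosing c twice the output size yields the full n/2 and n levels:
print(flat_sponge_levels(256, 512))  # {'collision': 128, 'preimage': 256}
```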


Chapter 3

Secure Hash Algorithm

Designed by the National Security Agency (NSA), SHA-1 is the most widely used hash function. Originally developed in 1993 to meet the demand for a secure hash standard, it was heavily influenced by MD5 (Message-Digest algorithm 5), designed by Ron Rivest. MD5 and SHA-1 are both based on the Merkle-Damgård construction, but SHA-1 generates longer digests of 160 bits instead of 128 bits. MD5 has long been inappropriate for use in security applications due to vulnerabilities to collision attacks. Weaknesses in SHA-1 have now been discovered as well, allowing collision attacks to be mounted with a complexity of 2^63 [13], much faster than the previously estimated 2^80. In light of these events SHA-2 was recommended until the SHA-3 winner is decided and a new standard set.

3.1 Overview of SHA-3 candidates

Currently the contest is in round two, where only 14 of the original 51 candidates remain, the majority having been eliminated due to poor preimage or collision resistance [14]. Using SHA-2 as a point of reference, NIST has conducted cryptanalysis of the remaining candidates with security as the primary concern, followed by performance, the ability to implement them on different platforms, algorithm simplicity and uniqueness. The following has been concluded about some of the more interesting round two candidates [18].

Blue Midnight Wish (BMW) A Merkle-Damgård hash construction built upon a novel design with an innovative compression function. Having shown cryptanalytic weaknesses, questions have been raised regarding its strength. However, it has very good performance results and modest memory consumption while appearing to be suitable for a wide range of platforms.

Fugue Built upon the sponge construction and using an AES-inspired compression function. It shows acceptable performance but works with a large inner state, which might make it difficult to implement on more constrained platforms; however, using AES-oriented hardware could change this. Security-wise it has not yet shown any weaknesses.

Hamsi Another Merkle-Damgård construction with an innovative compression function, making it stand out amongst the other round two candidates. Performance-wise it is somewhat slow and requires the use of SIMD [17] instructions to achieve acceptable results, but it has modest memory consumption. It has so far shown no security weaknesses.

JH A sponge-like construction with a very novel design, using an innovative compression function. Unlike most other round two candidates it uses the same algorithm to produce all the different-sized hashes, only with a different initialization vector. It shows good performance results using modest memory, and so far no security weaknesses have been discovered.

Shabal A Merkle-Damgård-inspired construction with a very innovative design showing little similarity to other round two candidates. An interesting algorithm in the sense that it brings a lot of new ideas to the table, but it has raised concern regarding its strength. The authors have already modified the security proof once, and despite no explicit threats to the algorithm it is uncertain whether it generates strong enough hashes to make it to the next round. Performance-wise there are no remarks, except that it uses more memory than most other candidates.

SHAvite-3 A rather conservative design based on the HAIFA construction with the AES round function for compression, leaving little room for new ideas. It shows average performance on most platforms but, like other AES-based algorithms, is expected to receive a significant performance boost when implemented on dedicated hardware. The compression function comes with some unexpected security concerns, although nothing that seems to compromise the security of the algorithm as a whole.

Skein The algorithm uses the Threefish block cipher [16] as compression function in a Merkle-Damgård construction. Since it is designed with newer processors in mind it runs particularly well on 64-bit systems, but it is also believed to show good performance on more constrained systems. Like most other candidates it has modest memory consumption, and despite discussions on the security of the Threefish cipher the algorithm as a whole shows no signs of cryptographic weaknesses.

Chapter 4

Keccak Algorithm

The Keccak algorithm was developed by the people behind the cryptographic sponge construction and is a strict implementation of the model. It uses an inner state throughout the hashing process, and the main methods of the algorithm consist of the sponge-inherited functions pad, absorb and squeeze. Like all other algorithms of the sponge family these functions rely on an underlying permutation [8], f, in Keccak denoted Keccak-f[b].

4.1 Padding

The padding is the first step of the algorithm and works in a similar fashion to the sponge padding. It can be viewed as two separate methods, pad and enc. With || symbolizing string concatenation, the input message M is extended according to [11]:

Figure 4.1. Padding procedure

• pad(M, n) adds a single 1 to the message and then as few 0s as possible to reach a length that is a multiple of n.

• enc(x, n) returns a string of n bits taken from x, i.e. it works like a string truncation.
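A byte-level sketch of the two helpers, assuming n is a multiple of 8; note that the complete Keccak padding also encodes the diversifier and the bitrate via enc, as specified in [11]:

```python
def enc(x: int, n: int) -> bytes:
    """enc(x, n): an n-bit string encoding of x (n a multiple of 8 here)."""
    return x.to_bytes(n // 8, "little")

def pad(message: bytes, n: int) -> bytes:
    """pad(M, n): append a single 1 bit (0x80 at byte level) and then as
    few 0 bits as possible to reach a length that is a multiple of n."""
    padded = message + b"\x80"
    while (len(padded) * 8) % n:
        padded += b"\x00"
    return padded
```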

4.2 Overview

Initially the input message is padded and the state is set to zero. The padded message is then split into blocks that are subsequently merged into the state. Between every block the state is updated through the underlying permutation Keccak-f. Once all padded data has been absorbed into the state, the hash is constructed by appending data from the state to an output string and updating the state using Keccak-f until no further output is requested. It is apparent that Keccak works in the same fashion as described in the sponge section, the only differences being the modified padding and the concrete choice of underlying permutation.

4.3 Parameters

The algorithm takes three parameters: r, c and d, where r is the bitrate, c the capacity and d the diversifier. r and c are performance parameters used to tune the algorithm according to the sponge construction. The sum r + c is known as the permutation width b and specifies the size of the state. It is required to be one of the values 25, 50, 100, 200, 400, 800 or 1600 [15], making the choice of r and c a trade-off between the security and the speed of the algorithm. The diversifier parameter is included in the padding process and works like a hash salt.

4.4 Keccak-f

The Keccak-f function takes the current state as input and performs a number of calls to the underlying permutation f, where each call updates the state and passes it on as the argument to the next call. Each call is known as a round, and the number of rounds n is determined by the permutation width b as n = 12 + 2l, where w = 2^l and b = 25w. The permutation works over a 5 x 5 x w bit state s and modifies the data using the bitwise operations XOR, NOT and AND together with cyclic bit shifts (rotations), in five subsequent steps: θ, ρ, π, χ and ι.
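For example, the default width b = 1600 gives w = 64 and l = 6, hence 24 rounds; the relation is easy to make explicit in code:

```python
def keccak_f_rounds(b: int) -> int:
    """Number of rounds n = 12 + 2l for permutation width b = 25 * 2^l."""
    w = b // 25              # lane width
    l = w.bit_length() - 1   # l such that w = 2^l
    return 12 + 2 * l

print(keccak_f_rounds(1600))  # w = 64, l = 6 -> 24 rounds
```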

Chapter 5

CubeHash Algorithm

CubeHash is written by Daniel J. Bernstein (commonly known as djb), a professor at the University of Illinois in the US who is well known for his work in cryptography and the computer software industry. He is the author of popular programs such as djbdns and qmail, as well as the Salsa20 stream cipher included in the ECRYPT eSTREAM portfolio.

5.1 Parameters

CubeHash takes four parameters as input:

1. the number of rounds r ∈ {1, 2, 3, ...}
2. the number of bytes per message block b ∈ {1, 2, 3, ..., 128}
3. the number of output bits h ∈ {8, 16, 24, ..., 512}
4. the message m as a string of between 0 and 2^128 − 1 bits

The short notation for the different parameter settings is CubeHashr/b-h. The author recommends CubeHash16/32-512 as the default setting, although when submitted to NIST in round 1 the suggested setting was CubeHash8/1, which was much more conservative than necessary, being about 16 times slower than the configuration recommended for round 2.

5.2 Overview

The algorithm consists of the following steps:

• Initialize the state S (1024 bits) based on (h, b, r).

• If required, apply padding to m so that its length is divisible by b.

• For every b-byte block in the padded m, XOR the block into the first b bytes of S and perform r rounds of transformation on the state S.


• Finalize the state S.

• Deliver the first h bits of the state S as the final result.

The state is made up of 32 words of 32 bits each. In the initialization step the first three words are set to h/8, b and r respectively and the rest are set to 0; S is then transformed through 10r rounds of the transformation described further down. The padding is done by first appending a bit of value 1 to m and then adding bits of value 0 until the length of m is divisible by b.
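A sketch of the initialization and padding steps as just described; the 10r initial rounds and all parameter validation are omitted:

```python
def cubehash_init(h: int, b: int, r: int) -> list:
    """Initial 1024-bit state: 32 words of 32 bits, the first three set
    to h/8, b and r, the rest zero. The specification then transforms
    the state through 10*r rounds (the round is sketched in section 5.3)."""
    state = [0] * 32
    state[0], state[1], state[2] = h // 8, b, r
    return state

def cubehash_pad(message: bytes, b: int) -> bytes:
    """Append a 1 bit, then 0 bits until the length is divisible by b bytes."""
    padded = message + b"\x80"
    while len(padded) % b:
        padded += b"\x00"
    return padded
```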

5.3 Transformation

The transformation of S consists of the following steps, where temp denotes a temporary array of 16 words, ⊕ is XOR and <<< is left rotation of a 32-bit word:

• for i ← 0...15, S[i + 16] ← S[i + 16] + S[i]

• for i ← 0...15, temp[i ⊕ 8] ← S[i]

• for i ← 0...15, S[i] ← temp[i] <<< 7

• for i ← 0...15, S[i] ← S[i] ⊕ S[i + 16]

• for i ← 0...15, temp[i ⊕ 2] ← S[i + 16]

• for i ← 0...15, S[i + 16] ← temp[i]

• for i ← 0...15, S[i + 16] ← S[i + 16] + S[i]

• for i ← 0...15, temp[i ⊕ 4] ← S[i]

• for i ← 0...15, S[i] ← temp[i] <<< 11

• for i ← 0...15, S[i] ← S[i] ⊕ S[i + 16]

• for i ← 0...15, temp[i ⊕ 1] ← S[i + 16]

• for i ← 0...15, S[i + 16] ← temp[i]

In the finalization step an integer of value 1 is XORed into the last 32-bit word of the state as a measure to break any symmetry preserved through the transformations [3]; the state is then transformed through 10r rounds once again.
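Putting the steps together, one round of the transformation can be sketched as below, with all additions taken modulo 2^32 and <<< implemented as a 32-bit left rotation:

```python
MASK = 0xFFFFFFFF  # all arithmetic is on 32-bit words

def rotl32(x: int, n: int) -> int:
    """Rotate a 32-bit word left by n bits (the <<< operation)."""
    return ((x << n) | (x >> (32 - n))) & MASK

def cubehash_round(S: list) -> list:
    """One CubeHash round over the 32-word state S, following the step
    list above: add, swap-and-rotate, XOR, swap, then once more with
    different rotation and swap distances."""
    temp = [0] * 16
    for i in range(16):               # S[i+16] <- S[i+16] + S[i]
        S[i + 16] = (S[i + 16] + S[i]) & MASK
    for i in range(16):               # swap with distance 8 ...
        temp[i ^ 8] = S[i]
    for i in range(16):               # ... and rotate left by 7
        S[i] = rotl32(temp[i], 7)
    for i in range(16):               # S[i] <- S[i] xor S[i+16]
        S[i] ^= S[i + 16]
    for i in range(16):               # swap upper half, distance 2
        temp[i ^ 2] = S[i + 16]
    for i in range(16):
        S[i + 16] = temp[i]
    for i in range(16):               # S[i+16] <- S[i+16] + S[i]
        S[i + 16] = (S[i + 16] + S[i]) & MASK
    for i in range(16):               # swap with distance 4 ...
        temp[i ^ 4] = S[i]
    for i in range(16):               # ... and rotate left by 11
        S[i] = rotl32(temp[i], 11)
    for i in range(16):               # S[i] <- S[i] xor S[i+16]
        S[i] ^= S[i + 16]
    for i in range(16):               # swap upper half, distance 1
        temp[i ^ 1] = S[i + 16]
    for i in range(16):
        S[i + 16] = temp[i]
    return S
```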

Chapter 6

Tests

We use the hash function testing framework within SUPERCOP (System for Unified Performance Evaluation Related to Cryptographic Operations and Primitives) to benchmark our algorithms. The software performs a series of tests designed to measure the number of CPU cycles required to hash messages of varying sizes. Our testing platform is a 64-bit system with two quad-core 3.0 GHz Intel Xeon 5400 processors.

6.1 Results

The following graphs were generated using the data collected by SUPERCOP. The x-axis denotes the number of bytes hashed and the y-axis the number of CPU cycles required, i.e. lower values denote higher performance.

Figure 6.1. Results of all the tests

Figure 6.2. A close-up comparison of Keccak and CubeHash


This table shows the number of CPU cycles per byte when hashing messages of different sizes.

Chapter 7

Discussion

CubeHash is a very simple algorithm, which is favorable as it makes for better understanding and analysis of the design. It is perhaps among the smallest candidates in terms of memory usage, requiring only the 128 bytes of memory occupied by the state. No additional memory is needed since the operations are all done in place. This could prove useful for special hardware implementations with limited resources, e.g. RFID chips or smart cards. The code size is small and easy to implement; the operations used are mainly XORs, integer additions and rotations, which are handled well by most processors. The code is easily parallelized, as seen in the transformation where each step runs 16 independent operations.

Keccak has a rather complicated design but comes heavily documented, which eases the understanding and implementation of the algorithm. Because of the tunable parameters it can be used in a variety of ways. For instance, the setting c = 0, r = 1600 makes for a good checksum algorithm where security is of no great importance. The ability to generate output of arbitrary length using the same algorithm is quite desirable when hashing stream outputs, especially on platforms where code size is important.

7.1 The tests

In the cycle tests performed, Keccak experienced an increase in cycles every time the message was extended beyond a certain point. This is most likely due to the message being too long to fit into the current set of blocks, which causes the algorithm to create an additional block, thus imposing another set of rounds to be performed on the new block. One could reason that because of the similarities in construction this should be the case for CubeHash as well, which according to the tests grows in proportion to the length of the message being hashed. However, we believe that the high loop count in the initialization and finalization stages, in combination with the usage of modular arithmetic, makes a fully populated trailing block cost about as much as one followed by an additional block of zeros, which would explain the results. Worth mentioning is that at the end of the cycle test CubeHash actually overtakes Keccak, which shows that for longer messages CubeHash is in fact more efficient.


The cycles-per-byte test shows that all algorithms use a lot of CPU cycles when hashing smaller messages, which was expected given there are only a few bytes to hash. Once the message length reaches about 100-200 bytes the algorithms tend to stabilize and maintain the same cycle count throughout the rest of the test.

7.2 Conclusion

The tests performed by us and other sources show that most candidates have no trouble measuring up to the old hash standards while maintaining a comfortable security margin. The two candidates analysed have shown rather good potential to make it to the next round, and the tests show that CubeHash is more efficient than Keccak when hashing longer messages.

7.3 References

[1] Stinson D., 2003, Cryptography: Theory and Practice

[2] Schneier B., 2005, Cryptanalysis of SHA-1 http://www.schneier.com/blog/archives/2005/02/cryptanalysis_o.html (visited 20100501)

[3] NIST.gov, 2009, Cryptographic hash Algorithm Competition http://csrc.nist.gov/groups/ST/hash/sha-3/ (visited 20100405)

[4] Wikipedia, 2010, Preimage attack http://en.wikipedia.org/wiki/Preimage_attack (visited 20100501)

[5] Wikipedia, 2010, Birthday attack http://en.wikipedia.org/wiki/Birthday_attack (visited 20100428)

[6] Damgård I., 1990, A Design Principle for Hash Functions, Advances in Cryptology, proceedings of CRYPTO 1989, Lecture Notes in Computer Science 435, pp. 416–427, Springer-Verlag.

[7] Dunkelman O., 2007, A Framework for Iterative Hash Functions http://csrc.nist.gov/groups/ST/hash/documents/DUNKELMAN_NIST3.pdf (visited 20100501)

[8] Daemen et al., 2009, Cryptographic Sponges http://sponge.noekeon.org/ (visited 20100501)

[9] Wikipedia, 2010, Advanced encryption standard http://en.wikipedia.org/wiki/Advanced_Encryption_Standard (visited 20100425)

[10] Daemen et al., 2009, On the Indifferentiability of the Sponge http://sponge.noekeon.org/SpongeIndifferentiability.pdf (visited 20100501)


[11] Daemen et al., 2009, Keccak specifications http://keccak.noekeon.org/Keccak-specifications-2.pdf (visited 20100501)

[12] Wikipedia, 2010, SHA-1 http://en.wikipedia.org/wiki/SHA-1 (visited 20100502)

[13] Wang et al., 2005, Finding Collisions in the Full SHA-1 http://people.csail.mit.edu/yiqun/SHA1AttackProceedingVersion.pdf (visited 20100501)

[14] IST Programme of the European Commission, 2010, The SHA-3 Zoo http://ehash.iaik.tugraz.at/wiki/The_SHA-3_Zoo (visited 20100501)

[15] Daemen et al., 2009, The Keccak family http://keccak.noekeon.org/specs_summary.html (visited 20100501)

[16] Wikipedia, 2010, Skein (hash function) http://en.wikipedia.org/wiki/Skein_%28hash_function%29 (visited 20100501)

[17] Wikipedia, 2009, SIMD http://en.wikipedia.org/wiki/SIMD (visited 20100501)

[18] NIST.gov, 2009, Status Report on the First Round of the SHA-3 Cryptographic Hash Algorithm Competition. http://csrc.nist.gov/groups/ST/hash/sha-3/Round1/documents/sha3_NISTIR7620.pdf (visited 20100501)

[19] Kelsey J., Schneier B., 2005, Second preimages on n-bit hash functions for much less than 2^n work. In Ronald Cramer, editor, EUROCRYPT, volume 3494 of LNCS, pp. 474–490, Springer.

[20] Damgård I., 1990, A Design Principle for Hash Functions, Advances in Cryptology, proceedings of CRYPTO 1989, Lecture Notes in Computer Science 435, pp. 416–427, Springer-Verlag.

[21] Kelsey J., Schneier B., 2005, Second preimages on n-bit hash functions for much less than 2^n work. In Ronald Cramer, editor, EUROCRYPT, volume 3494 of LNCS, pp. 474–490, Springer.

