
UNIT-I Q.1 Explain Differential and Linear Cryptanalysis of DES. Ans: The Data Encryption Standard (DES) is a symmetric-key block cipher. DES is an implementation of a Feistel Cipher. It uses a 16-round Feistel structure. The block size is 64 bits. Though the key length is 64 bits, DES has an effective key length of 56 bits, since 8 of the 64 bits of the key are not used by the encryption algorithm (they function as check bits only). The general structure of DES is depicted in the following illustration −

Since DES is based on the Feistel Cipher, all that is required to specify DES is −

• Round function
• Key schedule
• Any additional processing − Initial and final permutation

Initial and Final Permutation

The initial and final permutations are straight permutation boxes (P-boxes) that are inverses of each other. They have no cryptographic significance in DES. The initial and final permutations are shown as follows −

Round Function

The heart of this cipher is the DES function, f. The DES function applies a 48-bit key to the rightmost 32 bits to produce a 32-bit output.

• Expansion − Since the right input is 32 bits and the round key is 48 bits, we first need to expand the right input to 48 bits. The permutation logic is graphically depicted in the following illustration −

• The graphically depicted permutation logic is generally described as a table in the DES specification, illustrated as shown −

• XOR (Whitener) − After the expansion permutation, DES does an XOR operation on the expanded right section and the round key. The round key is used only in this operation.

• Substitution Boxes − The S-boxes carry out the real mixing (confusion). DES uses 8 S-boxes, each with a 6-bit input and a 4-bit output. Refer to the following illustration −

• The S-box rule is illustrated below −

• There are a total of eight S-box tables. The output of all eight S-boxes is then combined into a 32-bit section.

• Straight Permutation − The 32-bit output of the S-boxes is then subjected to the straight permutation with the rule shown in the following illustration:
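The pipeline of these four steps (expand, XOR with the round key, S-box substitution, straight permutation) can be sketched structurally. The tables below are toy stand-ins scaled down to an 8-bit right half for readability, NOT the real DES tables:

```python
# Structural sketch of the DES round function f. The tables below are toy
# stand-ins (8-bit right half, 16-bit subkey), NOT the real DES tables.

E = [7, 0, 1, 2, 1, 2, 3, 4, 3, 4, 5, 6, 5, 6, 7, 0]   # expansion: 8 -> 16 bits
SBOXES = [  # four toy 4-bit -> 2-bit S-boxes (hypothetical values)
    [0, 1, 2, 3, 3, 2, 1, 0, 1, 0, 3, 2, 2, 3, 0, 1],
    [3, 0, 1, 2, 2, 1, 0, 3, 0, 3, 2, 1, 1, 2, 3, 0],
    [1, 2, 3, 0, 0, 3, 2, 1, 2, 1, 0, 3, 3, 0, 1, 2],
    [2, 3, 0, 1, 1, 0, 3, 2, 3, 2, 1, 0, 0, 1, 2, 3],
]
P = [1, 5, 2, 6, 3, 7, 4, 0]        # straight permutation on the 8 output bits

def to_bits(x, n):
    return [(x >> (n - 1 - i)) & 1 for i in range(n)]

def f(right, subkey):
    r = to_bits(right, 8)
    expanded = [r[i] for i in E]                       # 1. expansion permutation
    mixed = [a ^ b for a, b in zip(expanded, to_bits(subkey, 16))]  # 2. XOR round key
    out = []
    for i in range(4):                                 # 3. S-box substitution
        nib = mixed[4 * i: 4 * i + 4]
        idx = nib[0] * 8 + nib[1] * 4 + nib[2] * 2 + nib[3]
        val = SBOXES[i][idx]
        out += [(val >> 1) & 1, val & 1]
    permuted = [out[i] for i in P]                     # 4. straight permutation
    return sum(b << (7 - j) for j, b in enumerate(permuted))

example = f(0b10110010, 0b1010101010101010)
print(f"{example:08b}")
```

Real DES applies the same four stages, only with a 32-bit right half, a 48-bit subkey, and the fixed tables from the standard.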

Key Generation

The round-key generator creates sixteen 48-bit keys out of a 56-bit cipher key. The process of key generation is depicted in the following illustration −

The logic for Parity drop, shifting, and Compression P-box is given in the DES description.
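The generator's structure (split the key into halves, rotate each half every round, compress the combined halves into a round key) can be sketched with toy sizes. The per-round shift schedule below is the one DES actually uses, but the 32-bit key and the compression table are simplifications, not the real parity-drop/PC-2 tables:

```python
# Sketch of the DES round-key generator with toy sizes (32-bit key,
# 16-bit halves, 24-bit round keys). SHIFTS is the real DES schedule;
# PC2 here is a hypothetical compression that keeps the first 24 of 32 bits.

SHIFTS = [1, 1, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 1]  # rotations per round
PC2 = list(range(24))                                       # toy compression table

def rotl(v, r, w=16):
    """Left-rotate a w-bit value by r positions."""
    return ((v << r) | (v >> (w - r))) & ((1 << w) - 1)

def round_keys(key32):
    C, D = key32 >> 16, key32 & 0xFFFF      # split key into two halves
    keys = []
    for s in SHIFTS:
        C, D = rotl(C, s), rotl(D, s)       # rotate each half
        combined = (C << 16) | D
        bits = [(combined >> (31 - i)) & 1 for i in PC2]    # compress to 24 bits
        keys.append(sum(b << (23 - j) for j, b in enumerate(bits)))
    return keys

rk = round_keys(0x1337C0DE)
print(len(rk))      # 16
```

Real DES additionally drops the 8 parity bits first (64 → 56 bits) and uses the fixed PC-1/PC-2 permutation tables from the standard.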

DES Analysis

The DES satisfies both of the desired properties of a block cipher. These two properties make the cipher very strong.

• Avalanche effect − A small change in the plaintext results in a very great change in the ciphertext.

• Completeness − Each bit of ciphertext depends on many bits of plaintext.
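The avalanche effect can be measured empirically. DES itself is not in the Python standard library, so the sketch below uses SHA-256 as a stand-in for a primitive with good avalanche behaviour: flipping a single input bit should change roughly half of the output bits.

```python
# Avalanche-effect measurement, using SHA-256 as a stand-in primitive
# (any strong cipher or hash should behave similarly).
import hashlib

def hamming(a: bytes, b: bytes) -> int:
    """Count differing bits between two equal-length byte strings."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

msg = bytearray(b"exam notes on the avalanche effect")
h1 = hashlib.sha256(bytes(msg)).digest()
msg[0] ^= 0x01                      # flip a single input bit
h2 = hashlib.sha256(bytes(msg)).digest()

flipped = hamming(h1, h2)           # out of 256 output bits
print(flipped)                      # expect roughly half of the 256 bits
```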

The pragmatic approach was not to abandon DES completely, but to change the manner in which DES is used. This led to the modified schemes of Triple DES (sometimes known as 3DES). Incidentally, there are two variants of Triple DES, known as 3-key Triple DES (3TDES) and 2-key Triple DES (2TDES).

Linear cryptanalysis is an approach where we aim to find affine approximations to the action of a cipher. (Letter frequency analysis is one of the simplest forms of cryptanalysis.) Differential cryptanalysis is an approach to cryptanalysis whereby differences in inputs are mapped to differences in outputs, and patterns in the mappings of plaintext edits to ciphertext variation are used to reverse engineer a key.

Linear and differential cryptanalysis are most often applied to block ciphers (encryption functions operating on messages that are split into blocks). Both are attacks on symmetric-key ciphers.

Linear Cryptanalysis The paradigm of linear cryptanalysis was originally designed in 1993 as a theoretical attack on DES. It is now used widely on block ciphers across the field of cryptanalysis and is an effective starting point for developing more complex attacks.

Linear cryptanalysis posits a linear relationship between the elements (characters or individual bits) of the plaintext, the ciphertext, and the key. It therefore tries to find a linear approximation to the action of a cipher, i.e. if "ciphertext = f(plaintext, key)", then we are trying to find a linear approximation of f.

The approach in linear cryptanalysis is to determine expressions of the form above which have a high or low probability of occurrence. (No obvious linearity such as above should hold for all input and output values or the cipher would be trivially weak.) If a cipher displays a tendency for [the] equation [above] to hold with high probability, or to not hold with high probability, this is evidence of the cipher's poor randomization abilities. Consider that if we randomly selected values for [...] bits and placed them into the equation above, the probability that the expression would hold would be exactly ½. It is the deviation or bias from the probability of ½ for an expression to hold that is exploited in linear cryptanalysis: the further away a linear expression is from holding with a probability of ½, the better the cryptanalyst is able to apply linear cryptanalysis.

This quote tells us the fundamental paradigm of linear (and indeed differential) cryptanalysis: the cryptanalyst aims to exploit the fact that encryption is non-random, attaining information through the measurement of deviations from random behavior.

Steps to Perform Linear Cryptanalysis

In the most common use case, we assume that everything about the encryption algorithm is known apart from the private key. Performing linear cryptanalysis on a block cipher usually consists of three steps.
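The notion of bias can be made concrete on a small scale. The sketch below treats the first row of DES S-box S1 (shown later in these notes) as a standalone 4-bit S-box, purely as an illustrative simplification, and exhaustively measures how far each linear approximation deviates from probability ½:

```python
# Measuring linear bias over a 4-bit S-box (row 0 of DES S1, used here
# only as a convenient small example table).
SBOX = [14, 4, 13, 1, 2, 15, 11, 8, 3, 10, 6, 12, 5, 9, 0, 7]

def parity(v):
    return bin(v).count("1") & 1

def bias(a, b):
    """Bias of the approximation  a.x == b.S(x)  over all 16 inputs."""
    holds = sum(parity(a & x) == parity(b & SBOX[x]) for x in range(16))
    return holds / 16 - 0.5         # 0 would mean no better than guessing

# Exhaust all non-trivial input/output mask pairs; keep the strongest one.
a_best, b_best, eps = max(((a, b, bias(a, b)) for a in range(1, 16)
                           for b in range(1, 16)), key=lambda t: abs(t[2]))
print(a_best, b_best, eps)
```

A nonzero `eps` for some mask pair is exactly the kind of deviation from ½ that a linear attack accumulates across rounds.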

1. Find linear approximations of the non-linear parts of the encryption algorithm (usually only the substitution boxes, known as S-boxes).

2. Combine linear approximations of S-boxes with the rest of the (linear) operations done in the encryption algorithm, to obtain a linear approximation of the entire encryption algorithm. This linear approximation is a function which relates the plaintext bits, the ciphertext bits, and the bits of the private key.

3. Use the linear approximation as a guide for which keys to try first. This leads to substantial computational savings over trying all possible values of the key. Multiple linear approximations may be used to further cut down the number of keys that need to be tried.

Differential Cryptanalysis

Differential cryptanalysis preceded linear cryptanalysis, having initially been designed in 1990 as an attack on DES. Differential cryptanalysis is similar to linear cryptanalysis; it aims to map bitwise differences in inputs to differences in the output in order to reverse engineer the action of the encryption algorithm. It is again aiming to approximate the encryption algorithm, looking to find a maximum likelihood estimator of the true encryption action by altering the plaintext (looking at different plaintexts) and analysing the impact of changes to the plaintext on the resulting ciphertext. Differential cryptanalysis is therefore a chosen plaintext attack. The description of differential cryptanalysis is analogous to that of linear cryptanalysis and is essentially the same as would be the case of applying linear cryptanalysis to input differences rather than to input and output bits directly.

OR Q.1 (A) What is Shannon's Theory of Confusion and Diffusion? Explain Feistel Structure of Block Ciphers.

Ans: Shannon's Theory: Confusion and diffusion are used for creating a secure cipher. Both confusion and diffusion are used to prevent the encryption key from being deduced, or ultimately to prevent the original message from being recovered.
Confusion is employed for making the ciphertext uninformative, whereas diffusion is employed for spreading the redundancy of the plaintext over the major part of the ciphertext to make it obscure. A stream cipher depends only on confusion; diffusion, in addition, is employed by block ciphers.

In Shannon's definitions, confusion refers to making the relationship between the ciphertext and the symmetric key as complex and involved as possible; diffusion refers to dissipating the statistical structure of plaintext over the bulk of ciphertext. This complexity is generally implemented through a well-defined and repeatable series of substitutions and permutations. Substitution refers to the replacement of certain components (usually bits) with other components, following certain rules. Permutation refers to manipulation of the order of bits according to some algorithm.

To be effective, any non-uniformity of plaintext bits needs to be redistributed across much larger structures in the ciphertext, making that non-uniformity much harder to detect. In particular, for a randomly chosen input, if one flips the i-th bit, then the probability that the j-th output bit will change should be one half, for any i and j; this is termed the strict avalanche criterion. More generally, one may require that flipping a fixed set of bits should change each output bit with probability one half.

One aim of confusion is to make it very hard to find the key even if one has a large number of plaintext-ciphertext pairs produced with the same key. Therefore, each bit of the ciphertext should depend on the entire key, and in different ways on different bits of the key. In particular, changing one bit of the key should change the ciphertext completely. The simplest way to achieve both diffusion and confusion is to use a substitution-permutation network.
In these systems, the plaintext and the key often have a very similar role in producing the output, hence the same mechanism ensures both diffusion and confusion.

S.NO | CONFUSION | DIFFUSION
1. | Confusion is a cryptographic technique used to create faint cipher texts. | Diffusion is used to create cryptic plain texts.
2. | This technique is possible through substitution algorithms. | It is possible through transposition algorithms.
3. | In confusion, if one bit of the key is modified, most or all bits of the cipher text will also be modified. | In diffusion, if one symbol of the plain text is modified, many or all symbols of the cipher text will also be modified.
4. | In confusion, vagueness is increased in the resultant. | In diffusion, redundancy is increased in the resultant.
5. | Both stream ciphers and block ciphers use confusion. | Only block ciphers use diffusion.
6. | The relation between the cipher text and the key is masked by confusion. | The relation between the cipher text and the plain text is masked by diffusion.
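A single substitution-permutation round can be sketched to show both ideas side by side: the key XOR and S-box supply confusion, and the bit permutation supplies diffusion. The S-box and permutation tables below are hypothetical toy choices, not taken from any real cipher:

```python
# One round of a toy substitution-permutation network on a 16-bit block.
# SBOX and PERM are hypothetical; both are permutations, so the round
# is invertible.
SBOX = [0x6, 0x4, 0xC, 0x5, 0x0, 0x7, 0x2, 0xE,
        0x1, 0xF, 0x3, 0xD, 0x8, 0xA, 0x9, 0xB]
PERM = [0, 4, 8, 12, 1, 5, 9, 13, 2, 6, 10, 14, 3, 7, 11, 15]

def spn_round(block16, key16):
    x = block16 ^ key16                                # key mixing (confusion)
    # substitution: apply the S-box to each 4-bit nibble (confusion)
    x = sum(SBOX[(x >> (4 * i)) & 0xF] << (4 * i) for i in range(4))
    # permutation: move bit i to position PERM[i] (diffusion)
    y = 0
    for i in range(16):
        y |= ((x >> i) & 1) << PERM[i]
    return y

c = spn_round(0x1234, 0x3A94)
print(hex(c))
```

Because PERM scatters each nibble's bits into four different nibbles, repeating the round makes every output bit depend on every input and key bit after a few iterations.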

Feistel Structure of Block Ciphers

Feistel Cipher is not a specific scheme of block cipher. It is a design model from which many different block ciphers are derived. DES is just one example of a Feistel Cipher. A cryptographic system based on the Feistel cipher structure uses the same algorithm for both encryption and decryption.

Encryption Process

The encryption process uses the Feistel structure, consisting of multiple rounds of processing of the plaintext, each round consisting of a "substitution" step followed by a permutation step. The Feistel structure is shown in the following illustration −

• The input block to each round is divided into two halves that can be denoted as L and R for the left half and the right half.

• In each round, the right half of the block, R, goes through unchanged. But the left half, L, goes through an operation that depends on R and the encryption key. First, we apply an encrypting function 'f' that takes two inputs − the key K and R. The function produces the output f(R,K). Then, we XOR the output of the mathematical function with L.

• In real implementation of the Feistel Cipher, such as DES, instead of using the whole encryption key during each round, a round-dependent key (a subkey) is derived from the encryption key. This means that each round uses a different key, although all these subkeys are related to the original key.

• The permutation step at the end of each round swaps the modified L and unmodified R. Therefore, the L for the next round would be R of the current round. And R for the next round would be the output L of the current round.

• The above substitution and permutation steps form a 'round'. The number of rounds is specified by the algorithm design.

• Once the last round is completed, the two sub-blocks, 'R' and 'L', are concatenated in this order to form the ciphertext block. The difficult part of designing a Feistel Cipher is the selection of the round function 'f'. In order to be an unbreakable scheme, this function needs to have several important properties that are beyond the scope of our discussion.

Decryption Process

The process of decryption in a Feistel cipher is almost similar. Instead of starting with a block of plaintext, the ciphertext block is fed into the start of the Feistel structure, and then the process thereafter is exactly the same as described in the given illustration. The process is said to be almost similar and not exactly the same. In the case of decryption, the only difference is that the subkeys used in encryption are used in the reverse order. The final swapping of 'L' and 'R' in the last step of the Feistel Cipher is essential. If these are not swapped, then the resulting ciphertext could not be decrypted using the same algorithm.
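This encrypt/decrypt symmetry can be demonstrated with a toy Feistel network: a hypothetical round function, 16-bit blocks, and four rounds. Decryption is the very same function run with the subkey list reversed:

```python
# Sketch of the Feistel structure with a toy round function (not DES itself).
def f(r, k):
    """Hypothetical round function; it need not be invertible."""
    return (r * 31 + k) & 0xFF

def feistel(block16, subkeys):
    L, R = block16 >> 8, block16 & 0xFF
    for k in subkeys:
        L, R = R, L ^ f(R, k)       # one round: XOR then swap halves
    return (R << 8) | L             # concatenate R, L (undoing the last swap)

keys = [0x1A, 0x2B, 0x3C, 0x4D]
ct = feistel(0xBEEF, keys)
pt = feistel(ct, keys[::-1])        # same algorithm, subkeys reversed
print(hex(pt))                      # -> 0xbeef
```

Note that decryption works even though `f` itself is not invertible; only the XOR needs to be undone, which is why the Feistel design is so flexible.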

Number of Rounds

The number of rounds used in a Feistel Cipher depends on the desired security from the system. A greater number of rounds provides a more secure system, but at the same time more rounds mean slower, less efficient encryption and decryption processes. The number of rounds in a system thus depends upon the efficiency–security tradeoff.

Q.1(B) Write Short Note on Triple DES

Ans: In 3TDES, user first generates and distributes a 3TDES key K, which consists of three different DES keys K1, K2 and K3. This means that the actual 3TDES key has length 3×56 = 168 bits. The encryption scheme is illustrated as follows −

The encryption-decryption process is as follows −

• Encrypt the plaintext blocks using single DES with key K1.

• Now decrypt the output of step 1 using single DES with key K2.

• Finally, encrypt the output of step 2 using single DES with key K3.

• The output of step 3 is the ciphertext.

• Decryption of a ciphertext is a reverse process. The user first decrypts using K3, then encrypts with K2, and finally decrypts with K1. Due to this design of Triple DES as an encrypt–decrypt–encrypt process, it is possible to use a 3TDES (hardware) implementation for single DES by setting K1, K2, and K3 to be the same value. This provides backwards compatibility with DES.
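The encrypt–decrypt–encrypt composition and the K1 = K2 = K3 compatibility trick can be sketched with a toy invertible block function standing in for DES (modular addition here, purely for illustration):

```python
# EDE composition sketch; a toy invertible function stands in for real DES.
def toy_encrypt(block, key):        # stand-in, NOT DES
    return (block + key) % 256

def toy_decrypt(block, key):
    return (block - key) % 256

def tdes_encrypt(p, k1, k2, k3):    # encrypt - decrypt - encrypt
    return toy_encrypt(toy_decrypt(toy_encrypt(p, k1), k2), k3)

def tdes_decrypt(c, k1, k2, k3):    # decrypt - encrypt - decrypt, keys reversed
    return toy_decrypt(toy_encrypt(toy_decrypt(c, k3), k2), k1)

single = toy_encrypt(42, 7)
triple_same_key = tdes_encrypt(42, 7, 7, 7)
print(single, triple_same_key)      # 49 49
```

With one key in all three slots, the middle decryption cancels the first encryption, so the whole construction collapses to a single encryption: exactly the backwards-compatibility behaviour described above.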

The second variant of Triple DES (2TDES) is identical to 3TDES except that K3 is replaced by K1. In other words, the user encrypts plaintext blocks with key K1, then decrypts with key K2, and finally encrypts with K1 again. Therefore, 2TDES has a key length of 112 bits. Triple DES systems are significantly more secure than single DES, but they are clearly a much slower process than encryption using single DES.

To be effective, both linear and differential cryptanalysis require that a certain amount of plaintext and its corresponding ciphertext be available (linear cryptanalysis as a known plaintext attack, differential cryptanalysis as a chosen plaintext attack). The approaches were initially designed to aid in breaking the Data Encryption Standard (DES). In this case the fact that the algorithm was known (although the key in each case was not) enabled plaintext to be encrypted by the cryptanalyst to see the related ciphertext.

UNIT-II Q.2 (A) Write and Explain the Design Criteria of S-Box in Detail. Ans: S-Box Theory: In cryptography, an S-Box (Substitution-box) is a basic component of symmetric key algorithms which performs substitution. In block ciphers, they are typically used to obscure the relationship between the key and the ciphertext — Shannon's property of confusion.[1] In many cases, the S-Boxes are carefully chosen to resist cryptanalysis.

In general, an S-Box takes some number of input bits, m, and transforms them into some number of output bits, n: an m×n S-Box can be implemented as a lookup table with 2^m words of n bits each. Fixed tables are normally used, as in the Data Encryption Standard (DES), but in some ciphers the tables are generated dynamically from the key; e.g. the Blowfish and the Twofish encryption algorithms. Bruce Schneier describes IDEA's modular multiplication step as a key-dependent S-Box.

Given a 6-bit input, the 4-bit output is found by selecting the row using the outer two bits (the first and last bits), and the column using the inner four bits. For example, an input "011011" has outer bits "01" and inner bits "1101"; the corresponding output would be "0101".

The 8 S-Boxes of DES were the subject of intense study for many years out of a concern that a backdoor — a vulnerability known only to its designers — might have been planted in the cipher. The S-Box design criteria were eventually published[2] after the public rediscovery of differential cryptanalysis, showing that they had been carefully tuned to increase resistance against this specific attack. Other research had already indicated that even small modifications to an S-Box could significantly weaken DES. There has been a great deal of research into the design of good S-Boxes, and much more is understood about their use in block ciphers than when DES was released.
DESIGN CRITERIA FOR A GOOD S-BOX

Table 2.5 shows the first of the eight S-boxes in the DES cryptosystem. One can look at the numbers or entries of the S-boxes and wonder how they are generated. There have been attempts to generate those numbers randomly and examine them against the design criteria and guidelines set by the NIST. However, this might result in the construction of weak S-boxes and therefore weaken the cryptosystem. A better and more systematic way to generate the entries in the S-boxes is by constructing a nonlinear Boolean function mapping n input bits to m output bits. A special set of Boolean functions named bent functions can be used to achieve maximum nonlinearity. There are other criteria that must be met in designing the S-boxes. By understanding how to create cryptographically good S-boxes, new S-boxes can be used in the development of new private-key cryptosystems.

Table 2.5. The First S-box from the Data Encryption Standard Cryptosystem with Decimal Entries

S1:
Row 0: 14  4 13  1  2 15 11  8  3 10  6 12  5  9  0  7
Row 1:  0 15  7  4 14  2 13  1 10  6 12 11  9  5  3  8
Row 2:  4  1 14  8 13  6  2 11 15 12  9  7  3 10  5  0
Row 3: 15 12  8  2  4  9  1  7  5 11  3 14 10  0  6 13
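The row/column selection rule for this S-box can be written directly against the table above:

```python
# Row/column lookup in DES S-box S1 (decimal entries, as in Table 2.5).
S1 = [
    [14,  4, 13,  1,  2, 15, 11,  8,  3, 10,  6, 12,  5,  9,  0,  7],
    [ 0, 15,  7,  4, 14,  2, 13,  1, 10,  6, 12, 11,  9,  5,  3,  8],
    [ 4,  1, 14,  8, 13,  6,  2, 11, 15, 12,  9,  7,  3, 10,  5,  0],
    [15, 12,  8,  2,  4,  9,  1,  7,  5, 11,  3, 14, 10,  0,  6, 13],
]

def sbox_lookup(six_bits: int) -> int:
    """Map a 6-bit input to a 4-bit output: the outer two bits (first and
    last) pick the row, the inner four bits pick the column."""
    row = ((six_bits >> 4) & 0b10) | (six_bits & 0b01)
    col = (six_bits >> 1) & 0b1111
    return S1[row][col]

print(sbox_lookup(0b011011))    # row 01, column 1101 -> 5 (binary 0101)
```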

In the first S-box of the DES system in Table 2.5, we can see that there are 16 columns, and the columns consist of decimal entries. If we construct a truth table, we will have 6 input columns and 4 output columns of zeros and ones, with 2^6 rows. The mapping of the S-box is f : {0,1}^6 → {0,1}^4. Therefore, four highly nonlinear balanced Boolean functions compose the S-box. The 6 input bits are split into two groups: the middle four bits indicate the column of the S-box and the two bits on both sides indicate the row of the S-box. We will explain in more detail later how the entries are generated by the four highly nonlinear Boolean functions. But first, let us understand the design criteria of S-boxes. In general, the following five design criteria must be met for Boolean functions that are responsible for a cryptographically good S-box:

1. Bijection requires a one-to-one and onto mapping from input vectors to output vectors if the S-box is n by n bit. We will explain later how this criterion is achieved when an S-box is n by m bit instead.

2. Strict avalanche criterion holds if, when one input bit i is changed, each output bit changes with probability of one half. Strict avalanche requires that if there are any slight changes in the input vector, there will be a significant change in the output vector. To achieve this effect, we need a function that has a 50% dependency on each of its n input bits.

3. Bit independence criterion (correlation-immunity) requires that output bits act independently from each other. In other words, there should not be any statistical pattern or statistical dependencies between output bits from the output vectors.

4. Nonlinearity requires that the S-box is not a linear mapping from input to output, which would make the cryptosystem susceptible to attacks [9]. If the S-box is constructed with maximally nonlinear Boolean functions, it will give a bad approximation by linear functions, thus making the cryptosystem difficult to break.

5. Balance means that each Boolean vector responsible for the S-box has the same number of 0's and 1's.

Q.2 (B) Explain Construction of Balanced Function for S-Box.

Ans: CONSTRUCTING THE S-BOXES

In general, when constructing an S-box f : {0,1}^n → {0,1}^m with a highly nonlinear function, there are 2^n rows with m columns. A function with its corresponding vector is said to be highly nonlinear when the resulting vector y_i from a function f_i has a high Hamming distance to all the linear vectors in the set B_n. A truth table is made for the input vector x = (x_1, ..., x_n). The input vector x is evaluated at each Boolean function f_i, where i = 1, ..., m. Each Boolean vector f_i forms a column of the S-box. Therefore, an S-box is comprised of m nonlinear Boolean vectors if the entries of the S-box are binary numbers.

From the earlier example, we see that the nonlinearity of function g1 is only 1. However, we want the number to be as large as possible. We want to use functions that have a high nonlinearity while still fulfilling all the other criteria at the same time. Table 2.6 shows a partial truth table representation of the first S-box in the DES cryptosystem. This truth table corresponds to Table 2.5. You can find the complete truth table in Appendix A. Let us look at the first row of the table. When you convert the middle four bits to decimal, it is 0. When you convert the first and last bits to decimal, it is 0 also. The input bits indicate the row 0, column 0 entry of the S-box. (Note: All S-boxes start from row 0 and column 0 instead of 1.) The output bits are (1 1 1 0) on the first row of the truth table. Its decimal representation is 14, which is the row 0, column 0 entry of the S-box. Let us look at another example: the second last row of the truth table has input bits (1 1 1 1 1 0). The middle four bits indicate column 15 and the remaining two bits point to row 2 of the S-box. The entry at row 2 and column 15 of the S-box is 0, which corresponds to the output bits (0 0 0 0) on the second last row of the truth table.

Table 2.6. The Partial Truth Table of S-box 1 in DES System

x1 x2 x3 x4 x5 x6 | y1 y2 y3 y4
 0  0  0  0  0  0 |  1  1  1  0
 0  0  0  0  0  1 |  0  0  0  0
 0  0  0  0  1  0 |  0  1  0  0
 0  0  0  0  1  1 |  1  1  1  1
 ...
 1  1  1  1  0  0 |  0  1  0  1
 1  1  1  1  0  1 |  0  1  1  0
 1  1  1  1  1  0 |  0  0  0  0
 1  1  1  1  1  1 |  1  1  0  1

OR Q.2 (A) Explain RC6 Algorithm for Information System Security.

Ans: RC6 (Rivest cipher 6) is a symmetric key block cipher derived from RC5. It was designed by Ron Rivest, Matt Robshaw, Ray Sidney, and Yiqun Lisa Yin to meet the requirements of the Advanced Encryption Standard (AES) competition. The algorithm was one of the five finalists, and was also submitted to the NESSIE and CRYPTREC projects. It was a proprietary algorithm, patented by RSA Security. RC6 proper has a block size of 128 bits and supports key sizes of 128, 192, and 256 bits, but, like RC5, it may be parameterised to support a wide variety of word-lengths, key sizes (up to 2040 bits), and numbers of rounds. RC6 is very similar to RC5 in structure, using data-dependent rotations, modular addition, and XOR operations; in fact, RC6 could be viewed as interweaving two parallel RC5 encryption processes, although RC6 does use an extra multiplication operation not present in RC5 in order to make the rotation dependent on every bit in a word, and not just the least significant few bits.

Q.2 (A) Write short note on propagation and non-linearity of S-box.

ANS: In practice, when we construct the S-boxes, we do not want an unbalanced Boolean function. This is because a balanced Boolean function gives a fairly good avalanche, and if the function has near-maximal nonlinearity, then it would also fulfill the BIC. Our disappointment quickly turned our focus to how to recognize a Boolean function in UPF that has high nonlinearity but not necessarily maximum nonlinearity. Recall that the function has to be balanced for all a ∈ F_2^n if it is a bent function. However, if the function is balanced for most of the a's, not necessarily all a, we say the function is highly nonlinear. We also asked whether such a function will be balanced or not. We adjusted the algorithm that we used to test whether a function is bent or not to give us the following information:

1. The number of a's that would result in a balanced function.
2. The numbers of 0's and 1's in the function.

Table 3.2 and Table 3.3 list the results for n = 4 and n = 6. We have tested all cyclotomic cosets in these dimensions. For n = 4, the total number of a is 15. The elements in C5 can be used to construct the bent function. For n = 6, the total number of a is 63. The elements in C9 can be used to construct the bent function.

Table 3.2. The Nonlinearity and Balance of Boolean Functions from Cyclotomic Cosets for n = 4

cyclotomic cosets | # of a's resulted in Bal | # of a's resulted in Non-Bal | # of 0's | # of 1's
C1        |  0 | 15 | 8 |  8
C3        | 12 |  3 | 4 | 12
C5 (bent) | 15 |  0 | 6 | 10
C7        |  9 |  6 | 8 |  8

Table 3.3. The Nonlinearity and Balance of Boolean Functions from Cyclotomic Cosets for n = 6

cyclotomic cosets | # of a's resulted in Bal | # of a's resulted in Non-Bal | # of 0's | # of 1's
C1        |  0 | 63 | 32 | 32
C3        | 60 |  3 | 40 | 24
C5        | 60 |  3 | 32 | 32
C7        |  0 | 63 | 50 | 14
C9 (bent) | 63 |  0 | 28 | 36
C11       | 27 | 36 | 32 | 32
C13       | 27 | 36 | 32 | 32
C15       | 15 | 48 | 40 | 24
C21       | 42 | 21 | 22 | 42
C23       |  0 | 63 | 32 | 32
C27       | 27 | 36 | 28 | 36
C31       | 18 | 45 | 32 | 32

As we can see, for n = 4, construction of a Boolean function using C3 can produce a highly nonlinear function, since 80% of the a's resulted in a balanced function. If we use C7, we would have a balanced function, but its nonlinearity is perhaps not as good as C3's. C5 can be used to construct a bent function, since all a's produced balanced functions and the function itself is not balanced. For n = 6, we can find two highly nonlinear functions if the cosets are used to construct the Boolean functions. If C5 is used, the Boolean function is also balanced. If C3 is used, it will produce a non-balanced Boolean function. Once again, we can see that all the a's used to test C9 produced balanced functions. Therefore, it can be used to construct the bent function.

Furthermore, we want to see how these functions fulfill the SAC. Using the algorithm we mentioned above in Section 2.4, we created the truth table after we converted the UPF to ANF. We only created the truth table for C5, C3, and C9 for n = 6. The following Table 3.4 shows the SAC result.

Table 3.4. The Strict Avalanche Criterion of C5, C3, and C9 with n = 6 if Used to Construct Boolean Functions

cyclotomic cosets | f32 | f16 | f8 | f4 | f2 | f1
C5 (bal)     | 16 | 16 | 20 | 12 | 16 | 11
C3 (non-bal) | 16 | 16 | 16 |  8 | 16 | 15
C9 (bent)    | 16 | 16 | 16 | 16 | 16 | 20

We can see C9 has the best avalanche. The non-balanced Boolean function has a better avalanche than the balanced one if they are both highly nonlinear. It confirms what we have mentioned earlier: all criteria cannot be achieved fully, and a balanced function gives only a fairly good avalanche.
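The nonlinearity figures discussed in this section can be checked by the standard computation: the nonlinearity of a Boolean function is its minimum Hamming distance to the set of affine functions, obtainable from the maximum absolute Walsh-Hadamard coefficient. A small sketch for n = 4, using textbook example functions rather than the cyclotomic-coset constructions above:

```python
# Nonlinearity of a Boolean function via the Walsh-Hadamard transform:
# NL(f) = 2^(n-1) - max_w |W_f(w)| / 2, computed here by brute force for n = 4.
n = 4
N = 1 << n

def parity(v):
    return bin(v).count("1") & 1

def nonlinearity(f):
    """f is a truth table: a list of 2^n output bits."""
    walsh_max = max(abs(sum((-1) ** (f[x] ^ parity(w & x)) for x in range(N)))
                    for w in range(N))
    return (N - walsh_max) // 2

linear = [parity(0b0011 & x) for x in range(N)]            # x1 XOR x2: affine
bent = [parity((x & 0b11) & (x >> 2)) for x in range(N)]   # x1x3 XOR x2x4: bent

print(nonlinearity(linear))   # 0
print(nonlinearity(bent))     # 6, the maximum possible for n = 4
```

A linear function has nonlinearity 0 by definition, while the bent function reaches the n = 4 bound of 2^(n-1) - 2^(n/2-1) = 6; the same computation applied to the coset-derived truth tables would reproduce the comparisons above.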

UNIT-III Q.3 (A) Discuss Security Analysis of RSA Algorithm Along with its Application. Ans: The RSA cryptosystem has two parts: firstly, generation of the key pair, and secondly, the encryption-decryption algorithms.

Generation of RSA Key Pair

Each person or a party who desires to participate in communication using encryption needs to generate a pair of keys, namely public key and private key. The process followed in the generation of keys is described below −

• Generate the RSA modulus (n)

o Select two large primes, p and q.

o Calculate n=p*q. For strong unbreakable encryption, let n be a large number, typically a minimum of 512 bits.

• Find Derived Number (e)

o Number e must be greater than 1 and less than (p − 1)(q − 1).

o There must be no common factor for e and (p − 1)(q − 1) except for 1. In other words two numbers e and (p – 1)(q – 1) are coprime.

• Form the public key

o The pair of numbers (n, e) form the RSA public key and is made public.

o Interestingly, though n is part of the public key, the difficulty of factoring a large number ensures that an attacker cannot find in finite time the two primes (p & q) used to obtain n. This is a strength of RSA.

• Generate the private key

o Private Key d is calculated from p, q, and e. For given n and e, there is a unique number d.

o Number d is the inverse of e modulo (p - 1)(q – 1). This means that d is the number less than (p - 1)(q - 1) such that when multiplied by e, it is equal to 1 modulo (p - 1)(q - 1).

o This relationship is written mathematically as follows − ed = 1 mod (p − 1)(q − 1)

The Extended Euclidean Algorithm takes p, q, and e as input and gives d as output.

Example

An example of generating RSA Key pair is given below. (For ease of understanding, the primes p & q taken here are small values. Practically, these values are very high).

• Let two primes be p = 7 and q = 13. Thus, modulus n = pq = 7 x 13 = 91.

• Select e = 5, which is a valid choice since there is no number that is a common factor of 5 and (p − 1)(q − 1) = 6 × 12 = 72, except for 1.

• The pair of numbers (n, e) = (91, 5) forms the public key and can be made available to anyone whom we wish to be able to send us encrypted messages.

• Input p = 7, q = 13, and e = 5 to the Extended Euclidean Algorithm. The output will be d = 29.

• Check that the d calculated is correct by computing − de = 29 × 5 = 145 = 1 mod 72

• Hence, the public key is (91, 5) and the private key is (91, 29).
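The key-generation walkthrough above can be reproduced in code using Python's built-in modular inverse (the three-argument `pow(e, -1, phi)` form requires Python 3.8+):

```python
# RSA key generation with the small primes from the example above.
from math import gcd

p, q = 7, 13
n = p * q                       # modulus: 91
phi = (p - 1) * (q - 1)         # (p-1)(q-1) = 72
e = 5
assert gcd(e, phi) == 1         # e and (p-1)(q-1) must be coprime

d = pow(e, -1, phi)             # modular inverse of e mod 72
print(n, e, d)                  # 91 5 29
assert (d * e) % phi == 1       # 29 * 5 = 145 = 1 mod 72
```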

Encryption and Decryption

Once the key pair has been generated, the process of encryption and decryption are relatively straightforward and computationally easy.

Interestingly, RSA does not directly operate on strings of bits as in case of symmetric key encryption. It operates on numbers modulo n. Hence, it is necessary to represent the plaintext as a series of numbers less than n.

RSA Encryption

• Suppose the sender wishes to send a text message to someone whose public key is (n, e).

• The sender then represents the plaintext as a series of numbers less than n.

• To encrypt the first plaintext P, which is a number modulo n, the encryption process is a simple mathematical step −

C = P^e mod n

• In other words, the ciphertext C is equal to the plaintext P multiplied by itself e times and then reduced modulo n. This means that C is also a number less than n.

• Returning to our Key Generation example with plaintext P = 10, we get ciphertext C −

C = 10^5 mod 91 = 82

RSA Decryption

• The decryption process for RSA is also very straightforward. Suppose that the receiver of the public-key pair (n, e) has received a ciphertext C.

• The receiver raises C to the power of his private key d. The result modulo n will be the plaintext P.

Plaintext = C^d mod n

• Returning again to our numerical example, the ciphertext C = 82 would get decrypted to number 10 using private key 29 −

Plaintext = 82^29 mod 91 = 10
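Both directions of the numerical example can be checked with modular exponentiation:

```python
# Encryption and decryption from the example: public key (91, 5), private key 29.
n, e, d = 91, 5, 29
P = 10

C = pow(P, e, n)                # 10^5 mod 91
print(C)                        # 82

recovered = pow(C, d, n)        # 82^29 mod 91
print(recovered)                # 10
```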

RSA Analysis

The security of RSA depends on the strengths of two separate functions. The RSA cryptosystem is the most popular public-key cryptosystem; its strength is based on the practical difficulty of factoring very large numbers.

• Encryption Function − It is considered as a one-way function of converting plaintext into ciphertext and it can be reversed only with the knowledge of private key d.

• Key Generation − The difficulty of determining a private key from an RSA public key is equivalent to factoring the modulus n. An attacker thus cannot use knowledge of an RSA public key to determine an RSA private key unless he can factor n. It is also a one-way function: going from the p & q values to the modulus n is easy, but the reverse is not practical.

If either of these two functions is proved to be not one-way, then RSA will be broken. In fact, if a technique for efficient factoring is developed, then RSA will no longer be safe.

The strength of RSA encryption drastically goes down against attacks if the numbers p and q are not large primes and/or the chosen public key e is a small number.

Q.3 (B) Explain Diffie Hellman Algorithm in Detail.

Ans: The simplest and the original implementation of the protocol uses the multiplicative group of integers modulo p, where p is prime, and g is a primitive root modulo p. These two values are chosen in this way to ensure that the resulting shared secret can take on any value from 1 to p–1.

EXAMPLE:

Step 1: Alice and Bob get public numbers P = 23, G = 9

Step 2: Alice selected a private key a = 4 and Bob selected a private key b = 3

Step 3: Alice and Bob compute public values
Alice: x = (9^4 mod 23) = (6561 mod 23) = 6
Bob: y = (9^3 mod 23) = (729 mod 23) = 16

Step 4: Alice and Bob exchange public numbers

Step 5: Alice receives public key y =16 and Bob receives public key x = 6

Step 6: Alice and Bob compute symmetric keys
Alice: ka = y^a mod p = 16^4 mod 23 = 65536 mod 23 = 9
Bob: kb = x^b mod p = 6^3 mod 23 = 216 mod 23 = 9

Step 7: 9 is the shared secret.
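The seven steps above can be reproduced with Python's built-in three-argument pow:

```python
# Diffie-Hellman exchange with the example's toy numbers (P = 23, G = 9).
P, G = 23, 9       # public prime and generator
a, b = 4, 3        # Alice's and Bob's private keys
x = pow(G, a, P)   # Alice's public value: 6
y = pow(G, b, P)   # Bob's public value: 16
ka = pow(y, a, P)  # Alice computes the shared secret from Bob's value
kb = pow(x, b, P)  # Bob computes the shared secret from Alice's value
print(ka, kb)      # 9 9 -- both sides arrive at the same key
```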

MAN IN THE MIDDLE ATTACK: The Diffie-Hellman key exchange algorithm can fall prey to the man-in-the-middle attack, also called the bucket brigade attack. It happens as follows: Example: Sonu wants to communicate with Sona securely, so she first performs a Diffie-Hellman key exchange with him.

        SONU                  JITU                    SONA
A = g^x mod 11        A = g^x mod 11          B = g^y mod 11
  = 7^3 mod 11          = 7^8 mod 11            = 7^9 mod 11
  = 343 mod 11          = 5764801 mod 11        = 40353607 mod 11
  = 2                   = 9                     = 8
                      B = g^y mod 11
                        = 7^6 mod 11
                        = 117649 mod 11
                        = 4
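The table's numbers, assuming g = 7 and p = 11 with Sonu's secret 3, Sona's secret 9, and Jitu's two secrets 8 (towards Sona) and 6 (towards Sonu), can be checked as follows; note that Jitu ends up sharing a separate key with each victim:

```python
# Man-in-the-middle arithmetic for the table above (toy numbers).
g, p = 7, 11
A_sonu = pow(g, 3, p)  # 2, Sonu's public value, intercepted by Jitu
A_jitu = pow(g, 8, p)  # 9, sent by Jitu to Sona in Sonu's place
B_jitu = pow(g, 6, p)  # 4, sent by Jitu to Sonu in Sona's place
B_sona = pow(g, 9, p)  # 8, Sona's public value, intercepted by Jitu
# Sonu and Jitu now share one key (both compute 9):
print(pow(B_jitu, 3, p), pow(A_sonu, 6, p))
# Sona and Jitu share a different key (both compute 5):
print(pow(A_jitu, 9, p), pow(B_sona, 8, p))
```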

Sonu sends her A (2) to Sona. Jitu intercepts it and, instead, sends his A (9) to Sona. Sona has no idea that Jitu hijacked Sonu’s A and has substituted his own.

OR

Q.3 (A) Explain Public Key Cryptosystem along with its Principles.

Ans: A cryptosystem is an implementation of cryptographic techniques and their accompanying infrastructure to provide information security services. A cryptosystem is also referred to as a cipher system. Let us discuss a simple model of a cryptosystem that provides confidentiality to the information being transmitted. This basic model is depicted in the illustration below −

The illustration shows a sender who wants to transfer some sensitive data to a receiver in such a way that any party intercepting or eavesdropping on the communication channel cannot extract the data. The objective of this simple cryptosystem is that at the end of the process, only the sender and the receiver will know the plaintext.

Symmetric Key Encryption

The encryption process where the same keys are used for encrypting and decrypting the information is known as Symmetric Key Encryption. The study of symmetric cryptosystems is referred to as symmetric cryptography. Symmetric cryptosystems are also sometimes referred to as secret key cryptosystems. A few well-known examples of symmetric key encryption methods are − Data Encryption Standard (DES), Triple-DES (3DES), IDEA, and BLOWFISH.

Prior to 1970, all cryptosystems employed symmetric key encryption. Even today, its relevance is very high and it is being used extensively in many cryptosystems. It is very unlikely that this encryption will fade away, as it has certain advantages over asymmetric key encryption. The salient features of a cryptosystem based on symmetric key encryption are −

• Persons using symmetric key encryption must share a common key prior to exchange of information.

• Keys are recommended to be changed regularly to prevent any attack on the system.

• A robust mechanism needs to exist to exchange the key between the communicating parties. As keys are required to be changed regularly, this mechanism becomes expensive and cumbersome.

• In a group of n people, to enable two-party communication between any two persons, the number of keys required for the group is n × (n – 1)/2.

• Length of Key (number of bits) in this encryption is smaller and hence, the process of encryption-decryption is faster than asymmetric key encryption.

• Processing power of the computer system required to run a symmetric algorithm is less.

Challenge of Symmetric Key Cryptosystem

There are two restrictive challenges of employing symmetric key cryptography.

• Key establishment − Before any communication, both the sender and the receiver need to agree on a secret symmetric key. It requires a secure key establishment mechanism in place.

• Trust Issue − Since the sender and the receiver use the same symmetric key, there is an implicit requirement that the sender and the receiver ‘trust’ each other. For example, it may happen that the receiver has lost the key to an attacker and the sender is not informed.

These two challenges are highly restraining for modern day communication. Today, people need to exchange information with non-familiar and non-trusted parties, for example, a communication between an online seller and a customer. These limitations of symmetric key encryption gave rise to asymmetric key encryption schemes.

Asymmetric Key Encryption

The encryption process where different keys are used for encrypting and decrypting the information is known as Asymmetric Key Encryption. Though the keys are different, they are mathematically related and hence, retrieving the plaintext by decrypting the ciphertext is feasible. The process is depicted in the following illustration −

Asymmetric Key Encryption was invented in the 20th century to overcome the necessity of a pre-shared secret key between communicating persons. The salient features of this encryption scheme are as follows −

• Every user in this system needs to have a pair of dissimilar keys, private key and public key. These keys are mathematically related − when one key is used for encryption, the other can decrypt the ciphertext back to the original plaintext.

• It requires putting the public key in a public repository and keeping the private key as a well-guarded secret. Hence, this scheme of encryption is also called Public Key Encryption.

• Though public and private keys of the user are related, it is computationally not feasible to find one from another. This is a strength of this scheme.

• When Host1 needs to send data to Host2, he obtains the public key of Host2 from repository, encrypts the data, and transmits.

• Host2 uses his private key to extract the plaintext.

• Length of Keys (number of bits) in this encryption is large and hence, the process of encryption-decryption is slower than symmetric key encryption.

• Processing power of the computer system required to run an asymmetric algorithm is higher.

Symmetric cryptosystems are a natural concept. In contrast, public-key cryptosystems are quite difficult to comprehend. You may think, how can the encryption key and the decryption key be ‘related’, and yet it be impossible to determine the decryption key from the encryption key? The answer lies in the mathematical concepts. It is possible to design a cryptosystem whose keys have this property. The concept of public-key cryptography is relatively new. There are fewer public-key algorithms known than symmetric algorithms.

Challenge of Public Key Cryptosystem

Public-key cryptosystems have one significant challenge − the user needs to trust that the public key that he is using in communications with a person really is the public key of that person and has not been spoofed by a malicious third party. This is usually accomplished through a Public Key Infrastructure (PKI) consisting of a trusted third party. The third party securely manages and attests to the authenticity of public keys. When the third party is requested to provide the public key for any communicating person X, they are trusted to provide the correct public key. The third party satisfies itself about user identity by the process of attestation, notarization, or some other process − that X is the one and only, or globally unique, X. The most common method of making the verified public keys available is to embed them in a certificate which is digitally signed by the trusted third party.

Kerckhoffs’ Principle for Cryptosystem

In the 19th century, a Dutch cryptographer, Auguste Kerckhoffs, furnished the requirements of a good cryptosystem. Kerckhoffs stated that a cryptographic system should be secure even if everything about the system, except the key, is public knowledge. The six design principles defined by Kerckhoffs for a cryptosystem are −

• The cryptosystem should be unbreakable practically, if not mathematically.

• Falling of the cryptosystem into the hands of an intruder should not lead to any compromise of the system, thereby preventing any inconvenience to the user.

• The key should be easily communicable, memorable, and changeable.

• The ciphertext should be transmissible by telegraph, an unsecure channel.

• The encryption apparatus and documents should be portable and operable by a single person.

• Finally, it is necessary that the system be easy to use, requiring neither mental strain nor the knowledge of a long series of rules to observe.

The second rule is currently known as the Kerckhoffs principle. It is applied in virtually all contemporary encryption algorithms such as DES, AES, etc. These public algorithms are considered to be thoroughly secure. The security of the encrypted message depends solely on the security of the secret encryption key.

Keeping the algorithms secret may act as a significant barrier to cryptanalysis. However, keeping the algorithms secret is possible only when they are used in a strictly limited circle. In the modern era, cryptography needs to cater to users who are connected to the Internet. In such cases, using a secret algorithm is not feasible; hence the Kerckhoffs principles became essential guidelines for designing algorithms in modern cryptography.

Q.3 (B) Write short note on X.509. Ans: An X.509 certificate is a digital certificate that uses the widely accepted international X.509 public key infrastructure (PKI) standard to verify that a public key belongs to the user, computer or service identity contained within the certificate.

An X.509 certificate contains information about the identity to which a certificate is issued and the identity that issued it. Standard information in an X.509 certificate includes:

• Version – which X.509 version applies to the certificate (which indicates what data the certificate must include)

• Serial number – the identity creating the certificate must assign it a serial number that distinguishes it from other certificates

• Algorithm information – the algorithm used by the issuer to sign the certificate

• Issuer distinguished name – the name of the entity issuing the certificate (usually a certificate authority)

• Validity period of the certificate – start/end date and time

• Subject distinguished name – the name of the identity the certificate is issued to

• Subject public key information – the public key associated with the identity

• Extensions (optional)

Many of the certificates that people refer to as Secure Sockets Layer (SSL) certificates are in fact X.509 certificates.

Q.4 Explain following in detail. (a) SHA (b) MD5

ANS: (A) Secure Hash Algorithm (SHA)

The family of SHA comprises four SHA algorithms: SHA-0, SHA-1, SHA-2, and SHA-3. Though from the same family, they are structurally different.

• The original version, SHA-0, a 160-bit hash function, was published by the National Institute of Standards and Technology (NIST) in 1993. It had a few weaknesses and did not become very popular.

• SHA-1 is the most widely used of the existing SHA hash functions. It is employed in several widely used applications and protocols including Secure Socket Layer (SSL) security.

• In 2005, a method was found for uncovering collisions for SHA-1 within a practical time frame, making the long-term employability of SHA-1 doubtful.

• The SHA-2 family has four further SHA variants, SHA-224, SHA-256, SHA-384, and SHA-512, named after the number of bits in their hash value. No successful attacks have yet been reported on the SHA-2 hash functions.

• Though SHA-2 is a strong hash function and significantly different, its basic design still follows the design of SHA-1. Hence, NIST called for new competitive hash function designs.
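The SHA-2 variants above can be produced with Python's standard hashlib; the variant names encode their digest sizes in bits:

```python
import hashlib

# Each SHA-2 variant's hex digest, shown with its size in bits
# (one hex character encodes 4 bits).
for name in ("sha224", "sha256", "sha384", "sha512"):
    digest = hashlib.new(name, b"abc").hexdigest()
    print(name, len(digest) * 4, "bits")
```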

(B) Message Digest 5 (MD5)

MD5 was the most popular and widely used hash function for quite some years.

• The MD family comprises the hash functions MD2, MD4, MD5 and MD6. MD5 was adopted as Internet Standard RFC 1321. It is a 128-bit hash function.

• MD5 digests have been widely used in the software world to provide assurance about the integrity of a transferred file. For example, file servers often provide a pre-computed MD5 checksum for the files, so that a user can compare the checksum of the downloaded file to it.

• In 2004, collisions were found in MD5. An analytical attack was reported to be successful in only an hour by using a computer cluster. This compromised MD5, and hence it is no longer recommended for use.

• The MD5 hashing algorithm is a one-way cryptographic function that accepts a message of any length as input and returns as output a fixed-length digest value to be used for authenticating the original message.

• The MD5 hash function was originally designed for use as a secure cryptographic hash algorithm for authenticating digital signatures. MD5 has been deprecated for uses other than as a non-cryptographic checksum to verify data integrity and detect unintentional data corruption.

• Although originally designed as a cryptographic message algorithm for use on the internet, MD5 hashing is no longer considered reliable for use as a cryptographic checksum because researchers have demonstrated techniques capable of easily generating MD5 collisions on commercial off-the-shelf computers.

• Ronald Rivest, founder of RSA Data Security and institute professor at MIT, designed MD5 as an improvement to a prior message digest algorithm, MD4. The algorithm takes as input a message of arbitrary length and produces as output a 128-bit 'fingerprint' or 'message digest' of the input. It is conjectured that it is computationally infeasible to produce two messages having the same message digest, or to produce any message having a given pre-specified target message digest. The MD5 algorithm is intended for applications, where a large file must be 'compressed' in a secure manner before being encrypted with a private (secret) key under a public-key cryptosystem such as RSA.

• MD5 hashing can still be used for integrity protection, noting "Where the MD5 checksum is used inline with the protocol solely to protect against errors, an MD5 checksum is still an acceptable use." However, it added that "any application and protocol that employs MD5 for any purpose needs to clearly state the expected security services from their use of MD5."

Message digest algorithm characteristics

• Message digests, also known as hash functions, are one-way functions; they accept a message of any size as input, and produce as output a fixed-length message digest.

• MD5 is the third message digest algorithm created by Rivest. All three (the others are MD2 and MD4) have similar structures, but MD2 was optimized for 8-bit machines, while the two later formulas are optimized for 32-bit machines. The MD5 algorithm is an extension of MD4, which the critical review found to be fast, but possibly not absolutely secure. In comparison, MD5 is not quite as fast as the MD4 algorithm, but offers much more assurance of data security.

How MD5 works

• The MD5 message digest hashing algorithm processes data in 512-bit blocks, broken down into 16 words composed of 32 bits each. The output from MD5 is a 128-bit message digest value.

• Computation of the MD5 digest value is performed in separate stages that process each 512-bit block of data along with the value computed in the preceding stage. The first stage begins with the message digest values initialized using consecutive hexadecimal numerical values. Each stage includes four message digest passes which manipulate values in the current data block and values processed from the previous block. The final value computed from the last block becomes the MD5 digest for that block.
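The checksum use-case described earlier can be sketched with Python's hashlib. Here a short byte string stands in for a downloaded file; the "published" value is the real MD5 digest of b"hello":

```python
import hashlib

# A downloaded "file" and the publisher's pre-computed checksum.
data = b"hello"
published = "5d41402abc4b2a76b9719d911017c592"  # MD5 of b"hello"
computed = hashlib.md5(data).hexdigest()
print(computed == published)   # True: the file arrived intact
# A single-character corruption produces a completely different digest:
print(hashlib.md5(b"hellp").hexdigest() == published)  # False
```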

MD5 security

• The goal of any message digest function is to produce digests that appear to be random. To be considered cryptographically secure, the hash function should meet two requirements: first, that it is impossible for an attacker to generate a message matching a specific hash value; and second, that it is impossible for an attacker to create two messages that produce the same hash value.

OR

Q.4 (A) Explain Symmetric And Asymmetric Authentication.

ANS: Symmetric Key Authentication: The authentication process where the same keys are used for encrypting and decrypting the information is known as Symmetric Key Encryption. The study of symmetric cryptosystems is referred to as symmetric cryptography. Symmetric cryptosystems are also sometimes referred to as secret key cryptosystems. A few well-known examples of symmetric key encryption methods are − Data Encryption Standard (DES), Triple-DES (3DES), IDEA, and BLOWFISH.

Prior to 1970, all cryptosystems employed symmetric key authentication.

Even today, its relevance is very high and it is being used extensively in many cryptosystems. It is very unlikely that this authentication will fade away, as it has certain advantages over asymmetric key authentication. The salient features of cryptosystem based on symmetric key authentication are −

• Persons using symmetric key authentication must share a common key prior to exchange of information.

• Keys are recommended to be changed regularly to prevent any attack on the system.

• A robust mechanism needs to exist to exchange the key between the communicating parties. As keys are required to be changed regularly, this mechanism becomes expensive and cumbersome.

• In a group of n people, to enable two-party communication between any two persons, the number of keys required for group is n × (n – 1)/2.

• Length of Key (number of bits) in this authentication is smaller and hence, process of authentication-decryption is faster than asymmetric key authentication.

• Processing power of the computer system required to run a symmetric algorithm is less.

Asymmetric Key Authentication

The authentication process where different keys are used for encrypting and decrypting the information is known as Asymmetric Key Authentication. Though the keys are different, they are mathematically related and hence, retrieving the plaintext by decrypting the ciphertext is feasible. The process is depicted in the following illustration −

Asymmetric Key Authentication was invented in the 20th century to overcome the necessity of a pre-shared secret key between communicating persons. The salient features of this authentication scheme are as follows −

• Every user in this system needs to have a pair of dissimilar keys, private key and public key. These keys are mathematically related − when one key is used for authentication, the other can decrypt the ciphertext back to the original plaintext.

• It requires putting the public key in a public repository and keeping the private key as a well-guarded secret. Hence, this scheme of authentication is also called Public Key Authentication.

• Though public and private keys of the user are related, it is computationally not feasible to find one from another. This is a strength of this scheme.

• When Host1 needs to send data to Host2, he obtains the public key of Host2 from repository, encrypts the data, and transmits.

• Host2 uses his private key to extract the plaintext.

• Length of Keys (number of bits) in this authentication is large and hence, the process of authentication-decryption is slower than symmetric key authentication.

• Processing power of the computer system required to run an asymmetric algorithm is higher.

Q.4 (B) Explain Digital Signature.

ANS: Digital signatures are the public-key primitives of message authentication. In the physical world, it is common to use handwritten signatures on handwritten or typed messages. They are used to bind the signatory to the message. Similarly, a digital signature is a technique that binds a person/entity to digital data. This binding can be independently verified by the receiver as well as any third party.

A digital signature is a cryptographic value that is calculated from the data and a secret key known only by the signer. In the real world, the receiver of a message needs assurance that the message belongs to the sender, and the sender should not be able to repudiate the origination of that message. This requirement is very crucial in business applications, since the likelihood of a dispute over exchanged data is very high.

Model of Digital Signature

As mentioned earlier, the digital signature scheme is based on public key cryptography. The model of digital signature scheme is depicted in the following illustration −

The following points explain the entire process in detail −

• Each person adopting this scheme has a public-private key pair.

• Generally, the key pairs used for encryption/decryption and signing/verifying are different. The private key used for signing is referred to as the signature key and the public key as the verification key.

• Signer feeds data to the hash function and generates hash of data.

• Hash value and signature key are then fed to the signature algorithm which produces the digital signature on given hash. Signature is appended to the data and then both are sent to the verifier.

• Verifier feeds the digital signature and the verification key into the verification algorithm. The verification algorithm gives some value as output.

• Verifier also runs same hash function on received data to generate hash value.

• For verification, this hash value and output of verification algorithm are compared. Based on the comparison result, verifier decides whether the digital signature is valid.

• Since the digital signature is created by the ‘private’ key of the signer and no one else can have this key, the signer cannot repudiate signing the data in future.

It should be noticed that instead of signing data directly by the signing algorithm, usually a hash of the data is created. Since the hash of data is a unique representation of the data, it is sufficient to sign the hash in place of the data. The most important reason for using the hash instead of the data directly for signing is the efficiency of the scheme.

Let us assume RSA is used as the signing algorithm. As discussed in the public key encryption chapter, the encryption/signing process using RSA involves modular exponentiation. Signing large data through modular exponentiation is computationally expensive and time consuming. The hash of the data is a relatively small digest of the data, hence signing a hash is more efficient than signing the entire data.
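The sign-the-hash idea can be sketched with the toy RSA key from the RSA chapter (n = 91, e = 5, d = 29). Real systems use far larger keys and padded hashes; here the SHA-256 digest is reduced mod n purely so the tiny modulus can handle it:

```python
import hashlib

# Toy sign-the-hash: hash the message, then apply the RSA private
# exponent to the (reduced) digest. Verification applies the public
# exponent and compares against a freshly computed digest.
n, e, d = 91, 5, 29
msg = b"pay 100 rupees"
h = int.from_bytes(hashlib.sha256(msg).digest(), "big") % n
sig = pow(h, d, n)          # signer: hash^d mod n
print(pow(sig, e, n) == h)  # verifier: sig^e mod n == recomputed hash -> True
```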

Importance of Digital Signature

Out of all cryptographic primitives, the digital signature using public key cryptography is considered a very important and useful tool to achieve information security. Apart from the ability to provide non-repudiation of a message, the digital signature also provides message authentication and data integrity. Let us briefly see how this is achieved by the digital signature −

• Message authentication − When the verifier validates the digital signature using the public key of a sender, he is assured that the signature has been created only by the sender who possesses the corresponding secret private key and no one else.

• Data Integrity − In case an attacker has access to the data and modifies it, the digital signature verification at receiver end fails. The hash of modified data and the output provided by the verification algorithm will not match. Hence, receiver can safely deny the message assuming that data integrity has been breached.

• Non-repudiation − Since it is assumed that only the signer has knowledge of the signature key, only he can create a unique signature on given data. Thus the receiver can present the data and the digital signature to a third party as evidence if any dispute arises in the future.

By adding public-key encryption to the digital signature scheme, we can create a cryptosystem that can provide the four essential elements of security, namely − Privacy, Authentication, Integrity, and Non-repudiation.

Encryption with Digital Signature

In many digital communications, it is desirable to exchange encrypted messages rather than plaintext to achieve confidentiality. In a public key encryption scheme, the public (encryption) key of the receiver is available in the open domain, and hence anyone can spoof a sender's identity and send an encrypted message to the receiver. This makes it essential for users employing PKC for encryption to seek digital signatures along with the encrypted data to be assured of message authentication and non-repudiation.

This can be achieved by combining digital signatures with the encryption scheme. Let us briefly discuss how to achieve this requirement. There are two possibilities, sign-then-encrypt and encrypt-then-sign. However, the cryptosystem based on sign-then-encrypt can be exploited by the receiver to spoof the identity of the sender and send that data to a third party. Hence, this method is not preferred. The process of encrypt-then-sign is more reliable and widely adopted. This is depicted in the following illustration −

The receiver after receiving the encrypted data and signature on it, first verifies the signature using sender’s public key. After ensuring the validity of the signature, he then retrieves the data through decryption using his private key.

Digital Certificate

Digital certificates are not only issued to people; they can be issued to computers, software packages or anything else that needs to prove its identity in the electronic world.

• Digital certificates are based on the ITU standard X.509 which defines a standard certificate format for public key certificates and certification validation. Hence digital certificates are sometimes also referred to as X.509 certificates. The public key pertaining to the user client is stored in digital certificates by the Certification Authority (CA) along with other relevant information such as client information, expiration date, usage, issuer etc.

• CA digitally signs this entire information and includes digital signature in the certificate.

• Anyone who needs assurance about the public key and associated information of a client carries out the signature validation process using the CA’s public key. Successful validation assures that the public key given in the certificate belongs to the person whose details are given in the certificate.

The process of obtaining a Digital Certificate by a person/entity is depicted in the following illustration.

As shown in the illustration, the CA accepts the application from a client to certify his public key. The CA, after duly verifying identity of client, issues a digital certificate to that client.

A digital signature is a mathematical technique used to validate the authenticity and integrity of a message, software or digital document.

The digital equivalent of a handwritten signature or stamped seal, but offering far more inherent security, a digital signature is intended to solve the problem of tampering and impersonation in digital communications. Digital signatures can provide the added assurances of evidence to origin, identity and status of an electronic document, transaction or message, as well as acknowledging informed consent by the signer.

WORKING OF DIGITAL SIGNATURE:

Digital signatures are based on public key cryptography, also known as asymmetric cryptography. Using a public key algorithm such as RSA, one can generate two keys that are mathematically linked: one private and one public. To create a digital signature, signing software (such as an email program) creates a one-way hash of the electronic data to be signed. The private key is then used to encrypt the hash. The encrypted hash -- along with other information, such as the hashing algorithm -- is the digital signature. The reason for encrypting the hash instead of the entire message or document is that a hash function can convert an arbitrary input into a fixed length value, which is usually much shorter. This saves time since hashing is much faster than signing.

The value of the hash is unique to the hashed data. Any change in the data, even changing or deleting a single character, results in a different value. This attribute enables others to validate the integrity of the data by using the signer's public key to decrypt the hash. If the decrypted hash matches a second computed hash of the same data, it proves that the data hasn't changed since it was signed. If the two hashes don't match, the data has either been tampered with in some way (integrity) or the signature was created with a private key that doesn't correspond to the public key presented by the signer (authentication).

A digital signature can be used with any kind of message -- whether it is encrypted or not -- simply so the receiver can be sure of the sender's identity and that the message arrived intact. Digital signatures make it difficult for the signer to deny having signed something (non-repudiation) -- assuming their private key has not been compromised -- as the digital signature is unique to both the document and the signer, and it binds them together.

A digital certificate, an electronic document that contains the digital signature of the certificate-issuing authority, binds together a public key with an identity and can be used to verify that a public key belongs to a particular person or entity.
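The tamper-detection property described above can be sketched with the same toy RSA key used earlier (n = 91, e = 5, d = 29); the helper names are made up for this sketch, and the digest is reduced mod n only because the modulus is tiny:

```python
import hashlib

# Toy sign/verify helpers: sign the (reduced) hash of the message with
# the private exponent; verify by comparing sig^e against a fresh hash.
def toy_sign(msg, d, n):
    h = int.from_bytes(hashlib.sha256(msg).digest(), "big") % n
    return pow(h, d, n)

def toy_verify(msg, sig, e, n):
    h = int.from_bytes(hashlib.sha256(msg).digest(), "big") % n
    return pow(sig, e, n) == h

n, e, d = 91, 5, 29
sig = toy_sign(b"transfer 10", d, n)
print(toy_verify(b"transfer 10", sig, e, n))  # True: message intact
# A tampered message fails verification (unless, with such a tiny
# modulus, the two digests happen to collide mod n):
print(toy_verify(b"transfer 99", sig, e, n))
```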

Digital Signature Standard (DSS)

The Digital Signature Standard is intended to be used in electronic funds transfer, software distribution, electronic mail, data storage and applications which require high data integrity assurance. The Digital Signature Standard can be implemented in software, hardware or firmware.

The algorithm used behind the Digital Signature Standard is known as the Digital Signature Algorithm. The algorithm makes use of two large numbers which are calculated based on a unique algorithm which also considers parameters that determine the authenticity of the signature. This indirectly also helps in verifying the integrity of the data attached to the signature. The digital signatures can be generated only by the authorized person using their private keys and the users or public can verify the signature with the help of the public keys provided to them. However, one key difference between encryption and signature operation in the Digital Signature Standard is that encryption is reversible, whereas the digital signature operation is not. Another fact about the digital signature standard is that it does not provide any capability with regards to key distribution or exchange of keys. In other words, security of the digital signature standard largely depends on the secrecy of the private keys of the signatory.
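The sign and verify equations of the Digital Signature Algorithm can be sketched with assumed toy parameters (p = 23, q = 11, g = 4, where g has order q modulo p); all values here are illustrative and far too small for real use:

```python
# Toy DSA. Public parameters: prime p, prime divisor q of p-1, and a
# generator g of the order-q subgroup mod p.
p, q, g = 23, 11, 4
x = 3              # private key
y = pow(g, x, p)   # public key: 18
H, k = 4, 7        # message hash (mod q) and per-message secret

# Signing: r = (g^k mod p) mod q, s = k^-1 (H + x*r) mod q
r = pow(g, k, p) % q                   # 8
s = (pow(k, -1, q) * (H + x * r)) % q  # 4

# Verification uses only public values (p, q, g, y):
w = pow(s, -1, q)
u1, u2 = (H * w) % q, (r * w) % q
v = (pow(g, u1, p) * pow(y, u2, p) % p) % q
print(v == r)      # True: signature accepted
```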

The Digital Signature Standard ensures that the digital signature can be authenticated and that the electronic documents carrying the digital signatures are secure. The standard also ensures non-repudiation with regards to the signatures and provides all safeguards for imposter prevention. The standard also ensures that digitally signed documents can be tracked.

UNIT-V

Q.5 (A) Explain the Architecture of IP Security in Detail.

Ans: IPSec (IP Security) architecture uses two protocols to secure the traffic or data flow. These protocols are ESP (Encapsulation Security Payload) and AH (Authentication Header). IPSec architecture includes protocols, algorithms, DOI, and Key Management. All these components are very important in order to provide the three main services:

• Confidentiality

• Authentication

• Integrity

The IP security architecture (IPsec) provides cryptographic protection for IP datagrams in IPv4 and IPv6 network packets. This protection can include confidentiality, strong integrity of the data, data authentication, and partial sequence integrity. Partial sequence integrity is also known as replay protection.

1. Architecture: The Architecture (IP Security Architecture) document covers the general concepts, definitions, protocols, algorithms, and security requirements of IP Security technology.

2. ESP Protocol: ESP (Encapsulation Security Payload) provides the confidentiality service. Encapsulation Security Payload is implemented in one of two ways:

• ESP with optional authentication.
• ESP with authentication.

The ESP packet contains the following fields:

• Security Parameter Index (SPI): This parameter is used in the Security Association. It gives a unique number to the connection built between client and server.
• Sequence Number: A unique sequence number is allotted to every packet so that, at the receiver side, the packets can be arranged properly.
• Payload Data: The payload data is the actual data or message. It is in encrypted form to achieve confidentiality.
• Padding: Extra bits or space added to the original message in order to ensure confidentiality. Pad Length is the size of the added bits or space.
• Next Header: Identifies the next payload, i.e. the type of the actual data carried.
• Authentication Data: This field is optional in the ESP protocol packet format.

3. Encryption Algorithm: The encryption algorithm document describes the various encryption algorithms used for Encapsulation Security Payload.
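The field layout above can be sketched by packing the fixed ESP header (SPI, sequence number) and trailer (padding, pad length, next header) with `struct`. All values are illustrative; in real ESP the payload, padding, pad length, and next header are what actually get encrypted:

```python
# Sketch of the ESP field order: header (SPI, sequence number),
# payload, then trailer (padding, pad length, next header).
import struct

def build_esp_header(spi: int, seq: int) -> bytes:
    return struct.pack("!II", spi, seq)       # both 32-bit, big-endian

def parse_esp_header(raw: bytes):
    return struct.unpack("!II", raw[:8])      # (spi, seq)

def build_esp_packet(spi: int, seq: int, payload: bytes,
                     next_header: int) -> bytes:
    # pad so payload + pad-length byte + next-header byte is 32-bit aligned
    pad = b"\x00" * ((4 - (len(payload) + 2) % 4) % 4)
    trailer = pad + struct.pack("!BB", len(pad), next_header)
    return build_esp_header(spi, seq) + payload + trailer

pkt = build_esp_packet(0x12345678, 1, b"hello", next_header=6)  # 6 = TCP
assert parse_esp_header(pkt) == (0x12345678, 1)
```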

4. AH Protocol: The AH (Authentication Header) protocol provides both the authentication and integrity services. The Authentication Header is implemented in one way only: authentication along with integrity.

Authentication Header covers the packet format and general issue related to the use of AH for packet authentication and integrity.

5. Authentication Algorithm: The authentication algorithm document set describes the authentication algorithms used for AH and for the authentication option of ESP.

6. DOI (Domain of Interpretation): The DOI is the identifier that supports both the AH and ESP protocols. It contains the values needed for the other documents to relate to each other.

7. Key Management: The key management documents describe how keys are exchanged between sender and receiver.

Operations Within IPsec

The IPsec suite can be considered to have two separate operations, when performed in unison, providing a complete set of security services. These two operations are IPsec Communication and Internet Key Exchange.

• IPsec Communication

o It is typically associated with standard IPsec functionality. It involves encapsulating, encrypting, and hashing IP datagrams and handles all packet processing.

o It is responsible for managing the communication according to the available Security Associations (SAs) established between communicating parties.

o It uses security protocols such as Authentication Header (AH) and Encapsulation Security Payload (ESP).

o IPsec communication is not involved in the creation of keys or their management.

o IPsec communication operation itself is commonly referred to as IPsec.

Q.5 (B) Write Short Note on Encrypted Key Exchange.

Ans: Encrypted Key Exchange

Encrypted Key Exchange (also known as EKE) is a family of password-authenticated key agreement methods described by Steven M. Bellovin and Michael Merritt. It was the first method to amplify a shared password into a shared key, where the shared key may subsequently be used to provide a zero-knowledge password proof or other functions.

In the most general form of EKE, at least one party encrypts an ephemeral (one-time) public key using a password and sends it to a second party, who decrypts it and uses it to negotiate a shared key with the first party. Bellovin and Merritt also introduced the concept of augmented password-authenticated key agreement for client/server scenarios. Augmented methods have the added goal of ensuring that password verification data stolen from a server cannot be used by an attacker to masquerade as the client, unless the attacker first determines the password (e.g. by performing a brute-force attack on the stolen data).

A version of EKE based on Diffie–Hellman, known as DH-EKE, has survived attack and has led to improved variations, such as the PAK family of methods.

With the US patent on EKE expiring in late 2011, an EAP authentication method using EKE was published as an IETF RFC. The EAP method uses the Diffie-Hellman variant of EKE.
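The DH-EKE idea above can be sketched with toy parameters. Here the ephemeral Diffie–Hellman public value is "encrypted" under a password-derived key using a simple XOR mask from SHA-256; the group (p, g), the password, and the XOR masking are illustrative stand-ins, not a secure construction:

```python
# Toy DH-EKE sketch: Alice password-encrypts her ephemeral DH public
# value; Bob decrypts it with the shared password and both sides derive
# the same session key. (A real EKE masks both directions and uses a
# large, vetted group and a proper cipher.)
import hashlib
import secrets

p, g = 4294967291, 2   # tiny demo group; p fits in 32 bits

def xor_mask(data: bytes, password: bytes) -> bytes:
    pad = hashlib.sha256(password).digest()[:len(data)]
    return bytes(x ^ y for x, y in zip(data, pad))

password = b"hunter2"

# Alice: pick ephemeral secret, mask g^a mod p under the password
a = secrets.randbelow(p - 2) + 1
blob = xor_mask(pow(g, a, p).to_bytes(4, "big"), password)

# Bob: unmask Alice's value, reply with his own, derive the shared key
b = secrets.randbelow(p - 2) + 1
A = int.from_bytes(xor_mask(blob, password), "big")
B = pow(g, b, p)
shared_bob = pow(A, b, p)

# Alice derives the same shared key from Bob's public value
shared_alice = pow(B, a, p)
assert shared_alice == shared_bob
```

The point of the password masking is that an eavesdropper who lacks the password cannot even recover the ephemeral public values, so an offline dictionary attack gains no foothold from the exchange alone.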

OR

Q.5 Explain following in detail.

(A) Encapsulation Security Payload in Transport and Tunnel Mode with Multiple Security Association.

(B) Lamport’s Hash

Ans: Encapsulation Security Payload (ESP)

ESP provides security services such as confidentiality, integrity, origin authentication, and optional replay resistance. The set of services provided depends on options selected at the time of Security Association (SA) establishment.

In ESP, algorithms used for encryption and generating authenticator are determined by the attributes used to create the SA.

The process of ESP is as follows. The first two steps are similar to the process of AH stated above. • Once it is determined that ESP is involved, the fields of the ESP packet are calculated. The ESP field arrangement is depicted in the following diagram.

• Encryption and authentication process in transport mode is depicted in the following diagram.

• In case of Tunnel mode, the encryption and authentication process is as depicted in the following diagram.

Although authentication and confidentiality are the primary services provided by ESP, both are optional: technically, NULL encryption can be selected without authentication. In practice, however, at least one of the two must be implemented to use ESP effectively.

The basic concept is to use ESP when one wants authentication and encryption, and to use AH when one wants extended authentication without encryption.

Security Associations in IPsec

Security Association (SA) is the foundation of an IPsec communication. The features of SA are −

• Before sending data, a virtual connection is established between the sending entity and the receiving entity, called “Security Association (SA)”.

• IPsec provides many options for performing network encryption and authentication. Each IPsec connection can provide encryption, integrity, authenticity, or all three services. When the security service is determined, the two IPsec peer entities must determine exactly which algorithms to use (for example, DES or 3DES for encryption; MD5 or SHA-1 for integrity). After deciding on the algorithms, the two devices must share session keys.

• An SA is a set of the above communication parameters that provides a relationship between two or more systems to build an IPsec session.

• An SA is simplex (one-directional) in nature, and hence two SAs are required for bi-directional communication.

• SAs are identified by a Security Parameter Index (SPI) number that exists in the security protocol header.

• Both sending and receiving entities maintain state information about the SA. This is similar to TCP endpoints, which also maintain state information; IPsec is connection-oriented like TCP.

Parameters of SA

Any SA is uniquely identified by the following three parameters −

• Security Parameters Index (SPI).

o It is a 32-bit value assigned to SA. It is used to distinguish among different SAs terminating at the same destination and using the same IPsec protocol.

o Every packet of IPsec carries a header containing SPI field. The SPI is provided to map the incoming packet to an SA.

o The SPI is a random number generated by the sender to identify the SA to the recipient.

• Destination IP Address − It can be the IP address of the end router.

• Security Protocol Identifier − It indicates whether the association is an AH or ESP SA.
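An SA database keyed by these three parameters can be sketched as a simple dictionary lookup; the SPI values, addresses, and algorithm names below are illustrative:

```python
# Sketch of an SA database keyed by the identifying triple
# (SPI, destination IP, security protocol).
AH, ESP = 51, 50   # IP protocol numbers for AH and ESP

sa_db = {}

def add_sa(spi, dest_ip, proto, params):
    sa_db[(spi, dest_ip, proto)] = params

def lookup_sa(spi, dest_ip, proto):
    # map an incoming packet's header fields to its SA, if any
    return sa_db.get((spi, dest_ip, proto))

# two SAs, one per direction, since SAs are unidirectional
add_sa(0x1001, "10.0.0.2", ESP, {"cipher": "3DES", "auth": "SHA-1"})
add_sa(0x2002, "10.0.0.1", ESP, {"cipher": "3DES", "auth": "SHA-1"})
assert lookup_sa(0x1001, "10.0.0.2", ESP)["cipher"] == "3DES"
```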

An example of an SA between two routers involved in IPsec communication is shown in the following diagram.

(B) LAMPORT’S HASH:

Ans:

The Lamport algorithm for generating and applying one-time passwords (OTPs) is a simple solution that provides great value in the right context. Not only can the Lamport OTP scheme provide effective security for distributed client/service interactions, but it's also simple to comprehend and implement.

There's a subtle beauty in simple things that present great value. To paraphrase Albert Einstein, a solution to a problem should be as simple as it can be, but no simpler. Applying a one-time password (OTP) scheme between distributed systems makes it more difficult for a would-be intruder to access and gain unauthorized control of restricted resources such as data, physical devices, or service endpoints. An OTP scheme is obviously a step up from completely open access, or access limited only by physical network barriers. But a solution based on an OTP challenge also has some advantages over static, infrequently changing passwords, because the window of opportunity to gain access to credentials is much smaller. There's a practical place for either type of authentication, or even both used in concert.

The Lamport OTP approach is based on a mathematical algorithm for generating a sequence of "passkey" values, each successor value based on the value of its predecessor. This article presents a simple service that is made more secure by adopting the Lamport OTP scheme. I'll demonstrate the concept and mechanics of this approach through a series of client/service interactions. I'll also present a Java-implemented framework that the existing client/service components can easily leverage.

The mechanics of the Lamport OTP scheme

The core of the Lamport OTP scheme requires that cooperating client/service components agree to use a common sequencing algorithm to generate a set of expiring one-time passwords (client side) and to validate the client-provided passkeys included in each client-initiated request (service side). The client generates a finite sequence of values starting with a "seed" value, and each successor value is generated by applying some transforming algorithm (an F(S) function) to the previous sequence value:

S1 = Seed, S2 = F(S1), S3 = F(S2), S4 = F(S3), ... S[n] = F(S[n-1])

The particular transforming algorithm used can be as simple or complex as you like, as long as it always produces the same result for a given value. The approach has no tolerance for randomness or variability: the same value S' must always be generated from a given value S. As a simple example, suppose the client wants to create a sequence of 10 values, starting with a seed value of 0, where the transforming algorithm adds 3 to the value it's given. The sequence would look like this:

0, 3, 6, 9, 12, 15, 18, 21, 24, 27

The sequence is managed as a traditional last-in, first-out (LIFO) stack collection: the client consumes it in reverse order, beginning with the last value and working down toward the seed value. As you'd expect, once a sequence value is consumed, it's purged from the sequence stack, never to be used again (at least not during the same service "conversation"). Every client/service interaction is required to embed two extra pieces of information:

• A relatively unique client-identifier
• One of the generated sequence values -- the OTP. Going forward, I'll refer to this value as a passkey.

This looks straightforward enough, but it's not obvious at first blush how this extra information can contribute to the security of a service offering.
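The sequence generation and LIFO consumption described above can be sketched in Python. A SHA-256 hash is used here as the transforming function F, an illustrative choice (the article leaves F open, but a one-way function is what makes the scheme safe in practice):

```python
# Sketch of the Lamport sequence: each value is F applied to its
# predecessor; the client consumes the stack in reverse (LIFO).
import hashlib

def F(s: bytes) -> bytes:
    # transforming function: deterministic, same output for same input
    return hashlib.sha256(s).digest()

def make_sequence(seed: bytes, n: int):
    seq = [seed]
    for _ in range(n - 1):
        seq.append(F(seq[-1]))
    return seq          # client pops from the end, toward the seed

seq = make_sequence(b"seed", 5)
# the service stores the last value; the client's next passkey is the
# predecessor, which the service validates via F(passkey) == stored
stored = seq.pop()
passkey = seq.pop()
assert F(passkey) == stored
```

Because F is one-way, an eavesdropper who captures a spent passkey cannot compute the predecessor the client will send next.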
With the Lamport sequence scheme in mind, here's how a series of service requests might work:

• The client initiates a conversation through an introduction, or a request to communicate further. Let's call this the "Hello" interaction. The Hello request is sent along with a client-identifier and the last generated passkey.

o Upon receiving this introduction, the service reserves the right to refuse to collaborate with the requester. For now, we'll assume the service accepts all greetings.

o The service saves the client-identifier, relating it to the passkey value provided. (You can visualize the service maintaining this information in a basic map/hashtable, where the client-identifier is the key and the passkey is the value.)

• Presumably, some number of requests following Hello aim to get some real work accomplished. We'll call these interactions service requests. Each service request is packaged to conform to the service interface's defined protocol, but once again the client-identifier and current passkey values are included.

• Upon receiving a service request, the service reserves the right to refuse to act on it for any reason. For now, we'll assume that client authentication is limited to recognizing the client-identifier (that is, checking that it is one of the keys in its map) and validating the passkey value. That validation uses the same F(S) function that was used to generate each successive sequence value for the client. So if F(provided passkey) is equal to what has been stored for the client-identifier, the service request proceeds. If not, the service request is ignored.

• At some point, the client ends the service conversation by indicating that it's done for now. We'll call this interaction "Goodbye." As before, the client-identifier and latest passkey values are included in the Goodbye request.

• Upon receiving Goodbye from the client, the service checks the client-identifier map for a matching key. If it's found, the same authentication process that was applied to a service request is applied: comparing F(provided passkey) to the passkey value associated with this identifier. The client's identifier entry is purged if authentication passes. Otherwise, the Goodbye request is ignored.

In summary, any client that uses the features of an OTP-protected service cycles through a series of Hello, service-request, and Goodbye interactions. Each conversation is prefaced by generating some number of sequence/passkey values to be used as OTPs.