News archives 2013

  • SUMMARY: NIST's current proposal for SHA-3 is a subset of the Keccak family, and one can generate test vectors for that proposal using our reference code submitted to the contest.

    In the end, it will be NIST's decision what exactly is standardized for SHA-3, but as the Keccak team we would like to take this opportunity to recall some facts about Keccak and give our opinion on the future SHA-3 standard.

    First some reminders on Keccak

    • Keccak is a family of sponge function instances, encompassing capacity values ranging from 0 to 1599 bits. All these instances are well-defined and so are their security claims. Our SHA-3 submission highlighted instances with capacities c=448, 512, 768 and 1024 to strictly meet NIST's SHA-3 requirements on the SHA-2 drop-in replacement instances, plus a capacity of 576 for a variable-output-length instance. Nevertheless, the capacity is an explicitly tunable parameter, in line with what NIST suggested in their SHA-3 call, and we therefore proposed in our SHA-3 submission document that the capacity be user-selectable.
    • The capacity is a parameter of the sponge construction (and of Keccak) that determines a particular security strength level, in line with the levels defined in [NIST SP 800-57]. Namely, for a capacity c, the security strength level is c/2 bits and the sponge function is claimed to resist all attacks up to 2^(c/2) operations, unless the attack is easier on a random oracle. As we make a clear security claim for each possible value of the capacity, a user knows what to expect and a cryptanalyst knows her target. Conversely, we provide a tool that helps determine the minimum capacity and output length given collision and pre-image resistance requirements (see the sketch after this list).
    • The core of Keccak, namely the Keccak-f permutations, has not changed since round 2 of the SHA-3 competition. When Keccak was selected for the 2nd round, we increased the number of rounds to have a better safety margin (from 18 to 24 rounds for Keccak-f[1600]). The round function has not changed since the original submission in 2008.
    • Keccak is the result of using the sponge construction on top of the Keccak-f permutations and applying the multi-rate padding to the input. Thanks to the multi-rate padding, each member of the Keccak family (in particular, each value of the capacity) acts as an independent function.
    • As a native feature, Keccak provides variable output length, that is, the user can dynamically ask for as many output bits as desired (e.g., as a mask generating function such as MGF1).
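
    As a rough illustration of these security claims, here is a minimal sketch (hypothetical helper functions, not part of the Keccak reference code) of how the capacity maps to a claimed security strength, and of the kind of computation our tool performs to derive a minimum capacity and output length from collision and pre-image resistance targets:

    ```c
    #include <stdio.h>

    static unsigned max_u(unsigned a, unsigned b) { return a > b ? a : b; }

    /* Claimed generic security strength, in bits, for capacity c (flat sponge claim). */
    static unsigned security_strength(unsigned c) { return c / 2; }

    /* Minimum capacity and output length (in bits) meeting the given targets:
     * collision resistance is limited by min(c/2, n/2), pre-image resistance by min(c/2, n). */
    static unsigned min_capacity(unsigned collision, unsigned preimage) {
        return 2 * max_u(collision, preimage);
    }
    static unsigned min_output_length(unsigned collision, unsigned preimage) {
        return max_u(2 * collision, preimage);
    }

    int main(void) {
        printf("c=256 -> %u-bit security strength\n", security_strength(256));
        printf("128-bit collision and pre-image resistance -> c >= %u, n >= %u\n",
               min_capacity(128, 128), min_output_length(128, 128));
        return 0;
    }
    ```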

    Keccak in the SHA-3 standard

    NIST's current proposal for SHA-3, namely the one presented by John Kelsey at CHES 2013 in August, is a subset of the Keccak family. More concretely, one can generate the test vectors for that proposal using the Keccak reference code (version 3.0 and later, January 2011). This alone shows that the proposal cannot contain internal changes to the algorithm.

    We did not suggest that NIST make any change to the Keccak components, namely the Keccak-f permutations, the sponge construction and the multi-rate padding, and we are not aware of any plans of NIST to do so. However, the future standard will not include the entire Keccak family but will select only specific instances of Keccak (i.e., with specific capacities), similarly to the block and key lengths of AES being a subset of those of Rijndael. Moreover, it will append some parameter-dependent suffix to the input prior to processing (see below) and fix the output length (for the SHA-2 drop-in replacements) or keep it variable (for the SHAKEs).

    Here are further comments on these choices.

    First, about suffixes (sometimes referred to as padding).

    In Sakura, we propose to append some suffix to the input message, before applying Keccak. This is sometimes presented as a change in Keccak's padding rule because adding such a suffix can be implemented together with the padding, but technically this is still on top of the original multi-rate padding.

    The suffixes serve two purposes. The first is domain separation between the different SHA-3 instances, to make them behave as independent functions (even if they share the same capacity). The second is to accommodate tree hashing in the future in such a way that domain separation is preserved.

    The security is not reduced by adding these suffixes, as this only restricts the input space compared to the original Keccak. If there is no security problem with Keccak(M), there is no security problem with Keccak(M|suffix), as the latter set of inputs is included in the former.
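
    To make the order of operations concrete, here is a minimal bit-level sketch, with a hypothetical suffix value and one bit stored per byte for readability (not the actual reference code): the suffix is appended to the message first, and the original multi-rate padding pad10*1 is applied afterwards, untouched.

    ```c
    #include <stdio.h>

    #define RATE 1088          /* example rate in bits (c = 512 for Keccak-f[1600]) */

    static unsigned char buf[2 * RATE];   /* one bit per byte, for readability */

    /* Multi-rate padding pad10*1: a 1 bit, the minimum number of 0 bits, and a
     * final 1 bit, so that the total length becomes a multiple of the rate. */
    static size_t pad10star1(size_t len, size_t rate) {
        buf[len++] = 1;
        while ((len + 1) % rate != 0)
            buf[len++] = 0;
        buf[len++] = 1;
        return len;
    }

    int main(void) {
        size_t len = 0;
        const unsigned char message[] = {1, 0, 1};   /* example message bits     */
        const unsigned char suffix[]  = {0, 1};      /* hypothetical suffix bits */

        for (size_t i = 0; i < sizeof message; i++) buf[len++] = message[i];
        for (size_t i = 0; i < sizeof suffix;  i++) buf[len++] = suffix[i];  /* domain separation */
        len = pad10star1(len, RATE);                                         /* original padding  */

        printf("padded input length: %zu bits (a multiple of r = %d)\n", len, RATE);
        return 0;
    }
    ```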

    Second, about the output length.

    Variable output length hashing is an interesting feature for natively supporting a wide range of applications including full domain hashing, keystream generation and any protocol making use of a mask generating function. In its current proposal, NIST plans to standardize two instances: SHAKE256 and SHAKE512, with capacities c=256 and c=512 and therefore security strength levels of 128 and 256 bits, respectively.

    The traditional fixed output-length instances acting as SHA-2 drop-in replacements (SHA3-xxx) are obtained by truncating Keccak instances to the given output length.

    Third, about the proposed instances and their capacities.

    The capacity of the SHAKEs is given above and we now focus on the SHA-2 drop-in replacement instances with fixed output length n, with n in {224, 256, 384, 512}.

    The SHA-3 requirements asked for a spectrum of resistance levels depending on the attack: n/2 for collision, n for first pre-image and n-k for second pre-image (with 2^k the length of the first message). To meet the requirements and avoid being disqualified, we set c=2n so as to match the n-bit pre-image resistance level, and the requirements on the other attacks followed automatically as they were lower. However, setting c=2n is also a waste of resources. For instance, Keccak[c=2n] before truncation provides n-bit collision resistance (in fact n-bit resistance against everything), but after truncation to n bits of output it drops to n/2-bit collision resistance.

    Instead, adjusting the capacity to meet the security strength levels of [NIST SP 800-57] gives better security-performance trade-offs. In this approach, one aims at building a protocol or a system with one consistent security target, i.e., where components are chosen with matching security strength levels. The security strength level is defined by the resistance to the strongest possible attack, i.e., (internal) collisions, so that, e.g., SHA-256 is at 128 bits for digital signatures and hash-only applications. Hence, setting c=n simply puts SHA3-n at the n/2-bit security level.
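
    The difference between the two options can be summarized with a small numerical sketch (claimed generic resistance levels only, not a formal statement): collision resistance is limited by min(c/2, n/2) and pre-image resistance by min(c/2, n).

    ```c
    #include <stdio.h>

    static unsigned min_u(unsigned a, unsigned b) { return a < b ? a : b; }

    /* Claimed generic resistance levels for capacity c and output length n (in bits). */
    static void show(unsigned c, unsigned n) {
        printf("c=%4u, n=%3u -> collision %3u bits, pre-image %3u bits\n",
               c, n, min_u(c / 2, n / 2), min_u(c / 2, n));
    }

    int main(void) {
        show(512, 256);   /* c=2n: n-bit pre-image resistance, n/2-bit collision resistance */
        show(256, 256);   /* c=n:  everything at the n/2-bit security strength level        */
        return 0;
    }
    ```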

    Among the Keccak family, NIST decided to propose instances with capacities of c=256 for n=224 or 256, and c=512 for n=384 or 512. This proposal is the result of discussions between the NIST hash team and us, during our visit in February and afterwards by mail. It was then publicly presented by John Kelsey at CT-RSA later in February and posted on the NIST hash-forum mailing list soon after. It was subsequently presented on several occasions, including Eurocrypt 2013 and CHES 2013 at UCSB.

    The corresponding two security strength levels are 128 bits, which is rock-solid, and an extremely high 256 bits (e.g., corresponding to RSA keys of 15360 bits [NIST SP 800-57]).

    Comments on some of the criticism

    Finally, we now comment on some criticism we saw in the discussions on the NIST hash-forum mailing list.

    • “128 bits of security are not enough in particular in the light of multi-target pre-image attacks.” We addressed this specifically in a message to the NIST SHA-3 mailing list, in which we explained why this fear is unfounded and why the 128 bits of security do not degrade under multi-target pre-image attacks. In any case, the SHA-3 proposal also includes functions with 256-bit security, which the user is free to choose.
    • “SHA3-256 does not provide 256-bit pre-image resistance.” With c=256, this is indeed correct. We proposed to reduce the capacity of SHA3-256 to 256 bits to follow our security-strength-oriented approach, which better addresses actual user requirements than the traditional way of inferring the resistance of hash functions from the output length. Nevertheless, to avoid confusion for people expecting 256-bit resistance from SHA3-256, we made a second proposal that sets c=512 for all SHA-2 drop-in replacement instances, hence providing the traditional 256-bit pre-image resistance.
    • “There is no instance providing 512-bit pre-image resistance.” Again, this is correct. The answer is similar to the previous point, except that our new proposal does not extend to capacities higher than c=512 bits, simply because claiming or relying on security strength levels above 256 bits is meaningless. Setting c=1024 would induce a significant performance loss, and there are no standard public-key parameters matching 512 bits of security. Also, we believe that this security level was more a side effect than a security target in itself: any conventional hash function aiming at 256-bit collision resistance automatically provides 512-bit pre-image resistance. Keccak, however, is a different cryptographic object, and SHA3-512 can safely provide a security strength of 256 bits against all attacks without the need to boost the security level beyond any meaning.
    • “Claiming a higher security level provides a safety margin.” In the Keccak design philosophy, the safety margin comes from the number of rounds in Keccak-f, whereas the security level comes from the selected capacity. We have designed Keccak so as to have a strong safety margin for all possible capacities. At this moment, this safety margin is very comfortable (4 to 5 rounds out of 24 are broken). Of course, the user can still increase the capacity to get a security level that is higher than the one he targets, and hence somehow artificially increase the safety margin. But there is simply no need to do so. We also refer to Martin Schläffer's excellent summary, posted on the NIST hash-forum mailing list on October 1st, 2013 at 10:16 GMT+2 (thanks Martin!).

    As explained in our new proposal, we think the SHA-3 standard should emphasize the SHAKE functions. The SHA-3 user would keep the choice between the lean SHAKE256 with its rock-solid security strength level and the heavier SHAKE512 with its extremely high security strength level. In implementations, the bulk of the code or circuit is dedicated to the Keccak-f[1600] permutation and, from our experience, supporting multiple rates can be done at very small cost.

  • This article is a copy of a message we posted on the NIST hash-forum mailing list on September 30, 2013.

    SUMMARY: Keccak instances with a capacity of 256 bits offer a generic security strength level of 128 bits against all generic attacks, including multi-target attacks. 2^128 is an astronomical number and attacks with such complexities are expected to remain unreachable for decades to come.

    We have proposed instances with capacity c=256 as an option because they have a generic security strength of 128 bits. This means that any single-stage (*) generic attack has an expected complexity of 2^128 computations of Keccak-f, unless it is easier on a random oracle. This is such an astronomical amount of work that one may wonder why we would ever need more than 128 bits of security (see also Tune Keccak to your requirements).

    In the discussions on SHA-3 we have seen some remarks on 128-bit security not being sufficient in the light of multi-target attacks. Multi-target attacks can be illustrated nicely with block ciphers.

    • Single-target attack: Say we have a 32-byte ciphertext C that is the result of applying AES-128 in ECB mode to a known plaintext P with some unknown key K. Then K can be found by exhaustive key search: enciphering P under all possible key values until C appears. The right key will be hit after about 2^127 trials, so the security strength is around 128 bits. This is the security strength the layman typically expects when using AES-128.
    • Multi-target attack: Say we now have M ciphertexts C_i obtained by enciphering the same plaintext P with M different keys K_i. And assume the attacker is satisfied if he can find at least one key K_i. Then, if he applies exhaustive key search, the expected number of trials is 2^128/M. So the security strength is reduced to 128-log2(M) bits. If M is very large, this can reduce the security strength quite a lot: e.g., M = 2^40 reduces the time complexity to only 2^88. This is still a huge number, but it can no longer be dismissed as science fiction. Among cryptographers this security degradation is well known and there are methods of avoiding it, such as salting, enciphering in CBC mode with random IVs, etc. (See also the sketch after this list.)
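
    As a quick numerical sketch of this degradation (plain arithmetic, not an attack implementation):

    ```c
    #include <stdio.h>

    /* With M targets, exhaustive key search takes about 2^128 / M trials, so the
     * security strength drops from 128 bits to 128 - log2(M) bits. */
    int main(void) {
        for (unsigned log2_m = 0; log2_m <= 40; log2_m += 10)
            printf("M = 2^%-2u targets -> expected work about 2^%u trials\n",
                   log2_m, 128u - log2_m);
        return 0;
    }
    ```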

    If the application does not allow avoiding multi-target attacks, one can decide to use AES-192 or AES-256. The reason to use 192-bit or 256-bit keys is not that the 128-bit security strength level is too small, but that, in the light of multi-target attacks, we need a block cipher with a key longer than 128 bits to offer a security strength level of 128 bits. Summarizing, AES-128, AES-192 and AES-256 have key lengths of 128, 192 and 256 bits, but this does not mean they offer a generic security strength of 128, 192 and 256 bits. This is not specific to AES; it is true for any block cipher. This is also not a problem: a protocol designer who understands these issues can easily build efficient protocols offering excellent generic security strengths.

    Multi-target attacks also apply to finding (first or second) pre-images. Finding one pre-image out of M 128-bit hashes takes only 2^128/M hash computations.

    So it is tempting to think that the 128-bit generic security strength of Keccak instances with 256-bit capacity will also degrade under multi-target attacks. Fortunately, this is not the case, as the generic security strength level of c/2 bits follows from the bound in our indifferentiability proof for the sponge construction. More specifically, the success probability of a generic attack on a sponge function is upper bounded by the success probability of that attack on a random oracle plus the RO-differentiating advantage N^2/2^(c+1), with N the number of calls to Keccak-f. We explained this in our Eurocrypt 2008 paper on sponge indifferentiability; it was formalized by Elena Andreeva, Bart Mennink and Bart Preneel in Appendix B, Theorem 2, of their paper Security Reductions of the Second Round SHA-3 Candidates, and it holds for multi-target attacks as well.
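
    The following small sketch evaluates that bound numerically, in log2, for c=256 (plain arithmetic under the assumptions stated above):

    ```c
    #include <stdio.h>

    /* The RO-differentiating advantage of the sponge construction is bounded by
     * N^2 / 2^(c+1); in log2 this is 2*log2(N) - (c+1). For c = 256 it stays
     * negligible until N approaches 2^128. */
    int main(void) {
        const double c = 256.0;
        const double log2_N[] = {64.0, 100.0, 128.0};

        for (int i = 0; i < 3; i++) {
            double log2_adv = 2.0 * log2_N[i] - (c + 1.0);
            printf("log2(N) = %5.1f -> log2(advantage) <= %6.1f\n", log2_N[i], log2_adv);
        }
        return 0;
    }
    ```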

    If one wants a hash function (any hash function) that offers a generic security strength level of 128 bits against multi-target attacks with at most, say, 2^64 targets, then one must take the output length equal to 128+64=192 bits. For a sponge function, however, the capacity does not need to be increased to twice the output length; if we target a security strength level of 128 bits, c=256 is still sufficient.

    So a 256-bit capacity offers a generic security strength level of 128 bits that is absolute and does not degrade under multi-target attacks.

    For the record, we as the Keccak team proposed setting c=256 (and even a user-chosen capacity) as an option in our SHA-3 proposal: “If the user decides to lower the capacity to c=256, providing a claimed security level equivalent to that of AES-128, the performance will be 31% greater than for the default value c=576.” (See page 4 of The Keccak SHA-3 submission and page 3 of our Note on Keccak parameters and usage published in February 2010.) Furthermore, the option of c=256 was also presented on numerous occasions.

    (*) See Thomas Ristenpart, Hovav Shacham and Thomas Shrimpton, Careful with Composition: Limitations of the Indifferentiability Framework, Advances in Cryptology - Eurocrypt 2011.

  • This article is a copy of a message we posted on the NIST hash-forum mailing list on September 30, 2013.

    SUMMARY: In the SHA-3 standard, we propose to set the capacity of all four SHA-2 drop-in replacements to 512 bits, and to make SHAKE256 and SHAKE512 the primary choice.

    Technically, we think that NIST's current proposal is fine. As said in our previous post, we have proposed to reduce the capacities of the SHA-3 hash functions on numerous occasions, including during our last visit to NIST in February. Nevertheless, in the light of the current discussions and to improve public acceptance, we think it would indeed be better to change plans. For us, the best option would be the following (taking inspiration from several other proposals).

    • Set the capacity of the SHA-2 drop-in replacements (i.e., SHA3-224 to SHA3-512) to c=512. This guarantees the same claimed security properties as for the corresponding SHA-2 instances up to the 256-bit security level. (In particular, the pre-image resistance of SHA3-256 would be raised to 256 bits.)
    • Keep the SHAKEs as they are (i.e., SHAKE256 with c=256 and SHAKE512 with c=512) and make them the primary choice for new applications of hash functions, for replacing mask generating functions (MGFs) and for those who wish to follow the security strength levels approach of [NIST SP 800-57].

    For the SHAKEs, we think it would be good to include in the standard a short procedure for replacing a hash function or MGF based on SHA-1 or SHA-2. For instance, if there is only one function to be replaced, here is a sketch (with a small code illustration after the steps below).

    1. Choose between SHAKE256 and SHAKE512. If the user can determine the required security level and it is 128 bits or smaller, choose SHAKE256. Otherwise (or if unsure), choose SHAKE512.
    2. Let the output length be determined by the application.
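
    A minimal sketch of the selection step, using this note's naming where SHAKE256 has c=256 and SHAKE512 has c=512 (a hypothetical helper, not part of any standard or library):

    ```c
    #include <stdio.h>

    typedef enum { SHAKE256, SHAKE512 } shake_choice_t;

    /* Choose SHAKE256 (c=256, 128-bit strength) when the required security level
     * is known and at most 128 bits; SHAKE512 (c=512, 256-bit strength) otherwise
     * or when unsure. required_bits == 0 means the required level is unknown. */
    static shake_choice_t choose_shake(unsigned required_bits) {
        if (required_bits != 0 && required_bits <= 128)
            return SHAKE256;
        return SHAKE512;
    }

    static const char *name(shake_choice_t s) {
        return s == SHAKE256 ? "SHAKE256" : "SHAKE512";
    }

    int main(void) {
        printf("112-bit requirement -> %s\n", name(choose_shake(112)));
        printf("192-bit requirement -> %s\n", name(choose_shake(192)));
        printf("unknown requirement -> %s\n", name(choose_shake(0)));
        return 0;   /* the output length is then whatever the application needs */
    }
    ```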

    We have seen proposals for keeping instances with c=1024 in SHA-3. We think that claiming or relying on security strength levels above 256 bits is meaningless and that c=1024 would induce a significant performance loss, which should be avoided.

    This proposal means that the SHA-3 standard will offer drop-in primitives with the same security level as SHA-2 (modulo the comment on c=1024), but it also gives protocol and product designers the possibility to use SHAKE256, which is more efficient and in practice not less secure than SHAKE512 or the drop-ins.

  • We published a new note in which we propose an interface to Keccak at the level of the sponge and duplex constructions, and inside Keccak at the level of the Keccak-f permutation. The purpose is twofold.

    • First, it allows users of Keccak to make the best use of its flexibility. Because of the focus of the SHA-3 contest, Keccak is sometimes viewed solely as a hash function, and some implementations are inherently restricted to the traditional fixed-output-length instances. Instead, the proposed interface reflects the features of the sponge and duplex constructions, from the arbitrary output length to the flexibility of choosing security-speed trade-offs.
    • Second, it simplifies the set of optimized implementations on different platforms. Nearly all the processing in Keccak takes place in the evaluation of the Keccak-f permutation, as well as in adding (using bitwise addition of vectors over GF(2)) input data into the state and extracting output data from it. The interface helps isolate the part that needs to be most optimized, while the rest of the code can remain generic (see the sketch below). If they share the same interface, optimized implementations can be interchanged and a developer can select the best one for a given platform.

    As a concrete exercise, we adapted some implementations from the “Reference and optimized code in C” to the proposed interface and posted them in a new “Keccak Code Package”. For the optimized implementations, it appears that the impact on throughput is negligible, while development flexibility and simplicity improve significantly.
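
    As a rough illustration of this split (hypothetical names and a placeholder permutation for demonstration only, not the actual Keccak Code Package API):

    ```c
    #include <stdio.h>
    #include <stddef.h>

    typedef unsigned char state_t[200];   /* 1600-bit state stored as 200 bytes */

    /* Platform-optimized part (placeholder body for illustration; NOT Keccak-f[1600]). */
    static void permute(state_t state) {
        for (size_t i = 0; i < sizeof(state_t); i++)
            state[i] ^= (unsigned char)(i + 1);    /* stand-in, not the real permutation */
    }

    /* Generic part: XOR input bytes into the state (addition in GF(2)). */
    static void absorb_bytes(state_t state, const unsigned char *data,
                             size_t offset, size_t length) {
        for (size_t i = 0; i < length; i++)
            state[offset + i] ^= data[i];
    }

    /* Generic part: extract output bytes from the state. */
    static void extract_bytes(const state_t state, unsigned char *data,
                              size_t offset, size_t length) {
        for (size_t i = 0; i < length; i++)
            data[i] = state[offset + i];
    }

    int main(void) {
        state_t state = {0};
        const unsigned char block[8] = "message";
        unsigned char out[4];

        absorb_bytes(state, block, 0, sizeof block);
        permute(state);                    /* only this call needs platform-specific code */
        extract_bytes(state, out, 0, sizeof out);

        printf("%02x%02x%02x%02x\n", out[0], out[1], out[2], out[3]);
        return 0;
    }
    ```

    Only the permute call would be swapped for a platform-optimized Keccak-f[1600] implementation; the XOR-in and extraction code stays the same across platforms.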

  • We added the slides of some recent presentations on this page.

  • Recently, we released a paper on Sakura, a flexible and fairly general coding for tree hash modes. The coding does not define a tree hash mode, but instead specifies a way to format the message blocks and chaining values into inputs to the underlying function for any topology, including sequential hashing. The main benefit is to avoid input clashes between different tree-growing strategies, even before the hashing modes are defined, and to make the SHA-3 standard tree-hashing ready.

    It expands on the concept of “universal” (now: flexible) hash tree coding that we presented at NIST on February 6 (see slides 55-59). The goal is to address tree hashing, as discussed by John Kelsey in his SHA-3 presentation at the RSA conference last February.

    All comments are welcome!

  • In a previous announcement, we re-opened the Keccak Crunchy Crypto Collision and Pre-image contest until end 2012. As no new challenges were solved between March and December 2012, we decided to leave it open for another year, i.e., until end 2013.

    The challenges remain the same. We suggest that all interested people subscribe to our mailing list; solutions should be sent to this mailing list, as detailed here, before December 31, 2013 at 23:59 GMT+1.