Conclusion: SHA-1 is as safe as anything else against preimage attacks; however, it is easy to compute, which makes it easier to mount a brute-force or dictionary attack. (The same is true for successors like SHA-256.) Depending on the circumstances, a hash function which was designed to be computationally expensive (such as bcrypt) might be a better choice.
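
For illustration, here is a minimal sketch of what a deliberately expensive password hash looks like in practice. It assumes the third-party Python `bcrypt` package, and the function names are made up for the example:

```python
# Minimal sketch using a deliberately slow hash; assumes the third-party
# "bcrypt" package (pip install bcrypt). Illustrative only.
import bcrypt

def hash_password(password: str) -> bytes:
    # gensalt() embeds a random salt and a work factor (cost) in the result.
    return bcrypt.hashpw(password.encode("utf-8"), bcrypt.gensalt(rounds=12))

def check_password(password: str, stored: bytes) -> bool:
    # checkpw re-derives the hash using the salt and cost stored in 'stored'.
    return bcrypt.checkpw(password.encode("utf-8"), stored)
```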


Some people throw around remarks like "SHA-1 is broken" a lot, so I'm trying to understand what exactly that means. Let's assume I have a database of SHA-1 password hashes, and an attacker with a state-of-the-art SHA-1 breaking algorithm and a botnet of 100,000 machines gets access to it. (Having control over 100k home computers would mean they can do about 10^15 operations per second.) How much time would they need to

  1. find out the password of any one user?
  2. find out the password of a given user?
  3. find out the password of all users?
  4. find a way to log in as one of the users?
  5. find a way to log in as a specific user?

How does that change if the passwords are salted? Does the method of salting (prefix, postfix, both, or something more complicated like xor-ing) matter?

Here is my current understanding, after some googling. Please correct me in the answers if I have misunderstood something.

  • If there is no salt, a rainbow attack will immediately find all passwords (except extremely long ones).
  • If there is a sufficiently long random salt, the most effective way to find out the passwords is a brute force or dictionary attack. Neither collision nor preimage attacks are any help in finding out the actual password, so cryptographic attacks against SHA-1 are no help here. It doesn't even matter much what algorithm is used - one could even use MD5 or MD4 and the passwords would be just as safe (there is a slight difference because computing a SHA-1 hash is slower).
  • To evaluate how safe "just as safe" is, let's assume that a single SHA-1 run takes 1000 operations and passwords contain uppercase, lowercase and digits (that is, roughly 60 characters). That means the attacker can test 10^15 * 60 * 60 * 24 / 1000 ~= 10^17 potential passwords a day. For a brute force attack, that would mean testing all passwords up to 9 characters in 3 hours, up to 10 characters in a week, up to 11 characters in a year. (It takes 60 times as much for every additional character.) A dictionary attack is much, much faster (even an attacker with a single computer could pull it off in hours), but only finds weak passwords. (See the sketch after this list for the arithmetic.)
  • To log in as a user, the attacker does not need to find out the exact password; it is enough to find a string that results in the same hash. This is called a first preimage attack. As far as I could find, there are no preimage attacks against SHA-1. (A brute-force attack would take 2^160 operations, which means our theoretical attacker would need 10^30 years to pull it off. Limits of theoretical possibility are around 2^60 operations, at which the attack would take a few years.) There are preimage attacks against reduced versions of SHA-1 with negligible effect (for the reduced SHA-1 which uses 44 steps instead of 80, attack time is down from 2^160 operations to 2^157). There are collision attacks against SHA-1 which are well within theoretical possibility (the best I found brings the time down from 2^80 to 2^52), but those are useless against password hashes, even without salting.
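
For reference, a small back-of-the-envelope script reproducing the numbers in the third bullet (plain Python; the 10^15 operations/second, 1000 operations per hash and ~60-character alphabet are the assumptions stated above):

```python
# Back-of-the-envelope brute-force timing, using the assumptions above:
# 10^15 raw operations/second, ~1000 operations per SHA-1 hash,
# and an alphabet of roughly 60 characters.
OPS_PER_SECOND = 10 ** 15
OPS_PER_HASH = 1000
ALPHABET = 60

hashes_per_second = OPS_PER_SECOND / OPS_PER_HASH   # ~10^12 guesses per second

for length in range(8, 13):
    guesses = ALPHABET ** length                     # passwords of exactly this length
    seconds = guesses / hashes_per_second
    print(f"{length} chars: {seconds / 3600:.1f} hours "
          f"({seconds / (3600 * 24 * 365):.2f} years)")
```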

In short, storing passwords with SHA-1 seems perfectly safe. Did I miss something?

Update: Marcelo pointed out an article which mentions a second preimage attack in 2^106 operations. (Edit: As Thomas explains, this attack is a hypothetical construct which does not apply to real-life scenarios.) I still don't see how this spells danger for the use of SHA-1 as a key derivation function, though. Are there generally good reasons to think that a collision attack or a second preimage attack could eventually be turned into a first preimage attack?

+1  A: 

Serious vulnerabilities have been discovered in SHA-1 that make the search much faster than brute force. It is still largely intractable, but that isn't expected to be the case for too much longer; paranoid programmers favour something from the SHA-2 family.

From this article regarding the original 2005 result:

"It's time to walk, but not run, to the fire exits. You don't see smoke, but the fire alarms have gone off."

It's not that the current cryptanalysis makes SHA-1 unsafe, but rather that the crypto community is worried that worse news might be just around the corner. This fear also applies to SHA-2, which exhibits the same flaws as SHA-1, albeit over a much larger search space, hence the ongoing quest for SHA-3.

In short, SHA-1 is safe right now, and probably will be for some time to come, but the crypto community is uncomfortable with the prognosis.

Marcelo Cantos
Could you provide a link? As I said, the best preimage attack I could find makes search a whopping 8 times faster, and even for that to work you need to omit half the steps of SHA-1. (Also, I think that is a second preimage attack, which is useless against password hashes.)
Tgr
+1  A: 

If you store the salted password, SHA-1 is fine for practical purposes. SHA-2 is considered more secure, but SHA-1 is not a problem unless you have a reason to be truly paranoid.

Here is what NIST says:

The results presented so far on SHA-1 do not call its security into question. However, due to advances in technology, NIST plans to phase out of SHA-1 in favor of the larger and stronger hash functions (SHA-224, SHA-256, SHA-384 and SHA-512) by 2010.

VladV
That is a NIST comment from 2004. Their [2010 draft recommendation](http://csrc.nist.gov/publications/drafts/800-131/draft-800-131_transition-paper.pdf) says SHA-1 is approved for all non-digital signature generation applications beyond 2010.
Tgr
+1  A: 

Your description sounds accurate for the current state of the art.

You shouldn't be using a single iteration of any hash function, though: at the very least, you should iterate many times (1000 iterations of the hash increase the attacker's work 1000-fold; they increase your work by the same amount, but you're doing a lot less password hashing than the attacker is).
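
As an illustration (not Nick's code), a minimal Python sketch of the iteration idea; `hashlib.pbkdf2_hmac` from the standard library does the iteration and salting for you, so the hand-rolled loop is there only to show what "iterate many times" means:

```python
# Illustration of iterated hashing (Python standard library only).
import hashlib, os

password = b"correct horse battery staple"
salt = os.urandom(16)

# Hand-rolled iteration, shown only to illustrate the "1000-fold work" idea.
digest = salt + password
for _ in range(1000):
    digest = hashlib.sha256(digest).digest()

# In practice, prefer a standard construction that does this properly:
derived = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)
```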

Ideally, however, you should use an existing password storage primitive, such as those described here.

Nick Johnson
Thanks; the linked article is great. Too bad I can only accept one answer.
Tgr
No worries - Thomas's answer deserves the tick far more than my brief answer does.
Nick Johnson
+15  A: 

The short answer to your question is: SHA-1 is as secure as you can get. MD5 would be fine too, even MD4; but it could make some investors nervous. For public relations, it is best to use a "better" hash function, e.g. SHA-256, even if you truncate its output to 160 or 128 bits (to save on storage cost). Some of the SHA-3 round-2 candidates appear to be faster than SHA-1 while being arguably "more secure"; yet they are still a bit new, so sticking to SHA-256 or SHA-512 would be a safer route right now. It would make you look professional and cautious, which is good.

Note that "as secure as you can get" is not the same than "perfectly safe". See below for rather lengthy explanations.

About known attacks:

The known attacks on MD4, MD5 and SHA-1 are about collisions, which do not impact preimage resistance. It has been shown that MD4 has a few weaknesses which can be (only theoretically) exploited when trying to break HMAC/MD4, but this does not apply to your problem. The 2^106 second preimage attack in the paper by Kelsey and Schneier is a generic trade-off which applies only to very long inputs (2^60 bytes; that's a million terabytes -- notice how 106+60 exceeds 160; that's where you see that the trade-off has nothing magic in it).

The rest of this message assumes that the hash function you use (e.g. SHA-1) is a "black box" with no special property that the attacker may use. That's what you have right now even with the "broken" hash functions MD5 and SHA-1.

About rainbow tables:

The "rainbow attack" is actually cost-sharing of a dictionary or brute force attack. It is a derivative from the time-memory trade-off first described by Hellman in 1980. Assuming that you have N possible passwords (that's the size of your dictionary, or 2n if you consider brute-forcing a hash function with an output of n bits), there is a time-sharing attack in which you precompute the N hashed passwords and store them in a big table. If you sort the hash outputs, you can get your password in a single lookup. A rainbow table is a smart way to store that table with a much reduced space. You store only N/t hashed passwords, and you crack passwords with O(t2) lookups. Rainbow tables allow you to virtually handle precomputed tables much larger than what you can realistically store.

However, rainbow or not, the attacker still has to run the full attack at least once. This can be viewed as several successive optimization layers:

  1. The brute-force / dictionary attack has cost N for cracking each password.
  2. With a pre-computed table, the attacker pays that cost N once and can thereafter attack many passwords with very small extra cost per password.
  3. If the pre-computed table is a rainbow table, then N can be somewhat bigger, because storage cost is reduced. The bottleneck on N becomes the CPU power that the attacker can muster, not the size of his hard disks.

If N is large enough that the CPU-cost of hashing N passwords is ludicrous, then such an attack is not feasible, regardless of whether rainbow tables are used or not. This means that a (preimage-resistant) hash function with an output of 80 bits or more is enough to make brute-force attack infeasible.
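
Reusing the question's hypothetical budget (10^15 raw operations per second, roughly 1000 operations per hash), the arithmetic behind "80 bits is enough" looks like this:

```python
# Rough arithmetic behind the "80 bits or more" claim, reusing the question's
# hypothetical budget: 10^15 raw operations/second, ~1000 operations per hash.
N = 2 ** 80                               # hash evaluations for a brute-force search
hashes_per_second = 10 ** 15 / 1000
years = N / hashes_per_second / (3600 * 24 * 365)
print(f"about {years:,.0f} years")        # on the order of tens of thousands of years
```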

About salts:

Salts are a way to defeat pre-computations. In the description above, the salt brings back the attacker to step 1: salting prevents the attacker from sharing the O(N) cost between several attacked passwords. Pre-computed tables, a fortiori rainbow tables, are no longer feasible.

You want salting because when the hashed data consists of passwords, i.e. something which fits within the brain of a random human being, then N can be quite low: humans are really bad at choosing and remembering passwords. This is what "dictionary attacks" are about: using a reduced space of potential passwords (the "dictionary") under the assumption that many user passwords will be in that specially selected space.

Hence salting will at least prevent the attacker from using pre-computed tables, in particular pre-computed rainbow tables. Even if the attacker manages to break one password or two, we do not want him to break 1000 other passwords with little extra overhead.

Also, salting is good for public relations.
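
A minimal sketch of per-user salting as described above (standard library only; SHA-1 and the 16-byte salt merely mirror the question's setup, and the function names are made up for the example):

```python
# Minimal per-user salting sketch (standard library only). SHA-1 is used here
# only because it is what the question asks about.
import hashlib, hmac, os

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)                           # fresh random salt per user
    digest = hashlib.sha1(salt + password.encode()).digest()
    return salt, digest                             # store both with the user record

def check_password(password: str, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.sha1(salt + password.encode()).digest()
    return hmac.compare_digest(candidate, stored)   # constant-time comparison
```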

About SHA-1 cost:

The elementary cost of SHA-1 is about hashing a 64-byte block. That's how SHA-1 works: data is padded, then split into 64-byte blocks. The cost of processing a single block is about 500 clock cycles on an Intel Core2 system, and that's for a single core. MD5 and MD4 are faster, at about 400 and 250 cycles per block, respectively. Do not forget that most modern CPUs have several cores, so multiply accordingly.
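
To get a feel for the single-block cost on your own machine, a quick-and-dirty measurement (Python call overhead dominates here, so the raw per-block figure quoted above will be noticeably better than what this prints):

```python
# Quick-and-dirty throughput check for single-block SHA-1 hashing on one core.
# Python call overhead dominates, so this underestimates the raw hardware figure.
import hashlib, timeit

block = b"\x00" * 55      # 55 bytes of data + padding + length still fit in one 64-byte block
n = 1_000_000
seconds = timeit.timeit(lambda: hashlib.sha1(block).digest(), number=n)
print(f"{n / seconds:,.0f} SHA-1 hashes per second on this core")
```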

Some salting schemes prescribe huge salts; e.g. what enters the hash function is actually 40000 successive copies of a single 128-bit salt, followed by the password itself. This makes password hashing more expensive (by a factor of 10000 in my example), both for the legitimate user and for the attacker. Whether this is a good idea depends on the setup. For login on a desktop system, this is good: the user will not even notice that it took 10ms to hash his password instead of 1µs, but the cost for the attacker has risen by a very noticeable factor of 10000. On a shared server with thousands of clients per second, the aggregate cost may become prohibitive. Conceptually, raising the bar by the same factor for the legitimate user and the attacker is not ultimately good security; but it can be a worthwhile idea in some specific situations.
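
A sketch of that "huge salt" scheme; the amplification comes purely from feeding the hash function many more 64-byte blocks:

```python
# Sketch of the "huge salt" idea from the paragraph above: the cost grows simply
# because the hash has to process ~10000 extra 64-byte blocks.
import hashlib, os

salt = os.urandom(16)          # a single 128-bit salt
password = b"hunter2"          # example password

h = hashlib.sha1()
for _ in range(40000):         # 40000 successive copies of the salt ...
    h.update(salt)
h.update(password)             # ... followed by the password itself
digest = h.digest()
```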

About online attacks:

All of the above is about defeating offline attacks. An offline attack is an attack where the attacker has all the data he needs in order to "test" passwords; e.g. the attacker could get a copy of the database holding the hashed passwords. In an offline attack, the attacker is limited only by his computational resources. Conversely, an online attack is an attack where each guess by the attacker must go through an honest verifier (e.g. the attacker simply tries to log in on the attacked system). Online attacks are thwarted by enforcing limits on how many passwords can be tried per second. Extreme examples are smartcards which shut down after three wrong PINs.
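
As a toy illustration of the throttling idea (a per-account failure counter with a lockout threshold; the `verify` callback and the threshold are made up for the example, and real systems use more elaborate policies):

```python
# Toy illustration of throttling online guesses: a per-account failure counter
# with a lockout threshold. Real systems use more elaborate policies.
failed_attempts: dict[str, int] = {}
MAX_ATTEMPTS = 5

def try_login(username: str, password: str, verify) -> bool:
    if failed_attempts.get(username, 0) >= MAX_ATTEMPTS:
        return False                          # account temporarily locked
    if verify(username, password):            # 'verify' checks the stored salted hash
        failed_attempts[username] = 0
        return True
    failed_attempts[username] = failed_attempts.get(username, 0) + 1
    return False
```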

Usually, for password security, it pays off much more to arrange the system so that the attacker cannot mount an offline attack at all. That's what Unix systems do: the hashed passwords, which used to be in the world-readable /etc/passwd file, are now in the /etc/shadow file, which is protected against read access except by a few privileged applications. The assumption here is that if the attacker can read /etc/shadow, then he probably has enough control over the system that he does not really need passwords anymore...

Thomas Pornin
Awesome answer. +5 (if I could). A nice intro to hashing.
Xorlev
Excellent answer. The only bit I'd disagree with is "Conceptually, raising the bar by the same factor for the legitimate user and the attacker is not ultimately good security" - the attacker has to do a large multiple of the operations a user has to do. Adding one clock cycle for a user login adds millions for an attacker.
Nick Johnson