Conclusion: SHA-1 is as safe as anything against preimage attacks; however, it is easy to compute, which means it is easier to mount a brute-force or dictionary attack. (The same is true for successors like SHA-256.) Depending on the circumstances, a hash function which was designed to be computationally expensive (such as bcrypt) might be a better choice.
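For illustration, here is a minimal sketch of such an expensive hash using the third-party Python `bcrypt` package; the cost factor of 12 is an arbitrary example value, not a recommendation:

```python
# Sketch: deliberately slow password hashing with bcrypt
# (pip install bcrypt). The cost factor 12 is illustrative;
# each +1 roughly doubles the work for user and attacker alike.
import bcrypt

hashed = bcrypt.hashpw(b"hunter2", bcrypt.gensalt(rounds=12))

# Verification re-derives the hash from the stored salt and cost factor.
assert bcrypt.checkpw(b"hunter2", hashed)
```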
Some people throw around remarks like "SHA-1 is broken" a lot, so I'm trying to understand what exactly that means. Let's assume I have a database of SHA-1 password hashes, and an attacker with a state-of-the-art SHA-1-breaking algorithm and a botnet of 100,000 machines gets access to it. (Having control over 100k home computers would mean they can do about 10^15 operations per second.) How much time would they need to crack a single password?
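For a sense of scale, a back-of-the-envelope sketch (the 8-character, 95-symbol password space is just an illustrative assumption):

```python
# Time for an exhaustive search against one unsalted hash at the
# assumed botnet rate of 10^15 hashes per second.
rate = 10**15        # hashes per second (scenario above)
space = 95**8        # all 8-char printable-ASCII passwords, ~6.6e15
print(space / rate, "seconds")  # ~6.6 seconds to cover the whole space
```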
How does that change if the passwords are salted? Does the method of salting (prefix, postfix, both, or something more complicated like xor-ing) matter?
Here is my current understanding, after some googling. Please correct me in the answers if I misunderstood something.
In short, storing passwords with SHA-1 seems perfectly safe. Did I miss something?
Update: Marcelo pointed out an article which mentions a second preimage attack in 2^106 operations. (Edit: As Thomas explains, this attack is a hypothetical construct which does not apply to real-life scenarios.) I still don't see how this spells danger for the use of SHA-1 as a key derivation function, though. Are there generally good reasons to think that a collision attack or a second preimage attack can be eventually turned into a first preimage attack?
The short answer to your question is: SHA-1 is as secure as you can get. MD5 would be fine too, even MD4; but it could make some investors nervous. For public relations, it is best to use a "better" hash function, e.g. SHA-256, even if you truncate its output to 160 or 128 bits (to save on storage cost). Some of the SHA-3 round-2 candidates appear to be faster than SHA-1 while being arguably "more secure"; yet they are still a bit new, so sticking to SHA-256 or SHA-512 would be a safer route right now. It would make you look professional and cautious, which is good.
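For instance, a minimal sketch of that truncation idea in Python (the 160-bit cut is just the example size mentioned above):

```python
# Hash with SHA-256, keep only the first 160 bits to match SHA-1's
# storage footprint. Truncating the output of a secure hash is a
# standard way to trade output length for storage.
import hashlib

digest160 = hashlib.sha256(b"some password").digest()[:20]  # 20 bytes = 160 bits
```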
Note that "as secure as you can get" is not the same as "perfectly safe". See below for rather lengthy explanations.
About known attacks:
The known attacks on MD4, MD5 and SHA-1 are about collisions, which do not impact preimage resistance. It has been shown that MD4 has a few weaknesses which can be (only theoretically) exploited when trying to break HMAC/MD4, but this does not apply to your problem. The 2^106 second preimage attack in the paper by Kelsey and Schneier is a generic trade-off which applies only to very long inputs (2^60 bytes; that's a million terabytes -- notice how 106+60 exceeds 160; that's where you see that the trade-off has nothing magic in it).
The rest of this message assumes that the hash function you use (e.g. SHA-1) is a "black box" with no special property that the attacker may use. That's what you have right now even with the "broken" hash functions MD5 and SHA-1.
About rainbow tables:
The "rainbow attack" is actually cost-sharing of a dictionary or brute force attack. It is a derivative of the time-memory trade-off first described by Hellman in 1980. Assuming that you have N possible passwords (that's the size of your dictionary, or 2^n if you consider brute-forcing a hash function with an output of n bits), there is a cost-sharing attack in which you precompute the N hashed passwords and store them in a big table. If you sort the hash outputs, you can get your password in a single lookup. A rainbow table is a smart way to store that table in much reduced space: you store only N/t hashed passwords, and you crack passwords with O(t^2) lookups. Rainbow tables allow you to virtually handle precomputed tables much larger than what you can realistically store.
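To make the mechanics concrete, here is a toy sketch over a tiny password space; all the parameters (4-digit passwords, chain length, chain count) are arbitrary choices for illustration, not realistic values:

```python
# Toy rainbow-table sketch over the space of 4-digit PINs (N = 10000).
# Only chain endpoints are stored (about N/t entries); a lookup costs
# O(t^2) hash evaluations, as described above.
import hashlib

T = 100  # chain length t

def h(pw: str) -> str:
    return hashlib.sha1(pw.encode()).hexdigest()

def reduce_(digest: str, step: int) -> str:
    # Map a hash back into the password space; mixing in the column
    # index is what distinguishes rainbow chains from Hellman's original.
    return format((int(digest[:8], 16) + step) % 10000, "04d")

def build_table():
    table = {}  # endpoint -> chain start: two values per t hashed passwords
    for i in range(100):  # ~N/t chains
        pw = start = format(i * T, "04d")
        for step in range(T):
            pw = reduce_(h(pw), step)
        table[pw] = start
    return table

def lookup(table, target: str):
    for pos in range(T - 1, -1, -1):     # guess the column of the preimage
        pw = reduce_(target, pos)
        for step in range(pos + 1, T):
            pw = reduce_(h(pw), step)
        if pw in table:                  # endpoint hit: replay that chain
            cand = table[pw]
            for step in range(T):
                if h(cand) == target:
                    return cand
                cand = reduce_(h(cand), step)
    return None  # password not covered by any stored chain

table = build_table()
print(lookup(table, h("0042")))  # prints "0042" if its chain is covered
```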
However, rainbow or not, the attacker still has to run the full attack at least once. This can be viewed as several successive optimization layers:

1. Brute-forcing each password separately costs N hash evaluations per password.
2. With a pre-computed (sorted) table, the attacker pays the O(N) building cost once, and each attacked password afterwards costs only a lookup -- but the table is huge.
3. A rainbow table keeps that cost-sharing while storing only N/t entries, at the price of O(t^2) hash evaluations per cracked password.
If N is large enough that the CPU cost of hashing N passwords is ludicrous, then such an attack is not feasible, whether rainbow tables are used or not. This means that a (preimage-resistant) hash function with an output of 80 bits or more is enough to make brute-force attacks infeasible: even at the 10^15 hashes per second assumed in the question, 2^80 evaluations would take close to 40 years.
About salts:
Salts are a way to defeat pre-computations. In the description above, the salt brings the attacker back to layer 1: salting prevents the attacker from sharing the O(N) cost between several attacked passwords. Pre-computed tables, and a fortiori rainbow tables, are no longer feasible.
You want salting because when the hashed data consists of passwords, i.e. something which fits within the brain of a random human being, then N can be quite low: humans are really bad at choosing and remembering passwords. This is what "dictionary attacks" are about: using a reduced space of potential passwords (the "dictionary") under the assumption that many user passwords will fall in that specially selected space.
Hence salting will at least prevent the attacker from using pre-computed tables, in particular pre-computed rainbow tables. Even granting that the attacker may break a password or two, we do not want him to break 1000 other passwords with little extra overhead.
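A minimal sketch of per-user salting (the 16-byte salt size and the prefix placement are common choices, not the only ones):

```python
# Per-user random salt: the same password yields different hashes,
# so a table precomputed over unsalted hashes is useless, and each
# salt value would need its own table.
import hashlib, os

def hash_password(password: bytes) -> tuple[bytes, bytes]:
    salt = os.urandom(16)  # fresh 128-bit salt per user
    return salt, hashlib.sha1(salt + password).digest()

salt_a, hash_a = hash_password(b"letmein")
salt_b, hash_b = hash_password(b"letmein")
assert hash_a != hash_b  # no cost-sharing across users
```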
Also, salting is good for public relations.
About SHA-1 cost:
The elementary cost of SHA-1 is hashing a 64-byte block: that's how SHA-1 works -- data is padded, then split into 64-byte blocks. The cost of processing a single block is about 500 clock cycles on an Intel Core2 system, and that's for a single core. MD5 and MD4 are faster, at about 400 and 250 cycles per block, respectively. Do not forget that most modern CPUs have several cores, so multiply accordingly.
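A rough way to measure this yourself (pure-Python call overhead makes this a conservative lower bound; optimized native code or GPUs go much faster):

```python
# Single-core SHA-1 throughput on one 64-byte block.
import hashlib, time

block = b"\x00" * 64
n = 1_000_000
t0 = time.perf_counter()
for _ in range(n):
    hashlib.sha1(block).digest()
dt = time.perf_counter() - t0
print(f"{n / dt:,.0f} hashes/second on one core")
```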
Some salting schemes prescribe huge salts; e.g. what enters the hash function is actually 40000 successive copies of a single 128-bit salt, followed by the password itself. This makes password hashing more expensive (by a factor of 10000 with my example), both for the legitimate user and for the attacker. Whether this is a good idea depends on the setup. For login on a desktop system, this is good: the user will not even notice that it took 10ms to hash his password, instead of 1µs; but the cost for the attacker has risen by a very noticeable factor 10000. On shared servers with thousands of clients per second, the aggregate cost may become prohibitive. Conceptually, raising the bar by the same factor for the legitimate user and the attacker is not ultimately good security; but it can be a worthwhile idea in some specific situations.
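A sketch of that scheme (the 40000-copy count and 128-bit salt come straight from the example above; the function name is mine):

```python
# "Huge salt" stretching: feeding 40000 copies of a 128-bit salt
# before the password makes every evaluation process ~10000 extra
# 64-byte SHA-1 blocks, for defender and attacker alike.
import hashlib, os

def stretched_hash(password: bytes, salt: bytes, copies: int = 40000) -> bytes:
    return hashlib.sha1(salt * copies + password).digest()

salt = os.urandom(16)  # 128-bit salt
digest = stretched_hash(b"letmein", salt)
```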
About online attacks:
All of the above is about defeating offline attacks. An offline attack is an attack where the attacker has all the data he needs in order to "test" passwords; e.g. the attacker could get a copy of the database holding the hashed passwords. In an offline attack, the attacker is limited only by his computational resources. Conversely, an online attack is an attack where each guess by the attacker must go through an honest verifier (e.g. the attacker simply tries to log in on the attacked system). Online attacks are thwarted by enforcing limits on how many passwords can be tried per second. Extreme examples are smartcards which shut down after three wrong PINs.
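A minimal sketch of such throttling (the in-memory counter and the limit of three are illustrative, echoing the smartcard example):

```python
# Refuse further guesses after three consecutive failures.
failed = {}  # user -> consecutive failed attempts
MAX_TRIES = 3

def attempt_login(user: str, password_is_correct: bool) -> str:
    if failed.get(user, 0) >= MAX_TRIES:
        return "locked"  # honest verifier refuses further guesses
    if password_is_correct:
        failed[user] = 0
        return "ok"
    failed[user] = failed.get(user, 0) + 1
    return "denied"
```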
Usually, for password security, it pays off much more to arrange the system so that the attacker cannot mount an offline attack in the first place. That's what Unix systems do: the hashed passwords, which used to be in the world-readable `/etc/passwd` file, are now in the `/etc/shadow` file, which is protected against read access except by a few privileged applications. The assumption here is that if the attacker can read `/etc/shadow`, then he probably has enough control over the system that he does not really need the passwords anymore...