There are several useful answers on SO about preventing brute-forcing of a web service's passwords by applying throttling. I couldn't find any good numbers, though, and I have little expertise in this area, so the question is:
How many attempts does it usually take to brute-force an average password of 6 or more characters (with no additional knowledge that may help, but taking into account that passwords are probably prone to dictionary attacks) and based on that, what are meaningful limits to apply to the throttling algorithm without disrupting the user experience?
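For a rough sense of scale (back-of-the-envelope figures for an exhaustive search only; a dictionary attack needs far fewer guesses, and the 10 requests/second rate below is just an assumed number):

```python
# Back-of-the-envelope keyspace sizes for 6-character passwords.
# Real attackers try dictionaries first, which shrinks the search
# space by several orders of magnitude.

lowercase = 26 ** 6       # ~3.1e8 combinations
alphanumeric = 62 ** 6    # ~5.7e10 combinations

rate = 10  # assumed online guess rate against the login form, req/s

print(f"lowercase-only: {lowercase:,} guesses, "
      f"~{lowercase / rate / 86400:.0f} days at {rate} req/s")
print(f"alphanumeric:   {alphanumeric:,} guesses, "
      f"~{alphanumeric / rate / 86400 / 365:.0f} years at {rate} req/s")
```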
This is my current scheme:
- A sleep of (# of attempts / 5) seconds, applied to each fetching of the login page, so after 5 requests with less than a minute between them it takes > 1 second to fetch the form, after 10 requests > 2 seconds, etc.

Based on the answers so far, I have tweaked it to work like this:

- The delay is now (# of requests / 2) seconds. The counter is reset after 10 minutes of no login activity.
- A suspension of 2 ^ (# of suspensions + 1) hours. This should lead to a rapid de facto blacklisting of continually offending IPs.

I think these limits should not harm normal users, even ones who regularly forget their password and try to log in several times. The IP limits should also work okay with heavily NAT'ed users, given the average size of the service. Can somebody prove this to be efficient or inefficient with some solid math? :)
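In rough code, the tweaked scheme looks something like this (the in-memory store and the suspension trigger threshold are placeholders; a real deployment would use a shared store, and the question doesn't pin down exactly when a suspension starts):

```python
import time

RESET_AFTER = 10 * 60    # counter resets after 10 minutes of no login activity
SUSPEND_THRESHOLD = 50   # assumed number of requests before suspending an IP

state = {}  # ip -> {"count", "last_seen", "suspensions", "suspended_until"}

def throttle(ip):
    now = time.time()
    s = state.setdefault(ip, {"count": 0, "last_seen": now,
                              "suspensions": 0, "suspended_until": 0})
    if now < s["suspended_until"]:
        raise PermissionError("IP is temporarily suspended")
    if now - s["last_seen"] > RESET_AFTER:
        s["count"] = 0                       # reset after 10 quiet minutes
    s["count"] += 1
    s["last_seen"] = now
    if s["count"] >= SUSPEND_THRESHOLD:
        s["suspensions"] += 1
        # 2 ^ (# of suspensions + 1) hours: 4 h, then 8 h, 16 h, ...
        s["suspended_until"] = now + 2 ** (s["suspensions"] + 1) * 3600
        s["count"] = 0
        raise PermissionError("IP suspended")
    time.sleep(s["count"] / 2)               # (# of requests / 2) seconds before serving the form
```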
You have a couple of good controls there, but you really should tighten them further. A regular user shouldn't fail to log in more than five times. If he does, show him a CAPTCHA.
Remember that under no circumstances should you lock the account. It's called an account lockout vulnerability: it allows an arbitrary user to lock other users out of the service (DoS, denial of service).
I've approached this login-throttling problem many times, and the approach I like best is to keep two extra fields per user in your database: a failed-attempt count and the date of the last failed attempt. Whenever someone (anyone) fails to log into account X, you increment X's failed-attempt count and update the last failed attempt date. If the failed-attempt count exceeds Y (for example, five), you show a CAPTCHA for that specific user. This way you don't maintain a huge database of banned IPs to throttle the login form; you just have two more fields per user.

It also makes little sense to ban or throttle based on IPs, because of botnets and proxies (both legal and illegal ones). When IPv6 comes into fashion you will be even worse off, I presume. It makes much more sense to throttle based on the targeted accounts: when your account X is being targeted by a botnet, the login form for it will be throttled with a CAPTCHA. The obvious drawback is that if your CAPTCHA fails... so does your login throttling.
So, in essence, it goes like this:
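Something along these lines (the field names, the limit of five, and the reset-on-success step are just one way to wire it up, not prescribed above):

```python
from datetime import datetime, timezone

FAILED_LIMIT = 5  # the "Y" from above: require a CAPTCHA past this many failures

def needs_captcha(user):
    return user.failed_attempts >= FAILED_LIMIT

def handle_login(user, password_ok, captcha_ok):
    if needs_captcha(user) and not captcha_ok:
        return "show_captcha"                    # the shield is up for this account
    if password_ok:
        user.failed_attempts = 0                 # assumed: success clears the counter
        return "logged_in"
    user.failed_attempts += 1                    # anyone failing on this account bumps its counter
    user.last_failed_at = datetime.now(timezone.utc)
    return "invalid_credentials"
```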
It's basically a shield that turns on when there's a massive targeted attack against particular accounts. The good thing about this kind of approach is that it doesn't matter if I own a farm of PCs across the world: I can't brute-force a single account, because the throttling is account-based.
There are two bad things about this. The first is that if the CAPTCHA fails, you have nothing left; you can of course improve the situation by layering other protections on top. The second is that with a botnet I could use one PC per account, and with a million-computer network it's likely I'd crack at least one account, but that attack only works in the non-targeted case.
I hope this gave you some thoughts.