views: 2074
answers: 7

Hi there, I'm seeing web apps implement limits on user login attempts.

Is it a security necessity and, if so, why?

For example: "You had three failed login attempts, please try again in 10 minutes!"

thanks :)

+4  A: 

Limiting how many login attempts can be made is there to prevent brute-force (automated) attacks against your site. If you don't limit these attempts, an attacker can set up a script that keeps guessing passwords until it finds one, and this may also impact the availability of your web server.

Typically, you may want to time the user out (10 minutes, as you mentioned) after 3 attempts, and lock them out after 6 or 9 consecutive failed attempts, forcing the user to contact you to unlock their account. The lockout is there because an attacker can simply adjust their script to wait out the timeout.
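A rough Python sketch of that policy (the Account fields, thresholds and messages here are illustrative assumptions, not a specific implementation; a real app would keep these counters in its user database):

    import time
    from dataclasses import dataclass

    TIMEOUT_AFTER = 3          # failures before a temporary timeout
    LOCK_AFTER = 9             # failures before the account is locked
    TIMEOUT_SECONDS = 10 * 60  # the 10-minute cool-down

    @dataclass
    class Account:
        failed_attempts: int = 0
        last_failed_at: float = 0.0

    def check_login_allowed(account):
        """Return (allowed, message) based on the account's failure history."""
        if account.failed_attempts >= LOCK_AFTER:
            return False, "Account locked - please contact support."
        if account.failed_attempts >= TIMEOUT_AFTER:
            if time.time() - account.last_failed_at < TIMEOUT_SECONDS:
                return False, "Too many failed attempts - try again in 10 minutes."
        return True, ""

    def record_login_result(account, success):
        """Reset the counter on success, increment it on failure."""
        if success:
            account.failed_attempts = 0
        else:
            account.failed_attempts += 1
            account.last_failed_at = time.time()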

Roy Rico
artarad
Depending on how secure you want your site to be. The best way to determine what you need is to log each attempt: each time a user logs in, record their email, username, IP address, and whether their password was correct or not.
Roy Rico
many thanks roy :)
artarad
(continued..) Be careful not to log the actual password typed. Then, if you see a trend, change your code to account for it. I wouldn't count a typo as a strike against the user, unless people start abusing it.
Roy Rico
I would stay away from using cookies, as they are easy to erase - in Firefox (Ctrl+Shift+Del). And any attacker wishing to brute-force your site would easily bypass that. CAPTCHA plus a lock-out period would be advisable.
St. John Johnson
You do not want to lock a user out. This is known as Account Lockout Vulnerability (http://www.owasp.org/index.php/Account_lockout_attack).
Kai Sellgren
@Roy Rico: Kai Sellgren is right; you should also consider the side effects of such a measure. Locking a user out is a Denial of Service attack.
Gumbo
A: 

Yes, it's necessary to protect accounts from everything from sophisticated brute-force attacks - using bots and dictionary files - down to someone simply trying to guess the account's password.

L. Cosio
+3  A: 

If users can set their own passwords, some bot/kid will try to log in with a list of common passwords, and succeed. And if they don't know any users, they will try common names like admin, simon, rico, etc.

It doesn't help to just flag the user in the session, as they can simply remove the cookie or query param on their end. You need to keep a count of failed login attempts for both the IP address and the login name. You may want to be more forgiving on the IP, as it can be shared among many users.
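A rough Python sketch of those two counters (in-memory dicts for brevity; a real app would persist them in a database, and the thresholds are made up):

    from collections import defaultdict

    MAX_FAILURES_PER_NAME = 5   # strict: a single account under attack
    MAX_FAILURES_PER_IP = 20    # more forgiving: an IP may be shared (NAT, proxy)

    failures_by_name = defaultdict(int)
    failures_by_ip = defaultdict(int)

    def login_allowed(username, ip):
        """Check both counters before even looking at the password."""
        return (failures_by_name[username] < MAX_FAILURES_PER_NAME
                and failures_by_ip[ip] < MAX_FAILURES_PER_IP)

    def record_failure(username, ip):
        failures_by_name[username] += 1
        failures_by_ip[ip] += 1

    def record_success(username, ip):
        failures_by_name[username] = 0
        failures_by_ip[ip] = 0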

OIS
+4  A: 

Also, a well-implemented CAPTCHA could be another way to strengthen your application against brute-force attacks. There is a wide variety of CAPTCHA providers available for free, so it's the easy route if you're in a hurry. Also, be aware that there are people out there saying "oh no, this CAPTCHA thing is not secure enough!".

"For those of you who don't know, a CAPTCHA is program that can tell whether its user is a human or another computer. They're those little images of distorted text that you translate when you sign up for Gmail or leave a comment on someone's blog. Their purpose is to make sure that someone doesn't use a computer to sign up for millions of online accounts automatically, or.." ref.

Clarification: this is really a complement to the other answers - use a well-implemented CAPTCHA alongside an anti-brute-force mechanism (using sessions, for example).
The questioner marked it as accepted assuming that CAPTCHAs are unreadable by machines (she's almost right), so it's getting downvoted because people think it's not a complete answer on its own - and they're right.

Sepehr Lajevardi
cheers! so everyone agrees on CAPTCHAs?
artarad
+2  A: 

For my own projects I wrote a generalized 'floodcontrol' library which handles this sort of thing.

It allows me to specify how many attempts may be made in X amount of time. It allows for a certain number of 'grace' attempts in a short time, so that only really unusual behaviour will be caught.

I record in the database a few things:

  • The IP address (or the first 24 bits of it)
  • The action that was attempted (e.g. 'log in', 'search', 'comment')
  • The time of the attempt
  • Number of attempts (attempt counter)

For each attempt, I query against the partial IP address and the action. If a previous attempt was made within a certain window of time, I increment the attempt counter for that record. If the counter exceeds the number of allowed grace attempts, I check whether the last attempt was within X seconds of now; if so, I return false and the action is blocked (the user is told to wait X seconds before trying again). If the counter is still below the grace limit, I return true and let it slide.

If a person with the same IP comes by later, then the previous attempt count won't be fetched, because it will be too long ago.
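A rough Python sketch of this flood-control logic (not the actual library; an in-memory dict stands in for the database, and the constants are illustrative):

    import time

    GRACE_ATTEMPTS = 5     # attempts allowed before blocking kicks in
    WINDOW_SECONDS = 600   # attempts older than this are forgotten
    BLOCK_SECONDS = 60     # required wait once over the grace limit

    _attempts = {}  # (partial_ip, action) -> {"count": int, "last": float}

    def _partial_ip(ip):
        """Keep only the first 24 bits, e.g. '203.0.113.7' -> '203.0.113'."""
        return ".".join(ip.split(".")[:3])

    def flood_check(ip, action):
        """Return True if the action is allowed, False if it should be blocked."""
        key = (_partial_ip(ip), action)
        now = time.time()
        entry = _attempts.get(key)

        if entry is None or now - entry["last"] > WINDOW_SECONDS:
            # No recent attempts: start a fresh counter.
            _attempts[key] = {"count": 1, "last": now}
            return True

        # A previous attempt exists within the window: bump the counter.
        previous = entry["last"]
        entry["count"] += 1
        entry["last"] = now

        if entry["count"] > GRACE_ATTEMPTS and now - previous < BLOCK_SECONDS:
            return False  # over the grace limit and retrying too soon
        return True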

thomasrutter
Be careful of people using the same IP Address on a network. If 10 people were in an apartment with one external IP and they attempted to search your site, they would all be locked out immediately.
St. John Johnson
Good point. The number of grace logins should be high enough that the chance of people being affected by normal login activity is very low. For example, if 20 people from similar IP addresses all log in within a period of a few minutes, it may trigger if the number of grace logins is < 20.
thomasrutter
In other words, increasing the tolerance would be the only option; an IP address is unreliable for identifying unique people, but on the internet it is almost the only identifier you have - though you could also take the user-agent string into account.
thomasrutter
+4  A: 

I saw a creative approach to this once...

For each login attempt that fails, the lockout time increases... exponentially.

attempt | lockout time
======================
   1    |     2s
   2    |     4s
   3    |     8s
   4    |    16s
   5    |    32s
   6    |    64s
   7    |   128s
   8    |   256s
   9    |   512s
  10    |  1024s

In theory, it lets a user make a mistake or two, but as soon as it starts to look like a "hacking" attempt, the attacker gets locked out for longer and longer periods.

I haven't used this myself (yet), but conceptually I quite like the idea. Of course on successful login, the counter is reset.
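A minimal sketch of the delay calculation, assuming the per-user failure counter is tracked elsewhere (e.g. in the database) and reset to zero on success:

    def lockout_seconds(failed_attempts):
        """Seconds the user must wait after the given number of consecutive failures."""
        if failed_attempts <= 0:
            return 0
        return 2 ** failed_attempts   # attempt 1 -> 2s, attempt 10 -> 1024s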

scunliffe
I have implemented something like this before, but found it was not nearly forgiving enough; sometimes people genuinely forget their password and need to try several different combinations. Perhaps try something like this, but make the first 6 lockouts 0 seconds, then start ramping up shallowly.
thomasrutter
yeah that would work fine too. The idea is just to limit any "casual" attempts at hacking entry. Serious systems would need to take into account signs of a dictionary attack, Botnets, multiple IPs etc.
scunliffe
A: 

I reckon putting a 'failed attempts' counter in the DB would be the safest and easiest way to go. That way the user can't bypass it (by disabling cookies). Reset it on successful login, of course.
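For illustration, a quick sketch of that counter with SQLite (the users table and its failed_attempts column are assumptions):

    import sqlite3

    conn = sqlite3.connect("app.db")

    def record_failed_login(username):
        conn.execute(
            "UPDATE users SET failed_attempts = failed_attempts + 1"
            " WHERE username = ?", (username,))
        conn.commit()

    def reset_failed_logins(username):
        # Call after a successful login.
        conn.execute(
            "UPDATE users SET failed_attempts = 0 WHERE username = ?",
            (username,))
        conn.commit()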

Mark
How would a user be unable to bypass it and how would it tell two users apart?
thomasrutter
IP address of course. Yes, this could cause problems for multiple users with the same IP... but if you just show a CAPTCHA or something when they've used up their attempts, I don't think it's a big loss. Did this really warrant a down vote?
Mark