This question has always troubled me.

On Linux, when you're asked for a password and you type the correct one, it is accepted right away, with almost no delay. But if you type the wrong password, it takes noticeably longer to be rejected. Why is that?

I observed this in all Linux distributions I've ever tried.

+13  A: 

This makes it take longer to guess passwords.

RossFabricant
+10  A: 

I am not sure, but it is quite common to add a delay after a wrong password is entered, to make attacks harder. This makes an attack practically infeasible, because it will take you a long time to check even a few passwords.

Even trying a few passwords by hand - birthdates, the name of the cat, and things like that - stops being fun.

Daniel Brückner
And often the timeout on the second failure is longer than the timeout on the first - which is good too.
Jonathan Leffler
Did you see the news post about the most likely passwords? 123456 is very very popular!
Spence
+54  A: 

It's actually to prevent brute force attacks from trying millions of passwords per second. The idea is to limit how fast passwords can be checked and there are a number of rules that should be followed.

  • A successful user/password pair should succeed immediately.
  • There should be no detectable difference between the reasons for failure.

That last one is particularly important. It means no helpful messages like "Your user name is correct but your password is wrong, please try again" or "sorry, password wasn't long enough", not even a time difference in response between the "invalid user and password" and "valid user but invalid password" failure reasons.

Every failure should deliver exactly the same information, textual and otherwise.

Some systems take it even further, increasing the delay with each failure, or only allowing three failures then having a massive delay before allowing retry.
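
As an illustration of those rules, here is a minimal sketch in C (the check_credentials function is hypothetical and the 3-second delay is an arbitrary choice): success returns immediately, while every failure waits the same time and prints the same message.

#include <stdio.h>
#include <unistd.h>

/* Hypothetical lookup; returns 1 only for a valid user/password pair. */
extern int check_credentials(const char *user, const char *password);

int login_attempt(const char *user, const char *password)
{
    if (check_credentials(user, password))
        return 1;                       /* success is reported immediately */

    sleep(3);                           /* identical delay for every failure... */
    fputs("Login incorrect\n", stderr); /* ...and an identical message, whatever the reason */
    return 0;
}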

paxdiablo
How does this prevent an app from forking, trying a password, and, if it doesn't return success in some amount of time, killing -9 the child and forking again? Yes, that only works if you can log in as some user, but when has that stopped anyone?
BCS
It doesn't stop anyone but you still have to delay for that "some amount of time". Even a tiny delay makes checking millions of passwords useless, and you *will* be detected if you're doing it while logged on - do you think nothing is logged for failed logins?
paxdiablo
BCS: if you already have a valid login with enough privileges to do what you propose, chances are that you no longer need brute force attacks (because there are other attack vectors available to you). The delay is most useful against external attackers.
ammoQ
+11  A: 

Basically, to mitigate brute-force and dictionary attacks.

From The Linux-PAM Application Developer's Guide:

Planning for delays

extern int pam_fail_delay(pam_handle_t *pamh, unsigned int micro_sec);

This function is offered by Linux-PAM to facilitate time delays following a failed call to pam_authenticate() and before control is returned to the application. When using this function the application programmer should check if it is available with,

#ifdef PAM_FAIL_DELAY
    ....
#endif /* PAM_FAIL_DELAY */

Generally, an application requests that a user is authenticated by Linux-PAM through a call to pam_authenticate() or pam_chauthtok(). These functions call each of the stacked authentication modules listed in the relevant Linux-PAM configuration file. As directed by this file, one or more of the modules may fail causing the pam_...() call to return an error. It is desirable for there to also be a pause before the application continues. The principal reason for such a delay is security: a delay acts to discourage brute force dictionary attacks primarily, but also helps hinder timed (covert channel) attacks.
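
For example, an application could request roughly a one-second delay on failure like this (just a sketch: it uses the text-mode misc_conv conversation function from libpam_misc, omits error handling, and would be linked with -lpam -lpam_misc):

#include <security/pam_appl.h>
#include <security/pam_misc.h>

int main(void)
{
    pam_handle_t *pamh = NULL;
    struct pam_conv conv = { misc_conv, NULL };  /* simple text-mode prompts */
    int ret;

    pam_start("login", NULL, &conv, &pamh);
#ifdef PAM_FAIL_DELAY
    pam_fail_delay(pamh, 1000000);   /* request ~1 s delay after a failed attempt */
#endif /* PAM_FAIL_DELAY */
    ret = pam_authenticate(pamh, 0); /* on failure, the delay is applied here */
    pam_end(pamh, ret);
    return ret == PAM_SUCCESS ? 0 : 1;
}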

+7  A: 

It's a very simple, virtually effortless way to greatly increase security. Consider:

  1. System A has no delay. An attacker has a program that creates username/password combinations. At a rate of thousands of attempts per minute, it takes only a few hours to try every combination and record all successful logins.

  2. System B generates a 5-second delay after each incorrect guess. The attacker's efficiency has been reduced to 12 attempts per minute, effectively crippling the brute-force attack. Instead of hours, it can take months to find a valid login. If hackers were that patient, they'd go legit. :-)
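
For concreteness, the rough arithmetic (the attack rate is an assumption for illustration): at 1,000 guesses per minute with no delay, 1,000,000 guesses take about 17 hours; with a 5-second delay, 60 / 5 = 12 guesses per minute, so the same 1,000,000 guesses take roughly 58 days.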

Adam Liss
+3  A: 

Failed authentication delays are there to reduce the rate of login attempts. The idea is that if somebody is running a dictionary or brute-force attack against one or many user accounts, the attacker will be required to wait out the fail delay, which forces the attack to take more time and gives you more chance to detect it.

You might also be interested in knowing that, depending on what you are using as a login manager, there is usually a way to configure this delay.

In GDM, the delay is set in the gdm.conf file (usually in /etc/gdm/gdm.conf). You need to set RetryDelay=x where x is a value in seconds.

Most Linux distributions these days also support defining FAIL_DELAY in /etc/login.defs, allowing you to set a wait time after a failed login attempt.

Finally, PAM also allows you to set a nodelay attribute on your auth line to bypass the fail delay. (Here's an article on PAM and Linux)
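
For example (the exact values and file locations vary by distribution, so treat these lines as illustrative only):

# /etc/login.defs - delay, in seconds, after a failed login attempt
FAIL_DELAY              5

# /etc/pam.d/common-auth - nodelay on the pam_unix auth line bypasses its delay
auth [success=1 default=ignore] pam_unix.so nullok_secure nodelay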

Pierre-Luc Simard
+1  A: 

I don't see that it can be as simple as the responses suggest.

If the response to a correct password is (more or less) immediate, don't you only have to wait slightly longer than that to know the password is wrong? (At least probabilistically, which is fine for cracking purposes.) And anyway you'd be running this attack in parallel... is this all one big DoS welcome mat?

That's not what they meant. There is an obvious difference between getting the password wrong or right. What they meant was that there should be no difference between an incorrect username and an incorrect password. And what do you mean by running this attack in parallel? How can you run it in parallel?
Mark
@Mark, running in parallel probably would entail opening multiple connections and trying to login. Still time consuming and not very practical.
he_the_great
If you can run a million checks per second on a non-slowed connection and the connection then has a 1-second delay added for failed attempts, you'd need a million attack clients to get the same effect. I doubt the server will allow that many telnet sessions to be created.
paxdiablo
The point is you don't have to wait out the delay before you try the next password, so what's the use?
@Pax, that's what I meant by DoS welcome mat
@Greg, you do have to re-connect to the host and, if necessary, the next step would be to check IP addresses to catch this as well.
paxdiablo
Or just have successful attempts take a second as well. You don't log on often enough for that to be a problem but it would be for an attack node.
paxdiablo
@Pax: yes of course, but without the next step what's the point?
@All: Remember that it takes time to establish a network connection, so it's far more efficient to "piggyback" several attempts on one (network) session than to try once, disconnect, reconnect, try again, da capo ad infinitum. A good system will disconnect a user after a small number of failures.
Adam Liss
A: 

On Ubuntu 9.10, and I think newer versions too, the file you're looking for is located at

/etc/pam.d/login

Edit the line:

auth optional pam_faildelay.so delay=3000000

changing the 3000000 (3 seconds, expressed in microseconds) to whatever value you want.

Note that to have a 'nodelay' authentication, I THINK you should edit the file

/etc/pam.d/common-auth

too. On the line:

auth [success=1 default=ignore] pam_unix.so nullok_secure

add 'nodelay' at the end (without quotes). But this last part about 'nodelay' is just what I think.

Gabriel L. Oliveira
A: 

I would like to add a note from a developer's perspective. Though it wouldn't be obvious to the naked eye, a smart developer would break out of a matching query as soon as the match is found, so a successful match would complete faster than a failed one. This is because the matching function compares the credentials against all known accounts until it finds the correct match. In other words, say there are 1,000,000 user accounts ordered by ID: 001, 002, 003 and so on, and your ID is 43,001. When you put in a correct username and password, the scan stops at 43,001 and logs you in. If your credentials are incorrect, it scans all 1,000,000 records. The difference in processing time on a dual-core server might be in the milliseconds; on Windows Vista with 5 user accounts it would be in the nanoseconds.
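
To illustrate the early-exit behaviour being described, a contrived sketch (this is not how real systems store or check passwords - hashing, salting and per-user lookups are deliberately omitted):

#include <string.h>
#include <stddef.h>

struct account { const char *user; const char *password; };

/* Early-exit scan: a hit on record 43,001 returns as soon as it is found,
   while a miss only returns after all n records have been compared. */
int find_account(const struct account *db, size_t n,
                 const char *user, const char *password)
{
    for (size_t i = 0; i < n; i++)
        if (strcmp(db[i].user, user) == 0 &&
            strcmp(db[i].password, password) == 0)
            return (int)i;   /* success: the loop stops early */
    return -1;               /* failure: every record was examined */
}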

I think you'll find 99% of the posters here are developers of one level or another. Stop sounding so pompous.
A: 

What I tried before appeared to work, but actually did not; if you care you must review the wiki edit history...

What does work (for me) is to both lower the value of pam_faildelay.so delay=X in /etc/pam.d/login (I lowered it to 500000, half a second) and add nodelay (preceded by a space) to the end of the line in common-auth, as described by Gabriel in his answer.

auth [success=1 default=ignore] pam_unix.so nullok_secure nodelay

At least for me (Debian sid), making only one of these changes will not shorten the delay appreciably below the default 3 seconds, although it is possible to lengthen the delay by changing only the value in /etc/pam.d/login.
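
For reference, the lowered line in /etc/pam.d/login would then look something like this (based on the default line shown in Gabriel's answer):

auth optional pam_faildelay.so delay=500000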

This kind of crap is enough to make a grown man cry!

A: 

I think you can learn more about this in general by googling "side channel attacks", if I'm not mistaken. Just a thought!

ultrajohn