views: 244
answers: 12

I am trying to write an application that uses pretty URLs or REST (I am still learning what this entails). Anyway, my URLs look like www.foo.net/some_url/some_parameter/some_keyword. I can be sure a URL will never exceed N characters. Should I validate the URL length on every request in order to protect against buffer overflow/injection attacks? I am going to guess this is an obvious yes, but I am no security expert, so perhaps I am missing something.

Update: Thanks for the comments. I can see there are differences of opinion on this. I will reject URLs over max_expected + some_number in length, which will be a variable and very easy to configure as the application changes. I am guessing most buffer overflows are typically the result of very long strings. I realize this does not make my application completely secure, but I think it helps, and it is low-hanging fruit.
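A minimal sketch of that configurable check in Python (MAX_EXPECTED and HEADROOM are hypothetical names standing in for max_expected + some_number; tune them as the application changes):

    MAX_EXPECTED = 200   # assumption: longest legitimate URL today
    HEADROOM = 56        # assumption: slack for future schema changes

    def url_length_ok(url: str) -> bool:
        """Reject anything longer than the expected maximum plus headroom."""
        return len(url) <= MAX_EXPECTED + HEADROOM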

+1  A: 

How can you be so sure that every URL longer than N is invalid? If you can be sure, then it shouldn't hurt to limit it as a sanity check - but don't let this fool you into thinking you've prevented a class of exploit.

Chii
A: 

I think this may give you some modicum of safety, and it might save you a little bandwidth if people do send you crazy long URLs, but largely you should still validate your data in the actual application as well. Multiple levels of security are generally better, but don't make the mistake of thinking that because you have a (weak) safeguard at the front, you won't have issues with the rest.

jamuraa
+4  A: 

If you are not expecting that input, reject it.

You should always validate your inputs, and certainly discard anything outside of the expected range. If you already know that your URLs honestly won't be beyond a certain length, then rejecting the request before it even gets to the application seems wise.
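As one way to do that early rejection in Python, here is a minimal WSGI middleware sketch; UrlLengthLimiter and MAX_URL_LEN are hypothetical names, and 414 is the standard status code for an over-long request URI:

    MAX_URL_LEN = 512  # assumption: generous headroom over the real maximum

    class UrlLengthLimiter:
        """Reject over-long URLs before the wrapped application sees them."""

        def __init__(self, app, max_len=MAX_URL_LEN):
            self.app = app
            self.max_len = max_len

        def __call__(self, environ, start_response):
            # PATH_INFO plus QUERY_STRING approximates the request URL.
            url = environ.get('PATH_INFO', '') + environ.get('QUERY_STRING', '')
            if len(url) > self.max_len:
                start_response('414 Request-URI Too Long',
                               [('Content-Type', 'text/plain')])
                return [b'URL too long']
            return self.app(environ, start_response)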

A: 

I'd say no. It's just false security. Just program well and check your requests for bad stuff. It should be enough.

Also, it's not future proof.

Giuda
Almost. Check your requests for _good_ stuff, not bad stuff. There will always be bad things you haven't thought of; it's a lot easier to allow only the good things (which are nearly always easier to define).
Doug McClean
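To illustrate the whitelist approach in Python, a small sketch; the pattern is an assumption based on the /some_url/some_parameter/some_keyword scheme from the question:

    import re

    # Accept only the shape we expect: three path segments made of safe
    # characters. Anything else is rejected, including bad input nobody
    # thought to blacklist.
    VALID_PATH = re.compile(r'^/[A-Za-z0-9_-]+/[A-Za-z0-9_-]+/[A-Za-z0-9_-]+$')

    def path_is_valid(path: str) -> bool:
        return VALID_PATH.match(path) is not None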
A: 

Yes. If it's too long and you're sure it's invalid, then reject it as soon as possible. If you can, reject it before it reaches your application at all (IISLockdown, for example, will do this).

Remember to account for character encoding though.
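To make the encoding point concrete, a small Python sketch (the path is made up): the percent-encoded form of a URL has a different length from its decoded form, so a limit applied to the raw request measures something different from what the application eventually sees.

    from urllib.parse import unquote

    raw = '/foo/%2e%2e%2f' * 40   # '%2e%2e%2f' decodes to '../'
    decoded = unquote(raw)

    print(len(raw))      # 560 characters on the wire
    print(len(decoded))  # 320 characters after decoding

    # Safer: apply the length check (and the rest of the validation)
    # to the decoded, canonical form of the URL.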

blowdart
A: 

Rather than checking length, I think you should check content. You never know how you're going to use your URL schema in the future, but you can always sanitize your inputs. To put a very complex thing very simply: don't trust user-supplied data. Don't put it directly into DB queries, don't eval() it, don't take anything for granted.
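For the DB-query point, a minimal Python sketch using parameter binding; the table and column names are made up:

    import sqlite3

    def find_by_keyword(conn: sqlite3.Connection, keyword: str):
        # The ? placeholder lets the driver handle escaping, so the
        # user-supplied keyword is never spliced into the SQL string.
        return conn.execute(
            'SELECT id, title FROM articles WHERE keyword = ?',
            (keyword,),
        ).fetchall()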

Lucas Oman
A: 

If you know valid URLs can't be over N bytes, then it sounds like a good way to quickly reject cross-site scripting attempts without too much effort.

pdc
+1  A: 

The only thing I can see that could cause issues is that while today your URLs will never exceed N, you cannot guarantee that will be the case forever. And in a year, when you go back to make an edit to allow a URL to be N+y in length, you may forget to modify the URL rejection code.

You'll always be better off verifying the URL parameters prior to using them.

Stephen Wrighton
A: 

It's better to validate what is in the request than to validate the URL's length.

Your needs may change in the future, at which point you'll have to remove or change the URL length validation, possibly introducing bugs.

If URL length does turn out to be a proven security vulnerability, you can add the check then.

Terrapin
+4  A: 

Defence in depth is a good principle. But false security measures are a bad principle. The difference depends on a lot of details.

If you're truly confident that any URL over N is invalid, then you may as well reject it. But if that's true, and if the rest of your input validation is correct, then it will get rejected later anyway. So all this check does is potentially, maybe, mitigate the damage caused by some other bug in your code. It's often better to spend your time thinking about how to avoid those bugs than thinking about what N might be.

If you do check the length, then it's still best not to rely on this length limit elsewhere in your code. Doing that couples the different checks more tightly together, and makes it harder to change the limit in the next version, if you change the spec and need to accept longer URLs. For example if the length limit becomes an excuse to put URLs on the stack without due care and attention, then you may be setting someone up for a fall.

Steve Jessop
A: 

Ok, let's assume such an N exists. As onebyone pointed out, a malformed URL that is longer than N characters will be rejected by other input validation anyway. However, in my eyes, this opens up a whole new thing to think about:

Using this constant, you can validate your other validation. If the other validations fail to flag a certain URL as invalid, yet the URL is longer than N characters, then that URL has triggered a bug and should be recorded (and maybe the whole application should shut down, because an attacker might also be able to craft an invalid URL that is short enough to slip through).
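A small Python sketch of that cross-check; N, validate_url, and the stub's rules are all hypothetical stand-ins:

    import logging

    N = 256  # assumption: the known upper bound on any valid URL

    def validate_url(url: str) -> bool:
        """Stand-in for the application's real validation logic."""
        return url.startswith('/') and '..' not in url

    def checked_validate(url: str) -> bool:
        ok = validate_url(url)
        if ok and len(url) > N:
            # The other validators accepted an impossible URL, which
            # means the validation itself has a hole: record it loudly.
            logging.critical('validation passed an over-long URL: %r', url)
            return False
        return ok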

Tetha
+1  A: 

Safari, Internet Explorer, and Firefox all have different maximum URL lengths that they accept.

I vote you go for the shortest of the three.

http://www.boutell.com/newfaq/misc/urllength.html

Pulled from link -

"Microsoft Internet Explorer (Browser) - 2,083 characters

Firefox (Browser) - After 65,536 characters, the location bar no longer displays the URL in Windows Firefox 1.5.x. However, longer URLs will work. I stopped testing after 100,000 characters.

Safari (Browser) - At least 80,000 characters will work."

Penguinix
Thanks! Just the thing I needed to find.
Volomike