Hi All,

I recently transitioned my company's website from a hosting company's servers (IIS) to our in-house servers (Apache). The group that originally built the site did a piss-poor job and the entire thing was a mess to migrate. While the move went fairly smoothly, looking at the error_log there are still some missing pages.

Rather than having to continually grep through the error_log for "File does not exist" errors relating to this domain - we host about 15 or so on these servers - I was wondering if it might be easier to simply do the following when a 404 error occurs:

  • redirect to a php page and pass the original URL request
  • have the new php page dump the URL to a log-ish file

As I type this I am becoming less and less convinced that this is a worthwhile undertaking. Regardless, the underlying question is: are there potential security issues with using fwrite()? Does user input need any sort of scrubbing if it is going to be appended to a file? This input would not be going anywhere near a database, for whatever that is worth. Thanks in advance.

+1  A: 

Well, just the usual filesystem stuff: don't let the user specify where the file will go. A request like script.php?filename=../../../../../../../etc/passwd shouldn't even have a chance of writing to /etc/passwd (and the script shouldn't have filesystem permissions for that anyway). Other than that, fwrite() doesn't have any special characters that would let input jump into some sort of command mode.

Also, setting up the 404 page is pretty simple (in httpd.conf):

ErrorDocument 404 /error_page.php

and just dump $_SERVER['REQUEST_URI'] to a file.
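Something like this, for instance (a minimal sketch; the log path and the friendly message are placeholders, and the path must be hard-coded so the client can never influence it):

    <?php
    // error_page.php -- minimal 404 logger (sketch).
    // The log path is fixed server-side; nothing from the
    // request ever decides where we write.
    $logFile = '/var/log/mysite/404.log';  // placeholder path

    $entry = date('c') . ' ' . $_SERVER['REQUEST_URI'] . "\n";

    $fp = fopen($logFile, 'ab');           // append, binary-safe
    if ($fp !== false) {
        fwrite($fp, $entry);
        fclose($fp);
    }

    header('HTTP/1.1 404 Not Found');      // keep the original status
    echo 'Sorry, that page could not be found.';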

Piskvor
A: 

fwrite() should be pretty safe.

Alternatively, you can use an access-log analyzer; most of them list not-found pages for you.

Michal Čihař
+3  A: 

As long as you are the one defining which file you write to (and not deriving it from the URL), there should not be much risk: the only thing you get from the user is the content you write to the file, and if you never execute that file, only read it, it should be quite OK.

The idea of logging 404 errors this way is not new: I've seen it done quite a few times, and have never faced any major problem with it (the biggest problem I saw was a file that grew quite fast, because there were far too many errors ^^)

For instance, Drupal does a bit of this: 404 errors are logged -- but to a database, so it's easier to analyse them through the web interface.

Pascal MARTIN
A: 

If all the script does is write to disc, the only thing someone from the outside can do is get it to write to disc. Obviously, the file name should not be a parameter passed with the invalid URL. Someone could try to exploit it by sending tons of invalid page requests with really long URLs, but they would have to know you were doing this and care enough, when there are more effective general-purpose attacks available.
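One cheap mitigation, if that scenario worries you (a sketch; the 2048-byte cap is an arbitrary choice):

    <?php
    // Cap the logged URL so a flood of very long requests
    // can't bloat the log file quite as fast.
    $url = substr($_SERVER['REQUEST_URI'], 0, 2048);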

unholysampler
A: 

One potential issue to look out for is writing logs as HTML (or any other file type that allows code to be embedded). Such files are, of course, vulnerable to XSS attacks.

Tom Hawtin - tackline
A: 

A common logfile attack is to request URLs containing embedded malicious JavaScript. These URLs are written verbatim to the log file, and the script then executes when anyone views the file in a web browser.

  1. Ensure the file you write cannot be served as HTML by the web server
  2. Consider URL-encoding or HTML-encoding the URLs before writing them (see the sketch below)
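For example (a sketch; htmlspecialchars() is one option, rawurlencode() another, depending on how you plan to read the log back; the log path is a placeholder):

    <?php
    $logFile = '/var/log/mysite/404.log';  // placeholder path

    // Neutralise anything script-like before it reaches the log:
    // htmlspecialchars() makes the entry safe to view as HTML.
    $safeUrl = htmlspecialchars($_SERVER['REQUEST_URI'], ENT_QUOTES);

    file_put_contents($logFile, date('c') . ' ' . $safeUrl . "\n", FILE_APPEND);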
Cheekysoft
A: 

You should already be recording the 404 errors in your error_log.

By all means use a custom error handler to give the user a friendlier error message, but if this site sees any sort of serious throughput, then using fwrite from the script is not a good idea: PHP does not intrinsically have the sort of sophisticated file-locking semantics needed to support concurrent file access - and since the webserver is already recording the information for you, why bother?
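(If you do write from PHP anyway, the least you can do is ask for an advisory lock per write; a minimal sketch, with the log path as a placeholder:)

    <?php
    $logFile = '/var/log/mysite/404.log';  // placeholder path
    $entry   = date('c') . ' ' . $_SERVER['REQUEST_URI'] . "\n";

    // LOCK_EX requests an exclusive advisory lock for the duration
    // of the write, so concurrent requests don't interleave entries.
    file_put_contents($logFile, $entry, FILE_APPEND | LOCK_EX);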

C.

symcbean