I want to add an additional roadblock in my application to prevent automating it via JavaScript, specifically when the automated requests are made via XMLHttpRequest from any of the popular browsers.

Is there a reliable tell-tale sign of an XMLHttpRequest that I can use in ASP.NET?

And I guess the other related question is: is it hard for an XMLHttpRequest to appear to be an ordinary human-driven request? Because if it's easy, then I guess I'm on a fool's errand.

UPDATE: I might be phrasing the question too narrowly. The goal is to detect: code written by someone else, not submitted by an ordinary browser, might be a bot, might not be from my intranet customers, etc. So far XHR and .NET WebRequest requests come to mind.

+4  A: 

No, there is no way of doing this. A lot of the popular libraries like jQuery add a special header ("X-Requested-With" for jQuery) to indicate that it's an Ajax call, but that's obviously voluntary on the part of the client.

Unfortunately, you must assume that any request you receive may be malicious.
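
For what it's worth, here is a minimal sketch (my own, not part of the answer) of reading that header in ASP.NET, treating it purely as a hint and never as a security check:

    protected void Page_Load(object sender, EventArgs e)
    {
        // jQuery, Prototype and others send this header on Ajax calls,
        // but any client can omit or fake it, so treat it as a hint only.
        string requestedWith = Request.Headers["X-Requested-With"];
        bool looksLikeAjax = string.Equals(
            requestedWith, "XMLHttpRequest", StringComparison.OrdinalIgnoreCase);

        if (looksLikeAjax)
        {
            // e.g. log it, or skip rendering the full page chrome
        }
    }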

Joel Martinez
A: 

As far as I know, all implementations of XMLHttpRequest add a header to the request.

Edit:

Sorry, the header (at least as PHP exposes it) is 'HTTP_X_REQUESTED_WITH'; on the wire it is 'X-Requested-With'.

Thomas
This is not added by default; it is voluntarily added by the client library
Joel Martinez
However, if that header is set, then it definitely is an XMLHttpRequest.
Thomas
Am I missing something, or could that header not also be faked in a manually sent request?
Thr4wn
@Thr4wn, Yeah, but a malicious code writer is eager to appear as something other than a script, so although they could add the header, they would rather appear without it and thus look like a human instead of a bot.
MatthewMartin
Point remains though, that the HTTP_X_REQUESTED_WITH header is NOT required, and it offers NO guarantee about the origin of the request. It is unreliable and should NOT be used for security.
ken
+3  A: 

You could always use a CAPTCHA to ensure that a human is responsible for submitting the request. recaptcha.net is free and helps to digitize books.

Edit:

If you know what type of malicious behavior you are trying to prevent, you could develop simple algorithms for detecting that behavior. When that behavior is detected you could challenge the client with a CAPTCHA to ensure that a human is responsible. This approach is starting to become common practice. Take a look at this post on this topic.

Lawrence Barsanti
I thought about that, but the customer feels all their pages are equally sensitive, and I can't ask them to put a captcha on each page, or a password prompt on each page for that matter.
MatthewMartin
Put the CAPTCHA on the first page, and then you can assume the other pages are requested by a legitimate user. Or use a minimum time limit between requests, like SO does, or throw a CAPTCHA when requests exceed x in a given time span, like Facebook does.
Adrian Godong
+1  A: 

You can perform a few sneaky things, but unfortunately they won't deter everyone. For example, you could place some JavaScript on a start/login page that runs when the page is loaded. This JavaScript causes a redirect and potentially writes an encrypted cookie value that is sent back to the server. Using XMLHttpRequest (or others) just returns the content and doesn't execute any JavaScript, so you can filter these requests out because they don't have the cookie value set by the script. Running some obfuscation on the JavaScript would be even better.
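
A rough sketch of that idea in ASP.NET Web Forms (my own illustration, not from the answer; the cookie name "jsproof" and the plain-text value are made up, and a real version would want the encrypted value mentioned above):

    protected void Page_Load(object sender, EventArgs e)
    {
        const string cookieName = "jsproof"; // hypothetical name

        if (Request.Cookies[cookieName] == null)
        {
            // No cookie yet: emit script that sets it and reloads the page.
            // A bare XMLHttpRequest or HttpWebRequest client that never
            // executes JavaScript keeps arriving without the cookie.
            string script = "document.cookie='" + cookieName + "=1; path=/';"
                          + "window.location.reload();";
            ClientScript.RegisterStartupScript(GetType(), "jsproof", script, true);
        }
        // else: the client executed our script at least once
        // (a determined attacker can still set the cookie by hand).
    }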

+1  A: 

You can't prevent requests from other clients (like a C# snippet using HttpWebRequest), because you are serving content (HTML markup, etc.) over HTTP.

Any client can make a request to you. You can deter non-human clients with CAPTCHA controls.

Last week I needed to post to a web form from my own C# code. The form was very basic: it takes a code (input type="text") and shows a status message in a div element. I tried to POST the required data directly and got no result. After thinking about it a little I found a solution: first I made a GET request to the page and received a session ID cookie, which I saved in a CookieContainer instance; then I made the second, POST request with the same CookieContainer (so the session ID was sent along). That worked, and I got the status text. I think the web page was using something like this:

    protected void Page_Load(object sender, EventArgs e)
    {
        // Remember that this session started with an ordinary GET request.
        if (!IsPostBack)
        {
            Session["firstlyMadeAGETRequest"] = true;
        }
    }

    private void methodWhichWasInvokedWhenPagePostedBack()
    {
        // Only run the logic if the initial GET was seen for this session.
        object flag = Session["firstlyMadeAGETRequest"];
        if (flag != null && (bool)flag)
        {
            // do the logic
        }
    }

It took a developer on my team some time to work this out, but when we inspected the traffic with the ieHttpHeaders tool we saw that we weren't sending the session ID, so we used the CookieContainer and resolved the issue.
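
For illustration, here is roughly what that client looked like (a sketch based on the description above, with a made-up URL and form field; a real Web Forms page would usually also need __VIEWSTATE and friends):

    using System;
    using System.IO;
    using System.Net;
    using System.Text;

    class FormClient
    {
        static void Main()
        {
            // Hypothetical URL and form field, for illustration only.
            string url = "http://example.com/CheckCode.aspx";
            CookieContainer cookies = new CookieContainer();

            // 1. Plain GET first, so the server hands us its session ID cookie.
            HttpWebRequest get = (HttpWebRequest)WebRequest.Create(url);
            get.CookieContainer = cookies;
            using (get.GetResponse()) { }

            // 2. POST with the same CookieContainer, so the session ID goes back.
            HttpWebRequest post = (HttpWebRequest)WebRequest.Create(url);
            post.Method = "POST";
            post.ContentType = "application/x-www-form-urlencoded";
            post.CookieContainer = cookies;
            byte[] body = Encoding.UTF8.GetBytes("code=12345");
            post.ContentLength = body.Length;
            using (Stream s = post.GetRequestStream())
            {
                s.Write(body, 0, body.Length);
            }
            using (WebResponse response = post.GetResponse())
            using (StreamReader reader = new StreamReader(response.GetResponseStream()))
            {
                Console.WriteLine(reader.ReadToEnd()); // the state message
            }
        }
    }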

I think the real solution for keeping out automated clients is to use CAPTCHAs.

Good luck.

Cengiz Han
+1  A: 

There's no way to find out if a request is forged or not. With the .NET HttpWebRequest class you can perfectly mimic any valid HTTP request from a browser. But depending on your application, you can consider one of the following:

  • Look at HTTP headers to find the most blatant attempts
  • Prevent too many requests from the same IP address in a short time frame (a rough sketch follows below)
  • Require a login
  • Only allow certain IP address ranges
  • CAPTCHAs
  • Run some JavaScript code before submitting, to make a hack with .NET HttpWebRequest harder (see Mark Barnard's answer)

You have to find out why people would try to do this, and then find a way to make it very inconvenient for them, ideally in such a way that you can easily modify the validation procedure so they constantly have to keep up.
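
For the rate-limiting point, a minimal sketch (my own, with an arbitrary threshold) of per-IP throttling as an ASP.NET IHttpModule:

    using System;
    using System.Web;
    using System.Web.Caching;

    public class SimpleThrottleModule : IHttpModule
    {
        private const int MaxRequestsPerMinute = 30; // arbitrary example limit

        public void Init(HttpApplication application)
        {
            application.BeginRequest += OnBeginRequest;
        }

        private void OnBeginRequest(object sender, EventArgs e)
        {
            HttpContext context = ((HttpApplication)sender).Context;
            string key = "throttle_" + context.Request.UserHostAddress;

            // Crude counter: it expires one minute after the most recent hit,
            // and the increment is not thread-safe; fine for a sketch only.
            int count = (context.Cache[key] as int?) ?? 0;
            count++;
            context.Cache.Insert(key, count, null,
                DateTime.UtcNow.AddMinutes(1), Cache.NoSlidingExpiration);

            if (count > MaxRequestsPerMinute)
            {
                context.Response.StatusCode = 429; // Too Many Requests
                context.Response.End();
            }
        }

        public void Dispose() { }
    }

The module still has to be registered in web.config under <httpModules>, and a real implementation would want proper locking around the counter.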

chris166
+1  A: 

You could implement a strategy using request challenge tokens, not unlike what might be used for CSRF/XSRF protection. Another possible alternative might be using authentication, although if you have a public website this might not be very friendly.
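
A minimal sketch of that token idea in ASP.NET Web Forms (my own illustration; RequestTokenField is a hypothetical asp:HiddenField on the page):

    protected void Page_Load(object sender, EventArgs e)
    {
        if (!IsPostBack)
        {
            // Issue a random token, remember it server-side, embed it in the form.
            string token = Guid.NewGuid().ToString("N");
            Session["RequestToken"] = token;
            RequestTokenField.Value = token;
        }
        else
        {
            // Reject postbacks that do not echo the expected token.
            string expected = Session["RequestToken"] as string;
            if (expected == null || RequestTokenField.Value != expected)
            {
                Response.StatusCode = 403;
                Response.End();
            }
        }
    }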

ken