We have a web application that passes parameters in the URL, along these lines:

www.example.com/ViewCustomer?customer=3945

Reasonably often, we will see attempts to access just:

www.example.com/ViewCustomer

Our system logs this as invalid and sends back a page along the lines of "An error has occurred, contact support with trace number XXX".
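
To make it concrete, the handler behaves roughly like the following sketch (a simplified illustration assuming a Java servlet; the class name and trace-number handling are illustrative, not our actual code):

    import java.io.IOException;
    import java.util.UUID;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import javax.servlet.http.HttpSession;

    // Illustrative only: sketches the validation path described above.
    public class ViewCustomerServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            String customer = req.getParameter("customer"); // null when the parameter is absent
            if (customer == null || !customer.matches("\\d+")) {
                String trace = UUID.randomUUID().toString();
                // The session is typically valid here, which is what makes these requests odd.
                HttpSession session = req.getSession(false);
                log("invalid ViewCustomer request, trace=" + trace
                        + ", session=" + (session == null ? "none" : session.getId()));
                resp.sendError(HttpServletResponse.SC_BAD_REQUEST,
                        "An error has occurred, contact support with trace number " + trace);
                return;
            }
            // ... normal rendering for this customer ...
        }
    }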

Our logs include the session information, so it is actually someone logged in with a valid session, which means they have successfully signed in with a username and password.

It could be they just typed that into the address bar, but it seems to occur too often for that. Another possibility is a bug in our code, but we've investigated, and in some cases the URL is generated in only one place, which is clearly correct. We've never had a user complain about something not working that resulted in one of these. Everything is under SSL.

Has anyone else experienced this? Do some browsers send these sorts of dodgy requests occasionally?

Edit: Our logs show this:

 user-agent = Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.04506.30; .NET CLR 3.0.04506.648)
+2  A: 

Do your logs include the referrer information? If there's any information present then it could help to pinpoint the error. If there isn't, that might indicate an "editing the URL" attempt. (I don't know how much SSL would change any of this, admittedly.)
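
If referrer logging isn't already in place, a servlet filter could capture it for just these requests; a minimal sketch, assuming a Java servlet stack (the filter name and log destination are illustrative):

    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletRequest;

    // Illustrative: logs Referer and User-Agent for requests missing the parameter.
    public class RefererLoggingFilter implements Filter {
        public void init(FilterConfig config) {}
        public void destroy() {}

        public void doFilter(ServletRequest req, ServletResponse resp, FilterChain chain)
                throws IOException, ServletException {
            HttpServletRequest http = (HttpServletRequest) req;
            if (http.getParameter("customer") == null) {
                System.out.println("missing customer param:"
                        + " referer=" + http.getHeader("Referer")
                        + " user-agent=" + http.getHeader("User-Agent"));
            }
            chain.doFilter(req, resp);
        }
    }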

Browsers do sometimes prefetch links but I don't know whether they'd get rid of the parameter - and it seems unlikely that they'd do this for HTTPS.

Do you have a pattern as to which browsers are being used for these requests?

Jon Skeet
A: 

Check your logs for the agent string and see if these requests are made by a search engine spider.
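
For example, a crude first pass over the logged agent strings (the signature list is illustrative, not exhaustive):

    // Illustrative: a rough check for common search-engine spider signatures.
    public class SpiderCheck {
        public static boolean looksLikeSpider(String userAgent) {
            if (userAgent == null) {
                return true; // many bots send no user-agent at all
            }
            String ua = userAgent.toLowerCase();
            return ua.contains("googlebot") || ua.contains("msnbot")
                    || ua.contains("slurp") || ua.contains("crawler")
                    || ua.contains("spider");
        }
    }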

Franci Penov
I doubt this because it would have had to log in first.
WW
A: 

I know that I sometimes remove the parameter just to check what is there. I'm sure I'm not the only one.

A: 

Some rogue crawlers change their user-agent string to a browser's and crawl pages. This may be such a case.

Also, most crawlers try substituting other values for the query parameters to fetch pages that weren't linked.

cnu
+1  A: 

I have seen this with the web application we are supporting - a stray GET request out of the blue for an already logged-in user, wrecking the server-side state and resulting in an error on the subsequent legitimate POST request.

Since in our case the URLs use URL rewriting to attach the session ID, such stray GETs would also sometimes carry an old session ID.
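
To illustrate the mechanism (standard servlet URL rewriting; the URL is the one from the question):

    // With URL rewriting, links emitted by the app go through encodeURL(),
    // and for clients the container can't track by cookie it appends the
    // session ID, e.g. /ViewCustomer?customer=3945;jsessionid=0AAB6C8DE4...
    String link = response.encodeURL("/ViewCustomer?customer=3945");
    // A request replayed later (by a plugin, proxy, or cached page) keeps
    // that old jsessionid, which is how the stale-session GETs arise.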

In the specific log file that led to nailing down this issue, these stray requests had a user-agent string that differed (though it was valid) from the rest of the requests in the same session.

I am convinced it's some plugin or extension rather than the browser itself. A proxy, or even malware, could also be doing it.

We overcame this particular problem by forbidding GET requests to the URI in question.
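
In servlet terms that can be as simple as the following sketch (illustrative, assuming the legitimate traffic is POST):

    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class StatefulActionServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            // Stray GETs were corrupting server-side state, so reject them outright.
            resp.setHeader("Allow", "POST");
            resp.sendError(HttpServletResponse.SC_METHOD_NOT_ALLOWED);
        }

        @Override
        protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            // ... the legitimate state-changing work happens here ...
        }
    }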

However, I am now dealing with a similar problem where a POST request appears out of nowhere where it shouldn't, and the only difference from the legitimate requests is in the "Accept" header.

+1  A: 

I now think this was actually a Tomcat bug: getParameter() fails on a POST with Transfer-Encoding: chunked.
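
If you need to defend against that, one possible workaround is to spot a chunked POST whose parameters came back empty and parse the form body yourself rather than relying on getParameter(); a rough sketch, not production code:

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.net.URLDecoder;
    import java.util.HashMap;
    import java.util.Map;
    import javax.servlet.http.HttpServletRequest;

    // Rough sketch: if the container failed to parse a chunked POST, read the
    // (already de-chunked) request body and split the form fields manually.
    public class ChunkedPostWorkaround {
        public static Map<String, String> formParams(HttpServletRequest req)
                throws IOException {
            Map<String, String> params = new HashMap<String, String>();
            boolean chunked = "chunked".equalsIgnoreCase(req.getHeader("Transfer-Encoding"));
            if (chunked && req.getParameterMap().isEmpty()) {
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(req.getInputStream(), "UTF-8"));
                StringBuilder body = new StringBuilder();
                String line;
                while ((line = in.readLine()) != null) {
                    body.append(line);
                }
                for (String pair : body.toString().split("&")) {
                    int eq = pair.indexOf('=');
                    if (eq > 0) {
                        params.put(URLDecoder.decode(pair.substring(0, eq), "UTF-8"),
                                   URLDecoder.decode(pair.substring(eq + 1), "UTF-8"));
                    }
                }
            } else {
                // Normal path: the container parsed the parameters for us.
                for (Object name : req.getParameterMap().keySet()) {
                    params.put((String) name, req.getParameter((String) name));
                }
            }
            return params;
        }
    }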

WW