views: 291
answers: 2
An odd one: I'm trying to read the <head> section of a lot of different websites, and one particular type of server, Apache, sometimes returns 403 Forbidden. Not all Apache servers do this, so it may be a config setting or a particular version of the server.

When I then check the URL with a web browser (Firefox, for example), the page loads fine. The code looks roughly like this:

// System.Net.WebClient, with no request headers set
var client = new WebClient();
var stream = client.OpenRead(new Uri("http://en.wikipedia.org/wiki/Barack_Obama"));
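
To confirm it really is a 403 (and to see whatever headers the server sends back), I'm catching the WebException, roughly like this:

using System;
using System.Net;

class Probe
{
    static void Main()
    {
        var client = new WebClient();
        try
        {
            using (client.OpenRead(new Uri("http://en.wikipedia.org/wiki/Barack_Obama")))
            {
                Console.WriteLine("Request succeeded");
            }
        }
        catch (WebException ex)
        {
            // A 403 lands here; the attached response carries the status
            // code and whatever headers the server sent back.
            var response = ex.Response as HttpWebResponse;
            if (response != null)
            {
                Console.WriteLine("Status: {0}", response.StatusCode);
                Console.WriteLine(response.Headers);
            }
        }
    }
}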

Normally, a 403 means an access-permission failure, but these are ordinarily public, unsecured pages. I suspect Apache is filtering on something in the request headers, since I'm not bothering to set any.

Maybe someone who knows more about Apache can give me some ideas about what's missing from the headers. I'd like to keep the headers as small as possible to minimize bandwidth.

Thanks

+1  A: 

Try setting the UserAgent header:

// A browser-like User-Agent; header-based filters typically accept this.
string _UserAgent = "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.2; .NET CLR 1.0.3705;)";
client.Headers.Add(HttpRequestHeader.UserAgent, _UserAgent);
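
Putting that together with the snippet from the question, a rough end-to-end version would look like this (the exact User-Agent string doesn't matter much; any browser-like value should work):

using System;
using System.IO;
using System.Net;

class Fetch
{
    static void Main()
    {
        var client = new WebClient();

        // Present a browser-like User-Agent so header-based filtering
        // lets the request through.
        client.Headers.Add(HttpRequestHeader.UserAgent,
            "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.2; .NET CLR 1.0.3705;)");

        using (var stream = client.OpenRead(new Uri("http://en.wikipedia.org/wiki/Barack_Obama")))
        using (var reader = new StreamReader(stream))
        {
            // Read only the start of the page; enough to cover a typical <head> section.
            var buffer = new char[8192];
            int read = reader.Read(buffer, 0, buffer.Length);
            Console.WriteLine(new string(buffer, 0, read));
        }
    }
}

One caveat: WebClient clears its Headers collection after each request, so if you're looping over many sites, re-add the User-Agent before each call.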
thedugas
+1  A: 

It could be a matter of the UserAgent header, as thedugas said, or of anything else the browser is silently configured to do. For instance, it could be a matter of not using a proxy server that the browser is using, or of not supplying the correct credentials for that proxy. These things may already be configured in the browser, so you aren't aware they need to be done.
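
If it does turn out to be the proxy, something along these lines would mirror what the browser is doing; this is only a sketch, assuming the system-configured proxy and the current user's credentials, so adjust it to match your environment:

using System.Net;

class ProxiedClient
{
    static WebClient Create()
    {
        var client = new WebClient();

        // Mirror the browser: use the system-configured proxy (an assumption;
        // your actual proxy may differ) and hand it the current user's credentials.
        client.Proxy = WebRequest.GetSystemWebProxy();
        client.Proxy.Credentials = CredentialCache.DefaultCredentials;

        return client;
    }
}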

John Saunders