views:

401

answers:

5

Say I have a site on http://website.com. I would really like to allow bots to see the home page, but every other page needs to be blocked, as it is pointless to spider them. In other words:

http://website.com & http://website.com/ should be allowed, but http://website.com/anything and http://website.com/someendpoint.ashx should be blocked.

Further, it would be great if I could allow certain query strings to pass through to the home page: http://website.com?okparam=true

but not http://website.com?anythingbutokparam=true

Thanks! Boaz

A: 

Basic robots.txt:

User-agent: *
Disallow: /subdir/

I don't think you can write a rule that means 'everything but the root'; you have to list all the subdirectories.

The query string limitation is also not possible in robots.txt. You have to handle it in the background code (the processing part), or perhaps with server rewrite rules.
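If you go the rewrite-rule route, a minimal sketch for Apache mod_rewrite might look like the following; the crawler names and the okparam parameter are only illustrative (the parameter name comes from the question), and the rule simply refuses crawler requests for the home page with any other query string:

# Minimal sketch, assuming Apache with mod_rewrite enabled (e.g. in .htaccess)
RewriteEngine On
# Only act on requests that look like they come from a crawler
# (this user-agent list is illustrative, not exhaustive)
RewriteCond %{HTTP_USER_AGENT} (Googlebot|Slurp|msnbot) [NC]
# A query string is present...
RewriteCond %{QUERY_STRING} !^$
# ...and it is not the allowed okparam form from the question
RewriteCond %{QUERY_STRING} !^okparam= [NC]
# Refuse the home page request with 403 Forbidden
RewriteRule ^$ - [F]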

Biri
A: 
User-agent: *
Disallow: /
Allow: /index.ext

If I remember correctly, the second clause should override the first.

Unkwntech
A: 

Google's Webmaster Tools report that disallow always takes precedence over allow, so there's no easy way of doing this in a robots.txt file.

You could accomplish this by putting a noindex,nofollow META tag in the HTML of every page except the home page.
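As a rough sketch, the tag on each non-home page would look something like this, placed in the page's <head>:

<!-- On every page except the home page; tells compliant crawlers
     not to index the page or follow its links -->
<meta name="robots" content="noindex,nofollow">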

ceejayoz
A: 

As far as I know, not all crawlers support the Allow directive. One possible solution might be to put everything except the home page into another folder and disallow that folder.
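For example, if everything other than the home page lived under a hypothetical /content/ folder, the robots.txt could be as simple as:

User-agent: *
# "/content/" is a hypothetical folder holding everything except the home page
Disallow: /content/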

hakan
+3  A: 

So after some research, here is what I found - a solution accepted by the major search providers: Google, Yahoo & MSN (I could find a validator here):

User-Agent: *
Disallow: /*
Allow: /?okparam=
Allow: /$

The trick is to use the $ to mark the end of the URL.

Boaz