views: 69

answers: 2

I'm using WordPress with custom permalinks, and I want to disallow my posts but leave my category pages accessible to spiders. Here are some examples of what the URLs look like:

Category page: somesite dot com /2010/category-name/

Post: somesite dot com /2010/category-name/product-name/

So, I'm curious if there is some type of regex solution to leave the page at /category-name/ allowed while disallowing anything one level deeper (the second example).

Any ideas? Thanks! :)

A: 
William
That's what I was sort of wondering... will the * require something in that next directory step, rather than matching the directory itself (so /category-name/ stays allowed in that example)? Sorry, I'm totally new to this!
Jeff
Please see the revised answer about the use of `<meta>`.
William
I'm thinking that the first solution might work, because I don't need to allow anything within a given directory; I just want to make sure the directory itself is reachable (which it should be, right? If I'm correct, /*/ would only match if there were actually something after the category name?). The only problem with the robots meta tag is that I have a couple thousand posts, and deployment would be a real project.
Jeff
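
For reference, the meta tag approach would not have to be applied post by post. A rough sketch, assuming the snippet is added to the active theme's functions.php (the function name is just a placeholder), could emit the tag for every single post automatically while leaving category archives indexable:

// Rough sketch: output a robots meta tag on single post pages only,
// leaving category archives indexable. Assumes this lives in the
// active theme's functions.php.
function somesite_noindex_posts() {
    if ( is_single() ) {
        echo '<meta name="robots" content="noindex,follow" />' . "\n";
    }
}
add_action( 'wp_head', 'somesite_noindex_posts' );
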
A: 

Some information that might help.

There is no official standards body or RFC for the robots.txt protocol. It was created by consensus in June 1994 by members of the robots mailing list. The parts of a site that should not be accessed are listed in a file called robots.txt in the top-level directory of the website. The patterns in robots.txt are matched by simple substring comparison, so take care that patterns meant to match directories have the final '/' character appended; otherwise every file whose name starts with that substring will match, rather than just the files in the intended directory.
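
For example, with the URLs from the question (the second category name is only an illustration):

User-agent: *
# "Disallow: /2010/category" (no trailing slash) would match /2010/category-name/,
# /2010/category-name-two/, and anything else starting with that substring;
# with the trailing slash, only this directory and the paths beneath it match:
Disallow: /2010/category-name/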

There’s no 100% sure way to exclude your pages from being found, other than not to publish them at all, of course.

See: http://www.robotstxt.org/robotstxt.html

There is no Allow directive in the Consensus, and the regex/wildcard option is not in the Consensus either.
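
That said, several major crawlers (Googlebot among them) honor Allow and simple * wildcards as a nonstandard extension. Assuming the year-based permalinks from the question and that post URLs always end in a trailing slash, a rule like the following would block the post level while leaving the category pages crawlable for those crawlers (other robots may ignore it):

User-agent: Googlebot
# Matches /2010/category-name/product-name/ (one level below a category)
# but not /2010/category-name/ itself, because the trailing "*/" needs a
# further path segment to match. A similar line would be needed per year.
Disallow: /2010/*/*/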

From the Robots Consensus:

This is currently a bit awkward, as there is no "Allow" field. The easy way is to put all files to be disallowed into a separate directory, say "stuff", and leave the one file in the level above this directory:

User-agent: *
Disallow: /~joe/stuff/

Alternatively you can explicitly disallow all disallowed pages:

User-agent: *
Disallow: /~joe/junk.html
Disallow: /~joe/foo.html
Disallow: /~joe/bar.html
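
Applied to the URLs in the question, that would mean listing every post by hand (the slugs below are just placeholders):

User-agent: *
Disallow: /2010/category-name/product-name/
Disallow: /2010/category-name/another-product/
# ...and so on, one line per post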

A Possible Solution:

Use .htaccess to disallow search robots from a specific folder, while also blocking bad robots.

See: http://www.askapache.com/htaccess/setenvif.html
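
As a rough sketch (Apache 2.2-style directives; the user-agent strings are only placeholders, not a real blocklist), an .htaccess placed in the folder you want to protect could look like this:

# Flag requests whose User-Agent matches an unwanted crawler
SetEnvIfNoCase User-Agent "BadBot" bad_bot
SetEnvIfNoCase User-Agent "EvilScraper" bad_bot

# Deny flagged requests, allow everyone else
Order Allow,Deny
Allow from all
Deny from env=bad_bot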

Todd Moses