views: 70

answers: 3

  1. Is it better to use meta tags* or the robots.txt file to tell spiders/crawlers to include or exclude a page?

  2. Are there any issues in using both the meta tags and the robots.txt?

*E.g.: <meta name="robots" content="index, follow">
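
For comparison, the robots.txt way of expressing the same kind of rule is a plain-text file at the site root; a minimal sketch, with /private.html standing in for whatever page you want excluded:

    User-agent: *
    Disallow: /private.html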

A: 

I would probably use robots.txt over the meta tag. Robots.txt has been around longer and might be more widely supported (though I am not 100% sure of that).

As for the second part: if there is a discrepancy between the robots.txt and the meta tag, I think most spiders will apply whichever setting is the more restrictive for a page.
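
For example, if robots.txt leaves a page open but the page itself carries a noindex meta tag, the more restrictive of the two should win (paths and values here are only illustrative):

    # robots.txt - nothing disallowed, so the page may be fetched
    User-agent: *
    Disallow:

    <!-- in the page's <head> - the page opts out of the index -->
    <meta name="robots" content="noindex">

One caveat: if robots.txt disallows a URL, a compliant crawler never fetches that page at all, so it never sees any meta tag on it.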

webdestroya
+1  A: 

Robots.txt IMHO.

The Meta tag option tells bots not to index individual files, whereas Robots.txt can be used to restrict access to entire directories.

Sure, use a Meta tag if you have the odd page in indexed folders that you want skipped, but generally I'd recommend you put most of your non-indexed content in one or more folders and use robots.txt to skip the lot.

No, there isn't a problem with using both - if there is a clash then, in general terms, a deny will overrule an allow.
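
A sketch of that combined setup (folder and page names are only placeholders):

    # robots.txt - skip entire folders of non-indexed content
    User-agent: *
    Disallow: /private/
    Disallow: /drafts/

    <!-- on the odd page in an otherwise indexed folder -->
    <meta name="robots" content="noindex, follow">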

CJM
A: 

Both are supported by all crawlers that respect webmasters' wishes. Not all do, but against those, neither technique is sufficient.

You can use robots.txt rules for general things, like disallowing whole sections of your site. If you say Disallow: /family, then no URL starting with /family will be indexed by a compliant crawler.
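
Written out, that rule is:

    User-agent: *
    Disallow: /family

Note that the match is a plain prefix match, so /family, /family/photos and even /family-reunion are all covered by it.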

A meta tag can be used to disallow a single page. Pages disallowed by meta tags do not affect sub-pages in the page hierarchy: a disallow meta tag on /work does not prevent a crawler from accessing /work/my-publications if there is a link to it on an allowed page.
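
Concretely, the /work page would carry something like this in its <head> (noindex is the standard value for "do not index this page"):

    <meta name="robots" content="noindex">

Since /work/my-publications has no such tag of its own, a crawler that reaches it via a link on an allowed page can still index it.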

jmz