Is it possible to fine-tune directives to google to such an extent that it will ignore part of a page, yet still index the rest?

There are a couple of different issues we've come across which would be helped by this, such as:

  • rss feed/news ticker-type text on a page displaying content from an external source
  • users entering contact details (phone numbers etc.) who want them visible on the site but would rather they not be google-able

I'm aware that both of the above can be addressed via other techniques (such as writing the content with javascript), but am wondering if anyone knows if there's a cleaner option already available from google?

I've been doing some digging on this and came across mentions of googleon and googleoff tags, but these seem to be exclusive to google search appliances.
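For context, the googleon/googleoff tags are HTML comments honoured by the Google Search Appliance crawler (not by GoogleBot). A minimal sketch of how they are used there:

```html
<p>This paragraph is available for indexing.</p>
<!--googleoff: index-->
<p>The appliance's crawler excludes this block from its index.</p>
<!--googleon: index-->
<p>Indexing resumes from here.</p>
```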

Does anyone know if there's a similar set of tags to which GoogleBot will adhere?

Edit

Just to clarify, I don't want to go down the dangerous route of cloaking/serving up different content to google, which is why I'm looking to see if there's a "legit" way of achieving what I'd like to do here.

A: 

In short: no, unless you use cloaking, which is discouraged by Google.

Oliver Weichhold
A: 

There are meta-tags for bots, and there's also the robots.txt, with which you can restrict access to certain directories.
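For reference, both mechanisms operate at the page or directory level, not on parts of a page. A minimal sketch (the directory name is a placeholder):

```html
<!-- In the page's <head>: keep this whole page out of the index -->
<meta name="robots" content="noindex, nofollow">
```

And in robots.txt:

```text
User-agent: Googlebot
Disallow: /private/
```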

Bobby

Meta-tags and robots.txt both allow or restrict access at the file level; I'm curious whether you can allow a page to be indexed but block a certain part of it.
ConroyP
A: 

All search engines either index or ignore the entire page. The only possible way to implement what you want is to:

(a) have two different versions of the same page;

(b) detect the browser (user agent) of the visitor;

(c) if it's a search engine, serve the second version of your page.

This link might prove helpful.

Anax
This is a good way to get your site banned from Google
Greg
A: 

You can use robots.txt. See this link.

NightCoder
robots.txt can't filter a part of a page, only whole pages.
Omry
A: 

On your server, detect the search bot by IP using PHP or ASP. Then serve the IP addresses that fall into that list a version of the page you wish to be indexed. In that search-engine-friendly version of your page, use the canonical link tag to point the search engine at the page version that you do not want indexed.

This way, the page with the content you don't want indexed will be referenced by address only, while only the content you wish to be indexed will actually be indexed. This method will not get you blocked by the search engines and is completely safe.
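For reference, the canonical link element mentioned above goes in the page's head; the URL here is a placeholder:

```html
<link rel="canonical" href="https://example.com/preferred-version">
```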

+2  A: 

What you're asking for can't really be done; Google either takes the entire page or none of it.

You could do some sneaky tricks though, like putting the part of the page you don't want indexed in an iframe and using robots.txt to ask Google not to crawl that iframe's source document.
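A minimal sketch of that iframe approach (the /noindex/ path and filename are placeholders):

```html
<!-- Main page: indexed normally; the sensitive fragment lives in its own document -->
<iframe src="/noindex/news-ticker.html"></iframe>
```

```text
# robots.txt: ask crawlers not to fetch anything under the framed directory
User-agent: *
Disallow: /noindex/
```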

idimmu