views: 28
answers: 1

I am currently working on a project in which several parts of the website may be restricted depending on the area the user resides in, so that when a user accesses such a page he gets redirected to a form he must complete in order to view the content.

Since I want search engines to index the content, I am creating exceptions for the search engine crawlers so that they can easily access the content.

I am cherry-picking some search engines from this page, and my solution would be to check the IP address of the crawler (which can be found on the page I linked) and grant access based on that.

Is this solution viable enough? I am asking because I have read an article on the official Google Webmaster Central blog which recommended performing a reverse DNS lookup on the bot in order to verify its authenticity.
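For what it's worth, that check is cheap to do server-side. Here is a minimal sketch, assuming a Node.js backend and IPv4 only; the function name and the list of Google host suffixes are my own assumptions, not taken from the blog post:

    // Verify a claimed Googlebot: reverse-resolve the IP, check the hostname,
    // then forward-resolve that hostname and confirm it maps back to the same IP.
    import { promises as dns } from "dns";

    const GOOGLE_HOST_SUFFIXES = [".googlebot.com", ".google.com"];

    async function isVerifiedGooglebot(ip: string): Promise<boolean> {
      try {
        const hostnames = await dns.reverse(ip); // step 1: reverse DNS lookup
        for (const host of hostnames) {
          if (!GOOGLE_HOST_SUFFIXES.some((suffix) => host.endsWith(suffix))) {
            continue; // hostname does not belong to Google
          }
          const addresses = await dns.resolve4(host); // step 2: forward lookup
          if (addresses.includes(ip)) {
            return true; // step 3: forward lookup matches the original IP
          }
        }
      } catch {
        // treat any DNS failure as "not verified"
      }
      return false;
    }

The same pattern works for other crawlers by swapping in their documented host suffixes, which avoids maintaining a static IP allowlist.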

I should mention that this has no security implications.

TL;DR: do I get penalized if I allow the search engine bot to go directly to the content while the user is redirected? Which is the better approach (user agent, IP address, or reverse DNS lookup) in terms of cost/benefit?

+2  A: 

The answer is NO,

but some users will also view your page through the Google cache instead, bypassing your restrictions.

Imre L
I wonder why "no" is capitalized. Anyway, that is not a problem; it's meant as a filter for non-tech-savvy users.
mhitza
No particular reason. A better solution would probably be to show the page for one second and then redirect to the quiz page (using JavaScript), as in the sketch below. This way, "showing the candy before giving it" may yield better results, and it also eliminates the need to identify the search engines.
Imre L
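A minimal sketch of that delayed redirect, run on the restricted page itself; the one-second delay, the /quiz URL, and the return parameter are assumptions:

    // Show the content briefly, then send the visitor to the quiz page,
    // passing the current path so the quiz can send them back afterwards.
    const REDIRECT_DELAY_MS = 1000; // assumed one-second delay
    const QUIZ_URL = "/quiz";       // assumed quiz page URL

    window.setTimeout(() => {
      const returnTo = encodeURIComponent(window.location.pathname);
      window.location.href = `${QUIZ_URL}?return=${returnTo}`;
    }, REDIRECT_DELAY_MS);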
I like your redirect solution as well. +1/accepted.
mhitza