From the HTTP server's perspective:
I captured a Google crawler request in my ASP.NET application, and here is what its signature looks like.
Requesting IP: 66.249.71.113
Client: Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
My logs show many different IPs for the Google crawler, all in the 66.249.71.* range and all geo-located in Mountain View, CA, USA.
A reasonable way to check whether a request comes from the Google crawler is to verify that its user agent contains Googlebot and http://www.google.com/bot.html. Since many different IPs show up with the same client string, I would not recommend filtering on IPs; this is where the client identity comes into the picture, so verify the client identity instead.
Here's some sample code in C#:
string userAgent = Request.UserAgent ?? string.Empty;   // UserAgent can be null

if (userAgent.IndexOf("googlebot", StringComparison.OrdinalIgnoreCase) >= 0 ||
    userAgent.IndexOf("google.com/bot.html", StringComparison.OrdinalIgnoreCase) >= 0)
{
    // Yes, the user agent claims to be Googlebot.
}
else
{
    // No, it's something else.
}
It's important to note that any HTTP client can easily fake this.
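For example, here is a minimal sketch (the target URL is just a placeholder) showing how trivially any client can send the same user-agent string:

using System;
using System.Net.Http;
using System.Threading.Tasks;

class FakeGooglebot
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            // Any client can claim to be Googlebot simply by setting the header.
            client.DefaultRequestHeaders.TryAddWithoutValidation(
                "User-Agent",
                "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)");

            var response = await client.GetAsync("http://example.com/");
            Console.WriteLine((int)response.StatusCode);
        }
    }
}

So treat the user-agent check as a hint, not as proof of identity.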
If you're using the Apache web server, you could have a look at its access log (e.g. logs/access.log).
Then load Google's IPs from http://www.iplists.com/nw/google.txt and check whether any of those IPs appear in your log.
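A minimal sketch of that check in C#, assuming the log sits at logs/access.log in the common log format (client IP as the first field) and that the list serves one entry per line:

using System;
using System.IO;
using System.Linq;
using System.Net;

class CheckGoogleIps
{
    static void Main()
    {
        string[] googleIps;
        using (var web = new WebClient())
        {
            // Download the published list and keep one entry per line, skipping comments.
            googleIps = web.DownloadString("http://www.iplists.com/nw/google.txt")
                .Split(new[] { '\r', '\n' }, StringSplitOptions.RemoveEmptyEntries)
                .Where(line => !line.StartsWith("#"))
                .ToArray();
        }

        // Flag every log line whose client IP appears in the downloaded list.
        foreach (var line in File.ReadLines(@"logs/access.log"))
        {
            var clientIp = line.Split(' ').FirstOrDefault();
            if (clientIp != null && googleIps.Contains(clientIp))
                Console.WriteLine("Google crawler hit: " + line);
        }
    }
}

Keep in mind that such IP lists can go stale, so this is best used as a cross-check against the user-agent test above rather than on its own.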