What are ways in which web crawlers (both from search engines and non-search engines) could affect site statistics (e.g., when A/B-testing different page variations)? And what are ways to deal with these problems?

For example:

  1. Do people who write web crawlers often delete their cookies and mask their IPs, so that crawlers show up as different users each time they crawl a site?

  2. What heuristics can be used to recognize that something is a bot? (I'm guessing a sufficiently sophisticated bot can be indistinguishable from a real user if it wants to be -- is that correct?)

Just to clarify, based on the comment below: I'm also interested in the case where my site is specifically being targeted (by a possibly-illegitimate crawler).

A: 

A few simple ways to detect a bot:

  1. Hits to /robots.txt - only bots (and geeky people, who might almost be robots anyway) will look at this.
  2. User agent - responsible bots often have a URL in their UA string (e.g., msnbot/2.0b (+http://search.msn.com/msnbot.htm) or Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.2.1; aggregator:Spinn3r (Spinn3r 3.1); http://spinn3r.com/robot) Gecko/20021130), so seeing one is a fairly strong indication of a bot. A sketch of checks 1 and 2 appears after this list.
  3. JavaScript - most bots won't execute it, so if you, e.g., use JavaScript to set a cookie, then whenever you see that cookie on the server, you can be pretty sure it was sent by a "real" browser (second sketch below).
  4. Source IPs - legitimate crawlers usually crawl from their own domains, which a reverse DNS lookup (confirmed by a forward lookup) will reveal; this is how Google suggests you verify Googlebot (third sketch below).
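
Here's a minimal sketch of checks 1 and 2 in Python, assuming each logged hit is a dict with "path" and "user_agent" fields (the field names and UA patterns are illustrative, not exhaustive):

    import re

    # Words and URL fragments that commonly appear in bot UA strings.
    BOT_UA_RE = re.compile(r"bot|spider|crawler|https?://", re.IGNORECASE)

    def looks_like_bot(hit):
        """Flag a logged hit as a probable bot using checks 1 and 2."""
        if hit["path"] == "/robots.txt":  # check 1: only bots fetch this
            return True
        # check 2: a URL or bot-ish word in the UA string
        return bool(BOT_UA_RE.search(hit.get("user_agent", "")))

    looks_like_bot({"path": "/",
                    "user_agent": "msnbot/2.0b (+http://search.msn.com/msnbot.htm)"})
    # -> True (the UA contains both "bot" and a URL)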
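
For check 3, the page sets a marker cookie from JavaScript, and the server-side filter drops hits that don't carry it. The cookie name js_ok and the hit-record shape are assumptions for illustration:

    # Client side, one line embedded in the page:
    #   <script>document.cookie = "js_ok=1; path=/";</script>

    def is_probable_browser(hit):
        """True if the request carried the JavaScript-set cookie; bots
        that never ran the page's JS won't send it back."""
        return hit.get("cookies", {}).get("js_ok") == "1"

Counting only hits where is_probable_browser() is true is usually enough to keep non-JS bots out of A/B-test numbers.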
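
And a sketch of check 4, using only the standard socket module; the trusted hostname suffixes here are illustrative:

    import socket

    # Hostname suffixes of crawlers we trust; illustrative, not exhaustive.
    TRUSTED_SUFFIXES = (".googlebot.com", ".google.com", ".search.msn.com")

    def is_verified_crawler(ip):
        """Reverse-resolve the IP, check the domain, then forward-resolve
        to confirm the PTR record isn't spoofed (the verification Google
        recommends for Googlebot)."""
        try:
            hostname, _, _ = socket.gethostbyaddr(ip)
        except socket.herror:
            return False  # no PTR record at all
        if not hostname.endswith(TRUSTED_SUFFIXES):
            return False
        try:
            # The forward lookup must map back to the original IP.
            return ip in socket.gethostbyname_ex(hostname)[2]
        except socket.gaierror:
            return False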

Between these, you should have no problem figuring out which hits are coming from robots, and which are from real people.

Finally, there will always be nasty and/or stupid bots that are hard to detect. But, at least in my experience, there aren't too many of those in the wild.

David Wolever
Programmers also look at robots.txt sometimes. :)
tloflin
Whoops, thanks - fixed that.
David Wolever