views:

401

answers:

5

I'm trying to get accurate download counts for some files on a web server. Looking at the user agents, some are clearly bots or web crawlers, but for many I can't tell whether they are crawlers or not, and they account for a lot of downloads, so it's important for me to know.

Is there a list somewhere of known web crawlers, with documentation like user agents, IPs, behavior, etc.?

I'm not interested in the official ones, like Google's, Yahoo's, or Microsoft's. Those are generally well behaved and self-identified.

+2  A: 

http://www.robotstxt.org/db.html is a good place to start. They have an automatable raw feed if you need that too. http://www.botsvsbrowsers.com/ is also helpful.

Justin Grant
A: 

I asked a similar question a while ago; perhaps it can help you:

http://stackoverflow.com/questions/1350884/what-is-a-good-web-search-and-web-crawling-engine-for-java

Umesh Aawte
oh what a wrong place to post this.
thephpdeveloper
@Umesh Aawte, it appears the person who posted this question is in fact looking for the reverse, i.e. a list of well known user-agents (a string used to identify web browsers, and web-clients at large), so that he/she can adapt accordingly when these agents are crawling his/her web site(s).
mjv
+3  A: 

I usually use http://www.user-agents.org/ as a reference; hope this helps you out.

You can also try http://www.robotstxt.org/db.html or http://www.botsvsbrowsers.com.

Jaan J
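As a rough illustration of how such a list can be applied, here is a minimal Python sketch that flags user-agent strings containing a few common bot markers. The marker tuple is an illustrative subset, not a real database; in practice you'd load the full list from one of the sites above.

```python
# Illustrative subset of crawler markers; a real deployment would load a
# full database such as the one published by user-agents.org.
KNOWN_BOT_MARKERS = ("bot", "crawler", "spider", "slurp")

def looks_like_bot(user_agent: str) -> bool:
    """Return True if the user-agent string matches a known bot marker."""
    ua = user_agent.lower()
    return any(marker in ua for marker in KNOWN_BOT_MARKERS)
```

Substring matching like this catches most self-identifying crawlers, but as the answers below note, it cannot catch bots that impersonate real browsers.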
+2  A: 

Unfortunately, we've found that bot activity is too numerous and varied to filter accurately. If you want accurate download counts, your best bet is to require JavaScript to trigger the download. That's basically the only thing that will reliably filter out the bots. It's also why all site traffic analytics engines these days are JavaScript-based.

jwanagel
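A minimal sketch of the server-side half of that idea, assuming the page's JavaScript appends a `dl=js-ok` token to the download URL before triggering it (both the parameter name and the token value are illustrative placeholders):

```python
from urllib.parse import parse_qs, urlparse

def is_countable(request_url: str, token: str = "js-ok") -> bool:
    # Count a download only if the URL carries the token that client-side
    # JavaScript appends before triggering it; clients that never run the
    # page's scripts (most bots) fetch the bare URL and are not counted.
    query = parse_qs(urlparse(request_url).query)
    return token in query.get("dl", [])
```

The server increments its counter only for requests where `is_countable` returns True, so log entries from non-JavaScript clients are ignored.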
The problem in our case is that we have many valid downloaders that won't run JavaScript, like iTunes or any other podcatcher.
J. Pablo Fernández
Unfortunately, you're really out of luck then as far as highly accurate download counts go. The best alternative I can recommend is looking at three numbers: total downloads (no filtering), a count excluding known bots (blacklist filtering), and a count including only known-good clients (whitelist filtering). That will at least give you something to look at for trends and rough ballpark estimates.
jwanagel
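The three-number approach above can be sketched like this; the blacklist and whitelist entries passed in are illustrative placeholders, not real data:

```python
def summarize_downloads(user_agents, blacklist, whitelist):
    """Return (total, blacklist-filtered, whitelist-only) download counts."""
    total = len(user_agents)
    # Blacklist filtering: exclude anything matching a known-bot marker.
    not_blacklisted = sum(
        1 for ua in user_agents
        if not any(bad in ua.lower() for bad in blacklist)
    )
    # Whitelist filtering: count only clients known to be real downloaders.
    whitelisted = sum(
        1 for ua in user_agents
        if any(good in ua.lower() for good in whitelist)
    )
    return total, not_blacklisted, whitelisted
```

The true count lies somewhere between the whitelist-only number (a floor) and the blacklist-filtered number (a ceiling), which is what makes the pair useful for trend-watching.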
A: 

There is an API available at www.atlbl.com that identifies web crawlers by their user agent and source IP address. By keying on the IP, it manages to catch stealth web crawlers, Google impersonators, and other nefarious bots.