views: 140
answers: 4

I ask this because I am creating a spider to collect data from blogger.com for a data visualisation project for university.

The spider will look through about 17,000 values on Blogger's browse function and (anonymously) save certain ones if they fit the right criteria.

I've been running the spider (written in PHP) and it works fine, but I don't want to have my IP blacklisted or anything like that. Does anyone have any knowledge of enterprise sites and the restrictions they have on things like this?

Furthermore, if there are restrictions in place, is there anything I can do to circumvent them? At the moment all I can think of to mitigate the problem is adding a random delay between calls to the site (between 0 and 5 seconds), or running the script through random proxies to disguise the requests.
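Something like this is what I have in mind for the delay (fetch_profile() here is just a stand-in for my real scraping code, and the URL list is a placeholder):

    <?php
    // Sketch of the random-delay idea: pause 0-5 seconds between requests.
    // fetch_profile() stands in for the real fetching/parsing code.
    function fetch_profile($url)
    {
        return file_get_contents($url); // cURL would work just as well
    }

    $urls = array(/* ... browse/profile URLs to visit ... */);

    foreach ($urls as $url) {
        $html = fetch_profile($url);

        // ... pick out and save the values that fit the criteria ...

        sleep(rand(0, 5)); // wait a random 0-5 seconds before the next request
    }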

Having to resort to methods like those above makes me feel as if I'm doing the wrong thing. I would be annoyed if they were to block me, especially since blogger.com is owned by Google and their main product is a web spider. Albeit, their spider does not send all of its requests to just one website.

+10  A: 

It's likely they have some kind of restriction, and yes there are ways to circumvent them (bot farms and using random proxies for example) but it is likely that none of them would be exactly legal, nor very feasible technically :)

If you are accessing Blogger, can't you log in using an API key and query the data directly anyway? It would be more reliable and less trouble-prone than scraping their pages, which may be prohibited anyway and could lead to trouble once the number of requests is big enough that they start to care. Google is very generous with the amount of traffic they allow per API key.
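As a rough illustration (the blog address and parameters below are only placeholders; the exact feeds and parameters are in the Blogger Data API docs), reading a public GData feed from PHP could look something like this:

    <?php
    // Sketch: query a public Blogger feed instead of scraping HTML pages.
    // alt=json and max-results are standard GData parameters.
    $feed = 'http://someblog.blogspot.com/feeds/posts/default'
          . '?alt=json&max-results=25';

    $data = json_decode(file_get_contents($feed), true);

    // Walk the feed entries and keep whatever fits your criteria.
    foreach ($data['feed']['entry'] as $entry) {
        echo $entry['title']['$t'], "\n";
    }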

If all else fails, why not write an e-mail to them? Google has a reputation of being friendly towards academic projects, and they might well grant you more traffic if needed.

Pekka
+1 for the usage of the API. Even APIs will have certain limits (e.g. x number of calls per second), but it will be more stable and, above all, legal.
keyboardP
Thanks, I had no idea there even was an API. However, Google's API for Blogger only allows you to do things on a per-user basis. I need to get things on a per-location or per-interest basis, i.e. I need to get all users with a certain location or interest. (Note: when I say get all users, I don't actually need all users; between 100 and 500 would be fine.) I guess I will need to email Google or change my approach.
betamax
Asking them always carries the risk that if they turn you down, they know who you are. But I think it's better to take that risk than to run afoul of some limit and get blacklisted.
Pekka
Just take it slowly. It may not be explicitly allowed, but if you're only hitting them once or twice a second, it'll easily complete overnight. If you want to be very safe, slow down to once every couple of seconds. 17K values isn't really all that many when you think of it in terms of requests per hour: at one request every two seconds, 17,000 requests take under ten hours.
Paul McMillan
To search for specific queries, try playing around with the standard Google API parameters: http://code.google.com/apis/gdata/docs/2.0/reference.html#Queries The Blogger API supports most of the standard Google parameters, so you could read those docs. This might also be useful: http://code.google.com/apis/blogger/docs/1.0/reference.html#Parameters
keyboardP
@TenaciousImpy As far as I can tell, you still can't use that to browse Blogger profiles, which is a shame. @Paul McMillan I think I'm going to give it a try with a random delay between requests. I am moving to a different connection soon, so if I get blocked whilst here, I can try to solve the problem from the other connection.
betamax
+1  A: 

If you want to know for sure, write an e-mail to blogger.com and ask them.

Gordon
A: 

You could send the requests through Tor; you would have a different IP each time, at a performance cost.
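With cURL it could look roughly like this (assuming a local Tor client listening on its default SOCKS port 9050; the URL is just a placeholder):

    <?php
    // Sketch: send a request through a locally running Tor client.
    $ch = curl_init('http://www.example.com/page-to-fetch');

    curl_setopt($ch, CURLOPT_PROXY, '127.0.0.1:9050');
    curl_setopt($ch, CURLOPT_PROXYTYPE, CURLPROXY_SOCKS5);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);

    $html = curl_exec($ch);
    curl_close($ch);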

Question Mark
+3  A: 

Since you are writing a spider, make sure it reads the robots.txt file and acts accordingly. Also, one of the conventions of HTTP is not to open more than 2 concurrent connections to the same server. Don't worry, Google's servers are really powerful. If you only read pages one at a time, they probably won't even notice. If you add a 1-second interval between requests, it will be practically harmless.
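A very simplified sketch of the robots.txt part (real files can contain several user-agent groups, wildcards and other directives, so treat this only as a starting point):

    <?php
    // Sketch: read robots.txt and collect the Disallow rules for "User-agent: *".
    $lines    = file('http://www.blogger.com/robots.txt', FILE_IGNORE_NEW_LINES);
    $applies  = false;
    $disallow = array();

    foreach ($lines as $line) {
        $line = trim($line);
        if (stripos($line, 'User-agent:') === 0) {
            $applies = (trim(substr($line, 11)) === '*');
        } elseif ($applies && stripos($line, 'Disallow:') === 0) {
            $disallow[] = trim(substr($line, 9));
        }
    }

    // Before each request, skip any URL whose path starts with one of these prefixes.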

On the other hand, using a botnet or another distributed approach is considered harmful behavior, because it looks like a DDoS attack. You really shouldn't be thinking in that direction.

Milan Babuškov
+1 Good point, especially seeing as he cares about the good name and reputation of his IP address.
Pekka
Thanks for the tip on robots.txt, I hadn't considered this yet. When you put it like that, it seems as if my traffic will just be lost amongst all the other traffic going to Blogger, which I can imagine happening. *But* they are bound to have systems in place to detect my sort of requests.
betamax
Major +1 for obeying `robots.txt`.
ceejayoz