I have a script that is scraping URLs from various sources, resulting in a rather large list. Currently I've just got a collection of if statements that I'm using to filter out sites I don't want. This obviously isn't maintainable, so I'm trying to find a fast and powerful solution for filtering against a blacklist of URL masks.

The best thing I could come up with is looping through an array of regex patterns and filtering anything that matches. Is this really my best bet or is there another method that would do the job better?

A: 

If you need to be able to specify patterns, then looping through an array of regexes is probably fine.

If you only need exact matches and no patterns, you can use strpos or similar to do a straight string match, which should be somewhat faster.
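
For example, a minimal sketch of the straight string-match approach (the $blacklist array and is_blocked() helper are illustrative names, not from the answer):

 // Rough sketch only: $blacklist and is_blocked() are made-up names.
 $blacklist = array('www.google.com', 'www.mapquest.com', 'www.yahoo.com');

 function is_blocked($url, $blacklist)
 {
     foreach ($blacklist as $needle) {
         // strpos() returns the offset of the match, or false when absent
         if (strpos($url, $needle) !== false) {
             return true;
         }
     }
     return false;
 }

 var_dump(is_blocked("http://www.google.com?q=barefoot+winery", $blacklist)); // bool(true)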

Jani Hartikainen
+3  A: 

If you want to exclude domain names, or URLs that have no "variable part", a solution might be to use a database, with a table containing only the URLs, with the right index, and do a quick match against it.

Finding out whether a URL must not be dealt with would then only be a matter of doing a quick query against that DB (which generally means "URL equals", or "URL starts with") -- and it can be as simple as an SQLite DB, which fits in a file and doesn't require an additional server.
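
A rough sketch of that idea, assuming an SQLite file called blacklist.db and a blocked_urls table (both names are made up for the example):

 // Hypothetical sketch: blacklist.db and blocked_urls(url) are assumed names.
 $db = new PDO('sqlite:blacklist.db');
 $db->exec('CREATE TABLE IF NOT EXISTS blocked_urls (url TEXT PRIMARY KEY)');

 $url = 'http://www.google.com?q=barefoot+winery';

 $stmt = $db->prepare('SELECT 1 FROM blocked_urls WHERE url = :url LIMIT 1');
 $stmt->execute(array(':url' => $url));

 if ($stmt->fetchColumn() === false) {
     // not in the blacklist -- safe to process
 }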


The idea of a PHP array has one drawback: as your array gets bigger, it'll take more and more memory just to keep it in memory -- and, one day or another, you'll use too much memory and hit memory_limit; if you have more than a couple of thousand URLs, that solution might not be the best one.

Still, if you only have a couple of URLs or patterns, the idea of a PHP array, looping over it and comparing each value with strpos (for "contains" or "starts with") or preg_match (for regexes), will do just fine -- and is the easiest one to implement.
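
For instance, the loop-over-patterns version could look like this (the patterns themselves are only examples):

 // Sketch of the preg_match() loop; the patterns are just examples.
 $patterns = array(
     '#^https?://(www\.)?google\.com#i',
     '#^https?://[^/]*\.mapquest\.com#i',
 );

 function matches_blacklist($url, $patterns)
 {
     foreach ($patterns as $pattern) {
         if (preg_match($pattern, $url)) {
             return true;
         }
     }
     return false;
 }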


If you want to use some complex matching rules, using some kind of regex will probably be your only real way... Be it on the PHP side, with preg_match, or on an SQL server (MySQL, for instance, has support for regexes, as far as I know -- no idea about the performance, though; see 11.4.2. Regular Expressions for more information).

Pascal MARTIN
If you are using a regexp like domain1\.com|domain2\.com|domain4\.com and so on, be careful, since a regexp that is too long won't work (it might even crash in nasty ways).
Kamil Szot
If you are going to use SQLite, be aware that a 'starts with' query can be written as (url >= 'beginofurl' AND url <= 'beginofurl' || 'z'). This way the index can be used, so the search will be fast.
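As an illustration of that range trick (table and file names as in the earlier sketch; the upper bound is concatenated in PHP rather than in SQL):

 // Illustrative only: prefix range query so the index on url is used.
 $db = new PDO('sqlite:blacklist.db');
 $prefix = 'http://www.google.com';
 $stmt = $db->prepare('SELECT 1 FROM blocked_urls WHERE url >= :lo AND url <= :hi LIMIT 1');
 $stmt->execute(array(':lo' => $prefix, ':hi' => $prefix . 'z'));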
Kamil Szot
A: 

Will you be loading a long list of items into memory each time? I think egrep or grep would be the best method. On Linux your file will remain in the file cache and results will be very fast, and since egrep runs through the file, not every Apache thread will have a copy of the list in memory.
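
One way to try that from PHP might be to shell out to grep with a pattern file (blacklist.txt, one pattern per line, is an assumed file name):

 // Hypothetical sketch: hand the check off to grep instead of keeping the
 // list in PHP memory; blacklist.txt (one pattern per line) is an assumed file.
 function url_is_blacklisted($url)
 {
     $cmd = 'printf %s ' . escapeshellarg($url) . ' | grep -q -E -f blacklist.txt';
     exec($cmd, $output, $status);
     return $status === 0; // grep exits 0 when at least one pattern matched
 }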

Cem Kalyoncu
That said, I'm not very sure egrep will perform the check backwards (using the patterns from a file against the expression).
Cem Kalyoncu
I'm doing the test as the list is built, when each URL is found.
ChiperSoft
+1  A: 

You should keep the sites in a hash and look them up like that. It is simple and elegant:

 $excluded['www.google.com'] = true;
 $excluded['www.mapquest.com'] = true;
 $excluded['www.yahoo.com'] = true;

 $url = "http://www.google.com?q=barefoot+winery";

 // parse_url() splits the URL so the host can be checked against the hash
 $urlArray = parse_url($url);

 if (! isset($excluded[$urlArray['host']]))
 {
  scrape($url);
 }

As Pascal said, after a while you will run into memory problems. But at that point maintaining the URLs will be a bigger issue. Go for a database when that happens.

Byron Whitlock