+3  A: 

From what I can see, you're starting a thread to fetch each URL in the original list, scan it for links, and add the URLs it finds back to the original list.

Problem is, all that fetching and matching takes a while, and the loop that starts the threads will likely be finished well before the first new URLs get added. Nothing looks at the list again after that point, so the new URLs won't be processed.

For reference, you really ought to have some kind of synchronization and signaling going on. Most languages do this using mutexes, "conditions", or semaphores. Until you do something like that, you'll basically have to run your while loop over and over, joining each batch of threads from the previous pass before starting the next.

Actually...

Looking over the docs (perlthrtut), I find this:

Since 5.6.0, Perl has had support for a new type of threads called interpreter threads (ithreads). These threads can be used explicitly and implicitly.

Ithreads work by cloning the data tree so that no data is shared between different threads.

Good news / bad news time. The good news is that you don't have to worry about thread-safe access to @urls after all. The bad news is the reason why: each thread gets its own copy of @urls, so you can't share data between threads that way without some extra help.

What you'll probably want to do instead is create the thread in list context and let it return the list of URLs it found, which you can then append to @urls when you join the thread. The alternative (sharing @urls between the threads) could get ugly fast if you're not careful about thread-safety issues.
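
Here's a minimal sketch of that approach, which also covers the "re-run the while loop after joining each batch" idea from above. The starting URL is a placeholder, the link-matching regex is deliberately crude, and get_urls_from() is a hypothetical helper, not anything from your code:

    use strict;
    use warnings;
    use threads;
    use LWP::Simple qw(get);

    # Hypothetical helper: fetch a page and return the URLs found in it.
    sub get_urls_from {
        my ($url) = @_;
        my $html = get($url) or return;        # empty list on failure
        return $html =~ /href="(http[^"]+)"/g; # crude, for illustration only
    }

    my @urls = ('http://example.com/');
    my %seen;

    while (@urls) {
        # Take everything currently in the list as one batch.
        my @batch = splice(@urls);

        # map() puts threads->create() in list context, so join() will
        # return the list that get_urls_from() returned in the thread.
        my @threads = map { threads->create(\&get_urls_from, $_) } @batch;

        # Join each thread and append the URLs it found, skipping duplicates.
        for my $thr (@threads) {
            push @urls, grep { !$seen{$_}++ } $thr->join;
        }
    }

Note that this still starts one thread per URL in each batch, so it runs straight into the resource problem described next.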

However you do it, it's going to cause the script to eat up a huge amount of resources -- just the three test URLs contained 42 other URLs, and a bunch of those likely have URLs of their own. So if you're going to start one thread per request, you'll very quickly end up creating more threads than just about any machine can handle.
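
If you want to cap the thread count, one option (touched on in the comments below) is a fixed pool of background workers fed from a queue. A rough sketch using the core Thread::Queue module -- get_urls_from() is the same hypothetical helper as above, and detecting when the crawl is actually finished is left out, since that takes extra bookkeeping (e.g. a shared count of in-flight URLs):

    use strict;
    use warnings;
    use threads;
    use threads::shared;
    use Thread::Queue;

    # get_urls_from() as in the earlier sketch: fetch a page, return its URLs.

    my $queue = Thread::Queue->new('http://example.com/');
    my %seen :shared;

    sub worker {
        # dequeue() blocks until an item is available, and returns undef
        # only after end() has been called and the queue has drained.
        while (defined(my $url = $queue->dequeue)) {
            for my $found (get_urls_from($url)) {
                lock(%seen);
                $queue->enqueue($found) unless $seen{$found}++;
            }
        }
    }

    # Five workers total, no matter how many URLs turn up.
    my @pool = map { threads->create(\&worker) } 1 .. 5;
    $_->join for @pool;   # blocks until end() is called; see the caveat above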

cHao
Is there any way I can pause the first loop if there is only 1 entry left in it?
Phil Jackson
It seems like it's not pushing the URLs.
Phil Jackson
There's that, and the fact that data is not being shared between the threads.
mobrule
I was getting to that. :) Just looking through some docs and testing stuff.
cHao
OK, so just from what you've written, are we safe to say that even if I do share @urls with the threads, it's going to overload anyway...
Phil Jackson
Unless you set up a bunch of constantly running background threads, or just start fewer than (some number) threads each time, yeah -- 3 turns into 42, which could turn into hundreds...then thousands...and if your script is still alive at that point, it'll be eating up the majority of your RAM just in thread stacks. You can decrease the thread stack size (it starts out insanely high), but really, you're just delaying the inevitable.
cHao
Cheers, I'll look into another method. Regards.
Phil Jackson
+1  A: 

By default, each thread has its own private copy of data. That is, when you add new elements to @urls in one thread, the copies of @urls in all the other threads do not get updated, including the copy in the "parent" thread/process.
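
A quick demonstration of that cloning, with a placeholder URL:

    use strict;
    use warnings;
    use threads;

    my @urls = ('http://example.com/');

    threads->create(sub {
        push @urls, 'http://example.com/found';  # changes this thread's copy only
    })->join;

    print scalar(@urls), "\n";  # prints 1: the parent's @urls never saw the push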

When you're ready to open another can of worms, check out the threads::shared module, which provides a clunky but usable way to share data between threads.
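
For instance, here is a minimal sketch (URLs are placeholders) where a push made inside a thread is visible to the parent:

    use strict;
    use warnings;
    use threads;
    use threads::shared;

    # :shared makes this one array visible to every thread.
    my @urls :shared = ('http://example.com/');

    threads->create(sub {
        lock(@urls);  # serialize access while modifying the shared array
        push @urls, 'http://example.com/found';
    })->join;

    print scalar(@urls), "\n";  # prints 2: the parent sees the thread's push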

mobrule