From what I can see, you're starting a thread for each URL in the original list; each thread fetches its page, scans it, and appends any URLs it finds to the original list.
Problem is, all that fetching and matching takes a while, so the loop that starts the threads will likely finish well before the first new URLs get added. Nothing looks at the list again after that point, so the new URLs never get processed.
For reference, you really ought to have some kind of synchronization and signaling going on. Most languages do this using mutexes, condition variables, or semaphores. Until you set something like that up, you'll basically have to run your while loop over and over, joining each batch of threads before starting the next.
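A rough sketch of that batch-and-rejoin approach (the `get_urls_from` routine and the URLs here are placeholders for your own fetch-and-match code):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use threads;

# Hypothetical stand-in for your fetch-and-match code: maps a URL to
# the URLs "found" on that page. Replace with a real HTTP fetch.
my %fake_links = (
    'http://example.com/'  => ['http://example.com/a', 'http://example.com/b'],
    'http://example.com/a' => ['http://example.com/b'],
);
sub get_urls_from {
    my ($url) = @_;
    return @{ $fake_links{$url} || [] };
}

my @urls = ('http://example.com/');
my %seen;
$seen{$_}++ for @urls;
my @todo = @urls;

# Re-run the loop after joining each batch of threads, so that URLs
# discovered by one batch get processed by the next batch.
while (@todo) {
    my @threads = map { threads->create(\&get_urls_from, $_) } @todo;
    @todo = ();
    for my $thr (@threads) {
        my @found = $thr->join();    # created in list context, so join
                                     # returns the whole list of results
        push @todo, grep { !$seen{$_}++ } @found;
    }
    push @urls, @todo;
}
```

Crude, but it terminates once a batch turns up no new URLs, and nothing is shared between threads, so no locking is needed.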
Actually...
Looking over the docs, I find this:
Since 5.6.0, Perl has had support for a new type of threads called interpreter threads (ithreads). These threads can be used explicitly and implicitly.
Ithreads work by cloning the data tree so that no data is shared between different threads.
Good news / bad news time. The good news is you don't have to worry about thread-safe access to @urls as it first appeared. The bad news is the reason for that: each thread has its own copy of @urls, so you can't share data between them like that without some extra help.

What you'll probably want to do instead is create the thread in list context and let it return the list of URLs it found, which you can then append to @urls when you join the thread. The alternative (sharing @urls between threads) could get ugly fast, if you're not aware of thread safety issues.
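One Perl-specific wrinkle worth sketching: a thread's return context is fixed at creation time, so you have to create it in list context if you want join to hand back the whole list. A minimal example (the `scan_page` routine and its returned URLs are hypothetical):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use threads;

# Hypothetical fetch-and-match routine; returns the URLs it found.
sub scan_page {
    my ($url) = @_;
    # ... fetch $url and match hrefs here; fixed results for illustration ...
    return ('http://example.com/x', 'http://example.com/y');
}

my @urls = ('http://example.com/');

# The list assignment puts threads->create in list context, which is
# what makes join() below return the full list rather than one scalar.
my ($thr) = threads->create(\&scan_page, $urls[0]);

# Append whatever the thread found to @urls when joining it:
push @urls, $thr->join();
```

Since each thread works on its own cloned data and only hands results back through join, there's no shared state to lock.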
However you do it, this approach is going to eat a huge amount of resources: just the three test URLs contained 42 other URLs between them, and plenty of those likely have URLs of their own. If you start one thread per request, you'll very quickly end up creating more threads than just about any machine can handle.
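The usual way around that is a small fixed pool of worker threads pulling URLs off a shared queue, rather than one thread per URL. A sketch using the core Thread::Queue module (the "work" in the loop body is a placeholder; a real crawler would also push newly found URLs back onto the queue, which complicates shutdown):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use threads;
use Thread::Queue;

my $POOL_SIZE = 4;

# Work goes in one queue, results come back on another; Thread::Queue
# handles the locking between threads for us.
my $queue   = Thread::Queue->new('http://example.com/');
my $results = Thread::Queue->new();

sub worker {
    # dequeue() blocks until an item is available; undef means "stop".
    while (defined(my $url = $queue->dequeue())) {
        # ... fetch $url and match links here (hypothetical work) ...
        $results->enqueue("done: $url");
    }
}

my @pool = map { threads->create(\&worker) } 1 .. $POOL_SIZE;

# One undef per worker tells each of them to shut down.
$queue->enqueue(undef) for 1 .. $POOL_SIZE;
$_->join() for @pool;
```

That caps the thread count at $POOL_SIZE no matter how many URLs turn up, which keeps the resource usage bounded.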