I was wondering if there is any way to get the foreach package in R to put its results into a pre-allocated structure. Basically, my problem involves lots of small linear-algebra operations on very big data sets.

My original, non-foreach code looks something like this:

results <- rep(NA, m * l * big.number)
dim(results) <- c(m, l, big.number)
for (i in 1:m) {
    for (j in 1:l) {
        results[i, j, ] <- f(j, i, data)   # f() stands in for the small linear-algebra routine
    }
}

I'd like to use foreach and doMC to parallelize this, but test runs are really, really slow, and I think the culprit is the continual data copying that combiners like rbind and c do.
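To make that concrete, a naive foreach/doMC translation of the loop above would look something like this (sketch only: the worker count is arbitrary and f() again stands in for the real routine):

library(foreach)
library(doMC)
registerDoMC(cores = 2)                         # worker count chosen arbitrarily

# naive translation: every task returns a length-big.number vector, and
# .combine = rbind copies the growing result each time it is called
results <- foreach(i = 1:m, .combine = rbind) %:%
             foreach(j = 1:l, .combine = rbind) %dopar% {
               f(j, i, data)                    # placeholder for the linear-algebra call
             }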

A: 

Maybe I am missing something here. The cost of a single function(i,j,data) call should be invariant to whether it is called in a for loop or via foreach. Can you not try foreach in serial mode, then maybe try with multicore (unless you're on Windows), and go from there?
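For example, something along these lines (just a sketch; f() is the placeholder routine from the question and the worker count is arbitrary):

library(foreach)
foreach(j = 1:l) %do% f(j, 1, data)        # serial foreach first

library(doMC)
registerDoMC(cores = 2)                    # then register multicore workers
foreach(j = 1:l) %dopar% f(j, 1, data)     # same loop run in parallel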

Dirk Eddelbuettel
I guess I'm getting confused about two aspects of the doMC setup: what is the fixed cost of setting up the workers, and what is the cost of moving data around? In my case, a single iteration of function(i,j,data) takes ~200 seconds, but running two together under multicore takes ~12 minutes.
James
+2  A: 

This is not an answer, but I thought I'd post some test results in hopes that someone else will know what's going on:

> data <- matrix(rnorm(1000 * 10000), nrow=10000)
> system.time(foreach(j=1:1000, .combine = function(...) NULL, .multicombine=TRUE) %do% { sum(data[,j]) })
       user      system     elapsed 
      0.643       0.031       0.674 
> system.time(foreach(j=1:1000, .combine = function(...) NULL, .multicombine=TRUE) %dopar% { sum(data[,j]) })
       user      system     elapsed 
      0.613       0.215       0.653 
> system.time(foreach(j=1:1000) %dopar% { sum(data[,j]) })
       user      system     elapsed 
      0.537       0.122       0.745 
> system.time(foreach(j=1:1000) %do% { sum(data[,j]) })
       user      system     elapsed 
      0.650       0.028       0.681 
> system.time (for (j in 1:1000) { sum(data[,j]) })
       user      system     elapsed 
      0.153       0.069       0.222 

In short, the built-in for loop is still way faster than serial foreach. You don't really win by using %dopar% either, and it doesn't seem that combining the results is what's taking all the time (it could still be that transmitting the data back to the master takes a long time). You could also argue that with computation this simple, the overhead will naturally dominate. So let's do some more complicated stuff:

> data <- matrix(rnorm(3000 * 10000), nrow=10000)
> system.time (for(j in 1:6000) { sum(lgamma(exp(data[,(j - 1) %% 3000 + 1]))) })
       user      system     elapsed 
     11.215       1.272      12.490 
> system.time (foreach(j=1:6000, .combine=c) %do% { sum(lgamma(exp(data[,(j - 1) %% 3000 + 1]))) })
       user      system     elapsed 
     14.304       0.468      15.788
> system.time (foreach(j=1:6000, .combine=c) %dopar% { sum(lgamma(exp(data[,(j - 1) %% 3000 + 1]))) })
       user      system     elapsed 
     14.377      11.839      10.358 

Now %dopar% is starting to win out, but the three are still pretty comparable, and the built-in for loop is not so bad even with all the additional work. But what about communication overhead? Instead of taking sums, we'll just return the transformed data (10,000 numbers per iteration).

> system.time (for(j in 1:6000) { lgamma(exp(data[,(j - 1) %% 3000 + 1])) })
       user      system     elapsed 
     11.092       1.189      12.302     
> system.time (foreach(j=1:6000, .combine=function (...) NULL, .multicombine=TRUE) %do% { lgamma(exp(data[,(j - 1) %% 3000 + 1])) })
       user      system     elapsed 
     14.902       1.867      22.901 
> system.time (foreach(j=1:6000, .combine=function (...) NULL, .multicombine=TRUE) %dopar% { lgamma(exp(data[,(j - 1) %% 3000 + 1])) })

^C

Timing stopped at: 2.155 0.706 241.948 
> 

Here, the for loop, which doesn't have to bother keeping results around, takes about as long as before. The %do% version took a lot longer this time. And the %dopar% version, which is presumably transferring the results back through shared memory? I decided to kill it after about 4 minutes.
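For a rough sense of how much data that last case asks the workers to send back (assuming 8-byte doubles):

> 6000 * 10000 * 8 / 2^20    # MiB of results returned to the master process
[1] 457.7637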

Jonathan Chang
A: 

I've had luck with abind() (from the abind package). For example, if I have a simulation function that returns matrices and I want one big array, I can use abind(..., along=.5) to bind the list of matrices into an array with a new dimension added. So you might want something like:

library(abind)
myfun <- function(...) abind(..., along = .5)   # '...' so it accepts however many results foreach passes in

foreach(i = 1:n.sims, .combine = myfun, .multicombine = TRUE) .... 
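As a quick illustration of what along=.5 does (the matrices here are arbitrary):

library(abind)
m1 <- matrix(1:6, nrow = 2)
m2 <- matrix(7:12, nrow = 2)
dim(abind(m1, m2, along = .5))   # 2 2 3: a new first dimension indexes the matrices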
Jake
