This is not an answer, but I thought I'd post some test results in hopes that someone else will know what's going on:
> data <- matrix(rnorm(1000 * 10000), nrow=10000)
> system.time(foreach(j=1:1000, .combine = function(...) NULL, .multicombine=TRUE) %do% { sum(data[,j]) })
   user  system elapsed
  0.643   0.031   0.674
> system.time(foreach(j=1:1000, .combine = function(...) NULL, .multicombine=TRUE) %dopar% { sum(data[,j]) })
   user  system elapsed
  0.613   0.215   0.653
> system.time(foreach(j=1:1000) %dopar% { sum(data[,j]) })
   user  system elapsed
  0.537   0.122   0.745
> system.time(foreach(j=1:1000) %do% { sum(data[,j]) })
   user  system elapsed
  0.650   0.028   0.681
> system.time (for (j in 1:1000) { sum(data[,j]) })
   user  system elapsed
  0.153   0.069   0.222
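For reference (not part of the timings above), a fully vectorized version of the same task gives a sense of how much of the time is pure loop overhead. A minimal sketch in base R:

```r
# Same task as the loops above, but colSums computes all 1000 column
# sums in a single C-level pass, with no per-iteration dispatch at the
# R level. The timing should come in far below any of the loop variants.
data <- matrix(rnorm(1000 * 10000), nrow = 10000)
system.time(s <- colSums(data))
```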
In short, the builtin `for` loop is still way faster than serial `foreach`. You don't really win by using `%dopar%`, and it doesn't seem that combining everything at the end is what takes all the time (though it could still be that transmitting the data back to the master is slow). You could also argue that with a computation this simple, the overhead will naturally dominate. So let's do something more complicated:
> data <- matrix(rnorm(3000 * 10000), nrow=10000)
> system.time (for(j in 1:6000) { sum(lgamma(exp(data[,(j - 1) %% 3000 + 1]))) })
   user  system elapsed
 11.215   1.272  12.490
> system.time (foreach(j=1:6000, .combine=c) %do% { sum(lgamma(exp(data[,(j - 1) %% 3000 + 1]))) })
   user  system elapsed
 14.304   0.468  15.788
> system.time (foreach(j=1:6000, .combine=c) %dopar% { sum(lgamma(exp(data[,(j - 1) %% 3000 + 1]))) })
   user  system elapsed
 14.377  11.839  10.358
Now `%dopar%` is starting to win out, but the three are still pretty comparable, and the builtin `for` is not so bad, even with all the additional work. But what about communication overhead? Instead of taking sums, we'll just return the transformed data (10,000 numbers per iteration).
> system.time (for(j in 1:6000) { lgamma(exp(data[,(j - 1) %% 3000 + 1])) })
   user  system elapsed
 11.092   1.189  12.302
> system.time (foreach(j=1:6000, .combine=function (...) NULL, .multicombine=TRUE) %do% { lgamma(exp(data[,(j - 1) %% 3000 + 1])) })
   user  system elapsed
 14.902   1.867  22.901
> system.time (foreach(j=1:6000, .combine=function (...) NULL, .multicombine=TRUE) %dopar% { lgamma(exp(data[,(j - 1) %% 3000 + 1])) })
^C
Timing stopped at: 2.155 0.706 241.948
>
Here, the `for` loop, which doesn't have to bother keeping the results around, takes about as long as before. The `%do%` version took a lot longer this time. And the `%dopar%` version, which is probably transferring the results back through shared memory? I decided to kill it after about four minutes.
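If result transfer really is the bottleneck, one way to attack it is to chunk the work so each parallel task returns a few large results instead of thousands of small ones. A sketch using base R's `parallel` package (this assumes a Unix-alike, since `mclapply` uses fork; the chunk count of 4 is an arbitrary illustration, not anything prescribed by `foreach`):

```r
library(parallel)

data <- matrix(rnorm(3000 * 10000), nrow = 10000)

# Split the 6000 iterations into a handful of large chunks; each worker
# then ships one big 10000 x 1500 matrix back to the master instead of
# 1500 separate 10000-element vectors.
chunks <- split(1:6000, cut(1:6000, breaks = 4, labels = FALSE))
res <- mclapply(chunks, function(cols) {
  vapply(cols, function(j) lgamma(exp(data[, (j - 1) %% 3000 + 1])),
         numeric(nrow(data)))
}, mc.cores = 4)
# res is a list of four 10000 x 1500 matrices
```

Fewer, larger transfers amortize the per-result serialization cost, which is exactly the overhead the last `%dopar%` run appears to be drowning in.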