I'm trying to do a simple genomic track intersection in R, and running into major performance problems, probably related to my use of for loops.

In this situation, I have pre-defined windows at intervals of 100bp and I'm trying to calculate how much of each window is covered by the annotations in mylist. Graphically, it looks something like this:

          0    100   200    300    400   500   600  
windows: |-----|-----|-----|-----|-----|-----|

mylist:    |-|   |-----------|

So I wrote some code to do just that, but it's fairly slow and has become a bottleneck in my code:

##window for each 100-bp segment    
windows <- numeric(6)

##second track
mylist = vector("list")
mylist[[1]] = c(1,20)
mylist[[2]] = c(120,320)


##do the intersection
for(i in 1:length(mylist)){
  st <- floor(mylist[[i]][1]/100)+1
  sp <- floor(mylist[[i]][2]/100)+1
  for(j in st:sp){       
    b <- max((j-1)*100, mylist[[i]][1])
    e <- min(j*100, mylist[[i]][2])
    windows[j] <- windows[j] + e - b + 1
  }
}

print(windows)
[1]  20  81 101  21   0   0

Naturally, this is being used on data sets that are much larger than the example I provide here. Through some profiling, I can see that the bottleneck is in the for loops, but my clumsy attempt to vectorize it using *apply functions resulted in code that runs an order of magnitude more slowly.

I suppose I could write something in C, but I'd like to avoid that if possible. Can anyone suggest another approach that will speed this calculation up?

A: 

I think I've made it much more complicated than it needs to be... `system.time` didn't help me evaluate performance on such a small dataset.

windows <- numeric(6)

mylist = vector("list")
mylist[[1]] = c(1,20)
mylist[[2]] = c(120,320)


library(plyr)

l_ply(mylist, function(x) {
  sapply((floor(x[1]/100)+1):(floor(x[2]/100)+1), function(z) {
    eval.parent(parse(text = paste("windows[", z, "] <- ",
        min(z*100, x[2]) - max((z-1)*100, x[1]) + 1, sep = "")),
      sys.nframe())
  })
})

print(windows)

EDIT

A modification to eliminate eval

g <- llply(mylist, function(x) {
  ldply((floor(x[1]/100)+1):(floor(x[2]/100)+1), function(z) {
    t(matrix(c(z, min(z*100, x[2]) - max((z-1)*100, x[1]) + 1), nrow = 2))
  })
})

for (i in 1:length(g)) {
  windows[unlist(g[[i]][1])] <- unlist(g[[i]][2])
}
gd047
I tried it with slightly bigger data than the example, and it takes ~10 times longer than the original :(
Aniko
@Aniko. Obviously `eval` gives a huge performance hit. Can you think of another way to access the `windows` variable?
gd047
@gd047 One could use the `"[<-"(windows, z, new.value)` syntax to avoid parsing, but I am not sure how to make it change an external `windows` variable.
Aniko
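For reference, a small demo of that functional form (a hypothetical aside, not from the original thread): the functional subassignment returns a modified copy, which is why it cannot change the outer `windows` on its own.

w <- numeric(3)
w2 <- `[<-`(w, 2, 99)   # same effect as w[2] <- 99, but applied to a copy
w    # [1] 0 0 0   -- unchanged
w2   # [1] 0 99 0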
@gd047 The modification without `eval` took even longer.
Aniko
@Aniko I give up :) BTW, thanks for the syntax. I hadn't seen it before, and so far I haven't been able to locate it in the help.
gd047
A: 

I don't have a bright idea, but you can get rid of the inner loop and speed things up a bit. Notice that if a window falls fully within a `mylist` interval, then you just have to add 100 to the corresponding `windows` element. So only the `st`-th and `sp`-th windows need special handling.

  windows <- numeric(100)
  for(i in 1:length(mylist)){ 
    win <- mylist[[i]]         # for cleaner code
    st <- floor(win[1]/100)+1 
    sp <- floor(win[2]/100)+1 
    # start and stop are within the same window
    if (sp == st){
      windows[st] <- windows[st] + (win[2]%%100) - (win[1]%%100) +1 
    }
    # start and stop are in separate windows - take care of edges
    if (sp > st){
      windows[st] <- windows[st] + 100 - (win[1]%%100) + 1
      windows[sp] <- windows[sp] + (win[2]%%100)
    }
    # windows completely inside win
    if (sp > st+1){
      windows[(st+1):(sp-1)] <- windows[(st+1):(sp-1)] + 100
    }       
  }

I generated a bigger list:

  cuts <- sort(sample(1:10000, 70))  # random interval endpoints
  mylist <- split(cuts, gl(35,2))

and got 1.08 sec for 1000 replicates of this version versus 1.72 sec for 1000 replicates of the original. With real data, the speed-up will depend on whether the intervals in `mylist` tend to be much longer than 100 bp or not.

By the way, one could rewrite the inner loop as a separate function and then `lapply` it over `mylist`, but that does not make it any faster (see the sketch below).
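For illustration, a sketch of that refactoring (hypothetical; `one_interval` mirrors the loop body above, and the per-interval contributions are summed with `Reduce`):

## each interval's contribution as its own length-100 vector
one_interval <- function(win) {
  w <- numeric(100)
  st <- floor(win[1]/100) + 1
  sp <- floor(win[2]/100) + 1
  if (sp == st)                    # start and stop in the same window
    w[st] <- (win[2] %% 100) - (win[1] %% 100) + 1
  if (sp > st) {                   # edge windows
    w[st] <- 100 - (win[1] %% 100) + 1
    w[sp] <- win[2] %% 100
  }
  if (sp > st + 1)                 # windows completely inside the interval
    w[(st+1):(sp-1)] <- 100
  w
}

windows <- Reduce(`+`, lapply(mylist, one_interval))

The extra allocation of one vector per interval is presumably what eats up the gain.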

Aniko
+3  A: 

So I'm not entirely sure why the third and fourth windows aren't 100 and 20, since that would make more sense to me. Here's a one-liner for that behavior:

Reduce('+', lapply(mylist, function(x) hist(x[1]:x[2], breaks = (0:6) * 100, plot = F)$counts)) 

Note that you need to specify the upper bound in breaks, but it shouldn't be hard to make another pass to get it if you don't know it in advance.
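For instance, a pre-pass along these lines would do it (a sketch; `top` is just an illustrative name):

top    <- max(sapply(mylist, function(x) x[2]))       # largest interval endpoint
breaks <- seq(0, 100 * ceiling(top / 100), by = 100)  # up to the next multiple of 100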

Jonathan Chang
It would have to be `Reduce` instead of `do.call`, but this approach is very slow (though elegant).
Aniko
Thanks for the catch on `Reduce`! I tried it out and it didn't seem so slow: `system.time(replicate(Reduce('+', lapply(mylist, function(x) hist(x[1]:x[2], breaks = cuts, plot = F)$counts)), 1000))` gave (user, system, elapsed) = 0.03 0.00 0.03.
Jonathan Chang
Very elegant! Vote up
gd047
It's a very elegant function, but it's far slower. On a set of 10 windows and 6 gaps, repeated 1000 times, this takes 6.58 seconds, whereas the original function takes 0.18s. I'm looking at the whole genome and will be considering millions of annotations, so this just doesn't scale.
chrisamiller
BTW - you're right about the 3rd and 4th windows - minor bug that I've since fixed.
chrisamiller
Why is `Reduce` better than `do.call`? As I understand it, `Reduce` does something like `for (i in elements) result <- result + i`, while `do.call` would do it without a loop (as if one wrote `a+b+c+d`). Where am I wrong?
Marek
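(For reference, the difference comes down to `+` being a binary operator; a quick demo, not from the original thread:)

do.call('+', list(1:3, 4:6))         # fine: builds the single call 1:3 + 4:6
# do.call('+', list(1:3, 4:6, 7:9))  # error: operator needs one or two arguments
Reduce('+', list(1:3, 4:6, 7:9))     # fine: folds pairwise, (1:3 + 4:6) + 7:9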
+4  A: 

The "Right" thing to do is to use the bioconductor IRanges package, which uses an IntervalTree data structure to represent these ranges.

Having both of your objects in their own IRanges objects, you would then use the findOverlaps function to win.

Get it here:

http://www.bioconductor.org/packages/release/bioc/html/IRanges.html

By the by, the internals of the package are written in C, so it's super fast.

EDIT

On second thought, it's not as much of a slam dunk as I'm suggesting (a one-liner), but you should definitely start using this library if you're working with genomic intervals at all (or other interval types) ... you'll likely need to do some set operations and the like. Sorry, I don't have time to provide the exact answer, though.

I just thought it was important to point this library out to you.
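A minimal sketch of the idea (assuming the IRanges `coverage`/`Views` helpers rather than `findOverlaps` directly; treat it as a starting point, not the exact answer, and note it counts each base exactly once, so it gives 100 and 20 for the third and fourth windows rather than the question's 101 and 21):

library(IRanges)

# six 100-bp windows and the two annotations from the question
windows.ir <- IRanges(start = seq(1, 501, by = 100), width = 100)
ann.ir     <- IRanges(start = c(1, 120), end = c(20, 320))

# per-position coverage depth, then the covered total within each window
cov <- coverage(ann.ir, width = 600)
viewSums(Views(cov, windows.ir))
# [1]  20  81 100  20   0   0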

Steve Lianoglou
Thanks - between this and andrewj's recommendation above, I suspect IRanges may be the way to go. It's going to take some extensive rewriting of my code, so I'm diving in now and will report back soon.
chrisamiller
Yeah, using IRanges made this project a lot easier to code and faster too. Thanks for the pointer.
chrisamiller
Cool ... glad to hear it.
Steve Lianoglou
+3  A: 

Okay, so I wasted WAY too much time on this, and still only got a factor of 3 speed-up. Can anyone beat this?

The code:

windows <- numeric(200)            # result vector (also defined in the test below)

my      <- do.call(rbind, mylist)  # one interval per row
myFloor <- floor(my/100)           # 0-based window index of each endpoint
myRem   <- my %% 100               # offset of each endpoint within its window

# add the intervals, at first crediting every touched window with the full
# 101 positions (inclusive endpoints, matching the original's counting)
counts <- table(do.call(c, apply(myFloor, 1, function(r) r[1]:r[2])))
windows[as.numeric(names(counts))+1] <- counts*101

# subtract the uncovered stretches below the lower and above the upper endpoints
lowerUncovered <- tapply(myRem[,1], myFloor[,1], sum)
windows[as.numeric(names(lowerUncovered))+1] <- windows[as.numeric(names(lowerUncovered))+1] - lowerUncovered
upperUncovered <- tapply(myRem[,2], myFloor[,2], function(x) 100*length(x) - sum(x))
windows[as.numeric(names(upperUncovered))+1] <- windows[as.numeric(names(upperUncovered))+1] - upperUncovered

The test:

mylist = vector("list")
for(i in 1:20000){
    d <- round(runif(1,,500))
    mylist[[i]] <- c(d,d+round(runif(1,,700)))
}

windows <- numeric(200)


new_code <-function(){
    my <- do.call(rbind,mylist)
    myFloor <- floor(my/100)
    myRem <- my%%100
    counts <- table(do.call(c,apply(myFloor,1,function(r) r[1]:r[2])))
    windows[as.numeric(names(counts))+1] <- counts*101

    lowerUncovered <- tapply(myRem[,1],myFloor[,1],sum)
    windows[as.numeric(names(lowerUncovered))+1]  <-  windows[as.numeric(names(lowerUncovered))+1]  - lowerUncovered

    upperUncovered <- tapply(myRem[,2],myFloor[,2],function(x) 100*length(x) - sum(x))
    windows[as.numeric(names(upperUncovered))+1]  <-  windows[as.numeric(names(upperUncovered))+1] - upperUncovered

    #print(windows)
}


#old code
old_code <- function(){
    for(i in 1:length(mylist)){
        st <- floor(mylist[[i]][1]/100)+1
        sp <- floor(mylist[[i]][2]/100)+1
        for(j in st:sp){       
            b <- max((j-1)*100, mylist[[i]][1])
            e <- min(j*100, mylist[[i]][2])
            windows[j] <- windows[j] + e - b + 1
        }
    }
    #print(windows)
}

system.time(old_code())
system.time(new_code())

The result:

> system.time(old_code())
   user  system elapsed 
  2.403   0.021   2.183 
> system.time(new_code())
   user  system elapsed 
  0.739   0.033   0.588 

Very frustrating that the system time is basically 0 while the elapsed time is still so large. I bet that if you did go down to C, you would get a 50-100X speed-up.

Ian Fellows
+1 for the time invested :-)
Thrawn