
I have a data frame with one time event per row. One row holds an event of the sender (typeid=1) and the next row holds the corresponding event of the receiver (typeid=2). I want to calculate the delay (time difference) between sender and receiver.

My data is organized in a data.frame, as the following snapshot shows:

dd[1:10,]
     timeid   valid typeid
1  18,00035 1,00000      1
2  18,00528 0,00493      2
3  18,02035 2,00000      1
4  18,02116 0,00081      2
5  18,04035 3,00000      1
6  18,04116 0,00081      2
7  18,06035 4,00000      1
8  18,06116 0,00081      2
9  18,08035 5,00000      1
10 18,08116 0,00081      2

calc_DelayVIDEO <- function (dDelay) {

        pktProcess <- TRUE
        nLost <- 0
        # start with an empty data frame; result rows are appended inside the loop
        myDelay <- data.frame(time=-1, delay=-1, jitter=-1, nLost=-1)
        myDelay <- myDelay[-1, ]
        tini <- 0
        tend <- 0
        for (itr in c(1:length(dDelay$timeid))) {
           aRec <- dDelay[itr,]
           if (aRec$typeid == 1) {
                # sender event: remember the send time
                tini <- as.numeric(aRec$timeid)
                if (!pktProcess) {
                   # the previous send was never matched by a receive
                   nLost <- (nLost + 1)
                   myprt(paste("Packet Lost at time ", aRec$timeid, " lost= ", nLost, sep=""))
                }

                pktProcess <- FALSE
           } else if (aRec$typeid == 2) {
                # receiver event: delay is receive time minus last send time
                tend <- as.numeric(aRec$timeid)
                dd <- tend - tini
                # jitter is computed against the previous delay (NA for the very first packet)
                jit <- calc_Jitter(dant=myDelay[max(nrow(myDelay), 1), 2], dcur=dd)
                myDelay <- rbind(myDelay, c(aRec$timeid, dd, jit, nLost))
                pktProcess <- TRUE
                #myprt(paste("time=", aRec$timeev, " delay=", dd, " Delay Var=", jit, " nLost=", nLost ))
           }
        }
        colnames(myDelay) <- c("time", "delay", "jitter", "nLost")
        return(myDelay)
}

To perform the delay calculation I use the calc_DelayVIDEO function above; nevertheless, for data frames with a high number of records (~60,000) it takes a very long time.

How can I replace the for loop with more optimized R functions? Can I use lapply for this computation? If so, could you provide an example?

Thanks in advance,

+3  A: 

The usual solution is to think hard enough about the problem to find something vectorized.

If that fails, I sometimes resort to re-writing the loop in C++; the Rcpp package can help with the interface.

Dirk Eddelbuettel
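
A hedged sketch of that Rcpp route, assuming timeid has already been converted to plain numeric seconds; delay_cpp is a hypothetical name, and only the loop itself is moved to C++:

library(Rcpp)

# Sketch only: compute per-packet delay in C++; sender rows stay NA in the result.
cppFunction('
NumericVector delay_cpp(NumericVector timeid, IntegerVector type) {
    int n = timeid.size();
    NumericVector delay(n, NA_REAL);
    double tini = 0.0;
    for (int i = 0; i < n; ++i) {
        if (type[i] == 1) {
            tini = timeid[i];             // remember the send time
        } else if (type[i] == 2) {
            delay[i] = timeid[i] - tini;  // receive minus send
        }
    }
    return delay;
}')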
In this case, if type2 always follows type1, you could do some very simple subsetting to get the answer.
hadley
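
A minimal sketch of that subsetting idea, assuming sender and receiver rows strictly alternate (1, 2, 1, 2, ...) and that timeid is a comma-decimal factor as in the dput at the bottom; since calc_Jitter isn't shown, the jitter line is only a guess at its intent:

# Sketch only: fully vectorized delay, no loop.
t_num  <- as.numeric(sub(",", ".", as.character(dd$timeid)))  # "18,00035" -> 18.00035
t_send <- t_num[dd$typeid == 1]    # send times
t_recv <- t_num[dd$typeid == 2]    # receive times
delay  <- t_recv - t_send          # one delay per packet
jitter <- c(NA, diff(delay))       # assumed: difference between consecutive delays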
+2  A: 

The *apply suite of functions is not an optimized form of a for loop. Further, I've worked on problems where for loops were faster than apply, because apply used more memory and caused my machine to swap.

I would suggest fully initializing the myDelay object and avoiding rbind (which must re-allocate memory on every call):

init <- rep(NA, length(dDelay$timeid))
myDelay <- data.frame(time=init, delay=init, jitter=init, nLost=init)

then replace:

myDelay <- rbind(myDelay, c(aRec$timeid, dd, jit, nLost))

with

myDelay[itr,] <- c(aRec$timeid, dd, jit, nLost)
Joshua Ulrich
Could be faster with `data.frame` replaced by `matrix`, which could be converted to a `data.frame` at the end. Inserting into a `matrix` is faster than into a `data.frame`.
Marek
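
A rough sketch of that matrix variant, reusing the names from the question; the fill line shown as a comment belongs inside the loop:

# Sketch only: preallocate a numeric matrix, fill it row by row, convert once.
n <- nrow(dDelay)
myDelay <- matrix(NA_real_, nrow = n, ncol = 4,
                  dimnames = list(NULL, c("time", "delay", "jitter", "nLost")))
# inside the loop, for a receiver row:
#   myDelay[itr, ] <- c(tend, dd, jit, nLost)
# afterwards, drop the unused rows and convert:
myDelay <- as.data.frame(myDelay[!is.na(myDelay[, 1]), , drop = FALSE])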
+1  A: 

As Dirk said: vectorization will help. An example of this would be to move the call to as.numeric out of the loop (since this function works with vectors).

dDelay$timeid <- as.numeric(dDelay$timeid)

Other things that may help are

Not bothering with the line aRec <- dDelay[itr,], since you can just access the row of dDelay, without creating a new variable.

Preallocating myDelay, since having it grow within the loop is likely to be a bottleneck. See Joshua's answer for more on this.

Richie Cotton
Indexing plays a huge role in optimization. Check for example `ind <- rep(1, 1e5); X <- data.frame(a=1, b=2, c=3)` and compare `system.time(for (i in ind) {X[i,1]; X[i,2]; X[i,3]})` (~15 sec) vs `system.time(for (i in ind) {X$a[i]; X$b[i]; X$c[i]})` (~1 sec).
Marek
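
Putting those points together, an illustrative fragment of how the top of the loop could look; the timeid conversion assumes the comma-decimal factor shown in the question, and the omitted branch bodies are unchanged:

# Fragment only: convert timeid once, then index columns directly (no aRec).
dDelay$timeid <- as.numeric(sub(",", ".", as.character(dDelay$timeid)))
for (itr in seq_len(nrow(dDelay))) {
    if (dDelay$typeid[itr] == 1) {
        tini <- dDelay$timeid[itr]
        # ... rest of the sender branch as in the original function
    } else if (dDelay$typeid[itr] == 2) {
        tend <- dDelay$timeid[itr]
        # ... rest of the receiver branch as in the original function
    }
}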
A: 

Another optimization: if I read your code right, you can easily calculate the vector nLost by using:

nLost <-cumsum(dDelay$typeid==1)

outside the loop. That vector you can simply add to the data frame at the end, which already saves you a lot of time. Using your data frame:

> nLost <-cumsum(dd$typeid==1)
> nLost
 [1] 1 1 2 2 3 3 4 4 5 5

Likewise, the times at which the packets were lost can be calculated as:

> dd$timeid[which(dd$typeid==1)]
[1] 18,00035 18,02035 18,04035 18,06035 18,08035

in case you want to report them somewhere too.

For testing, I used:

dd <- structure(list(timeid = structure(1:10, .Label = c("18,00035", 
"18,00528", "18,02035", "18,02116", "18,04035", "18,04116", "18,06035", 
"18,06116", "18,08035", "18,08116"), class = "factor"), valid = structure(c(3L, 
2L, 4L, 1L, 5L, 1L, 6L, 1L, 7L, 1L), .Label = c("0,00081", "0,00493", 
"1,00000", "2,00000", "3,00000", "4,00000", "5,00000"), class = "factor"), 
    typeid = c(1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L)), .Names = c("timeid", 
"valid", "typeid"), class = "data.frame", row.names = c("1", 
"2", "3", "4", "5", "6", "7", "8", "9", "10"))
Joris Meys