Consider this Python program:

import sys

lc = 0
for line in open(sys.argv[1]):
    lc = lc + 1

print lc, sys.argv[1]

Running it on my 6GB text file, it completes in ~2 minutes.

Question: is it possible to go faster?

Note that the same time is required by:

wc -l myfile.txt

so I suspect the answer to my question is just a plain "no".

Note also that my real program is doing something more interesting than just counting the lines, so please give a generic answer, not line-counting tricks (like keeping line-count metadata in the file).

PS: I tagged this question "linux" because I'm interested only in Linux-specific answers. Feel free to give OS-agnostic, or even other-OS, answers if you have them.

See also the follow-up question

+5  A: 

plain "no".

You've pretty much reached maximum disk speed.

I mean, you could mmap the file, or read it in binary chunks, and use .count('\n') or something. But that is unlikely to give major improvements.
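
For instance, a rough, untested sketch of the binary-chunk idea, counting newlines a megabyte at a time (the buffer size is arbitrary):

import sys

# Count newlines by scanning fixed-size binary chunks instead of line objects.
count = 0
f = open(sys.argv[1], 'rb')
try:
    while True:
        chunk = f.read(1024 * 1024)   # 1 MB per read; size chosen arbitrarily
        if not chunk:
            break
        count += chunk.count('\n')
finally:
    f.close()
print count, sys.argv[1]

But you'll still be waiting on the disk for almost all of that time.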

nosklo
+4  A: 

If you assume that a disk can read 60 MB/s, you'd need 6000 / 60 = 100 seconds, which is 1 minute 40 seconds. I don't think you can get any faster, because the disk is the bottleneck.

Georg
Where does that 20 in your calculation come from? Did you mean 6000 / 60 = 100? 60, not 20, right?
I first wanted to calculate it with 20 MB/s, but then I thought that was too slow.
Georg
+1  A: 

As others have said: "no".

Almost all of your time is spent waiting for I/O. If this is something that you need to do more than once, and you have a machine with tons of RAM, you could keep the file in memory. If your machine has 16GB of RAM, you'll have 8GB available in /dev/shm to play with.
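
A minimal sketch of that idea, assuming the file is called myfile.txt and there is enough free RAM (untested):

import shutil

# One disk read to stage the file in tmpfs; later passes come from RAM.
shutil.copy('myfile.txt', '/dev/shm/myfile.txt')

lc = 0
for line in open('/dev/shm/myfile.txt'):
    lc = lc + 1
print lc

The first pass still pays the full disk cost; only repeated passes benefit.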

Another option: if you have multiple machines, this problem is trivial to parallelize. Split the file among the machines, have each of them count its newlines, and add the results.

JimB
A: 

You can't get any faster than the maximum disk read speed.

In order to reach the maximum disk speed you can use the following two tips:

  1. Read the file in with a big buffer. This can either be coded "manually" or simply by using io.BufferedReader (available in Python 2.6+).
  2. Do the newline counting in another thread, in parallel (see the sketch below).
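
A rough sketch of the two-thread version (untested; the buffer and queue sizes are arbitrary):

import sys
import threading
from Queue import Queue   # "queue" in Python 3

def reader(path, q, bufsize=8 * 1024 * 1024):
    # This thread only reads big binary chunks from the file...
    f = open(path, 'rb')
    try:
        while True:
            chunk = f.read(bufsize)
            q.put(chunk)              # an empty chunk signals end of file
            if not chunk:
                break
    finally:
        f.close()

q = Queue(maxsize=4)                  # bounded, so the reader can't race far ahead
t = threading.Thread(target=reader, args=(sys.argv[1], q))
t.start()

total = 0
while True:
    chunk = q.get()
    if not chunk:
        break
    total += chunk.count('\n')        # ...while the main thread counts newlines
t.join()
print total, sys.argv[1]

Because of the GIL and the very cheap per-chunk work, this may or may not beat the single-threaded version in practice.
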
Barakando
-1: I don't see how doing the newline counting in another thread would speed things up. It will just slow things down. Waiting on threads doesn't make you wait faster.
nosklo
Normally you would be right. However, in this case the thread reading from the file waits for I/O while the other thread parses the newlines. That way the reader thread won't have to wait for the parser thread to count the newlines between consecutive reads.
Barakando
I'm accepting this answer even though, in this particular case, it is not worth the effort, since the work per line is very low and I'm already running at the hardware's maximum speed. See also the follow-up question for further details.
Davide
I agree with nosklo. I think the increment is so fast as to be irrelevant, and another thread could even make such a thing slower. Also, the for loop is already buffered in Python by default. I doubt that using BufferedReader to make the buffer larger would help.
A: 

Note that Python's I/O is implemented in C, so there is not much hope of speeding it up further.

cartman
You can write perfectly bad and slow C code; being written in C doesn't guarantee that it will be fast. And there may be overheads (e.g. interpreting the bytecode, reading row by row and iterating) that slow it down.
Davide
+10  A: 

Throw hardware at the problem.

As gs pointed out, your bottleneck is the hard disk transfer rate. So, no, you can't use a better algorithm to improve your time, but you can buy a faster hard drive.

Edit: Another good point by gs; you could also use a RAID configuration to improve your speed. This can be done with either hardware or software RAID (e.g. on OS X, Linux, or Windows Server).


Governing Equation

(Amount to transfer) / (transfer rate) = (time to transfer)

(6000 MB) / (60 MB/s) = 100 seconds

(6000 MB) / (125 MB/s) = 48 seconds


Hardware Solutions

The ioDrive Duo is supposedly the fastest solution for a corporate setting, and "will be available in April 2009".

Or you could check out the WD Velociraptor hard drive (10,000 rpm).

Also, I hear the Seagate Cheetah is a good option (15,000 rpm with sustained 125MB/s transfer rate).

tgray
RAIDs could be much faster.
Georg
+3  A: 

The trick is not to make electrons move faster (that's hard to do) but to get more work done per unit of time.

First, be sure your 6GB file read is I/O bound, not CPU bound.

If it's I/O bound, consider the "Fan-Out" design pattern.

  • A parent process spawns a bunch of children.

  • The parent reads the 6GB file and deals rows out to the children by writing to their STDIN pipes. The 6GB read time will remain constant. The row dealing should involve as little parent processing as possible; use very simple filters or counts.

    A pipe is an in-memory channel for communication. It's a shared buffer with a reader and a writer.

  • Each child reads a row from STDIN and does the appropriate work. Each child should probably write a simple disk file with the final (summarized, reduced) results. Later, the results in those files can be consolidated. (A rough sketch of the parent side follows below.)
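
For illustration, a hypothetical sketch of the parent side of this pattern; child_worker.py, the result-file names, and the number of children are all made up here:

import sys
import subprocess

N = 4   # number of children; arbitrary
# Each child is assumed to read rows from its stdin and write result_<i>.txt itself.
children = [subprocess.Popen(['python', 'child_worker.py', str(i)],
                             stdin=subprocess.PIPE)
            for i in range(N)]

# Deal rows out round-robin; keep the parent's per-row work minimal.
for n, line in enumerate(open(sys.argv[1])):
    children[n % N].stdin.write(line)

for child in children:
    child.stdin.close()
    child.wait()
# Later: consolidate result_0.txt ... result_3.txt into the final answer.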

S.Lott
Probably (on the third bullet) you meant that all the children should talk to each other in memory, since the disk is already very busy.
Davide
Pipes *are* in-memory communication channels.
S.Lott
Yes, but in your third bullet you wrote: "Each child should probably write a simple disk file."
Davide
@Davide: Sorry, I didn't see what you were getting at. There's no easy fan-in with pipes; therefore the final results are simplest to handle as disk files. There aren't a lot of ways around this. The final result(s) must be written somewhere. Many small files have less impact than one big file, because you have more opportunities for some child to be non-blocking and working.
S.Lott
+3  A: 

See my answer to your follow-up question about using posix_fadvise(2).

Benji York
+1  A: 

2 minutes sounds about right to read an entire 6GB file. There's not really much you can do to the algorithm or the OS to speed things up. I think you have two options:

  1. Throw money at the problem and get better hardware. Probably the best option if this project is for your job.

  2. Don't read the entire file. I don't know what you are trying to do with the data, so maybe you don't have any option but to read the whole thing. On the other hand, if you are scanning the whole file for one particular thing, then maybe putting some metadata at the start would be helpful.

Nathan Reed