I've been informed that my library is slower than it should be, on the order of 30+ times too slow parsing a particular file (a 326 KB text file). The user suggested that the cause may be my use of std::ifstream (presumably instead of FILE*).

I'd rather not blindly rewrite, so I thought I'd check here first, since my guess is that the bottleneck is elsewhere. I'm reading character by character, so the only functions I'm using are get(), peek(), and tellg()/seekg().

Update:

I profiled, and got confusing output: gprof didn't appear to think anything was taking that long. I rewrote the program to read the entire file into a buffer first, and it sped up by about 100x. I think the problem may have been the tellg()/seekg() calls taking a long time, but gprof may have been unable to see that for some reason. In any case, ifstream does not appear to buffer the entire file, even at this size.
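For reference, a minimal sketch of the read-into-a-buffer approach (the file name and the parsing loop are placeholders, not my actual code): slurp the file into a std::string and track the position with a plain index, replacing get()/peek()/tellg()/seekg():

```cpp
#include <fstream>
#include <sstream>
#include <string>

// Hypothetical helper: read the whole file into a string in one go.
std::string slurp(const char* path) {
    std::ifstream in(path, std::ios::binary);
    std::ostringstream ss;
    ss << in.rdbuf();
    return ss.str();
}

int main() {
    std::string buf = slurp("input.txt"); // placeholder file name
    std::size_t pos = 0;                  // replaces tellg()/seekg()
    while (pos < buf.size()) {
        char c = buf[pos];                // buf[pos] acts as peek()
        ++pos;                            // advancing the index acts as get()
        (void)c;                          // parse here
    }
}
```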

+4  A: 

It should be slightly slower, but as you said, it might not be the bottleneck. Why don't you profile your program and see if that's the case?

PolyThinker
+5  A: 

I don't think that would make a difference. Especially if you're reading char by char, the overhead of I/O is likely to completely dominate everything else. Why do you read single bytes at a time? Do you know how extremely inefficient that is?

On a 326kb file, the fastest solution will most likely be to just read it into memory at once.

The difference between std::ifstream and the C equivalents is basically a virtual function call or two. It may make a difference if executed a few tens of millions of times per second; otherwise, not really. File I/O is generally so slow that the API used to access it doesn't really matter. What matters far more is the read/write pattern: lots of seeks are bad, sequential reads/writes are good.
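As a rough illustration (the file name is a placeholder, and real timings depend heavily on the OS cache), a sketch that times character-by-character get() against one bulk read() of the whole file:

```cpp
#include <chrono>
#include <cstddef>
#include <fstream>
#include <iostream>
#include <vector>

int main() {
    using Clock = std::chrono::steady_clock;

    // Pass 1: character by character through get().
    auto t0 = Clock::now();
    {
        std::ifstream in("input.txt", std::ios::binary); // placeholder name
        char c;
        while (in.get(c)) {
            // per-character work would go here
        }
    }
    auto t1 = Clock::now();

    // Pass 2: one bulk read() of the whole file.
    {
        std::ifstream in("input.txt", std::ios::binary | std::ios::ate);
        std::streamsize size = in.tellg(); // opened at end to learn the size
        if (size < 0) return 1;
        in.seekg(0);
        std::vector<char> buf(static_cast<std::size_t>(size));
        in.read(buf.data(), size);
    }
    auto t2 = Clock::now();

    std::cout << "get():  " << std::chrono::duration<double>(t1 - t0).count() << " s\n"
              << "read(): " << std::chrono::duration<double>(t2 - t1).count() << " s\n";
}
```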

jalf
Actually, I didn't know how inefficient it is. I just assumed that, behind the scenes, it read the file into memory. I guess I'll do that explicitly instead.
Jesse Beder
Some input streams are buffered. If your code reads one char at a time, that doesn't mean the underlying stream does the same.
J.F. Sebastian
Both FILE* and fstream are buffered (although the buffer may be too small). Linux heavily optimizes disk access, so your file, which is relatively small, will be cached in memory (Windows also does this).
Ismael
That depends on how much it buffers and such. I'm willing to bet that reading the entire file in one go will still be faster.
jalf
@jalf: Easy statement to make. It may be faster, but I am willing to bet not significantly.
Martin York
I have a feeling he is using a Read->Process->Write pattern, per char. All buffering strategies go to hell then.
jpinto3912
Concerning stream buffers: I once looked at the `iostream` library source that comes with MinGW. The standard buffer size for input streams was only one or two characters, i.e. just enough to allow exactly one `ungetc` operation. The ability to `ungetc` at least once is also a requirement in the (C) POSIX specs; it appears that neither C's `FILE` nor C++'s `iostream` needs to provide buffers larger than that.
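If the default buffer really is that small, one can at least ask for a bigger one. A minimal sketch (the file name and buffer size are arbitrary choices, and whether the request is honored at all is implementation-defined):

```cpp
#include <fstream>
#include <vector>

int main() {
    std::vector<char> buf(64 * 1024); // 64 KiB, an arbitrary size
    std::ifstream in;
    // Request a larger stream buffer. This must happen before open(),
    // and the implementation is free to ignore the request.
    in.rdbuf()->pubsetbuf(buf.data(), static_cast<std::streamsize>(buf.size()));
    in.open("input.txt"); // placeholder file name
    char c;
    while (in.get(c)) {
        // parse character by character as before
    }
}
```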
stakx
+2  A: 

All benchmarks are evil. Just profile your code for the data you expect.

I performed an I/O performance comparison between Ruby, Python, Perl, and C++ once. For my data, language versions, etc., the C++ variant was several times slower (it was a big surprise at the time).

J.F. Sebastian
+2  A: 

I think it is unlikely your problem will be fixed by switching from fstream to FILE*; usually both are buffered by the C library. Also, the OS can cache reads (Linux is very good in that respect). Given the size of the file you are accessing, it is pretty likely it will end up entirely in RAM.

As PolyThinker says, your best bet is to run your program through a profiler and determine where the problem is.

Also, you are using seekg/tellg; this can cause notable delays if your disk is heavily fragmented, because the first time the file is read, the disk has to move its heads to the correct position.

Ismael
+1  A: 

I agree that you should profile. But if you're reading the file a character at a time, how about creating a memory-mapped file? That way you can treat the file like an array of characters, and the OS should take care of all the low-level buffering for you. The simplest and probably fastest solution is a win in my book. :)
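A minimal sketch of that idea using POSIX mmap (the file name is a placeholder; on Windows one would use MapViewOfFile instead):

```cpp
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main() {
    int fd = open("input.txt", O_RDONLY); // placeholder file name
    if (fd < 0) return 1;

    struct stat st;
    if (fstat(fd, &st) != 0) { close(fd); return 1; }

    void* p = mmap(nullptr, static_cast<size_t>(st.st_size),
                   PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { close(fd); return 1; }

    // The whole file is now addressable as a plain array of characters;
    // the OS pages it in on demand and handles all the buffering.
    const char* data = static_cast<const char*>(p);
    for (off_t i = 0; i < st.st_size; ++i) {
        char c = data[i];
        (void)c; // parse here
    }

    munmap(p, static_cast<size_t>(st.st_size));
    close(fd);
}
```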

Ryan Ginstrom