I am using C# to read a ~120 MB plain-text CSV file. Initially I did the parsing by reading it line by line, but recently determined that reading the entire file contents into memory first was several times faster. The parsing is already quite slow because the CSV has commas embedded inside quoted fields, which means I have to use a regex split. This is the only one I have found that works reliably:

// From http://regexlib.com/REDetails.aspx?regexp_id=621
// Split on commas that are not inside a quoted field. The pattern is
// concatenated so no literal newline or indentation ends up in the regex.
string[] fields = Regex.Split(line,
    @",(?!(?<=(?:^|,)\s*\x22(?:[^\x22]|\x22\x22|\\\x22)*,)" +
    @"(?:[^\x22]|\x22\x22|\\\x22)*\x22\s*(?:,|$))");

In order to do the parsing after reading the entire contents into memory, I do a string split on the newline character to get an array containing each line. However, when I do this on the 120 MB file, I get a System.OutOfMemoryException. Why does it run out of memory so quickly when my computer has 4 GB of RAM? Is there a better way to quickly parse a complicated CSV?

+6  A: 

You can get an OutOfMemoryException for basically any size of allocation. When you allocate a piece of memory you're really asking for a contiguous piece of memory of the requested size. If that request cannot be honored, you'll see an OutOfMemoryException.

You should also be aware that unless you're running 64-bit Windows, a 32-bit process's 4 GB virtual address space is split into 2 GB of kernel space and 2 GB of user space, so your .NET application cannot access more than 2 GB by default.

When doing string operations in .NET you risk creating a lot of temporary strings, because .NET strings are immutable. Memory usage can therefore rise quite dramatically.
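
To make that concrete with the numbers from the question, here is a rough sketch (not the asker's actual code; the path is made up):

// A ~120 MB file on disk becomes roughly 240 MB of UTF-16 chars in memory,
// since .NET strings use two bytes per char even for ASCII text.
string fileContents = System.IO.File.ReadAllText(@"D:\big.csv");

// Split then allocates a brand-new string for every line, plus the array
// itself, so the peak footprint is several times the file size. That is
// easily enough to exhaust the fragmented ~2 GB of a 32-bit process.
string[] lines = fileContents.Split('\n');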

Brian Rasmussen
Strings are the bastard child of computer science. A necessary evil, but I still wish someone would figure out a better way!
Darren Kopp
+3  A: 

You may not be able to allocate a single object with that much contiguous memory, nor should you expect to be able to. Streaming is the ordinary way to do this, but you're right that it might be slower (although I don't think it should usually be quite that much slower.)

As a compromise, you could try reading a larger portion of the file (but still not the whole thing) at once, with a function like StreamReader.ReadBlock(), and processing each portion in turn.
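
A rough sketch of that compromise (illustrative only; the chunk size is arbitrary, the path is made up, and ProcessLine is a stand-in for your existing parsing):

char[] buffer = new char[1 << 20]; // 1M chars per read; size is arbitrary
string leftover = "";
using (var reader = new System.IO.StreamReader(@"D:\big.csv"))
{
    int read;
    while ((read = reader.ReadBlock(buffer, 0, buffer.Length)) > 0)
    {
        // Prepend whatever partial line was left over from the previous chunk.
        string chunk = leftover + new string(buffer, 0, read);
        int lastNewline = chunk.LastIndexOf('\n');
        if (lastNewline < 0) { leftover = chunk; continue; } // no complete line yet
        foreach (string line in chunk.Substring(0, lastNewline).Split('\n'))
            ProcessLine(line.TrimEnd('\r')); // ProcessLine = your per-line parsing
        leftover = chunk.Substring(lastNewline + 1);
    }
}
if (leftover.Length > 0) ProcessLine(leftover); // last line without a trailing newline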

mquander
A: 

You should probably try the CLR Profiler to determine your actual memory usage. It might be that there are memory limits other than your system RAM. For example, if this is an IIS application, memory is limited by the application pool.

With this profiling information you might find that you need to use a more scalable technique, like the streaming of the CSV file that you originally attempted.

Keltex
+4  A: 

If you have the whole file read into a string, you should probably use a StringReader.

StringReader reader = new StringReader(fileContents);
string line;
while ((line = reader.ReadLine()) != null) {
    // Process line
}

This should be roughly the same as streaming from a file, with the difference that the contents are already in memory.

Edit after testing

Tried the above with a 140 MB file, where the processing consisted of incrementing a length variable by line.Length. This took around 1.6 seconds on my computer. After this I tried the following:

long length = 0;
string line;
using (var reader = new System.IO.StreamReader("D:\\test.txt"))
{
    while ((line = reader.ReadLine()) != null)
        length += line.Length;
}

The result was around 1 second.

Of course your mileage may vary, especially if you are reading from a network drive or your processing takes long enough for the hard drive to seek elsewhere. The same applies if you read the file with a raw FileStream without buffering: StreamReader buffers its reads, which greatly improves performance.

Mikko Rantanen
This is a pretty good answer if he can actually read the file into a string in the first place, which it sounds like he can, at least at the moment. I wouldn't be surprised if many machines failed immediately trying to load up a 120MB file (or failed sometimes and worked other times.)
mquander
+7  A: 

Don't roll your own parser unless you have to. I've had luck with this one:

A Fast CSV Reader

If nothing else you can look under the hood and see how someone else does it.
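
Typical usage looks something like this (a sketch based on the linked article's API; names and overloads may differ by version, and the path is made up):

using System.IO;
using LumenWorks.Framework.IO.Csv; // namespace from the linked article

using (var csv = new CsvReader(new StreamReader(@"D:\big.csv"), true)) // true = has a header row
{
    int fieldCount = csv.FieldCount;
    while (csv.ReadNextRecord())      // streams the file instead of loading it all
    {
        for (int i = 0; i < fieldCount; i++)
        {
            string field = csv[i];    // quoted fields and embedded commas handled for you
        }
    }
}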

Jay Riggs
+1 as I have used this to parse large CSV files as well.
Wayne
+1 from me too. In my experience Sébastien Lorion's CSV reader is efficient, flexible and robust. It should chew through a 120MB file in no time.
LukeH
A: 

You're running out of memory on the stack, not the heap.

You could try refactoring your app so that you process the input in more manageable "chunks" of data rather than 120 MB at a time.

Garrett
Strings are allocated on the heap, not the stack. Only primitives like int/byte/double etc. are ever allocated on the stack, if I remember right.
Not Sure
@not sure: you're correct. however, there are a variety of non-obvious circumstances in which the program stack can fill up. Given that the system in question has ample physical memory, I assume this is probably one of those cases. =)
Garrett
The stack filling up results in a StackOverflowException, not an OutOfMemoryException; the latter is always used to indicate insufficient memory on the GC Heap.
Not Sure
+1  A: 

As other posters say, the OutOfMemoryException occurs because the runtime cannot find a contiguous chunk of memory of the requested size.

However, you say that doing the parsing line by line was several times slower than reading it all in at once and then doing your processing. This only makes sense if you were pursuing the naive approach of doing blocking reads, e.g. something like:

while (!file.EndOfStream)   // file is a StreamReader
{
    string line = file.ReadLine();
    ProcessLine(line);
}

You should instead use streaming, where your stream is filled by Write() calls from a separate thread that reads the file, so the file read is not blocked by whatever your ProcessLine() does, and vice versa. That should be on par with the performance of reading the entire file at once and then doing your processing.
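
A sketch of one way to get that overlap (this swaps the custom stream for a bounded queue of lines; BlockingCollection and Task are .NET 4+ types, so on an older framework you would hand-roll the queue and use new Thread(...) instead, and ProcessLine stands in for your parsing):

using System.Collections.Concurrent;
using System.IO;
using System.Threading.Tasks;

var lines = new BlockingCollection<string>(boundedCapacity: 10000);

// Producer: reads ahead and is never blocked by parsing.
Task producer = Task.Run(() =>
{
    foreach (string line in File.ReadLines(@"D:\big.csv")) // path made up
        lines.Add(line);                                   // blocks only if the queue is full
    lines.CompleteAdding();
});

// Consumer: parses while the producer keeps reading.
foreach (string line in lines.GetConsumingEnumerable())
    ProcessLine(line); // your existing per-line parsing

producer.Wait();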

Not Sure
Could you give a code example of the multi-threaded approach? I was doing it the naive way, and I now understand why that could be a major problem.
Craig
.NET has built-in asynchronous file reading and writing; a good starting point is the BeginRead() call. The following Google results have many examples: http://www.google.com/search?q=.net+asynchronous+file
Not Sure
A: 

I agree with most everybody here: you need to use streaming.

I don't know if anybody has said so far, but you should look at an extension method.

And I know, for sure, hands down, the best CSV splitting technique on .NET / CLR is this one.

That technique generated 10+ GB of XML output for me from input CSV, including extensive input filters and all, faster than anything else I've seen.

RandomNickName42
Oh right, also: streaming beats buffering in your RAM no matter what. Think about it: if you have 4 GB and you load up 2 GB of input, just the load time, the thrashing of your VM subsystem relocating pages, and the massive size of your page table will eat up your CPU cache and so on. Streaming in and out of a small, easy-to-manage workspace keeps your cache "hot", and all your CPU time is devoted to the task at hand, not to massive fluctuations in system load.
RandomNickName42
A: 

http://csvhelper.com

You can set the size of your reader buffer.
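
Typical usage looks something like this (a sketch; constructor signatures and configuration names vary across CsvHelper versions, and the path and buffer size are made up):

using System.Globalization;
using System.IO;
using CsvHelper;

// The StreamReader buffer (here 64 KB) is the knob mentioned above.
using (var reader = new StreamReader(@"D:\big.csv", System.Text.Encoding.UTF8, true, 1 << 16))
using (var csv = new CsvReader(reader, CultureInfo.InvariantCulture))
{
    while (csv.Read())                       // streams one record at a time
    {
        string firstField = csv.GetField(0); // quoting and embedded commas handled for you
    }
}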

Josh Close