tags:
views: 76
answers: 3

I am trying to figure out how to split a file by the number of lines in each file. The files are CSV, so I can't do it by bytes; I need to do it by lines. 20k seems to be a good number per file. What is the best way to read a stream at a given position? Stream.BaseStream.Position? So if I read the first 20k lines, would I start the position at 39,999? How do I know when I am almost at the end of a file? Thanks all

+1  A: 

I'd do it like this:

// helper method to break up into blocks lazily
// (an extension method: requires "using System.Linq;" and must live in a static class)

public static IEnumerable<ICollection<T>> SplitEnumerable<T>
    (this IEnumerable<T> sequence, int nbrPerBlock)
{
    List<T> group = new List<T>(nbrPerBlock);

    foreach (T value in sequence)
    {
        group.Add(value);

        if (group.Count == nbrPerBlock)
        {
            yield return group;               // hand back a full block
            group = new List<T>(nbrPerBlock); // and start a fresh one
        }
    }

    if (group.Any()) yield return group; // flush out any remaining lines
}

// now it's trivial; if you want to make smaller files, just foreach
// over this and write out the lines in each block to a new file

public static IEnumerable<ICollection<string>> SplitFile(string filePath)
{
    return File.ReadLines(filePath).SplitEnumerable(20000);
}
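For example, the write-out loop described in the comment above might look like this (a sketch; the input path and the output naming scheme are my assumptions, not part of the answer):

int part = 0;
foreach (ICollection<string> block in SplitFile("input.csv"))
{
    // each block holds at most 20,000 lines; write it to its own numbered file
    File.WriteAllLines("input.part" + (part++) + ".csv", block.ToArray());
}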

Is that not sufficient for you? You mention moving from position to position, but I don't see why that's necessary.

mquander
This works too!!!! Gosh. I love this place!
DDiVita
+2  A: 
int index = 0;
var groups = from line in File.ReadLines("myfile.csv")
             group line by index++ / 20000 into g // bucket lines in blocks of 20,000
             select g.AsEnumerable();

int file = 0;
foreach (var group in groups)
    File.WriteAllLines((file++).ToString(), group.ToArray());
Hasan Khan
You need to use `File.ReadLines` instead of `ReadAllLines` -- `ReadAllLines` reads it all into memory at once. Also, using `index` in the grouping function like that freaks me out.
mquander
changed to ReadLines, thanks
Hasan Khan
+1, that is a very interesting use of LINQ
David
While this is indeed interesting, there are enough cases where you don't want to read an entire file into memory that I would at least add the stipulation that this method should only be used when you know the files won't be too large.
Jimmy Hoffa
Won't the grouping method collect everything regardless of whether you use ReadLines or ReadAllLines?
Lasse V. Karlsen
I assume so, but with `ReadAllLines`, you'd have the whole thing in memory twice instead of once.
mquander
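To make the memory point concrete, here is the same query in method syntax (a sketch, not from the answer): LINQ to Objects' GroupBy has to consume the entire source before it can yield its first group, so every line ends up buffered internally even though File.ReadLines itself streams.

int index = 0;
var groups = File.ReadLines("myfile.csv")
                 .GroupBy(line => index++ / 20000) // buffers all lines before yielding
                 .Select(g => g.AsEnumerable());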
Never thought about using LINQ. Nice!
DDiVita
+2  A: 
using (System.IO.StreamReader sr = new System.IO.StreamReader("path"))
{
    int fileNumber = 0;

    while (!sr.EndOfStream)
    {
        int count = 0;

        using (System.IO.StreamWriter sw = new System.IO.StreamWriter("other path" + ++fileNumber))
        {
            sw.AutoFlush = true;

            // count++ < 20000 (rather than ++count < 20000) so each file
            // gets a full 20,000 lines instead of 19,999
            while (!sr.EndOfStream && count++ < 20000)
            {
                sw.WriteLine(sr.ReadLine());
            }
        }
    }
}
Jon B
This seems the most straightforward to me, though for memory's sake I would possibly flush the write buffer with each write. If each line is 100 bytes, that makes 1,000 lines 100 KB, and 20,000 lines 2 MB; not a ton of memory, but an unnecessary footprint.
Jimmy Hoffa
@Jimmy - I added `AutoFlush = True`, which automatically flushes after each write.
Jon B
AutoFlush is a bad idea on a StreamWriter, as it will flush after every single character (I looked at the code). If you don't specify a buffer size when creating a StreamWriter, it defaults to only 128 characters, but that's still better than no buffer at all.
Tergiver
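Building on that, one alternative is to drop AutoFlush and give each writer a larger explicit buffer via the StreamWriter(String, Boolean, Encoding, Int32) overload. A sketch (the 64 KB buffer size is just an assumption):

// sketch: larger explicit buffer instead of AutoFlush
using (var sw = new System.IO.StreamWriter(
    "other path" + fileNumber, false, System.Text.Encoding.UTF8, 64 * 1024))
{
    // ... write lines as in the answer above ...
}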