I am trying to read some text files, where each line needs to be processed. At the moment I am just using a StreamReader, and then reading each line individually.
I am wondering whether there is a more concise way (in terms of LoC and readability) to do this using LINQ without compromising runtime efficiency. The examples I have seen involve loading the whole file into memory and then processing it, which in my case I don't believe would be very efficient. In the first example below the files can get up to about 50k, and in the second example not all lines of the file need to be read (sizes are typically < 10k).
You could argue that nowadays it doesn't really matter for such small files; however, I believe that sort of approach leads to inefficient code.
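For reference, the sort of whole-file example I mean looks roughly like this (my own sketch of the pattern with File.ReadAllLines, not code I am actually using):

// reads every line into an array up front, then filters and maps it with LINQ
var addons = System.IO.File.ReadAllLines(_LstFilename)
                 .Where(line => line.Length > 0)
                 .Select(line =>
                 {
                     T addon = new T();
                     addon.Load(line, _BaseDir);
                     return addon;
                 })
                 .ToList();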
Thanks for your time!
First example:
// open file
using (var file = System.IO.File.OpenText(_LstFilename))
{
    // read file
    while (!file.EndOfStream)
    {
        String line = file.ReadLine();
        // ignore empty lines
        if (line.Length > 0)
        {
            // create addon
            T addon = new T();
            addon.Load(line, _BaseDir);
            // add to collection
            collection.Add(addon);
        }
    }
}
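What I was hoping for is something closer to this instead (an untested sketch; it assumes File.ReadLines, which enumerates lines lazily on .NET 4+, or an equivalent yield-return iterator over a StreamReader on older frameworks):

// lazily enumerate lines, skip empty ones, and build the addons as they are read
var addons = System.IO.File.ReadLines(_LstFilename)
                 .Where(line => line.Length > 0)
                 .Select(line =>
                 {
                     T addon = new T();
                     addon.Load(line, _BaseDir);
                     return addon;
                 });

foreach (T addon in addons)
    collection.Add(addon);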
Second example:
// open file
using (var file = System.IO.File.OpenText(datFile))
{
    // compile regexes
    Regex nameRegex = new Regex("IDENTIFY (.*)");
    while (!file.EndOfStream)
    {
        String line = file.ReadLine();
        // check name
        Match m = nameRegex.Match(line);
        if (m.Success)
        {
            _Name = m.Groups[1].Value;
            // remove me when other values are read
            break;
        }
    }
}
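For the second case I had something like this in mind (again an untested sketch assuming File.ReadLines; FirstOrDefault should stop enumerating, and therefore stop reading the file, at the first successful match):

Regex nameRegex = new Regex("IDENTIFY (.*)");

// lazily enumerate lines and stop at the first line matching the name pattern
Match nameMatch = System.IO.File.ReadLines(datFile)
                      .Select(line => nameRegex.Match(line))
                      .FirstOrDefault(m => m.Success);

if (nameMatch != null)
    _Name = nameMatch.Groups[1].Value;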