I have a large text file (~100MB) that needs to be parsed to extract information. I would like to find an efficient way of doing it. The file is structured in blocks:

Mon, 01 Jan 2010 01:01:01
  Token1 = ValueXYZ
  Token2 = ValueABC
  Token3 = ValuePQR
  ...
  TokenX = Value123

Mon, 01 Jan 2010 01:02:01
  Token1 = ValueXYZ
  Token2 = ValueABC
  Token3 = ValuePQR
  ...
  TokenY = Value456

Is there a library that could help in parsing this file? (In Java, Python, or any command-line tool.)

Edit: I know the question is vague, but the key element is not how to read a file, parse it with regexes, etc. I was looking more for library or tool suggestions in terms of performance. For example, Antlr could have been a possibility, but that tool loads the whole file into memory, which is not good.

Thanks!

A: 

Usually, we do something like this. The re library pretty much handles it. The use of a generator function copes with the nested structure.

import re

def gen_blocks( my_file ):
    # Header lines look like "Mon, 01 Jan 2010 01:01:01".
    header_pat = re.compile( r"\w{3}, \d{2} \w{3} \d{4} \d{2}:\d{2}:\d{2}" )
    # Detail lines look like "  Token1 = ValueXYZ"; capture the name and the value.
    detail_pat = re.compile( r"\s+(\S+)\s*=\s*(\S*)" )
    header = None
    lines = []
    for line in my_file:
        hdr_match = header_pat.match( line )
        if hdr_match:
            if lines:
                yield header, lines
                lines = []
            header = hdr_match.group(0)
            continue
        dtl_match = detail_pat.match( line )
        if dtl_match:
            lines.append( dtl_match.groups() )
            continue
        # Neither kind of line: a blank separator or maybe an error.
    if lines:
        yield header, lines

for header, lines in gen_blocks( some_file ):
    print header, lines
S.Lott
Your detail_pat regex is wrong - you are calling groups() on it, but there are no groups in the regex. Also you are matching the literal character 2 - a typo I think. Here is my version: `detail_pat = re.compile(r"\s+(\S+)\s*=\s*(\S*)")`. Now groups() will return a 2-tuple of the name and value.
Dave Kirby
@Dave Kirby: Consistent with the vagueness of the problem, it's difficult to know which groups matter. You've made a good guess, but the question is perfectly unclear.
S.Lott
A: 

IMO this data is so well structured that an external package to process it isn't needed. It probably wouldn't take more than a few minutes to write the parser for it. It would run pretty fast.

Tony Ennis
A: 

Rather than incurring the extra library dependency, and getting up the learning curve with that new library, it would seem more efficient to just write vanilla code. My algorithm would look something like this (using quick and sloppy Java):


// HOLDER FOR ALL THE DATA OBJECT THAT ARE EXTRACTED FROM THE FILE
ArrayList allDataObjects = new ArrayList();
// BUFFER FOR THE CURRENT DATA OBJECT BEING EXTRACTED
MyDataObject workingObject = null;
// BUILT-IN JAVA PARSER TO HELP US DETERMINE WHETHER OR NOT A LINE REPRESENTS A DATE
SimpleDateFormat dateFormat = new SimpleDateFormat("EEE, dd MMM yyyy HH:mm:ss");

// PARSE THROUGH THE FILE LINE-BY-LINE
BufferedReader inputFile = new BufferedReader(new FileReader(new File("myFile.txt")));
String currentLine = "";
while((currentLine = inputFile.readLine()) != null)
{
    try
    {
        // CHECK WHETHER OR NOT THE CURRENT LINE IS A DATE
        Date parsedDate = dateFormat.parse(currentLine.trim());
    }
    catch(ParseException pe)
    {
        // THE CURRENT LINE IS NOT A DATE.  THAT MEANS WE'RE 
        // STILL PULLING IN TOKENS FOR THE LAST DATA OBJECT.
        workingObject.parseAndAddToken(currentLine);
        continue;
    }
    // THE ONLY WAY WE REACH THIS CODE IS IF THE CURRENT LINE
    // REPRESENTS A DATE, WHICH MEANS WE'RE STARTING ON A NEW
    // DATA OBJECT.  ADD THE LAST DATA OBJECT TO THE LIST,
    // AND START UP A NEW WORKING DATA OBJECT.
    if(workingObject != null) allDataObjects.add(workingObject);
    workingObject = new MyDataObject();
    workingObject.parseAndSetDate(currentLine);
}
// DON'T FORGET THE LAST DATA OBJECT STILL IN THE BUFFER WHEN THE FILE ENDS
if(workingObject != null) allDataObjects.add(workingObject);
inputFile.close();
// NOW YOU'RE READY TO DO WHATEVER WITH "allDataObjects"

Of course, you'd have to flesh out the missing functionality for the "MyDataObject" class. However, this basically does what you're asking for in about 20 or so lines of code (stripping out the comments) and no external library dependencies.
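
For completeness, here is a rough sketch of what "MyDataObject" might look like. The field names and the split-on-"=" token parsing are guesses on my part, since the question doesn't spell out those details:

import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.LinkedHashMap;
import java.util.Map;

public class MyDataObject
{
    // THE TIMESTAMP THAT STARTS THE BLOCK
    private Date timestamp;
    // THE TOKEN NAME/VALUE PAIRS IN THE BLOCK
    private final Map<String, String> tokens = new LinkedHashMap<String, String>();

    // PARSE A HEADER LINE LIKE "Mon, 01 Jan 2010 01:01:01"
    public void parseAndSetDate(String line)
    {
        try
        {
            this.timestamp = new SimpleDateFormat("EEE, dd MMM yyyy HH:mm:ss").parse(line.trim());
        }
        catch(ParseException pe)
        {
            // THE CALLER ONLY PASSES IN LINES IT HAS ALREADY VERIFIED AS DATES
            throw new IllegalArgumentException(pe);
        }
    }

    // PARSE A DETAIL LINE LIKE "  Token1 = ValueXYZ" AND STORE THE PAIR
    public void parseAndAddToken(String line)
    {
        String[] parts = line.trim().split("\\s*=\\s*", 2);
        if(parts.length == 2) tokens.put(parts[0], parts[1]);
    }

    public Date getTimestamp() { return timestamp; }
    public Map<String, String> getTokens() { return tokens; }
}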

Steve Perkins
Using exceptions to control the flow has a *huge* impact on performance and efficiency.
BalusC
True... hence my "quick and sloppy Java" qualifier! The honest answer is that I would do this in Perl... but of the two languages specified, Java is the one with which I'm far more familiar. Even so, it sounds like the context here is a cron job or manual process, rather than something responding to HTTP requests or something crazy... so I don't think something this tiny would be a bear in context. However, if you do have an alternative implementation that doesn't use exceptions, I would be interested in seeing it.
Steve Perkins
WHY ARE THE COMMENTS IN ALL CAPS? IT LOOKS LIKE YOU'RE SHOUTING AT THE MAINTAINERS OF THIS CODE. Maybe folks like it that way. It seems a bit harder to read.
S.Lott
Shrug... this comment style is fairly common in the Java world. Many of us use normal text for the large javadoc-formatted comments at the beginning of each class, variable, or method... and use all-caps for the short little comments embedded within a method. This code block had far more of those comments than normal, but that's because it was written for no other reason than explaining an idea to someone. Why do you use whitespace and indentation to separate the code blocks in your example, rather than braces? I find THAT harder to read. :)
Steve Perkins
@Steve Perkins: You used a lot of nice whitespace and indentation. How can you say it makes it **harder** to read when you went to great pains to assure that it was very elegantly indented? I was only asking about the ALL CAPS, since I hadn't seen it before.
S.Lott
Umm... your code was in Python, which uses whitespace and indentation for that purpose as part of the language spec. Good-natured humor... irony... oh, never mind.
Steve Perkins
@Steve Perkins: Are you saying your indentation was just a joke? But it looked so nice. I'm not sure I'm following.
S.Lott
You said that you found my Java-ish comment style hard to read. I replied that I found your Python-ish indentation style hard to read. The irony lies in me implying that you had a choice, whereas that is actually a mandatory part of the Python language spec. Kinda kills a joke when you over-explain it, so take my word that it was really, really funny.
Steve Perkins
@BalusC: I like the algorithm in your example. I failed to notice that the example data had a blank line separating each entity... if that is indeed a fixed part of the file spec, then it would have come in handy. Still, while you gain a performance boost by not relying on exceptions, it will still bomb on any unexpected data, because it assumes that a blank line will always be followed by a date. It would be nice to be able to check a String for a date value in a manner that returns a boolean rather than relying on exceptions.
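
One way to get that boolean-style check in plain Java is the parse(String, ParsePosition) overload of DateFormat, which returns null on failure instead of throwing. A small sketch (the class and method names here are just placeholders):

import java.text.ParsePosition;
import java.text.SimpleDateFormat;

public class DateLineCheck
{
    // RETURNS TRUE IF THE WHOLE (TRIMMED) LINE IS A TIMESTAMP, WITHOUT USING EXCEPTIONS FOR FLOW CONTROL
    static boolean isDateLine(String line, SimpleDateFormat format)
    {
        String trimmed = line.trim();
        ParsePosition position = new ParsePosition(0);
        return format.parse(trimmed, position) != null && position.getIndex() == trimmed.length();
    }
}

A check like that could replace the try/catch around dateFormat.parse(...) in the loop above.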
Steve Perkins
@Steve Perkins: Ah. I've never seen anyone joke about Python indentation. Mostly they either don't notice it or they absolutely hate it. Since most programmers barely notice indentation -- they just do it -- joking about it is unusual. I'll try to be more open to that kind of thing.
S.Lott
A: 

Since that's a custom format, there's likely no library available. So write one yourself.

Here's a kickoff example, assuming that the file format is as consistent as you posted in the question. You may only want to use a List<Block> instead, since a Map keyed on the Date would silently overwrite blocks that share the same timestamp; a rough sketch of that variant follows the verification snippet below:

Map<Date, Map<String, String>> blocks = new LinkedHashMap<Date, Map<String, String>>();
SimpleDateFormat sdf = new SimpleDateFormat("EEE, dd MMM yyyy HH:mm:ss", Locale.ENGLISH);
BufferedReader reader = null;

try {
    reader = new BufferedReader(new InputStreamReader(new FileInputStream("/input.txt"), "UTF-8"));
    Date date = null;
    Map<String, String> block = null;

    for (String line; (line = reader.readLine()) != null;) {
        line = line.trim();
        if (date == null) {
            // First line of a new block: parse the timestamp and start a fresh token map.
            date = sdf.parse(line);
            block = new LinkedHashMap<String, String>();
            blocks.put(date, block);
        } else if (!line.isEmpty()) {
            // Token line: split on "=" with optional surrounding whitespace.
            String[] parts = line.split("\\s*=\\s*");
            block.put(parts[0], parts[1]);
        } else {
            // Blank line: the current block has ended.
            date = null;
        }
    }
} finally {
    if (reader != null) try { reader.close(); } catch (IOException ignore) {}
}

To verify the contents, use this:

for (Entry<Date, Map<String, String>> block : blocks.entrySet()) {
    System.out.println(block.getKey());
    for (Entry<String, String> token : block.getValue().entrySet()) {
        System.out.println("\t" + token.getKey() + " = " + token.getValue());
    }
    System.out.println();
}
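
For reference, a rough sketch of the List<Block> variant mentioned above; the Block class and its accessors are placeholders of my own, not part of any library:

import java.util.Date;
import java.util.LinkedHashMap;
import java.util.Map;

class Block {
    private final Date date;
    private final Map<String, String> tokens = new LinkedHashMap<String, String>();

    Block(Date date) {
        this.date = date;
    }

    void addToken(String name, String value) {
        tokens.put(name, value);
    }

    Date getDate() {
        return date;
    }

    Map<String, String> getTokens() {
        return tokens;
    }
}

In the reader loop, blocks would then become a List<Block> (e.g. an ArrayList<Block>), each date line would create a new Block(date) and add it to the list, and each token line would call addToken(parts[0], parts[1]) on the most recently added block.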
BalusC
A: 

For efficient parsing of files, especially big ones, you can use awk. An example:

$ awk -vRS= '{print "====>" $0}' file
====>Mon, 01 Jan 2010 01:01:01
  Token1 = ValueXYZ
  Token2 = ValueABC
  Token3 = ValuePQR
  ...
  TokenX = Value123
====>Mon, 01 Jan 2010 01:02:01
  Token1 = ValueXYZ
  Token2 = ValueABC
  Token3 = ValuePQR
  ...
  TokenY = Value456
====>Mon, 01 Jan 2010 01:03:01
  Token1 = ValueXYZ
  Token2 = ValueABC
  Token3 = ValuePQR

As you can see from the arrows, each record is now one block, running from one "====>" arrow to the next (this works by setting the record separator RS to blank, so awk reads blank-line-separated paragraphs as records). You can then set the field separator, e.g. to a newline:

$ awk -vRS= -vFS="\n" '{print "====>" $1}' file
====>Mon, 01 Jan 2010 01:01:01
====>Mon, 01 Jan 2010 01:02:01
====>Mon, 01 Jan 2010 01:03:01

So in the above example, the first field of every record is the date/time stamp. To get "Token1", for example, you could do this:

$ awk -vRS= -vFS="\n" '{for(i=1;i<=NF;i++) if ($i ~/Token1/){ print $i} }' file
  Token1 = ValueXYZ
  Token1 = ValueXYZ
  Token1 = ValueXYZ
ghostdog74
Thanks, I'll go with Awk. I found an interesting article here: http://www.ibm.com/developerworks/library/l-awk2.html
legege