views:

407

answers:

2

I've hand-written a recursive pure-Python parser for a file format (ARFF) we use in one lecture. Running my exercise submission is now awfully slow. It turns out that by far the most time is spent in my parser; it's consuming a lot of CPU time, and the hard disk is not the bottleneck.

I wonder what performant ways there are to write a parser in Python. I'd rather not rewrite it in C. I tried Jython, but that decreased performance a lot! The files I parse are partly huge (> 150 MB) with very long lines.

My current parser only needs a look-ahead of one character. I'd post the source here, but I don't know if that's such a good idea; after all, the submission deadline has not passed yet. But then, the focus of this exercise is not the parser: you can choose whatever language you want to use, and there already is a parser for Java.

Note: I have an x86_64 system, so psyco (and, it seems, PyPy as well) is not an option.

Update: I now uploaded my parser/writer to bitbucket.

+4  A: 

You could use ANTLR or pyparsing; they might speed up your parsing process.

And if you want to keep your current code, you might want to look at Cython/PyPy, which can increase your performance (sometimes up to 4x).
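To give a sense of what a pyparsing grammar looks like, here is a minimal sketch of a single ARFF header rule. The rule and the result names are illustrative only, not a complete ARFF grammar:

```python
from pyparsing import Word, alphas, alphanums, Suppress, delimitedList

# Illustrative sketch of one ARFF header rule, e.g. a line like
#   @attribute outlook {sunny, overcast, rainy}
ident = Word(alphas, alphanums + "_-")
attribute = (Suppress("@attribute") + ident("name")
             + Suppress("{") + delimitedList(ident)("values") + Suppress("}"))

res = attribute.parseString("@attribute outlook {sunny, overcast, rainy}")
print(res["name"])              # outlook
print(res["values"].asList())   # ['sunny', 'overcast', 'rainy']
```

Whether this is faster than a hand-written parser depends on the grammar; pyparsing trades some speed for conciseness, so it's worth profiling on a real input file first.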

wvd
Unlikely that pyparsing will speed things up, but it might shed some insight on where the bottlenecks are. Also, I believe an ARFF pyparsing parser has already been written and is out in the ether somewhere.
Paul McGuire
That's true -- and I don't know how pyparsing fits with PyPy or Cython.
wvd
The ARFF pyparsing parser linked from the weka site is extremely incomplete (if that's the one you're talking about). Also, I've tried Cython, but because I use yield a lot I had to use an hg version, and all that did was produce code that segfaults.
panzi
Did you try PyPy also?
wvd
PyPy seems to be 32-bit only; at least it didn't compile for me. I have an x86_64 system. Also, the 32-bit binary they provide links against libraries I do not have installed.
panzi
+1  A: 

The most general tip I'd give without further information would be to read the entire file, or at least a substantial section of it, into memory at once. You don't want to be reading it one character at a time and seeking here and there; regardless of the buffering that's going on under the hood it's probably a good idea just to have the whole thing in memory so you can operate on it however you want.
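As a rough illustration of the difference (the file path and the comma-counting task are placeholders, not the asker's actual parser):

```python
# Slow pattern: pull the file through one character at a time.
def count_commas_charwise(path):
    n = 0
    with open(path) as f:
        while True:
            c = f.read(1)          # one method call per character adds up fast
            if not c:
                break
            if c == ",":
                n += 1
    return n

# Faster pattern: slurp the file once, then work on the in-memory string.
def count_commas_slurped(path):
    with open(path) as f:
        data = f.read()            # a single read; 150 MB fits in modern RAM
    return data.count(",")         # string methods run at C speed
```

Both functions return the same count, but the second one does the per-character work inside the interpreter's C string routines instead of in Python bytecode.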

I have written parsers in Python and there's no particular requirement for them to be much slower than a parser written in any other language. As it is with these sorts of things, it's more likely that you're doing work you don't need to do. Among that class of problem: creating, destroying, and recreating the same object is more costly than just storing it off somewhere; recomputing a value over and over again is more costly than just storing it somewhere; etc., etc.

In Python specifically, one trap that people fall into is doing a lot of unnecessary string manipulation. Don't append to strings one character at a time; when you're building up your tokens do your work on the "master" string and strip out the token in one fell swoop. (In other words, index into the "master" string, figure out the start and end points, and then grab it with token = master[start:end].) Doing string concatenation one character at a time is a short path to performance misery. I suspect even if you want/need for some reason to do for c in master: newstr += c you might have better luck stuffing the 'c's into a list and then newstr = ''.join(newstr_charlist).
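A minimal sketch of both points, assuming the input is already in memory as a single string (the token rule here is made up for illustration):

```python
# Scan a token out of an in-memory "master" string with one slice,
# instead of appending characters one at a time (illustrative rule:
# a token runs until the next comma).
def next_token(master, start):
    end = start
    while end < len(master) and master[end] != ",":
        end += 1
    return master[start:end], end + 1   # slice once; no += in the loop

print(next_token("hello,world", 0))     # ('hello', 6)

# If you really must build a string incrementally, collect the pieces
# in a list and join once at the end.
chars = []
for c in "abc":
    chars.append(c)
joined = "".join(chars)
print(joined)                           # abc
```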

dash-tom-bang
I use things like this a lot; is this the fastest way? However, this particular code is not triggered by my use cases. http://pastebin.com/yAmEcKB8
panzi
Oh, and I read 4096-byte chunks from the file (the readc() and peek() methods operate on these chunks). I don't think reading the whole file would be a good idea, because the files get up to > 150 MB in size.
panzi
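For reference, a chunked reader with one-character look-ahead along the lines described above might look like this. The readc()/peek() names come from the comment; the internals are an assumption, not the actual code from the pastebin:

```python
import io

class ChunkReader:
    """Hypothetical sketch of a chunked reader with one-character look-ahead."""

    def __init__(self, f, chunk_size=4096):
        self.f = f                  # any file-like object opened for reading
        self.chunk_size = chunk_size
        self.buf = ""
        self.pos = 0

    def _fill(self):
        # Refill the buffer when exhausted; returns False at end of file.
        if self.pos >= len(self.buf):
            self.buf = self.f.read(self.chunk_size)
            self.pos = 0
        return bool(self.buf)

    def peek(self):
        """Return the next character without consuming it ('' at EOF)."""
        return self.buf[self.pos] if self._fill() else ""

    def readc(self):
        """Consume and return the next character ('' at EOF)."""
        c = self.peek()
        if c:
            self.pos += 1
        return c

# Tiny chunk size just to exercise the refill path
r = ChunkReader(io.StringIO("ab"), chunk_size=1)
print(r.peek(), r.readc(), r.readc(), r.readc() == "")  # a a b True
```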
Modern computers have 512M or more of memory. Reading 150MB is nothing. :)
dash-tom-bang
The code is an interesting use of generators, but I'd be worried about the function call overhead. Maybe it's perfectly fine, but I might try doing an implementation without generators, just indexing into the string, to see if there was any particular performance difference.
dash-tom-bang