views: 288
answers: 6
I want to be able to run a regular expression on an entire file, but I'd rather not read the whole file into memory at once, since I may be working with rather large files in the future. Is there a way to do this? Thanks!

Clarification: I cannot simply read line by line, because the pattern I'm matching can span multiple lines.

+4  A: 

This depends on the file and the regex. The best thing you could do would be to read the file line by line, but if that does not work for your situation then you may get stuck pulling the whole file into memory.

Let's say, for example, that this is your file:

Lorem ipsum dolor sit amet, consectetur
adipiscing elit. Ut fringilla pede blandit
eros sagittis viverra. Curabitur facilisis
urna ABC elementum lacus molestie aliquet.
Vestibulum lobortis semper risus. Etiam
sollicitudin. Vivamus posuere mauris eu
nulla. Nunc nisi. Curabitur fringilla fringilla
elit. Nullam feugiat, metus et suscipit
fermentum, mauris ipsum blandit purus,
non vehicula purus felis sit amet tortor.
Vestibulum odio. Mauris dapibus ultricies
metus. Cras XYZ eu lectus. Cras elit turpis,
ultrices nec, commodo eu, sodales non, erat.
Quisque accumsan, nunc nec porttitor vulputate,
erat dolor suscipit quam, a tristique justo
turpis at erat.

And this was your regex:

consectetur(?=\sadipiscing)

Now this regex uses positive lookahead and will only match the string "consectetur" if it is immediately followed by a whitespace character and then the string "adipiscing".

So in this example you would have to read the whole file into memory, because the whitespace between "consectetur" and "adipiscing" is a line break: a line-by-line scan would never see both words together, so the regex depends on the entire file being treated as a single string. This is one of many cases where a particular regex needs the entire text in memory to work.
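As a small sketch of that point (assuming the text above is saved as /tmp/sample.txt, a made-up path), the lookahead fails when reading line by line but succeeds against the whole file as one string:

import re

PATTERN = r'consectetur(?=\sadipiscing)'

# Line by line: no match, because "adipiscing" sits on the next line
# and the lookahead cannot see past the end of the current line.
for line in open('/tmp/sample.txt'):
    if re.search(PATTERN, line):
        print "matched on a single line"      # never reached for this input

# Whole file as one string: the \s in the lookahead matches the newline.
data = open('/tmp/sample.txt').read()
if re.search(PATTERN, data):
    print "matched against the whole file"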

I guess the unfortunate answer is that it all depends on your situation.

Andrew Hare
A: 

For single-line patterns you can iterate over the lines of the file, but for multi-line patterns you will have to read all of the file into memory (or part of it, though keeping track of matches across partial reads is tricky, as the sketch below suggests).
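A rough sketch of the "part of the file" idea, with the bookkeeping it requires: carry an overlap from the previous chunk forward so a match straddling a chunk boundary is not lost. The path, pattern, chunk size and overlap below are made up for illustration; the overlap must be at least as long as the longest match you expect, and a match that falls entirely inside the overlap may be reported twice.

import re

def search_in_chunks(path, pattern, chunk_size=1024 * 1024, overlap=1024):
    # Scan the file one chunk at a time, carrying the tail of the previous
    # chunk forward so matches that span a chunk boundary can still be seen.
    regex = re.compile(pattern, re.DOTALL)
    carry = ''
    with open(path, 'rb') as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            buf = carry + chunk
            for match in regex.finditer(buf):
                yield match.group(0)
            carry = buf[-overlap:]

# for hit in search_in_chunks('/tmp/bigfile.txt', r'consectetur\s+adipiscing'):
#     print hit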

sykora
+1  A: 

Open the file and iterate over the lines.

import re

fd = open('myfile')
for line in fd:
    if re.match(..., line):
        print line
Mark Harrison
A: 

This is one way:

import re

REGEX = '\d+'

with open('/tmp/workfile', 'r') as f:
    for line in f:
        print re.match(REGEX, line)
  1. The with statement (Python 2.5+) takes care of closing the file automatically, so you need not worry about it.
  2. Iterating over the file object is memory efficient: it won't read more than a line into memory at a time.
  3. The drawback of this approach is that it can take a lot of time for huge files.

Another approach that comes to mind is to use read(size) and file.seek(offset), which let you read a portion of the file at a time.

import re

REGEX = '\d+'

with open('/tmp/workfile', 'r') as f:
    f.seek(0, 2)                 # seek to the end to find the file size
    filesize = f.tell()
    f.seek(0)
    part = filesize / 10         # a suitable chunk size that you can determine ahead of time or in the program
    position = 0
    while position <= filesize:
        content = f.read(part)
        print re.match(REGEX, content)
        position = position + part
        f.seek(position)

You can also combine the two: create a generator that returns the file contents a certain number of bytes at a time, and iterate through that content to check your regex. IMO this would be a good approach; a sketch follows.
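A minimal sketch of that combined approach (the file name, chunk size and pattern are only placeholders). Note that, as written, a match that straddles two chunks would be missed; you would need to overlap the chunks to catch those.

import re

def read_in_chunks(f, size):
    # Generator that yields the file contents `size` bytes at a time.
    while True:
        chunk = f.read(size)
        if not chunk:
            break
        yield chunk

REGEX = '\d+'

with open('/tmp/workfile', 'r') as f:
    for chunk in read_in_chunks(f, 64 * 1024):
        mo = re.search(REGEX, chunk)
        if mo:
            print mo.group(0)
            break    # stop at the first match; drop this to keep scanning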

Senthil Kumaran
+1  A: 

If this is a big deal and worth some effort, you can convert the regular expression into a finite state machine that reads the file. An FSM runs in O(n) time, which means it will be a lot faster as the file size gets big.

You will be able to efficiently match patterns that span lines in files too large to fit in memory.
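As a very rough sketch of the idea (not a general converter): the simple pattern "ab*c" hand-translated into a small state machine that scans the file one chunk at a time, so memory use stays constant no matter how big the file is. In practice you would generate the state table from the regex rather than write it by hand; the file name below is hypothetical.

# Hand-written state machine for finding "ab*c" anywhere in a stream.
# State 0: looking for 'a'; state 1: saw 'a' (and any 'b's); state 2: match found.
def step(state, ch):
    if state == 0:
        return 1 if ch == 'a' else 0
    if state == 1:
        if ch == 'c':
            return 2
        if ch in ('a', 'b'):
            return 1
        return 0
    return 2

def dfa_search(path, chunk_size=64 * 1024):
    state = 0
    offset = 0
    with open(path, 'rb') as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                return -1                  # no match anywhere in the file
            for i, ch in enumerate(chunk):
                state = step(state, ch)
                if state == 2:
                    return offset + i      # offset where the match ends
            offset += len(chunk)

# print dfa_search('/tmp/workfile')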

Here are two places that describe the algorithm for converting a regular expression to an FSM:

Mark Harrison
+8  A: 

You can use mmap to map the file to memory. The file contents can then be accessed like a normal string:

import re, mmap

with open('/var/log/error.log', 'r+') as f:
    data = mmap.mmap(f.fileno(), 0)   # length 0 maps the whole file
    mo = re.search('error: (.*)', data)
    if mo:
        print "found error", mo.group(1)

This also works for big files; the file content is paged in from disk internally as needed.
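Since the question is specifically about patterns that span lines, the same trick works with a multi-line pattern: the mmap is searched as one big string without reading the whole file in up front. The file name and pattern below are only placeholders.

import re, mmap

with open('/tmp/sample.txt', 'r') as f:
    data = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    # "consectetur" and "adipiscing" sit on different lines in the sample text,
    # but \s+ matches across the newline.
    mo = re.search(r'consectetur\s+adipiscing', data)
    if mo:
        print "found:", mo.group(0)
    data.close()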

sth
This is perfect. Thank you very much, sth.
Evan Fosmark
Just a side note: if you work on a 32-bit system *and* your files could be over 1 GiB, then this method might not work.
ΤΖΩΤΖΙΟΥ
Mapped files count toward "used memory", and on 32-bit systems one process can only address up to 4 GB, so yes, if the file gets up to something like 3 GB you could start running into problems. Then it's time to switch to a 64-bit processor :).
sth
I wish I could +1 twice for actually answering the question.
A. Rex
Bloody wonderful answer.
PEZ