tags:

views: 41

answers: 1

I've done a mysqldump of a large database, ~300MB. It made an error, though: it has not escaped any quotes contained in <o:p>...</o:p> tags. Here's a sample:

...Text here\' escaped correctly, <o:p> But text in here isn't. </o:p> Out here all\'s well again...

Is it possible to write a script (preferably in Python, but I'll take anything!) that would be able to scan and fix these errors automatically? There are quite a lot of them, and Notepad++ can't handle a file of that size very well...

+4  A: 

If the "lines" your file is divided into are of reasonable lengths, and there are no binary sequences in it that "reading as text" would break, you can use fileinput's handy "make believe I'm rewriting a file in place" functionality:

   import re
   import fileinput

   # match each <o:p> ... </o:p> span (non-greedy, so separate tags don't merge)
   tagre = re.compile(r"<o:p>.*?</o:p>")
   def sub(mo):
     # escape every single quote inside the matched tag
     return mo.group().replace(r"'", r"\'")

   for line in fileinput.input('thefilename', inplace=True):
     # with inplace=True, stdout is redirected back into the file;
     # the trailing comma suppresses the extra newline (Python 2)
     print tagre.sub(sub, line),

If not, you'll have to simulate the "in-place rewriting" yourself, e.g. (oversimplified...):

   with open('thefilename', 'rb') as inf:
     with open('fixed', 'wb') as ouf:
       while True:
         # process the file in 1 MB chunks to keep memory use bounded
         b = inf.read(1024*1024)
         if not b: break
         ouf.write(tagre.sub(sub, b))

and then move 'fixed' to take the place of 'thefilename' (either in code, or manually) if you need that filename to remain after the fixing.
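
If you want to do that renaming in code, a minimal sketch (using the same filenames as above) could look like this; note that os.rename won't overwrite an existing file on Windows, hence the explicit remove:

   import os

   # discard the defective original, then give the fixed copy its name
   os.remove('thefilename')
   os.rename('fixed', 'thefilename')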

This is oversimplified because one of the crucial <o:p> ... </o:p> parts might end up split between two successive megabyte "blocks" and therefore not identified. (In the first example I'm assuming each such part is always fully contained within a "line" -- if that's not the case, don't use that code; use the following instead.) Fixing this requires, alas, more complicated code...:

   with open('thefilename', 'rb') as inf:
     with open('fixed', 'wb') as ouf:
       while True:
         # read chunks that never split an <o:p> ... </o:p> pair
         b = getblock(inf)
         if not b: break
         ouf.write(tagre.sub(sub, b))

with e.g.

   # prefixes that could be the beginning of a tag split across reads
   partsofastartag = '<', '<o', '<o:', '<o:p'

   def getblock(inf):
     b = ''
     while True:
       newb = inf.read(1024 * 1024)
       if not newb: return b
       b += newb
       # keep reading if the block ends in what might be a truncated tag
       if any(b.endswith(p) for p in partsofastartag):
         continue
       # keep reading while an <o:p> is still waiting for its </o:p>
       if b.count('<o:p>') != b.count('</o:p>'):
         continue
       return b

As you see, this is pretty delicate code, and since it's untested I can't know that it is correct for your problem. In particular, can there be cases of <o:p> that are NOT matched by a closing </o:p>, or vice versa? If so, a call to getblock could end up returning the whole file in quite a costly way, and even the RE matching and substitution might backfire (the latter would also occur if SOME of the single quotes in such tags are already properly escaped, but not all).
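
If some of the quotes inside the tags may already be escaped, one possible tweak (just a sketch, untested, and ignoring the corner case of a quote that follows a legitimately escaped backslash) is a drop-in replacement for the sub function above that skips quotes already preceded by a backslash:

   # only escape quotes that don't already have a backslash in front of them
   quotere = re.compile(r"(?<!\\)'")
   def sub(mo):
     return quotere.sub(r"\\'", mo.group())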

If you have at least a GB or so of memory to spare, avoiding the delicate issues with block division, at least, IS feasible, since everything should fit in memory, making the code much simpler:

   with open('thefilename', 'rb') as inf:
     with open('fixed', 'wb') as ouf:
       # ~300 MB easily fits in memory, so fix the whole file in one go
       b = inf.read()
       ouf.write(tagre.sub(sub, b))

However, the other issues mentioned above (possibly unbalanced opening/closing tags, etc.) might remain -- only you can study your existing defective data and see whether it affords such a reasonably simple approach to fixing!
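
One quick way to study that, if the whole file fits in memory, is a little check script along these lines (just a sketch, reusing the tagre regex defined above; the counts should tell you whether the tags are balanced and whether some quotes inside them are already escaped):

   with open('thefilename', 'rb') as inf:
     data = inf.read()

   # tag balance: if these differ, getblock and the RE could misbehave
   print 'opening tags:', data.count('<o:p>')
   print 'closing tags:', data.count('</o:p>')

   # quotes inside <o:p> ... </o:p> spans, escaped vs not
   inside = ''.join(tagre.findall(data))
   print 'already escaped quotes:', inside.count("\\'")
   print 'total quotes:', inside.count("'")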

Alex Martelli
Thanks, that worked great, all I need to do now is find all the other queries with unescaped quotes that don't have such a handy way of finding them...
fredley