Could someone tell me what's the better way to clean up bad HTML so that BeautifulSoup can handle it: should one use BeautifulSoup's massage mechanism, or clean it up beforehand using regular expressions?

Thanks.

+2  A: 

Thought I should reword my answer.

The built-in massages are good for light damage (stray whitespace in declarations, self-closing tags like <br/> written without a space, etc.). I would certainly try to get away with these before getting any more involved.
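For reference, a minimal sketch of what the defaults already handle, assuming BeautifulSoup 3 on Python 2 (the version with markupMassage):

from BeautifulSoup import BeautifulSoup

# The default MARKUP_MASSAGE pairs run automatically on every parse,
# so light damage like this needs no extra configuration:
print BeautifulSoup('Foo<br/>Bar')
# Foo<br />Bar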

You can pass in your own massages, and I would suggest you extend the default set:

import copy, re
from BeautifulSoup import BeautifulSoup  # BeautifulSoup 3

# Fix comments that open with '<!-' instead of '<!--'
myMassage = [(re.compile('<!-([^-])'), lambda match: '<!--' + match.group(1))]
myNewMassage = copy.copy(BeautifulSoup.MARKUP_MASSAGE)
myNewMassage.extend(myMassage)

badString = 'Foo<!-This comment is malformed.-->Bar<br/>Baz'  # example input
print BeautifulSoup(badString, markupMassage=myNewMassage)
# Foo<!--This comment is malformed.-->Bar<br />Baz

You're probably better off doing it this way as it all goes into one parsing pot, gaining BeautifulSoup's optimisations... although the runtime performance is probably pretty similar.
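For comparison, here is a sketch of the regex-first alternative on the same illustrative input; it produces the same tree, just via a separate pass over the string before BeautifulSoup sees it:

import re
from BeautifulSoup import BeautifulSoup

badString = 'Foo<!-This comment is malformed.-->Bar<br/>Baz'
cleaned = re.sub('<!-([^-])', lambda match: '<!--' + match.group(1), badString)
print BeautifulSoup(cleaned)
# Foo<!--This comment is malformed.-->Bar<br />Baz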

Oli
+2  A: 

From the documentation, massages are just pairs of (regular expression, replacement function), so I don't think it's really a case of massaging versus regexps: the massages are themselves regexps.

e.g. to tidy up malformed comments:

(re.compile('<!-([^-])'), lambda match: '<!--' + match.group(1))

If you look at the source of the _feed method in BeautifulSoup.py, you will see that these are just run in sequence against the markup:

for fix, m in self.markupMassage:
    markup = fix.sub(m, markup)
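You can see the effect by running that loop by hand; a minimal sketch, assuming BeautifulSoup 3's default MARKUP_MASSAGE plus the comment fix from above:

import copy, re
from BeautifulSoup import BeautifulSoup

massages = copy.copy(BeautifulSoup.MARKUP_MASSAGE)
massages.append((re.compile('<!-([^-])'), lambda match: '<!--' + match.group(1)))

markup = 'Foo<!-This comment is malformed.-->Bar<br/>Baz'
for fix, m in massages:
    markup = fix.sub(m, markup)  # each pair is applied in sequence

print markup
# Foo<!--This comment is malformed.-->Bar<br />Baz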

So whilst you could do some regexp processing of your own before BeautifulSoup gets to see the markup, you are probably better off combining any additional tidying with the default built-in MARKUP_MASSAGE, as shown in Oli's answer.

mikej