views: 86
answers: 4

I have a snippet of HTML that contains paragraphs. (I mean p tags.) I want to split the string into the different paragraphs. For instance:

'''
<p class="my_class">Hello!</p>
<p>What's up?</p>
<p style="whatever: whatever;">Goodbye!</p>
'''

Should become:

['<p class="my_class">Hello!</p>',
 "<p>What's up?</p>",
 '<p style="whatever: whatever;">Goodbye!</p>']

What would be a good way to approach this?

A: 

Use BeautifulSoup to parse the HTML and iterate over the paragraphs.
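
A minimal sketch of what that might look like with the question's snippet, using the old BeautifulSoup 3 import (in bs4 the import is from bs4 import BeautifulSoup and findAll is spelled find_all); the variable names are just illustrative:

from BeautifulSoup import BeautifulSoup  # BeautifulSoup 3

html = '''<p class="my_class">Hello!</p>
<p>What's up?</p>
<p style="whatever: whatever;">Goodbye!</p>'''

soup = BeautifulSoup(html)
paragraphs = []
for p in soup.findAll('p'):        # one Tag per <p> element, in document order
    paragraphs.append(unicode(p))  # serialize each tag back to markup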

Lukáš Lalinský
BeautifulSoup also works, but it's only necessary if the HTML might be ugly/invalid. The stdlib etree can also do this. I prefer lxml because it's more powerful. At one point there was talk of including BeautifulSoup in it; I don't know where that's gone.
profjim
`xml.etree` can parse XML, which the code in the question is not.
Lukáš Lalinský
I believe I've used it to parse HTML. Maybe I'm misremembering. But this seems to confirm my memory: http://effbot.org/zone/element-index.htm#usage
profjim
or maybe the issue is that we only have a snippet here...?
profjim
A: 

Either xml.etree (stdlib) or lxml.etree (the enhanced version) makes this easy to do, but I'm not going to get the answer cred for this because I don't remember the exact syntax. I keep mixing it up with similar packages and have to look it up afresh every time.
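
For what it's worth, a rough sketch of the stdlib xml.etree route (lxml.etree has a near-identical API). It only works here because the snippet happens to be well-formed, and the fragment needs a dummy root element wrapped around it, since several top-level <p> tags are not a single XML document:

import xml.etree.ElementTree as ET  # or: from lxml import etree as ET

html = '''<p class="my_class">Hello!</p>
<p>What's up?</p>
<p style="whatever: whatever;">Goodbye!</p>'''

# Wrap the fragment so it has a single root, then parse it as XML.
root = ET.fromstring('<root>%s</root>' % html)
paragraphs = [ET.tostring(p).strip() for p in root.findall('p')]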

profjim
+3  A: 

If your string only contains paragraphs, you may be able to get away with a nicely crafted regex and re.split(). However, if your string is more complex HTML, or not always valid HTML, you might want to look at the BeautifulSoup package.

Usage goes like:

from BeautifulSoup import BeautifulSoup 

soup = BeautifulSoup(some_html)

paragraphs = [unicode(x) for x in soup.findAll('p')]
Crast
Regular expressions are the wrong tool for this. HTML is not a regular language, so regexes are inherently unable to parse HTML. Using an HTML parser, as you show in the latter part of your post, is more robust as well as easier and more readable.
Mike Graham
+2  A: 

Use lxml.html to parse the HTML into the form you want. This is essentially the same advice as that of the people recommending BeautifulSoup, except that lxml is still being actively developed while BeautifulSoup development has slowed.
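
A minimal sketch of that, again using the snippet from the question (variable names are just illustrative):

import lxml.html  # lenient HTML parser, so it copes with broken markup too

html = '''<p class="my_class">Hello!</p>
<p>What's up?</p>
<p style="whatever: whatever;">Goodbye!</p>'''

doc = lxml.html.fromstring(html)  # a fragment like this gets wrapped in a container element
paragraphs = [lxml.html.tostring(p).strip() for p in doc.findall('.//p')]

Because lxml.html uses a forgiving HTML parser, this should keep working even when the input isn't strictly valid.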

Mike Graham