tags:

views: 663
answers: 5

Hi, I would like to know how to retrieve all results from each <p> tag.

import re
htmlText = '<p data="5" size="4">item1</p><p size="4">item2</p><p size="4">item3</p>'
print re.match('<p[^>]*size="[0-9]">(.*?)</p>', htmlText).groups()

result:

('item1',)

what I need:

('item1', 'item2', 'item3')
+2  A: 

You can use re.findall like this:

import re
html = '<p data="5" size="4">item1</p><p size="4">item2</p><p size="4">item3</p>'
print re.findall('<p[^>]*size="[0-9]">(.*?)</p>', html)
# This prints: ['item1', 'item2', 'item3']

Edit: ...but as many commenters have pointed out, using regular expressions to parse HTML is usually a bad idea.
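(For readers on modern Python: the answer above is Python 2. A Python 3 sketch of the same findall approach, with print as a function and a raw string for the pattern, looks like this:)

```python
import re

html = '<p data="5" size="4">item1</p><p size="4">item2</p><p size="4">item3</p>'

# findall returns every non-overlapping match of the capture group,
# not just the first one as re.match(...).groups() does
items = re.findall(r'<p[^>]*size="[0-9]">(.*?)</p>', html)
print(items)  # ['item1', 'item2', 'item3']
```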

RichieHindle
Thanks! I just found it on Python docs! http://docs.python.org/dev/howto/regex.html
Felipe Andrade
I'm sorry but this is an awful answer. What if there's a space between the size attribute and the closing bracket: <p size="0" >?
Triptych
@Triptych: There isn't. Have you considered the possibility that the OP knows what he's doing? 8-) Had the question been "How do I parse this HTML?" then I wouldn't have suggested a regular expression. But it was "How do I make my regular expression work?", and this is an answer to that question.
RichieHindle
-1: gave an example of regex to parse HTML without even saying that this is a really bad idea, and lots of newbies will read it. Evil comes from acts like that.
nosklo
@RichieHindle: The original poster didn't say anything about making a regular expression work. He said he wanted to retrieve the results from each p tag. Regular expressions aren't suited to do that.
Brett Bim
+11  A: 

For this type of problem, you should use a DOM parser, not regex.

I've seen Beautiful Soup frequently recommended for Python.

Peter Boughton
+2  A: 

Alternatively, xml.dom.minidom will parse your HTML if:

  • ...it is well-formed, and
  • ...you embed it in a single root element.

E.g.,

>>> import xml.dom.minidom
>>> htmlText = '<p data="5" size="4">item1</p><p size="4">item2</p><p size="4">item3</p>'
>>> d = xml.dom.minidom.parseString('<not_p>%s</not_p>' % htmlText)
>>> tuple(map(lambda e: e.firstChild.wholeText, d.firstChild.childNodes))
('item1', 'item2', 'item3')
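(A variant of the above — my sketch, not part of the original answer — avoids walking childNodes by hand and uses getElementsByTagName, which finds every <p> element regardless of nesting depth:)

```python
import xml.dom.minidom

htmlText = '<p data="5" size="4">item1</p><p size="4">item2</p><p size="4">item3</p>'

# Wrap the fragment in a dummy root so it is a single well-formed XML document
doc = xml.dom.minidom.parseString('<root>%s</root>' % htmlText)

# Collect the text node inside each <p> element
items = tuple(p.firstChild.wholeText for p in doc.getElementsByTagName('p'))
print(items)  # ('item1', 'item2', 'item3')
```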
Stephan202
+5  A: 

Beautiful Soup is definitely the way to go with a problem like this. The code is cleaner and easier to read. Once you have it installed, getting all the tags looks something like this:

from BeautifulSoup import BeautifulSoup
import urllib2

def getTags(tag):
  f = urllib2.urlopen("http://cnn.com")
  soup = BeautifulSoup(f.read())
  return soup.findAll(tag)


if __name__ == '__main__':
  tags = getTags('p')
  for tag in tags: print(tag.contents)

This will print out all the values of the p tags.

Brett Bim
Thanks for your response. I just needed a python way to print out all the values of the p tags without installing anything new in the server.
Felipe Andrade
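(For completeness, and not part of the original thread: the standard library can do this with no extra installs. A minimal Python 3 sketch using html.parser — the class name PTextCollector is my own invention; the Python 2 era equivalent module was called HTMLParser:)

```python
from html.parser import HTMLParser

class PTextCollector(HTMLParser):
    """Collects the text inside every <p> element (hypothetical helper)."""
    def __init__(self):
        super().__init__()
        self.in_p = False
        self.items = []

    def handle_starttag(self, tag, attrs):
        if tag == 'p':
            self.in_p = True
            self.items.append('')  # start a new entry for this <p>

    def handle_endtag(self, tag):
        if tag == 'p':
            self.in_p = False

    def handle_data(self, data):
        if self.in_p:
            self.items[-1] += data  # accumulate text inside the current <p>

parser = PTextCollector()
parser.feed('<p data="5" size="4">item1</p><p size="4">item2</p><p size="4">item3</p>')
print(parser.items)  # ['item1', 'item2', 'item3']
```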
+4  A: 

The regex answer is extremely fragile. Here's proof (and a working BeautifulSoup example).

from BeautifulSoup import BeautifulSoup

# Here's your HTML
html = '<p data="5" size="4">item1</p><p size="4">item2</p><p size="4">item3</p>'

# Here's some simple HTML that breaks your accepted 
# answer, but doesn't break BeautifulSoup.
# For each example, the regex will ignore the first <p> tag.
html2 = '<p size="4" data="5">item1</p><p size="4">item2</p><p size="4">item3</p>'
html3 = '<p data="5" size="4" >item1</p><p size="4">item2</p><p size="4">item3</p>'
html4 = '<p data="5" size="12">item1</p><p size="4">item2</p><p size="4">item3</p>'

# This BeautifulSoup code works for all the examples.
paragraphs = BeautifulSoup(html).findAll('p')
items = [''.join(p.findAll(text=True)) for p in paragraphs]

Use BeautifulSoup.
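(To see the fragility concretely, here is a pure-stdlib check of what the accepted answer's regex captures from each variant above — BeautifulSoup handles all of them identically:)

```python
import re

# The accepted answer's pattern, unchanged
pattern = r'<p[^>]*size="[0-9]">(.*?)</p>'

html2 = '<p size="4" data="5">item1</p><p size="4">item2</p><p size="4">item3</p>'
html3 = '<p data="5" size="4" >item1</p><p size="4">item2</p><p size="4">item3</p>'
html4 = '<p data="5" size="12">item1</p><p size="4">item2</p><p size="4">item3</p>'

# In every variant the first <p> no longer matches, so item1 is silently dropped
results = [re.findall(pattern, h) for h in (html2, html3, html4)]
print(results)  # [['item2', 'item3'], ['item2', 'item3'], ['item2', 'item3']]
```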

Triptych
I don't think you need to import re. Also, I'm curious what your example provides that mine doesn't other than the list comprehension.
Brett Bim
Brett - mine will correctly handle cases like <p><b>item1</b></p>, whereas yours will fail. Also, the items array here will convert to a list of strings, whereas your example will return tag.contents, which is actually a (very memory-hungry) BeautifulSoup object.
Triptych
Cool! I didn't know about the object being memory intensive, I've only used it on small parsing projects and never run into issues. Thanks for the update. I voted yours up based on your explanation.
Brett Bim
I've used BeautifulSoup for some very large (500KB+) HTML files, and you run into a pretty hard wall if you don't learn to conserve memory. BeautifulSoup is extremely convenient but NOT very efficient.
Triptych