views: 854
answers: 3
I'm trying to look at an HTML file and remove all the tags from it so that only the text is left, but I'm having a problem with my regex. This is what I have so far.

import urllib.request, re

def test(url):
    html = str(urllib.request.urlopen(url).read())
    print(re.findall('<[\w\/\.\w]*>', html))

The HTML is a simple page with a few links and text, but my regex won't pick up the `!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"` declaration or `a href="...."` tags. Can anyone explain what I need to change in my regex?
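The character class `[\w\/\.\w]` only matches word characters, `/`, and `.`, so any tag containing a space, quote, `!`, or `-` (which is exactly what a DOCTYPE or an `<a href="...">` contains) can never match. A minimal sketch of the usual fix, matching everything up to the next `>`:

```python
import re

# '<[^>]+>' matches '<', then one or more characters that are not '>',
# then '>'. Spaces, quotes, '!' and '-' inside a tag are all covered.
TAG_RE = re.compile(r'<[^>]+>')

html = ('<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">'
        '<a href="http://example.com">a link</a> and some text')
print(TAG_RE.findall(html))    # the DOCTYPE, the opening <a>, and </a>
print(TAG_RE.sub('', html))    # -> a link and some text
```

Note this still breaks on `>` inside attribute values or in comments, which is one reason the answers below recommend a real parser.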

A: 

What about just replacing them?

< with &lt;
> with &gt;
& with &amp;
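If the goal is to display the markup as literal text rather than strip it, that replacement can be sketched as below; note the order matters, since `&` must be escaped first or the `&` produced by `&lt;`/`&gt;` would itself be re-escaped:

```python
def escape(text):
    # Replace '&' first, then the angle brackets.
    return (text.replace('&', '&amp;')
                .replace('<', '&lt;')
                .replace('>', '&gt;'))

print(escape('<b>bold & proud</b>'))  # -> &lt;b&gt;bold &amp; proud&lt;/b&gt;
```

Python 3's standard library already provides this as `html.escape` (with `quote=False` for exactly these three characters).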

M28
+7  A: 

Use BeautifulSoup. Use lxml. Do not use regular expressions to parse HTML.
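Even if installing a third-party parser is not an option, the standard library's html.parser beats a regex for this. A minimal Python 3 sketch (the `TextExtractor` name is just illustrative):

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect only the text nodes of a document, discarding all tags."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        # Called only for text between tags, never for the tags themselves.
        self.chunks.append(data)

    def text(self):
        return ''.join(self.chunks)

parser = TextExtractor()
parser.feed('<html><body><p>Hello, <a href="#">world</a>!</p></body></html>')
print(parser.text())  # -> Hello, world!
```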


Edit 2010-01-29: This would be a reasonable starting point for lxml:

from lxml.html import fromstring
from lxml.html.clean import Cleaner
import urllib.request

url = "http://stackoverflow.com/questions/2165943/removing-html-tags-from-a-text-using-regular-expression-in-python"
html = urllib.request.urlopen(url).read()

doc = fromstring(html)

tags = ['h1', 'h2', 'h3', 'h4', 'h5', 'h6',
        'div', 'span',
        'img', 'area', 'map']
args = {'meta': False, 'safe_attrs_only': False, 'page_structure': False,
        'scripts': True, 'style': True, 'links': True, 'remove_tags': tags}
cleaner = Cleaner(**args)

path = '/html/body'
body = doc.xpath(path)[0]

print(cleaner.clean_html(body).text_content().encode('ascii', 'ignore').decode('ascii'))

You want the content, so presumably you don't want any JavaScript or CSS. Also, presumably you want only the content in the body, not HTML from the head. Read up on lxml.html.clean to see what you can easily strip out. Way smarter than regular expressions, no?

Also, watch out for unicode encoding problems. You can easily end up with HTML that you cannot print.
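For example, forcing output down to ASCII, as the snippet above does, sidesteps console encoding errors at the cost of silently dropping characters:

```python
# A non-ASCII string can crash print() on consoles with a limited
# encoding; dropping unencodable characters is a blunt but safe fix.
text = 'café naïve'
safe = text.encode('ascii', 'ignore').decode('ascii')
print(safe)  # -> caf nave
```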

hughdbrown
This is the final answer!
jathanism
-1. The OP's requirement is simple: remove all tags. There's no need for BeautifulSoup.
Here are a couple of things the OP might consider obvious but has omitted from the question: document section (head and body? body only?) and JavaScript (does the OP consider JavaScript part of the content?). Those are easily controllable with BeautifulSoup and lxml; regular expressions will not deal with them at all.
hughdbrown
A: 
import re
import urllib.request

patjunk = re.compile(r'<.*?>|&nbsp;|&amp;', re.DOTALL | re.M)
url = "http://www.yahoo.com"

def test(url, pat):
    html = urllib.request.urlopen(url).read().decode('utf-8', 'ignore')
    return pat.sub('', html)

print(test(url, patjunk))