Use BeautifulSoup. Use lxml. Do not use regular expressions to parse HTML.
Edit 2010-01-29: This would be a reasonable starting point for lxml:
from lxml.html import fromstring
from lxml.html.clean import Cleaner
import urllib2

url = "http://stackoverflow.com/questions/2165943/removing-html-tags-from-a-text-using-regular-expression-in-python"
html = urllib2.urlopen(url).read()
doc = fromstring(html)

# remove_tags drops the tags themselves but keeps their text content.
tags = ['h1', 'h2', 'h3', 'h4', 'h5', 'h6',
        'div', 'span',
        'img', 'area', 'map']
args = {'meta': False, 'safe_attrs_only': False, 'page_structure': False,
        'scripts': True, 'style': True, 'links': True, 'remove_tags': tags}
cleaner = Cleaner(**args)

# Work on the body only, ignoring the head.
body = doc.xpath('/html/body')[0]
print cleaner.clean_html(body).text_content().encode('ascii', 'ignore')
You want the content, so presumably you don't want any JavaScript or CSS, and presumably you want only the content of the body, not the HTML from the head. Read up on lxml.html.clean to see what else you can easily strip out. Way smarter than regular expressions, no?
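If pulling in lxml isn't an option, the same no-regex idea can be sketched with the standard library's HTMLParser (shown here in Python 3 syntax; the `TextExtractor` and `strip_tags` names are invented for this sketch, and it is far less thorough than lxml's Cleaner):

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect text content, skipping <script> and <style> bodies."""
    SKIP = {"script", "style"}

    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip_depth = 0

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        # Only keep text that is not inside a skipped element.
        if not self._skip_depth:
            self.parts.append(data)

def strip_tags(html):
    parser = TextExtractor()
    parser.feed(html)
    return "".join(parser.parts)

print(strip_tags("<p>Hello <b>world</b>!</p><script>var x = 1;</script>"))
```

The skip counter rather than a boolean flag keeps the logic correct even if skipped elements nest.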
Also, watch out for Unicode encoding problems: you can easily end up with HTML that you cannot print.
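That is why the snippet above ends with encode('ascii', 'ignore'). A quick stdlib demonstration of the difference (Python 3 syntax; the sample string is made up):

```python
text = "caf\u00e9 \u2014 r\u00e9sum\u00e9"  # contains e-acute and an em dash

# Strict ASCII encoding raises on any non-ASCII character:
try:
    text.encode("ascii")
except UnicodeEncodeError as e:
    print("strict encoding failed:", e.reason)

# errors='ignore' silently drops the offending characters instead:
print(text.encode("ascii", "ignore"))
```

Dropping characters loses data, of course; errors='replace' (which substitutes '?') or writing the output as UTF-8 are gentler alternatives.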