views:

8916

answers:

9

I would like to extract from a general HTML page, all the text (displayed or not).

I would like to remove

  • any HTML tags
  • Any javascript
  • Any CSS styles

Is there a regular expression (one or more) that will achieve that?

+1  A: 

Using Perl syntax to define the regexes, a start might be:

m!<body.*?>(.*)</body>!smi

Then apply the following substitutions to the captured group:

s!<script.*?</script>!!smi;
s!<[^>]+/[ \t]*>!!smi;
s!</?([a-z]+).*?>!!smi;
s/<!--.*?-->//smi;

This of course won't format things nicely as a text file, but it will strip out all of the HTML (mostly; there are a few cases where it might not work quite right). A better idea, though, is to use an HTML/XML parser in whatever language you are working in to parse the HTML properly, and extract the text from that.
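The substitutions above can be sketched in Python's re module instead of Perl (a rough sketch under the same caveats; the sample page and the strip_html name are my own, not from the answer):

```python
import re

def strip_html(html):
    """Rough text extraction via regex substitution (fragile by design)."""
    # Keep only the body content, if a <body> element is present.
    m = re.search(r"<body.*?>(.*)</body>", html, re.S | re.I)
    if m:
        html = m.group(1)
    # Drop script blocks, then self-closing tags, then remaining tags, then comments.
    html = re.sub(r"<script.*?</script>", "", html, flags=re.S | re.I)
    html = re.sub(r"<[^>]+/[ \t]*>", "", html)
    html = re.sub(r"</?[a-z]+.*?>", "", html, flags=re.S | re.I)
    html = re.sub(r"<!--.*?-->", "", html, flags=re.S)
    return html

page = "<html><body><p>Hello <b>world</b></p><script>var x=1;</script><br /><!-- c --></body></html>"
print(strip_html(page))  # prints "Hello world"
```

As the answer notes, this breaks down on edge cases (CDATA, stray angle brackets, malformed markup), so treat it as a starting point only.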

Matthew Scharley
A: 

If you're using PHP, try Simple HTML DOM, available at SourceForge.

Otherwise, Google html2text and you'll find a variety of implementations for different languages that basically use a series of regular expressions to strip out all the markup. Be careful here, because unclosed tags can sometimes be left in, as can special characters such as &amp; (the entity for &).

Also, watch out for comments and Javascript; I've found them particularly annoying to deal with in regular expressions, which is why I generally prefer to let a free parser do all the work for me.

Robert Elwell
+2  A: 

Contemplating doing this with regular expressions is daunting. Have you considered XSLT? The XPath expression to extract all of the text nodes in an XHTML document, minus script & style content, would be:

//body//text()[not(ancestor::script)][not(ancestor::style)]
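The same filtering idea, all text nodes minus anything with a script or style ancestor, can be approximated in Python with the standard library's HTMLParser (a sketch for reasonably well-formed pages; the TextExtractor class and sample markup are illustrative, and a real XPath query needs an XML/XSLT processor):

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects text nodes, skipping anything inside <script> or <style>."""
    def __init__(self):
        super().__init__()
        self.depth = 0      # nesting depth inside script/style elements
        self.chunks = []
    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.depth += 1
    def handle_endtag(self, tag):
        if tag in ("script", "style") and self.depth:
            self.depth -= 1
    def handle_data(self, data):
        if self.depth == 0 and data.strip():
            self.chunks.append(data.strip())

p = TextExtractor()
p.feed("<body><p>Hi</p><script>var x;</script><style>p{}</style><p>there</p></body>")
print(" ".join(p.chunks))  # prints "Hi there"
```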
Chris Noe
Simple and Elegant == Beautiful.
Pablo Fernandez
That would probably work, except that it would also return text (ie. code) from within <script> tags.
Kibbee
True enough, see edit. There may be other special cases, but that's the general idea.
Chris Noe
This will not work on real-world HTML pages, i.e. where the HTML is malformed, non-XHTML markup. Most XML parsers don't support "real-world HTML". That's why I've used HtmlAgilityPack (Google it) for exactly this type of task in the past.
Ash
Indeed, that is a consistent pain. Another option is to pre-process the page with tidy.
Chris Noe
+6  A: 

Remove javascript and CSS:

<(script|style).*?</\1>

Remove tags

<.*?>
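Those two patterns translate directly to Python's re module (my own sketch; note the DOTALL flag so `.` crosses newlines, and the backreference `\1` so `<script>` only closes with `</script>`, not `</style>`):

```python
import re

html = "<div><style>body { color: red }</style>Hello <b>world</b><script>alert(1)</script></div>"
# Remove javascript and CSS: \1 forces the closing tag to match the opening one.
html = re.sub(r"<(script|style).*?</\1>", "", html, flags=re.S | re.I)
# Remove remaining tags.
html = re.sub(r"<.*?>", "", html, flags=re.S)
print(html)  # prints "Hello world"
```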
nickf
+10  A: 

You can't really parse HTML with regular expressions. It's too complex. REs won't handle <![CDATA[ sections correctly at all. Further, some common HTML constructs, like a stray <text> in the middle of content, will render as proper text in a browser but might baffle a naive RE.

You'll be happier and more successful with a proper HTML parser. Python folks often use something like Beautiful Soup to parse HTML and strip out tags and scripts.


Also, browsers, by design, tolerate malformed HTML. So you will often find yourself trying to parse HTML which is clearly improper, but happens to work okay in a browser.

You might be able to parse bad HTML with RE's. All it requires is patience and hard work. But it's often simpler to use someone else's parser.
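With Beautiful Soup, the mentioned task looks roughly like this (a sketch, assuming the third-party beautifulsoup4 package is installed; the sample markup is mine):

```python
from bs4 import BeautifulSoup  # third-party: pip install beautifulsoup4

html = "<body><p>Keep this.</p><script>drop();</script><style>p{}</style></body>"
soup = BeautifulSoup(html, "html.parser")
# Remove <script> and <style> subtrees entirely, then pull the remaining text.
for node in soup(["script", "style"]):
    node.decompose()
print(soup.get_text(" ", strip=True))  # prints "Keep this."
```

The parser, not a regex, decides where tags begin and end, which is what makes this robust against the edge cases above.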

S.Lott
Definitely use a specialized HTML parser - don't roll your own! I just wanted to suggest Hpricot if you're using Ruby.
Neall
Why should <text> baffle a RE? Most would just be set up to ignore it, which is correct: it's text, not HTML. If it's because they parse HTML entities (a good idea, I suppose), you should be doing that on the text AFTER your REs, not on the HTML anyway...
Matthew Scharley
@monoxide: My point is not that it's impossible. My point is that you can save a lot of debugging of RE's by using someone else's parser that handles all the edge cases correctly.
S.Lott
+1, but I think the point about malformed HTML is irrelevant here: since we specifically aren't trying to parse the HTML, it's OK to have a regex which just pulls out anything that looks like a tag, regardless of structure.
annakata
@annakata: "pulling out anything which looks like a tag" more-or-less IS parsing. Because HTML is a language that is more complex than RE's are designed to describe, parsing is about the only way to find anything in HTML. RE's are always defeated except in trivial cases.
S.Lott
BeautifulSoup uses regexes to parse HTML, so it is easily fooled. http://stackoverflow.com/questions/94528/is-u003e-greater-than-sign-allowed-inside-an-html-element-attribute-value
J.F. Sebastian
A: 

I believe you can just do

document.body.innerText

Which will return the content of all text nodes in the document, visible or not.

[edit (olliej): sigh nevermind, this only works in Safari and IE, and i can't be bothered downloading a firefox nightly to see if it exists in trunk :-/ ]

olliej
Nope, that is undefined in FF3
Chris Noe
textContent is a standard equivalent
porneL
A: 

I use iMacros for Firefox for extracting stock quotes. It includes a useful general-purpose text extraction feature: https://addons.mozilla.org/en-US/firefox/addon/3863 (wiki: Text Extraction).

Jim2

A: 

Not sure if this page could help.

unigogo
A: 

The simplest way for simple HTML (example in Python):

import re
text = "<p>This is my> <strong>example</strong>HTML,<br /> containing tags</p>"
" ".join(t.strip() for t in re.findall(r"<[^>]+>|[^<]+", text) if '<' not in t)

Returns this:

'This is my> example HTML, containing tags'
David Avsajanishvili