views: 4809
answers: 22

This question is a lazy way of collecting examples of parsing HTML with a variety of languages and parsing libraries. Individual answers will be linked to from answers to questions about how to parse HTML with regexes, as a way of showing the right way to do things (similar to how I use "Can you provide some examples of why it is hard to parse XML and HTML with a regex?").

For the sake of consistency, I ask that the example be parsing an HTML file for the href in anchor tags. To make it easy to search this question, I ask that you follow this format:

language:
library:

<example code>

Please make the library name a link to its documentation. If you want to provide an example other than extracting links, please include a

purpose:

after the "library:".

Note: the tags have been changed to draw in other languages. Here is a history of the tags this post has had: c#, perl, python, ruby, vb.net, and parsing.

+6  A: 

language: Python
library: HTMLParser

#!/usr/bin/python

from HTMLParser import HTMLParser

class FindLinks(HTMLParser):
    def __init__(self):
        HTMLParser.__init__(self)

    def handle_starttag(self, tag, attrs):
        at = dict(attrs)
        if tag == 'a' and 'href' in at:
            print at['href']


find = FindLinks()

html = "<html><body>"
for link in ("foo", "bar", "baz"):
    html += '<a href="http://%s.com">%s</a>' % (link, link)
html += "</body></html>"

find.feed(html)
Chas. Owens
+10  A: 

language: Perl
library: HTML::Parser

#!/usr/bin/perl

use strict;
use warnings;

use HTML::Parser;

my $find_links = HTML::Parser->new(
    start_h => [
        sub {
            my ($tag, $attr) = @_;
            if ($tag eq 'a' and exists $attr->{href}) {
                print "$attr->{href}\n";
            }
        },
        "tag, attr"
    ]
);

my $html = join '',
    "<html><body>",
    (map { qq(<a href="http://$_.com">$_</a>) } qw/foo bar baz/),
    "</body></html>";

$find_links->parse($html);
Chas. Owens
Using LWP::Simple to download this page (as I do below in my Perl example) showed that you found <a> tags that didn't have hrefs (but had names), so we just want to check that there *is* an href before printing it.
Tanktalus
@tanktalus good catch
Chas. Owens
+10  A: 

language: Ruby
library: Hpricot

#!/usr/bin/ruby

require 'hpricot'

html = '<html><body>'
['foo', 'bar', 'baz'].each {|link| html += "<a href=\"http://#{link}.com\">#{link}</a>" }
html += '</body></html>'

doc = Hpricot(html)
doc.search('//a').each {|elm| puts elm.attributes['href'] }
Pesto
Humorous story: apt-get install libhpricot-ruby doesn't install Ruby if it isn't installed.
Chas. Owens
Sounds like it's time for a wishlist bug...
Telemachus
+4  A: 

language: Perl
library: XML::Twig

#!/usr/bin/perl
use strict;
use warnings;
use Encode ':all';

use LWP::Simple;
use XML::Twig;

#my $url = 'http://stackoverflow.com/questions/773340/can-you-provide-an-example-of-parsing-html-with-your-favorite-parser';
my $url = 'http://www.google.com';
my $content = get($url);
die "Couldn't fetch!" unless defined $content;

my $twig = XML::Twig->new();
$twig->parse_html($content);

my @hrefs = map {
    $_->att('href');
} $twig->get_xpath('//*[@href]');

print "$_\n" for @hrefs;

Caveat: this can produce wide-character errors with pages like this one (changing the URL to the commented-out one will trigger the error), but the HTML::Parser solution above doesn't share this problem.

Tanktalus
Nice, I use XML::Twig all the time and never realized there was a parse_html method.
Chas. Owens
+8  A: 

language: shell
library: lynx (well, it's not a library, but in shell every program is a kind of library)

lynx -dump -listonly http://news.google.com/
depesz
+1 for trying, +1 for a working solution, -1 for the solution not being generalizable to other tasks: net +1
Chas. Owens
well, the task was quite well defined - it had to extract links from "a" tags. :)
depesz
Yes, but it is defined as an example to show how to parse; I could just as easily have asked you to print all of the contents of <td> tags that had the class "phonenum".
Chas. Owens
I agree that this doesn't help with the generic question, but the specific question is likely to be a popular one, so it seems reasonable to me as a way to do it for a specific domain of the general problem.
Tanktalus
Yeah, he/she got an up-vote from me on this one because I really didn't expect a shell solution. I think the specific question has already been asked a bunch of times for different languages, so if you are looking for this specific example to be solved, you are better off searching SO for those questions.
Chas. Owens
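For what it's worth, the hypothetical task from this thread (printing the contents of <td> tags with class "phonenum") shows why a general parser pays off. A minimal sketch with Python's lxml; the class name comes from the comment above and the sample HTML is invented:

import lxml.html

html = ('<html><body><table>'
        '<tr><td class="phonenum">555-0100</td><td>other</td></tr>'
        '<tr><td class="phonenum">555-0199</td></tr>'
        '</table></body></html>')

# cssselect matches by tag and class, so this survives markup changes
# that would break a hand-rolled regex
tree = lxml.html.document_fromstring(html)
for td in tree.cssselect('td.phonenum'):
    print td.text_content()
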
+13  A: 

language: Python
library: BeautifulSoup

from BeautifulSoup import BeautifulSoup

html = "<html><body>"
for link in ("foo", "bar", "baz"):
    html += '<a href="http://%s.com">%s</a>' % (link, link)
html += "</body></html>"

soup = BeautifulSoup(html)
links = soup.findAll('a', href=True) # find <a> with a defined href attribute
print links

output:

[<a href="http://foo.com">foo</a>,
 <a href="http://bar.com">bar</a>,
 <a href="http://baz.com">baz</a>]

also possible:

for link in links:
    print link['href']

output:

http://foo.com
http://bar.com
http://baz.com
Paolo Bergantino
This is nice, but does BeautifulSoup provide a way of looking into the tags to get the attributes? *goes off to look at docs*
Chas. Owens
Yes. Edited to show.
Paolo Bergantino
The output in the first example is just the text representation of the matched links, they are actually objects to which you can do all kinds of fun stuff.
Paolo Bergantino
Yeah, I just read the docs, you just beat me to fixing the code. I did add the try/catch to prevent it from blowing up when href isn't there though. Apparently "'href' in link" doesn't work.
Chas. Owens
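For the record: a BeautifulSoup 3 Tag is not a dictionary, so "'href' in link" doesn't test attributes; link.get('href') (or link.has_key('href')) is the safe check. A minimal sketch:

from BeautifulSoup import BeautifulSoup

soup = BeautifulSoup('<a href="http://foo.com">foo</a><a name="bar">bar</a>')
for link in soup.findAll('a'):
    href = link.get('href')  # returns None instead of raising KeyError
    if href is not None:
        print href
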
be sure to use beautifulsoup < 3.1. see here for more info: http://www.crummy.com/software/BeautifulSoup/3.1-problems.html
Peteris Krumins
Hate this library, they should call it beautiful poop.
Pierreten
+4  A: 

Language: Perl
Library: HTML::Parser
Purpose: How can I remove unused, nested HTML span tags with a Perl regex?

runrig
Good, this is the sort of stuff I would like to see collected here.
Chas. Owens
+8  A: 

Language: Perl
Library: HTML::LinkExtor

The beauty of Perl is that you have modules for very specific tasks, like link extraction.

Whole program:

#!/usr/bin/perl -w
use strict;

use HTML::LinkExtor;
use LWP::Simple;

my $url     = 'http://www.google.com/';
my $content = get( $url );

my $p       = HTML::LinkExtor->new( \&process_link, $url, );
$p->parse( $content );

exit;

sub process_link {
    my ( $tag, %attr ) = @_;

    return unless $tag eq 'a';
    return unless defined $attr{ 'href' };

    print "- $attr{'href'}\n";
    return;
}

Explanation:

  • use strict - turns on "strict" mode, which eases potential debugging; not fully relevant to the example
  • use HTML::LinkExtor - loads the interesting module
  • use LWP::Simple - just a simple way to get some HTML for tests
  • my $url = 'http://www.google.com/' - which page we will be extracting URLs from
  • my $content = get( $url ) - fetches the page HTML
  • my $p = HTML::LinkExtor->new( \&process_link, $url ) - creates the LinkExtor object, giving it a reference to the function that will be used as a callback on every URL, and $url to use as the BASEURL for relative URLs
  • $p->parse( $content ) - pretty obvious I guess
  • exit - end of program
  • sub process_link - beginning of the process_link function
  • my ($tag, %attr) - get the arguments: the tag name and its attributes
  • return unless $tag eq 'a' - skip processing if the tag is not <a>
  • return unless defined $attr{'href'} - skip processing if the <a> tag doesn't have an href attribute
  • print "- $attr{'href'}\n"; - pretty obvious I guess :)
  • return; - finish the function

That's all.

depesz
Nice, but I think you are missing the point of the question: the example is there so that the code will be similar, not because I want the links. Think in more general terms. The goal is to provide people with the tools to use parsers instead of regexes.
Chas. Owens
It is possible that I missed something, but I read in the problem description: "For the sake of consistency, I ask that the example be parsing an HTML file for the href in anchor tags." If you'd asked for an example of parsing <td> tags, I would probably use HTML::TableExtract - basically, a specialized tool beats (in my opinion) a general tool.
depesz
Fine, find all span tags that have the class "to_understand_intent" that are inside of div tags whose class is "learn". Specialized tools are great, but they are just that: specialized. You will wind up needing to know the general tool one day. This is a question about the general tools, not specialized libraries that use those tools.
Chas. Owens
For this new request - of course HTML::Parser would be much better. But just saying "use HTML::Parser" is plain wrong. One should use the proper tool for a given task. For extracting hrefs I would say that using HTML::Parser is overkill. For extracting <td>s - as well. Asking "give me a general way to parse ..." is wrong because it assumes that there exists one tool (in a language) that's perfect for all cases. I personally parse HTML in at least 6 different ways, depending on what I need to do.
depesz
Look at the task again. The task was not to get the links in an HTML page; it was to demonstrate how your favorite parser works, using link extraction as an example. It was chosen because it is a simple task that involves finding the right tag and looking at a piece of data in it. It was also chosen because it is a common task. Because it is a common task, Perl has automated it for you, but that doesn't mean this question was asking for the automated answer.
Chas. Owens
@Chas. Owens: the task was specific; given that a solution exists on CPAN, there should be an example using it (as well as a more general HTML::Parser example). And it isn't fully automated; you have to filter it to just anchor tag href attributes - how to do so is worth showing in an example.
ysth
Shorter: HTML::LinkExtor->new( sub{ print $_[2] if $_[0] eq "a" } )->parse_file("sample.html")
ysth
@ysth, the task was chosen at random, I could have chosen anything. As I have stated several times, the purpose of this question is to collect examples of full parsers, not to solve the example with specialized libraries that use parsers. This answer would be fine if the question was "How do I extract links from HTML with Perl?", but the question is "Can you provide an example of parsing HTML with your favorite parser?"; therefore the task is to demonstrate a parser, not extract links.
Chas. Owens
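To make the span/div challenge from this thread concrete, here is a minimal sketch with Python's lxml; the class names come from the comment above and the sample HTML is invented:

import lxml.html

html = ('<html><body>'
        '<div class="learn"><span class="to_understand_intent">yes</span></div>'
        '<div class="other"><span class="to_understand_intent">no</span></div>'
        '</body></html>')

tree = lxml.html.document_fromstring(html)
# Note: @class="learn" is an exact string match and misses elements with
# multiple classes (e.g. class="learn big"); CSS selectors, or
# contains(concat(' ', @class, ' '), ' learn '), handle that case.
for span in tree.xpath('//div[@class="learn"]//span[@class="to_understand_intent"]'):
    print span.text
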
+13  A: 

Language: C#
Library: HtmlAgilityPack

class Program
{
    static void Main(string[] args)
    {
        var web = new HtmlWeb();
        var doc = web.Load("http://www.stackoverflow.com");

        // select every anchor that actually has an href attribute
        var nodes = doc.DocumentNode.SelectNodes("//a[@href]");

        foreach (var node in nodes)
        {
            Console.WriteLine(node.Attributes["href"].Value);
        }
    }
}
alexn
I was waiting for a C# answer, thanks.
Chas. Owens
+3  A: 

Language: JavaScript
Library: DOM

var links = document.links;
// index loop: for..in over an HTMLCollection also yields non-index properties
for (var i = 0; i < links.length; i++) {
    var href = links[i].href;
    if (href) console.debug(href);
}

(using firebug console.debug for output...)

Ward Werbrouck
Good use of the browser as a parser.
Chas. Owens
+15  A: 

Language: JavaScript
Library: jQuery

$.each($('a[href]'), function(){
    console.debug(this.href);
});

(using firebug console.debug for output...)

And loading any html page:

$.get('http://stackoverflow.com/', function(page){
    $(page).find('a[href]').each(function(){
        console.debug(this.href);
    });
});

Used another each function for this one, I think it's cleaner when chaining methods.

Ward Werbrouck
I just love jQuery.
macke
the most elegant solution by now
dfa
A little bit of cheating, but yes. ;)
Paolo Bergantino
added loading other html pages, so no more cheating ;)
Ward Werbrouck
The cheating was that I'm pretty sure the implication of the question was for server-side solutions. Even though you could run Javascript on the server but its not really the first thing you'd think of, which is why I said "a little bit" of cheating. :)
Paolo Bergantino
Well yes, if you look at it that way. :) But using javascript/jquery for parsing HTML feels very natural, it's perfect for stuff like this.
Ward Werbrouck
Using the browser as the parser is the ultimate parser. The DOM in a given browser *is* the document tree.
Chas. Owens
+1  A: 

Language: C#
Library: System.Xml (standard .NET)

using System.Collections.Generic;
using System.Xml;

public static void Main(string[] args)
{
    List<string> matches = new List<string>();

    XmlDocument xd = new XmlDocument();
    xd.LoadXml("<html>...</html>");

    FindHrefs(xd.FirstChild, matches);
}

static void FindHrefs(XmlNode xn, List<string> matches)
{
    if (xn.Attributes != null && xn.Attributes["href"] != null)
        matches.Add(xn.Attributes["href"].InnerXml);

    foreach (XmlNode child in xn.ChildNodes)
        FindHrefs(child, matches);
}
will this work if the HTML is not valid xml (e.g. unclosed img tags)?
Chas. Owens
Who writes HTML that isn't valid XML? Well, other than stackoverflow, I mean. :-P
Tanktalus
This is parsing XML, not HTML. However, it does work with valid XHTML...
Ward Werbrouck
@Tanktalus: Try the whole web. :)
Paolo Bergantino
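To make the objection concrete: a strict XML parser rejects an unclosed <img> outright, while an HTML parser repairs it. A minimal sketch with Python's lxml (the broken markup is invented):

from StringIO import StringIO
from lxml import etree
import lxml.html

broken = '<html><body><img src="a.png"><p>no closing tags</body></html>'

try:
    etree.parse(StringIO(broken))  # strict XML parsing fails here
except etree.XMLSyntaxError, e:
    print 'XML parser rejected it: %s' % e

# the HTML parser fixes up the same markup
print lxml.html.tostring(lxml.html.document_fromstring(broken))
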
+2  A: 

language: Python
library: lxml.html

import lxml.html

html = "<html><body>"
for link in ("foo", "bar", "baz"):
    html += '<a href="http://%s.com">%s</a>' % (link, link)
html += "</body></html>"

tree = lxml.html.document_fromstring(html)
for element, attribute, link, pos in tree.iterlinks():
    if attribute == "href":
        print link

lxml also has a CSS selector class for traversing the DOM, which can make using it very similar to using jQuery:

for a in tree.cssselect('a[href]'):
    print a.get('href')
Adam
Hmm, I am getting "ImportError: No module named html" when I try to run this, is there something I need besides python-lxml?
Chas. Owens
Ah, I have version 1.3.6 and that comes with 2.0 and later
Chas. Owens
Indeed. I can provide an example of using lxml.etree to do the job as well if you like? lxml.html is a bit more tolerant of broken HTML.
Adam
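For completeness, a minimal sketch of the lxml.etree variant mentioned above, using its HTML parser on the same test input as the answer (assuming lxml 2.0+):

from StringIO import StringIO
from lxml import etree

html = "<html><body>"
for link in ("foo", "bar", "baz"):
    html += '<a href="http://%s.com">%s</a>' % (link, link)
html += "</body></html>"

parser = etree.HTMLParser()  # tolerant HTML parser instead of strict XML
tree = etree.parse(StringIO(html), parser)
for a in tree.xpath('//a[@href]'):
    print a.get('href')
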
+5  A: 

Language: Java
Libraries: XOM, TagSoup

I've included intentionally malformed and inconsistent markup in this sample.

import java.io.IOException;

import nu.xom.Builder;
import nu.xom.Document;
import nu.xom.Element;
import nu.xom.Node;
import nu.xom.Nodes;
import nu.xom.ParsingException;
import nu.xom.ValidityException;

import org.ccil.cowan.tagsoup.Parser;
import org.xml.sax.SAXException;

public class HtmlTest {
    public static void main(final String[] args) throws SAXException, ValidityException, ParsingException, IOException {
        final Parser parser = new Parser();
        parser.setFeature(Parser.namespacesFeature, false);
        final Builder builder = new Builder(parser);
        final Document document = builder.build("<html><body><ul><li><a href=\"http://google.com\">google</li><li><a HREF=\"http://reddit.org\" target=\"_blank\">reddit</a></li><li><a name=\"nothing\">nothing</a><li></ul></body></html>", null);
        final Element root = document.getRootElement();
        final Nodes links = root.query("//a[@href]");
        for (int linkNumber = 0; linkNumber < links.size(); ++linkNumber) {
            final Node node = links.get(linkNumber);
            System.out.println(((Element) node).getAttributeValue("href"));
        }
    }
}

TagSoup adds an XML namespace referencing XHTML to the document by default. I've chosen to suppress that in this sample. Using the default behavior would require the call to root.query to include a namespace like so:

root.query("//xhtml:a[@href]", new nu.xom.XPathContext("xhtml", root.getNamespaceURI())
laz
Does this work for HTML 4 and HTML 5?
Chas. Owens
I'm sure either will work fine. TagSoup was made to parse whatever you can throw at it.
laz
+13  A: 

Language: Perl
Library: pQuery

use strict;
use warnings;
use pQuery;

my $html = join '',
    "<html><body>",
    (map { qq(<a href="http://$_.com">$_</a>) } qw/foo bar baz/),
    "</body></html>";

pQuery( $html )->find( 'a' )->each(
    sub {
        my $at = $_->getAttribute( 'href' );
        print "$at\n" if defined $at;
    }
);

/I3az/

draegtun
That's brilliant. Never knew about pQuery, but it looks very cool.
depesz
Can you search for 'a[@href]' or 'a[href]' as in jQuery? It would simplify the code, and quite likely be faster.
Ward Werbrouck
Here are some other stackoverflow questions with pQuery answers... http://stackoverflow.com/questions/713827/how-can-i-screen-scrape-with-perl/713846#713846 http://stackoverflow.com/questions/574199/how-do-i-extract-an-html-title-with-perl http://stackoverflow.com/questions/254345/how-can-i-extract-urls-from-a-web-page-in-perl/254506#254506 http://stackoverflow.com/questions/221091/how-can-i-extract-xml-of-a-website-and-save-in-a-file-using-perls-lwp/223662#223662
draegtun
@code-is-art: Unfortunately not yet... to quote the author from the docs: "The selector syntax is still very limited. (Single tags, IDs and classes only)". Check out the tests, because pQuery does have features that aren't in the documentation, e.g. say 'Number of <td> with "blah" content - ', pQuery('td:contains(blah)')->size;
draegtun
@depesz, @chas - I agree! But someone else doesn't, because it's been voted down a bit ;-(
draegtun
+4  A: 

Language: Ruby
Library: Nokogiri

#!/usr/bin/env ruby
require 'nokogiri'
require 'open-uri'

document = Nokogiri::HTML(open("http://google.com"))
document.css("html head title").first.content
=> "Google"
document.xpath("//title").first.content
=> "Google"
angryamoeba
+3  A: 

Language: Perl
Library: HTML::TreeBuilder

use strict;
use HTML::TreeBuilder;
use LWP::Simple;

my $content = get 'http://www.stackoverflow.com';
my $document = HTML::TreeBuilder->new->parse($content)->eof;

for my $a ($document->find('a')) {
    print $a->attr('href'), "\n" if $a->attr('href');
}
dfa
my original code was less cluttered :)
dfa
It was also incorrect: you must call $document->eof if you use $document->parse($html), and it would print empty lines when href wasn't set.
Chas. Owens
reverted to my original code; ->eof() is useless in this sample; also checking for href presence is pointless in this example
dfa
Is there a reason you don't want to use new_from_content?
Chas. Owens
+3  A: 

Language: PHP
Library: SimpleXML (and DOM)

<?php
$page = new DOMDocument();
$page->strictErrorChecking = false;
$page->loadHTMLFile('http://stackoverflow.com/questions/773340');
$xml = simplexml_import_dom($page);

$links = $xml->xpath('//a[@href]');
foreach($links as $link)
    echo $link['href']."\n";
Ward Werbrouck
I don't know XPath, does '//a[@href]' give you all a tags that have an href attribute set?
Chas. Owens
Yes it does, and I don't know where my first reply has gone...
Ward Werbrouck
+3  A: 

Language: Objective-C
Library: libxml2 + Matt Gallagher's libxml2 wrappers + Ben Copsey's ASIHTTPRequest

ASIHTTPRequest *request = [[ASIHTTPRequest alloc] initWithURL:[NSURL URLWithString:@"http://stackoverflow.com/questions/773340"]];
[request start];
NSError *error = [request error];
if (!error) {
    NSData *response = [request responseData];
    NSLog(@"Data: %@", [[self query:@"//a[@href]" withResponse:response] description]);
    [request release];
}
else 
    @throw [NSException exceptionWithName:@"kMyHTTPRequestFailed" reason:@"Request failed!" userInfo:nil];

...

- (id) query:(NSString *)xpathQuery withResponse:(NSData *)resp {
    NSArray *nodes = PerformHTMLXPathQuery(resp, xpathQuery);
    if (nodes != nil)
        return nodes;
    return nil;
}
Alex Reynolds
+6  A: 

Language: Common Lisp
Library: Closure HTML, Closure XML, CL-WHO

(shown using the DOM API, without using the XPath or STP APIs)

(defvar *html*
  (who:with-html-output-to-string (stream)
    (:html
     (:body (loop
               for site in (list "foo" "bar" "baz")
               do (who:htm (:a :href (format nil "http://~A.com/" site))))))))

(defvar *dom*
  (chtml:parse *html* (cxml-dom:make-dom-builder)))

(loop
   for tag across (dom:get-elements-by-tag-name *dom* "a")
   collect (dom:get-attribute tag "href"))
=> 
("http://foo.com/" "http://bar.com/" "http://baz.com/")
dmitry_vk
does collect or dom:get-attribute correctly handle tags that do not have href set?
Chas. Owens
Depending on the definition of correctness. In the example as shown, empty strings will be collected for "a" tags with no "href" attribute. If the loop is rewritten as (loop for tag across (dom:get-elements-by-tag-name *dom* "a") when (string/= (dom:get-attribute tag "href") "") collect (dom:get-attribute tag "href")) then only non-empty "href"s will be collected.
dmitry_vk
Actually, that should be not when (string/= (dom:get-attribute tag "href") "") but when (dom:has-attribute tag "href").
dmitry_vk
+2  A: 

Language: Clojure
Library: Enlive (a selector-based (à la CSS) templating and transformation system for Clojure)


Selector expression:

(def test-select
     (html/select (html/html-resource (java.io.StringReader. test-html)) [:a]))

Now we can do the following at the REPL (I've added line breaks in test-select):

user> test-select
({:tag :a, :attrs {:href "http://foo.com/"}, :content ["foo"]}
 {:tag :a, :attrs {:href "http://bar.com/"}, :content ["bar"]}
 {:tag :a, :attrs {:href "http://baz.com/"}, :content ["baz"]})
user> (map #(get-in % [:attrs :href]) test-select)
("http://foo.com/" "http://bar.com/" "http://baz.com/")

You'll need the following to try it out:

Preamble:

(require '[net.cgrand.enlive-html :as html])

Test HTML:

(def test-html
     (apply str (concat ["<html><body>"]
                        (for [link ["foo" "bar" "baz"]]
                          (str "<a href=\"http://" link ".com/\">" link "</a>"))
                        ["</body></html>"])))
Michał Marczyk
Not sure if I'd call Enlive a "parser", but I'd certainly use it in place of one, so -- here's an example.
Michał Marczyk
A: 

Language: PHP
Library: DOM

<?php
$doc = new DOMDocument();
$doc->strictErrorChecking = false;
$doc->loadHTMLFile('http://stackoverflow.com/questions/773340');
$xpath = new DOMXpath($doc);

$links = $xpath->query('//a[@href]');
for ($i = 0; $i < $links->length; $i++)
    echo $links->item($i)->getAttribute('href'), "\n";

Sometimes it's useful to put the @ symbol before $doc->loadHTMLFile to suppress warnings from invalid HTML parsing.

Entea
Almost identical to my PHP version ( http://stackoverflow.com/questions/773340/can-you-provide-an-example-of-parsing-html-with-your-favorite-parser/774853#774853 ). You don't need the getAttribute call.
Ward Werbrouck