views: 23632
answers: 9

I would like to extract the structure of an HTML document, so the tags are more important than the content. Ideally, the parser would also be able to cope reasonably well with badly-formed HTML.

Anyone know of a reliable and efficient parser?

+1  A: 

I've used ZetaHtmlTidy in the past to load random websites and then query various parts of the content with XPath (e.g. /html/body//p[@class='textblock']). It worked well, but there were some exceptional sites it had problems with, so I don't know if it's the absolute best solution.
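
For the XPath step, here is a minimal sketch of the kind of query described above, run against a System.Xml document (the ZetaHtmlTidy cleanup call itself is omitted, since its exact API isn't shown in this thread; the class name and sample input are illustrative only):

    using System;
    using System.Xml;

    class XPathStructureDemo {
        static void Main() {
            // Assume 'cleanedHtml' holds the tidied XHTML produced by a cleanup pass
            // (e.g. ZetaHtmlTidy); the cleanup call itself is not shown here.
            string cleanedHtml = "<html><body><p class='textblock'>Hello</p></body></html>";

            var doc = new XmlDocument();
            doc.LoadXml(cleanedHtml);

            // The same kind of query as in the answer above.
            foreach (XmlNode p in doc.SelectNodes("/html/body//p[@class='textblock']"))
                Console.WriteLine(p.InnerText);
        }
    }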

Rahul
+68  A: 

Html Agility Pack

This is an agile HTML parser that builds a read/write DOM and supports plain XPATH or XSLT (you don't actually HAVE to understand XPATH or XSLT to use it, don't worry...). It is a .NET code library that allows you to parse "out of the web" HTML files. The parser is very tolerant of "real world" malformed HTML. The object model is very similar to what System.Xml proposes, but for HTML documents (or streams).
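
A minimal usage sketch (loading a string of messy HTML and pulling out tags with XPath), based on the Html Agility Pack's standard HtmlDocument/HtmlNode API; the input string is just an illustration:

    using System;
    using HtmlAgilityPack;

    class HapDemo {
        static void Main() {
            var doc = new HtmlDocument();
            doc.LoadHtml("<html><body><a href='http://example.com'>x</a><p>unclosed");

            // SelectNodes returns null when nothing matches, so guard against that.
            var links = doc.DocumentNode.SelectNodes("//a[@href]");
            if (links != null)
                foreach (HtmlNode link in links)
                    Console.WriteLine(link.GetAttributeValue("href", ""));
        }
    }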

Mark Cidade
Excellent, that sounds more appropriate than Zeta
Rahul
The latest beta of HTML Agility Pack looks promising, too.
John Kaster
+1  A: 

Take a look at Chris Lovett's SGML Reader inside DasBlog. It'll turn HTML into an XML document and let you get the elements that way.
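
Roughly, the SgmlReader approach looks like this - a sketch assuming the SgmlReader class from Chris Lovett's SgmlReaderDll (property names may vary slightly between versions):

    using System;
    using System.IO;
    using System.Xml;
    using Sgml;

    class SgmlDemo {
        static void Main() {
            string html = "<html><body><p>badly <b>formed</body>";

            // SgmlReader is an XmlReader that repairs the markup as it reads.
            var sgmlReader = new SgmlReader();
            sgmlReader.DocType = "HTML";
            sgmlReader.InputStream = new StringReader(html);

            var doc = new XmlDocument();
            doc.Load(sgmlReader);

            Console.WriteLine(doc.DocumentElement.Name);
        }
    }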

Scott Hanselman
+3  A: 

The Html Agility Pack has been mentioned already - if you are going for speed, you might also want to check out the Majestic-12 HTML parser. Its handling is rather clunky, but it delivers really fast parsing.
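
Since the question is about structure rather than content, a bare Majestic-12 parse loop that just emits tag names is quite short; this sketch uses the same HTMLparser/HTMLchunk types that appear in the longer answer further down, with an illustrative input string:

    using System;
    using System.Text;
    using Majestic12;

    class M12Demo {
        static void Main() {
            byte[] html = Encoding.UTF8.GetBytes("<html><body><p>text<br></body></html>");

            var parser = new HTMLparser();
            parser.Init(html);

            HTMLchunk chunk;
            while ((chunk = parser.ParseNext()) != null) {
                // Print only the structural chunks and ignore the text.
                if (chunk.oType == HTMLchunkType.OpenTag || chunk.oType == HTMLchunkType.CloseTag)
                    Console.WriteLine("{0}: {1}", chunk.oType, chunk.sTag);
            }
        }
    }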

Grimtron
+2  A: 

Look at the HtmlAgilityPack. It's an open-source library that parses HTML and will fix errors (e.g. unclosed tags). Once a document is loaded, you can use XPath via the XPathNavigator class to select the specific content you want. Additionally, it can convert HTML to well-formed X(HT)ML. At work, this library is a VERY critical part of our software: we run hundreds of thousands of ugly third-party HTML documents and XML feeds through it daily to ensure they're well-formed before we attempt to parse data out of them.
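
A sketch of the two capabilities mentioned above, assuming the standard Html Agility Pack API (HtmlDocument implements IXPathNavigable, and OptionOutputAsXml switches the writer to well-formed XML output); the sample markup is illustrative only:

    using System;
    using System.IO;
    using System.Xml.XPath;
    using HtmlAgilityPack;

    class HapXPathDemo {
        static void Main() {
            var doc = new HtmlDocument();
            doc.LoadHtml("<html><body><div id='main'>Hello<br></div>");

            // XPath via an XPathNavigator.
            XPathNavigator nav = doc.CreateNavigator();
            XPathNavigator main = nav.SelectSingleNode("//div[@id='main']");
            Console.WriteLine(main != null ? main.Value : "(not found)");

            // Emit the repaired document as well-formed XML.
            doc.OptionOutputAsXml = true;
            var sw = new StringWriter();
            doc.Save(sw);
            Console.WriteLine(sw.ToString());
        }
    }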

Chris
A: 

Hey folks, HTML Agility Pack is distributed under the Creative Commons Attribution-ShareAlike license. How do you deal with this license in your commercial closed-source software?

Oleg Yaroshevych
You may want to ask that as a separate question (outside of this thread).
C. Dragon 76
HTML Agility Pack now uses the Microsoft Public License.
Tom Clarkson
+7  A: 

I've written some code that provides "LINQ to HTML" functionality, and I thought I would share it here. It is based on Majestic-12: it takes the Majestic-12 results and produces LINQ to XML elements. At that point you can use all your LINQ to XML tools against the HTML. As an example:

        IEnumerable<XNode> auctionNodes = Majestic12ToXml.Majestic12ToXml.ConvertNodesToXml(byteArrayOfAuctionHtml);

        foreach (XElement anchorTag in auctionNodes.OfType<XElement>().DescendantsAndSelf("a")) {

            if (anchorTag.Attribute("href") == null)
                continue;

            Console.WriteLine(anchorTag.Attribute("href").Value);
        }

I wanted to use Majestic-12 because I know it has a lot of built-in knowledge about the HTML found in the wild. What I've found, though, is that mapping the Majestic-12 results to something LINQ will accept as XML requires additional work. The code I'm including does a lot of this cleansing, but as you use it you will find pages that get rejected, and you'll need to fix up the code to address that. When an exception is thrown, check exception.Data["source"], as it is likely set to the HTML tag that caused the exception. Handling real-world HTML gracefully is at times not trivial...

So now that expectations are realistically low, here's the code :)

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Majestic12;
using System.IO;
using System.Xml.Linq;
using System.Diagnostics;
using System.Text.RegularExpressions;

namespace Majestic12ToXml {
public class Majestic12ToXml {

    static public IEnumerable<XNode> ConvertNodesToXml(byte[] htmlAsBytes) {

        HTMLparser parser = OpenParser();
        parser.Init(htmlAsBytes);

        XElement currentNode = new XElement("document");

        HTMLchunk m12chunk = null;

        int xmlnsAttributeIndex = 0;
        string originalHtml = "";

        while ((m12chunk = parser.ParseNext()) != null) {

            try {

                Debug.Assert(!m12chunk.bHashMode);  // OpenParser() disables hash mode, so sParams/sValues are populated

                XNode newNode = null;
                XElement newNodesParent = null;

                switch (m12chunk.oType) {
                    case HTMLchunkType.OpenTag:

                        // Tags are added as a child to the current tag, 
                        // except when the new tag implies the closure of 
                        // some number of ancestor tags.

                        newNode = ParseTagNode(m12chunk, originalHtml, ref xmlnsAttributeIndex);

                        if (newNode != null) {
                            currentNode = FindParentOfNewNode(m12chunk, originalHtml, currentNode);

                            newNodesParent = currentNode;

                            newNodesParent.Add(newNode);

                            currentNode = newNode as XElement;
                        }

                        break;

                    case HTMLchunkType.CloseTag:

                        if (m12chunk.bEndClosure) {

                            newNode = ParseTagNode(m12chunk, originalHtml, ref xmlnsAttributeIndex);

                            if (newNode != null) {
                                currentNode = FindParentOfNewNode(m12chunk, originalHtml, currentNode);

                                newNodesParent = currentNode;
                                newNodesParent.Add(newNode);
                            }
                        }
                        else {
                            XElement nodeToClose = currentNode;

                            string m12chunkCleanedTag = CleanupTagName(m12chunk.sTag, originalHtml);

                            while (nodeToClose != null && nodeToClose.Name.LocalName != m12chunkCleanedTag)
                                nodeToClose = nodeToClose.Parent;

                            if (nodeToClose != null)
                                currentNode = nodeToClose.Parent;

                            Debug.Assert(currentNode != null);
                        }

                        break;

                    case HTMLchunkType.Script:

                        newNode = new XElement("script", "REMOVED");
                        newNodesParent = currentNode;
                        newNodesParent.Add(newNode);
                        break;

                    case HTMLchunkType.Comment:

                        newNodesParent = currentNode;

                        if (m12chunk.sTag == "!--")
                            newNode = new XComment(m12chunk.oHTML);
                        else if (m12chunk.sTag == "![CDATA[")
                            newNode = new XCData(m12chunk.oHTML);
                        else
                            throw new Exception("Unrecognized comment sTag");

                        newNodesParent.Add(newNode);

                        break;

                    case HTMLchunkType.Text:

                        currentNode.Add(m12chunk.oHTML);
                        break;

                    default:
                        break;
                }
            }
            catch (Exception e) {
                var wrappedE = new Exception("Error using Majestic12.HTMLChunk, reason: " + e.Message, e);

                // the original html is copied for tracing/debugging purposes
                originalHtml = new string(htmlAsBytes.Skip(m12chunk.iChunkOffset)
                    .Take(m12chunk.iChunkLength)
                    .Select(B => (char)B).ToArray()); 

                wrappedE.Data.Add("source", originalHtml);

                throw wrappedE;
            }
        }

        while (currentNode.Parent != null)
            currentNode = currentNode.Parent;

        return currentNode.Nodes();
    }

    static XElement FindParentOfNewNode(Majestic12.HTMLchunk m12chunk, string originalHtml, XElement nextPotentialParent) {

        string m12chunkCleanedTag = CleanupTagName(m12chunk.sTag, originalHtml);

        XElement discoveredParent = null;

        // Get a list of all ancestors
        List<XElement> ancestors = new List<XElement>();
        XElement ancestor = nextPotentialParent;
        while (ancestor != null) {
            ancestors.Add(ancestor);
            ancestor = ancestor.Parent;
        }

        // Check if the new tag implies a previous tag was closed.
        if ("form" == m12chunkCleanedTag) {

            discoveredParent = ancestors
                .Where(XE => m12chunkCleanedTag == XE.Name)
                .Take(1)
                .Select(XE => XE.Parent)
                .FirstOrDefault();
        }
        else if ("td" == m12chunkCleanedTag) {

            discoveredParent = ancestors
                .TakeWhile(XE => "tr" != XE.Name)
                .Where(XE => m12chunkCleanedTag == XE.Name)
                .Take(1)
                .Select(XE => XE.Parent)
                .FirstOrDefault();
        }
        else if ("tr" == m12chunkCleanedTag) {

            discoveredParent = ancestors
                .TakeWhile(XE => !("table" == XE.Name
                                    || "thead" == XE.Name
                                    || "tbody" == XE.Name
                                    || "tfoot" == XE.Name))
                .Where(XE => m12chunkCleanedTag == XE.Name)
                .Take(1)
                .Select(XE => XE.Parent)
                .FirstOrDefault();
        }
        else if ("thead" == m12chunkCleanedTag
                  || "tbody" == m12chunkCleanedTag
                  || "tfoot" == m12chunkCleanedTag) {


            discoveredParent = ancestors
                .TakeWhile(XE => "table" != XE.Name)
                .Where(XE => m12chunkCleanedTag == XE.Name)
                .Take(1)
                .Select(XE => XE.Parent)
                .FirstOrDefault();
        }

        return discoveredParent ?? nextPotentialParent;
    }

    static string CleanupTagName(string originalName, string originalHtml) {

        string tagName = originalName;

        tagName = tagName.TrimStart(new char[] { '?' });  // for nodes <?xml >

        if (tagName.Contains(':'))
            tagName = tagName.Substring(tagName.LastIndexOf(':') + 1);

        return tagName;
    }

    static readonly Regex _startsAsNumeric = new Regex(@"^[0-9]", RegexOptions.Compiled);

    static bool TryCleanupAttributeName(string originalName, ref int xmlnsIndex, out string result) {

        result = null;
        string attributeName = originalName;

        if (string.IsNullOrEmpty(originalName))
            return false;

        if (_startsAsNumeric.IsMatch(originalName))
            return false;

        //
        // transform xmlns attributes so they don't actually create any XML namespaces
        //
        if (attributeName.ToLower().Equals("xmlns")) {

            attributeName = "xmlns_" + xmlnsIndex.ToString(); ;
            xmlnsIndex++;
        }
        else {
            if (attributeName.ToLower().StartsWith("xmlns:")) {
                attributeName = "xmlns_" + attributeName.Substring("xmlns:".Length);
            }   

            //
            // trim trailing \"
            //
            attributeName = attributeName.TrimEnd(new char[] { '\"' });

            attributeName = attributeName.Replace(":", "_");
        }

        result = attributeName;

        return true;
    }

    static Regex _weirdTag = new Regex(@"^<!\[.*\]>$");       // matches "<![if !supportEmptyParas]>"
    static Regex _aspnetPrecompiled = new Regex(@"^<%.*%>$"); // matches "<%@ ... %>"
    static Regex _shortHtmlComment = new Regex(@"^<!-.*->$"); // matches "<!-Extra_Images->"

    static XElement ParseTagNode(Majestic12.HTMLchunk m12chunk, string originalHtml, ref int xmlnsIndex) {

        if (string.IsNullOrEmpty(m12chunk.sTag)) {

            if (m12chunk.sParams.Length > 0 && m12chunk.sParams[0].ToLower().Equals("doctype"))
                return new XElement("doctype");

            if (_weirdTag.IsMatch(originalHtml))
                return new XElement("REMOVED_weirdBlockParenthesisTag");

            if (_aspnetPrecompiled.IsMatch(originalHtml))
                return new XElement("REMOVED_ASPNET_PrecompiledDirective");

            if (_shortHtmlComment.IsMatch(originalHtml))
                return new XElement("REMOVED_ShortHtmlComment");

            // Nodes like "<br <br>" will end up with a m12chunk.sTag==""...  We discard these nodes.
            return null;
        }

        string tagName = CleanupTagName(m12chunk.sTag, originalHtml);

        XElement result = new XElement(tagName);

        List<XAttribute> attributes = new List<XAttribute>();

        for (int i = 0; i < m12chunk.iParams; i++) {

            if (m12chunk.sParams[i] == "<!--") {

                // an HTML comment was embedded within a tag.  This comment and its contents
                // will be interpreted as attributes by Majestic-12... skip this attributes
                for (; i < m12chunk.iParams; i++) {

                    if (m12chunk.sTag == "--" || m12chunk.sTag == "-->")
                        break;
                }

                continue;
            }

            if (m12chunk.sParams[i] == "?" && string.IsNullOrEmpty(m12chunk.sValues[i]))
                continue;

            string attributeName = m12chunk.sParams[i];

            if (!TryCleanupAttributeName(attributeName, ref xmlnsIndex, out attributeName))
                continue;

            attributes.Add(new XAttribute(attributeName, m12chunk.sValues[i]));
        }

        // If attributes are duplicated with different values, we complain.
        // If attributes are duplicated with the same value, we remove all but 1.
        var duplicatedAttributes = attributes.GroupBy(A => A.Name).Where(G => G.Count() > 1);

        foreach (var duplicatedAttribute in duplicatedAttributes) {

            if (duplicatedAttribute.GroupBy(DA => DA.Value).Count() > 1)
                throw new Exception("Attribute value was given different values");

            attributes.RemoveAll(A => A.Name == duplicatedAttribute.Key);
            attributes.Add(duplicatedAttribute.First());
        }

        result.Add(attributes);

        return result;
    }

    static HTMLparser OpenParser() {
        HTMLparser oP = new HTMLparser();

        // The code+comments in this function are from the Majestic-12 sample documentation.

        // ...

        // This is optional, but if you want high performance then you may
        // want to set chunk hash mode to FALSE. This would result in tag params
        // being added to string arrays in HTMLchunk object called sParams and sValues, with number
        // of actual params being in iParams. See code below for details.
        //
        // When TRUE (the default), tag params will be added to the hashtable HTMLchunk.oParams
        oP.SetChunkHashMode(false);

        // if you set this to true then original parsed HTML for given chunk will be kept - 
        // this will reduce performance somewhat, but may be desirable in some cases where
        // reconstruction of HTML may be necessary
        oP.bKeepRawHTML = false;

        // if set to true (it is false by default), then entities will be decoded: this is essential
        // if you want strings that contain the final representation of the data in the HTML; however,
        // be aware that if you want to put such strings back into output HTML you will need to
        // re-encode the entities, or the resulting string may fail later
        oP.bDecodeEntities = true;

        // we have the option to keep most entities as-is - only replace stuff like &nbsp;
        // this is called Mini Entities mode - it is handy when the HTML will need
        // to be re-created after it was parsed, though in that case the entities really
        // should not be decoded at all
        oP.bDecodeMiniEntities = true;

        if (!oP.bDecodeEntities && oP.bDecodeMiniEntities)
            oP.InitMiniEntities();

        // if set to true, then in case of Comments and SCRIPT tags the data set to oHTML will be
        // extracted BETWEEN those tags, rather than include complete RAW HTML that includes tags too
        // this only works if auto extraction is enabled
        oP.bAutoExtractBetweenTagsOnly = true;

        // if true then comments will be extracted automatically
        oP.bAutoKeepComments = true;

        // if true then scripts will be extracted automatically: 
        oP.bAutoKeepScripts = true;

        // if this option is true then whitespace before the start of a tag will be compressed to a single
        // space character (" "); if false then the full whitespace before the tag will be returned (slower).
        // You may only want to set it to false if you need the exact whitespace between tags; otherwise it is
        // just a waste of CPU cycles
        oP.bCompressWhiteSpaceBeforeTag = true;

        // if true (default) then tags with attributes marked as CLOSED (/ at the end) will automatically
        // be treated as open tags - this is no good for XML parsing, but I keep it for backwards
        // compatibility with my stuff, as it makes it easier to avoid checking for the same tag being
        // both closed and open
        oP.bAutoMarkClosedTagsWithParamsAsOpen = false;

        return oP;
    }
}
}
Frank Schwieterman
BTW, HtmlAgilityPack has worked well for me in the past; I just prefer LINQ.
Frank Schwieterman
+1  A: 
    private Dictionary<String, Color> autoWordsList;
    private string[] wordsDarkRed = { "html", "head", "BIG" };
    public Form1()
    {
        InitializeComponent();
        this.autoWordsList = new Dictionary<string, Color>();
        foreach (string s in wordsDarkRed) autoWordsList.Add(s, Color.DarkRed);
    }
Alynuzzu
+5  A: 

I found a project called Fizzler that takes a jQuery/Sizzle approach to selecting HTML elements. It's based on the HTML Agility Pack. It's currently in beta and only supports a subset of CSS selectors, but it's pretty damn cool and refreshing to use CSS selectors instead of nasty XPath.

http://code.google.com/p/fizzler/
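
For illustration, here is roughly what Fizzler usage looks like on top of the Agility Pack - a sketch assuming the QuerySelectorAll extension from the Fizzler.Systems.HtmlAgilityPack namespace (the beta API at the time may have differed), with illustrative markup:

    using System;
    using Fizzler.Systems.HtmlAgilityPack;
    using HtmlAgilityPack;

    class FizzlerDemo {
        static void Main() {
            var doc = new HtmlDocument();
            doc.LoadHtml("<div class='content'><p class='textblock'>Hi</p><p>skip</p></div>");

            // A CSS selector instead of an XPath expression.
            foreach (HtmlNode node in doc.DocumentNode.QuerySelectorAll("p.textblock"))
                Console.WriteLine(node.InnerText);
        }
    }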

Rob Volk
Thank you, this looks interesting! I've been surprised, given jQuery's popularity, that it has been so hard to find a C# project inspired by it. Now if only I could find something where document manipulation and more advanced traversal were also part of the package... :)
Funka
I just used this today and I have to say, it is very easy to use if you know jQuery.
Chi Chan