views: 494
answers: 6
I have to parse a series of web pages in order to import data into an application. Each type of web page provides the same kind of data. The problem is that the HTML of each page is different, so the location of the data varies. Another problem is that the HTML code is poorly formatted, making it impossible to use an XML-like parser.

So far, the best strategy I can think of is to define a template for each kind of page, like:

Template A:

<html>
...
  <tr><td>Table column that is missing a td 
      <td> Another table column</td></tr>
  <tr><td>$data_item_1$</td>
...
</html>

Template B:

<html>
...
  <ul><li>Yet another poorly formatted page <li>$data_item_1$</td></tr>
...
</html>

This way I would only need a single parser for all the pages; it would compare each page with its template and retrieve $data_item_1$, $data_item_2$, etc. Still, it is going to be a lot of work. Can you think of any simpler solution? Is there any library that can help?
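The template idea can be sketched by compiling each template into a regular expression: escape the literal HTML, then turn each placeholder into a capture group. This is a minimal, hypothetical sketch; the template and page strings below are made up for illustration, and real pages would need looser matching between placeholders.

```python
import re

def template_to_pattern(template):
    """Turn a template with $data_item_N$ placeholders into a regex.

    The literal HTML is escaped, then each (escaped) placeholder is
    replaced with a non-greedy capture group.
    """
    escaped = re.escape(template)
    return re.sub(r"\\\$data_item_\d+\\\$", "(.*?)", escaped)

# Hypothetical template and page for illustration only.
template = "<tr><td>$data_item_1$</td><td>$data_item_2$</td></tr>"
page = "<tr><td>Alice</td><td>42</td></tr>"

match = re.search(template_to_pattern(template), page)
extracted = match.groups()  # ("Alice", "42")
```

One parser then serves every page type: load the matching template, build its pattern once, and run it over the page source.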

Thanks

+8  A: 

You can pass the page's source through Tidy to get a valid page. You can find Tidy here. Tidy has bindings for a lot of programming languages. After you've done this, you can use your favorite parser/content-extraction technique.

Geo
I completely agree. First pass it through Tidy.
Matt Refghi
Some Tidy wrappers are available here: http://users.rcn.com/creitzel/tidy.html
Matt Refghi
+2  A: 

I'd recommend the Html Agility Pack. It can work with poorly structured HTML while giving you XML-like selection using XPath. You would still have to template items, or select using different queries per page and analyze the results, but it will get you past the poor-structure hump.

Pat
This is definitely a great tool and worth looking into. The full source code is also included with plenty of examples.
Rich
+1  A: 

As mentioned here and in other SO answers before, Beautiful Soup can parse weird HTML.

Beautiful Soup is a Python HTML/XML parser designed for quick turnaround projects like screen-scraping. Three features make it powerful:

  1. Beautiful Soup won't choke if you give it bad markup. It yields a parse tree that makes approximately as much sense as your original document. This is usually good enough to collect the data you need and run away.
  2. Beautiful Soup provides a few simple methods and Pythonic idioms for navigating, searching, and modifying a parse tree: a toolkit for dissecting a document and extracting what you need. You don't have to create a custom parser for each application.
  3. Beautiful Soup automatically converts incoming documents to Unicode and outgoing documents to UTF-8. You don't have to think about encodings, unless the document doesn't specify an encoding and Beautiful Soup can't autodetect one. Then you just have to specify the original encoding.

Beautiful Soup parses anything you give it, and does the tree traversal stuff for you. You can tell it "Find all the links", or "Find all the links of class externalLink", or "Find all the links whose urls match "foo.com", or "Find the table heading that's got bold text, then give me that text."
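As a small illustration of the above, Beautiful Soup happily parses markup like the question's Template B, where list items never get closing tags. The snippet below is a hypothetical example modeled on that template; it uses Python's built-in `html.parser` backend, so no strict well-formedness is required.

```python
from bs4 import BeautifulSoup

# Deliberately malformed HTML, modeled on the question's Template B:
# the <li> elements are never explicitly closed.
html = "<ul><li>Yet another poorly formatted page <li>first item <li>second item</ul>"

soup = BeautifulSoup(html, "html.parser")

# Beautiful Soup still sees all three list items.
items = soup.find_all("li")
```

From here, `get_text()` on each tag (or CSS-style selectors via `soup.select`) pulls out the data without writing a custom parser per page.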

gimel
Beat me by 22 seconds :-(
S.Lott
A: 

Use an HTML5 parser like html5lib.

Unlike HTML Tidy, this will give you error handling very close to what browsers do.

porneL
A: 

There are a couple of C#-specific threads on this, like http://stackoverflow.com/questions/100358/looking-for-c-html-parser/624410#624410.

Frank Schwieterman
A: 

Depending on what data you need to extract, regular expressions might be an option. I know a lot of people will shudder at the thought of using regexes on structured data, but the plain fact is (as you have discovered) that a lot of HTML isn't actually well structured and can be very hard to parse.

I had a similar problem to yours, but in my case I only wanted one specific piece of data from the page, which was easy to identify without parsing the HTML, so a regex worked very nicely.
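A sketch of that approach, with a made-up page and target value: anchor the pattern on surrounding text rather than on the tag structure, and allow stray tags in between so sloppy markup doesn't break the match.

```python
import re

# Hypothetical messy markup; note the unclosed <b> tag.
html = "<b>Price:<b> $19.99 <br>In stock"

# Anchor on the label "Price:", skip any stray tags and whitespace,
# then capture the dollar amount.
m = re.search(r"Price:\s*(?:<[^>]*>\s*)*\$(\d+\.\d{2})", html)
price = m.group(1) if m else None  # "19.99"
```

This works well precisely because it ignores the document structure; it falls apart if you need many fields or if the surrounding text varies, which is where the parser-based answers above earn their keep.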

Steve Haigh