tags:
views: 2436
answers: 3

Hello!

I am brand new to python, and I need some help with the syntax for finding and iterating through html tags using lxml. Here are the use-cases I am dealing with:

  • HTML file is fairly well formed (but not perfect). Has multiple tables on screen, one containing a set of search results, and one each for a header and footer. Each result row contains a link for the search result detail.

1) I need to find the middle table with the search result rows. This one I was able to figure out:

self.mySearchTables = self.mySearchTree.findall(".//table")
self.myResultRows = self.mySearchTables[1].findall(".//tr")

2) I need to find the links contained in this table. This is where I'm getting stuck:

for searchRow in self.myResultRows:
    searchLink = patentRow.findall(".//a")

It doesn't seem to actually locate the link elements.

3) I need the plain text of the link. I imagine it would be something like searchLink.text if I actually got the link elements in the first place.

Finally, in the actual API reference for lxml, I wasn't able to find documentation for the find and findall calls. I gleaned these from bits of code I found on Google. Am I missing something about how to effectively find and iterate over HTML tags using lxml?

Thanks in advance for your help!

Shaheeb Roshan

+6  A: 

Is there a reason you're not using Beautiful Soup for this project? It will make dealing with imperfectly formed documents much easier.

zweiterlinde
+1: lxml is for xml. Beautiful Soup is for HTML.
S.Lott
I started with Beautiful Soup, but I had no luck. I mentioned in my question that my doc is fairly well-formed, but it is missing the ending body block. It simply drops all the content when I pull it into the parser. Hence lxml. Also, http://tinyurl.com/37u9gu indicated better mem mgmt with lxml
Shaheeb Roshan
I used BeautifulSoup at first, but it doesn't handle bad HTML as well as it claims. It also doesn't support items with multiple classes, etc. lxml.html is better for everything I've done with it.
endolith
+7  A: 

Okay, first, in regards to parsing the HTML: if you follow the recommendation of zweiterlinde and S.Lott, at least use the version of BeautifulSoup included with lxml. That way you will also reap the benefit of a nice XPath or CSS selector interface.

However, I personally prefer Ian Bicking's HTML parser included in lxml.

Secondly, .find() and .findall() come from lxml trying to be compatible with ElementTree, and those two methods are described in XPath Support in ElementTree.

Those two methods are fairly easy to use, but they support only a very limited subset of XPath. I recommend using either the full lxml xpath() method or, if you are already familiar with CSS, the cssselect() method.
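Since .find()/.findall() mirror the standard library's ElementTree, the limited path syntax can be tried with the stdlib alone. Below is a minimal sketch covering the question's three parts; the markup is invented for illustration, standing in for the real search-results page:

```python
import xml.etree.ElementTree as ET

# Invented well-formed snippet standing in for the page in the question:
# a header table followed by a results table whose rows contain links
snippet = """<body>
  <table><tr><td>header</td></tr></table>
  <table>
    <tr><td><a href="/detail/1">Result one</a></td></tr>
    <tr><td><a href="/detail/2">Result two</a></td></tr>
  </table>
</body>"""
tree = ET.fromstring(snippet)

# 1) index into the list of tables to get the middle (here: second) one
result_table = tree.findall(".//table")[1]

# 2) collect the link elements inside that table
links = result_table.findall(".//a")

# 3) the plain text and href of each link
print([a.text for a in links])         # ['Result one', 'Result two']
print([a.get("href") for a in links])  # ['/detail/1', '/detail/2']
```

With lxml.html the same two calls work on the parsed tree, so this translates directly once the document has been loaded with fromstring().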

Here are some examples, with an HTML string parsed like this:

from lxml.html import fromstring
mySearchTree = fromstring(your_input_string)

Using the CSS selector, your program would look roughly like this:

# Find all 'a' elements inside 'tr' table rows with a CSS selector
for a in mySearchTree.cssselect('tr a'):
    print('found "%s" link to href "%s"' % (a.text, a.get('href')))

The equivalent using the xpath() method would be:

# Find all 'a' elements inside 'tr' table rows with XPath
for a in mySearchTree.xpath('.//tr//a'):
    print('found "%s" link to href "%s"' % (a.text, a.get('href')))
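One caveat on a.text for part 3 of the question: it only returns the text that appears before the link's first child element, so an anchor containing nested markup needs itertext() (or, with lxml, text_content()). A stdlib sketch with invented markup:

```python
import xml.etree.ElementTree as ET

# Invented row: the link's text is split by a nested <b> element
row = ET.fromstring('<tr><td><a href="/d/1"><b>Widget</b> 3000</a></td></tr>')
a = row.find(".//a")

print(a.text)                # None -- all the text sits inside or after <b>
print("".join(a.itertext())) # 'Widget 3000' -- gathers text from the subtree
```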
Van Gale
Yay! Just what I needed. I had interpreted cssselect as requiring the elements to have a declared CSS class. The nested finding logic is exactly what I needed. Thank you Van Gale!
Shaheeb Roshan
This page recommends to use iterchildren and iterdescendants with the tag option. http://www.ibm.com/developerworks/xml/library/x-hiperfparse/#N10239
endolith
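For reference, the stdlib counterpart of lxml's iterdescendants(tag) mentioned in the comment above is Element.iter(tag), which also walks the subtree lazily rather than building a list. A small sketch with invented markup:

```python
import xml.etree.ElementTree as ET

# Invented results table with one link per row
doc = ET.fromstring(
    '<table><tr><td><a href="/a">one</a></td></tr>'
    '<tr><td><a href="/b">two</a></td></tr></table>'
)

# iter(tag) yields matching descendants one at a time, like iterdescendants(tag)
hrefs = [a.get("href") for a in doc.iter("a")]
print(hrefs)  # ['/a', '/b']
```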
A: 

cssselect works beautifully for me with xhtml.