I have an HTML document located at http://somedomain.com/somedir/example.html

The document contains four links:

http://otherdomain.com/other.html

http://somedomain.com/other.html

/only.html

test.html

How can I get the full URLs for the links that belong to the current domain?

I mean I should get:

http://somedomain.com/other.html

http://somedomain.com/only.html

http://somedomain.com/somedir/test.html

The first link should be ignored because it doesn't match my domain.

A: 

Use a regular expression to extract the links from the href="..." attributes, then resolve each one against the page's URL; urljoin leaves absolute links untouched and expands relative ones.

Here is a Python example:

import re
import urlparse  # urllib.parse in Python 3

base_url = ...  # the page's own URL, e.g. http://somedomain.com/somedir/example.html
html = ...      # the page's HTML source

# Pull out every href="..." value, then resolve it against the page URL;
# resolving against the full page URL keeps relative paths like test.html in /somedir/.
links = re.findall('href=[\'"](.*?)[\'"]', html)
links = [urlparse.urljoin(base_url, link) for link in links if link]
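To then drop the links that point at another domain, as the question asks, one possible sketch (using the same urlparse module, with the URLs from the question as sample data) is to compare each resolved link's host against the page's host:

import urlparse  # urllib.parse in Python 3

base_url = 'http://somedomain.com/somedir/example.html'
base_host = urlparse.urlparse(base_url).netloc   # 'somedomain.com'

links = ['http://otherdomain.com/other.html',
         'http://somedomain.com/other.html',
         'http://somedomain.com/only.html',
         'http://somedomain.com/somedir/test.html']

# Keep only the links whose host matches the page's host.
same_domain = [link for link in links
               if urlparse.urlparse(link).netloc == base_host]
# ['http://somedomain.com/other.html', 'http://somedomain.com/only.html',
#  'http://somedomain.com/somedir/test.html']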
Plumo
+1  A: 

Something like

doc.search("a").map do |a| 
  url = a.attribute("href")
  #this part could be a lot more robust, but you get the idea...
  full_url = url.match("^http://") ? url : "http://somedomain.com/#{url}"
end.select{|url| url.match("^http://somedomain.com")}
Isaac Cambron