tags:
views: 205
answers: 5

Hi all, I was wondering: when I use urllib2.urlopen(), does it just read the headers, or does it actually bring back the entire webpage?

I.e., does the HTML page actually get fetched on the urlopen() call or on the read() call?

import urllib2

handle = urllib2.urlopen(url)
html = handle.read()

The reason I ask is this workflow:

  • I have a list of urls (some of them with short url services)
  • I only want to read the webpage if I haven't seen that url before
  • I need to call urlopen() and use geturl() to get the final page that link goes to (after the 302 redirects) so I know if I've crawled it yet or not.
  • I don't want to incur the overhead of having to grab the html if I've already parsed that page.
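Roughly, the dedup step I have in mind looks like this (a sketch only: `resolve_final_url` and `crawl_new` are made-up names, and the real resolver would hit the network via urlopen()/geturl()):

```python
try:
    import urllib2  # Python 2, as used in this question
except ImportError:
    import urllib.request as urllib2  # same API in Python 3, for illustration

def resolve_final_url(url):
    """Follow any 302 redirects and return the final URL (network call)."""
    return urllib2.urlopen(url).geturl()

def crawl_new(urls, resolve=resolve_final_url, seen=None):
    """Yield only final URLs whose redirect target hasn't been seen yet."""
    seen = set() if seen is None else seen
    for url in urls:
        final = resolve(url)
        if final not in seen:
            seen.add(final)
            yield final
```

The `resolve` parameter is injectable so the dedup logic can be exercised without touching the network.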

thanks!

A: 

From looking at the docs and source I'm pretty sure it gets the contents of the page. The returned object contains the page.

Simon Groenewolt
+1  A: 

Testing against a local web server shows that urllib2.urlopen(url) is what fires the HTTP request; .read() does not send another.

Drew Sears
+4  A: 

urllib2 always uses HTTP method GET (or POST) and therefore inevitably gets the full page. To use HTTP method HEAD instead (which only gets the headers -- which are enough to follow redirects!), I think you just need to subclass urllib2.Request with your own class and override one short method:

class MyRequest(urllib2.Request):

    def get_method(self):
        return "HEAD"

and pass a suitably initialized instance of MyRequest to urllib2.urlopen.
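For instance, something along these lines (the urlopen call is commented out since it would hit the network, and the URL is a placeholder):

```python
try:
    import urllib2  # Python 2
except ImportError:
    import urllib.request as urllib2  # Python 3 equivalent, for illustration

class MyRequest(urllib2.Request):
    def get_method(self):
        # urlopen consults this to choose the HTTP verb (normally GET or POST)
        return "HEAD"

req = MyRequest("http://example.com/some-short-url")
# response = urllib2.urlopen(req)   # sends HEAD: headers only, no body
# final_url = response.geturl()     # where the redirects ended up
```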

Alex Martelli
Actually, testing with python 2.6 shows that only a little of the body is retrieved over the network in the urlopen() call. The rest waits until read() is called.
Forest
@Forest, the `GET` verb of HTTP is defined to retrieve the whole page; possibly the part of it that you're not seeing is clogging up OS and networking HW buffers (a pretty bad thing, BTW).
Alex Martelli
I believe the question was not about HTTP methods, but instead about what happens on the network at various stages of the urllib2 implementation. See my answer for details.
Forest
A: 

On a side note, if you use Scrapy, it does HEAD intelligently for you. There's no point in rolling your own solution when this is already done so well elsewhere.

Adam Nelson
+2  A: 

I just ran a test with Wireshark. When I called urllib2.urlopen('url-for-a-700mbyte-file'), only the headers and a few packets of body were retrieved immediately. It wasn't until I called read() that the majority of the body came across the network. This matches what I see by reading the source code for the httplib module.

So, to answer the original question, urlopen() does not fetch the whole body over the network. It fetches the headers and usually some of the body. The rest of the body is fetched when you call read().

The partial body fetch is to be expected, because:

  1. Unless you read an http response one byte at a time, there is no way to know exactly how long the incoming headers will be and therefore no way to know how many bytes to read before the body starts.

  2. An http client has no control of how many bytes a server bundles into each tcp frame of a response.

In practice, since some of the body is usually fetched along with the headers, you might find that small bodies (e.g. small html pages) are fetched entirely on the urlopen() call.
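This also means you can keep the bulk of a large body on the wire by reading it in pieces, since read(n) only pulls what you ask for. A rough illustration, using an in-memory stream as a stand-in for the response object returned by urlopen():

```python
import io

def read_in_chunks(response, chunk_size=8192):
    """Pull the body across in pieces, the way read(size) would,
    instead of slurping everything with a bare read()."""
    while True:
        chunk = response.read(chunk_size)
        if not chunk:
            break
        yield chunk

# stand-in for a large response body (a real one would come from urlopen)
body = io.BytesIO(b"x" * 20000)
chunks = list(read_in_chunks(body))
```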

Forest
Alex, I appreciate your concern, but nothing is getting clogged. HTTP runs atop TCP, which implements flow control. http://en.wikipedia.org/wiki/Transmission_Control_Protocol#Flow_control
Forest