urllib2

How can I determine the final URL after redirection using python / urllib2?

I need to get the final URL after redirection in python. What's a good way to do that? ...
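One approach: urllib2 follows redirects automatically, and the response object's geturl() reports the URL you actually ended up at. A minimal sketch, shown with Python 3's urllib.request (urllib2's successor); a local test server stands in for a real redirecting site, so nothing here is a real endpoint.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class RedirectHandler(BaseHTTPRequestHandler):
    """Tiny server: /old redirects to /new, /new answers 200."""
    def do_GET(self):
        if self.path == "/old":
            self.send_response(302)
            self.send_header("Location", "/new")
            self.end_headers()
        else:
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok")

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), RedirectHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

base = "http://127.0.0.1:%d" % server.server_port
response = urllib.request.urlopen(base + "/old")
final_url = response.geturl()  # the URL after all redirects were followed
server.shutdown()
```

In Python 2 the call is the same idea: `urllib2.urlopen(url).geturl()`.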

Which is best in Python: urllib2, PycURL or mechanize?

Ok so I need to download some web pages using Python and did a quick investigation of my options. Included with Python: urllib - it seems I should use urllib2 instead; urllib has no cookie support and handles HTTP/FTP/local files only (no SSL). urllib2 - a complete HTTP/FTP client that supports most needed things like cookies, but does not support ...

How do I stop Beautiful Soup from skipping rows while parsing?

While using BeautifulSoup to parse an HTML table, every other row starts with <tr class="row_k"> instead of a tr tag without a class. Sample HTML: <tr class="row_k"> <td><img src="some picture url" alt="Item A"></td> <td><a href="some url"> Item A</a></td> <td>14.8k</td> <td><span class="drop">-555</span></td> <td> <img src="so...
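Worth noting: rows with class="row_k" are still <tr> elements, so matching on the tag name alone (BeautifulSoup's findAll('tr'), for instance) picks up both kinds of row. A sketch using the stdlib parser so the point is library-independent; the three-row table is made up for illustration.

```python
from html.parser import HTMLParser

class RowCounter(HTMLParser):
    """Collect the class attribute (or None) of every <tr> encountered."""
    def __init__(self):
        super().__init__()
        self.rows = []

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self.rows.append(dict(attrs).get("class"))

sample = (
    "<table>"
    "<tr class='row_k'><td>Item A</td></tr>"
    "<tr><td>Item B</td></tr>"
    "<tr class='row_k'><td>Item C</td></tr>"
    "</table>"
)
counter = RowCounter()
counter.feed(sample)
# counter.rows now has one entry per row, classed or not
```

If rows are genuinely missing from the parse, the usual culprit is malformed HTML tripping the parser rather than the class attribute itself.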

Python urllib2 Basic Auth Problem

Update: based on Lee's comment I decided to condense my code to a really simple script and run it from the command line: import urllib2 import sys username = sys.argv[1] password = sys.argv[2] url = sys.argv[3] print("calling %s with %s:%s\n" % (url, username, password)) passman = urllib2.HTTPPasswordMgrWithDefaultRealm() passman.add_...
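A common gotcha here: urllib2's HTTPBasicAuthHandler only sends credentials after the server answers 401, and some servers never issue that challenge. Building the Authorization header by hand sends credentials preemptively. A sketch with Python 3's urllib.request; the URL and credentials are placeholders and no request is actually sent.

```python
import base64
import urllib.request

username, password = "user", "secret"        # placeholders
url = "http://example.org/protected"         # placeholder

# Basic auth is just base64("user:password") in an Authorization header.
credentials = base64.b64encode(("%s:%s" % (username, password)).encode("ascii"))
request = urllib.request.Request(url)
request.add_header("Authorization", "Basic " + credentials.decode("ascii"))

auth_header = request.get_header("Authorization")
```

The same trick works verbatim with `urllib2.Request` in Python 2.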

urllib2 multiple Set-Cookie headers in response

I am using urllib2 to interact with a website that sends back multiple Set-Cookie headers. However, the response header dictionary only contains one; it seems the duplicate keys are overriding each other. Is there a way to access duplicate headers with urllib2? ...
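There is: the dict-style lookup collapses duplicates, but the message object behind response.info() keeps them all - getheaders('Set-Cookie') in Python 2, get_all('Set-Cookie') in Python 3. A sketch where a hand-built header block stands in for a real response:

```python
from http.client import HTTPMessage
from email.parser import Parser

# Simulated response headers containing duplicate Set-Cookie lines.
raw_headers = (
    "Set-Cookie: a=1; Path=/\r\n"
    "Set-Cookie: b=2; Path=/\r\n"
    "Content-Type: text/html\r\n\r\n"
)
headers = Parser(_class=HTTPMessage).parsestr(raw_headers)

cookies = headers.get_all("Set-Cookie")  # both values survive
```

With a live response the equivalent is `response.headers.get_all('Set-Cookie')` (Python 3) or `response.info().getheaders('Set-Cookie')` (Python 2).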

download mbox files over https using python

I was trying to find the right module for downloading kernel patches from the kernel.org site. For example, to download the file at https://patchwork.kernel.org/patch/62948/mbox/ I understand urlgrabber has a problem with https on Debian. urllib2 seems to have a problem with this URL as well (it says getaddrinfo failed, even though there are no ...

Python: HTTP Post a large file with streaming

I'm uploading potentially large files to a web server. Currently I'm doing this: import urllib2 f = open('somelargefile.zip','rb') request = urllib2.Request(url,f.read()) request.add_header("Content-Type", "application/zip") response = urllib2.urlopen(request) However, this reads the entire file's contents into memory before posting ...
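The memory-friendly alternative is to send the body in fixed-size chunks instead of one f.read(). Python 3's urllib.request accepts an iterable of bytes as `data` (you must set Content-Length yourself); the generator below is the streaming part, demonstrated against an in-memory file rather than a real upload.

```python
import io

def iter_chunks(fileobj, chunk_size=8192):
    """Yield the file's contents chunk by chunk, never all at once."""
    while True:
        chunk = fileobj.read(chunk_size)
        if not chunk:
            break
        yield chunk

payload = io.BytesIO(b"x" * 20000)  # stands in for somelargefile.zip
chunks = list(iter_chunks(payload, 8192))
sizes = [len(c) for c in chunks]
```

In Python 2, urllib2 itself can't stream a request body, so the usual workaround is httplib's `HTTPConnection.send()` in a loop over chunks like these.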

python urllib2.urlopen doesn't work with specific URL (redirect)?

I need to download a CSV file, which works fine in browsers using: http://www.ftse.com/objects/csv_to_csv.jsp?infoCode=100a&amp;theseFilters=&amp;csvAll=&amp;theseColumns=Mw==&amp;theseTitles=&amp;tableTitle=FTSE%20100%20Index%20Constituents&amp;dl=&amp;p_encoded=1&amp;e=.csv The following code works for any other file (url) (with a f...

Using urllib2 with SOCKS proxy

Hello. Is it possible to fetch pages with urllib2 through a SOCKS proxy, on a one-SOCKS-server-per-opener basis? I've seen the solution using the setdefaultproxy method, but I need different SOCKS proxies in different openers. There is the SocksiPy library, which works great, but it has to be used this way: import socks import socket socket....

urllib2 in Python 2.6.4: Any way to override windows hosts file?

I am using the urllib2 module in Python 2.6.4, running in Windows XP, to access a URL. I am making a post request, that does not involve cookies or https or anything too complicated. The domain is redirected in my C:\WINDOWS\system32\drivers\etc\hosts file. However, I would like the request from urllib2 to go to the "real" domain and ign...
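One way to sidestep the hosts file is to skip name resolution entirely: connect to the server's real IP and supply the original domain in the Host header yourself. A sketch with Python 3's urllib.request; the IP and domain below are hypothetical placeholders and no request is sent.

```python
import urllib.request

real_ip = "203.0.113.10"     # hypothetical real address of the domain
domain = "www.example.org"   # the name your hosts file redirects

# Request by IP, but tell the server which virtual host we want.
request = urllib.request.Request("http://%s/path" % real_ip)
request.add_header("Host", domain)

host_header = request.get_header("Host")
```

This works for plain HTTP; with HTTPS the certificate check will complain about the name mismatch, so that case needs extra handling.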

Python's urllib2 doesn't work on some sites

I found that you can't read from some sites using Python's urllib2 (or urllib). An example... urllib2.urlopen("http://www.dafont.com/").read() # Returns '' These sites work when you visit them in a browser. I can even scrape them using PHP (I didn't try other languages). I have seen other sites with the same issue - but can't remember the URL at t...
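Sites that behave this way are often rejecting urllib2's default User-Agent ("Python-urllib/2.x"); sending a browser-like one is the usual fix. A sketch with Python 3's urllib.request - the request is only constructed here, not sent, and the User-Agent string is just an example. Note that urllib stores header names capitalized, hence the lowercase 'a' in the lookup.

```python
import urllib.request

request = urllib.request.Request(
    "http://www.dafont.com/",
    headers={"User-Agent": "Mozilla/5.0 (compatible; my-scraper/1.0)"},
)

# urllib normalizes stored header names via str.capitalize(),
# so the key is "User-agent" rather than "User-Agent".
ua = request.get_header("User-agent")
```

The Python 2 equivalent is identical with `urllib2.Request`.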

Using urllib and BeautifulSoup to retrieve info from web with Python

I can get the HTML page using urllib and parse it with BeautifulSoup, but it looks like I have to generate a file for BeautifulSoup to read. import urllib sock = urllib.urlopen("http://SOMEWHERE") htmlSource = sock.read() sock.close() ...

urllib2 to string

I'm using urllib2 to open a url. Now I need the html file as a string. How do I do this? ...
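The object urlopen returns is file-like: read() gives you the whole body (a str in Python 2, bytes in Python 3, where you then decode with the page's charset). A sketch where an in-memory buffer stands in for a real urlopen response:

```python
import io

# Stand-in for the file-like object urlopen returns.
response = io.BytesIO(b"<html><body>hello</body></html>")

html = response.read().decode("utf-8")  # the page as a single string
```

Against a live response the Python 2 form is simply `html = urllib2.urlopen(url).read()`.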

Help with OpenSSL request using Python

Hi, I'm creating a program that has to make a request and then obtain some info. For that, the website provides an API that I will use. There is a how-to about this API, but every example is written in PHP. My app is written in Python, so I need to convert the code. Here is the how-to: The request string is sealed with Ope...

Bitbucket API authentication with Python's HTTPBasicAuthHandler

I'm trying to get the list of issues on a private repository using bitbucket's API. I have confirmed that HTTP Basic authentication works with hurl, but I am unable to authenticate in Python. Adapting the code from this tutorial, I have written the following script. import cookielib import urllib2 class API(): api_url = 'http://ap...

Performing urlopen from various IP addresses - Python

Hi folks, is it possible to perform urlopen requests from different IP addresses? ...

Check if the internet cannot be accessed in Python

I have an app that makes an HTTP GET request to a particular URL on the internet. But when the network is down (say, no public Wi-Fi, or my ISP is down, or some such thing), I get the following traceback at urllib2.urlopen: 70, in get u = urllib2.urlopen(req) File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/url...
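The traceback will be a URLError (its .reason holds the underlying socket error), so catching that is the usual connectivity check. A sketch with Python 3's urllib.request; the probe below deliberately hits a local port assumed to be closed, so it demonstrates the failure path without any real network.

```python
import urllib.request
from urllib.error import URLError

def can_reach(url, timeout=2):
    """Return True if the URL answers, False on any network failure."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except URLError:
        return False

# Port 1 is assumed closed, so the connection is refused immediately.
reachable = can_reach("http://127.0.0.1:1/")
```

In Python 2 the same pattern catches `urllib2.URLError`.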

Handling urllib2's timeout? - Python

Hi folks, I'm using the timeout parameter within the urllib2's urlopen. urllib2.urlopen('http://www.example.org', timeout=1) How do I tell Python that if the timeout expires a custom error should be raised? Any ideas? ...
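When the timeout expires, urlopen raises either socket.timeout directly or a URLError wrapping it, so a small wrapper can translate both into your own exception. A sketch: the opener is injectable here purely so the behaviour can be shown with a fake slow server instead of a real one.

```python
import socket
import urllib.request
from urllib.error import URLError

class RequestTimedOut(Exception):
    """Custom error raised when the HTTP request exceeds its deadline."""

def fetch(url, timeout=1, opener=urllib.request.urlopen):
    try:
        return opener(url, timeout=timeout)
    except socket.timeout:
        raise RequestTimedOut(url)
    except URLError as e:
        if isinstance(e.reason, socket.timeout):
            raise RequestTimedOut(url)
        raise

def fake_slow_opener(url, timeout):
    raise socket.timeout("timed out")  # simulates a server that never answers

try:
    fetch("http://www.example.org", opener=fake_slow_opener)
    timed_out = False
except RequestTimedOut:
    timed_out = True
```

The same two except clauses apply to Python 2's `urllib2.urlopen`.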

Use Twisted's getPage as urlopen?

Hi folks, I would like to use Twisted's non-blocking getPage method within a webapp, but it feels quite complicated to use such a function compared to urlopen. This is an example of what I'm trying to achieve: def web_request(request): response = urllib.urlopen('http://www.example.org') return HttpResponse(len(response.read())) I...

How to speed up read method call of object returned by urllib2.urlopen

I have the following code. req = urllib2.Request(url,'',txheaders) f = urllib2.urlopen(req) data = f.read() f.close() In the above code, the read call takes 1-2 minutes when the response is only 58 KB. How can I make this faster? ...