I am trying to automate file downloads from a webserver. I plan on using wget, curl, or Python's urllib/urllib2.

Most solutions use wget or urllib/urllib2. They all talk of HTTP-based authentication and cookie-based authentication. My problem is I don't know which one is used by the website that stores my data. Here is the interaction with the site:

  1. Normally I log in to the site http://www.anysite.com/index.cgi?
  2. I get a form with a login and password. I type in both and hit return.
  3. The URL stays as http://www.anysite.com/index.cgi? during the entire interaction, but now I have a list of folders and files.
  4. If I click on a folder or file the URL changes to http://shamrockstructures.com/cgi-bin/index.cgi?page=download&file=%2Fhome%2Fjanysite%2Fpublic_html%2Fuser_data%2Fuserareas%2Ffile.tar.bz2

And the browser offers me a chance to save the file.

I want to know how to figure out whether the site is using HTTP-based or cookie-based authentication. After that, I am assuming I can use cookielib or urllib2 in Python to connect to it, get the list of files and folders, and recursively download everything while staying connected.

P.S.: I have tried the cookie-cutter ways to connect via wget, e.g. wget --http-user "uname" --http-password "passwd" http://www.anysite.com/index.cgi?, but they only return the web form to me.

+1  A: 

If you log in using a Web page, the site is probably using cookie-based authentication. (It could technically use HTTP basic auth, by embedding your credentials in the URI, but this would be a dumb thing to do in most cases.) If you get a separate, smallish dialog with a user name and password field (like this one), it is using HTTP basic authentication.

If you try to log in using HTTP basic auth and get back the login page, as is happening to you, that is a sure sign that the site is not using HTTP basic auth.
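
You can also check this programmatically. A minimal sketch (using the URL from the question as a placeholder): request the page without credentials, and a 401 response with a WWW-Authenticate header means HTTP basic (or digest) auth, while a plain 200 response containing the login form points to cookie/form-based auth.

import urllib2

URL = 'http://www.anysite.com/index.cgi'  # the question's URL, used as a placeholder

try:
    response = urllib2.urlopen(URL)
    # 200 OK with the HTML login form in the body -> cookie/form-based auth
    print response.getcode()
    print response.read()[:200]
except urllib2.HTTPError as e:
    # 401 plus a WWW-Authenticate header -> HTTP basic (or digest) auth
    print e.code, e.hdrs.get('WWW-Authenticate')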

Most sites use cookie-based authentication these days. To do this with an HTTP client such as urllib2, you will need to do an HTTP POST of the fields in the login form. (You may need to request the login form first, as a site could set a cookie that you need even to log in, but usually this is not necessary.) This should return a "successfully logged in" page that you can test for. Save the cookies you get back from this request. When making the next request, include these cookies. Each request you make may respond with cookies, and you need to save those and send them again with the next request.

urllib2 works with cookielib's "cookie jar" objects, which will automatically handle the cookies for you as you send requests and receive Web pages. That's what you want.
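
A minimal sketch of that POST-then-reuse-cookies flow with urllib2 and cookielib; the form field names ('login', 'password') are only guesses here and need to be taken from the actual login form's HTML:

import urllib
import urllib2
import cookielib

LOGIN_URL = 'http://www.anysite.com/index.cgi'        # the question's URL
form_data = urllib.urlencode({'login': 'uname',       # field names are guesses;
                              'password': 'passwd'})  # check the form's HTML

jar = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(jar))

# POST the login form; any Set-Cookie headers end up in the jar
login_page = opener.open(LOGIN_URL, form_data).read()

# Later requests through the same opener send those cookies back,
# so the file listing and download URLs can be fetched while "logged in"
listing = opener.open(LOGIN_URL).read()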

kindall
I checked using Google Chrome, and the site is setting two cookies which are valid for the session (Expires says Session). One cookie is for the login and the other is for the password. The password cookie has, I am assuming, an encrypted version of my password. So now the question is: how do I mimic this cookie using Python and urllib? I guess this is possible to do.
harijay
A: 

AFAIK cookie-based authentication is only used once you have logged in successfully at least ONCE. You can try disabling cookie storage for that domain in your browser settings; if you are still able to download files, it should be HTTP-based authentication.

Try doing an equivalent GET request for the (possibly POST) login request that is probably happening right now. Use Firebug or Fiddler to see the login request that is sent. Also note whether there is some JavaScript code that returns different output based on your user-agent string or some other parameter.

See if httplib or mechanize helps.
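
If mechanize is available, a rough sketch looks like this (assuming the login form is the first form on the page; the URL is the question's and the field names are guesses):

import mechanize

br = mechanize.Browser()
br.set_handle_robots(False)                   # the CGI site may not serve robots.txt
br.open('http://www.anysite.com/index.cgi')   # the question's URL

br.select_form(nr=0)                          # assume the login form is the first one
br['login'] = 'uname'                         # field names are guesses; inspect the HTML
br['password'] = 'passwd'
response = br.submit()                        # mechanize keeps the session cookies

print response.read()                         # should now be the folder/file listing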

Ashish
+1  A: 

You can use pycurl like this:

import pycurl

COOKIE_JAR = 'cookiejar' # file to store the cookies
LOGIN_URL = 'http://www.yoursite.com/login.cgi'
USER_FIELD = 'user' # Name of the element in the HTML form
USER = 'joe'
PASSWD_FIELD = 'passwd' # Name of the element in the HTML form
PASSWD = 'MySecretPassword'

def read(html):
    """Read the body of the response, with possible
    future html parsing and re-requesting"""
    print html

com = pycurl.Curl()
com.setopt(pycurl.WRITEFUNCTION, read)
com.setopt(pycurl.COOKIEJAR, COOKIE_JAR)
com.setopt(pycurl.FOLLOWLOCATION, 1) # follow redirects
com.setopt(pycurl.POST, 1)
com.setopt(pycurl.POSTFIELDS, '%s=%s&%s=%s' % (USER_FIELD, USER,
                                               PASSWD_FIELD, PASSWD)) # '&' separates form fields
com.setopt(pycurl.URL, LOGIN_URL )
com.perform()

Plain pycurl may seem very "primitive" (with the limited setopt approach), but it gets the job done and handles cookies pretty well with the cookie-jar option.
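
Once the login request has saved the cookies, a second handle can read them back with COOKIEFILE to fetch the actual file. A sketch (the download URL below is only the pattern from the question, with the file path left as a placeholder):

import pycurl

COOKIE_JAR = 'cookiejar'   # same file written by the login request above
DOWNLOAD_URL = 'http://www.anysite.com/cgi-bin/index.cgi?page=download&file=...'  # placeholder

out = open('file.tar.bz2', 'wb')
dl = pycurl.Curl()
dl.setopt(pycurl.URL, DOWNLOAD_URL)
dl.setopt(pycurl.COOKIEFILE, COOKIE_JAR)   # send back the cookies saved at login
dl.setopt(pycurl.FOLLOWLOCATION, 1)
dl.setopt(pycurl.WRITEFUNCTION, out.write) # stream the response body to the file
dl.perform()
dl.close()
out.close()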

cyraxjoe
Hi, thanks for the well-commented explanation. I tried this approach after substituting USER_FIELD, USER, PASSWD_FIELD and PASSWD. When I run the program, it returns the original form. When I browse manually, if the POST of the login and password succeeds, I get back a table with links to the files for download.
harijay
And after running the script, the cookiejar-file has the authentication cookie?
cyraxjoe
Strangely, there is no cookiejar file created. I also tried some urllib2-based solutions where I explicitly set the User-Agent to mimic Firefox/IE. In those cases too the server did not return the cookie, but only sent back the web form it sent out the first time, with the login and password fields.
harijay
So that means no cookie needed to be stored. Are you sure that the URL you are pointing at is the same as the "action" attribute in the HTML form (not the URL where the form gets printed)?
cyraxjoe
The form gets printed at the same URL as the action attribute, i.e. http://www.anysite.com/cgi-bin/index.cgi is where the form resides. Upon logging in successfully with a browser, the URL still reads http://www.anysite.com/cgi-bin/index.cgi, but now the page has links to the files I need to download.
harijay