Hi all,

I apologize if this question was asked earlier or if it's a simple one.

I am trying to download a file from an HTTP website onto my Unix machine using the command line. I log onto this website with a username and password.

Say I have this link (not a working link) http://www.abcd.org/portal/ABCPortal/private/DataDownload.action?downloadFile=&workspace.id=4180&datasetId=76999

If I paste this link into a browser, a dialog box opens asking whether I want to save the zip file it links to (say xyz.zip). These files are around 1 GB in size.

I want to get the zip file behind this URL onto my Unix machine using the command line. I tried wget and curl with this kind of URL (providing the username and password), but I only get the HTML form back, not the zip file. Is there a way to fetch the zip file that such a URL links to? I do not know anything about the directory structure on the machine where the files are stored.
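In case it helps, the commands I tried looked roughly like this (myuser/mypass stand in for my real credentials, and the URL is the example one from above):

wget --user=myuser --password=mypass "http://www.abcd.org/portal/ABCPortal/private/DataDownload.action?downloadFile=&workspace.id=4180&datasetId=76999" -O xyz.zip

curl -u myuser:mypass "http://www.abcd.org/portal/ABCPortal/private/DataDownload.action?downloadFile=&workspace.id=4180&datasetId=76999" -o xyz.zip

Both commands finish, but what ends up in xyz.zip is an HTML page rather than the zip archive.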

Thanks for your help,

A: 

I guess you did not pass the Accept-Encoding header. Browsers send it by default; with your CLI tools you have to set these options yourself.

I don't know about wget, but give it a try with curl (-v is the verbose option, so you can follow the request/response headers):

curl -v "http://www.abcd.org/portal/ABCPortal/private/DataDownload.action?downloadFile=&workspace.id=4180&datasetId=76999" -H "Accept-Encoding: gzip" > /tmp/yourZippedFile.gz
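For wget the equivalent should be something along these lines (untested, but it also accepts custom headers via --header):

wget --header="Accept-Encoding: gzip" "http://www.abcd.org/portal/ABCPortal/private/DataDownload.action?downloadFile=&workspace.id=4180&datasetId=76999" -O /tmp/yourZippedFile.gz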

If that is not the problem, maybe you can give a real-site example so we can reproduce your problem on a concrete URL. It is difficult to say more without seeing the HTTP traffic.
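One more guess, since you mention logging in: if the site uses a login form, the download URL probably only works together with the session cookie from that login. With curl you could try saving the cookie and reusing it, roughly like this (the login URL and the form field names are made up, you would have to look them up in the page source of the login form):

curl -c cookies.txt -d "username=myuser&password=mypass" "http://www.abcd.org/portal/ABCPortal/login.action"

curl -b cookies.txt "http://www.abcd.org/portal/ABCPortal/private/DataDownload.action?downloadFile=&workspace.id=4180&datasetId=76999" -o /tmp/xyz.zip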

manuel aldana