Hi all,
I apologize if this question was asked earlier or if it's a simple one.
I am trying to download a file from an HTTP website onto my Unix machine using the command line. I log onto this website with a username and password.
Say I have this link (not a working link): http://www.abcd.org/portal/ABCPortal/private/DataDownload.action?downloadFile=&workspace.id=4180&datasetId=76999
If I paste this link into a browser, a dialog opens asking whether I want to save the zip file it links to (say xyz.zip). These files are around 1 GB in size.
I want to fetch that zip file onto my Unix machine from the command line. I tried wget and curl with the above kind of URL (supplying the username and password), but I get the HTML login form back instead of the zip file. Is there a way to download the zip file that this kind of URL links to? I don't know anything about the directory structure on the machine where the files are stored.
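For reference, the commands I tried were roughly like the following (myuser/mypass stand in for my real credentials, and the URL is the same placeholder as above):

    # wget with the credentials passed as HTTP auth; the URL is quoted
    # because of the '&' characters in the query string
    wget --user=myuser --password=mypass \
      "http://www.abcd.org/portal/ABCPortal/private/DataDownload.action?downloadFile=&workspace.id=4180&datasetId=76999"

    # curl equivalent, following redirects and saving the output as xyz.zip
    curl -L -u myuser:mypass -o xyz.zip \
      "http://www.abcd.org/portal/ABCPortal/private/DataDownload.action?downloadFile=&workspace.id=4180&datasetId=76999"

Both of these give me back the HTML form. My guess is that the site uses a form-based login with session cookies rather than HTTP authentication, which would explain why I only get the form, but I don't know how to replay that login from the command line.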
Thanks for your help,