I want to download a lot of URLs in a script, but I do not want to save the ones that lead to HTTP errors.
As far as I can tell from the man pages, neither wget nor curl provides such functionality. Does anyone know of another downloader that does?
If you're still looking after a while, hacking up such a downloader in your language of choice shouldn't be too difficult.
I voted up the curl -f answer above, but to add to it: that approach isn't failsafe. I think this is a perfect opportunity to learn Perl or Ruby (or to extend your skills) by writing your own download program.
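Even without reaching for Perl or Ruby, a small shell sketch of the same idea is possible: fetch each URL, check the HTTP status yourself, and delete the file on failure. The urls.txt file and the basename-derived output names below are illustrative assumptions, not part of the original answer.

#!/bin/bash
# Download each URL, but only keep the file if the server returned a 2xx status.
# urls.txt is a hypothetical file with one URL per line.
while read -r url; do
    out="$(basename "$url")"                      # output name derived from the URL (illustrative)
    status="$(curl -s -o "$out" -w '%{http_code}' "$url")"
    if [ "${status:0:1}" != "2" ]; then           # anything other than 2xx: discard the download
        echo "Skipping $url (HTTP $status)" >&2
        rm -f "$out"
    fi
done < urls.txt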
To do this with wget:
On Unix you can do:
wget -O /dev/null example.com
while on Windows the equivalent is:
wget -O NUL example.com
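To apply this to a whole list of URLs, a simple loop works; urls.txt below is a hypothetical file with one URL per line, and /dev/null is the Unix form (use NUL on Windows).

while read -r url; do
    wget -O /dev/null "$url"
done < urls.txt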
One-liner I just set up for this very purpose:
(it works only with a single file, but might be useful for others)
A=$$; ( wget -q "http://foo.com/pipo.txt" -O "$A.d" && mv "$A.d" pipo.txt ) || ( rm -f "$A.d"; echo "Removing temp file" )
This will attempt to download the file from the remote host. If there is no error, the temporary file is kept and renamed to pipo.txt; in all other cases it is removed.
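The same temp-file-then-rename idea can be extended to many URLs with a loop; this is only a sketch, where urls.txt is a hypothetical list with one URL per line and the output name is derived from the URL purely for illustration.

#!/bin/bash
# Download each URL to a temp file; keep it under its basename only if wget succeeded.
while read -r url; do
    name="$(basename "$url")"
    tmp="$$.$name.d"
    if wget -q "$url" -O "$tmp"; then
        mv "$tmp" "$name"
    else
        rm -f "$tmp"                  # discard partial/empty file left on HTTP error
        echo "Skipping $url" >&2
    fi
done < urls.txt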