It might be possible in perfect situations to connect the two streams directly, but it wouldn't be a very robust solution. There are a bunch of ugly boundary conditions:
- The response socket might still be receiving data, and/or be stalled, thus causing you to starve out and break the POST (because PycURL is not expecting to have to wait for data beyond the current end of the "file").
- The response might reset, and then you don't have the complete file, but you've already POSTed a bunch of data - what to do in this case?
- The file you're fetching with urllib might be chunked-encoded, so you need to perform some operations on the MIME headers for reassembly - you can't just blindly forward the data.
- You don't necessarily know how big the file you're getting is, so it's hard to provide the proper content-length on the POST, so then you have to write chunked.
- Probably a bunch of other problems I can't think of off the top of my head...
You'll be much better off writing the file to disk temporarily and then POSTing it once you know you have the whole thing.
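
A minimal sketch of that "download to disk first, then POST" approach, assuming Python 3's urllib.request and placeholder URLs (`SOURCE_URL`, `TARGET_URL` are made up for illustration):

```python
import os
import tempfile
import urllib.request

import pycurl

SOURCE_URL = "http://example.com/source.dat"   # placeholder: the file you're fetching
TARGET_URL = "http://example.com/upload"       # placeholder: where you're POSTing it

# 1. Fetch the whole file to a temporary file on disk.
tmp = tempfile.NamedTemporaryFile(delete=False)
with urllib.request.urlopen(SOURCE_URL) as resp:
    while True:
        chunk = resp.read(64 * 1024)
        if not chunk:
            break
        tmp.write(chunk)
tmp.close()

# 2. Now the size is known, so the POST can carry a proper Content-Length.
filesize = os.path.getsize(tmp.name)

c = pycurl.Curl()
c.setopt(pycurl.URL, TARGET_URL)
c.setopt(pycurl.POST, 1)
c.setopt(pycurl.POSTFIELDSIZE, filesize)
fp = open(tmp.name, "rb")
c.setopt(pycurl.READFUNCTION, fp.read)   # pycurl pulls the request body from the file
c.perform()
c.close()
fp.close()
os.unlink(tmp.name)
```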
If you did want to do this, the best way would probably be to implement your own file-like object which would manage the bridge between the two connections (could properly buffer, handle decoding, etc.).
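
For illustration only, a hypothetical bridge object might look something like this (names are made up; it only handles short reads from the socket, not the stall/reset/chunking problems above):

```python
import urllib.request

class BridgeReader:
    """Hypothetical file-like bridge between the two connections.
    read(size) keeps pulling from the urllib response until it has
    `size` bytes or the stream ends, so short reads from the socket
    don't confuse the uploading side."""

    def __init__(self, url):
        self.resp = urllib.request.urlopen(url)
        self.buf = b""

    def read(self, size):
        while len(self.buf) < size:
            chunk = self.resp.read(size - len(self.buf))
            if not chunk:          # end of the source stream
                break
            self.buf += chunk
        data, self.buf = self.buf[:size], self.buf[size:]
        return data
```

You would then hand `BridgeReader(SOURCE_URL).read` to PycURL as the read callback, but again, this is the fragile path.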
EDIT:
Based on the comment you left - absolutely - you just need to setopt `READFUNCTION`. Check out the file_upload example at:
http://pycurl.cvs.sourceforge.net/viewvc/pycurl/pycurl/examples/file_upload.py?revision=1.5&view=markup
It does exactly this by making a tiny wrapper on a file object with a callback to read the data from it, or alternatively, if you don't need to do any processing, you can literally set the `READFUNCTION` callback to be `fp.read`.
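
Condensed, roughly what that example does (a sketch from memory, not a verbatim copy of the file):

```python
import os
import sys

import pycurl

class FileReader:
    # Tiny wrapper around a file object: the callback lets you process
    # each chunk before handing it to libcurl, if you need to.
    def __init__(self, fp):
        self.fp = fp

    def read_callback(self, size):
        return self.fp.read(size)

url, filename = sys.argv[1], sys.argv[2]   # an upload URL and a local file

c = pycurl.Curl()
c.setopt(pycurl.URL, url)
c.setopt(pycurl.UPLOAD, 1)   # raw upload; for a POST body use pycurl.POST/POSTFIELDSIZE instead
c.setopt(pycurl.INFILESIZE, os.path.getsize(filename))

fp = open(filename, "rb")
c.setopt(pycurl.READFUNCTION, FileReader(fp).read_callback)
# ...or, with no per-chunk processing needed:
# c.setopt(pycurl.READFUNCTION, fp.read)

c.perform()
c.close()
fp.close()
```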