views: 152
answers: 5

What is the correct way for an HTTP server to send data over multiple packets?

For example, I want to transfer a file. The first packet I send is:

HTTP/1.1 200 OK
Content-type: application/force-download
Content-Type: application/download
Content-Type: application/octet-stream
Content-Description: File Transfer
Content-disposition: attachment; filename=test.dat
Content-Transfer-Encoding: chunked

400
<first 1024 bytes here>

400
<next 1024 bytes here>

400
<next 1024 bytes here>

Now I need to make a new packet. If I just send:

400
<next 1024 bytes here>

All the clients close their connections on me and the files are cut short.

What headers do I put in a second packet to continue on with the data stream?

A: 

You should really refer to the RFC: http://www.w3.org/Protocols/rfc2616/rfc2616.html

Specifically: http://www.w3.org/Protocols/rfc2616/rfc2616-sec3.html#sec3.6.1

xyld
I am following that, and it does work when everything is in one packet, but for some reason I must be missing something to do with multiple packets. I guess: are there example captures anywhere of multi-packet chunked transfers?
myforwik
+1  A: 

HTTP has no notion of packets. Your HTTP stream could even be broken up into 1-byte packets.

For chunked encoding, you must specify the needed headers for each chunk (which has no bearing on packets) as given in the RFC.
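As a rough sketch of what the body should look like on the wire (per RFC 2616 section 3.6.1): every chunk is a hex size line followed by that many bytes, each terminated by CRLF, and the whole stream ends with a zero-size chunk and an empty line, no matter how TCP happens to split it into packets:

400
<1024 bytes of data>
400
<1024 bytes of data>
0
<empty line>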

Yann Ramin
He **is** specifying the needed chunk headers; 1024 is hex 400. However, `Content-Transfer-Encoding` should have been `Transfer-Encoding`.
BalusC
You are correct. The problem was at the TCP layer and not with the HTTP information stream. Thanks.
myforwik
+1  A: 

First, the header you want is

Transfer-Encoding: chunked

not Content-Transfer-Encoding.

Also, why are you sending three different Content-Type headers?
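For instance, a cleaned-up set of response headers could look something like this (keeping a single Content-Type; application/octet-stream is just one sensible pick):

HTTP/1.1 200 OK
Content-Type: application/octet-stream
Content-Disposition: attachment; filename=test.dat
Transfer-Encoding: chunked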

zerocrates
+1  A: 

My lack-of-rep appears to not allow me to comment on the question, just answer it, so I'm assuming that something like "you're trying to implement HTTP 1.1 on the web server of an embedded device that has a packet-oriented network stack instead of a stream-oriented one" is true, or you wouldn't be talking about packets. (If you ARE talking about chunks, see other answers.)

Given that -- use sockets if you can; you shouldn't have to think in packets. There's probably a wrapper somewhere for your network stack. If there isn't, write one that doesn't nuke your performance too badly.

If you can't, for whatever reason -- you're probably blowing out the size of the first packet. Your MTU's probably something like 1500 or 1492 (or smaller), and you have the response headers plus 5 + 1024 + 5 + 1024 + 5 + 1024 bytes listed "in your first packet." Your network stack may suck enough that it's not giving you error codes, or your code may not be checking them -- or it may be doing something else equally useless.
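If you do end up writing that wrapper yourself, here's a rough sketch. It assumes a hypothetical packet-oriented primitive (net_send_packet below; the name and signature are made up for illustration, substitute whatever your stack actually provides) and simply loops until the whole buffer has gone out in MTU-sized pieces:

#include <stddef.h>

/* Hypothetical primitive from the embedded stack: sends one packet of at
 * most the payload limit and returns bytes sent, or a negative error code.
 * Placeholder name and signature -- adapt to your vendor's API. */
int net_send_packet(int conn, const void *buf, size_t len);

#define MAX_PAYLOAD 1400  /* stay comfortably under a ~1500-byte MTU */

/* Stream-style wrapper: callers hand it a buffer and it keeps sending
 * packet-sized pieces until everything is out, checking for errors. */
int send_all(int conn, const char *buf, size_t len)
{
    while (len > 0) {
        size_t piece = len > MAX_PAYLOAD ? MAX_PAYLOAD : len;
        int sent = net_send_packet(conn, buf, piece);
        if (sent <= 0)
            return -1;  /* surface the error instead of silently losing data */
        buf += sent;
        len -= (size_t)sent;
    }
    return 0;
}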

pkh
Thanks, indeed the problem was in the network stack at the TCP layer. For some reason it was taking it upon itself to end the TCP connection after two ACK'd packets. I installed a new TCP stack from a different vendor and things are working fine.
myforwik
And this is why, if you can cram Linux into something, you really should. Buggy embedded stuff is the bane of my existence.
Yann Ramin
+1  A: 

Normally you would use the Accept-Ranges and Content-Range headers on the server side to notify the client that the server accepts resumes. The client would then send the Range header back to request the partial download.

Since the Content-Range header requires a notion of the full file length, and this appears to be unknown here (otherwise there would have been no reason to choose chunked encoding), you're out of luck as far as the standard HTTP specification goes. You'll either have to choose another protocol, homegrow your own specification, or look for alternative ways to find out the content length beforehand anyway.
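As an illustration of how a resumable exchange looks when the length is known (the path and byte counts here are made up), the server first advertises resume support:

HTTP/1.1 200 OK
Accept-Ranges: bytes
Content-Length: 65536

A client resuming from byte 3072 then sends:

GET /test.dat HTTP/1.1
Range: bytes=3072-

And the server answers with a partial response:

HTTP/1.1 206 Partial Content
Content-Range: bytes 3072-65535/65536
Content-Length: 62464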


That said, the three Content-Type headers make no sense. Choose one. Also, the Content-Transfer-Encoding header is wrong; it should have been Transfer-Encoding.

BalusC
The weird embedded clients say that they need those headers to be able to store the files... who knows why. Resume would be a nice feature to support, so I will definitely look into the ranges.
myforwik