Can I simply set the Transfer-Encoding header?

Will calling Response.Flush() at some point cause this to occur implicitly?


EDIT: No, I cannot call Response.Headers.Add("Transfer-Encoding", "anything"); that throws.
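
Roughly, this is the sort of handler I have in mind; the handler below is just an illustrative sketch, not my actual code:

    public class ChunkedHandler : System.Web.IHttpHandler
    {
        public void ProcessRequest(System.Web.HttpContext context)
        {
            // This is the line that throws; ASP.NET won't let me set Transfer-Encoding directly.
            // context.Response.Headers.Add("Transfer-Encoding", "chunked");

            context.Response.ContentType = "text/plain";
            context.Response.BufferOutput = false;     // don't buffer the whole response
            for (int i = 0; i < 10; i++)
            {
                context.Response.Write(new string('x', 4096));
                context.Response.Flush();              // does this implicitly switch to chunked encoding?
            }
        }

        public bool IsReusable { get { return false; } }
    }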

Any other input?


Related:
Enable Chunked Transfer Encoding in ASP.NET

A: 

It looks like you need to set up IIS for this. IIS 6 has a property, AspEnableChunkedEncoding, in the metabase, and you can see the IIS 7 mappings for this on MSDN at http://msdn.microsoft.com/en-us/library/aa965021(VS.90).aspx. This will enable you to set TRANSFER-ENCODING: chunked in your header. I hope this helps.
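
On IIS 7 you could presumably also flip the mapped setting from code with Microsoft.Web.Administration; the section and attribute names below are my assumption based on the mapping article, so double-check them:

    using (var serverManager = new Microsoft.Web.Administration.ServerManager())
    {
        var config = serverManager.GetApplicationHostConfiguration();
        var aspSection = config.GetSection("system.webServer/asp");
        aspSection["enableChunkedEncoding"] = true;   // assumed IIS 7 counterpart of AspEnableChunkedEncoding
        serverManager.CommitChanges();
    }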

Kim R
Thanks, but AspEnableChunkedEncoding is true by default, so that's not the problem. Also, this doesn't answer the specific question about the use of chunked encoding within ASP.NET.
Cheeso
+4  A: 
Eamon Nerbonne
@Eamon, chunked transfers are not a workaround; they are a feature. I think you know this, but when the size of the content is unknown and potentially large at the time the first response bytes are written, then it is incorrect, potentially dangerous, and will result in really poor performance if ASP.NET attempts to cache the entire response before sending it. Regarding the use of BufferOutput: can you cite the source that you "just checked"? Do you mean you tested it? I observed that behavior as well. What I'm looking for is a documented description. Does Response.Flush() do it? etc.
Cheeso
I observed it. If you look at the code in Reflector, turning off BufferOutput is effectively equivalent to calling Flush after each write; and each non-final Flush checks for and sets chunked transfer encoding if the headers haven't been written yet and haven't been suppressed, the client isn't disconnected, the response's content length isn't set manually, and the HTTP version is 1.1 - with the caveat that there's some code that checks for IIS 7 and does something else in that case, which looks more complex.
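In code form, the check is roughly shaped like this; this is my own paraphrase of what I saw, not the actual System.Web source:

    // Reconstruction of the condition under which a non-final Flush()
    // switches the response to chunked transfer encoding.
    static bool WillUseChunkedEncoding(bool headersWritten, bool headersSuppressed,
                                       bool clientConnected, bool finalFlush,
                                       bool contentLengthSetManually, bool isHttp11)
    {
        return !headersWritten
            && !headersSuppressed
            && clientConnected
            && !finalFlush
            && !contentLengthSetManually
            && isHttp11;
    }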
Eamon Nerbonne
Anyhow, the point is that if you're worried about buffering overlarge responses - turn off buffering; don't explicitly worry about chunked transfer encoding (which is simply the mechanism by which the response can be sent when buffering is disabled and the server can't otherwise deduce content length). You don't need to manually enable chunked transfer encoding, and as far as I can tell - there's no reason to.
Eamon Nerbonne
@Eamon - you seem to think that an app should never care whether buffering or chunked transfer occurs. But that's not true. Suppose the data to be transferred is large, and the size is known. Let's say the app knows the response is going to be exactly 1 GB. It makes sense for the app to explicitly make sure that no buffering occurs, which implies chunked encoding. You seem to overlook that possibility. Or consider time-to-first-byte, which is smaller with chunked transfer than without. Just two examples. The point is, there are good reasons to make explicit use of this feature of HTTP.
Cheeso
No, that's not true - an app may care whether it's buffered or not - but chunked transfer encoding is *just one* way that an app can be "unbuffered". If you explicitly set a content-length and turn off buffering, you still won't have chunked transfer, but *will* have low latency - in fact, it'll be slightly faster than chunked transfer encoding since the total response size will be smaller. So, I'm not saying an app shouldn't care about being unbuffered, I'm saying an app shouldn't care about **how** being unbuffered is implemented - chunked transfer encoding is *not* needed for low latency!
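For example, something along these lines (an illustrative handler, not code from this discussion) streams with low latency and never uses chunked encoding, because the length is declared up front:

    public void ProcessRequest(System.Web.HttpContext context)
    {
        const int blockSize = 64 * 1024;
        const int totalSize = blockSize * 16;              // size known before the first byte is written
        context.Response.BufferOutput = false;             // stream as we go
        context.Response.ContentType = "application/octet-stream";
        context.Response.AddHeader("Content-Length", totalSize.ToString());

        byte[] block = new byte[blockSize];
        for (int sent = 0; sent < totalSize; sent += blockSize)
        {
            context.Response.OutputStream.Write(block, 0, block.Length);
            context.Response.Flush();                      // bytes go on the wire now, no chunk framing
        }
    }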
Eamon Nerbonne
Thank you for the explanation; it helped me solve a little issue with how Google Chrome handles Ajax requests that are returned (and not handled) with chunked transfer encoding.
Audrius