The documentation for WinHttpReadData says, regarding HTTP's chunked transfer coding:
Starting in Windows Vista and Windows Server 2008, WinHttp enables applications to perform chunked transfer encoding on data sent to the server. When the Transfer-Encoding header is present on the WinHttp response, WinHttpReadData strips the chunking information before giving the data to the application.
Can anyone decipher this?
Q1: First, this text appears on the page for WinHttpReadData, which is used to read data within an HTTP client application, specifically the response data. So what does it mean when it says:
Starting in Windows Vista and Windows Server 2008, WinHttp enables applications to perform chunked transfer encoding on data sent to the server.
The WinHttpReadData function isn't used for data being sent to the server; it is used for reading data from the server.
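For what it's worth, here is the (synchronous) read loop I have in mind. The function name and the hRequest parameter are just placeholders for illustration. As I read the quoted paragraph, the only thing it promises is that the buffer filled in by WinHttpReadData already has the chunk framing removed, so I never see the chunk-size lines myself:

    // Minimal sketch: hRequest is an HINTERNET on which WinHttpSendRequest and
    // WinHttpReceiveResponse have already succeeded. Per the quoted doc, the
    // data landing in the buffer is already de-chunked entity data.
    #include <windows.h>
    #include <winhttp.h>
    #include <string>
    #pragma comment(lib, "winhttp.lib")

    std::string ReadWholeResponse(HINTERNET hRequest)
    {
        std::string body;
        for (;;)
        {
            DWORD avail = 0;
            if (!WinHttpQueryDataAvailable(hRequest, &avail))
                break;                    // error; real code would check GetLastError()
            if (avail == 0)
                break;                    // end of response

            std::string piece(avail, '\0');
            DWORD read = 0;
            if (!WinHttpReadData(hRequest, &piece[0], avail, &read))
                break;
            body.append(piece, 0, read);  // chunk framing already stripped (per the docs)
        }
        return body;
    }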
The documentation for the WinHttpWriteData function, which is used to send data to the server as part of an HTTP request, makes no mention of this chunked transfer capability.
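To make this concrete, here is roughly what I imagine a chunked upload would look like. Everything in this sketch is an assumption on my part: the explicit Transfer-Encoding header, the use of WINHTTP_IGNORE_REQUEST_TOTAL_LENGTH because the total length isn't known up front, and above all whether WinHttp inserts the chunk framing for me, which is exactly what I can't tell from the page:

    // My guess at how "chunked transfer encoding on data sent to the server"
    // is meant to be used. hRequest comes from WinHttpOpenRequest. Whether
    // WinHttp adds the chunk sizes and CRLFs itself is the open question.
    #include <windows.h>
    #include <winhttp.h>
    #pragma comment(lib, "winhttp.lib")

    bool SendChunkedBody(HINTERNET hRequest, const char* part1, DWORD len1,
                         const char* part2, DWORD len2)
    {
        // Declare the coding explicitly (assumption: this is what triggers it).
        if (!WinHttpAddRequestHeaders(hRequest,
                                      L"Transfer-Encoding: chunked",
                                      (DWORD)-1L,          // header string is NUL-terminated
                                      WINHTTP_ADDREQ_FLAG_ADD))
            return false;

        // Total length unknown, so (assumption) don't state one.
        if (!WinHttpSendRequest(hRequest,
                                WINHTTP_NO_ADDITIONAL_HEADERS, 0,
                                WINHTTP_NO_REQUEST_DATA, 0,
                                WINHTTP_IGNORE_REQUEST_TOTAL_LENGTH, 0))
            return false;

        // Write the body in pieces; does WinHttp wrap these writes in chunk
        // framing on Vista/WS2008, and what happens on WS2003?
        DWORD written = 0;
        if (!WinHttpWriteData(hRequest, part1, len1, &written)) return false;
        if (!WinHttpWriteData(hRequest, part2, len2, &written)) return false;

        return WinHttpReceiveResponse(hRequest, NULL) != FALSE;
    }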
Q2: Supposing I figure out just what this newish chunked transfer support amounts to, how do I get it? The documentation says it is new in Vista and WS2008. What happens if I write an app that runs on WS2003 and uses WinHttpReadData and encounters a chunked response, or uses WinHttpWriteData and wants to send a chunked request?
Between the lines, is this documentation saying that I need to link against the WinHttp.lib in the Vista-era Windows SDK, or later, in order to get the capability to do chunked encoding? Or is it really impossible on WS2003? In other words, must an app doing chunked transfer with this library run on one of the operating systems specified?
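If the answer turns out to be that the OS, not the SDK, is what matters, then I assume my only option is to gate the chunked code path at run time with something like this (Vista and WS2008 both report major version 6; this check is my assumption, not anything the WinHttp docs suggest):

    #include <windows.h>

    bool OsSupportsWinHttpChunkedSend()
    {
        OSVERSIONINFOW osvi = { sizeof(osvi) };
        if (!GetVersionExW(&osvi))
            return false;
        return osvi.dwMajorVersion >= 6;   // Vista / Windows Server 2008 and later
    }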
This might read like a rant, but it's not. I truly want to know.